This module provides a single class, RobotFileParser, which answers
questions about whether or not a particular user agent can fetch a URL on the Web site that
published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/wc/norobots.html.
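For illustration, a minimal, hypothetical robots.txt file (the paths below are made up)
might look like this:

    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /private/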
This class provides a set of methods to read, parse and answer questions about a single
robots.txt file:

- set_url(url): Sets the URL referring to a robots.txt file.
- read(): Reads the robots.txt URL and feeds it to the parser.
- parse(lines): Parses the lines argument.
- can_fetch(useragent, url): Returns True if the useragent is allowed to fetch the url
  according to the rules contained in the parsed robots.txt file.
- mtime(): Returns the time the robots.txt file was last fetched. This is useful for
  long-running web spiders that need to check for new robots.txt files periodically.
- modified(): Sets the time the robots.txt file was last fetched to the current time.
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True
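The example above fetches a live robots.txt file over the network. As a second,
self-contained sketch (the rule set and the example.com URLs are hypothetical), parse()
can be fed lines directly, and modified()/mtime() can be used to track when the rules
were last refreshed:

>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.parse(["User-agent: *", "Disallow: /private/"])
>>> rp.can_fetch("*", "http://example.com/private/index.html")
False
>>> rp.can_fetch("*", "http://example.com/index.html")
True
>>> rp.modified()     # record the current time as the last-fetched time
>>> rp.mtime() > 0    # mtime() reports that time in seconds since the epoch
True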