Robots.txt is a document that specifies which parts of a website bots are and are not allowed to visit. While it’s not a legally binding document, it has long been common practice for bots to obey the rules listed in robots.txt.
in that description, i’m trying to keep the accessible tone that they were going for in the article (so i wrote “document” instead of file format/IETF standard), while still trying to focus on the following points:
robots.txt is fundamentally a list of rules, not a single line of code
robots.txt can allow bots to access certain parts of a website; it doesn’t have to ban bots entirely
it’s not legally binding, but it is still customary for bots to follow it
i did also neglect to mention that robots.txt allows you to specify different rules for different bots, but that didn’t seem particularly relevant here.
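for reference, here’s a small hypothetical robots.txt illustrating those points (made-up paths and bot name, not taken from any real site) — a list of rules, partial allow/disallow, and different rules for different bots:

```
# applies to all bots
User-agent: *
Disallow: /private/
Allow: /

# stricter rules for one specific bot
User-agent: ExampleBot
Disallow: /
```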
Websites actually just list broad areas (path prefixes), as listing every file/page would be far too verbose for many websites and impossible for any site with dynamic/user-generated content.
You can view examples by going to almost any website’s base URL and adding /robots.txt to the end of it.
Out of curiosity, how would you word it?
i would probably word it as something like:
“A list of files/pages that a website owner doesn’t want bots to crawl.” Or something like that.
For example, www.google.com/robots.txt.
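since well-behaved bots customarily check these rules before fetching a page, here’s a minimal sketch of how that check looks in practice, using Python’s standard urllib.robotparser. the rule set below is a made-up example (not any real site’s robots.txt), and example.com is a placeholder domain:

```python
from urllib.robotparser import RobotFileParser

# parse a hypothetical rule set instead of fetching a live robots.txt
rp = RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /private/
Allow: /
""".splitlines())

# a bot asks whether it may fetch a given URL before crawling it
print(rp.can_fetch("MyBot", "https://example.com/public/page"))   # True
print(rp.can_fetch("MyBot", "https://example.com/private/data"))  # False
```

note that this check is voluntary: nothing stops a bot from fetching /private/ anyway, which is why robots.txt is a convention rather than an access control.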