Robots.txt is a file that tells search engine crawlers which pages of your website they may visit. In some cases, web developers want a page to be PUBLIC for users but not for search engines such as Google, Bing, and Yahoo.
This file implements the Robots Exclusion Protocol, a de facto standard that acts as a boundary between websites and non-human visitors.
The Robots Exclusion Protocol, or robots.txt, lets web developers decide which parts, files, or folders of their website can be accessed by a bot or crawler.
User-agent: googlebot
Disallow: /login

User-agent: googlebot-news
Disallow: /media

User-agent: googlebot-image
Based on the sample syntax above, here is the explanation:
- The googlebot user-agent is prohibited from crawling the /login folder.
- The googlebot-news user-agent is prohibited from crawling the /media folder.
- The googlebot-image user-agent is allowed to look through all folders of the www.cmlabs.co website without any limitations, since no Disallow rule follows it.
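Rules like these can be checked programmatically. Below is a hedged sketch using Python's standard-library parser, with rules mirroring the sample above; the agent names and paths tested are illustrative only:

```python
from urllib.robotparser import RobotFileParser

# Rules mirroring the sample above (illustrative only).
rules = """\
User-agent: googlebot
Disallow: /login

User-agent: googlebot-news
Disallow: /media
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# googlebot is blocked from /login but may fetch other paths.
print(parser.can_fetch("googlebot", "/login"))  # False
print(parser.can_fetch("googlebot", "/about"))  # True
# An agent no group names (here a hypothetical "bingbot") is unrestricted.
print(parser.can_fetch("bingbot", "/login"))    # True
```

One caveat with this module: it matches user-agent names by substring, so a `googlebot` group also applies to `googlebot-news`; the checks above stick to cases where the result is unambiguous.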
In general, a robots.txt file is NOT VALID for other subdomains, protocols, or ports. However, it IS VALID for all files in all subdirectories on the same host, protocol, and port.
Compare these sample locations of the robots.txt file in the website's server directory:
VALID EXAMPLES
http://robots.co/robots.txt
http://robots.co/folder/file/robots.txt

INVALID EXAMPLES
http://other.cmlabs.co/robots.txt
https://cmlabs.co/robots.txt
http://cmlabs.co:8181/robots.txt
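The scope rule follows from how the robots.txt location is derived: only the scheme, host, and port of a page's URL matter, never its path. A short sketch (the `robots_url` helper is hypothetical, for illustration):

```python
from urllib.parse import urlsplit

def robots_url(page_url):
    # The robots.txt location is derived from scheme, host, and port only;
    # the path is always /robots.txt at the site root.
    parts = urlsplit(page_url)
    return f"{parts.scheme}://{parts.netloc}/robots.txt"

# Same host, protocol, and port -> governed by the same robots.txt.
print(robots_url("http://cmlabs.co/folder/file"))  # http://cmlabs.co/robots.txt
# A different scheme or port points at a different robots.txt.
print(robots_url("https://cmlabs.co/page"))        # https://cmlabs.co/robots.txt
print(robots_url("http://cmlabs.co:8181/page"))    # http://cmlabs.co:8181/robots.txt
```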
Robots.txt is not the right tool for hiding a file or page from search engine crawlers.
The right answer to "what should we do to hide files from Google?" is to insert a noindex tag:
<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">
The same directive can also be sent as an HTTP response header:

HTTP/1.1 200 OK
(…)
X-Robots-Tag: noindex
(…)
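The header form matters for non-HTML resources such as PDFs or images, which cannot carry a meta tag. A minimal WSGI sketch (the app and file are hypothetical) showing how a server might attach the header:

```python
# Hypothetical WSGI app serving a PDF that should stay out of search indexes.
def app(environ, start_response):
    headers = [
        ("Content-Type", "application/pdf"),
        ("X-Robots-Tag", "noindex"),  # tells crawlers not to index this response
    ]
    start_response("200 OK", headers)
    return [b"%PDF- placeholder body"]  # placeholder content, not a real PDF
```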
On July 1st, 2019, Google announced through its official blog that the robots.txt protocol was being prepared to become an Internet standard, meaning that all search engines would agree to this provision.