
Robots.txt

Last updated: Mar 14, 2021


WHAT IS ROBOTS.TXT

Definition of robots.txt

Robots.txt is a file used by search engine crawlers to determine which pages of your website they may visit. In certain cases, web developers want a page to be PUBLIC for human users but not for search engines such as Google, Bing, and Yahoo.

This file implements the robots exclusion protocol, a de facto standard that governs communication between websites and non-human visitors.

The robots exclusion protocol (robots.txt) allows web developers to decide which parts, files, or folders of their website can be accessed by a bot or crawler.

Sample Code or Robots.txt Syntax

user-agent: googlebot
disallow: /login

user-agent: googlebot-news
disallow: /media

user-agent: googlebot-image

Based on the syntax sample above, here is the explanation (a small verification sketch follows this list):

  • The googlebot user-agent is not allowed to crawl the /login folder.
  • The googlebot-news user-agent is not allowed to crawl the /media folder.
  • The googlebot-image user-agent is allowed to crawl all folders on the www.cmlabs.co website without any restrictions.
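
If you want to check how such rules behave in practice, below is a minimal sketch (not part of the original article) that parses a similar robots.txt with Python's standard urllib.robotparser module. Note that this standard-library parser implements the classic robots exclusion rules, so its handling of overlapping user-agent groups can differ from Google's own parser; the sketch therefore uses a single group, and the domain and paths are just examples.

# Minimal, illustrative sketch: checking robots.txt rules with Python's
# standard library.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /login
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Googlebot is blocked from /login ...
print(parser.can_fetch("Googlebot", "https://www.cmlabs.co/login"))  # False
# ... but may crawl any other path.
print(parser.can_fetch("Googlebot", "https://www.cmlabs.co/blog"))   # True
# A crawler with no matching group (and no "User-agent: *" fallback)
# is allowed everywhere by default.
print(parser.can_fetch("Bingbot", "https://www.cmlabs.co/login"))    # True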

Sample and Implementation of robots.txt URL

In general, a robots.txt file is NOT VALID for a different subdomain, protocol, or port. However, it IS VALID for all files in all sub-directories on the same host, protocol, and port.

Check the sample locations of the robots.txt file on the website server below (a short sketch after the list shows which robots.txt URL governs a given page):

VALID EXAMPLES
http://robots.co/robots.txt
http://robots.co/folder/file/robots.txt

INVALID EXAMPLES
http://other.cmlabs.co/robots.txt
https://cmlabs.co/robots.txt
http://cmlabs.co:8181/robots.txt
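
To make the same-host, same-protocol, same-port rule concrete, here is a small illustrative sketch (an assumption-level example, not from the original article) that derives the robots.txt URL governing a given page:

# Illustrative sketch: which robots.txt URL governs a given page?
# The rule described above: same scheme, same host, and same port.
from urllib.parse import urlsplit, urlunsplit

def governing_robots_txt(page_url: str) -> str:
    """Return the robots.txt URL for the scheme://host[:port] of page_url."""
    parts = urlsplit(page_url)
    # Keep scheme and netloc (host plus optional port); replace the path.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(governing_robots_txt("http://cmlabs.co/folder/file"))
# -> http://cmlabs.co/robots.txt
print(governing_robots_txt("https://cmlabs.co/folder/file"))
# -> https://cmlabs.co/robots.txt (a different file, because the protocol differs)
print(governing_robots_txt("http://cmlabs.co:8181/page"))
# -> http://cmlabs.co:8181/robots.txt (a different file, because the port differs)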

Important note

When this page was published (on May 21st, 2020), the definition and implementation of robots.txt described here applied only to Google. In other words, other search engines such as Bing, Yahoo, Yandex, etc. do not always use the same standard.

However, global standardization has been under discussion in the international community.

Misunderstanding

Robots.txt is not the right file to use for hiding a file or page from search engine crawlers.

The right answer to the question "what should we do to hide a page from Google?" is to insert a noindex directive, either in the page's HTML or in the HTTP response header (a small server-side sketch follows the examples below).

<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">

RESPONSE HEADER

HTTP/1.1 200 OK
(…)
X-Robots-Tag: noindex
(…)
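
As an illustration of how such a header could be attached, here is a minimal sketch (not from the original article) that serves every response with an X-Robots-Tag: noindex header using Python's standard http.server. A production site would normally set this header in its web server or framework configuration instead.

# Minimal, illustrative sketch: attach "X-Robots-Tag: noindex" to every
# response served from the current directory. Port 8000 is an arbitrary choice.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class NoIndexHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Add the noindex directive before the headers are flushed.
        self.send_header("X-Robots-Tag", "noindex")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), NoIndexHandler).serve_forever()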

Changes to the Protocol Standard

On July 1st, 2019, Google announced through its official blog that the robots.txt protocol was being prepared to become an Internet standard, meaning that all search engines would agree to the same provisions.

Related Terms

User-agent / bot

A user-agent is a robot (bot) that search engines use to crawl websites on the internet.
