Robots.txt is a plain-text file, placed in the root directory of a website (e.g. https://example.com/robots.txt), that tells search engine crawlers which pages or sections of the site they should not crawl. Site owners commonly use it to keep crawlers away from areas such as login pages, internal search results, duplicate content, or low-quality pages that could hurt the site's search engine rankings. Note that robots.txt controls crawling, not indexing: a disallowed URL can still appear in search results if other pages link to it, and the file offers no real protection for sensitive data, since it is publicly readable and compliant crawlers follow it only voluntarily.
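As a sketch of how the rules are interpreted, the snippet below parses a hypothetical robots.txt (the paths and domain are illustrative, not from the original) using Python's standard-library `urllib.robotparser`, the same logic a well-behaved crawler applies before fetching a URL:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /login/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks each URL against the rules before fetching it
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False: under /admin/
print(rp.can_fetch("*", "https://example.com/blog/post-1"))     # True: covered by Allow: /
```

The `User-agent: *` line makes the rules apply to all crawlers; a site can also target specific bots (for example `User-agent: Googlebot`) with their own Allow/Disallow sections.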
