A robots.txt file allows you to customize how your documentation is indexed by search engines. It's useful for:
- Hiding various pages from search engines
- Disabling certain web crawlers from accessing your documentation
- Disallowing any indexing of your documentation
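For example, a minimal robots.txt could look like the following sketch. The crawler name and path shown here are placeholders, not values taken from any real project:

```
# Block one specific crawler entirely (the crawler name is an example)
User-agent: ExampleBot
Disallow: /

# Hide a single path from all other crawlers (the path is an example)
User-agent: *
Disallow: /en/latest/internal/
```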
Read the Docs automatically generates one for you with a configuration that works for most projects.
By default, the automatically created robots.txt:
- Hides versions which are set to Hidden from being indexed.
- Allows indexing of all other versions.
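As an illustration only (not the literal file Read the Docs generates), a project with a hidden version named old might end up with rules along these lines; the exact contents depend on your project's language and versions:

```
# Illustrative sketch -- actual contents depend on your versions and language
User-agent: *
Disallow: /en/old/
```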
robots.txt files are respected by most search engines, but they aren't a guarantee that your pages will not be indexed. Search engines may choose to ignore your robots.txt and index your docs anyway.
If you require private documentation, please see Private documentation sharing.
How it works
You can customize this file to add more rules to it.
The robots.txt file will be served from the default version of your project. This is because the robots.txt file is served at the top level of your domain, so we must choose a version to find the file in.
The default version is the best place to look for it.
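For instance, for a project served on the default Read the Docs domain, the file is fetched from the domain root; the project slug below is a placeholder:

```console
curl https://my-project.readthedocs.io/robots.txt
```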
Documentation tools will have different ways of generating a robots.txt file.
We have examples for some of the most popular tools below.
Sphinx uses the html_extra_path configuration value to add static files to its final HTML output.
You need to create a robots.txt file and put it under the path defined in html_extra_path.
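A minimal sketch of the Sphinx side, assuming the robots.txt file lives in a directory named _extra next to conf.py (the directory name is an assumption, not a requirement):

```python
# conf.py -- a sketch; "_extra" is an assumed directory containing robots.txt
# Sphinx copies the contents of each listed path into the root of the
# built HTML output, so _extra/robots.txt ends up at the site root.
html_extra_path = ["_extra"]
```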
MkDocs needs the robots.txt file to be in the directory defined by the docs_dir configuration value.
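MkDocs copies non-Markdown files from docs_dir into the built site at the same relative path, so placing robots.txt at the top of that directory puts it at the site root. A sketch, assuming the default layout:

```yaml
# mkdocs.yml -- docs_dir defaults to "docs"; shown here only for clarity
docs_dir: docs
```

With this layout, place the file at docs/robots.txt and it will be copied to the root of the built site.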