Read the Docs: documentation simplified

Read the Docs tutorial

In this tutorial you will create a documentation project on Read the Docs by importing a Sphinx project from a GitHub repository, tailor its configuration, and explore several useful features of the platform.

The tutorial is aimed at people interested in learning how to use Read the Docs to host their documentation projects. You will fork a fictional software library similar to the one developed in the official Sphinx tutorial. No prior experience with Sphinx is required and you can follow this tutorial without having done the Sphinx one.

The only things you will need are a web browser, an Internet connection, and a GitHub account (you can register for a free account if you don’t have one). You will use Read the Docs Community, which means that the project will be public.

Getting started

Preparing your project on GitHub

To start, sign in to GitHub and navigate to the tutorial GitHub template, where you will see a green Use this template button. Click it to open a new page that will ask you for some details:

  • Leave the default “Owner”, or change it to another account or organization of your choosing.

  • Enter an appropriate “Repository name”, for example rtd-tutorial.

  • Make sure the project is “Public”, rather than “Private”.

After that, click on the green Create repository from template button, which will generate a new repository on your personal account (or the one of your choosing). This is the repository you will import on Read the Docs, and it contains the following files:

.readthedocs.yaml

Read the Docs configuration file. Required to set up the documentation build process.

README.rst

Basic description of the repository; you will leave it untouched.

pyproject.toml

Python project metadata that makes it installable. Useful for automatic documentation generation from sources.

lumache.py

Source code of the fictional Python library.

docs/

Directory holding all the Sphinx documentation sources, including the Sphinx configuration docs/source/conf.py and the root document docs/source/index.rst written in reStructuredText.
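Laid out as a tree, the documentation sources described above look like this:

```text
docs/
└── source/
    ├── conf.py    # Sphinx configuration
    └── index.rst  # root document, written in reStructuredText
```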

GitHub template for the tutorial

Sign up for Read the Docs

To sign up for a Read the Docs account, navigate to the Sign Up page and choose the option Sign up with GitHub. On the authorization page, click the green Authorize readthedocs button.

GitHub authorization page

Note

Read the Docs needs elevated permissions to perform certain operations that ensure that the workflow is as smooth as possible, like installing webhooks. If you want to learn more, check out Permissions for connected accounts.

After that, you will be redirected to Read the Docs, where you will need to confirm your e-mail and username. Clicking the Sign Up » button will create your account and redirect you to your dashboard.

By now, you should have two email notifications:

  • One from GitHub, telling you that “A third-party OAuth application … was recently authorized to access your account”. You don’t need to do anything about it.

  • Another one from Read the Docs, prompting you to “verify your email address”. Click on the link to finalize the process.

Once done, your Read the Docs account is created and you are ready to import your first project.

Welcome!

Read the Docs empty dashboard

Note

Our commercial site offers some extra features, like support for private projects. You can learn more about our two different sites.

First steps

Importing the project to Read the Docs

To import your GitHub project to Read the Docs, first click on the Import a Project button on your dashboard (or browse to the import page directly). You should see your GitHub account under the “Filter repositories” list on the right. If the list of repositories is empty, click the 🔄 button, and after that all your repositories will appear in the center.

Import projects workflow

Locate your rtd-tutorial project (possibly clicking next ›› at the bottom if you have several pages of projects), and then click on the ➕ button to the right of the name. The next page will ask you to fill in some details about your Read the Docs project:

Name

The name of the project. It has to be unique across the whole service, so it is better to prepend your username, for example {username}-rtd-tutorial.

Repository URL

The URL that contains the sources. Leave the automatically filled value.

Default branch

Name of the default branch of the project; leave it as main.

After hitting the Next button, you will be redirected to the project home. You just created your first project on Read the Docs! 🎉

Project home

Checking the first build

Read the Docs will try to build the documentation of your project right after you create it. To see the build logs, click on the Your documentation is building link on the project home, or alternatively navigate to the “Builds” page, then open the one on top (the most recent one).

If the build has not finished yet by the time you open it, you will see a spinner next to an “Installing” or “Building” indicator, meaning that it is still in progress.

First successful documentation build

When the build finishes, you will see a green “Build completed” indicator, the completion date, the elapsed time, and a link to see the corresponding documentation. If you now click on View docs, you will see your documentation live!

HTML documentation live on Read the Docs

Note

Advertisement is one of our main sources of revenue. If you want to learn more about how we fund our operations and explore options to go ad-free, check out our Sustainability page.

If you don’t see the ad, you might be using an ad blocker. Our EthicalAds network respects your privacy, doesn’t target you, and tries to be as unobtrusive as possible, so we kindly ask you not to block us ❤️

Basic configuration changes

You can now proceed to make some basic configuration adjustments. Navigate back to the project page and click on the ⚙ Admin button, which will open the Settings page.

First of all, add the following text in the description:

Lumache (/lu’make/) is a Python library for cooks and food lovers that creates recipes mixing random ingredients.

Then set the project homepage to https://world.openfoodfacts.org/, and write food, python in the list of tags. All this information will be shown on your project home.

After that, configure your email so you get a notification if the build fails. To do so, click on the Notifications link on the left, type the email where you would like to get the notification, and click the Add button. After that, your email will be shown under “Existing Notifications”.

Trigger a build from a pull request

Read the Docs allows you to trigger builds from GitHub pull requests and gives you a preview of how the documentation will look with those changes.

To enable that functionality, first click on the Settings link on the left under the ⚙ Admin menu, check the “Build pull requests for this project” checkbox, and click the Save button at the bottom of the page.

Next, navigate to your GitHub repository, locate the file docs/source/index.rst, and click on the ✏️ icon on the top-right with the tooltip “Edit this file” to open a web editor (more information on their documentation).

File view on GitHub before launching the editor

In the editor, add the following sentence to the file:

docs/source/index.rst
Lumache has its documentation hosted on Read the Docs.

Write an appropriate commit message, and choose the “Create a new branch for this commit and start a pull request” option, typing a name for the new branch. When you are done, click the green Propose changes button, which will take you to the new pull request page, and there click the Create pull request button below the description.

Read the Docs building the pull request from GitHub

After opening the pull request, a Read the Docs check will appear, indicating that it is building the documentation for that pull request. If you click the Details link while it is building, you will see the build logs; once the build finishes, the same link takes you directly to the documentation. When you are satisfied, you can merge the pull request!

Adding a configuration file

The Admin tab of the project home allows you to change some global configuration values of your project. In addition, you can further customize the building process using the .readthedocs.yaml configuration file. This has several advantages:

  • The configuration lives next to your code and documentation, tracked by version control.

  • It can be different for every version (more on versioning in the next section).

  • Some configurations are only available using the config file.

This configuration file should be part of your Git repository. It should be located in the base folder of the repository and be named .readthedocs.yaml.

In this section, we will show you some examples of what a configuration file should contain.

Tip

Settings that apply to the entire project are controlled in the web dashboard, while settings that are version or build specific are better in the YAML file.

Changing the Python version

For example, to explicitly use Python 3.8 to build your project, navigate to your GitHub repository, click on the .readthedocs.yaml file and then on the pencil icon ✏️ to edit it, and change the Python version as follows:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"

python:
  install:
    - requirements: docs/requirements.txt

sphinx:
  configuration: docs/source/conf.py

The purpose of each key is:

version

Mandatory, specifies version 2 of the configuration file.

build.os

States the name of the base image; required in order to specify the Python version.

build.tools.python

Declares the Python version to be used.

python.install.requirements

Specifies the Python dependencies required to build the documentation.

After you commit these changes, go back to your project home, navigate to the “Builds” page, and open the new build that just started. You will notice that one of the lines contains python -mvirtualenv: if you click on it, you will see the full output of the corresponding command, stating that it used Python 3.8.6 to create the virtual environment.

Read the Docs build using Python 3.8

Making warnings more visible

If you navigate to your HTML documentation, you will notice that the index page looks correct but the API section is empty. This is a very common issue with Sphinx, and the reason is stated in the build logs. On the build page you opened before, click on the View raw link on the top right, which opens the build logs in plain text, and you will see several warnings:

WARNING: [autosummary] failed to import 'lumache': no module named lumache
...
WARNING: autodoc: failed to import function 'get_random_ingredients' from module 'lumache'; the following exception was raised:
No module named 'lumache'
WARNING: autodoc: failed to import exception 'InvalidKindError' from module 'lumache'; the following exception was raised:
No module named 'lumache'

To spot these warnings more easily and allow you to address them, you can add the sphinx.fail_on_warning option to your Read the Docs configuration file. For that, navigate to GitHub, locate the .readthedocs.yaml file you created earlier, click on the ✏️ icon, and add these contents:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"

python:
  install:
    - requirements: docs/requirements.txt

sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true

At this point, if you navigate back to your “Builds” page, you will see a Failed build, which is exactly the intended result: the Sphinx project is not properly configured yet, and instead of rendering an empty API page, now the build fails.

The reason sphinx.ext.autosummary and sphinx.ext.autodoc fail to import the code is that it is not installed. Luckily, the .readthedocs.yaml also allows you to specify which requirements to install.

To install the library code of your project, go back to editing .readthedocs.yaml on GitHub and modify it as follows:

.readthedocs.yaml
python:
  install:
    - requirements: docs/requirements.txt
    # Install our python package before building the docs
    - method: pip
      path: .

With this change, Read the Docs will install the Python code before starting the Sphinx build, which will finish seamlessly. If you go now to the API page of your HTML documentation, you will see the lumache summary!

Enabling PDF and EPUB builds

Sphinx can build several other formats in addition to HTML, such as PDF and EPUB. You might want to enable these formats for your project so your users can read the documentation offline.

To do so, add this extra content to your .readthedocs.yaml:

.readthedocs.yaml
sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true

formats:
  - pdf
  - epub

After this change, PDF and EPUB downloads will be available both from the “Downloads” section of the project home and from the flyout menu.
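Putting together all of the changes from this section, the complete .readthedocs.yaml now reads:

```yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.8"

python:
  install:
    - requirements: docs/requirements.txt
    # Install our python package before building the docs
    - method: pip
      path: .

sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true

formats:
  - pdf
  - epub
```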

Downloads available from the flyout menu

Versioning documentation

Read the Docs allows you to have several versions of your documentation, in the same way that you have several versions of your code. By default, it creates a latest version that points to the default branch of your version control system (main in the case of this tutorial), and that’s why the URLs of your HTML documentation contain the string /latest/.

Creating a new version

Let’s say you want to create a 1.0 version of your code, with a corresponding 1.0 version of the documentation. For that, first navigate to your GitHub repository, click on the branch selector, type 1.0.x, and click on “Create branch: 1.0.x from ‘main’” (more information on their documentation).

Next, go to your project home, click on the Versions button, and under “Active Versions” you will see two entries:

  • The latest version, pointing to the main branch.

  • A new stable version, pointing to the origin/1.0.x branch.

List of active versions of the project

Right after you created your branch, Read the Docs created a new special version called stable pointing to it, and started building it. When the build finishes, the stable version will be listed in the flyout menu and your readers will be able to choose it.

Note

Read the Docs follows some rules to decide whether to create a stable version pointing to your new branch or tag. To simplify, it will check if the name resembles a version number like 1.0, 2.0.3 or 4.x.

Now you might want to set stable as the default version, rather than latest, so that users see the stable documentation when they visit the root URL of your documentation (while still being able to change the version in the flyout menu).

For that, go to the Settings link under the ⚙ Admin menu of your project home, choose stable in the “Default version*” dropdown, and hit Save at the bottom. Done!

Modifying versions

Both latest and stable are now active, which means that they are visible for users, and new builds can be triggered for them. In addition to these, Read the Docs also created an inactive 1.0.x version, which will always point to the 1.0.x branch of your repository.

List of inactive versions of the project

Let’s activate the 1.0.x version. For that, go to the “Versions” page on your project home, locate 1.0.x under “Activate a version”, and click on the Activate button. This will take you to a new page with two checkboxes, “Active” and “Hidden”. Check only “Active”, and click Save.

After you do this, 1.0.x will appear on the “Active Versions” section, and a new build will be triggered for it.

Note

You can read more about hidden versions in our documentation.

Getting insights from your projects

Once your project is up and running, you will probably want to understand how readers are using your documentation, answering common questions like:

  • Which pages are the most visited?

  • Which search terms are used most frequently?

  • Are readers finding what they are looking for?

Read the Docs offers you some analytics tools to find out the answers.

Browsing traffic analytics

The Traffic Analytics view shows the top viewed documentation pages of the past 30 days, plus a visualization of the daily views during that period. To generate some artificial views on your newly created project, click around the different pages of your project; they will be counted immediately in the current day’s statistics.

To see the Traffic Analytics view, go back to the project page, click on the ⚙ Admin button, and then click on the Traffic Analytics section. You will see the list of pages in descending order of visits, as well as a plot similar to the one below.

Traffic Analytics plot

Note

The Traffic Analytics view explained above gives you a simple overview of how your readers browse your documentation. It has the advantage that it stores no identifying information about your visitors, and therefore respects their privacy. However, you might want more detailed data, which you can get by enabling Google Analytics. Note, though, that we take some extra measures to respect user privacy when readers visit projects that have Google Analytics enabled, which might reduce the number of visits counted.

Finally, you can also download this data for closer inspection. To do that, scroll to the bottom of the page and click on the Download all data button. That will prompt you to download a CSV file that you can process any way you want.

Browsing search analytics

Apart from traffic analytics, Read the Docs also offers the possibility to inspect what search terms your readers use on your documentation. This can inform decisions on what areas to reinforce, or what parts of your project are less understood or more difficult to find.

To generate some artificial search statistics on the project, go to the HTML documentation, locate the Sphinx search box on the left, type ingredients, and press the Enter key. You will be redirected to the search results page, which will show two entries.

Next, go back to the ⚙ Admin section of your project page, and then click on the Search Analytics section. You will see a table with the most searched queries (including the ingredients one you just typed), how many results each query returned, and how many times it was searched. Below the queries table, you will also see a visualization of the daily number of search queries during the past 30 days.

Most searched terms

As with Traffic Analytics, you can download the whole dataset in CSV format by clicking the Download all data button.

Where to go from here

This is the end of the tutorial. You started by forking a GitHub repository and importing it on Read the Docs, then built its HTML documentation and went through a series of steps to customize the build process, tweak the project configuration, and add new versions.

Here are some resources to continue learning about documentation and Read the Docs:

Happy documenting!

Choosing between our two platforms

Users often ask what the differences are between Read the Docs Community and Read the Docs for Business.

While many of our features are available on both of these platforms, there are some key differences.

Read the Docs Community

Read the Docs Community is exclusively for hosting open source documentation. We support open source communities by providing free documentation building and hosting services, for projects of all sizes.

Important points:

  • Open source project hosting is always free

  • All documentation sites include advertising

  • Only supports public VCS repositories

  • All documentation is publicly accessible to the world

  • Less build time and fewer build resources (memory & CPU)

  • Email support included only for issues with our platform

  • Documentation is organized by projects

You can sign up for an account at https://readthedocs.org.

Read the Docs for Business

Read the Docs for Business is meant for companies and users who have more complex requirements for their documentation project. This can include commercial projects with private source code, projects that can only be viewed with authentication, and even large scale projects that are publicly available.

Important points:

  • Hosting plans require a paid subscription plan

  • There is no advertising on documentation sites

  • Allows importing private and public repositories from VCS

  • Supports private versions that require authentication to view

  • Supports team authentication, including SSO with Google, GitHub, GitLab, and Bitbucket

  • More build time and more build resources (memory & CPU)

  • Includes 24x5 email support, with 24x7 SLA support available

  • Documentation is organized by organization, giving more control over permissions

You can sign up for an account at https://readthedocs.com.

Questions?

If you have a question about which platform would be best, email us at support@readthedocs.org.

Getting started with Sphinx

Sphinx is a powerful documentation generator with many great features for writing technical documentation, including:

  • Web pages, printable PDFs, e-reader documents (ePub), and more, all generated from the same sources

  • Documentation written in reStructuredText or Markdown

  • An extensive system of cross-referencing code and documentation

  • Syntax-highlighted code samples

  • A vibrant ecosystem of first- and third-party extensions

If you want to learn more about how to create your first Sphinx project, read on. If you are interested in exploring the Read the Docs platform using an already existing Sphinx project, check out Read the Docs tutorial.

Quick start

See also

If you already have a Sphinx project, check out our Importing your documentation guide.

Assuming you have Python already, install Sphinx:

pip install sphinx

Create a directory inside your project to hold your docs:

cd /path/to/project
mkdir docs

Run sphinx-quickstart in there:

cd docs
sphinx-quickstart

This quick start will walk you through creating the basic configuration; in most cases, you can just accept the defaults. When it’s done, you’ll have an index.rst, a conf.py and some other files. Add these to revision control.

Now, edit your index.rst and add some information about your project. Include as much detail as you like (refer to the reStructuredText syntax or this template if you need help). Build them to see how they look:

make html

Your index.rst has been built into index.html in your documentation output directory (typically _build/html/index.html). Open this file in your web browser to see your docs.

Your Sphinx project is built

Edit your files and rebuild until you like what you see, then commit your changes and push to your public repository. Once you have Sphinx documentation in a public repository, you can start using Read the Docs by importing your docs.

Warning

We strongly recommend pinning the Sphinx version used to build your project’s docs, to avoid potential future incompatibilities.
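For example, if your documentation dependencies live in a requirements file referenced from your Read the Docs configuration, pinning can look like the sketch below. The file path and the version numbers are only illustrative assumptions; use the versions your project actually needs:

```text
# docs/requirements.txt  (assumed path; versions shown are examples)
sphinx==5.3.0
sphinx-rtd-theme==1.2.0
```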

Using Markdown with Sphinx

You can use Markdown (via MyST) and reStructuredText in the same Sphinx project. We support this natively on Read the Docs, and you can also do it locally:

pip install myst-parser

Then in your conf.py:

extensions = ["myst_parser"]

You can now continue writing your docs in .md files and it will work with Sphinx. Read the Getting started with MyST in Sphinx docs for additional instructions.

Get inspired!

For more inspiration and first ingredients for your own documentation project, take a look at Example projects: view live example renditions and copy & paste from the accompanying source code.

External resources

Here are some external resources to help you learn more about Sphinx.

Getting started with MkDocs

MkDocs is a documentation generator that focuses on speed and simplicity. It has many great features including:

  • Previewing your documentation as you write it

  • Easy customization with themes and extensions

  • Writing documentation in Markdown

Note

MkDocs is a great choice for building technical documentation. However, Read the Docs also supports Sphinx, another tool for writing and building documentation.

Quick start

See also

If you already have an MkDocs project, check out our Importing your documentation guide.

Assuming you have Python already, install MkDocs:

pip install mkdocs

Set up your MkDocs project:

mkdocs new .

This command creates mkdocs.yml which holds your MkDocs configuration, and docs/index.md which is the Markdown file that is the entry point for your documentation.
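The generated mkdocs.yml is minimal. As a sketch of common next steps, you might expand it like this; the theme and nav keys are optional additions for illustration, not part of what mkdocs new generates:

```yaml
site_name: My Docs        # default generated by `mkdocs new`; change to your project name
theme:
  name: readthedocs       # optional: the readthedocs theme bundled with MkDocs
nav:                      # optional: explicit navigation instead of automatic discovery
  - Home: index.md
```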

You can edit this index.md file to add more details about your project and then you can build your documentation:

mkdocs serve

This command builds your Markdown files into HTML and starts a development server to browse your documentation. Open up http://127.0.0.1:8000/ in your web browser to see your documentation. You can make changes to your Markdown files and your docs will automatically rebuild.

Your MkDocs project is built

Once you have your documentation in a public repository such as GitHub, Bitbucket, or GitLab, you can start using Read the Docs by importing your docs.

Warning

We strongly recommend pinning the MkDocs version used to build your project’s docs, to avoid potential future incompatibilities.

Get inspired!

For more inspiration and first ingredients for your own documentation project, take a look at Example projects: view live example renditions and copy & paste from the accompanying source code.

External resources

Here are some external resources to help you learn more about MkDocs.

Importing your documentation

To import a public documentation repository, visit your Read the Docs dashboard and click Import. For private repositories, please use Read the Docs for Business.

Automatically import your docs

If you have connected your Read the Docs account to GitHub, Bitbucket, or GitLab, you will see a list of your repositories that we are able to import. To import one of these projects, just click the import icon next to the repository you’d like to import. This will bring up a form that is already filled with your project’s information. Feel free to edit any of these properties, and then click Next to build your documentation.

Importing a repository

Manually import your docs

If you have not connected a Git provider account, you will need to select Import Manually and enter the information for your repository yourself. You will also need to manually configure the webhook for your repository. When importing your project, you will be asked for the repository URL, along with some other information for your new project. The URL is normally the URL or path name you’d use to checkout, clone, or branch your repository. Some examples:

  • Git: https://github.com/ericholscher/django-kong.git

  • Mercurial: https://bitbucket.org/ianb/pip

  • Subversion: http://varnish-cache.org/svn/trunk

  • Bazaar: lp:pasta

Add an optional homepage URL and some tags, and then click Next.

Once your project is created, you’ll need to manually configure the repository webhook if you would like to have new changes trigger builds for your project on Read the Docs. Go to your project’s Admin > Integrations page to configure a new webhook.

See also

How to manually configure a Git repository integration

Once you have imported your git project, use this guide to manually set up basic and additional webhook integration.

Note

The Admin page can be found at https://readthedocs.org/dashboard/<project-slug>/edit/. You can access all of the project settings from the admin page sidebar.

Building your documentation

Within a few seconds of completing the import process, your code will automatically be fetched from your repository, and the documentation will be built. Check out our Build process overview page to learn more about how Read the Docs builds your docs, and to troubleshoot any issues that arise.

We require an additional configuration file to build your project. This allows you to specify special requirements for your build, such as your version of Python or how you wish to install additional Python requirements. You can configure these settings in a .readthedocs.yaml file. See our Configuration file overview docs for more details.
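As a starting point, a minimal .readthedocs.yaml for a Sphinx project can look like the sketch below; adjust the OS image, Python version, and configuration path to match your project:

```yaml
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"

sphinx:
  configuration: docs/conf.py
```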

Note

A configuration file has been required since September 2023.

It is also important to note that the default version of Sphinx is v1.8.5. We recommend setting the version your project uses explicitly, with pinned dependencies.

Read the Docs will host multiple versions of your code. You can read more about how to use this well on our Versions page.

If you have any more trouble, don’t hesitate to reach out to us. The Site support page has more information on getting in touch.

Example projects

  • Need inspiration?

  • Want to bootstrap a new documentation project?

  • Want to showcase your own solution?

The following example projects show a rich variety of uses of Read the Docs. You can use them for inspiration, for learning and for recipes to start your own documentation projects. View the rendered version of each project and then head over to the Git source to see how it’s done and reuse the code.

Sphinx and MkDocs examples

  • Basic Sphinx (Sphinx) [Git] [Rendered]: Sphinx example with versioning and Python doc autogeneration

  • Basic MkDocs (MkDocs) [Git] [Rendered]: Basic example of using MkDocs

  • Jupyter Book (Jupyter Book and Sphinx) [Git] [Rendered]: Jupyter Book with popular integrations configured

  • Basic AsciiDoc (Antora) [Git] [Rendered]: Antora with the asciidoctor-kroki extension configured for AsciiDoc and Diagram as Code

Real-life examples

We maintain an Awesome List where you can contribute new shiny examples of using Read the Docs. Please refer to the instructions on how to submit new entries on Awesome Read the Docs Projects.

Contributing an example project

We would love to add more examples that showcase features of Read the Docs or great tools or methods to build documentation projects.

We require that an example project:

  • is hosted and maintained by you in its own Git repository, named example-<topic>.

  • contains a README.

  • uses a .readthedocs.yaml configuration.

  • is added to the above list by opening a PR targeting examples.rst.

We recommend that your project:

  • has continuous integration and PR builds.

  • is versioned as a real software project, i.e. using git tags.

  • covers your most important scenarios, but references external real-life projects whenever possible.

  • has a minimal tech stack – or whatever you feel comfortable about maintaining.

  • copies from an existing example project as a template to get started.

We’re excited to see what you come up with!

Configuration file overview

As part of the initial setup for your Read the Docs site, you need to create a configuration file called .readthedocs.yaml. The configuration file tells Read the Docs what specific settings to use for your project.

This tutorial covers:

  1. Where to put your configuration file.

  2. What to put in the configuration file.

  3. How to customize the configuration for your project.

See also

Read the Docs tutorial.

Following the steps in our tutorial will help you set up your first documentation project.

Where to put your configuration file

The .readthedocs.yaml file should be placed in the top-most directory of your project’s repository. We will get to the contents of the file in the next steps.

When you have changed the configuration file, you need to commit and push the changes to your Git repository. Read the Docs will then automatically find and use the configuration to build your project.

Note

The Read the Docs configuration file is a YAML file. YAML is a human-friendly data serialization language for all programming languages. To learn more about the structure of these files, see the YAML language overview.

Getting started with a template

Here are some configuration file examples to help you get started. Pick an example based on the tool that your project is using, copy its contents to .readthedocs.yaml and add the file to your Git repository.

If your project uses Sphinx, we offer a special builder optimized for Sphinx projects.

.readthedocs.yaml
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    # You can also specify other tool versions:
    # nodejs: "20"
    # rust: "1.70"
    # golang: "1.20"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py
  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt

Editing the template

Now that you have a .readthedocs.yaml file added to your Git repository, you should see Read the Docs trying to build your project with the configuration file. The configuration file probably needs some adjustments to match your project’s setup exactly.

Note

If you added the configuration file in a separate branch, you may have to activate a version for that branch.

If you have added the file in a pull request, you should enable pull request builds.

Skip: file header and comments

There are some parts of the templates that you can leave in place:

Comments

We added comments that explain the configuration options and optional features. These lines begin with a #.

Commented out features

We have placed a # in front of some popular configuration options. They are there as examples, which you can choose to enable, delete, or save for later.

version key

The version key tells the system how to read the rest of the configuration file. The current and only supported version is version 2.

Adjust: build.os

In our examples, we are using Read the Docs’ custom image based on the latest Ubuntu release. Package versions in these images will not change drastically, though they will receive periodic security updates.

You should pay attention to this field if your project needs to build on an older version of Ubuntu, or in the future when you need features from a newer Ubuntu.

See also

build.os

Configuration file reference with all values possible for build.os.

Adjust: Python configuration

If you are using Python in your builds, you should define the Python version in build.tools.python.

The python key contains a list of sub-keys, specifying the requirements to install.

  • Use python.install.package to install the project itself as a Python package using pip

  • Use python.install.requirements to install packages from a requirements file

  • Use build.jobs to install packages using Poetry or PDM
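A sketch combining the first two methods, installing pinned documentation requirements and then the project itself with a docs extra (the paths and the extra name are illustrative assumptions):

```yaml
version: 2

python:
  install:
    # Install pinned doc-building dependencies first
    - requirements: docs/requirements.txt
    # Then install the project itself with pip, including its "docs" extra
    - method: pip
      path: .
      extra_requirements:
        - docs
```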

See also

build.tools.python

Configuration file reference with all Python versions available for build.tools.python.

python

Configuration file reference for configuring the Python environment activated by build.tools.python.

Adjust: Sphinx and MkDocs version

If you are using either the sphinx or mkdocs builder, then the latest version of Sphinx or MkDocs will be installed automatically.

But we recommend that you specify the version that your documentation project uses. The requirements key is a file path that points to a text (.txt) file that lists the Python packages you want Read the Docs to install.
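For example, a minimal docs/requirements.txt pinning the documentation tool might look like this (the version numbers are illustrative, not recommendations):

```
sphinx==7.2.6
sphinx-rtd-theme==2.0.0
```

Pointing python.install.requirements at this file ensures builds keep using those versions until you change them.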

See also

Use a requirements file for Python dependencies

This guide explains how to specify Python requirements, such as the version of Sphinx or MkDocs.

sphinx

Configuration file reference for configuring the Sphinx builder.

mkdocs

Configuration file reference for configuring the MkDocs builder.

Next steps

There are more configuration options than the ones mentioned in this guide.

After you add a configuration file to your Git repository and confirm that Read the Docs is building your documentation with it, have a look at the complete configuration file reference for options that might apply to your project.

See also

Configuration file reference.

The complete list of all possible .readthedocs.yaml settings, including the optional settings not covered on this page.

Build process customization

Are you familiar with the command line? Perhaps there are special commands that you want Read the Docs to run. Read this guide to learn how to add your own commands to .readthedocs.yaml.

Configuration file reference

Read the Docs supports configuring your documentation builds with a configuration file. This file is named .readthedocs.yaml and should be placed in the top level of your Git repository.

The .readthedocs.yaml file can contain a number of settings that are not accessible through the Read the Docs website.

Because the file is stored in Git, the configuration will apply to the exact version that is being built. This allows you to store different configurations for different versions of your documentation.

Below is an example YAML file which shows the most common configuration options:

.readthedocs.yaml
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    # You can also specify other tool versions:
    # nodejs: "20"
    # rust: "1.70"
    # golang: "1.20"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py
  # You can configure Sphinx to use a different builder, for instance use the dirhtml builder for simpler URLs
  # builder: "dirhtml"
  # Fail on all warnings to avoid broken references
  # fail_on_warning: true

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

# Optional but recommended, declare the Python requirements required
# to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
# python:
#   install:
#     - requirements: docs/requirements.txt

See also

Configuration file overview

Practical steps to add a configuration file to your documentation project.

Supported settings

Read the Docs validates every configuration file. Any configuration option that isn’t supported will make the build fail. This is to avoid typos and provide feedback on invalid configurations.

Warning

When using a v2 configuration file, the local settings from the web interface are ignored.

version

Required:

true

Example:

version: 2

formats

Additional formats of the documentation to be built, apart from the default HTML.

Type:

list

Options:

htmlzip, pdf, epub, all

Default:

[]

Example:

version: 2

# Default
formats: []

version: 2

# Build PDF & ePub
formats:
  - epub
  - pdf

Note

You can use the all keyword to indicate all formats.

version: 2

# Build all formats
formats: all

Warning

At the moment, only Sphinx supports additional formats. pdf, epub, and htmlzip output is not yet supported when using MkDocs.

With builds from pull requests, only HTML formats are generated. Other formats are resource intensive and will be built after merging.

python

Configuration of the Python environment to be used.

version: 2

python:
  install:
    - requirements: docs/requirements.txt
    - method: pip
      path: .
      extra_requirements:
        - docs
    - method: pip
      path: another/package

python.install

List of installation methods for packages and requirements. You can combine several of the following methods.

Type:

list

Default:

[]

Requirements file

Install packages from a requirements file.

The path to the requirements file, relative to the root of the project.

Key:

requirements

Type:

path

Required:

false

Example:

version: 2

python:
  install:
    - requirements: docs/requirements.txt
    - requirements: requirements.txt

Warning

If you are using a Conda environment to manage the build, this setting will not have any effect. Instead, add the extra requirements to the Conda environment file.

Packages

Install the project using pip install (recommended) or python setup.py install (deprecated).

The path to the package, relative to the root of the project.

Key:

path

Type:

path

Required:

false

The installation method.

Key:

method

Options:

pip, setuptools (deprecated)

Default:

pip

Extra requirements section to install in addition to the package dependencies.

Warning

You need to install your project with pip to use extra_requirements.

Key:

extra_requirements

Type:

list

Default:

[]

Example:

version: 2

python:
  install:
    - method: pip
      path: .
      extra_requirements:
        - docs

With the previous settings, Read the Docs will execute the next commands:

pip install .[docs]

conda

Configuration for Conda support.

version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "mambaforge-22.9"

conda:
  environment: environment.yml

conda.environment

The path to the Conda environment file, relative to the root of the project.

Type:

path

Required:

false

Note

When using Conda, it’s required to specify build.tools.python to tell Read the Docs whether to use Conda or Mamba to create the environment.
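For reference, a minimal Conda environment file might look like this (a sketch; the channel, packages, and versions are illustrative):

```yaml
name: docs
channels:
  - conda-forge
dependencies:
  - python=3.11
  - sphinx=7.2
```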

build

Configuration for the documentation build process. This allows you to specify the base Read the Docs image used to build the documentation, and control the versions of several tools: Python, Node.js, Rust, and Go.

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
    nodejs: "18"
    rust: "1.64"
    golang: "1.19"

build.os

The Docker image used for building the docs. Image names refer to the operating system Read the Docs uses to build them.

Note

Arbitrary Docker images are not supported.

Type:

string

Options:

ubuntu-20.04, ubuntu-22.04, ubuntu-lts-latest

Required:

true

Note

The ubuntu-lts-latest option refers to the latest Ubuntu LTS version of Ubuntu available on Read the Docs, which may not match the latest Ubuntu LTS officially released.

Warning

Using ubuntu-lts-latest may break your builds unexpectedly if your project isn’t compatible with the newest Ubuntu LTS version when it’s updated by Read the Docs.

build.tools

Version specifiers for each tool. It must contain at least one tool.

Type:

dict

Options:

python, nodejs, ruby, rust, golang

Required:

true

Note

Each tool has a latest option available, which refers to the latest version available on Read the Docs and may not match the latest version officially released. Versions and the latest option are updated at least once every six months to keep up with new releases.

Warning

Using latest may break your builds unexpectedly if your project isn’t compatible with the newest version of the tool when it’s updated by Read the Docs.

build.tools.python

Python version to use. Several interpreters and versions are available, including CPython, Miniconda, and Mambaforge.

Note

If you use Miniconda3 or Mambaforge, you can select the Python version using the environment.yml file. See our How to use Conda as your Python environment guide for more information.

Type:

string

Options:
  • 2.7

  • 3 (alias for the latest 3.x version available on Read the Docs)

  • 3.6

  • 3.7

  • 3.8

  • 3.9

  • 3.10

  • 3.11

  • 3.12

  • latest (alias for the latest version available on Read the Docs)

  • miniconda3-4.7

  • miniconda-latest (alias for the latest version available on Read the Docs)

  • mambaforge-4.10

  • mambaforge-22.9

  • mambaforge-latest (alias for the latest version available on Read the Docs)

build.tools.nodejs

Node.js version to use.

Type:

string

Options:
  • 14

  • 16

  • 18

  • 19

  • 20

  • latest (alias for the latest version available on Read the Docs)

build.tools.ruby

Ruby version to use.

Type:

string

Options:
  • 3.3

  • latest (alias for the latest version available on Read the Docs)

build.tools.rust

Rust version to use.

Type:

string

Options:
  • 1.55

  • 1.61

  • 1.64

  • 1.70

  • 1.75

  • latest (alias for the latest version available on Read the Docs)

build.tools.golang

Go version to use.

Type:

string

Options:
  • 1.17

  • 1.18

  • 1.19

  • 1.20

  • 1.21

  • latest (alias for the latest version available on Read the Docs)

build.apt_packages

List of APT packages to install. Our build servers run various Ubuntu LTS versions with the default set of package repositories installed. We don’t currently support PPAs or other custom repositories.

Type:

list

Default:

[]

version: 2

build:
  apt_packages:
    - libclang
    - cmake

Note

When possible, avoid installing Python packages using apt (python3-numpy, for example); use pip or conda instead.

Warning

Currently, it’s not possible to use this option when using build.commands.

build.jobs

Commands to be run before or after Read the Docs’ pre-defined build jobs. This allows you to run custom commands at a particular moment in the build process. See Build process customization for more details.

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  jobs:
    pre_create_environment:
      - echo "Command run at 'pre_create_environment' step"
    post_build:
      - echo "Command run at 'post_build' step"
      - echo `date`

Note

Each key under build.jobs must be a list of strings. build.os and build.tools are also required to use build.jobs.

Type:

dict

Allowed keys:

post_checkout, pre_system_dependencies, post_system_dependencies, pre_create_environment, post_create_environment, pre_install, post_install, pre_build, post_build

Required:

false

Default:

{}

build.commands

Specify a list of commands that Read the Docs will run in the build process. When build.commands is used, none of the pre-defined build jobs will be executed (see Build process customization for more details). This allows you to run custom commands and control the build process completely. The $READTHEDOCS_OUTPUT/html directory will be uploaded and hosted by Read the Docs.

Warning

This feature is in a beta phase and could undergo incompatible changes, or even be removed completely, in the near future. We are currently testing the new addons integrations we are building on projects that use the build.commands configuration key. Use it at your own risk.

version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  commands:
    - pip install pelican
    - pelican --settings docs/pelicanconf.py --output $READTHEDOCS_OUTPUT/html/ docs/

Note

build.os and build.tools are also required when using build.commands.

Type:

list

Required:

false

Default:

[]

sphinx

Configuration for Sphinx documentation (this is the default documentation type).

version: 2

sphinx:
  builder: html
  configuration: conf.py
  fail_on_warning: true

Note

If you want to pin Sphinx to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before October 2020.

sphinx.builder

The builder type for the Sphinx documentation.

Type:

string

Options:

html, dirhtml, singlehtml

Default:

html

Note

The htmldir builder option was renamed to dirhtml to use the same name as Sphinx. Configurations using the old name will continue working.

sphinx.configuration

The path to the conf.py file, relative to the root of the project.

Type:

path

Default:

null

If the value is null, Read the Docs will try to find a conf.py file in your project.

sphinx.fail_on_warning

Turn warnings into errors (-W and --keep-going options). This means the build fails if there is a warning and exits with exit status 1.

Type:

bool

Default:

false

mkdocs

Configuration for MkDocs documentation.

version: 2

mkdocs:
  configuration: mkdocs.yml
  fail_on_warning: false

Note

If you want to pin MkDocs to a specific version, use a requirements.txt or environment.yml file (see Requirements file and conda.environment). If you are using a metadata file to describe code dependencies like setup.py, pyproject.toml, or similar, you can use the extra_requirements option (see Packages). This also allows you to override the default pinning done by Read the Docs if your project was created before March 2021.

mkdocs.configuration

The path to the mkdocs.yml file, relative to the root of the project.

Type:

path

Default:

null

If the value is null, Read the Docs will try to find a mkdocs.yml file in your project.

mkdocs.fail_on_warning

Turn warnings into errors. This means that the build stops at the first warning and exits with exit status 1.

Type:

bool

Default:

false

submodules

VCS submodules configuration.

Note

Only Git is supported at the moment.

Warning

You can’t use include and exclude settings for submodules at the same time.

version: 2

submodules:
  include:
    - one
    - two
  recursive: true

submodules.include

List of submodules to be included.

Type:

list

Default:

[]

Note

You can use the all keyword to include all submodules.

version: 2

submodules:
  include: all

submodules.exclude

List of submodules to be excluded.

Type:

list

Default:

[]

Note

You can use the all keyword to exclude all submodules. This is the same as include: [].

version: 2

submodules:
  exclude: all

submodules.recursive

Do a recursive clone of the submodules.

Type:

bool

Default:

false

Note

This is ignored if there aren’t submodules to clone.

Schema

You can see the complete schema here. This schema is available at Schema Store, use it with your favorite editor for validation and autocompletion.

Automation rules

Automation rules allow project maintainers to automate actions on new branches and tags in Git repositories. If you are familiar with GitOps, this might seem familiar. The goal of automation rules is to be able to control versioning through your Git repository and avoid duplicating these efforts on Read the Docs.

See also

How to manage versions automatically

A practical guide to managing automated versioning of your documentation.

Versions

General explanation of how versioning works on Read the Docs

How automation rules work

When a new tag or branch is pushed to your repository, Read the Docs receives a webhook. We then create a new Read the Docs version that matches your new Git tag or branch.

All automation rules are evaluated for this version, in the order they are listed. If the version matches the version type and the pattern in the rule, the specified action is performed on that version.

Note

Versions can match multiple automation rules, and all matching actions will be performed on the version.

Matching a version in Git

There are a couple of predefined ways to match against versions that are created, and you can also define your own.

Predefined matches

Automation rules support two predefined version matches:

  • Any version: All new versions will match the rule.

  • SemVer versions: All new versions that follow semantic versioning will match the rule.

Custom matches

If none of the above predefined matches meet your use case, you can use a Custom match.

The custom match should be a valid Python regular expression. Each new version will be tested against this regular expression.

Actions for versions

When an automation rule matches a new version, the specified action is performed on that version. Currently, the following actions are available:

Activate version

Activates and builds the version.

Hide version

Hides the version. If the version is not active, activates it and builds the version. See Version states.

Make version public

Sets the version’s privacy level to public. See Privacy levels.

Make version private

Sets the version’s privacy level to private. See Privacy levels.

Set version as default

Sets the version as the default version. It also activates and builds the version. See Root URL redirect at /.

Delete version

When a branch or tag is deleted from your repository, Read the Docs will delete it only if it isn’t active. This action allows you to delete active versions when a branch or tag is deleted from your repository.

There are a couple of useful caveats to these rules:

  • The default version isn’t deleted even if it matches a rule. You can use the Set version as default action to change the default version before deleting the current one.

  • If your versions follow PEP 440, Read the Docs activates and builds the version if it’s greater than the current stable version. The stable version is also automatically updated at the same time. See more in Versions.

Order

When a new Read the Docs version is created, all rules with a successful match will have their action triggered, in the order they appear on the Automation Rules page.

Examples

Activate all new tags

  • Match: Any version

  • Version type: Tag

  • Action: Activate version

Activate only new branches that belong to the 1.x release

  • Custom match: ^1\.\d+$

  • Version type: Branch

  • Action: Activate version

Delete an active version when a branch is deleted

  • Match: Any version

  • Version type: Branch

  • Action: Delete version

Set as default new tags that have the -stable or -release suffix

  • Custom match: -(stable|release)$

  • Version type: Tag

  • Action: Set version as default

Note

You can also create two rules: one to match -stable and another to match -release.

Activate all new tags and branches that start with v or V

  • Custom match: ^[vV]

  • Version type: Tag

  • Action: Activate version

  • Custom match: ^[vV]

  • Version type: Branch

  • Action: Activate version

Activate all new tags that don’t contain the -nightly suffix

  • Custom match: .*(?<!-nightly)$

  • Version type: Tag

  • Action: Activate version
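Because custom matches are plain Python regular expressions, you can sanity-check a pattern locally before saving a rule. A small sketch using Python’s re module with the patterns from the examples above (assuming search semantics, as the suffix examples imply):

```python
import re

def matches(pattern: str, version_name: str) -> bool:
    """Return True if an automation rule pattern matches a version name."""
    return re.search(pattern, version_name) is not None

# Patterns taken from the example rules above
assert matches(r"^1\.\d+$", "1.4")              # a 1.x branch
assert not matches(r"^1\.\d+$", "1.4.1")        # patch branches don't match
assert matches(r"-(stable|release)$", "2.0-stable")
assert matches(r"^[vV]", "v2.0")
assert matches(r".*(?<!-nightly)$", "2.0")
assert not matches(r".*(?<!-nightly)$", "2.0-nightly")
```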

How to create reproducible builds

Your documentation depends on a number of dependencies to be built. If your docs don’t have reproducible builds, an update in a dependency can break your builds when least expected, or make your docs look different from your local version. This guide will help you to keep your builds working over time, so that you can focus on content.

Use a .readthedocs.yaml configuration file

We recommend using a configuration file to manage your documentation. Our configuration file provides you with per-version settings, and those settings live in your Git repository.

This allows you to validate changes using pull requests, and ensures that all your versions can be rebuilt from a reproducible configuration.

Use a requirements file for Python dependencies

We recommend using a Pip requirements file or Conda environment file to pin Python dependencies. This ensures that top-level dependencies and extensions don’t change.

A configuration file with explicit dependencies looks like this:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"

# Build from the docs/ directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Explicitly set the version of Python and its requirements
python:
  install:
    - requirements: docs/requirements.txt

docs/requirements.txt
# Defining the exact version will make sure things don't break
sphinx==5.3.0
sphinx_rtd_theme==1.1.1
readthedocs-sphinx-search==0.1.1

Tip

Remember to update your docs’ dependencies from time to time to get new improvements and fixes. It also makes it easy to manage in case a version reaches its end of support date.

Pin your transitive dependencies

Once you have pinned your own dependencies, the next things to worry about are the dependencies of your dependencies. These are called transitive dependencies, and they can upgrade without warning if you do not pin these packages as well.

We recommend pip-tools to help address this problem. It allows you to specify a requirements.in file with your top-level dependencies, and it generates a requirements.txt file with the full set of transitive dependencies.

✅ Good:

All your transitive dependencies are defined, which ensures new package releases will not break your docs.

docs/requirements.in
sphinx==5.3.0

docs/requirements.txt
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
#    pip-compile docs.in
#
alabaster==0.7.12
    # via sphinx
babel==2.11.0
    # via sphinx
certifi==2022.12.7
    # via requests
charset-normalizer==2.1.1
    # via requests
docutils==0.19
    # via sphinx
idna==3.4
    # via requests
imagesize==1.4.1
    # via sphinx
jinja2==3.1.2
    # via sphinx
markupsafe==2.1.1
    # via jinja2
packaging==22.0
    # via sphinx
pygments==2.13.0
    # via sphinx
pytz==2022.7
    # via babel
requests==2.28.1
    # via sphinx
snowballstemmer==2.2.0
    # via sphinx
sphinx==5.3.0
    # via -r docs.in
sphinxcontrib-applehelp==1.0.2
    # via sphinx
sphinxcontrib-devhelp==1.0.2
    # via sphinx
sphinxcontrib-htmlhelp==2.0.0
    # via sphinx
sphinxcontrib-jsmath==1.0.1
    # via sphinx
sphinxcontrib-qthelp==1.0.3
    # via sphinx
sphinxcontrib-serializinghtml==1.1.5
    # via sphinx
urllib3==1.26.13
    # via requests

Check list ✅

If you followed this guide, you have pinned:

  • tool versions (Python, Node)

  • top-level dependencies (Sphinx, Sphinx extensions)

  • transitive dependencies (Pytz, Jinja2)

This will protect your builds from failures because of a random tool or dependency update.

You do still need to upgrade your dependencies from time to time, but you should do that on your own schedule.

See also

Configuration file reference

Configuration file reference

Build process overview

Build process information

Build process customization

Customizing builds to do more

Build process overview

Once a project has been imported and a build is triggered, Read the Docs executes a set of pre-defined jobs to build and upload documentation. This page explains in detail what happens behind the scenes, and includes an overview of how you can change this process.

Understanding the build process

Understanding how your content is built helps with debugging any problems you might hit. It also gives you the knowledge to customize the build process.

Note

All the steps are run inside a Docker container, using the image defined in build.os. The build has access to all pre-defined environment variables and custom environment variables.

The build process includes the following jobs:

checkout:

Checks out a project’s code from the repository URL. On Read the Docs for Business, this environment includes the SSH deploy key that gives access to the repository.

system_dependencies:

Installs operating system and runtime dependencies. This includes specific versions of a language (e.g. Python, Node.js, Go, Rust) and also apt packages.

build.tools can be used to define a language version, and build.apt_packages to define apt packages.

create_environment:

Creates a Python environment to install all the dependencies in an isolated and reproducible way. Depending on what’s defined by the project, a virtualenv or a conda environment (conda) will be used.

install:

Installs default and project dependencies. This includes any requirements you have configured in Requirements file.

If the project has extra Python requirements, python.install can be used to specify them.

Tip

We strongly recommend pinning all the versions required to build the documentation to avoid unexpected build errors.

build:

Runs the main command to build the documentation for each of the formats declared (formats). It will use Sphinx (sphinx) or MkDocs (mkdocs) depending on the project.

upload:

Once the build process finishes successfully, the resulting artifacts are uploaded to our servers. Our CDN is then purged so your docs are always up to date.

See also

If you require additional build steps or customization, it’s possible to run user-defined commands and customize the build process.

Cancelling builds

There may be situations where you want to cancel a running build. Cancelling builds allows your team to speed up review times and also helps us reduce server costs and our environmental footprint.

A couple of common reasons you might want to cancel builds are:

  • the build has an external dependency that hasn’t been updated

  • there were no changes on the documentation files

For these scenarios, Read the Docs supports three different mechanisms to cancel a running build:

Manually:

Once a build has been triggered, project administrators can go to the build detail page and click Cancel build.

Automatically:

When Read the Docs detects a push to a version that is already building, it cancels the running build and starts a new build using the latest commit.

Programmatically:

You can use user-defined commands in build.jobs or build.commands (see Build process customization) to check your own cancellation condition and return exit code 183 to cancel a build, or exit with code 0 to continue running it.

When this happens, Read the Docs will notify your Git platform (GitHub/GitLab) that the build succeeded (✅), so the pull request doesn’t have any failing checks.
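As a sketch of the programmatic approach, a command under build.jobs can test a condition and exit with code 183. The condition below (skip pull request builds that don’t touch docs/) is illustrative; adapt it to your repository layout:

```yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.12"
  jobs:
    post_checkout:
      # Cancel the build (exit 183) on pull requests with no changes under docs/
      - |
        if [ "$READTHEDOCS_VERSION_TYPE" = "external" ] && git diff --quiet origin/main -- docs/;
        then
          exit 183;
        fi
```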

Tip

Take a look at Cancel build based on a condition section for some examples.

Build resources

Every build has limited resources assigned to it. Generally, Read the Docs for Business users get double the build resources, with the option to increase that.

Our build limits are:

  • 30 minutes build time

  • 7GB of memory

  • Concurrent builds vary based on your pricing plan

If you are having trouble with your documentation builds, you can reach our support at support@readthedocs.com.

Build process customization

Read the Docs has a well-defined build process that works for many projects. We also allow customization of builds in two ways:

Extend the build process

Keep using the default build process, adding your own commands.

Override the build process

This option gives you full control over your build. Read the Docs supports any tool that generates HTML.

Extend the build process

In the normal build process, the pre-defined jobs checkout, system_dependencies, create_environment, install, build and upload are executed. Read the Docs also exposes these jobs, which allows you to customize the build process by adding shell commands.

The jobs where users can customize our default build process are:

Step                   Customizable jobs

Checkout               post_checkout
System dependencies    pre_system_dependencies, post_system_dependencies
Create environment     pre_create_environment, post_create_environment
Install                pre_install, post_install
Build                  pre_build, post_build
Upload                 No customizable jobs currently

Note

The pre-defined jobs (checkout, system_dependencies, etc.) cannot be overridden or skipped. If you need full control over the build, see Override the build process.

These jobs are defined using the Configuration file reference with the build.jobs key. This example configuration defines commands to be executed before installing and after the build has finished:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    pre_install:
      - bash ./scripts/pre_install.sh
    post_build:
      - curl -X POST \
        -F "project=${READTHEDOCS_PROJECT}" \
        -F "version=${READTHEDOCS_VERSION}" https://example.com/webhooks/readthedocs/

User-defined job limitations

  • The current working directory is at the root of your project’s cloned repository

  • Environment variables are expanded for each individual command (see Environment variable reference)

  • Each command is executed in a new shell process, so modifications done to the shell environment do not persist between commands

  • Any command returning non-zero exit code will cause the build to fail immediately (note there is a special exit code to cancel the build)

  • build.os and build.tools are required when using build.jobs
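Because each command runs in a new shell process, exported variables are lost between commands; state that must persist has to be set and used within a single command. A minimal sketch of the difference, using plain subshells outside of Read the Docs (the variable name is hypothetical):

```shell
# Each build.jobs command runs in its own fresh shell, like these subshells:
sh -c 'export MY_HYPOTHETICAL_FLAG=1'              # the variable dies with this shell
sh -c 'echo "${MY_HYPOTHETICAL_FLAG:-unset}"'      # prints "unset"

# To rely on state within one step, chain commands in a single entry:
sh -c 'export MY_HYPOTHETICAL_FLAG=1 && echo "$MY_HYPOTHETICAL_FLAG"'   # prints "1"
```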

build.jobs examples

We’ve included some common examples where using build.jobs will be useful. These examples may require some adaptation for each project’s use case; we recommend you use them as a starting point.

Unshallow git clone

Read the Docs does not perform a full clone in the checkout job in order to reduce network data and speed up the build process. Instead, it performs a shallow clone and only fetches the branch or tag that you are building documentation for. Because of this, extensions that depend on the full Git history will fail. To avoid this, it’s possible to unshallow the git clone:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    post_checkout:
      - git fetch --unshallow || true

If your build also relies on the contents of other branches, it may also be necessary to re-configure git to fetch these:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    post_checkout:
      - git fetch --unshallow || true
      - git config remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*' || true
      - git fetch --all --tags || true
Cancel build based on a condition

When a command exits with code 183, Read the Docs will cancel the build immediately. You can use this approach to cancel builds that you don’t want to complete based on some conditional logic.

Note

Why was 183 chosen as the exit code?

It’s the sum of the ASCII codes of the word “skip”, taken modulo 256, because Unix truncates exit codes greater than 255:

>>> sum(list("skip".encode("ascii")))
439
>>> 439 % 256
183

Here is an example that cancels builds from pull requests when there are no changes to the docs/ folder compared to the origin/main branch:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  jobs:
    post_checkout:
      # Cancel building pull requests when there are no changes in the docs directory or YAML file.
      # You can add any other files or directories that you'd like here as well,
      # like your docs requirements file, or other files that will change your docs build.
      #
      # If there are no changes (git diff exits with 0) we force the command to return with 183.
      # This is a special exit code on Read the Docs that will cancel the build immediately.
      - |
        if [ "$READTHEDOCS_VERSION_TYPE" = "external" ] && git diff --quiet origin/main -- docs/ .readthedocs.yaml;
        then
          exit 183;
        fi

This other example shows how to cancel a build if the commit message contains skip ci on it:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  jobs:
    post_checkout:
      # Use `git log` to check if the latest commit contains "skip ci",
      # in that case exit the command with 183 to cancel the build
      - (git --no-pager log --pretty="tformat:%s -- %b" -1 | grep -viq "skip ci") || exit 183
Generate documentation from annotated sources with Doxygen

It’s possible to run Doxygen as part of the build process to generate documentation from annotated sources:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
    # Note that this HTML won't be automatically uploaded,
    # unless your documentation build includes it somehow.
      - doxygen
Use MkDocs extensions with extra required steps

There are some MkDocs extensions that require specific commands to be run to generate extra pages before performing the build. For example, pydoc-markdown:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_build:
      - pydoc-markdown --build --site-dir "$READTHEDOCS_OUTPUT/html"
Avoid having a dirty Git index

Read the Docs needs to modify some files before performing the build in order to integrate with some of its features. As a result, the Git index can become dirty (Git will detect modified files). If the project uses an extension that generates a version number from Git metadata (like setuptools_scm), this can produce an invalid version number. In that case, update the Git index to ignore the files that Read the Docs has modified.

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    pre_install:
      - git update-index --assume-unchanged environment.yml docs/conf.py
Support Git LFS (Large File Storage)

If the repository contains large files tracked with Git LFS, some extra steps are required to download their content. You can use the post_checkout user-defined job for this.

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-20.04"
  tools:
    python: "3.10"
  jobs:
    post_checkout:
      # Download and uncompress the binary
      # https://git-lfs.github.com/
      - wget https://github.com/git-lfs/git-lfs/releases/download/v3.1.4/git-lfs-linux-amd64-v3.1.4.tar.gz
      - tar xvfz git-lfs-linux-amd64-v3.1.4.tar.gz
      # Modify LFS config paths to point where git-lfs binary was downloaded
      - git config filter.lfs.process "`pwd`/git-lfs filter-process"
      - git config filter.lfs.smudge  "`pwd`/git-lfs smudge -- %f"
      - git config filter.lfs.clean "`pwd`/git-lfs clean -- %f"
      # Make LFS available in current repository
      - ./git-lfs install
      # Download content from remote
      - ./git-lfs fetch
      # Replace the local pointer files with the real content
      - ./git-lfs checkout
Install Node.js dependencies

It’s possible to install Node.js together with the required dependencies by using user-defined build jobs. To set this up, define the version of Node.js to use under build.tools and install the dependencies with build.jobs.post_install:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.9"
    nodejs: "16"
  jobs:
    post_install:
      # Install dependencies defined in your ``package.json``
      - npm ci
      # Install any other extra dependencies to build the docs
      - npm install -g jsdoc
Install dependencies with Poetry

Projects managed with Poetry can use the post_create_environment user-defined job to install Python dependencies with Poetry. Take a look at the following example:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    post_create_environment:
      # Install poetry
      # https://python-poetry.org/docs/#installing-manually
      - pip install poetry
    post_install:
      # Install dependencies with 'docs' dependency group
      # https://python-poetry.org/docs/managing-dependencies/#dependency-groups
      # VIRTUAL_ENV needs to be set manually for now.
      # See https://github.com/readthedocs/readthedocs.org/pull/11152/
      - VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH poetry install --with docs

sphinx:
  configuration: docs/conf.py
Install dependencies with uv

Projects managed with uv can use the post_create_environment user-defined job to install Python dependencies with uv. Take a look at the following example:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  jobs:
    post_create_environment:
      # Install uv
      - pip install uv
    post_install:
      # Install dependencies with 'docs' dependency group
      # VIRTUAL_ENV needs to be set manually for now.
      # See https://github.com/readthedocs/readthedocs.org/pull/11152/
      - VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH uv pip install .[docs]

sphinx:
  configuration: docs/conf.py
Update Conda version

Projects using Conda may need to install the latest available version of Conda. This can be done by using the pre_create_environment user-defined job to update Conda before creating the environment. Take a look at the following example:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-22.04"
  tools:
    python: "miniconda3-4.7"
  jobs:
    pre_create_environment:
      - conda update --yes --quiet --name=base --channel=defaults conda

conda:
  environment: environment.yml

Override the build process

Warning

This feature is in beta and could change without warning. We are currently testing the new addons integrations we are building on projects using build.commands configuration key.

If your project requires full control of the build process, and extending the build process is not enough, all the commands executed during builds can be overridden using the build.commands key.

As Read the Docs does not have control over the build process, you are responsible for running all the commands required to install requirements and build your project.

Where to put files

It is your responsibility to generate HTML and other formats of your documentation using build.commands. The contents of the $READTHEDOCS_OUTPUT/<format>/ directory will be hosted as part of your documentation.

We store the base folder name _readthedocs/ in the environment variable $READTHEDOCS_OUTPUT and encourage you to use it when generating paths.

Supported formats are published if they exist in the following directories:

  • $READTHEDOCS_OUTPUT/html/ (required)

  • $READTHEDOCS_OUTPUT/htmlzip/

  • $READTHEDOCS_OUTPUT/pdf/

  • $READTHEDOCS_OUTPUT/epub/

Note

Remember to create the folders before adding content to them. You can ensure that the output folder exists by adding the following command:

mkdir -p $READTHEDOCS_OUTPUT/html/
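A typical pair of commands creates the output directory and copies a generator's output into it. The following is a sketch that also works locally; the site/ directory is a hypothetical stand-in for whatever your generator produces, and the temp-dir fallback is only there for running outside Read the Docs:

```shell
# For local testing outside Read the Docs, fall back to a temp dir.
READTHEDOCS_OUTPUT="${READTHEDOCS_OUTPUT:-$(mktemp -d)}"

# Hypothetical generator output; your real tool would write this.
mkdir -p site && echo '<!doctype html>' > site/index.html

# Create the directory Read the Docs publishes from,
# then copy the generated HTML into it.
mkdir -p "$READTHEDOCS_OUTPUT/html/"
cp -r site/. "$READTHEDOCS_OUTPUT/html/"
```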

Search support

Read the Docs will automatically index the content of all your HTML files, respecting the search option.

You can access the search from the Read the Docs dashboard, or by using the Server side search API.

Note

In order for Read the Docs to index your HTML files correctly, they should follow the conventions described at Server side search integration.

build.commands examples

This section contains examples that showcase what is possible with build.commands. Note that you may need to modify and adapt these examples depending on your needs.

Pelican

Pelican is a well-known static site generator that’s commonly used for blogs and landing pages. If you are building your project with Pelican you could use a configuration file similar to the following:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.10"
  commands:
    - pip install pelican[markdown]
    - pelican --settings docs/pelicanconf.py --output $READTHEDOCS_OUTPUT/html/ docs/
Docsify

Docsify generates documentation websites on the fly, without the need to build static HTML. These projects can be built using a configuration file like this:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    nodejs: "16"
  commands:
    - mkdir --parents $READTHEDOCS_OUTPUT/html/
    - cp --recursive docs/* $READTHEDOCS_OUTPUT/html/
Asciidoc

Asciidoctor is a fast processor for converting and generating documentation from AsciiDoc source. The Asciidoctor toolchain includes Asciidoctor.js which you can use with custom build commands. Here is an example configuration file:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    nodejs: "20"
  commands:
    - npm install -g asciidoctor
    - asciidoctor -D $READTHEDOCS_OUTPUT/html index.asciidoc

Git integration (GitHub, GitLab, Bitbucket)

Your Read the Docs account can be connected to your Git provider’s account. Connecting your account provides the following features:

🔑️ Easy login

Log in to Read the Docs with your GitHub, Bitbucket, or GitLab account.

🔁️ List your projects

Select a project to automatically import from all your Git repositories and organizations. See: Importing your documentation.

⚙️ Automatic configuration

Have your Git repository automatically configured with your Read the Docs webhook, which allows Read the Docs to build your docs on every change to your repository.

🚥️ Commit status

See your documentation build status as a commit status indicator on pull request builds.

Note

Are you using GitHub Enterprise?

We offer customized enterprise plans for organizations. Please contact support@readthedocs.com.

Other Git providers

We also generally support all Git providers through manual configuration.

Screenshot of the Dashboard view for the incoming webhook

All calls to the incoming webhook are logged. Each call can trigger builds and version synchronization.

Read the Docs incoming webhook

Accounts with GitHub, Bitbucket, and GitLab integration automatically have Read the Docs’ incoming webhook configured on all Git repositories that are imported. Other setups can set up the webhook through manual configuration.

When an incoming webhook notification is received, we ensure that it matches an existing Read the Docs project. Once we have validated the webhook, we take an action based on the information inside of the webhook.

Possible webhook outcomes are:

  • Builds the latest commit.

  • Synchronizes your versions based on the latest tag and branch data in Git.

  • Runs your automation rules.

  • Auto-cancels any currently running builds of the same version.

Other features enabled by Git integration

We have additional documentation around features provided by our Git integrations:

See also

Pull request previews

Your Read the Docs project will automatically be configured to send back build notifications, which can be viewed as commit statuses and on pull requests.

Single Sign-on with GitHub, Bitbucket, or GitLab

Git integration makes it possible for us to synchronize your Git repository’s access rights from your Git provider. That way, the same access rights are effective on Read the Docs and you don’t have to configure access in two places.

Pull request previews

Your project can be configured to build and host documentation for every new pull request. Previewing changes during review makes it easier to catch formatting and display issues before they go live.

Features

Build on pull request events

We create and build a new version when a pull request is opened, and rebuild the version whenever a new commit is pushed.

Build status report

Your project’s pull request build status will show as one of your pull request’s checks. This status will update as the build is running, and will show a success or failure status when the build completes.

GitHub build status reporting for pull requests.

Warning banner

A warning banner is shown at the top of documentation pages to let readers know that this version isn’t the main version for the project.

Note

Warning banners are available only for Sphinx projects.

See also

How to configure pull request builds

A guide to configuring pull request builds on Read the Docs.

Security

If pull request previews are enabled for your project, anyone who can open a pull request on your repository will be able to trigger a build of your documentation. For this reason, pull request previews are served from a different domain than your main documentation (org.readthedocs.build and com.readthedocs.build).

Builds from pull requests only have access to environment variables that are marked as Public. If you have environment variables with private information, make sure they aren’t marked as Public. See Environment variables and build process for more information.

On Read the Docs for Business you can set pull request previews to be private or public. If you didn’t import your project manually and your repository is public, the privacy level of pull request previews will be set to Public. Public pull request previews are available to anyone with the link to the preview, while private previews are only available to users with access to the Read the Docs project.

Warning

If you set the privacy level of pull request previews to Private, make sure that only trusted users can open pull requests in your repository.

Setting pull request previews to private on a public repository can allow a malicious user to access read-only APIs using the session of the user reading the pull request preview. Similar to GHSA-pw32-ffxw-68rh.

Build failure notifications

Build notifications can alert you when your documentation builds fail so you can take immediate action. We offer the following methods for being notified:

Email notifications:

Read the Docs allows you to configure build notifications via email. When builds fail, configured email addresses are notified.

Build Status Webhooks:

Build notifications can happen via webhooks. This means that we are able to support a wide variety of services that receive notifications.

Slack and Discord are supported through ready-made templates.

Webhooks can be customized through your own template and a variety of variable substitutions.

Note

We don’t trigger email notifications or build status webhooks on builds from pull requests.

See also

How to setup email notifications

Enable email notifications on failed builds, so you always know that your docs are deploying successfully.

How to setup build status webhooks

Steps for setting up build notifications via webhooks, including examples for popular platforms like Slack and Discord.

Environment variable overview

Read the Docs allows you to define your own environment variables to be used in the build process. It also defines a set of default environment variables with information about your build. These are useful for different purposes:

  • Custom environment variables are useful for adding build secrets such as API tokens.

  • Default environment variables are useful for varying your build specifically for Read the Docs or specific types of builds on Read the Docs.

Custom environment variables are defined in the dashboard interface in Admin ‣ Environment variables. Environment variables are defined for a project’s entire build process, with two important exceptions.

Aside from storing secrets, there are other patterns that take advantage of environment variables, like reusing the same monorepo configuration in multiple documentation projects. In cases where the environment variable isn’t a secret, like a build tool flag, you should also be aware of the alternatives to environment variables.

See also

How to use custom environment variables

A practical example of adding and accessing custom environment variables.

Environment variable reference

Reference to all pre-defined environment variables for your build environments.

Public API reference: Environment variables

Reference for managing custom environments via Read the Docs’ API.

Environment variables and build process

When a build process is started, pre-defined environment variables and custom environment variables are added at each step of the build process. The two sets of environment variables are merged together during the build process and are exposed to all of the executed commands, with pre-defined variables taking precedence over custom environment variables.

There are two noteworthy exceptions for custom environment variables:

Build checkout step

Custom environment variables are not available during the checkout step of the build process

Pull Request builds

Custom environment variables that are not marked as Public will not be available in pull request builds

Patterns of using environment variables

Aside from storing secrets, environment variables are also useful when you need your .readthedocs.yaml or the commands called during the build to behave differently depending on pre-defined environment variables or your own custom environment variables.

Example: Multiple projects from the same Git repo

If you need to build multiple documentation websites from the same Git repository, you can use an environment variable to configure the behavior of your build commands or Sphinx conf.py file.

An example of this is found in the documentation project that you are looking at now. Using the Sphinx extension sphinx-multiproject, the following configuration code decides whether to build the user or developer documentation. This is defined by the PROJECT environment variable:

Read the Docs’ conf.py [1] is used to build two documentation projects.
from multiproject.utils import get_project

# (...)

multiproject_projects = {
    "user": {
        "use_config_file": False,
        "config": {
            "project": "Read the Docs user documentation",
        },
    },
    "dev": {
        "use_config_file": False,
        "config": {
            "project": "Read the Docs developer documentation",
        },
    },
}


docset = get_project(multiproject_projects)

Alternatives to environment variables

In some scenarios, it’s more feasible to define your build’s environment variables using the .readthedocs.yaml configuration file. Using the dashboard for administering environment variables may not be the right fit if you already know that you want to manage environment variables as code.

Consider the following scenario:

  • The environment variable is not a secret.

    and

  • The environment variable is used just once for a custom command.

In this case, you can define the environment variable as code using Build process customization. The following example shows how a non-secret, single-purpose environment variable can be set inline:

.readthedocs.yaml
version: 2
build:
  os: "ubuntu-22.04"
  tools:
    python: "3.12"
  jobs:
    post_build:
      - EXAMPLE_ENVIRONMENT_VARIABLE=foobar command --flag

Environment variable reference

All build processes have the following environment variables automatically defined and available for each build step:

READTHEDOCS

Whether the build is running inside Read the Docs.

Default: True
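A common pattern is to branch on this variable so a step only runs on hosted builds. A minimal shell sketch:

```shell
# Run a step only when building on Read the Docs;
# READTHEDOCS is set to "True" in all hosted builds.
if [ "$READTHEDOCS" = "True" ]; then
  echo "running on Read the Docs"
else
  echo "running locally"
fi
```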

READTHEDOCS_PROJECT

The slug of the project being built. For example, my-example-project.

READTHEDOCS_LANGUAGE

The locale name, or the identifier for the locale, for the project being built. This value comes from the project’s configured language.

Examples: en, it, de_AT, es, pt_BR

READTHEDOCS_VERSION

The slug of the version being built, such as latest, stable, or a branch name like feature-1234. For pull request builds, the value will be the pull request number.

READTHEDOCS_VERSION_NAME

The verbose name of the version being built, such as latest, stable, or a branch name like feature/1234.

READTHEDOCS_VERSION_TYPE

The type of the version being built.

Examples: branch, tag, external (for pull request builds), unknown

READTHEDOCS_VIRTUALENV_PATH

Path for the virtualenv that was created for this build. Only exists for builds using Virtualenv and not Conda.

Example: /home/docs/checkouts/readthedocs.org/user_builds/project/envs/version

READTHEDOCS_OUTPUT

Base path for well-known output directories. Files in these directories will automatically be found, uploaded and published.

You need to append an output format to this variable. Currently valid formats are html, pdf, htmlzip and epub (e.g. $READTHEDOCS_OUTPUT/html/ or $READTHEDOCS_OUTPUT/pdf/). You also need to create the directory before moving outputs into it, which you can do with mkdir -p $READTHEDOCS_OUTPUT/html/. Note that only html supports multiple files; each of the other formats should contain one and only one file to be uploaded.

See also

Where to put files

Information about using custom commands to generate output that will automatically be published once your build succeeds.

READTHEDOCS_CANONICAL_URL

Canonical base URL for the version that is built. If the project has configured a custom domain (e.g. docs.example.com) it will be used in the resulting canonical URL. Otherwise, your project’s default subdomain will be used.

The path for the language and version is appended to the domain, so the final canonical base URLs can look like the following examples:

Examples:

  • https://docs.example.com/en/latest/

  • https://docs.readthedocs.io/ja/stable/

  • https://example--17.org.readthedocs.build/fr/17/

READTHEDOCS_GIT_CLONE_URL

URL for the remote source repository, from which the documentation is cloned. It could be HTTPS, SSH or any other URL scheme supported by Git. This is the same URL defined in your Project’s dashboard in Admin ‣ Settings ‣ Repository URL.

Examples:

  • https://github.com/readthedocs/readthedocs.org

  • git@github.com:readthedocs/readthedocs.org.git

READTHEDOCS_GIT_IDENTIFIER

Contains the Git identifier that was checked out from the remote repository URL. Possible values are either a branch or tag name.

Examples: v1.x, bugfix/docs-typo, feature/signup, update-readme

READTHEDOCS_GIT_COMMIT_HASH

Git commit hash identifier checked out from the repository URL.

Example: 1f94e04b7f596c309b7efab4e7630ed78e85a1f1

See also

Environment variable overview

General information about how environment variables are used in the build process.

How to use custom environment variables

Learn how to define your own custom environment variables, in addition to the pre-defined ones.

Versions

Read the Docs supports multiple versions of your repository. On initial import, we will create a latest version. This will point at the default branch defined in your version control system (by default, main on Git and default on Mercurial).

If your project has any tags or branches with a name following semantic versioning, we also create a stable version, tracking your most recent release. If you want a custom stable version, create either a tag or branch in your project with that name.

When you have Continuous Documentation Deployment configured for your repository, we will automatically build each version when you push a commit.

How we envision versions working

In the normal case, the latest version will always point to the most up to date development code. If you develop on a branch that is different than the default for your VCS, you should set the Default Branch to that branch.

You should push a tag for each version of your project. These tags should be numbered in a way that is consistent with semantic versioning. This will map to your stable branch by default.

Note

In fact, we parse your tag names against the rules given by PEP 440. This spec allows “normal” version numbers like 1.4.2 as well as pre-releases. An alpha version or a release candidate are examples of pre-releases, and they look like this: 2.0a1.

We only consider non pre-releases for the stable version of your documentation.
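The selection can be roughly illustrated in plain shell: drop pre-release tags, then take the highest remaining version. This is a simplification of what Read the Docs does (real parsing follows PEP 440, and the regex here only catches common pre-release suffixes):

```shell
# Given some tags, filter out pre-releases (a1, b2, rc1, ...) and
# keep the highest remaining version -- roughly what "stable" tracks.
printf '%s\n' 1.4.2 2.0a1 1.10.0 2.0rc1 \
  | grep -Ev '(a|b|rc)[0-9]+$' \
  | sort -V \
  | tail -n 1
# prints: 1.10.0
```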

If you have documentation changes on a long-lived branch, you can build those too. This will allow you to see how the new docs will be built in this branch of the code. Generally you won’t have more than one active branch over a long period of time. The main exception here would be release branches, which are branches that are maintained over time for a specific release number.

Version states

States define the visibility of a version across the site. You can change the states of a version from the Versions tab of your project.

Active

  • Active

    • Docs for this version are visible

    • Builds can be triggered for this version

  • Inactive

    • Docs for this version aren’t visible

    • Builds can’t be triggered for this version

When you deactivate a version, its docs are removed.

Hidden

  • Not hidden and Active

    • This version is listed on the flyout menu on the docs site

    • This version is shown in search results on the docs site

  • Hidden and Active

    • This version isn’t listed on the flyout menu on the docs site

    • This version isn’t shown in search results from another version on the docs site (like on search results from a superproject)

Hiding a version doesn’t make it private; any user with a link to its docs will be able to see it. This is useful when:

  • You no longer support a version, but you don’t want to remove its docs.

  • You have a work in progress version and don’t want to publish its docs just yet.

Note

Active versions that are hidden will be listed as Disallow: /path/to/version/ in the default robots.txt file created by Read the Docs.
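For instance, a hidden-but-active version 2.0 of an English project might produce an entry like the following in the default robots.txt (a sketch of the format; the version path is hypothetical):

```
User-agent: *
Disallow: /en/2.0/
```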

Privacy levels

Note

Privacy levels are only supported on Business hosting.

Public

Public versions are visible to everyone.

Private

Private versions are available only to people who have permissions to see them. They will not display on any list view, and will 404 when you link them to others. If you want to share your docs temporarily, see Sharing private documentation.

In addition, if you want other users to view the build page of your public versions, you’ll need to set the privacy level of your project to public.

Logging out

When you log in to a documentation site, you will remain logged in until you close your browser. To log out, click the Log out link in your documentation’s flyout menu, usually located in the bottom right or bottom left depending on the theme design. This logs you out from the current domain, but does not end any other sessions you have active.


Tags and branches

Read the Docs supports two workflows for versioning: based on tags or branches. If you have at least one tag, tags take precedence over branches when selecting the stable version.

Version Control Support Matrix

            git      hg       bzr     svn
Tags        Yes      Yes      Yes     No
Branches    Yes      Yes      Yes     No
Default     master   default          trunk

Version warning

A banner can be automatically displayed to notify viewers that there may be a more stable version of the documentation available. Specifically:

  • When the latest version is being shown, and there’s also a stable version active and not hidden, then the banner will remind the viewer that some of the documented features may not yet be available, and suggest that the viewer switch to the stable version.

  • When a version is being shown that is not the stable version, and there’s a stable version available, then the banner will suggest that the viewer switch to the stable version to see the newest documentation.

This feature is enabled by default on projects using the new beta addons. The beta addons can be enabled by using the build.commands config key, or via the new beta dashboard (https://beta.readthedocs.org) in the admin section of your docs (Admin > Settings).

Note

An older version of this feature is currently only available to projects that have already enabled it. When the updated feature development is finished the toggle setting will be enabled for all projects.

Redirects on root URLs

When a user hits the root URL for your documentation, for example https://pip.readthedocs.io/, they will be redirected to the Default version. This defaults to latest, but could also point to your latest released version.

Subprojects

In this article, you can learn more about how several documentation projects can be combined and presented to the reader on the same website.

Read the Docs can be configured to make other projects available on the website of the main project as subprojects. This allows for documentation projects to share a search index and a namespace or custom domain, but still be maintained independently.

This is useful for:

  • Organizations that need all their projects visible in one documentation portal or landing page

  • Projects that document and release several packages or extensions

  • Organizations or projects that want to have a common search function for several sets of documentation

For a main project example-project, a subproject example-project-plugin can be made available under the main project’s domain, for example at example-project.readthedocs.io/projects/plugin/.

See also

How to manage subprojects

Learn how to create and manage subprojects

How to link to other documentation projects with Intersphinx

Learn how to use references between different Sphinx projects, for instance between subprojects

Sharing a custom domain

Projects and subprojects can be used to share a custom domain. To configure this, one project should be established as the main project and configured with a custom domain. Other projects are then added as subprojects to the main project.

If the example project example-project was set up with a custom domain, docs.example.com, the URLs for the projects example-project and example-project-plugin with alias plugin would respectively be:

  • docs.example.com/en/latest/

  • docs.example.com/projects/plugin/en/latest/

Using aliases

Adding an alias for the subproject allows you to override the URL that is used to access it, giving more control over how you want to structure your projects. You can choose an alias for the subproject when it is created.

You can set your subproject’s project name and slug however you want, but we suggest prefixing it with the name of the main project.

Typically, a subproject is created with a <mainproject>- prefix, for instance if the main project is called example-project and the subproject is called plugin, then the subproject’s Read the Docs project slug will be example-project-plugin. When adding the subproject, the alias is set to plugin and the project’s URL becomes example-project.readthedocs.io/projects/plugin.

When you add a subproject, it is no longer directly available from its own domain. For instance, example-project-plugin.readthedocs.io/ will redirect to example-project.readthedocs.io/projects/plugin.
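The resulting URL pattern can be sketched with a tiny helper. subproject_url is a hypothetical name; it simply follows the slug-and-alias convention described above:

```python
def subproject_url(main_slug, alias, language="en", version="latest"):
    """Build the URL where a subproject is served under its main project.

    Illustrative helper assuming Read the Docs Community subdomains.
    """
    return (f"https://{main_slug}.readthedocs.io"
            f"/projects/{alias}/{language}/{version}/")
```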

Custom domain on subprojects

Adding a custom domain to a subproject is not allowed, since your documentation will always be served from the domain of the parent project.

Separate release cycles

By using subprojects, you can present the documentation of several projects even though they have separate release cycles.

Your main project may have its own versions and releases, while all of its subprojects maintain their own individual versions and releases. We recommend that documentation follows the release cycle of whatever it is documenting, meaning that your subprojects should be free to follow their own release cycle.

This is solved by having an individual flyout menu active for the project that’s viewed. When the user navigates to a subproject, they are presented with a flyout menu matching the subproject’s versions and Offline formats (PDF, ePub, HTML).

Localization and Internationalization

In this article, we explain high-level approaches to internationalizing and localizing your documentation.

By default, Read the Docs assumes that your documentation is or might become multilingual one day. The initial default language is English and therefore you often see the initial build of your documentation published at /en/latest/, where the /en denotes that it’s in English. By having the en URL component present from the beginning, you are ready for the eventuality that you would want a second language.

Read the Docs supports hosting your documentation in multiple languages. Read below for the various approaches that we support.

Projects with one language

If your documentation isn’t in English (the default), you should indicate which language you have written it in.

It is easy to set the Language of your project. On the project Admin page (or Import page), simply select your desired Language from the dropdown. This tells Read the Docs that your project is in that language, and the language will be represented in the URL for your project.

For example, a project that is in Spanish will have a default URL of /es/latest/ instead of /en/latest/.

Projects with multiple translations (Sphinx-only)

See also

How to manage translations for Sphinx projects

Describes the whole process for documentation with multiple languages in the same repository, and how to keep the translations up to date.

This situation is a bit more complicated. To support this, you will have one parent project and a number of projects marked as translations of that parent. Let’s use phpmyadmin as an example.

The main phpmyadmin project is the parent for all translations. Then you create a project for each translation, for example phpmyadmin-spanish, and set its Language to Spanish. In the parent project’s Translations page, you then mark phpmyadmin-spanish as a translation of your project.

This results in serving:

  • phpmyadmin at http://phpmyadmin.readthedocs.io/en/latest/

  • phpmyadmin-spanish at http://phpmyadmin.readthedocs.io/es/latest/

It also gets included in the Read the Docs flyout menu:

_images/translation_bar.png

Note

The default language of a custom domain is determined by the language of the parent project that the domain was configured on. See Custom domains for more information.

Note

You can include multiple translations in the same repository, with the same conf.py and .rst files, but each project must specify the language to build for those docs.

Note

You must commit the .po files for Read the Docs to translate your documentation.

Translation workflows

When you work with translations, the workflow of your translators becomes a critical component.

Considerations include:

  • Are your translators able to use a git workflow? For instance, are they able to translate directly via GitHub?

  • Do you benefit from machine translation?

  • Do you need different roles, for instance do you need translators and editors?

  • What is your source language?

  • When are your translated versions published?

By using Sphinx and .po files, you will be able to automatically synchronize between your documentation source messages on your git platform and your translation platform.

Many translation platforms support this workflow. Because Read the Docs builds your git repository, you can use any of them: any solution that synchronizes your translations with your git repository will ensure that your translations are automatically published with Read the Docs.

URL versioning schemes

The versioning scheme of your project defines the URL of your documentation, and if your project supports multiple versions or translations.

Read the Docs supports three different versioning schemes:

See also

How to change the versioning scheme of your project

How to configure your project to use a specific versioning scheme.

Versions

General explanation of how versioning works on Read the Docs.

Multiple versions with translations

This is the default versioning scheme, and the recommended one if your project has multiple versions and has, or plans to support, translations.

The URLs of your documentation will look like:

  • /en/latest/

  • /en/1.5/

  • /es/latest/install.html

  • /es/1.5/contributing.html

Multiple versions without translations

Use this versioning scheme if you want to have multiple versions of your documentation, but don’t want to have translations.

The URLs of your documentation will look like:

  • /latest/

  • /1.5/install.html

Warning

This means you can’t have translations for your documentation.

Single version without translations

Having a single version of a documentation project can be the better choice in cases where there should only ever be one unambiguous copy of your project. For example:

  • A research project may wish to only expose readers to their latest list of publications and research data.

  • A SaaS application might only ever have one version live.

The URLs of your documentation will look like:

  • /

  • /install.html

Warning

This means you can’t have translations or multiple versions for your documentation.
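The three schemes can be summarized as a path-building sketch. The scheme names used here are illustrative labels, not configuration values:

```python
def doc_path(scheme, page, language="en", version="latest"):
    """Build a documentation path under the given versioning scheme.

    Sketch of the three schemes described above; labels are illustrative.
    """
    if scheme == "multiple_versions_with_translations":
        return f"/{language}/{version}/{page}"
    if scheme == "multiple_versions_without_translations":
        return f"/{version}/{page}"
    if scheme == "single_version_without_translations":
        return f"/{page}"
    raise ValueError(f"unknown scheme: {scheme}")
```

For example, the same page install.html lands at /es/1.5/install.html, /1.5/install.html, or /install.html depending on the scheme.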

Custom domains

By configuring a custom domain for your project, your project can serve documentation from a domain you control, for instance docs.example.com. This is great for maintaining a consistent brand for your product and its documentation.

Default subdomains

Without a custom domain configured, your project’s documentation is served from a Read the Docs domain using a unique subdomain for your project:

  • <project name>.readthedocs.io for Read the Docs Community.

  • <organization name>-<project name>.readthedocs-hosted.com for Read the Docs for Business. The addition of the organization name allows multiple organizations to have projects with the same name.

See also

How to manage custom domains

How to create and manage custom domains for your project.

Features

Automatic SSL

SSL certificates are automatically issued through Cloudflare for every custom domain. No extra setup is required beyond configuring your project’s custom domain.

CDN caching

Response caching is provided through a CDN for all documentation projects, including projects using a custom domain. CDN caching improves page response time for your documentation’s users, and the CDN edge network provides low latency response times regardless of location.

Multiple domains

Projects can be configured to be served from multiple domains, which always includes the project’s default subdomain. Only one domain can be configured as the canonical domain however, and any requests to non-canonical domains and subdomains will redirect to the canonical domain.

Canonical domains

The canonical domain configures the primary domain the documentation will serve from, and also sets the domain search engines use for search results when hosting from multiple domains. Projects can only have one canonical domain, which is the project’s default subdomain if no other canonical domain is defined.

See also

Canonical URLs

How canonical domains affect your project’s canonical URL, and why canonical URLs are important.

Subprojects

How to share a custom domain between multiple projects.

Canonical URLs

A canonical URL allows you to specify the preferred version of a web page to prevent duplicated content. Here are some examples of when a canonical URL is used:

  • Search engines use your canonical URL to link users to the correct version and domain of your documentation.

  • Many popular chat clients and social media networks generate link previews, using your canonical URL as the final destination.

If canonical URLs aren’t used, it’s easy for outdated documentation to become the top search result for various pages in your documentation. Canonical URLs are not a perfect solution to this problem, but they are one of the ways search engines suggest to keep readers from landing on outdated documentation.

Tip

In most cases, Read the Docs will automatically generate a canonical URL for Sphinx projects. Most Sphinx users do not need to take further action.

See also

How to enable canonical URLs

More information on how to enable canonical URLs in your project.

How Read the Docs generates canonical URLs

The canonical URL takes the following into account:

  • The default version of your project (usually “latest” or “stable”).

  • The canonical custom domain if you have one, otherwise the default subdomain will be used.

For example, if you have a project named example-docs with a custom domain https://docs.example.com, then your documentation will be served at https://example-docs.readthedocs.io and https://docs.example.com. Without specifying a canonical URL, a search engine like Google will index both domains.

You’ll want to use https://docs.example.com as your canonical domain. This means that when Google indexes a page like https://example-docs.readthedocs.io/en/latest/, it will know that it should really point at https://docs.example.com/en/latest/, thus avoiding duplicating the content.

Note

If you want your custom domain to be set as the canonical, you need to set Canonical:  This domain is the primary one where the documentation is served from in the Admin > Domains section of your project settings.

Implementation

A canonical URL is automatically specified in the HTML output with a <link> element. For instance, regardless of whether you are viewing this page on /en/latest or /en/stable, the following HTML header data will be present:

<link rel="canonical" href="https://docs.readthedocs.io/en/stable/canonical-urls.html" />
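Generating such an element can be sketched as follows; canonical_link is a hypothetical helper that assumes the multiple-versions-with-translations URL layout:

```python
def canonical_link(domain, default_version, page, language="en"):
    """Render the canonical <link> element for a page.

    Illustrative sketch, not Read the Docs' actual template code.
    """
    href = f"https://{domain}/{language}/{default_version}/{page}"
    return f'<link rel="canonical" href="{href}" />'
```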

Content Delivery Network (CDN) and caching

A CDN is used for making documentation pages fast for your users. CDNs increase speed by caching documentation content in multiple data centers around the world, and then serving docs from the data center closest to the user.

We support CDNs on both of our sites:

  • On Read the Docs Community, we are able to provide a CDN to all the projects that we host. This service is graciously sponsored by Cloudflare.

  • On Read the Docs for Business, the CDN is included as part of all of our plans. We use Cloudflare for this as well.

CDN benefits

Having a CDN in front of your documentation has many benefits:

  • Improved reliability: Since docs are served from multiple places, one can go down and the docs are still accessible.

  • Improved performance: Data takes time to travel across space, so connecting to a server closer to the user makes documentation load faster.

Automatic cache refresh

We automatically refresh the cache on the CDN when the following actions happen:

  • Your project is saved.

  • Your domain is saved.

  • A new version of your documentation is built.

By refreshing the cache according to these rules, readers should never see outdated content. This makes the end-user experience seamless, and fast.

sitemap.xml support

Sitemaps allow you to inform search engines about URLs that are available for crawling. This makes your content more discoverable, and improves your Search Engine Optimization (SEO).

How it works

The sitemap.xml file is read by search engines in order to index your documentation. It contains information such as:

  • When a URL was last updated.

  • How often that URL changes.

  • How important this URL is in relation to other URLs in the site.

  • What translations are available for a page.

Read the Docs automatically generates a sitemap.xml for your project.

By default the sitemap includes:

  • Each version of your documentation and when it was last updated, sorted by version number.

This allows search engines to prioritize results based on the version number, sorted by semantic versioning.
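One way to sketch that ordering (an assumption for illustration, not Read the Docs’ exact implementation) is to sort version-numbered slugs in descending semantic-version order and place non-numeric slugs such as "latest" first:

```python
def sitemap_order(versions):
    """Sort version slugs for a sitemap, newest semantic version first.

    Illustrative sketch: non-numeric slugs like "latest" come first.
    """
    def release_key(slug):
        try:
            return tuple(int(p) for p in slug.lstrip("v").split("."))
        except ValueError:
            return None  # not a numbered release

    numeric = sorted((s for s in versions if release_key(s) is not None),
                     key=release_key, reverse=True)
    other = [s for s in versions if release_key(s) is None]
    return other + numeric
```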

Custom sitemap.xml

You can control the sitemap that is used via the robots.txt file. Our robots.txt support allows you to host a custom version of this file.

An example would look like:

User-agent: *
Allow: /

Sitemap: https://docs.example.com/en/stable/sitemap.xml

404 Not Found pages

If you want your project to use a custom or branded 404 Not Found page, you can put a 404.html or 404/index.html at the top level of your project’s HTML output.

How it works

When our servers return a 404 Not Found error, we check if there is a 404.html or 404/index.html in the root of your project’s output.

The following locations are checked, in order:

  • /404.html or 404/index.html in the current documentation version.

  • /404.html or 404/index.html in the default documentation version.
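The lookup order can be sketched as a list of candidate paths, assuming the default language-and-version URL layout; this is an illustration, not the actual server code:

```python
def not_found_candidates(current_version, default_version, language="en"):
    """List the 404 page paths checked, in order.

    Illustrative sketch of the lookup order described above.
    """
    paths = []
    # Current version is tried first, then the default version.
    for version in (current_version, default_version):
        for name in ("404.html", "404/index.html"):
            paths.append(f"/{language}/{version}/{name}")
    return paths
```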

Tool integration

Documentation tools will have different ways of generating a 404.html or 404/index.html file. We have examples for some of the most popular tools below.

We recommend the sphinx-notfound-page extension, which Read the Docs maintains. It automatically creates a 404.html page for your documentation, matching the theme of your project. See its documentation for how to install and customize it.

If you want to create a custom 404.html, Sphinx uses the html_extra_path option to add static files to the output. You need to create a 404.html file and put it under the path defined in html_extra_path.

If you are using the DirHTML builder, no further steps are required. Sphinx will automatically apply the <page-name>/index.html folder structure to your 404 page: 404/index.html. Read the Docs also detects 404 pages named this way.

robots.txt support

The robots.txt files allow you to customize how your documentation is indexed in search engines. It’s useful for:

  • Hiding various pages from search engines

  • Disabling certain web crawlers from accessing your documentation

  • Disallowing any indexing of your documentation

Read the Docs automatically generates one for you with a configuration that works for most projects. By default, the automatically created robots.txt:

  • Hides versions which are set to Hidden from being indexed.

  • Allows indexing of all other versions.

Warning

robots.txt files are respected by most search engines, but they aren’t a guarantee that your pages will not be indexed. Search engines may choose to ignore your robots.txt file, and index your docs anyway.

If you require private documentation, please see Sharing private documentation.

How it works

You can customize this file to add more rules to it. The robots.txt file will be served from the default version of your project. This is because the robots.txt file is served at the top-level of your domain, so we must choose a version to find the file in. The default version is the best place to look for it.

Tool integration

Documentation tools will have different ways of generating a robots.txt file. We have examples for some of the most popular tools below.

Sphinx uses the html_extra_path configuration value to add static files to its final HTML output. You need to create a robots.txt file and put it under the path defined in html_extra_path.
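A minimal conf.py sketch, assuming your robots.txt lives in a docs/_extra/ directory (the directory name is an arbitrary choice, not a Sphinx requirement):

```python
# conf.py -- serve a custom robots.txt with Sphinx.
# Files in html_extra_path are copied verbatim to the root of the
# HTML output, so _extra/robots.txt ends up next to index.html.
html_extra_path = ["_extra"]
```

The same mechanism works for a custom 404.html: any file placed in that directory is copied to the top level of the built site.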

Offline formats (PDF, ePub, HTML)

This page will provide an overview of a core Read the Docs feature: building docs in multiple formats.

Read the Docs supports the following formats by default:

  • PDF

  • ePub

  • Zipped HTML

This means that every commit that you push will automatically update your offline formats as well as your documentation website.

Use cases

This functionality is great for anyone who needs documentation when they aren’t connected to the internet. Users who are about to get on a plane can grab a single file and have the entire documentation during their trip. Many academic and scientific projects benefit from these additional formats.

PDF versions are also helpful to automatically create printable versions of your documentation. The source of your documentation will be structured to support both online and offline formats. This means that a documentation project displayed as a website can be downloaded as a PDF, ready to be printed as a report or a book.

Offline formats also support having the entire documentation in a single file. Your entire documentation can now be delivered as an email attachment, uploaded to an eReader, or accessed and searched locally without online latency. This makes your documentation project easy to redistribute or archive.

Accessing offline formats

You can download offline formats in the Project dashboard > Downloads:

_images/offline-formats.jpg

When you are browsing a documentation project, the offline formats can also be accessed directly from the Flyout menu.

Examples

If you want to see an example, you can download the Read the Docs documentation itself in PDF, ePub, and zipped HTML formats.

Continue learning

Downloadable documentation formats are built by your documentation framework. They are then published by Read the Docs and included in your Flyout menu. Therefore, it’s your framework that decides exactly how each output is built and which formats are supported:

Sphinx

All output formats are built almost losslessly from the documentation source, meaning that your documentation source (reStructuredText or Markdown/MyST) is built from scratch for each output format.

MkDocs and Docsify + more

The common case for most documentation frameworks is that several alternative extensions exist supporting various output formats. Most of the extensions export the HTML outputs as another format (for instance PDF) through a conversion process.

Because Sphinx supports the generation of offline formats through an official process, we are also able to support it officially. Other alternatives can also work, provided that you identify which extension you want to use and configure the environment for it to run. Other formats aren’t natively supported by Read the Docs, but support is coming soon.

See also

Other pages in our documentation are relevant to this feature, and might be a useful next step.

How to embed content from your documentation

Read the Docs allows you to embed content from any of the projects we host, as well as from specific allowed external domains (currently docs.python.org, docs.scipy.org, docs.sympy.org, and numpy.org). This allows reuse of content across sites, making sure the content is always up to date.

There are a number of use cases for embedding content, so we’ve built our integration in a way that enables users to build on top of it. This guide will show you some of our favorite integrations:

Contextualized tooltips on documentation pages

Tooltips on your own documentation are really useful to add more context to the current page the user is reading. You can embed any content that is available via reference in Sphinx, including:

  • Python object references

  • Full documentation pages

  • Sphinx references

  • Term definitions

We built a Sphinx extension called sphinx-hoverxref on top of our Embed API, which you can install in your project with minimal configuration.

Here is an example showing a tooltip when you hover over a reference with the mouse:

_images/sphinx-hoverxref-example.png

Tooltip shown when hovering on a reference using sphinx-hoverxref.

You can find more information about this extension, how to install and configure it in the hoverxref documentation.

Inline help on application website

This allows us to keep the official documentation as the single source of truth, while having great inline help in our application website as well. On the “Automation Rules” admin page we could embed the content of our Automation rules documentation page and be sure it will be always up to date.

Note

We recommend you point at tagged releases instead of latest. Tags don’t change over time, so you don’t have to worry about the content you are embedding disappearing.

The following example will fetch the section “Creating an automation rule” in page automation-rules.html from our own docs and will populate the content of it into the #help-container div element.

<script type="text/javascript">
var params = {
  'url': 'https://docs.readthedocs.io/en/latest/automation-rules.html%23creating-an-automation-rule',
  // 'doctool': 'sphinx',
  // 'doctoolversion': '4.2.0',
};
var url = 'https://readthedocs.org/api/v3/embed/?' + $.param(params);
$.get(url, function(data) {
  $('#help-container').html(data['content']);
});
</script>

<div id="help-container"></div>

You can modify this example to subscribe to the onclick JavaScript event and show a modal when the user clicks on a “Help” link.

Tip

Take into account that if the title changes, your section argument will break. To avoid that, you can manually define Sphinx references above the sections you don’t want to break. For example,

.. in your .rst document file

.. _unbreakable-section-reference:

Creating an automation rule
---------------------------

This is the text of the section.

To link to the section “Creating an automation rule” you can send section=unbreakable-section-reference. If you change the title it won’t break the embedded content because the label for that title will still be unbreakable-section-reference.

Please take a look at the Sphinx :ref: role documentation for more information about how to create references.

Calling the Embed API directly

The Embed API lives at https://readthedocs.org/api/v3/embed/ and accepts the URL of the content you want to embed. Take a look at its own documentation to find out more details.
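Building a request URL for the API can be sketched in Python. This only constructs the query string, performs no request, and uses only the url and doctool parameters shown in this article:

```python
from urllib.parse import urlencode

def embed_api_url(content_url, doctool=None):
    """Build a request URL for the Read the Docs Embed API (v3).

    Illustrative sketch: constructs the query string only.
    """
    params = {"url": content_url}
    if doctool:
        params["doctool"] = doctool
    return "https://readthedocs.org/api/v3/embed/?" + urlencode(params)
```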

You can also open an example request in the browser to check a live response directly.

Note

All relative links to pages contained in the remote content will continue to point at the remote page.

Search query syntax

When searching on Read the Docs with server side search, you can use some parameters in your query in order to search on given projects, versions, or to get more accurate results.

Parameters

Parameters are in the form of name:value, they can appear anywhere in the query, and depending on the parameter, you can use one or more of them.

Any other text that isn’t a parameter will be part of the search query. If you don’t want your search term to be interpreted as a parameter, you can escape it like project\:docs.

Note

Unknown parameters like foo:bar don’t require escaping.

The available parameters are:

project

Indicates the project and version to include results from (this doesn’t include subprojects or translations). If the version isn’t provided, the default version will be used. More than one parameter can be included.

Examples:

  • project:docs test

  • project:docs/latest test

  • project:docs/stable project:dev test

subprojects

Include results from the given project and its subprojects. If no version is provided, the default version of all projects will be used. If a version is provided, all subprojects matching that version will be included; if a subproject doesn’t have a version with that name, its default version is used. More than one parameter can be included.

Examples:

  • subprojects:docs test

  • subprojects:docs/latest test

  • subprojects:docs/stable subprojects:dev test

user

Include results from projects the given user has access to. The only supported value is @me, which is an alias for the current user. Only one parameter can be included; if duplicated, the last one overrides the previous one.

Examples:

  • user:@me test
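A toy parser for this syntax might look like the following; it is an illustrative sketch, not the parser Read the Docs uses:

```python
def parse_query(query):
    """Split a search query into (params, free_text).

    Recognizes the name:value syntax and the backslash escape
    described above. Unknown parameters stay part of the text.
    """
    known = {"project", "subprojects", "user"}
    params, terms = [], []
    for token in query.split():
        name, sep, value = token.partition(":")
        if sep and name in known:
            params.append((name, value))
        else:
            # Unescape "\:" so "project\:docs" searches for "project:docs".
            terms.append(token.replace("\\:", ":"))
    return params, " ".join(terms)
```

For instance, `parse_query("foo:bar test")` treats foo:bar as plain text, matching the note that unknown parameters don’t require escaping.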

Permissions

If the user doesn’t have permission for a version, or if the version doesn’t exist, we don’t include results from that version.

The API will return all the projects that were used in the final search, so you can check which projects were included.

Limitations

In order to keep our search usable for everyone, you can search up to 100 projects at a time. If the resulting query includes more than 100 projects, the extra projects will be omitted from the final search.

This syntax is only available when using our search API V3 or when using the global search (https://readthedocs.org/search/).

Searching multiple versions of the same project isn’t supported; the last version specified will override the previous one.

Special queries

Read the Docs uses the Simple Query String feature from Elasticsearch. This means that as the search query becomes more complex, the results yielded become more specific.

Prefix query

* (asterisk) at the end of any term signifies a prefix query. It returns results containing words with the given prefix.

Examples:

  • test*

  • build*

Fuzziness

~N (tilde followed by a number) after a word indicates edit distance (fuzziness). This type of query is helpful when the exact spelling of the keyword is unknown. It returns results that contain terms similar to the search term.

Examples:

  • doks~1

  • test~2

  • getter~2

Words close to each other

~N (tilde followed by a number) after a phrase can be used to match words that are near to each other.

Examples:

  • "dashboard admin"~2

  • "single documentation"~1

  • "read the docs policy"~5

Flyout menu

When you are using a Read the Docs site, you will likely notice that we embed a menu on all the documentation pages we serve. This is a way to expose the functionality of Read the Docs on the page, without having to have the documentation theme integrate it directly.

Functionality

The flyout menu provides access to the following bits of Read the Docs functionality:

  • A version switcher that shows users all of the active, unhidden versions they have access to.

  • Offline formats for the current version, including HTML & PDF downloads that are enabled by the project.

  • Links to the Read the Docs dashboard for the project.

  • Links to your VCS provider that allow the user to quickly find the exact file that the documentation was rendered from.

  • A search bar that gives users access to our Server side search of the current version.

Closed

_images/flyout-closed.png

The flyout when it’s closed

Open

_images/flyout-open.png

The opened flyout

Information for theme authors

Warning

This is currently deprecated in favor of the new Read the Docs Addons approach. We are working on an idea that exposes all the required data to build the flyout menu via a JavaScript CustomEvent. Take a look at an example of this approach at https://github.com/readthedocs/sphinx_rtd_theme/pull/1526.

We are looking for feedback on this approach before making it public. Please, comment on that PR or the linked issue from its description letting us know if it would cover your use case.

People who are making custom documentation themes often want to specify where the flyout is injected, and also what it looks like. We support both of these use cases for themes.

Defining where the flyout menu is injected

The flyout menu injection looks for a specific selector (#readthedocs-embed-flyout) in order to inject the flyout. You can add <div id="readthedocs-embed-flyout"> in your theme, and our JavaScript code will inject the flyout there. All themes except sphinx_rtd_theme have the flyout appended to the <body>.

Styling the flyout

HTML themes can style the flyout to make it match the overall style of the HTML. By default the flyout has its own CSS file, which you can look at to see the basic CSS class names.

The example HTML that the flyout uses is included here, so that you can style it in your HTML theme:

<div class="injected">
   <div class="rst-versions rst-badge shift-up" data-toggle="rst-versions">
      <span class="rst-current-version" data-toggle="rst-current-version">
      <span class="fa fa-book">&nbsp;</span>
      v: 2.1.x
      <span class="fa fa-caret-down"></span>
      </span>
      <div class="rst-other-versions">
         <!-- "Languages" section (``dl`` tag) is not included if the project does not have translations -->
         <dl>
            <dt>Languages</dt>
            <dd class="rtd-current-item">
               <a href="https://flask.palletsprojects.com/en/2.1.x">en</a>
            </dd>
            <dd>
               <a href="https://flask.palletsprojects.com/es/2.1.x">es</a>
            </dd>
         </dl>

         <!-- "Versions" section (``dl`` tag) is not included if the project is single version -->
         <dl>
            <dt>Versions</dt>
            <dd>
               <a href="https://flask.palletsprojects.com/en/latest/">latest</a>
            </dd>
            <dd class="rtd-current-item">
               <a href="https://flask.palletsprojects.com/en/2.1.x/">2.1.x</a>
            </dd>
         </dl>

         <!-- "Downloads" section (``dl`` tag) is not included if the project does not have artifacts to download -->
         <dl>
            <dt>Downloads</dt>
            <dd>
               <a href="//flask.palletsprojects.com/_/downloads/en/2.1.x/pdf/">PDF</a>
             </dd>
            <dd>
               <a href="//flask.palletsprojects.com/_/downloads/en/2.1.x/htmlzip/">HTML</a>
             </dd>
         </dl>

         <dl>
            <dt>On Read the Docs</dt>
            <dd>
               <a href="//readthedocs.org/projects/flask/">Project Home</a>
            </dd>
            <dd>
               <a href="//readthedocs.org/projects/flask/builds/">Builds</a>
            </dd>
            <dd>
               <a href="//readthedocs.org/projects/flask/downloads/">Downloads</a>
            </dd>
         </dl>

         <dl>
            <dt>On GitHub</dt>
            <dd>
               <a href="https://github.com/pallets/flask/blob/2.1.x/docs/index.rst">View</a>
            </dd>
            <dd>
               <a href="https://github.com/pallets/flask/edit/2.1.x/docs/index.rst">Edit</a>
            </dd>
         </dl>

         <dl>
            <dt>Search</dt>
            <dd>
               <div style="padding: 6px;">
                  <form id="flyout-search-form" class="wy-form" target="_blank" action="//readthedocs.org/projects/flask/search/" method="get">
                     <input type="text" name="q" aria-label="Search docs" placeholder="Search docs">
                  </form>
               </div>
            </dd>
         </dl>

         <hr>
         <small>
         <span>Hosted by <a href="https://readthedocs.org">Read the Docs</a></span>
         <span> &middot; </span>
         <a href="https://docs.readthedocs.io/page/privacy-policy.html">Privacy Policy</a>
         </small>
      </div>
   </div>
</div>

Redirects

Over time, a documentation project may want to rename and move content around. Redirects allow these changes to happen without breaking the experience of users following old links.

If you do not manage URL structures, users will eventually encounter 404 File Not Found errors. While this may be acceptable in some cases, it is usually best to avoid the bad user experience of a 404 page.

Built-in redirects ⬇️

Allows for simple and long-term sharing of external references to your documentation.

User-defined redirects ⬇️

Makes it easier to move content around.

See also

How to use custom URL redirects in documentation projects

This guide shows you how to add redirects with practical examples.

Best practices for linking to your documentation

Information and tips about creating and handling external references.

How to deprecate content

A guide to deprecating features and other topics in your documentation.

Built-in redirects

This section explains the redirects that are automatically active for all projects and how they are useful. Built-in redirects are especially useful for creating and sharing incoming links, which is discussed in depth in Best practices for linking to your documentation.

Page redirects at /page/

You can link to a specific page and have it redirect to your default version, allowing you to create links on external sources that are always up to date. This is done with the /page/ URL prefix.

For instance, you can reach the page you are reading now by going to https://docs.readthedocs.io/page/guides/best-practice/links.html.

Another way to handle this is with the latest version. You can set your latest version to a specific version and always link to latest. You can reach this page by going to https://docs.readthedocs.io/en/latest/guides/best-practice/links.html.
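As a sketch, the /page/ rewrite behaves like prepending your project's default language and version to the requested path. Here "en" and "stable" are placeholder values standing in for your project's defaults:

```shell
# Illustrative sketch of the /page/ redirect, not the actual server-side code.
# "en" and "stable" are assumed defaults for this example.
default_lang="en"
default_version="stable"

resolve_page_url() {
    path="${1#/page/}"    # strip the /page/ prefix
    echo "/${default_lang}/${default_version}/${path}"
}

resolve_page_url "/page/guides/best-practice/links.html"
# -> /en/stable/guides/best-practice/links.html
```

Because the default version is resolved at request time, links built this way keep working even after your default version changes.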

Root URL redirect at /

A link to the root of your documentation (<slug>.readthedocs.io/) will redirect to the default version, as set in your project settings.

This works for readthedocs.io (Read the Docs Community), readthedocs-hosted.com (Read the Docs for Business), and custom domains.

For example:

docs.readthedocs.io -> docs.readthedocs.io/en/stable/

Warning

You cannot use the root redirect to reference specific pages. / only redirects to the default version, whereas /some/page.html will not redirect to /en/latest/some/page.html. Instead, use Page redirects at /page/.

You can choose which is the default version for Read the Docs to display. This usually corresponds to the most recent official release from your project.

Root language redirect at /<lang>/

A link to the root language of your documentation (<slug>.readthedocs.io/en/) will redirect to the default version of that language.

For example, accessing the English language of the project will redirect you to its default version (stable):

https://docs.readthedocs.io/en/ -> https://docs.readthedocs.io/en/stable/

User-defined redirects

Page redirects

Page Redirects let you redirect a page across all versions of your documentation.

Note

Since page redirects apply to all versions, From URL doesn’t need to include the /<language>/<version> prefix (e.g. /en/latest); it only needs the part of the URL that follows that prefix. If you want to set redirects only for some languages or some versions, use Exact redirects with the fully-specified path.

Exact redirects

Exact Redirects take into account the full URL (including language and version), allowing you to create a redirect for a specific version or language of your documentation.

Clean/HTML URLs redirects

If you decide to change the style of the URLs of your documentation, you can use Clean URL to HTML or HTML to clean URL redirects to redirect users to the new URL style.

For example, if a previous page was at /en/latest/install.html, and now is served at /en/latest/install/, or vice versa, users will be redirected to the new URL.
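The two URL-style conversions amount to simple suffix rewrites. This sketch is an illustration only, not the actual Read the Docs implementation:

```shell
# Sketch of the two URL-style redirects (illustration only).
html_to_clean() {
    # /en/latest/install.html -> /en/latest/install/
    echo "${1%.html}/"
}

clean_to_html() {
    # /en/latest/install/ -> /en/latest/install.html
    echo "${1%/}.html"
}

html_to_clean "/en/latest/install.html"   # -> /en/latest/install/
clean_to_html "/en/latest/install/"       # -> /en/latest/install.html
```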

Limitations and observations

  • Read the Docs Community users are limited to 100 redirects per project, and Read the Docs for Business users have a number of redirects limited by their plan.

  • By default, redirects only apply on pages that don’t exist. Forced redirects allow you to apply redirects on existing pages.

  • Redirects aren’t applied on previews of pull requests. You should treat these domains as ephemeral and not rely on them for user-facing content.

  • You can redirect to URLs outside Read the Docs; just include the protocol in To URL, e.g. https://example.com.

  • A wildcard can be used at the end of From URL (suffix wildcard) to redirect all pages matching a prefix. Prefix and infix wildcards are not supported.

  • If a wildcard is used in From URL, the part of the URL that matches the wildcard can be used in To URL with the :splat placeholder.

  • Redirects without a wildcard match paths with or without a trailing slash, e.g. /install matches /install and /install/.

  • The order of redirects matters. If multiple redirects match the same URL, the first one will be applied. The order of redirects can be changed from your project’s dashboard.

  • If an infinite redirect is detected, a 404 error will be returned, and no other redirects will be applied.
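The suffix-wildcard and :splat behavior described above can be sketched as follows. This is an illustration only; the real matching happens on the Read the Docs servers:

```shell
# Sketch of suffix-wildcard matching with :splat substitution (illustration only).
apply_redirect() {
    from="$1"; to="$2"; url="$3"
    prefix="${from%\*}"                      # From URL without the trailing *
    case "$url" in
        "$prefix"*)
            splat="${url#"$prefix"}"         # the part matched by the wildcard
            echo "${to%:splat}${splat}"
            ;;
        *)
            echo "$url"                      # no match: URL is unchanged
            ;;
    esac
}

apply_redirect "/api/*" "/api/v1/:splat" "/api/projects.html"
# -> /api/v1/projects.html
```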

Examples

Redirecting a page

Say you move the example.html page into a subdirectory of examples: examples/intro.html. You can create a redirect with the following configuration:

Type: Page Redirect
From URL: /example.html
To URL: /examples/intro.html

Users will now be redirected:

  • From https://docs.example.com/en/latest/example.html to https://docs.example.com/en/latest/examples/intro.html.

  • From https://docs.example.com/en/stable/example.html to https://docs.example.com/en/stable/examples/intro.html.

If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:

Type: Exact Redirect
From URL: /en/latest/example.html
To URL: /en/latest/examples/intro.html

Note

Use the desired version and language instead of latest and en.

Redirecting a directory

Say you rename the /api/ directory to /api/v1/. Instead of creating a redirect for each page in the directory, you can use a wildcard to redirect all pages in that directory:

Type: Page Redirect
From URL: /api/*
To URL: /api/v1/:splat

Users will now be redirected:

  • From https://docs.example.com/en/latest/api/ to https://docs.example.com/en/latest/api/v1/.

  • From https://docs.example.com/en/latest/api/projects.html to https://docs.example.com/en/latest/api/v1/projects.html.

If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:

Type: Exact Redirect
From URL: /en/latest/api/*
To URL: /en/latest/api/v1/:splat

Note

Use the desired version and language instead of latest and en.

Redirecting a directory to a single page

Say you put the contents of the /examples/ directory into a single page at /examples.html. You can use a wildcard to redirect all pages in that directory to the new page:

Type: Page Redirect
From URL: /examples/*
To URL: /examples.html

Users will now be redirected:

  • From https://docs.example.com/en/latest/examples/ to https://docs.example.com/en/latest/examples.html.

  • From https://docs.example.com/en/latest/examples/intro.html to https://docs.example.com/en/latest/examples.html.

If you want this redirect to apply to a specific version of your documentation, you can create a redirect with the following configuration:

Type: Exact Redirect
From URL: /en/latest/examples/*
To URL: /en/latest/examples.html

Note

Use the desired version and language instead of latest and en.

Redirecting a page to the latest version

Say you want your users to always be redirected to the latest version of a page, your security policy (/security.html) for example. You can use a wildcard with a forced redirect to redirect all versions of that page to the latest version:

Type: Page Redirect
From URL: /security.html
To URL: https://docs.example.com/en/latest/security.html
Force Redirect: True

Users will now be redirected:

  • From https://docs.example.com/en/v1.0/security.html to https://docs.example.com/en/latest/security.html.

  • From https://docs.example.com/en/v2.5/security.html to https://docs.example.com/en/latest/security.html.

Note

To URL includes the domain. This is required; otherwise, the redirect will be relative to the current version, resulting in a redirect to https://docs.example.com/en/v1.0/en/latest/security.html.

Redirecting an old version to a new one

Say you want to redirect readers of the deprecated 2.0 version of your documentation at /en/2.0/ to the newer 3.0 version at /en/3.0/. You can use an exact redirect to do so:

Type: Exact Redirect
From URL: /en/2.0/*
To URL: /en/3.0/:splat

Users will now be redirected:

  • From https://docs.example.com/en/2.0/dev/install.html to https://docs.example.com/en/3.0/dev/install.html.

Note

For this redirect to work, your old version must be disabled. If the version is still active, you can use the Force Redirect option.

Migrating your docs to Read the Docs

Say that you previously had your docs hosted at https://docs.example.com/dev/, and chose to migrate to Read the Docs with support for multiple versions and translations. Your documentation will now be served at https://docs.example.com/en/latest/, but your users may have bookmarks saved with the old URL structure, for example https://docs.example.com/dev/install.html.

You can use an exact redirect with a wildcard to redirect all pages from the old URL structure to the new one:

Type: Exact Redirect
From URL: /dev/*
To URL: /en/latest/:splat

Users will now be redirected:

  • From https://docs.example.com/dev/install.html to https://docs.example.com/en/latest/install.html.

Migrating your documentation to another domain

You can use an exact redirect with the force option to migrate your documentation to another domain, for example:

Type: Exact Redirect
From URL: /*
To URL: https://newdocs.example.com/:splat
Force Redirect: True

Users will now be redirected:

  • From https://docs.example.com/en/latest/install.html to https://newdocs.example.com/en/latest/install.html.

Changing your Sphinx builder from html to dirhtml

When you change your Sphinx builder from html to dirhtml, all your URLs will change from /page.html to /page/. You can create a redirect of type HTML to clean URL to redirect all your old URLs to the new style.

Analytics for search and traffic

Read the Docs supports analytics for search and traffic. When someone reads your documentation, we collect data about the visit and the referrer while fully respecting the visitor’s privacy.

Traffic analytics

Read the Docs aggregates statistics about visits to your documentation. This is mainly information about how often pages are viewed, and which pages return a 404 Not Found error code.

Traffic Analytics lets you see which documents your users are reading. This allows you to understand how your documentation is being used, so you can focus on expanding and updating parts people are reading most.

If you require more detailed analytics, Read the Docs has native support for Google Analytics. It’s also possible to customize your documentation to include other analytics frameworks.

Learn more in How to use traffic analytics.

Search analytics

When someone visits your documentation and uses the built-in server side search feature, Read the Docs will collect analytics on their search term.

Those are aggregated into a simple view of the “Top queries in the past 30 days”. You can also download this data.

This is helpful for optimizing your documentation in alignment with your readers’ interests. You can discover new trends and expand your documentation to cover new needs.

Learn more in How to use search analytics.

Security logs

Security logs allow you to audit what has happened recently in your organization or account. This feature is quite important for many security compliance programs, as well as the general peace of mind of knowing what is happening on your account. We store the IP address and the browser used on each event, so that you can confirm this access was from the intended person.

Security logs are only visible to organization owners. You can invite other team members as owners.

See also

Security policy

General information and reference about how security is handled on Read the Docs.

User security log

We store a user security log for the latest 90 days of activity. This log is useful to validate that no unauthorized events have occurred.

The security log tracks the following events:

  • Authentication on the dashboard.

  • Authentication on documentation pages (Business hosting only).

  • When invitations to manage a project are sent, accepted, revoked or declined.

Authentication failures and successes are both tracked.

Logs are available in <Username dropdown> ‣ Settings ‣ Security Log.

Organization security log

Note

This feature is only available on Read the Docs for Business.

The length of log storage varies with your plan; check our pricing page for more details. Your organization security log is a great place to check periodically to ensure there hasn’t been unauthorized access to your organization.

Organization logs track the following events:

  • Authentication on documentation pages from your organization.

  • User accesses a documentation page from your organization (Enterprise plans only).

  • User accesses a documentation’s downloadable formats (Enterprise plans only).

  • Invitations to organization teams are sent, revoked or accepted.

Authentication failures and successes are both tracked.

Logs are available in <Username dropdown> ‣ Organizations ‣ <Organization name> ‣ Settings ‣ Security Log.

If there is additional information you wish the security log captured, you can always reach out to Site support.

See also

Security reports

Security information related to our own platform, personal data treatment, and how to report a security issue.

Status badges

Status badges let you show the state of your documentation to your users. They will show if the latest build has passed, failed, or is in an unknown state. They are great for embedding in your README, or putting inside your actual doc pages.

You can see a badge in action in the Read the Docs README.

Display states

Badges have the following states which can be shown to users:

  • Green: passing - the last build was successful.

  • Red: failing - the last build failed.

  • Yellow: unknown - we couldn’t figure out the status of your last build.

An example of each is shown here:

[green, red, and yellow badge images]

Automatically generated

On the dashboard of a project, an example badge is displayed together with code snippets for reStructuredText, Markdown, and HTML.

Badges are generated on-demand for all Read the Docs projects, using the following URL syntax:

https://readthedocs.org/projects/<project-slug>/badge/?version=<version>&style=<style>
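For example, assembling a badge URL from its parts; the project slug below is a placeholder, and the version and style values are ones documented on this page:

```shell
# Sketch: build a badge URL from its parts.
# "example-project" is a placeholder slug, not a real project.
project_slug="example-project"
version="latest"
style="flat-square"

badge_url="https://readthedocs.org/projects/${project_slug}/badge/?version=${version}&style=${style}"
echo "$badge_url"
# -> https://readthedocs.org/projects/example-project/badge/?version=latest&style=flat-square
```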

Style

You can pass the style GET argument to get custom styled badges. This allows you to match the look and feel of your website. By default, the flat style is used.

  • flat (default) – Flat Badge

  • flat-square – Flat-Square Badge

  • for-the-badge – Badge

  • plastic – Plastic Badge

  • social – Social Badge

Version-specific badges

You can change the version of the documentation your badge points to. To do this, you can pass the version GET argument to the badge URL. By default, it will point at the default version you have specified for your project.

The badge URL looks like this:

https://readthedocs.org/projects/<project-slug>/badge/?version=latest

Badges on dashboard pages

On each project home page there is a badge that communicates the status of the default version. If you click on the badge icon, you will be given snippets for reStructuredText, Markdown, and HTML to make embedding it easier.

Badges for private projects

Note

This feature is only available on Read the Docs for Business.

For private projects, a badge URL cannot be guessed. A token is needed to display it. Private badge URLs are displayed on a private project’s dashboard in place of public URLs.

How to structure your documentation

A documentation project’s ultimate goal is to be read and understood by a reader. Readers need to be able to discover the information that they need. Without a defined structure, readers either won’t find the information they need or will get frustrated along the way.

One of the largest benefits of a good structure is that it removes questions that keep authors from writing documentation. Starting with a blank page is often the hardest part of documentation, so anything we can do to remove this problem is a win.

Choosing a structure

Good news! You don’t have to invent all of the structure yourself, since a lot of experience-based work has been done to come up with a universal documentation structure.

In order to avoid starting with a blank page, we recommend a simple process:

  • Choose a structure for your documentation. We recommend Diátaxis for this.

  • Find an example project or template to start from.

  • Start writing by filling in the structure.

This process helps you get started quickly, and helps keep things consistent for the reader of your documentation.

Diátaxis Methodology

The documentation you’re reading is written using the Diátaxis framework. It has four major parts as summarized by this image:

https://diataxis.fr/_images/diataxis.png

We recommend that you read more about it in the official Diátaxis documentation.

Explaining the structure to your users

One of the benefits of Diátaxis is that it’s a well-known structure, and users might already be familiar with it. As long as you stick to the structure, your users should be able to use existing experience to navigate.

Using the names that are defined (e.g. Tutorials, Explanation) in a user-facing way also helps here.

Best practices for linking to your documentation

Once you start to publish documentation, external sources will inevitably link to specific pages in your documentation.

Sources of incoming links vary greatly depending on the type of documentation project that is published. They can include everything from old emails to GitHub issues, wiki articles, software comments, PDF publications, or StackOverflow answers. Most of these incoming sources are not in your control.

Read the Docs makes it easier to create and manage incoming links by redirecting certain URLs automatically and giving you access to define your own redirects.

In this article, we explain how our built-in redirects work and what we consider “best practice” for managing incoming links.

See also

Versions

Read more about how to handle versioned documentation and URL structures.

Redirects

Overview of all the redirect features available on Read the Docs. Many of the redirect features are useful either for building external links or handling requests to old URLs.

How to use custom URL redirects in documentation projects

How to add a user-defined redirect, step-by-step. Useful if your content changes location!

Security considerations for documentation pages

This article explains the security implications of documentation pages. It doesn’t apply to the main dashboard (readthedocs.org/readthedocs.com), only to documentation pages (readthedocs.io, readthedocs-hosted.com, and custom domains).

See also

Cross-site requests

Learn about cross-origin requests in our public APIs.

Cross-origin requests

Read the Docs allows cross-origin requests for the documentation resources it serves. However, internal and proxied APIs, typically found under the /_/ path, don’t allow cross-origin requests.

To facilitate this, the following headers are added to all responses from documentation pages:

  • Access-Control-Allow-Origin: *

  • Access-Control-Allow-Methods: GET, HEAD, OPTIONS

These headers allow cross-origin requests from any origin and only allow the GET, HEAD and OPTIONS methods. It’s important to note that passing credentials (such as cookies or HTTP authentication) in cross-origin requests is not allowed, ensuring access to public resources only.

Having cross-origin requests enabled allows third-party websites to make use of files from your documentation (as long as they are public), which allows various third-party integrations to work.

If needed, the Access-Control headers can be changed for your documentation pages by contacting support. You are responsible for providing the correct values for these headers, and making sure they don’t break your documentation pages.

Cookies

On Read the Docs Community, we don’t use cookies, as all resources are public.

On Read the Docs for Business, we use cookies to store user sessions. These cookies are set when a user authenticates to access private documentation. Session cookies have the SameSite attribute set to None, which allows them to be sent in cross-origin requests where allowed (see Cross-origin requests), for example, when embedding private documentation pages in an iframe (see Embedding documentation pages).

Embedding documentation pages

Embedding documentation pages in an iframe is allowed. Read the Docs doesn’t set the X-Frame-Options or Content-Security-Policy headers, which means that the browser’s default behavior is used.

Embedding private documentation pages in an iframe is possible, but it requires users to be previously authenticated in the embedded domain.

It’s important to note that embedding documentation pages in an iframe does not grant the parent page access to the iframe’s content. Documentation pages serve static content only, and the exposed APIs are read-only, making the exploitation of a clickjacking vulnerability very unlikely.

If needed, the X-Frame-Options and Content-Security-Policy headers can be set on your documentation pages by contacting support. You are responsible for providing the correct values for these headers, and making sure they don’t break your documentation pages.

Business hosting

We offer Read the Docs for Business for building and hosting commercial documentation.

Our commercial solutions are provided as a set of subscriptions that are paid and managed through an online interface. In order to get started quickly and easily, a trial option is provided free of charge.

See also

Read the Docs website: Features

A high-level overview of platform features is available on our website, and the pricing page has a feature breakdown by subscription level.

Read the Docs website: Company

Information about the company running Read the Docs, including our mission, team, and community.

Commercial documentation solutions

In addition to providing the same features as Read the Docs Community, commercial subscriptions to Read the Docs include additional features and run on separate infrastructure.

_images/community_in_business.png

Read the Docs for Community and Business are a combined system: all features developed for the community benefit the business solution, and most features developed for business users are implemented for the community.

The following list is a high-level overview of the areas covered by Read the Docs for Business. If you want a full feature breakdown, please refer to our pricing page.

Private repositories and private documentation

The largest difference between the community solution and our commercial offering is the ability to connect to private repositories, to restrict documentation access to certain users, or to share private documentation via private hyperlinks.

Additional build resources

Do you have a complicated build process that uses large amounts of CPU, memory, disk, or networking resources? Commercial subscriptions offer more resources, resulting in faster documentation build times. We can also customize builders, for example with a GPU or multiple CPUs.

Priority support

We have a dedicated support team that responds to support requests during business hours.

Advertising-free

All commercially hosted documentation is always ad-free.

Business features

Enjoy additional functionality specifically for larger organizations, such as team management, single sign-on, and audit logging.

Organizations

Note

This feature is only available on Read the Docs for Business.

In this article, we explain how the organizations feature on Read the Docs allows you to manage access to your projects. On Read the Docs for Business, your account is linked to an organization. Organizations allow you to define both individual and team permissions for your projects.

Throughout this article, we use ACME Corporation as our example organization. ACME has a few people inside their organization, some who need full access and some who just need access to one project.

See also

How to manage Read the Docs teams

A step-by-step guide to managing teams.

Member types

  • Owners – Get full access to view and edit the Organization and all Projects.

  • Members – Get access to a subset of the Organization’s projects.

  • Teams – Give Members access to a set of projects.

The best way to think about this relationship is:

Owners will create Teams to assign permissions to all Members.

Warning

Owners, Members and Teams behave differently if you are using Single Sign-on with GitHub, Bitbucket, or GitLab.

Team types

You can create two types of Teams:

  • Admins – These teams have full access to administer the projects in the team. They are allowed to change all of the settings, set notifications, and perform any action under the Admin tab.

  • Read Only – These teams are only able to read and search inside the documents.

Example

ACME would set up Owners of their organization, for example Frank Roadrunner would be an owner. He has full access to the organization and all projects.

Wile E. Coyote is a contractor, and will just have access to the new project Road Builder.

Roadrunner would set up a Team called Contractors. That team would have Read Only access to the Road Builder project. Then he would add Wile E. Coyote to the team. This would give him access to just this one project inside the organization.

Single Sign-On (SSO)

Note

This feature is only available on Read the Docs for Business.

Single sign-on is an optional feature on Read the Docs for Business for all users. By default, you will use teams within Read the Docs to manage user authorization. SSO will allow you to grant permissions to your organization’s projects in an easy way.

Currently, we support two different types of single sign-on:

  • Authentication and authorization are managed by the identity provider (GitHub, Bitbucket or GitLab)

  • Authentication (only) is managed by the identity provider (Google Workspace account with a verified email address)

Users can log out by using the Log Out link in the RTD flyout menu.

Single Sign-on with GitHub, Bitbucket, or GitLab

Using an identity provider that supports authentication and authorization allows organization owners to manage who has access to projects on Read the Docs, directly from the provider itself. If a user needs access to your documentation project on Read the Docs, that user just needs to be granted permissions in the Git repository associated with the project.

Once you enable this option, your existing Read the Docs teams will not be used. All authentication will be done using your git provider, including any two-factor authentication and additional Single Sign-on that they support.

Learn how to configure this SSO method with our How to setup Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket.

SSO with Google Workspace

This feature allows you to easily manage access for users with a specific email address (e.g. employee@company.com), where company.com is a registered Google Workspace domain. As this identity provider does not provide information about which projects a user has access to, permissions are managed by Read the Docs’ internal team authorization system.

This feature is only available on the Pro plan and above. Learn how to configure this SSO method with our How to setup Single Sign-On (SSO) with Google Workspace.

Requesting additional providers

We are always interested in hearing from our users about what authentication needs they have. You can reach out to our Site support to talk with us about any additional authentication needs you might have.

Tip

Many additional providers can be supported via GitHub, Bitbucket, and GitLab SSO. We will depend on those sites in order to authenticate you, so you can use all your existing SSO methods already configured on those services.

Sharing private documentation

Note

This feature is only available on Read the Docs for Business.

You can share your project with users outside of your company:

  • by sending them a secret link,

  • by giving them a password.

These methods will allow them to view specific projects or versions of a project inside your organization.

Additionally, you can use an HTTP Authorization header. This is useful for accessing the documentation from a script.

Enabling sharing

  • Go into your project’s Admin page and click on Sharing.

  • Click on New Share.

  • Select access type (secret link, password, or HTTP header token), add an expiration date and a Description so you remember who you’re sharing it with.

  • Check Allow access to all versions? if you want to grant access to all versions, or uncheck that option and select the specific versions you want to grant access to.

  • Click Save.

  • Get the info needed to share your documentation with other users:

    • If you have selected secret link, copy the link that is generated.

    • In case of password, copy the link and password.

    • For HTTP header token, you need to pass the Authorization header in your HTTP request.

  • Give that information to the person you want to give access to.

Note

You can always revoke access in the same panel.

Users can log out by using the Log Out link in the RTD flyout menu.

Sharing methods

Password

Once the person you send the link to clicks on the link, they will see an Authorization required page asking them for the password you generated. When the user enters the password, they will have access to view your project.

Tip

This is useful when you have documentation you want users to bookmark. They can enter a URL directly and type the password when prompted.

HTTP Authorization Header

Tip

This approach is useful for automated scripts. It only allows access to a page when the header is present, so it doesn’t allow browsing docs inside of a browser.

Token Authorization

You need to send the Authorization header with the token on each HTTP request. The header has the form Authorization: Token <ACCESS_TOKEN>. For example:

curl -H "Authorization: Token 19okmz5k0i6yk17jp70jlnv91v" https://docs.example.com/en/latest/example.html
Basic Authorization

You can also use basic authorization, with the token as user and an empty password. For example:

curl --url https://docs.example.com/en/latest/example.html --user '19okmz5k0i6yk17jp70jlnv91v:'

How to manage your subscription

We want to make it easy to manage your billing information. All organization owners can manage the subscription for that organization. It’s easy to achieve a number of common tasks in this dashboard:

  • Update your credit card information.

  • Upgrade, downgrade, or cancel your plan.

  • View, download, and pay invoices.

  • Add additional tax (VAT/EIN) or contact email addresses on your invoices.

You can always find our most up to date pricing information on our pricing page.

Managing your subscription

  1. Navigate to the subscription management page.

  2. Click Manage Subscription.

This action will take you to the Stripe billing portal where you can manage your subscription.

Note

You will need to be an organization owner to view subscription information. If you do not have permission, you can ask one of your existing organization owners to make any required changes.

Cancelling your subscription

Cancelling your subscription can be done following the instructions in Managing your subscription. Your subscription will remain active for the remainder of the current billing period, and will not renew for the next billing period.

We cannot cancel subscriptions through an email request, as email is an insecure method of verifying a user’s identity. If you email us about this, we will require you to verify your identity by logging into your Read the Docs account and submitting an official support request there.

Billing information

We provide both monthly and annual subscriptions for all plans. Annual plans are given a 2 month discount compared to monthly billing. We only support credit card billing for our Basic and Advanced plans. For our Pro and Enterprise users, we support invoice-based and PO billing.

Tip

We recommend paying by credit card for all users, as this greatly simplifies the billing process.

Discounts and credits

We do not generally discount our software. We provide an ad-supported service to the community with Read the Docs Community, but we do offer one standard discount.

Non-profit and academic organizations

Our community hosting, provided for free and open source projects, is generally where we recommend most academic organizations host their projects. If you have constraints on how public your documentation can be, our commercial hosting is probably a better fit.

We offer a 50% discount on all of our commercial plans to certified academic and non-profit organizations. Please contact Site support to request this discount.

How-to guides: project setup and configuration

The following how-to guides help you solve common tasks and challenges in the setup and configuration stages.

⏩️ Connecting your Read the Docs account to your Git provider

Steps to connect an account on GitHub, Bitbucket, or GitLab with your Read the Docs account.

⏩️ Configuring a Git repository automatically

Once your account is connected to your Git provider, adding and configuring a Git repository automatically is possible for GitHub, Bitbucket, and GitLab.

⏩️ Configuring a Git repository manually

If you are connecting a Git repository from another provider (for instance Gitea or Codeberg), this guide tells you how to add and configure the webhook manually.

⏩️ Managing custom domains

Hosting your documentation using your own domain name, such as docs.example.com.

⏩️ Using custom URL redirects in documentation projects

Configuring your Read the Docs project for redirecting visitors from one location to another.

⏩️ Managing subprojects

Need several projects under the same umbrella? Start using subprojects, which is a way to host multiple projects under a “main project”.

⏩️ Using a .readthedocs.yaml file in a sub-folder

This guide shows how to configure a Read the Docs project to use a custom path for the .readthedocs.yaml build configuration. Monorepos that have multiple documentation projects in the same Git repository can benefit from this feature.

⏩️ Hiding a version

Is your version (flyout) menu overwhelmed and hard to navigate? Here’s how to make it shorter.

⏩️ Changing the versioning scheme of your project

Change how the URLs of your documentation look, and whether your project supports multiple versions or translations.

See also

Read the Docs tutorial

All you need to know to get started.

How to connect your Read the Docs account to your Git provider

In this how-to article, you are shown the steps to connect an account on GitHub, Bitbucket, or GitLab with your Read the Docs account. This is relevant if you have signed up for Read the Docs with your email, or if you have signed up using a Git provider account and want to connect additional providers.

If you are going to import repositories from GitHub, Bitbucket, or GitLab, you should connect your Read the Docs account to your Git provider first.

Note

If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, you’re all done. Your account is connected ✅️. You only need this how-to if you want to connect additional Git providers.

Adding a connection

To connect your Read the Docs account with a Git provider, go to the main login menu: <Username dropdown> > Settings > Connected Services.

From here, you’ll be able to connect to your GitHub, Bitbucket, or GitLab account. This process will ask you to authorize an integration with Read the Docs.

Screenshot of example OAuth dialog on GitHub

An example of how your OAuth dialog on GitHub may look.

After approving the request, you will be taken back to Read the Docs. You will now see the account appear in the list of connected services.

Screenshot of Read the Docs "Connected Services" page with multiple services connected

The Connected Services page shows the list of Git providers that your account is connected to.

Now your connection is ready and you will be able to import and configure Git repositories with just a few clicks.

See also

How to automatically configure a Git repository

Learn how the connected account is used to automatically configure Git repositories and Read the Docs projects, and which permissions are required from your Git provider.

Removing a connection

You may delete the connection from Read the Docs at any time. Deleting the connection makes Read the Docs forget the immediate access, but you should also revoke the Read the Docs OAuth application from your Git provider.

How to automatically configure a Git repository

In this article, we explain how connecting your Read the Docs account to GitHub, Bitbucket, or GitLab makes Read the Docs able to automatically configure your imported Git repositories and your Read the Docs projects.

✅️ Signed up with your Git provider?

If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, you’re all done. Your account is connected.

The rest of this guide helps you understand how the automatic configuration works.

⏩️️ Signed up with your email address?

If you have signed up to Read the Docs with your email address, you can add the connection to the Git provider afterwards. You can also add a connection to an additional Git provider this way.

Please follow How to connect your Read the Docs account to your Git provider in this case.

Automatic configuration

When your Read the Docs account is connected to GitHub, Bitbucket, or GitLab and you import a Git repository, the integration will automatically be configured on the Read the Docs project and on your Git repository.

Here is an outline of what happens:

  1. A list of repositories that you have access to is automatically shown on Read the Docs’ project import.

  2. You choose a Git repository from the list (see Importing your documentation).

  3. Data about the repository is now fetched using the account connection and you are asked to confirm the setup.

  4. When Read the Docs creates your project, it automatically sets up an integration with the Git provider, and creates an incoming webhook whereby Read the Docs is notified of all future changes to commits, branches and tags in the Git repository.

  5. Your project’s incoming webhook is automatically added to your Git repository’s settings using the account connection.

  6. Read the Docs also configures your project to use the Git provider’s webhook via your account connection, so your project is ready to enable Pull Request builds.

After the import, you can continue to configure the project. All settings can be modified, including the ones that were automatically created.

See also

Manually import your docs

Using a different provider? Read the Docs still supports other providers such as Gitea or GitHub Enterprise. In fact, any Git repository URL can be configured manually.

Tip

A single Read the Docs account can connect to many different Git providers. This allows you to have a single login for all your various identities.

How does the connection work?

Read the Docs uses OAuth to connect to your account at GitHub, Bitbucket, or GitLab. You are asked to grant permissions for Read the Docs to perform a number of actions on your behalf.

At the same time, we use this process for authentication (login) since we trust that GitHub, Bitbucket, or GitLab have verified your user account and email address.

By granting Read the Docs the requested permissions, we are issued a secret OAuth token from your Git provider.

Using the secret token, we can automatically configure the repository that you select in the project import. We also use the token to send back build statuses and preview URLs for pull requests.

Note

Access granted to Read the Docs can always be revoked. This is a function offered by all Git providers.

Git provider integrations

If your project is using Organizations (Read the Docs for Business) or maintainers (Read the Docs Community), then you need to be aware of who is setting up the integration for the project.

The Read the Docs user who sets up the project through the automatic import should also have admin rights to the Git repository.

A Git provider integration is active through the authentication of the user that creates the integration. If this user is removed, make sure to verify and potentially recreate all Git integrations for the project.

Permissions for connected accounts

Read the Docs does not generally ask for write permission to your repository code (with one exception, detailed below), and since we only connect to public repositories we don’t need special permissions to read them. However, we do need permission to authorize your account, so that you can log in to Read the Docs with your connected account credentials, and to set up Continuous Documentation Deployment, which allows us to build your documentation on every change to your repository.

Read the Docs requests the following permissions (more precisely, OAuth scopes) when connecting your Read the Docs account to GitHub.

Read access to your email address (user:email)

We ask for this so you can create a Read the Docs account and log in with your GitHub credentials.

Administering webhooks (admin:repo_hook)

We ask for this so we can create webhooks on your repositories when you import them into Read the Docs. This allows us to build the docs when you push new commits.

Read access to your organizations (read:org)

We ask for this so we know which organizations you have access to. This allows you to filter repositories by organization when importing repositories.

Repository status (repo:status)

Repository statuses allow Read the Docs to report the status (e.g. passed, failed, pending) of pull requests to GitHub. This is used for a feature, currently in beta testing, that builds documentation on each pull request, similar to a continuous integration service.

Note

Read the Docs for Business asks for one additional permission (repo) to allow access to private repositories and to allow us to set up SSH keys to clone your private repositories. Unfortunately, this is the permission for read/write control of the repository, but there isn’t a more granular permission that only allows setting up SSH keys for read access.

GitHub permission troubleshooting

Repositories not in your list to import.

Many organizations require approval for each OAuth application that is used, or you might have disabled it in the past for your personal account. This can happen at the personal or organization level, depending on where the project you are trying to access has permissions from.

You need to make sure that you have granted access to the Read the Docs OAuth App to your personal GitHub account. If you do not see Read the Docs in the OAuth App settings, you might need to disconnect and reconnect the GitHub service.

See also

GitHub docs on requesting access to your personal OAuth for step-by-step instructions.

How to manually configure a Git repository integration

In this guide, you will find the steps to manually integrate your Read the Docs project with any Git provider, including GitHub, Bitbucket, and GitLab.

See also

How to automatically configure a Git repository

You are now reading the guide to configuring a Git repository manually. If your Read the Docs account is connected to the Git provider, we can set up the integration automatically.

Manual integration setup

You need to configure your Git provider integration to call a webhook that alerts Read the Docs of changes. Read the Docs will sync versions and build your documentation when your Git repository is updated.

  • Go to the Settings page for your GitHub project

  • Click Webhooks > Add webhook

  • For Payload URL, use the URL of the integration on your Read the Docs project, found on the project’s Admin > Integrations page. You may need to prepend https:// to the URL.

  • For Content type, both application/json and application/x-www-form-urlencoded work

  • Fill the Secret field with the value from the integration on Read the Docs

  • Select Let me select individual events, and mark Branch or tag creation, Branch or tag deletion, Pull requests and Pushes events

  • Ensure Active is enabled; it is by default

  • Finish by clicking Add webhook. You may be prompted to enter your GitHub password to confirm your action.

You can verify if the webhook is working at the bottom of the GitHub page under Recent Deliveries. If you see a Response 200, then the webhook is correctly configured. For a 403 error, it’s likely that the Payload URL is incorrect.

Additional integration

You can configure multiple incoming webhooks.

To manually set up an integration, go to Admin > Integrations > Add integration dashboard page and select the integration type you’d like to add. After you have added the integration, you’ll see a link to information about the integration.

As an example, the URL pattern looks like this: https://readthedocs.org/api/v2/webhook/<project-name>/<id>/*.

Use this URL when setting up a new integration with your provider, as explained above.

Warning

Git repositories that are imported manually do not have the required setup to send back a commit status. If you need this integration, you have to configure the repository automatically.

See also

How to setup email notifications

Quickly enable email notifications.

How to setup build status webhooks

Learn how to add custom webhook notifications.

Using the generic API integration

For repositories that are not hosted with a supported provider, we also offer a generic API endpoint for triggering project builds. Similar to webhook integrations, this integration has a specific URL, which can be found on the project’s Integrations dashboard page (Admin > Integrations).

Token authentication is required to use the generic endpoint; you will find this token on the integration details page. The token should be passed in as a request parameter, either as form data or as part of JSON data input.

Parameters

This endpoint accepts the following arguments during an HTTP POST:

branches

The names of the branches to trigger builds for. This can either be an array of branch name strings, or just a single branch name string.

Default: latest

token

The integration token found on the project’s Integrations dashboard page (Admin > Integrations).

default_branch

This is the default branch of the repository (i.e. the one checked out when cloning the repository without arguments).

Optional

For example, the cURL command to build the dev branch, using the token 1234, would be:

curl -X POST -d "branches=dev" -d "token=1234" -d "default_branch=main" \
  https://readthedocs.org/api/v2/webhook/example-project/1/

A command like the one above could be called from a cron job or from a hook inside Git, Subversion, Mercurial, or Bazaar.
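For example, a crontab entry (with a hypothetical project name, integration id, and token; substitute the values from your own project’s Admin > Integrations page) could trigger a nightly rebuild of the latest version:

```
# Hypothetical crontab entry: trigger a build of "latest" every night at 02:00.
# Replace example-project, the integration id (1) and the token (1234) with
# the values shown on your project's integration page.
0 2 * * * curl -X POST -d "branches=latest" -d "token=1234" "https://readthedocs.org/api/v2/webhook/example-project/1/"
```

The same curl call can be dropped into a Git post-receive hook to rebuild on every push.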

Authentication

This endpoint requires authentication. If authenticating with an integration token, a check will determine if the token is valid and matches the given project. If instead an authenticated user is used to make this request, a check will be performed to ensure the authenticated user is an owner of the project.

Payload validation

All integrations are created with a secret token; this offers a way to verify that a webhook request is legitimate.

This validation is performed differently for each provider.

Troubleshooting

Debugging webhooks

If you are experiencing problems with an existing webhook, you may be able to use the integration detail page to help debug the issue. Each project integration, such as a webhook or the generic API endpoint, stores the HTTP exchange that takes place between Read the Docs and the external source. You’ll find a list of these exchanges in any of the integration detail pages.

Webhook activation failed. Make sure you have the necessary permissions

If you find this error, make sure your user has permissions over the repository. In case of GitHub, check that you have granted access to the Read the Docs OAuth App to your organization.

My project isn’t automatically building

If your project isn’t automatically building, you can check your integration on Read the Docs to see the payload sent to our servers. If there is no recent activity on your Read the Docs project webhook integration, then it’s likely that your VCS provider is not configured correctly. If there is payload information on your Read the Docs project, you might need to verify that your versions are configured to build correctly.

How to manage custom domains

This guide describes how to host your documentation using your own domain name, such as docs.example.com.

Adding a custom domain

To set up your custom domain, follow these steps:

  1. Go to the Admin tab of your project.

  2. Click on Domains.

  3. Enter the domain where you want to serve the documentation from (e.g. docs.example.com).

  4. Mark the Canonical option if you want to use this domain as your canonical domain.

  5. Click on Add.

  6. At the top of the next page you’ll find the value of the DNS record that you need to point your domain to. For Read the Docs Community this is readthedocs.io, and for Business hosting the record is in the form of <hash>.domains.readthedocs.com.

Once you have completed these steps and your new DNS entry has propagated (this usually takes a few minutes), you need to build your project’s published branches again so the Canonical URL is correct.

Note

For a subdomain like docs.example.com add a CNAME record, and for a root domain like example.com use an ANAME or ALIAS record.
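As a sketch (hypothetical zone-file entries; the exact record syntax and ALIAS/ANAME support depend on your DNS provider), the two cases above might look like this for Read the Docs Community:

```
; Hypothetical DNS entries for example.com
docs    IN  CNAME  readthedocs.io.   ; subdomain: docs.example.com
@       IN  ALIAS  readthedocs.io.   ; root domain, if your provider supports ALIAS/ANAME
```

For Business hosting, point the record at the <hash>.domains.readthedocs.com value shown in your project instead.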

We provide a validated SSL certificate for the domain, managed by Cloudflare. The SSL certificate issuance should happen within a few minutes, but might take up to one hour. See SSL certificate issue delays for more troubleshooting options.

To see if your DNS change has propagated, you can use a tool like dig to inspect your domain from your command line. As an example, our blog’s DNS record looks like this:

dig +short CNAME blog.readthedocs.com
 readthedocs.io.

Warning

We don’t support pointing subdomains or root domains to a project using A records. DNS A records require a static IP address and our IPs may change without notice.

Removing a custom domain

To remove a custom domain:

  1. Go to the Admin tab of your project.

  2. Click on Domains.

  3. Click the Remove button next to the domain.

  4. Click Confirm on the confirmation page.

Warning

Once a domain is removed, your previous documentation domain is no longer served by Read the Docs, and any request for it will return a 404 Not Found!

Strict Transport Security (HSTS) and other custom headers

By default, we do not return a Strict Transport Security (HSTS) header for user custom domains. This is a conscious decision, as it can be misconfigured in a way that is not easily reversible. For both Read the Docs Community and Read the Docs for Business, HSTS and other custom headers can be set upon request.

We always return the HSTS header with a max-age of at least one year for our own domains including *.readthedocs.io, *.readthedocs-hosted.com, readthedocs.org and readthedocs.com.
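For illustration, a typical HSTS response header with a one-year max-age looks like this (the exact header configured for a custom domain is agreed upon with support):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```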

Note

Please contact Site support if you want to add a custom header to your domain.

Multiple documentation sites as sub-folders of a domain

You may host multiple documentation repositories as sub-folders of a single domain. For example, docs.example.org/projects/repo1 and docs.example.org/projects/repo2. This is a way to boost the SEO of your website.

See also

Subprojects

Further information about hosting multiple documentation repositories, using the subproject feature.

Troubleshooting

SSL certificate issue delays

The status of your domain validation and certificate can always be seen on the details page for your domain under Admin > Domains > YOURDOMAIN.TLD (details).

Domains are usually validated and a certificate issued within minutes. However, if you set up the domain in Read the Docs without provisioning the necessary DNS changes, and then update the DNS hours or days later, validation can be delayed because validation attempts are retried with an exponential back-off.

Tip

Loading the domain details in the Read the Docs dashboard and saving the domain again will force a revalidation.

The validation process period has ended

After you add a new custom domain, you have 30 days to complete the configuration. Once that period has ended, we will stop trying to validate your domain. If you still want to complete the configuration, go to your domain and click on Save to restart the process.

Migrating from GitBook

If your custom domain was previously used in GitBook, contact GitBook support (via live chat on their website) to remove the domain name from their DNS zone so that your domain name can work with Read the Docs; otherwise it will always redirect to GitBook.

How to manage subprojects

This guide shows you how to manage subprojects, which is a way to host multiple projects under a “main project”.

See also

Subprojects

Read more about what the subproject feature can do and how it works.

Adding a subproject

In the admin dashboard for your project, select Subprojects from the left menu. From this page you can add a subproject by choosing a project from the Subproject dropdown and typing an alias in the Alias field.

Immediately after adding the subproject, it will be hosted at the URL displayed in the updated list of subprojects.

Screenshot of a subproject immediately visible in the list after creation

Note

Read the Docs Community

You need to be a maintainer of a subproject in order to choose it from your main project.

Read the Docs for Business

You need to have admin access to the subproject in order to choose it from your main project.

Editing a subproject

You can edit a subproject at any time by clicking 📝️ in the list of subprojects. On the following page, it’s possible to both change the subproject and its alias using the Subproject dropdown and the Alias field. Click Update subproject to save your changes.

The documentation served at /projects/<subproject-alias>/ will be updated immediately when you save your changes.

Deleting a subproject

You can delete a subproject at any time by clicking 📝️ in the list of subprojects. On the edit page, click Delete subproject.

Your subproject will be removed immediately and will be served from its own domain:

  • Previously it was served at: <main-project-domain>/projects/<subproject-alias>/

  • Now it will be served at <subproject-domain>/

Deleting a subproject only removes the reference from the main project. It does not completely remove that project.

How to hide a version and keep its documentation online

If you manage a project with a lot of versions, the version (flyout) menu of your docs can be easily overwhelmed and hard to navigate.

_images/flyout-overwhelmed.png

Overwhelmed flyout menu

You can deactivate the version to remove its docs, but removing its docs isn’t always an option. To not list a version in the flyout menu while keeping its docs online, you can mark it as hidden. Go to the Versions tab of your project, click on Edit and mark the Hidden option.

Users who have a link to your old version will still be able to see your docs. And new users can see all your versions (including hidden versions) on the versions tab of your project at https://readthedocs.org/projects/<your-project>/versions/

Check the docs about versions’ states for more information.

How to use a .readthedocs.yaml file in a sub-folder

This guide shows how to configure a Read the Docs project to use a custom path for the .readthedocs.yaml build configuration. Monorepos that have multiple documentation projects in the same Git repository can benefit from this feature.

By default, Read the Docs will use the .readthedocs.yaml at the top level of your Git repository. But if a Git repository contains multiple documentation projects that need different build configurations, you will need to have a .readthedocs.yaml file in multiple sub-folders.
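As a hypothetical example of such a monorepo layout (project names invented for illustration), each documentation project keeps its own configuration file inside its sub-folder:

```
monorepo/
├── lib-one/
│   └── docs/
│       ├── .readthedocs.yaml   <- used by the "lib-one" Read the Docs project
│       └── source/
└── lib-two/
    └── docs/
        ├── .readthedocs.yaml   <- used by the "lib-two" Read the Docs project
        └── source/
```

Each Read the Docs project then points at its own file path, as described in the section on setting the custom build configuration file.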

See also

sphinx-multiproject

If you are only using Sphinx projects and want to share the same build configuration, you can also use the sphinx-multiproject extension.

How to use custom environment variables

You might also be able to reuse the same configuration file across multiple projects, using only environment variables. This is possible if the configuration pattern is very similar and the documentation tool is the same.

Implementation considerations

This feature is currently project-wide. A custom build configuration file path is applied to all versions of your documentation.

Warning

Changing the configuration path will apply to all versions. Different versions of the project may not be able to build again if this path is changed.

Adding an additional project from the same repository

Once you have added the first project from the Import Wizard, it will show as if it has already been imported and cannot be imported again. In order to add another project with the same repository, you will need to use the Manual Import.

Setting the custom build configuration file

Once you have added a Git repository to a project that needs a custom configuration file path, navigate to Admin ‣ Settings and add the path to the Build configuration file field.

Screenshot of where to find the Build configuration file setting.

After pressing Save, you need to ensure that relevant versions of your documentation are built again.

Tip

Having multiple different build configuration files can be complex. We recommend setting up 1-2 projects in your monorepo and getting them to build and publish successfully before adding additional projects to the equation.

Next steps

Once you have your monorepo pattern implemented and tested and it’s ready to roll out to all your projects, you should also consider the Read the Docs project setup for these individual projects.

Having individual projects gives you the full flexibility of the Read the Docs platform to make individual setups for each project.

For each project, it’s now possible to configure:

…and much more. All settings for a Read the Docs project are available for each individual project.

See also

How to manage subprojects

More information on nesting one project inside another project. In this setup, it is still possible to use the same monorepo for each subproject.

Other tips

For a monorepo, it’s not desirable to have changes in unrelated sub-folders trigger new builds.

Therefore, you should consider setting up conditional build cancellation rules. The configuration is added in each .readthedocs.yaml, making it possible to write one set of conditional build rules per documentation project in the monorepo 💯️
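A minimal sketch of such a rule, assuming the default branch is main and this project’s documentation lives under docs/ (adjust both to match your repository):

```yaml
# Fragment of a .readthedocs.yaml that cancels the build when nothing under
# docs/ has changed. Exit code 183 tells Read the Docs to cancel the build.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  jobs:
    post_checkout:
      - |
        if git diff --quiet origin/main -- docs/; then
          exit 183
        fi
```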

How to use custom URL redirects in documentation projects

In this guide, you will learn the steps necessary to configure your Read the Docs project for redirecting visitors from one location to another.

User-defined redirects are issued by our servers when a reader visits an old URL, which means that the reader is automatically redirected to a new URL.

See also

Best practices for linking to your documentation

The need for a redirect often comes from external links to your documentation. Read more about handling links in this explanation of best practices.

Redirects

If you want to know more about our implementation of redirects, you can look up more examples in our reference before continuing with the how-to.

Adding a redirect rule

Redirects are configured in the project dashboard, go to Admin > Redirects.

Screenshot of the Redirect admin page

After clicking Add Redirect, you need to select a Redirect Type. This is where things get a bit more complicated: you need to fill in specific information according to that choice.

Choosing a Redirect Type

There are different types of redirect rules targeting different needs. For each choice in Redirect Type, you can select the option in order to experiment and preview the final generated rule.

Screenshot of the Redirect "Add Redirect" form

Here is a quick overview of the options available in Redirect Type:

Page redirect

With this option, you can specify a page in your documentation to redirect elsewhere. The rule triggers no matter the version of your documentation that the user is visiting. This rule can also redirect to another website.

Read more about this option in Page redirects

Exact redirect

With this option, you can specify a page in your documentation to redirect elsewhere. The rule is specific to the language and version of your documentation that the user is visiting. This rule can also redirect to another website.

Read more about this option in Exact redirects

Clean URL to HTML

If you choose to change the style of your URLs from clean URLs (/en/latest/tutorial/) to HTML URLs (/en/latest/tutorial.html), you can redirect all mismatches automatically.

Read more about this option in Clean/HTML URLs redirects

HTML to clean URL

Similar to the previous option, if you choose to change the style of your URLs from HTML URLs (/en/latest/tutorial.html) to clean URLs (/en/latest/tutorial/), you can redirect all mismatches automatically.

Read more about this option in Clean/HTML URLs redirects
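To make the mapping concrete, the two URL styles differ only in their suffix. Here is a small sketch of the HTML-to-clean direction (the actual rewriting is performed by Read the Docs' servers, not by your code):

```python
def html_to_clean(path: str) -> str:
    """Map an HTML-style URL to its clean-URL equivalent."""
    if path.endswith(".html"):
        # Drop the ".html" suffix and add a trailing slash.
        return path[: -len(".html")] + "/"
    return path

print(html_to_clean("/en/latest/tutorial.html"))  # /en/latest/tutorial/
```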

Note

By default, redirects are applied only if the requested page doesn’t exist (404 Not Found error). If you need a redirect to apply to pages that exist, an Apply even if the page exists option is available. This option is only offered on some plan levels; please ask support to enable it for you.

Defining the redirect rule

As mentioned before, you can pick a Redirect Type that fits your needs. When you have entered a From URL and To URL and the redirect preview looks good, you are ready to save the rule.

Saving the redirect

The redirect is not activated until you click Save, so before clicking you are free to experiment and preview the effects. Your redirect rule is added and becomes effective immediately after saving it.

After adding the rule, you can add more redirects as needed. There is no immediate upper bound on how many redirect rules a project may define.

Editing and deleting redirect rules

You can always revisit Admin > Redirects to edit or delete a rule.

When editing a rule, you can change its Redirect Type and its From URL or To URL.

Changing the order of redirects

The order of redirects is important: if multiple rules match the same URL, the first redirect in the list will be used.

You can change the order of the redirect from the Admin > Redirects page, by using the up and down arrow buttons.

New redirects are added at the start of the list (i.e. they have the highest priority).

How to change the versioning scheme of your project

In this guide, we show you how to change the versioning scheme of your project on Read the Docs.

Changing the versioning scheme of your project will change the URLs of your documentation, so any existing links to it will break. If you want to keep the old URLs working, you can create redirects.

See also

URL versioning schemes

Reference of all the versioning schemes supported by Read the Docs.

Versions

General explanation of how versioning works on Read the Docs.

Changing the versioning scheme

  1. Go to the Admin tab of your project.

  2. Click on Settings.

  3. Select the new versioning scheme in the Versioning scheme dropdown.

  4. Click on Save.

How-to guides: build process

⏩️ Set up email notifications

Email notifications can alert you when your builds fail. This is the simplest way to monitor your documentation builds, and it only requires you to switch it on.

⏩️ Set up webhook notifications

Webhook notifications can alert you when your builds fail so you can take immediate action. We show examples of how to use the webhooks on popular platforms like Slack and Discord.

⏩️ Configuring pull request builds

Have your documentation built and access a preview for every pull request.

⏩️ Using custom environment variables

Extra environment variables, for instance secrets, may be needed in the build process and can be defined from the project’s dashboard.

⏩️ Managing versions automatically

Automating your versioning on Read the Docs means you only have to handle your versioning logic in Git. Learn how to define rules to automate creation of new versions on Read the Docs, entirely using your Git repository’s version logic.

How to set up email notifications

In this brief guide, you can learn how to set up simple build notifications via email.

Read the Docs allows you to configure email addresses that will be notified of failing builds. This ensures that you are aware of failures happening in an otherwise automated process.

See also

How to set up build status webhooks

How to use webhooks to be notified about builds on popular platforms like Slack and Discord.

Pull request previews

Configure automated feedback and documentation site previews for your pull requests.

Email notifications

Follow these steps to add an email address to be notified about build failures:

  • Go to Admin ‣ Notifications in your project.

  • Fill in the Email field under the New Email Notifications heading.

  • Press Add. The email is saved and will be displayed in the list of Existing notifications.

The newly added email address will be notified once a build fails.

Note

We don’t send email notifications on builds from pull requests.

How to set up build status webhooks

In this guide, you can learn how to set up build notifications via webhooks.

When a documentation build is triggered, whether it succeeds or fails, Read the Docs can notify external APIs using webhooks. That way, you can receive build notifications in your own monitoring channels and be alerted when your builds fail so you can take immediate action.

See also

How to set up email notifications

Set up basic email notifications for build failures.

Pull request previews

Configure automated feedback and documentation site previews for your pull requests.

Build status webhooks

Take these steps to enable build notifications using a webhook:

  • Go to Admin ‣ Webhooks in your project.

  • Fill in the URL field and select which events will trigger the webhook.

  • Modify the payload or leave the default (see below).

  • Click on Save.

URL and events for a webhook

Every time one of the checked events triggers, Read the Docs will send a POST request to your webhook URL. The default payload will look like this:

{
    "event": "build:triggered",
    "name": "docs",
    "slug": "docs",
    "version": "latest",
    "commit": "2552bb609ca46865dc36401dee0b1865a0aee52d",
    "build": "15173336",
    "start_date": "2021-11-03T16:23:14",
    "build_url": "https://readthedocs.org/projects/docs/builds/15173336/",
    "docs_url": "https://docs.readthedocs.io/en/latest/"
}
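On the receiving end, the payload arrives as ordinary JSON in the request body. As a rough sketch (the function name is made up for illustration), a handler might summarize the event like this:

```python
import json

def handle_build_event(raw_body: bytes) -> str:
    """Parse a Read the Docs webhook payload and summarize the event."""
    payload = json.loads(raw_body)
    return f"{payload['event']} for {payload['slug']} ({payload['version']})"

# A trimmed-down version of the payload documented above.
body = b'{"event": "build:failed", "slug": "docs", "version": "latest"}'
print(handle_build_event(body))  # build:failed for docs (latest)
```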

When a webhook is sent, a new entry will be added to the “Recent Activity” table. By clicking on each individual entry, you will see the server response, the webhook request, and the payload.

Activity of a webhook

Note

We don’t trigger webhooks on builds from pull requests.

Custom payload examples

You can customize the payload of the webhook to suit your needs, as long as it is valid JSON. Below are a couple of examples, and in the following section you will find all the available variables.

Custom payload

{
  "attachments": [
    {
      "color": "#db3238",
      "blocks": [
        {
          "type": "section",
          "text": {
            "type": "mrkdwn",
            "text": "*Read the Docs build failed*"
          }
        },
        {
          "type": "section",
          "fields": [
            {
              "type": "mrkdwn",
              "text": "*Project*: <{{ project.url }}|{{ project.name }}>"
            },
            {
              "type": "mrkdwn",
              "text": "*Version*: {{ version.name }} ({{ build.commit }})"
            },
            {
              "type": "mrkdwn",
              "text": "*Build*: <{{ build.url }}|{{ build.id }}>"
            }
          ]
        }
      ]
    }
  ]
}

More information is available in the Slack Incoming Webhooks documentation.

Variable substitutions reference
{{ event }}

Event that triggered the webhook, one of build:triggered, build:failed, or build:passed.

{{ build.id }}

Build ID.

{{ build.commit }}

Commit corresponding to the build, if present (otherwise "").

{{ build.url }}

URL of the build, for example https://readthedocs.org/projects/docs/builds/15173336/.

{{ build.docs_url }}

URL of the documentation corresponding to the build, for example https://docs.readthedocs.io/en/latest/.

{{ build.start_date }}

Start date of the build (UTC, ISO format), for example 2021-11-03T16:23:14.

{{ organization.name }}

Organization name (Commercial only).

{{ organization.slug }}

Organization slug (Commercial only).

{{ project.slug }}

Project slug.

{{ project.name }}

Project name.

{{ project.url }}

URL of the project dashboard.

{{ version.slug }}

Version slug.

{{ version.name }}

Version name.

Validating the payload

After you add a new webhook, Read the Docs generates a secret key for it and uses the key to create an HMAC-SHA256 signature of each payload, which is included in the X-Hub-Signature header of the request.

Webhook secret

We highly recommend using this signature to verify that the webhook is coming from Read the Docs. To do so, you can add some custom code on your server, like this:

import hashlib
import hmac
import os


def verify_signature(payload, request_headers):
    """
    Verify that the signature of payload is the same as the one coming from request_headers.
    """
    digest = hmac.new(
        key=os.environ["WEBHOOK_SECRET"].encode(),
        msg=payload.encode(),
        digestmod=hashlib.sha256,
    )
    expected_signature = digest.hexdigest()

    return hmac.compare_digest(
        request_headers["X-Hub-Signature"].encode(),
        expected_signature.encode(),
    )
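To see the scheme end to end, here is a self-contained sketch that computes a signature the way described above and then verifies it with a constant-time comparison (the secret and payload are made up for illustration):

```python
import hashlib
import hmac

# Hypothetical secret and payload, for illustration only.
secret = b"my-webhook-secret"
payload = '{"event": "build:passed"}'

# The signature Read the Docs would place in the X-Hub-Signature header.
signature = hmac.new(
    key=secret, msg=payload.encode(), digestmod=hashlib.sha256
).hexdigest()

# The receiving side recomputes the digest and compares in constant time.
expected = hmac.new(
    key=secret, msg=payload.encode(), digestmod=hashlib.sha256
).hexdigest()
print(hmac.compare_digest(signature, expected))  # True
```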

Legacy webhooks

Webhooks created before the custom payloads functionality was added to Read the Docs send a payload with the following structure:

{
    "name": "Read the Docs",
    "slug": "rtd",
    "build": {
        "id": 6321373,
        "commit": "e8dd17a3f1627dd206d721e4be08ae6766fda40",
        "state": "finished",
        "success": false,
        "date": "2017-02-15 20:35:54"
    }
}

To migrate to the new webhooks and keep a similar structure, you can use this payload:

{
    "name": "{{ project.name }}",
    "slug": "{{ project.slug }}",
    "build": {
        "id": "{{ build.id }}",
        "commit": "{{ build.commit }}",
        "state": "{{ event }}",
        "date": "{{ build.start_date }}"
    }
}
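Read the Docs performs the variable substitution server-side, but you can preview the resulting structure locally. A rough sketch with made-up values:

```python
import json

# Hypothetical local preview of the substitution that Read the Docs
# performs server-side on the custom payload template above.
template = """\
{
    "name": "{{ project.name }}",
    "slug": "{{ project.slug }}",
    "build": {
        "id": "{{ build.id }}",
        "commit": "{{ build.commit }}",
        "state": "{{ event }}",
        "date": "{{ build.start_date }}"
    }
}"""

variables = {
    "{{ project.name }}": "docs",
    "{{ project.slug }}": "docs",
    "{{ build.id }}": "15173336",
    "{{ build.commit }}": "2552bb6",
    "{{ event }}": "build:passed",
    "{{ build.start_date }}": "2021-11-03T16:23:14",
}

rendered = template
for placeholder, value in variables.items():
    rendered = rendered.replace(placeholder, value)

payload = json.loads(rendered)
print(payload["build"]["state"])  # build:passed
```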

Troubleshooting webhooks and payload discovery

You can use public tools to discover, inspect, and test webhook integrations. These tools act as catch-all endpoints for HTTP requests and respond with a 200 OK HTTP status code. You can use these payloads to develop your webhook services. Exercise caution when using these tools, as you might be sending sensitive data to external services.

These public tools include:

  • Beeceptor to create a temporary HTTPS endpoint and inspect incoming payloads. It lets you respond with custom response codes or messages from a named HTTP mock server.

  • Webhook Tester to inspect and debug incoming payloads. It lets you inspect all incoming requests to its URL/bucket.

How to configure pull request builds

In this section, you can learn how to configure pull request builds.

To enable pull request builds for your project, your Read the Docs account needs to be connected to an account with a supported Git provider. See Limitations for more information.

If your account is already connected:

  1. Go to your project dashboard

  2. Go to Admin, then Settings

  3. Enable the Build pull requests for this project option

  4. Click on Save

Tip

Pull requests opened before enabling pull request builds will not trigger new builds automatically. Push a new commit to the pull request to trigger its first build.

Privacy levels

Note

Privacy levels are only supported on Business hosting.

If you didn’t import your project manually and your repository is public, the privacy level of pull request previews will be set to Public, otherwise it will be set to Private. Public pull request previews are available to anyone with the link to the preview, while private previews are only available to users with access to the Read the Docs project.

Warning

If you set the privacy level of pull request previews to Private, make sure that only trusted users can open pull requests in your repository.

Setting pull request previews to private on a public repository can allow a malicious user to access read-only APIs using the user’s session that is reading the pull request preview. Similar to GHSA-pw32-ffxw-68rh.

To change the privacy level:

  1. Go to your project dashboard

  2. Go to Admin, then Settings

  3. Select your option in Privacy level of builds from pull requests

  4. Click on Save

Privacy levels work the same way as normal versions.

Limitations

  • Only available for GitHub and GitLab currently. Bitbucket is not yet supported.

  • To enable this feature, your Read the Docs account needs to be connected to an account with your Git provider.

  • Builds from pull requests have the same memory and time limitations as regular builds.

  • Additional formats like PDF and EPUB aren’t built, to reduce build time.

  • Search queries will default to the default experience for your tool. This is a feature we plan to add, but don’t want to overwhelm our search indexes used in production.

  • The built documentation is kept for 90 days after the pull request has been closed or merged.

Troubleshooting

No new builds are started when I open a pull request

The most common cause is that your repository’s webhook is not configured to send Read the Docs pull request events. You’ll need to re-sync your project’s webhook integration to reconfigure the Read the Docs webhook.

To re-sync your project’s webhook, go to your project’s admin dashboard, Integrations, and then select the webhook integration for your provider. Follow the directions to re-sync the webhook, or create a new webhook integration.

You may also notice this behavior if your Read the Docs account is not connected to your Git provider account, or if it needs to be reconnected. You can (re)connect your account by going to your <Username dropdown>, Settings, then to Connected Services.

Build status is not being reported to your Git provider

If opening a pull request does start a new build, but the build status is not being updated with your Git provider, then your connected account may have outdated or insufficient permissions.

Make sure that you have granted access to the Read the Docs OAuth App for your personal or organization GitHub account. You can also try reconnecting your account with your Git provider.

How to use custom environment variables

If extra environment variables are needed in the build process, you can define them from the project’s dashboard.

See also

Environment variable overview

Learn more about how Read the Docs applies environment variables in your builds.

Go to your project’s Admin ‣ Environment Variables and click on Add Environment Variable. You will then see the form for adding an environment variable:

Screenshot of the form for adding an environment variable

  1. Fill in the Name field; this is the name of your variable, for instance SECRET_TOKEN or PIP_EXTRA_INDEX_URL.

  2. Fill in the Value field with the environment variable’s value, for instance a secret token or a build configuration.

  3. Check the Public option if you want to expose this environment variable to builds from pull requests.

    Warning

    If you make your environment variable public, any user that can create a pull request on your repository will be able to see the value of this environment variable. In other words, do not use this option if your environment variable is a secret.

Finally, click on Save. Your custom environment variable is immediately active for all future builds and you are ready to start using it.

Note that once you create an environment variable, you won’t be able to edit or view its value. The only way to edit an environment variable is to delete and create it again.

Keep reading for a few examples of using environment variables. ⬇️

Accessing environment variables in code

After adding an environment variable, you can read it from your build process, for example in your Sphinx’s configuration file:

conf.py
import os
import requests

# Access to our custom environment variables
username = os.environ.get("USERNAME")
password = os.environ.get("PASSWORD")

# Request a username/password protected URL
response = requests.get(
    "https://httpbin.org/basic-auth/username/password",
    auth=(username, password),
)

Accessing environment variables in build commands

You can also use any of these variables from user-defined build jobs in your project’s configuration file:

.readthedocs.yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: 3.10
  jobs:
    post_install:
      - curl -u ${USERNAME}:${PASSWORD} https://httpbin.org/basic-auth/username/password

Note

If you use ${SECRET_ENV} in a command in .readthedocs.yaml, the private value of the environment variable is not substituted in the log entries of the command; it is logged as ${SECRET_ENV} instead.

How to manage versions automatically

In this guide, we show you how to define rules to automate creation of new versions on Read the Docs, using your Git repository’s version logic. Automating your versioning on Read the Docs means you only have to handle your versioning logic in Git.

See also

Versions

Learn more about versioning of documentation in general.

Automation rules

Reference for all different rules and actions possible with automation.

Adding a new automation rule

First you need to go to the automation rule creation page:

  1. Navigate to Admin ‣ Automation Rules.

  2. Click on Add Rule and you will see the following form.

Screenshot of the "Add Rule" form

In the Automation Rule form, you need to fill in 4 fields:

  1. Enter a Description that you can refer to later. For example, entering “Create new stable version” is a good title, as it explains the intention of the rule.

  2. Choose a Match, which is the pattern you wish to detect in either a Git branch or tag.

  3. Choose a Version type. You can choose between Tag or Branch, denoting Git tag or Git branch.

  4. Finally, choose the Action to be performed when the rule matches.

Now your rule is ready and you can press Save. The rule takes effect immediately when a new version is created, but does not apply to old versions.
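The Match field holds a pattern tested against branch or tag names. As an illustration (the pattern and tag names here are hypothetical, not from the source), a rule matching semantic-version tags behaves like this:

```python
import re

# Hypothetical Match pattern for semantic-version tags like "v1.2.3".
match = re.compile(r"^v\d+\.\d+\.\d+$")

tags = ["v1.0.0", "v2.3.1", "latest", "feature/login"]
print([t for t in tags if match.match(t)])  # ['v1.0.0', 'v2.3.1']
```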

Tip

Examples of common usage

See the list of examples for rules that are commonly used.

Want to test if your rule works?

If you are using Git in order to create new versions, create a Git tag or branch that matches the rule and check if your automation action is triggered. After the experiment, you can delete both from Git and Read the Docs.

Ordering your rules

The order your rules are listed in Admin ‣ Automation Rules matters. Each action will be performed in that order, so earlier rules have a higher priority.

You can change the order using the up and down arrow buttons.

Note

New rules are added at the start of the list (i.e. they have the highest priority).

How-to guides: upgrading and maintaining projects

⏩️ Creating reproducible builds

Using Sphinx, themes and extensions means that your documentation project needs to fetch a set of dependencies, each with a special version. Over time, using an unspecified version means that documentation projects suddenly start breaking. In this guide, you learn how to secure your project against sudden breakage. This is one of our most popular guides!

⏩️ Using Conda as your Python environment

Read the Docs supports Conda and Mamba as environment management tools. In this guide, we show the practical steps of using Conda or Mamba.

How to use Conda as your Python environment

Read the Docs supports Conda as an environment management tool, along with Virtualenv. Conda support is useful for people who depend on C libraries, and need them installed when building their documentation.

This work was funded by Clinical Graphics – many thanks for their support of open source.

Activating conda

Conda support is enabled using the configuration file; see the conda option.

Our Docker images use Miniconda, a minimal conda installer. After specifying your project requirements using a conda environment.yml file, Read the Docs will create the environment (using conda env create) and add the core dependencies needed to build the documentation.

Creating the environment.yml

There are several ways of exporting a conda environment:

  • conda env export will produce a complete list of all the packages installed in the environment with their exact versions. This is the best option to ensure reproducibility, but it can create problems if done from a different operating system than the target machine, in our case Ubuntu Linux.

  • conda env export --from-history will only include packages that were explicitly requested in the environment, excluding the transitive dependencies. This is the best option to maximize cross-platform compatibility, however it may include packages that are not needed to build your docs.

  • And finally, you can also write it by hand. This allows you to pick exactly the packages needed to build your docs (which also results in faster builds) and overcomes some limitations in the conda exporting capabilities.

For example, using the second method for an existing environment:

$ conda activate rtd38
(rtd38) $ conda env export --from-history | tee environment.yml
name: rtd38
channels:
  - defaults
  - conda-forge
dependencies:
  - rasterio==1.2
  - python=3.8
  - pytorch-cpu=1.7
prefix: /home/docs/.conda/envs/rtd38

Read the Docs will override the name and prefix of the environment when creating it, so they can have any value, or not be present at all.

Tip

Bear in mind that rasterio==1.2 (double ==) will install version 1.2.0, whereas python=3.8 (single =) will fetch the latest 3.8.* version, which is 3.8.8 at the time of writing.

Effective use of channels

Conda packages are usually hosted on https://anaconda.org/, a registration-free artifact archive maintained by Anaconda Inc. Contrary to what happens with the Python Package Index, different users can potentially host the same package in the same repository, each of them using their own channel. Therefore, when installing a conda package, conda also needs to know which channels to use, and which ones take precedence.

If not specified, conda will use defaults, the channel maintained by Anaconda Inc. and subject to Anaconda Terms of Service. It contains well-tested versions of the most widely used packages. However, some packages are not available on the defaults channel, and even if they are, they might not be on their latest versions.

As an alternative, there are channels maintained by the community that have a broader selection of packages and more up-to-date versions of them, the most popular one being conda-forge.

To use the conda-forge channel when specifying your project dependencies, include it in the list of channels in environment.yml, and conda will rank them in order of appearance. To maximize compatibility, we recommend putting conda-forge above defaults:

name: rtd38
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  # Rest of the dependencies

Tip

If you want to opt out of the defaults channel completely, replace it with nodefaults in the list of channels. See the relevant conda docs for more information.

Making builds faster with mamba

One important thing to note is that, when enabling the conda-forge channel, the conda dependency solver requires a large amount of RAM and long solve times. This is a known issue due to the sheer amount of packages available in conda-forge.

As an alternative, you can instruct Read the Docs to use mamba, a drop-in replacement for conda that is much faster and reduces the memory consumption of the dependency solving process.

To do that, add a .readthedocs.yaml configuration file with these contents:

.readthedocs.yaml
version: 2

build:
  os: "ubuntu-20.04"
  tools:
    python: "mambaforge-22.9"

conda:
  environment: environment.yml

You can read more about the build.tools.python configuration in our documentation.

Mixing conda and pip packages

There are valid reasons to use pip inside a conda environment: some dependency might not be available yet as a conda package in any channel, or you might want to avoid precompiled binaries entirely. In either case, it is possible to specify the subset of packages that will be installed with pip in the environment.yml file. For example:

name: rtd38
channels:
  - conda-forge
  - defaults
dependencies:
  - rasterio==1.2
  - python=3.8
  - pytorch-cpu=1.7
  - pip>=20.1  # pip is needed as dependency
  - pip:
    - black==20.8b1

The conda developers recommend in their best practices installing as many requirements as possible with conda, then using pip for the rest, to minimize possible conflicts and interoperability issues.

Warning

Notice that conda env export --from-history does not include packages installed with pip, see this conda issue for discussion.

Compiling your project sources

If your project contains extension modules written in a compiled language (C, C++, FORTRAN) or server-side JavaScript, you might need special tools to build it from source that are not readily available on our Docker images, such as a suitable compiler, CMake, Node.js, and others.

Luckily, conda is a language-agnostic package manager, and many of these development tools are already packaged on conda-forge or more specialized channels.

For example, this conda environment contains the required dependencies to compile Slycot on Read the Docs:

name: slycot38
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - cmake
  - numpy
  - compilers

Troubleshooting

If you have problems on the environment creation phase, either because the build runs out of memory or time or because some conflicts are found, you can try some of these mitigations:

  • Reduce the number of channels in environment.yml, even leaving only conda-forge and opting out of defaults by adding nodefaults.

  • Constrain the package versions as much as possible to reduce the solution space.

  • Use mamba, an alternative package manager fully compatible with conda packages.

  • And, if all else fails, request more resources.

Custom Installs

If you are running a custom installation of Read the Docs, you will need the conda executable installed somewhere on your PATH. Because of the way conda works, we can’t safely install it as a normal dependency into the normal Python virtualenv.

Warning

Installing conda into a virtualenv will override the activate script, making it so you can’t properly activate that virtualenv anymore.

How-to guides: content, themes and SEO

⏩️ Search engine optimization (SEO) for documentation projects

This article explains how documentation can be optimized to appear in search results, ultimately increasing traffic to your docs.

⏩️ Enabling canonical URLs

In this guide, we introduce relevant settings for enabling canonical URLs in popular documentation frameworks.

⏩️ Using traffic analytics

In this guide, you can learn to use Read the Docs’ built-in traffic analytics for your documentation project. You will also learn how to optionally add your own Google Analytics account or completely disable Google Analytics on your project.

⏩️ Managing translations for Sphinx projects

This guide walks through the process needed to manage translations of your documentation. Once this work is done, you can set up your project under Read the Docs to build each language of your documentation by reading Localization and Internationalization.

⏩️ Supporting Unicode in Sphinx PDFs

Sphinx offers different LaTeX engines that have better support for Unicode characters, relevant for instance for Japanese or Chinese.

⏩️ Cross-referencing with Sphinx

When writing documentation you often need to link to other pages of your documentation, other sections of the current page, or sections from other pages.

⏩️ Linking to other projects with Intersphinx

This section shows you how to maintain references to named sections of other external Sphinx projects.

⏩️ Using Jupyter notebooks in Sphinx

There are a few extensions that allow integrating Jupyter and Sphinx, and this document will explain how to achieve some of the most commonly requested features.

⏩️ Migrating from rST to MyST

In this guide, you will find how you can start writing Markdown in your existing reStructuredText project, or migrate it completely.

⏩️ Enabling offline formats

This guide provides step-by-step instructions for enabling offline formats of your documentation.

⏩️ Using search analytics

In this guide, you can learn to use Read the Docs’ built-in search analytics for your documentation project.

⏩️ Adding custom CSS or JavaScript to Sphinx documentation

Adding additional CSS or JavaScript files to your Sphinx documentation can let you customize the look and feel of your docs or add additional functionality.

⏩️ Embedding content from your documentation

Did you know that Read the Docs has a public API that you can use to embed documentation content? There are a number of use cases for embedding content, so we’ve built our integration in a way that enables users to build on top of it.

⏩️ Removing “Edit on …” buttons from documentation

When building your documentation, Read the Docs automatically adds buttons at the top of your documentation and in the versions menu that point readers to your repository to make changes. Here’s how to remove them.

⏩️ Adding “Edit Source” links on your Sphinx theme

Using your own theme? Read the Docs injects some extra variables in the Sphinx html_context, some of which you can use to add an “edit source” link at the top of all pages.

How to do search engine optimization (SEO) for documentation projects

This article explains how documentation can be optimized to appear in search results, ultimately increasing traffic to your docs.

While you optimize your docs to make them more friendly for search engine spiders/crawlers, it’s important to keep in mind that your ultimate goal is to make your docs more discoverable for your users.

By following our best practices for SEO, you can ensure that when a user types a question into a search engine, they can get the answers from your documentation in the search results.

See also

This guide isn’t meant to be your only resource on SEO, and there’s a lot of SEO topics not covered here. For additional reading, please see the external resources section.

SEO basics

Search engines like Google and Bing crawl through the internet following links in an attempt to understand and build an index of what various pages and sites are about. This is called “crawling” or “indexing”. When a person sends a query to a search engine, the search engine evaluates this index using a number of factors and attempts to return the results most likely to answer that person’s question.

How search engines “rank” sites based on a person’s query is part of their secret sauce. While some search engines publish the basics of their algorithms (see Google’s published details on PageRank), few search engines give all of the details in an attempt to prevent users from gaming the rankings with low value content which happens to rank well.

Both Google and Bing publish a set of guidelines to help make sites easier to understand for search engines and rank better. To summarize some of the most important aspects as they apply to technical documentation, your site should:

  • Use descriptive and accurate titles in the HTML <title> tag. For Sphinx, the <title> comes from the first heading on the page.

  • Ensure your URLs are descriptive. They are displayed in search results. Sphinx uses the source filename without the file extension as the URL.

  • Make sure the words your readers would search for to find your site are actually included on your pages.

  • Avoid low content pages or pages with very little original content.

  • Avoid tactics that attempt to increase your search engine ranking without actually improving content.

  • Google specifically warns about automatically generated content, although this applies primarily to keyword stuffing and low value content. High quality documentation generated from source code (e.g. auto-generated API documentation) seems OK.

While both Google and Bing discuss site performance as an important factor in search result ranking, this guide is not going to discuss it in detail. Most technical documentation that uses Sphinx or Read the Docs generates static HTML and the performance is typically decent relative to most of the internet.

Best practices for documentation SEO

Once a crawler or spider finds your site, it will follow links and redirects in an attempt to find any and all pages on your site. While there are a few ways to guide the search engine in its crawl, for example by using a sitemap or a robots.txt file (both discussed shortly), the most important thing is making sure the spider can follow links on your site and reach all your pages.

Avoid unlinked pages ✅️

When building your documentation, you should ensure that pages aren’t unlinked, meaning that no other pages or navigation have a link to them.

Search engine crawlers will not discover pages that aren’t linked from somewhere else on your site.

Sphinx calls pages that don’t have links to them “orphans” and will throw a warning while building documentation that contains an orphan, unless the warning is silenced with the :orphan: file-wide metadata option.

We recommend failing your builds whenever Sphinx warns you, using the sphinx.fail_on_warning option in .readthedocs.yaml.
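In .readthedocs.yaml, that option looks like the following minimal sketch (the configuration path is an assumption; adjust it to where your conf.py lives):

```yaml
# .readthedocs.yaml (sketch)
version: 2

sphinx:
  configuration: docs/source/conf.py
  fail_on_warning: true
```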

Here is an example of a warning of an unreferenced page:

$ make html
sphinx-build -b html -d _build/doctrees . _build/html
Running Sphinx v1.8.5
...
checking consistency... /path/to/file.rst: WARNING: document isn't included in any toctree
done
...
build finished with problems, 1 warning.
Avoid uncrawlable content ✅️

While typically this isn’t a problem with technical documentation, try to avoid content that is “hidden” from search engines. This includes content hidden in images or videos which the crawler may not understand. For example, if you do have a video in your docs, make sure the rest of that page describes the content of the video.

When using images, make sure to set the image alt text or set a caption on figures.

For Sphinx, the image and figure directives support both alt texts and captions:

.. image:: your-image.png
    :alt: A description of this image

.. figure:: your-image.png

    A caption for this figure
Redirects ✅️

Redirects tell search engines when content has moved. For example, if this guide moved from guides/technical-docs-seo-guide.html to guides/sphinx-seo-guide.html, there will be a time period where search engines will still have the old URL in their index and will still be showing it to users. This is why it is important to update your own links within your docs as well as redirecting. If the hostname moved from docs.readthedocs.io to docs.readthedocs.org, this would be even more important!

Read the Docs supports a few different kinds of user defined redirects that should cover all the different cases such as redirecting a certain page for all project versions, or redirecting one version to another.

Canonical URLs ✅️

Anytime very similar content is hosted at multiple URLs, it is important to set a canonical URL. The canonical URL tells search engines where the original version of your documentation lives, even if you have multiple versions on the internet (for example, incomplete translations or deprecated versions).

Read the Docs supports setting the canonical URL if you are using a custom domain under Admin > Domains in the Read the Docs dashboard.

Use a robots.txt file ✅️

A robots.txt file is readable by crawlers and lives at the root of your site (eg. https://docs.readthedocs.io/robots.txt). It tells search engines which pages to crawl or not to crawl and can allow you to control how a search engine crawls your site. For example, you may want to request that search engines ignore unsupported versions of your documentation while keeping those docs online in case people need them.

By default, Read the Docs serves a robots.txt for you. To customize this file, you can create a robots.txt file that is written to your documentation root on your default branch/version.
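For example, a custom robots.txt that asks all crawlers to skip an unsupported version could look like this (the /en/1.0/ path is purely illustrative; use your own version slugs):

```
User-agent: *
Disallow: /en/1.0/
```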

See the Google’s documentation on robots.txt for additional details.

Use a sitemap.xml file ✅️

A sitemap is a file readable by crawlers that contains a list of pages and other files on your site and some metadata or relationships about them (eg. https://docs.readthedocs.io/sitemap.xml). A good sitemap provides information such as how frequently a page or file is updated and any alternate language versions of a page.

Read the Docs generates a sitemap for you that contains the last time your documentation was updated as well as links to active versions, subprojects, and translations your project has. We have a small separate guide on sitemaps.
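For reference, a single entry in a sitemap looks roughly like this (illustrative only; the URL and values are placeholders, not what Read the Docs generates for your project):

```xml
<url>
  <loc>https://docs.example.com/en/stable/</loc>
  <lastmod>2024-01-01T00:00:00Z</lastmod>
  <changefreq>weekly</changefreq>
</url>
```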

See the Google docs on building a sitemap.

Use meta tags ✅️

Using a meta description allows you to customize how your pages look in search engine result pages.

Typically search engines will use the first few sentences of a page if no meta description is provided. In Sphinx, you can customize your meta description using the following reStructuredText:

.. meta::
    :description lang=en:
        Adding additional CSS or JavaScript files to your Sphinx documentation
        can let you customize the look and feel of your docs or add additional functionality.

Google search engine results showing a customized meta description

Moz.com, an authority on search engine optimization, makes the following suggestions for meta descriptions:

  • Your meta description should have the most relevant content of the page. A searcher should know whether they’ve found the right page from the description.

  • The meta description should be between 150 and 300 characters, and it may be truncated to around 150 characters in some situations.

  • Meta descriptions are used for display but not for ranking.

Search engines don’t always use your customized meta description if they think a snippet from the page is a better description.
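As a quick, illustrative sanity check on those length suggestions, you could measure a candidate description in Python (the text below is the example description from earlier; the check itself is not part of any Sphinx API):

```python
# Check that a meta description falls within Moz's suggested
# 150-300 character range (illustrative helper)
description = (
    "Adding additional CSS or JavaScript files to your Sphinx documentation "
    "can let you customize the look and feel of your docs or add additional "
    "functionality."
)
length = len(description)
print(length, 150 <= length <= 300)
```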

Measure, iterate, & improve

Search engines (and soon, Read the Docs itself) can provide useful data that you can use to improve your docs’ ranking on search engines.

Search engine feedback

Google Search Console and Bing Webmaster Tools are tools for webmasters to get feedback about the crawling of their sites (or docs in our case). Some of the most valuable feedback these provide include:

  • Google and Bing will show pages that were previously indexed that now give a 404 (or more rarely a 500 or other status code). These will remain in the index for some time but will eventually be removed. This is a good opportunity to create a redirect.

  • These tools will show any crawl issues with your documentation.

  • Search Console and Webmaster Tools will highlight security issues found or if Google or Bing took action against your site because they believe it is spammy.

Analytics tools

A tool like Google Analytics can give you feedback about the search terms people use to find your docs, your most popular pages, and lots of other useful data.

Search term feedback can be used to help you optimize content for certain keywords or for related keywords. For Sphinx documentation, or other technical documentation that has its own search features, analytics tools can also tell you the terms people search for within your site.

Knowing your popular pages can help you prioritize where to spend your SEO efforts. Optimizing your already popular pages can have a significant impact.

External resources

Here are a few additional resources to help you learn more about SEO and rank better with search engines.

How to use traffic analytics

In this guide, you can learn to use Read the Docs’ built-in traffic analytics for your documentation project. You will also learn how to optionally add your own Google Analytics account or completely disable Google Analytics on your project.

Traffic Analytics lets you see which documents your users are reading. This allows you to understand how your documentation is being used, so you can focus on expanding and updating parts people are reading most.

To see a list of the top pages from the last month, go to the Admin tab of your project, and then click on Traffic Analytics.

Traffic analytics demo

You can also access analytics data from search results.

Note

The amount of analytics data stored for download depends on which site you’re using:

  • On the Community site, the last 90 days are stored.

  • On the Commercial one, storage ranges from 30 days to unlimited, depending on your plan (check out the pricing page).

Enabling Google Analytics on your project

Read the Docs has native support for Google Analytics. To enable it:

  • Go to Admin > Settings in your project.

  • Fill in the Analytics code field with your Google Tracking ID (for example UA-123456674-1).

Options to manage Google Analytics

Once your documentation rebuilds it will include your Analytics tracking code and start sending data. Google Analytics usually takes 60 minutes, and sometimes can take up to a day before it starts reporting data.

Note

Read the Docs takes some extra precautions with analytics to protect user privacy. As a result, users with Do Not Track enabled will not be counted for the purpose of analytics.

For more details, see the Do Not Track section of our privacy policy.

Disabling Google Analytics on your project

Google Analytics can be completely disabled on your own projects. To disable it:

  • Go to Admin > Settings in your project.

  • Check the Disable Analytics box.

Your documentation will need to be rebuilt for this change to take effect.

How to use search analytics

In this guide, you can learn to use Read the Docs’ built-in search analytics for your documentation project.

To see a list of the top queries and an overview from the last month, go to the Admin tab of your project, and then click on Search Analytics.

Search analytics demo

How the search analytics page looks.

In Top queries in the past 30 days, you see all the latest searches ordered by their popularity. The list itself is often longer than what meets the eye. Scroll down within the list to see more results.

Understanding your analytics

In Top queries in the past 30 days, you can see the most popular terms that users have searched for. Next to the search query, the number of actual results for that query is shown. The number of times the query has been used in a search is displayed as the searches number.

  • If you see a search term that doesn’t have any results, you could cover that term in existing documentation articles or create new ones. This is a great way to identify gaps in your documentation.

  • If a search term is often used but the documentation article exists, it can also indicate that it’s hard to navigate to the article.

  • Repeat the search yourself and inspect the results to see if they are relevant. You can add keywords to various pages that you want to show up for searches on that page.

In Daily search totals, you can see trends that might match special events in your project’s publicity. If you wish to analyze these numbers in detail, click Download all data to get a CSV formatted file with all available search analytics.
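If you want to analyze the downloaded CSV programmatically, a small script can tally the most frequent queries. This is a sketch: the "query" column name is an assumption, so check the header of the actual export before using it.

```python
import csv
from collections import Counter

def top_queries(path, n=10):
    """Return the n most common values of the "query" column (assumed name)."""
    with open(path, newline="") as f:
        counts = Counter(row["query"] for row in csv.DictReader(f))
    return counts.most_common(n)
```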

How to enable canonical URLs

In this guide, we introduce relevant settings for enabling canonical URLs in popular documentation frameworks.

If you need to customize the domain from which your documentation project is served, please refer to How to manage custom domains.

Sphinx

If you are using Sphinx, Read the Docs will automatically add a default value of the html_baseurl setting matching your canonical domain.

If you are using a custom html_baseurl in your conf.py, you have to ensure that the value is correct. This can be complex: you need to account for pull request builds (which are published on a separate domain), special branches, and any subprojects or translations. We recommend not including html_baseurl in your conf.py, and letting Read the Docs define it.

MkDocs

For MkDocs we do not define your canonical domain automatically, but you can use the site_url setting to set a similar value.

In your mkdocs.yml, define the following:

# Canonical URL, adjust as need with respect to your slug, language,
# default branch and if you use a custom domain.
site_url: https://<slug>.readthedocs.io/en/stable/

Note that this will define the same canonical URL for all your branches and versions. According to MkDocs, defining site_url will only define the canonical URL of a website and does not affect the base URL of generated links, CSS, or JavaScript files.

Note

Two known issues currently make it impossible to use environment variables in the MkDocs configuration. Once these issues are solved, this will be easier.

  • Support for !ENV: #8529

  • Add environment variable for canonical URL: #9781

Warning

If you change your default version or canonical domain, you’ll need to re-build all your versions in order to update their canonical URL to the new one.

How to enable offline formats

This guide provides step-by-step instructions for enabling offline formats of your documentation.

They are automatically built by Read the Docs during our default build process, as long as your configuration turns them on.

Enabling offline formats

Offline formats are enabled by the formats key in our config file. Here is a simple example:

# Build PDF & ePub
formats:
  - epub
  - pdf

Verifying offline formats

You can verify that offline formats are building in your Project dashboard > Downloads:


Deleting offline formats

The entries in the Downloads section of your project dashboard reflect the formats specified in your config file for each active version.

This means that if you wish to remove downloadable content for a given version, you can do so by removing the matching formats key from your config file.

Continue learning

See also

Other pages in our documentation are relevant to this feature, and might be a useful next step.

How to manage translations for Sphinx projects

This guide walks through the process needed to manage translations of your documentation. Once this work is done, you can set up your project under Read the Docs to build each language of your documentation by reading Localization and Internationalization.

Overview

There are many different ways to manage documentation in multiple languages by using different tools or services. All of them have their pros and cons depending on the context of your project or organization.

In this guide we will focus our efforts around two different methods: manual and using Transifex.

In both methods, we need to follow these steps to translate our documentation:

  1. Create translatable files (.pot and .po extensions) from source language

  2. Translate the text on those files from source language to target language

  3. Build the documentation in target language using the translated texts

Besides these steps, once we have published our first translated version of our documentation, we will want to keep it updated from the source language. At that time, the workflow would be:

  1. Update our translatable files from source language

  2. Translate only new and modified texts in source language into target language

  3. Build the documentation using the most up to date translations

Create translatable files

To generate these .pot files, run the following command from your docs/ directory:

sphinx-build -b gettext . _build/gettext

Tip

We recommend setting gettext_uuid to True and gettext_compact to False in your Sphinx configuration when generating .pot files.

This command will leave the generated files under _build/gettext.
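The tip above corresponds to two settings in your Sphinx conf.py (a minimal sketch):

```python
# conf.py -- translation-friendly gettext settings (sketch)
gettext_uuid = True      # add stable unique IDs to each message
gettext_compact = False  # generate one .pot file per source document
```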

Translate text from source language

Manually

We recommend using the sphinx-intl tool for this workflow.

First, you need to install it:

pip install sphinx-intl

Next, we want to create a directory per target language containing the translated files (in this example we are using Spanish (Argentina) and Portuguese (Brazil)). This can be achieved with the following command:

sphinx-intl update -p _build/gettext -l es_AR -l pt_BR

This command will create a directory structure similar to the following (with one .po file per .rst file in your documentation):

locale
├── es_AR
│   └── LC_MESSAGES
│       └── index.po
└── pt_BR
    └── LC_MESSAGES
        └── index.po

Now, you can just open those .po files with a text editor and translate them, taking care not to break the reST notation. Example:

# b8f891b8443f4a45994c9c0a6bec14c3
#: ../../index.rst:4
msgid ""
"Read the Docs hosts documentation for the open source community."
"It supports :ref:`Sphinx <sphinx>` docs written with reStructuredText."
msgstr ""
"FILL HERE BY TARGET LANGUAGE FILL HERE BY TARGET LANGUAGE FILL HERE "
"BY TARGET LANGUAGE :ref:`Sphinx <sphinx>` FILL HERE."
Using Transifex

Transifex is a platform that simplifies the manipulation of .po files and offers many useful features to make the translation process as smooth as possible. These features include a great web based UI, Translation Memory, collaborative translation, and more.

You need to create an account on their service and a new project before starting.

After that, you need to install the Transifex CLI tool, which will help you upload source files, update them, and download translated files. To do this, run this command:

curl -o- https://raw.githubusercontent.com/transifex/cli/master/install.sh | bash

After installing it, you need to configure your account. For this, you need to create an API Token for your user to access this service through the command line. This can be done under your User’s Settings.

With the token, you have two options: export it as the TX_TOKEN environment variable, or store it in ~/.transifexrc.

You can export the token as an environment variable, which makes it available for your current command line session:

# ``1/xxxx`` is the API token you generated
export TX_TOKEN=1/xxxx

To store the token permanently, you can save it in a ~/.transifexrc file. It should look like this:

[https://www.transifex.com]
rest_hostname = https://rest.api.transifex.com
token         = 1/xxxx

Now, it is time to set up the project’s Transifex configuration and map every .pot file you created in the previous step to a resource under Transifex. To achieve this, run these commands:

sphinx-intl create-txconfig
sphinx-intl update-txconfig-resources \
    --pot-dir _build/gettext \
    --locale-dir locale \
    --transifex-organization-name $TRANSIFEX_ORGANIZATION \
    --transifex-project-name $TRANSIFEX_PROJECT

This command will generate a file at .tx/config with all the information needed by the tx tool to keep your translation synchronized.

Finally, you need to upload these files to Transifex platform so translators can start their work. To do this, you can run this command:

tx push --source

Now, you can go to your Transifex’s project and check that there is one resource per .rst file of your documentation. After the source files are translated using Transifex, you can download all the translations for all the languages by running:

tx pull --all

This command will leave the .po files needed for building the documentation in the target language under locale/<lang>/LC_MESSAGES.

Warning

It’s important to always use the same method to translate the documentation and not mix them. Otherwise, it’s very easy to end up with inconsistent translations or lose already translated text.

Build the documentation in target language

Finally, to build our documentation in Spanish (Argentina), we need to tell the Sphinx builder the target language with the following command:

sphinx-build -b html -D language=es_AR . _build/html/es_AR

Note

There is no need to create a new conf.py to redefine the language for the Spanish version of this documentation, but you need to set locale_dirs to ["locale"] for Sphinx to find the translated content.

After running this command, the Spanish (Argentina) version of your documentation will be under _build/html/es_AR.
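The setting from the note above is a single line in conf.py (sketch; the path is relative to the directory containing conf.py):

```python
# conf.py -- where Sphinx looks for translated .po files (sketch)
locale_dirs = ["locale"]
```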

Summary

Update sources to be translated

Once you have made changes to your documentation, you may want to make these additions and modifications available to translators so they can update the translations:

  1. Create the .pot files:

    sphinx-build -b gettext . _build/gettext
    
  2. Push new files to Transifex

    tx push --source
    
Build documentation from up to date translation

When translators have finished their job, you may want to update the documentation by pulling the changes from Transifex:

  1. Pull up to date translations from Transifex:

    tx pull --all
    
  2. Commit and push these changes to your repo

    git add locale/
    git commit -m "Update translations"
    git push
    

The last git push will trigger a build per translation defined as part of your project under Read the Docs and make it immediately available.

How to support Unicode in Sphinx PDFs

Sphinx offers different LaTeX engines that have better support for Unicode characters, relevant for instance for Japanese or Chinese.

To build your documentation in PDF format, you need to configure Sphinx properly in your project’s conf.py. Read the Docs will execute the proper commands depending on these settings. Several settings can be defined (all those starting with latex_) to modify the behavior of Sphinx and Read the Docs and make your documentation build properly.

If your docs are not written in Chinese or Japanese and your build fails with a Unicode error, try xelatex as the latex_engine in your conf.py:

latex_engine = "xelatex"

When Read the Docs detects that your documentation is in Chinese or Japanese, it automatically adds some defaults for you.

For Chinese projects, it appends these settings to your conf.py:

latex_engine = "xelatex"
latex_use_xindy = False
latex_elements = {
    "preamble": "\\usepackage[UTF8]{ctex}\n",
}

And for Japanese projects:

latex_engine = "platex"
latex_use_xindy = False

Tip

You can always override these settings by defining them yourself in your conf.py file.

Note

xindy is currently not supported by Read the Docs, but we plan to support it in the near future.

How to use cross-references with Sphinx

When writing documentation you often need to link to other pages of your documentation, other sections of the current page, or sections from other pages.

An easy way is just to use the raw URL that Sphinx generates for each page/section. This works, but it has some disadvantages:

  • Links can change, so they are hard to maintain.

  • Links can be verbose and hard to read, so it is unclear what page/section they are linking to.

  • There is no easy way to link to specific sections like paragraphs, figures, or code blocks.

  • URL links only work for the html version of your documentation.

Instead, Sphinx offers a powerful way of linking to the different elements of the document, called cross-references. Some advantages of using them:

  • Use a human-readable name of your choice, instead of a URL.

  • Portable between formats: html, PDF, ePub.

  • Sphinx will warn you of invalid references.

  • You can cross reference more than just pages and section headers.

This page describes some best practices for cross-referencing with Sphinx using two markup options: reStructuredText and MyST (Markdown).

  • If you are not familiar with reStructuredText, check reStructuredText Primer for a quick introduction.

  • If you want to learn more about the MyST Markdown dialect, check out Syntax tokens.

Getting started

Explicit targets

Cross referencing in Sphinx uses two components, references and targets.

  • references are pointers in your documentation to other parts of your documentation.

  • targets are where the references can point to.

You can manually create a target in any location of your documentation, allowing you to reference it from other pages. These are called explicit targets.

For example, one way of creating an explicit target for a section is:

.. _My target:

Explicit targets
~~~~~~~~~~~~~~~~

Reference `My target`_.

Then the reference will be rendered as My target.

You can also add explicit targets before paragraphs (or any other part of a page).

Another example, add a target to a paragraph:

.. _target to paragraph:

An easy way is just to use the final link of the page/section.
This works, but it has :ref:`some disadvantages <target to paragraph>`:

Then the reference will be rendered as: some disadvantages.

You can also create in-line targets within an element on your page, allowing you to, for example, reference text within a paragraph.

For example, an in-line target inside a paragraph:

You can also create _`in-line targets` within an element on your page,
allowing you to, for example, reference text *within* a paragraph.

Then you can reference it using `in-line targets`_, that will be rendered as: in-line targets.

Implicit targets

You may also reference some objects by name without explicitly giving them one by using implicit targets.

When you create a section, a footnote, or a citation, Sphinx will create a target with the title as the name:

For example, to reference the previous section
you can use `Explicit targets`_.

The reference will be rendered as: Explicit targets.

Cross-referencing using roles

All targets seen so far can be referenced only from the same page. Sphinx provides some roles that allow you to reference any explicit target from any page.

Note

Since Sphinx will make all explicit targets available globally, all targets must be unique.

You can see the complete list of cross-referencing roles at Cross-referencing syntax. Next, you will explore the most common ones.

The ref role

The ref role can be used to reference any explicit target. For example:

- :ref:`my target`.
- :ref:`Target to paragraph <target to paragraph>`.
- :ref:`Target inside a paragraph <in-line targets>`.

That will be rendered as:

The ref role also allows us to reference code blocks:

.. _target to code:

.. code-block:: python

   # Add the extension
   extensions = [
      'sphinx.ext.autosectionlabel',
   ]

   # Make sure the target is unique
   autosectionlabel_prefix_document = True

We can reference it using :ref:`code <target to code>`, that will be rendered as: code.

The doc role

The doc role allows us to link to a page instead of just a section. The target name can be relative to the page where the role exists, or relative to your documentation’s root folder (in both cases, you should omit the extension).

For example, to link to a page in the same directory as this one you can use:

- :doc:`intersphinx`
- :doc:`/guides/intersphinx`
- :doc:`Custom title </guides/intersphinx>`

That will be rendered as:

Tip

Using paths relative to your documentation root is recommended, so you avoid changing the target name if the page is moved.

The numref role

The numref role is used to reference numbered elements of your documentation. For example, tables and images.

To activate numbered references, add this to your conf.py file:

# Enable numref
numfig = True

Next, ensure that an object you would like to reference has an explicit target.

For example, you can create a target for the next image:


.. _target to image:

.. figure:: /img/logo.png
   :alt: Logo
   :align: center
   :width: 240px

   Link me!

Finally, reference it using :numref:`target to image`, that will be rendered as Fig. N. Sphinx will enumerate the image automatically.

Automatically label sections

Manually adding an explicit target to each section and making sure it is unique is a big task! Fortunately, Sphinx includes an extension to help us with that problem: autosectionlabel.

To activate the autosectionlabel extension, add this to your conf.py file:

# Add the extension
extensions = [
    "sphinx.ext.autosectionlabel",
]

# Make sure the target is unique
autosectionlabel_prefix_document = True

Sphinx will create explicit targets for all your sections; the name of the target has the form {path/to/page}:{title-of-section}.

For example, you can reference the previous section using:

- :ref:`guides/cross-referencing-with-sphinx:explicit targets`.
- :ref:`Custom title <guides/cross-referencing-with-sphinx:explicit targets>`.

That will be rendered as:

Invalid targets

If you reference an invalid or undefined target, Sphinx will warn you. You can use the -W option when building your docs to fail the build if there are any invalid references. On Read the Docs, you can use the sphinx.fail_on_warning option.

Finding the reference name

When you build your documentation, Sphinx will generate an inventory of all explicit and implicit links called objects.inv. You can list all of these targets to explore what is available for you to reference.

List all targets for built documentation with:

python -m sphinx.ext.intersphinx <link>

Where <link> is either a URL or a local path that points to your inventory file (usually in _build/html/objects.inv). For example, to see all targets from the Read the Docs documentation:

python -m sphinx.ext.intersphinx https://docs.readthedocs.io/en/stable/objects.inv

Cross-referencing targets in other documentation sites

You can reference to docs outside your project too! See How to link to other documentation projects with Intersphinx.

How to use Jupyter notebooks in Sphinx

Jupyter notebooks are a popular tool to describe computational narratives that mix code, prose, images, interactive components, and more. Embedding them in your Sphinx project allows using these rich documents as documentation, which can provide a great experience for tutorials, examples, and other types of technical content. There are a few extensions that allow integrating Jupyter and Sphinx, and this document will explain how to achieve some of the most commonly requested features.

Including classic .ipynb notebooks in Sphinx documentation

There are two main extensions that add support for Jupyter notebooks as source files in Sphinx: nbsphinx and MyST-NB. They have similar intent and basic functionality: both can read notebooks in .ipynb and additional formats supported by jupytext, and are configured in a similar way (see Existing relevant extensions for more background on their differences).

First, create a Jupyter notebook using the editor of your liking (for example, JupyterLab), and save it as source/notebooks/Example 1.ipynb:

Example Jupyter notebook created on JupyterLab

Next, you will need to enable one of the extensions, as follows:

conf.py
extensions = [
    "nbsphinx",
]

Finally, you can include the notebook in any toctree. For example, add this to your root document:

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   notebooks/Example 1

The notebook will render like any other HTML page in your documentation after running make html.

Example Jupyter notebook rendered on HTML by nbsphinx

To further customize the rendering process, among other things, refer to the nbsphinx or MyST-NB documentation.

Rendering interactive widgets

Widgets are eventful python objects that have a representation in the browser and that you can use to build interactive GUIs for your notebooks. Basic widgets using ipywidgets include controls like sliders, textboxes, and buttons, and more complex widgets include interactive maps, like the ones provided by ipyleaflet.

You can embed these interactive widgets in HTML Sphinx documentation. For this to work, it’s necessary to save the widget state before generating the HTML documentation, otherwise the widget will appear as empty. Each editor has a different way of doing it:

  • The classical Jupyter Notebook interface provides a “Save Notebook Widget State” action in the “Widgets” menu, as explained in the ipywidgets documentation. You need to click it before exporting your notebook to HTML.

  • JupyterLab provides a “Save Widget State Automatically” option in the “Settings” menu. You need to leave it checked so that widget state is automatically saved.

  • In Visual Studio Code it’s not possible to save the widget state at the time of writing (June 2021).

JupyterLab option to save the interactive widget state automatically

For example, if you create a notebook with a simple IntSlider widget from ipywidgets and save the widget state, the slider will render correctly in Sphinx.

Interactive widget rendered in HTML by Sphinx

To see more elaborate examples:

Warning

Although widgets themselves can be embedded in HTML, events require a backend (kernel) to execute. Therefore, @interact, .observe, and related functionalities relying on them will not work as expected.

Note

If your widgets need some additional JavaScript libraries, you can add them using add_js_file().
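For example, a minimal setup() hook in conf.py could load such a library (the URL below is a placeholder, not a real CDN address):

```python
# conf.py
def setup(app):
    # Load an extra JavaScript library required by your custom widgets.
    # The URL is a placeholder; substitute your library's real URL.
    app.add_js_file("https://cdn.example.com/my-widget-lib.min.js")
```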

Using notebooks in other formats

For example, this is what a simple notebook looks like in MyST Markdown format:

Example 3.md
---
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.10.3
kernelspec:
  display_name: Python 3
  language: python
  name: python3
---

# Plain-text notebook formats

This is an example of a Jupyter notebook stored in MyST Markdown format.

```{code-cell} ipython3
import sys
print(sys.version)
```

```{code-cell} ipython3
from IPython.display import Image
```

```{code-cell} ipython3
Image("http://sipi.usc.edu/database/preview/misc/4.2.03.png")
```

To render this notebook in Sphinx you will need to add this to your conf.py:

conf.py
nbsphinx_custom_formats = {
    ".md": ["jupytext.reads", {"fmt": "mystnb"}],
}

Notice that the Markdown format does not store the outputs of the computation. Sphinx will automatically execute notebooks without outputs, so they will appear complete in your HTML documentation.
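With nbsphinx, this execution behavior can be tuned through the nbsphinx_execute option in conf.py:

```python
# conf.py
# "auto" (the default) executes only notebooks that have no stored outputs;
# "always" re-executes every notebook; "never" disables execution.
nbsphinx_execute = "auto"
```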

Creating galleries of examples using notebooks

nbsphinx has support for creating thumbnail galleries from a list of Jupyter notebooks. This functionality relies on Sphinx-Gallery and extends it to work with Jupyter notebooks rather than Python scripts.

To use it, you will need to install both nbsphinx and Sphinx-Gallery, and modify your conf.py as follows:

conf.py
extensions = [
    "nbsphinx",
    "sphinx_gallery.load_style",
]

After doing that, there are two ways to create the gallery:

  • From a reStructuredText source file, using the .. nbgallery:: directive, as showcased in the documentation.

  • From a Jupyter notebook, adding a "nbsphinx-gallery" tag to the metadata of a cell. Each editor has a different way of modifying the cell metadata (see figure below).

Panel to modify cell metadata in JupyterLab

For example, this reST markup would create a thumbnail gallery with generic images as thumbnails, thanks to the Sphinx-Gallery default style:

Thumbnails gallery
==================

.. nbgallery::
   notebooks/Example 1
   notebooks/Example 2

Simple thumbnail gallery created using nbsphinx

To see some examples of notebook galleries in the wild:

Background

Existing relevant extensions

In the first part of this document we have seen that nbsphinx and MyST-NB are similar. However, there are some differences between them:

  • nbsphinx uses pandoc to convert the Markdown from Jupyter notebooks to reStructuredText and then to docutils AST, whereas MyST-NB uses MyST-Parser to convert the Markdown text directly to docutils AST. Therefore, nbsphinx assumes pandoc-flavored Markdown, whereas MyST-NB uses MyST-flavored Markdown. The two flavors are mostly equivalent, but they have some differences.

  • nbsphinx executes each notebook during the parsing phase, whereas MyST-NB can execute all notebooks up front and cache them with jupyter-cache. With MyST-NB, this can result in shorter build times when only some notebooks have been modified.

  • nbsphinx provides functionality to create thumbnail galleries, whereas MyST-NB does not have such functionality at the moment (see Creating galleries of examples using notebooks for more information about galleries).

  • MyST-NB allows embedding Python objects coming from the notebook in the documentation (read their “glue” documentation for more information) and provides more sophisticated error reporting than the one nbsphinx has.

  • The visual appearance of code cells and their outputs is slightly different: nbsphinx renders the cell numbers by default, whereas MyST-NB doesn’t.

Deciding which one to use depends on your use case.

Alternative notebook formats

Jupyter notebooks in .ipynb format (as described in the nbformat documentation) are by far the most widely used for historical reasons.

However, to compensate for some of the disadvantages of the .ipynb format (like cumbersome integration with version control systems), jupytext offers other formats based on plain text rather than JSON.

As a result, there are three modes of operation:

  • Using classic .ipynb notebooks. It’s the most straightforward option, since all the tooling is prepared to work with them, and does not require additional pieces of software. It is therefore simpler to manage, since there are fewer moving parts. However, it requires some care when working with Version Control Systems (like git), by doing one of these things:

    • Clear outputs before commit. Minimizes conflicts, but might defeat the purpose of notebooks themselves, since the computation results are not stored.

    • Use tools like nbdime (open source) or ReviewNB (proprietary) to improve the review process.

    • Use a different collaboration workflow that doesn’t involve notebooks.

  • Replace .ipynb notebooks with a text-based format. These formats behave better under version control and they can also be edited with normal text editors that do not support cell-based JSON notebooks. However, text-based formats do not store the outputs of the cells, and this might not be what you want.

  • Pairing .ipynb notebooks with a text-based format, and putting the text-based file in version control, as suggested in the jupytext documentation. This solution has the best of both worlds. In some rare cases you might experience synchronization issues between both files.

These approaches are not mutually exclusive, nor do you have to use a single format for all your notebooks. For the examples in this document, we have used the MyST Markdown format.

If you are using alternative formats for Jupyter notebooks, you can include them in your Sphinx documentation using either nbsphinx or MyST-NB (see Existing relevant extensions for more information about the differences between them).

How to migrate from reStructuredText to MyST Markdown

In this guide, you will learn how to start writing Markdown in your existing reStructuredText project, or how to migrate it completely.

Sphinx is usually associated with reStructuredText, the markup language designed for the CPython project in the early ’00s. However, for quite some time Sphinx has been compatible with Markdown as well, thanks to a number of extensions.

The most powerful of these extensions is MyST-Parser, which implements a CommonMark-compliant, extensible Markdown dialect with support for the Sphinx roles and directives that make it so useful.

If, instead of migrating, you are starting a new project from scratch, have a look at 🚀 Get Started. If you are starting a project for Jupyter, you can begin with Jupyter Book, which uses MyST-Parser; see the official Jupyter Book tutorial: Create your first book.

How to write your content both in reStructuredText and MyST

It is useful to ask whether a migration is necessary in the first place. Doing bulk migrations of large projects with lots of work in progress will create conflicts for ongoing changes. On the other hand, your writers might prefer to have some files in Markdown and some others in reStructuredText, for whatever reason. Luckily, Sphinx supports reading both types of markup at the same time without problems.

To start using MyST in your existing Sphinx project, first install the myst-parser Python package and then enable it on your configuration:

conf.py
extensions = [
    # Your existing extensions
    ...,
    "myst_parser",
]

Your reStructuredText documents will keep rendering, and you will be able to add MyST documents with the .md extension that will be processed by MyST-Parser.

As an example, this guide is written in MyST while the rest of the Read the Docs documentation is written in reStructuredText.

Note

By default, MyST-Parser registers the .md suffix for MyST source files. If you want to use a different suffix, you can do so by changing your source_suffix configuration value in conf.py.
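For example, a source_suffix dictionary can map several suffixes to their parsers (the .txt mapping below is just an illustration):

```python
# conf.py
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "myst",
    ".txt": "myst",  # illustration: also parse .txt files as MyST
}
```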

How to convert existing reStructuredText documentation to MyST

To convert existing reST documents to MyST, you can use the rst2myst CLI script shipped by RST-to-MyST. The script supports converting the documents one by one, or scanning a series of directories to convert them in bulk.

After installing rst-to-myst, you can run the script as follows:

$ rst2myst convert docs/source/index.rst  # Converts index.rst to index.md
$ rst2myst convert docs/**/*.rst  # Convert every .rst file under the docs directory

This will create a .md MyST file for every .rst source file converted.
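Before converting in bulk, it can help to preview which files would be touched. A small standard-library sketch (the docs directory name is an assumption):

```python
from pathlib import Path

def rst_files(root):
    """List the .rst files under root that a bulk rst2myst run would convert."""
    return sorted(Path(root).rglob("*.rst"))

if __name__ == "__main__":
    for path in rst_files("docs"):
        # Show each source file and the .md file the conversion would produce
        print(path, "->", path.with_suffix(".md"))
```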

How to modify the behaviour of rst2myst

The rst2myst script accepts several flags to modify its behavior. All of them have sensible defaults, so you don’t have to specify them unless you want to.

These are a few options you might find useful:

-d, --dry-run

Only verify that the script would work correctly, without actually writing any files.

-R, --replace-files

Replace the .rst files with their .md equivalents, rather than writing a new .md file next to the old .rst one.

You can read the full list of options in the rst2myst documentation.

How to enable optional syntax

Some reStructuredText syntax will require you to enable certain MyST plugins. For example, to write reST definition lists, you need to add a myst_enable_extensions variable to your Sphinx configuration, as follows:

conf.py
myst_enable_extensions = [
    "deflist",
]

You can learn more about other MyST-Parser plugins in their documentation.

How to write reStructuredText syntax within MyST

There is a small chance that rst2myst does not properly understand a piece of reST syntax, either because there is a bug in the tool or because that syntax does not have a MyST equivalent yet. For example, as explained in the documentation, the sphinx.ext.autodoc extension is incompatible with MyST.

Fortunately, MyST supports an eval-rst directive that will parse the content as reStructuredText, rather than MyST. For example:

```{eval-rst}
.. note::

   Complete MyST migration.

```

will produce the following result:

Note

Complete MyST migration.

As a result, this allows you to conduct a gradual migration, at the expense of having heterogeneous syntax in your source files. In any case, the HTML output will be the same.

How to add custom CSS or JavaScript to Sphinx documentation

Adding additional CSS or JavaScript files to your Sphinx documentation can let you customize the look and feel of your docs or add additional functionality. For example, with a small snippet of CSS, your documentation could use a custom font or have a different background color.

If your custom stylesheet is _static/css/custom.css, you can add that CSS file to the documentation using the Sphinx option html_css_files:

## conf.py

# These folders are copied to the documentation's HTML output
html_static_path = ['_static']

# These paths are either relative to html_static_path
# or fully qualified paths (eg. https://...)
html_css_files = [
    'css/custom.css',
]

A similar approach can be used to add JavaScript files:

html_js_files = [
    'js/custom.js',
]
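Newer Sphinx versions also accept (filename, attributes) tuples in these lists, for example to restrict a stylesheet to print media or defer a script. The filenames below are hypothetical:

```python
# conf.py
html_css_files = [
    "css/custom.css",
    ("css/print.css", {"media": "print"}),  # loaded only for print media
]
html_js_files = [
    ("js/custom.js", {"defer": "defer"}),  # executed after the document parses
]
```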

Note

The Sphinx HTML options html_css_files and html_js_files were added in Sphinx 1.8. Unless you have a good reason to use an older version, you are strongly encouraged to upgrade. Sphinx is almost entirely backwards compatible.

Overriding or replacing a theme’s stylesheet

The above approach is preferred for adding additional stylesheets or JavaScript, but it is also possible to completely replace a Sphinx theme’s stylesheet with your own stylesheet.

If your replacement stylesheet exists at _static/css/yourtheme.css, you can replace your theme’s CSS file by setting html_style in your conf.py:

## conf.py

html_style = 'css/yourtheme.css'

If you only need to override a few styles on the theme, you can include the theme’s normal CSS using the CSS @import rule.

/** css/yourtheme.css **/

/* This line is theme specific - it includes the base theme CSS */
@import '../alabaster.css';  /* for Alabaster */
/* @import 'theme.css'; */      /* for the Read the Docs theme */

body {
    /* ... */
}

See also

You can also add custom classes to your HTML elements. See Docutils Class and this related Sphinx footnote… for more information.

How to remove “Edit on …” buttons from documentation

When building your documentation, Read the Docs automatically adds buttons at the top of your documentation and in the versions menu that point readers to your repository to make changes. For instance, if your repository is on GitHub, a button that says “Edit on GitHub” is added in the top-right corner of your documentation to make it easy for readers to author new changes.

Remove “On …” section from versions menu

This section can be removed with a custom CSS rule that hides it. Follow the instructions under How to add custom CSS or JavaScript to Sphinx documentation and put the following content into the .css file:

/* Hide "On GitHub" section from versions menu */
div.rst-versions > div.rst-other-versions > div.injected > dl:nth-child(4) {
    display: none;
}

Warning

You may need to change the number 4 in dl:nth-child(4) to a different one in case your project has more sections in the versions menu. For example, if your project has translations into different languages, you will need to use the number 5 there.

Now when you build your documentation, your documentation won’t include an edit button or links to the page source.

How-to guides: security and access

⏩️ Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket

When using an organization on Read the Docs for Business, you can configure SSO for your users to authenticate to Read the Docs.

⏩️ Single Sign-On (SSO) with Google Workspace

When using an organization on Read the Docs for Business, you can configure SSO for your users to authenticate to Read the Docs. This guide is written for Google Workspace.

⏩️ Managing Read the Docs teams

When using an organization on Read the Docs for Business, it’s possible to create different teams with custom access levels.

⏩️ Manually importing private repositories

You can grant access to private Git repositories using Read the Docs for Business using a custom process if required. Here is how you set it up.

⏩️ Using private Git submodules

If you are using private Git repositories and they also contain private Git submodules, you need to follow a few special steps.

⏩️ Installing private python packages

If you have private dependencies, you can install them from a private Git repository or a private repository manager.

How to setup Single Sign-On (SSO) with GitHub, GitLab, or Bitbucket

Note

This feature is only available on Read the Docs for Business.

This how-to guide will provide instructions on how to enable SSO with GitHub, GitLab, or Bitbucket. If you want more information on this feature, please read Single Sign-On (SSO).

Prerequisites

Organization permissions

To change your Organization’s settings, you need to be an owner of that organization.

You can validate your ownership of the Organization with these steps:

  1. Navigate to the organization management page.

  2. Look at the Owners section on the right menu.

If you’d like to modify this setting and are not an owner, you can ask an existing organization owner to take the actions listed.

User setup

Users in your organization must have their GitHub, Bitbucket, or GitLab account connected, otherwise they won’t have access to any project on Read the Docs after performing this change. You can read more about granting permissions on GitHub in their documentation.

Enabling SSO

You can enable this feature in your organization by:

  1. Navigate to the authorization setting page.

  2. Select GitHub, GitLab or Bitbucket on the Provider dropdown.

  3. Select Save.

Warning

Once you enable this option, your existing Read the Docs teams will not be used. While testing you can enable SSO and then disable it without any data loss.

Grant access to read private documentation

By granting read permissions to a user in your git repository, you are giving the user access to read the documentation of the associated project on Read the Docs. By default, private git repositories are built as private documentation websites. Having read permissions to the git repository translates to having view permissions to a private documentation website.

Grant access to administer a project

By granting admin permission to a user in the git repository, you are giving the user access to read the documentation and to be an administrator of the associated project on Read the Docs.

Grant access to import a project

When SSO with a Git provider is enabled, only owners of the Read the Docs organization can import projects.

To be able to import a project, a user must have:

  1. Admin permissions in the associated Git repository.

  2. Ownership rights to the Read the Docs organization.

Revoke access to a project

If a user should not have access to a project, you can revoke access to the git repository, and this will be automatically reflected in Read the Docs.

The same process is followed in case you need to remove admin access, but still want that user to have access to read the documentation. Instead of revoking access completely, downgrade their permissions to read only.

See also

To learn more about choosing a Single Sign-on approach, please read Single Sign-On (SSO).

How to setup Single Sign-On (SSO) with Google Workspace

Note

This feature is only available on Read the Docs for Business.

This how-to guide will provide instructions on how to enable SSO with Google Workspace. If you want more information on this feature, please read Single Sign-On (SSO).

Prerequisites

Organization permissions

To change your Organization’s settings, you need to be an owner of that organization.

You can validate your ownership of the Organization with these steps:

  1. Navigate to the organization management page.

  2. Look at the Owners section on the right menu.

If you’d like to modify this setting and are not an owner, you can ask an existing organization owner to take the actions listed.

Connect your Google account to Read the Docs

In order to enable the Google Workspace integration, you need to connect your Google account to Read the Docs.

The domain attached to your Google account will be used to match users that sign up with a Google account to your organization.

User setup

Using this setup, all users who have access to the configured Google Workspace will automatically join your organization when they sign up with their Google account. Existing users will not be automatically joined to the organization.

You can still add outside collaborators and manage their access. There are two ways to manage this access:

  • Using teams to provide access for ongoing contribution.

  • Using sharing to provide short-term access requiring a login.

Enabling SSO

By default, users that sign up with a Google account do not have any permissions over any project. However, you can define which teams users matching your company’s domain email address will auto-join when they sign up.

  1. Navigate to the authorization setting page.

  2. Select Google in the Provider drop-down.

  3. Press Save.

After enabling SSO with Google Workspace, all users with email addresses from your configured Google Workspace domain will be required to sign up using their Google account.

Warning

Existing users with email addresses from your configured Google Workspace domain will not be required to link their Google account, but they won’t be automatically joined to your organization.

Configure team for all users to join

You can mark one or many teams that users automatically join when they sign up with a matching email address. Configure this option by:

  1. Navigate to the teams management page.

  2. Click the <team name>.

  3. Click Edit team

  4. Enable Auto join users with an organization’s email address to this team.

  5. Click Save

With this enabled, all users that sign up with their employee@company.com email will automatically join this team. These teams can have either read-only or admin permissions over a set of projects.

Revoke user’s access to all the projects

By disabling the Google Workspace account with email employee@company.com, you revoke access to all the projects the linked Read the Docs user had access to, and disable login on Read the Docs completely for that user.

Warning

If the user signed up to Read the Docs prior to enabling SSO with Google Workspace on your organization, they may still have access to their account and projects if they were manually added to a team.

To completely revoke access to a user, remove them from all the teams they are part of.

Warning

If the user was already signed in to Read the Docs when their access was revoked, they may still have access to documentation pages until their session expires. This is three days for the dashboard and documentation pages.

To completely revoke access to a user, remove them from all the teams they are part of.

See also

How to manage Read the Docs teams

Additional user management options

Single Sign-On (SSO)

Information about choosing a Single Sign-on approach

How to manage Read the Docs teams

Note

This feature is only available on Read the Docs for Business.

Read the Docs uses teams within an organization to group users and provide permissions to projects. This guide will cover how to do team management, including adding and removing people from teams. You can read more about organizations and teams in our Organizations documentation.

Adding a user to a team

Adding a user to a team gives them all the permissions available to that team, whether it’s read-only or admin.

Follow these steps:

  1. Navigate to the teams management page.

  2. Click on a <team name>.

  3. Click Invite Member.

  4. Input the user’s Read the Docs username or email address.

  5. Click Add member.

Removing a user from a team

Removing a user from a team removes all permissions that team gave them.

Follow these steps:

  1. Navigate to the teams management page.

  2. Click on <team name>.

  3. Click Remove next to the user.

Grant access to users to import a project

Make the user a member of any team with admin permissions, and they will be granted access to import projects on that team.

Automating this process

You can manage teams more easily using our Single Sign-On features.

See also

Organizations

General information about the organizations feature.

How to import private repositories

Note

This feature is only available on Read the Docs for Business.

You can grant access to private Git repositories using Read the Docs for Business. Here is how you set it up.

✅️ Logged in with GitHub, Bitbucket, or GitLab?

If you signed up or logged in to Read the Docs with your GitHub, Bitbucket, or GitLab credentials, all you have to do is use the normal project import. Your Read the Docs account is connected to your Git provider and will let you choose from private Git repositories and configure them for you.

You can still use the below guide if you need to recreate SSH keys for a private repository.

⬇️ Logging in with another provider or email?

For all other Git provider setups, you will need to configure the Git repository manually.

Follow the steps below.

Importing your project manually

Git repositories aren’t automatically listed for setups that are not connected to GitHub, Bitbucket, or GitLab.

A cropped screenshot showing the first step of a manual import on Read the Docs for Business.

That is the reason why this guide is an extension of the manual Git repository setup, with the following exception:

  1. Go to https://readthedocs.com/dashboard/import/manual/

  2. In the Repository URL field, you need to provide the SSH version of your repository’s URL. It starts with git@..., for example git@github.com:readthedocs/readthedocs.org.git.

After importing your project the build will fail, because Read the Docs doesn’t have access to clone your repository. To give access, you’ll need to add your project’s public SSH key to your VCS provider.

Copy your project’s public key

The next step is to locate the public SSH key that Read the Docs has automatically generated:

A screenshot of the SSH Keys admin page.
  1. Go to the Admin ‣ SSH Keys tab of your project.

  2. Click on the fingerprint of the SSH key (it looks like 6d:ca:6d:ca:6d:ca:6d:ca).

  3. Copy the text from the Public key section.

Note

The private part of the SSH key is kept secret.

Add the public key to your project

Now that you have copied the public key generated by Read the Docs, you need to add it to your Git repository’s settings.

For GitHub, you can use deploy keys with read only access.

  1. Go to your project on GitHub

  2. Click on Settings

  3. Click on Deploy Keys

  4. Click on Add deploy key

  5. Put a descriptive title and paste the public SSH key from your Read the Docs project

  6. Click on Add key

Webhooks

Finally, since this is a manual project import:

Don’t forget to add the Read the Docs webhook!

To automatically trigger new builds on Read the Docs, you’ll need to manually add a webhook, see How to manually configure a Git repository integration.

How to use private Git submodules

Warning

This guide is for Business hosting.

If you are using private Git repositories and they also contain private Git submodules, you need to follow a few special steps.

Read the Docs uses SSH keys (with read-only permissions) in order to clone private repositories. An SSH key is automatically generated and added to your main repository, but not to your submodules. In order to give Read the Docs access to clone your submodules, you’ll need to add the public SSH key to each submodule’s repository.

Note

  • You can manage which submodules Read the Docs should clone using a configuration file. See submodules.

  • Make sure you are using SSH URLs for your submodules (git@github.com:readthedocs/readthedocs.org.git for example) in your .gitmodules file, not http URLs.
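For reference, a .gitmodules entry using an SSH URL looks like this (the submodule name and path are hypothetical):

```
[submodule "docs/theme"]
    path = docs/theme
    url = git@github.com:example-org/theme.git
```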

GitHub

Since GitHub doesn’t allow you to reuse a deploy key across different repositories, you’ll need to use machine users to give read access to several repositories using only one SSH key.

  1. Remove the SSH deploy key that was added to the main repository on GitHub

    1. Go to your project on GitHub

    2. Click on Settings

    3. Click on Deploy Keys

    4. Delete the key added by Read the Docs Commercial (readthedocs.com)

  2. Create a GitHub user and give it read-only permissions to all the necessary repositories, for example by adding the account as a collaborator on each repository.

  3. Attach the public SSH key from your project on Read the Docs to the GitHub user you just created

    1. Go to the user’s settings

    2. Click on SSH and GPG keys

    3. Click on New SSH key

    4. Put a descriptive title and paste the public SSH key from your Read the Docs project

    5. Click on Add SSH key

Azure DevOps

Azure DevOps does not have per-repository SSH keys, but keys can be added to a user instead. As long as this user has access to your main repository and all its submodules, Read the Docs can clone all the repositories with the same key.

Others

GitLab and Bitbucket allow you to reuse the same SSH key across different repositories. Since Read the Docs already added the public SSH key on your main repository, you only need to add it to each submodule repository.

How to install private python packages

Warning

This guide is for Business hosting.

Read the Docs uses pip to install your Python packages. If you have private dependencies, you can install them from a private Git repository or a private repository manager.

From a Git repository

Pip supports installing packages from a Git repository using the URI form:

git+https://gitprovider.com/user/project.git@{version}

Or if your repository is private:

git+https://{token}@gitprovider.com/user/project.git@{version}

Here version can be a tag, a branch, or a commit, and token is a personal access token with read-only permissions from your provider.

To install the package, you need to add the URI in your requirements file. Pip will automatically expand environment variables in your URI, so you don’t have to hard code the token in the URI. See using environment variables in Read the Docs for more information.
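For example, a requirements file entry using an environment variable could look like this (the repository and variable names are hypothetical; the token is expanded by pip from the environment at install time):

```
# requirements.txt
git+https://${GIT_TOKEN}@gitprovider.com/user/project.git@v1.0.0
```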

Note

You have to use the POSIX format for variable names (only uppercase letters and _ are allowed), and include a dollar sign and curly brackets around the name (${API_TOKEN}) for pip to be able to recognize them.

Below you can find how to get a personal access token from our supported providers. We will be using environment variables for the token.

GitHub

You need to create a personal access token with the repo scope. Check the GitHub documentation on how to create a personal token.

URI example:

git+https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/user/project.git@{version}

Warning

GitHub doesn’t support tokens per repository. A personal token will grant read and write access to all repositories the user has access to. You can create a machine user to give read access only to the repositories you need.

GitLab

You need to create a deploy token with the read_repository scope for the repository you want to install the package from. Check the GitLab documentation on how to create a deploy token.

URI example:

git+https://${GITLAB_TOKEN_USER}:${GITLAB_TOKEN}@gitlab.com/user/project.git@{version}

Here GITLAB_TOKEN_USER is the user from the deploy token you created, not your GitLab user.

Bitbucket

You need to create an app password with Read repositories permissions. Check the Bitbucket documentation on how to create an app password.

URI example:

git+https://${BITBUCKET_USER}:${BITBUCKET_APP_PASSWORD}@bitbucket.org/user/project.git@{version}

Here BITBUCKET_USER is your Bitbucket user.

Warning

Bitbucket doesn’t support app passwords per repository. An app password will grant read access to all repositories the user has access to.

From a repository manager other than PyPI

Pip installs your packages from PyPI by default. If you are using a repository manager like pypiserver or Nexus Repository, you need to set the --index-url option. There are two ways to set that option: with the PIP_INDEX_URL environment variable, or by putting --index-url at the top of your requirements file.
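For example, you could put the option at the top of your requirements file (the URL below is a placeholder for your repository manager's index):

```
# requirements.txt
--index-url https://repo.example.com/pypi/simple
private-package==1.0.0
```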

Note

Check your repository manager’s documentation to obtain the appropriate index URL.

How-to guides: account management

⏩️ Managing your Read the Docs for Business subscription

Solving the most common tasks for managing Read the Docs subscriptions.

How-to guides: best practices

Over the years, we have become familiar with a number of methods that work well and which we consider best practice.

⏩️ Best practices for linking to your documentation

Documentation changes over time, and links and cross-references can become challenging to manage for various reasons. Here is a set of best practices explaining and addressing these challenges.

⏩️ Deprecating content

Best practice for removing or deprecating documentation content.

⏩️ Creating reproducible builds

Every documentation project has dependencies that are required to build it. Using unspecified versions of these dependencies means that your project can start breaking. In this guide, learn how to protect your project against breaking randomly. This is one of our most popular guides!

⏩️ Search engine optimization (SEO) for documentation projects

This article explains how documentation can be optimized to appear in search results, increasing traffic to your docs.

⏩️ Hiding a version

Learn how you can keep your entire version history online without overwhelming the reader with version choices.

How to deprecate content

When you deprecate a feature from your project, you may want to deprecate its docs as well, and stop your users from reading that content.

Deprecating content may sound as easy as deleting it, but doing that will break existing links, and you don’t necessarily want to make the content inaccessible. Here you’ll find some tips on how to use Read the Docs to deprecate your content progressively and in non-harmful ways.

See also

Best practices for linking to your documentation

More information about handling URL structures, renaming and removing content.

Deprecating versions

If you have multiple versions of your project, it makes sense to have its documentation versioned as well. For example, suppose you have the following versions and want to deprecate v1:

  • https://project.readthedocs.io/en/v1/

  • https://project.readthedocs.io/en/v2/

  • https://project.readthedocs.io/en/v3/

For cases like this you can hide a version. Hidden versions won’t be listed in the versions menu of your docs, and they will be listed in a robots.txt file to stop search engines from showing results for that version.

Users can still see all versions in the dashboard of your project. To hide a version, go to your project and click on Versions > Edit, and mark the Hidden option. Check Version states for more information.

Note

If the versions of your project follow the semver convention, you can activate the Version warning option for your project. A banner with a warning, linking to the stable version, will be shown on all versions that are lower than the stable one.

Deprecating pages

You may not always want to deprecate a whole version, but only some pages. For example, if you have documentation about two APIs and you want to deprecate v1:

  • https://project.readthedocs.io/en/latest/api/v1.html

  • https://project.readthedocs.io/en/latest/api/v2.html

A simple way is to add a warning at the top of the page. This will warn users visiting that page, but it won’t stop users from landing on it from search results. You can add an entry for that page in a custom robots.txt file to stop search engines from showing those results. For example:

# robots.txt

User-agent: *

Disallow: /en/latest/api/v1.html # Deprecated API

But your users will still see search results from that page if they use the search within your docs. With Read the Docs you can set a custom rank per page. For example:

# .readthedocs.yaml

version: 2
search:
   ranking:
      api/v1.html: -1

This won’t hide results from that page, but it will give priority to results from other pages.

Tip

You can make use of Sphinx directives (like warning, deprecated, versionchanged) or MkDocs admonitions to warn your users about deprecated content.
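For instance, Sphinx’s built-in deprecated directive renders a standard admonition; the version number and wording here are illustrative:

```rst
.. deprecated:: 2.0
   The v1 API is deprecated and will be removed in a future release;
   use the v2 API instead.
```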

Moving and deleting pages

After you have deprecated a feature for a while, you may want to get rid of its documentation. That’s OK: you don’t have to maintain that content forever. But be aware that users may have links to that page saved, and it will be frustrating and confusing for them to get a 404.

To solve that problem, you can create a redirect to a page with similar content, for example redirecting users who visit the deleted v1 docs to the docs for v2 of your API. That is a page redirect from /api/v1.html to /api/v2.html. See Redirects.
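Redirects can be managed from the project dashboard, or programmatically through the API v3 redirect endpoints described later in this document. A Python sketch of that v1-to-v2 page redirect (the slug "project" and the RTD_TOKEN environment variable are placeholders for your own values):

```python
# Sketch: creating the /api/v1.html -> /api/v2.html page redirect through
# the API v3 redirect endpoint.
import json
import os
import urllib.request

payload = {
    "from_url": "/api/v1.html",
    "to_url": "/api/v2.html",
    "type": "page",
}
request = urllib.request.Request(
    "https://readthedocs.org/api/v3/projects/project/redirects/",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Token " + os.environ.get("RTD_TOKEN", "<token>"),
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would actually send it; omitted here.
```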

How-to guides: troubleshooting problems

In the following guides, you can learn how to fix common problems using Read the Docs.

⏩️ Troubleshooting build errors

A list of common errors and resolutions encountered in the build process.

⏩️ Troubleshooting slow builds

A list of the most common issues that are slowing down builds. Even if you are not facing any immediate performance issues, it’s always good to be familiar with the most common ones.

Troubleshooting build errors

Tip

Please help us keep this section updated and contribute your own error resolutions, performance improvements, etc. Send in your helpful comments or ideas 💡 to support@readthedocs.org or contribute directly by clicking Edit on GitHub in the top right corner of this page.

This guide provides some common errors and resolutions encountered in the build process.

Git errors

In the examples below, we use github.com, however error messages are similar for GitLab, Bitbucket etc.

terminal prompts disabled
fatal: could not read Username for 'https://github.com': terminal prompts disabled

Resolution: This error can be quite misleading. It usually occurs when a repository could not be found because of a typo in the repository name or because the repository has been deleted. Verify your repository URL in Admin > Settings.

This error also occurs if you have changed a public repository to private and you are using https:// in your git repository URL.

Note

To use private repositories, you need a plan on Read the Docs for Business.

error: pathspec
error: pathspec 'main' did not match any file(s) known to git

Resolution: A specified branch does not exist in the git repository. This might be because the git repository was recently created (and has no commits or branches) or because the default branch has changed name. If, for instance, the default branch on GitHub changed from master to main, you need to visit Admin > Settings to change the name of the default branch that Read the Docs expects to find when cloning the repository.

Permission denied (publickey)
git@github.com: Permission denied (publickey).

fatal: Could not read from remote repository.

Resolution: The git repository URL points to a repository, user account or organization that Read the Docs does not have credentials for. Verify that the public SSH key from your Read the Docs project is installed as a deploy key on your VCS (GitHub/GitLab/Bitbucket etc):

  1. Navigate to Admin > SSH Keys

  2. Copy the contents of the public key.

  3. Ensure that the key exists as a deploy key at your VCS provider. Here are direct links to access settings for verifying and changing deploy keys - customize the URLs for your VCS host and repository details:

    • https://github.com/<username>/<repo>/settings/keys

    • https://gitlab.com/<username>/<repo>/-/settings/repository

    • https://bitbucket.org/<username>/<repo>/admin/access-keys/

ERROR: Repository not found.
ERROR: Repository not found.
fatal: Could not read from remote repository.

Resolution: This error usually occurs on private git repositories that no longer have the public SSH key from their Read the Docs project installed as a deploy key.

  1. Navigate to Admin > SSH Keys

  2. Copy the contents of the public key.

  3. Ensure that the key exists as a deploy key at your VCS provider. Here are direct links to access settings for verifying and changing deploy keys - customize the URLs for your VCS host and repository details:

    • https://github.com/<username>/<repo>/settings/keys

    • https://gitlab.com/<username>/<repo>/-/settings/repository

    • https://bitbucket.org/<username>/<repo>/admin/access-keys/

This error is rare for public repositories. If your repository is public and you see this error, it may be because you have specified a wrong domain or forgotten a component in the path.

Troubleshooting slow builds

This page contains a list of the most common issues that are slowing down builds.

In case you are waiting a long time for your builds to finish or your builds are terminated by exceeding general resource limits, this troubleshooting guide will help you resolve some of the most common issues causing slow builds. Even if you are not facing any immediate performance issues, it’s always good to be familiar with the most common ones.

Build resources on Read the Docs are limited to make sure that users don’t overwhelm our build systems. The current build limits can be found on our Build resources reference.

Tip

Please help us keep this section updated and contribute your own error resolutions, performance improvements, etc. Send in your helpful comments or ideas 💡 to support@readthedocs.org or contribute directly by clicking Edit on GitHub in the top right corner of this page.

Reduce formats you’re building

You can change the formats of docs that you’re building through the configuration file; see formats in our Configuration file overview.

In particular, the htmlzip format takes up a decent amount of memory and time, so disabling it might solve your problem.
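As a sketch, a .readthedocs.yaml that keeps PDF and ePub but drops htmlzip could look like:

```yaml
# .readthedocs.yaml
version: 2
formats:
  - pdf
  - epub
```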

Reduce documentation build dependencies

A lot of projects reuse their requirements file for their documentation builds. If there are extra packages that you don’t need for building docs, you can create a custom requirements file just for documentation. This should speed up your documentation builds, as well as reduce your memory footprint.
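For example, a docs-only requirements file can be wired in through the configuration file (docs/requirements.txt is an assumed path):

```yaml
# .readthedocs.yaml
version: 2
python:
  install:
    - requirements: docs/requirements.txt
```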

Use mamba instead of conda

If you need conda packages to build your documentation, you can use mamba as a drop-in replacement for conda, which requires less memory and is noticeably faster.
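A sketch of a configuration using a mamba-based build; mambaforge-4.10 and docs/environment.yml are illustrative values:

```yaml
# .readthedocs.yaml
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "mambaforge-4.10"
conda:
  environment: docs/environment.yml
```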

Document Python modules API statically

If you are installing a lot of Python dependencies just to document your Python modules’ API using sphinx.ext.autodoc, you can try the sphinx-autoapi Sphinx extension instead, which should produce the same output while running statically. This could drastically reduce the memory and bandwidth required to build your docs.

Request more resources

If you still have problems building your documentation, we can increase build limits on a per-project basis. Send an email to support@readthedocs.org with a good reason why your documentation needs more resources.

Public REST API

This section of the documentation details the public REST API, which is useful for getting details of projects, builds, versions, and other resources.

API v3

The Read the Docs API uses REST. JSON is returned by all API responses, including errors, and HTTP response status codes designate success and failure.

Authentication and authorization

Requests to the Read the Docs public API can return both public and private information. All endpoints require authentication.

Token

The Authorization HTTP header can be set to Token <your-access-token> to authenticate as a user and have the same permissions as the user itself.

Note

On Read the Docs Community, you will find your access token under your profile settings.
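The same header can be set from any HTTP client. A minimal Python sketch, equivalent to curl -H "Authorization: Token <token>" (READTHEDOCS_TOKEN is an illustrative variable name, not one the platform defines):

```python
# Sketch: an authenticated GET request against the API v3 root.
import os
import urllib.request

token = os.environ.get("READTHEDOCS_TOKEN", "<your-access-token>")
request = urllib.request.Request(
    "https://readthedocs.org/api/v3/projects/",
    headers={"Authorization": "Token " + token},
)
# with urllib.request.urlopen(request) as response:
#     data = response.read()
```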

Session

Warning

Authentication via session is not enabled yet.

Session authentication is allowed on very specific endpoints, to allow hitting the API when reading documentation.

When a user tries to authenticate via session, a CSRF check is performed.

Resources

This section shows all the resources that are currently available in APIv3. There are some URL attributes that apply to all of these resources:

?fields=:

Specify which fields are going to be returned in the response.

?omit=:

Specify which fields are going to be omitted from the response.

?expand=:

Some resources allow you to expand/add extra fields in their responses (see Project details for an example).
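As an illustration, these attributes are ordinary query-string parameters, so they can be composed with any URL library (the field names below are just examples):

```python
# Sketch: composing ?expand= and ?fields= query strings for a request URL.
from urllib.parse import urlencode

base = "https://readthedocs.org/api/v3/projects/pip/"
url = base + "?" + urlencode({"expand": "active_versions", "fields": "slug,urls"})
# url == "https://readthedocs.org/api/v3/projects/pip/?expand=active_versions&fields=slug%2Curls"
```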

Tip

You can browse the full API by accessing its root URL: https://readthedocs.org/api/v3/

Note

If you are using Read the Docs for Business, take into account that you will need to replace https://readthedocs.org/ with https://readthedocs.com/ in all the URLs used in the following examples.

Projects
Projects list
GET /api/v3/projects/

Retrieve a list of all the projects for the currently logged-in user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/?limit=10&offset=10",
    "previous": null,
    "results": [{
        "id": 12345,
        "name": "Pip",
        "slug": "pip",
        "created": "2010-10-23T18:12:31+00:00",
        "modified": "2018-12-11T07:21:11+00:00",
        "language": {
            "code": "en",
            "name": "English"
        },
        "programming_language": {
            "code": "py",
            "name": "Python"
        },
        "repository": {
            "url": "https://github.com/pypa/pip",
            "type": "git"
        },
        "default_version": "stable",
        "default_branch": "master",
        "subproject_of": null,
        "translation_of": null,
        "urls": {
            "documentation": "http://pip.pypa.io/en/stable/",
            "home": "https://pip.pypa.io/"
        },
        "tags": [
            "distutils",
            "easy_install",
            "egg",
            "setuptools",
            "virtualenv"
        ],
        "users": [
            {
                "username": "dstufft"
            }
        ],
        "active_versions": {
            "stable": "{VERSION}",
            "latest": "{VERSION}",
            "19.0.2": "{VERSION}"
        },
        "_links": {
            "_self": "/api/v3/projects/pip/",
            "versions": "/api/v3/projects/pip/versions/",
            "builds": "/api/v3/projects/pip/builds/",
            "subprojects": "/api/v3/projects/pip/subprojects/",
            "superproject": "/api/v3/projects/pip/superproject/",
            "redirects": "/api/v3/projects/pip/redirects/",
            "translations": "/api/v3/projects/pip/translations/"
        }
    }]
}
Query Parameters:
  • name (string) – return projects with matching name

  • slug (string) – return projects with matching slug

  • language (string) – language code as en, es, ru, etc.

  • programming_language (string) – programming language code as py, js, etc.

The results key in the response is an array of project data, which is the same as GET /api/v3/projects/(string:project_slug)/.
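Since listings like this are paginated, a client typically follows the next link until it is null. A small offline sketch (iter_pages, fetch, and the two fake pages are illustrative, not part of the API):

```python
# Sketch: iterate over every item in a paginated listing by following
# "next" links until they run out.
def iter_pages(first_page, fetch):
    """Yield every item in "results", fetching each following page."""
    page = first_page
    while page is not None:
        yield from page["results"]
        page = fetch(page["next"]) if page["next"] else None

second = {"next": None, "results": [{"slug": "b"}]}
first = {"next": "/api/v3/projects/?limit=10&offset=10", "results": [{"slug": "a"}]}
fetch = {first["next"]: second}.get  # stand-in for an HTTP GET returning JSON
slugs = [p["slug"] for p in iter_pages(first, fetch)]
# slugs == ["a", "b"]
```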

Note

Read the Docs for Business also accepts:

Query Parameters:
  • expand (string) – additionally accepts organization and teams.

Project details
GET /api/v3/projects/(string: project_slug)/

Retrieve details of a single project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/

Example response:

{
    "id": 12345,
    "name": "Pip",
    "slug": "pip",
    "created": "2010-10-23T18:12:31+00:00",
    "modified": "2018-12-11T07:21:11+00:00",
    "language": {
        "code": "en",
        "name": "English"
    },
    "programming_language": {
        "code": "py",
        "name": "Python"
    },
    "repository": {
        "url": "https://github.com/pypa/pip",
        "type": "git"
    },
    "default_version": "stable",
    "default_branch": "master",
    "subproject_of": null,
    "translation_of": null,
    "urls": {
        "documentation": "http://pip.pypa.io/en/stable/",
        "home": "https://readthedocs.org/projects/pip/",
        "downloads": "https://readthedocs.org/projects/pip/downloads/",
        "builds": "https://readthedocs.org/projects/pip/builds/",
        "versions": "https://readthedocs.org/projects/pip/versions/"
    },
    "tags": [
        "distutils",
        "easy_install",
        "egg",
        "setuptools",
        "virtualenv"
    ],
    "users": [
        {
            "username": "dstufft"
        }
    ],
    "active_versions": {
        "stable": "{VERSION}",
        "latest": "{VERSION}",
        "19.0.2": "{VERSION}"
    },
    "privacy_level": "public",
    "external_builds_privacy_level": "public",
    "versioning_scheme": "multiple_versions_with_translations",
    "_links": {
        "_self": "/api/v3/projects/pip/",
        "versions": "/api/v3/projects/pip/versions/",
        "builds": "/api/v3/projects/pip/builds/",
        "subprojects": "/api/v3/projects/pip/subprojects/",
        "superproject": "/api/v3/projects/pip/superproject/",
        "redirects": "/api/v3/projects/pip/redirects/",
        "translations": "/api/v3/projects/pip/translations/"
    }
}
Query Parameters:
  • expand (string) – allows you to add/expand extra fields in the response. Allowed values are active_versions, active_versions.last_build and active_versions.last_build.config. Multiple fields can be passed, separated by commas.

Note

versioning_scheme can be one of the following values:

  • multiple_versions_with_translations

  • multiple_versions_without_translations

  • single_version_without_translations

Note

Read the Docs for Business also accepts:

Query Parameters:
  • expand (string) – additionally accepts organization and teams.

Note

The single_version attribute is deprecated, use versioning_scheme instead.

Project create
POST /api/v3/projects/

Import a project under the authenticated user.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "name": "Test Project",
    "repository": {
        "url": "https://github.com/readthedocs/template",
        "type": "git"
    },
    "homepage": "http://template.readthedocs.io/",
    "programming_language": "py",
    "language": "es",
    "privacy_level": "public",
    "external_builds_privacy_level": "public",
    "tags": [
        "automation",
        "sphinx"
    ]
}

Example response:

See Project details

Note

Read the Docs for Business also accepts:

Request JSON Object:
  • organization (string) – required. Slug of the organization under which the project will be imported.

  • teams (string) – optional. Slugs of the teams the project will belong to.

Note

Privacy levels are only available in Read the Docs for Business.

Project update
PATCH /api/v3/projects/(string: project_slug)/

Update an existing project.

Example request:

$ curl \
  -X PATCH \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "name": "New name for the project",
    "repository": {
        "url": "https://github.com/readthedocs/readthedocs.org",
        "type": "git"
    },
    "language": "ja",
    "programming_language": "py",
    "homepage": "https://readthedocs.org/",
    "tags": [
        "extension",
        "mkdocs"
    ],
    "default_version": "v0.27.0",
    "default_branch": "develop",
    "analytics_code": "UA000000",
    "analytics_disabled": false,
    "versioning_scheme": "multiple_versions_with_translations",
    "external_builds_enabled": true,
    "privacy_level": "public",
    "external_builds_privacy_level": "public"
}

Note

Setting tags will replace the existing tags with the new list; if omitted, the tags won’t change.

Note

Privacy levels are only available in Read the Docs for Business.

Status Codes:
Versions

Versions are different versions of the same project documentation.

The versions for a given project can be viewed in a project’s version page. For example, here is the Pip project’s version page. See Versions for more information.

Versions listing
GET /api/v3/projects/(string: project_slug)/versions/

Retrieve a list of all versions for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/versions/?limit=10&offset=10",
    "previous": null,
    "results": ["VERSION"]
}
Query Parameters:
  • active (boolean) – return only active versions

  • built (boolean) – return only built versions

  • privacy_level (string) – return versions with specific privacy level (public or private)

  • slug (string) – return versions with matching slug

  • type (string) – return versions with specific type (branch or tag)

  • verbose_name (string) – return versions with matching version name

Version detail
GET /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/

Retrieve details of a single version.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/stable/

Example response:

{
    "id": 71652437,
    "slug": "stable",
    "verbose_name": "stable",
    "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914",
    "ref": "19.0.2",
    "built": true,
    "active": true,
    "aliases": ["VERSION"],
    "hidden": false,
    "type": "tag",
    "last_build": "{BUILD}",
    "privacy_level": "public",
    "downloads": {
        "pdf": "https://pip.readthedocs.io/_/downloads/pdf/pip/stable/",
        "htmlzip": "https://pip.readthedocs.io/_/downloads/htmlzip/pip/stable/",
        "epub": "https://pip.readthedocs.io/_/downloads/epub/pip/stable/"
    },
    "urls": {
        "dashboard": {
            "edit": "https://readthedocs.org/dashboard/pip/version/stable/edit/"
        },
        "documentation": "https://pip.pypa.io/en/stable/",
        "vcs": "https://github.com/pypa/pip/tree/19.0.2"
    },
    "_links": {
        "_self": "/api/v3/projects/pip/versions/stable/",
        "builds": "/api/v3/projects/pip/versions/stable/builds/",
        "project": "/api/v3/projects/pip/"
    }
}
Response JSON Object:
  • ref (string) – the version slug that the stable version points to; null when it’s not the stable version.

  • built (boolean) – the version has at least one successful build.

Query Parameters:
  • expand (string) – allows you to add/expand extra fields in the response. Allowed values are last_build and last_build.config. Multiple fields can be passed, separated by commas.

Version update
PATCH /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/

Update a version.

When a version is deactivated, its documentation is removed, and when it’s activated, a new build is triggered.

Updates to a version also invalidate its CDN cache.

Example request:

$ curl \
  -X PATCH \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/0.23/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "active": true,
    "hidden": false,
    "privacy_level": "public"
}
Status Codes:

Note

Privacy levels are only available in Read the Docs for Business.

Builds

Builds are created by Read the Docs whenever a Project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.

Builds can be viewed in the build page for a project. For example, here is Pip’s build page. See Build process overview for more information.

Build details
GET /api/v3/projects/(str: project_slug)/builds/(int: build_id)/

Retrieve details of a single build for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/8592686/?expand=config

Example response:

{
    "id": 8592686,
    "version": "latest",
    "project": "pip",
    "created": "2018-06-19T15:15:59+00:00",
    "finished": "2018-06-19T15:16:58+00:00",
    "duration": 59,
    "state": {
        "code": "finished",
        "name": "Finished"
    },
    "success": true,
    "error": null,
    "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30",
    "config": {
        "version": "1",
        "formats": [
            "htmlzip",
            "epub",
            "pdf"
        ],
        "python": {
            "version": 3,
            "install": [
                {
                    "requirements": ".../stable/tools/docs-requirements.txt"
                }
            ]
        },
        "conda": null,
        "build": {
            "image": "readthedocs/build:latest"
        },
        "doctype": "sphinx_htmldir",
        "sphinx": {
            "builder": "sphinx_htmldir",
            "configuration": ".../stable/docs/html/conf.py",
            "fail_on_warning": false
        },
        "mkdocs": {
            "configuration": null,
            "fail_on_warning": false
        },
        "submodules": {
            "include": "all",
            "exclude": [],
            "recursive": true
        }
    },
    "_links": {
        "_self": "/api/v3/projects/pip/builds/8592686/",
        "project": "/api/v3/projects/pip/",
        "version": "/api/v3/projects/pip/versions/latest/"
    }
}
Response JSON Object:
  • created (string) – The ISO-8601 datetime when the build was created.

  • finished (string) – The ISO-8601 datetime when the build has finished.

  • duration (integer) – The length of the build in seconds.

  • state (string) – The state of the build (one of triggered, building, installing, cloning, finished or cancelled)

  • error (string) – An error message if the build was unsuccessful

Query Parameters:
  • expand (string) – allows you to add/expand extra fields in the response. The only allowed value is config.

Builds listing
GET /api/v3/projects/(str: project_slug)/builds/

Retrieve a list of all the builds for this project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/builds/

Example response:

{
    "count": 15,
    "next": "/api/v3/projects/pip/builds?limit=10&offset=10",
    "previous": null,
    "results": ["BUILD"]
}
Query Parameters:
  • commit (string) – commit hash to filter the builds returned by commit

  • running (boolean) – filter the builds that are currently building/running

Build triggering
POST /api/v3/projects/(string: project_slug)/versions/(string: version_slug)/builds/

Trigger a new build for the version_slug version of this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/versions/latest/builds/

Example response:

{
    "build": "{BUILD}",
    "project": "{PROJECT}",
    "version": "{VERSION}"
}
Status Codes:
Subprojects

Projects can be configured in a nested manner, by configuring a project as a subproject of another project. This allows documentation projects to share a search index and a namespace or custom domain, but still be maintained independently. See Subprojects for more information.

Subproject details
GET /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/

Retrieve details of a subproject relationship.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/

Example response:

{
    "alias": "subproject-alias",
    "child": ["PROJECT"],
    "_links": {
        "parent": "/api/v3/projects/pip/"
    }
}
Subprojects listing
GET /api/v3/projects/(str: project_slug)/subprojects/

Retrieve a list of all sub-projects for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/subprojects/?limit=10&offset=10",
    "previous": null,
    "results": ["SUBPROJECT RELATIONSHIP"]
}
Subproject create
POST /api/v3/projects/(str: project_slug)/subprojects/

Create a subproject relationship between two projects.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "child": "subproject-child-slug",
    "alias": "subproject-alias"
}

Note

child must be a project that you have access to. If you are using Read the Docs for Business, the project must additionally be under the same organization as the parent project.

Example response:

See Subproject details

Request JSON Object:
  • child (string) – slug of the child project in the relationship.

  • alias (string) – optional slug alias to be used in the URL (e.g. /projects/<alias>/en/latest/). If not provided, the child project’s slug is used as the alias.

Status Codes:
Subproject delete
DELETE /api/v3/projects/(str: project_slug)/subprojects/(str: alias_slug)/

Delete a subproject relationship.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/subprojects/subproject-alias/
Status Codes:
Translations

Translations are the same version of a Project in a different language. See Localization and Internationalization for more information.

Translations listing
GET /api/v3/projects/(str: project_slug)/translations/

Retrieve a list of all translations for a project.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/translations/

Example response:

{
    "count": 25,
    "next": "/api/v3/projects/pip/translations/?limit=10&offset=10",
    "previous": null,
    "results": [{
        "id": 12345,
        "name": "Pip",
        "slug": "pip",
        "created": "2010-10-23T18:12:31+00:00",
        "modified": "2018-12-11T07:21:11+00:00",
        "language": {
            "code": "en",
            "name": "English"
        },
        "programming_language": {
            "code": "py",
            "name": "Python"
        },
        "repository": {
            "url": "https://github.com/pypa/pip",
            "type": "git"
        },
        "default_version": "stable",
        "default_branch": "master",
        "subproject_of": null,
        "translation_of": null,
        "urls": {
            "documentation": "http://pip.pypa.io/en/stable/",
            "home": "https://pip.pypa.io/"
        },
        "tags": [
            "distutils",
            "easy_install",
            "egg",
            "setuptools",
            "virtualenv"
        ],
        "users": [
            {
                "username": "dstufft"
            }
        ],
        "active_versions": {
            "stable": "{VERSION}",
            "latest": "{VERSION}",
            "19.0.2": "{VERSION}"
        },
        "_links": {
            "_self": "/api/v3/projects/pip/",
            "versions": "/api/v3/projects/pip/versions/",
            "builds": "/api/v3/projects/pip/builds/",
            "subprojects": "/api/v3/projects/pip/subprojects/",
            "superproject": "/api/v3/projects/pip/superproject/",
            "redirects": "/api/v3/projects/pip/redirects/",
            "translations": "/api/v3/projects/pip/translations/"
        }
    }]
}

The results key in the response is an array of project data, which is the same as GET /api/v3/projects/(string:project_slug)/.

Redirects

Redirects allow the author to redirect an old URL of the documentation to a new one. This is useful when pages are moved around in the structure of the documentation set. See Redirects for more information.

Redirect details
GET /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Retrieve details of a single redirect for a project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/

Example response

{
    "pk": 1,
    "created": "2019-04-29T10:00:00Z",
    "modified": "2019-04-29T12:00:00Z",
    "project": "pip",
    "from_url": "/docs/",
    "to_url": "/documentation/",
    "type": "page",
    "http_status": 302,
    "description": "",
    "enabled": true,
    "force": false,
    "position": 0,
    "_links": {
        "_self": "/api/v3/projects/pip/redirects/1/",
        "project": "/api/v3/projects/pip/"
    }
}
Redirects listing
GET /api/v3/projects/(str: project_slug)/redirects/

Retrieve list of all the redirects for this project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/

Example response

{
    "count": 25,
    "next": "/api/v3/projects/pip/redirects/?limit=10&offset=10",
    "previous": null,
    "results": ["REDIRECT"]
}
Redirect create
POST /api/v3/projects/(str: project_slug)/redirects/

Create a redirect for this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "from_url": "/docs/",
    "to_url": "/documentation/",
    "type": "page",
    "position": 0,
}

Note

  • type can be one of page, exact, clean_url_to_html and html_to_clean_url.

  • Depending on the type of the redirect, some fields may not be needed:

    • page and exact types require from_url and to_url.

    • clean_url_to_html and html_to_clean_url types do not require from_url and to_url.

  • Position starts at 0 and is used to order redirects.
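The field rules above can be captured in a small helper. This is a hypothetical sketch (redirect_body is not part of any official client); it only assembles the JSON body you would POST:

```python
import json

# Hypothetical helper (not part of any official client) that assembles the
# POST body according to the rules above: "page" and "exact" redirects need
# from_url and to_url, while the clean-URL types need neither.

def redirect_body(redirect_type, from_url=None, to_url=None, position=0):
    body = {"type": redirect_type, "position": position}
    if redirect_type in ("page", "exact"):
        if from_url is None or to_url is None:
            raise ValueError("page/exact redirects require from_url and to_url")
        body["from_url"] = from_url
        body["to_url"] = to_url
    return body

print(json.dumps(redirect_body("page", "/docs/", "/documentation/")))
```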

Example response:

See Redirect details

Redirect update
PUT /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Update a redirect for this project.

Example request:

$ curl \
  -X PUT \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "from_url": "/docs/",
    "to_url": "/documentation.html",
    "type": "page"
}

Note

If the position of the redirect is changed, it will be inserted in the new position and the other redirects will be reordered.

Example response:

See Redirect details

Redirect delete
DELETE /api/v3/projects/(str: project_slug)/redirects/(int: redirect_id)/

Delete a redirect for this project.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/redirects/1/
Environment variables

Environment variables are variables that you can define for your project. They are exposed to the build process when your documentation is built. They are useful, for example, to define secrets in a safe way that your documentation build can use. Environment variables can also be made public, allowing them to be used in PR builds. See Environment variable overview.

Environment variable details
GET /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/

Retrieve details of a single environment variable for a project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/

Example response

{
    "_links": {
        "_self": "https://readthedocs.org/api/v3/projects/project/environmentvariables/1/",
        "project": "https://readthedocs.org/api/v3/projects/project/"
    },
"created": "2019-04-29T10:00:00Z",
"modified": "2019-04-29T12:00:00Z",
"pk": 1,
"project": "project",
"public": false,
"name": "ENVVAR"
}
Environment variables listing
GET /api/v3/projects/(str: project_slug)/environmentvariables/

Retrieve list of all the environment variables for this project.

Example request

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/

Example response

{
    "count": 15,
    "next": "/api/v3/projects/pip/environmentvariables/?limit=10&offset=10",
    "previous": null,
    "results": ["ENVIRONMENTVARIABLE"]
}
Environment variable create
POST /api/v3/projects/(str: project_slug)/environmentvariables/

Create an environment variable for this project.

Example request:

$ curl \
  -X POST \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/ \
  -H "Content-Type: application/json" \
  -d @body.json

The content of body.json looks like this:

{
    "name": "MYVAR",
    "value": "My secret value"
}

Example response:

See Environment Variable details

Status Codes:
  • 201 Created – Environment variable created successfully

Environment variable delete
DELETE /api/v3/projects/(str: project_slug)/environmentvariables/(int: environmentvariable_id)/

Delete an environment variable for this project.

Example request:

$ curl \
  -X DELETE \
  -H "Authorization: Token <token>" https://readthedocs.org/api/v3/projects/pip/environmentvariables/1/
Organizations

Note

The /api/v3/organizations/ endpoint is currently only available in Read the Docs for Business. We plan to bring organizations to Read the Docs Community in the near future, and will add support for this endpoint at the same time.

Organizations list
GET /api/v3/organizations/

Retrieve a list of all the organizations for the current logged in user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [
        {
            "_links": {
                "_self": "https://readthedocs.com/api/v3/organizations/pypa/",
                "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/"
            },
            "created": "2019-02-22T21:54:52.768630Z",
            "description": "",
            "disabled": false,
            "email": "pypa@psf.org",
            "modified": "2020-07-02T12:35:32.418423Z",
            "name": "Python Package Authority",
            "owners": [
                {
                    "username": "dstufft"
                }
            ],
            "slug": "pypa",
            "url": "https://github.com/pypa/"
        }
    ]
}
Organization details
GET /api/v3/organizations/(string: organization_slug)/

Retrieve details of a single organization.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/

Example response:

{
    "_links": {
        "_self": "https://readthedocs.com/api/v3/organizations/pypa/",
        "projects": "https://readthedocs.com/api/v3/organizations/pypa/projects/"
    },
    "created": "2019-02-22T21:54:52.768630Z",
    "description": "",
    "disabled": false,
    "email": "pypa@psf.com",
    "modified": "2020-07-02T12:35:32.418423Z",
    "name": "Python Package Authority",
    "owners": [
        {
            "username": "dstufft"
        }
    ],
    "slug": "pypa",
    "url": "https://github.com/pypa/"
}
Organization projects list
GET /api/v3/organizations/(string: organization_slug)/projects/

Retrieve list of projects under an organization.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.com/api/v3/organizations/pypa/projects/

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [
        {
            "_links": {
                "_self": "https://readthedocs.com/api/v3/projects/pypa-pip/",
                "builds": "https://readthedocs.com/api/v3/projects/pypa-pip/builds/",
                "environmentvariables": "https://readthedocs.com/api/v3/projects/pypa-pip/environmentvariables/",
                "redirects": "https://readthedocs.com/api/v3/projects/pypa-pip/redirects/",
                "subprojects": "https://readthedocs.com/api/v3/projects/pypa-pip/subprojects/",
                "superproject": "https://readthedocs.com/api/v3/projects/pypa-pip/superproject/",
                "translations": "https://readthedocs.com/api/v3/projects/pypa-pip/translations/",
                "versions": "https://readthedocs.com/api/v3/projects/pypa-pip/versions/"
            },
            "created": "2019-02-22T21:59:13.333614Z",
            "default_branch": "master",
            "default_version": "latest",
            "homepage": null,
            "id": 2797,
            "language": {
                "code": "en",
                "name": "English"
            },
            "modified": "2019-08-08T16:27:25.939531Z",
            "name": "pip",
            "programming_language": {
                "code": "py",
                "name": "Python"
            },
            "repository": {
                "type": "git",
                "url": "https://github.com/pypa/pip"
            },
            "slug": "pypa-pip",
            "subproject_of": null,
            "tags": [],
            "translation_of": null,
            "urls": {
                "builds": "https://readthedocs.com/projects/pypa-pip/builds/",
                "documentation": "https://pypa-pip.readthedocs-hosted.com/en/latest/",
                "home": "https://readthedocs.com/projects/pypa-pip/",
                "versions": "https://readthedocs.com/projects/pypa-pip/versions/"
            }
        }
    ]
}
Remote organizations

Remote organizations are the VCS organizations connected via GitHub, GitLab and Bitbucket.

Remote organization listing
GET /api/v3/remote/organizations/

Retrieve a list of all Remote Organizations for the authenticated user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/organizations/

Example response:

{
    "count": 20,
    "next": "api/v3/remote/organizations/?limit=10&offset=10",
    "previous": null,
    "results": [
        {
            "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4",
            "created": "2019-04-29T10:00:00Z",
            "modified": "2019-04-29T12:00:00Z",
            "name": "Organization Name",
            "pk": 1,
            "slug": "organization",
            "url": "https://github.com/organization",
            "vcs_provider": "github"
        }
    ]
}

The results key of the response is an array of remote organization data.

Query Parameters:
  • name (string) – return remote organizations whose name contains the given string

  • vcs_provider (string) – return remote organizations for a specific VCS provider (github, gitlab or bitbucket)

Remote repositories

Remote repositories are the importable repositories connected via GitHub, GitLab and Bitbucket.

Remote repository listing
GET /api/v3/remote/repositories/

Retrieve a list of all Remote Repositories for the authenticated user.

Example request:

$ curl -H "Authorization: Token <token>" https://readthedocs.org/api/v3/remote/repositories/?expand=projects,remote_organization

Example response:

{
    "count": 20,
    "next": "api/v3/remote/repositories/?expand=projects,remote_organization&limit=10&offset=10",
    "previous": null,
    "results": [
        {
            "remote_organization": {
                "avatar_url": "https://avatars.githubusercontent.com/u/12345?v=4",
                "created": "2019-04-29T10:00:00Z",
                "modified": "2019-04-29T12:00:00Z",
                "name": "Organization Name",
                "pk": 1,
                "slug": "organization",
                "url": "https://github.com/organization",
                "vcs_provider": "github"
            },
            "project": [{
                "id": 12345,
                "name": "project",
                "slug": "project",
                "created": "2010-10-23T18:12:31+00:00",
                "modified": "2018-12-11T07:21:11+00:00",
                "language": {
                    "code": "en",
                    "name": "English"
                },
                "programming_language": {
                    "code": "py",
                    "name": "Python"
                },
                "repository": {
                    "url": "https://github.com/organization/project",
                    "type": "git"
                },
                "default_version": "stable",
                "default_branch": "master",
                "subproject_of": null,
                "translation_of": null,
                "urls": {
                    "documentation": "http://project.readthedocs.io/en/stable/",
                    "home": "https://readthedocs.org/projects/project/"
                },
                "tags": [
                    "test"
                ],
                "users": [
                    {
                        "username": "dstufft"
                    }
                ],
                "_links": {
                    "_self": "/api/v3/projects/project/",
                    "versions": "/api/v3/projects/project/versions/",
                    "builds": "/api/v3/projects/project/builds/",
                    "subprojects": "/api/v3/projects/project/subprojects/",
                    "superproject": "/api/v3/projects/project/superproject/",
                    "redirects": "/api/v3/projects/project/redirects/",
                    "translations": "/api/v3/projects/project/translations/"
                }
            }],
            "avatar_url": "https://avatars3.githubusercontent.com/u/test-organization?v=4",
            "clone_url": "https://github.com/organization/project.git",
            "created": "2019-04-29T10:00:00Z",
            "description": "This is a test project.",
            "full_name": "organization/project",
            "html_url": "https://github.com/organization/project",
            "modified": "2019-04-29T12:00:00Z",
            "name": "project",
            "pk": 1,
            "ssh_url": "git@github.com:organization/project.git",
            "vcs": "git",
            "vcs_provider": "github",
            "default_branch": "master",
            "private": false,
            "admin": true
        }
    ]
}

The results key of the response is an array of remote repository data.

Query Parameters:
  • name (string) – return remote repositories whose name contains the given string

  • full_name (string) – return remote repositories whose full name contains the given string (the full name includes the username/organization the project belongs to)

  • vcs_provider (string) – return remote repositories for a specific VCS provider (github, gitlab or bitbucket)

  • organization (string) – return remote repositories for a specific remote organization (using the remote organization slug)

  • expand (string) – adds/expands some extra fields in the response. Allowed values are projects and remote_organization. Multiple fields can be passed separated by commas.
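A quick sketch of assembling a filtered listing URL from these query parameters, using Python's standard urllib (the parameter values are illustrative):

```python
from urllib.parse import urlencode

# Illustrative values; `expand` takes multiple comma-separated fields.
params = {
    "vcs_provider": "github",
    "organization": "organization",
    "expand": "projects,remote_organization",
}
url = "https://readthedocs.org/api/v3/remote/repositories/?" + urlencode(params)
print(url)
```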


Embed

GET /api/v3/embed/

Retrieve HTML-formatted content from a documentation page or section. Read How to embed content from your documentation to learn more about how to use this endpoint.

Warning

The content will be returned as is, without any sanitization or escaping. You should not include content from arbitrary projects, or projects you do not trust.

Example request:

curl https://readthedocs.org/api/v3/embed/?url=https://docs.readthedocs.io/en/latest/features.html%23read-the-docs-features

Example response:

{
    "url": "https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features",
    "fragment": "read-the-docs-features",
    "content": "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs ...",
    "external": false
}
Response JSON Object:
  • url (string) – URL of the document.

  • fragment (string) – fragmet part of the URL used to query the page.

  • content (string) – HTML content of the section.

  • external (boolean) – whether the page is hosted externally rather than on Read the Docs.

Query Parameters:
  • url (string) – full URL of the document (with optional fragment) to fetch content from.

  • doctool (string) – optional documentation tool key name used to generate the target documentation (currently, only sphinx is accepted)

  • doctoolversion (string) – optional documentation tool version used to generate the target documentation (e.g. 4.2.0).

Note

Passing ?doctool= and ?doctoolversion= may improve the response, since the endpoint will know more about the exact structure of the HTML and can make better decisions.
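Because the target URL travels inside the ?url= query parameter, its # fragment must be percent-encoded as %23, as in the example request above. A sketch using Python's standard urllib:

```python
from urllib.parse import urlencode

# The fragment (#read-the-docs-features) rides inside the url parameter,
# so urlencode percent-encodes the "#" as %23 (and ":" and "/" likewise).
target = "https://docs.readthedocs.io/en/latest/features.html#read-the-docs-features"
query = urlencode({"url": target, "doctool": "sphinx", "doctoolversion": "4.2.0"})
print("https://readthedocs.org/api/v3/embed/?" + query)
```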

Additional APIs

API v2

The Read the Docs API uses REST. All API responses return JSON, including errors, and HTTP status codes designate success and failure.

Warning

API v2 is planned to be deprecated soon, though we have not yet set a time frame. We will alert users of our plans when we do.

For now, API v2 is still used by some legacy application operations, but we highly recommend that Read the Docs users use API v3 instead.

Some improvements in API v3 are:

  • Token based authentication

  • Easier to use URLs which no longer use numerical ids

  • More common user actions are exposed through the API

  • Improved error reporting

See its full documentation at API v3.

Authentication and authorization

Requests to the Read the Docs public API are for public information only and do not require any authentication.

Resources

Projects

Projects are the main building block of Read the Docs. Projects are built when there are changes to the code and the resulting documentation is hosted and served by Read the Docs.

As an example, this documentation is part of the Docs project which has documentation at https://docs.readthedocs.io.

You can always view your Read the Docs projects in your project dashboard.

Project list
GET /api/v2/project/

Retrieve a list of all Read the Docs projects.

Example request:

curl https://readthedocs.org/api/v2/project/?slug=pip

Example response:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [PROJECTS]
}
Response JSON Object:
  • next (string) – URI for next set of Projects.

  • previous (string) – URI for previous set of Projects.

  • count (integer) – Total number of Projects.

  • results (array) – Array of Project objects.

Query Parameters:
  • slug (string) – Narrow the results by matching the exact project slug

Project details
GET /api/v2/project/(int: id)/

Retrieve details of a single project.

{
    "id": 6,
    "name": "Pip",
    "slug": "pip",
    "programming_language": "py",
    "default_version": "stable",
    "default_branch": "master",
    "repo_type": "git",
    "repo": "https://github.com/pypa/pip",
    "description": "Pip Installs Packages.",
    "language": "en",
    "documentation_type": "sphinx_htmldir",
    "canonical_url": "http://pip.pypa.io/en/stable/",
    "users": [USERS]
}
Response JSON Object:
  • id (integer) – The ID of the project

  • name (string) – The name of the project.

  • slug (string) – The project slug (used in the URL).

  • programming_language (string) – The programming language of the project (eg. “py”, “js”)

  • default_version (string) – The default version of the project (eg. “latest”, “stable”, “v3”)

  • default_branch (string) – The default version control branch

  • repo_type (string) – Version control repository of the project

  • repo (string) – The repository URL for the project

  • description (string) – An RST description of the project

  • language (string) – The language code of this project

  • documentation_type (string) – The documentation tool used to build the project (eg. “sphinx_htmldir”)

  • canonical_url (string) – The canonical URL of the default docs

  • users (array) – Array of User IDs who are maintainers of the project.

Project versions
GET /api/v2/project/(int: id)/active_versions/

Retrieve a list of active versions (eg. “latest”, “stable”, “v1.x”) for a single project.

{
    "versions": [VERSION, VERSION, ...]
}
Response JSON Object:
  • versions (array) – Version objects for the given Project

See the Version detail call for the format of the Version object.

Versions

Versions are different versions of the same project’s documentation.

The versions for a given project can be viewed in a project’s version screen. For example, here is the Pip project’s version screen.

Version list
GET /api/v2/version/

Retrieve a list of all Versions for all projects

{
    "count": 1000,
    "previous": null,
    "results": [VERSIONS],
    "next": "https://readthedocs.org/api/v2/version/?limit=10&offset=10"
}
Response JSON Object:
  • next (string) – URI for next set of Versions.

  • previous (string) – URI for previous set of Versions.

  • count (integer) – Total number of Versions.

  • results (array) – Array of Version objects.

Query Parameters:
  • project__slug (string) – Narrow to the versions for a specific Project

  • active (boolean) – Pass true or false to show only active or inactive versions. By default, the API returns all versions.

Version detail
GET /api/v2/version/(int: id)/

Retrieve details of a single version.

{
    "id": 1437428,
    "slug": "stable",
    "verbose_name": "stable",
    "built": true,
    "active": true,
    "type": "tag",
    "identifier": "3a6b3995c141c0888af6591a59240ba5db7d9914",
    "privacy_level": "public",
    "downloads": {
        "pdf": "//readthedocs.org/projects/pip/downloads/pdf/stable/",
        "htmlzip": "//readthedocs.org/projects/pip/downloads/htmlzip/stable/",
        "epub": "//readthedocs.org/projects/pip/downloads/epub/stable/"
    },
    "project": {PROJECT},
}
Response JSON Object:
  • id (integer) – The ID of the version

  • verbose_name (string) – The name of the version.

  • slug (string) – The version slug.

  • built (boolean) – Whether this version has been built

  • active (boolean) – Whether this version is still active

  • type (string) – The type of this version (typically “tag” or “branch”)

  • identifier (string) – A version control identifier for this version (eg. the commit hash of the tag)

  • downloads (array) – URLs to downloads of this version’s documentation

  • project (object) – Details of the Project for this version.

Builds

Builds are created by Read the Docs whenever a Project has its documentation built. Frequently this happens automatically via a webhook, but builds can also be triggered manually.

Builds can be viewed in the build screen for a project. For example, here is Pip’s build screen.

Build list
GET /api/v2/build/

Retrieve details of builds ordered by most recent first

Example request:

curl https://readthedocs.org/api/v2/build/?project__slug=pip

Example response:

{
    "count": 100,
    "next": null,
    "previous": null,
    "results": [BUILDS]
}
Response JSON Object:
  • next (string) – URI for next set of Builds.

  • previous (string) – URI for previous set of Builds.

  • count (integer) – Total number of Builds.

  • results (array) – Array of Build objects.

Query Parameters:
  • project__slug (string) – Narrow to builds for a specific Project

  • commit (string) – Narrow to builds for a specific commit

Build detail
GET /api/v2/build/(int: id)/

Retrieve details of a single build.

{
    "id": 7367364,
    "date": "2018-06-19T15:15:59.135894",
    "length": 59,
    "type": "html",
    "state": "finished",
    "success": true,
    "error": "",
    "commit": "6f808d743fd6f6907ad3e2e969c88a549e76db30",
    "docs_url": "http://pip.pypa.io/en/latest/",
    "project": 13,
    "project_slug": "pip",
    "version": 3681,
    "version_slug": "latest",
    "commands": [
        {
            "description": "",
            "start_time": "2018-06-19T20:16:00.951959",
            "exit_code": 0,
            "build": 7367364,
            "command": "git remote set-url origin git://github.com/pypa/pip.git",
            "run_time": 0,
            "output": "",
            "id": 42852216,
            "end_time": "2018-06-19T20:16:00.969170"
        },
        ...
    ],
    ...
}
Response JSON Object:
  • id (integer) – The ID of the build

  • date (string) – The ISO-8601 datetime of the build.

  • length (integer) – The length of the build in seconds.

  • type (string) – The type of the build (one of “html”, “pdf”, “epub”)

  • state (string) – The state of the build (one of “triggered”, “building”, “installing”, “cloning”, or “finished”)

  • success (boolean) – Whether the build was successful

  • error (string) – An error message if the build was unsuccessful

  • commit (string) – A version control identifier for this build (eg. the commit hash)

  • docs_url (string) – The canonical URL of the build docs

  • project (integer) – The ID of the project being built

  • project_slug (string) – The slug for the project being built

  • version (integer) – The ID of the version of the project being built

  • version_slug (string) – The slug for the version of the project being built

  • commands (array) – Array of commands for the build with details including output.


Some fields primarily used for UI elements in Read the Docs are omitted.

Embed
GET /api/v2/embed/

Retrieve HTML-formatted content from a documentation page or section.

Warning

The content will be returned as is, without any sanitization or escaping. You should not include content from arbitrary projects, or projects you do not trust.

Example request:

curl https://readthedocs.org/api/v2/embed/?project=docs&version=latest&doc=features&path=features.html

or

curl https://readthedocs.org/api/v2/embed/?url=https://docs.readthedocs.io/en/latest/features.html

Example response:

{
    "content": [
        "<div class=\"section\" id=\"read-the-docs-features\">\n<h1>Read the Docs..."
    ],
    "headers": [
        {
            "Read the Docs features": "#"
        },
        {
            "Automatic Documentation Deployment": "#automatic-documentation-deployment"
        },
        {
            "Custom Domains & White Labeling": "#custom-domains-white-labeling"
        },
        {
            "Versioned Documentation": "#versioned-documentation"
        },
        {
            "Downloadable Documentation": "#downloadable-documentation"
        },
        {
            "Full-Text Search": "#full-text-search"
        },
        {
            "Open Source and Customer Focused": "#open-source-and-customer-focused"
        }
    ],
    "url": "https://docs.readthedocs.io/en/latest/features",
    "meta": {
        "project": "docs",
        "version": "latest",
        "doc": "features",
        "section": "read the docs features"
    }
}
Response JSON Object:
  • content (string) – HTML content of the section.

  • headers (array) – the section’s headers in the document.

  • url (string) – URL of the document.

  • meta (object) – meta data of the requested section.

Query Parameters:
  • project (string) – Read the Docs project’s slug.

  • doc (string) – document to fetch content from.

  • version (string) – optional Read the Docs version’s slug (default: latest).

  • section (string) – optional section within the document to fetch.

  • path (string) – optional full path to the document including extension.

  • url (string) – full URL of the document (and section) to fetch content from.

Note

You can call this endpoint by sending at least the project and doc parameters, or the url parameter on its own.
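That calling convention can be sketched as a small validation helper (hypothetical, not part of any official client):

```python
# Hypothetical check for the two accepted calling conventions:
# either ?url=... on its own, or at least ?project=...&doc=... together.

def embed_params_valid(params):
    return "url" in params or ("project" in params and "doc" in params)

print(embed_params_valid({"url": "https://docs.readthedocs.io/en/latest/features.html"}))  # True
print(embed_params_valid({"project": "docs", "doc": "features"}))  # True
print(embed_params_valid({"project": "docs"}))  # False
```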

Undocumented resources and endpoints

There are some undocumented endpoints in the API. These should not be used and could change at any time. These include:

  • Endpoints for returning footer and version data to be injected into docs. (/api/v2/footer_html)

  • Endpoints used for advertising (/api/v2/sustainability/)

  • Any other endpoints not detailed above.

Server side search API

You can integrate our server side search in your documentation by using our API.

If you are using Business hosting you will need to replace https://readthedocs.org/ with https://readthedocs.com/ in all the URLs used in the following examples. Check Authentication and authorization if you are using private versions.

API v3

GET /api/v3/search/

Return a list of search results for a project or subset of projects. Results are divided into sections with highlights of the matching term.

Query Parameters:
  • q – Search query (see Search query syntax)

  • page – Jump to a specific page

  • page_size – Limits the results per page, default is 50

Response JSON Object:
  • type (string) – The type of the result, currently page is the only type.

  • project (string) – The project object

  • version (string) – The version object

  • title (string) – The title of the page

  • domain (string) – Canonical domain of the resulting page

  • path (string) – Path to the resulting page

  • highlights (object) – An object containing a list of substrings with matching terms. Note that the text is HTML escaped with the matching terms inside a <span> tag.

  • blocks (object) –

    A list of block objects containing search results from the page. Currently, there is one type of block:

    • section: A page section with a linkable anchor (id attribute).

Warning

Except for highlights, content is not HTML escaped; you shouldn’t include it in your page without escaping it first.
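A minimal sketch of handling this on the consumer side: escape every raw field yourself and trust only the highlights strings (the sample result below is illustrative, not real API output):

```python
import html

# Per the warning above, only strings under "highlights" arrive HTML-escaped;
# raw fields such as titles and section content must be escaped before being
# inserted into a page.
result = {
    "title": "Server <b>Side</b> Search",  # raw: could contain markup
    "highlights": {
        "title": ["<span>Server</span> <span>Side</span> <span>Search</span>"],
    },
}

safe_title = html.escape(result["title"])     # escape every raw field yourself
highlight = result["highlights"]["title"][0]  # already escaped, safe to render
print(safe_title)   # → Server &lt;b&gt;Side&lt;/b&gt; Search
print(highlight)
```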

Example request:

$ curl "https://readthedocs.org/api/v3/search/?q=project:docs%20server%20side%20search"

Example response:

{
    "count": 41,
    "next": "https://readthedocs.org/api/v3/search/?page=2&q=project:docs%20server+side+search",
    "previous": null,
    "projects": [
       {
         "slug": "docs",
         "versions": [
           {"slug": "latest"}
         ]
       }
    ],
    "query": "server side search",
    "results": [
        {
            "type": "page",
            "project": {
               "slug": "docs",
               "alias": null
            },
            "version": {
               "slug": "latest"
            },
            "title": "Server Side Search",
            "domain": "https://docs.readthedocs.io",
            "path": "/en/latest/server-side-search.html",
            "highlights": {
                "title": [
                    "<span>Server</span> <span>Side</span> <span>Search</span>"
                ]
            },
            "blocks": [
               {
                  "type": "section",
                  "id": "server-side-search",
                  "title": "Server Side Search",
                  "content": "Read the Docs provides full-text search across all of the pages of all projects, this is powered by Elasticsearch.",
                  "highlights": {
                     "title": [
                        "<span>Server</span> <span>Side</span> <span>Search</span>"
                     ],
                     "content": [
                        "You can <span>search</span> all projects at https:&#x2F;&#x2F;readthedocs.org&#x2F;<span>search</span>&#x2F"
                     ]
                  }
               },
               {
                  "type": "domain",
                  "role": "http:get",
                  "name": "/_/api/v2/search/",
                  "id": "get--_-api-v2-search-",
                  "content": "Retrieve search results for docs",
                  "highlights": {
                     "name": [""],
                     "content": ["Retrieve <span>search</span> results for docs"]
                  }
               }
            ]
        }
    ]
}
Migrating from API v2

Instead of using query arguments to specify the project and version to search, you specify them in the search query itself. For example, if you previously used the following parameters:

  • project: docs

  • version: latest

  • q: test

Now you need to use:

  • q: project:docs/latest test

The response of the API is very similar to V2, with the following changes:

  • project is an object, not a string.

  • version is an object, not a string.

  • project_alias isn’t present; it is contained in the project object.

When searching on a parent project, results from its subprojects won’t be included automatically; to include them, use the subprojects parameter.
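The parameter mapping above can be sketched as a one-line converter (hypothetical helper, not part of any official client):

```python
# Hypothetical converter from v2-style parameters to the v3 query syntax.
def v2_to_v3_query(project, version, q):
    return f"project:{project}/{version} {q}"

print(v2_to_v3_query("docs", "latest", "test"))  # → project:docs/latest test
```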

Authentication and authorization

If you are using private versions, users will only be allowed to search projects they have permissions over. Authentication and authorization is done using the current session, or any of the valid sharing methods.

To be able to use the user’s current session you need to use the API from the domain where your docs are being served (<your-docs-domain>/_/api/v3/search/). This is https://docs.readthedocs-hosted.com/_/api/v3/search/ for the https://docs.readthedocs-hosted.com/ project, for example.

API v2 (deprecated)

Note

Please use our API v3 instead, see Migrating from API v2.

GET /api/v2/search/

Return a list of search results for a project, including results from its Subprojects. Results are divided into sections with highlights of the matching term.

Query Parameters:
  • q – Search query

  • project – Project slug

  • version – Version slug

  • page – Jump to a specific page

  • page_size – Limits the results per page, default is 50

Response JSON Object:
  • type (string) – The type of the result, currently page is the only type.

  • project (string) – The project slug

  • project_alias (string) – Alias of the project if it’s a subproject.

  • version (string) – The version slug

  • title (string) – The title of the page

  • domain (string) – Canonical domain of the resulting page

  • path (string) – Path to the resulting page

  • highlights (object) – An object containing a list of substrings with matching terms. Note that the text is HTML escaped with the matching terms inside a <span> tag.

  • blocks (object) –

    A list of block objects containing search results from the page. Currently, there is one type of block:

    • section: A page section with a linkable anchor (id attribute).

Warning

Except for highlights, the content is not HTML escaped; you shouldn’t include it in your page without escaping it first.
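For example, a rendering step might escape every field except the highlights before inserting it into a page (the result below is illustrative, not a real API response):

```python
import html

# Illustrative search result: only "highlights" strings come back
# HTML escaped, so escape everything else yourself.
result = {
    "title": "Server <b>Side</b> Search",               # not escaped by the API
    "highlights": {"title": ["<span>Search</span>"]},   # already escaped
}

safe_title = html.escape(result["title"])
print(safe_title)  # Server &lt;b&gt;Side&lt;/b&gt; Search
```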

Example request:

$ curl "https://readthedocs.org/api/v2/search/?project=docs&version=latest&q=server%20side%20search"

Example response:

{
    "count": 41,
    "next": "https://readthedocs.org/api/v2/search/?page=2&project=read-the-docs&q=server+side+search&version=latest",
    "previous": null,
    "results": [
        {
            "type": "page",
            "project": "docs",
            "project_alias": null,
            "version": "latest",
            "title": "Server Side Search",
            "domain": "https://docs.readthedocs.io",
            "path": "/en/latest/server-side-search.html",
            "highlights": {
                "title": [
                    "<span>Server</span> <span>Side</span> <span>Search</span>"
                ]
            },
            "blocks": [
               {
                  "type": "section",
                  "id": "server-side-search",
                  "title": "Server Side Search",
                  "content": "Read the Docs provides full-text search across all of the pages of all projects, this is powered by Elasticsearch.",
                  "highlights": {
                     "title": [
                        "<span>Server</span> <span>Side</span> <span>Search</span>"
                     ],
                     "content": [
                        "You can <span>search</span> all projects at https:&#x2F;&#x2F;readthedocs.org&#x2F;<span>search</span>&#x2F"
                     ]
                  }
               }
            ]
        },
    ]
}

Cross-site requests

Cross-site requests are allowed for the following endpoints:

Except for the sustainability API, none of the above endpoints allow you to pass credentials in cross-site requests. In other words, these API endpoints give you access to public information only.

On a technical level, this is achieved by implementing the CORS standard, which is supported by all major browsers. We implement it in such a way that it strictly matches the intention of each API endpoint.

Cookies

On Read the Docs Community, our session cookies have the SameSite attribute set to None, which means they can be sent in cross-site requests. This is needed only for our sustainability API, so that ads are not shown to Gold users. All resources on Read the Docs Community are public; you don’t need to pass cookies to use our allowed APIs from other sites.

On Read the Docs for Business, our session cookies have the SameSite attribute set to Lax. This means that browsers will not include them in cross-site requests. If you need access to versions that the current user has permissions over, you can use our proxied APIs, which can be accessed from docs domains with the /_/ prefix. For example, you can use our search API from <your-docs-domain>/_/api/v2/search/.

Frequently asked questions

Building and publishing your project

Why does my project have status “failing”?

Projects have the status “failing” because something in the build process has failed. This can be because the project is not correctly configured, because the contents of the Git repository cannot be built, or, in rare cases, because a system that Read the Docs connects to is not working.

First, you should check out the Builds tab of your project. By clicking on the failing step, you will be able to see details that can help you resolve the build error.

If the solution is not self-evident, you can use an important word or message from the error to search for a solution.

Why do I get import errors from libraries depending on C modules?

Note

Another use case for this is when you have a module with a C extension.

This happens because the build system does not have the dependencies for building your project, such as C libraries needed by some Python packages (e.g. libevent or mysql). For libraries that cannot be installed via apt in the builder, there is another way to successfully build the documentation despite missing dependencies.

With Sphinx you can use the built-in autodoc_mock_imports for mocking. If such libraries are installed via setup.py, you will also need to remove all the C-dependent libraries from your install_requires in the Read the Docs environment.
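As a sketch, the mocking goes in docs/source/conf.py; the module names below are placeholders for whatever C-dependent imports your project needs:

```python
# docs/source/conf.py
# Tell autodoc to mock these imports instead of importing them,
# so the docs build even though the C libraries are missing.
autodoc_mock_imports = ["libevent", "MySQLdb"]
```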

Where do I need to put my docs for Read the Docs to find it?

You can put your docs wherever you want in your repository. However, you will need to tell Read the Docs where your Sphinx (conf.py) or MkDocs (mkdocs.yml) configuration file lives in order to build your documentation.

This is done using the sphinx.configuration or mkdocs.configuration config key in your Read the Docs configuration file. Read Configuration file overview to learn more about this.

How can I avoid search results having a deprecated version of my docs?

If readers search something related to your docs in Google, it will probably return the most relevant version of your documentation. That version may already be deprecated, and you may want to stop Google from indexing it and suggest the latest (or a newer) version instead.

To accomplish this, you can add a robots.txt file to your documentation’s root so it ends up served at the root URL of your project (for example, https://yourproject.readthedocs.io/robots.txt). We have documented how to set this up in robots.txt support.
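For illustration, a minimal robots.txt that asks crawlers to skip a deprecated version might look like this (the version path is an example):

```
User-agent: *
Disallow: /en/1.0/
```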

How do I change the version slug of my project?

We don’t support changing the slug for versions, but you can rename the branch/tag to achieve this. If that isn’t enough, you can request the change by sending an email to support@readthedocs.org.

What commit of Read the Docs is in production?

We deploy readthedocs.org from the rel branch in our GitHub repository. You can see the latest commits that have been deployed by looking on GitHub: https://github.com/readthedocs/readthedocs.org/commits/rel

We also keep an up-to-date changelog.

Additional features and configuration

How do I add additional software dependencies for my documentation?

For most Python dependencies, you can specify a requirements file which details your dependencies. You can also set your project documentation to install your Python project itself as a dependency.

See also

Build process overview

An overview of the build process.

How to create reproducible builds

General information about adding dependencies and best-practices for maintaining them.

Build process customization

How to customize your builds, for example if you need to build with different tools from Sphinx or if you need to add additional packages for the Ubuntu-based builder.

Configuration file reference

Reference for the main configuration file, readthedocs.yaml

build.apt_packages

Reference for adding Debian packages with apt for the Ubuntu-based builders

Other FAQ entries

How do I change behavior when building with Read the Docs?

When Read the Docs builds your project, it sets the READTHEDOCS environment variable to the string 'True'. So within your Sphinx conf.py file, you can vary the behavior based on this. For example:

import os

on_rtd = os.environ.get("READTHEDOCS") == "True"
if on_rtd:
    html_theme = "default"
else:
    html_theme = "nature"

The READTHEDOCS variable is also available in the Sphinx build environment, and will be set to True when building on Read the Docs:

{% if READTHEDOCS %}
Woo
{% endif %}

I want comments in my docs

Read the Docs doesn’t have explicit support for this. That said, a tool like Disqus (and the sphinxcontrib-disqus plugin) can be used for this purpose on Read the Docs.

Can I remove advertising from my documentation?

Yes. See Opting out of advertising.

How do I change my project slug (the URL your docs are served at)?

We don’t support changing the slug for projects. You can update the name shown on the site, but not the actual URL your documentation is served at.

The main reason for this is that all existing URLs to the content will break. You can delete and re-create the project with the proper name to get a new slug, but you really shouldn’t do this if you have existing inbound links, as it breaks the internet.

If that isn’t enough, you can request the change by sending an email to support@readthedocs.org.

Big projects

How do I host multiple projects on one custom domain?

We support the concept of subprojects, which allows multiple projects to share a single domain. If you add a subproject to a project, that documentation will be served under the parent project’s subdomain or custom domain.

For example, Kombu is a subproject of Celery, so you can access it on the celery.readthedocs.io domain:

https://celery.readthedocs.io/projects/kombu/en/latest/

This also works the same for custom domains:

http://docs.celeryq.dev/projects/kombu/en/latest/

You can add subprojects in the project admin dashboard.

For details on custom domains, see our documentation on Custom domains.

How do I support multiple languages of documentation?

Read the Docs supports multiple languages. See the section on Localization and Internationalization.

Sphinx

I want to use the Read the Docs theme

To use the Read the Docs theme, you have to specify that in your Sphinx conf.py file.

Read the sphinx-rtd-theme documentation for instructions to enable it in your Sphinx project.

Image scaling doesn’t work in my documentation

Image scaling in docutils depends on Pillow. If you notice that image scaling is not working properly on your Sphinx project, you may need to add Pillow to your requirements to fix this issue. Read more about How to create reproducible builds to define your dependencies in a requirements.txt file.

Python

Can I document a Python package that is not at the root of my repository?

Yes. The most convenient way to access a Python package in your documentation (for example, via Sphinx’s autoapi) is to use the python.install.method: pip (python.install) configuration key.

This configuration will tell Read the Docs to install your package in the virtual environment used to build your documentation, so your documentation tool can access it.
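A minimal .readthedocs.yaml using this key might look like the following sketch (the Python version and package path are assumptions to adapt to your project):

```yaml
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

python:
  install:
    # Install the repository itself (the directory containing
    # pyproject.toml or setup.py) into the build's virtualenv.
    - method: pip
      path: .
```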

Does Read the Docs work well with “legible” docstrings?

Yes. One criticism of Sphinx is that its annotated docstrings are too dense and difficult for humans to read. In response, many projects have adopted customized docstring styles that are simultaneously informative and legible. The NumPy and Google styles are two popular docstring formats. Fortunately, the default Read the Docs theme handles both formats just fine, provided your conf.py specifies an appropriate Sphinx extension that knows how to convert your customized docstrings. Two such extensions are numpydoc and napoleon. Only napoleon is able to handle both docstring formats. Its default output more closely matches the format of standard Sphinx annotations, and as a result, it tends to look a bit better with the default theme.

Note

To use these extensions you need to specify the dependencies on your project by following this guide.
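For example, enabling napoleon is a one-line addition to the extensions list in your conf.py:

```python
# docs/source/conf.py
extensions = [
    "sphinx.ext.autodoc",    # generate docs from docstrings
    "sphinx.ext.napoleon",   # parse Google- and NumPy-style docstrings
]
```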

I need to install a package in an environment with pinned versions

If you’d like to pin your dependencies outside the package, you can add this line to your requirements or environment file (if you are using Conda).

In your requirements.txt file:

# path to the directory containing setup.py relative to the project root
-e .

In your Conda environment file (environment.yml):

# path to the directory containing setup.py relative to the environment file
-e ..

Other documentation frameworks

How can I deploy Jupyter Book projects on Read the Docs?

According to its own documentation,

Jupyter Book is an open source project for building beautiful, publication-quality books and documents from computational material.

Even though Jupyter Book leverages Sphinx “for almost everything that it does”, it deliberately hides Sphinx conf.py files from the user, and instead generates them on the fly from its declarative _config.yml. As a result, you need to follow some extra steps to make Jupyter Book work on Read the Docs.

As described in the official documentation, you can manually convert your Jupyter Book project to Sphinx with the following configuration:

.readthedocs.yaml
 build:
     jobs:
         pre_build:
         # Generate the Sphinx configuration for this Jupyter Book so it builds.
         - "jupyter-book config sphinx docs/"

Changelog

Version 10.24.1

Date:

April 23, 2024

Version 10.24.0

Date:

April 16, 2024

Version 10.23.2

Date:

April 09, 2024

Version 10.23.1

Date:

March 26, 2024

Version 10.23.0

Date:

March 19, 2024

Version 10.22.0

Date:

March 12, 2024

Version 10.21.0

Date:

March 04, 2024

Version 10.20.0

Date:

February 27, 2024

Version 10.19.0

Date:

February 20, 2024

Version 10.18.0

Date:

February 06, 2024

Version 10.17.0

Date:

January 30, 2024

Version 10.16.1

Date:

January 23, 2024

Version 10.16.0

Date:

January 23, 2024

Version 10.15.1

Date:

January 16, 2024

Version 10.15.0

Date:

January 09, 2024

Version 10.14.0

Date:

January 03, 2024

Version 10.13.0

Date:

December 19, 2023

Version 10.12.2

Date:

December 05, 2023

Version 10.12.1

Date:

November 28, 2023

Version 10.12.0

Date:

November 28, 2023

Version 10.11.0

Date:

November 14, 2023

Version 10.10.0

Date:

November 07, 2023

Version 10.9.0

Date:

October 31, 2023

Version 10.8.1

Date:

October 24, 2023

Version 10.8.0

Date:

October 24, 2023

Version 10.7.1

Date:

October 17, 2023

Version 10.7.0

Date:

October 10, 2023

Version 10.6.1

Date:

October 03, 2023

Version 10.6.0

Date:

September 26, 2023

Version 10.5.0

Date:

September 18, 2023

Version 10.4.0

Date:

September 12, 2023

Version 10.3.0

Date:

September 05, 2023

Version 10.2.0

Date:

August 29, 2023

Version 10.1.0

Date:

August 22, 2023

Version 10.0.0

This release is a Django 4.2 upgrade, so it has a major version bump, 10.0!

Date:

August 14, 2023

Version 9.16.4

Date:

August 08, 2023

Version 9.16.3

Date:

August 01, 2023

Version 9.16.2

Date:

July 25, 2023

Version 9.16.1

Date:

July 17, 2023

Version 9.16.0

Date:

July 11, 2023

Version 9.15.0

Date:

June 26, 2023

Version 9.14.0

Date:

June 20, 2023

Version 9.13.3

Date:

June 13, 2023

Version 9.13.2

Date:

June 06, 2023

Version 9.13.1

Date:

May 30, 2023

Version 9.13.0

Date:

May 23, 2023

Version 9.12.0

Date:

May 02, 2023

Version 9.11.0

Date:

April 18, 2023

Version 9.10.1

Date:

April 11, 2023

Version 9.10.0

Date:

March 28, 2023

Version 9.9.1

Date:

March 21, 2023

Version 9.9.0

Date:

March 14, 2023

Version 9.8.0

Date:

March 07, 2023

Version 9.7.0

This release contains one security fix. For more information, see:

Date:

February 28, 2023

Version 9.6.0

Date:

February 21, 2023

Version 9.5.0

This release contains one security fix. For more information, see:

Date:

February 13, 2023

Version 9.4.0

This release contains one security fix. For more information, see:

Date:

February 07, 2023

Version 9.3.1

Date:

January 30, 2023

Version 9.3.0

Date:

January 24, 2023

Version 9.2.0

This release contains two security fixes. For more information, see our GitHub advisories:

Date:

January 16, 2023

Version 9.1.3

Date:

January 10, 2023

Version 9.1.2

Date:

January 03, 2023

Version 9.1.1

Date:

December 20, 2022

Version 9.1.0

This release contains an important security fix. See more information on the GitHub advisory.

Date:

December 08, 2022

Version 9.0.0

This version upgrades our Search API experience to v3.

Date:

November 28, 2022

Version 8.9.0

Date:

November 15, 2022

Version 8.8.1

This release contains a security fix, which is the most important part of the update.

Date:

November 09, 2022

Version 8.8.0

Date:

November 08, 2022

Version 8.7.1

Date:

October 24, 2022

Version 8.7.0

Date:

October 11, 2022

Version 8.6.0

Date:

September 28, 2022

Version 8.5.0

Date:

September 12, 2022

Version 8.4.3

Date:

September 06, 2022

Version 8.4.2

Date:

August 29, 2022

Version 8.4.1

Date:

August 23, 2022

Version 8.4.0

Date:

August 16, 2022

Version 8.3.7

Date:

August 09, 2022

  • @stsewd: Sphinx domain: change type of ID field (#9482)

  • @humitos: Build: unpin Pillow for unsupported Python versions (#9473)

  • @humitos: Release 8.3.6 (#9465)

  • @stsewd: Redirects: check only for hostname and path for infinite redirects (#9463)

  • @benjaoming: Fix missing indentation on reStructuredText badge code (#9404)

  • @stsewd: Embed JS: fix incompatibilities with sphinx 6.x (jquery removal) (#9359)

Version 8.3.6

Date:

August 02, 2022

Version 8.3.5

Date:

July 25, 2022

Version 8.3.4

Date:

July 19, 2022

Version 8.3.3

Date:

July 12, 2022

Version 8.3.2

Date:

July 05, 2022

Version 8.3.1

Date:

June 27, 2022

Version 8.3.0

Date:

June 20, 2022

Version 8.2.0

Date:

June 14, 2022

Version 8.1.2

Date:

June 06, 2022

Version 8.1.1

Date:

Jun 1, 2022

Version 8.1.0

Date:

May 24, 2022

Version 8.0.2

Date:

May 16, 2022

Version 8.0.1

Date:

May 09, 2022

Version 8.0.0

Date:

May 03, 2022

Note

We are upgrading to Ubuntu 22.04 LTS and also to Python 3.10.

Projects using Mamba via the old (now removed) feature flag CONDA_USES_MAMBA have to update their .readthedocs.yaml file to use build.tools.python: mambaforge-4.10 to continue using Mamba to create their environment. See more about build.tools.python at https://docs.readthedocs.io/en/stable/config-file/v2.html#build-tools-python

Version 7.6.2

Date:

April 25, 2022

Version 7.6.1

Date:

April 19, 2022

Version 7.6.0

Date:

April 12, 2022

Version 7.5.1

Date:

April 04, 2022

Version 7.5.0

Date:

March 28, 2022

Version 7.4.2

Date:

March 14, 2022

Version 7.4.1

Date:

March 07, 2022

  • @humitos: Upgrade common submodule (#9001)

  • @humitos: Build: RepositoryError message (#8999)

  • @humitos: Requirements: remove django-permissions-policy (#8987)

  • @stsewd: Archive builds: avoid filtering by commands__isnull (#8986)

  • @humitos: Build: cancel error message (#8984)

  • @humitos: API: validate RemoteRepository when creating a Project (#8983)

  • @humitos: Celery: trigger archive_builds frequently with a lower limit (#8981)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 09 (#8977)

  • @stsewd: MkDocs: allow None on extra_css/extra_javascript (#8976)

  • @stsewd: CDN: avoid cache tags collision (#8969)

  • @stsewd: Docs: warn about custom domains on subprojects (#8945)

  • @humitos: Code style: format the code using darker (#8875)

  • @dogukanteber: Use django-storages’ manifest files class instead of the overriden class (#8781)

  • @nienn: Docs: Add links to documentation on creating custom classes (#8466)

  • @stsewd: Integrations: allow to pass more data about external versions (#7692)

Version 7.4.0

Date:

March 01, 2022

Version 7.3.0

Date:

February 21, 2022

Version 7.2.1

Date:

February 15, 2022

Version 7.2.0

Date:

February 08, 2022

Version 7.1.2

Date:

January 31, 2022

Version 7.1.1

Date:

January 31, 2022

Version 7.1.0

Date:

January 25, 2022

Version 7.0.0

This is our 7th major version, because we are upgrading to Django 3.2 LTS.

Date:

January 17, 2022

Version 6.3.3

Date:

January 10, 2022

Version 6.3.2

Date:

January 04, 2022

Version 6.3.1

Date:

December 14, 2021

Version 6.3.0

Date:

November 29, 2021

Version 6.2.1

Date:

November 23, 2021

Version 6.2.0

Date:

November 16, 2021

Version 6.1.2

Date:

November 08, 2021

Version 6.1.1

Date:

November 02, 2021

Version 6.1.0

Date:

October 26, 2021

Version 6.0.0

Date:

October 13, 2021

This release includes the upgrade of some base dependencies:

  • Python version from 3.6 to 3.8

  • Ubuntu version from 18.04 LTS to 20.04 LTS

Starting from this release, all the Read the Docs code will be tested and QAed on these versions.

Version 5.25.1

Date:

October 11, 2021

Version 5.25.0

Date:

October 05, 2021

Version 5.24.0

Date:

September 28, 2021

Version 5.23.6

Date:

September 20, 2021

Version 5.23.5

Date:

September 14, 2021

  • @humitos: Organization: only mark artifacts cleaned as False if they are True (#8481)

  • @astrojuanlu: Fix link to version states documentation (#8475)

  • @stsewd: OAuth models: increase avatar_url lenght (#8472)

  • @pzhlkj6612: Docs: update the links to the dependency management content of setuptools docs (#8470)

  • @stsewd: Permissions: avoid using project.users, use proper permissions instead (#8458)

  • @humitos: Docker build images: update design doc (#8447)

  • @astrojuanlu: New Read the Docs tutorial, part I (#8428)

Version 5.23.4

Date:

September 07, 2021

Version 5.23.3

Date:

August 30, 2021

Version 5.23.2

Date:

August 24, 2021

Version 5.23.1

Date:

August 16, 2021

Version 5.23.0

Date:

August 09, 2021

Version 5.22.0

Date:

August 02, 2021

Version 5.21.0

Date:

July 27, 2021

Version 5.20.3

Date:

July 19, 2021

Version 5.20.2

Date:

July 13, 2021

Version 5.20.1

Date:

June 28, 2021

Version 5.20.0

Date:

June 22, 2021

Version 5.19.0

Warning

This release contains a security fix to our CSRF settings: https://github.com/readthedocs/readthedocs.org/security/advisories/GHSA-3v5m-qmm9-3c6c

Date:

June 15, 2021

Version 5.18.0

Date:

June 08, 2021

Version 5.17.0

Date:

May 24, 2021

Version 5.16.0

Date:

May 18, 2021

  • @stsewd: QuerySets: check for .is_superuser instead of has_perm (#8181)

  • @humitos: Build: use is_active method to know if the build should be skipped (#8179)

  • @humitos: APIv2: disable listing endpoints (#8178)

  • @stsewd: Project: use IntegerField for remote_repository from project form. (#8176)

  • @stsewd: Docs: remove some lies from cross referencing guide (#8173)

  • @stsewd: Docs: add space to bash code (#8171)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 19 (#8170)

  • @stsewd: Querysets: include organizations in is_active check (#8163)

  • @stsewd: Querysets: remove private and for_project (#8158)

  • @davidfischer: Disable FLOC by introducing permissions policy header (#8145)

  • @stsewd: Build: allow to install packages with apt (#8065)

Version 5.15.0

Date:

May 10, 2021

  • @stsewd: Ads: don’t load script if a project is marked as ad_free (#8164)

  • @stsewd: Querysets: include organizations in is_active check (#8163)

  • @stsewd: Querysets: simplify project querysets (#8154)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 18 (#8153)

  • @stsewd: Search: default to search on default version of subprojects (#8148)

  • @stsewd: Remove protected privacy level (#8146)

  • @stsewd: Embed: fix paths that start with / (#8139)

  • @humitos: Metrics: run metrics task every 30 minutes (#8138)

  • @humitos: web-celery: add logging for OOM debug on suspicious tasks (#8131)

  • @agjohnson: Fix a few style and grammar issues with SSO docs (#8109)

  • @stsewd: Embed: don’t fail while querying sections with bad id (#8084)

  • @stsewd: Design doc: allow to install packages using apt (#8060)

Version 5.14.3

Date:

April 26, 2021

Version 5.14.2

Date:

April 20, 2021

Version 5.14.1

Date:

April 13, 2021

  • @stsewd: OAuth: protection against deleted objects (#8081)

  • @cocobennett: Add page and page_size to server side api documentation (#8080)

  • @stsewd: Version warning banner: inject on role=”main” or main tag (#8079)

  • @stsewd: OAuth: avoid undefined var (#8078)

  • @stsewd: Conda: protect against None when appending core requirements (#8077)

  • @humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)

  • @agjohnson: Add separate version create view and create view URL (#7595)

Version 5.14.0

Date:

April 06, 2021

This release includes a security update which was done in a private branch PR. See our security changelog for more details.

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 14 (#8071)

  • @astrojuanlu: Clarify ad-free conditions (#8064)

  • @humitos: SSO: add small paragraph mentioning how to enable it on commercial (#8063)

  • @stsewd: Build environment: allow to run commands with a custom user (#8058)

  • @humitos: Design document for new Docker images structure (#7566)

Version 5.13.0

Date:

March 30, 2021

Version 5.12.2

Date:

March 23, 2021

Version 5.12.1

Date:

March 16, 2021

Version 5.12.0

Date:

March 08, 2021

Version 5.11.0

Date:

March 02, 2021

Version 5.10.0

Date:

February 23, 2021

Version 5.9.0

Date:

February 16, 2021

Last Friday we migrated our site from Azure to AWS (read the blog post). This is the first release into our new AWS infra.

Version 5.8.5

Date:

January 18, 2021

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 03 (#7840)

  • @humitos: Speed up concurrent builds by limited to 5 hours ago (#7839)

  • @humitos: Match Redis version with production (#7838)

  • @saadmk11: Add Option to Enable External Builds Through Project Update API (#7834)

  • @stsewd: Docs: mention the version warning is for sphinx only (#7832)

  • @stsewd: Tests: make PRODUCTION_DOMAIN explicit (#7831)

  • @stsewd: Docs: make it easy to copy/pasta examples (#7829)

  • @stsewd: PR preview: pass PR and build urls to sphinx context (#7828)

  • @agjohnson: Hide design docs from documentation (#7826)

  • @stsewd: Footer: add cache tags (#7821)

  • @humitos: Log Stripe Resource fallback creation in Sentry (#7820)

  • @humitos: Register MetricsTask to send metrics to AWS CloudWatch (#7817)

  • @saadmk11: Add management command to Sync RemoteRepositories and RemoteOrganizations (#7803)

  • @stsewd: Mkdocs: default to “docs” for docs_dir (#7766)

Version 5.8.4

Date:

January 12, 2021

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 02 (#7818)

  • @stsewd: List SYNC_VERSIONS_USING_A_TASK flag in the admin (#7802)

  • @ericholscher: Update build concurrency numbers for Business (#7794)

  • @stsewd: Sphinx: use html_baseurl for setting the canonical URL (#7540)

Version 5.8.3

Date:

January 05, 2021

Version 5.8.2

Date:

December 21, 2020

Version 5.8.1

Date:

December 14, 2020

  • @humitos: Register ShutdownBuilder task (#7749)

  • @saadmk11: Use “path_with_namespace” for GitLab RemoteRepository full_name Field (#7746)

  • @stsewd: Features: remove USE_NEW_PIP_RESOLVER (#7745)

  • @stsewd: Version sync: exclude external versions when deleting (#7742)

  • @stsewd: Search: limit number of sections and domains to 10K (#7741)

  • @stsewd: Traffic analytics: don’t pass context if the feature isn’t enabled (#7740)

  • @stsewd: Analytics: move page views to its own endpoint (#7739)

  • @stsewd: FeatureQuerySet: make check for date inclusive (#7737)

  • @stsewd: Typo: date -> data (#7736)

  • @saadmk11: Use remote_id and vcs_provider Instead of full_name to Get RemoteRepository (#7734)

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 49 (#7730)

  • @saadmk11: Update parts of code that were using the old RemoteRepository model fields (#7728)

  • @stsewd: Builds: don’t delete them when a version is deleted (#7679)

  • @stsewd: Sync versions: create new versions in bulk (#7382)

  • @humitos: Use mamba under a feature flag to create conda environments (#6815)

Version 5.8.0

Date:

December 08, 2020

Version 5.7.0

Date:

December 01, 2020

Version 5.6.5

Date:

November 23, 2020

Version 5.6.4

Date:

November 16, 2020

Version 5.6.3

Date:

November 10, 2020

  • @pyup-bot: pyup: Scheduled weekly dependency update for week 43 (#7602)

Version 5.6.2

Date:

November 03, 2020

Version 5.6.1

Date:

October 26, 2020

Version 5.6.0

Date:

October 19, 2020

Version 5.5.3

Date:

October 13, 2020

Version 5.5.2

Date:

October 06, 2020

Version 5.5.1

Date:

September 28, 2020

Version 5.5.0

Date:

September 22, 2020

Version 5.4.3

Date:

September 15, 2020

Version 5.4.2

Date:

September 09, 2020

Version 5.4.1

Date:

September 01, 2020

Version 5.4.0

Date:

August 25, 2020

Version 5.3.0

Date:

August 18, 2020

Version 5.2.3

Date:

August 04, 2020

Version 5.2.2

Date:

July 29, 2020

Version 5.2.1

Date:

July 14, 2020

Version 5.2.0

Date:

July 07, 2020

Version 5.1.5

Date:

July 01, 2020

Version 5.1.4

Date:

June 23, 2020

Version 5.1.3

Date:

June 16, 2020

Version 5.1.2

Date:

June 09, 2020

Version 5.1.1

Date:

May 26, 2020

Version 5.1.0

Date:

May 19, 2020

This release includes one major new feature which is Pageview Analytics. This allows projects to see the pages in their docs that have been viewed in the past 30 days, giving them an idea of what pages to focus on when updating them.

This release also has a few small search improvements, doc updates, and other bugfixes as well.

Version 5.0.0

Date:

May 12, 2020

This release includes two large changes, one that is breaking and requires a major version upgrade:

  • We have removed our deprecated doc serving code that used core/views, core/symlinks, and builds/syncers (#6535). All doc serving should now be done via proxito. In production this has been the case for over a month; we have now removed the deprecated code from the codebase.

  • We did a large documentation refactor that should make things nicer to read and highlights more of our existing features. This is the first of a series of new documentation additions we have planned.

  • @ericholscher: Fix the caching of featured projects (#7054)

  • @ericholscher: Docs: Refactor and simplify our docs (#7052)

  • @stsewd: Mention using ssh URLs when using private submodules (#7046)

  • @ericholscher: Show project slug in Version admin (#7042)

  • @stsewd: List apiv3 first (#7041)

  • @stsewd: Remove CELERY_ROUTER flag (#7040)

  • @stsewd: Search: remove unused taxonomy field (#7033)

  • @agjohnson: Use a high time limit for celery build task (#7029)

  • @ericholscher: Clean up build admin to make list display match search (#7028)

  • @stsewd: Task Router: check for None (#7027)

  • @stsewd: Implement repo_exists for all VCS backends (#7025)

  • @stsewd: Mkdocs: Index pages without anchors (#7024)

  • @agjohnson: Move docker limits back to setting (#7023)

  • @humitos: Fix typo (#7022)

  • @stsewd: Fix linter (#7021)

  • @ericholscher: Release 4.1.8 (#7020)

  • @ericholscher: Cleanup unresolver logging (#7019)

  • @stsewd: Document about next when using a secret link (#7015)

  • @stsewd: Remove unused field project.version_privacy_level (#7011)

  • @ericholscher: Add proxito headers to redirect responses (#7007)

  • @stsewd: Make hidden field not null (#6996)

  • @humitos: Show a list of packages installed on environment (#6992)

  • @eric-wieser: Ensure invoked Sphinx matches importable one (#6965)

  • @ericholscher: Add an unresolver similar to our resolver (#6944)

  • @KengoTODA: Replace “PROJECT” with project object (#6878)

  • @humitos: Remove code replaced by El Proxito and stateless servers (#6535)

Version 4.1.8

Date:

May 05, 2020

This release adds a few new features and bugfixes. The largest change is the addition of hidden versions, which allows docs to be built but not shown to users on the site. This will keep old links from breaking but not direct new users there.

We’ve also expanded the CDN support to make sure we’re passing headers on 3xx and 4xx responses. This will allow us to expand the timeout on our CDN.

We’ve also updated and added a good amount of documentation in this release, and we’re starting a larger refactor of our docs to help users understand the platform better.

Version 4.1.7

Date:

April 28, 2020

As of this release, most documentation on Read the Docs Community is now behind Cloudflare’s CDN. It should be much faster for people further from US East. Please report any issues you experience with stale cached documentation (especially CSS/JS).

Another change in this release related to how custom domains are handled. Custom domains will now redirect HTTP -> HTTPS if the Domain’s “HTTPS” flag is set. Also, the subdomain URL (eg. <project>.readthedocs.io/...) should redirect to the custom domain if the Domain’s “canonical” flag is set. These flags are configurable in your project dashboard under Admin > Domains.

Many of the other changes related to improvements for our infrastructure to allow us to have autoscaling build and web servers. There were bug fixes for projects using versions tied to annotated git tags and custom user redirects will now send query parameters.

Version 4.1.6

Date:

April 21, 2020

Version 4.1.5

Date:

April 15, 2020

Version 4.1.4

Date:

April 14, 2020

Version 4.1.3

Date:

April 07, 2020

Version 4.1.2

Date:

March 31, 2020

Version 4.1.1

Date:

March 24, 2020

Version 4.1.0

Date:

March 17, 2020

Version 4.0.3

Date:

March 10, 2020

Version 4.0.2

Date:

March 04, 2020

Version 4.0.1

Date:

March 03, 2020

Version 4.0.0

Date:

February 25, 2020

This release upgrades our codebase to run on Django 2.2. This is a breaking change, so we have released it as our 4th major version.

Version 3.12.0

Date:

February 18, 2020

This version has two major changes.

Version 3.11.6

Date:

February 04, 2020

Version 3.11.5

Date:

January 29, 2020

Version 3.11.4

Date:

January 28, 2020

Version 3.11.3

Date:

January 21, 2020

Version 3.11.2

Date:

January 08, 2020

Version 3.11.1

Date:

December 18, 2019

Version 3.11.0

Date:

December 03, 2019

Version 3.10.0

Date:

November 19, 2019

Version 3.9.0

Date:

November 12, 2019

Version 3.8.0

Date:

October 09, 2019

Version 3.7.5

Date:

September 26, 2019

Version 3.7.4

Date:

September 05, 2019

Version 3.7.3

Date:

August 27, 2019

Version 3.7.2

Date:

August 08, 2019

Version 3.7.1

Date:

August 07, 2019

Version 3.7.0

Date:

July 23, 2019

Version 3.6.1

Date:

July 17, 2019

Version 3.6.0

Date:

July 16, 2019

Version 3.5.3

Date:

June 19, 2019

Version 3.5.2

This is a quick hotfix to the previous version.

Date:

June 11, 2019

Version 3.5.1

This version contained a security fix for an open redirect issue. The problem has been fixed and deployed on readthedocs.org. Users who depend on the Read the Docs codebase for a private instance are encouraged to update to 3.5.1 as soon as possible.

Date:

June 11, 2019

Version 3.5.0

Date:

May 30, 2019

Version 3.4.2

Date:

April 22, 2019

Version 3.4.1

Date:

April 03, 2019

Version 3.4.0

Date:

March 18, 2019

Version 3.3.1

Date:

February 28, 2019

Version 3.3.0

Date:

February 27, 2019

Version 3.2.3

Date:

February 19, 2019

Version 3.2.2

Date:

February 13, 2019

Version 3.2.1

Date:

February 07, 2019

Version 3.2.0

Date:

February 06, 2019

Version 3.1.0

This version greatly improves our search capabilities, thanks to the Google Summer of Code. We’re hoping to have another version of search coming soon after this, but this is a large upgrade moving to the latest Elastic Search.

Date:

January 24, 2019

Version 3.0.0

Read the Docs now only supports Python 3.6+. This affects people running the software on their own servers; builds continue to work across all supported Python versions.

Date:

January 23, 2019

Version 2.8.5

Date:

January 15, 2019

Version 2.8.4

Date:

December 17, 2018

Version 2.8.3

Date:

December 05, 2018

Version 2.8.2

Date:

November 28, 2018

Version 2.8.1

Date:

November 06, 2018

Version 2.8.0

Date:

October 30, 2018

Major change is an upgrade to Django 1.11.

Version 2.7.2

Date:

October 23, 2018

Version 2.7.1

Date:

October 04, 2018

Version 2.7.0

Date:

September 29, 2018

Reverted, do not use

Version 2.6.6

Date:

September 25, 2018

Version 2.6.5

Date:

August 29, 2018

Version 2.6.4

Date:

August 29, 2018

Version 2.6.3

Date:

August 18, 2018

Release to Azure!

Version 2.6.2

Date:

August 14, 2018

Version 2.6.1

Date:

July 17, 2018

Version 2.6.0

Date:

July 16, 2018

Version 2.5.3

Date:

July 05, 2018

Version 2.5.2

Date:

June 18, 2018

Version 2.5.1

Date:

June 14, 2018

Version 2.5.0

Date:

June 06, 2018

Version 2.4.0

Date:

May 31, 2018

Version 2.3.14

Date:

May 30, 2018

Version 2.3.13

Date:

May 23, 2018

Version 2.3.12

Date:

May 21, 2018

Version 2.3.11

Date:

May 01, 2018

Version 2.3.10

Date:

April 24, 2018

Version 2.3.9

Date:

April 20, 2018

Version 2.3.8

Date:

April 20, 2018

  • @agjohnson: Give TaskStep class knowledge of the underlying task (#3983)

  • @humitos: Resolve domain when a project is a translation of itself (#3981)

Version 2.3.7

Date:

April 19, 2018

Version 2.3.6

Date:

April 05, 2018

Version 2.3.5

Date:

April 05, 2018

Version 2.3.4

  • Release for static assets

Version 2.3.3

Version 2.3.2

This version adds a hotfix branch that adds model validation to the repository URL to ensure strange URL patterns can’t be used.

Version 2.3.1

Version 2.3.0

Warning

Version 2.3.0 includes a security fix for project translations. See Release 2.3.0 for more information

Version 2.2.1

Version 2.2.1 is a bug fix release for the several issues found in production during the 2.2.0 release.

Version 2.2.0

Version 2.1.6

Version 2.1.5

Version 2.1.4

Version 2.1.3

Date:

Dec 21, 2017

Version 2.1.2

Version 2.1.1

Release information missing

Version 2.1.0

Version 2.0

Previous releases

Starting with version 2.0, we will be incrementing the Read the Docs version based on semantic versioning principles, and will be automating the update of our changelog.

Below are some historical changes from when we tried to add information here in the past.

July 23, 2015

  • Django 1.8 Support Merged

Code notes
  • Updated Django from 1.6.11 to 1.8.3.

  • Removed South and ported the South migrations to Django’s migration framework.

  • Updated django-celery from 3.0.23 to 3.1.26 as django-celery 3.0.x does not support Django 1.8.

  • Updated Celery from 3.0.24 to 3.1.18 because we had to update django-celery. We need to test this extensively and might need to think about using the new Celery API directly and dropping django-celery. See release notes: https://docs.celeryproject.org/en/3.1/whatsnew-3.1.html

  • Updated tastypie from 0.11.1 to current master (commit 1e1aff3dd4dcd21669e9c68bd7681253b286b856) as 0.11.x is not compatible with Django 1.8. No surprises expected but we should ask for a proper release, see release notes: https://github.com/django-tastypie/django-tastypie/blob/master/docs/release_notes/v0.12.0.rst

  • Updated django-oauth from 0.16.1 to 0.21.0. No surprises expected, see release notes in the docs and finer grained in the repo

  • Updated django-guardian from 1.2.0 to 1.3.0 to gain Django 1.8 support. No surprises expected, see release notes: https://github.com/lukaszb/django-guardian/blob/devel/CHANGES

  • Using django-formtools instead of the removed django.contrib.formtools now. Based on the Django release notes, these modules are the same except for the package name.

  • Updated pytest-django from 2.6.2 to 2.8.0. No tests required, but running the testsuite :smile:

  • Updated psycopg2 from 2.4 to 2.4.6 as 2.4.5 is required by Django 1.8. No trouble expected as Django is the layer between us and psycopg2. Also it’s only a minor version upgrade. Release notes: http://initd.org/psycopg/docs/news.html#what-s-new-in-psycopg-2-4-6

  • Added django.setup() to conf.py to load django properly for doc builds.

  • Added migrations for all apps with models in the readthedocs/ directory

Deployment notes

After you have updated the code and installed the new dependencies, you need to run these commands on the server:

python manage.py migrate contenttypes
python manage.py migrate projects 0002 --fake
python manage.py migrate --fake-initial

Locally, in a test environment, pip did not update to the specified commit of tastypie. It might be required to use pip install -U -r requirements/deploy.txt during deployment.

Development update notes

The readthedocs developers need to execute these commands when switching to this branch (or when this got merged into main):

  • Before updating please make sure that all migrations are applied:

    python manage.py syncdb
    python manage.py migrate
    
  • Update the codebase: git pull

  • You need to update the requirements with pip install -r requirements.txt

  • Now you need to fake the initial migrations:

    python manage.py migrate contenttypes
    python manage.py migrate projects 0002 --fake
    python manage.py migrate --fake-initial
    

About Read the Docs

Read the Docs is a C Corporation registered in Oregon. Our bootstrapped company is owned and fully controlled by the founders, and fully funded by our customers and advertisers. This allows us to focus 100% on our users.

We have two main sources of revenue:

  • Read the Docs for Business - where we provide a valuable paid service to companies.

  • Read the Docs Community - where we provide a free service to the open source community, funded via EthicalAds.

We believe that having both paying customers and ethical advertising is the best way to create a sustainable platform for our users. We have built something that we expect to last a long time, and we are able to make decisions based only on the best interest of our community and customers.

All of the source code for Read the Docs is open source. You are welcome to contribute the features you want or run your own instance. We should note that we generally only support our hosted versions as a matter of our philosophy.

We owe a great deal to the open source community that we are a part of, so we provide free ads via our community ads program. This allows us to give back to the communities and projects that we support and depend on.

We are proud about the way we manage our company and products, and are glad to have you on board with us in this great documentation journey.

If you want to dive into more specific information and our policies, we’ve listed the most important ones below.

Business hosting

Learn more about how our company provides paid solutions

Policies and legal documents

Policies and legal documents used by Read the Docs Community and Read the Docs for Business.

Advertising

Information about how advertising works on Read the Docs.

The story of Read the Docs

A brief throwback to how we were founded

Sponsors of Read the Docs

Read about who currently sponsors Read the Docs and who sponsored us in the past.

Read the Docs open source philosophy

Our philosophy is anchored in open source.

Read the Docs team

How we work and who we are.

Site support

Read this before asking for help: How to get support and where.

Glossary

A useful index of terms used in our docs

See also

Our website

Our primary website has general-purpose information about Read the Docs like pricing and feature overviews.

Advertising

Advertising is the single largest source of funding for Read the Docs. It allows us to:

  • Serve over 35 million pages of documentation per month

  • Serve over 40 TB of documentation per month

  • Host over 80,000 open source projects and support over 100,000 users

  • Pay a small team of dedicated full-time staff

Many advertising models involve tracking users around the internet, selling their data, and privacy intrusion in general. Instead of doing that, we built an Ethical Advertising model that respects user privacy.

We recognize that advertising is not for everyone. You may opt out of paid advertising although you will still see community ads. Gold members may also remove advertising from their projects for all visitors.

For businesses looking to remove advertising, please consider Read the Docs for Business.

EthicalAds

Read the Docs is a large, free web service. There is one proven business model to support this kind of site: Advertising. We are building the advertising model we want to exist, and we’re calling it EthicalAds.

EthicalAds respect users while providing value to advertisers. We don’t track you, sell your data, or anything else. We simply show ads to users, based on the content of the pages you look at. We also give 10% of our ad space to community projects, as our way of saying thanks to the open source community.

We talk a bit below about our worldview on advertising, if you want to know more.

Are you a marketer?

We built a whole business around privacy-focused advertising. If you’re trying to reach developers, we have a network of hand-approved sites (including Read the Docs) where your ads are shown.

Feedback

We’re a community, and we value your feedback. If you ever want to reach out about this effort, feel free to shoot us an email.

You can opt out of having paid ads on your projects, or of seeing paid ads, if you want. You will still see community ads, which we run for free to promote community projects.

Our worldview

We’re building the advertising model we want to exist:

  • We don’t track you

  • We don’t sell your data

  • We host everything ourselves, no third-party scripts or images

We’re doing newspaper advertising, on the internet. For a hundred years, newspapers put an ad on the page, some folks would see it, and advertisers would pay for this. This is our model.

So much ad tech has been built to track users. Following them across the web, from site to site, showing the same ads and gathering data about them. Then retailers sell your purchase data to try and attribute sales to advertising. Now there is an industry in doing fake ad clicks and other scams, which leads the ad industry to track you even more intrusively to know more about you. The current advertising industry is in a vicious downward spiral.

As developers, we understand the massive downsides of the current advertising industry. This includes malware, slow site performance, and huge databases of your personal data being sold to the highest bidder.

The trend in advertising is to have larger and larger ads. They should run before your content, they should take over the page, the bigger, weirder, or flashier the better.

We opt out
  • We don’t store personal information about you.

  • We only keep track of views and clicks.

  • We don’t build a profile of your personality to sell ads against.

  • We only show high quality ads from companies that are of interest to developers.

We are running a single, small, unobtrusive ad on documentation pages. The products should be interesting to you. The ads won’t flash or move.

We run the ads we want to have on our site, in a way that makes us feel good.

Additional details
  • We have additional documentation on the technical details of our advertising including our Do Not Track policy and our use of analytics.

  • We have an advertising FAQ written for advertisers.

  • We have gone into more detail about our views in our blog post about this topic.

  • Eric Holscher, one of our co-founders talks a bit more about funding open source this way on his blog.

  • After proving our ad model as a way to fund open source and building our ad serving infrastructure, we launched the EthicalAds network to help other projects be sustainable.

Join us

We’re building the advertising model we want to exist. We hope that others will join us in this mission:

  • If you’re a developer, talk to your marketing folks about using advertising that respects your privacy.

  • If you’re a marketer, vote with your dollars and support us in building the ad model we want to exist. Get more information on what we offer.

Community Ads

There are a large number of projects, conferences, and initiatives that we care about in the software and open source ecosystems. Many of them operate like we did in the past, with almost no income. Our Community Ads program highlights some of these projects.

There are a few qualifications for our Community Ads program:

  • Your organization and the linked site should not be trying to entice visitors to buy a product or service. We make an exception for conferences around open source projects if they are run not for profit and soliciting donations for open source projects.

  • A software project should have an OSI approved license.

  • We will not run a community ad for an organization tied to one of our paid advertisers.

We’ll show 10% of our ad inventory each month to support initiatives that we care about. Please complete an application to be considered for our Community Ads program.

Opting out

We have added multiple ways to opt out of the advertising on Read the Docs.

  1. Gold members may remove advertising from their projects for all visitors.

  2. You can opt out of seeing paid advertisements on documentation pages:

    • Go to the drop-down user menu in the top right of the Read the Docs dashboard and click Settings (https://readthedocs.org/accounts/edit/).

    • On the Advertising tab, you can deselect See paid advertising.

    You will still see community ads for open source projects and conferences.

  3. Project owners can also opt out of paid advertisements for their projects. You can change these options:

    • Go to your project page (/projects/<slug>/)

    • Go to Admin > Advertising

    • Change your advertising settings

  4. If you are part of a company that uses Read the Docs to host documentation for a commercial product, we offer Read the Docs for Business that offers a completely ad-free experience, additional build resources, and other great features like CDN support and private documentation.

  5. If you would like to completely remove advertising from your open source project, but our commercial plans don’t seem like the right fit, please get in touch to discuss alternatives to advertising.

Advertising details

Read the Docs largely funds our operations and development through advertising. However, we aren’t willing to compromise our values, document authors, or site visitors simply to make a bit more money. That’s why we created our ethical advertising initiative.

We get a lot of inquiries about our approach to advertising which range from questions about our practices to requests to partner. The goal of this document is to shed light on the advertising industry, exactly what we do for advertising, and how what we do is different. If you have questions or comments, send us an email or open an issue on GitHub.

Other ad networks’ targeting

Some ad networks build a database of user data in order to predict the types of ads that are likely to be clicked. In the advertising industry, this is called behavioral targeting. This can include data such as:

  • sites a user has visited

  • a user’s search history

  • ads, pages, or stories a user has clicked on in the past

  • demographic information such as age, gender, or income level

Typically, getting a user’s page visit history is accomplished by the use of trackers (sometimes called beacons or pixels). For example, if a site uses a tracker from an ad network and a user visits that site, the site can now target future advertising to that user – a known past visitor – with that network. This is called retargeting.

Other ad predictions are made by grouping similar users together based on user data using machine learning. Frequently this involves an advertiser uploading personal data on users (often past customers of the advertiser) to an ad network and telling the network to target similar users. The idea is that two users with similar demographic information and similar interests would like the same products. In ad tech, this is known as lookalike audiences or similar audiences.

Understandably, many people have concerns about these targeting techniques. The modern advertising industry has built enormous value by centralizing massive amounts of data on as many people as possible.

Our targeting details

Read the Docs doesn’t use the above techniques. Instead, we target based solely upon:

  • Details of the page where the advertisement is shown including:

    • The name, keywords, or programming language associated with the project being viewed

    • Content of the page (eg. H1, title, theme, etc.)

    • Whether the page is being viewed from a mobile device

  • General geography

    • We allow advertisers to target ads to a list of countries or to exclude countries from their advertising. For ads targeting the USA, we also support targeting by state or by metro area (DMA specifically).

    • We geolocate a user’s IP address to a country when a request is made.

Where ads are shown

We can place ads in:

  • the sidebar navigation

  • the footer of the page

  • on search result pages

  • a small footer fixed to the bottom of the viewport

  • on 404 pages (rare)

We show no more than one ad per page so you will never see both a sidebar ad and a footer ad on the same page.

Do Not Track Policy

Read the Docs supports Do Not Track (DNT) and respects users’ tracking preferences. For more details, see the Do Not Track section of our privacy policy.

Ad serving infrastructure

Our entire ad server is open source, so you can inspect how we’re doing things. We believe strongly in open source, and we practice what we preach.

Analytics

Analytics are a sensitive enough issue that they require their own section. In the spirit of full transparency, Read the Docs uses Google Analytics (GA). We go into a bit of detail on our use of GA in our Privacy Policy.

GA is a contentious issue inside Read the Docs and in our community. Some users are privacy conscious and very sensitive to the usage of GA. Some authors want their own analytics on their docs to see the usage their docs get. The developers at Read the Docs understand that different users have different priorities, and we try to respect the different viewpoints as much as possible while also accomplishing our own goals.

We have taken steps to address some of the privacy concerns surrounding GA. These steps apply both to analytics collected by Read the Docs and when authors enable analytics on their docs.

  • Users can opt-out of analytics by using the Do Not Track feature of their browser.

  • Read the Docs instructs Google to anonymize IP addresses sent to them.

  • The cookie set by GA is a session (non-persistent) cookie rather than the default 2 years.

  • Project maintainers can completely disable analytics on their own projects. Follow the steps in Disabling Google Analytics on your project.

Why we use analytics

Advertisers ask us questions that are easily answered with an analytics solution like “how many users do you have in Switzerland browsing Python docs?”. We need to be able to easily get this data. We also use data from GA for some development decisions such as what browsers to support (or not) or how much usage a particular page or feature gets.

Alternatives

We are always exploring our options with respect to analytics. There are alternatives but none of them are without downsides. Some alternatives are:

  • Run a different cloud analytics solution from a provider other than Google (eg. Parse.ly, Matomo Cloud, Adobe Analytics). We priced a couple of these out based on our load and they are very expensive. They also just substitute one problem of data sharing with another.

  • Send data to GA (or another cloud analytics provider) on the server side and strip or anonymize personal data such as IPs before sending them. This would be a complex solution and involve additional infrastructure, but it would have many advantages. It would result in a loss of data on “sessions” and new vs. returning visitors, which are of limited value to us.

  • Run a local JavaScript based analytics solution (eg. Matomo community). This involves additional infrastructure that needs to be always up. Frequently there are very large databases associated with this. Many of these solutions aren’t built to handle Read the Docs’ load.

  • Run a local analytics solution based on web server log parsing. This has the same infrastructure problems as above while also not capturing all the data we want (without additional engineering) like the programming language of the docs being shown or whether the docs are built with Sphinx or something else.

Ad blocking

Ad blockers fulfill a legitimate need to mitigate the significant downsides of advertising: tracking across the internet, the security implications of third-party code, and the impact on the UX and performance of sites.

At Read the Docs, we specifically didn’t want those things. That’s why we built our EthicalAds initiative with only relevant, unobtrusive ads that respect your privacy and don’t do creepy behavioral targeting.

Advertising is the single largest source of funding for Read the Docs. To keep our operations sustainable, we ask that you either allow our EthicalAds or go ad-free.

Allowing EthicalAds

If you use AdBlock or AdBlockPlus and you allow acceptable ads or privacy-friendly acceptable ads then you’re all set. Advertising on Read the Docs complies with both of these programs.

If you prefer not to allow acceptable ads but would consider allowing ads that benefit open source, please consider subscribing to either the wider Open Source Ads list or simply the Read the Docs Ads list.

Note

Because of the way Read the Docs is structured, with docs hosted on many different domains, adding a normal ad block exception will only allow that single domain, not Read the Docs as a whole.

Going ad-free

Gold members may completely remove advertising for all visitors to their projects. Thank you for supporting Read the Docs.

Note

Previously, Gold members or Supporters were provided an ad-free reading experience across all projects on Read the Docs while logged-in. However, the cross-site cookies needed to make that work are no longer supported by major browsers outside of Chrome, and this feature will soon disappear entirely.

Statistics and data

It can be really hard to find good data on ad blocking. In the spirit of transparency, here is the data we have on ad blocking at Read the Docs.

  • 32% of Read the Docs users use an ad blocker

  • Of those, a little over 50% allow acceptable ads

  • Read the Docs users running ad blockers click on ads at about the same rate as those not running an ad blocker.

  • Comparing with our server logs, roughly 28% of our hits did not register a Google Analytics (GA) pageview due to an ad blocker, privacy plugin, disabling JavaScript, or another reason.

  • Of users who do not block GA, about 6% opt out of analytics on Read the Docs by enabling Do Not Track.

Customizing advertising

Warning

This document details features that are a work in progress. To discuss this document, please get in touch in the issue tracker.

In addition to allowing users and documentation authors to opt out of advertising, we allow some additional controls for documentation authors to control the positioning and styling of advertising. This can improve the performance of advertising or make sure the ad is in a place where it fits well with the documentation.

Controlling the placement of an ad

It is possible for a documentation author to instruct Read the Docs to position advertising in a specific location. This is done by adding a specific element to the generated body. The ad will be inserted into this container wherever this element is in the document body.

<div id="ethical-ad-placement"></div>

In Sphinx

In Sphinx, this is typically done by adding a new template (under templates_path) for inclusion in the HTML sidebar in your conf.py.

# In conf.py
html_sidebars = {
    "**": [
        "localtoc.html",
        "ethicalads.html",  # Put the ad below the navigation but above previous/next
        "relations.html",
        "sourcelink.html",
        "searchbox.html",
    ]
}

<!-- In _templates/ethicalads.html -->
<div id="ethical-ad-placement"></div>
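For Sphinx to find the ethicalads.html template above, the directory that contains it must be listed in templates_path in conf.py. A minimal sketch, assuming the conventional _templates directory name (adjust to your project's actual layout):

```python
# In conf.py -- make the custom template directory visible to Sphinx.
# "_templates" is the conventional directory name; your project may differ.
templates_path = ["_templates"]
```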

The story of Read the Docs

Documenting projects is hard, hosting them shouldn’t be. Read the Docs was created to make hosting documentation simple.

Read the Docs was started with a couple main goals in mind. The first goal was to encourage people to write documentation, by removing the barrier of entry to hosting. The other goal was to create a central platform for people to find documentation. Having a shared platform for all documentation allows for innovation at the platform level, allowing work to be done once and benefit everyone.

Documentation matters, but it’s often overlooked. We think that we can help a documentation culture flourish. Great projects, such as Django and SQLAlchemy, and projects from companies like Mozilla, are already using Read the Docs to serve their documentation to the world.

The site has grown quite a bit over the past year. Our look back at 2013 shows some numbers that show our progress. The job isn’t anywhere near done yet, but it’s a great honor to be able to have such an impact already.

We plan to keep building a great experience for people hosting their docs with us, and for users of the documentation that we host.

Sponsors of Read the Docs

Running Read the Docs isn’t free, and the site wouldn’t be where it is today without the generous support of our sponsors. Below is a list of all the folks who have helped the site financially, in order of the date they first started supporting us.

Current sponsors

  • AWS - They cover all of our hosting expenses every month. This is a pretty large sum of money, averaging around $5,000/mo.

  • Cloudflare - Cloudflare is providing us with an enterprise plan of their SSL for SaaS Providers product that enables us to provide SSL certificates for custom domains.

  • Chan Zuckerberg Initiative - Through their “Essential Open Source Software for Science” programme, they fund our ongoing efforts to improve scientific documentation and make Read the Docs a better service for scientific projects.

  • You? (Email us at hello@readthedocs.org for more info)

Past sponsors

Sponsorship information

As part of increasing sustainability, Read the Docs is testing out promoting sponsors on documentation pages. We have more information about this in our blog post about this effort.

Documentation in scientific and academic publishing

On this page, we explore some of the many tools and practices that software documentation and academic writing share. If you are working within the field of science or academia, this page can be used as an introduction.

Documentation and technical writing are broad fields. Their tools and practices have grown relevant to most scientific activities. This includes building publications, books, educational resources, interactive data science, resources for data journalism and full-scale websites for research projects and courses.

Here’s a brief overview of some features that people in science and academic writing love about Read the Docs:

🪄 Easy to use

Documentation code doesn’t have to be written by a programmer. In fact, documentation coding languages are designed and developed so you don’t have to be a programmer, and there are many writing aids that make it easy to abstract from code and focus on content.

Getting started is also easy.

🔋 Batteries included: Graphs, computations, formulas, maps, diagrams and more

Take full advantage of the richness of Jupyter Notebook combined with Sphinx and the giant ecosystem of extensions for both.

Here are some examples:

  • Use symbols familiar from math and physics, build advanced proofs. See also: sphinx-proof

  • Present results with plots, graphs, images and let users interact directly with your datasets and algorithms. See also: Matplotlib, Interactive Data Visualizations

  • Graphs, tables etc. are computed when the latest version of your project is built and published as a stand-alone website. All code examples on your website are validated each time you build.
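
Validation of code examples like this is commonly done with Python's doctest module, which Sphinx can run through its sphinx.ext.doctest extension. A minimal sketch (the function and its output are illustrative, echoing the fictional lumache library from the Sphinx tutorial):

```python
def get_random_ingredients(kind=None):
    """Return a list of ingredients for the given kind of recipe.

    The example below is checked on every build, so the published
    docs cannot silently drift away from the code's real behavior:

    >>> get_random_ingredients()
    ['shells', 'gorgonzola', 'parsley']
    """
    return ["shells", "gorgonzola", "parsley"]


if __name__ == "__main__":
    # The same check can also be run locally with the standard library:
    import doctest
    doctest.testmod()
```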

📚 Bibliographies and external links

Maintain bibliography databases directly as code and have external links automatically verified.

Using extensions for Sphinx such as the popular sphinxcontrib-bibtex extension, you can maintain your bibliography directly in Sphinx or refer to entries in .bib files, as well as generate entire bibliography sections from those files.
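
For example, enabling sphinxcontrib-bibtex takes two lines in the Sphinx configuration (a minimal sketch; the refs.bib filename is an assumption):

```python
# docs/source/conf.py
extensions = [
    "sphinxcontrib.bibtex",  # BibTeX support for Sphinx
]

# One or more BibTeX databases, kept under version control like code
bibtex_bibfiles = ["refs.bib"]
```

Entries from the .bib files can then be cited with the extension's cite role, and a full bibliography section generated with its bibliography directive.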

📜 Modern themes and classic PDF outputs

Use the latest state-of-the-art themes for web and have PDFs and e-book formats automatically generated.

New themes are improving every day, and when you write documentation based on Jupyter Book and Sphinx, you will separate your contents and semantics from your presentation logic. This way, you can keep up with the latest theme updates or try new themes.

Another example of the benefits from separating content and presentation logic: Your documentation also transforms into printable books and eBooks.

📐 Widgets, widgets and more widgets

Design your science project’s layout and components with widgets from a rich ecosystem of open-source extensions built for many purposes. Special widgets help users display and interact with graphs, maps and more. Many of these extensions are built by the science community itself.

⚙️ Automatic builds

Build and publish your project for every change made through Git (GitHub, GitLab, Bitbucket etc.). Preview changes via pull requests. Receive notifications when something is wrong.

💬 Collaboration and community

Science and academia have a big kinship with software developers: We ❤️ community. Our solutions and projects become better when we foster inclusivity and active participation. Read the Docs makes it easy for readers to suggest changes via your git platform (GitHub, GitLab, Bitbucket etc.). And this is more than a feedback form: the sources and all the tools are available for your community to craft qualified contributions.

Your readers can become your co-authors!

Discuss changes via pull request and track all changes in your project’s version history.

Using git does not mean that anyone can go and change your code and your published project. The full ownership and permission handling remains in your hands. Project and organization owners on your git platform govern what is released and who has access to approve and build changes.

🔎 Full search and analytics

Read the Docs comes with a number of features bundled in that you would have to configure if you were hosting documentation elsewhere.

Super-fast text search

Your documentation is automatically indexed and gets its own search function.

Traffic statistics

Have full access to your traffic data and have quick access to see which of your pages are most popular.

Search analytics

What are people searching for and do they get hits? From each search query in your documentation, we collect a neat little statistic that can help to improve the discoverability and relevance of your documentation.

SEO - Don’t reinvent search engine optimization

Use built-in SEO best-practices from Sphinx, its themes and Read the Docs hosting. This can give you a good ranking on search engines as a direct outcome of simply writing and publishing your documentation project.

🌱 Grow your own solutions

The ecosystem is open source, making it accessible for anyone with Python skills to build their own extensions.

We want science communities to use Read the Docs and to be part of the documentation community 💞

Getting started: Jupyter Book

Jupyter Book on Read the Docs brings you the rich experience of executable Jupyter documents built with a modern documentation tool. The results are beautiful, automatically deployed websites, built with Sphinx and the Executable Books ecosystem plus all the extensions available for both.

Here are some popular activities that are well-supported by Jupyter Book:

  • Publications and books

  • Course and research websites

  • Interactive classroom activities

  • Data science software documentation

Visit the gallery of solutions built with Jupyter Book »

Ready to get started?
Examples and users

Read the Docs community for science is already big and keeps growing. The Jupyter Project itself and the many sub-projects of Jupyter are built and published with Read the Docs.

Jupyter Project Documentation
Chainladder - Property and Casualty Loss Reserving in Python
Feature-engine - A Python library for Feature Engineering and Selection

Read the Docs open source philosophy

Read the Docs is open source software. We have licensed the code base as MIT, which provides almost no restrictions on the use of the code.

However, as a project there are things that we care about more than others. We built Read the Docs to support documentation in the open source community. The code is open for people to contribute to, so that they may build features into https://readthedocs.org that they want. We also believe sharing the code openly is a valuable learning tool, especially for demonstrating how to collaborate and maintain an enormous website.

Official support

The time of the core developers of Read the Docs is limited. We provide official support for the following things:

Unsupported

There are use cases that we don’t support, because they don’t further our goal of promoting documentation in the open source community.

We do not support:

  • Specific usage of Sphinx and MkDocs that doesn’t affect our hosting

  • Custom installations of Read the Docs at your company

  • Installation of Read the Docs on other platforms

  • Any installation issues outside of the Read the Docs Python code

Rationale

Read the Docs was founded to improve documentation in the open source community. We fully recognize and allow the code to be used for internal installs at companies, but we will not spend our time supporting it. Our time is limited, and we want to spend it on the mission that we set out to originally support.

If you feel strongly about installing Read the Docs internal to a company, we will happily link to third party resources on this topic. Please open an issue with a proposal if you want to take on this task.

Read the Docs team

readthedocs.org is the largest open source documentation hosting service. Today we:

  • Serve over 55 million pages of documentation a month

  • Serve over 40 TB of documentation a month

  • Host over 80,000 open source projects and support over 100,000 users

Read the Docs is provided as a free service to the open source community, and we hope to maintain a reliable and stable hosting platform for years to come.

See also

Our website: Who we are

More information about the staff and contributors of Read the Docs.

Teams

  • The Backend Team folks develop the Django code that powers the backend of the project.

  • The members of the Frontend Team care about UX, CSS, HTML, and JavaScript, and they maintain the project UI as well as the Sphinx theme.

  • As part of operating the site, members of the Operations Team maintain a 24/7 on-call rotation. This means that folks have to be available and have their phone in service.

  • The members of the Advocacy Team spread the word about all the work we do, and seek to understand users’ priorities and feedback.

  • The Support Team helps the thousands of users of the service with tasks like resetting passwords, enabling experimental features, or troubleshooting build errors.

Note

Please don’t email us personally for support on Read the Docs. You can use our support form for any issues you may have.

Site support

Read the Docs offers support for projects on our Read the Docs for Business and Read the Docs Community platforms. We’re happy to assist with any questions or problems you have using either of our platforms.

Note

Read the Docs does not offer support for questions or problems with documentation tools or content. If you have a question or problem using a particular documentation tool, you should refer to external resources for help instead.

Some examples of requests that we support are:

  • “How do I transfer ownership of a Read the Docs project to another maintainer?”

  • “Why are my project builds being cancelled automatically?”

  • “How do I manage my subscription?”

You might also find the answers you are looking for in our documentation guides. These provide step-by-step solutions to common user requests.

Please fill out the form at https://readthedocs.com/support/.

Our team responds to support requests within 2 business days or earlier for most plans. Faster support response times and support SLAs are available with plan upgrades.

External resources

If you have questions about how to use a documentation tool or authoring content for your project, or have an issue that isn’t related to a bug with Read the Docs, Stack Overflow is the best place for your question.

Examples of good questions for Stack Overflow are:

  • “What is the best way to structure the table of contents across a project?”

  • “How do I structure translations inside of my project for easiest contribution from users?”

  • “How do I use Sphinx to use SVG images in HTML output but PNG in PDF output?”

Tip

Tag questions with read-the-docs so other folks can find them easily.

Bug reports

If you have an issue with the actual functioning of Read the Docs, you can file bug reports on our GitHub issue tracker. You can also contribute changes and fixes to Read the Docs, as the code is open source.

Glossary

This page includes a number of terms that we use in our documentation, so that you have a reference for how we’re using them.

CI/CD

CI/CD is a common way to write Continuous Integration and Continuous Deployment. In some scenarios, they exist as two separate platforms. Read the Docs is a combined CI/CD platform made for documentation.

dashboard

The “admin” site where Read the Docs projects are managed and configured. This varies for our two properties:

default version

Projects have a default version, usually the latest stable version of a project. The default version is the one that users are redirected to when they load the / URL for your project.

discoverability

A documentation page is said to be discoverable when a user that needs it can find it through various methods: Navigation, search, and links from other pages are the most typical ways of making content discoverable.

Docs as Code

A term used to describe the workflow of keeping documentation in a Git repository, along with source code. Popular in the open source software movement, and used by many technology companies.

flyout menu

Menu displayed on the documentation, readily accessible for readers, containing the list of active versions, links to static downloads, and other useful links. Read more in our Flyout menu page.

GitOps

Denotes the use of code maintained in Git to automate building, testing, and deployment of infrastructure. In terms of documentation, GitOps is applicable for Read the Docs, as the configuration for building documentation is stored in .readthedocs.yaml, and rules for publication of documentation can be automated. Similar to Docs as Code.
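
For example, a minimal .readthedocs.yaml kept in the repository alongside the sources might look like this (a sketch; the Python version and paths are assumptions matching the tutorial project layout):

```yaml
# .readthedocs.yaml
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

# Build documentation with Sphinx
sphinx:
  configuration: docs/source/conf.py
```

Because this file lives in Git, every change to the build environment is reviewed, versioned, and applied automatically, which is the essence of the GitOps approach.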

maintainer

A maintainer is a special role that only exists on Read the Docs Community. The creator of a project on Read the Docs Community can invite other collaborators as maintainers with full ownership rights.

The maintainer role does not exist on Read the Docs for Business, which instead provides Organizations.

Please see Git provider integrations for more information.

pinning

To pin a requirement means to explicitly specify which version should be used. Pinning software requirements is the most important technique to make a project reproducible.

When documentation builds, software dependencies are installed at the latest versions permitted by the pinning specification. Because new versions of software packages are released frequently, pinning helps keep incompatibilities in a new release from suddenly breaking a documentation build.

Examples of Python dependencies:

# Exact pinning: Only allow Sphinx 5.3.0
sphinx==5.3.0

# Loose pinning: Lower and upper bounds result in the latest 5.3.x release
sphinx>=5.3,<5.4

# Very loose pinning: Lower and upper bounds result in the latest 5.x release
sphinx>=5,<6

Read the Docs recommends using exact pinning.

See: How to create reproducible builds.

pre-defined build jobs

Commands executed by Read the Docs when performing the build process. They cannot be overridden by the user.

project home

Page where you can access all the features of Read the Docs, from having an overview to browsing the latest builds or administering your project.

project page

Another name for project home.

reproducible

A documentation project is said to be reproducible when its sources build correctly on Read the Docs over a period of many years. You can also think of reproducibility as robustness or resilience.

Being “reproducible” is an important positive quality goal of documentation.

When builds are not reproducible and break due to external factors, they need frequent troubleshooting and manual fixing.

The most common external factor is that new versions of software dependencies are released.

See: How to create reproducible builds.

root URL

Home URL of your documentation without the /<lang> and /<version> segments. For projects without custom domains, the one ending in .readthedocs.io/ (for example, https://docs.readthedocs.io as opposed to https://docs.readthedocs.io/en/latest).

slug

A unique identifier for a project or version. This value comes from the project or version name, which is reduced to lowercase letters, numbers, and hyphens. You can retrieve your project or version slugs from our API.
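
As a rough illustration of the idea (a sketch only, not Read the Docs’ actual implementation):

```python
import re


def slugify(name: str) -> str:
    """Reduce a project or version name to lowercase letters,
    numbers, and hyphens, as described above."""
    # Collapse every run of other characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    # Drop hyphens left at either end
    return slug.strip("-")


print(slugify("My Example Project 2.0"))  # -> "my-example-project-2-0"
```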

static website

A static site or static website is a collection of HTML files, images, CSS and JavaScript that are served statically, as opposed to dynamic websites that generate a unique response for each request, using databases and user sessions.

Static websites are highly portable, as they do not depend on a particular web server. They can also be viewed offline.

Documentation projects served on Read the Docs are static websites.

Tools to manage and generate static websites are commonly known as static site generators and there is a big overlap with documentation tools. Some static site generators are also documentation tools, and some documentation tools are also used to generate normal websites.

For instance, Sphinx is made for documentation but also used for blogging.

subproject

Project A can be configured such that when requesting a URL /projects/<subproject-slug>, the root of project B is returned. In this case, project B is the subproject. Read more in Subprojects.

user-defined build jobs

Commands defined by the user that Read the Docs will execute when performing the build process.

webhook

A webhook is a special URL that can be called from another service, usually with a secret token. It is commonly used to start a build or a deployment or to send a status update.

There are two important types of webhooks for Read the Docs:

  • Git providers have webhooks which are special URLs that Read the Docs can call in order to notify about documentation builds.

  • Read the Docs has a unique webhook for each project that the Git provider calls when changes happen in Git.

See also: How to manually configure a Git repository integration and Build failure notifications


Read the Docs simplifies software documentation by building, versioning, and hosting of your docs, automatically. Treating documentation like code keeps your team in the same tools, and your documentation up to date.

Up to date documentation

Whenever you push code to Git, Read the Docs will automatically build your docs so your code and documentation are always up-to-date. Get started with our tutorial.

Documentation for every version

Read the Docs can host multiple versions of your docs. Keep your 1.0 and 2.0 documentation online, pulled directly from Git. Start hosting all your versions.

Open source and user focused

Our company is bootstrapped and 100% user-focused, so our product gets better for our users instead of our investors. Read the Docs Community hosts documentation for over 100,000 large and small open source projects. Read the Docs for Business supports hundreds of organizations with product and internal documentation. Learn more about our two platforms.

First time here?

We have a few places for you to get started:

Read the Docs tutorial

Take the first practical steps with Read the Docs.

Choosing between our two platforms

Learn about the differences between Read the Docs Community and Read the Docs for Business.

Example projects

Start your journey with an example project to hit the ground running.

Project setup and configuration

Start with the basics of setting up your project:

Configuration file overview

Learn how to configure your project with a .readthedocs.yaml file.

How to create reproducible builds

Learn how to make your builds reproducible.

Build process

Build your documentation with ease:

Build process overview

Overview of how documentation builds happen.

Pull request previews

Set up pull request builds and enjoy previews of each commit.

Hosting documentation

Learn more about our hosting features:

Versions

Host multiple versions of your documentation.

Subprojects

Host multiple projects under a single domain.

Localization and Internationalization

Host your documentation in multiple languages.

URL versioning schemes

Learn about different versioning schemes.

Custom domains

Host your documentation on your own domain.

Maintaining projects

Keep your documentation up to date:

Redirects

Redirect your old URLs to new ones.

Analytics for search and traffic

Learn more about how users are interacting with your documentation.

Security logs

Keep track of security events in your project.

Business features

Features for organizations and businesses:

Business hosting

Learn more about our commercial features.

Organizations

Learn how to manage your organization on Read the Docs.

Single Sign-On (SSO)

Learn how to use single sign-on with Read the Docs.

How-to guides

Step-by-step guides for common tasks:

How to configure pull request builds

Set up pull request builds and enjoy previews of each commit.

How to use traffic analytics

Learn more about how users are interacting with your documentation.

How to use cross-references with Sphinx

Learn how to use cross-references in a Sphinx project.

All how-to guides

Browse the entire catalog for many more how-to guides.

Reference

More detailed information about Read the Docs:

Public REST API

Automate your documentation with our API and save yourself some work.

Changelog

See what’s new in Read the Docs.

About Read the Docs

Learn more about Read the Docs and our company.