Build or Contribute to Documentation with a Git-based Workflow by Erik Dahl Read the Docs is a great documentation platform used by many open source projects. This course teaches you how to create your own documentation project, use the reStructuredText markup language, and the basics of a Git-based workflow for pull requests. Course Overview Hi everyone, my name is Erik Dahl and welcome to my course Build or Contribute to Documentation with a Git-based Workflow. I'm a principal architect at RealPage. Do you author projects that other developers will be using and ever have to answer questions about your project? Or have you used an open source project and needed to pose questions in a forum or on Stack Overflow to get an answer about how to use the project in your application? If you answered yes to either of these questions, then this course is for you. We'll be addressing both the creation of a documentation project from scratch, as well as making updates or contributions using a Git-based workflow, which will include pull requests and continuous integration and delivery. No prior Git experience is assumed, so if you've been looking for that course that can help you hit the ground running with Git and pull requests, you've found it. Some of the major topics that we'll cover include setting up a brand new documentation project, learning the ReStructured Text markup syntax, setting up continuous integration and delivery, and reviewing the full Git workflow for a contribution via pull request. 
By the end of this course you'll know how to create your own doc project, how to contribute to open source docs, how to make pull requests, and even what hosting options are available for your source code and documentation site. I hope you'll join me on this exciting exploration of documentation tech and Git workflows with the Build or Contribute to Documentation with a Git-based Workflow course, right here at Pluralsight. Free up Your Time or Give a Little Back: A Case for Documentation Overview and Course Structure Documentation is rarely a first-class citizen when it comes to software projects. But getting it done well can improve the usability and adoption of your project and enable you to move on more quickly to either the next features or the next project without getting bogged down in routine support of what you've just created and released. Hi, I'm Erik Dahl, and in this course I'll be discussing both building a documentation project from scratch and the Git-based workflow used for making contributions and changes to existing documentation on projects. For the from-scratch doc project, we'll focus on a platform called Read The Docs. But know up front that the workflow we'll be discussing will apply to most open source projects, whether they're using Read The Docs or not. So whether you're a software author of a package needing documentation or a consumer of an open source solution with some documentation gaps, this course will have something for you. Let's jump in and get started. In this module we'll start out by asking why we should bother with documentation in the first place. We'll take this question from both the point of view of an author of a package or application and a consumer of a package or application from the open source world. We'll move on to looking at the options for how we can put together some documentation. 
Then we'll discuss some of the key benefits of using Read The Docs, the platform which will serve as the foundation for what we do in the remainder of the course. Lastly in this module, we'll have a look at the rest of the course and what you can expect there. If you've written a software package or application that needs documentation, the reasons for creating good docs are pretty compelling. First off, you're providing a much better out-of-the-box experience to new users or consumers of your package. With getting-started information, you're essentially providing a nice welcome mat to new users trying to evaluate whether or not to use your package. This is helpful even if they ultimately don't have a choice regarding whether or not to use your application. And by going further than just getting started with your documentation, you're also helping them to understand not just how to use your package, but how to use it properly. Communicating standard flows and/or configuration to ensure proper use can often be the difference between a successful and a failed implementation. There are also selfish reasons to create documentation for your packages. The first of which is that you can avoid getting a lot of questions if you provide answers in the documentation in an easy-to-understand format. Even better than not getting questions is that you can avoid many of the repeat questions that different users or consumers will ask when they first come to your package. You may not completely avoid the need to provide answers or explanations, but having good docs definitely helps you avoid this to the extent possible. Lastly, you can avoid becoming a bottleneck to development or adoption efforts due to those questions that come your way. If others are dependent on your answers to get the package adopted, your speed of answer becomes a gating factor for them. 
If the answers to what they will likely encounter are in the docs that you've created, they can keep moving however fast their project warrants. You're not off the hook if you use other people's software packages. This will often come in the shape of an open source package that you might want to use in your own applications. If the final applications you write will be used by others, the argument for documentation is just what we've discussed in the previous slide. But if you struggled at all or had questions with the adoption of that open source package that you included, you may want to help out. The authors of the open source package created something that gave you some benefit, and adding some docs, either describing how you got through your troubles or improving the information already available, can be a great way to give back to the community in a low-risk but very beneficial way. If you had difficulty in an area of adopting the package, chances are others will or already are having the same difficulty, and you could directly help them by adding some documentation. Having a better set of documentation is something that improves the package as a whole and will help encourage even more adoption, which spurs on additional development and contribution. Lastly, this is a great way to step into the world of open source software without needing to know all of the coding nuances of the package. You start your contributions here with documentation, and if you get involved deeper later on, you'll already be a known contributor and will be a little more familiar with the processes around the package in question. So hopefully I've convinced you that both getting documentation in place is a good thing and that improving it over time is as important as continually developing the code itself. So what are your options for getting something in place that goes beyond a simple readme file? One of the easiest options is to set up a wiki for your project. 
Project wikis are available on many source hosting platforms, like GitHub, Bitbucket, and TFS. These provide some good functionality, but there is no real workflow involved for things like approving changes to the documentation. It's either a free-for-all or limited to contributors on the project. This can dissuade many would-be contributors from adding or improving the documentation you create. Another option is that you could roll your own documentation site or content. This is certainly an option that provides whatever kind of flexibility you might want, but it comes at the price of reinventing the wheel for many features that are in the box on other solutions. You have to consider things like search, navigation, organization, styling, and things like that that you could avoid spending cycles on if you use a ready-made platform. Plus, you may have to repeat this process if you have multiple projects or packages or applications that need distinct documentation. A third option is DocFX, which might be a pretty good option if you're a pure .NET shop. This is what Microsoft has used to create the new docs.microsoft.com site. The code-generated documentation is limited to C# and VB.NET, and it uses a custom markdown format along the lines of the one used by GitHub. And this just generates a documentation site for you. You have to figure out for yourself how you want to host the doc site. Finally we arrive at Read The Docs, our Goldilocks platform where things are not too custom and not too rigid, but just right. We get the right amount of flexibility, the right amount of control, and a good set of in-the-box features that make for a good all-around solution. The screenshot shown here from the Read The Docs site, which documents the Read The Docs platform, shows many of the features that we'll be leveraging in this course. You can see the built-in search capability right at the top. 
It has an easy-to-use nav section on the left that lets readers quickly find what they're after. Including code in the documentation is very easy, looks clean, and has syntax highlighting. Callouts like the note, when shown, are easy to create and are nicely eye-catching to the readers. Plus the design looks great and is mobile friendly. All of these in-the-box features let you focus less on the mechanics, look, and behavior of the documentation and more on the content itself, which is what you want in a documentation platform. Plus, Read The Docs provides free hosting and automated integration with GitHub, GitLab, and Bitbucket. You can also host the documentation yourself or in other ways, and we'll see these options later in the course. So for the rest of this course, I hope there's a little something for everyone. We'll start by building a documentation site using Read The Docs locally. This will involve installing some prerequisites, running through a wizard, and then executing a build process. We won't assume any prior knowledge in this course, so don't be worried if you haven't seen anything like this before. Next up we'll add some content to our documentation using the ReStructured Text format favored by Read The Docs. We'll look at multiple pages, updating the navigation, adding notes, code, tables, images, and other such stuff that you might be interested in. We'll move on to taking our local doc site and code and setting up a workflow and real hosting on ReadTheDocs.io. We'll set up automated continuous delivery so that the hosted documentation will be auto-updated when changes are committed to the master branch of our documentation in GitHub. We'll be doing all of this from scratch and also showing how to do a full pull request based contribution here. 
So if you've got a project that already uses something like this for documentation and are unsure or scared about the process, this piece of the course should clear up those fears or uncertainty and leave you confident to make that contribution you've been considering. We'll then go beyond ReStructured Text and look at incorporating some other content in our site. We'll add markdown support to the project, we'll look at supporting different versions of the documentation, and we'll also do a little customization using our own CSS. Lastly in this piece, we'll review hosting alternatives for both the documentation source and the site itself, including TFS for the source and Git-based CICD processes. So let's go create some documentation. Getting Started: Create Your Documentation Project Introduction A couple of installs and using the ReadTheDocs command line interface to create your documentation project will have you up and editing real documentation in a matter of minutes. I'm Erik Dahl and in this module we're going to jump right in by setting up a few requirements and generating a documentation project that we can edit and run locally. Let's go. Install Read the Docs Prerequisites To get started with the demo, we'll install our prerequisites, Python, Sphinx, and the ReadTheDocs theme. Then we'll use the CLI to create our doc project. We'll build the project and then have a look at the basic structure of the project, as well as checking out the result of the build in the browser. Then we'll make a "Hello, World!" type change using a text editor and verify that the changes get built into the documentation site. Before we download and install any prerequisites, I first want to point out the main ReadTheDocs sites. If you just do a web search for ReadTheDocs, chances are you'll end up here at readthedocs.org. This is where we'll sign up to get our docs hosted later in the course. 
You can see some content at the top that introduces ReadTheDocs and right in the middle of the content is a link to the guide to ReadTheDocs. When we follow that link we arrive at a documentation site about ReadTheDocs that was built using ReadTheDocs. The initial document we see covers how to get started with ReadTheDocs. Feel free and encouraged to browse around here for information beyond what we do in this course. We'll peek back here a couple of times throughout our journey and review a couple of specific items, but I just wanted to make sure you knew it was there. So now onto the prerequisites. Let's pop over to our other tab and get Python installed. To do that we just download the latest version of Python for whatever platform you're running on, and once it's downloaded, invoke it as an administrator. I'll check the option to add Python to my path and then install it. It'll likely take you a minute or two to do this, but I've collapsed this time since watching an install isn't that interesting. With the setup complete, we can close the installer and move to a console. And now that Python has been installed, we can install the other prerequisites. We'll do that right here from a command prompt and use the Python pip command to install these. The first is sphinx, it takes just a second to run through all of its stuff, and then that finishes up. We have one more to go. It's the ReadTheDocs theme. Voila. At this point we're all set to create our doc project. Use the CLI to Create a Documentation Project With everything installed, we can now set up our doc project. Throughout the rest of this course we'll be building a sample documentation site called Coding with Kids. The actual content for this site won't matter as much as the processes, syntax, and tools we'll be using to create it. 
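Before we move on to the quickstart, the prerequisite installs from this clip boil down to a couple of console commands. This is a sketch assuming Python and pip are already on your PATH; sphinx-rtd-theme is the theme's package name on PyPI:

```shell
# Install the Sphinx documentation generator
pip install sphinx

# Install the Read the Docs theme for Sphinx
pip install sphinx-rtd-theme
```

Run these from any command prompt or terminal; on some systems you may need pip3 instead of pip.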
The sphinx-quickstart command is what we'll use to create the project and we want to have considered two things before we run the command: where we should put our docs, inside a code project folder or on its own, and the project name and author name, which appear in the built docs. Armed with that information, we jump in. For our example we'll put our doc project in a place of its own and call the project Coding with Kids. So we'll first create that directory, and then we'll go into that directory, and then we'll run the sphinx-quickstart command. In most of the cases the defaults are fine and the text prompting you for a decision is pretty helpful. The first choice is where to put the docs and we'll leave the current directory default alone. We can leave the source and build directory config alone as well, which means that source content will be placed directly in the root folder and the build will go in a folder within the root directory. Prefixing directories for static content with an underscore is fine, so we'll leave that too. Next up comes our name prompts and we'll provide our choices for these. Moving on, we're asked about versions. If you're documenting a specific software project, you may want to provide these. I'll leave them blank. Next up comes a language option, which I'll leave as English. Then a suffix for source files, and I'll leave this as rst for ReStructured Text. I'll leave the starting doc named index, and then we can opt into epub support if we want to produce something that could be read on a Kindle or other ebook reader. I'll leave this off. Next up comes the option to include some sphinx extensions in our project. I won't include any of them; many, but not all, relate to documenting Python projects. Lastly, we agree to create a Makefile and a Windows command file to simplify our build process, and then we're done. If we now do a dir, we can see the files and folders that have been created. 
And then to build the project, we just run make html. If we run the index.html file that's in the build\html folder, here's our website. You'll see that it doesn't look like ReadTheDocs just yet, and we'll fix that next after, first, a quick discussion about the text editor we'll be using. Visual Studio Code and Documentation Project Overview To edit our doc project we need a text editor, and to build it we need to use the terminal or console. You can use whatever you want for this, but I'll cut straight to the point and recommend Visual Studio Code, which can be found here at code.visualstudio.com. It's a free, lightweight code editor that runs on any platform and has lots of great plugins, plus an integrated terminal where we can do our builds. It also knows about projects and folders and has Git source control management functionality built in. I've already got it installed and we'll pop over there and open our doc folder. If you don't already have it, it's definitely worth a look and the install is easy and fast. Okay so here we are in Visual Studio Code. The start page has lots of good information to help you get started, but this is, of course, a course on doc authoring, not VS Code, so we'll keep things to what we need to do. We'll go straight to opening the folder containing our doc project. I've already opened it before, so it shows up in our recent list. Nice. Since I haven't opened any files, the welcome page stays visible. I'll close that here and we can get to work. Here I can get a bird's eye view of the doc project we've just created. The build folder contains the result of our make command, and if we expand the HTML folder we can see some of the HTML files and some of the various resources and other content that the make process created. Our static and templates folders are empty initially. We'll be using those later in the course. The index.rst file is the simple starter page for our doc project, and we'll be looking at that closely in the next module. 
The conf.py file is a configuration file written in Python that tells the make process many of the options about our project, and we'll modify this to get the ReadTheDocs theme applied. Lastly, the Makefile and make.bat just support the build process and are not all that interesting. Apply the Read the Docs Theme To apply the ReadTheDocs theme we just open the conf.py file and head down to the line that sets the HTML theme; the default is alabaster. We want to set it to sphinx_rtd_theme. Once we've done that, we save the file and then we can use the View menu to get to an integrated terminal to run our build. We'll run the make html command, just like we did before, and we get a nasty looking error. All it's saying actually is that this is a PowerShell terminal and we have to specify the local directory when we run commands. So if we run ./make html, we'll see the build run successfully. And when we launch the built index.html, we now see that the ReadTheDocs theme has been applied. Awesome. Hello, World To say hello, we'll make a small edit to the index.rst file. When we open that file, we see a very bland looking editor with no syntax highlighting. Let's fix that first and then we'll make an edit. We'll grab the ReStructured Text plugin from the Marketplace and install it. We get prompted to reload the window, which we can do, and now we'll go back and open up that index.rst file again. When we do that, we see a warning about a linting issue, and this won't affect us, so I'll just close this and move on. Looking at the file now, we see some syntax highlighting that'll help us as we go. Awesome. But even better, we can see a preview of the file if we click on this icon at the top. And it even includes our ReadTheDocs theme. Double awesome! If you need more real estate to see stuff on your screen, you can collapse the folder view by just clicking on the icon on the left. 
To make our hello edit, we'll just add Hello world to the file underneath the heading, save it, and then we can see that it shows up in the preview on the right. Nice. We'll close this down for now and then we'll hit Control+tick to bring up that terminal so we can build it. Then we'll run our ./make html command again and then we open build\html\index.html. And our edit is there. Sweet. Let's wrap this module up so we can dive into ReStructured Text and more content creation. Summary In this module we got all of the requirements set up on our local machine and created a doc project that's ready for us to add content. We also learned how to build the project and make a minor edit that shows up in the resulting site. Next up, we'll really dive into content creation by having a closer look at the ReStructured Text markup. Leveraging the Power of reStructuredText Markup Introduction Having a documentation project is great, but not very helpful if you can't utilize the different constructs of a language to help highlight important points and create some structure to the information you're trying to convey. Hi, it's Erik again, and in this module we're going to explore how the ReStructured Text markup language is used to do just that. Here we go. Once again, this module will be slide light and demo heavy. The whole purpose of this module is to get you up and running with ReStructured Text quickly, so we'll be covering most of what you'll need to do just that. The information we present here will enable you to create a rich documentation site that will engage readers. There are some options and constructs that we won't be covering, and for the full details of the options available, you can jump over to the sphinx docs themselves if you find the need. They are at the sphinx-doc URL I'm showing here. To get us started, we'll look first at the table of contents tree, or TOC tree, that is part of the sphinx framework. 
We'll use this on the main index page to provide a quick jumping point to the other documents in our site. To that end, I've already created some additional rst files in our doc project and they're empty, save a little hello from here statement. We'll use those empty docs as the items in our table of contents and see how they work. We'll move on to use restructured text to create various kinds of page elements, from formatted text to including code snippets, tables, numbered and bulleted lists, and other kinds of stuff. Finally, we'll learn how to get hyperlinks onto our pages so we can jump to other locations, whether they're outside our documentation site, pages within it, or specific locations that we define. The TOC Tree and Document Sections As with most things, it's a really good idea to begin with the end in mind. As such, I've created a little outline that shows where I'm hoping to end up on the main landing page for our docs. The left navigation area should have a section for options that describes the various options you have when considering Coding with Kids. And that'll have a few documents under it. As a second main item, we'll have a section for guidelines that pertain to the documentation we're creating, like content that can be added and how the workflow is going to work. I just want the main body of the page to be some high-level content that sets the tone for the rest of the site. Now that we know where we want to end up, let's dive in. As I mentioned in the intro for this module, I've already created a couple of documents to get us started. You can see the new folders I created for Options and Guidelines. And here I'll show you the justlogic file I've created with some undecorated text, and the justcode file is almost the same. Now I'll collapse the folder view to give ourselves a little bit more room and we'll see about getting our table of contents structure started. 
The index.rst file that I'm on now is the same one that was created by the CLI that we used in the previous module, along with my Hello World text. To get our documents into the toctree, we need to come down under the toctree element and then go in three spaces so that we line up underneath the colon of the caption and maxdepth items, and then just type in our document names. We add Options/justlogic and Options/justcode. When we save that, we can look at the preview to see if our table of contents looks okay, and the new items don't in fact appear in the table of contents, unfortunately, and that's because we haven't done anything to create what's called sections in the documents. So let's go do that. To create a section in a document, we just put some punctuation underneath what you want to be defining as a heading. The punctuation used under the first heading will define the outermost level for your document. I've got the Just Logic structure basically already created, so I'll just replace this content with our heading structure. So what you can see here is underneath this main heading of Just Logic, I've got the plus sign repeated underneath it. Then I've got a section for Key Goals, which is a new section underneath Just Logic, and at the same level I've got an Options section and that has three sub-items underneath it. I'll do the same thing for the justcode document. Okay, the justcode document has a main heading I called JustCode, then it's got a section for Key Goals and Options, and the Options section has four sub-items underneath that. I've saved both files as we've been editing and now we'll go back to the index file and preview it to see if it looks any better. It does, but you can also see that our main outline is appearing in the body of the document. I don't want that in there, so I can add the hidden element to our table of contents. When I save that, the table of contents goes away. 
While I'm at it, I'm going to remove this Indices and Tables item because I don't want that in there either. When I save that, I'm left with just our Hello World text, which we can replace with whatever content we want to in the clips ahead. So now things are starting to look pretty good. I'm going to close down this preview, and I'm going to bring back the folder view because I want to copy in a couple of other files that I've already got prepared. We'll copy two files in for the Guidelines, content and workflow, and we'll copy two other files in for the Options, justhardware and hardwareandcode. If we look at those, there's hardwareandcode, it looks just the same as our other files. Here's our justhardware, and here's our content and workflow files. Very much the same as our other contents. Having copied those files in, now I need to go back and update our table of contents. So I'll do that here, we'll add our Options/justhardware item and our Options/hardwareandcode. Then I'm actually going to change the caption up here to be Options and I'm going to copy this to a new toctree for our Guidelines. And underneath that we'll add Guidelines/content and Guidelines/workflow. This time I'll do a full build, so I'll open up my PowerShell terminal again, and we'll build that, and we'll load our index.html file and we see our nice, full table of contents on the left with Options and Guidelines, and we can navigate around to these things and expand the items as we move through. We can go back to the home page. Things are looking good. Now we're ready for some content creation on the pages themselves. Text Formatting This clip and a few that follow will be short and sweet, with the focus being just on a small portion of ReStructured Text that will probably be of interest to you. In this clip, we'll look at text decorations. On the screen, I've got the main index doc on the left and a preview on the right. 
I've replaced the Hello World text with two paragraphs of content, all with no text decoration at all. The three different kinds of decorations that are readily available are italics, bold, and code words. To make something italicized, you put a single asterisk before the text and a single asterisk following the text you want in italics. You can do this on a single word or multiple words together. When I make the changes and save the file, you can see the italicized words in the preview. For bold, you do the exact same thing but with two asterisks. And for code words, you just use two backticks in exactly the same way. They don't actually need to be code words, they just get decorated with a little box and a special font for extra emphasis. And that's it for text decoration. A special note regarding when you might want text that is both bold and italics: you can't just put three asterisks together to make this work, unfortunately. There may be other solutions, but we'll be doing this by adding a custom style sheet and then applying a class later in the course. Lists: Bulleted, Numbered, and Multi-level Moving on to lists of things, I've popped over to the Just Logic page and again added some undecorated text plus a list of four items at the bottom of the Key Goals section. You can see the preview on the right and my four-item list isn't displaying very nicely at all. List items need to be preceded by an asterisk or minus sign. I'll do that and save the page. To make this list a numbered list, you can change the asterisk to a hash followed by a dot (#.). Cool, so far so good. Moving back to simple bullets, you can create multiple levels to your outlines, and this will add some text decoration to higher levels of the outline. So when I add a level underneath the Understand basic logic item and save the page, you can see that the Understand basic logic item has been made bold and there's extra space under the sub level. 
This might be fine if your outline has a consistent hierarchy from one item to the next, but you may want to think about this if your items and their levels will vary. Some of the techniques we discuss later in the course can help with this if you need it. And as a last point, you can add the text decorations we discussed in the last clip to give specific items in your list some extra punch if you need it. And that's lists. Admonitions Occasionally you have a larger block of text you want to call special attention to and that's where what's called admonitions come in. They come in four different flavors with the ReadTheDocs theme, green, blue, yellow, and red. I'm back on the main index page and I've added a simple paragraph that provides a warning about leaving kids unattended with technology. I'd like to call that text out in one of these admonitions for the right emphasis. I'll start with the yellow one, for that I can use caution, warning, or attention, I'll choose caution. I'll add the admonition line above the text I want included in it and then I have to indent the text so that it's under the keyword I've selected. When I save the page, I see a nice decorated Caution box. If I change caution to danger, you can see the box change to a shade of red. And changing again to tip changes it to green. Lastly, changing it to note updates the color to be blue. The content below the admonition will remain in the box until the text or other content starts back on the far left of the file as opposed to under the admonition keyword. You can include lists and other text decorations with the techniques we've already been using. And now you know how admonitions work. Images Including images on your documents is as simple as most of the other stuff we've been doing. Once you have an image you want to include, just get it into your project somewhere. When you refer to images from a document, you need to provide a relative path, and that's about all there is to it. 
So here we are on our hardwareandcode document, and as in previous demos, I've added some basic text to the page and I'm showing a preview on the right. I've left the folder view open for this clip to show you the locations of the two images I'll be using. For one, I've created an images folder and put a PiAndSenseHAT.jpg file into it. For the second image, I've put Connect4-logic.jpg directly into the options folder. I'll start by adding our Raspberry Pi and SenseHAT image. I'll use the image directive and make sure that I use its relative path information. And for the second image, I can refer directly to the file name since it's in the same directory as the doc itself. When I save the file, both images show up just like we wanted. Quick and easy, and that's all there is to it. Code Samples I'm guessing your documentation project will need to include some code samples, and including these with ReStructured Text is pretty easy. The shorthand for indicating a code sample is a double colon at the end of a line, followed by a code sample that remains indented until the end of the sample. We're looking at the justcode page of the documentation and I've got two places in the document that precede some code I want to show. I'll add the double colons to indicate the starting points of the code blocks and put some Python code in the first spot and some C# in the second one. One thing you need to make sure of is that you have the entire code block indented; the code block will continue until the indenting ends. Once I save the file, the preview shows our two code blocks. One thing you might notice in looking at these two blocks is that the C# block doesn't have quite the same level of syntax highlighting detail as the Python block does. That's because the lexer used to do the highlighting doesn't know that the second block is C#. I can indicate the language for a code block by using a more verbose format.
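The two image references from this demo look roughly like this in hardwareandcode.rst, assuming the images folder sits next to the document and Connect4-logic.jpg sits right beside it:

```rst
.. image:: images/PiAndSenseHAT.jpg

.. image:: Connect4-logic.jpg
```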
That format puts the code-block directive on its own line, with blank lines above and below it. When I replace the double colon with a normal single colon and then add the code-block directive and specify my language, we get better highlighting. Nice. One last thing to note is that if most of the code in your documentation will be a single language, you can add that setting in the conf.py file, right under the pygments_style setting. There are lots of languages supported, and if you look up the Python Pygments library, you'll see the list of options. Once you set that, though, you would have to use a code-block directive to indicate sections with other languages. That's about it for including code samples in your documentation. Pretty sweet, right? Tables: Four Methods Creating tables in your documents can be a little tricky, but this clip should give you everything you need to make it as simple as possible. There are four different ways you can render tables, and I definitely have two that I like better than the others. So I'm back here on our hardwareandcode page and I've already created a relatively simple table that you can see in the preview. With this format, to identify a header row in the table, you put equal signs above and below the text for the header, with a single space separating each column. Then you put the contents of your rows under the header, with one line per row. Underneath your last row, you repeat the equal-sign row to finish the table. You can see the table rendered on the right. This looks readable even in the markup, but adding rows to the table can sometimes be a tedious task. For example, if I had a new product called Shiny New Platform and wanted to add that into the list, I'd add another row underneath Lego Mindstorms and type the name Shiny New Platform. What you can see is that now I need to touch every other row and add some more equal signs to make everything line up again. This is actually required for the Sphinx parser to properly render your table.
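Both code-sample forms can be sketched like this (the samples themselves are illustrative):

```rst
The shorthand form, where the lexer guesses the language::

    print("Hello from Python")

The verbose form, naming the language for richer highlighting:

.. code-block:: csharp

   Console.WriteLine("Hello from C#");
```

For the project-wide default mentioned above, the Sphinx setting is, to the best of my knowledge, ``highlight_language`` in conf.py, which sits naturally near ``pygments_style``; explicit code-block directives then override it per block.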
You can see that when I just save it with the new row, the table disappears due to the parsing error. When I walk through the table to correct the other rows and then save the doc again, it now shows up with my new row. This can be pretty tiresome if you regularly add rows and they don't always have the same number of characters in their various columns. I'll get this table back to our original state now and point out one other feature. To give our table a title, which includes the ability to link directly to the table, you can include a table directive with the title above the table. Just make sure to indent the contents of the table for it to work. This title functionality will work with any of the other three approaches that I'm about to show as well. The next approach is even more tedious than the first one, and I'll just paste in the text so you can see it. For this one, you basically fully paint the table. This can keep the markup more readable, but it's definitely harder to keep perfect. It does allow me to control the markup width, though, because you can make a cell as tall as you want. The header row is marked by the line of equals and plus signs underneath it. If you need to be able to see your table even in the markup, this format may be great for you. But if your table changes from time to time, this one is a pain in the neck. Finally we get to my preferred formats: the list table and the csv table. I'll just paste them both in and make a couple of comments about each. The list-table directive provides table functionality for lists like we created earlier in the module. Each row is marked with an asterisk and each column within the row is preceded by a dash. It's a pretty simple format and lets you focus on the content of the table rather than tedious markup. The header-rows and widths properties are important here too. Make sure you have a value for each column in your widths list; the header-rows option should be pretty self-explanatory.
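The two preferred formats can be sketched like this (titles, rows, and widths are illustrative):

```rst
.. list-table:: Fun platforms
   :header-rows: 1
   :widths: 30 70

   * - Platform
     - Notes
   * - Raspberry Pi
     - Pairs nicely with a SenseHAT
   * - Lego Mindstorms
     - Bricks, motors, and sensors

.. csv-table:: Fun platforms
   :header: "Platform", "Notes"
   :widths: 30, 70

   "Raspberry Pi", "Pairs nicely with a SenseHAT"
   "Lego Mindstorms", "Bricks, motors, and sensors"
```

Adding a row to either format is just a new line or two of content; nothing else in the table has to be realigned.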
The csv table is very similar, but the header row is specified as a property of the directive here. I much prefer the csv and list approaches to tables, due to the fighting with the markup that you do with the other two. In comparing csv to list, I kind of like the list because it seems a little more readable to me than the csv approach. But with all of these options regarding tables, I'm sure you can find one that will work for you to create tables in your docs. Hyperlinks Documentation without hyperlinks that cross-reference different sections or documents or provide links to more information is almost hard to imagine. So knowing how to do hyperlinking is a critical piece of your documentation journey. We'll cover three distinct scenarios for linking to other places: linking to external places on the web, linking from one document to another within the project, and linking to a specific spot on a document inside the project. We'll take them in order. First, I'm here on the justhardware page and I've given short descriptions of three options in this space. I'll just put the Arduino link on its own line and that'll work just fine. For the Snap Circuits link, I'll create a hyperlink under the text of Snap Circuits. And to do this you need to use backticks, angle brackets, and a trailing underscore to make it work. Lastly, I'll just put the direct hyperlink for the Project Bloks website right inside the paragraph, and that'll work just fine as well. The main thing you need to remember here is that the Sphinx parser can see web links and light them up if the text of the link is the link itself. But if you want to make the text of the link something different, then use the backtick, angle bracket, underscore syntax. When I save the file, the preview looks good. And when I click a hyperlink to verify that it works, you see one of the shortcomings of the ReStructured Text previewer.
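The external-link styles from this clip look roughly like this (the Snap Circuits URL is a placeholder):

```rst
A bare URL on its own line is linked automatically:

https://www.arduino.cc

`Snap Circuits <https://example.com/snap-circuits>`_ uses backticks,
angle brackets, and a trailing underscore to get custom link text.
```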
Links don't behave very well at this point, and you'll find this for all of our different types of hyperlinks in this clip. To really verify that they're working as expected, we can open our terminal again, build the project, and then launch the justhardware page that we're editing. When that comes up in our browser, we can click the various links and see that everything is indeed working as expected. Switching gears now, we can go back to the index file and look at creating a link to one of the docs in our project. First I'll show you how to simply link to the document target you want. It will show the top section name of the document as the link text, and then I'll show you how to use custom text. I've got a little starter paragraph here, and to link to a document we just use colon doc colon and backticks around the relative path of the doc target, like so. To link to the doc with custom text, we can do almost the same thing, but put our custom text where the doc path would have been, and then a space followed by the relative path inside of angle brackets. The preview would show us the links, but they wouldn't work. So I'll just build the project and verify, again, using the browser. The first one works and the custom text one does too. For the last hyperlink technique, I'll add a reference directly to the paragraph of the text I'm showing here on the hardwareandcode page under the Options heading. To do this I'll create a reference tag by using a dot dot, space, underscore syntax, followed by the name I'd like to use to refer to this location and a trailing colon. I'll call it hwcodeOptions. Then I'll go back to the index doc and use the ref keyword with the same custom link syntax that I used before. When I build the project and click on that link, you can see that I went straight to the spot I wanted. Nice. You can use this last technique anywhere in your doc project, whether you're linking to another spot on the same page or a spot on other pages, like the one I just did.
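The in-project link styles, sketched across the two files involved (the file and target names are the ones from the demo):

```rst
.. In hardwareandcode.rst, a named target just above the section:

.. _hwcodeOptions:

Options
-------

.. In index.rst: a plain doc link, a doc link with custom text,
   and a ref link to the named target:

:doc:`hardwareandcode`

:doc:`Hardware and code together <hardwareandcode>`

:ref:`straight to the options <hwcodeOptions>`
```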
And those are the techniques that you can use to light up your content with useful hyperlinks. And that wraps up our journey through most of the common features of ReStructured Text that you might want to use to create your own documentation. Let's go wrap up this module and see what's next. Summary In this module we dove into ReStructured Text to light up our doc site. We created a TOC tree in our starting doc to make quick navigation around the site a snap. Then we used ReStructured Text to create lots of different constructs that are the building blocks of great content. We also learned how to hyperlink to any place we wanted, whether inside our doc site or not. But all of this so far has been done simply in a local file folder we've created on our own machine. Next up, we'll set up ReadTheDocs hosting so we can publish our site and also integrate a workflow with GitHub to control changes to the content. Stay tuned. Automating Updates with a Streamlined Workflow Introduction So far we've learned how to create a doc project and also to create content. That's all great, but we're just getting started. It's Erik again, and in this module we'll set up Read The Docs hosting for our project, along with the typical Git workflow to control and simplify the way that updates get made to the docs. Let's dive in. This module will cover four major topics regarding our doc project. We'll start with Git and GitHub. We'll discuss what each of those things is and get set up so that we can use both of them. And then we'll set up what's called the repository for our code. Feel free to skip around if you know this stuff. I'm including it because I wish I could have seen something simple like this a while ago. We'll move on to Read The Docs, where we'll get logged in and link our Read The Docs account to our GitHub account and the repository for our docs. We'll also set up CI/CD here to automate some things, and if you don't know what that is, we'll quickly explain it.
The remainder of this module will be spent covering pull requests: what they are and how to configure our doc repo to require them. Then we'll walk through the actual workflow for a pull request from the perspective of the creator and the reviewer. If you're already familiar with Git and pull requests, that's great, but if you're new to them and they sound like mysterious things that only hardcore open source people know about, this module ought to remove that mystery and leave you comfortable to create your own pull request should the need arise. Git and GitHub: Introduce and Install Let's clarify Git and GitHub at this point. Git is a distributed version control system. Other source control systems are Subversion, CVS, and Team Foundation Version Control, or TFVC. There are a lot of compelling reasons to use Git over some of the other options, such as the way it stores files, the ease for users of switching branches, and the prevalence of its use throughout the open source world. To use Git functionality on your local machine, you download and install Git from git-scm.com. That website also has some good general info about Git and the command line options available. We'll need this installed for what we're doing, and we'll go over just what we specifically need for this course. A key term that you'll see or hear with regard to Git is a repository, or repo. This basically refers to a single code project in the looser sense of that word. A repository might be defined as having the collection of solution and project files or directories that constitute a logical application or package, including any applicable tests or other such stuff. We'll be creating a single repository for our doc project, but our doc project could actually be a part of a larger repository if that's what made sense in that case. Moving on, GitHub is a website that hosts Git repositories and implements the Git interface points.
There are many other platforms that can host Git repos, such as GitLab, Bitbucket, and even Team Foundation Server since 2015, and VSTS in the cloud. For our initial demo we'll be working with GitHub, but later in the course I'll show how the same functionality we'll be implementing can be done in VSTS and on-premises TFS. In this course, we'll be covering just what we need to about Git, which will get you up and running with this stuff quickly. But if you want to learn more about Git, I recommend Paolo Perrotta's How Git Works course here on Pluralsight. Now let's go get Git set up on our machine. Say that three times fast. And make sure we can get logged into GitHub. To download and install Git, you visit the git-scm.com website. I'm here and will quickly point out the interactive Try Git link that lets you explore some of the command line functionality of Git right in the browser in a little tutorial. I'll just download the current version for Windows and launch the installer once the download has completed. So here we are with the installer for Windows. The first step is a license note and we'll just click Next. We confirm the directory where Git will be installed and then we have some options to choose from. I'll just leave the defaults, that'll be fine for what we're doing. We then have an option for the Start menu folder and I'll leave that as well. Then we can choose whether to modify our path environment variable to include Git. This is fine and what I want, so I'll click Next again. Then we have an option about security protocol and I'll leave the OpenSSL library selected. Git can perform some line ending conversions. I don't generally have much trouble with this, so I'll, again, leave the default and click Next. Two more quick defaults, the first is the terminal emulator, and again, I'll leave this alone. And then finally, a couple of last options that I'll leave with the defaults, and then get to click the Install button. 
When this completes, the Git source control commands will be available on my machine. Let's go make sure we have a GitHub account now. Okay, here I am on the GitHub website. Having an account is free, but if you want to have private repositories, you'll have to use a paid account. If you haven't already signed up for an account, you can use this screen to do it. I've already got an account, so I'll sign in. If you can get signed in to GitHub and have a green button or option to create a new repository, you're all set. See you in the next clip when we move our doc source into a repo we'll create. Create and Load GitHub Repository for the Source Code In this demo we'll take our local folder containing the doc project we've been working on and turn it into a Git repo that we'll have on GitHub. To start, we'll create an empty repository on GitHub. Then we'll be adding a config file to our local folder to make sure that Git will ignore our build folder for source control purposes. We don't need to retain version history for those files, just the source files that we're editing. Then we'll turn our local folder into a local Git repository by using a Git initialize command. After that, we'll configure the local repo to be remotely connected to the GitHub repository we set up in our first step. Then we'll do an initial push from the local folder into the remote repo, which will push all of our source files up into GitHub. These steps are all one-time setup steps; the regular workflow for making changes and getting them committed will be a little different and a little easier. So if you start getting worried by what you see here, fret not and stay tuned. We'll start the process to get our doc code into a GitHub repo right here on the GitHub website after we've logged in. I'll click the button to create a new repository and give it a name of coding-with-kids. There are a couple of other options here that I can simply ignore for now.
I'll press the green button to finish the repository creation, and then I see some helpful information that can get me rolling. I'm going to grab the HTTPS URL for this repo and copy it for use in just a sec. But note that we'll basically be following the steps for creating a new repository from the command line. Let's go into VS Code now for our doc project and pick up there. In VS Code, I want to add a Git ignore file so that our build folder contents, which amount to quite a few files, will not be part of what lives in our Git repository. To do that, I'll add a new file called .gitignore to the root directory for the project. This is a special file that Git will look at as it monitors the repository we'll create. Once I've created the file, I can simply add _build as the first and only line of the file to make sure that Git doesn't look at that directory for change monitoring and source control. Now that that's in place, I can initialize the Git repository for this directory. I could do that from the command line, but let's use the VS Code source control tab instead. When I click that tab, I see a note that no source control providers are set up for this directory yet. I can click the little Git icon to initialize the local repo. I get a very much oversized initialize window; my computer sometimes gets confused with the multiple monitors and their different resolutions. But all I need to do is click the Initialize button. Since this is a brand new repo that we've just created, all of our existing files now show up as needing to be committed into the repo as new files. I'll type in a commit note of initial commit and then commit the changes. I see a note here about staged changes that I don't really need to worry about right now. Staging changes is beyond the scope of this course, but you might think of it like creating shelvesets in TFVC.
Having completed the init and first commit, we now have a local Git repository, and the current state of all our source code is now at a known point. All of this is so far still on our local machine. Now we need to connect to a remote repo and do a push. The integrated menu options and Git functionality do not currently support the two operations we need to do, so I'll open a terminal for this. We start by adding a remote origin link to the URL that GitHub gave us when we created the repo. Then we do a push with the -u flag to the master branch, and we see some objects being reviewed and written, which seems promising. The process completes, and we can pop back to our browser showing the repo we created earlier. If we refresh the page, we see that our code has been uploaded. Awesome. So now we have local code that we have pushed into a repo on GitHub. Let's get Read The Docs hooked up to it now. Set up Read the Docs Hosting Alright, let's do this and see how easy it is for this to really come together. Here we are on Read The Docs, and you'll need to sign up for an account if you haven't already done so, but it's free as long as your documentation can be public. We'll look at other options, if public docs don't work for you, later in the course. Once we've logged in, we need to set up a connected account. We'll click the button to do so, and then we'll choose the button to connect to GitHub. I'm still signed into GitHub, so I go immediately to their consent screen. Read The Docs wants to have certain access to my GitHub account to be able to grab content and monitor repos via webhooks and such. I'll scroll down and grant the access, and then I'm redirected back to Read The Docs. Now I'll click on my name badge at the top to get back to a kind of home page. On this page, I can choose to import a project. That's what I want to do, so I'll click the button.
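The one-time Git setup from the last demo (ignore file, init, commit, remote add, and initial push) can be sketched from the command line. This is a hypothetical walkthrough: a local bare repository stands in for the empty GitHub repo so the whole flow runs offline, and the paths, file contents, and identity are illustrative.

```shell
set -e

# A local bare repo stands in for the empty GitHub repository.
rm -rf /tmp/cwk-demo && mkdir -p /tmp/cwk-demo && cd /tmp/cwk-demo
git init --bare --quiet remote.git

# Turn a plain doc folder into a local repo, ignoring the build output.
mkdir coding-with-kids && cd coding-with-kids
git init --quiet
echo "_build" > .gitignore
echo "Coding with Kids" > index.rst
git add .
git -c user.name="demo" -c user.email="demo@example.com" \
    commit --quiet -m "initial commit"

# Connect the local repo to the remote and do the initial push.
git remote add origin ../remote.git
git push --quiet -u origin HEAD
```

With a real GitHub repo, the only difference is that the origin URL is the HTTPS URL copied from the repository page instead of a local path.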
When I do that, I don't see much at all other than a note that I might want to refresh my accounts and a nice green button, so I'll click that. When I do so, I see the various repos that I have on GitHub, including the coding-with-kids one that we just created and pushed our docs into. I'll click the plus on that row to indicate that that's the repo I want to import. I next have to confirm some details about my project. They look okay to me, so I'll click Next. Then I get a nice page that tells me my documentation is building. Hey, this is sounding pretty good. I'll collapse the time while I wait a tick, then refresh the page, and now it says that my documentation is ready to use. If I click the View Docs button I can now see my documentation with a public URL that anyone can go and check out. How cool is that? At this point we've completed the one-time setup activities. Now let's go have a look and see what it's going to take to set up CI/CD processes to support changes to the documentation. Continuous Integration/Continuous Delivery Explained In this part of the module we'll focus on setting up what's called continuous integration and continuous delivery, or CI/CD. They are two different things, and you can have them together or either one independently. Let's have a look at what it's all about. To start with, we have some contributors, developers or authors or whatever, who make some kind of a check-in, commit, or change to a source control repository. That act of making a change to the source control repo then automatically initiates a build process, which produces some artifacts, or results of the build. In our case, that will be the HTML and supporting files for the doc website. If this process of creating artifacts does not require any person to do anything for it to happen, other than make the change to source control, then you have continuous integration, or CI.
Once the artifacts have been created successfully, it's possible to recognize a new build artifact and automatically deploy those artifacts to a server that makes them available to end users. If this happens automatically, you have continuous delivery, or CD. You can execute automated tests within these processes, include notifications or require approvals, and other things like that. The main point is that the heavy lifting of creating the builds and getting them deployed just happens and is not a labor-intensive task, just a matter of processes being initiated or approved. Let's go explore doing this with our doc project. Set up CI/CD on Read the Docs What we need to enable CI/CD for our Read The Docs project is to enable the GitHub webhook for our doc project. So on the project page in Read The Docs, let's go over to the Admin function. When we get to our admin page, we see some of the default options that we left alone when setting up the doc project here. We need to look at the Integrations tab, and when we do that, wait, what? The GitHub incoming webhook is already set up. That must mean that Read The Docs set that up for us when we connected our GitHub repo to this project. I don't know if I fully believe all of that, so let's go make a change and see what happens. Okay, so I'm back here in VS Code and I'll make a change to our index.rst file. When I make that change and save it, you can see that I now have a pending uncommitted change on the source control tab. I'll first commit the change, which will check the change in locally. In fact, you can even see in the VS Code toolbar that I am in the master branch, that I have zero remote changes that need to be applied or pulled here, and that I have one local change that needs to be pushed up to the remote repo. If I click the dot dot dot in the Git bar at the top, I see a menu with a list of options available to me.
I can simply choose Push to push the changes from my local version up to the remote counterpart. Let's do it. Now for the moment of truth: I just pushed my changes into GitHub. Let's pop onto our Read The Docs project page and see how things look. If I refresh the page, we can see that my doc project is building. This is looking very promising. I'll refresh the page again and the build is completed. If I look at the Builds page, we can now see that I have two different builds of the documentation. Cool. But can we see our change? Yes. That's pretty awesome, right? CI/CD at work. We pushed our change into the master branch on GitHub and Read The Docs both built and deployed the project. There's one other thing I'd like to point out here. You may have noticed the Edit on GitHub link at the top of the page. This is also something baked into Read The Docs integration with GitHub repositories. If I click the link, I'm taken directly to the index.rst file as it shows on GitHub. And if I click the pencil to edit the document, I can make a change here. After I've made my change, I'll move to the bottom of the page and commit the change. This time I'm actually on the remote repository, and I'll just commit this directly to the master branch. When I do that, I can go back to the Read The Docs site and we can see an in-progress build already showing up in the Builds page. And the Project page, again, shows that my docs are building. When I refresh the page to note that my build has completed, we can see the completed build on the Builds tab, and when we look at the published docs, we can see the change we made from GitHub. So CI/CD really is working when changes get made to this master branch. This is super cool stuff. But this also means that anyone with write access to the repo can commit changes to master right now without any review from us. This may not be what we want, which brings us into the world of pull requests.
Let's go explore what those are and how they can help enable change in a more controlled way. Pull Requests Explained One of the reasons I'm spending a lot of time on Git, GitHub, and now pull requests in this course is that I found myself looking for a simple version of this information not too long ago. I wanted to contribute some documentation to an existing open source repo on GitHub, and all my source control and workflow experience had come from TFS up until then. I asked how to make my contribution and was told to fork the repo and make a pull request. It was like they were speaking a totally different language. I stumbled my way through it and made my contribution, but it wasn't pretty. I'm hoping that this content can help you out of that situation if any of this resonates. So, pull requests: easiest to define with a picture. We start with some kind of existing repository and a master branch of source code, plus the owner or reviewer or creator of said repo. Enter the person who has a shiny new feature they want to contribute. They have created their own feature branch called snf where they've done some work, and they want to get that work merged into the main master branch of the code, to make it officially part of the real deal. To do that, after they have committed and checked in or pushed all of their changes into their feature branch and its remote counterpart, they create what's called a pull request. The contributor is essentially asking the owner to review their code. The pull request, or PR, will highlight changes between the snf branch and master. And then if the reviewer is happy with the code changes, she'll merge the changes into master. This is a pull operation: the code differences are being pulled from snf into master. Pull requests can come with notifications and be associated with policies, like maybe you need two people to approve or complete a pull request. So pull requests are simply a code review and merge request combined.
We've seen that, as it stands, we can directly check in or push changes into the master branch for our doc project in GitHub. Let's go change that so that to introduce change we now need to do it via a pull request. Configure GitHub to Require Pull Requests to Master Okay, here we are in GitHub, looking at the Code tab for our repo. To change the settings to require pull requests for our master branch, we'll go over to the Settings tab. It's worth spelunking around here just to see what's what, but we'll keep our activity focused on the task at hand. We'll choose the Branches node and have a look at that. What we want to do is require pull requests on the master branch, which is a form of protecting the branch. So under Protected branches, we use the Choose a branch button to pick the master branch that we want to protect. When I do that, we see a page with some options about how we want to protect the branch. I'll first check Protect this branch, which will light up yet more options for us. For our purposes, I want every change to this branch to come from a pull request, even from me, and I'm the branch administrator since I created it. So the options I'll check are Require pull request reviews before merging, which gives yet more options that I don't need to worry about here, and Include administrators, and then I'll save the changes. Since I'm the only real member of the project right now, I'll be undoing the Include administrators option for the approval clip coming up. But I want to show you how a prevented push is going to look. I'll explain this more in the approval clip. I see a note that the branch changes have been saved, which is good. So let's go verify. I'll go back to looking at the index.rst file and make another edit. When I do so, and then go down to the bottom of the page to commit the change, I no longer have the option to commit directly to master. The only option is to create a new branch and start a pull request. Excellent.
I'll cancel this and we'll try a direct commit from VS Code just to double check things from that angle. Back in VS Code, I wanted to again first point out that the status bar is showing a change on the remote repo that I haven't pulled locally. I'll use the Git tab and then the menu option to pull from the remote branch. Don't confuse this pull that I'm doing with a pull request. The pull operation is just me pulling the latest changes from the remote master branch. A pull request is that whole workflow around reviews and then merging. Having done my pull, I'll make a change and save it and then commit it. This is being done locally on my machine so I haven't run into any problems yet. But when I try to push the change remotely, like we did earlier in this module, I now get an error message that says try pull first. This is a bit misleading and trying a pull won't help in this case. But if we open the Git log, we get a little bit more insightful messaging, which says that we've got a protected branch and at least one reviewer is required here. This is good and exactly what we wanted. To undo my changes, I'll use the Git menu to undo my last commit. That will make my change pending again and ready for a commit. I can go up to the listed file in the changes area and use the undo button to discard my changes, which brings everything back to where we want to be and our master branch has the desired level of control now imposed upon it. Join me in the next clip where we actually submit a pull request. Submit a Change via Pull Request Now that we've configured our GitHub repo to require a pull request to introduce change, we're going to walk through the process to actually make one of those changes. So we'll make our first pull request in this clip. First, we'll create a branch or fork as a place to actually initiate the change. 
Then we'll make some changes to the actual documentation source, and following that we'll use our Git source functions to commit the changes and publish to a new remote branch. At that point we have code and a remote feature branch that we want to get into the master branch, so this will be the point where we can make our actual pull request. Let's do it. To start the process to create our pull request, we're still in the master branch, but we should make our change in a new branch just for our new feature or change. To change branches, we can click the current branch name in the status bar of VS Code. We then see that we really don't have any other branches, but we have an option to create a branch. I'll do that and it will prompt me for the name of a new branch, which I'll call shiny. You can see that the status bar now shows shiny as my current branch and a little cloud indicator that says my branch doesn't exist remotely yet. At this point we can make our shiny new change. When we do that and save the file, we have a pending change that we can commit. Remember this is still a local commit in our shiny branch. To push our change remotely when the remote branch doesn't yet exist, we can use the Publish Branch option from the Git menu. It seems that that was successful, so let's go check GitHub and see how things look there. So here I am back on GitHub, inside our coding-with-kids repo and I see a new little banner that shows me that I just pushed the shiny branch and it gives me a big green button to compare and pull request. This is exactly what I want to do, so let's push the button. The title of the PR will default to the commit message I made locally, but we can change it here if we want. In the description, you might want to put some rationale or justification about why you're submitting the pull request so that any reviewers will have better context of what they're reviewing and why you're suggesting the change. 
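Everything done so far in VS Code — create the branch, commit locally, publish the branch — boils down to a few git commands. A sketch against a throwaway local "remote" (assumes git is installed; repo and file contents are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init --bare origin.git                  # throwaway stand-in for GitHub
git clone origin.git coding-with-kids && cd coding-with-kids
git config user.email "demo@example.com" && git config user.name "Demo"
git symbolic-ref HEAD refs/heads/master     # name the first branch master
echo "Coding with Kids" > index.rst
git add . && git commit -m "initial docs" && git push -u origin master

git checkout -b shiny                       # create the local feature branch
echo ".. note:: Something shiny." >> index.rst
git commit -am "add a shiny note"           # still just a local commit
git push -u origin shiny                    # "Publish Branch": create it remotely

git ls-remote --heads origin                # master and shiny both exist now
```

The final push is what makes the branch visible to GitHub, which is what triggers the "compare & pull request" banner.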
Then we go down to the bottom of the page and use the green button to create the pull request, simple as that. We then see some ominous looking notes, like review is required and merging is blocked. We'll address these in the next clip where we'll discuss the approval and review process for pull requests. See you there. Approve and Merge the Pull Request I tell you what, it's nice to have friends, or at least collaborators. I had mentioned earlier that I would probably be removing the Include administrators option for protecting the master branch. I was going to need to do this because, by definition, you're not allowed to review your own pull requests on GitHub. And this is a good thing. But I was able to add a collaborator to the project; you do that under the Settings tab for the repo. So RPbarfield, who you're seeing as the reviewer here, came to my rescue and reviewed and approved the change in my pull request. Had he not been in the mix, I would have had to remove the option about including administrators and override some big red warnings about merging my own code. But here we are with the reviewed and approved pull request that has not yet been merged. Let's have a look around. The three tabs are for Conversation, Commits, and Files changed. We can see that there was only one commit included in the PR, and if we look at Files changed, we can see the actual change that I made. If the reviewer wasn't happy with anything, they could suggest or require changes before approving the PR. We won't go through a correction-required type of workflow in this course, but armed with what you know now, you shouldn't have any trouble with that process. So back on the Conversation tab, we can scroll down and we have a button for merging the pull request. We do have some options here, but the main button is fine for what we're doing, and for most merges actually. So we can just push the button. We get prompted for any new comments we want to add and finally confirm the merge. 
After the successful merge, we get prompted that we can delete the shiny branch if we want, since we now have the changes from that branch and probably won't need it anymore. Just for completeness, let's pop over to Read The Docs and make sure our CI/CD processes picked up our merged pull request. And sure enough, it shows a build having just completed. And when we view the docs themselves, we can see our shiny new note. Awesome. Let's go put a wrap on this module. Summary I hope you had as much fun in this module as I did. This is pretty awesome. We got our docs pushed into a Git repo that we created on GitHub. Then we got them live for the world to see on Read The Docs. Further, we saw how CI/CD was automatically on by default so that our docs get updated simply when we commit changes to the master branch on GitHub. Then we demystified pull requests and set up our repo to require them in order to change the master branch. Then we created a pull request and went through the approval process for it and saw our change in the built docs after the approval, pretty awesome stuff. At this point in the course, you really have enough to get you started in creating your own documentation sites and getting some workflow defined for them, which is great. You may want to stay tuned though, because up next we'll be looking at options that take us beyond simple reStructuredText, such as including markdown files in the project or making some custom CSS changes to control the look or behavior of some doc elements. See you there. Content That Goes Beyond reStructuredText Introduction In this course you've learned about the Read The Docs platform using reStructuredText and setting up workflow with automated CI/CD using the GitHub and Read The Docs integration. This is enough to get you rolling in creating your own docs, but getting a real project out there sometimes presents some additional challenges, right? 
Hi, it's Erik and in this module we'll explore some of those curveballs you may have to deal with and how to address them. You can look at the clip titles and pick and choose the ones that apply to you and ignore the rest; I won't be offended. The first of the challenges we'll address is adding some custom CSS into the mix. You may want to tweak the colors of something or control the layout of an element a little differently than it would render by default, and custom CSS can help you with that. The next item we'll cover is enabling markdown files in addition to reStructuredText. Markdown is a popular markup language used on GitHub and other places, and if you have some existing markdown docs, enabling them to be included in your doc projects may expedite delivery for you. Lastly, we'll look at supporting different versions of the documentation and how to make that work. Custom CSS: The Approach In this demo, we're going to be adding a custom stylesheet to our project. We'll start by looking at a table I've modified and see how it looks by default. Hint: we'll probably need to apply some CSS to make it better. Then we'll fix the situation by creating a custom stylesheet. We'll need to make sure it gets included in our built output, so we'll be modifying our conf.py file. Then we'll update the table to reference the class from our CSS file and verify everything works. Let's jump in. Why Custom CSS: Control Horizontal Scroll on Tables For this demo, we're imagining a scenario where we want to add a new column to the comparison table that contains a description of the items. Because master requires pull requests now, I'll start my changes by creating a new branch and give the new branch a name of newtableversion. Then I've got the new version of the table ready to paste in so that you don't have to watch me typing these things. I've added a column called Description and, in both the Raspberry Pi row and the Lego Mindstorms row, I've added some text for the description. 
And this text gets a little long. You can also see I added a fifth number to the widths line in the header area above. I'm going to build this and look at the page in a browser because I want to show you a few things in the built HTML and then see an inline style fix. And when I run the build, oh no. I get some kind of nasty looking error. Let's have a look at what happened. It's complaining about my list-table and not all of the rows having the same number of columns, or something like that. You may have noticed when I pasted in the new version of the table that my columns didn't line up quite right under the others as it relates to their whitespace. These types of errors are definitely something you want to watch out for when dealing with Sphinx doc projects like the one we're working on here. To fix this, let's just go adjust our whitespace handling for the new columns. I'll delete all the whitespace and the new line, hit Return, and let VS Code handle the indentation details for me. So now when I save and build the project, it should build successfully. And it does. Great. Let's go have a look at that table in the HTML page that we built. What you can see here is a pretty long scroll bar on the page, which means that viewers of the page are going to need to scroll right to have a look at the content I've created. It's not the most obvious looking scroll bar either, and forcing the content to remain on the page shouldn't hurt the overall flow. So I want to change things so that the width of the table remains inside the page itself. If I open up the browser tools to inspect the actual HTML that was created by our build process, I can review specifically these two table cells in our rightmost column. I already happen to know the inline styling that will correct our issue: setting the white-space value to normal and marking it !important. If you're curious about CSS or the HTML that I'm showing here, there are lots of other courses on Pluralsight that can bring you up to speed on those topics. 
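For reference, a well-formed list-table needs one value in :widths: per column and the same number of cells in every row, with consistent indentation. A hypothetical sketch (the column names and contents here are illustrative, not the course's exact table):

```rst
.. list-table::
   :widths: 20 15 15 15 35
   :header-rows: 1

   * - Item
     - Cost
     - Age Range
     - Difficulty
     - Description
   * - Raspberry Pi
     - $35
     - 10+
     - Medium
     - A small single-board computer with lots of project potential.
   * - Lego Mindstorms
     - $350
     - 10+
     - Easy
     - A robotics kit that pairs Lego bricks with programmable motors.
```

If one row has four cells while the others have five, or the :widths: list doesn't match the column count, the Sphinx build fails with exactly the kind of error shown in the demo.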
When I set both cells' white-space values as I want, you can see that the table gets a little taller and the new column that I've added is wrapped, but stays on the page. So what we need to do here is to get this styling into a CSS file, include it in the build process, and then reference the CSS class we create on our table right here. We'll do all of this in the next clip. Add Custom CSS to the Project Our first step in including some custom CSS in our project is to add a file containing the CSS code to the _static directory. I'll add a directory called css first and then add a file called custom.css. When I've done that, I can paste in the CSS code, which has the inline style and a class that can be applied to table cells. Having added what I need for the CSS file itself, I can now make sure it gets included in the build process. This is done in the conf.py file, and I'll add some code that defines a setup method; when the method gets called, it'll add the stylesheet we created. This add_stylesheet method will look in the _static directory as its starting location, and the build process automatically calls a setup method if we've got one defined. And now that our CSS file will be included in the build and output process, we can add the class reference to our table, which is pretty simple. We pop over to the file with the table and then add a class attribute under the header-rows attribute that we already had in place. And we set the class value to the CSS class name we defined in the CSS file. Now let's build the project. If we have a look at the build folder, we should be able to see our custom CSS file included at this point. It's not showing up for me, but I was looking at this a minute or two ago. I'll refresh the view, ah-ha, and sure enough, there it is. You can see some other files here and they come from the Read The Docs theme that we've applied. Let's make sure our table looks okay. 
We launch the page in a browser and, sure enough, everything looks like we wanted it to. The process to change styling elements is basically the flow that we just did here. To recap, we first had a look at the generated output in a browser with the developer inspection tools. Then we added some inline styling right from those tools to make sure we had it looking like we wanted it to. Then we moved that custom styling into the CSS file we created, and finally applied the class to the document elements we wanted. So now you can go have fun with your own custom styling. Add Markdown Support and a Markdown Document In this demo we'll be adding support for markdown in our project so that we can use both reStructuredText files and markdown files for our docs. If you look at the getting started information in the Read The Docs guide, they have the instructions we'll follow and some notes. The note at the bottom of the section points out that you're missing out on some features of Sphinx if you use markdown docs, and also shows a link that'll give some deeper reasoning for why the authors of Read The Docs prefer reStructuredText for technical documentation. It's definitely worth a read if you have the time. To get set up, we'll be installing a Python package called recommonmark, and the import statement shows CommonMarkParser. There is one flavor of markdown called CommonMark that tried to standardize some things across the different markdown formats, but there are many other variants, including one for GitHub, and that can be problematic for a common parser. But if you're watching this clip, you may have some already-existing markdown that you'd rather use as is versus converting it to reStructuredText. If so, great, we'll go through the steps here, but you should be aware that the parser appears to be specifically looking for the CommonMark format of markdown. Okay, enough preamble. Let's get down to it. 
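The conf.py additions from the guide look roughly like this — a sketch that assumes `pip install recommonmark` has already been run in the build environment:

```python
# conf.py additions for markdown support, per the Read The Docs guide.
from recommonmark.parser import CommonMarkParser

# Route .md files through the CommonMark parser, alongside the
# .rst files Sphinx already handles natively.
source_parsers = {
    '.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
```

Once this is in place, any .md file in the source tree is treated as a document just like an .rst file, including being referenceable from a toctree.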
We'll copy the code they want us to add to our conf.py file and then jump in. In our terminal window, we'll take the first step and install the recommonmark package. Easy enough. Then we'll add that code we copied into the conf.py file underneath the setup function we defined earlier. Now let's get a markdown file added. As noted before, you may have some preexisting markdown files you just want included, and you can copy them into the directory wherever you want them. I didn't have one just laying around, so I'll add a new one and then add the content for it. I won't really explain the syntax for markdown either, since it's beyond the scope of what we're covering in this course. But we should be able to build the project at this time and have our markdown file included in the mix. When we do this, it works, but we get a warning from the compiler that our new file isn't included in any toctree. This may be okay in cases where you link to the doc from other places, but I'll add it in just to show that including markdown files in a toctree is the same as for the rst files we've been working with. So down at the bottom of our index file where the toctrees are defined, I'll add a new section for About and include the same set of attributes that are in the other sections, and then reference the about doc I just created. We'll save the file, run a build, and we get clean results this time, so we'll launch the main index file and then we see the About link in the toctree, and when we click on the link, we see our markdown page, and despite the slightly different markup, it looks nicely consistent with the other docs at our doc site. Swell. Versions: Introduction In this demo we'll have a brief look at version support in Read The Docs. You may have a version in a production environment that is stable, but may also want to be communicating with a community about what's coming up or currently being reviewed and tested. 
We'll be specifically looking at this scenario by creating a release version of the documentation, as well as a latest version. The latter will refer to the upcoming features that are being tested, or something like that. These versions in Read The Docs are based on either branches in GitHub or tags we put onto commits. We'll focus on branches to keep things simple, but if you're all about tagging and want to understand how Git tags can drive Read The Docs behavior, then go check out the documentation about versions in the Read The Docs user guide. For our purposes, we'll set up a long-lived release branch in GitHub and then we'll get that branch activated as a version in Read The Docs. We should be able to see both versions on our doc site then, and so we'll finish by going through a pull request workflow, applying the changes to each of our branches in turn, first master then release, and we should be able to see that reflected in the versions we've created on Read The Docs. Let's go. Enable Versions with a Long-lived Branch To get started adding support for different versions in our documentation, I'm here in GitHub looking at the code repo for the docs. I'll start the process by clicking the branch dropdown and using it to create a new branch called release. Since my current branch was master, it shows me that I'll be creating the new branch from master, or based on the current state of master, perfect. And now that my branch has been created, let's go over to Read The Docs and create a new version of the docs from this branch. Over here on Read The Docs, we see a suspicious looking button called Versions, and as you might expect, that's where we need to be. When we get there, we see an active version called latest that's hooked up to our master branch. But we also see three inactive versions, and one of them is tied to our release branch, nice. 
If we click Edit on that, we see that by simply clicking the Active checkbox and saving it, we should be off to the races. We now have two active versions showing, and we can go back to the overview and see a second version in the list there that still appears a little half-baked. And the Build tab does indeed show a build in progress. We'll go back to the overview and refresh the page to see if the build has completed, and now we have two full versions showing. When we click on View Docs, we can see our doc page comes up just fine. But now when we click on the View Latest link, we can indeed see both the latest version and the release version. We can click back and forth and the content looks the same. This shouldn't be surprising, since we just created the release branch directly from master. Join me in the next clip where we'll verify the behavior of our two versions by having some changes flow through both branches one at a time. Verify Version Behavior by Rolling a Change Through Branches With our two versions of the documentation set up, let's now see a change flow through the master and release branches of our code and make sure the documentation versions behave appropriately. So I've got the changes from this module in an approved pull request that I can merge to master. These two changes are the custom CSS for the table change we did on the Hardware and Code page, and the added support for markdown and the About markdown page. Once again my friend Robert has approved the pull request and I'll just scroll down and merge this to master, which should kick off a CI/CD process on Read The Docs with a new build for the version called latest. The release version should remain intact without these changes. Let's go see if it worked. Okay, even before I was able to get back to Read The Docs, I could already see a new passed build for latest, but the release version was from a day ago. Promising. 
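The branch flow being exercised here — changes land on master first, then flow into the long-lived release branch via a merge — can be sketched with a throwaway local repo (assumes git is installed; file names are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init docs && cd docs
git config user.email "demo@example.com" && git config user.name "Demo"
git symbolic-ref HEAD refs/heads/master   # name the first branch master
echo "stable docs" > index.rst
git add . && git commit -m "initial docs"
git branch release                    # long-lived branch, cut from master

echo "About page" > about.md          # new work lands on master first...
git add . && git commit -m "add about page"

git checkout release                  # ...release doesn't have it yet
test ! -f about.md

git merge master -m "merge master into release"   # base=release, compare=master
ls                                    # about.md is on release now
```

This is exactly what the GitHub pull request with base release and compare master accomplishes, which is why the release version of the docs only updates after that second merge.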
When we view the docs, which loads the latest version by default, we see the About link at the bottom of the left nav. I click on that and the markdown page looks as expected. And I click on the Hardware and Code page and the table does not scroll off to the right of the page. So far so good. And when I click the release version, we can see that my table doesn't even have the new column. Great. Nor does the About link exist on the left nav. This is awesome. Let's go back to GitHub and merge our changes from master into the release branch and see if this last step works as well as the others have. To merge changes from master into release, I'll create a new pull request. I choose a base of release and leave master as my compare branch. I'll add my note and then submit the pull request. I've not done any branch protection on the release branch, so I can just turn around and merge the pull request. I'll confirm the merge and then we can go see about that CI/CD process again on Read The Docs. This time we're looking for a new build of the release version. So I'll go back to our doc tab and then nav back to Read The Docs. I got here in time to see the release version in an installing state. With a couple of refreshes, the build has completed. I'll click View Docs again to have a look. The latest version comes up by default and I can still see my changes there. And when I change to release, I don't see them at first, but a page refresh comes to the rescue. I now see both changes in my release version, which is pretty awesome. All of that just by managing the source code for the docs in their respective branches. Summary In this module we rounded out some content-related aspects that took us past standard reStructuredText. First we added a custom stylesheet to the project and used it to control table width. Then we added the capability to include markdown files in the project. And finally, we looked at supporting different versions of the documentation. 
If the already reviewed solution of public Read The Docs hosting and GitHub works for you, then you can go forth and create or contribute to documentation with my full endorsement. But if you want to explore alternative hosting options for either the source docs or the documentation website, or even both, then join me in the next module where we'll have a look at these topics. Hosting Alternatives for the Documentation and Its Source Introduction So far in this course everything we've done has used cloud hosting from GitHub and ReadTheDocs.org. While this may work fine in many situations and offers its conveniences, it may not fit your requirements for hosting source code or for controlling the accessibility of the documentation site itself. It's Erik again and in this module we'll be having a look at options available to us in these areas and actually setting up an alternative to GitHub and ReadTheDocs that should provide sufficient guidance to help direct you in most any other combination of options you might choose. We'll start the module by discussing the options available for both source hosting and documentation site hosting. The available options should have something that will meet your needs. Then the rest of the module will be spent specifically setting up the whole process with Visual Studio Team Services, or VSTS. We'll continue to use Git for version control, but when we get away from GitHub and ReadTheDocs, we'll have to do some extra work to get both builds and deployments set up. We'll be configuring VSTS on the cloud to build the docs on my local machine by using what's called a build agent. Then we'll set up a deployment from VSTS to my local machine as a host for the doc website. We'll also enable CI/CD here, just like we had with GitHub and ReadTheDocs. 
This flow may not be exactly what you would want to use for your setup, but if you recall what we've already learned about Read The Docs itself, are familiar with the other options we'll discuss here in a sec, and then stay with me while we learn the techniques to set up VSTS, you should have everything you need to know to host source or doc websites wherever it makes sense for you. Hosting Options Explored So let's talk hosting options. We'll cover source first. We learned earlier in the course that Git is a distributed version control system and that GitHub is one place in the cloud where we can host Git repos for free, as long as they're public. GitHub has great integration with Read The Docs, so we put our source there. BitBucket is a similar platform and offers very comparable integration with Read The Docs. Other Git hosting providers include GitLab and Microsoft's VSTS and TFS, and I'm sure there are many more. These last few Git providers do not have Read The Docs webhooks out of the box, so choosing those options may result in some more work to set up automated build and deploy with Read The Docs. The Read The Docs documentation site describes the process in the webhooks section of the docs. Moving on to the doc site hosting options, we used ReadTheDocs.org website to start with. This is a free option and provides a great place for public documentation. If you love the functionality of Read The Docs, but public documentation doesn't work for you, then ReadTheDocs.com may be exactly what you're looking for. This is a commercial version of Read The Docs and has different pricing levels, and you can choose the plan that matters to you. Here's a quick look at the ReadTheDocs.com website and you can see the main benefits we've already been exploring in this course, plus the fact that the docs are only available to people in your organization. To get started, you can use the Sign up button at the top of the screen if you're so inclined. 
If neither of the ReadTheDocs options seem to be what you want, keep in mind that the built doc site is just HTML, CSS, and JavaScript and that probably means that any web host you can identify can host your doc site. So if you've got a favorite cloud that you deploy to, or if you want to put this site right on your own internal network somewhere, those options are both completely valid. Later in this module I'll be deploying from VSTS to IIS on my local machine and this could represent a web server on your network. You could run the site under NGINX or Apache or whatever web host you want and wherever you want. Set up Repo in Visual Studio Team Services (VSTS) In this demo, we'll be setting up Visual Studio Team Services with a new project for our doc source. Visual Studio Team Services, or VSTS, is Microsoft's cloud version of Team Foundation Server which supports lots of great features and doesn't require any hardware infrastructure on your part. It's free for small projects and teams. You can get started by visiting visualstudio.com and choosing the Get started for free link under Visual Studio Team Services. I'll meet you inside VSTS after I get logged in. So here I am in my main VSTS welcome page, which shows my projects. I'm going to jump right in and create a brand new project. I'll call it coding-with-kids and leave the description blank, as well as leaving the version control set to Git and the work item process set to Agile. After the project setup process completes, I see some information very much like what we saw earlier in the course from GitHub. I'll expand the push existing repository item and copy the commands there. This should be a good refresher of how we got our project into GitHub earlier in the course. We'll pop over to VS Code and continue our work there. Back here in VS Code, I've got a copy of our doc project and I've already added a .gitignore file that will ignore our _build folder from Git monitoring. 
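The repo setup steps — ignore file, init, remote, push — amount to a handful of commands. Here's a sketch with a local bare repo standing in for the real VSTS remote URL, which would look something like https://youraccount.visualstudio.com/_git/coding-with-kids (assumes git is installed; names are illustrative):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init --bare vsts-remote.git       # local stand-in for the VSTS repo

mkdir coding-with-kids && cd coding-with-kids
echo "_build/" > .gitignore           # keep the build output out of Git
echo "Coding with Kids" > index.rst
git init
git config user.email "demo@example.com" && git config user.name "Demo"
git symbolic-ref HEAD refs/heads/master
git add . && git commit -m "initial docs import"

git remote add origin ../vsts-remote.git   # first command VSTS suggests
git push -u origin master                  # second command: push it up
git ls-remote --heads origin               # master now exists on the remote
```

The git remote add and git push pair is the same "push an existing repository" recipe that both GitHub and VSTS display for a fresh, empty repo.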
With that done, I can use the source control tab and the Git button to initialize the directory as a Git repository. Then I can commit all of the changes locally using the VS Code Git interface. To finish up though, I'll need to use the terminal with those commands I copied from VSTS. I'll open the terminal and paste in the first command, a git remote add command that hooks us up to our VSTS repo. Then I type in the second command to push the changes up to the server. I see some progress output, which looks good. Let's pop back to VSTS and make sure our code has been uploaded. Refreshing the page, I no longer see the setup instructions, and when I go to the Code tab, I see the code listed out. Nice. So we're all set up and ready to continue. Our next step will be setting up a build agent. VSTS Build Agents: Explanation and Process Overview In this demo we'll be setting up a remote build agent. A build agent is a place where we can physically run the build process. VSTS has some basic build agents already in place that can support a variety of different application platforms and their targets, and if one of these meets your needs, you don't have to create any build agents on your own. You'll see where choosing existing build hosts fits in when we start the actual demo part of this clip. To actually set up our own build agent, the steps will be to download the agent software, then get a personal access token that can be used to securely communicate between VSTS and the build agent, then extract the agent software from the downloaded zip file using a provided PowerShell command, and finally, configure the agent software. We'll tell it where our VSTS instance is and choose whether to run this as a service or interactively. Let's go make this happen. Set up a Local VSTS Build Agent Our immediate goal here is actually to build our code into the doc site. So let's start there. 
We'll go to the Build and Release tab and start the process to create a new build definition. When we do that, we're prompted for a template. You can see all of the different stock templates that VSTS supports out of the box, which is great. We're looking for a Python build for a Sphinx doc project, which isn't in the list. Not a problem, we'll start with the empty definition. What we get prompted for next is what will lead us to create our queue and agent. We need a machine of some sort that can perform the build, which means that it should have Python, Sphinx, the Read The Docs theme, and possibly the recommonmark package all installed already. You can see the options in the dropdown for the agent queue, and some of them may have what we need, but I'm happy to just have the build done on the machine where I actually know I've done the install of all the required items. So I'll hit the Manage link next to the Agent queue label and keep working from there. You can see some of my previous experimentation and confirmation work here, but I'll be ignoring that and setting things up from scratch. I'll actually create a completely new queue called ReadTheDocsQueue. And in this queue, I can put build agents capable of building Read The Docs Sphinx projects. I'll only put one in here, but you could add as many as you want with the steps that follow. When I create the queue, it shows me the list of agents, which is empty. I'll click Download agent, which might not be entirely intuitive to you. You're not downloading an existing agent, but rather agent software that will run an agent process on a target machine. Having clicked the button, I see some instructions that I can follow, along with tabs for other operating systems. I'll click the Download button, which will prompt me to save a file. Creating and configuring the agent will be done next, and I'll copy the PowerShell commands from here. 
There are detailed instructions about configuring the agent that will explain some of the rest of what I'll be doing. It basically amounts to running the PowerShell commands you see to create the agent, creating and copying a personal access token, and then running the configuration command you see at the bottom here. To create a personal access token, you click on your picture or name indicator at the top right and choose the Security option, then just click Add to create a token. It will ask you for some information about your token, including a name, and I'll give it a descriptive name of readthedocsbuildanddeploy, since I'll use this token for a build agent now and, in just a few minutes, for a deploy agent. Then I'll scroll down to the bottom of the page and click Create Token. It warns me to copy the token now, since it won't be accessible to me after I leave this page. I'll copy the value so I can use it when we configure our build agent. Okay, now I'm in a PowerShell command window running as administrator. I'll go to the root C directory and create a new directory called agent and then go into that directory. Then I'll run the PowerShell command that the instructions gave me, which just extracts the downloaded agent zip file to the directory that I'm sitting in now. This will take a minute or so and is pretty quiet when it runs. Once it's done, we can run the config command from right here. The first value we're prompted for is the server URL, which is the visualstudio.com subdomain for your overall VSTS instance; mine is knowyourtoolset.visualstudio.com. The authentication type is PAT, so I can just press Enter, and then paste in the access token that I copied a second ago. Then I get prompted for the queue I want the new agent to be a part of, and I'll specify my ReadTheDocsQueue. Then I'll use a custom name of eriksp4 for the agent name. Then it does a few checks and everything looks good, and I get prompted whether to run the agent as a service. 
Specifying yes will run a service in the background that is always listening for build requests. Not running as a service means that I would need to come in here and run a command to manually start the agent and stop it when I choose. I'll use the service option. I'll specify my local account here to run the service as. You can specify any account, local or domain, that has access to the installed Python instance and the packages we pulled. And that completes our build agent setup. We should now be able to circle back and create an actual build definition and have it run on our new agent. Join me in the next clip when we do just that. Create VSTS Build Definition and Build the Docs Now that we have our new queue and agent set up, we should be good to go to create our build definition. So let's do it. We'll click New definition and then just choose the empty process, since we already had a look at the available options before and didn't have a template that would meet our needs. Now when I get prompted for the agent queue, I'll choose the new ReadTheDocsQueue that we created. Nice. The first step is to grab sources, and I'll just confirm the contents there. You can see that it's pulling from this project and using the coding-with-kids Git repo, and specifically the master branch. This looks good, so let's move on. We now need to set up the actual build process steps, so we'll click the plus by Phase 1 to set up our first task. We just want to run the make batch script from our project, so I'll scroll down to the Batch Script task and click Add to configure it. I can see that I'll need to specify some values here, so I'll click on the Run script entry to see what it needs. The path is the name of the script, ./make.bat in our case, and the arg is html, since that's what I'm building. This is coming straight from the way we built the docs within VS Code. 
I'll save my work at this point, but I don't want to actually queue a build yet, so I'll click the arrow by Save and queue and choose the Save option. The current folder is fine, so I'll click Save again, which will close the dialog. We need one more step, and that's to get the build artifacts, i.e. the results of the build, published back to VSTS so that they can be deployed in a release. I'll use the search box and look for a task related to artifacts. The one I want is Publish Build Artifacts. I'll choose that and then click on the new task to configure it. The path I want to publish is the _build/html folder, and I'll give the artifact a name of DocSite. I'll just save this again, and then I'll go back to the Builds tab so you can see what that looks like. I see my new build definition, and if I click on that, I get a details page that includes the option to queue a new build. I'll queue the build, then allow the pop-ups, and we can see the progress of the build right in our browser. It did a source fetch, then it ran our make.bat command, then it published the artifacts, and it's done. And if I click on Build 5 here and then choose the Artifacts link, I can see the DocSite artifact. And if I click Explore next to it, I can indeed see the content of our website containing the built documentation. Sweet. So now we have a build definition that works. Let's go create a deploy agent and get the site deployed. Set up VSTS Deploy Agent To enable us to deploy our DocSite, we need to have a deployment group defined with a deployment agent inside of it. So I'll choose the Deployment groups option, and then we'll add a deployment group. I'll call it ReadTheDocsHosts, and then I get a nice big registration script that I can copy. I'll click the checkbox to add the personal access token to the script and then copy it to the clipboard. Then we'll pop back to our admin PowerShell window. I'll clear the screen, then paste in our lengthy script. 
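For reference, the two tasks we just assembled in the classic editor amount to something like the following YAML pipeline sketch. The YAML format is a newer alternative way to define builds, and the exact schema shown here is an assumption on my part, not what the VSTS editor displays:

```yaml
# Hypothetical YAML sketch of the build definition above: run the
# Sphinx make batch file on an agent from our custom queue, then
# publish the generated _build/html folder as an artifact.
pool:
  name: ReadTheDocsQueue      # the agent queue created earlier

steps:
- script: make.bat html       # same command we ran locally in VS Code
  displayName: Build Sphinx HTML

- task: PublishBuildArtifacts@1
  displayName: Publish built site back to VSTS
  inputs:
    PathtoPublish: _build/html
    ArtifactName: DocSite
```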
It will run pretty quietly for a bit, then connect to the server and ask about deployment tags. I don't need those, so I'll just hit Enter. I'll again use my local account to run this under, and you can run this as any user account that has appropriate permission to perform the deployment tasks on the machine. In our case here, we'll be deploying the site to IIS, which I've already enabled as a feature on this machine. Join me in the next clip to actually perform the deployment. Create VSTS Release Definition and Execute It Now that we have a deployment agent in place, we can create our release definition. I'll click New definition to start the process. I'll then do a quick search for IIS because, as I mentioned earlier, I'll be deploying our site to run within IIS. If you have a different web execution environment, you could choose something different. It asks me for an environment name, and I'll type Production for that, since it's a real, living DocSite. Once I do that, I'll add an artifact that will be the built output that we want to make available to our release definition. I'll choose the build that we just created as our source and then save the artifact settings. Then I'll come back into the Environment box so we can provide our deployment details. The name of our website should probably not be Default Web Site, so I'll change that to CodeWithKids. Note that I'm leaving the default binding of port 80 with all unassigned IP addresses alone. If you've got other websites running on your host, you may need to adjust these bindings. Now when I click the Manage task, I want to update the physical path where we'll place our content. I'll make that a subdirectory of wwwroot called codewithkids. Using subdirectories here is not required, and you may have your own preferences for handling site content. Next, something that's easy to overlook is the package or folder setting, which comes with a default value that looks imposing. 
It is definitely not what we want, so we need to change it. I'll click the ellipsis next to the entry field, and because we specified the artifact from our build earlier, we can navigate to the DocSite artifact folder and choose that. This is where we actually specify the content to deliver for this site. Once I do that, I see one last setting that needs our attention, so we can pop in and review that. It turns out we need to specify where we want the deployment actually sent, and this is where we'll specify the deployment group we created in the last clip. We'll save this, and then I'm noticing one more thing: our release definition is named New release definition, which is probably not very helpful if we start getting a few of these things created. I'll use the pencil icon to edit the name to something more meaningful. Once again I'll save the release definition, and now we should be ready to go. I'll click the plus Release button to actually add a release based on a real build, and I get prompted to see if there's anything I don't want to just let fly. There isn't, so I'll click Create. I'll then click on the Release-1 link that showed up in the banner to see the details of the current release. I can quickly see that the deployment status is in progress, which is great. And before I can even click on the Logs tab to see status as it goes, it's completed successfully. This means that I should be able to open a new browser tab, go to localhost/index.html, and see if the deployment worked. And it did. I'll see you in the next clip, where we'll make sure that we've got CI/CD fully enabled and verify it by making a change to the master branch. Set up CI/CD with VSTS, Push Change for Edit on GitHub In this clip we'll set up CI/CD within VSTS and then confirm the setup by introducing a change you might like. You may not have noticed this, but in the locally deployed version of the docs, the Edit on GitHub link has changed to View page source. 
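If the browser isn't handy, a quick smoke test from a PowerShell prompt on the host works too. This is a hypothetical check of my own, not a step from the course:

```powershell
# A 200 status code means IIS is serving the freshly deployed docs.
(Invoke-WebRequest -Uri http://localhost/index.html -UseBasicParsing).StatusCode
```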
This is functional and a little handy, but nowhere near as useful as a link that takes readers straight to the doc's source in the repository so they can contribute directly. We'll introduce a little workaround for this as our change to be deployed through CI/CD. But first, let's go set up the CI/CD. We'll pop back to VSTS and look at our build definition. To do so, I'll click on the ellipsis in the build definition row and choose Edit. Then I'll go to the Triggers tab and enable the CI trigger. Everything else is fine, so I'll save the updated build definition. Now for the continuous delivery part. We'll hit the Releases tab, I'll click the ellipsis next to CodeWithKidsRelease, and choose to edit that. The lightning bolt is what I'm after here. When I click that, I have the option to enable the CD trigger, which I'll do. I'll specify the master branch as what I want to be watching here, and then I'll save the change. Before I go back to VS Code and make the change though, I wanted to point out that I'll be doing a direct push to master here. If you wanted to impose branch protection and require pull requests, you could go back to the Code tab and choose Branches. From there you can click the ellipsis on the branch row, and the Branch policies item is probably what you're looking for. Many of these options should look similar to what we saw regarding branch protection when we applied that to our GitHub repo. For now, just know where this is, and now we'll hop back into VS Code and make our change. The change to enable a link to the Git repository is to use the GitHub URL directive that's described in the documentation of the ReadTheDocs theme repository on GitHub. There's also a GitLab URL directive and a Bitbucket URL directive, but no VSTS URL directive, so I'll just hijack the GitHub one. When I specify the URL for this page, which I grabbed from VSTS, note the index.rst portion of the path. This is a page-level setting that should be done on each page. 
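In a page's .rst file, that looks something like the snippet below: a github_url field at the very top of the page, before the title. The URL shown is a hypothetical VSTS path for illustration; substitute the address you grab from your own repo's file view:

```rst
:github_url: https://knowyourtoolset.visualstudio.com/_git/coding-with-kids?path=/index.rst

Coding with Kids
================
```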
If you add a line like this to each of your pages, you'll achieve the same functionality. I'll confine the change to just this one page and save the file. Then I'll commit the change. I'm in the master branch right now, and I can push this commit to the VSTS remote repo. Back in VSTS, we should be able to check our CI/CD configuration at this point. I'll refresh the page, and I see a suspicious-looking Not started note in the Status column. I'm not sure if you noticed earlier, but somehow in my earlier exploration I had inadvertently clicked Pause for this build definition in the ellipsis menu. I'll just click Resume to unpause the definition, and that fires everything off. We go over to the build details page, and almost immediately it's succeeded. So now I'll head over to the Releases tab and navigate into the new release, which has just been created. And I see that it's in progress, which shouldn't be a surprise. It succeeds a few seconds later, so I'll just pop back to the tab with my locally hosted docs and refresh the page. Sure enough, the link has been changed to Edit on GitHub, and when I click the link, I'm indeed taken to the VSTS page for the index.rst file. Sweet. If you're looking for a challenge, and VSTS or TFS source hosting is where you hope to end up, you could use the custom CSS technique we used earlier in the course (or a bit of JavaScript) and maybe change the text of the link to Edit on VSTS, or something like that. It didn't bother me enough to want to change it, and I like the behavior, so I kept it. Mission accomplished: functional CI/CD set up with VSTS and on-prem doc hosting. Let's go finish things up. Conclusion What an awesome ride this has been. I hope you enjoyed what we accomplished in this course. I know I sure did. In this module we looked beyond ReadTheDocs and GitHub to other source and doc hosting options, and got a full alternative solution set up: source hosting in Visual Studio Team Services with an on-prem build server and doc website. 
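If you do take on that challenge, a CSS-only override might look roughly like this in a custom stylesheet wired into the project through conf.py. The selectors are my guesses at the sphinx_rtd_theme breadcrumb markup, not verified code from the course, so inspect the rendered page and adjust them:

```css
/* Hide the theme's original edit-link text and substitute our own.
   Selector names are assumptions about the breadcrumb markup. */
.wy-breadcrumbs-aside a.fa-github {
  font-size: 0;                 /* collapse the "Edit on GitHub" text */
}
.wy-breadcrumbs-aside a.fa-github::after {
  content: "Edit on VSTS";      /* the replacement label */
  font-size: 16px;              /* restore a readable size */
}
```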
In the course overall, we've covered a lot of ground. We looked at documentation options and created a new project using Sphinx with a theme provided by Read The Docs. We learned the syntax and markup for most of the common ReStructured Text elements you would want to use in your documentation. Then we set up hosting for both source and docs using GitHub and Read The Docs. During the process, we got a crash course in Git workflow, which covered branches, commits, pushes and pulls, and pull requests. We also watched CI/CD in action to get new doc builds automatically created by simply changing the master branch of code. Then we ventured out past ReStructured Text and did some work with custom CSS, markdown, and even different versions of the documentation. At this point, you should really have a full complement of knowledge to doc yourself out. Until next time, thanks for watching. Course author: Erik Dahl, full-stack developer and architect using the Microsoft stack and other key tools to create awesome solutions. Course info: Level: Beginner. Duration: 2h 0m. Released: 6 Dec 2017.