Docker for Web Developers


  1. Why Use Docker as a Developer? Introduction Docker gets a lot of attention nowadays, and for good reason, but if you've looked into it at all, you might have wondered what exactly it is, and whether it's something you can actually use as a web developer. I know when I first started reading about it, hearing about it at conferences and user group talks and things like that, I really wondered if it was something that even played a role in what I did, and the more I dug in, the more I found out that, yeah, it actually can play a big role in our web development operations, and that's what we're going to address in this first module. So we're going to start off by talking about what exactly Docker is, and we'll clarify some key terms and concepts that you need to know in order to understand how Docker works and how you can use it successfully. Now from there we'll jump right into the benefits that Docker can provide us as web developers, and you're going to see there are actually quite a few benefits it can provide, a lot of great stuff there. Next up we'll talk about the Docker tools and the role that they each play in the overall development workflow that we're going to be discussing throughout the course. And then we'll wrap up by seeing Docker in action, and I'll actually show an application that's using Docker to hit a database, do some caching, and handle some other aspects of a normal development workflow and application. So let's go ahead and get started by answering that all-important question of what Docker is, and then jump into the benefits it can offer us as developers.

  2. What Is Docker? Let's start things off by answering the question, what is Docker? Docker does have some different terms, so we're going to clarify what those are, we're going to clarify where it can run, and how this all works. So Docker itself is just a lightweight, open, secure platform, and that's kind of the official party line if you will. The first time I heard that, it didn't make a whole lot of sense, because I could think of several things that might fit a lightweight, open, secure platform definition, but really, Docker is a way to simplify the process of building applications, shipping them, and then running them in different environments. Now when I say environments, of course I'm talking about development, staging, production, and others that you may have at work. Now what actually ships with Docker then? Well, we're going to be talking about things called images and containers, and containers are really, really important. You'll see over to the left the Docker logo, and you can think of the whale there as kind of like a ship that carries containers. Back in the old days there was no standardized way to ship things on the old-school ships, so it was a lot more time intensive and not very productive to get stuff on and off the ship, whereas nowadays the major shipping companies of course have very standardized shipping containers, standard size, standard height, standard width, so as the crane goes over when the ship docks, it's very quick and efficient, very productive, to get those containers on and off these ships. Well, Docker is very similar. If you think of the old days with the ships that had no standards for shipping products around, that's kind of where development has been for many years, everybody does it their own way. Docker provides a consistent way to ship our code around to different environments, and so it's going to provide a lot of benefits, which we'll be talking about in this particular section of the module. Now it runs natively on Linux or Windows, and when I say Windows, Windows Server 2016 or higher now supports it. We'll talk more about that coming up. And as a developer, if you're on a Windows box or a Mac box or a Linux box, you can use Docker in your development workflow and it's very easy to get up and running. Now if you're on Mac or Windows, you will need a virtual machine, because by default Docker is going to expect a Windows server or a Linux server. Finally, the key buzzwords that are typically thrown around with Docker are images and containers. Let's clarify what exactly an image is and what a container is, and how they relate to each other. So when it comes to the role of images and containers, an image is something that's used to build a container. Now an image will have the necessary files to run something on an operating system like Ubuntu or Windows, and then you'll have your application framework or your database and the files that support that. So if you're doing Node.js or ASP.NET or PHP or Python, then you'd have that framework built into the image, as well as your application code typically. Now the image itself is not overly useful, because it is just the definition; think of it as the blueprint that's used to actually get a running container going.
So if you go back to the shipping analogy, think of someone who does some CAD drawings of what's going to go in the shipping container, maybe even how they're going to arrange it in the shipping container. The blueprints aren't very useful on their own, but you can use those to create an actual instance of that container. Well, that's the same process in Docker. We'll have images that can be used to create a running instance of a container. Now containers are actually where the live application runs, or the database or caching server, or whatever it may be that you need to actually run on a Linux or Windows Server machine. Now let's dive into the definitions of each of these just a little bit more. So an image is a read-only template, and what it's composed of, and we'll be building these throughout the course, is a layered file system. So you'll have some files, for instance, specific to your Linux operating system or your Windows operating system, and then you'll have your files for your framework, ASP.NET or Node.js or whatever it may be, and then they are all put together to make this image. Now once you have an image, you can use that to build an isolated container, and again, if you go back to ships, every container is very isolated from the other containers; one container doesn't necessarily know what's going on in another container. Now there are some gotchas there that we can talk about later, but in a nutshell the image is used to create an instance of the running container, and then you can start the container, you can stop it, you can move it, you can delete it, and it starts and stops really, really fast. That's what's so cool about this technology: it's very quick and easy to get a container on the ship and off the ship, and the ship in our case would be the development environment, the staging environment, or the production environment. Now where does Docker run then? Well, as I've already mentioned, Docker can run on Linux or Windows servers, and so if you're going to be running on a development machine you have to have a virtual machine, which we'll talk about. Now the exception would be if you're developing directly on a Linux machine, then you can just run Docker containers natively. Docker ships with what's called a Docker Client. And the Docker Client can then integrate with these different operating systems such as Linux, and it integrates with a Docker Engine, a daemon that you'll see here. Now Docker itself has its roots in Linux, that's actually where it came out of; the company built on top of something called LXC, Linux container support, that's already built into the Linux operating system. Likewise, Windows Server 2016 or higher also has Docker support built in, and so the Docker Client there can be used to integrate with the Docker Engine, which can start and stop and delete, and do all those things with our containers. So think of the Docker Client as the commands that are given to the people that load the ship or remove things from the ship, whereas the Docker Engine would be the cranes and the people running those that actually get the container on the ship and up and running. Now what's the difference between Docker containers and virtual machines? Because this is one of those things that the first time I learned about it, it didn't make a lot of sense to me.
So virtual machines always run on top of a host operating system, so of course you could have a host running Linux or Windows, and then you can run a guest OS on top of something called a hypervisor. And so on the left you might have App 1, it might be a PHP app for instance with its binaries, libraries, whatever it may be, or ASP.NET or Node or whatever you have. And then App 2 might be running on a different guest OS, so let's say the guest OS on the left is maybe Windows and the guest OS on the right could be Linux. The bottom line is you have a copy of the operating system for every virtual machine, and so depending on the type of hard drive and things like that, it can be a little bit expensive to start up and stop a virtual machine. They run well, but they're pretty big. The images for a virtual machine to get it up and running are generally multiple gigabytes in size. Well, let's compare and contrast that with Docker containers. Now they do still sit on top of a host operating system, Linux and now most recently Windows Server 2016 or higher, and then we have this Docker Engine, which can integrate the containers with the host operating system. And so now as we get a container up and running, you can think of the host operating system as the ship, and then the container for App 1 has everything App 1 needs for that particular feature, so Node.js with all the application code for instance. Now App 2 might have a completely different container, and typically applications will have multiple containers. You might have a container for your database, a container for a caching piece, a container for your application code and the framework, those types of things, but the bottom line is they sit right on top of the native operating system, so when I ship these around, I'm shipping a smaller image, very small compared to a guest OS virtual machine. Containers also start very, very fast, and we'll be seeing that throughout the course; they come up in a matter of seconds. The difference here is very, very big. Containers share the host operating system, whereas each VM carries its own full copy of a guest OS; everything is a copy as you make a new VM. So that's an overview of what Docker is. We've now talked about images and containers, and we'll be delving much, much more into those throughout the course, and we've also done a comparison between Docker containers and virtual machines. So now that we've done that, let's start talking about, alright, this is great and all, but how does this actually help me as a web developer?
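
To make the image and container relationship concrete, here's a minimal sketch you can try once Docker is installed (nginx and alpine are public images on Docker Hub, and the port mapping is just an example):

    docker pull nginx                # download the image, the blueprint
    docker run -d -p 8080:80 nginx   # create and start a container from that image
    docker ps                        # list the running containers

    # containers start in seconds; compare that to booting a full VM
    time docker run --rm alpine echo "hello from a container"

The docker run line is where an image becomes a running container; running it twice would give you two isolated containers built from the same blueprint.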

  3. Docker Benefits (for Web Developers) Docker offers several different benefits to us as web developers, and in this section we're going to walk through some of the key ones that we can get by leveraging it. So whether you're working on a team of one or many, Docker can help set up a development environment very quickly, and that's really one of the key aspects that we're going to focus on throughout this course; while that's just one of the benefits, it's definitely a big one as a web developer. Docker can also help eliminate app conflicts. If you have apps pinned to specific versions of frameworks and you can't move to the latest version, isolated containers can help out there. It also provides a way to move your code and the entire environment of the code between your different environments, so between things like development, staging, and production. And if we can do all of that, we can more than likely ship software faster, and that's a good thing of course. So let's dive into each of these four areas really quickly. When it comes to accelerating developer onboarding, oftentimes we have multiple team members of course, and we might have some developers, maybe a mix of designers, or people that do both of those things, and oftentimes you want people working with the actual version of the app versus just a prototype that's separate. And so we might have a web server, we might have a database server, a caching server, and those types of things, and setting all that up on an individual developer machine, especially for people that work remotely in different scenarios, can be challenging because you have to get the security right, you have to get the configuration settings right, and make sure the right versions are on there, and so getting that right and not having surprises after the fact can be a little bit of a challenge. So Docker can help there because we can make one or more images that can then be converted into running containers, and those containers can run on our different developer, and even designer, machines. You'll see in just a little bit that to get this up and running, you can just run a simple command from the command prompt. So you really don't even have to be a developer per se to get some of the benefits out of what Docker can offer here. Now the next thing we'll talk about that Docker can help us with is eliminating app conflicts and version conflicts. Oftentimes you have an app running on a specific version of a framework and you'd like to move to the next version of the framework, but you're told you can't because that would impact other applications running on the production servers. What Docker can offer is isolated containers, and each container holds its own copy of the framework at whatever version it needs, isolated as we've talked about. As a result, apps 1, 2, and 3 that are targeting V1 will run fine in their own containers, and apps targeting V2 can run in their own containers, and now we can have different versions of Node or PHP or ASP.NET running side by side, and this makes it a lot easier to work with versioning and app conflicts. Now some of the frameworks out there obviously have some of this versioning built in, but with this you really won't have to worry about it; even if your framework doesn't have a good versioning story as you move between versions, Docker can help out in those scenarios, as the quick sketch below shows.
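
As a quick illustration of that isolation, you can run two different framework versions side by side without either one touching a machine-wide install (the Node.js image tags below are just examples of versions published on Docker Hub):

    docker run --rm node:8 node --version    # one app's container runs Node 8
    docker run --rm node:10 node --version   # another runs Node 10, no conflict

Each command starts a throwaway container with its own copy of the runtime, which is exactly why the V1 and V2 apps above can coexist on one server.
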
Now it can also help, as mentioned, with consistency between environments, and this is one of those things that I know I personally have been burnt by over the years, going way back to around the year 2000. We had a particular project I was working on at a consulting company, and our development environment was set up by the company, we didn't actually do it ourselves, so we had to work on remote machines, and the staging environment was also set up by them. Everything was working great on dev, which was supposed to be the same as staging, but it turns out it was not, and so we had a nice surprise and had to do some rewrites of things as we moved our first code over to staging. With Docker we can eliminate a lot of these surprises, because we'll simply move the different images that we're going to be building throughout this course over to the different environments and get the containers up and running, and that way if it runs on dev, it definitely should run the same on staging and production. And we'll talk about how to get all that set up. Now doing that just means we can leverage all of these benefits to ship code faster, and that's really what shipping software is all about: productivity, high quality, predictability, consistency, all these different words we can throw out. As we move our images between dev and staging and production and get those containers going, we can leverage the benefits that Docker offers, the isolation of the containers, a consistent development environment, and all the other benefits with versioning that we've talked about. So there really are a lot of good things that Docker brings to the table for us as web developers, and now you've seen a few of those.
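
As a hedged sketch of how that image movement usually works in practice, a built image gets pushed to a registry and pulled from each environment (the image name and registry host here are hypothetical):

    docker tag myapp:1.0 registry.example.com/myapp:1.0
    docker push registry.example.com/myapp:1.0

    # later, on the staging or production host:
    docker pull registry.example.com/myapp:1.0
    docker run -d registry.example.com/myapp:1.0

Because the image carries the framework and code together, the thing you tested on dev is the same thing that runs on staging and production.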

  4. Docker Tools Now that you've seen what Docker is and some of the benefits that it can offer us as web developers, let's jump in really quickly to some of the Docker tools that we're going to be talking about and actually installing a little later in the course. So Docker has a Docker Toolbox that you can download from the URL shown here. And if we go to docker.com, we'll be talking about some of the documentation; they have some really nice docs on installing Docker and getting all of these tools I'm going to show you going on your machine. It's actually very simple to do, but we'll do that in a future module in the course. Now Docker Toolbox itself allows us to do a lot of the things that you'll want to do with images and containers. So for instance I can build images and I can then run an image as a container, and I can do all the configuration, I can get the logs of a container for instance, I can specify ports to use, and much, much more that we'll talk about. Now it is going to require a virtual machine; by default it uses VirtualBox, and it's going to get all that installed for you out of the box, you don't even have to really do much on your part. So if you're on Windows or Mac, you will need to have VirtualBox, but the Docker Toolbox will provide that for you. And then this runs anywhere you want. So you can run Docker itself, and when I say Docker that's actually kind of a generic term for the Docker Toolbox, but you can run a lot of the tools that are provided, such as the Docker Client that we talked about earlier, the Docker Engine, and all of that on Windows, Mac, and Linux. And again if you're on Linux, it has the container support built in and you can just run it directly, on Ubuntu for instance. So what tools do you actually get in this Toolbox? Well, we're going to get all the key things we need. We're going to get the Docker Client, and this is how we're going to be able to interact with and view our images and our containers, and we can start and stop containers, do all that type of stuff. Docker Machine is going to allow you to work with the virtual machine and set up environment variables and other things that we'll need so that the Docker Client can then interact with the different running containers that you might have or that you might want to start. Docker Compose is going to allow us to run multiple containers; an application typically isn't just a web server, we oftentimes have our database and potentially other types of application servers, so Docker Compose will give us a really simple way to bring up multiple containers and have those containers talk to each other. And then if you're either anti-command line, or just would prefer to have a GUI, then Docker Kitematic, which is a bit of a newer part of the Docker Toolbox, allows you to actually locate Docker images that are out there. When you go to docker.com you can also get to something called Docker Hub, and this is where we can host images that we might create, or grab images that other people create. So for instance if you want to run ASP.NET or Node.js or PHP or whatever it may be, those images are already up there, and then of course you can always tweak those images if you want to make your own little adjustments.
So Docker Kitematic allows us to view the different images that are out there, and we can then bring those down locally, convert them into containers, interact with those containers, and do a lot of the stuff that would normally be done through the command line more through a GUI type of tool. And I mentioned that VirtualBox is going to be used to host these different containers that we're going to be running, and this is something we'll need, again, if you're on just a regular Windows machine or on a Mac. Now in the upcoming modules in the course I'm going to show you how to get all this installed, and we'll actually start using specific commands from a lot of these tools, and I'll show you how these can fit into our web development workflow.
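
Since Docker Compose came up a moment ago, here's a minimal sketch of the kind of file it reads; the service names are made up, and nginx and redis just stand in for whatever containers your app needs (we'll build real compose files later in the course):

    # docker-compose.yml -- a minimal, hypothetical two-container setup
    version: '2'
    services:
      web:
        image: nginx
        ports:
          - "80:80"
      cache:
        image: redis

Running docker-compose up in the folder containing this file brings both containers up together and lets them talk to each other.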

  5. Docker in Action Now that you've seen what Docker is, some of the benefits it can offer us as web developers, and a few of the Docker tools that are going to be involved throughout this course, let's take a quick look at Docker in action. So the demonstration I'm going to show you actually uses six containers. We have NGINX, which is a reverse proxy, we have three Node.js instances, MongoDB as the database, and Redis as a caching server, and I'll be able to get this up and running quite quickly on my machine. So let's jump in and I'll show you how this works. Let's assume I've been tasked with getting my development environment up and running, and I need it to look not only like the other team members' environments, but also like our staging and production environments. Now if you've done this very much you'll know that that can actually be a little bit tricky, but with Docker it's very, very simple. So I've already configured some Docker images we're going to be talking about throughout the course, and I already have some containers ready to go, so I'm just going to run a simple command that we'll learn about later called docker-compose up, and this is a way that I can basically start up the six containers that I need to run this particular application. So we'll go ahead and let this run, and it'll take just a moment to fire up here as the web servers connect to the database and things. And right now I have an IP address that I already know Docker is going to give me, and I'm going to hit Refresh, and you'll see right now it's not quite ready, so let's go on back, and we should be really close here now. Alright, looks like we should be good to go now, so let's hit it, and there we go. So this just hit a website that's using, again, Mongo, Redis, Node, NGINX, and some other features behind the scenes, and this is actually a company site; you're going to get to work with a subset of this site so you can have a more realistic demonstration to work with throughout the course, but it's a pretty standard application, I can go in and get information about different things, go back to the home page, pretty standard stuff. And Docker made this really, really easy to work with. Now I can do Ctrl+C here, and this is going to stop all the containers, and from there we'll be kind of done and ready to go. Every now and then it'll throw an abort message; you can just ignore that because we can always start it back up, and yep, looks like we're good, so I can just wipe that out, and if I want to start it back up, we're ready to go there and we'd be up and running. So that's an example of Docker in action on a Mac. We can get the exact same environment going on the Windows side as well. So I'll go ahead and run a command, we'll start everything up, this will start up NGINX and some web servers and the database and more, and once this is all done, because we have the exact same environment that I showed on the Mac side, we'll of course get the exact same output and the same website running here. So there's not going to be any variability between the two sides of the house, and the same goes as we move things between environments such as development, staging, and production. So this is almost done firing up for the first time, and we'll go ahead and try to load this at this point. And you can see we get the exact same website loaded up, and that allows us to very consistently run our code in different environments. So that's an example of running Docker on the Windows side.
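
For reference, the lifecycle shown in this demo boils down to a couple of commands (a sketch that assumes a docker-compose.yml already defines the six containers):

    docker-compose up      # create and start every container in the compose file
    # Ctrl+C stops the foreground session (occasionally with an abort message)
    docker-compose down    # stop and remove the containers when you're done

docker-compose up is the single command that replaces installing and configuring NGINX, Node.js, MongoDB, and Redis by hand on each machine.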

  6. Summary In this module, you've learned what Docker is and seen how it can simplify building, shipping, and running applications across different environments. We talked about how it runs natively on Linux and now on Windows Server, but that it's not the same thing as a virtual machine. In fact it's very, very different, and it can be a lot faster in many cases as well. Now for us as web developers, there are a lot of key benefits that we also discussed. One of the big things is you can get your environment up and running very, very quickly, and in a consistent way. This allows developers, whether they're on the team already, or contractors or new hires, to get up to speed very quickly, whether it's on Mac, Windows, or Linux, and not have to do custom installs of various software. We also talked about how, if you work with multiple apps and multiple versions of apps and frameworks, Docker can help because of the container isolation. We talked about moving code between environments, development, staging, and production for example, and how Docker can provide a consistent way to do that, and all of this really leads to us shipping faster. We can ship our code into production faster, and hopefully make everybody happier along the way. So I'm really excited about Docker because it offers some things that we really just haven't had before. As I mentioned, it's like going from shipping in the old days across the ocean without containers to consistent, standardized containers for packing and shipping everything around. Well, when it comes to software, we now have a way that we can containerize, if you will, our code and our frameworks, databases, and other things, and this provides a much easier way to ship these different pieces around between our environments. So as we move along in the course we're going to jump right into getting Docker installed, and you're going to see how to use the different commands and how you can actually work with Docker in your web development environment.

  7. Setting up Your Docker Environment Introduction In this module, we're going to take a look at how we can get our Docker environment set up so that we can work with the different images and containers that we're going to be discussing throughout the rest of this course. So we'll start things off by talking about how to get Docker installed on Mac, and I'll introduce something called Docker CE, Docker Community Edition. Now, we're also going to look at how you install Docker on Windows. And with Windows, you actually have to choose a different version of Docker. So if you're on Windows 7 or 8, you're going to be installing Docker Toolbox, whereas if you're on Windows 10 Pro or higher, you can use Docker Toolbox, but most people are going to want to go with something called Docker Community Edition, a very similar version to what people on a Mac would want to run. Now, once we explain those differences, talk about how to get things installed on Mac and Windows, we'll also talk about something called Docker Kitematic. Now, most of what you do with Docker is command line, but Docker Kitematic is a GUI type of tool that'll let you view images that are up on something called Docker Hub, pull those down to your machine, and then run them as containers. We're going to introduce what Docker Kitematic is, and then I'll show you a quick example of Docker Kitematic in action and how we can actually pull down some different images out there and get them running on our machines. So let's go ahead and get started by discussing how to get Docker installed on a Mac.

  8. Installing Docker on Mac Let's go ahead and get started by installing Docker on a Mac, and I'll show the process. You'll see it's very straightforward, very simple to walk through the installation wizard. Now, the first thing you'll want to do is run off to store.docker.com, and if I scroll on down, you'll see a little explanation here about the different editions. So they have an Enterprise Edition, and then they have the free Community Edition, or Docker CE. We're going to go ahead and use Docker CE throughout, and that'll give us everything we need. Now, you might also come across something called Docker Toolbox, and in fact, videos throughout the rest of the course are going to show some aspects of Docker Toolbox. It's an older version of Docker, and it uses something called VirtualBox to actually host your Linux virtual machines and things like that. That is still an option, and you can run it on a Mac, so if you had a really old Mac, then you could go that route. And if we come into this, I'll show you as we click on Docker Community Edition for Mac, it'll actually tell you what you need, but assuming that you have a newer version, I would recommend installing Docker CE. Now, there are two editions: there's Docker CE Stable and there's Docker CE Edge. I do run Edge from time to time because I like to play with some of the new features that are experimental, but obviously if you're doing day-to-day work on a laptop or a machine at work, you want to be on what's considered the stable release. And they have a walkthrough on how to get started installing it. It's a very simple process though, so I'm going to go ahead and download this. And we go ahead and run the .dmg. Now, from here, we'll just drag it into Applications. Now we're all set up to go once that's copied. Now we just need to start up our Docker client, and what'll happen is we'll get this little whale that'll show up at the top here. So let's go ahead and run off, and we'll type Docker. There it is. So it's going to prompt us, since I downloaded that from the internet. We'll say Open. And now it's going to tell us that we need some privileged access. So I'm going to hit OK, enter my password, and we're off and running. So now you'll notice it's kind of firing up. It has a little animated icon. And they give you some information while it's firing up, some commands that we'll be learning throughout the course, and things like that. We'll go ahead and, as it says, hang on for a moment while it starts up. All right. And with that, Docker is now up and running. So if we go in and click here, you'll see I can restart it. I can get some about information or learn more about the Docker Enterprise Edition, get to Preferences, check for updates, go back to the Docker Store, which we're already at. We'll talk about Kitematic a little bit later. And then we can sign in and create a Docker account as well. Now, I'm going to go ahead and leave it as-is for now, but let's go to Preferences, and I'll show you one thing that's going to be a little bit important as we move forward. All right, so first off, there are a few things you can change here. If you don't want Docker to start up right as you log in, you can certainly turn that off. I'm going to go to File Sharing. You'll notice by default there are some folders listed here. Now, later in the course, we're going to learn about something called volumes, and we're going to be using specifically the Users folder.
We're going to put our code there, for instance on the desktop, and later we'll talk about why this is important and how we can hook a Docker container into some source code that's on our local Mac. Now, you can add and remove from here, but I just want to make you aware that there's already a Users folder added, a Volumes folder, a tmp, and a private, and I'll mainly just be using the Users folder moving forward. Now, if you're behind a proxy of some type and need to worry about that as you work with the Docker images that we'll be using later, you can go in here to deal with that. You can reset settings, go into the Docker Daemon itself and add some insecure registries, things like that. They even have an Advanced tab where you can control how many CPUs you want to dedicate to this and how much memory. So these are the defaults, and they usually work pretty well, but if you find yourself using a lot of containers and a lot of images, you might want to bump up your memory. So, that's the basics of getting started with it. Now, if we run off to a command prompt, we can try it out. So I can just type docker, and you'll notice I get some different help docs here. And then I can see if I have any images. I'll be covering these commands later, but we can say docker images, and I don't have any currently. So, at this point, we don't have a whole lot to go on. But later, as we move through the course and talk about images and containers and how to start them, you'll see how these different commands can be used. So that's how easy it is to get started installing Docker on a Mac, and now you'll have everything you need to move forward. Now, I'm going to be showing demonstrations on Mac and Windows throughout the rest of the course, and because I don't know if someone's going to be running Docker Toolbox, which is an older version, or Docker CE, you're going to see some commands like docker-machine, which is actually a tool that you won't need to use here. In fact, just to show you real quick, I can type docker-machine, and you'll see I get some commands. And I could even do ls; you'll see in the middle there, that's the command that lists machines. So we'll say docker-machine ls, and I don't have anything there. Now, that's because the Docker Community Edition is handling everything for me. If I were using Docker Toolbox on Mac, there would be some extra commands. I'm going to show those as we move along, but you'll be able to skip the docker-machine commands because Docker CE basically simplifies all that for us. So, now that that's installed, let's move on to the next clip, and if you happen to also work on Windows, I'm going to show you what we can do there to get Docker installed.
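
If you want a quick sanity check that the install worked, a couple of commands are enough (hello-world is a tiny public test image on Docker Hub):

    docker version          # shows both the client and the engine version
    docker run hello-world  # pulls the test image and runs it as a container

If the hello-world container prints its greeting, the client, the engine, and the image download path are all working.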

  9. Installing Docker on Windows Installing Docker on Windows is actually very straightforward, but what you install really depends on the version of Windows you have. Let's talk about that first before I show the actual installation process. So if you're on Windows 7 or 8, you're going to have to install something called Docker Toolbox. Now, this is a version that uses VirtualBox as a manager for the different virtual machines. This is a product that's open sourced from Oracle, and it actually runs very well, but it is a little bit bigger than the other option. Now, if you're running on Windows 10 Pro or Windows 10 Enterprise, then you would install Docker CE, Community Edition. That does not use VirtualBox, but it does require Hyper-V, and that's why you have to have Windows 10 Pro or higher. And that's why Docker CE won't actually run on Windows 7 or 8, because there's no Hyper-V there. So we'll be using Docker Toolbox in that environment, 7 and 8, but on Windows 10 Pro or higher, we can use Docker CE. Let's get started by learning how we can install Docker Toolbox if you are running on Windows 7 or 8. So the first thing you'll want to do is run off to store.docker.com, and if we scroll on down, we'll go to Docker Community Edition. So I click on Get, and we'll scroll on down until we see Windows. So there's Docker Community Edition for Windows. All right, now, right up at the top, you'll see the requirements. If you want to run the Community Edition, you're going to have to have Windows 10 Professional or Enterprise 64-bit. Now, if you're not on Windows 10, for instance at work, then you're going to have to get Docker Toolbox, and this is the one that uses VirtualBox. So I'm going to go ahead and select that. Now, Docker Toolbox can run on Mac or Windows because it's actually using VirtualBox under the covers to manage the virtual machines. So in order to run the Linux virtual machines, you have to have some type of VM engine, and in this case, it'll be using VirtualBox. So we'll go ahead and download this. All right, so now that the Docker Toolbox is downloaded, you go ahead and run the executable. And you're going to get the installation wizard. So we'll go ahead and leave the defaults here. We'll hit Next. Leave the default for the installation path, although you certainly can change that. And then we'll select the defaults here for what we want to install. Now, I'm going to go ahead and install, and I'm going to recommend you install, Git for Windows. This is going to provide some nice Bash-type commands that you can run, and I'll be using it throughout the course. So if you want to follow along at all, you'll want to install this as well. All right, so we're going to go ahead and leave all the defaults here. Create a desktop shortcut. We're going to add the Docker binaries to our path. We'll hit Next. And we'll go ahead and install. Now, this is going to install all the different tools, and I'll talk more about the tools as we move along in other sections of the course, but right now it's installing the key one, the Docker Client, and it's also going to be installing this VirtualBox I mentioned. And you'll be prompted to install some Oracle drivers here for VirtualBox. We'll go ahead and make sure we do install those; it's easiest if you just leave this box checked here. Okay, so there we have it. It's now completed, and if we just hit Finish there, it's going to pop up a few things.
We can get to a Docker Quickstart Terminal. We're going to be looking at that a little bit later. And then Kitematic is a GUI type of tool that'll actually allow you to work with Docker images and containers very visually. And so we're ready to go with Docker Toolbox. If you're on Windows 10 Pro or Enterprise, then you can install the Docker Community Edition. Now this uses Hyper-V, as mentioned earlier, and that's why we have to have that specific version of Windows. To do that, it's going to be very similar to the Mac type of installation I showed earlier. We can hit Get Docker. It's now downloaded the Docker installation wizard. So we'll go ahead and run it. All right, so after accepting the terms, we can hit Install. And we'll of course hit Yes so we can continue. All right, and we're all finished. So I'll click the Finish button here, and now what'll happen is if we go down into our toolbar area, we won't see anything quite yet, but I'm going to come in and type Docker. You'll see Docker for Windows. I'm going to go ahead and start that up. And just to show you, I'm going to go back to this area, and you should see the little whale starting. Now, this'll take just a moment to start, but once it does, we can get to the preferences and things like that that are related to the Docker client. Once it starts up, you'll see this type of display, and if we go back down, you can see that I have the whale. It's no longer animated; Docker is running. So I can right-click on this, and there are a couple things I want to show you in here. So, much like the Mac, we can get to the Enterprise Edition and read up on that. We can get to Settings, which we're going to come back to. Check for updates. Go to the store. Documentation, Kitematic, things like that, but let me go to Settings, and let's take a look at what we have here. I could sign in here if I wanted, but you don't have to do that right now. So first off, you can control again whether you'd like Docker to start right away. That's up to you. Also, we can go to Shared Drives. Now, this one's really, really important. Later in the course, we're going to be talking about something called volumes, and we're going to be placing some code on our desktop. By checking this box next to the C drive, and you can also expose other drives if you want, it's going to allow support for something called volumes. We'll be getting into that in much greater detail later in the course, but you want to make sure that is checked on your Windows installation. So we'll go ahead and hit Apply there. It's going to make you log back in. And now I'm ready to go, and my C drive is now available so I can share code and create these things called volumes. Now, you can also get into the proxy settings. So if you are at work and you have a proxy going and getting images is being blocked, you can open that up here. Now, the final thing is if we come on down, we can go to a command prompt here, and I can just type docker again. And you can see the different Docker commands, a lot of which we'll learn about later. You can also type something like docker images, which is a common command you'll be using, and I don't currently really have anything going, except it looks like a little one was added. And that's it. So that's how we can get the Docker Community Edition installed if you have Windows 10 Pro or higher, or Docker Toolbox if you have Windows 7 or 8.
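
As a purely hypothetical preview of why that shared C drive matters: once it's enabled, you can mount a local folder into a container (the folder path below is made up, and volumes get full coverage later in the course):

    # run from PowerShell; maps C:/Users/you/code on the host to /app in the container
    docker run --rm -v C:/Users/you/code:/app alpine ls /app

Without the Shared Drives checkbox checked, a mount like this would come up empty because the engine can't see the host folder.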

  10. Getting Started with Docker Kitematic Now that we have Docker installed, we can start working with the different command-line tools that it provides. Before we do that, though, I want to talk about an additional tool that's available called Docker Kitematic. Although we're not going to be using it throughout the course, it's a great way to visually see what your images and containers are, and you can even pull down images and start up containers right there on your machine very, very easily. So let's take a quick look at what it offers. Docker Kitematic is a GUI tool that makes it really easy to work with images and containers. And as I mentioned, we're going to be using the command line for everything throughout the rest of the course, but I like Kitematic simply because if you're new to Docker and really haven't played with images or containers, or at this point don't even really know what those are (we'll be providing more details as we move along), this'll provide a really easy way to get started. Now, it allows you to visually search something called Docker Hub for these images. An image can then be downloaded to your machine, and then you can create, run, and manage containers using this GUI. So just with a few clicks of a button, you can actually issue commands behind the scenes that we'll be learning about later, commands that you would normally have to run on the command line, but Kitematic kind of hides that from us. So, if somebody wanted to play around with images and containers and doesn't know the command line at all, or really doesn't use it, Kitematic would certainly be an option. But I think even if you are going to be using the command line, it's nice to know about and interesting to explore. So let's go ahead and take a look at Docker Kitematic in action.

  11. Docker Kitematic in Action Let's take a look at Docker Kitematic in action and see an example of how we can use it to work with images and containers on our machine. Now, if you've installed Docker Community Edition either on Mac or on Windows, you'll have the whale icon. If you right-click on that, you can get to Kitematic. Now, you'll have to install it. So in the case of Windows, we'd have to download and install it, and of course on Mac we'll see something similar. So if we go down to Kitematic, you'll see I also have to install it. So I'm going to go ahead and download that, and we'll come back to it in just a moment. Now, going over to the Windows 7 or 8 side, you'll have Docker Toolbox, as we discussed earlier, and you'll have Kitematic either as a shortcut on the desktop by default, or you can get to it right here. Right after it installs, you'll see this, and you can just double-click it to open it. Now, what I'm going to do is go back over to the Mac, and it would be the same if you were running Windows 10 Pro, and install Kitematic quickly. All right, so I'm going to go ahead and double-click the zip, and this is going to open up the Kitematic beta. You see it extracted; I'm ready to go. And really all you have to do is just drag and drop this into your Applications folder, and we're ready to go. So now it's actually going to show up as one of our applications. So we can come in and type Kitematic, and let's go ahead and launch it and try it out. So we'll hit Open, and we'll go from here. And this will take me to a screen where I can log in to something called Docker Hub. Now, Docker Hub is a cloud-based repository of images that other people have put up on the web so that we can leverage those. The images I'm going to show you are public, but it is possible to have private images as well that only you can get to. Now, normally, what you'll have to do initially, if you haven't signed up already, is click on Sign Up of course, and then you'd have to register, a very quick process. But I already have an account, so I'm going to go ahead and just log in with this. So we'll go ahead and log in, and this will take me to the search screen where I can now download different images that people have made, run those as containers on my local machine in my development environment, and then work with these very easily without really knowing anything at all about the command line or VirtualBox or any of these things that we need for the development environment. So, you'll notice on the left, I have a list for containers. I currently don't have any, but if we had any containers that maybe are already running, or that are installed but not running, we could control those here. You'll see that change in just a moment. And then over here on the right, we have the images from Docker Hub that we can download to our local development machine. Now, if I were to click on one of these dot, dot, dot type things here, the ellipsis, you'll notice that I can view it on Docker Hub. So let me show you what this looks like really quickly. This takes me up to hub.docker.com, where I can also sign in, and it shows me the commands I could run, including this command called pull, which we'll talk about later. There are some comments about it, how it can be used, and things like that. So we can come up to search. Let's say we wanted to find a WordPress Docker image.
Well, here's the official WordPress image, and there's some information about the versions, other things that support it, what's not supported, all that good stuff. So there's a lot of info, and then here's the command we can run to actually get this. But with Kitematic, I can get this and other images on my local machine very, very quickly and easily. So, I'm going to start off by just showing a really simple hello world example. I'm going to come up to the search, and we're going to type hello-world. And we also find this nginx again; this is a reverse proxy type of server, and we'll actually be using it later in the course. But I'm going to go to the really, really basic hello-world image. So we could view this on Docker Hub. We can get a little bit of information about what it is. Here's the example output of what we should see when we run it. It's a very, very simple type of image. And there's a command again that you can run if you want to use the command line. For us, though, since we're in Kitematic right now, I can simply click on Create. That's now going to download the image, and then it's going to allow me to run it. I can start that container now. So if you remember, we have images, which serve as the foundation for containers. So I'm going to go ahead and hit Start, and you'll notice that we now get some nice output, because this container started up, spit out this output to the log, and then it shut down. You'll notice it stopped up here. So, we could get more information by clicking here. We can get to any settings; this one doesn't really have much. We could delete the container this way, or I can come over here and remove it as well. But this allows me to interact with my containers in a very, very simple way without knowing anything at all about the command line. Now, throughout the rest of the course, we're really not going to be using Kitematic that much, but I like to show it because, number one, it's a quick and easy way to see if your Docker Toolbox is up and running and everything's working as we would expect, but it's also a very quick and easy way to get started if you don't want to use the command line, because you don't have to. You can actually do it right here. Now, since I'm done with this, I can click here, and it'll ask, do you want to stop and remove it? It's already stopped, so I'll go ahead and say yeah, let's remove it, and we'll be back to our search. Now, if I wanted to get the nginx one going, I could, of course, click on Create. This'll give me some information once it downloads. See the download coming down here. It just now created the container, and we're off and running. And now we have an actual container that's running. You'll notice this one is not quite as simple as the hello-world, which just spit out some log information and then stopped; this container stays up and running, and you'll notice there's even a preview over here to the right of the actual container itself, and it gives you some information about what you can do to interact with it if you want. There are little website files you can even interact with. But this actually shows us a little preview. If I click on this, I can actually view this live container in my browser. It doesn't really show you a whole lot, but it does show that this is actually working. There's our IP address, and it came up with the port that it used to run this. Now, if I'm done with it, I can hit Stop.
That'll now stop the container, and then once that container's stopped, I can just click here and say Remove. And now we're back to where we were. That's how easy it is to locate an image, download it onto your machine, and then run that image. And then again, if you want more details, you can always go into this and go to Docker Hub, or you can view the documentation once the image has already been installed. Now, there's even more you can do with this. This'll let you get into all of the images up there, but since I'm logged in, if I had anything in my own repository up on Docker Hub (you can have a private repository), then I can get to those images as well. So that's an example of what Kitematic will allow us to do. Now, on the Windows side, it works the same exact way. Here's an example of Kitematic running on Windows. I've already logged in. I even found the hello-world image, and we can now come down to that really simple one. We can create it, and that's going to run it locally after a download. You'll notice it's downloading right now. It stopped because it just prints its output, and then we're done, and likewise, I can now remove it. A little bit different looking, but the same general premise. And now that's gone, and the same thing goes for the nginx one. So whether you're on a Mac or you're on Windows, you're going to get the same type of behavior with the same interface. So it's very easy to get started with Kitematic, and that's why I like to start with it initially, especially for those that are new to images and containers. It just takes all the complexity out and lets you interact with images and containers visually. Now coming up, we're going to switch gears to some of the Toolbox command-line tools that we can run, but I hope you now have a good idea about what Kitematic can do.
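
For comparison, the command-line equivalents of those Kitematic clicks look roughly like this (the port mapping is just an example, and the container ID comes from the docker ps output):

    docker pull hello-world         # what Create does behind the scenes
    docker run hello-world          # start it; it prints its message and exits

    docker run -d -p 80:80 nginx    # run nginx detached, published on port 80
    docker ps                       # see the running container and its ID
    docker stop <container-id>      # stop it, like Kitematic's Stop button
    docker rm <container-id>        # remove it, like Kitematic's Remove

We'll cover each of these commands properly in the upcoming clips.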

  12. Summary In this module, we've learned how we can get Docker up and running on a Mac or on a Windows machine. If you're on Windows 7 or 8, you're going to need to run Docker Toolbox, because that'll use VirtualBox as the VM host. You can install Docker Toolbox on Mac as well, but if you're on Mac or Windows 10 Pro or higher, you're going to want to install Docker CE. It simplifies quite a few things: fewer commands to know, and it's very easy to get to and interact with. We also looked at Docker Kitematic, which provides a visual way to work with images and containers, and although I'm not going to be using that moving forward in the course, it is a nice way to play with different images, see what's out there, and then get those containers up and running. So that wraps up our discussion of getting the tools installed, and now it's time to talk about some of the different Docker tools and features that are available for us to use as developers.

  13. Using Docker Tools Introduction In this module we're going to take a look at specific tools in the Docker Toolbox, including Docker Machine and Docker Client, and we're going to see how we can run these across different operating systems. Now we're going to start off by talking about Docker Machine and how it can be used to interact with VirtualBox on Windows or Mac, and interact with the actual machine that's running Linux in this case. From there we're going to take a look at Docker Machine in action on Mac, and then on Windows. Now I'm going to show both operating systems, in fact I'm going to show the commands twice, once on Mac, and then again on Windows, and there's a little bit of a method to the madness there. Number one, you're going to be able to see the commands on your chosen operating system, but number two, if you do want to watch both, the one for Mac and then the one for Windows, it's going to help reinforce the commands and hopefully give you a better grasp of what you can do with them and how they're used. Once we cover Docker Machine we're going to take a look at the Docker Client tool, introduce that, and then we'll do the same thing: I'll show several of the key commands on a Mac, and then we'll also show them on Windows. And then we'll do a wrap-up and review the Docker commands again so that you can remember them even better and cement your knowledge a little bit. Now as a quick review, Docker Toolbox has several different tools in it, and we're going to cover just a few of those here. So we're going to cover Docker Client as mentioned and Docker Machine, but there are some other tools that we'll cover later in the course; Docker Compose is one of those, and we've already taken a look at Docker Kitematic. And then we're going to see that although VirtualBox plays a behind-the-scenes role on Mac and Windows, Docker Machine is going to help us interact with it and actually work with VirtualBox machines. So let's get started by talking about Docker Machine, how it can be used, and some of the different commands that can be run in your development environment.

  14. Getting Started with Docker Machine Let's take a look at another tool in the Docker Toolbox called Docker Machine. Now Docker Machine can be used to create and manage the local machines that you're going to be working with, for instance on your development environment machine. It can also be used to create and manage different cloud machines, such as ones on AWS or Azure or other cloud-based providers, but we're going to mainly be using it to manage our local machines. Now as mentioned, if you're on Mac or Windows you are going to have VirtualBox, because out of the box Docker containers are going to run on either Linux or a Windows server. We're going to mainly leverage the Linux features here, and so for us to interact with that we need a way to host it, and that of course is what VirtualBox does, and so Docker Machine will let us create, start, and stop different virtual machines. It can also configure the environment so that when you pull up a command-line Bash-type shell in Windows or on Mac, you can use the Docker commands to manage your images, start and stop your containers, and perform those types of operations. Now there are a few commands that we need to know to get started, and I'm just going to show you a quick list of a few of the key ones. These are not things you need to memorize, because I'm going to be using them throughout the course, but they are good to know. So one of the commands is called docker-machine ls. Now docker-machine is the actual command-line tool, and ls is the command we're going to run. This would list all of the different machines that we can issue Docker commands against. Now what do I mean by machine? Well, out of the box you're going to see in a moment that when you install Docker you're going to get one VirtualBox machine set up, called default. You can certainly set up others, but when you first get started one is good enough, and so we'll have default, and we can use docker-machine to list that and any others that you might create; that's what the ls command does. Now we can also start and stop our virtual machines. So docker-machine start, and then the machine name, whatever it is; as mentioned, default is the default name of the machine. We can use the start command, or we can use the stop command, and that's how you can easily start and stop one of the VirtualBox machines on Mac or Windows if you'd like. We can also configure the environment for our machine. This is really important, and I'll be showing it in just a moment, but when you first pull up a command-line terminal window, you're going to want to issue some Docker commands to manage your images, your containers, and things like that, and you first need to make sure that Docker knows what machine it's going to be interacting with during that terminal session. So you'll see in a moment I'm going to use the docker-machine env command to do that. Now we can also get the IP address of a given machine, and that's useful as we start to test our running containers; for instance we might pull up a browser and want to call into the machine and hit a specific container, and we'll be demonstrating that as we move along throughout the course. So these are some of the key commands that you can use with docker-machine, gathered together below. There are certainly others, but these are the ones you need to know to get started. Let's take a look at an example of some of these commands in action.
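
Gathered in one place, the key commands just described look like this (default is the machine name that Docker Toolbox creates for you):

    docker-machine ls              # list the machines Docker can talk to
    docker-machine start default   # start the VirtualBox machine named "default"
    docker-machine stop default    # stop it when you're done for the day
    docker-machine env default     # print the variables that wire a shell to it
    docker-machine ip default      # show that machine's IP address

You'll see each of these run for real in the next clip.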

  15. Docker Machine in Action (Mac) Once we have Docker Toolbox installed, we can get directly to the Kitematic tool that I showed earlier, but we can also get to the Docker Quickstart Terminal, and this is a terminal window, a command-line window that you can use to interact with Docker tools such as Docker Machine. Now what I normally do is drag and drop this down to my dock, and you'll see I already have it here, so I'm going to close this one, we'll come on down and open it up, and the first time this fires up, if your virtual machine is not running, the VirtualBox machine that Docker Toolbox installs, then it's going to go ahead and start a machine and it's going to give it a name of default, you'll see that right here, and it's going to take a moment to start this machine up. Now from here it's going to copy over some certificates and do some other configuration, and once you get to the nice little whale image, you'll know that Docker is now configured, as it says, to use this machine called default, and it even gives you the IP address of the machine that VirtualBox is actually hosting. So what is default? Well default is our VirtualBox Linux machine that we're actually going to be issuing Docker commands against, but before we do that, the first thing you want to do is make sure that your terminal window is linked up to the proper machine. Now ours obviously is, because it says my default machine is all wired up here, and once that's done we can start working with these machines and using them. So one of the key things you'll want to know about is things like the IP address, and you can see it here, but as you start issuing commands or maybe you type clear and clear it all out, you might forget what that is. So there's a couple ways we can get the IP address of the running machine that is hosting our Linux server in this case, and one of those is we can say docker-machine ls. And the ls command will let us know all the machines that are running on this particular box, my development machine, and it looks like I have one called default, it's running through virtualbox, it's up and Running, and there's the IP you can see. If I just wanted to get the IP address though, I could say docker-machine ip for the name of the machine. Now you would have to know the name; by default it's called default, but you can create other machines as well. Now we're not going to do that, we'll just use the default machine in this course, but it is possible to create others. So I'm going to hit Enter, and you'll notice I can get the IP address. Now likewise if I just want to get the status, you'll see the State, which is Running, and if I want to get it for a particular machine we could say docker-machine status for default, and it's up and running. Now from here I can also start and stop machines. Now this one is obviously already started, but we could say docker-machine stop default, and this will take a moment to run, but this will actually shut down the running virtual machine, and typically if I'm not using Docker on a particular day I will shut it down, because that'll free up some memory on your machine if you happen to need it, and you'll see it was pretty quick to do this, but a lot of the time I'll just leave it up because I'm jumping in and out of Docker throughout the day when I'm on a particular development project. Now we can also say docker-machine start default, and this will now start the machine back up.
Now once again when you run the Quickstart Terminal, if it's not already started up, it'll start it up for us. So you kind of don't have to use the start as much, but every now and then you might shut it down yourself and then want to manually restart it. Now while this is running, I'm going to go ahead and open up just a regular command terminal window here. So I already have my running Docker Machine one, but I'm going to do just a New Window. Now because I didn't use the Quickstart Terminal, it didn't run any of the shell scripts that kick us into the world of Docker, I'm just in normal terminal mode, in fact, let me just make this a little bigger so we don't get confused by the one in the background, and now let's go ahead and try to do something like docker-machine ip of default, and you'll notice I can get to that, but if I start to run commands, and we'll learn about some of these Docker Client commands a little later in this module, but I'm going to do one called docker ps, and you'll notice I get an error, Cannot connect to the Docker daemon. Is the docker daemon running on this host? And you might, the first time you see this, do what I did and go well wait a sec, I know the virtual machine is running, so what's the problem here? Well, if you go through the Quickstart Terminal, you probably won't have to do this, but you might want to either A, configure a different terminal to use the default machine so that you can issue Docker commands against it, such as this ps, which would list all of our containers by the way, but we'll learn about that coming up, or B, hook this terminal up to a different machine other than default. Either way, what we can do is wire up this terminal to the machine that we want to issue Docker commands against. And the way we can do that is through another command called docker-machine env, and if I just do this we'll get an error, because we have to tell it the machine name, so we'll say default. And what this will do is add some variables, you'll see them here, into our environment variables, and then it's going to say run this command to configure your shell. Now when we ran the Quickstart Terminal, it was already doing this behind the scenes to hook us up to default, because that's the one you get out of the box, but if I wanted to hook up to a different machine, then I could have said docker-machine env, whatever that other machine is, mymachine maybe, and then what you have to do is run this eval command. So you literally just copy this, paste it down, and then run that, and now when I run docker, show me all the containers, which is the ps command, you'll notice that at least it works. Now I don't have any containers, we'll be doing that shortly. So that's a really, really nice tip that I know I struggled with initially when I got into Docker, because I didn't realize that, if you don't use the Docker Quickstart Terminal anyway, you have to hook up the terminal window to the actual machine that you want to issue the commands against. That's a quick look at the docker-machine env command as well. So now we've seen several of the commands, you can actually list all of them by saying docker-machine and just hitting Enter, and this will list all of the different commands that we have available, and you'll see there's quite a list here. A lot of stuff you can do, but we've now gone over the key ones, the env command, the ip, the status, the ls, the start and the stop. You can even restart a machine, very similar, just docker-machine restart default.
And there's even ways you can create new machines. If you wanted to have different versions of Linux or something like that running, then potentially you could create a different machine if you'd like. So that's a look at some of the key Docker commands that you can run that are specific to Docker Machine, and again, Docker Machine is part of the Docker Toolbox that we've already installed and now we have that up and running, and we can now interact with that machine, and that's what it looks like from a Mac standpoint.
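To condense that Mac demo down to the essentials, the terminal hookup sequence looks like this; default is the machine name Docker Toolbox creates for you, so substitute your own machine name if you created one:

    docker-machine env default            # prints DOCKER_HOST, DOCKER_CERT_PATH, and friends
    eval "$(docker-machine env default)"  # wires this terminal session to the default machine
    docker ps                             # Docker Client commands now reach the daemon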

  16. Docker Machine in Action (Windows) Whether you're working on a Mac or on Windows, you can run the same exact Docker Machine commands. In fact, you'll run a Bash shell even if you're on Windows, as you'll see in a moment. Now we've already installed the Docker Toolbox, so it added some icons on my desktop area, and what I'd like to do is drag the Docker Quickstart Terminal down to my toolbar, so I'm going to go ahead and open that up, and you can see that it automatically fires up the virtual machine called default. This is running as a Linux virtual machine in VirtualBox, and gives me the IP address. Now one of the things that's a little bit different though on the Windows side is that you'll note that I'm not in a normal DOS mode here, I'm in a type of Bash shell, and so I can run commands, for instance like ls, which would be very similar to the dir that you're used to in Windows, and that'll list where I'm at, all the files and folders and things. I can type clear, kind of like cls in Windows, and notice that clears off the screen, and there's a lot of other things we could do that are related to more of a Bash environment. So what this does is install this Bash environment for Windows, and that's a good thing actually because the same commands that we can run on Mac and Linux you can run here on Windows. So let's get started by jumping into the docker-machine command itself. Earlier when I pulled this up you saw that we had a Docker Machine called default, and again it listed the IP. And so I can list all the machines on my Windows environment by saying docker-machine ls, and this will now list that I have a machine called default, it's running on virtualbox, the status is it's up and Running, and there's the IP address you can see for that machine. Now if I wanted to get to the IP on Windows for that particular machine, we could say docker-machine ip for the name of the machine. So you simply take that name there, hit Enter, and there we go, and I could also get the status by doing docker-machine status for default, and you can see it's up and running. So ls will get you all the machines, but if you just want to get a particular property of that machine, such as the IP or the status, then you can run those commands as well. Now, when we ran the Quickstart Terminal, this actually ran some behind-the-scenes scripts that made it possible to run other Docker commands against this machine called default, and one of the commands we're going to learn about later is docker ps. Now if you watched all of this for the Mac side of it, this is going to look exactly the same, and that's kind of the point of it, but if you are just jumping right to this Windows information, docker ps will list the running containers that we have, and we'll talk more about this coming up later in the module, but you'll notice that it works. And the reason this works is because when I did the Quickstart Terminal, it already made sure that my default virtual machine was running, if it wasn't, it starts it up, and it hooks this terminal window here to that running machine, so that when I issue commands, many of which we'll learn about a little bit later here in this module, such as docker ps, those commands work. If we weren't linked to the machine, then we'd have some problems and we couldn't run the commands. Now one way you can switch machines or link it up yourself is you can run docker-machine env for the name of the machine that you want to hook to.
And what this will do is set up some environment variables for us, you'll see those there, and then it tells me, go ahead and run this eval statement. Now you'll notice there's a space here, and so unfortunately if you run it as they sort of show you, let me just copy that down, we'll paste that, we're going to get an error because of the space, it didn't know what to do. Now we could just kind of abbreviate this and run docker-machine directly because it is a known command, you've already seen that up here, but if we wanted to leave what they give us and just fix it up, I'm going to hit Home to go to the beginning, and I'm just going to add a single apostrophe and we're going to wrap this path, let me hit End and we'll close it off here. Now it'll know that we have some spaces in our path in this case, in our folders, and now it's going to work appropriately. Now we were already hooked into default, so that was a very redundant thing to do, but it's good to know, because if you ever for whatever reason want to switch to a different machine, you'll have to set the Docker environment to something other than default, maybe mymachine if that's what it was called. Alright, so the last thing is if you want to see all the commands Docker Machine has, you can say docker-machine, and this'll list everything you can do. So you'll see there's quite a few commands here, we can go in, and in fact I'm going to show a start and a stop to wrap up, but we can start a machine, we can stop a machine, I've shown you the status, the ls, and the ip, but there's a lot of other things you can do. You can actually create new machines, we're not going to do that because the default machine has everything we need for now, but there's a lot of information. You can inspect and kill a machine and do all kinds of fun stuff. We're focusing right now on the key commands that you need to know to get started here. The last thing I want to show you is that you can also come in and say docker-machine start or stop, obviously default is already started. We've seen the ls and we can see it's running, but I could also come in and say docker-machine stop default, and now that's going to stop that virtual machine, and then likewise I could say docker-machine start default, and there's even a restart. So let's go in, and now that one is stopped, which was pretty quick you can see, we can start default, and this'll take just a little bit more time to start it back up, but then we'll have a running virtual machine again and we'll be ready to go. Now when you start a machine, if for some reason you get an error when you issue Docker commands, then you'll need to run that docker-machine env command that I showed, and then do that eval copy and paste thing. So that's an example of how you can run docker-machine commands on a Windows environment, and again, if you happened to watch the earlier one on how to do this on a Mac, you should now see that the commands are the same, which is really, really nice. It doesn't matter what you're on, Linux, Windows, Mac, you're going to be running the same exact commands, and so once you learn these core commands that I'm going to focus on here, you're pretty much good to go regardless of what operating system you're going to be running against.
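In case it helps to see that Windows fix in one place, here's a sketch of it. The exact install path varies by machine, so treat the path below as a placeholder; the point is just that a path containing a space has to be quoted before the eval will work:

    # As printed, the space in the path breaks the command:
    #   eval "$(C:\Program Files\Docker Toolbox\docker-machine.exe env default)"
    # Wrapping the path in single quotes fixes it:
    eval "$('C:\Program Files\Docker Toolbox\docker-machine.exe' env default)"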

  17. Getting Started with Docker Client Once you have your Docker Machine set up using the Docker Machine tool, you can then start working with Docker images and containers through the command line using the Docker Client. Now the Docker Client really is a tool that interacts with something called the Docker daemon, and we talked about this a little bit earlier, but you can think of the Docker daemon as the engine that ultimately controls access to our running containers. Well the Docker Client will let us work with images, and then convert those images into running containers, and so we're going to look at some of the different commands that you can use with this particular tool. As mentioned, this tool, Docker Client, is going to allow us to interact with the Docker Engine, the Docker daemon that's running behind the scenes. Through using this tool, we can build and manage images, and we can then take those images and run and manage containers. And all of this will be running on Linux for these particular demos, and we'll be using VirtualBox again either on Mac or on Windows. So let's look at some of the commands that you could use as you start to work with the Docker Client. Now some of the key commands are going to be shown here, and this is a small subset of the commands, there are quite a few that you can run. One of the big ones you'll use though is called docker pull. You might find a Node.js or an ASP.NET or a PHP image, or whatever it may be, up on Docker Hub, and you want to pull that from Docker Hub down to your development environment. Well we can use the docker pull command to do that. Once we have an image, we can run it. We can use docker run to do that. We simply say docker run, and then give it the name of the image that we want to run. We can also list all of our images by simply running docker images, and then when it comes to containers, we can run docker ps, and this will list the running containers that we have, and you'll see shortly that adding -a will list all of them. Now once you have the containers and the images and everything available, we can then start containers, we can stop containers, and do all kinds of things that we're going to look at throughout the course, so before we go too far, let's take a look at using the Docker Client with these commands and see how they can interact with our different images and containers that we may have, or that we might want to grab from Docker Hub.
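Put together, the core Docker Client commands we just walked through look like this, using the hello-world image from Docker Hub as the example:

    docker pull hello-world    # download the image from Docker Hub
    docker images              # list the images on this machine
    docker run hello-world     # create and run a container from the image
    docker ps                  # list running containers
    docker ps -a               # list all containers, including stopped ones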

  18. Docker Client in Action (Mac) Let's take a look at some of the commands we can issue using the Docker Client tool that's available in the Docker Toolbox. So the first thing I'm going to do is just type the word docker, and if I just hit Enter or Return here, this is going to show all the different commands, and you'll notice there's quite a few commands. Now we're only going to focus on just the essentials that we need to get started, but those are the commands you can use, and there's a lot you can do with images and containers and a bunch of other stuff as well. Now from here, let's learn how we can pull an image off of Docker Hub. And we talked about Docker Hub a little bit earlier, and if you go to hub.docker.com you can get to it. Now you can certainly log in and create an account and all that, but if we just come up to the Search, I'm going to type hello-world, and this will pull up the hello-world image, and we'll click on it and you'll notice there's this Docker Client pull command that we can issue. So I can actually just copy that from here, get information if I'd like to read up on it, but we'll go on back and I can just paste this in. So let me clear this and we'll just paste docker pull, and the name of the image. Now this is going to pull down what's called the layered file system, this is the actual image itself, very small as you can see. And so now I have this image locally, but how do we know if it's really there, did it work? Well, for the second command we're going to look at for Docker Client, we can type docker again, and now I can say images. And what this will do is list all of the images that I have installed, and it looks like we now have one image, let me make this just a little bit bigger, you can see it's hello-world, we have the latest, it assigns a unique ID to it, and it looks like it was created 11 weeks ago and it's very small, 960 bytes. Now that's how easy it is to first off pull an image off of Docker Hub, and then actually see what images we have. Now I've mentioned a couple times, an image on its own is not ultimately that useful because we need to take that image and actually get a running container. And so what we can do from here is we can say docker run, and then we can give the name of the image, and we're going to do hello-world, and this will now run the hello-world image as a container, and you can see all of this output, and if you see this Hello from Docker, then you did good, it worked. And so we now have a hello-world container. So we have an image that's sitting there, and we've now taken that image and made an instance of it as a container, which ran and then actually stopped, so how do you know what containers you have available? So I can come in and type docker ps, but let's see what we get here. Notice nothing shows up, which I know the first time I saw this was a little bit confusing because I knew I had a container, because that's what the run command that we ran up here does. So what's going on? Well, docker ps, this command, only shows the running containers, so how do we see all containers? Well, we do -a, and that will tell the Docker Client ps command, I would like to list all containers, and there we go. So now we have the container ID, we have the image that was used for it, we have, well it's kind of wrapping here, the sort of friendly name it comes up with if you don't want to refer to this guy, we created it about a minute ago, and the status is Exited.
So this particular container right now is not up and running because this container just outputs the log data you see up here, and then it kind of stops. So that's a really important little add-on to the ps command, the command-line switch to do docker ps -a, because again, if you don't do that, you're not going to see it in this case. Now this isn't that useful of an image or a container, so how do we get rid of these now? That's great, we see it works, but I'll probably never, ever use it again. So, what I'm going to do is come in and say docker rm, and that's remove container, and then we have the container ID right here, and we also have a little alias, but I just normally type the first few characters. So I'm going to do 59f, I'm going to hit Enter, and now when I do docker ps -a, you'll notice I don't have any containers left, so we deleted it. Now what about the images? So let me do a clear here. Well, let's do docker images again. Okay, it's still there, we deleted the container, but we didn't get rid of the image, so it's really similar to what we just did, it's docker rmi, and then again, we can take the image ID and just do the first few characters. So in this one since there's only one, I'm just going to do 0a, and there we go, it just deleted the layered file system for that particular image. So that's an example of how easy it is to use the Docker Client command-line tools to get started with pulling an image, viewing images, and then taking those images and converting them into running containers. So now that we've seen the basic commands for Docker Client, let's pull down the NGINX image that I showed if you watched the Kitematic demo earlier in the course, and see how we can get that actually running as well. So if we go back up to Docker Hub and go to hello-world, and I'll just kind of re-search on this, we'll hit Enter there, and you'll notice that there are quite a few things we could use. There's a tutum/hello-world, there's also a Kitematic one down here, it looks like I'm not finding it immediately, so we could actually search for Kitematic, and there it is right there, so let's click on that. And just to save a little bit of typing, I'm going to go ahead and grab this pull command and just paste this again into my Docker Client terminal that we have here. So we'll paste that in. And now this image will have a little bit more to it, it's bigger than the last one, so it'll take a moment to download, but it's pretty quick. And there we go, we have it. So there's docker pull again. Now we can do docker images, there it is, so there's our Kitematic, it's the latest, there's the unique ID it gives it, and it's about six months old it looks like. So now we can actually start this image up and get it running, and to do that we can again do docker run and then give the actual name of the image that we have over here, so I'm just going to copy that and paste it in. Now this particular image though has a port that we need to set, and so you can kind of think of it this way. The image is going to become a running container. Now ultimately what we have is a machine that hosts the container. Well the machine has a port we're going to hit, because you'll notice I already have an IP address up here typed for the machine, but we can also set the port that we're going to call on that machine. Now when that gets called, it's then going to call into the appropriate container, in this case the NGINX container, which is a reverse proxy type of tool.
We can also set a port inside of the NGINX container. Now normally NGINX is kind of a front-end server that'll serve up static files and forward more complex requests to back-end servers, ASP.NET, Node.js, and others, and so normally it's on port 80 if it's a public-facing website. So what I'm going to do is come into here and we're going to use a command-line switch on run, and I'm going to say that I would like to run this image, but I want to run it on port 80 for the machine, and that's going to forward internally to port 80 in the container itself. Now this is a really important one because we need to set the port we're going to call on our actual machine, and then the port it's going to call on this container that gets created based on this image. So let me go ahead and just hit Return here, or Enter, and you'll notice this now started up my NGINX container in this case, it converted from the image into the container. So I'm going to come up here, and now if I hit the IP address for my machine, which is this 192.168.99.100, we should see an NGINX output here, and there we go. Now this looks very, very similar to what we saw earlier if you watched the Kitematic demo, because it's the same exact image, it's just that we're now using the terminal here to actually work with this particular image and container. What I'm going to do from here is actually just start up a new kind of Bash tab here, and I'm going to run the docker ps command that I showed you earlier, and you'll notice that right off the bat, because I didn't click on a Quickstart Terminal, I get this Cannot connect to the Docker daemon. Now, one way around this is I can close it and just open up a different Quickstart Terminal, but let's go ahead and use what we learned earlier. I'm going to do docker-machine env, default is my machine, and then I'm going to run this eval command. And these are the types of things you kind of need to know as you work with Docker. So now let me try docker ps. Alright, now it works, and you'll notice this command terminal is now tied to my default machine. And again, I didn't have to do that, I could have just clicked on the Docker Quickstart Terminal here. But, that's a nice little thing. So, here we go. We have the container ID, here's the image it's based on, there's a little start.sh script that ran, we started it about a minute ago, and it's wrapping a little bit there, but you'll notice the status is up for about a minute or so. Now we can come in here and say docker stop, and then we can actually type just a few of the digits here, so I can just say docker stop 109, and now this is going to go in and try to stop that particular running container that we have going over in this tab right here. And so we'll let this run just for a sec, okay, there we go. Now let's do docker ps, and if it's stopped it shouldn't show it. Alright, it's empty. So now we'll do docker ps -a, and there it is, but you'll notice that the status is Exited. So that's how easy it is to pull that down, and now you know how we can get rid of this as well. Once it's stopped we can say docker rm, and then give it that same container ID, so we'll just do 109 since that's a quick and easy way. Okay, so now docker ps -a, it's gone. Let's go to docker images, and there's the image. Notice again the image ID here, so we can say docker rmi 385, and that deletes all the parts of the layered file system, and there we go.
So now our container is gone and our images are gone, and from a development standpoint this is pretty awesome because now my machine is completely clean of this NGINX server. I don't have to worry about other files sticking around because I got rid of the actual image and the container. Now compare that to manually installing different servers and databases and things like that, and I think you'll find that that's a pretty compelling thing we can do in the development environment, because I don't know about you, I tend to like to keep my machine pretty clean. So that's an example of some of these different Docker Client commands that we can actually run in a Mac environment.
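If you want to replay that NGINX demo yourself, the setup side of it boils down to just a few commands; note that the container and image IDs you saw in the demo, like 109 and 385, will be different on your machine:

    docker pull kitematic/hello-world-nginx            # grab the image from Docker Hub
    docker run -p 80:80 kitematic/hello-world-nginx    # machine port 80 forwards to container port 80
    docker-machine ip default                          # browse to this IP to see the NGINX page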

  19. Docker Client in Action (Windows) To get started using Docker Client on Windows, we can first click the Quickstart Terminal and get our terminal window going. So I've already done that, and you can see I'm linked up to my default machine on the IP address that you can see here. Now the first thing I'm going to do is show you some of the Docker Client commands that we can run on the Windows side, and they're the same as in other places like on Mac, but if we just type docker and hit Enter, this will show us the list of commands we can run, and you can see there's quite a few. Now we're going to focus on the key commands that you need to know to get started with Docker Client here. So we're going to be looking at some commands that you'll see down here like pull, we're going to look at rm, rmi, run, and stop, and a few others along the way here. So the first thing I'm going to do is run off to hub.docker.com, and I've already typed in this hello-world image that I'd like to find that's up there in the cloud on Docker Hub, and so let's go ahead and find this, and you'll see the official hello-world image, and this is a very basic image you can use to get started. So if we scroll on down you'll see a description, some information about it, you'll see some example output of what we would expect if we run it as a container, and then over here to the right you'll notice that I can run this command that's a Docker Client command called pull. Really simple to run, you simply say docker pull, and the name of the image. So I'm going to copy that, run on back here, and let's paste that into our terminal window, hit Return or Enter, and this is going to pull down a layered file system, you'll see the pull is now complete, very fast because it's a very small image, and now we're kind of ready to go. So it pulled that image down to our local machine. Now how do I know that it actually worked? Well, we can come in and say docker images, and this will list all the images that we have on the machine, and it looks right now like I have this hello-world, it's the latest, here's a unique image ID it assigns per image, created about 12 weeks ago, and it's really small, 960 bytes it looks like. Now from here we have an image, but images on their own aren't really that useful, they're like having a blueprint, but never creating a building. We want to create the building, we want to create the container that can do something. So now I can use the Docker Client command called run and I can say docker run, the name of the image that you'll see here, hello-world, hit Enter there, and there we go, this is the actual container running, and you can see Hello from Docker. This message shows that things are apparently working, so we've done pretty good so far, we have a really, really simple image running, and there's some other info you can check out there if you'd like. That's not super impressive obviously, but we do have a container. Now is that container still running, or what happened there? So we can actually see all the running containers by doing docker ps. And so I'm going to hit Enter there and you'll notice it's empty, which is a little bit weird because I do have a container, obviously it ran, but it must not be running. So if we want to list all the containers on the system, we can say docker ps -a, and that'll show all of them, so we'll hit Return there, and there we go.
So this is wrapping a little bit, so I'll make it a tad bit bigger, but you'll notice that we have a container ID, that's assigned per container, it's based on the hello-world image, there's a command that runs internally, just hello, we created it about a minute ago, and it exited about 55 seconds ago. Now it also gives it a little more friendly alias if you will, and this particular alias is something you can use instead of the alphanumeric characters you can see over here for the container ID. Alright, so we've now run the container, we can see the container, but it exited, so this is a different kind of container, this isn't one where you run the container and it stays up and running like a server, it just runs and then shuts down, so it's a very simple hello-world type of example. Alright, so let's get rid of this container then. We know it works, but we really don't need it anymore, and we'll probably never ever use it again. So we're going to do another Docker Client command called rm, and this removes containers. Now I'm going to go ahead and use the container ID, but I really don't want to type all of this. I know when I first started using Docker, I didn't realize that you don't have to type the entire container ID, so I went in and typed the whole thing, but you don't need to. In this case we only have one, so I could get away with 24, I could even get away with 2 if I wanted, but let's go ahead and do that, and you'll see it echoed back out the container it removed. Now let's make sure it worked. We'll do docker ps -a again, and everything is gone you'll notice. Okay, so the container is gone now. Now what about the image? Well, the image is still there, and I probably don't need that on my system, so let's clean that off. Now we can remove it by doing docker rmi, remove image, and then just like we did with the container ID, there's an image ID here. We only have one, so it's pretty simple, I'll just do 0a, and now it just deleted that layered file system. Now if we go back and do docker images, you should see that it's completely gone. So now we've downloaded the image, or pulled the image, we've run it, the container immediately stopped, we removed the container with the rm command, and now we just removed the image, so there's really no trace of this on our system, and that's a great feature that we're going to talk more about in a moment with Docker in the development environment. So that's an example of how to get started with those commands. Now let's take a look at how we can pull a more robust image from Docker Hub and get that up and running on our machine. So if we go back over to the Docker Hub site, I can come in and search for the hello-world image again, but this time the NGINX version, and if you saw the Kitematic demo earlier in the course, I'm going to do the same thing, but we're actually going to do it using the Docker Client tool. So I'm going to come in and we'll just search for Kitematic, and we can just do hello-world here, it should pull it up, and there it is. So we can view some information about it, there's not a whole lot on this one, but it's a simple NGINX reverse proxy container, and you'll notice over here again, just like with the hello-world image, I can also pull the kitematic/hello-world-nginx image. So let me make sure I grab that whole command, and now I'm just going to come on back and paste this in. So we'll paste in the docker pull command, and this one will have a little bit more to it.
So this is going to pull down again the layered file system, you'll see this start to fill in, and it's still pretty fast. Alright, so we're ready to go. So I'm going to do docker images, and there we go, we have the kitematic/hello-world-nginx, latest, there's the image ID again, and we can see the age and how big it is. So this one's a little bit bigger, it looks like about 8 MB or so. So the next thing: we have the image, and just like with the hello-world image, I want to go ahead and run it. So we would do the same thing. We would say docker run, and then we would put the image name. Now because this is an actual server, it doesn't just write simple log output, there's a little bit more that we need to supply here. Now we have a Docker Machine, and in fact that machine IP is shown right up here because I'm going to use it in just a moment, we saw that when I started the Quickstart Terminal, and that machine needs to be told what port we want to call in on, and then we have to tell the machine, okay, once you get a request on that port, and we're going to do port 80, how does it call into the container and what port does the container actually have? And I like to think of it as a bubble around the container, and on the outside of the bubble is the machine port, and then it's going to call a port that's the actual NGINX port that's in the bubble, or in the container in other words. So let me show how this works, it'll make a little more sense. So I'm going to say -p, and then we're going to do 80, and that's going to be the port for the machine, colon, and then that's going to say I want to forward from port 80 on the machine to port 80 in the container itself. Now if we wanted to do maybe 5000 on the actual machine, but 80 on NGINX, we could do that, but NGINX is typically used as a front-end type of reverse proxy server, it can serve up static files and then forward requests to more complex back-end servers like ASP.NET and Node.js and PHP and things like that. So we're going to do 80 to 80, and this will forward it. And we're going to say docker run on this port on the machine and on the container, and then we have to put the name. So it's going to be kitematic/hello-world-nginx. Alright, so now that we have that in place we can go ahead and start up this NGINX server, so we'll go ahead and run that, and there we go. It looks like that container is now up and running. So what I'm going to do is leave that up, and I'm going to right-click here on our Quickstart Terminal, and I'm going to start a new terminal, and that'll link us up to the default machine again, and there's the IP address, and now let's see what we have as far as containers. So we're going to come in and run the docker ps command, and it looks like we do indeed have a container, and you can see that it is up for about 23 seconds, up and running, there's the image it's based on, there's the container ID, and then here's the port forwarding I was talking about. So the IP here is just kind of generic, but this will be the machine IP, :80, and that forwards to port 80 in the actual container itself. Now that container is up and running as you can see here, and it looks like when we started it up, we have this start.sh that the NGINX image actually had in it, and that actually started up the NGINX server.
So I'm going to run over to this tab now, and we're going to try to refresh here, we're on port 80, so obviously I don't have to put :80, we could, but I'm just going to hit Enter, and there we go. So we now have on our development machine an NGINX container up and running. Very cool, because I didn't officially install anything from the NGINX site, we're just using Docker images and containers here. So I'm going to run on back, let's bring both of these back up. You'll notice that a request was made here, it shows the GET request was made, it was a successful 200, and some other information about the browser. And then if we come on back here, you can see that we're back where we were and we can see that it's up and all that. So now what I'm going to do is let's go ahead and try to stop the container. So we'll go ahead, and not to confuse terminals here, let's go ahead and leave this one up, and I'm going to type docker stop, and then I'm going to take the container ID over here. Now again, we don't have to type the whole thing. In this case I can say, for instance, d7, and that's going to stop, using the Docker Client tool, that particular running container, and this will take just a moment for it to stop. So once it echoes back out the ID we typed, that pretty much means it stopped. So if we type docker ps we shouldn't see it, and we don't, but if I do docker ps -a for all, we should see it, but you'll notice that now it says it's exited, about 14 seconds ago. Alright, great. So we now have a running container that we stopped, we've seen that because it's stopped we have to do the -a switch again to see it, and now let's go ahead and clean it up. Now this is one of the more exciting features, I think, from a development standpoint. Instead of installing the server on your physical machine, whatever it is, database server, web server, normally when you uninstall it, it seems like there's always a few files left over, but in this case because we're using images and containers, we can use our normal Docker Client commands and I can say rm, give it that d7 container, and now if we do docker ps -a, you'll notice the container is completely gone. So alright, that's great, but what about the image? Let's do docker images, we still have the image. Now normally if you're going to be reusing this image a lot to make other containers, you'd probably just leave it, but in this case let's just say, hey, I'm done with it, I really don't want it on my machine anymore, I've maybe tested something and everything is working great. Well, just like we did earlier we can do rmi, and in this case the image ID has this 385, so we'll go ahead and do that. That completely deletes that image, and now if we do docker images, you can see we're clear, and docker ps -a, of course we're clear on containers. And this is pretty cool I think. I'm a little bit picky about my development machines, and I like to keep everything really clean, and so when I'm done with something, I really would like all traces of it to be removed, and now they are. And I think this is a very, very cool feature for development, that I can literally get a server, whether it's a database server or a reverse proxy like NGINX or others, up and running quickly on my machine without a lot of effort, just a few commands, and then I can completely remove all traces of it using these Docker Client commands. So that's an example of some of the different Docker Client commands that you can run on your Windows machine.
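And the cleanup side of that walkthrough condenses to this; again, d7 and 385 are just the ID prefixes from this particular demo, so substitute whatever docker ps and docker images show you:

    docker ps            # find the running container's ID
    docker stop d7       # a unique prefix of the container ID is enough
    docker rm d7         # remove the stopped container
    docker images        # find the image ID
    docker rmi 385       # remove the image and its layered file system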

  20. Docker Commands Review Now that you've seen some of the different commands that can be run using the Docker Toolbox tools, let's go ahead and do a quick review of some of the key commands so that we cement our knowledge and make it easier to remember them. So we started off by talking about Docker Machine and how it's all about interacting with the machines that we might have. Now again, if you're on Windows or Mac, these will be VirtualBox machines. So we talked about commands like docker-machine ls, and this'll list the different machines that you have. If you want to start or stop a machine, you can simply say docker-machine start or stop, and then give the machine name. In our case the machine name was default. If you're wanting to set up the environment to link a terminal window to a machine, you can run docker-machine env, and we talked quite a bit about that one, and then we showed some simple commands like docker-machine ip and the machine name, and docker-machine status and the machine name. Now as a quick review, the ls command will actually give you everything that ip and status will give you as far as the commands go, but ls shows all the machines, and if you did have a bunch of them, then you'd have to figure out which machine you're looking for, whereas with docker-machine ip or docker-machine status, you can get that machine's info directly. Now in addition to docker-machine we also talked about Docker Client, and Docker Client of course is all about images and containers. So some of the image commands we talked about were docker pull, and docker images, which lists all the images you have on your machine, and then we talked about how you can remove an image with docker rmi, and you can give it the image ID that's available when you run docker images. Now when it comes to container commands, we talked about docker run and the image name, and docker run, interestingly enough, will actually pull the image if it's not already found locally. So although technically you'd run docker pull first, and then run docker run, if you want to save a step you could actually run docker run, and then give it the image name, and if it doesn't find it, it'll go download it and then run that container. We also talked about how you can list the containers. Now if you only want the running containers, you could do docker ps, but if you want to see all of them, you would add -a to the end of that. And finally we talked about docker rm, which can be used to remove a container. So those are some of the key commands that we've covered throughout this module, and I hope that helps cement them a little better in your mind, we'll actually be using them throughout the rest of the course.
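For easy scanning, here are all of the commands from this review in one place, with default standing in for your machine name and angle brackets marking the values you'd fill in:

    # Docker Machine: working with the VirtualBox machines
    docker-machine ls                # list machines
    docker-machine start default     # start a machine
    docker-machine stop default      # stop a machine
    docker-machine env default       # link a terminal to a machine (then run the eval it prints)
    docker-machine ip default        # get a machine's IP address
    docker-machine status default    # get a machine's status

    # Docker Client: working with images and containers
    docker pull <image>              # pull an image from Docker Hub
    docker images                    # list local images
    docker rmi <imageId>             # remove an image
    docker run <image>               # run a container (pulls the image first if needed)
    docker ps                        # list running containers only
    docker ps -a                     # list all containers
    docker rm <containerId>          # remove a container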

  21. Summary In this module we took a look at some of the key tools in Docker Toolbox and learned how we can use those tools to run commands. Now specifically we talked about Docker Machine, how it can be used to work with VirtualBox machines, specifically the Linux machine that we'd be running on our development machines. We also looked at Docker Client and how it can be used to manage our images, pull those images from Docker Hub, list the images, and then convert the images into running containers using commands like run and others. So that provides an overview of some of the key commands that you really do need to know if you're going to get started with Docker in your development environment. So moving forward in the course, we're going to dive even deeper, and we're going to learn more about images and how we can build custom images and work with those.

  22. Hooking Your Source Code into a Container Introduction We've learned how to work with images and containers in Docker, but we haven't seen how to hook our source code into a container, so that's going to be the focus of this module. Now we're going to start off by introducing something called the layered file system, and this plays a really critical role with your images and any running containers that you have. So if you want to, for instance, write to a log file or have database files, or even work with source code, it's important to understand how Docker actually works with files. Now once we talk about that, I'll introduce a term called volumes. Volumes are really important, especially as you work with your source code, if you want to get that source code hooked into a running container. So I want to introduce it here, and then we'll talk about Docker Client commands that you can use to actually create a volume. Now from there I'm going to show some actual examples of hooking real source code into volumes on running containers, and I'll show all the tools to even create the source code from scratch, get that up into a volume that's associated with a running container, and how everything works there, and then once we're done with those demonstrations, I'll show you how we can, with just a really simple command, remove a volume that might be associated with a running container. So the big question that we're going to answer in this module is how do you get your source code into a container, because that's really what we're after here. And it turns out there are actually multiple answers. We're going to focus on the first one, how do you create a container volume that points to your source code, and that's what I'll address and show you how to do. Now later in the course I'm also going to show you how you can add your source code into a custom image that can then be used to create a running container, and I'll show the tools and how all that works as well. But for now, we're going to focus on container volumes, and I'm going to show you how we can get started using those and how the file system works.

  23. The Layered File System Before we can talk about how we can get our source code into an image or a container in Docker, we first need to understand how Docker images and containers work, and discuss something called the layered file system. Now I've mentioned this term a few times earlier in the course, but we really haven't gone into any detail on what it is and the role it plays with our images and containers, so let's talk about that now. Now from a high level, a dessert perspective in this case, we have a bunch of layers here you can see. And at the very bottom we have the base layer, and then we add layers on top of that, and build up and up and up until we get the final dessert in this case. Now you may immediately say, well what does that have to do with Docker images and containers, and actually the concepts have a lot to do with images and containers. Docker images and containers are actually built on this layered file system, so instead of the dessert layers, you can think of layers of files that build upon each other, and you're going to see that's good for a lot of reasons. It's good for disk space, it's good for re-use, and even other things. So let's take a look at an image and see how these layers play into that image. So here's an example of Ubuntu, and let's say that we grabbed this off of Docker Hub. And so we've got all these different layers, and this is our layered file system in the image. Now the file system and the layers that compose it within a given image are all read only, and so once that image is baked, you're not going to be writing anything to that image, from a container for instance. The image has the files, they're kind of hardcoded in the image, and they're ready to go and be used, but you can't actually write to it. Now that may seem a little bit limiting at first glance because we might have images with a database that needs to write files, maybe we have to log some files, maybe we have some source code we want to swap and change as the container is running. So fortunately, while images and the file system they have are read only, a container builds on top of this and gets its own thin read/write layer, and really that's the main distinguishing factor between a container and an image: an image is a set of read-only layers, whereas a container adds a thin read/write layer. Now as you write to that layer, if that container gets deleted, then the writable layer also gets deleted, but coming up, I'm going to show you how we can change that and use something called volumes. But for now just understand that it is possible to write to a container and do log files or database files, or even have source code that does something like that, and we'd need to put it either in the image as a baked-in layer, or up in this thin read/write layer of a container, and we're going to focus first in this module on the container layer, what we can do there, and how we can use it. Now as mentioned, these file layers that we're using within our images are really, really efficient when it comes to disk space and re-use and things like that, so as an example, if I were to use this Ubuntu image and make a bunch of containers, then all these image layers that you see here, and you'll notice they each have a universally unique identifier per layer, are all going to be shared across all the different containers.
So that's really, really good for disk space because we don't have to make a copy of that entire file system, and that's why it's pretty quick to actually pull down different images, especially once you already have some images installed, because if you take the one at the very bottom of Ubuntu that starts with d3, if that particular file layer is used in other images, then Docker will just detect that hey, I already have that, and it won't have to re-download it, it'll just share it between those images. Now in the case of each container that's created here, you can see they all have their own unique read/write layer. And so that's going to be okay, each one can uniquely log or store database files, but as mentioned, if you delete the container you also get rid of that thin read/write layer, and that's where something called volumes is going to come into play. Now that we've talked about this layered file system, this can help us answer the question we raised earlier in the module: how do you get your source code into a container? Well at a minimum, you could put it in the image, and we'll talk about that in a later module, and I'll show you how to do that. But as mentioned, for this particular module we're going to focus on the container level, that thin read/write type of layer, and we're going to integrate the source code into the container using that particular layer. So let's go ahead and take a look at more information on how we can do that with containers and something called volumes.
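By the way, if you want to see these layers for yourself, one standard Docker Client command that we don't use in the demos is docker history, which lists each read-only layer in an image along with its size:

    docker history ubuntu    # one row per layer in the ubuntu image, newest layer first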

  24. Containers and Volumes Up to this point we've learned about the layered file system and how it works with images and containers, and how containers are a little bit unique and have their own thin, read/write type of layered file system, and we typically call that the container layer. Now I mentioned though that any changes written to the writable layer while a container is running kind of go away if the container is removed. So if you delete that container, you're also going to delete the file layer that is the read/write layer. Now obviously in scenarios where you have database files and logs and source code, we might want to keep that around, especially while we're doing development and just trying to use Docker as a development environment. So fortunately, Docker and containers have another feature we can use called volumes, and what I'm going to do in this section is just introduce volumes, and then later we're going to learn how we can use them. So what is a volume? Well, a volume is nothing more than a special type of directory that's associated with a container, and typically you'll hear it referred to as a data volume, and that's because we can store all types of data in it, it could be code, it could be log files, it could be data files, and more. Now we can share and reuse these among containers, so it is possible for multiple containers to write to this volume, or you could just have a single container that has one or more volumes that it writes to. And what's nice about this is any updates to an image aren't going to affect a data volume, it stays separate. Also, data volumes are persistent. So even if a container is deleted and completely blown away from the machine, the data volume can still stick around and you have control over that. Now from a high level you can think of volumes this way. If we have a container, then we can come in and define a volume within that container. So in this example var/www. Where do we want that to write? Well you kind of have two options. You can let Docker figure it out, or you can give it your own path, and I'm going to show you how to do your own custom volume coming up in the next section, but for now just know that when you write to a volume, so let's say that your code in the Docker container actually does a write operation to this var/www path, well that is really just going to be an alias for a mounted folder that is in your Docker host. Now remember that the Docker host is actually hosting the container, so if you're running on a Linux system or a Windows Server 2016 or higher type of system, then the host would be that OS, it's the thing that the container is actually running on top of. And so in this example if we had a volume that we wrote to, instead of writing into that thin read/write type of layer that is associated with the container that we talked about with the layered file system, it can actually write it up into this mounted folder area that's part of the Docker host. Now if you delete the container, the folder that's on your Docker host can actually stick around, and you can preserve all that code if you'd like. So that's a quick introduction to what a data volume is, and what a volume in general is, in the world of Docker containers. So now what we're going to look at is how we can actually get our source code into our containers using volumes, and we'll see how we can set that up using things like the Docker Client.

  25. Source Code, Volumes, and Containers Up to this point you've learned about the layered file system and how it's used with images and containers, and we've also learned about the basics of volumes, but let's go a little more in depth into volumes and how we can actually use them to store some source code. So earlier we looked at containers and saw that we can define a volume in a container. Now we haven't quite seen the syntax to do that, but I'm going to show you that here. And I mentioned that when you write to a volume, if you set that up, it's actually going to write to a special area that's on your Docker host, and by default Docker takes care of that, it takes care of creating this area where it mounts this folder, and so I like to think of the var/www volume that's in the container here as really being an alias that points over to the Docker host and this mounted folder type of area, and that's where you can put your log files and that type of stuff. Now to do this, we'd normally run a command like the following to start up an image and make a container, so we could say docker run, give it our port, and we have the external and the internal container port, and in this case I'm going to run the node image. Well, if we actually want to have a special area, a data volume that the Node app could write to, then we can change it to look like this. So I can put -v, and that stands for volume, and in this case say /var/www, and then put the image name. Now the var/www, or w-w-w, however you like to say it, that would be the volume, and the area that it writes to would be in the Docker host, and so it would kind of look like this: we create a volume, this is the container volume alias, but it's actually going to write to the host area, and Docker again will automatically create that. Now where does it store it, how would you know? Let's say for instance that your Node application writes a log file out to this var/www folder, how do you know where that's going to be? Well, what's going to happen is Docker kind of magically makes that mounted folder, and the way you can find where it is is by running docker inspect, and then the name of your container. And so we could come in and do that, and if you scroll through the information it gives you, you'll see a Mounts area, you can see that over here, and the Mounts area has a Name, and it's going to be a really long unique identifier, and then a Source path, and it's also a fairly long path, you'll notice it's in this mnt (mount) folder, and that's going to be on your host, your Docker host. The destination that the container actually writes to is going to be this var/www, and so we have the host location defined there in the Source, and then we have the volume location that's in the container defined by the Destination property that you can see. And so in this case Docker is automatically taking care of where that data gets written to, but if you ever wanted to go get it outside of the container, you'd now have to know how to get to this path, which can be a little bit long, and it works great in scenarios where you don't want to control it, you just want to set up a container volume, write to it, and then have Docker take care of storing that somewhere and persisting that data, and this is the default way it will do it. Now the other option is we can actually customize our volumes, because in this case Docker is determining the mount location, the folder, where the var/www is actually going to write to.
So let's look at how we could actually customize this. So instead of having Docker set up the folder that it writes to on the host, we could come in and give it our own folder path, and in this case I'm just saying /src, but it could be a variety of paths, and this could be your source code, it could be where you want your log files, your database files, or whatever it may be. So this gives us an option to work with, for instance, source code in this example, store that in a certain folder, maybe on your Mac or your Windows machine, and then have the volume actually read and write to that specific area, which in this case would be our /src. So what does the Docker Client command look like to make this possible? Well again, if we start out with the following where we just run the node image, we could change it to this type of pattern: we could say -v, and that again creates a volume. This $(pwd) basically says hey, go from the current working directory and use that as the host mount; in other words, use that as the folder where I want to put my source code. Now the actual container volume though would be this /var/www, and then of course we have the name of the image, in this case node. So what this will do is create a volume in the container, which is going to be /var/www, but when you write to that, or when source code is read from it as the Node container is actually running, then it's actually going to look in the host location, which would be the current working directory. So if you set up a /src folder and that was where you ran the command prompt from, then that would be your current working directory. If you were in your user folder, then that would be the current working directory; it just depends on where you are. Now if you do an inspect on this, things change a little bit. And so we'll run docker inspect on the name of the container, and here's what we'll see in the Mounts area. So again, we'll always have a name, which is a unique identifier, but you'll notice now that the Source on the host location is what we wanted, in this case /src. Now if you're on Mac or Windows, Docker is smart enough to allow you to work with source code directly on your Mac or your Windows machine, have that talk through VirtualBox, and up to the container, and it kind of does some magic there to make all that happen. So that's really nice for us as developers, because now I can work with my source code right on my Mac, Windows, or Linux machine, but have my container be loaded up, and then reading the source code, or writing, to that area using this volume support. Now the destination, or the volume in the container, the destination as far as the container is concerned, is still going to be /var/www, so that part doesn't really change, but again, that's kind of like an alias, as I like to think of it, and it's actually going to read and write to this /src. And again, any time you're working with this -v syntax as you run a particular image and make a container, whatever you write to that volume gets persisted. So even if you delete the container, that's going to stick around, and in this case that's a good thing. Obviously if we delete a container, we don't want it to delete our source code. So that's an example of how we can get started using the Docker Client with setting up a volume, where we can either let Docker automatically generate a folder on the host, or control it by using our own syntax.
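And the host-mounted flavor from this section looks like this; $(pwd) expands to whatever folder you run the command from, /src in the example:

    # Mount the current working directory into the container at /var/www
    docker run -p 8080:3000 -v $(pwd):/var/www node

    # docker inspect <container-name> now reports your own folder as the Source:
    #
    # "Mounts": [
    #     {
    #         "Source": "/src",
    #         "Destination": "/var/www"
    #     }
    # ]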
So now that I've shown you the basics there, let's look at some examples of how we can actually hook source code into different types of containers.

  26. Hooking a Volume to Node.js Source Code In this section we're going to take a look at how we can get Node.js source code into a running Docker container by using volumes. Now this source code is actually going to live on our local development machine, but we're going to magically link it into the container using volume support. Let's go ahead and get started here. So I'm going to come on in and start up the Docker Client, and you can see that my virtual machine is already running; there's the IP address we're going to hit once we get this all up and running, but first I'm going to run this locally just to show you that we can get this source code up and running, and then we're going to get it running in a container. I'm going to use something called Express, which is a web framework for Node.js, and it has a companion tool called express-generator that'll generate a little sample site that makes it easy to get started with some Node.js code. So the first thing I'm going to do is run an npm command, and this'll install some modules; we're going to install express and express-generator, and we're going to do these globally because express-generator will add some command-line integration support, which you'll see in just a moment. Alright, so we're kind of ready to go, and what I'm going to do from this location, which is really the user account I'm in right now, is create a new folder and generate some Node.js source code in that folder. And the way we can do that is run express, give it a folder name, we'll say ExpressSite, and then I can give it a technology for how to render the views; I'm going to use something called Handlebars. So that'll run, and now all we have to do to get this site up and running is first off, run these commands. So we're going to cd into the folder and then we're going to install all the dependencies of this particular web application. So this'll take just a moment to pull these dependencies down. Alright, and we're ready to go. And then, I'm going to get this running. Now I do have Node.js running on this local machine of course, that's how I'm running npm and these other commands, but the goal is not to run Node here, it's to run it in a Docker container; this'll just show us that the source code is running properly, at least locally to start. So now that I'm in the ExpressSite folder, I can just type npm start, and this'll fire off a little web server here, and now we can come on in, and there we go, we now have a little Express site. Alright, so we're off and running. Now that's nice and all, but that's actually just running Express directly from this particular folder. So I'm just going to go ahead and kill that; we'll leave this open to start. So the next thing I'm going to do is show you how we can work with volumes. So we're going to come back to the Docker Client, and if I run docker images, I've already pulled the latest Node.js image from Docker Hub that we're going to ultimately create a container from, and so normally to run this we would just say docker run, we'd give it a port, we'd give it the external port, I'm going to say 8080, and then you saw that the Express website that I got going is actually going to run on 3000, and we would say node. But if we run this, we don't have any source code. So basically what'll happen is the container will try to start, it'll see there's no command to run, and it'll just exit, it'll just stop. So we're not going to do that.
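For reference, the local setup just walked through boils down to a handful of commands; assuming the express-generator version from the demo, the --hbs flag is what selects Handlebars for the views:

    # Install Express and its generator globally for the command-line support
    npm install express express-generator -g

    # Scaffold a sample site named ExpressSite with Handlebars views
    express ExpressSite --hbs

    # Pull down the site's dependencies and run it locally on port 3000
    cd ExpressSite
    npm install
    npm start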
What I'm going to do first is create a volume. Now the first volume I'm going to show you is going to be this /var/www, or I like to call it dub-dub-dub to shorten it up, and this creates a volume, but it doesn't actually point to our source code; this is just an area such that if we did have something running in the container, and the running app wanted to write to /var/www or read from it, then Docker would create a volume outside of the container on the host machine. And so now I could say node here for the image, and we'll go ahead and return, and now it just started the container, but again, there was nothing to run. So if we do docker ps -a, you should see that it's exited. So really what happened is it tried to start, it didn't see anything to run, and then it stopped, but we should now have a volume under the covers. Now you'll notice the container ID. Now what we can do from here is say docker inspect, and then give it the start of that container ID, or we could use the drunk_borg name, that's one of the more interesting auto-generated aliases I've seen, but we'll go with the 03 here, and this will spit out a whole bunch of information about that particular container, and I'm going to scroll back up and we're going to look for something called Mounts. Alright, there we go. So there's our Mounts, you can see right there. Now you'll notice that we have a Source on the host system that's a really long path, and it uses this Name, which is a unique identifier for the particular volume, and it kind of buries it in this folder structure that you'll see right here. Now on the container itself, the alias for this path that's on the host is just /var/www. So again, if we read from /var/www, or we write to /var/www, what it's really going to do is write to this location, or read from this location. Now that doesn't help us as much with source code, because I'd have to get my source code into that folder, and, eh, that's not a path I really want to work with, so what I'm going to do is come back down and we're going to go ahead and remove the container. So again, if we do docker ps -a for all, this will show us all the containers, even the exited ones, and then I'm going to do the normal docker rm that we've learned up to this point, but I'm going to add one more thing. I want to make sure that the volume that's on the host machine also gets removed when we remove this container. So I can do the 03 for the container ID, but by adding -v, that'll go ahead and clean up the volume. Now normally when you remove a container, it's not going to delete the volumes, because there might be another container that's using that volume. So if this is the last container that uses that volume, you typically want to clean that up, and that's what I'm going to do here. Alright, so now if I do docker ps -a again, you'll notice that we're all cleaned up and we're good to go here. Alright. Now we have that ExpressSite folder that I created earlier though. So what I want to do is cd into that, and we can do an ls to basically list everything, kind of like a dir that you'd normally do on the Windows side, and you'll notice that we have this app.js and some node_modules and package.json, and these are pretty standard files and folders for an Express site. So what I want to do is link this folder into the container, and then start up Node, very similar to what I just did earlier. So what we can do is use the volume support that I talked about in a previous section in this module.
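Condensed, that dry run and cleanup looked like this (03 standing in for the start of the demo's container ID):

    # Start a container with a Docker-managed volume; with no command, it just exits
    docker run -p 8080:3000 -v /var/www node

    # List all containers, including exited ones
    docker ps -a

    # Remove the container and its Docker-managed volume in one shot
    docker rm -v 03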
And so we could do something very similar: we could say docker run, give it a port, 8080 on the external, 3000 internal, but this time when I create the volume, I'm going to say let's start from the current working directory, and this is the little shortcut I showed earlier you can do, and this is going to be the directory that the volume in the container is actually going to point to. Perfect, because that's what we want; we want to point to our source code, which is in this ExpressSite. Now the name again we're going to use is /var/www. Now I made that up, it could have been something else, but I'm going to go with that. And then normally we would say node, and then if you want to run any commands in the container, we could run that npm start command. Now we're going to have a problem here. It won't actually run npm start from this folder, it'll run it from a different location. So I'm going to show you a little trick you can do here called the working directory. So we're going to say -w, and that stands for the working directory; it's a shortcut for the startup directory, the folder in the container where any command should actually be executed from. So it kind of sets the context of where to run these commands. And I'm just going to say /var/www here, put the image, and then after that we can put the command that we want to run, and I want to run npm start. Alright, so to review, we're going to say hey Docker, let's run on port 8080 on the external, 3000 is going to be what we run internally, and I picked that on purpose because that's what this ExpressSite will use by default; we could change it, but that's how it's set up currently. We're going to set up a volume that points to our source code in the current working directory, the volume inside of the container that points to this ExpressSite folder is going to be /var/www, and then we're going to go ahead and use that volume as our working directory. That way when I run npm start, really what it's going to do is run it from /var/www, which is linked over to this ExpressSite folder. Alright, so a lot going on, but let's go ahead and try it out, and I'll just hit Return here. Alright, now you'll notice it started up, but this time it's not running on my local machine, it's actually running in the Docker node container. So this is very, very cool, because I've now linked my source code into this container, and even if I didn't have Node installed, if I just had the source code but didn't have Node on my dev machine at all, then we could still work with Node, because obviously it would be loaded up in the container. Alright, so let's try this out. Now instead of going to localhost, I want to go to the IP address, and this'll be the Docker Machine IP. So we're going to go to 192.168.99.100. I'm not going to go to 3000 though; back here we said we want to map 8080 externally to 3000 internally in the container, so I want to use 8080 right there. We'll hit Enter. Alright. Now you'll notice we get the exact same Express site. Now just to kind of prove this, let me dive into the source code real quick, and so I'm just going to run off to this folder in Users, there's ExpressSite, and let's come in and I'm going to make a change to the view. Right now it's loading the home page, which is called index, and I'm just going to open this up in the VS Code editor. We'll do a very simple edit.
So it says Welcome to title, and I'll change it to say Welcome to title running within a Docker Container, with a volume no less, which is pretty cool. Alright, we're going to go ahead and save that. That change is now saved in the source code, and you'll notice, ah, look at that, the change shows up, so we just proved our source code is linked into the container. So now I can do all my edits locally, but I can actually run it in whatever container I want. Now in this particular demo I'm using Node.js, but this applies to PHP, ASP.NET, Python, whatever it is you want to run. So that's an example of the actual commands that we learned about, going back up to here, that allow us to link our working directory, the current folder on our local machine on a Mac or Windows, or even Linux, into the volume that we defined, this /var/www up in the container. Pretty cool stuff.
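Putting it all together, the single command from this demo that links local source code into the Node container looks like this, run from inside the ExpressSite folder:

    # Map host port 8080 to container port 3000, mount the current folder at
    # /var/www, make that the working directory, and kick off npm start there
    docker run -p 8080:3000 -v $(pwd):/var/www -w "/var/www" node npm start

From there the site is reachable at the Docker Machine IP, which was http://192.168.99.100:8080 in this demo.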

  27. Hooking a Volume to ASP.NET Source Code In this demonstration we're going to see how we can get an ASP.NET CoreCLR application up and running in a Docker container, and I'm going to link from that container to some source code that's going to be living on my Mac, and we're going to do that again using a Docker volume. So to get started, I'm going to slide on over to the Docker Client, and there are two tools I'm going to install, because I need some source code first off. We don't have any right now, so I'm going to install two tools using npm. So you will have to have Node.js installed to run these, but assuming you have that, we can install generator-aspnet and Yeoman, and I'm going to install these globally because they'll add some command-line functionality. So we'll go ahead and let this run. Alright, so now that those two tools are installed, what I'd like to do is generate some scaffolding, some code for an ASP.NET CoreCLR site. So I'm going to come on in, and you'll notice currently this folder is empty, but we're going to go ahead and generate our code in here. So let me go ahead and cd into it; it's on my desktop, we'll go to Asp.NetDemos. Now from here there's a tool I can run, which is this Yeoman tool, and Yeoman can use the ASP.NET generator. So I'm going to run yo aspnet, and this will start a wizard, and now I'm going to select Web Application. Now from here it'll ask me what I want to call it, I'm going to go ahead and call it AspDockerApp, and that just scaffolded it out. So just to show you kind of what's in here, you'll notice I have a new folder now, and I now have some source code, and this is the code that we want to link into a Docker container using a volume. In order to make this run in a container and allow me to get to it from my Mac in this case, I need to make one little configuration change, because by default this project will run on localhost on port 5000, but you can only hit it from localhost, and that's not going to work if I'm outside of the container and I want to hit it from a browser on my Mac, for instance. And so let's go ahead and make some edits here. I'm going to go into project.json, which is the main project file that has configuration for ASP.NET, and I'm going to open this in Visual Studio Code. Alright, and this has all the different DLLs and features that I need, all the modules if you will, and you'll notice that there's this web command that runs something called Kestrel, and this is a server we can run that's actually going to run inside of the Docker container on Linux. But what I need to do is tell this server that hey, it's okay to run outside of localhost, that you can have other IPs hit it. So I'm going to add a little property called server.urls, and what you do is you basically just give it a wildcard IP, we're going to do four 0s there, 0.0.0.0, and then I want it to run on port 5000. Alright, so normally this would be localhost 5000, but by putting the four 0s, this'll make it so it knows to allow it to run outside of just localhost; that way we'll be able to hit it. Alright, so let's go ahead and save that guy. So our server.urls is in place, and now we can go ahead and try to get our Docker container running here. Alright, so let's close that. Alright, now the next thing I want to show you: if I go to docker images, I've already done a docker pull command on microsoft/aspnet, and I downloaded a very specific version. And so you can see I have the coreclr version of ASP.NET available.
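As a rough sketch, the edited commands section of project.json ends up looking something like this; keep in mind this is the dnx-era beta tooling, and the exact command string varied between versions, so treat the wording here as illustrative rather than exact:

    {
        "commands": {
            "web": "Microsoft.AspNet.Server.Kestrel --server.urls http://0.0.0.0:5000"
        }
    }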
Now that image is already here, so now I can run the docker run command and we can get a container going, but of course I want to link it into this Asp.NetDemos. Now I'm already in that folder, we can do an ls, but I want to go inside of the AspDockerApp, because that's going to be my working directory, so we're going to cd into that now. And let's do an ls, and there we go, there are all the files that you saw earlier in the Finder. From here what I want to do is a docker run command, and there are going to be quite a few command-line switches I'm going to add, and I'll talk through these as I do them. So let me go ahead and clear the screen so we have some room here. And I'm going to do a docker run, and the first thing is I'm going to make this interactive, and you'll see why that is in a moment, and then I'm also going to add something called a pseudo-TTY; this is the way terminals used to emulate working with hardware back in the old days, and here it'll let me interact with the container's shell. Then I'm going to do a port of 8080, but internally it's going to be 5000 as you saw, and then we want to create the volume. Now we want to link it into the current directory that I'm in, which again is my AspDockerApp. So we're going to do the $(pwd) shortcut command, and then for the volume name in the Docker container, I want that to be /app. I could have named that whatever I want, but I'm going to go ahead with that, and I also want a working directory of /app, and that'll start us up right in that app folder when the container starts running. Now the app folder again is really just an alias that's going to point over to the folder that's buried in this Asp.NetDemos. Alright, now from here, we can come in and specify the image. And this one's a little bit longer; it's the one I showed earlier, it's going to be microsoft/aspnet, and then I want the coreclr image that I installed, so it's going to be in this case 1.0.0. And so now we have told it the image, but because there are specific versions of the images out there, there are quite a few of these actually up on Docker Hub, I need to explicitly put this colon character. So I can say I want the 1.0.0 version, but I want the coreclr variant of that. Now, I'm going to have to run some commands; I'm going to need to restore some packages that this app is going to need to run, all the DLLs in other words, the assemblies. So I'm going to start us up in Bash, and so I'm going to say /bin/bash, and when I hit Enter, that will now spin up this container and automatically put me in the app folder, and now I'm going to run some ASP.NET CoreCLR commands. We're going to run dnu restore, and this is going to restore all the packages that we need, and this'll take a moment to run. Now that that's done, we can start up the web server. So I showed you that web command in the project.json. So I'm going to go ahead and say dnx web, and dnx web is basically the command to fire up the Kestrel server on any IP, and it's going to be port 5000. So this should start it up. Now that just did some database configuration and some other work, and it tells us that we can now hit this on an IP address, which I'll show you in a moment, and port 5000.
And what I want to do now with Chrome is come in and hit the Docker Machine IP, 192.168.99.100, which you've seen a couple times, and I want port 8080, because that's what we configured. So let's try this out, and this should now get everything ready, and there we go. So we've now linked our container to the volume that's pointing, in this case, to my Mac machine, and so again, just to show this is running, we can come back to the source code; let me just minimize this and we'll come back into here, and let's do something really simple. Let's go into the Views, we'll go to Home, and we'll open up the Index; actually, let's go to About because it's less busy. So this is really simple. I'm just going to say Hello from Docker and a local volume on my Mac! So let's go ahead and save that. Let's go back to the web app and let's go to About, and there we go. So there you have it. We now have an ASP.NET CoreCLR container going, we've linked that into the source code on my Mac using a volume, and now I can do full-on ASP.NET development; I can even interact with the terminal interactively, that's kind of that -i and -t stuff that we did, and now we can do whatever we want from here. Now I could also come in, and let's say we want to go ahead and stop this actually, and I'll exit out; now let's do a docker ps -a, and you'll see that our container has exited and we're kind of off and running. Now if I want to eliminate the volume pointer, then we can say docker rm -v, and I want to remove any volume pointers for this 97 container, and you can just abbreviate the ID as you've seen earlier in the course. We'll go ahead and do that. Now normally when you do a volume, if Docker is managing the volume and mounting it itself, then it's going to remove that, but let's see what it did for us given that it's local, and you'll notice that everything is still there. And that's because when I set up the run command and set up the volume for that run command, it's way back up here, we pointed the volume at the current working directory, and in that case when you do the -v it's not going to delete your local folder. Now if this had been, again, a Docker-managed one, then it would have deleted that. So that's an example of the type of commands we can run, how we can interact with that container and run things like dnu restore and others, and how we can get an ASP.NET CoreCLR application actually up and running, and use a volume.
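To recap, the container side of this demo came down to a short sequence; the image tag is the one from the demo, and today's ASP.NET Core images use entirely different names and tooling:

    # Interactive container (-i) with a pseudo-TTY (-t), mapping host port 8080
    # to Kestrel's 5000, mounting the current folder at /app, and starting Bash
    docker run -i -t -p 8080:5000 -v $(pwd):/app -w "/app" microsoft/aspnet:1.0.0-coreclr /bin/bash

    # Inside the container: restore packages, then start the Kestrel server
    dnu restore
    dnx web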

  28. Removing Containers and Volumes In some of the earlier demos I showed how you can clean up volumes as you delete your containers. So I want to reiterate and go through some examples of when you need to do that and when you really don't. So if you run a container, and as you do that you actually add a volume to that container like we saw earlier, and you only specify one part to the volume as you see here, I just have /var/www, then in this case Docker is actually going to manage the volume location where it reads and writes. And so we're not specifying where our source code is or where to write; we're going to let Docker figure that out. All we're doing is saying that the container has a volume of /var/www, and then Docker is going to do the magic that actually creates that folder and mounts it on the host machine. Now in those cases, which definitely will be reality in some production or staging scenarios where maybe you write log files or things like that, then when that container goes away, if this node container for instance needs to stop and then be deleted, we'll probably want to clean up that volume, because otherwise you can have some dangling files, and that eats up hard drive space; I mentioned this in some of the earlier demos, but I just want to reiterate. So if you run docker inspect on your container, I showed earlier that you can actually see the mounting location, and you'll see the Source property in the Mounts property. So you'll notice a nested object with Source and Destination. Now if you see that it's mounting it and that Docker is taking care of it, and that means again you did -v with just one piece, not two pieces, as you defined the volume, then when you're down to your last container, you're going to want to remove this so you don't waste any hard drive space. So as I showed earlier in some of the demonstrations, you can simply say docker rm -v, and that says in addition to removing the container using the volume, let's also remove the Docker-managed volume. Now as I showed in one of the demos, if you do this and your volume has two parts, where you have the container volume name but it actually points to a folder you specify, like your source code, doing -v is not going to delete your source code; it's going to leave it all there. So this is really only needed when you specify a volume and you let Docker manage the location on the host machine of where that volume lives. And if any other containers are using the volume, you'd only want to run this when you're down to the last container using it, because if other containers still need that volume, you don't want to get rid of it. So that's a quick review on volumes and the need to clean those up in cases where you delete a container and where you define a volume that Docker actually manages on the host machine.
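To sum that up side by side:

    # Docker-managed volume (one part after -v): docker rm -v removes the
    # container AND the mounted folder Docker created on the host
    docker run -v /var/www node
    docker rm -v <container>

    # Host-mounted volume (two parts after -v): docker rm -v removes the
    # container, but the folder you specified, e.g. your source code, stays put
    docker run -v $(pwd):/var/www node
    docker rm -v <container>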

  29. Summary In this module you've taken a look at the layered file system, and you've seen how images are composed of layers, and how containers are really the same thing, but with a thin read/write layer that sits on top of all the other layers provided by the image. Now we talked about that because there may be times when you want to read or write some information, but you want it to stay around. And so we learned how we can hook source code into containers, and how we can even write things like log files or database files into things called volumes. And we learned that volumes are persisted even if a container is deleted. So to do that we can use the docker run command and specify -v, and then we can either do a Docker-managed volume, or we can specify a folder that the volume in the container points to, which might have things like our source code. Now as mentioned, volumes are persisted on the Docker host, and that's a good thing, because we might have some log files that a container writes out, and even if the container goes away, we don't want those deleted, or we might have database files or something along those lines. But if you do get down to the last container and you don't need the volume anymore, then we can remove it using the docker rm command, where we simply say -v and then the container ID, or the container alias. So that's an example of how we can actually get our source code linked into a container, and that's a really, really useful feature to know about, because now I can get Node or PHP or ASP.NET, or whatever it may be, up and running as a container. I don't even have to install anything on my machine; I just have to get that image and that container running, and then I can simply create a volume that links into my source code and I'm off and running. When I'm done, I can just delete that container and there's really no trace of it, especially after I also get rid of the image. So I hope that clarifies what the layered file system is and how we can use Docker volumes.

  30. Building Custom Images with Dockerfile Introduction Up to this point in the course you've worked a lot with images and containers, but they've been images that were hosted up on Docker Hub, and we've pulled those down. In this module we're going to focus on building custom images, and you're going to learn about a special text file called a Dockerfile, and some of the instructions that you can put in it. Let's jump into the full agenda. So we're going to start off by talking about what a Dockerfile is, and I'll introduce some of the key instructions you're going to need to know about, and explain the general process of how building custom images works. From there we're actually going to create several types of custom Dockerfiles, and we'll see the different instructions, and we'll do that on Windows and on Mac. Then we're going to learn some Docker client commands we can run to build a custom image and tag it. And then finally, once we've built multiple images, we'll talk about how we can publish an image up to Docker Hub to make it available for us on any other machine, for other team members, or even for the general public if you want. Now the main question that we're trying to address in this module and some others is how do you get source code into a container? One way we've already learned about: you can create a volume that points to your source code on your local development machine. And that's great when you're working in development mode, but in this module we're going to see how we can actually get our source code into a custom image, so that the image could be used by other team members, or anyone out there in the public if we wanted to set it up that way. So let's go ahead and get started by talking about what a Dockerfile is and what some of the instructions are that you need to know to create a custom one.

  31. Getting Started with Dockerfile Developers are quite used to writing instructions in a code file, running those through a compiler, and then outputting some type of binary or other file. Well, in the Docker world we have a very similar type of process that we can follow to create an image and then a running container, and that's to create something called a Dockerfile. Now a Dockerfile is nothing more than a text file that has instructions in it. These instructions of course are unique to Docker, and they're defined up in the Docker documentation, but it's a very, very similar process to writing Java or C# or another compiled language: you'll write some instructions in a file, and then in the developer world we'd run those through a compiler. Well, in the Docker world we'll run them through the Docker Client, which has a build command we can run, and that build command can read through those instructions, generate a layered file system, as we've talked about earlier in the course, and then we have a Docker image that comes out of this that we can use to make a container. The Dockerfile itself, as mentioned, is really just a text file; there's really nothing fancy about it. In fact, it's normally called Dockerfile, and oftentimes doesn't even have an extension, but you can name it whatever you want; it's just a text file that we want to feed into the Docker build process. And so it contains some build instructions, which we're going to be looking at, and these build instructions will do things like work with environment variables or copy source code into the image, and more. Now the instructions oftentimes create intermediate images, and these are kind of behind-the-scenes images that are cached, so that if you change an instruction and need to rebuild the image, it won't have to do everything from scratch. Now there are ways you can override that and not cache anything, but then your build process will take a little bit longer. And as mentioned, we're going to be using the docker build command to actually convert the text file into an image. Now here are some of the key Dockerfile instructions, and this certainly is not all of them, it's just a few, and I'm going to give a high-level look at what they do. So normally what you'll do is start off by saying I would like to create an image from another image. Now you can create it from scratch, from kind of nothing, but normally you'll create one based on, for instance, a Node.js image or a MongoDB image or a PostgreSQL image or something like that, and use that as your baseline, and then you'll build on top of that using this layered file system. There's also a way you can define who maintains it; that's a very simple instruction, but you could put your name there. And then there's a run instruction. Now the run instruction is really important, because you can actually have different things defined that are going to be run, and these would be things like I want to go out to the internet and grab something, I want to run npm install, dnu restore, those types of things could actually be run using this run instruction. Another really important one is copy. We learned earlier in the course that you can use volumes for source code, but when you're ready to go to production, we oftentimes want to copy that source code into the container. There are multiple ways to do it, but that's pretty common, and so we can use the copy instruction to do that.
You could also set what the main entry point for this container is. In other words, when you have an .exe or something like that on a system, you can normally double-click it and it has a main entry point that kicks everything off. Very similar here: what is the initial entry point that kicks off the container? It could be a Node command, it could be a DNX command, it could be a Java-type command, or something else. You can also define what the working directory is; this sets the context for where commands, such as the entry point, are run from. So I could say what folder has my package.json, and I could do an npm run; what folder has my project.json, and I could run a dnx web or something like that. You can also expose a port, and this'll be the default port that the container then runs with internally, define environment variables, which can then be used in that container, and we can even define volumes. And we've already looked at volumes in the course, but you can now define the actual volume and control how it stores that on the host system for that container, as we've already talked about with volumes. Now let's take a look at what a Dockerfile actually looks like; that was a few of the commands, but it helps to see them in action. So first off you could say FROM, and this will always be the very first instruction that you're going to put at the top of your Dockerfile, and I'm going to say FROM node to build an image, and this will grab node as the base file system and then add additional layers on top of that. Now I could say the MAINTAINER, in this case I obviously put me, but this is where you could put yourself, your email, things like that. Then we can say I would like to copy my source code from the current folder I'm building from, in this case dot, and I'm telling Docker in this case that when you build the image, copy that source code into the var/www folder, which of course is just one I made up. What that will do is give the layered file system a layer that's just for our source code, and that'll come from the copy instruction. We can then set the var/www as our work directory, because we might want to run some different commands like npm install if it was Node, or dnu restore, or whatever it may be, depending upon your framework and the type of containers you're building. We can define the port we'd like to expose that the container actually runs with, and we can also define an entry point; in this case I'm saying that the node command and server.js is my initial entry point into this, but of course that could be whatever you want for your chosen technology. So that's a quick look at some of the different instructions, and as mentioned there are more, but we're focusing on the ones that you really need to know to get started with as a developer. But what we're going to do now is take a look at building custom Dockerfiles from scratch with some different technologies.
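Before we do, here's the example just described, written out as an actual file; the maintainer name, the port, and the server.js entry point are simply the placeholders from this walkthrough:

    # Build on the official node image as the base file system
    FROM        node

    MAINTAINER  Dan Wahlin

    # Bake the source code from the build folder into the image
    COPY        . /var/www

    # Run subsequent commands, and the entry point, from that folder
    WORKDIR     /var/www

    # Default internal port for containers made from this image
    EXPOSE      3000

    # Command that fires when the container starts
    ENTRYPOINT  ["node", "server.js"]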

  32. Creating a Custom Node.js Dockerfile Let's assume that you just got back from a team meeting and you've now been tasked with making a custom Docker image that the team could use, and specifically you need to build a Node.js image. Now to do that, we're going to need to build a custom Dockerfile, and we're going to need to add instructions to it. Once we're done with that, I'll show you a little bit later how we can use the Docker Client to actually build it into an image, and then of course we can convert the image into a running container. So let's take a look at how we can do this. So I have some code here for a Node.js ExpressSite, and this is the same one generated from express-generator, and I want to call out one thing in the package.json file. You'll notice that we have an npm start command that can be run, and when that runs it actually runs the node command, and then points to a file called www here. Alright, that's going to come into play in just a moment as we make this Dockerfile. Now the next thing I have in here is I've already added an empty file called Dockerfile. Now it turns out you can actually name it whatever you want. This is the standard format, but if I wanted to rename it to something like node.dockerfile, just to give it an extension, I could definitely do that. And when I have just one Dockerfile in a project, then I'll usually just go with the de facto standard, which is this Dockerfile, this one here, but if you have multiple or you just want to give it a more explicit name, then you certainly can rename it; it's just a text file. Now the first thing we're going to do is use the FROM instruction. The FROM instruction tells Docker, I want to build this particular image from another base image, and because we're going to be doing Node.js in this example, I'm going to base it on the official Node repository image that's up on hub.docker.com. Now this particular image has a lot of different versions, so you could do this alone, and that would be like writing node:latest, which will always grab the latest version of Node every time you rebuild the image. Now that could be good, that could be bad, because maybe you don't want to move forward with the latest version, but you could also come in and specify a different version, for instance, if we wanted as well. Now I'm going to go ahead and go with the latest one here, and I do like to put latest in cases where I want to grab the latest, because it makes it really obvious, even though as I mentioned it's the default; I like to be explicit, so we're going to go ahead and do that. Now the next thing I'm going to do is say that I'm the MAINTAINER of this particular Dockerfile. So we can say MAINTAINER, and then you can give your name, you can put your email address, whatever you want on this line; this is a little bit more of just metadata, but it's good to have as other people look at your Dockerfile, in case they want to get in touch with you for whatever reason. Now the next instruction I'm going to put is called EXPOSE. I'm going to say that we would like this particular image, and the container that comes out of it, to actually run on 3000, and that's because that's what the Express site by default will run on. Now when we do docker run, as you've seen, we can actually map different ports if we want, but this provides the default.
And then finally, I'm going to put something called ENTRYPOINT, and the ENTRYPOINT instruction defines, when the container actually gets started up, what the entry point is to fire up that container, and for us it's going to be the npm start command. So I'm going to put npm start. Now something interesting about this: you'll notice I'm putting it in a JSON array, and in fact I have to put the double quotes in this case because it is a JSON array; it's treated that way under the covers. Now I could use the plain shell form without the array as well, but the normal recommended way, that most people will tell you anyway, is to go with the JSON array form. So there we have it, we now have our very first node image. Now it hasn't really done a whole lot, because I could have just done a docker run on the node image up in Docker Hub itself, and the only thing I've gained here is I put who the maintainer was, really, not a whole lot. I did put the default entry point, okay, that could be useful, but there's no source code that's going to be built into this image, so we'd have to use volume support to make that happen. So let's take this up a notch and see what we can do here with it. So let's say that part of the requirements for making this image was that we needed to copy some source code into it, so that when other people on the team run the container, they don't really have to do anything. Maybe it's going to be a Node.js RESTful API, maybe it's just a web app that'll be hit from some other container potentially. So what I'd like to do first is come on in and use another Dockerfile instruction called COPY. And COPY does kind of what it says; it allows you to copy in whatever you want, it could be an individual file, it could be an entire folder structure, but we're going to copy the entire project that you see over here on the left, that's everything in here, and I'd like to copy that into var/www, and that's just a folder structure I'm going to go with to say that's where we want our code to run. What that will do is bake this source code as a file layer into that layered file system that Docker builds up for images, and so now our code is going to be in there, it's baked in, it's going to be ready to go. We could also then come in and set the work directory. What the work directory allows us to do is set the context for different commands we might want to run; where does it run them from? Does it run them from the root, does it run them in this folder called var/www? It's kind of up to you. So I'm going to say yeah, we want to run from this var/www. Now the reason that's important is when I use instructions like RUN, which is another one that's built into Docker and the Dockerfile, then it needs to know in some cases the folder context where that should be run. So for instance, if I run something like npm install, then we probably want to run that where the package.json is located. Had I not put the work directory, then we would have to actually tell it the context of where to run this command, so it can find that package.json and find all these dependencies that it might want to add. So the work directory you'll find is actually pretty common and very useful, especially for us as developers, because our images might need to run some specific commands in that folder or location.
Now the other thing we can do that's related to this work directory is we could say that maybe we want this folder, for whatever reason, to live on the host system where the container is ultimately going to run. And we know how to do that, and that's to use volumes, so I could actually come in and say VOLUME, and give it var/www. Now that alone is going to cause Docker, once the container runs, to mount this particular volume that's in the container into a folder on the host file system. And we talked about this in a previous module in pretty good depth, but this would set that scenario up, and it's really useful; we might even have multiple volumes, maybe this app needs to write to, we'll just pretend, a logs area. Well, we could set up a volume so that the logs actually stick around even if the container is deleted and then maybe brought back up at a later time. So that's what the VOLUME instruction can do. Now I'm just going to leave this in here just to show it. Keep in mind that with the docker run command that we looked at in previous modules in the course, I showed you how we can actually set up a volume, it was the -v switch, and we can point that, for instance, to the source code on our developer machine. But in this case we're going to go ahead and use a volume just so we can see it here. Now the last thing we're going to do is let's assume that this also needs to run with certain environment variables. Maybe, for instance, we're going to expose a port here, but when someone runs the container, they might specify a different port; maybe in production it's port 8080 or something like that. Well, we can use environment variables as well. Now I normally like to put these up at the top, although they can go in different places; it depends on whether you're going to use them in the Dockerfile or not, but in this case we're just going to make environment variables, and I'm going to make two of them. We're going to do NODE_ENV, and let's just assume we want production for this particular container, instead of the default, which would be development, so let's go with production, and then I could do another name-value pair right here. As you'll see, these environment variables are just the name of the environment variable and then the value, and I could put both on one line, but I'm going to break it into two steps so you can see the separate instructions. Let's say that we also want to put a default port, and I'm going to go ahead and leave 3000, which is what Express runs on, but this'll be an environment variable that your Node.js code can now read from. So when that container gets fired up, and if this is for production, you can potentially say a different port, maybe for production containers or something like that. Now I'm going to go ahead and match it with the EXPOSE that we have here, but the goal again is just to show you that yeah, we can do environment variables. So that's an example of a custom Dockerfile that does a few things. Number one, as a review, it pulls in the latest Node.js image, we say who the MAINTAINER is, we define two environment variables that'll be available to that container, we copy our local source code from here into the image into a folder called var/www, which is also our working directory, and it's going to be set up as a volume, which in this case means the Docker host would actually be where that source code is going to live, but we can override that again with docker run.
We're going to run the npm install command because we need to get our dependencies installed once that container is going, expose port 3000 as the internal port for our container, and then we're going to have our ENTRYPOINT as npm start, and that would be an example of a Dockerfile. Now before we wrap this up I'm going to clean it up just a little bit, because I don't want to hardcode the port both here and here; if the code is going to load its port dynamically from an environment variable, then I probably want to expose that same value. So we can actually use environment variables in the instructions themselves, and I can do something like this, and what the build will do once this is defined is go and apply that exact value, which in this case would be 3000 right there. And there are a few other spots we could potentially even do that, maybe even in these areas, but we'll go ahead and leave those right now, because I don't need everything as an environment variable; the port, though, is a good candidate. So now the EXPOSE will actually read it from the environment variable value. So that's an example of some of the key Dockerfile instructions you can use. There are certainly many, many others out there, but these are the key ones that you need to get started with. So now what we're going to learn about is how we take this and actually build it into an image.
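Assembled in one place, the Dockerfile built up in this section looks roughly like this; the VOLUME instruction is still in there for illustration, and the next section shows why pairing it with RUN npm install causes trouble:

    FROM        node:latest
    MAINTAINER  Dan Wahlin

    # Environment variables available to the running container
    ENV         NODE_ENV=production
    ENV         PORT=3000

    # Bake the project source into the image and work from that folder
    COPY        . /var/www
    WORKDIR     /var/www

    # Left in for illustration; mounts a Docker-managed volume at runtime
    VOLUME      ["/var/www"]

    # Install dependencies at build time
    RUN         npm install

    # EXPOSE can read the ENV value defined above
    EXPOSE      $PORT

    ENTRYPOINT  ["npm", "start"]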

  33. Building a Node.js Image Once you have your Dockerfile completed and all the instructions are in place, you're going to need to run that through the build process using the Docker Client, and that's what we're going to look at here. So how do we take that Dockerfile for Node.js and actually make it into an image? Well, it's actually a very simple process. The Docker client has a build command you can run, and then what you can do is tag that build, and you'll want to tag each build. Now you can do --tag, or the shortcut that you see here, which is just -t. I prefer the -t, it saves you a couple keystrokes. Now from there you're going to come in and give the image that you're going to make a tag name. Now if you go up online and look at all the Node images, there's a whole bunch out there, lots of different versions, because they're all tagged with the version. Now in this case I'm just going to say whatever my username is, /node, we're just going to say that's good enough for our team, but we could add more details there with version info if we wanted. And then finally I'm going to give it the build context, which is going to be the folder where it's actually going to run this from; that'll help it find the Dockerfile and do some other things along the way. So let's go ahead and do this with the image that we have. Now the first thing I'm going to do is get rid of my node_modules here. So I'm going to come in and use a little npm tool I use a lot called rimraf. So first I'm just going to right-click and Open in Terminal, and I'm going to use this rimraf, which is a delete module, and I want to show you that if I get rid of the modules here, then as we build the image, because we did an npm install right here, it'll take care of that for us. Now the next thing I need to do: I started a command prompt, but it's not hooked in to my virtual machine for Docker, and we learned about Docker Machine earlier in the course, so I'm going to run docker-machine env, and then default, which is the default name, but I like to put it, so we'll go ahead and do that, and then I'm going to copy this eval command. And that'll get us all hooked in to the virtual machine that we want to run with. From here we can go ahead and build our Dockerfile into an image using the docker build command. Now I mentioned that you can do -t for the tag, but before we do that, notice that our file name again is Dockerfile, and that's the default name that the docker build process looks for, but if you do have a different name for the file, it could be Dockerfile.dev or node.dockerfile, or something like that, the -f switch is for the file name, and then I can put the name. Now in this case it's redundant because that's what it looks for anyway, but that's a nice one to know, as you might have different file names for your Dockerfiles. Now I can do my tag; we'll give my Docker Hub ID, we'll give it the name, and then the context, the folder in which we're going to run this, is dot. So let's go ahead and run it. Now I already have the node image, just as a heads up, already installed locally as a Docker image; I did that on purpose to speed this process up. So aside from the npm install, which it's doing right now, this should go really, really fast. But once this is done, I want to show a couple things here that relate to something called intermediate containers.
So we're all done, and if we do docker images, you'll see my DanWahlin node, and then there's my node base image that it's based upon, but notice that every single instruction generated what's called an intermediate container. And if I go way up to the top, let me just slide quickly back up, even the environment instructions each generated their own separate intermediate container. Now what happens is these containers won't show up in docker images when you run that, but they will be cached behind the scenes, so that the next time I do a build, if an instruction such as the environment one doesn't change, then the build process can just say hey, I've already seen that before, let me just go load that layered file system layer and include it in the build. So it's very much like source control. Every time you check in a small little thing in source control, it tracks it incrementally. That's exactly what happens with Docker instructions. Now in the case of our environment variables, because I used the equals, if we go back to here, I could have put the PORT=3000 right next to this one up on top, and I mentioned that earlier, and that would've just done 1 intermediate container, but because I ran them as separate instructions it has to do 2 lookups, and those are very quick lookups, not a big deal, but it's important to know that every instruction leads to an intermediate container being created that's ultimately cached behind the scenes. Now that we have that done, let's go ahead and try the run process. Now you're going to see we might have a little issue here, and I did this on purpose with our Dockerfile so we can fix some things, but I'm going to do a docker run, and I'm going to run this in something we haven't seen much up to this point, daemon mode, with the -d flag. That way the output of running the container won't actually show up in the console; it'll run behind the scenes, and then I can do other things with the console if I want. So we're going to do the port, we'll do 8080:3000, and then we'll put the name of the image, like so, and we'll be off and running, and alright. It looks good and we should hopefully be happy, we can go party now, we have a container, but let's see what happened. Docker ps, oh, it didn't show up, and that means it's not running right now, for some reason, and that's not good. Docker ps -a, yep, there it is, and it looks like it exited about 15 seconds ago. Well, what happened? This is a very subtle thing, and this is why I left something in as we built the Dockerfile, so I could talk about it. So before I tell you, let's go ahead and remove this container, and now when we do ps -a it's gone, but our image is still there. We can go ahead and leave that. Now let's go back to this code. What happened here is that when we made the volume and ran npm install, that went ahead and created the node_modules, but once we run the container, they're going to live in this var/www. Well, that doesn't exist yet on the host. Remember that with a volume, we're mounting the folder in the Docker container, but when you run the container, it still has to make the folder where it's actually going to store that volume, because the volume lives outside of the container. Well, at the stage when we build it, it sets it up and it looks good, but then when we run the container it creates the mount for the first time, and this var/www here has to then link up to the host folder.
When it does that linkage, it effectively wipes out what happened at build time, such as the npm install, and that's the error we would have seen if we had dug into the logs. So I'm going to take the volume out — I don't really need it anyway, I'm not trying to persist anything, it's just source code here. Let's get back to our commands, back to the docker build, and rebuild now that it's been saved. It should go really quickly except for the npm install, that's the slowest part. Once this is done, let's try running it again and see what happens; now it should stay up, and then we should be able to hit it in the browser, for instance. So let's try our docker run command again, there it is, and now I'll do docker ps. Alright, very good, look at that — that's a good first step. So let me come in now and run off to the location, and this'll be 99.100:8080, you can see it right there, and it looks good. Now this is actually showing the old code — there's no volume in this case, so I'd have to rebuild to pick up changes — but it shows that the source we copied in, as we did previously in the course, is actually being used in the container. So the container is up and running and things are okay. To reiterate, the problem with the volume was at runtime, if you will: when we ran the container, Docker had to step in and create the mount to the host, because the image doesn't know about that yet, it's just an image. That wiped out our npm install and our node_modules folder, and as a result, what we thought was there turned out not to be there, and that's a problem. By removing the volume in this case, we can keep our npm install instruction and we're off and running. Now, just to review a few things: now that we have a running container we can always stop it — we can say docker stop and give it the ID there, it'll take a moment to stop — and then we can remove the container using the rm command, so docker rm 53. If we do docker ps -a, we're good, and if we do docker images, our image is still there. In fact, it looks like a little intermediate image got left over, so I can do docker rmi 7e, then docker images again, and there we go. So that's an example of building a Node.js image from scratch using a Dockerfile.
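Here's the run/stop/remove cycle from this demo in one place (the partial IDs 53 and 7e are just what docker ps and docker images happened to show on my machine — yours will differ):

    docker run -d -p 8080:3000 danwahlin/node   # -d runs the container in the background
    docker ps                                   # confirm the container stayed up
    docker stop 53                              # stop it using a partial container ID
    docker rm 53                                # remove the stopped container
    docker rmi 7e                               # remove a leftover intermediate image
    docker images                               # the danwahlin/node image is still there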

  34. Creating a Custom ASP.NET Core Dockerfile Dockerfile instructions can be used to build a variety of images, so in this section we're going to talk about how we can use a Dockerfile and the different instructions to build a custom ASP.NET Core image. So let's jump right in. I have an ASP.NET Core project already loaded up — this is the same one we looked at earlier that was generated automatically — and it has the usual suspects: you can see the controllers, the models, the views, our project.json, things like that, but I also have this empty Dockerfile. Now we're going to base this custom image on an ASP.NET Core image that's up on Docker Hub, but before we start writing the Dockerfile instructions, I want to take a look at what images we have. So I've already gone to hub.docker.com and you can see I've searched on aspnet; here's the official Microsoft one, and if I scroll down you'll notice there are several different supported images here. The versions you see are pretty important, and these will of course constantly be changing, but as of today these are the ones we have, and I want the latest one right here. Now before I grab this little fragment, because we're going to need it, I'm going to run off to that image's Dockerfile and point out one quick thing. You really don't have to be a master of the Dockerfile that you see here — there's a lot to it — but I want to point out a couple of little things off to the right. Notice sqlite3 and this libsqlite3-dev. Those are already in the image, and it just so happens that the version of the code base I'm running is going to be hitting SQLite. So what's nice is that when I pull down this ASP.NET Core image, I'm already going to have access to SQLite — it's part of the Core image — so I don't have to build it into mine. This is going to be a pretty simple image that we just copy the source code into to get going. So let's go back, and you'll notice the version up top here — in fact you can see the version right here — and we're going to need that version as we write the Docker instructions. So let's go back and type our FROM instruction. We're going to say FROM microsoft/aspnet, and then we have to specify the exact version, because as you saw there are kind of two roads — you can do Mono or CoreCLR — and because I'm going to do CoreCLR I have to be explicit. So this is the 1.0, and the part I'm typing here will certainly change, but what's nice is that you control the version your image is based upon. Now, we've already seen that SQLite is included in this 1.0 coreclr image, so we don't really have to do anything there. The next thing I want to do is copy all this source code into the custom image that we're going to make with this Dockerfile, and to do that we can use the COPY instruction. So I'm going to COPY the local folder into a folder in the container — I just have to make up the name, and I'm going to call it /app. We could name it like I did in the Node demo, var/www, but it's really up to you, and that's kind of the point. I'm also going to set the working directory to this /app, and then I like to tab things over a bit to line them up; it makes it a little easier to read as you build this.
Then we're going to have to run a dnu restore so that we can get all the packages that are needed. So we're going to type RUN, and then the command: dnu restore. We're going to EXPOSE 5000 — that's the port this project defaults to — and then I'm going to set the entry point. So we're going to say ENTRYPOINT, and I'm going to run the dnx web command. If we go into our project.json and scroll down, you'll notice there's a web command, so we can simply run dnx web. And you'll notice — this was covered a little earlier in the course, but I want to reiterate it — this server.urls property was set, and I don't have an IP there. That'll allow us to call port 5000 on something other than just localhost. That's really, really important; otherwise we'd only be able to call it on localhost, and that wouldn't work too well with our containers. So let's come back to our Dockerfile and enter our ENTRYPOINT: dnx web, and we'll save that. So now we have an actual ASP.NET Core Dockerfile, and this particular Dockerfile is based on the latest Core version; we copy our source code into the app folder, set that as the working directory, restore our packages, expose the port, and set the entry point. Now the other thing you really should add — it's certainly optional, which is why I didn't put it in, since we've already seen it — is the MAINTAINER instruction. It's not vital, but it's good so people know who created the image. So we'll go ahead and put me; you can put yourself if you're following along. And there we have a custom ASP.NET Core Dockerfile. Once we build this and run it as a container, we'll have all our source code there, the exposed port, the restored packages, and the entry point, so we're on our way. What we're going to look at next is how we can build this and then run it.
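Put together, the finished Dockerfile looks roughly like this (a sketch — the exact image tag depends on what's current on Docker Hub when you build, so treat the tag below as illustrative):

    FROM       microsoft/aspnet:1.0.0-rc1-coreclr   # illustrative tag; use the current CoreCLR tag
    MAINTAINER Dan Wahlin

    COPY       . /app        # copy the local source code into /app in the image
    WORKDIR    /app
    RUN        dnu restore   # restore package dependencies
    EXPOSE     5000
    ENTRYPOINT ["dnx", "web"]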

  35. Building an ASP.NET Core Image Now that we have the Dockerfile for our custom ASP.NET Core image ready to go, we need to turn that Dockerfile into an image by running the docker build command. We saw this command earlier, so I'll run through this quickly; the only real change is the name of the image. We're going to run docker build, tag our image with -t — you would normally prefix the name with your Docker Hub account ID, or username — and then give it a name. I'm going to use aspnetcore here just to keep it obvious, but if you had multiple containers with different source code in them, you'd probably want to give a more explicit name, and that's easy to do. And then we have the build context: where the Dockerfile lives and the folder we're going to do this build from. So let's take a look at how we can execute this build and then run the image. Alright, here's the Dockerfile that we already have ready to go. I've already loaded up the Docker Client, and you can see that I've run the docker images command and I already have the microsoft/aspnet coreclr image available, which will save us a little time as we build this image, since our FROM base is already there. So we can run the docker build command: docker build, I'm going to tag it, and we'll just name it aspnetcore again — you can be more explicit — and then we'll hit Enter. This will grab the microsoft/aspnet coreclr image as the base, but it also has to run the dnu restore, so that part will take the longest. So let's go ahead and try it out. You can see it's starting to run the dnu restore, and this'll take a moment, so we'll leverage the magic of video again to speed this up. Alright, and we're ready to go. Everything has been restored, it's generated some initial intermediate container images you can see here, and we should be off and running. If we run docker images again, you'll see that we now have the aspnetcore image we just created — that's great — but if we run docker ps -a, you can see that I don't have any containers yet. So from here we can run our docker run command. You say docker run, I'm going to run this as a daemon again so it runs in the background, we'll map port 8080 to 5000, and then we give the image name, danwahlin/aspnetcore, and we'll try it — there we go. Now again, just because it gave us the nice ID here doesn't mean it works, so let's do docker ps. Alright, it looks like it's up, and it's been up for about 7 seconds; there again is the IP, the external port, and the internal port. So let's pull this up and hit 192.168.99.100:8080, which again is hitting our VirtualBox IP. Now if we go to the About page — we modified this a little earlier to say Hello from Docker — you can see that that source code is now part of the actual container. So now we have a running container with our actual source code embedded into the image that we used to create the container.
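So the build-and-run sequence here is just (again assuming a Docker Hub username of danwahlin — swap in your own):

    docker build -t danwahlin/aspnetcore .        # build from the Dockerfile in this folder
    docker run -d -p 8080:5000 danwahlin/aspnetcore
    docker ps                                     # verify the container is still up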
Now, just to review a couple of other things: if we want to stop this, we can do docker stop with just the first part of the ID, and it'll take a moment to stop; once it's stopped we can remove it and do whatever we want there. So let's say we did want to get rid of it, maybe to make some changes — in fact, I'm going to do that to show you some of the caching that goes on. Let's do docker rm 67, alright, we're ready to go, so let me clear that, and if we go back to docker images, there we go, we still have the image. So, let's say we made a code change, but only a few minor ones, and we run our build command again. Notice how fast that was — we saw this with the Node.js image too — you'll see Using cache, Using cache, Using cache. Now, it is possible to do a build that doesn't use the cache at all. If you do docker build with the --help switch, you'll notice there are different things we can pass in as we do a docker build, a lot of options, and in here you'll find --no-cache — kind of catchy. So if we wanted to do that build again, we could say --no-cache, then tag it as before and give the context where we want to run. What this will do is, number one, make the build slower, because it's not going to leverage any of the cache, but if for some reason there's a piece you just don't want reused, this is how you can override the cache. So let's run it, and you'll notice it's rebuilding everything again — it's rerunning dnu restore, every instruction in the Dockerfile is actually being executed and added as an intermediate container. And there we go, everything is good. If we go to docker images it still shows up, but this is the new version — it looks like we have a little leftover intermediate image there as well — and that's an example of how we can build the Dockerfile for aspnetcore, run it, and even use some other options such as --no-cache.
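For reference, the stop/remove/rebuild steps from this clip look roughly like this (67 being the partial container ID that docker ps showed for me):

    docker stop 67                                     # stop via partial container ID
    docker rm 67                                       # remove the stopped container
    docker build -t danwahlin/aspnetcore .             # rebuild: mostly 'Using cache'
    docker build --no-cache -t danwahlin/aspnetcore .  # rebuild everything from scratch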

  36. Publishing an Image to Docker Hub Although you can always build a custom image using the docker build command anywhere you'd like, you may want an easy way to deploy it and pull it down so you don't have to build it every single time, and we can do that by publishing an image up to Docker Hub; that's what we're going to take a look at here. The command you'll use is really simple. Number one, you will have to go to hub.docker.com and create an account — we talked about that earlier in the course, it's very quick — and then you'll have to run a login command, which I'll show in just a moment. Once you're logged in, it's very simple to push your image: all you have to do is say docker push, give it your username and the name of the image, in this case node, and that's going to push it up to the Docker registry. So let's take a look at how we can do that with the node image and the aspnetcore image that we generated earlier. Before we can run a docker push we have to log in, so we'll do docker login, hit Enter, and then you can put in your username and your password, and it's going to ask for the email you used as well. Once that's done it saves some of our credential information locally, and from here we can go ahead and push. So let's do docker images again, and then docker push danwahlin/aspnetcore; this is going to prepare the image and push it up to Docker Hub. I'll go ahead and let it do that, and we're going to come over to the Mac side as well, where I have the node image that was created. I've already logged in on this machine, so we can again do docker push, my username, and the image tag, and that'll prepare and push it; after it's done, we'll be able to log in to the site and you can actually see your image up there. Right now this is going to put it into a public repository, and I'll show you that as soon as it's done. It looks like the image is now pushed up to Docker Hub. So we'll go to hub.docker.com, let me log in here, alright, and there we go. There's my node image that was pushed up, and there's the aspnetcore one. We could click on it, and there won't be much in here because I don't have any descriptions yet, but you can see how somebody could easily pull this image and use it, and we're going to do that in just a moment. Likewise, over here on the Mac side we've pushed up the Node.js image that was created, and you'll see that's all ready to go. So let's go ahead and try to run this now directly from Docker Hub. We're going to say docker rmi and give it this f4 image here — alright, so that should be gone. Now we can say docker pull if we'd like, or even docker run if we want to run it directly, but we've already seen that, and then we put the name of the image. So on this machine we'll grab the node one, and this is going to pull down the latest version; you can see some of the layers already exist because they were cached, and you can see how fast that was. Now I could go in and clear everything out or do a no-cache type of scenario, but you can see how that works.
Likewise, on this same machine, if I wanted to do a docker pull on the aspnetcore image that was also shown in this module, we can grab that as well. This one won't be cached, so it's going to have to pull down everything, because I've never run this image on this particular machine — that'll take a moment to run. And then we could of course do the same thing over on my Windows box with the same exact command. We can do docker pull, and since I've already done the aspnetcore one here, let me show you how fast this should be — it should be pretty quick because a lot of the layers are already there — and you can see that was extremely quick. Basically what it did is look at the IDs for each of those layers and say hey, I already have these, there's no need to re-download them because they haven't changed, and that makes it really fast as you work with multiple containers and images; it caches those layers. You can see this other one is still going, it's going to take a little longer... and we're all done, so now I can say docker images, and there we go — we have the node image, the aspnetcore image, and the base node image I had. So that's how easy it is to take an image, once it's built, push it up to Docker Hub, and then pull it from anywhere. Team members can pull it, and since these are public images right now, other people out there could pull it as well. It's really powerful technology, because now it's very easy to share my exact environment. We still have a lot more to cover — that gets us started with custom Dockerfiles, doing builds, running containers, and pushing images up to Docker Hub — but we're going to start diving into linking containers and more as we move along in the course.
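To summarize the publish-and-pull workflow (a sketch — substitute your own Docker Hub username, and note that f4 was just the partial image ID on my machine):

    docker login                        # prompts for username and password (older clients ask for email too)
    docker push danwahlin/node          # publish the Node image
    docker push danwahlin/aspnetcore    # publish the ASP.NET Core image

    docker rmi f4                       # remove the local copy by partial image ID
    docker pull danwahlin/aspnetcore    # pull it back down; cached layers make this fast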

  37. Summary To wrap this module up, we've learned that a Dockerfile is nothing more than a simple text file with specific instructions. You can say what image the one you're creating is based on — that's the FROM instruction — you can run different types of commands such as npm install, dnu restore, or many others, you can define environment variables, set the entry point that runs when the container starts, and much, much more. The FROM is where it all starts, as mentioned, and it has to go at the very beginning of the file, because we have to know what the base image is. Once we've added the other instructions, we can use the docker build command, tag the image, and it'll be available on your local system. If you do want to make it available remotely, for other team members or maybe even for the public, we can push that image up to Docker Hub, and that's very easy to do once you've logged in, using the docker push command. So now you've seen the process of building custom images and getting those images running as containers, and what we're going to talk about next is how we can start to orchestrate multiple containers and have them communicate with each other.

  38. Communicating Between Docker Containers Introduction We've learned how to work with images and get containers up and running, as well as the different Docker Toolbox tools and how you can use them, but we haven't addressed a really important question that you'll certainly encounter as you work with Docker: how do you communicate between Docker containers? That's what we're going to talk about in this module. We're going to focus specifically on how we can have a container with a web server talk to a container with, say, a database, or something else along those lines. We'll start off by talking about the general concept of container linking, or container communication, whatever you'd like to call it, and I'll talk about two options that are available to us. We're then going to dive into the first of those two options, which is called legacy linking; this is a way we can name our containers and then easily link one container to another based on that naming. I'll then show some examples of linking up different containers — specifically Node.js with MongoDB, and then ASP.NET Core with PostgreSQL — so we'll have some real examples to walk through of how to get this container communication going. Now, Docker also provides a really powerful way to communicate between containers that involves setting up networks. We're going to learn about something called a bridge network — you might also hear container network — and we'll talk about what that means, the benefits it offers, and how you can set it up; you'll see it's actually really easy and doesn't take a lot of time to get going. Then we'll show the same examples of Node and Mongo, and ASP.NET Core and PostgreSQL, using container networks. Finally, I'm going to wrap up by talking through the scenario where you don't just have two containers, but three, four, five, or more containers that all need to communicate, and we'll talk about some later parts of the course and other techniques that will simplify that entire process. So let's dive right in and introduce the concept of container linking and communicating between containers.

  39. Getting Started with Container Linking As you use more and more Docker images and containers, you'll certainly run into the need to link them up. We need a web server, for instance, to communicate with a database server, or something like that. For example, we might have a web server that not only hits a database server, but also needs to hit a caching server, and maybe others as well. Normally each container holds its own individual piece of functionality: you could have a web server container, a database container, a caching server container, and so on. So we need a way for containers to talk to each other, because up to this point we've only worked with single containers, not with multiple containers orchestrated together. Docker provides two different linking techniques that can be used. The first is now referred to as legacy linking, and you're going to see that it's done using container names. Under the covers, it creates what's called a bridge network, and within that network you can communicate between the containers based on the name of each container; I'll show you how all this works. This option is still very useful, and it's actually very easy to do, as you'll see — in a development environment it's especially easy to set up. But there is another option, especially as you move multiple containers into staging and production, and it provides even more power. This second option involves adding containers to a custom bridge network. It's a newer option, compared to the legacy one anyway, and what it entails is creating a custom bridge network — a type of isolated network where only the containers in that network can communicate with each other. This is nice because you now have a way to create one network for a certain set of containers, another network for a different set, and that allows you to divide things up more elegantly than you can with the older legacy linking. Throughout this module I'm going to walk you through legacy linking first and show some examples of getting actual containers communicating; then we'll take those same exact containers and move on to bridge networking, and I'll show you how to create a custom bridge network — it sounds a lot harder than it really is — and how we can then communicate among containers in that network. So let's jump right in and talk first about legacy linking and how we can name containers so they can communicate with each other.

  40. Linking Containers by Name One technique you can use to link containers to each other is now called legacy linking, and it's a very simple technique: you give a container a name, and then another container can link to it using that same name. Let's jump into a step-by-step walkthrough of how this works. The steps to link containers are really basic — just a few little command-line switches you'll need to know about, and we'll go through each of them. First, we're going to run the container we want to link to and give it a name; I'll show you how to do that, it's just one little command-line switch you have to add. Then we can use that name as we run another container to link the two together, and we'll take a look at that as well. And if you have additional containers, you just repeat: add a name, link it to the next container, add a name, link it to the next container. So let's look at step one. When we do docker run — we've seen that a few times throughout the course — we can add the -d for daemon mode so it runs in the background, but we can also add --name and give that particular running container a name. Up to this point in the course we've mainly relied on the container ID, or the alias that was automatically generated by Docker, but you can give each of your running containers your own custom name. In this case we're going to name the container my-postgres, and that takes care of the basics of naming it. Now, if that's all we did, it wouldn't accomplish much — it just adds a name we could use to, for instance, stop or remove the container — but now that we've named it we can go to step two and link another container to this database container. So let's say we'd like to run an ASP.NET Core container. We can run it as you see here with daemon mode and a port, pretty standard stuff, but we can also link it to another container, and we do that with the --link command-line switch. This is the actual name you saw previously, my-postgres, and then we can give it an alias that we use internally in the ASP.NET Core container that's going to be running. So as we connect, we can use this postgres alias in our database connection string, for example. That's really all you have to do to link a container to another container that's already running. Step three is that we just keep going if you have more containers: start another container, give it a name, link it to the next container, and repeat. So normally the containers you're going to link to will typically be started up first with the docker run command, and once that's done, you can use the --link command-line switch to link any other containers by name to those containers. Now that we've seen how to do this with the Docker client on the command line, let's take a look at linking up different types of containers across different technologies.
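Before we do, here's the whole pattern in command form (a sketch — my-postgres and the postgres alias match the slides, while the web image name is illustrative):

    # step 1: run the container you want to link TO, and give it a name
    docker run -d --name my-postgres postgres

    # step 2: run the next container and link it by that name, with an optional alias
    docker run -d -p 5000:5000 --link my-postgres:postgres danwahlin/aspnetcore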

  41. Linking Node.js and MongoDB Containers In this section we're going to take a look at how we can link a Node.js container to a MongoDB container, and the Docker technology that makes this linking possible. I've already loaded up a Node.js project that hits MongoDB, and I'm going to walk you through the fundamentals of what happens here to show you that we are indeed going to be inserting data into Mongo, and then pulling that data back out. First off, I have a config folder, and this just stores the connection string type of info — the host and the database — and you'll notice this name mongodb. Now, I didn't pull that out of thin air; that's actually what we're going to name the MongoDB container, so we'll come back to it in just a bit. We're also going to be calling a dbSeeder, and that calls up into this dataSeeder, and you'll notice that we have some Docker commands here. This is just a custom object I made in Node.js with some custom properties I'm going to insert — I could have picked anything, but I'm going to insert a Docker command, a description, and then some examples of using that command. So we can kind of pretend this is a help database or something like that. I save it here, and then I also create another Docker command, this time ps, with some examples of that. So it's just some basic sample data that we're going to insert into Mongo using the Node application you see here; the app itself will just write those commands out to the home page. The next thing I'm going to show you is this node.dockerfile. The set of instructions you see here shouldn't surprise you: we're going to copy the source code into a folder in the container, set that as the working directory, run npm install, and then start up the server. But you'll notice at the top I have some instructions on how to link everything up, because we want to link Node.js, as a separate container, to Mongo, which is its own container. So the first thing we need to do is convert this into an image. Let's go ahead and do that — I'm just going to copy this down, pull this up, paste it right in there, and build it. This should be cached, so it should be pretty fast. Alright, we're all done there, and if we do docker images, you'll notice that I have my custom image, I already have a node image, and there's mongo, so we're ready to go. Now the next thing we're going to do is run the mongo image, and you'll notice that in the run command I'm running it in daemon mode, so in the background, but I'm also giving it a name — we really haven't done much of that up to this point, so let's see what it does. I'll paste this down and we'll get mongo going. Alright, let's run docker ps, and you'll notice it's up and running, but the name here is now the name that I chose, as you'll see right up here. So the my-mongodb name could be useful if you just want to start and stop the container without using the ID, but it's also very useful when we want to link containers, and that's where we're really going to use it. So the next thing we need to do is start up Node as a container, but link it to this my-mongodb.
So let's paste this command in, and before we run it, let's talk about it real quick. We do the standard docker run in daemon mode, external port of 3000, internal port of 3000 for the container, but here's the magic: we're going to link to my-mongodb, which of course is the name that we gave Mongo, as you can see here, and I'm going to give it an alias in the node container of mongodb. Remember, when it came to the connection string, if you will, mongodb was used as the host name — not localhost or an IP — the actual name assigned to the container. So that name is really, really important now. We didn't have to alias it, we could have just used the external name as well, but we'll go with the alias here. So let's start that up and run docker ps, and now you can see we have two containers up and running that are hopefully linked. Let's run off to the browser — I already have the IP address for my VirtualBox machine and the port you just saw — so let's hit it, and it looks like it's running, but we didn't get any data yet. That's expected, because I didn't run the dbSeeder; it's not something I set up to run when server.js fires up. I did that on purpose so I could show you another very useful Docker command called docker exec, which allows us to execute a command in a running container. I need to know the container, though, so let's do docker ps — I could use the name, but let's just go with d6 here, that's a little easier. So I'm going to say docker exec, execute this command in d6 for the ID, and then I want to run node dbSeeder.js — I set that up so you can run it directly as a module — and that should now insert some data into this mongodb database; there's the name of the database. So the server is mongodb, the database is funWithDocker. Alright, we should have some data in there, so let's run back and refresh, and there we go. We're now able to pull the data that was inserted and render it using Express in this case. So that's an example of the different commands you can run to, first, name a container, then reference that name using --link, give it an alias, and then use that alias in the linked container, and that makes it really easy for Node.js to call MongoDB.
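The full Node-plus-Mongo sequence from this demo, gathered in one place (d6 is just the partial container ID from my docker ps output — yours will differ):

    docker build -f node.dockerfile -t danwahlin/node .   # build the custom Node image
    docker run -d --name my-mongodb mongo                 # start Mongo with a name
    docker run -d -p 3000:3000 --link my-mongodb:mongodb danwahlin/node
    docker exec d6 node dbSeeder.js                       # seed data inside the running node container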

  42. Linking ASP.NET Core and PostgreSQL Containers Let's take a look at how we can link an ASP.NET Core container to a PostgreSQL container. I've loaded up an ASP.NET MVC CoreCLR project here, and this project is designed to hit a PostgreSQL database. Now, at the time of recording this, the ASP.NET CoreCLR tools — the command-line tools and some of the libraries — are a little bit in flux, and I really debated whether I should include something like that in a Docker course. Then the thought came to me: that's actually a great example of one way Docker can be used by developers, because we often have technologies we'd like to take a look at, but we don't typically want to mess up our system or get something on there that might be hard to remove. So what I'm going to show is a snapshot of an ASP.NET CoreCLR project that will certainly change — in fact, I can guarantee in a week there will be a new release — but I'm going to show you how we can customize things and still work with whatever version we want by creating a custom Dockerfile, and then we'll get into how we can link things up. First let me briefly run you through the project. We have a Models folder with some pretty standard model classes. These classes store Docker commands, so if you watched the Node.js demonstration I did with Mongo and how I linked those, it's the same general concept: we have Docker commands, and each can have an array of examples, and that's the Example class you see right here. Pretty standard stuff. Now in the repository, I have Entity Framework code in here that interacts with PostgreSQL, but I also have a DbSeeder, and this is what's going to load up some sample data when the application first starts up — ASP.NET MVC in this case. So that'll load automatically for us. Other than that, it's a pretty standard ASP.NET MVC type of project, but I am working with a particular build that I needed to use at the time. So if we open up the aspnetcore.dockerfile — you'll notice first that I named it a little differently — you'll also see a whole bunch of stuff in here. Don't let that overwhelm you at all; we're not even going to worry about most of it. At the time, this is the version I wanted to work with, and that version wasn't actually up on Docker Hub yet. So what I did is take the Dockerfile for a previous version, and the good news is that most of the frameworks out there define some environment variables in their Dockerfile that typically — not always, but typically — let you specify the version of the framework you want to work with. That makes it really easy to copy and paste their Dockerfile into your project, change things how you'd like, and build your own custom images based on any version you want to target — it could be a nightly build, an unstable build, whatever you want to call it. You'll notice this particular one is based on debian:jessie. Since I tweaked it, I went in and put myself as the MAINTAINER, but really the only thing I changed was this section, all the way down to here, and then all I did was copy — like we've already talked about in the course — all the source code into this folder, set it as the working directory, restore the package dependencies, and then run the app.
Now, what's really nice about this, and why I wanted to include a demonstration like it, is that these particular dnu and dnx commands are in flux right now — it looks like another tool is coming out. As of today, though, this is what I needed to run this project. By using Docker, I can get a very early version of a framework up and running on my machine — we can hit a database in this case, and I'll focus on the linking part in a moment — and I can do it in a way where I get the image locally, run it as a container, and when I'm done, I can just remove the container and remove the image, as you've already seen, no harm done. My machine acts pretty much as if it was never there. And I really like that feature of Docker: it makes it a lot easier to try things without having to fire up a big old virtual machine; you can just use these Docker images and containers like we've been talking about. Alright, having shown you that, here are the commands we're going to run. The first thing we need to do is build this into the custom image so that I can pull down this specific version. And Microsoft — whoever made the Dockerfile here — has already done all the hard work to pull down what we need, so a lot of that I didn't even have to worry about. The next thing we're going to do is fire up a PostgreSQL database, and we're going to give it a name; as I showed earlier in this module, any time we want to link a container to another one, we simply give it a name, and then we can reference it by that name. Now, the postgres image has some options where you can specify a password as it runs the container, which I'll do with this highly secure password of password — don't try that at home — but that'll get us going. The last thing we'll do is run our aspnetcore image, the one we're going to build from this Dockerfile, and I'm going to run it as a daemon on port 5000, external and internal. Here again is the key part: you'll notice that I'm referencing the name of the container that'll be created up here, and I'm giving it an alias. This is important because when I set up the connection string, which is in the appsettings file, you'll notice that for the server I have this postgres name, and that name comes from the alias you saw right here. So this is the alias the container will use, which actually points up to this other container. Alright, let's give this a shot. I'm going to grab this command, come over to a command prompt I already have up, paste it in, and run it. This one will take a moment, so through the magic of video I'll speed up the process for you. It looks like this particular image has completed, so let's run docker images, and there we go. In addition to this custom image you'll notice I have some others on here, including postgres, which we'll run in just a moment, but now we're ready to go with that particular Docker image. Now we can move on to the next part, which is getting the database going. We're going to run this one, but give it a name. So let's run back and paste this in, and let me clear the console. Alright, that should be going. Let's run docker ps, and there we go — you can see it's now up and running on port 5432, and notice that the name has actually been applied.
So again, up to this point we've mainly messed around with container IDs, and I mentioned the name that Docker generates automagically, but my-postgres is the custom name that we gave it, and that's what we're going to link to. The final step is to grab our aspnetcore command to run that container and link it to this postgres database. We'll paste that in, and let's do docker ps again. Alright, it looks like we're good to go here — dnx web was run in this case and it's been up for about 2 seconds, and the database should be up as well. Now, when this one ran it automatically updated the database — it had a seed piece of code that did some inserts — and if we go to the home page we'll hopefully see the Docker information. So let's run off to the browser, add port 5000, and there we go. We have aspnetcore running, and it's hitting our postgres container, and you can see all the information there. In fact, we can prove that — and show another feature — by doing docker logs. If we run that with the help switch, you'll notice I can get some information about it; I just need to give the container name or ID. So let's do docker logs — actually, I need to go grab the ID, so let's go back to docker ps — and then docker logs on just 60 should do it for us. Alright, there we go. There are the logs that were dumped by that container. In fact, you can see some of the different commands here as I was hitting the home page — the actual queries and everything that was going on behind the scenes. So that's an example of how we can run some code that maybe isn't even ready for primetime in production, but that we need to start looking at as a company, without messing up developer machines. Docker provides a great way to containerize that so we don't have to install anything we don't want; I can go in, stop the containers, remove them, remove the images, and it's as if I never did anything, and that takes a matter of seconds. So that's an example of how we can link up ASP.NET CoreCLR and a Postgres database.
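And here's the ASP.NET Core-plus-Postgres version of the same flow (a sketch — the password, names, and partial ID 60 match this demo, so adjust for your setup):

    docker build -f aspnetcore.dockerfile -t danwahlin/aspnetcore .
    docker run -d --name my-postgres -e POSTGRES_PASSWORD=password postgres
    docker run -d -p 5000:5000 --link my-postgres:postgres danwahlin/aspnetcore
    docker logs 60    # dump the web container's output by partial ID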

  43. Getting Started with Container Networks You've seen how we can link containers using the name of a container, and how that allows us to communicate between, for instance, a web server and a database server, but Docker provides a different technique that can be used and that offers additional functionality, and that's what we're going to talk about here. What we're going to cover is something called container networks, or bridge networks. To understand this, think of a Docker host. It could be a Linux box up in the cloud, or it could be VirtualBox running locally with that Linux box in it — wherever it may be — and in that Linux box you have these different containers that need to talk with each other. To do that, we could use naming, but anything that knows the name could automatically get to that container by the name. While that's a good thing, especially, I think, in a development environment — it's very easy to get started with and to use — once you have a whole bunch of containers running, you might want to start isolating those containers into groups, if you will. Well, we don't call it a group; we call it a network, and a bridge network is the official term you'll see in the Docker documentation. The way it works is that, through the Docker Client, you can create an isolated network. You just give it a name — it's a very simple command that I'll show you in a moment. Any container that's run in that isolated network can communicate with other containers in that same isolated network, and they do so by name; that's why we looked at the container naming and legacy linking earlier. That means I could have one network set up here, maybe a Node.js server talking to MongoDB, while I have a separate isolated network with Postgres, ASP.NET Core, and some other infrastructure containers. This is nice because I can group the containers into their own isolated networks, which isolates who they're allowed to communicate with as far as their container friends go, if you will. The steps to create a container network are very straightforward, and the commands you'll run with the Docker Client are also very easy. First, we need to create a custom bridge network and give it a name. Once you've done that, you can start your containers up using the standard docker run, but specify which isolated network each should run in. Now, it is possible for a container to run in more than one network, and that would allow it to communicate with containers across groups — across isolated networks, if you will. We're going to focus on just one isolated network in this example and the examples that follow, but you can definitely do more advanced things there if you'd like. So let's walk through the steps real quick. Step one involves creating a custom bridge network, and the way we do that is with the Docker Client and the network command. We say: hey Docker, I'd like to create a new network, I'd like to use bridge as the driver — there are a bunch of different drivers you can use, and as mentioned, even cross-host networking is possible and more — and then I name the custom network.
Now, I gave it a very basic name of isolated_network, but it could literally be whatever you want — just like naming an image or naming a container when you run it, you can come up with any name here. And that's it. Out of the box that doesn't do a whole lot, because it just creates this isolated network, and at this point nothing's in it. Step two involves running your containers, but specifying that you'd like to run each container in a specific network; notice that I'm saying to run it in isolated_network, which of course is what we just created. Now, we've said what network we want this container to run in, but how would another container in the same network call into this one? The answer is that, just like we did earlier with the legacy linking, we give it a name. Every container that you want to link up will have a name; in this case I named it plain old mongodb. The connection string for a web container that's also in the isolated_network can then call into mongodb by using the server name mongodb, because that's the container name. So I don't have to use the --link switch we saw earlier with legacy linking. You're going to see all of this in an example in just a moment, but all I have to do is give every container that I want to reach a name. As long as they're in the same isolated_network in this case, I can reference that name just like we saw earlier, and I'm off and running — I can hit a database, a caching server, or whatever it may be. Now, it's important to note that the Docker documentation doesn't actually refer to this bridge or container networking technique as linking. That's a term I like to use because it just makes sense — we want to link one container to another — but in this world we'd really just say one container communicates with another container. To wrap this up, I also want to mention that legacy linking is actually not supported in this world; we don't need it, of course, since we have our isolated_network and can use it directly. So now that you've seen what this bridge network, or container networking, looks like, let's jump into the samples we saw earlier with Node and Mongo, and ASP.NET Core and Postgres, and see how we can change them to use this technique.
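So the two steps look like this on the command line (isolated_network and mongodb are just the names used in this walkthrough):

    # step one: create the custom bridge network
    docker network create --driver bridge isolated_network

    # step two: run a container inside that network, giving it the name others will use
    docker run -d --net=isolated_network --name mongodb mongo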

  44. Container Networks in Action Let's jump into an example of creating a custom container network using the bridge driver, and then adding some containers into that network so they can communicate. What I'm going to do is the exact same demonstration I showed earlier with the legacy linking, but this time with our own custom bridge network. I've updated the comments here and added two options. Option 1 is what we looked at earlier, the legacy linking that I showed. Option 2, the new one, is to create our own network — I'm going to call it isolated_network again, but you would normally give it a more specific name, probably based on the containers that are going to be in it. Before I run this, though, let me come back to the command prompt, because I want to show you another Docker Client command: network. We can do docker network ls to list the networks, and you'll notice that currently I have none, host, and bridge, and it shows their different drivers. We're going to be creating some containers in a custom bridge network so they can communicate locally on this host, and to do that we first need to create the network. So I'm just going to grab this command and run it. It gives back an ID, and now I can run the same command as earlier, docker network ls, and there we go — you can see my isolated_network, and it uses the bridge driver. Now, what's interesting is that I can inspect the network as well. I can say docker network inspect and give it the name isolated_network, and this gives me some information — but I want to point out that currently there are no containers in it. It does have some information about the subnet and the gateway and some other info up here in the ID, but it's really not very useful at this point. So we need to run some containers in that network, and we'll do that using the --net switch that I showed a little earlier. The first one I'm going to start is the mongodb container, so we'll paste that in and fire it up. Now that's in the network, so we should be able to do a docker network inspect on our network, isolated_network, and now you'll notice mongodb listed in the containers — and only the containers that show up in here are going to be reachable. This is actually pretty cool to work with. Now we'll come back and start up our node container. Alright, same thing — this adds it to the network, and when we do docker ps we should see them both running. Now I can go to the browser — I haven't loaded the sample data here, but let's refresh, and we should see the Docker Commands page show up once it loads. Alright, there we go. I've already shown in a previous demo that we can use docker exec, and this time it runs against the container name you see here, so I made it a little easier — you don't have to know the container ID now. Go ahead and run that, it starts up, and then I can exit out. The mongodb database should now have some data, and there we go, we're able to pull it up. So that's an example of how we can use container networking, or bridge networking — it depends on how you want to look at it, but the official term is container networking with a bridge driver — and that's how we can have multiple containers communicate with each other in a way that isolates them to this custom network we created.
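Collected together, the commands from this demo look roughly like this (a sketch — isolated_network matches the demo, while nodeapp is an illustrative stand-in for whatever you named the node container):

    docker network ls                                # none, host, bridge, plus any custom networks
    docker network create --driver bridge isolated_network
    docker run -d --net=isolated_network --name mongodb mongo
    docker run -d --net=isolated_network --name nodeapp -p 3000:3000 danwahlin/node
    docker network inspect isolated_network          # both containers now show up
    docker exec nodeapp node dbSeeder.js             # exec by container name this time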
Pretty cool stuff. Now that you've seen that, let's do the same thing with ASP.NET Core and PostgreSQL. I'm going to run through this one a little more quickly because we've already seen the flow. If I run docker network ls, you'll notice I have the standard items here — I'm on the Mac side now, whereas the last demo was on the Windows side. We can again create our custom network — paste that in, there we go — and now when we run docker network ls it's in there, but if I ran the inspect it would of course be empty as far as containers go. Alright, from here we'll start up our database container, and then we'll start up the web server container that wants to communicate with it. And we're off and running, so let's make sure they're started. Both are up, it looks like, so we can come back over, refresh this particular IP and port, and we should see the same type of page in this browser. Alright, there we go. There's aspnetcore again with postgres, but this time they're running inside their own network, so let's prove that one more time by doing docker network inspect with the name of the network, isolated_network. Alright, you can see we have two containers: there's aspnetcoreapp and there's our postgres. The container name is the name that was used in the connection string up here, and likewise on the mongodb side the name of that container was of course used in the connection string. So this is the preferred route moving forward with Docker, definitely as you move to staging and production. In development I don't know that it matters quite as much, because you may not even need a network, but it's just as easy, I think, to set up a network as it is to link with legacy linking, so I'll let you debate the merits there — either one works. But that's an example of how we can do this with container networks.

  45. Linking Multiple Containers As you've walked through the different samples in this module, you might have wondered: do I really have to type so many commands to link multiple containers to each other? Obviously if you only have two or so containers it's not that big of a deal, but as you start adding more and more, it starts to convolute things and definitely makes it more challenging to get those containers up and running, all connected and communicating. The good news is, there is an easier way. If you have a scenario with a web server and a database and a caching server and more, then in the next module we're going to learn how we can apply all the different topics we've talked about here using something called Docker Compose. And as you can see by the Docker Compose logo, it's good at juggling — really, at managing multiple containers in a way that's very easy to work with. So while you might still use some of the commands I showed throughout this module to get one or two containers up, if your requirements call for four, five, or more containers, it's a lot easier to use this other tool that's part of the Docker Toolbox, called Docker Compose. We'll be covering that in the next module — something to look forward to.

  46. Summary I hope you now have a good idea of how you can communicate between the different containers that you need to get up and running in your development environment, or even in a staging or production environment. We've learned that Docker containers can communicate in different ways: we can use legacy linking, where we use the --link command-line switch, or we can use the networking option, which is very powerful because it lets you isolate containers so they're only allowed to talk to other very specific containers if you'd like. So --link is the switch that provides legacy linking, and the --net command-line switch is the one that provides the bridge network functionality. I also mentioned that this is all great, but once you get past more than two or so containers, you end up running a lot of commands, and then you start trying to come up with ways to batch those to save time. The good news is we already have a solution built into the Docker Toolbox called Docker Compose, and that's what we'll jump into in the next module.

  47. Managing Containers with Docker Compose Introduction We've covered a lot of really fun concepts when it comes to working with Docker in a development environment, but we're now getting to one of my favorite parts of Docker, and that is Docker Compose. Docker Compose provides a great way, and a very simple way you'll see, to get multiple containers up and running with minimal effort on your part. It's very easy to get started with, the configuration files that we're going to talk about aren't hard to work with, and the commands are even simpler than you've seen up to this point. So let's take a look at the agenda for this module. We're going to kick things off by talking about what exactly Docker Compose is, and I'll make the case for why we need it, especially in a development environment. We're then going to introduce a file that you're going to need to know about to work with Docker Compose, and it's called docker-compose.yml, or y-m-l you'll see here, and this is going to be your configuration file that's going to be responsible for taking images and getting them up and running as containers. And you're going to see we're actually going to call those services. Now from there we're going to talk about some of the commands you can run with a Docker Toolbox tool called Docker Compose. So we've seen Docker Machine, we've seen Docker Client, and Docker Compose is yet another tool that you can run a few commands with to do all kinds of great things that are very productive and efficient. Now once we get through the overview of what it is and how the configuration file works and how to run some commands, we'll take some of the images and containers that we worked with earlier in the course and we'll see how we can very easily get those up and running, and even communicating with each other as well. Then from there we're going to wrap up the module by walking through a more robust example. Later in the module I'm going to introduce a scenario where we might have a bunch of services, in our case it's going to be about six services that we need to get up and running for our development environment. I'm going to walk you through the overall development environment services, we'll talk about a custom docker-compose.yml file that can configure these different services, and then we'll talk about how we can manage those services, and this'll include bringing them up, taking them down, removing containers, and some more topics. So let's go ahead and dive right in and take a look at what Docker Compose is and why it's so important, especially in the world of web development environments.

  48. Getting Started with Docker Compose From a web development standpoint, Docker Compose is definitely one of the more exciting pieces of Docker. It's a great way to automatically manage the lifecycle of your application in the development environment, and get it up and running and stop it, and things like that, very, very quickly, and that's what we're going to talk about in this first section. The logo really gives away a lot about what it does. It allows you to have multiple images, and then convert those images into containers. Now to do that by hand, which we've pretty much been doing throughout the course up to this point, we've been going into the command line and having to run manual docker run commands, and you can see that with a lot of containers, that can be a little bit problematic and definitely not very efficient or productive. So the image that you see here from their logo reflects exactly what it does. It allows you to manage multiple containers and the overall lifecycle. Now if you go look at the official docs, they'll highlight four main areas where it works well: it's great for the development environment and staging, and maybe even for production, although Docker has some other options you could use there for DevOps, like Docker Cloud, but definitely in the development environment it can do these types of things. So as mentioned it manages the entire application lifecycle, and that includes things like starting, stopping, and rebuilding what they call services. Now you're going to see that a service really becomes a running container, so we're still going to be using images behind the scenes that get converted into running containers, but we're going to call those services in the world of Docker Compose, as you'll see as we dig in deeper. It also allows us to view the status of running services, including the log output of all those running services, very easily. You don't have to do a command per container to get the logs, you can actually get to all the different container logs at once if you'd like. Now if you do want to get to one container and do a one-off operation, maybe view the logs for it or just start and stop that one container, or even build it from the image standpoint, then Docker Compose will let you do that as well. So it's a really, really nice way to manage the different containers in an app that you're going to be working with. Now let's talk about the need for Docker Compose. That gives you some high-level, kind of 10,000-foot view stuff, but let's dive in a little bit more here. So let's assume that we have a setup in a web app where we have NGINX on the front end, and that's a reverse proxy, we have Redis for caching on the back end, and MongoDB as our data storage, let's assume, and when a request comes in, NGINX is also going to route it to different Node.js servers. Now again, you could substitute your chosen framework, it could be PHP, ASP.NET, Java, whatever it may be here. Now as these servers get called, they'll of course call into the database, they'll more than likely then cache some of that data in Redis, and that's kind of how it proceeds.
Now what's nice though is Docker Compose can manage all of these, and you'll see that we have six different containers in this particular case, and you could certainly have a lot more if you have other application servers and things going, and managing those by hand, I don't know that I want to do that, it's a little bit problematic, like I said, not very efficient, not very productive. So Docker Compose has a file that we're going to be talking about called docker-compose.yml, and it's a YML file, so if you're new to it, don't worry, it's a super, super simple format, and in this file you can define all these services and even the relationships between the services. If you remember earlier in the course we talked about linking, and we also talked about networking, or bridge networks. We're going to talk about that as well here as we dive into Docker Compose. So what we're ultimately after here is a Docker Compose file that can manage the different application services. Now the services in this case would be the NGINX, the Node, the Redis, the Mongo, really they're just containers of course at runtime, but in this world of Docker Compose we're going to call and refer to them as services. Now the standard workflow, once you have your custom Dockerfiles (if any) and your docker-compose.yml file set up, is to use Docker Compose to build your services. Now under the covers that's just going to create images like we've been doing all throughout the course. From there we can then use Docker Compose to start up our services, and then when we're done we can tear down those services and stop the containers, and even remove them if you'd like. Throughout the rest of this module we're going to be talking about these different aspects of the Docker Compose workflow, and we're going to start off by talking about the docker-compose.yml file and how you can work with that, then we'll move into some of the Docker Compose commands that you can run, and then we'll jump into some actual examples of using it and applying it to a development environment.
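As a quick sketch, that whole workflow boils down to three commands, which we'll dig into over the next couple of sections:

    docker-compose build   # turn each service defined in docker-compose.yml into an image
    docker-compose up      # create and start a container for each service
    docker-compose down    # stop and remove those containers when you're done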

  49. The docker-compose.yml File So let's jump in to how this YML file is used and some of the key aspects and instructions that you're going to find in the YML file. So first off, the docker-compose.yml file defines, as mentioned, all of our services. And so this would be things like what instances of different web servers you might have running, the different frameworks there, Node, PHP, Java, whatever it may be, your database services, caching services, you might have some application server services, and so on and so forth. And so this'll just be a normal text file that on its own is not that useful, but we can run it through a Docker Compose build process. And this build process can actually generate images that we can then use to create containers as we run this. Now the Docker Compose build process, you're going to see, is extremely simple. In fact, it's probably the simplest command we've run throughout the entire course. That's why I'm a big fan of Docker Compose, it provides a lot of functionality with just a little bit of work on your part. So we'll be looking at these commands in a moment. Now that's going to generate, as mentioned, the images, we're going to call these services though once they get up and running, and then on a development machine, just with one little command I can build out my services, and then with one other very small command I can get those services up and running. And so it's very, very nice in the development world because if I was just given a YML file, with just a few basic commands I can actually have all my images ready, and then actually convert those into running containers and have these services, if you will, that are actually up and running, and then I can start building my code against those services. So what goes in this docker-compose.yml file? Well the first thing you'll always see at the top is a version. Now this will certainly change as docker-compose.yml files go through different versioning and Docker Toolbox has different releases, but version 2 would be an example. Now if you do see a Docker Compose file out there, just out on GitHub or out on the web somewhere, and it doesn't have a version at the very top, then it's probably an old version. The initial versions of Docker Compose didn't have a version, but everything moving forward is supposed to have that as the very first thing at the top. Now under the version you could have different options. You could have things like services, which we'll be talking about, but you can define other things like volumes and networks as well. Now for our services, this is where we're going to define what it is that we want to be running once we build this docker-compose.yml file, and then get all those images up and running as containers. So this is where we'd define, for instance, Node.js or ASP.NET or Java or PHP, and our databases, our caching servers, and so on and so forth would go in here. Now there are a lot of different options for defining these. So for example, some of the configuration options you can supply include things like the build context. This would be things like what folder do we build from as the context, and what Dockerfile do you want to use to build that particular service, and you'll see this coming up. We can define environment variables, and these environment variables can then be automatically put into that running service, that container, at runtime.
So that makes it really nice to swap, for instance, between a development and a production app environment and see how your app responds to that. We can also define just an image. Maybe you're not going to build an image, you already have one either locally or up in Docker Hub, and you just want to use that as the service. We can also associate a given service with a network that's been defined. Now if you recall, earlier in the course we talked about ways of linking up, if you will, Docker containers at runtime. So for instance linking up a Node.js app to a MongoDB database. Well the recommended way to do that of course is through networks, and we talked about something called a bridge network and how that can be used to allow these containers to communicate with each other, and we can define those networks and then reference them to link things up in our docker-compose.yml file. We can also expose different ports and define those, and we can even define volumes, including volumes pointing to source code on your local dev machine, and so it makes it really, really easy to hook up a volume into a container at runtime. So let's look at an example that dives a little bit deeper into some of these different options. So as mentioned we'll have the version at the top, and then we'll have our services. Now under the services you then name the different services. So I have one here just called node. Now that becomes the name of the service. Now under that, in this case, I say I'm going to have a custom build with a Dockerfile called node.dockerfile, and the context is the current folder. So when it builds, use the current folder as kind of the starting point, the folder context, for how to reference sub-paths and things. Now I'm also saying that this node service needs to be associated with a nodeapp-network, and this is a bridge network. And that will allow me to put this in a specialized network and then communicate with other services, other containers, in that network. Now here's another service called mongodb. Now in this case I'm not building from a custom Dockerfile, I'm going to be using the Mongo image that's up on Docker Hub, so this will cause it to pull that down and then use that image, and then notice I'm adding it to the same network, nodeapp-network, and you can even define multiple networks. In this case I defined a single network called nodeapp-network, and it has a driver, which is our bridge type of network. Now if you're new to the YML format and you're coming from maybe an XML background or JSON or something like that, then this is definitely very different. You'll notice that there's a little bit of indentation going on here. And what's nice about this is, number one, you don't have to worry about closing tags, so it's very simple that way, and you also don't have to worry about closing brackets and things, as with JSON. It's just a different way to do it. So you can see, it's just a simple file, and on its own it's not that useful, but as I teach you and we walk through the different Docker Compose commands, we can take this and convert it into a Node service, a MongoDB service, and then a network that both of those services are in so they can communicate with each other. So that's a simple example of what could be in a docker-compose.yml file, now let's look at how we can work with some of the commands that can take this and convert it into images, containers, and services.
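To make that concrete before we move on to the commands, here's a sketch of the docker-compose.yml file just described; the service, file, and network names follow the walkthrough above:

    version: '2'

    services:
      node:
        build:
          context: .
          dockerfile: node.dockerfile
        networks:
          - nodeapp-network

      mongodb:
        image: mongo
        networks:
          - nodeapp-network

    networks:
      nodeapp-network:
        driver: bridge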

  50. Docker Compose Commands Once you have your docker-compose.yml file available, you can go into the Quickstart Terminal and run the Docker Compose tool and use some different commands that we're going to talk through real quick here. So here's a few of the key commands that we're going to be using in the upcoming sections in this module. First off we need to build our services into images, and we can do that with the Docker Compose tool by running the build command. That's it, really simple, you'll notice there's not a lot to that, especially if you look back at what we've done when we did builds in the past with just the Docker Client. Now once you have your images available you can then say docker-compose up to start those up as running containers, you can tear them down with the docker-compose down command, and then you can do a lot of other things in addition to that. We can view the logs, we can list the different containers that are running as our services, we can stop all of the different services and then start them back up if we'd like, and then once we've stopped them, we can even remove the different containers that are making up our services. Now we're going to be diving into a lot of these as I move into some of the examples of using them, but let's walk through the fundamentals of the key ones here: the build, the up, and the down. So earlier I talked about the Docker Compose workflow involving building your services, starting them up, and then tearing them down. So let's focus on the build part here. So as shown earlier, we can come in and say docker-compose build, and that will automatically build or rebuild all of the different service images that are defined in your docker-compose.yml file. Now this is great because if you had a bunch of services like I showed earlier, maybe NGINX, Node, Mongo, Redis, and maybe even others, then with one simple command you can automatically create all the different images that those services will need to run on your development machine, so it's really, really nice that way. Now you can also build individual services. Oftentimes as I'm doing this, I make a tweak, maybe to a custom Dockerfile, or maybe there's just a new version of an image up on Docker Hub that you want, and you don't want to rebuild everything, you just want to rebuild one of those services. Well you can do one-off commands as well, and docker-compose build mongodb, for example, would only build or rebuild the Mongo service. Now once you have everything built, we can then start those services up, and you saw that's very, very simple to do with our docker-compose up command. That'll automatically create the containers, and then fire them up, start them up. That includes linking them together if you're doing linking technology, or if you're using bridge networks, or whatever it may be. So very, very simple, one simple command and you're up and running. Now again, I want to highlight, compare that to what we've done up to this point in the course where we've had to do individual docker run commands, and some of those commands get a little bit long. This is a lot easier. So this is a great way to simply take a Docker YML file, do a docker-compose build, and once that's built we can then say docker-compose up, and we're off and running. Now we can also come in and do a docker-compose up and supply some other command-line arguments here.
Maybe there's a particular service we want to bring up individually, such as Node in this case, as you'll notice over to the right here, and we don't want any of the other dependencies though. Maybe Node depends on MongoDB or PostgreSQL or something like that, and we don't want to recreate those other services, just the Node one. Well, we can do that with docker-compose up and the --no-deps switch, which will make sure that node is brought back up, but we don't recreate the other containers that might be linked into or bridged into the node container. So we've now looked at building the services and starting up the services, and now let's look at tearing down the services. So the simple command here is docker-compose down, and that will automatically take all the containers, stop them, and then remove them. Now if you don't want to remove them you could just do docker-compose stop, I showed you that a little bit earlier, but down is really nice in cases where you're kind of done maybe for the day or something like that and you just don't want those containers hanging around, maybe you're going to be rebuilding your images anyway, and so you'd just like to clear all that out. Now if you'd also like to not only stop the containers and remove the containers, but also remove all the images, then you can add some extra switches here. You can add --rmi all, which would remove all the different images associated with those services, and then you could even remove any volumes associated with those, all with a very, very simple command as you can see. So again, you can imagine if you had five containers running or more, this provides a really, really easy way to not only stop those and remove them, but even remove all the different images and all the containers associated with those, instead of having to do that individually like we've done up to this point. Now there's a lot of other commands you can run, but those are the key ones that you're going to start seeing as we look at Docker Compose in action. So let's jump on into the next section here, and let's put this to use.
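Here are those key commands and variants gathered in one place; the mongodb and node service names assume a file like the earlier example:

    docker-compose build                      # build or rebuild every service image
    docker-compose build mongodb              # rebuild just one service
    docker-compose up -d                      # start all services in the background
    docker-compose up --no-deps node          # recreate node without recreating its dependencies
    docker-compose logs                       # view log output from all services
    docker-compose ps                         # list the containers backing the services
    docker-compose stop                       # stop the containers without removing them
    docker-compose down                       # stop and remove the containers
    docker-compose down --rmi all --volumes   # also remove the images and volumes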

  51. Docker Compose in Action Let's take a look at Docker Compose in action, and we're going to work with a custom YML file, as well as use some of the different Docker Compose commands that we talked about earlier. Alright, so I've already opened up a Node.js/MongoDB type of project, this is the same exact one that we saw earlier where we had to manually run some of the different commands to build our images, and then run our different containers. So while that works, it's a little bit inefficient I would argue, and definitely not something I want to have to copy and paste those commands in for every time I want to run a container, rebuild an image, or whatever it may be. So I've already created a docker-compose.yml, a YML file, and the first thing we're going to do, to make it easier to bring up our node and our mongo services, which are again really our containers, is define them in this particular YML file. So the first thing I'm going to do is come in and mark the current version as of today. The next thing we're going to do is add the services that we want, and because we're going to have two services here that sort of link up, if you will, we're going to do that through a bridge network. So I'm going to come in and I'm going to name it nodeapp-network, and then we're going to have to say that the driver for this network, since there are different options here, is the bridge one that we've already talked about earlier in the course. Alright, so that'll take care of having a network that's named nodeapp-network, so that part represents the name, and then the only property I had to put in this case for that was that it was a bridge network. So the next thing we're going to do is come in and define, I'm going to call it node, and we're going to do a node service, and I want to build this from the custom node.dockerfile that I already have, and so I'm going to come in and add a build property, and then it has some sub-properties. I'm going to name the first one context. I want to run from the context of where this YML file is here, so if there were any subfolders I had to go through to get to the Dockerfile, it would set the context of where that runs from, so that's actually a really important concept. Then the next one is, what's the Dockerfile? Well I'm not using the standard Dockerfile name, I'm doing node.dockerfile. Alright, that'll take care of that. Now this particular one is going to run on some ports, and we're going to do the mapping of the external to the internal, I'm going to do 3000:3000, very similar to what we've done, 3000 on the external, 3000 on the internal. And then we're also going to need to hook this into our nodeapp-network. So I can say networks, and then simply put in, every time you see the dash it's because I could add multiple items here in the YML format, and we're going to call this nodeapp-network, and that just matches that name right there. Alright, so we're kind of off and running with that particular service. So one more time, we're going to call it node, we set the build context to basically the folder where the YML file is, and then we give it our Dockerfile. We expose the ports that we want to set up, and hook it into the network that you see here, in this particular case. Now the next thing we're going to do is hook in our mongodb, and this one is not going to be built from a custom image, or a custom Dockerfile that I have, it's going to be based on the one that's up in Docker Hub.
So I'm just going to list the name of the image there, and let me change that because it's actually just mongo, and then we also need to hook it into the network, so I can just copy and paste this part right here, and we're off and running. So that would be an example of creating a custom docker-compose.yml file. It's not that hard, really it's just a matter of going to the documentation on docker.com, looking up the docker-compose.yml file documentation, it's pretty well documented, and then just taking the time to do it. And the nice thing is, once you've done this a few times, you can just start to copy and paste and tweak things between your different YML files there. So now we have our services defined, we have a node, we have a mongo, they're both in the same nodeapp-network here, and that way when we run these services with the up command, it's going to put both of them in the network so they can talk to each other, and we've seen that earlier again in the course. Alright, so let me run off now to the terminal that I already have up for this particular folder, and because we have a Docker Compose file, imagine that you just checked this source code out of your version control and brought it down to your local machine, and now all I'd have to do to get an environment up and running is say docker-compose build. So I first need to get the images in place. Now this'll take just a little bit of time, I have some of this cached, but it should be pretty quick, and you can see it's now done, and mongo was already there locally if I did docker images, so it didn't have to do anything there. So if we do docker images, you'll notice I have this nodeexpressmongodbdockerapp_node, it kind of named that part, and we could even name it by the way, that's possible to do too, and then here is my mongo image. And so those are all ready to go. So now the next thing to try to get this going would be to do docker-compose up. Now this is going to go ahead and start both of these up, they just did, and you'll notice though that when I brought it up it's kind of in log mode right now, and so I'd have to open up a new terminal to really do anything here. Probably not what I want. So let's go ahead and go into here. I'm going to say docker-compose down, and that's going to go ahead and stop both of those, and it's going to go ahead and remove the containers as well, which is really nice, so we'll let this stop. So that's all stopped now and you'll notice it also removed, so stopped and removed, all in one shot. Now what I didn't like about that is when I did docker-compose up, it kind of blocked us from using the terminal, so let's go ahead and do it again, but this time run it in daemon mode with the -d flag, so it runs behind the scenes. Let's go ahead and try that. Now notice they should be up, we'll prove that in a moment, but I can get back to the command prompt. So you've seen that before when we did docker run. So let's go in and do another command, docker-compose, and let's do ps and see what we have running here. And you can see we have two containers. So there's our mongodb, there's our node, and it shows the status, the ports, the IPs, and all that stuff we've already seen before. So since those are both up, let's run off to the browser, and I already have the IP for the virtual machine here, we're going to do the port of 3000, and this should now hit it, and there we go.
Now I'm not going to run the dbSeeder that we saw earlier, so I'm not seeing any data because this is a fresh container, and so there's no data in the MongoDB database, but we certainly could also run commands against that if we wanted, and that way we could seed it with some data. But you can see it is up and running and we didn't get any errors there. Now if you wanted to verify that hey, there are no errors, then while we're here we could also run docker-compose logs. And what this will do is get us the logs for all the containers that are associated with this docker-compose that we ran. And there we go. So notice that now I have the entire, kind of log infrastructure here if you will, I'm going to go ahead and exit out of that, and you'll notice that I can get to all the details about, in this case, the Mongodb setup, here's my npm it looks like, here's the calls that went in as we hit the web page, and it looks like other than me aborting down here, everything is looking pretty good. Now to bring these down, which you've already seen, we can do docker-compose down, and this will go ahead and bring these down, and now we'll be kind of off and running, and we could rebuild the images or do what we want, and I could even remove all the images if we wanted. When you do the down, there's a --rmi all that I showed a little bit earlier, and we could do that as well. So now let's do docker-compose ps, and you'll notice nothing is running there, and we could even do the normal docker ps -a if we wanted, and there's nothing there. But if we go to docker images, you'll see that I still have my two images here, there's my mongo and there's my node image. So that's an example of how we can use Docker Compose to very easily not only build, but also run our services, and then take those down when we're done, and if I wanted to set up volumes and all that kind of stuff, we certainly could, and we'll see that coming up in a later demo in this module. Now before we wrap up this section, let me show you one more docker-compose.yml file. Now this is for ASP.NET Core and PostgreSQL. Now again, I could come in to our Dockerfile, and we could manually run the different commands that we looked at earlier in the course, but now I have my version, my services, and I have a web and a postgres, and you'll notice that for the web it's very similar to what I just showed for node. We have a custom Dockerfile, we set the context, we give the ports, and we hook it into a bridge network. And for postgres it's really close to the same thing, but you'll notice right here, I'm actually adding an environment variable value. Now this is something that the PostgreSQL image knows about, and we did this manually earlier when we ran docker run, we had to put it on the command line, and you could do that, but again, I don't really want to type that over and over and over, or copy and paste over and over and over, so we just simply use this environment property. Now I can put in the different environment variables that I'd like to set, and you can certainly put more than one if you'd like. And then we link that into the same network, and we can call this from the aspnetcore container that'll be running based on this name right here. So we can do the same exact thing now, if I just had grabbed this, which actually I don't have going as an image yet, then we can come into the folder, which I'm already in, and we can say docker-compose build.
Now this will have to build the aspnet image, which will take a moment, and you can see there's a postgres image as well, which I already have locally, so now we're kind of ready to go, and if we do docker images, we should now see that we have the web one here for the aspnetcorepostgressqldockerapp. So that's kind of the first step, now we know we can do docker-compose up, this should now start those up, I didn't do -d, but that's okay in this case, there's all our logs for that. Now we can come back and hit port 5000, so let's go ahead and try that, we'll change that up, and we'll be off and running here, and this now hits it, and this one already seeded the database, so this one you can see is definitely working because it shows us the seed data that ASP.NET put in, and if I scroll up you'll be able to see some of the SQL statements and things, even the inserts for the seeding are shown here. So now we can break out of here, and I can say docker-compose ps, there we go, they're both up, and then of course docker-compose stop if I wanted, that would just stop them, but we're going to do down, and that'll actually stop them and remove them as we talked about. So that's an example of how you can get started with Docker Compose, with Node and Mongo and a YML file, and then ASP.NET Core and Postgres.
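For reference, the ASP.NET Core and PostgreSQL file shown in this demo follows the same pattern as the Node/Mongo one; this is a sketch, with an illustrative Dockerfile name and network name, and POSTGRES_PASSWORD is an environment variable the official postgres image understands:

    version: '2'

    services:
      web:
        build:
          context: .
          dockerfile: aspnetcore.dockerfile
        ports:
          - "5000:5000"
        networks:
          - aspnetcore-network

      postgres:
        image: postgres
        environment:
          - POSTGRES_PASSWORD=password
        networks:
          - aspnetcore-network

    networks:
      aspnetcore-network:
        driver: bridge

The web container can then reference the database host simply as postgres, the service name, in its connection string.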

  52. Setting up Development Environment Services Now that we've taken a look at how you can work with docker-compose.yml files and some of the different commands you can run, let's walk through setting up a more robust development environment, talk about some of the different Dockerfiles involved, and then look at the custom YML file and how we can actually run it. So earlier I talked about this type of environment where we have NGINX on the front end, it can then route to multiple Node.js processes that could be running, and then Node could integrate with MongoDB and cache some data in Redis. Now obviously in a development environment you probably don't need multiple node instances, but I'm going to show you how you could do it just so you see the setup and how it all works. So let's go ahead and jump over to a code demonstration here, and what I'm going to do, rather than typing it all out, because we've already seen the YML file and we've seen Dockerfiles, is talk through the setup, walk you through the basics in this particular section, and then in the upcoming sections we'll look more deeply at the YML file, and then start to run it and get this environment going, and you're going to see it's actually extremely easy to get going, and that's what's so exciting, again, to me about Docker. So let's jump on over here. Alright, so this is a project that has all these different services that I need to have in place, the NGINX, the Node, the Mongo, the Redis, and so what I've done here is I have a .docker folder. Now this is my own naming, you can certainly choose whatever you want here, but inside of it I have my custom Dockerfiles. So I have one for mongo for instance, and we'll talk through the basics of this, here's nginx, here's my node, and then here's my redis. Now these are pretty standard, there's a few things I'll point out and call to your attention, but out of the box I'm really just grabbing from the latest mongo, and I'm running a couple of custom commands. It turns out that the debian:wheezy image that this is based upon didn't have a particular tool I needed, at least at the time I'm recording this, called netcat, and so I had to actually do an apt-get, which is one way on a Linux machine to go grab different tools and download them dynamically. So I'm going to do that, and then I'm going to run some custom mongo scripts and copy them in. Now in this case when Mongo runs, I need to supply some username and password type information, and out of the box you don't get that. You can supply some basic stuff, but I need to supply obviously my admin password and username, and then the web app that's going to hit it needs a web account, so the node application needs to be able to call it with a specific account as well. And this is all done with some different sh or shell scripts here. And so you'll notice I'm calling this run.sh, and in a nutshell, what I'm doing is kicking off a little bit of scheduling for backups, which again, in the dev world you probably don't need, but in this particular case I could use this in a production mode if I wanted, and then I kick off some other things like this first_run, and this is where I use environment variables, and this shows how, in this case in a shell script, we can get to some environment variables that are being set.
Now these are going to be loaded through this mongo.development.env, and so you'll notice in here this is an environment variables file, and you're going to see this as we get into the docker-compose file, which is back down here, but I can supply the environment, I can supply the Mongo type of functionality for the username and password, and these scripts take care of applying all of this information to the actual MongoDB database. So that's really what this guy does, the entry point runs this custom script, and that kicks off getting the username and passwords all updated in the database. That way I can truly do authenticated calls from the web app. Now the nginx one also does a little bit of extra stuff. First off, I have a configuration file, and if you're not familiar with NGINX, it's a reverse proxy, and it's something that, as was shown in the diagram earlier, could be hit first on port 80 for instance, and then it could forward dynamic calls that Node needs to handle, or whatever your server is on the back end, to in this case the node process. But for the static resources, and this would be your CSS, your JavaScript, your images, things like that, I really don't need to hit a back-end process for that, so why not just let a really efficient server like NGINX serve those up, and that's what I'm going to do. So this has the configuration for this proxy server. And so it's actually going to configure a few things. If I go into config, nginx, you'll notice in here if I scroll on down we have this little node-upstream, and don't feel like you need to understand this if you're new to it, because quite honestly you could just go to their documentation, copy and paste some samples and tweak them, but I wanted to just point out that I actually set up a node1, node2, and node3. So when a request comes in to NGINX, if a dynamic call is required, in other words it wasn't a static resource like a CSS file, then it can route it to one of these node instances. Now as mentioned earlier, it's not like you need three node instances in development, but maybe you want to simulate a production type of scenario and do some testing. Well, it's pretty easy to do that, and that's kind of why I set this up. So that's really all I want to point out, you'll see that these are running on port 8080, and these are going to be three containers that ultimately will run behind the scenes. Now the rest of this, if I go back to the Dockerfile for nginx, is that I actually copy the public resources, these are the static resources, if we jump over here to public, css, img, and js, so I'm actually copying those up into the container that's going to run for NGINX, and that way it can handle serving them. I do a little bit with certificates in case you want to play around with SSL, these are self-signed certificates, so they wouldn't be used for production, and then I kick it off down here by running the nginx command that you see right here. So that's that Dockerfile. And I also have one for node. This one's pretty basic, it does a little bit of an install, an npm install that installs something called pm2. This is a process monitor that'll monitor our server process for all the node instances, and if the server process dies it'll restart it, or if I change the code it can restart it, and that's a real nice thing to have obviously in development mode.
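For reference, the node-upstream block just described would look something like the following in the NGINX configuration; the surrounding server block here is a typical pairing rather than the project's exact config:

    upstream node-upstream {
        server node1:8080;
        server node2:8080;
        server node3:8080;
    }

    server {
        listen 80;
        location / {
            # Dynamic requests are round-robined across the three node containers
            proxy_pass http://node-upstream;
        }
    }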
And then finally I have the redis image, and really all I'm doing here is copying again some configuration file info, and that again is located up in this config, you'll see redis, and all I'm doing here is supplying a password for the caching server. Don't use that in production, but it's not bad for dev, it's just password. So that's a quick run through on the services that we're going to have running and the Dockerfiles that are going to drive these images and ultimately the containers, and by using these, you're going to see that we can make a Docker Compose file, and we'll be doing that in the next section here, and then get that up and running very quickly and efficiently using the Docker Compose commands.
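As a rough reconstruction, the custom mongo Dockerfile walked through at the start of this section might look something like this; the script names come from the walkthrough, while the exact paths are illustrative:

    FROM mongo:latest

    # The debian:wheezy base image didn't include netcat at the time, so install it
    RUN apt-get update && apt-get install -y netcat

    # Copy in the shell scripts that configure the admin and web accounts
    COPY run.sh /run.sh
    COPY first_run.sh /first_run.sh
    RUN chmod +x /run.sh /first_run.sh

    # run.sh schedules backups and kicks off first_run.sh, which reads the
    # username/password environment variables and applies them to MongoDB
    ENTRYPOINT ["/run.sh"]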

  53. Creating a Custom docker-compose.yml File Now that you've seen the custom Dockerfiles that are going to drive the services that we're going to get up and running in our development environment, let's jump in to the docker-compose.yml file and see how it's used, the different services that we have in it, and some other features, and see how we can create that. So earlier we looked at the custom Dockerfiles, and we saw that we had our mongo, nginx, node, and redis, so let's jump on down to the docker-compose.yml file. Now to start things off you'll notice that I have the standard version up top, and then I have my services. So from a high level I have my nginx, a node1, 2, and 3, we have mongo, and we have redis for our caching. And so that's the same infrastructure that we talked about a little bit earlier in this module. Now let's walk through each of these real quick and just take a look at what's going on with our individual services. So first up is nginx you can see, and it has a container name. So there's a property that you can put in your YML files called container_name. We've already seen the build context up to this point in the module, and we're setting the build context as this folder here, the root of this folder, which is CODEWITHDANDOCKERSERVICES, and then you'll notice that I'm pointing the Dockerfile location to that .docker folder that we looked at earlier, and of course the actual Dockerfile. Now if you recall, in the config for nginx I configured node1, 2, and 3, and that way when a request comes in, it can kind of load balance, doing round robin by default, so it'll call node1, and then the next request goes to node2, and so on and so forth. So what I have here, if we go back to the docker-compose, is those actual node1, node2, node3. So this is going to link up to the services here, and this is kind of like an alias, and then it points down to the node1 definition down here, and 2 and 3. So those are actually really important in this case because of the nginx acting as a type of load balancer, or reverse proxy actually, and it does that type of thing. Now I'm also exposing the ports that I want nginx to support. Now in this environment for development, I'm probably just going to hit port 80, but I showed earlier that in the Dockerfile for nginx I actually do load up some self-signed certificates, so it would be possible with some more configuration code to get SSL going on port 443 if we wanted. Now the next thing I do is load an environment variable. Now this environment variable is just one you'll see, but it's in a file, this .env, so let me show you what this file looks like. It's very, very basic, but really, really useful. So you'll see this app.development.env, and all I'm doing is adding this NODE_ENV and setting it to development. Now normally that's used just with Node.js, but I may actually want to use that particular environment variable throughout multiple containers. Now in this case I'm not really using it per se, but it would be available. So what'll happen is when we build and then run this, it's actually going to load that environment variable and make it available in the container so that we can work with it if we'd like. Now you might wonder what this is.
Well, I'll show a little more of this as we get into the next section and actually run the Docker Compose commands, but I have a little kind of read-me up here of what to do to get this running, so the first step, after you do some other changes for connection strings, is I have to export APP_ENV, and I set it to the environment I want to run. Now right now I only support development, you'll see I don't have an app.staging, app.production, mongo.staging, mongo.production, just development, but as I'm getting ready to migrate this to other environments, I could certainly add those files, and I just made kind of my own way of doing it, a little environment variable that's local to the particular file here, and it will be read dynamically. So we'll run this in the console, and then when we do the docker-compose build and the docker-compose up, it will automatically make this available. So this would load app.development.env, assuming I set it to development, and I'll show you this as we move into the next section with the command-line stuff. And then the final thing is I have a custom bridge network called codewithdan-network, and that's again going to allow all these containers to communicate in the same network on that Linux host. Now for the node it's very similar, I have a container_name, I have a build location for the Dockerfile, and I expose the ports, but here's where I actually set the volume. Now this assumes we're in development mode, because you'll see that I'm pointing to the local folder, which would be everything you see here, and then on the actual container, this is where we do that kind of aliasing, and the volume actually is going to point to my code here. So once these containers are up and running, we can make our code changes, and I showed earlier how I have pm2 monitoring, and that pm2 will watch for changes and if anything changes, it'll restart server.js, and that way I can just leave my containers up and running and they'll restart themselves. I set the working_dir, and then I also load an environment file. Now this is used because this is Node, and Express specifically is being used as the web component of this, and it knows how to read that environment variable and can actually tweak some settings there. Now you could have multiple environment variables. This is something where, if I wanted, I could have Env1=foo or Env2=foo, or whatever, and just keep going, name/value pairs. And so this makes it really, really easy if you have a bunch of environment variables, and while I could put these right in the YML file here, it's a lot easier just to put them in an environment file and have them loaded up, and then as mentioned, if I do this export APP_ENV=development, then what'll happen when I use this with Docker Compose is this would be app.development.env. Now obviously if I change this to production, then it'd be app.production.env, and that would help you dynamically load the different environment variables. So this is the same for all the node containers, and again, I put three of them mainly because you might want to simulate your production environment while you're in your dev environment, and so this is just kind of allowing you to do that. Now if you only wanted one you could certainly delete two of these. Now the mongo one is actually pretty simple.
We have, again, the container_name and the build, here's the ports, external/internal, and then I have environment variables, and these are really important because for the mongo one, these are going to be available to the container, and then the shell scripts that I showed a little bit earlier are going to read from these environment variables and apply them to mongo. So you'll see that I have my kind of root admin account, and then my webrole here represents what Node would use to actually call into mongo in that network, and so what'll happen is the shell script will read these as they're passed into the container once it gets up and running, and then once it's done applying those to mongo, it just erases them out of memory, because obviously once mongo is up and running and we've configured it, we don't really need those or probably don't even want those hanging around in the environment. So that's one way you could do it, there are certainly other ways that you could configure this, but this makes it easy for a development environment. And moving on down, there's the environment variables being loaded, and the last one is redis, a very similar procedure. We have our Dockerfile, this is the Redis port, which is kind of the standard port, we load that environment variable, this one just loads the node one, which isn't used here, but I could have environment variables that are maybe specific to Redis, potentially, and again, we put it in the network. And so there you have it, that would be the entire file we need, and what I love about this is right now we have six services in here, nginx, three nodes, redis, and mongo, but if I wanted to add a seventh or an eighth or a ninth service, then I could certainly do that just by updating this file, and you've seen how easy it is to bring services up and take them back down, and that's what we're actually going to talk about in the final part of this module here. So that's an example of the custom YML file that could be used to get our development environment up and running, and this is something we could just check into source control, every team member could pull it down, and then we would be able to do our builds and start running our containers, and that's what we'll take a look at next.
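Putting the pieces together, a trimmed-down sketch of this docker-compose.yml might look like the following; the folder paths and the mongo variable names are illustrative, but the properties match the ones just discussed (node2 and node3 would repeat the node1 block with their own names):

    version: '2'

    services:
      nginx:
        container_name: nginx
        build:
          context: .
          dockerfile: .docker/nginx.dockerfile
        links:
          - node1
          - node2
          - node3
        ports:
          - "80:80"
          - "443:443"
        env_file:
          - ./.docker/env/app.${APP_ENV}.env
        networks:
          - codewithdan-network

      node1:
        container_name: node1
        build:
          context: .
          dockerfile: .docker/node.dockerfile
        ports:
          - "8080"
        volumes:
          - .:/var/www/app      # mount local source so pm2 can restart on changes
        working_dir: /var/www/app
        env_file:
          - ./.docker/env/app.${APP_ENV}.env
        networks:
          - codewithdan-network

      mongo:
        container_name: mongo
        build:
          context: .
          dockerfile: .docker/mongo.dockerfile
        environment:
          - MONGODB_ADMIN_USER=dbadmin     # illustrative names; the shell scripts
          - MONGODB_ADMIN_PASS=password    # read whatever variables you define
        env_file:
          - ./.docker/env/mongo.${APP_ENV}.env
        networks:
          - codewithdan-network

      redis:
        container_name: redis
        build:
          context: .
          dockerfile: .docker/redis.dockerfile
        networks:
          - codewithdan-network

    networks:
      codewithdan-network:
        driver: bridge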

  54. Managing Development Environment Services At this point we're all ready to go. If we have that YML file local with our source code, now we can use the Docker Compose tools, and that's what I'm going to walk you through here, to get this development environment and all of these services up and running in just a matter of minutes, especially if you already have some of the images cached locally. So let's jump in and take a look at how we can do that. So I already have a Docker console set up here ready to go. And all we'd have to do, as we've seen earlier, is run docker-compose build. Now I already have some of this cached to kind of speed it up, but this will go through and build out our different services, and we have six of them again. And now that these are built, we can do a docker images, and you can see that here we go, I have my nginx, node1, 2, 3, and these are some other ones that I had, but you can see here's the codewithdandockerservices_node1, 2, 3, and redis, and the reason this built pretty fast is that I already had another version of this going, and so I was able to leverage some of the layered file system. Now if I run docker ps -a, you can see that I've already tried to run some of these before, and so a little trick we can do here is to say docker rm, and I'd like to go ahead and remove all of these, so we're going to start kind of from scratch here, and when I do an rm I can add -f to force, and that way if anything is locked up at all we can take care of it. And now what I'm going to do is pass in a list of all the containers, using docker ps -a with -q for quiet mode, and this'll go through and remove them. So let's do docker ps -a, and let me do that one more time, there we go, nice and clean. So that's just to show that we're starting from scratch here on our containers. So now it's pretty easy, you already know what to do. In fact, we've done this. We can do docker-compose up, and let's go ahead and run this, and you can see it's now bringing up all my different services here, mongo is loading, here are some of my node images that are loading now, creating some routes behind the scenes to handle all that, and the db connection you'll see is opened. If I scroll up a little bit to the mongo section, you'll notice that it's actually showing me that a root user was set called dbadmin and a root role was set as well, and then we have webrole and a database name and all that stuff, and I'm just logging that out right now so we can see it work. And so you can scroll through all the logs if you want, and in this case I'm running it in interactive mode, but of course I could have just done docker-compose up -d, and that'd run in the daemon mode that you've seen. And now that this is up and running, we can come on over and I'm just going to refresh, so I've already run this, and there we go, it looks like the content has been loaded here. Now I'm not seeing any data of course because at this point I haven't run the seeder, but since we have containers I could go on back again, and we can kind of exit out of here. Now notice it's going to try to gently shut down our different services, so we'll go ahead and let it finish, and let's do a docker-compose ps. And you'll notice here that they've all exited, we can see that over here to the right. Okay, that's fine, we know how to do the up. So let's go ahead and do the up again, but we'll do -d this time. That'll let me get back to here. Now you can see the names are actually shown right here.
So I just need to do a docker exec, and then I can put in the name of one of these containers, so let's go ahead and grab this guy as an example, and I can execute node dbSeeder, which I showed a little bit earlier in the course. This is a file in the actual project that'll get some fake data up into mongo. So we could try to run that, and it looks like it worked, so we'll exit out of there, and again do docker-compose ps. Alright, it looks like everything is up and running, you can see the state right there, so let's refresh. Alright, there we go. Now we're getting some data, this is all from the database, and actually it's now cached in Redis, so this data right here is being cached because it doesn't really change much, and every time I refresh it actually is going to be pulling from Redis. So we could do a docker-compose logs, and we could get back into the logs and you can see some of the redis connections and things going on here. So that's an example of how easy it is now to get this custom six-service development environment up and running and allow us to have a fully functional website, and I can start editing the code because of the volumes, because remember we had a volume that points to my local machine in this case, and now when I'm done I can just close up shop for the day if I want, so we'll get out of the log mode here and, like we saw earlier, do docker-compose down, and that of course will stop the services, and as you saw earlier, also remove the containers. So there you have it, there's a walkthrough of setting up a custom development environment with six services, all the way from looking at the custom Dockerfiles, some of the configuration and environment files, to the YML file, to actually running it with our Docker Compose commands.
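Here's the whole management session from this section in one place; the container name assumes the container_name values from the YML file, and the seeder file name follows the demo:

    # Force-remove any old containers; docker ps -a -q outputs just the container IDs
    docker rm -f $(docker ps -a -q)

    # Build the service images, then start all six services in daemon mode
    docker-compose build
    docker-compose up -d

    # Run a one-off command inside the node1 container to seed MongoDB
    docker exec node1 node dbSeeder.js

    # Check status and view the aggregated logs, then tear everything down
    docker-compose ps
    docker-compose logs
    docker-compose down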

  55. Summary Docker Compose provides a great way to manage the process of building services, and then starting and stopping those services. And of course we talked about how behind the scenes, really a service is a running container. Now of course starting and stopping services by hand using just the command line is a little bit challenging when you get more than one or two, so we talked about there's a docker-compose.yml file that defines all the services, and it's an excellent way to manage these services. It's very easy to write, it allows you to define custom networks like bridge networks, define ports, environment variables, and much more. And then as we talked through this module and looked at our Docker Compose files, we also talked about some of the key Docker Compose commands such as build, up, and down, and then of course there's others like ps to view your running services, and you can call start and stop and things like that. I personally think that from a developer standpoint, understanding the fundamentals of Docker Compose is really, really important, especially if you want an easy way for you or maybe even a group of people on a team to very easily have a consistent development environment that could also be deployed into staging and production environments. It provides a very productive way to do this, of course as you saw, and it really takes a lot of the headache out of the picture that we've traditionally dealt with in the world of software development and all these different services that we often need. So I hope that gives you a great feel for the power of Docker and once we combine Docker Compose with our images and containers, you really can do a lot with just a little effort.

  56. Running Your Containers in the Cloud Introduction Throughout the course you've seen how Docker can be used to set up a development environment and run that environment using containers, and we've seen a lot of the different tools, learned how we can link containers and network containers, and from a development environment standpoint we've covered a lot of ground. So I suppose we could stop there, but we're going to do one more final piece here, and that is discuss how you would get those images that might be on your development machine up into a cloud and actually running. So maybe you want to run them in Azure for instance, how would you go about doing that? While it is possible to do that using command-line tools, it'd be pretty tedious, and as you get more and more containers going, it'd be pretty problematic, definitely not very efficient, and error prone. So what we're going to talk about in this module is something called Docker Cloud, and this is yet another tool provided by Docker that can simplify the process of working with images and containers, and actually getting them up into the cloud. So the first thing we're going to talk about is what Docker Cloud is, and I'll just introduce the fundamentals of the workflow and how it works. Then we're going to walk through the workflow, and the first thing we're going to do is use Docker Cloud to hook up to a cloud provider. We're then going to deploy a virtual machine, what's referred to as a node, up into the cloud, get that all set up and running, and then we're going to create a stack of services, and this is actually going to involve another type of YML file, very similar to Docker Compose, but not identical, and we're going to see how we can use that to define a stack of services that'll make it easy for us to manage all these services running up on a cloud-hosted provider. Then finally, once we get the stack of services set up, I'm going to show you some of the different tools you can use in Docker Cloud to actually manage those services, and make it really, really easy to redeploy an updated image, stop a container, start a container, view the logs, view environment variables, and much, much more actually, a lot of really nice stuff you can do. So by the time you get done with this module, you should be able to take your development environment and move it up to a chosen cloud provider, maybe for testing purposes or staging, or maybe even for production potentially.

  57. Getting Started with Docker Cloud As you work with more and more images and containers with your applications, at some point you're going to want to move those off to a different server, maybe it's staging, maybe it's production, or something else. Well we're going to focus on doing that using a tool called Docker Cloud, and this is another tool provided by Docker, so we've talked about the Docker Toolbox and all those tools, we've talked about Docker Hub, well Docker Cloud is another tool that we can use to help manage our images and our containers, and actually get those up into some of the cloud providers out there. So earlier we talked about some of the different images that link up, such as NGINX, Node, Redis, Mongo, and others, and as you start to move these around, while you certainly can do it by hand running different command-line tools, management tools such as Docker Cloud can really simplify the process, and that's what we'll focus on here. So if you want to go to Microsoft Azure or AWS, Digital Ocean, or whatever it may be, then Docker Cloud can help you manage that entire process. So some of the key features are that you can link to the major cloud providers out there, and even if your provider is not available to link, you can bring your own virtual machine if you'd like and work with that, there's a tool you can install to make even that possible. Using Docker Cloud you can then set up and provision what they refer to as nodes up in the cloud, and you can really think of a node as a running virtual machine that you can get up and running, and you could set up and scale out multiple nodes if you'd like. Now once a node is provisioned and set up, you can then create what's called a stack of Docker services and deploy those to the node or nodes that you might have in the cloud, and this is a really powerful concept that we'll go through, and then you can use Docker Cloud to manage the stacks and the services that you may have, and you'll see it's as easy as just going into a web browser and clicking on, I'd like to start a particular service, or stop it, or terminate it, or do whatever you'd like to do. So here's the general workflow that we're going to walk through. First off I'm going to show you how we can use Docker Cloud to link up to a cloud provider. From there we're actually going to deploy a virtual machine up in the cloud, and we'll deploy this node as it's called. We'll then use another type of YML file, very similar to Docker Compose, to actually create a stack of services. And then we'll use Docker Cloud to manage those stack services. And what's really nice about this, you'll see, is that we can do things like view IP addresses, I can actually get to a terminal right in the browser and interact with my different containers if I'd like, and it makes it really easy to get to those. So for instance if you'd like to hook a MongoDB tool like MongoChef into one of your mongo containers, they provide a nice endpoint that you can hit, which makes that a pretty straightforward process to get started with. So let's get started with Docker Cloud and see how we can link it to a cloud provider.

  58. Linking to a Cloud Provider Let's take a look at the Docker Cloud website, and see how we can use it to link up to one of the cloud providers that they support. So you can go to cloud.docker.com and you can log in with your Docker Hub account, and you'll see I've already logged in. Now the first thing you're going to notice when you log in is that they have a little workflow here for you to walk through. So they talk about linking to a cloud provider, and that's what we're going to do, then deploying the node and working with the services and stacks, and things along those lines. So the first thing we're going to do is add a cloud provider. So I'm going to go ahead and click on that, and here are the providers they support at the current time. Now I'm going to go ahead and do Microsoft Azure, mainly because that's the one I've worked with the most, but you can certainly choose one of the other options, and they have help documentation for getting started with those. So I'm going to go ahead and click on Add credentials, and this is going to have me download what's called a management certificate, and you can think of this as the way to authenticate Docker Cloud into your Microsoft Azure account in this case. So I'm going to go ahead and download the management certificate, and you'll notice it has that there, and then we're going to go ahead and use that in a moment. Now what it's going to want is for me to grab a subscription ID, and this'll be the ID that of course you would have, in this case with Azure. So before we move on, I'm going to run off to Azure, and I've already logged in here as well, and if I scroll down I can go to Settings. Now I'm currently in what they call the classic portal, because as of today this is where you'll need to add this management certificate, but I suspect in the near future the new portal will probably do this as well. But we'll go ahead to here and you'll notice Management Certificates. Alright, so from here it's pretty easy. You can come in and upload, so I'm just going to browse into my Downloads folder, we'll go ahead and grab that certificate there, and then we'll just upload this in, you can pick your proper subscription, and this'll take a moment to get uploaded. You can see that now we have Docker Cloud, and that's all been uploaded for us, it's ready to go, so we're kind of off and running. So what I need to do is grab a subscription ID here, and basically copy this ID back into the subscription ID field that you see right here, and there you go. So now I'm linked up to Microsoft Azure. Now if you wanted to do say AWS, you'd go get your access key and your secret access key as you can see here, and then they have help documentation to walk you through the whole process, so they're all pretty straightforward to get started with, I think. Alright, so now we've linked up to Microsoft Azure, so the next step is we can start working with virtual machines and stacks and do those types of operations, but we're ready to go at this point, so let's move on to the next step.

  59. Deploying a Node Once Docker Cloud is hooked up to a cloud account, you can go ahead and deploy a node into that cloud account. Now a node is basically a virtual machine, specifically a Linux virtual machine, that's what's supported as of today with Docker Cloud. So we're going to take a look at how we can get a Linux virtual machine up and running in Azure. Now coming back over to the Docker Cloud web page and the little Welcome workflow they have for you, you can see that they've detected that we already linked up a cloud provider, so the next step is to deploy a node. So we'll go ahead and click on that. Now we can give it a name, this is for the CodeWithDan site, so I'm just going to call it CodeWithDanNode, and then you can give it deploy tags. Now these would be things like, if this was production, we could tag where this particular node is, this one is going to be us-west, and so on and so forth. The provider is obviously Azure, I can pick the region that I'd like, in this case I'm going to scroll on down and do West US, and then we can select an option, I'm going to do the A2 option, right here actually. And then you could set the number of nodes you'd like, I'm just going to do one, and the disk size, this is kind of the default for what I'd like to do here. So now we can just simply click Launch node cluster down here, and this will take up to 10 or so minutes I've found, it takes a little bit of time to create, but you'll notice as it's deploying right now that it's already given us some information, we're in US West, it's an A2, the version of Docker, and how many containers we have and all that good stuff, and you can see over here on the left it has a summary, along with the different tags that we added. Now once this gets done, we'll go ahead and start working with some services that we want to add in to this particular instance, but let's go ahead and let this node complete deploying. And it looks like the node has now been deployed. Now from here you'll notice I can start deploying some containers, but before we do anything with that, which will be the next section, let's just take a look at the options. So you'll notice an IP address, and the same information you saw earlier, but it's now picked that up, so that'll be the external IP if you want to call in to the virtual machine. So we can go ahead and click on this, and this'll let us actually drill down into anything we have, so we can get a little bit more information on things like endpoints, we don't have any specific ports set up for this. A timeline of what happened, so obviously we created it, we deployed it, and then it looks like there was an update that was done, and these are just some basic things that you'll see with the actual node being created. Alright, so now that we have the node ready to go, we can go back to our Welcome screen here, and you'll notice that it's now been updated, we've now linked up to the cloud provider, obviously we've deployed a Linux node, it's now up and running, and now we're going to get into services and stacks, and that'll be the topic for the next section.

  60. Creating a Stack Once you have a node up and running in your cloud service, you can work with services and something called a stack. Now you can think of a stack as basically a collection of services. So before we talk about stacks let's look at what Docker Cloud provides for services, and then we'll walk through the process of creating a stack of those services. So coming back over to the workflow that Docker Cloud gives us, you'll notice that we can go to Create a service. And it kind of explains what it is, it's simply a group of containers from the same Docker repository. So if I click on Create your first service, because I'm logged in with my Docker Hub account, I can actually get to the different images that I've already pushed. And if you remember, earlier in the course we talked about the Docker Client command that you can run, which is called push, and that of course pushes your local images up to Docker Hub. Well, Docker Cloud has already picked up my repositories that I have here, so you'll see I have my nginx, my node, redis, and mongo, and you can see when it was built, that it's public, all that type of stuff. So what we could do from here is actually launch these and get those containers up and running on the node that was just created. Now that would be fine, but we do know we have some links between some of these containers, and we need to make sure those links are working properly. So instead of jumping right to Services here, and while we're here I could also go to the public repositories and search there, I'm going to go to Stacks. Now a stack is simply a collection of services, and it's kind of like using Docker Compose, but up in Docker Cloud. In fact you're going to see that when we create this first stack, we're going to need a YML file that's very similar to our docker-compose.yml file. So I'm going to click on Create your first stack here. We'll call this CodeWithDanStack, and then you'll notice it wants something called a stackfile. Now if you're not sure what a stackfile is you can actually click here and learn more about it, this'll take you right off to the documentation. You'll see a lot of the different instructions over here to the right are very similar to what we've covered in Docker Compose, and it looks a lot like a docker-compose.yml file, slightly different though. So let's go on back now, and what I need to do is put in the YML file, and I have one of those here. Now first off, before I show you that, let's go review our Docker Compose file, and if you remember we had an nginx. In addition to nginx we had three node services, mainly just to show you that we could do multiple, we have mongo and we have redis, and then we're loading environment variables, setting the ports, all the stuff that we covered earlier in the course. Now I also have a docker.stackfile. Now some people name this docker-cloud.yml, and it really doesn't matter what you call it, because in this particular case I'm going to copy and paste it up to the web page that I just showed you. So this is very, very similar to the YML file that we've already seen. Now there are some limitations here, for instance right now networks aren't a part of this, there is no networks property that you can add like you can do in Compose, so I'm using the standard linking to link things up.
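Just to give you a feel for it, here's a rough sketch of what a stackfile along those lines might look like; the service and image names, and the yourHubAccount placeholder, are just examples that mirror our Compose setup, and notice it relies on links rather than a networks property:

    nginx:
      image: yourHubAccount/nginx
      ports:
        - '80:80'
      links:
        - node1
    node1:
      image: yourHubAccount/node
      environment:
        - NODE_ENV=production
      links:
        - mongodb
        - redis
    mongodb:
      image: yourHubAccount/mongo
    redis:
      image: yourHubAccount/redis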
I fully expect that's going to change as this product matures and new versions roll out, I think we'll get a lot more parity between the two, but all we really have to do here is, I kind of made a template, and I'm just going to come in and replace yourHubAccount here with my username. So I'll come in and we'll do a Replace, and we'll replace that with, in this case, danwahlinstaging. So we'll go ahead and Replace All here, it looks good, we're off and running. So these are the images I just showed you that I already have in my public repository, so I'm just going to grab this entire file, and now that we've kind of seen what it looks like, very, very similar to Docker Compose again, we can just paste this in. Now as we click Create stack here, it will validate the file, and I know the first time I tried it I put in my regular Docker Compose file and got a bunch of parse errors because some of the properties weren't supported, but once you get it right it'll just load it right up for you. So let's go ahead and click on Create stack. So this has now made it very, very easy to get to my different containers. You'll notice I have my redis and mongo, there are my three node containers, and there's my nginx. And what we're going to talk about in a moment is how we can start these up and get this all working, but before I do that I want to show real quick that we can get extra information up here as well, just like I showed with our nodes area. We can get to the Stackfile, that's the one we just uploaded, and we can actually download that if we want to get it back. A timeline of what was created, not a whole lot in this case you'll notice, but it does give us some more details here, and there's even a little console, we'll play with that a little bit later once we get our containers running. We have Endpoints, nothing there, but here are all our Services again, and you can see it's a pretty obvious status, and then it shows you overall that the CodeWithDanStack is not running, and it's not deployed. So this is really, really nice because now we can come in and we can start up all of our containers, we can delete them if they were already deployed, which they're not right now, but they will be. If there's an update to one of the images we can redeploy that particular container, and then we can come in and do some edits to all of this as well. So it provides a really, really nice visual way to deploy and work with your services, really containers, but now that they're organized into a stack I can just come in to this application and very easily manage those without having to resort to a ton of command-line work. So that's how easy it is to create a stack. Now obviously the next step is we need to start managing the services in this stack, and that's what we'll take a look at next.

  61. Managing Stack Services Once the stack is created we can go in and manage the different services in that stack. So let's take a look at that process. So here are the services that you saw earlier, and we have our redis, mongo, node, and nginx. Now I could actually come up and do Start here, but what I'm going to do is start one up individually, and I'll show you why here. So I'm going to come in to mongo first, I'm going to get the database started up. So you can see that we now have mongo up and running. Now if you remember I had a little dbSeeder, and so the next thing I'm going to do is actually start up my node image here, and the reason I'm going to do that is we're going to get into the console for the container, and I'm going to run my dbSeeder so that mongo gets some starter data and we can get running a little more quickly, and then we'll go ahead and start up the other containers. And it looks like that container is now started up. Now, here's where it gets really cool I think. I need to run the dbSeeder and have it connect up to mongo, and fortunately we've already handled that through linking, so I'm going to click on node1 here, and you'll notice I can get to a lot of different things including the logs, but if I drill down in to the actual running container, you'll see that in addition to getting to the variables and the logs and the volumes, I can get to a terminal. So we're going to run off to the terminal here, and now I can actually run my normal commands. So for instance if I do an ls, because the working directory was set up to this particular folder, and this was done in the Dockerfile image that we created earlier in the course, you'll see that I can get to this dbSeeder.js. So I'm going to try running node dbSeeder.js. Alright, and there we go. It looks like the seed data has been loaded and the connection opened, so I'll Ctrl+C to close out of here, and how nice is that? We're able to get to the console right here in Docker Cloud. Now I can get to other things if there are any volumes, so we could get to those, different environment variables, there would be a whole bunch in this case from Docker, we can get to the timeline of what happened with the service and when it was created, any endpoints it has, well, this one is exposed on 8080, and then over here to the left you'll notice we see the start command. There's our pm2 we saw earlier, there's the actual port that the container was assigned, but it's going to route to port 8080, and the IP that it's using and some other features there you'll see. Alright, so now what I'm going to do is go back up to my stacks and we'll drill down into this one here. And I'm going to go ahead and stop this one, and the reason is I want to get redis up, so I'm going to go ahead and stop it. Alright, and that stopped really quickly. Now I'm going to start up redis. It looks like redis is now running, along with mongo obviously. So now I'm going to come back to node1, 2, and 3, and let's start those up, and we can actually just check the ones we want, and I can say Start Selected. It looks like all three of our node containers are now running. Now I'm going to run nginx. Now, did I have to do these individually? No, but I wanted to be able to run that dbSeeder, and that kind of helped me do it very easily, otherwise we could have come up top, and you'll notice right up top we can Stop, Terminate, Redeploy, and do some other things up here as well. And there we go, now nginx is running.
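For reference, what I ran in that in-browser terminal was nothing special, just the same kinds of commands you'd run in any shell, something along these lines:

    ls                # list the working directory set in our Dockerfile; dbSeeder.js shows up here
    node dbSeeder.js  # run the seed script to load starter data into the linked mongo container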
Now before we try this out, let's run back to say node1, and I showed you that there's a Logs option. And so you can also stream the logs, and it auto-refreshes, so as requests come in, you'll see those show up if that's what your logging shows for your node, or whatever server you're using in the container, but you'll notice in this case at the very bottom, it looks like the db connection is open, there's some other information about pm2 up here you'll see in the middle, and a lot of good stuff there. So there's all kinds of great stuff you can get in to. I can also come in to Configuration and get to things like, again, the environment variables, any links, mongo and redis in this case, linked from, and then you can see nginx is linking to node and then node links to mongo and redis, and any volumes that we have. Now let's come on back to Stacks here, and we'll see that all six are running, but now the question would be okay, how do we hit our nginx, because nginx, if we go look at the endpoints, should definitely have at least one from port 80, and there you go, 80 and 443 if SSL was all set up with certificates and things, but what I'd like to do is hit the IP address of the actual node, the virtual machine. So I'm going to click on that, and there's the IP right there. Now the next thing I'm going to do is open another tab here and actually try to hit this directly, and fortunately it looks like it worked. So this is definitely a really, really nice management tool. I've had several people who are starting to get into Docker tell me they wished there was a better tool out there, that they didn't want to have to run all these commands, and that while they like Docker Compose in the development environment, getting things off into the cloud was difficult. Well these tools are definitely making it much, much easier to do that, as you can see. So that is an example of kind of start to finish on using Docker Cloud and getting our custom containers up into a stack, organizing those by service, and then firing everything up.

  62. Summary Although the focus of this course has been on working with Docker in the development environment, you've now seen that with tools like Docker Cloud, you can migrate that environment up into a cloud provider, such as Azure, AWS, or others. And what's so great about that is maybe you want to move from your local development environment up to the cloud purely for testing purposes, maybe you want to throw more load at it or something like that. Well, we can do that pretty easily as you saw. Now normally as you migrate your images and get your running containers going, you'll rely on the YML file that I showed you, and this is the stack YML file, which is for Docker Cloud. It's a little bit different than Docker Compose, but very, very similar overall in the functionality and the structure, and it even has a lot of the same properties that you can use. So in sum, Docker Cloud can do these types of things. We can create and provision nodes, these'll be our virtual machines, we can of course create stacks, and the stacks contain services, and that relates back to the YML file, and then finally once we have a stack all set up and registered with Docker Cloud, we can then manage the different services in that stack and start and stop them, and as an image gets put up on Docker Hub we can redeploy it so that it updates whatever container was there, and do a lot of great things with that. So that should give you a really solid overview of how to get started with Docker Cloud and some of the different functionality, and I think this really delivers on the promise of Docker, and that promise is that you can get an environment going with multiple services and migrate that across different systems, different cloud providers, or wherever it may be, and your application should run exactly the same across those. And by using Docker Cloud it's a lot easier to do the migration as compared to doing it all by hand.

  63. Reviewing the Case for Docker Course Review Let's wrap up by doing a final review of the case for Docker and why we want to use it in our development environment. So we talked about how Docker brings many development benefits to our different team members, and even if you're just a team of one, there are a lot of benefits because you can bring servers and databases and many other things up very quickly, and then get rid of those when you're done with them. So we talked about how we can bring up web servers and databases and caching servers and more, and bring those up in a very consistent way across team members, even in distributed locations if we needed to, and then how we can even move those up into the cloud if we'd like. And so there's the benefit of the consistency, and the fact that how it runs in development is how it's going to run if you move those containers and images into your staging and production environments. Now we talked about how the heart and soul of Docker is of course Docker images, and that Docker images are used to create our containers, and to work with all of this we need the Docker Toolbox. So some of the key tools that we talked about were Docker Client, and this is of course how we can work with our images and containers, we talked about Docker Machine and how we can use that to interact with VirtualBox on Windows and Mac, and then Docker Compose, one of my personal favorite tools, is especially helpful as we need to work with multiple containers and get those all up and running and talking to each other. And then, if you don't want to work with the command line, we also talked about Docker Kitematic, and it does provide a really nice and simple way to get started with Docker if you really just want to jump in quickly and not have to learn all the commands that we talked about for Docker Client, Docker Compose, and others. And then of course we talked about how on Windows and on Mac you are going to need VirtualBox, because Docker is going to run with Linux, or of course you could use the Windows Server version as well. Now we also talked about linking our source code from a running container to a local folder that you might have on your development machine. And this is a really, really important part for us as web developers, because we obviously need a way to quickly and easily make changes to code and get that code running up in the container, such as a Node.js, ASP.NET, or PHP container, or whatever it may be, that is running the actual server, and we talked about how we can do that with Docker volumes, and how a volume can point to a folder that we set up, and that makes it very easy to link our source code into our running container. Now when it comes to containers, while we can pull many of the containers out there from Docker Hub, we can also create our own custom Docker images, and we talked about how we can create Dockerfiles that are based on an image that's in Docker Hub, and then add our custom functionality into that. Now as mentioned, one of my favorite things covered in the entire course was Docker Compose, and we also talked about the compose YML file, and this YML file is just a great way, I think, to get your development environment up and running very quickly. You could have 10 different services if you wanted, and not have to bring those up individually using Docker Client, we could get this YML file in place and we'd be off to the races and up and running.
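Just to jog your memory on a couple of those pieces, linking your local source code into a running container with a volume looks something like this; the ports, paths, and image tag here are placeholder examples, not the exact ones from the course:

    docker run -p 8080:3000 -v $(pwd):/var/www -w /var/www node npm start

And a minimal custom Dockerfile that builds on a Docker Hub image might look along these lines, again just as a rough sketch:

    FROM node:latest
    WORKDIR /var/www
    COPY . /var/www
    RUN npm install
    EXPOSE 3000
    ENTRYPOINT ["node", "server.js"]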
And then finally I showed how we can move our different containers, in our case NGINX, Node, Redis, and MongoDB, up into cloud services such as Azure or AWS or others, using the Docker Cloud tool, and it provides a really nice way to pull from Docker Hub, get those images up and running as container services, and then we can start and stop those, redeploy them, and do all the different operations that we discussed. So that's a wrap on the Docker for Web Developers course, and I hope that you have a really solid feel now for the role that Docker can play in your development environment. I appreciate you taking the time to listen to the course, and I hope you're able to apply this new knowledge in your web development efforts.