Getting Started with Docker
by Nigel Poulton
Hello, and welcome to Pluralsight. In fact, welcome to the future. But I'm serious. If you are in any way into IT, and I'm assuming that you are seeing as how you've taken this course, but if you are in any way into IT, I am telling you now, Docker and containers are going to be a massive part of your future. Anyway, first things first. I'm Nigel, and it is going to be my privilege to be your instructor for the next hour or so. Now, I've been working with Docker and containers for a good couple of years now. My first foray into this was back when Docker was at version 0.9, and then, as they say, the rest is history. But some detail anyway, right? I've done a bunch of consulting around containers. Obviously I've pumped out a load of courses here on Pluralsight, and generally speaking, I'm pretty well connected with the community, as well as the team over at Docker, Inc. And you know what, I'm an interactive guy as well. So, if you want to reach out to me, I'm usually found lurking around in the dark corners of Twitter on Friday and Saturday nights, where you know what, I am more than happy to talk about containers and other cool technologies. But enough about me, let's talk about Docker and this course. So, my thinking behind this course is that it'll follow on nicely from my Docker and Containers: The Big Picture course. Speaking of which, if you're like a proper noob to all this stuff, I don't know, maybe you haven't quite wrapped your head around what containers are or what they're all about, then seriously, you might want to swing by The Big Picture course first, then, once you're done with all that stuff, swing back here and we'll take you to the next level. And that next level is, well, we're going to take you all the way through getting Docker installed, playing with containers, and finally, deploying, and who knows, maybe even scaling an application. 
But like I said, all of that in, I'm thinking an hour or so, so buckle in, because in the next hour or whatever, definitely less than two, we're going to be totally cool with installing Docker and rocking it with containers. And you know what, if you play this back at like 1.5x or 2x speed, which a lot of people tell me they actually do, well you'll be up to speed even quicker, and in my books, that is pretty exciting. Though if you do play it back at 1.5 or 2x speed, be prepared for some weird, British-sounding chipmunk. But seriously, that's it. Install Docker, and we'll look at how to install it on various different platforms, so like straight vanilla Linux install so that you've got an idea of how you might do it on VMs that are on-prem or on your laptop or anything like that, then we'll look at AWS, Azure, and maybe even some others. Once we've got it all installed, we'll play around a bit with containers, because that's the crux of all this. And I know, it's one thing to sit and listen to me wax on about the theory in my Big Picture course and stuff, but there's just no substitute for seeing it and actually doing it. Well, once we're installed and we actually know all about containers and what have you, we'll finish up by playing around with a containerized or a dockerized app. And here's the best bit, I've been saving this. Everything we're going to do in this course is bang up to date with the game changing new features that came out with Docker 1.12. So, native, built-in clustering, plus all the goodness of services and bundles and deployments and all that stuff. So yeah, you're getting the latest and greatest here. Anyway, that's the plan. Talk is cheap, let's go install some Docker.
Okay, as the title suggests, this module is going to show you how to install Docker. And of course, as we go, we'll be shedding light on each installation type. See, I've no interest in teaching you how to click Next, Next, Next while being totally oblivious to what's going on behind the scenes, no chance. So, we'll do a bit of explaining as we go, but it's things like whether or not the Docker install that you're doing is going to give you Docker running natively on the operating system's own kernel versus maybe running it inside of a VM on top of a hypervisor. Now then, there literally are loads of ways and places to install Docker. There's Windows, there's Mac, there's obviously Linux, but then there's on-premises, there's cloud infrastructure platforms, there's manual installs, scripted, wizard-based, there are flipping loads, and we can't tackle them all here. So what I'll do is this. We'll do the ones that I think are a good mix of the most common, as well as the ones that'll teach us a bit of how Docker works. But before I go any further, please, please, please appreciate this. Things in the tech world change all the time, and they change fast, and that's what makes technology so cool, but it also means, yeah, from time to time things are going to change, so maybe some of the screenshots and some of the prompts, and I don't know, maybe the return codes and the like, they're going to change from time to time, and I totally appreciate that it can be a right pain for you if this module gets out of date, I hear you. So, here's what I'm going to do. I will do my level best to keep an eye on things in case any of the installation methods that we show here change significantly. If they do, and if I notice, then I'll come right back here and I'll get things updated. Mind you, if the changes are minor and I don't think they're worth doing a redo of the content, then I'm just going to leave things as they are, but I am looking out for you here. 
There is no way that I'm going to leave something as it is just because I'm too lazy and can't be bothered to do the update. No way. If I genuinely think any of the changes are big enough that they warrant updates, I will make them. But, as well, we're in this together. That means if you hit this module and things have changed enough to make it a pain for you, then give me a heads up, because it is totally possible that things can change and I don't notice, and I know that might sound unbelievable, but it does happen. Well, when it does, let's do us all a favor and give me a heads up, and I'll do two things. First up, I'll see what I can do to help you get through the installation right there and then. Then, second up, I'll also see what I can do about getting the course content updated. Deal? Magic. But remember, on the topic of getting the course content updated, these courses are hard work to make, so I can't update the content overnight, I might be ill, I might be on holiday, things just take time. Give me a few days. Anyway, here we go. These are the installs that we're going to look at. We'll look at desktop installs, Windows and Mac, then we'll do server installs, Linux and Windows, and then we'll look at a couple of cloud installs, AWS and Azure. But, the cloud installs at the moment are just placeholders. You see, Docker for AWS is in private beta right now, and Docker for Azure, that's not far behind it, but they're not quite ready to be included in here yet, but as soon as they are, I'm committed to coming right back here and adding the content. But let me just say this, I'm going to be running this entire course as Docker running on Linux inside of AWS. What these installs here that are currently just placeholders are, they're kind of scripted or wizard-based installs that let you spin up a full Docker and cloud infrastructure in a really easy manner. 
I mean, you can totally do Docker on AWS manually, just spin up instances in AWS and do a manual Docker install, it's all good, and you know what, probably half the world already runs on that anyway. Maybe not yet, but it is massively common. So just for clarity, these placeholders are for Docker for AWS and Docker for Azure, which are both official Docker products, they're just not fully baked in GA yet. Anyway, first on the cards, installation on a Windows desktop, also known as Docker for Windows.
Docker for Windows
Docker for Windows. What's it all about? Well, first up, Docker for Windows is actually the name of a product from Docker Inc, so that's the product we're looking at here, and it's all about getting a small Docker environment up and running locally on your Windows laptop or desktop. Right now that's got to be Windows 10, 64 bit, but who knows what other versions might or might not be supported in the future. Right now though, at the time of recording, we're Windows 10, 64 bit only, which is a great platform I think. Let's not get religious though, oh, good point. Now, I am assuming a bit of a clean system here, meaning it's going to work smoothest and best if you've not previously installed Docker Toolbox or Docker Machine or anything like that. I mean, if you have, I'm sure it's probably still going to work, I'm just aiming for the cleanest installation scenario possible, because as much as I'd love to, I can't be technical support for every nuanced install. Anyway, what you're going to end up with at the end of this clip is a fully working, single engine, Docker environment on your Windows desktop or laptop, but, it's really only for test and dev work, you're definitely not going to want to run your production estate on it. I mean, it's only a single engine, remember, so it's not scalable or anything, and you might even find that not all features work straightaway, the guys at Docker are very much taking a stability-first, features-second approach here. So definitely it's good for test and dev, and of course it's brilliant for learning Docker. Now, the other thing I want to point out before we go any further is that at the time of recording, although this is Docker for Windows, what you're going to get here is the Docker engine, which is just a fancy way of saying Docker or the Docker server, but it's going to be running on Linux. Now, stay with me here, Linux inside of a Hyper-V virtual machine. Now I've drawn it on the screen for a reason. 
Take a second or two to let that sink in. This is Docker for Windows, but we're getting a Hyper-V virtual machine running a Linux VM called MobyLinuxVM, and inside of that we're going to run Docker. And I know I can hear some of you asking, so why do you call it Docker for Windows then if it's on Linux? Well, we'll see in a second, but for now, despite all of the Linux and virtualization magic going on behind the scenes, what you're going to end up with is the ability to fire up a command prompt or PowerShell if that's your thing, and just work in Docker commands and it's going to work. It'll be just like you've got a fully working Docker engine locally, the only thing to note is that it's running on Linux behind the scenes, and that obviously means you can only run Linux containers on it. One final thing before we actually get to install it. It seems very much like native Docker for Windows 10 is coming, so stripping out at the very least this Linux stuff here, giving you access to a Windows kernel, so proper Docker on Windows. Now at the time of recording, that's only available if you signed up for the Windows 10 Insider builds, but soon enough I reckon it'll probably make its way to mainstream Windows 10. Anyway, far too much waffling. How do we install it? Well, first up, you've got to make sure that you've got the Hyper-V feature installed. So from Control Panel, Programs and Features, just hit Turn Windows features on or off, then Hyper-V here, put a checkbox in it, and off you go. Windows wants a restart, you might need to do that, me on the other hand, I'm going to use the Force. Thanks, but I don't need a restart. Okay, good! Now then, for those of you who do need to restart. Well, at least one of you has told me about an issue that you've hit, where you've followed along but you got this message. Basically, hardware virtualization support was not enabled in the machine's BIOS. 
So if you do have this issue when starting Docker on your Windows 10 PC, then you need to restart your box, hit the magic BIOS key like F12 or Del or whatever it is, go into your BIOS, and enable hardware virtualization assists. You know, like Intel VT or AMD-V. While you're in there, you might also want to check that things like Second Level Address Translation (SLAT) and Data Execution Prevention are enabled. The point is, if you get this message, or a similar one, after you've enabled Hyper-V in Windows, restart your machine and check that hardware virtualization assists are enabled in your BIOS. And it goes without saying, I hope, be mega careful when poking around in the BIOS, I mean mega careful. After you've installed the Hyper-V feature and you've restarted, head over to the Docker website. You're going to want Get Docker here. Now we're all about Windows at the moment so we'll download Docker for Windows, again, I'm not waiting for the download, Jedi mind trick again, the file is downloaded. So let's fire it up. Of course I agree to the license agreement, and off that goes. This is going to launch Docker for us. I've tried to be clever here and record it on my second monitor so you don't see all my clutter on the desktop. So, you don't get to see the tiny little icon thingy, although actually I will record it later and then I'll patch it in. Here we go. This is what you'll see, check that out, like a little window inside of a window. But the thing is, on the status bar you'll see this Moby Dock whale here, and if you click Settings, you get into a bare bones UI. There's been a bunch of talk about a more feature-rich UI, maybe Kitematic if you know of that, but for what we're doing here we don't need to get sidetracked into that detail. What we really want is a command line. So, the first thing I always do is a docker version for a sanity check. Great, we're working. 
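That docker version output splits into client and server sections, and it's worth pulling apart. Here's a small shell sketch that extracts the server's OS/Arch field from a captured sample; the sample text below is abbreviated and assumed, not a verbatim capture, but the shape matches what the command prints:

```shell
# Abbreviated, assumed 'docker version' output for Docker for Windows:
# the client runs on Windows, the server inside the MobyLinuxVM.
sample_output='Client:
 Version:  1.12.0
 OS/Arch:  windows/amd64

Server:
 Version:  1.12.0
 OS/Arch:  linux/amd64'

# Grab the OS/Arch value from the Server section only.
server_os=$(printf '%s\n' "$sample_output" \
  | awk '/^Server:/ {s=1} s && /OS\/Arch/ {print $2; exit}')

echo "$server_os"   # linux/amd64
```

On your own machine, you'd of course run docker version directly and just read the two sections off the screen.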
Now if you look closely, see how the client is Windows, and then down here the server is on Linux? That server bit, remember, is running inside that Hyper-V VM, easy for me to say, it's running inside that Hyper-V VM. Magic. So, all regular Docker commands should now work. Now the only other thing to note, I guess if you're curious you can spin up Hyper-V Manager and dig a bit deeper. You know what, I recommend you do that, it's a great way to get your head around what's going on behind the scenes. That's the Docker for Windows installation. Next up, Docker for Mac.
Docker for Mac
Okay, Docker for Mac, the far superior solution to the old boot2docker. If you don't know what boot2docker is, lucky you. So, for clarity, Docker for Mac is an official product from Docker Inc. Its entire raison d'être is to give you as a Mac user the smoothest possible local Docker experience. And by local, I mean have Docker running locally on your Mac, and for all of the behind-the-scenes mystery and magic, just to get out of the way and let you crack on with Docker. Now, speaking of behind-the-scenes magic, because I do think this stuff is often worth having at least a conceptual grasp of, behind the scenes it's going to run a VM. Inside of that VM it's going to run Linux, then it's going to expose the Docker engine on that Linux VM back to your Mac. Now, for the curious among us, in order to do that, it's leveraging something called HyperKit to implement a super lightweight hypervisor, and HyperKit is based off of xhyve if you know that, just with more than a few enhancements. Now it also leverages some stuff from DataKit, runs a highly-tuned Linux distro called Moby based off of Alpine Linux, so, super small and all about security, speed, and stateless boots, and if it's your thing, it includes support for binfmt_misc in case you want to do ARM or PowerPC stuff, which I don't, so I can't really give you any guidance on that. In fact, I've just heard about it and I'm dropping in buzzwords to make me sound clever. Watch that backfire. Now a quick installation pre-req. To be sure this is going to work, we need to be running a modern Mac, or pretty modern. So anything from 2010 or later should work. What you need is the hardware assists for the memory management unit. As well as that though, you're going to want to be running OS X 10.10.3 Yosemite, that is a lot of 10s right there. Anyway, let's get it installed. 
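If you want to sanity check that 10.10.3 pre-req from a terminal before installing, here's a sketch. version_ge is a hypothetical helper, not anything Docker ships, and it assumes plain numeric version components:

```shell
# version_ge A B  ->  true (exit 0) if version A >= version B.
version_ge() {
  # Sort the two versions numerically, field by field; if B sorts
  # first (or they're equal), then A is at least B.
  [ "$(printf '%s\n%s\n' "$2" "$1" \
      | sort -t . -k1,1n -k2,2n -k3,3n | head -n 1)" = "$2" ]
}

version_ge "10.11.6" "10.10.3" && echo "OS version OK"   # prints OS version OK
version_ge "10.9.5"  "10.10.3" || echo "too old"         # prints too old
```

On a real Mac you'd feed it the output of sw_vers -productVersion instead of a hard-coded string.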
So, in my browser here I'm heading to the Docker website, I'm going to go Get Docker, and you know what, I'm sure you could just as easily type Docker for Mac into Google. Never mind, I'm going to hit the Learn More under Mac here, and I want to download for Mac. Now obviously I've got a space time compressing internet connection, so that's already downloaded, so let me go start it from here, it's this one. Let's drag Moby over here to the Applications folder, then let's go start it. Yeah, that's fine, Next, OK. Password1, just kidding, and there we go. Now, this animated whale icon up here, this is telling us that it's starting up. So, we'll give it a second or two to warm up there, and that looks good. And for now, if we hit Preferences, we get a minimal UI. Expect this to change in the future, but really for what we're doing here, it's immaterial. What we actually want is a terminal window, and if this has gone to plan, heck yeah, we're up and running. And notice how the client here is showing as darwin because the client bits are running natively on the Mac OS Darwin kernel, and I hope nobody minds me calling it Mac OS instead of OS X, I know there are trolls out there, yes, I'm looking at you. No, just kidding, knock yourself out, but the server bit down here, this is running inside of that Moby Linux VM that we mentioned a second ago. But, that bit is critical to understand. Although this seamlessly works on your Mac, it's obviously Docker on Linux under the hood, so it's only going to work with Linux-based Docker containers, which right now is where most of the action is anyway, so that's good. Anyway, that's Docker installed on your Mac. All the regular Docker commands are just going to work, docker info, docker images, yep, the whole caboodle. Next up, Docker on Windows Server.
Installing Docker on Windows Server 2016
Man, have we been excited about this. Windows containers, well, at least I've been excited about it. You know what, I'm hoping that a few of you will at the very least have a passing interest in this. This is early stuff, I am pushing the boat out here for you because we're not even GA yet, I'm actually rolling here with Windows Server 2016 Tech Preview 5 inside of Azure. So yeah, I'm obviously keeping this old boy or girl close at all times. We are living it large on the bleeding edge. But remember, as and when this stuff goes general availability, I'm going to update the content ASAP, and if it happens without me noticing, pipe up and let me know and I'll pull out all the stops to get the section updated as fast as I can. But here's the plan, I'm going with a full-fat Windows Server here, GUI and all, I'm old fashioned. And, I'm going with a version that does not have the containers feature already installed. I'm thinking showing you how to install the containers feature, if nothing else, will reinforce in your mind that it's needed. Now what we're going to be looking at here is native Windows Server containers, not Hyper-V containers. They're a different variation, and I've de-scoped them from this project. So, we'll install the containers feature, then, we'll install Docker, and then, because images are a little bit different in the Windows world, I'm actually going to show you how to pull a base OS image and then run your first container. So, you came here expecting to learn how to install Docker on Windows, and you're going to walk away with the knowledge of how to run your first container as well. That's what I call a deal. So, this is our Windows server, and PowerShell is the future, so we'll go ahead and use that. Well, first things first, we need to install the containers feature. That's doing its thing. Alright, looks like we need a restart. Anyone cracking jokes about Windows needing reboots gets sent to the back of the class, and you don't want that. 
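For reference, the point-and-click feature install we just did can also be done from PowerShell. This is a sketch against Server 2016 TP5, where the tooling was still settling, so treat the exact cmdlet shape as provisional:

```powershell
# Install the Containers feature, then restart (TP5-era sketch).
Install-WindowsFeature -Name Containers
Restart-Computer -Force
```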
Here we are, server back up and running. Now to install Docker. Well, first up, I'm going to come and manually create a folder called docker on the local machine here in the C:\Program Files folder. Now I need to grab the daemon and client binaries from these two links here. So, I'll grab the daemon first. If you look closely they're both exactly the same, except the daemon's got a d on the end and the client doesn't. So I'll have the client now. Let's go and find them. I'll have both of these things, and I'll copy them over here into the folder we just created. Yes, thanks. Yes, thanks. It's a bit manual, though I'm sure it will change when it goes GA. Ping me when it does. So next up, we stick that docker folder that we just created in our system's PATH. And I know, I'm an old-school Windows admin from many moons ago so I'm doing it the old point and click way, it's nostalgic, right? Let me come back here and grab the path. I don't need any of these anymore. Nope, none of these. Okay, let's get back to PowerShell. Now if you've already got your PowerShell window open, you're probably going to want to close it and reopen it just to pick up that new path, but the next job is to register dockerd here as a service, this is the daemon or server. No news is usually good news. Let's see if it'll start. Good, but always good to double check I think. Bingo. And I think that's us in business. This machine should now be primed and ready to rock with Docker. So first up, probably my most frequently used Docker command, maybe not actually, but I use it a lot anyway, docker version. Right on. Client and server both responding. And look, both running natively on the Windows kernel. Let's docker info. Lovely. It is a beautiful thing. Now, for a bit of necessary Windows stuff. Before you can start running containers on Windows, you need to download what's currently being called a base OS image. 
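Condensing the manual steps above into PowerShell, the flow looks roughly like this; the paths are the ones used in the walkthrough, and you'll want a fresh shell after the PATH change:

```powershell
# Assumes dockerd.exe and docker.exe have been copied into
# C:\Program Files\docker and that folder is on the system PATH.
dockerd --register-service   # register the daemon as a Windows service
Start-Service docker         # start the service
Get-Service docker           # double-check it's Running
docker version               # client and server should both respond
```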
And to pull one of those base OS images, we need the ContainerImage package provider here, give that a second or two. Now we can run Find-ContainerImage. There they are, our two options for base OS images, NanoServer and ServerCore. So let's grab this ServerCore one. Oh, it's like 9 GB or something, so it's going to take a while. But look here, there's Microsoft doing the same as me, this stuff is in mega flux, bleeding edge and all, remember? Now then, while that downloads, the reason that we've got to do what we're doing here with Windows base OS images, I think, is pretty much down to licensing and legal stuff. I don't think these can just be hosted on Docker Hub and the likes. Now I'm hopeful that that'll change soon, it might be wishful thinking on my behalf, I don't know, but I know that way back in late 2015 at DockerCon in Barcelona, that was the reason cited as to why we're doing what we're doing here. Now that that's downloaded we need to restart Docker, and if we throw a docker images, there it is, ServerCore. The thing is though, in order to use it more easily, it is a really good idea to tag it as the latest image. Now, this lesson is mushrooming out of control here. So, when you use Docker commands that reference images, unless you explicitly tell the command which image version or tag you want to go with, like say version 1 or version 0.5 or whatever, or even this hideous long version or tag here, if you don't specify that and you just omit it, then the Docker daemon is going to assume that you mean the image that is tagged as latest. Only look here, there's none tagged as latest, just 10.0.143 whatever. So, let's tag that image as latest, and hopefully if it's not clear you'll see what I mean in a second. So we go docker tag, we throw the image ID here, and we'll tag it as windowsservercore, and then latest. Docker images again. See how it looks? You know what, that looks rubbish actually. Let's make this a little bit wider, and over here as well. Miles better. 
See how it looks like there's two images now, only there's actually not, because if you look closely, you'll see they've both got the same image ID and they're both the same size, but, there's two tags against it, and an image can have pretty much as many tags as you want, they're just helpful metadata at the end of the day. Now that we've got our image and we've tagged it as latest, let's run a container. So we go docker run, this is telling Docker to go run a new container, the -it flags here make it an interactive one, basically attach my PowerShell terminal here to the standard in stream of the container, though you know what, to be honest, I've absolutely no idea whether Windows uses terminology like standard in streams and the like, but we're attaching my PowerShell terminal here to the terminal that we're going to create inside the container. Speaking of which, we'd better base it off that ServerCore image we downloaded, and we'll just run a cmd shell process inside of it. Now then, if we hadn't just tagged our image as latest in the previous step, we'd now have to go and stick that hideous long 10. whatever tag on the end here. But, because we did tag it as latest, we can just omit the tag and it'll default to using the image with the latest tag. Crystal clear? I hope so. If it's not, rewind and watch it again maybe, and if all else fails, watch my Docker Deep Dive course. But, before I hit Return, notice my prompt here. In fact actually, let's do this first. So this machine is called docker-win, and that's Docker for Windows, it's not Docker for the Win, just thought I'd better make that clear. Well, let's have that docker run command again, and there we go. We've lost the funky blue of PowerShell, and notice that the prompt has changed, all because we're inside the container now. So if we hostname again, there we go, totally different environment. This shell that we're in now is that cmd process that we just started inside of our new container. 
So, if I Ctrl-PQ out of this to exit the container but leave it running, and I do a docker ps here to see running containers, I see we didn't get our nice PowerShell blue back, but there it is running. And that, ladies, gents, and sysadmins, is how to install Docker on Windows, and, thrown in as a bonus, it's how to run your first container. Oh yes. The world is now your oyster.
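That default-to-latest behaviour we leaned on is worth nailing down. Here's a tiny, hypothetical shell helper, purely an illustration of the rule, not anything Docker ships, and it ignores wrinkles like a registry host:port in the reference: omit the :tag and Docker behaves as though you'd written :latest.

```shell
# resolve_tag: append ':latest' when an image reference has no tag,
# mimicking the daemon's default-tag behaviour.
resolve_tag() {
  case "$1" in
    *:*) printf '%s\n' "$1" ;;          # explicit tag, use as-is
    *)   printf '%s\n' "$1:latest" ;;   # no tag, default to latest
  esac
}

resolve_tag windowsservercore                    # -> windowsservercore:latest
resolve_tag windowsservercore:10.0.14300.1000    # unchanged
```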
Installing Docker on Linux
Docker on Linux. For once in my life, the easiest of the installs, but, there are tons of Linux distros out there, and I know that that can in its own way sometimes make it a bit confusing. So, what I'm going to show you here is Docker on Ubuntu. But you know what, all I'm going to do is a single command, and, that command is going to work just fine on most of the major Linux distros out there, and I know for sure that it works on Ubuntu and CentOS, even across init systems, so systemd, classic init, and even, what was it, upstart, yeah, upstart. It's going to work fine on all of them. But the thing is, if you get stuck, just type install docker and then your flavor of Linux, like install Docker on CentOS, into Google, and what you'll get is a link to the Docker Docs website, which is a great place for the latest instructions. But, here I am on a clean Linux machine, and I'm going to run this single command here. Great. This is going to grab this install script here, and feel free to read this in your own time if that's your thing, in fact, I recommend that you do, but the command grabs the script, and then look, it pipes it through your shell. So, if we unleash it, off it goes, doing everything it needs to do to install Docker. Now, depending on your system it's going to have to install prereqs and the likes, so it can take a few minutes, but what I'll do is I'll flex my intellectual might and I will warp space time. Magic. There it is, all done. No other commands, nothing, just the script worked its magic. Now, certainly in real world production environments you're going to want to go with this advice here, and add users to the Docker group or some other group that fits your corporate standards. The idea being, you really don't want to be abusing the root account on your machines to run and manage Docker. I'm not fussed about it here for this demo, so if I go with trusty old docker version here, there we are, all Dockered up. Quick docker info. 
Lovely, clean install as well. This machine, all ready to rock and roll for the rest of the course. Now, if you do have issues, seriously, Google install Docker Ubuntu 14.04 or whatever your distro is, and up near the top here you're going to get a link to docs.docker.com. Now, this is going to change from time to time, like they've just rebranded the website recently so it looks all funky now, but here you've got everything you need to install Docker. Well, this page has got all the different versions of Ubuntu, but whatever Linux distro you're searching for, you'll get a link to the right page. But the instructions here are manual style, so you're going to be running all of the commands that the script would otherwise do for you. Not the end of the world, it's just a few more commands to type. But that's it, Docker installed on Linux. Now, one final thing. Some vendors and distros from time to time go with their own forked repos, so, forking the Docker project and doing bits and pieces of their own thing in it. So, you do need to be careful that you know exactly what you're getting, because forks of the Docker project can land you in a scenario where mods and fixes that you might get might not actually make it into upstream Docker, and before you know it, you've diverged and you're not in the place that you're expecting. Now I'm not saying that this is a bad thing in and of itself, it's probably not the route I'd personally take, I'm just making you aware, forewarned is forearmed as they say. You just need to make sure you know what you're getting, because if you want commercial support for your Docker estate from Docker Inc, or maybe from one of their authorized partners, then I think you're going to struggle if you're running a fork from somebody else. But I'm getting off piste here. Yeah, it is really important because Docker is making it more and more into production and more and more of you guys are looking for commercial support and service level agreements. 
So be careful. But yeah, that's Docker on Linux.
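For reference, the whole Linux install above condenses to a handful of commands. The usermod line assumes an example user called ubuntu, so substitute your own, and do read the script before running it:

```shell
# Download and inspect the install script first, rather than blind-piping.
curl -fsSL https://get.docker.com -o get-docker.sh
less get-docker.sh            # read it, it's your machine
sh get-docker.sh              # installs Docker, prereqs included

# Post-install: avoid living as root for day-to-day Docker work.
sudo usermod -aG docker ubuntu   # 'ubuntu' is an assumed example user
docker version                   # sanity check
```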
Super quick recap, and this is a bit disheartening for me, because I've spent absolutely ages recording this module, and it can be summed up in like a handful of words. We've seen how to install Docker. Wow. That does not even begin to convey the effort in spinning up a Mac just so I could record the Mac install, the Windows tech preview stuff, oh my goodness, all that hard work. But I guess on the flipside too, and more importantly, it does absolutely no justice to how important and exciting this is for you if you're just starting your Docker journey. But that's what's important, you've now got the tools required to get started with Docker and you're ready to crack on with the rest of this course. Speaking of which, this is where the exciting stuff actually starts. Next up, you're going to see how to start working properly with images and containers, very cool stuff. Then after that, you're going to get to play with the latest and greatest native orchestration stuff, and if the images and container stuff is cool, then oh my goodness, I don't even know how to start to prepare you for that. Maybe stick on your arctic expedition gear, because that is a new level of cool that I am calling ice cold. See you there.
Working with Containers
What Is a Container?
Time to do some work with containers, but, the world's quickest primer on what a container is first. And I'm just setting my stopwatch here to see if I can get this done in under a minute. Now, if you need more than what I'm going to give you here, go see my Big Picture course. Anyway, the clock is ticking. So, you know how a virtual machine manager or a hypervisor, how it grabs physical resources like CPU, RAM, storage, networks, then, it slices them into virtual versions, so virtual CPU, virtual RAM, virtual NICs, all that goodness, and then it builds virtual machines out of them, virtual machines that look, smell, and feel like normal physical servers. Well not so with containers. Now, keeping this somewhat high-level here, instead of slicing and dicing physical server resources, Docker and other container engines slice and dice operating system resources. So, they slice up the process namespace, the network stack, the storage stack, the file system hierarchy actually. In effect, every container gets its own PID 1, process ID 1, every container gets its own root file system, that's obviously / on Linux and C:\ on Windows. So, hypervisor virtualization virtualizes physical server resources and builds virtual machines. Container engines like Docker, they're more like operating system virtualization, they create virtual operating systems, assign one to each container, inside of which we run applications. And they're way more lightweight than VMs. Okay, that took me, oh, check that out, less than 1 minute 20, I'm happy with that. Like I say though, if you think you need more, go check out either my Big Picture course, or even my Docker Deep Dive course, because in that one, I really get into the weeds of how this is all put together inside of the Linux kernel. So, things like kernel namespaces and control groups. But yeah, there we are. Let's go work with containers.
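If you want to see that per-container PID 1 with your own eyes on a Linux Docker host, a one-liner like this makes the point (assuming the alpine image, or any small image that ships a ps):

```shell
# Each container gets its own process namespace: the process we start
# is PID 1 inside the container, whatever its PID is on the host.
docker run --rm alpine sh -c 'echo "my PID is $$"; ps'
# inside the container, $$ is 1, and ps shows only the container's
# own processes, not the host's
```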
The 'docker run' Command
I'm on a Linux machine here, that's what this funky little tux image at the top means. But you know what, it really doesn't matter if this is running on my laptop inside of a virtual machine, on bare metal in my own on-premises data center, or up in some super sexy cloud somewhere, it makes zero difference as far as how Docker works. In fact, we're now living in a world where I could be doing all of this on Windows, so this could be like native Windows Server containers or Docker for Windows on my laptop, or even Docker for Mac if that was my thing. The point is, irrespective of platform and operating system, you're going to get the exact same Docker experience. Anyway, docker version, this is always the first command that I run whenever I get logged on to a Docker host. Now, before I say anything else, I'm running commands here on Linux as root. Now, I'm absolutely not recommending that you do that, I'm just trying to keep things as simple as possible here. So, definitely no sudo prefixes or anything, because I want this to be as platform-agnostic as possible. Now, back on point. The output here is broken into two sections, client and server. At the top here, we get the version details of the client bits and pieces, and then down here we get the same for the server or the daemon bits. And if you've been following along with the install lesson, then these are the client and server versions running locally on the machine that you're logged on to. I'm saying this because it is possible to point the Docker client at a remote daemon somewhere on the other end of the network, we're just not doing that here. Well, docker info, this is another good one. So right up at the top here, we can see how many containers and images we've got. Not a lot right now. Then, below that we've got a wealth of version information. So, generally speaking, this is a really good command for seeing how things are on your Docker host.
Now, we're not going to get into the weeds here, that's for my Docker Deep Dive course, but feel free to take a closer look in your own time. But come on, I'm waffling, let's run a container. So, keeping it simple, we go docker run and hello-world. I mean, really? How simple does that look? Alright, that didn't take long, but what are we looking at? This is doing my job for me, so just let me cover this up while we step through what just happened there. First up, we typed a command, docker run hello-world. Well, all Docker commands start with the docker keyword if you will, calling the Docker binary in the background. We then said run, that's the standard way of saying hey Docker, go run me a new container please. And then we said, oh, you know what, run that container based on the hello-world image. Then we hit Return. The client went and talked to the daemon, the daemon checked if it had a copy of that hello-world image stored locally, it didn't, so that's what we see here with this unable to find image hello-world latest locally. That meant the daemon had to go away and pull the image from a place called Docker Hub. More on that in a second. So it pulls the image, which is just a fancy way of saying it grabs its own copy, and it used that image as a template to create a new container. Now, it's a really simple container or image, all it did was print a load of text to the screen, then it exited. So it was a super short-lived container, prints text to the screen, exits. Meaning, if we run a docker ps command, no container is currently running. Though, if we whack the -a flag onto the end of that, see how it shows as a container that was running, but is now exited. Well, that's obviously the container we just ran, and that is a command that you're probably going to use a lot, docker ps lists running containers. Remember, slap -a onto the end and you can see containers that you have run, but are now exited.
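For reference, the whole sequence we just walked through boils down to three commands, assuming a working Docker install:

```shell
docker run hello-world   # pull the image if it's not local, create the container, run it
docker ps                # running containers only - ours has already exited, so nothing shows
docker ps -a             # -a includes exited containers, so here's where ours turns up
```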
So if we now run docker info again, back up here at the top, we now see 1 container, 1 in the stopped state, and 1 image. Well, we've seen the container with docker ps, so to see the image, funnily enough we go docker images, and there it is. For now, think of repository as just the image name, hello-world. All images get tagged, that means we can act like grownups and version them, every image gets its own unique hash, this one was created a few weeks ago, and well, it's a whopping 967 bytes in size. Well, that's just magic. We've done a docker run to run a container, docker ps to see info about running and stopped containers, and docker images to see info about images, though only images that are stored locally. But I think that's a great start. We've got a few commands in the arsenal now, and we've run a container, but, I want to take a quick step back for a second and have a quick visual recap of what went on behind the scenes, just to solidify this before we move on to working with slightly more complex containers.
Theory of Pulling and Running Containers
So, a really quick recap of that whole docker run thing we just did. So, this here is our Docker host, Linux, Windows, on-prem, in the cloud, we don't care, do we? Well, it's running the Docker client and the Docker daemon. You'll often hear that combo referred to as the Docker Engine, though sometimes Docker Engine might just be used to refer to the daemon part. Either way, a standard Docker install gives you the client and the daemon on the same host. We issued a dead simple docker run command. That was the client component, it interpreted that command and made the appropriate API calls to the daemon. So, right there we learn that it's obviously the Docker daemon that implements the Docker Remote API. Again, sometimes you might hear that called the Docker Engine API. Either way, it's a client/server model. Now, docker run basically says start me a new container, and then we had to specify the image that we want to use as the template for the container. We said use the hello-world image. So, the daemon checked its local store to see if it already had a copy of it. In our case, it did not, so what to do? Well, it needs the image in order to start the container, so, what it did was it went and searched for it on Docker Hub. Now, Docker Hub is what we call a Docker image registry, a place where we can store images that we want to use later for containers. Docker Hub is the default registry that the daemon uses, and it's out there on the public internet, though other registries totally exist, including secure on-premises registries like Docker Trusted Registry, plus other public and private offerings from third-party ecosystem partners. The point being, the Docker daemon didn't have the required image locally, so it went and searched a registry for it, it found it, and pulled it locally. Remember, pull is just Docker lingo for making a local copy.
Once the image was pulled locally, the daemon did the heavy lifting needed to create and spin up a brand new container, based of course on the configuration inside of the hello-world image. Now cutting a long story short, that effectively translated into starting a new container that ran a simple command to display some text to the screen, including the words hello-world. And as that was the only thing the hello-world image was configured to do, the container did that job, and it exited, that left us with a local copy of the hello-world image in our Docker host's local registry, again, that's a fancy way of saying on the Docker host's local file system, plus we also ended up with a single container in the exited or stopped state. Fabulous stuff, and hopefully crystal clear. Well, let's go and do a bit more with containers.
Working with Images
Alright, we've just seen how to use the docker run command, and we've seen how containers are based off of images. Well you know what, if that terminology is a bit new to you, don't sweat, just think of images as stopped containers, then on the flipside, containers as running images. Speaking of which, we're going to take a few seconds now to make sure we've got our heads properly around images, because no images, no containers. Now, the docker run command automagically downloads required images, but, we totally can download images on their own, so, without firing off a container at the same time. So, here if we go docker pull alpine, that's going to go away and pull a copy of the official alpine image from Docker Hub. Alright, well that took all of no time at all, obviously a small image. But what about if we wanted the latest Ubuntu image? No prizes for guessing, that's docker pull ubuntu. Alright, this is going to take a smidge longer, obviously a bigger image. Now, each of these lines here by the way, is referring to an image layer that's being pulled, the Ubuntu image, it's obviously got four layers. Now, if you need more detailed information about images or layers, then pop over to my Docker Deep Dive course. But there we go. Again, that took relatively no time. Now, I did say if you were listening carefully, that was the latest Ubuntu image. What happens if I want a particular version, I don't know, say good old 14.04? Well, I just run the same command again, but at the end I'll lash on :14.04, and off that one goes. Okay, so by my count we should have four images now. Hello-world from what we did before, plus alpine, ubuntu latest, and ubuntu 14.04. Well let's see. I tell you what, I must be a proper expert to have known that. Anyway, what we're looking at here is the contents of this Docker host's local registry, or its local image store. 
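If you're following along, the pulls we just did look like this:

```shell
docker pull alpine         # no tag specified, so Docker assumes :latest
docker pull ubuntu         # same again - this is really ubuntu:latest
docker pull ubuntu:14.04   # an explicit tag after the colon pins a particular version
docker images              # list everything now in the local image store
```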
Basically, these images have been pulled down from Docker Hub and they're stored on this Docker host's local file system. Exactly where on the file system depends on things like the version of Docker, the version of Ubuntu, the version of Windows, all that kind of stuff. Now, in the left column here we can see we've got three local repositories, ubuntu, alpine, and hello-world. Each repository has got a single image, except for the ubuntu repo, that's got 2, 14.04 here and latest here. We can see each image gets its own unique hash and we can see sizes over here. Fabulous! Well that's all local, all on our Docker host. Let's go take a look at Docker Hub where we just pulled them all down from. To do that we just open up a browser here, we go to hub.docker.com, I'll log in with my Docker ID, if you haven't got one, then what is wrong with you? No, don't worry about it, you sign up for one now, it's free. This is what the Hub UI looks like. Now, if I come up here to Explore, these here are the official repos, official as in well look, here's the ubuntu one, this is managed and maintained by the folks at Ubuntu, or I should say Canonical, meaning you can be pretty sure that you're getting something that's official and stable, as opposed to something shoddy that some schmuck like me just uploaded. Anyway, if we click into it, we get some info. So some description info over here, and then the command over here that we'd use to pull the image. But I want to click Tags here. Oh, okay, these lines here are all the different images or versions of Ubuntu images that are available in this repo. Well we downloaded 14.04 and latest, so there's 14.04 and there's latest. If we drill in even further, we see some really useful security info relating to known vulnerabilities. Now, 14.04 here is carrying 4 criticals and a few of the less criticals. Yeah, not exactly ideal, but if we back out, and let's have a look at latest. A bit better, just 1 major. 
So, nice insight there, but let's not get sidetracked. Back on our host, I'm actually not very cool with Ubuntu 14.04, too many vulnerabilities, so I'm going to go docker rmi, rmi is just a nice little TLA for remove images, and I can either give its name and tag, or I can give its ID here. Well, you know what, I'm going to go name and tag, and that is gone. So, that's a quick primer on images. I've only got three pulled locally now, so I can spin up containers from those without having to pull from Hub, but any others that aren't in that list, the docker run command is going to need to pull from Hub before it can start containers from them. Cool. Now let's go and take a look at the lifecycle of a container.
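That removal works by name and tag, or by ID, something like:

```shell
docker rmi ubuntu:14.04   # remove by repository:tag...
# ...or pass the image ID shown by docker images instead
```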
Much has been said and written about the persistence of containers, or, the supposed lack of persistence. And you know what, much of what has been said and written is tosh, total and utter tosh. Not all of it mind you, but enough of it. True it is, containers are an outstanding fit for non-persistent workloads, but it's not like containers by design can't persist data, they can and they do. We're seeing more and more solutions that make persistence even better with containers, but I don't want this to be chapter and verse on container persistence, all I'm going to say is this. When you create a container with docker run, it goes into the running state. From there, we can stop it, restart it, stop it, restart it, stop it, I think you get the picture. But also, we can remove it. We can pause them and what have you as well, but as far as we're concerned, containers can be started, stopped, restarted, ad infinitum, and ultimately removed. The thing is though, when you stop a container, it's not like it's gone forever, wiped out of existence. No, it's still there along with its data on the Docker host. So, if and when you restart it, shock horror, it's going to come back to life with all of its data intact. You see, it's not until you explicitly remove it with a docker rm command that you stand any chance of losing it. So seriously, you literally have to go weapons hot, fire at will, and intentionally wipe it out of existence before you stand a chance of losing it and losing its data. And even then, if its data is in a volume or some other persistence store, that data is not going to go away without a fight. My point being, in a lot of ways, containers are just like VMs, you start them, stop them, restart them, everything's good, it's not until you explicitly destroy or delete them that you risk losing them. Anyway, look, the container that we created before, that hello-world one, it was really short-lived, and it didn't really do a great deal. A bit boring. 
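As a sketch of that lifecycle, with a made-up container name and alpine running sleep standing in for a long-lived app:

```shell
docker run -d --name demo alpine sleep 1d   # create and start a long-running container
docker stop demo    # stopped, but still on the host, data and all
docker start demo   # and it's back, data intact
docker rm -f demo   # only now is it gone for good (-f forces removal even if running)
```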
Well how about we go with this. Now, I appreciate it's a bit different to the last docker run we did, so let's pass through it starting from the left. It's docker run again, we're all good with that asking the daemon to create it as a new container, but this -d flag is new. That's telling the daemon to start the container in detached mode, basically throw it in the background and don't latch it on to my terminal here. Then, we give it a name. Knock yourself out there, as long as it's a unique name. Then we're mapping some network ports. Now, this particular container, you don't know this, I know that, but it's a web server listening on port 8080. Well, we want to be able to access it from port 80 on the Docker host, so this 80:8080 business is saying map port 80 on the Docker host to port 8080 inside of the container. That means in a second when we point the web browser to our Docker host on port 80, it's going to get mapped through to port 8080 inside the container, and we should hopefully see a web page. And then last but not least, we tell it which image to use. But remember how last time we just said ubuntu or alpine, and this time we're saying nigelpoulton/, and then the name of the image. That's because top-level images are official images, and they're a bit special, they don't need their own separate namespace like this one, they live at the top level, indicating hopefully that they're reliable and from trustworthy sources. You see, once you get into the realms of second level repos like this one, then I'm afraid you're totally at the mercy of people like me, and that's not so good, not so secure, and it's not so trustworthy, in theory anyway. Anyway, we're creating a new container called web, we're mapping some ports, and we're basing it off of this image. Let's go. Okay, looks good. That return code of sorts there is the ID of the container that's been created. Now, to point a web browser to this Docker host, and there we go. 
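So the command has this shape. The actual image name is the one on screen in the demo; I've put a hypothetical repo name in here as a stand-in:

```shell
# "nigelpoulton/example-web" is a placeholder, not the real image from the demo
docker run -d --name web -p 80:8080 nigelpoulton/example-web
# -d           detached: run in the background, don't attach to this terminal
# --name web   any unique name you like
# -p 80:8080   map port 80 on the Docker host to port 8080 inside the container
```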
Good grief, the button even works. Sorry, this is an old image from a previous course. Never mind though. To be clear, this web browser is pointing to my Docker host that's hosting that container we just started. So this here is the Docker host's publicly accessible DNS name. We're hitting it on the default port of 80, and as we can see, we're obviously getting through to the web server that's running inside of the container. It's a beautiful thing. But back on our host here, docker ps, and this is our container. The output is wrapped over a couple of lines, but container ID, image, command, how long ago it was created, look, it's up, port mappings, and its name. Well, if we go docker stop web, then come back here, let's refresh this, uh-oh, gone. What about if we docker start web, give this another refresh, and we're back in business. So that's a simple web container. Now let's try this. I'm going to go -it this time, so instead of it being detached in the background, I actually want to interact with this one, so I'm saying open its standard in stream and assign it a terminal. Then I'll name it, this is arbitrary, right? I'll base it off of ubuntu:latest, and I'll tell it to run Bash. Oh, subtle, but let me come to that in a second. In fact, no, did any of you notice the change? Well done if you did, but don't worry if you didn't, I miss it all the time, so you're in good company if you did miss it. But, what am I on about? Well, my prompt here just changed, I am now root at this funky hex string. How come? Well, I'm actually in the Bash shell inside the container we just created, and I changed the picture up here as well to show that. So if we go ping google.com, vim /etc/hosts, okay hang on, what's going on? Well, we're inside of that container, and containers are for the most part designed to be super lightweight. So even though this is an Ubuntu container, it's like bare bones, stripped down, furniture removed, and all the fat sucked out.
The thing is, for the most part you don't want to be SSHing into containers or anything and doing management stuff from inside them. I'm just showing you here that it is possible to have interactive containers and to get shell access into them, it's just not that common. Now, to exit from the container I can type exit here, which if I do, the container itself is going to exit as well, so actually let me show you this. Whoa, just bash and the ps process that it forked. Really? Yeah, really. So containers are very often single-process constructs. This particular container, we told it to run /bin/bash, the standard Linux shell if you're a Windows person, and guess what, it is running Bash, it's just that it's running Bash and only Bash. Lightweight, remember, but, because I'm in that Bash shell right now, if I type exit, then Bash is going to exit, and as that's the only process running in the container, well, that'll leave the container with no running processes, and the container would exit. So I'm not going to do that. Another way to get out of a container without killing it is by pressing Ctrl-P then Ctrl-Q. See, subtle again, but I'm out. And if we go docker ps, there it is, still running along with the web container. Now, just real quick, things are a little bit different with Windows Server containers, they've got a bit more going on inside of them, it's just the way that the Windows kernel works. So, Windows containers will likely show more than a single process, but you know what, we've now run a few containers. I think we should quickly tidy up our mess before we go and recap what we've learned so far in this module. So, first up we want to stop our containers. Now, you can go docker stop, and then the container name or ID if you want, but you know what, that's a real pain when you've got a lot of them.
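The interactive bits we just did look like this as a session sketch; the container name here is arbitrary, and the commented lines happen inside the container's shell:

```shell
docker run -it --name temp ubuntu:latest /bin/bash   # -it: interactive, with a terminal attached
# now inside the container:
#   ps aux          shows just bash and the ps it forked - a single-process construct
#   exit            would kill bash, and the container along with it
#   Ctrl-P Ctrl-Q   detaches without killing it
docker ps   # back on the host: the container is still running
```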
Well, I'm going to wipe this machine clean, so I'm going to go docker stop, and then instead of manually listing every container I'm going to say run that against the output of this command. So the -a here after docker ps says give us all containers, and the q says be quiet, so just return container IDs, effectively giving all container IDs to the docker stop command. That's stopped them all. We'll do the same again, but we'll change stop here to rm, that's rm for remove. Just a quick check, magic. The same for images. Okay, those are all the individual layers being deleted. A quick check again, and I reckon we're all clean. Let's go remind ourselves of the major things that we've learned.
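The tidy-up commands, for the record:

```shell
docker stop $(docker ps -aq)    # -a: all containers, -q: quiet, IDs only
docker rm $(docker ps -aq)      # remove every container
docker rmi $(docker images -q)  # then remove every local image
```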
Recap time. We now know that docker run is the command we use to start new containers, and, we know that it needs at the very least an image to work with. I think we said that images are like stopped containers, and that containers are a bit like running images, depending on which way you look at it. I don't know, maybe think of images as templates that let you stamp out containers. We learned that docker run is a Docker client command, and that the client makes the appropriate API calls to the daemon to instantiate our containers. In fact, it's the daemon that does just about all the heavy lifting on anything, and we said that the daemon implements the Docker Remote API. We also said that when creating a container, the daemon will search its local image store and see if it can find the image locally. If it can't, it'll then go and pull it from a registry. The default registry is Docker Hub, and that's out there on the wild west of the inter-web. We can run our containers detached in the background, and that's pretty common, that way they just crack on with their job, but if we really need to, it is possible to interact with containers by attaching a terminal to their standard in stream. And that is doable, I mean it's fine, it's just not the ideal way, and these are totally my words here. In the ideal cattle-herding, cloud native microservice architecture app, you're not going to be logging into containers and poking around. But, that being said, containers are absolutely not just for that design pattern. Yeah, they're top notch when it comes to that, but they can be used for other design patterns too. What else did we learn? We learned that images can be pulled locally with the docker pull command, and then we can view images already pulled locally with docker images. If we want to get rid of them, then it's docker rmi and then the names or IDs of the images, and pretty much similar for containers, docker ps lists all running containers.
If we add the -a flag, then it's going to show us stopped containers as well. If we want to blow them away, like sterilizing the Docker host of their existence, potentially losing any data in them, but not definitely, just potentially, well, that's docker stop followed by docker rm. Though you can skip the stop step, the stop step, that's easy for me to say, by adding the -f flag to docker rm, that's just going to force the removal, and I think that's the highlights. Well, coming up next. Take a seat and prepare yourself will you, because we are about to dive into the world of native Docker swarms and native orchestration with constructs like services and stacks. This is exciting stuff.
Swarm Mode and Microservices
This is the good stuff, we've saved the best until last, because while Docker and containers themselves are the future, what we're about to see here is the future of Docker in applications, so the future of the future if you will, only it's here already, so stick that in your pipe for a paradox. Anyway, in this final module, the module of all modules, we're going to get down with the latest and greatest in native container orchestration from Docker Inc. Things like swarm mode and swarms, services and tasks, stacks, all the game-changing stuff that was announced with Docker 1.12, which I suppose gives us our first prerequisite right there. If you're following along, you need to be on at least Docker 1.12. So we're going to do this. There's a whole rack full of new vocabulary and concepts and things to get our heads around, so we'll start out with a quick theory primer on swarm mode, and I don't recommend you skip this. There are significant changes, and you want to fully grok them before you go on. Well, once we're done with the theory we're going to build a swarm, and I cannot tell you how easy that is these days. After that, we'll see how to deploy a simple web app using services. Then, we'll scale it, then we'll roll and update it. Then, we'll deploy a more complex microservice app using stacks and bundles. So let's crack on and nail that theory.
Swarm Mode Theory
So, a few terms and some new concepts, but they're huge. So if you need to watch this clip a couple of times before you get it, I recommend that you do. Well first up, the crux of what's new with Docker 1.12 and later is native clustering, but I'm talking about proper native clustering here. None of what we had in the past with swarm where first up we would build a bunch of Docker engines, and then we'd grab swarm as a separate tool and product and layer it on top. No, absolutely not that anymore. What we're talking about here is proper native clustering, so all built-in, proper first-class citizen stuff. And we'll be all over the detail in a second, but we're calling this native cluster a swarm. Yeah, obviously it's a cluster of sorts, but the folks at Docker Inc are pushing the term swarm, so I'm happy enough to go with that. A collection of Docker engines joined into a cluster is called a swarm. And then the Docker engines running on each node in the swarm are said to be running in swarm mode. Lingo, lingo, lingo. Now, one of the cool things about this, and it's vitally important as well, is that swarm mode is entirely optional. As we'll see in a second, with a single short command we can put Docker into swarm mode and initialize a new swarm. But we don't have to, and, if we don't, well that's fine, Docker is just going to run like it always has in standalone mode or single engine mode, and in that single engine mode, it is fully backwards compatible. Now let me just repeat that. You do not have to enable swarm mode, and if you don't, then everything's 100% backwards compatible, it just runs like Docker always has. That means if you've got third-party clustering stuff going on, chillax, it's just going to keep working. Anyway, a swarm itself consists of one or more manager nodes, and then one or more worker nodes. As you might expect, the manager nodes look after the state of the cluster and dispatch tasks and the likes to the worker nodes. 
And managers are highly available, meaning that if 1 or 2 or however many of them go away, the ones remaining will keep the swarm going. And it works like this behind the scenes. While you can have X number of managers, an odd number is highly recommended, and only one of them is ever the leader or primary. All managers maintain the state of the cluster, but if a manager that's not the leader receives a command, it's going to proxy that command over to the leader, and then the leader's going to action it against the swarm. And, you can spread managers across availability zones, regions, data centers, whatever suits your high availability needs, but of course, that's all going to be dependent on the kind of networks that you have, you're going to want reliable networks. Now, I suppose I should mention Raft, because while we can have multiple managers for redundancy and high availability, that naturally gives us complexities over agreeing on the current state consensus. Well, Raft is the protocol that's used behind the scenes to bring order to that chaos by ensuring we achieve a distributed consensus. Now, manager nodes are worker nodes as well, so it's totally possible to have a swarm where every node is a manager node, though, more than five manager nodes generally isn't thought of as a good idea. The premise here is that the more managers you have, the harder it is to get consensus, or the longer it takes to get consensus. Just the same way as 2 or 3 people deciding which restaurant to go to is way easier than, I don't know, 20 people. Anyway, worker nodes on the other hand just accept tasks from managers and execute them. Speaking of which, that leads us nicely to services. So, services are also a new concept introduced with swarm mode, meaning if you're not running in swarm mode, then you can't do services. But, we're going to be running swarm mode, so, a service is a declarative way of running and scaling tasks. I'm sure that's as clear as mud.
So, as an example, say you have an app that consists of a back-end store and a front-end web interface, you'd implement that as two services, one service for the back-end store and another for the web front end, only with services, we tell Docker what we want the app service to look like, and it's now up to Docker to make sure that happens. So let's say we wanted five instances of the container that was running the web front end, you'd tell Docker that when you define the service. This is an example command here. So it's telling Docker, go create a service called web-fe, and I want five instances of the container or task it's going to run. Marvelous. Docker is going to go away and spin up five tasks; think of tasks as containers for now, and it's going to spread them across all the worker nodes in the cluster. Remember, managers are also workers, but here's the thing. It is going to make sure that there are always five of them running. If one of them dies, there's a reconciliation loop running in the background that'll say okay, I've got four tasks running for this service, wait up, I should have five, and it's going to start a new one. And that declarative model of expressing desired state and having Docker keep an eye on things, making sure that actual state always equals desired state, is both new and really powerful. Now we'll see this in a second, but I did mention tasks. So, a task is the atomic unit of work assigned to a worker node. We as developers or sysadmins or whatever tell the manager about services, then the manager assigns the work of that service out to worker nodes as tasks. Now, right now tasks means containers. A little bit more so, they include metadata about how to initiate the container and some runtime stuff, but we can pretty much think of a task as being a container. But, and this is crystal ball time here, there's actually nothing technically stopping tasks including other things like unikernels or any other unit of work in the future.
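The service command on screen has roughly this shape; the image name here is just a placeholder, not the one from the demo:

```shell
docker service create --name web-fe --replicas 5 <image>
docker service ls          # the service, and how many of its 5 replicas are running
docker service ps web-fe   # the individual tasks, and which nodes they landed on
```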
It's just right now, tasks mean containers. We've got a swarm consisting of a bunch of manager nodes and worker nodes, we define services, declare them to the manager via the standard Docker API, albeit new endpoints in the API, the manager then splits the service into tasks and schedules those tasks against available nodes in the swarm. And then, in order to deploy complex apps consisting of multiple distributed independently scalable services, we've got stacks and distributed application bundles. And there's more, and we'll see a bunch of it really, really soon, if I stop waffling. I reckon we've got enough to get started, so let's go build a swarm.
Configuring Swarm Mode
Okay, let's finally get our hands dirty and build a swarm. In fact, this swarm here, three manager nodes and three worker nodes. Now, in my lab here in case you're following along, I've got six AWS instances, all of them running Linux with Docker 1.12 or higher. Now, you can pretty much rock and roll however you want here, so this could be anything from six bare metal hosts on-premises, it could be VMs on your laptop, and obviously instances in the public cloud, it totally doesn't matter, and you know what, you can mix and match them pretty much how you like. Just as always, beware of network performance if you do that. The only absolute must though is Docker 1.12 or higher. I've got mine named and IP'd like this. The thing that matters though is that they can all talk to each other. So, I'm going to jump on to mgr1 here and initialize the swarm, that'll make this machine the first manager. Then I'm going to jump on these two nodes here and join them as managers. Then, these three down here are going to be joined as workers, and that'll be our six-node swarm, open and ready for applications. Here we are in mgr1, and we can see that I've got Docker installed, and we're at version 1.12 or higher. Well, to initialize the swarm we go docker swarm init, that's a new command as of 1.12, and it's fully optional remember, but as soon as you whack this command in, you're stepping into a brave new world. So docker swarm init, then I like to give it a couple of parameters. First up, advertise-addr. This tells Docker, no matter how many NICs and IP addresses this machine's got, this is the one to use for swarm-related stuff, like exposing the API. Now if the machine's only got a single IP, okay, you don't technically need to specify this, but how many production machines are there in the real world that have only got a single IP? Not so many in my experience. So I always use this flag and the next one as well, so I recommend you do too.
In fact, I've had issues just this last week in a setup with multiple IPs, where I was trying to do something really quickly, so I skipped the flags that I always use--never mind. Anyway, I paid the price. So, create the habit starting now. Anyway, we give it its IP and we'll use port 2377, then I give it the --listen-addr as well. This is what the node listens on for swarm manager traffic, and I'll tell it the same address and port. Now, for clarity, the addresses used here obviously need to be valid addresses on the node. But any of the nodes that want to be part of this swarm are going to need to be either on the same network, or you're going to need routes in place on your network so that those other nodes can reach this IP. Now, 2377 is not mandatory, you can actually go with any port that works in your environment. But the native Docker engine port is 2375, the secured engine port is 2376, and I think the guys at Docker are talking to IANA trying to register 2377 as the official swarm port. So, long story short, if 2377 works for you and your environment and you're looking to standardize on something, it's as good a place as any to start. Right then, let's run that. Okay, we look like we're in business. We can see here that the swarm's initialized and this machine's a manager. Looking good. But a couple of important things to note. This command here is the exact command you need to run on a worker that you want to join to this swarm. If you look at it, we can see it includes a token, without which no machines are going to be joining as a worker, and we'll use this one in just a second so we'll see it. But before that, the instruction here to add a manager, I think it's a bit, well, it's a bit weird, right? So, let's grab it and we'll run it. And look at that, we get another command that looks pretty much the same as the one above, only it's not. Well, the token is not the same. But let's do this first.
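Pulling those two flags together, the init step looks something like this. The IP address is just a placeholder from my lab; use your own node's reachable IP:

```shell
# Initialize the first manager node. The address below is hypothetical --
# substitute the IP on this node that the other swarm members can reach.
docker swarm init \
  --advertise-addr 172.31.10.1:2377 \
  --listen-addr 172.31.10.1:2377
```

Both flags take the same value here because we want swarm traffic advertised on, and listened for on, the same interface.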
Same command again, but switch out manager for worker. So, any time you need to know the command and token to join a worker or a manager, these are the commands, docker swarm join-token, and then either manager or worker. But look at the last section of each token and see how they're different. When joining a new node to the swarm, the way that the swarm knows whether it should be a manager or a worker totally depends on the token that you give it. One token is for managers to join, the other for workers. But let's just see what docker info shows us. Right, magic. Here we see Swarm: active, our NodeID, yes, this node is a manager, and this swarm has got, oh, 1 manager and 1 node. Hmm, interesting. Not so much, right? Remember, manager nodes are also worker nodes, so we've really only got one node in the cluster, it's just doing two jobs, manager and worker. But we want to join some more nodes, and we'll go with a manager first. So, yeah, this one's the manager join token, so we'll have that, and we'll swing over to mgr2, paste in the command, and then the obsessive-compulsive part of me kicks in. Like I said, I always add these two lines here, advertise-addr and listen-addr. But you've got to be careful, you've got to use mgr2's addresses here, so mgr2's own IP here that we want to use for swarm. Give it the wrong address or the address of another machine, and I'm speaking from experience here, things are going to be unpleasant. Well, we'll give that a try, and we're joined as a manager. So we run docker node ls here, this is a new subcommand in 1.12 by the way, and as we can see, 2 manager nodes, both accepted, ready, and active. And, only one of them is leader, more on that in a second. But you know what, this little asterisk here, this is interesting, this tells me which machine I'm logged on to and where I'm running the command from, so mgr2. Anyway, let's add a worker.
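If you ever lose the join commands, you can print them again from any existing manager. A sketch, with placeholder token and IPs:

```shell
# Print the full join command (including the secret token) for each role.
docker swarm join-token manager
docker swarm join-token worker

# Then, on the NEW node (here joining as a manager), paste the printed
# command and add the new node's OWN addresses. Token and IPs below are
# placeholders from my lab.
docker swarm join \
  --token SWMTKN-1-xxxx \
  --advertise-addr 172.31.10.2:2377 \
  --listen-addr 172.31.10.2:2377 \
  172.31.10.1:2377
```

The last argument is the existing manager's swarm address; the token alone decides whether the node joins as a manager or a worker.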
So if we skip back to mgr1 here and grab that command and join token for workers, well, let's whack that in here on mgr3. Now, you'll see why I'm making a node called mgr3 a worker in just a second, so bear with me, it's not actually a mistake. Obviously I'm going to add the advertise and listen flags here, get into the habit, seriously. But that's it, joined as a worker. Take a look at the swarm now. Oh yeah, of course you can only run docker node ls from a manager, and this machine here is a worker. So back over to mgr1, let's run it here, and 3 nodes. Now, we know that mgr3 is joined as a worker, well, we know that because it said so when we ran the command, but we can also derive it from the output here due to the fact that there's nothing in the MANAGER STATUS column here. And you know what, while that's interesting and good to know, well, we actually want it to be a manager. Well, that's fine, we'll just go docker node promote, I'll grab its ID up here, you can use its name as well if you want, but I like this, and flipping heck, instant promotion to manager. Now if only an HR department could get its act together like that. Well, docker node ls again, magic. This Reachable here means it is definitely now acting as a manager. So, that's our three managers working together, with one acting as leader and the others backing it up and referring commands over to it. Now to add our three workers. So first up let's grab that worker join command, this one, yeah, and let's do wrk1. Okay, but you know me, I'm going to add the advertise and listen-addrs. Now like I said before, you've got to be sure that you're adding the correct addresses for each node. These are the node's own IPs that it's going to use for swarm. So wrk2, same again here, but these are wrk2's IPs. And last but not least, wrk3. Again, wrk3's IP. So I reckon if we scoot back to mgr1 and docker node ls again, bingo, 3 managers and 3 workers.
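The promote step, plus its opposite number, looks something like this. Node names are the ones from my lab:

```shell
# Run from any manager: list nodes, then flip a worker's role.
docker node ls               # MANAGER STATUS column is empty for workers
docker node promote mgr3     # worker -> manager (node ID works here too)
docker node demote mgr3      # and back again, if you change your mind
```

Promotion is instant because it's just a role change in the swarm's internal state; no containers move.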
The managers are the ones that have got something in the MANAGER STATUS column at the end, the workers, they don't. Though technically actually, yeah, 3 managers and 6 workers, seeing as how each manager also acts as a worker. Now a couple of things to remember. See back somewhere up here how only one manager is leader, or primary, depending on who you're talking to, well it's only that leader that can effect changes against the cluster, but we can issue commands on or to any of the managers, they'll just proxy those commands onto the leader, who'll execute them. If that leader goes down, that's no problem, a new one gets elected as per the Raft protocol, and the world keeps turning. So, that's our swarm up. Three managers and three workers, or I suppose three dedicated workers. But, six worker nodes in total considering managers are by default workers as well. Blah, blah, blah, blah, blah Nigel, we are ready to start using services to deploy applications to our swarm.
Services. One of the major, major constructs introduced in 1.12, and they're all about simplifying and, I don't know, robustifying large-scale production deployments. And yeah, I probably just made up the word robustify. But, like we said in the theory intro module, services are all about declarativeness, good grief, I'm making up words like crazy here, but also the concepts of desired state and actual state with a form of reconciliation process running in the background, doing all the heavy lifting in the background, required to make sure that actual matches desired. Anyway, we manage services with a new subcommand, and you know what, I'm sick of PowerPoint, so let's just dive in. So I go docker service create, I'm going to name it psight1 just because pluralsight1 is a bit hard to type, I'm going to map port 8080 across the entire swarm to 8080 inside of the service. Now more on this in just a second. Before that I'm going to say here let's have 5 tasks in a service, replicas, tasks, containers, same thing, we'll be having 5 thanks, and I'm going to deploy this image as the app or the service. Bit of a long name. Well let's see. Okay, looks good, but we can check with the new docker service ls. Okay, there's our service psight1, and 5/5 of the tasks are already running. Now I know that was insanely quick. The reason for that is because I've run this earlier in testing, so I already had the image downloaded on every node. If you're doing this for the first time, well, it's going to take as long as it takes to pull your image from Docker Hub or wherever your image is stored. Well, if we dig a bit deeper with docker service ps, and then our service name, let's make it a bit smaller, sorry. Okay, well if you've got superhero vision, maybe you can see that all five tasks, replicas, containers, call them whatever you want, no matter, they're all up and running. So let's just step through the output. Every task or container gets its own unique ID. 
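The create command we just ran looks roughly like this. The image name is a placeholder standing in for the long one on screen, and the port/replica values are the ones from the demo:

```shell
# Create a service: 5 replicas (tasks/containers), with port 8080
# published swarm-wide and mapped to 8080 inside each container.
# "yourrepo/yourapp:latest" is a placeholder image name.
docker service create \
  --name psight1 \
  -p 8080:8080 \
  --replicas 5 \
  yourrepo/yourapp:latest
```

The -p here publishes the port on every node in the swarm, not just the nodes that end up running a task; that's the routing mesh we'll see in a minute.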
Then fortunately it also gets a friendlier name, which is basically the name of the service that it belongs to, and then it gets a task number added to the end, so servicename.tasknumber, or sometimes here people call it slot number. Well, then we see an image name, all tasks in the service use the same image, I guess unless you're doing a rolling update or something, which we're going to see shortly. We can see which node a task is running on, and actually you might notice ours are nicely balanced across the swarm. Then we see the desired state of the task and the actual state. Do they match? Okay, the world is a happy place. Then, there's a column at the end for errors. We've also got docker service inspect as well, this is good for drilling into the config of your service, we can see things like down here the image we used, then a bunch of settings that we didn't bother with, but further down here, and I think you'll find this important in the real world, the network config. Now, actually I should have said this, it's really easy and most of the time I always do this, I don't know why I didn't do it this time, I usually start my services on their own overlay network, just because overlay is the future of networking. When we do stuff in later lessons, I'll make sure to use overlay networks. And this is really cool, but it's all a bit Docker-y. I want to see what the app looks like in the real world. Well, when we exposed port 8080 for the service, we basically said any traffic hitting any node in the swarm on that port is going to reach our service. So, let me pop over here and we'll grab the IP of one of our nodes. Now I know this is AWS, but all I'm doing is getting an IP that I can hit one of the nodes on. So I'll have this one, and let's see what happens if we hit it. Where's 8080, there we go, that's our service. Now let me try and be clear here, because I know I'm in AWS and that's not going to be the case for everyone. 
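The three inspection commands we've been bouncing between are worth keeping to hand; a quick sketch:

```shell
docker service ls                        # one line per service, replica counts
docker service ps psight1                # one line per task: node, desired vs
                                         # current state, and an errors column
docker service inspect psight1           # full JSON config, incl. networks
docker service inspect --pretty psight1  # human-friendly summary instead
```

service ls tells you whether actual matches desired at a glance (e.g. 5/5); service ps tells you where each task landed and why anything failed.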
So, all I've done here is I've taken the IP, or actually I've taken the DNS name, but I've taken the IP or the DNS of any of the nodes in the swarm, then, I've hit it on port 8080. Now, if you're running something more locally, maybe like on your laptop or a physical server on-prem or something, you just need to hit it on whatever IP or DNS it resolves to. Now, if you're logged on to it locally, you can even hit localhost or 127.0.0.1, or whatever, as long as you're hitting it on port 8080. Now, the reason for port 8080 here is just because that's the way I've built this particular app, and it's how we defined the service. So hopefully that's clear. Now, we're hitting the service on mgr1 here, and we know that mgr1 is running a task or a container for the service. So, what would happen if we hit a node in the swarm that's not running a task for the service? Only one way to find out. So, let's run that docker service ps again, and apologies again for the size of the text here, but if you grab your magnifying glass and you look through the output here, it's the IP ending in 162 that's missing, and, that's mgr2's IP. So, mgr2 is the one that's not running a task. So if we come back here and we grab its DNS name, again remember, if you're not in AWS, just grab whatever IP or DNS you used earlier, and if you're logged on locally with a browser, just hit it up on localhost or something, but if we open up a new tab and we whack that in with port 8080, same result, we still hit our app. Now this is despite the fact, and let me be really, really clear here, despite the fact that we just hit the one and only node in the swarm that does not have a container running for the service. So we can literally hit any node in the swarm and we'll always get to our service. Now then, and man, would I love to get more into this here, but it's beyond the Getting Started course.
But you know what, behind the scenes there is some great network magic going on, making sure that you can expose a port for a service across the entire swarm. Then, as we saw a second ago, any time you hit any node in the swarm on that port, you're going to get your service. But as well as that, you're going to get your traffic load balanced across all of the tasks that form the service, so let that sink in, a full, container-aware load balancer that Docker are calling the routing mesh. And you can totally mix it with your traditional load balancer. Like, I'm in AWS, so maybe I'd use a classic Elastic Load Balancer, but whatever environment you're in, if you've got existing load balancers, you can still have them in the mix. So they would load balance traffic across the nodes in your swarm, then swarm, as the next layer, would use the built-in routing mesh load balancer to further balance the load across the containers in the service. It is a work of art. Though right now it's early days, so it's not layer-7 aware or anything like that, but, who knows what the future will bring. Now, I'd love to go into the detail, but it's too much for this course, but let's do this instead. A quick visual wrap of what we've done. So, we spun up a new service, I asked for five instances of the container in the service, we got six available workers. So, five of those workers got a container and one didn't. Boohoo. I also said map port 8080 on the entire swarm to 8080 inside of each container that's forming part of the service. So, up here against the entire swarm, all six nodes, even though one of them isn't even carrying a load for the service at the moment, we get a port mapping. So taking 8080 coming in and mapping it through to 8080 on each of these containers. Now, I'm only showing that mapping on the end node here so that the diagram doesn't get even uglier, but effectively every node gets this mapping here.
The detail is a little bit beyond this course, I'm hoping at some point I'll get around to doing a swarm mode networking course where we can get into all the lovely details of kernel IPVS, sandboxes, ingress networks, VXLANs, all that good stuff. But for now, what we need to know is that we can hit any of these nodes in the swarm on 8080 and get sent to a container that's running as part of the psight1 service, and all nicely load-balanced. That means any external load balancers down here, they can balance across all the nodes, even this one here that doesn't have a container for the service, and then the swarm-wide, container-aware routing mesh load balancer will balance across all of the containers in the service. And I know I said it was a beautiful thing, yeah, looking at that picture, that is not beautiful, but it works and it is great. Well, that's cool and all, but no rest for the best. Next up, we're going to put our desired state to the test and we're going to see how to scale the service.
Quick docker service ls, yeah, we've got a service running here with 5/5 replicas in the running state, and if we look here, maybe not. Let's fix that. Now we can see that yeah, the world is a happy and a harmonious place. Why? Well because actual state matches desired state, 5/5 up here, and then all 5 down here are running and should be running. But, we've got one node that's not doing anything, we've got six nodes and only five tasks. So, who is it that's sitting there doing nothing? I think that's mgr2, I don't see mgr2 or the IP ending in 162 anywhere in the list. Mind you that's not necessarily surprising seeing as how the text is practically microscopic. Never mind though, let's switch over to mgr3 here, who hopefully does have a task running. Alright, let's go weapons hot here and we'll sink this worker. See ya. Okay, back over here on mgr1, oh right, all five here are showing as still up and running, and that's a good thing, but how come? I mean, we just nuked a node running a task, so shouldn't we be at 4/5 tasks? Well, let's drill in a bit deeper. Well, the first thing to note is that mgr2 here, the one with the IP ending in 162, remember, this node wasn't running any tasks before. Well now it is, and for the last 14 or so seconds to be exact. And you know what, we can also see the task below it that was on the nuked node, that's showing as shut down. This running 17 minutes ago in the CURRENT STATE column, that's probably a bug or something, it's not running anymore, we've got 5/5 tasks, we know that. Never mind though. But let me say this, because I don't want the moment to be lost here. We just nuked a node, nothing at all graceful about it, right? When that node went down, our desired state of 5 running tasks no longer matched the running state, which will have changed to 4. Well you know what, that's like cue alarm bells inside of the swarm, full on, red alert, all hands scrambling to stations to fix it.
Yet here, in my idyllic world, nothing, everything's calm. I've got my feet up on the desk, it's peaceful, serene, just the way things should be when things like this happen. I mean, a system goes down, no need to beep me on my pager, just deal with it yourself swarm, and let me keep relaxing. You know what, if I wasn't looking closely here, I doubt I'd even have noticed. But yeah, let us not underestimate what just happened. We got violently wrenched away from our desired state, but swarm stepped up to the plate and dealt with the issue admirably. Zero manual intervention from yours truly. I think I'll have a cold beverage and relax. But no, we're not done yet. To scale our service, so to add or remove tasks or containers, we go docker service scale, and by the way, this is actually just an alias for docker service update --replicas, but we go docker service scale, psight1 for our service, then =, and however many we want, I don't know, 7. And it scaled already, just like that. Drink, says the Slackbot. Docker service ls again, right, 7/7 running. 7/7, reminds me of Seven of Nine. Never mind. Now then, if we dig deeper again, 162 has got 2 containers here, that's mgr2, and so does 165, that's wrk2. Well if we scale the service again though, this time to 10, in fact you know what, let's do it this way instead. Remember, docker service scale is just a shortcut to this. And I'm showing you this just in case they're a bit naughty in the future and take away the scale shortcut. I'm sure they won't, but either way, taking our service to 10 should in theory, hopefully, give us 2 containers on every node. Remember, we're down to five nodes since we blasted worker whatever it was off the planet a minute ago, and that's how bad my memory is, I've forgotten which worker it was already. But you know what, we're herding cattle remember, we shouldn't care about stuff like node names anymore, at least in some cases.
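Both forms of the scaling command, side by side, using the service name from the demo:

```shell
# The shortcut...
docker service scale psight1=7

# ...and the long form it aliases. Either works; the long form is the
# one guaranteed to stick around.
docker service update --replicas 10 psight1
```

Either way, all you're doing is declaring a new desired state; the swarm's reconciliation loop does the actual adding or removing of tasks.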
Now, to me, that kind of looks like two tasks per node, so congrats to the Docker engineers on that test passed. Drink. Sorry, this is getting out of hand, I know. What if we bring that dead node back to life, or, I don't know, add a new node to the swarm for that matter? I mean, we're all nicely balanced at the moment, and we'll throw a new node into the mix, altering that balance. So I'm asking, will that new node that we add instantly pick up its fair share of the existing workload? Only one way to find out. So we're here in AWS. Again, look, this can be whatever tool or platform you're using, but I'm going to bring mgr3 here back to life, same difference as adding a new node. Wave my magic wand, abracadabra and all that nonsense, and thanks to the magic of video editing, that is back up. Well, a cheeky little docker node ls here, yep, that's showing as added back to the swarm as well. When it's down it'll show as down here, but it's ready. So it was mgr3 that we just brought back up. Well, let's see if it picked up any tasks and if it's doing any work, but let me scale the screen to microscopic font again. Heck, we're just following Poulton's Law here that says the amount of text per line doubles every time Docker releases a new version. Anyway, good grief, it's barely visible. Apologies if you're watching on your iPhone, but, if you can almost see it like me, mgr3's IP address ends in 163, and I don't see it in the list anywhere, well, other than the shutdown task here. So, what does that tell us? Well, at the time of recording, and I'm sure you're sick of hearing me say this, I'm sure as heck sick of saying it, but this stuff is always changing. At the time of recording, newly added nodes don't get existing tasks automatically rebalanced across them. That might change in the future, but today, adding new nodes to the swarm or bringing old ones back does not automatically rebalance existing running tasks.
Anyway, we can cycle through the nodes and see this another way. So if we go docker node ps and mgr1, 2 tasks. If we go same again, but the rest are named differently, aren't they? Maybe I should explain this actually. So, this ip- prefix, and then each node's IP address, but delimited by dashes here instead of dots, this is just the internal DNS name for each node. So 162 here, this is mgr2, 2 tasks. Same again, 163 this time, so mgr3, and this is our outlier, just the one shutdown task. Let's go through the rest though. Two tasks. Two tasks. And finally, another two tasks. Oh well, that right there is what we've learned in this lesson: the power of a declarative desired state, and, the sheer simplicity of scaling. Now, if only my terminal scaled so smoothly, seriously. Never mind though, loads more cracking stuff to come, next. Oh yeah, rolling updates.
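The per-node check we just cycled through looks like this. The dashed hostname is an AWS-style internal DNS name from my lab, so treat it as a placeholder:

```shell
# Which tasks is a given node carrying? Works with the node name...
docker node ps mgr1

# ...or with the node's hostname, here the AWS-style internal DNS name
# (each dot in the IP becomes a dash). Hypothetical example:
docker node ps ip-172-31-1-162
```

Unlike docker service ps, which is one-service-across-all-nodes, docker node ps is one-node-across-all-services.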
Okay, pushing updates to applications is a fact of life, and for the longest time, it's been a royal pain in the backside. I've personally lost many weekends to monolithic, all hands on deck legacy application updates, and you know what, I have no intention of going there again if I can help it. Well, rolling updates, like we're going to see here, when coupled with things like microservice architectures and the likes, let's just say they promise a better future. So, to see this, we're going to deploy a new service. So, let's clean this one up first. So, docker service rm psight1, let's just check, it looks nice and clear, but no harm in double checking. I reckon we're clean. Now, first up I'm going to create a new overlay network for this service. Just call that ps-net. And just make sure it looks okay. Yeah, there it is at the bottom, ps-net, overlay. And I'm going to deploy this service here. Now look, we've seen all this before, so just quickly. We're creating a new service, calling it psight2, putting it on the ps-net network that we just created. This time we're going to publish it on port 80 across the swarm remember, we'll have 12 tasks, and we'll base it off of this image. Now a couple of quick things about the image. First up, this is a fork that I made of the common voting app that's used all the time to demo Docker functionality. Seriously, it's become like the Hello World of Docker demos, so I thought I'd better throw it in somewhere. But, second up about the image. This time I'm being specific about which version of the image to deploy. I recommend you're always specific. Now, if we scoot over here real quickly to my repo in Docker Hub, we can see that I've got two versions of the image -- the v2 and latest tags are just tags of the same version of the image. So I've got v1, and then v2 and latest. Back here, and off that goes. We'll do our usual checks. Okay, 0 out of 12 tasks currently running.
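The cleanup, the overlay network, and the new service, as one sketch. The image name is a placeholder for my voting-app fork; the point is pinning an explicit tag:

```shell
# Clean up the old service and confirm it's gone
docker service rm psight1
docker service ls

# New overlay network for the service
docker network create -d overlay ps-net
docker network ls

# 12 tasks on that network, published on port 80 swarm-wide.
# "yourrepo/vote-app:v1" is a placeholder -- but do pin the tag.
docker service create \
  --name psight2 \
  --network ps-net \
  -p 80:80 \
  --replicas 12 \
  yourrepo/vote-app:v1
```

Pinning :v1 instead of relying on :latest is what makes the rolling update to :v2 an explicit, controlled step later.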
Now, I imagine that's because Docker is off pulling the image from Docker Hub, I expect it'll probably take a second or two, but you know what, we can still drill down while that's happening. Oh, this will be the end of me, seriously. I hope you've got your glasses on. Well, if you can see them, they're all there, and they're all at version 1 of the image. And huh, most of them are in the Running state, hmm. Anyway, look, we can see that in the time it took us to resize the screen and punch in that command, the image has been downloaded and all of the tasks, well, most of the tasks are running. And if you want, you can pause the video here and check it out, but there'll be two tasks per node because of the way swarm balances tasks across the nodes. But yeah, there's an elephant in the room. It looks like we've had a task fail to start, rejected. It's on mgr1 and it failed to start due to a gateway allocation error. Okay, really interesting actually, but now is not the time to get sidetracked. Okay, let's see if we can reach it. So if we come here and grab the IP of any of our nodes, now let's stick it up here in a new tab, and there we go, a simple web app that lets us vote whether or not the beautiful game should be called football or soccer. Now I'm talking about the game played with this kind of ball. I know, obviously its real name is football, I don't know who came up with the name soccer, what is that? Anyway, this is what version 1 of our app looks like. Now, if we come back here and inspect our service like this, we see, yeah, the image we're using. Hmm, nothing under Placement strategy right now, but if we come down here we can see the service's Networks stuff. But what I'm most interested in for now is this section here, UpdateConfig. Now then, when you initially create a service, so with docker service create, you can set a couple of update-related settings.
Now, I'm personally not in the habit of doing it when I create the service, it might work for you doing it that way, help yourself. But what these options do though is set defaults for the service, meaning let's say if we were to update the image in this service from v1 to v2 based on the config that I'm showing you on the overlay screen here, it's going to do it 2 tasks at a time, so roll 2 tasks to version 2, wait for a 10 minute delay, and then roll another set, wait for 10 minutes, roll another set. Get the picture? But, I didn't set any of these when I started the service, so for me to update it I can go docker service update, then image because I want to update the image. You know what, we could set it to update the network config, or whatever, but I want to go to version 2 of that image we looked at earlier, then I'm manually saying here update-parallelism, try spelling that, I'll go 2, and then I'll say update-delay and 10 seconds. So, docker service update, self-explanatory. We're going to update the image to this version, and then this is how we're going to do it. We're going to update 2 tasks at a time, so 2 in parallel. After we've done the first 2 we're going to wait for 10 seconds, then after 10 seconds is up, we'll update another 2, wait another 10 seconds, update another 2, essentially steamrolling our way through all 12 tasks that we've currently got in the service. Speaking of which, we've got to tell it our service here, and let's go. Now then, a really good way to monitor the progress is docker service ps. which obviously works miles better when the output doesn't wrap over 50 lines. Well, we can see that our first 2 have already rolled and they're at version 2. In fact, they've only just gone, I only have 1 second before I run the command. 
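The whole update command in one go, with the same parallelism and delay values from the demo (the image name is a placeholder for the v2 tag on screen):

```shell
# Roll the service from v1 to v2: two tasks at a time, with a 10-second
# pause between each batch of two.
docker service update \
  --image yourrepo/vote-app:v2 \
  --update-parallelism 2 \
  --update-delay 10s \
  psight2
```

With 12 tasks, 2 at a time, and 10 seconds between batches, a full roll works out at roughly a couple of minutes end to end.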
And over the process of the next 120 seconds or so, every one of these here tasks is going to get its number called, meaning that if we hop back over here and if we hit refresh a few times, there we go, we start hitting some new versions of the app, so it's now containers versus VMs instead of football versus soccer. Now the longer we wait and the more we hit refresh, if we kept doing that, we'd see more and more of this come up versus the old version. Now, I'm an impatient person, but I reckon I've waited a good 120 seconds by now, so let's check this again. Okay, so that's not massively easy to read. Tell you what, how about good old grep? Right, that's better. Now if there's 12 on there, which it looks like there might be, then we're all done and dusted. So, we've rolled through updating all 12 tasks or containers in the service, 2 at a time, and holding off 10 seconds in between each set of 2. And if we inspect our service again, okay, well first up we can see the status of the update, all complete, and some timing info. But look down here as well, we can see the UpdateConfig of the service now matches the update parallelism and update delay settings that we just put in a moment or two ago, which, at least at the time of recording, means we can do further docker service updates without having to manually type the update parallelism and the update delay flags. What'll happen is we'll automatically get the values that are here. Oh yeah, that's rolling updates with Docker services, a real go-to weapon in the war against monolithic apps. Next up, our final new technology to learn. I'm starting to get sad, it's like that kind of feeling you get, when it's towards the end of your summer holiday from school, and it's still holiday, but the end is looming, well, we've got more to come here in this course, but the end is fast approaching.
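The grep trick for watching the rollout, plus the inspect check, looks something like this:

```shell
# Count how many tasks are already on the new image tag
docker service ps psight2 | grep v2

# And confirm the update status plus the saved UpdateConfig
docker service inspect --pretty psight2
```

Once the grep shows all 12 tasks on v2 and inspect reports the update as complete, the roll is done.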
Anyway, we're going to look at stacks and distributed application bundles, literally the icing on the cake of everything that we've learned here, so I promise you won't want to miss it. See you there.
Stacks and DABs
Stacks and bundles. Now, I know we've seen this a lot in the course, but it has never been more true than it is right now, because what we're about to see here is taking the bleeding edge to another level. So, for starters, this stuff not only needs a minimum of 1.12, it needs 1.12 experimental, where if you don't know the experimental channel, that's where everything is in more flux than the flux capacitor. And that's a pain for me as an instructor, but you know what, it is so flipping important what we're going to see here that I'm willing to take that pain for you guys, because I'm really hoping that it will nicely cap off what we've learned in the course, and it'll give you a great view of where Docker are headed with all of this stuff. So, most applications in the real world, I'm sure we get it, are made up of multiple layers and interacting components, like web front ends, datastores on the back end, message buses, workers, reporting engines, all hopefully independent, but working together. Well for us in this course, each of these is a service, and together, those independent services form a working application that hopefully delivers some business value. Well, that's great and all, except all we've seen so far in this course is how to work with single services. What we're lacking is a way to bundle all of these services together, and then package them and deploy them, everything else, as a single unit. A bit like if you've used Docker Datacenter or Docker Cloud at all, like Docker Cloud for example, it's got this notion of a stack, sound familiar, and this stack comprises multiple services. Sound familiar again? Well, you feed Docker Cloud a stack file that defines all the services in your app, and then you deploy the whole app from that single file. You know what, I think I actually show it in my Docker for DevOps course. I mean it's been a while, but I think I do. Anyway, that all sounds great, can we have some of that for swarm mode please?
Well, it is being worked on, and that's what we're going to have a look at now. What we've basically got at the moment with 1.12 in the experimental channel is the concept of a stack, an application made up of multiple services, and we deploy these stacks from what we're calling DAB files, distributed application bundles. A DAB is just a new open format for distributing and deploying stacks. And of course, we get a new subcommand for working with them, docker stack. So you know what, let's just cut to the chase and see it. Well, first things first, before we do anything, we need at least the 1.12 experimental build. This here is the GitHub repo if you want to build from source, or if you run one of the major Linux distros, like for sure Ubuntu and CentOS and their upstream and downstream cousins, then you can install it with this command. Now, this is just personal opinion time here, without any insider knowledge other than I guess knowing how important this stuff is to Docker, but I'd expect we'd probably see this stuff go GA in 1.13, fingers crossed at least. Now, if you're rocking it with Windows Server, I'm giving this stuff a wide berth there, at least until Server 2016 goes GA. Anyway, this machine here is running the experimental build, as we can see here. Now, I've cloned a fork of the instavote example voting app repo, the one that I said earlier was like the de facto Hello World for Docker demos. Well, this here is the architecture for the app, and while I appreciate it's a small app, I am tipping my hat to its developers, because they have taken a proper microservice approach here. I mean, we've got five fully independent services, written in five different languages and frameworks. So we've got Python, Node, there's even some .NET down here with some C# in it. It's a lot like how things come together in the real world, a mishmash of just about everything you can think of.
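If you want to follow along, here's roughly what getting onto the experimental build looked like at the time. Treat this as an illustrative sketch: the experimental install script URL moved around over the years, so check the current Docker docs rather than taking these commands as gospel.

```shell
# Illustrative only: at the time, the experimental channel had its own
# install script (the location has since changed -- check the Docker docs).
curl -fsSL https://experimental.docker.com/ | sh

# Verify we're on an experimental build -- the client and server version
# strings should carry an experimental marker.
docker version
```
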
Well, five microservices comprising this app, each of which can obviously be managed by its own team, released and updated on its own schedule, and obviously each can be scaled independently, the whole shebang. Anyway, we're going to look at this docker-compose.yml file here. Would you believe it, it fits perfectly on the screen, it's about time. But what is Docker Compose? Well, for us here, let's just think of it as a Docker tool for running multi-container apps the old way, and these are my words, but think of it as from before swarm mode and stacks and DABs came along. The first thing to note here is the version tag at the top. For what we're going to be doing, it needs to be 2 or higher. Then it defines five services. So the vote service here at the top, we've got how to build it, how to run it, the volumes it needs, the ports it needs, everything we need to roll with it, and the same for the other four. The compose file here defines the entire five-service app. Marvelous, only swarm mode can't use it, which is a shame. What swarm mode needs is this compose file converted into DAB format. A shame, I know, but it's not the end of the world, Docker Compose can convert it for us. Except, I haven't got Docker Compose, and maybe you don't either. So, to get it, just curl this URL here. This won't take long, and it puts it in /usr/local/bin. Now, one last thing we need to do is set the execute bit on it. And you know what, it's got to be version 1.8 or higher, and that's what we've got, so that's good. Well, now that we've got Docker Compose, we can convert the compose file into a DAB file, only I've had a few issues doing that. So, here's what I've been doing. First up, I'll docker-compose build. Now, that's probably going to take, oh, that's quick for me because I've done this before on this machine, so all the images are pulled locally and I've got a cache built up.
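For reference, here's the general shape of a version-2 compose file for an app like this. This is an abbreviated, illustrative sketch rather than the exact file from the instavote repo; the service names match the app, but the build paths, image tags, and port numbers (other than 5000, which we see on screen) are assumptions.

```yaml
version: "2"            # must be 2 or higher for bundle conversion

services:
  vote:                 # Python web front end
    build: ./vote       # built locally (swapped for an image: line later)
    ports:
      - "5000:80"       # host port 5000 -> container port 80
  redis:                # in-memory queue between vote and worker
    image: redis:alpine
  worker:               # the .NET/C# background worker
    build: ./worker
  db:                   # Postgres datastore
    image: postgres:9.4
  result:               # Node.js results front end
    build: ./result
    ports:
      - "5001:80"
```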
For you, if you're following along and using the same app, it's going to take like a minute or two, and you're probably going to see a bunch of red in the output, but don't worry, it will build. Well, let's have a clean screen and we'll look at the images that we've got locally. Alright, that's all the images we're going to need for the app. Now, the top three are the ones that were built locally with that docker-compose build we just did, whereas the others, these are official images that are available on Docker Hub. So, next we want to tag the top three so that I can push them to one of my repos on Docker Hub. Now, I'm going to save you watching me type in here. Basically, we're just tagging the three images so they can be pushed to my nigelpoulton repos on Docker Hub. I do actually live inside of Docker Hub, if you didn't know that. Anyway, if you're following along, you're going to need to log in to Hub with your Docker ID, and I've got one, no problem. Go to hub.docker.com, sign up for one, it's free, and it works straightaway. Anyway, once you're logged in, we've got to push those newly-tagged images. Again, you don't need to watch this in real time, all I'm doing is taking the three images that we just tagged and pushing them to hub.docker.com, and I'm guessing you know all about Docker Hub if you've done my Docker and Containers: The Big Picture course. No? Go check it out. One last thing before we can actually build the DAB file. We'll edit the compose file here, and what we're going to do is get rid of these build lines here and replace them with image lines, pointing to the images that we just pushed to Hub. And actually, while we're on, we'll get rid of these volume instructions here, because DABs don't work with them, and we've already built the images anyway. Great.
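The tag-and-push step I skipped over looks something like this. A sketch, not a recording of my exact keystrokes: the local image names depend on the directory you built in, and <your-docker-id> is a placeholder for your own Docker Hub ID.

```shell
# Tag the three locally-built images under your Docker Hub ID
# (the local names here are illustrative -- check docker images for yours)
docker tag voteapp_vote   <your-docker-id>/vote
docker tag voteapp_worker <your-docker-id>/worker
docker tag voteapp_result <your-docker-id>/result

# Log in with your (free) Docker ID, then push all three to Docker Hub
docker login
docker push <your-docker-id>/vote
docker push <your-docker-id>/worker
docker push <your-docker-id>/result
```
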
That's the docker-compose file how we need it. The thing is, in its current form, being a release candidate in the experimental channel, it needs the images for these three services prebuilt and accessible; it just won't build them on the fly. Do you reckon it might be time to build that DAB file yet? I hope so. Well, we'll just go docker-compose, and then bundle, short for distributed application bundle, I guess. And that's it. Well, it took us a while to get here, but the actual conversion was quick. That's dropped a new DAB file into our working directory here, with the .dab extension, and the file gets named after the directory that you're working in, so we get voteapp.dab, minus the dash in the middle there. But if we take a look at it, well, I hate JSON. Please, Docker, give me YAML, I just can't be dealing with all the braces and commas and stuff, I'm a simple person, YAML for the win. Anyway, the important thing is that the file describes all the services in our application stack, so there's db, redis, result, vote, and worker. Well, now that we've got the DAB, we can use it to deploy the stack. So I'm going to go docker stack deploy, though you can shortcut that and just go docker deploy, and then we just give it the name of our stack. I think you can say .dab on the end if you want, but you don't need to. Basically, I'm telling it here the name of the DAB file in my local directory. Well, how quick was that? So, creating a network, then all five services. No prizes for guessing, but docker service ls, and look at that, five services up and running. And notice how each service is named as a combination of the stack name plus the service name, and then notice as well how each service only has a single task. Well, that's a current limitation of stacks and DABs, or at least the tooling around them. We can't currently use DAB files to set the number of tasks in a service at deploy time. But stuff like that is par for the course with experimental features.
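Pulling those steps together, the convert-and-deploy sequence went roughly like this. A hedged sketch: the stack subcommands were experimental in 1.12 and changed shape in later releases, so the exact syntax may differ on your build.

```shell
# Convert the compose file in the current directory into a DAB file.
# The output is named after the directory, minus punctuation,
# e.g. vote-app/ -> voteapp.dab
docker-compose bundle

# Deploy the stack from the DAB (docker deploy was a shortcut for this).
# It creates a network plus one service per entry in the DAB.
docker stack deploy voteapp

# Each service comes up with a single task -- replica counts couldn't be
# set at deploy time with the experimental tooling.
docker service ls
```
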
Well, here's another docker stack command that lets us look at the tasks in the stack, just like docker service tasks does for a single service. And let's take this task here, or just the service actually, and inspect it. Okay, great, just the bit I wanted actually. This is the network stuff. Well, what's this published port 30006 here? I didn't tell it that, and I think if we look in the compose file here it's something like 5000. Yeah, it was 5000. So, if we've got 5000 down here in the compose file, how have we ended up with 30006 up here? One word: experimental. It is early days on this, remember. Right now, DABs and stack deployments don't let us specify published ports at deploy time. No doubt a future, more production-ready version will let us do that. Whether that's directly through the DAB file or some other deployment-time abstraction, I've no idea, but you can rest assured it will come. Now, speaking of 30006, let's go grab one of our nodes' IPs, or its DNS name actually, and let's try it on 30006. Look at that, it works, though I won't say cool. It would be cool if I could hit it on 5000 like the compose file says, but patience, Nigel, that will come. We're clinging to the bleeding edge here, remember. Well, I tell you what, it is like silly o'clock here now, so I'm going to bring this to a close, but let's quickly clean up. So, docker stack rm, and I'm liking the familiar feel of these commands by the way, they've got a good, common look and feel to them, but that should be it. Yeah, clean up with a single command, just what you need at silly o'clock. But a quick double check. All squeaky clean. So, that's us done: a sneak preview into the future of the future, teetering over the edge of the bleeding edge. But I'm hoping it was enough to give you some real insight into where this stuff is headed. And you know what, I am wrecked, and I think if ever I created a module that needed a recap, this is the one.
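And here are the poking-around and clean-up commands from that demo, again as a sketch against the 1.12 experimental tooling; later Docker releases renamed some of these.

```shell
# List the tasks running in the stack
# (like docker service tasks, but stack-wide)
docker stack tasks voteapp

# Inspect one of the services to find the auto-assigned published port
# (--pretty gives human-readable output instead of raw JSON)
docker service inspect --pretty voteapp_vote

# Tear the whole stack down -- services, the lot -- in one command
docker stack rm voteapp

# Double check that everything's gone
docker service ls
```
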
We've built swarms, declared services, scaled them, rolling-updated them, and then we've explored the outer limits of experimental builds with stacks and bundles. Let's do a recap.
Okay, what a module. A lot to take in, I know. We started out initializing a shiny new swarm. We spun up a brand new six-node swarm with what I reckon were two ridiculously simple commands. We ran docker swarm init on the first manager, and we ran docker swarm join on the rest of the nodes. There were some arguments to add to each command, but two insanely simple commands to build a highly available swarm, all secured with TLS, key rotation, and all that usually painful stuff done for us. And if you've tried securing clusters like these manually in the past, you will know what a genuine life changer this is. But yeah, docker swarm init to create the swarm, then a single docker swarm join on every node that you want to join the party. We said that managers maintain the state of the cluster, and they issue work to workers in the form of tasks. Right now, a task equals a container. And then, speaking of which, we also threw ourselves into services, the new declarative way of running, scaling, and updating groups of containers running off the same image. We used the new docker service create command to declare a desired state for our service, and then we left the rest up to Docker. Seriously, with a single command, I think one of the times we told Docker to fire up 12 containers and expose them all on a single load-balanced port across the entire swarm. Then, in the background, swarm or Docker would fight to maintain that state, come rain or shine. Man, brilliant. Then we looked at scaling services. Remember, we can scale up and down, all with a super simple docker service scale command. Then we saw how docker service update can be used to perform rolling updates across the containers, or tasks, in a service.
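Those swarm-building commands, plus the service declaration, can be sketched like this. The addresses, tokens, image name, ports, and service name are all placeholders, not the exact values from the demos.

```shell
# On the first manager: initialize the swarm
docker swarm init --advertise-addr <manager-ip>:2377

# On every other node: join the swarm using the token printed by swarm init
docker swarm join --token <join-token> <manager-ip>:2377

# Declare a service: 12 replicas of one image, all reachable
# on a single load-balanced port across the entire swarm
docker service create --name web --replicas 12 -p 8080:80 <image>

# Scale it up or down later with one command
docker service scale web=20
```
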
In the example we did, we took a service from version 1 of an image and told Docker to go and update all the tasks in the service to version 2 of the image, but we said to roll through the service updating two tasks at a time, and holding off for 10 seconds between each batch. Hallelujah. Then we looked at the icing on the cake, stacks and distributed application bundles, the future of the future for orchestrating entire multiservice apps on Docker infrastructure. Wow. Well, you know what, I've put together a quick two-or-three-minute what-to-do-next module for you, as a nice way to wind down from everything that you've learned here. So seriously, do yourself a favor and chill out for a couple of minutes while I point you to some useful things for taking your Docker journey to the next level.
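That rolling update boils down to a single command. Again, the service name and image tag here are placeholders:

```shell
# Roll the service from v1 to v2 of its image, updating two tasks
# at a time and pausing 10 seconds between each batch
docker service update \
  --image <image>:v2 \
  --update-parallelism 2 \
  --update-delay 10s \
  web
```
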
You finished the course. Get in! So what are you doing here? You should be out changing the world, or at least celebrating. Well, before you do that, let's talk about where you can go next, although the question is more appropriately where can't you go next? Well, keeping things grounded, the obvious options are right under your nose here at Pluralsight. If you're not sick of me, then there's my Docker Deep Dive for a deeper knowledge of how Docker works, all the way from kernel internals, through a deeper look at images and the container runtime, through Dockerfiles and all that stuff. Still with me, there's Docker for DevOps: Automated Workflows, where I demo how to take application code from your laptop, through GitHub, CI/CD tools, and automated image builds, and get it out to production. Away from me, there's Docker for Web Developers by Dan Wahlin, and there's Continuous Delivery Using Docker and Ansible by Justin Menga. Now try telling me that that is not an appetizing list of courses. Aside from that though, honestly, the best thing I can recommend is attending DockerCon, and that's coming from someone who's not a fan of events like this. But you know what, with Docker and DockerCon, I guess it's still so early in its life that it's still valuable for technologists. As an example, you'll find most of the top talent from the container ecosystem there. Seriously, the booths are manned by engineers and company founders and the like, and those same guys are hosting sessions and spending the whole time talking about containers and solving problems. I do love it. And as a bonus, you'll find me there. Speaking of which, if you do go and you do see me, come and introduce yourself. I'm shocking with names, and I mean shocking, so I can't promise to remember your name, but honestly, I love meeting up with you guys and hanging out. Come and introduce yourself. The same goes for Twitter and elsewhere.
Give me a shout and let's talk, though I'll tell you now, I'm not the fountain of all container knowledge, and I'm sure you know that already, but what I can often do is facilitate the conversation, and I'm talking Twitter here, because more often than not, we can loop in somebody who can help. But yeah, there's no point in me taking up more of your important time. Thanks for taking the course, it means a lot to me. I do sweat blood making them, and believe me, the effort is not so that I can stick them on the TV at home on a Friday night and sit down with my wife. No, they're for you, and I couldn't care less how cheesy this sounds, I've worked too hard to care. These courses are yours. It's funny actually, I have people come up to me at DockerCon and the like, and they're like, hey, are you Nigel Poulton, and I'm like, that depends on who's asking, but yeah. And then you guys are like, thanks for your courses, man. And you know what? I'm thinking it's me that should be thanking you. I make them, you watch them, we all learn, the world's a better place. Anyway, I hope this one has been useful for you, and with that, good luck. Hopefully I'll see you around. Come on!