Docker and Containers: The Big Picture
Course Introduction

Hello there, and welcome to this course, Docker and Containers: The Big Picture. And I'm pretty excited about this course because I'm thinking, right, it might be of real value to a lot of people. And that's just based on my gut feel, because if I had a quid or a dollar or whatever for every time somebody asked me for a quick rundown on containers, you know, what's the big deal with containers and the like, the TL;DR, well, if I had a quid for every time I'd been asked that, man, I'd be a wealthy man. Anyway, so that's the goal of this course: to bring anyone and everyone up to speed with what all the buzz is over Docker and containers. Now the course is going to be intentionally fairly high-level. We're going to shoot through stuff fairly quickly. I want you to be able to digest the entire course in a relatively short period of time. I've got more technical and more detailed courses in the Pluralsight library if that's what you want, but this one's moderately high-level, big-picture stuff. But don't be put off. It's going to cover some cracking topics. So we're going to do what are containers, a quick explanation of what a container actually is. High-level stuff though, yeah? I'm thinking enough to give you the kind of confidence you need to be able to hold your own in a container chat in the pub or at the coffee machine. Then we'll look closer at Docker. We'll look at the company and the technology. After that we'll switch gears a bit, and we'll talk about some of the things that we can do to prepare for a world that promises to be full of the things, containers, you know, a world where logos like Docker are as commonplace as the likes of VMware and Microsoft. We'll then look at what kind of tasks containers will take on, you know, what kind of impact they're going to have. Then we'll look at what a container registry is. And this is massive in my opinion. Container registries are like the app stores of the enterprise IT world. They really are changing the way that we do enterprise IT. Then we'll have a chat about whether or not containers are production ready and whether they'll take hold in traditional enterprises. All very pertinent and very pressing questions, and we'll give you some food for thought there. Then we'll wrap things up with a chat about what container orchestration is, something we absolutely need to get right if we want to truly leverage the power of containers and not be left with more work than we bargained for. So that's the agenda. I'm not going to waste any of your time waffling on about anything else here. Let's jump straight into our first topic: what are containers?

What Are Containers?

Module Intro

First things first, we need a decent grasp of what a container actually is, but in this module we're not going to go into detail. It's going to be just big-picture stuff, just enough so that we can follow along nicely with the rest of the course. But actually, you know what? This'll be perfect for anyone involved in strategic IT conversations, especially the kind that include talk of containers, obviously. And you know what? If that's you, and, I don't know, maybe you're not 100% up to speed with what a container even really is, but maybe you don't want to admit that, heck, I've been there myself. Well, we all know the situation. Everyone's talking away about something new, and you've heard the terminology and the like, but to be honest you haven't really grasped the concept yet. Not a great place to be. Like I say, I've been there.
Well, the raison d'être for this short lesson is to fix that, give you the confidence to articulate yourself when you're in a container-related conversation. Okay, so let's crack on.

The Bad Old Days

Now to do this properly, I think we really need a quick IT history lesson. High-level and quick though, IT 101, if you will. At the highest level, applications are what run our businesses, so internet banking, online retailing, airline booking, air traffic control, media broadcasting, education, pharma, automotive. Whatever your business, applications are a massive part of it. It's a truism in today's world that we can no longer distinguish between our business and the applications that power it. They are one and the same. No applications, no business. Well, these applications run for the most part on servers. And back in the day, I'd say definitely early to mid 2000s, most of the time we ran one application on one physical server. And by physical servers, I mean hardware like this. So IT worked a little bit like this. Hey, the business needs a new application for whatever reason, maybe a new product launch, new lines of business, whatever. The business needs a new application, so that means IT needs to go and procure a new server to run it on, and of course that server's got an upfront CapEx cost plus a bunch of OpEx costs that kick in later, things like the cost of powering and cooling it, administering it, all of that stuff. Okay, fine. But you know what? What kind of server does this application require? I mean, how big does it have to be? How fast? Well, I can tell you from sorry experience, the answers to questions like those are almost always, uh, yeah, we don't know. And in that case, IT did the only reasonable thing. They erred on the side of caution and went big and fast, because the last thing anybody wanted, including the business, was dreadful performance, that inability to execute and potential loss of customers and revenue. All because what? IT guessed too small and too slow? No, we weren't going to go there, so we guessed big and fast. And yeah, you can no doubt imagine, more often than not in my experience, and I'm not alone here, we ended up with a bunch of massively overpowered physical servers running at like 10% of what they're capable of, maybe 20% on a great day. However you look at it though, it was a shameful waste of company capital and resources.

Hello VMware

Then along came VMware, or I should say along came the hypervisor, because there's not just VMware out there. But oh my goodness did the world change, and for the better. It was almost overnight that we had a technology that would let us take the same physical servers and squeeze so much more out of them, get a load more bang for the company's buck. So instead of dedicating one physical server to one lonely app, suddenly we could safely and securely run multiple apps on a single server. Cue hallelujah music. Seriously! So think about it. That scenario of the business coming and saying hey, we're growing, expanding, diversifying, whatever, and we need a new application, well, it was no longer an automatic purchase of a brand spanking new server with its associated CapEx and OpEx costs. Now we could say hey, guess what? We've already got a server over there, and it's barely doing anything. Let's run the new app on that. So, almost overnight. Though let's not forget, to be honest, VMware the company and its virtualization technology, and the other hypervisor virtualization technologies out there, are nearly 15 years old.
So it wasn't really overnight, it did take time, but here we are in a day and age where, for the most part, we only buy new servers when we actually, genuinely need them. I mean, how cool is that? We can squeeze many applications onto a single server and really put our server assets to work. Make them sweat! That means a ton better value for the spending of the business's money. And like I said a second ago, what a better place the IT world is for it. But, and there's always a but, it's not a perfect solution. Of course it's not.

VMwarts

So let's look briefly at some of the shortcomings of this model, the VMware model. Or to be fair, I really should say the hypervisor virtualization model, because like we said, there are plenty of non-VMware hypervisors out there, Hyper-V, KVM, Xen, just as a few examples. Well, in this model we take a single physical server. And I'm going with a slightly more detailed diagram this time, but we're still high-level, okay? So this is our server. It's got processors, memory, disk space, all of that stuff, and we know we can run loads of apps on it. Now I'm just going to go with four here because it keeps the diagram nice and clean, but to accomplish this we create four virtual machines, or virtual servers. Now these are essentially slices of the physical server's hardware. So, for example, virtual server one here, we might have allocated it, just for argument's sake, we'll say 25% of the processing power of the physical server. Remember, we're big-picture stuff here, so maybe 25% of CPU, 25% of memory, and 25% of disk space. And maybe we did the same for the other three virtual machines. Now these virtual machines are all slices of the real resources in the physical server below. Then each of these virtual machines needs its very own dedicated operating system, so that's four installations of usually Windows or Linux, each of which steals a fat chunk of those resources, CPU, memory, disk, just in order to run. No applications yet. This is just to run the operating systems. And then a lot of the time these operating systems need licenses. I mean, Red Hat Enterprise Linux isn't free, and Windows certainly isn't either, so there are costs right there in both resources and budget, and costs that, I don't know, it just feels like a shame that we have to have them. Because you know what? None of us are in the business of seeing how many operating systems we can run, manage, and pay for. When we boil it all down, operating systems, as cool and as important as they are, are a necessary evil. I mean, if we could safely and securely run our apps directly on the server hardware without needing an operating system, man, we sure as heck would. And you know what? At the end of the course I'm going to introduce you to a way of potentially making that possible. But back on track. It's not just the cost of licensing these operating systems. Each and every one needs feeding and caring for, so admin stuff like security patching, maybe antivirus management. There really is a whole realm of operational baggage that comes with each one. And VMware and other hypervisors, as great as they absolutely are, don't do anything to help us with these kinds of problems. So yeah, VMware and the hypervisor model changed the world into a better place, but it's still got its issues. There are still efficiencies to be gained by moving to better technologies and better methodologies.

Containers

So that is definitely enough setting the scene.
Let's finally explain what a container is, and we'll draw a picture to help. So we'll start out with the same physical server and the same four business applications. But instead of installing a hypervisor down here, and then four virtual machines and operating systems on top, remember, each with its own baggage and overhead, instead of all of that, we install one operating system, just one. Then on top of that we create four containers, and it's inside each container that we run our apps. And again, I'm holding my hand up here, okay, we are still being high-level, so we're not getting into microservice architectures or anything. We're just thinking one app, or maybe one service, per container, though I am purposefully drawing the containers smaller than I drew the virtual machines, because in reality they're a lot smaller and a lot more efficient. Now that's an alright picture, but to be fair it looks kind of similar to the virtual machine model. In fact, let's bring the old hypervisor diagram back. Yeah, right, see how on the left here, on top of the hypervisor, we create a virtual machine, a virtual server if you like. Well, all it is is a piece of software dressed up to look and feel like a physical server. So each of these virtual machines has got virtual CPUs, virtual RAM, virtual disks, virtual network cards. Then on top of that construct we install an operating system. And in the eyes of each of these operating systems, the virtual machine below it is no different to a real physical server. It doesn't know that it's virtual. It really does look and feel like a physical server. Anyway, we've already said this probably too many times, but each and every operating system has got CapEx and OpEx costs, so we said admin overhead, patching, upgrading, driver support, all of that stuff. But look here. Each one also consumes resources from the physical server, so each and every operating system is consuming CPU, memory, disk space, all of that kind of stuff. Kind of reminds me of a book I used to read to my kids called The Very Hungry Caterpillar, a caterpillar that just eats and eats and eats. Well, we could call this model here the Hungry Operating System. Each and every one is just eating into everything: admin time, system resources, budgets. Oh, and that's not to mention every operating system here is offering up a fairly chunky attack surface too. So seriously, somebody remind me why we have them? Anyway, in the container model over here, we've just got one operating system. So, our physical server at the bottom, and then we install an operating system. After that, we essentially carve or slice up that operating system into secure containers. Then in each of these containers we install an app. Net result, we get rid of pretty much all of the fat over on this side, meaning we've got all of this free space over here to spin up more containers, and that means more applications for the business. And you know what too? These apps and these containers here, man, they start, like, I don't know, just so fast, pretty much perfect for situations where you need to spin things up really quickly. You see, there's no virtual machine, and there's no extra operating system that we have to boot before we can start our app. The operating system in this model is down here, and it's already started. So all of the apps up here are securely sharing this single operating system down below. Result? More space for containers, therefore more apps, and apps spin up in usually less than a second. In fact, you know what?
Let's go and see a container in action.

Container Demo

Whoa! Hang on a sec. What's this command line all about? I thought we were going to be high-level. I know, but don't be put off. Everything I'm going to do here can also be done through GUIs, and even better, it can all be automated through APIs and orchestration tools. I just didn't want to complicate things here. So this is about as minimal as it gets. I'm logged on to a Docker host. In fact, actually, let's drop our picture up here like this. I'm on a Docker host here, that sort of red layer, an operating system, Linux, with the Docker engine installed. So we're logged on here. And yes, other operating systems are supported, and yes, other container engines do exist. But Linux and Docker seem to be where most of the action is at the minute, so here we are. Anyway, I've downloaded a single image onto this Docker host. It's called pluralsight-docker-ci. Sorry about the long name. I'm just lazy, okay. It's an image that I created for a previous Pluralsight course. Never mind though. You can think of an image as, let's say, a stopped or a powered-off container. But maybe this is helpful as well. I suppose they're a bit like a virtual machine template, or an OVF, if you're familiar with that world. Essentially, this image contains a container with a simple web app, so that if we kick it off, so start it up, we'll have a running web server. So we'll do that. Start a container from this image, and then connect to the web server in it. So first things first, let's just grab the IP address of this host. Okay, lovely. Now we're going to need that so that we can connect to the web server, so we'll put that over here. And this particular web server listens on port 8080, so we'll do this. Hit return, and nothing. And that's expected, right? Okay, back to our Docker host. Now if I type this horrifically long and scary command, bear with me, okay, all this is doing is running a new container, calling it web, and I mean we could have called it anything, right, but then it's saying map port 8080 on the host to 8080 inside the container we're going to run. Like I just said, this web server happens to run on port 8080, that's all. And then we're saying base the container on this pluralsight-docker-ci image. Okay, you know what? The detail doesn't really matter. If we hit return, that's it. This long number here is the unique ID of our container. So it's already up and running. I mean, how fast was that? So if we go back here, if we hit refresh, there we go. We're in business. So a fast, super lightweight web server container. Oh, and wait, right? I can stop it. Check over here. Yep, that is looking pretty stopped to me. Pop back here, start it again. Yeah, already back up. Kind of a lot like a VM or a normal web server, just way more lightweight and way faster to start. And there you go. If you've never seen a container in action before, well, you have now. Though don't be underwhelmed. These things really come into their own at scale.
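Oh, and for the curious, the gist of the demo in command form is something like this. It's just a sketch, mind you: the image is the one from my earlier course, and the -d flag simply runs the container in the background.

```sh
# Run a new container called "web", mapping port 8080 on the host
# to port 8080 inside the container, based on the demo image.
docker run -d --name web -p 8080:8080 nigelpoulton/pluralsight-docker-ci

# Point a browser at http://<host-ip>:8080 and the web app responds.

# Stop it, then start it again -- restarts are near-instant because
# there's no operating system to boot, just a process to launch.
docker stop web
docker start web
```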
Anyway, let's quickly wrap this module up with a lightning-fast recap of what we've covered.

Module Summary

So, apps run businesses. No apps, no business. And in the bad old days we needed to spend a lot of time, effort, and budget spinning up new apps for the business. We'd buy new servers, wait for the PO to be signed off, wait for the server to be delivered, unpack it, rack it, power it up, configure the hardware, build the operating system, and then after all of that, and that was a long time and a lot of effort and money, we'd get around to installing the app. And if all that doesn't sound bad enough, more often than not the server that we'd bought was massively overpowered and massively overpriced. Not something to be proud of. Well, then came virtualization, VMware and friends. They managed to squeeze out a good amount of the waste on the physical server side of things, and thanks be to them for that. But they didn't squeeze all of the waste out. They left us with a boatload of operating system overhead. Basically, we still needed lots of them, operating systems that is. And each one often needed licensing, patching, maintaining, and each one consumes resources and presents a sizeable attack surface. So yeah, better, but far from perfect. Well, now we've got containers, even more lightweight and even more efficient. We just take an existing server, as long as it's running an operating system that supports our container technology of choice, and of course so long as it's got free resources, and we just spin up a new container any time we need a new app, exactly like we did a second ago with that web server. And seriously, how easy and fast was that? So containers, in many ways, are virtualization 2.0, though let's not get carried away. VMware and other hypervisors have been around for about 15 or so years now, and in that time we've worked hard to build strong ecosystems of management and operations tooling, and we've got a robust and skilled workforce out there. So yeah, I mean the container ecosystem is growing like wildfire, but in some areas we're not in the same place as we are with traditional virtualization. So again, yes, containers are great, and yes, in many ways they are the future, or the next step, in virtualization, but the container ecosystem is young, and it's very fast moving. So keep that in mind. I'd say the short and skinny: we won't be switching all of our hypervisors off tonight and shifting all of our apps to containers ready for start of business tomorrow morning, but the movement is underway. Anyway, what have we got next? Next on the agenda, a closer look at Docker, so the company and the technology. See you there.

What Is Docker?

Module Intro

So, let's take a minute or two to focus in a bit on Docker, because although there definitely are other container technologies out there, and good ones at that, Docker is where most of the development and most of the action is. In fact, I think it's more than fair to say that Docker is to containers what VMware is to hypervisors. Anyway, what we'll look at in this module is Docker Inc., the company, then the Docker project, so the container engine plus other products that are springing up around it. Then we'll wrap up the module with a quick look at the OCI, the Open Container Initiative.

Docker Inc.

So, Docker Inc., the major company and the main sponsor behind the container technology currently changing the world. Well, it's a tech startup based in San Francisco. Only Docker didn't start out life called Docker, nor was it originally in the business of changing the way we build, ship, and run applications. No.
Originally the company started out as a platform-as-a-service provider called dotCloud, the idea behind that business being to offer a developer platform on top of Amazon Web Services. Great, only in or around 2013 the business was on its knees. It was in serious need of a new lease of life. Well, and somewhat poetically I think, it was an internal project that provided that new lease of life. You see, behind the scenes at dotCloud they were using this funky container management and runtime technology as their main deployment engine. So while their core business of selling a developer platform on top of AWS was waning, they were all the time sitting on something developed internally that was a little bit special. So with nothing much to lose, and you know what, those are totally my words by the way, and no doubt they're totally inaccurate, I'm sure it was a huge decision to make, but the point is, at some point in 2013 they decided to make a major pivot and bet the business on this container technology that they were calling Docker. And by the way, the name Docker apparently comes from a British colloquialism meaning dock worker. So you take dock and worker, lose the work part, and you get Docker, apparently. Anyone know a British guy who might be able to vouch for that? Anyway, I actually kind of like the name. It's short, and it's catchy. Now, back on track though. With brand-new CEO Ben Golub, plus CTO, founder, and pretty much godfather of Docker, Solomon Hykes, at the helm, and I hope you like what I did there, well, they performed a masterful pivot of the business. And today Docker Inc. is seen as a leading technology company with a market valuation of around a billion dollars, having raised, at the time of recording here, $180 million in venture cash by way of five rounds of funding from some of the biggest names in Silicon Valley venture capital. So we're talking the likes of Greylock Partners, Insight Venture Partners, Sequoia Capital, Lightspeed Venture Partners, Goldman Sachs, and more. And I think about $170 million of that $180 million has come since the business pivoted away from being a platform-as-a-service company to being the container company, so investors see potential in the company. Also though, since pivoting to become Docker Inc., and by the way they've since sold off the dotCloud business, but since becoming Docker Inc. they've been busy on the acquisition front themselves. So they've picked up five container-centric startups, all for undisclosed fees, but each one making a valuable addition to the growing Docker Inc. product portfolio. Anyway, there we go, Docker Inc., a Silicon Valley tech startup, I don't know, somewhere around 200-ish employees, not far off $200 million raised over five rounds of funding, and they've made a few acquisitions of their own. Right then, let's make a pivot of our own, and we'll start talking about the Docker project.

The Docker Project

So the Docker project is absolutely not the same as Docker Inc. the company. And I should point this out up front and center: Docker Inc., while they're like the guardian of the Docker project, they're where it all started, and they're the major sponsor and driving force behind it, but they really don't own it. Docker, the container technology, belongs to the community. So that gives us our first point. It's open source. This means everyone and anyone is free to contribute to it, download it, tweak it, and use it, so long as they adhere to the terms of the Apache license, version 2.
And make no mistake, this is massive, the fact that it's open source, but I'm going to park that for a minute or two and come back to it later. Now, the aim of the Docker project, or its raison d'être, which seems appropriate seeing as its founder Solomon Hykes is French, anyway, the Docker project is all about providing awesome open tools to build, ship, and run modern applications better, so to build better, to ship better, and to run better than we used to. And there's more than one tool and technology to the Docker project. The same way that VMware is a ton more than just the ESXi hypervisor, well, the Docker project is way more than just the Docker engine, the Docker engine, by the way, being that core piece of software for building images and running containers. And you know what? That's actually not such a bad comparison with VMware and ESXi. If you know VMware, and you can relate to the comparison, the Docker engine is kind of the core technology that all the other Docker project technologies, plus third-party tooling, build on and build around. So you know what? A quick picture. If we stick the Docker engine here in the middle, this is the core technology that builds and manages images, as well as starting and stopping containers and the like. Then everything else, like clustering, orchestration, registry, security, all that kind of stuff, builds around the engine and plugs into it. So yeah, the Docker project, not owned by Docker Inc., though I would definitely say they're pretty much the guardians of it, but it's definitely a community project, meaning if you look at it you'll notice that everyone and their dog is contributing to it, from the likes of IBM, Cisco, Microsoft, Red Hat, you name them. The who's who of infrastructure technology is involved, but as well, all the way to Suzy the software developer contributing in her own free time on a weekend. There's a place for everyone. So, as you'd imagine, the code is up there for the world to see on GitHub. This here is just a random sample of code from the core Docker engine, publicly available on GitHub. And by the way, core Docker components are written in Go, or golang, the modern programming language that came out of Google. Also, we can see the planned release cycle. See here, aiming for a major point release every two months. And for the record, they pretty much achieve that. So none of the old ways where code was all proprietary, top secret, and hidden from prying eyes, and none of the old ways of uncertain release cycles either, you know, those that were often kept secret and then announced with ridiculous pomp and ceremony at loud and brash global events. Well, Docker's not like that. Pretty much everything is done out in the open, for everyone to see and everyone to contribute to. And the project is massive. So, only two or three years old at the time of recording here, and it's already sporting numbers like these. Now, okay, I know we haven't talked about Docker Hub yet, but I want to give you an idea of the kind of scale and degree to which people are obviously using this stuff. So real quick, Docker Hub, the public Docker registry, a place where you can store and retrieve Docker images, well, there are over 240,000 repositories on there, 240,000! But that's nothing, right? Images from those repositories have been pulled, so downloaded and used, well over a billion times. Yeah, that is billion with a B. And that number's going up all the time. They're doing like more than 5 million pulls, or downloads, every day.
Oh, and you know what? That's only the public repositories, and it's only on the official Docker Hub. So Docker Hub itself also has private repositories. Then away from Docker Hub there are third-party registries as well, so the actual numbers will be even higher. The short and skinny: this stuff is being used. So there we go, the Docker project, the technology behind running containers. It's open source, it's available on GitHub, and it's being heavily developed, heavily backed, and heavily used.

The Open Container Initiative

Alright, the Open Container Initiative, the OCI, and I'm hoping this'll be brief. It's basically a governance council, responsible for standards around the very most basic and fundamental components of a container ecosystem: the container format and the container runtime. But you know what? It's always good to know your history, as it gives us a bit of perspective, and it frames the situation that we're in today, though beware, this is container history according to Nigel. Anyway, as Docker gained in popularity and momentum, it started getting used by more and more people in more and more ways and for more and more use cases, so I suppose it was inevitable that at some point somebody was going to get a bit frustrated with it. You know what I mean, frustrated that it didn't do exactly what they wanted it to do, or maybe that it didn't do it exactly how they wanted it to do it. Well, and you know what? They weren't the only user or partner in this situation, but CoreOS, a modern internet infrastructure 2.0 company, they were using Docker, but they had a few issues with it. The tl;dr was that they thought the architecture was bloated, or at least that too much functionality was being shoved into the engine, or the main daemon, but also they didn't really think it was secure. And to be fair, they had some good points, though to be fair to Docker too, they were growing like crazy. And you know what? It doesn't matter who you are or how many you are. There's only so much you can do without sacrificing quality, and I guess as well without potentially derailing the project. Anyway, CoreOS decided to write their own container runtime and bring their own specification for a container format to market. The runtime was called rkt, spelled R-K-T but pronounced rocket, and the specification for the container format was called appc. And look, I wasn't personally involved, and I don't want us to get too sidetracked, but that's essentially what happened. But what it threatened to do was fracture the ecosystem, create two competing standards, if you will. Now there's nothing wrong with competing products. In fact, I'd bet we all agree that competition is a good thing for innovation. But two competing standards? Yeah, you know what? That's a different thing. And like I said, it threatened to fracture the ecosystem, not to mention competing standards are rarely good for customers. But guess what? And I kid you not here, common sense prevailed. Seriously, everyone decided to act like adults, at least on the outside, and they came together, and they formed the OCI. And so far things seem to be working. So, history lesson over. What is the OCI? Well, it's an intentionally lean and lightweight governing structure, one that operates on principles of contribution and openness.
Its ultimate reason for existence, and the only reason for existence as far as I can tell, is to produce standards, all lightweight and all open, of course, around container formats and container runtimes. Core principles include making sure that no single vendor is favored and that no single orchestration platform is favored. The aim of the game? To produce stable, open standards that encourage adoption and ecosystem growth. Anyway, it was formed in June of 2015, and Docker and CoreOS are major forces behind it, though the initiative is also backed by most of the household names in enterprise IT. Docker, for their part, have donated the container format, and they've ripped out all of the container-runtime-specific code from the Docker project, made it so that it's not reliant on anything else Docker-specific, and they've donated that to the initiative too. Anyway, the initiative operates nominally under the auspices of The Linux Foundation. And I can't stress this enough, it is minimalist in every way. Nobody involved has any desire for some huge and hideous governance body that gets in the way of engineering and innovation. In reality, they should operate silently in the background, and we should barely be aware that they even exist. Oh, and one of the analogies that they use is that they're making the decisions on things like the size and the properties of rail tracks, meaning everyone else can concentrate on building better trains, better carriages, better signaling, better stations, all of that good stuff. So that ran a bit longer than I thought it would, but that's the OCI. And on that note, let's go remind ourselves of what we talked about in this module.

Summary

Right, we covered three main things. First up, Docker Inc. We said that they're a tech startup of about 200 to 300 employees in the San Francisco Bay Area. They've raised not far off $200 million in venture capital, and they've made a bunch of their own acquisitions. And it's fair to say they are the leading name in the emerging container world. Then we mentioned the Docker project, the open source project that's available on GitHub and that's changing the technology world. Docker Inc. is the main sponsor behind the project, but it's open sourced under the Apache license 2.0. And oh my goodness, it is being used. We mentioned hundreds of thousands of containerized apps and billions of downloaded container images, all backed by the who's who of the infrastructure tech world. Then we wrapped things up by talking about the Open Container Initiative, a lightweight project aiming to develop lightweight standards to foster adoption of container technologies and encourage a strong ecosystem. These guys are deciding on the size and the properties of the train tracks while we all focus on building trains, signaling, telemetry, stations, all of that kind of stuff. So that's Docker Inc. and the Docker project. Next up, we're going to talk about how each of us can prepare for the container world.

Preparing to Thrive in a Container World

Module Intro

Okay, so a lot of people I've talked to about containers have heard of them, and quite often they've got some awareness of what they are and the potential changes that they're going to bring. But you know what, often they're quite worried. Because let's face it, while I'm a massive believer that change brings opportunities, for a lot of people it brings a bit of worry and a lot of unknowns.
And most people, including a lot of organizations, don't like unknowns, or are at least unsure and a bit wary of them. That all said, more often than not, they accept that at some point they're going to have to deal with containers, and that leads them to ask me, hey, what can I do to prepare? Well, that's what we're going to talk about in this module: preparing ourselves and our organizations so that we can live and thrive in a world of containers. So we're going to come at it from two angles: how to prepare ourselves as individuals, make sure that we look after our careers and that we're ready as individuals for the opportunities coming, but also how we might prepare our teams and the organizations that we work for. So those are our two focuses. And you know what, I for one love times like this. Monumental change is coming, and everybody has their chance to be either a winner or a loser. But the cool thing is, it's all about, well, a bunch of the stuff that we're going to discuss here. So let's crack on.

Personal Preparedness

So on the personal preparedness front, the following two principles will serve you better than any others that I know of: knowledge and experience. Now I know that no two of you watching this course are exactly the same. Some of you are going to be hands-on techies, developers, sysadmins, DevOps, while others are going to be more management and generally less hands-on. Well, if you're one of the hands-on type, do just that, get your hands on this stuff. All you need is a virtual machine, so on your laptop or in the cloud, it really doesn't matter where, and then get Docker installed. After that, do what you do: play with it, develop and Dockerize an app, build images, start containers, smash them, trash them, just get your hands dirty messing around with it. And if you like learning through videos, let me tell you, I've currently got over 6.5 hours' worth of Docker training for you on Pluralsight. There's my Docker Deep Dive course, which, despite the potentially scary name, is a hands-on, soup-to-nuts course that takes you all the way from installing Docker, through networking, storage, Dockerizing, and a bit of troubleshooting. And you know what, you don't need to be a Linux guru to get on with it, I promise. And I've also got a magic little 1-hour course that takes a simple web app all the way from code on your laptop through GitHub, unit testing, building on Docker Hub, and deploying to Amazon Web Services. Plus, I've got more Docker courses in the pipeline. The point is, get your hands on it, and get learning it. Now if you're not a hands-on person, well, crack on with this course. The whole reason behind this course is to bring you fully up to speed, in a non-hands-on way, with the most important things that I think you need to know in order to be prepared for containers. And I promise, once you get yourself comfortable with this stuff, and it really won't take long, well, any fears or concerns that you may have had, you'll be like, what was I worried about, this is so simple. Anyway, once you have wrapped your head around it, go and see where you can start leveraging it in your organization. Talk to your peers and your colleagues, see if anybody else is using it. And if they are, get together with them and start talking and coordinating, start planning how you might pilot and test some of it at work.
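Oh, and just to show how low the barrier to entry is for the hands-on folks, here's a minimal sketch of spinning up that playground on a fresh Linux VM. Docker's convenience install script is one common route, but do check the docs for your particular platform.

```sh
# Install the Docker engine using Docker's convenience script.
curl -fsSL https://get.docker.com | sh

# Sanity check: pull and run a tiny throwaway container.
docker run hello-world

# Then just play: start an interactive Ubuntu container, poke around,
# exit, trash it, and start again.
docker run -it ubuntu /bin/bash
```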
Organizational Preparedness

Okay, the million-dollar question: how can we prepare our teams and our organizations for Docker and containers? Well, this one's a bit trickier, but it's still very doable. First and foremost is acceptance. Your teams and your organization have got to accept that containers are coming. And allow me to say this, even if you don't think they're coming to your particular organization, you might be surprised. And no, this isn't me thinking that I know more about your business than you do. I'm just saying, you might be surprised. Now let's face it, nobody wants to do a Larry Ellison. So Larry, the founder and highly opinionated godfather of the very successful, I will say, Oracle Corporation, now I'm not having a dig at Larry, nor Oracle here, I'm just using them as an example, you see, not that long ago Larry pooh-poohed the cloud, claiming that it was pure water vapor, and that it would have zero impact, note that, zero impact on Oracle Corp and their lines of business. Okay, well, I probably don't need to tell you, go take a look at Oracle Corp now and see how they're scrambling to reinvent themselves as what? As a cloud business. All the while, though, seeing Amazon Web Services and the Microsoft Azure cloud eat their lunch. And there's a few links on the slide here as well, just to back up some of what I've said there. So the moral of the story, and don't get me wrong here, you know your business way better than I do, but do yourself a favor and just make extra sure that you're not going to do a Larry and find yourself playing catch-up later on. Anyway, so the first thing is definitely to accept and acknowledge that those things over there on the horizon, coming towards you, are containers. Cool! Next up, ask around within your organization. Do what you need to do to ascertain whether or not you've already got containers in your environment, potentially under the radar. I mean, let's learn from our past experience with the public cloud. Hands up, how many of us were operating in blissful ignorance of teams and individuals that were procuring services and infrastructure from Amazon Web Services with credit cards, under the radar? Shadow IT, anyone? Hey, I was there, and I got burned. So get out there. Ascertain whether or not you've already got containers. That makes it sound like a disease. Next though, start thinking about how you can make them official in your estate. A great place to start, if you haven't already, is with developers. They're usually going to love Docker, and introducing it to your organization and your estate as part of a continuous integration and continuous delivery workflow is a great way to start. But beware, from that point on, expect the developers to love it and want to use it everywhere. Another good place, more on the infrastructure and operations side of IT, is to see how you can start running some already-resilient infrastructure services inside of containers. Think about stuff that's distributed, where maybe you can run some instances inside of containers and some in VMs. Then after that, you really want to start looking at tools to orchestrate deployment, manage logs, all of that kind of stuff. But here's a golden rule: do not ignore the infrastructure ops aspect. Don't think that containers are just for developers, or at least if you do think that, don't ignore the fact that containers need trucks to transport them, cranes to lift them, all of which are parts of the operational side of IT.
I think the last thing any of us want, and we all love developers, but we don't really want them running amok in our production estates. It goes back to principle numero uno: accept that containers are coming and prepare all aspects of the infrastructure. Get the cranes, the trains, the gantries, all of the other ops stuff ready to deal with them. Now what am I talking about, cranes and gantries and stuff? What does it really mean? I'm talking about container orchestration tools, clustering tools, deployment tools, monitoring and logging tools. There's a whole growing ecosystem out there of container-centric versions of all of those kinds of tools. Do your homework and start testing some of them. And then let's do everybody a favor. Developers, go and talk to operations, and operations, go and talk to developers. Let's all pull in the same direction for once. Otherwise our production estates are going to end up looking like the Wild West. And then one last point: you probably want to start having conversations internally about who's going to pay for this stuff, whose budget it all comes out of, which, in turn, quite often dictates who's going to own it.

Why Being Prepared Is So Important

Now, just a few words on why it's so important to get prepared, and get prepared quick. So Datadog, a cloud monitoring-as-a-service company, published some stats at the back end of 2015 showing container adoption in the real world. So here are some of those stats, with the URL for the original article shown at the bottom, and according to the article, these stats are from 7,000 real companies. Well, here we go. Real-world Docker adoption is up 5x. So in September 2014, 1.8% of Datadog customers used Docker. A year later, in September 2015, that was at 8.4%. Then, 6% of the hosts that Datadog monitors now run Docker. And that's no shabby number considering they monitor over 120 technologies. Two-thirds of all of their customers that tried it appear to have ended up adopting it. And this one here is scary, at least if you're not ready to manage the sprawl it is: users approximately triple the number of containers they use within the first six months of deployment. So when they arrive, they spread. So again, be prepared.

Summary

So that was an easy module. Get prepared! Though on a serious note, actually that was serious as well, but I hope you did find it useful. I think, in recapping, the two major takeaway points are, one, containers are coming. And you know what, you might already have some that you don't know about. And then two, when they do come, man oh man, they spread like wildfire. So, prepare yourself individually with the necessary skills, but also prepare your organizations by investigating containers and the technologies that make deploying and managing them simpler. Then make sure all the relevant teams are talking, especially developers and operations, if that's how your organization's geared up. Now I'm probably going to get a bit carried away here, because you know what, these are exciting times, but make no mistake, the winners and the losers have not been decided yet. On the individual front, there'll be folks who carve out stellar careers and stellar companies in this container world, but on the flip side, there'll be people who struggle to keep up, and unfortunately, there'll be some who are just totally steamrollered by the whole thing and maybe never recover.
The thing is, it's still early days, and you know what, this is going to sound horrifically cheesy, but you've absolutely got the power to choose your own destiny here. So I recommend you grab containers by the throat and you make them work for you. I almost want to say, use the force, but no, I'm deadly serious. And it's the same for companies and organizations and IT departments: there'll be those who see this coming and position themselves to profit and benefit from it all, but there'll also be those that see it as a storm on the horizon and batten down the hatches and try and ride it out. Well, enough of the cheesy life coaching. Next on the cards, we're going to talk about whether or not containers are going to impact our traditional IT and applications, or whether they're just about hipsters, stateless workloads, and modern microservice app architectures.

What Kind of Work Will Containers Do?

Module Intro

Okay, a question I get asked a lot about containers is whether or not they can be used for stateful apps, so apps that persist data, or if they're only useful for stateless apps. And it's a great question, but one I think that deserves a quick word on what stateless and stateful apps and services actually are first. So on a mega high level, a stateless app or service is one that doesn't keep any data, so it does what it needs to do, maybe forward a connection or something, and then once it's done it forgets all about what it's just done and moves on to the next job. On the other hand, a stateful app or a stateful service remembers what it's done before. So let's assume it's forwarding connections again. So it forwards your connection. Then later on you maybe come back. Well, a stateful app is going to remember, sort of say hey, you were here before, and I sent you this way, so let me send you that way again, or something like that, right? I don't know, kind of like eating at a restaurant, if you like analogies. A stateful restaurant would be like, ah, yes, Mr. Poulton, we remember you. You didn't like sitting next to the lobby. You like the table over by the window. Let's see if we can get you over there again. A stateless restaurant, though, would be more like, well, McD's or something, I don't know. This way, Sir and Madame, and they just sit you down wherever they've got a free table. They don't remember you from before. And you know what? Typical examples of stateless services and apps are things like web servers that deliver static content, and I suppose the archetypal stateful service is the database or key-value store, because they're not much use if they forget about everything that's stored in them, right? So that's the broad notion of stateless and stateful apps and services. Well, in this module we're going to talk about whether Docker and containers are just for stateless stuff or if they're good for stateful too. And you know what? The TL;DR on this is that Docker and containers can totally handle both, no issues handling either, though they excel at stateless. Anyway, let's discuss this a bit further.

Stateless

Now, and I'm going to keep this as brief and as high-level as I can, because it's a deep and complex topic, but I don't think there's any doubt that there's a huge push towards new app designs and new app architectures. And public cloud services and the need for scalability are fueling it, while containers are making it more possible than ever before. But to fully appreciate it, we need to take a quick step back and recap our IT history.
Now, you might remember from the module on containers we said that VMware and hypervisors revolutionized IT, so they dragged us from the dark ages of wasted server resources to the modern day where we're pushing resource utilization like we've never pushed it before, though, of course, mainframers and big UNIX, like Solaris and friends, have been doing this since forever. But the point is, hypervisor virtualization was a great thing for mainstream IT, only it's a two-edged sword. On the good side, it let us lift our existing applications from the physical world and drop them straight into the virtualized world, but on the bad side, it let us lift our existing applications from the physical world and drop them straight into the virtualized world. Wait, huh? Okay, so on the one hand we could take our legacy apps, note the term legacy, and without changing them in any meaningful way, we could run them virtualized. Magic! Migrations literally couldn't have been easier. And you know what? That was good, or it was better than not doing it. It just wasn't truly great, the reason being it meant that all of our old crud from like the 1990s, and I'm talking about apps here, suddenly got a shot in the arm. Net net, most of us are still running around doing business today, in the twenty-teens, on top of applications built, well, donkey's years ago. And it works; don't get me wrong, we get by. It's just, well, it could be so much better. Well, containers, while we kind of can do the same sometimes, so lift the old crud and drop it inside of containers, sometimes, not all the time, well, you know what, we really don't want to. You see, containers by their very nature encourage us to rethink and redo our apps. But that too is a two-edged sword. On the one hand, we're starting to develop and deliver new, modern, scalable, self-healing, portable apps, but on the other hand, we're having to develop, and I'm sure you get it, modern, scalable, self-healing, portable apps. The point: yeah, it's the way forward, and yes, we want to do business on those kinds of terms, but yes, it takes pain and effort to transform and get there. So, and let me just take stock here, because I don't want to lose you. I know I'm waffling a bit, right? What I'm saying is containers provide us with a lot more of an opportunity to rethink and redo our apps. And that's great, but it takes time and money, and it brings its share of risk. But I do reckon most people are moving towards the mindset that modern, microservice, self-healing, scalable, portable apps are the way forward, the kind where we build the overall app out of multiple small parts, parts that can scale, heal, do true online, anytime, rolling upgrades via things like canary releases and blue-green deployments, I don't know, can be deployed to multiple infrastructures and multiple clouds, all of that stuff, right? Most of us get that this is the future. It just takes effort to get there, but no pain, no gain, right? So yes, after all that waffle, containers are awesome for the new cloud-native apps that we've just described. To be fair, they make virtual machines look like dinosaurs in this space. But those kinds of apps are only part of the overall picture, especially in traditional enterprises. What about the more traditional types of apps?

Stateful

So like we said, a lot of people are of the opinion that Docker and containers aren't for the more traditional apps, so monolithic, legacy, stateful, all of those, the kind that power many an enterprise. Well, you know what?
I know where folks get this idea from. You see, a fair bit of the early messaging from Docker was that containers were all about a new type of app. Now I'm not speaking for Docker here. I'm simply saying that some of the early messaging was construed by many to mean that Docker and containers weren't suitable for traditional apps. And you know what? I think I get that. Docker and containers excel with modern microservice-style apps. So to get the most benefit out of using Docker and containers, that's the direction you want to be headed in, but that in no way means you can't use them for the more traditional apps and workloads. You totally can, and people do. In fact, in the last week or two, I spoke to a guy from Switzerland who was running Oracle 12 inside of Docker. Okay, he did say that Oracle wasn't happy about this, just like they weren't, and potentially still aren't, happy about running their workloads inside of VMs, but he was absolutely doing it. And he's not the only one. I mean, as an example, Redis, MySQL, MongoDB, so all stateful, database-type apps, they all feature in the top 10 list of most popular official repositories on Docker Hub. So there's evidence there that people are obviously using Docker a lot for databases, which by their very nature persist data, or are stateful. And this is probably as good a place as any to point out that Docker containers are persistent by nature. So if you start a container, then you stop it, you don't lose any of its data. It's all still there, meaning if you start it again, it comes back up with all of its data. No different to a virtual machine in that respect. You'd only lose its data if you remove, so explicitly destroy, the container. Again, that's the same as with a virtual machine. Though even in that case, if the container's data was inside of a data volume, which is the preferred way to store persistent data, that data is going to persist even after the container is destroyed. So Docker containers persist data by their very nature. Stopping a container isn't going to lose anything. Alright, tell you what, let's see if we can put it all together in a quick summary of sorts.

Summary

So Docker and containers can deal with both stateful and stateless services, meaning we can use them to deploy fancy, modern, cloud-native, scalable, self-healing, portable apps, and we can also use them to deliver classical apps. Like I said, I've spoken to more than one person running Oracle inside of Docker containers. And I am by no means recommending that, by the way. I'm just saying folks are doing it. However, what we tend to end up with are apps that have stateless and stateful parts, so, I don't know, maybe a web frontend and the likes, all nice and stateless, running inside of containers, but relying somewhere in the background on a persistence tier. And again, that can be containerized, using technologies like MySQL, MongoDB, Redis, whatever. Well, in the Docker world, I think since version 1.8 of the engine, the storage backend has been made pluggable, ultimately resulting in a bunch of storage solutions that address the portability of stateful containers. So companies like Portworx, ClusterHQ with Flocker, Ceph, Blockbridge, EMC, these guys are all developing and delivering solutions that enable the portability of stateful containers, and they're doing things like snapshots and replication. The point being, things are gearing massively towards making persistent, stateful workloads first-class citizens in a Docker environment.
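And just to make that data volume behavior concrete, here's a minimal sketch using the official Redis image and a named volume. The names are just examples, of course.

```sh
# Create a named volume and run Redis with its data directory on it.
docker volume create redis-data
docker run -d --name redis -v redis-data:/data redis

# Destroying the container does not destroy the volume...
docker rm -f redis

# ...so a brand new container picks the same data straight back up.
docker run -d --name redis2 -v redis-data:/data redis
```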
So yeah, in the early days of Docker at least, it was kind of stateless all the way, or at least you got the best out of containers by deploying stateless apps and services in them. But that was then, and this is now. Things have come on a really long way since then, to the point where we literally are living in a world where stateful services get many, if not most, of the benefits of containerization. So circling back to our original question, are containers just for stateless workloads? Well, the answer, in my opinion, is a resounding no. Docker and containers absolutely are a great platform for stateful, persistent workloads. So there we have it. Now then, next on the cards we're going to talk about Docker Hub and container registries. And okay, this is my personal opinion, but container registries, especially Docker Hub, are as revolutionary as containers themselves, so you won't want to miss that module.

Docker Hub and Other Container Registries

Module Intro

Do you know what? I used to think of Docker Hub and other third-party container registries merely as centralized places to store and retrieve images. And to be fair, they absolutely are that, but oh my goodness are they more than that. Container registries, particularly Docker Hub, are literally becoming the App Stores or the Google Play stores of enterprise IT. So think about it, just like the App Store is central to everything that you do on your iPhone, Docker Hub, or potentially whatever third-party container registry you decide to use, is also the dead center of everything you do with containers. In fact, you know what? Docker Hub is even more central to the container experience than the App Store is to the iPhone experience. I'd say it's more like the combination of the App Store plus iCloud, and then some. So there you go. I obviously think container registries are important. And you know what? Technically speaking, they're actually called image registries, but I think container registries just keeps it a bit simpler. So if you don't mind, I'm going to call them container registries in this module. Well, the aim and goal of this module is to explain what they are, how they work, and then why they're so insanely important.

Registry Basics

So, container registries, at the very highest and most basic level, are places to store and retrieve container images. You know what? A bit like a bank, which in its own way, at a high level, is a place to put your money so you can get at it safely and securely from anywhere in the world. Well, Docker Hub is the official public registry from Docker Inc., but there are loads of third-party registries out there too. Quay, or "Key", depending on how you pronounce it, is a popular one from Quay.io, a CoreOS company. Google Cloud Platform's got their own; Amazon does too. There are honestly loads of them out there, but Docker Hub's the big one. And this here is what it looks like. So you can see there are a bunch of official repos here, centos, busybox, ubuntu, loads of them, just like the app store. And as long as you've got a Docker host with an internet connection, you can get to any of these. Fair play, but how does it work? So we've got some Docker hosts down here, laptops, on-prem servers, in the cloud. It makes no difference, so long as they've got internet access. And then let's say they're all brand new installs, so squeaky clean with no images. And no images means they can't run any containers, so the first thing we'd normally do with any new Docker host is pull down an image.
And pull, by the way, is just Docker terminology for download. So let's say we want to download the MongoDB image. Well, on the Docker host we go docker pull mongo, and in a default configuration that's going to send the Docker host off to Docker Hub to pull down a copy of the mongo image. Once that's done, like we see here, each of our Docker hosts has its very own copy of the mongo image, so both of them can run MongoDB-based containers. Brilliant! But we can also push images, so upload them. Let's say we tweak that mongo image that we just pulled. Maybe we customized it according to our company's standard MongoDB build. But you know what? We've got like hundreds of Docker hosts in our estate, and we definitely don't want to have to manually tweak the image on every host we've got. What we really want is to make the config changes once, then upload the updated image somewhere central. Then any host that wants to run the customized image can just pull it down and crack on. Well, no prizes for guessing. Docker Hub and all the third-party registries out there, they all let us do just that, upload and store our own customized images. And I think that's a good place to explain some terminology, right? So Docker Hub or Quay.io or Amazon EC2 Container Registry, these are all container registries. Then within each registry we can have our very own repositories. In fact, we can have loads of repositories if we want. Though be careful, charges may apply. The point is though, the overall service is called a registry. Then within the registry we can have our own little sectioned-off areas called repositories, or repos for short. So this is Docker Hub, and these here are some of my repos. And we can see actually some are marked as public and some as private. Well, that makes a pretty decent segue into registry security. Registry Security So a really common question when discussing container registries is: does this mean everyone can access my stuff? And the answer is, it depends. For these public repositories here, yeah, they are wide open to the world, or at least they're wide open for the world to pull. So this one here, it's been pulled or downloaded a few times, but only I can push updates to it. But see this one here. This is marked as private. This means it is definitely not open to everyone. So if you've got a Docker host at your disposal, and you try and docker pull nigelpoulton/test, it's not going to work. So public repositories are there for everyone to pull from, but only you, or accounts that you authorize, can actually push to them. Private repos, they are all about only being accessible by people or accounts that are specifically given permissions. Taking it further though, most registries these days, they let you define organizations and teams. Then you can manage your permissions using those. You know, rather than having to faff around assigning permissions and restrictions to every individual account, you can bundle accounts together into teams and just say hey, the whole team has access to this or that, or maybe not this or that. So, here we go in Docker Hub. I've got an organization here called techinfra. Then within that organization there are four repos, or repositories, and they're all private. Then within the techinfra org, I've got three teams. Now if we jump back and we choose one of these repos here, NGINX Front End, okay, here we can see the linuxeng team only has read access, whereas the devops team has write access.
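Now, before we go any deeper on security, let me tie that whole pull, tweak, and push workflow together with a quick command-line sketch. The repo name mycompany/mongo is invented purely for illustration:

docker pull mongo                         # pull the official MongoDB image down from Docker Hub
# ...tweak it into your company-standard build, then tag it against your own repo
docker tag mongo mycompany/mongo:custom   # tag the image against the (made-up) mycompany/mongo repo
docker login                              # authenticate with your Docker Hub account
docker push mycompany/mongo:custom        # push, so upload, the image so any host in your estate can pull it

Right, but registry security doesn't stop at permissions and private repos either.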
Docker Hub, Quay.io, Amazon EC2 Container Registry, these are all public cloud services. And that's fabulous, but what if your organization has a no-to-the-cloud policy? What, are you left on the sideline? No game for you today? No, of course not. And this is cool. We can get container registries that run inside of your corporate firewall, that you own and you manage. So all the goodness that you get from something like Docker Hub, but instead of it being out in the wild, wild west of the public cloud, these run inside the relative safety of your corporate firewall. Docker Trusted Registry from Docker Inc. and Quay Enterprise from Quay.io are just a couple of examples of these. So, for example, Docker Trusted Registry, this is available as part of a commercial support subscription from Docker Inc., meaning you pay for a support contract, and part of that deal entitles you to use Docker Trusted Registry, so a registry that you own and manage inside of your own corporate firewall, or inside of your own AWS VPC and the likes if you're in the cloud, but also backed by support from Docker Inc. What is not to like about that if you're a cautious enterprise that treats its IT infrastructure like the crown jewels? You're going to love something like that. But we're still not done. There's more to registry security than setting permissions and hiding behind firewalls. What about trust? So, when pulling images down, how do we know that we can trust what we're getting? Seriously, this question is unbelievably important these days, especially when pulling software over untrusted networks like the internet. We all know that www is short for wild, wild west, yeah? You absolutely need to know that you can trust what you're pulling. Well, Docker's got a technology called Docker Content Trust, and it's exactly for this. It lets you verify both the integrity of the image that you're pulling and the publisher of the image. Couple this with something like a Yubico hardware crypto solution, and you end up with a solid solution, and one that can survive just about any key compromise scenario too. Again, massively important for the enterprise. So the takeaway points here, I think, are that there are options for privacy, so private repos; options for on-prem registries if you can't use the cloud; there are registries that offer good permissioning models for restricting access to images and the likes; and solutions exist for authentication and integrity checking. And you know what? I can absolutely guarantee you that Docker Inc., and maybe this applies to other companies too, but Docker Inc. is passionate about making all of this security stuff as simple as possible to implement. They've got this really cool mindset, in my opinion, that unusable security isn't security at all. If it's hard for people to do, they just won't do it, so Docker Inc. is passionate about making security easy.
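And just to show how simple they've made it, here's a minimal sketch of Content Trust in action. It's literally one environment variable, and the push example reuses the made-up mycompany/mongo repo from before:

export DOCKER_CONTENT_TRUST=1        # switch Content Trust on for this shell
docker pull ubuntu:latest            # the pull now only completes if the image is signed and the signature verifies
docker push mycompany/mongo:custom   # with trust switched on, pushes get signed too; signing keys are generated on the first push

Unusable security isn't security at all, remember, and one environment variable is about as usable as it gets.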
Automated Workflows Now one last thing on container registries. Oh my word, they are becoming absolutely central to application infrastructure and delivery. So let me nick this animation from one of my other Docker-related courses. Here on the left we've got our application being written, tweaked, modified, patched, updated, whatever, and we push it to our software repo. From there, we perform tests. After all, we want to make sure that none of the changes that we just made break the actual app. Assuming the tests come out good, we push it to our container registry. The registry performs an automated build. This gives us an updated container image to deploy from, and from there we deploy the updated app. And we can deploy it to our own data centers on-premises or to the cloud. But this right here, the container registry, is becoming the pivot point, the dead center of these types of workflows. Oh, and all of this can be automated. It's a beautiful thing. Now we can test and automate every change we make to our apps, and we can automatically push them to dev, test, even prod. The world is changing, and container registries are acquiring mass, and as a result they're dragging all of this stuff into orbit around them. And while we're here, I've got an entire 1 hour, 1 minute, 100% hands-on course showing exactly that picture there, so taking a web app, pushing it to GitHub, triggering automated unit tests, pushing successful tests to Docker Hub, performing automated builds, and then pushing out to AWS using the Tutum deployment tool. I need to take a breath after that. But it's all automated; it's all that simple. Honestly, magic stuff. And for increasingly dynamic markets, this ability to rapidly test, build, and push changes to applications and have most of it automated, man, it's becoming more and more important, and yeah, container registries are at the core of it.
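And if you want a feel for what that loop looks like day to day, here's a minimal sketch. It assumes a GitHub repo wired up to a CI service for the tests and to a Docker Hub automated build, and the mycompany/webapp repo name is invented:

git commit -am "Tweak the web frontend"   # change the app...
git push origin master                    # ...and push it; a webhook kicks off the automated tests
# assuming the tests pass, the registry performs an automated build of a fresh image,
# and then any Docker host, on-prem or in the cloud, can deploy the update:
docker pull mycompany/webapp:latest
docker run -d -p 80:80 mycompany/webapp:latest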
So there we go. That's container registries. Let's go and just remind ourselves of the salient points with a quick recap. Summary Alright then, we started out by saying that registries are like the App Store or the Google Play Store of the enterprise application world. You need a containerized app? I'd be very surprised if you can't find it on Docker Hub. But what if you want to build or write your own containerized apps for other people to use? No sweat. Just build it, Dockerize it, and host it on Docker Hub. This stuff really is revolutionizing the enterprise IT world. Now we did say there are other container registries. Yes, Docker Hub is the main one, owned and operated by Docker Inc., but there are others, and really good ones at that. Architecturally speaking, we said that registries contain multiple repositories, and it's really the repositories where we store and pull our images. And we said that these repos can be public or private, though even public repos are only public in the sense that others can pull or download from them. You've still got to be logged in with your own account if you want to push or make changes to them. And we said that container registries like Docker Hub, Quay.io, EC2 Container Registry, Google Container Registry, they're all in the public cloud. And while that's great for some, it's a huge uh-oh for others. So private registries, not just repos, entire private registries, also exist. We said these are the kind that we can install on-prem behind our own corporate firewalls. Then we can use them just like normal, just without the risks that come with consuming public cloud-based services. And we finished things off by talking about the massive, massive importance of trust, being able to sleep well at night knowing that the content that we pull from registries is exactly what we expect. Oh, wait, I lied. We didn't finish with that. We finished with this, the central role that container registries are playing in today's agile business and IT world. They literally are becoming crucial to our ability to safely and rapidly deploy apps and deploy changes to apps. And on that note, let's switch gears, but only slightly, right? Next up, we're going to talk about whether or not containers are ready for production use and for use in the enterprise. Are Docker and Containers Ready for Production and the Enterprise? Module Intro Now then, at least two of the questions that I hear all of the time about Docker and container technologies are: one, are they ready for production, and two, are they suited for enterprises? Because, I guess, for whatever reason, containers have a reputation for being all modern and all cutting edge. And that's great and all for things like dev and test, and maybe for funky new cloud-native apps and born-in-the-cloud startup companies that, I don't know, maybe don't have a bunch of existing customers and products and brands that all need protecting. They're super cool for all of that kind of stuff, but wow, it's a totally different thing when we're talking about traditional companies with real customers and real transactions. Now obviously I'm not saying that startups and born-in-the-cloud companies don't have real customers and don't execute real transactions. They absolutely do, but there's an absolute chasm between them and the traditional cautious enterprise. Even if that chasm is mainly in mindset and attitude, it still exists. Anyway, to stop my waffling, I get asked all of the time if Docker and container technologies are ready for the primetime in things like finance, pharma, ecommerce, government, all of that stuff. Well, that's the aim of this module, to hopefully shed some light on these matters. Though I'm going to say this before we even start, this is not going to be me declaring Docker and containers as production ready or enterprise ready. I mean, as if. I am certainly not arrogant enough to think that I can tell you whether or not something's production ready or enterprise ready, because I totally get that every organization is different. What's enterprise class or production ready for one organization might not be for another. But what I am hoping I can do is paint a picture of where things are relative to, you know, a kind of broad sense of production worthiness and enterprise worthiness, maybe give you a direction of where Docker and container technologies are heading, you know, give you some food for thought. Then you can go away and make your own decisions. So let's crack on. The Docker Engine Now, for the time being, I'm going to be really Docker specific. Yeah, there are other container products and solutions out there, but right now I'm going to be talking about products and offerings from Docker Inc. So the logical place to start is the Docker Engine, that core technology that actually runs containers. At the time I record this course, it's at version 1.10. Now then, it was about a year or so ago when it hit that magical 1.0 release, and that was basically Docker Inc. blessing it as what they considered production ready, so their way of saying look, a bunch of the design and code is now stable, no planned major U-turns, so feel safe considering it for production use. And I guess this is as good a place as any to mention that there are effectively three channels, if you will, for the Docker Engine. There's the experimental channel, so if you want to live on the bleeding edge. It's got nightly builds, and it's full of stuff that will be here today and gone tomorrow. Definitely not for production. And I kind of like its logo, as well. Never mind.
Then there's the stable channel, so stuff that's been tried and tested and actually made it through the furnace of the experimental channel, meaning anything in the stable channel should be just that, stable, so the APIs and the likes are not going to suddenly change on you overnight. Oh yeah, and the stable channel's on a rough release cycle of about one major release every couple of months. Now obviously they do point releases and bug fixes faster than that, but let's say changing from a 1.10 to a 1.11, that's about every two months. Then over and above that there's the commercially supported engine, the CS engine. At the time of me recording this course, that's actually only had two major releases, 1.6 and 1.9, so roughly every six months. But this version is all about stability. So yeah, it's the same code base as the stable channel, just less bleeding edge, and it only supports tried and tested configurations. So, as an example, it's only supported with a limited set of storage configurations. The point being, all of the configs are about stability and reliability. Plus, it's sold as part of a commercial support package from Docker Inc., so you pay for support directly from Docker Inc., and as part of that support package you get access to the CS engine. Well, let's start to add all of that up: it's well beyond the magical 1.0 release, it's got what I'd call a responsible release model, so the different channels and the likes, plus there's a commercially supported edition. When you add all of that up, it really is starting to resemble something that looks pretty production worthy, I'd say, for a lot of people.
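If you want to see which engine you're actually running, it's a one-liner. Here's a minimal sketch, assuming a typical Linux host; the get.docker.com script was the usual quick way to install the latest stable-channel engine at the time of this course:

curl -sSL https://get.docker.com/ | sh   # install the latest stable-channel engine
docker version                           # confirm the client and daemon versions, e.g. 1.10.x
docker info                              # storage driver, kernel version, and the likes, the stuff support matrices care about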
Other Docker Products Then on top of Engine, or actually on top of multiple engines, we can lay Docker Swarm, so native Docker clustering. And from a version-numbering perspective again, Swarm has already hit that magical 1.0, which, like I said a second ago, is that magical version number where Docker Inc. is pretty much saying hey, the design's pretty stable, and so is the code; therefore, we think it's production ready. Again, I'd probably add my personal opinion that yeah, I think Docker Swarm is ready to be seriously considered for production use by a lot of organizations. I've used it a fair bit since it was alpha code, and I recently saw it with my own eyes deployed to hundreds of Docker hosts managing tens of thousands of containers, and it looked pretty solid. I also think that Docker Content Trust, that technology that we talked about in the registries module, the one that lets you verify both the publisher and the content of images, oh my goodness, this is huge. I mean, we all know that security is right up there at the top of just about every CIO's, every board's, and every enterprise's agenda, insanely important for every organization in today's world of hack after hack after hack, so I'd recommend to anyone and everyone using Docker for production: turn on Content Trust. Again, sort of a small feature, but one that makes this stuff really start to feel production ready and enterprise worthy. Now then, let's split onto two tracks here, a cloud track and an on-prem track. On the cloud front, there's Docker Hub for shipping applications. It's been around for ages, it's done billions of downloads, and it's handling millions more every day. Well, at the back end of 2015, it had its build engine design overhauled. It supports Content Trust, it offers private repos, it does role-based access control, and there's something new, Project Nautilus, which could be a really important new technology. It scans images, and it detects known vulnerabilities. This, I think, is going to be huge going forward. But the point is Docker Hub is central to doing things in the cloud with containers. Well, it's pretty stable, it's feature-rich, it does security, and it integrates like a dream with other Docker tools. In fact, one of those is Tutum, the official Docker Inc. platform for deploying and managing your apps in the cloud. And Tutum's great! I've recently seen it used to deploy a small app across multiple clouds, so an app across, let's say, AWS and Azure, and how huge and important is that kind of capability going to be in the future of app design and deployment? So, the point being, if your production is in the cloud, Docker has got you pretty well covered. But what about on-premises? Well, Engine's Engine and Swarm's Swarm, so no difference between on-prem and in the cloud for those. And like I said before, I'm on board with both of those as being production ready for a lot of organizations. But in the ship column here, instead of Docker Hub, there's Docker Trusted Registry, your own private version of a Docker registry, so one that you own and you manage, and you run it inside of your own firewall. Well, it's currently at version 1.4.something, so blessed by Docker Inc. again as being "ready for production use." It supports Content Trust, and it won't be long before it supports Nautilus for vulnerability detection. In fact, depending on when you're watching this, it might already support it. Oh, and Docker Trusted Registry also comes with commercial support, always, always a good thing for the cautious enterprise. But in the run column here for on-prem, things are a little bit newer, a little greener around the edges. So, at DockerCon in Barcelona in November 2015, Docker announced the Docker Universal Control Plane, frankly a great-looking tool for managing your Docker apps and infrastructure no matter where they are, so on-premises or in the cloud, but it's massively focused, in my opinion, on being enterprise grade. So, just as an example, right out of the door it's designed to integrate seamlessly with LDAP and Active Directory, an often overlooked afterthought for a lot of products out there, but a must-have for any tool hoping to make it large in the enterprise world, so good call there from Docker. It looks good, and it looks like they're gearing it towards the enterprise, but it is early days here. And you know what? There's more too. There's Docker Compose for composing orchestrated applications, there's Docker Machine for deploying engines in your estate, there's Docker Toolbox for getting developers and the likes up and running. The portfolio is maturing and growing, and we're starting to see more and more commercial support offerings around them. So there we go, enough, I hope, at least to give you a feeling, to enable you to go away feeling like you see the big picture. Then you can start having discussions internally. The Container Ecosystem But everything that we've said so far is Docker Inc., which is by no means the full picture. The ecosystem that's springing up around the whole container movement is actually a thing to behold.
So, for example, at the Barcelona DockerCon in November 2015, the show floor was crammed with innovative startups, and almost all of them were geared towards deploying Docker and containers in production environments. So there were container-centric logging solutions, container-centric auditing, container-centric storage solutions, all the way up to entire management platforms geared towards the needs of enterprises and production environments. So I saw platforms with extensive graphical user interfaces that showed you things like end-to-end views of your entire container infrastructure, deep log analysis, deep monitoring of what's going on inside of your containers, all stuff that's massively important if you want to start deploying and operating containers in production in the enterprise. And you know what? The ecosystem itself is a good mix of container-centric startups, founded by some of the brightest minds in the industry, so former VMware, Citrix, Facebook, Twitter, you name them, bright, bright minds with some great ideas. But then there are the already established tech giants, IBM, HP, Microsoft, Dell, EMC, Red Hat, NetApp, all of the household names that are the safe harbors that enterprises are used to partnering with. So if you've got an appetite for containers, but you're not really in the business of partnering with a flashy new startup that's got less cash than you spend each month on, I don't know, data center power and cooling, well, rest assured you can go to your traditional partners and start talking containers with them. Alright, let's go quickly recap what we've talked about in this module. Summary So by now I'm hoping you've got a better idea of whether or not Docker and containers are ready for production and ready for the enterprise. So what did we talk about? Well, first up, I said it wasn't my place to declare something as production ready or not. That's absolutely your call. Anyway, as far as content goes, I feel like we spent most of the time talking about products and services from Docker Inc. So we said that they've got a fast-growing but maturing suite of products, a lot of which are at least at that magical 1.0 release, considerably higher in some cases. The point being, though, anything at 1.0 or above is considered by Docker Inc. to be production ready. You, of course, need to make your own decisions. Oh, and there's a growing set of commercial support offerings, so you can buy a Docker subscription today that includes the commercially supported Docker Engine and Docker Trusted Registry, all backed by a support contract. Yeah, we also said that there's a solid set of tools for building, shipping, and running applications in the cloud and that the on-prem side of things is coming along fast too. So the build and ship sides are pretty solid in the on-prem world, and the run side of things, that's being addressed with Docker Universal Control Plane. Right, but that's all just Docker Inc. Let's not ignore the power and the potential of the ecosystem, an ecosystem that's literally bubbling with energetic startups that are laser focused on container technologies, as well as just about every major technology partner you can imagine, IBM, Intel, Dell, HP, NetApp. They're pretty much all in there, kind of like we saw with the VMware ecosystem. Right now everyone wants to be seen onstage with Docker.
Alright then, before you run away to hit the green light on container deployment in your organization, stick with us for just a few more minutes, because next up we're going to have a quick look at what container orchestration is all about. What Is Container Orchestration All About? Big Picture Now in case container orchestration is a foreign concept to you, or maybe you've got an idea what it is but you're not sure why it's so important, well, that's what this module is for. So at its very highest level, and I run a massive risk of embarrassing myself here seeing as I know very little about American sports, but if you take an American football team, there are a bunch of players in it, yeah? And at any one point in the game, some of them are on the field and some of them are off. And each player has a specific job, so lots of players doing lots of different jobs. Now to work as a team, they need organizing, dare I say orchestrating? So there's a coach or a coaching team, we'll put them here, and it's his or her job to organize and orchestrate everyone, so tell people where to go, make the play calls, all of that kind of stuff. And Americans, forgive me here if I'm getting all this wrong, but the coach, in a way, orchestrates everyone and gets them to play as a team. So out there on the field there are the big guys at the front that stand in a line and charge at each other and pretty much beat up on each other. And stick with me here, but maybe they're the team's web frontend. Then the quarterback might be the load balancer or message broker or search feature, whatever, right? And I know I'm really winging it now, but there's more. Maybe the wide receivers are like the database backend. And I'll stop this before I really do embarrass myself, but the intended point is that the team is made up of loads of individuals, each often doing a different thing. Somebody blocks, somebody throws, somebody catches, yeah? But when coordinated or orchestrated, they play as a team. Well, you know what? The same goes for our applications. They're usually made up of a bunch of individual small services that, when orchestrated together, act as a single unified app, like a team if you stretch your imagination a bit, yeah? So yeah, check that out for an analogy. Anyway, let's leave analogies behind, and we'll look in a tiny bit more detail. More Detail So just about any containerized app out there, certainly any production-worthy app, is going to be composed of multiple interlinked containers, probably spanning multiple hosts. And heck, who knows, maybe even multiple cloud infrastructures. And if we're talking about a lot of component parts to our app, so many microservices spanning thousands of containers on tens or hundreds of hosts, honestly, we don't want to be manually hand-stitching all of that. What we need is a game plan of sorts, something that composes everything into the overall app. And we're talking about things like, well, first of all, defining all the different components or services that make up the app, then how they fit together, so networking, message queues, API calls. That stuff all needs defining. That's the game plan of sorts. Then, once our app's defined, we need a way of deploying it and scaling it. We definitely don't want to be manually choosing which containers run on which hosts. We just want a pool of hosts, and then be able to fire up containers and have our orchestration tool put the right containers on the right hosts.
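To make that game-plan idea concrete, here's a minimal sketch of a Docker Compose file, using the flat v1 file format of the day. The image name mycompany/webapp is invented, and the app is just a stateless web frontend talking to a Redis backend:

cat > docker-compose.yml <<'EOF'
web:
  image: mycompany/webapp:latest   # made-up web frontend image
  ports:
    - "80:80"                      # publish the web port
  links:
    - redis                        # wire the frontend to the backend, and start redis first
redis:
  image: redis                     # the official Redis image from Docker Hub
EOF
docker-compose up -d               # bring the whole app up, in dependency order, with one command

That one file is the definition of the app, and that one command is the deployment. Need a clean redeploy? It's just as repeatable, and that's the manual hand-stitching gone.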
And I know all of this is high-level, but this really is what container orchestration is about. Define our app, how all the parts interact, provision the infrastructure, and then deploy the app, with potentially a single click. Then put our feet up and enjoy the performance. But it gives us great things, like it manages all of our app dependencies, so making sure all the components come up in the right order and all of that jazz. It also gives us a reproducible pattern. So if we need to, we can blow it all away and just redeploy. Again, potentially with a single click. And it's key to letting our apps scale. So as usage ramps, it can also take care of spinning up and deploying more instances for us. All really good stuff. Now, from a Docker Inc. perspective, they've got two or three products that do all of this for us. Docker Machine provisions Docker hosts for us, so on-premises, in the cloud, wherever. Then we can use Docker Compose to define and compose our multi-container app, so which images to use, which network ports to open, the config that glues our app containers together. Then there's Docker Swarm to take care of actually scheduling our containers across our estate of Docker hosts. Then on top of that there's Docker Tutum. This gives us a pretty UI and lets us control and manage everything. Magic! But, as always, there's the wider ecosystem, so technologies and frameworks like, and this is by no means an exhaustive list: Kubernetes, Mesosphere Datacenter OS, CoreOS fleet and etcd, and, in fact, OpenStack Magnum. These can all be used to orchestrate containerized apps. And obviously each has its own pros and cons. So Kubernetes came out of Google, but it's an open source project now. It's based somewhat on Borg, Google's internal container orchestration technology, so I reckon with that in mind we should expect it to be able to scale. But it's also good with microservices, and it works with more than just Docker containers. Then Mesosphere DCOS, Datacenter Operating System. This is a commercial product. Yeah, it's based on the open source Apache Mesos, the technology said to power infrastructures like Twitter and Airbnb, but it's backed by a commercial entity, Mesosphere. And you know what? It's not that long since they struck a deal with Microsoft to build container orchestration into the Azure cloud. And then we've got OpenStack Magnum, apparently making Docker and Kubernetes so-called first-class citizens in OpenStack environments. Now this is far from an exhaustive list, but hopefully it gives you a taste of some of the types of products that exist out there to help us orchestrate our containerized apps. Summary So to recap, we talked about how apps are generally composed of multiple components. Well, in a container world, each component could quite easily map one-to-one to a container, so one app service or one app component per container. Then, as soon as we've got any decent number of services or containers, and definitely once we start to scale, we desperately need something to orchestrate it all, take the manual handling out of it. And at its core, that's the essence of orchestration, taking something that was manual and automating it. And you know what? There's an absolute ton of effort being expended in this space. It's a real hot spot, if you will, within the already hot container ecosystem. So tons happening, tons of products. We said that Docker Inc., they're hard at work developing Docker Machine, Docker Compose, and Docker Swarm.
And, as you'd expect, they're magic for Docker-centric solutions, but then there are the ecosystem plays like Kubernetes, Mesosphere DCOS, Cloudify, and tons more. But let me say this: orchestration really shines at scale. I mean, it's not always easy to set up, it takes time and effort, but once it is set up, it really starts to shine when things scale. And that's us done on container orchestration. And you know what? That's us pretty much done with the course. Do join me though in the next module for a quick round-up and a teaser into a technology that I think is worth keeping your eye on. What Next? What Next? Well, here we are, finished the course. And if I've done my job, you should be fully in the know about Docker and containers. In fact, what did we say at the beginning of the course? Give you enough knowledge so that you could hold your own talking about containers in the pub or at the coffee machine. That's nothing. I reckon by now you probably know enough to impress folk. What the heck, you could probably go and hit on that guy or girl that you've had your eye on, impress them with your knowledge of containers, though, no, probably not recommended. Anyway, this is the end of the course, so thanks a bunch for watching it, and genuinely I hope it's been of value. I will admit it was a bit of a challenging course for me actually, because I wanted it to be ideal for everyone, you know, all the way from hands-on developers and admins through technical architects and even non-technical management. A tall order, I know, and a risky business trying to make everybody happy, so if you could do me a tiny favor, I'd really appreciate some feedback, honest feedback, right? So if the course was great, sure, let me know. But if it didn't do the trick for you, let me know why, so that when I refresh the course in the future I can try and make it better. And on the feedback front, stars are all fine, okay, but comments help me a lot more. So if you can leave a comment or two, man, I would be much obliged. Anyway, what next? Well, an obvious place is more Docker and container courses, and we've got a bunch of them here on Pluralsight. Right now, my Docker Deep Dive course is that soup-to-nuts hands-on course that gets you up and running with Docker and then gets into a moderate level of detail around things like Dockerizing apps, a bit of networking, troubleshooting, all of that stuff. And it's been really popular. And you know what? By some bizarre coincidence, today of all days it's got its 1000th rating and is averaging a full five stars, so it's a decent course. More recently though, I produced a short and sharp course all about leveraging Docker and Docker Hub in a CI/CD workflow, so taking an app from code on your laptop all the way to deployment in Amazon Web Services by way of Docker Hub. Again, it's proved popular, and it's nice and short, so easy to consume. Aside from those, since producing this particular course, I've added the following Docker courses to the library. Oh, wait, right now that's none. But that's a good teaser, right? Because I'm already working on more courses, so keep an eye out for them. Expect things like Docker storage, Docker networking, Docker clustering, Docker orchestration. I'm pretty stoked and excited about them all, and they'll all be hands-on. What else? Oh yeah, I'm fairly active on the Twitters. And none of that what-I've-had-for-breakfast stuff though. I use Twitter to talk and learn about technology. So if that's your thing, come and hunt me down.
I'm @nigelpoulton. And one last thing, if you're technically minded and you want a sneak peek of what I personally think could be really interesting and potentially game changing in a few years' time, go look up Unikernels. And on that note, thanks again for watching the course, and good luck driving your career and organizations into a future full of containers. Course author Nigel Poulton Nigel is a popular figure in the tech industry, renowned for his love of storage and deep technical knowledge. Nigel writes a popular long-running storage blog as well as having hosted many popular...