Containerizing Angular Applications with Docker
by Dan Wahlin
Learn how to build and run your Angular application code using Docker containers. Explore how to write Dockerfiles for custom images, leverage multi-stage Dockerfiles, orchestrate containers with Docker Compose, and much more.
Welcome to the Containerizing Angular Applications with Docker course. My name is Dan Wahlin, and I'm a software developer, architect, and trainer specializing in web technologies. I work a lot with Angular and Docker, so this course was especially fun to create, and I'm excited to share it with you. Throughout the course, you'll learn about the role of containers when it comes to containerizing Angular applications. We'll start off by answering the question, why use containers at all? I love working with containers, but with frontend applications there are several viable alternatives that can be used. So we'll answer the question by discussing the benefits that containers bring to the table, as well as some alternatives that exist. From there we'll jump into the role of Dockerfiles and how they can be used to create development and production images for Angular apps. This includes discussing multi-stage Dockerfiles and how they can be used to build an app and create a final image that's as small as possible. We'll also learn about a Docker extension that can be used in VS Code to simplify working with images and containers, and wrap up by showing how to run Angular with other containers by using a tool called Docker Compose. So let's get started and jump right in.
Angular and Containers
Let's talk about the course agenda and the specific topics we're going to be covering throughout the course. So we'll start off by talking about some of the prerequisite knowledge you do need to have so that you can get the most out of this course. Now from there we're going to jump right into the question, why would you want to use containers with frontend applications like Angular? We'll discuss some of the benefits that containers bring to the table, but we'll also discuss some alternatives that exist that you might consider as well. From there, we're going to get an Angular container up and running so you can see that process, and then we'll move on to covering how to build custom images for development and for production. Specifically, we're going to focus on creating a multi-stage Dockerfile. This is a file that will ultimately allow us to not only build our Angular app, but also generate a very small final image that'll make it more efficient as we deploy between different servers. In this next module, we'll talk about that process of deploying an image, and then running a container, and then finally we're going to wrap up with how do we run multiple containers. Oftentimes a team might not only be responsible for building the Angular part of the app, but also some of the services that Angular actually calls into. You might want to containerize all of that. We're going to talk about that process, discuss a tool called Docker Compose that you can use, and also discuss alternatives that exist as you start to move to servers or maybe to the cloud. Now, throughout the course we're going to cover a lot of different technologies. We'll learn about nginx, which is a very efficient reverse proxy server, we're going to talk about custom Docker image and Dockerfiles, talk about different ways to get containers up and running and how Angular fits into that, and we'll also talk about how to orchestrate multiple containers using Docker Compose. 
So now that you've seen what we're going to cover, let's go ahead and jump into the first module and get started.
Let's start the course off by talking about some of the prerequisite knowledge you need to have to maximize your learning as you go through the course, and then address an important question we need to answer, which is, why would you want to use Angular with containers? And we'll address a few other questions along the way. So we're going to start off by talking about the prereqs to get the most out of this course. I always like to set that up front so that if you need to jump out and maybe view another course on Pluralsight you can do that and then come on back, but I'll explain what you need to know. From there, we're going to cover and review some of the key course concepts, some of the prereqs actually, but just do a quick review. So we're going to talk a little bit about what are containers, what are images, and a few key Docker commands we're going to be using throughout the course. Now from there we're going to address the question, why use containers at all? Maybe you've heard about containers, but you haven't actually seen the benefits they can bring, so I'm going to walk you through several scenarios and explain why containers make a lot of sense in many cases for applications, even Angular applications. From there we're going to get the sample app up and running. Now we're not going to be building any Angular code in this course because it's all about running Angular in containers, but I have a really simple little Angular app that we can demonstrate. We'll get it running locally with the CLI, and then we're going to get it up and running in a container. And then as we move through the course we're going to be talking about that process to build our Docker image with Angular, we're going to be using a server called nginx for that, by the way. And then we'll talk about how to run our container and do much more along the way. Now you can grab the sample code for the course at my GitHub repository that you see here. 
This will have a README so if you want to run things locally you can do that, if you want to run it within a container you can do that, and there will be some different options there. So feel free to grab that code, and then you can start playing around with me as we go through the course. So let's jump right in to some of the prereq knowledge you need to have to get the most out of this course.
Prerequisites to Maximize Learning
One of the things I always like to do to start off my courses is make sure that you know the prerequisite knowledge and expectations so that you can get the most out of the course and really feel like you get a lot of value. So for this particular course, the expectation is that you do know at least the basics of Docker, you know what an image and a container is and how they work. Now I am going to be reviewing some of the key course concepts a little bit later in this module, so if you are new to it I'll give you those basics, but really it would be even better if you have some experience working with images and containers. Now if you don't, I'll have a few options for you that I'll mention in just a moment. Now we're also going to be using some Docker commands, so if you have worked with Docker before and run commands like docker run or docker build, that experience will help. We won't be doing anything super complex, but if you already have some experience with Docker commands that'll also be advantageous for you. And then finally, although we're not going to be building an Angular app in this course, we will be using the CLI a little bit, and we're also going to be talking about Angular somewhat, so having a foundation in some of the Angular fundamentals would also be advantageous. Now, if you don't have any knowledge of those topics, I would recommend checking out some of the other courses on Pluralsight.com. I have one called Docker for Web Developers, and this will give you all the details you'd need, even more than you'd need for this particular course, to learn about how Docker and images and containers could be used to provide a lot of benefits for various types of apps, not only Angular, but many others. Now if you want to see some Docker with Angular and other technologies, I also have an Integrating Angular with ASP.NET Core RESTful Services course, and one that's also for Node.js developers, so you could check those out as well.
And then if you need some Angular fundamentals type of courses, then Pluralsight.com has those as well. So as long as you have the foundational topics, the knowledge of images and containers, some of the key Docker commands and Angular fundamentals, you'll be fine. Even if you don't, I'm going to do a really quick intro to those momentarily, but that's what you need to get the most out of this course.
Key Course Concepts
Why Use Containers?
Running the Angular Application Locally
Let's take a look at the Angular application that we're going to be working with. Now, this application is a simple one, it shows customers and orders, and really it just gives us a starting point so that we have an Angular app built with a CLI. And though I'm going to show you how to run it locally, throughout the rest of the course we're of course going to focus on containers, building images, and more. Let's take a look at how we can run the app locally. So the application itself is just called Angular-Core-Concepts. It just covers the fundamentals of any Angular app. And it's built with the Angular CLI, of course. So to get it running, we can go ahead and in VS Code here I'm just going to say Open in Terminal, or Open in Command Prompt on Windows, and from here we can go ahead and do our normal ng serve and -o to open the browser. Now this assumes that you have installed the Angular CLI. If you are not familiar with that, you'll probably want to go check out some of the different Angular courses out on Pluralsight as well. So it looks like the application is loaded, and you can see that we have some different customers. You can drill into these customers to view the orders, you can filter the customers, and we can also sort and do pretty standard operations there, pretty basic core concept type of Angular app. Now currently you'll notice it's on port 4200, and we have our routing and everything going, and this would be kind of the normal way we would develop. Now I'm going to show you how we could also develop a little later in the course, not only using the ng serve, but also using a real server if you wanted, like nginx or Apache or others. Now we're going to focus on a server called nginx, which I'll be talking about a little bit later. But that's how easy it is to get the sample app going locally. So let me go ahead and we'll Ctrl+C here to stop it, and now what we're going to do is take a look at how to get this going in a container.
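The local workflow just described boils down to a couple of terminal commands. This is a sketch; the folder name comes from the sample project mentioned above, so adjust it to match your clone of the repo:

```shell
# Run the sample app locally with the Angular CLI (assumes the CLI is installed).
cd angular-core-concepts   # the course's sample project folder (name may differ)
npm install                # restore dependencies
ng serve -o                # dev server on http://localhost:4200, -o opens the browser
```

Ctrl+C in the terminal stops the dev server when you're done.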
Running the Angular Application in a Container
In this section, we're going to take a look at how we can get the Angular application up and running in a container, but what we're going to do is run a real web server in the container, but link it back to our local source code, so we're going to create something called a volume. Later on in the course, I'll show you how we can actually copy the source code into the container so we could deploy it to different environments. Now in order to run what I'm going to show you here, you will need a few things. First off you'll need an editor. Any editor will work. I'm going to be using VS Code throughout the course. You can download it from code.visualstudio.com. Now you're also going to need Docker Community Edition. Now this is the free edition of Docker that runs on Linux, Mac, and Windows, and you can go to docker.com to get details on how to install it. Now to run Community Edition you do need Windows 10 Pro or higher, if you're on Windows. There is an option if you're on Windows 7 or 8. I'm not going to be covering it here, that is covered in my Docker for Web Developers course, but that uses something called VirtualBox and Docker Toolbox. So it is possible to do that, but you do have to install something different. Now assuming you have Windows 10 Pro or higher, Mac, or Linux, you'll be able to run the latest version of Docker, which is Docker Community Edition. Okay, so now let's switch over to the code, and I've opened up a file called nginx.dockerfile. Now I'm not going to talk through what it's doing quite yet, I'm just going to jump down to how to run it, and the first thing we need to do is we need to build our Angular projects. So the normal way to do that is with an ng build, and I might even add a watch on there so I can just leave it up and running. Now when you do this, it actually deletes the dist folder, and we don't want that in this case. We want to keep the distribution folder and just update the contents when it rebuilds. 
So there's a command line switch you can add that not a lot of people know about, but it's there, it's called delete-output-path, and I'm going to say false. Now what that means is, do not delete the existing dist folder. Now the reason that's important is we're going to get a container up and running here in just a moment, and then we're going to link that container back to our local folder here, this Angular-Core-Concepts folder, using a Docker volume. Now the volume is really just an alias, inside of the container they're going to be looking for this folder I'm highlighting, but we're going to point that folder back to our local code. That way we can do local development and actually try out this real web server. Now you might kind of question why would I do that? I can just do ng serve and call it a day. And I would say when you first start getting going on a project, absolutely, just use ng serve, and maybe you have containers for your services, and maybe you don't, but you don't need a container to run ng serve of course. But what happens when you're ready to maybe move to staging or production or something like that? You might want to check, hey, how does the code act in the actual environment for the real HTTP server, and we can do that. That's why containers are so cool. Now I'm going to be using something called nginx. This is a very fast reverse proxy, HTTP server. It's very popular. A lot of sites you've gone to out there use nginx as the port 80 server, the initial point of contact when you hit the website. And then what we're going to do is nginx is going to be running in the container, but I'm going to link, like I said, back to my source code using a Docker volume. Now later I'll show you how we can actually copy the Angular source code from the dist folder into a container, into an image which gets created as a container, and that way we could deploy to staging, or production, or the cloud, or whatever you're doing. 
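The build command being described, with the watch and the lesser-known flag, looks like this:

```shell
# Rebuild on change, but keep the existing dist folder in place so a
# container volume pointing at dist keeps working across rebuilds.
ng build --watch --delete-output-path false
```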
But let's go ahead and get this up and running now. So once you've done a build, I could go ahead and just do this if I wanted, let's leave the watch and the delete-output-path false. We'll let that start. Now you'll notice as I do this the dist folder won't magically disappear like it normally does when you do an ng build. Okay, perfect. Now I'm going to just hit the plus here to get a new console, and I'm just going to paste in a build statement. Now, we'll talk about this a little later in more detail, so if you're new to it, don't worry, we'll get to it, but right now I'm just going to use this to create an image. But the image doesn't have my source code in it, again, we're going to link back to the source code. So this'll be really fast because I've already pre-built this. Alright, so the image is ready. Now the next thing I'm going to do is run this, so we're going to do a docker run. Now before I do that, I want to point one little thing out. If you're on a Mac, you can use this syntax. This is an alias for the current working directory, and that's something that if you're on Mac just kind of works, and there's only kind of one way to do it on Mac. On Windows, there can be different ways to do this. So I have a blog post up here, and if you are on Windows, you'll want to check that out, because if you're using PowerShell you might do it one way versus just regular command prompt versus maybe Git Bash for Windows, something like that. Now I'm on a Mac in this demo, so I'm just going to copy this down. Now what this is going to do is say take this image, the image is nginx-angular, and run it. Now I'm not going to run in what's called detached mode; detached mode would make it so my console comes back and I can use it. Instead I'm going to lock it up, so notice it kind of sticks there. And now I'm going to switch over to the browser, and I'm on port 8080.
Now to kind of show you that again, you'll notice I'm running on port 8080, but nginx is running on 80 internally. So let's come on back, and let me refresh, and there we go. So now I'm actually able to do live development against this particular nginx server that's running, and that way I can literally simulate the exact environment on staging, test, QA, whatever you call that, production, a cloud environment, and if I got this to your machine, you could do the exact same thing of course. Very nice and easy to run. Now, to stop the container I'm just going to say Ctrl+C here, and then if we do docker ps -a, that'll list all containers, there we go. It looks like it exited about 8 seconds ago. It is nginx that's running. And I'm going to say docker rm, and then we're going to take just the first part of the ID here, c3, and now if I do docker ps -a, you'll see it's gone. Alright, and to kind of show that, we go on back, notice I no longer can get to the site because my nginx server is down. So there's an example of how we can actually run Angular locally, the code anyway, but actually run it against a real server. So we're going to be talking about this as we move along, and as I mentioned, I'm also going to show you how to do a build for when you're ready to go to a staging environment or production environment later in the course.
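Pieced together, the demo's commands look roughly like this. The image name and ports come from the course; the volume's local path and the container path are assumptions based on the description above (the container path is nginx's default static file location), and the $(pwd) syntax is the Mac/Linux form, with Windows variants covered in the blog post mentioned above:

```shell
# Build the dev image from the Dockerfile shown in the editor
docker build -t nginx-angular -f nginx.dockerfile .

# Run it, mapping local port 8080 to nginx's port 80 inside the container,
# and mounting the local dist output into the folder nginx serves from.
docker run -p 8080:80 -v $(pwd)/dist:/usr/share/nginx/html nginx-angular

# When done: Ctrl+C to stop, then clean up the exited container
docker ps -a          # list all containers and find the ID
docker rm c3          # remove the container by ID prefix
```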
To summarize this module, we've learned that containers provide several different advantages to us as developers, and also for deployment. Now, containers in general get most of the news and press when it comes to dev ops, and for good reason, because it makes it easy, as we discussed, to move between environments, but as I showed a little bit earlier in this module, we can even develop against containers if we'd like. In some cases, you might do that full time, other times you might do it like we showed with Angular, just to make sure that everything is running in the environment where you know your app is going to run later for production. Now Angular can, of course, run in a container using any static file type of HTTP server. While I used nginx, there are many other options out there. You could use Apache, IIS, HAProxy, and many others. And then finally, we saw how Docker images and containers provide several different benefits to us as developers. Now I just mentioned earlier the ability to move our images and then start up our containers between environments is a great advantage, but in addition to that we talked about how bringing people up to speed on an app is much quicker if you have containers available for their local development machine. We talked about how you can isolate apps and even run different versions of a given framework or server on the same virtual machine very easily because we containerize those. And then, of course, we talked about the ability to move our images and run our containers across the different environments. So we have a lot more to cover. We're going to be discussing more details on Dockerfiles, building images, running containers, multi-stage Dockerfiles, and even how to run multiple containers later in the course, but now you have a good sense for what's possible, so let's move on to the next module.
Creating a Multi-stage Dockerfile
Creating the Angular Development Dockerfile
One of the more powerful features built into Docker is multi-stage Dockerfiles. This allows us to build an image, but in a way that the image size can stay as small as possible. All the other artifacts needed we can get rid of and just get a very kind of bare-bones image that we can deploy to our different servers so we can run the container. So let's talk about what a multi-stage Dockerfile is, and why you really do care about this, especially if you're working with an Angular project that's going to run in a container. So earlier we talked about the route we travel to get code from point A to point B, and we talked about how we could take our code and then copy that into an image, and that image could then be pushed down and we can run the container on the server. Now what if we wanted to actually do the build of the Angular code ourselves, in a container though. In other words, not local on your machine with ng build locally, but actually in a container. And then what if we could copy the output of that to the final runtime image? Well, that's what a multi-stage Dockerfile is. Now before I show you that, let's take a look at what most people end up doing as they work with Angular or other types of frontend applications and containers. A lot of times they'll create a Dockerfile that looks like this. They'll go FROM node or FROM ASP.NET Core or from PHP, or something like that, give it a LABEL, give it a WORKDIR, and then they'll copy the dist folder into that working directory. Alright, pretty normal. Then, in the case of node you'd run npm install on the package to get your express or other packages available, expose your port, and then set your entry point. Now this would assume that the node server in server.js has been configured to know about the var/www working directory, and if this was an ASP.NET Core or a PHP, it'd be the same story, you'd have to configure where do the static files go. 
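Reconstructed from the slide's description, that typical node-based Dockerfile would look something like this. It's a sketch, not the course's actual file; server.js, the working directory, and the port are the slide's illustrative example:

```dockerfile
# The "typical" single-stage approach: serve the built Angular app with Node
FROM node:latest
LABEL author="Dan Wahlin"
WORKDIR /var/www
COPY ./dist /var/www
COPY package.json server.js /var/www/
RUN npm install                      # pulls in express or whatever server.js needs
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]
```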
Now, this would work, in fact, there would be nothing wrong with this, this is a totally viable Dockerfile, I mean, there would be a little more to it than is on this one slide here, but there's a better way in my opinion, a more efficient way. I really don't use Node or PHP or ASP.NET Core to serve up my frontend apps, because there's other servers like nginx, HAProxy, and others, that are very fast, that are geared for that, and a lot of the CDNs out there will actually serve the CDN scripts using servers such as this. So what can we do, then, to not only get a workable server, but also build our Angular code in a container? That'd be kind of cool if we could do that. So that gets us to multi-stage Dockerfiles. Now multi-stage Dockerfiles are really just one Dockerfile, but they have a workflow to them. They have multiple stages that Docker will go through, they'll build what are called intermediate images, something we used to have to create manually to get the same result, but now it happens automatically, and then we can take output from one image and copy it over to another so that we get a very small image in the end. So it'd look like this: Stage 1 we might actually build our code. So we might take our Angular app, build it with a normal ng build process, but we're actually going to build it in the container. Now this is very cool, in my opinion, for multiple reasons. Number 1, you could automate this locally, but number 2, if you're using CI/CD, then you could also automate it on like a staging environment, and that's actually pretty compelling as well. Now what we'll do then is that's going to generate an intermediate image, but that's not a runtime image, that just has our dist folder in it, but it also has all that Node stuff. It has all the node packages that we needed to do the build, the Angular CLI, Node itself, and you know, we really don't need that for runtime if we want to be as small and fast as possible.
So what we're going to do is add a second stage, and we're going to take the dist output that was generated by stage 1, and we'll copy it into a stage 2 server. Now this would be our nginx server, and then ultimately we'll get a final image that's really, really small. It won't have any of the Node packages, it won't have the Angular CLI, in fact it won't even have Node.js at all installed. So in this stage 1 we're going to use Node, because we need it with the Angular CLI to generate our Angular app, but then we'll simply say, alright, from this image let's copy that dist folder into another one that ultimately becomes our staging or production image, and then we can run that container. And it's going to be very fast, and very small, very good for a deployment. So the benefits of multi-stage builds include you're avoiding manually creating these intermediate images. We used to be able to do the same thing, but you kind of had to manually do it all, and it was a little bit painful and much more complex. That's kind of number 2, it reduces complexity. And then it allows us to selectively copy what we want, we'll call it an artifact, from one image into another image, and that way we only grab the stuff we want for our, for instance, production image. And that leads to smaller image sizes, of course, which is what we're after. That's great for deployment between environments, and makes it much easier to work with. So now that we've talked through what a multi-stage Dockerfile is, let's go ahead and start to build out pieces of one so you can see how this works.
Creating the Angular Build Stage
The first thing we're going to do in our multi-stage Dockerfile is build the Angular code. We want to generate that dist folder that has our bundles and our CSS bundles, and everything we need to run the application. So to do that, I'm going to come over to a file I called nginx.prod.dockerfile. Now this has all the code kind of completed for the different stages, but let me go ahead and break this down line by line for you. So one of the first things we need to do is we need to build our Angular code. So I'm going to say FROM node, because we need the node container so we can run npm install and get the CLI and things like that. And then you can give it a version here, I'm just going to say latest, now that's the default. But what I'm going to do is I'm going to alias this as node. Now this is going to make it so in the next stage we can refer back to what this stage does. Now the next thing I'm going to do is add our standard LABEL, and I always give the author. And then what we're going to do is set the working directory. So the working directory in this case I'm going to make it very simple because this is just a build container, I'm just going to call it app, and then we're going to copy our package.json into the working directory. I'm going to just name it the same. Now, the reason we're going to do that is because the next step is we're going to run npm install. So I'm going to use the run command and we're going to run npm install. Now by putting this in its own layer, remember earlier in the course I talked about how images are built up like layers of a cake, one layer per instruction, well we're building up the cake now. And so by putting the package.json on its own layer, if you change it then it will automatically know that that layer changed, and if it doesn't change it won't have to do as much work, and so this is a way to be a little more efficient on how it actually does the installs and your package.json.
Now from there we're going to then copy the code in, so I'm going to grab everything in this folder, and we're going to copy it in. And then I'm going to run our build process, so I'm going to run npm run build. Now I'm going to run it in prod mode in this case, so the way we pass the prod flag through npm is by putting two dashes in front of it, otherwise npm will treat it as its own argument and it won't reach the Angular CLI. There's other ways you could do this as well with strings, but I kind of like to keep it very clean. Okay, so that would be our initial build Dockerfile. Now this is our stage, so let me add a comment up here, and let's just put Stage 1. Now we could do this, but on its own it wouldn't be that useful because we'd run the container, it would build, it would generate a dist folder, and then what do you do? Unless you had a volume set up on your local machine, you really wouldn't have access to anything because it would all be in the container. So what we're going to do in the next section is talk about stage 2, where we're going to reference this alias and use it to access the dist folder that would have been generated from running this command.
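Putting those steps together, Stage 1 of nginx.prod.dockerfile looks roughly like this. The structure follows the walkthrough above; the exact label text is an assumption:

```dockerfile
# Stage 1: build the Angular app with Node and the Angular CLI
FROM node:latest as node
LABEL author="Dan Wahlin"
WORKDIR /app
COPY package.json package.json
RUN npm install                 # its own layer, cached until package.json changes
COPY . .
RUN npm run build -- --prod     # the two dashes pass --prod through npm to ng build
```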
Creating an nginx/Angular Stage
The next thing we need to do in our multi-stage Dockerfile is copy the dist folder that would be generated from running the build process over to a new image that's going to have our nginx, and this'll be the runtime image, the one that we want to maybe deploy between staging and production or other environments. So before we write this stage, recall that earlier we aliased our first stage as node. Now you can name that whatever you want, it's just a variable, an alias that you can use. Well now what we're going to do is reference that, so that we can get to the dist folder and then copy that code into what's going to be an nginx image. So let me go ahead and we'll put a comment here for Stage 2, and now we're going to say FROM nginx, and I'm going to do nginx:alpine because it's the very small variant I mentioned earlier. Let's go ahead and I'm going to set up a volume here for some of the caching it does. This is optional, but it'll allow you to kind of control where the cache files in nginx get put, if you want to. Now, it'll kind of default that, but I just wanted to put it in there so you know that you can change that. Now the next step is the most important. Now we're going to say COPY, but I need to get to the dist folder that this process would have generated. So the way we do that is we say --from=node. Now, of course, that references the image here that would be generated, and ultimately once it runs in the container it'll give us the output from that, and specifically we want to get to /app/dist, because /app was our working directory. Now I'm going to go ahead and put this into, or copy it into the folder that nginx looks for static files by default, and this is the location, so /usr/share/nginx/html. Now that's the kind of key part of this multi-stage Dockerfile.
Once this runs it'll get the output from that, then we're going to copy from that into the nginx folder, and then to wrap it up I also have a configuration file, so we're also going to copy that in, and we'll go ahead and do that as well. You saw this earlier, this is going to help with the routes. There's a lot of other things you could do here as well. You can put SSL certificates, your caching settings for the browsers, headers, all kinds of fun stuff. Okay, so we'll get that guy in there. And we're done, that is a multi-stage Dockerfile. So, again, this will build our Angular code, we'll take the output from the node stage, and then we'll copy it into this container that will ultimately run at runtime. Now what's nice about this is when we do the build, Docker kind of hides everything from stage 1, and you could have multiple stages. I only need two stages in this case, but we could have many, but it hides that process from you. All we have to do is just a normal build, and then it will take care of everything we need. So let me pause, I realized I have one little thing to fix right there. Let me double-check this, that looks good, and we should be ready to go. So now we can move on to the next part of this where we actually build the image, but that would be an example of a multi-stage Dockerfile that builds Angular and then generates an nginx image that has the Angular code included.
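Stage 2, as described, would look roughly like this. The --from=node copy and the /usr/share/nginx/html target come straight from the walkthrough; the cache volume path is nginx's conventional cache directory, and the config file's source and destination paths are assumptions, since the transcript mentions the file but not its location:

```dockerfile
# Stage 2: copy the built app into a small nginx runtime image
FROM nginx:alpine
VOLUME /var/cache/nginx                          # optional: control where cache files go
COPY --from=node /app/dist /usr/share/nginx/html
# Copy in the nginx config that handles the Angular routes (paths assumed)
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
```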
Building the nginx/Angular Image
The next step is fairly straightforward since we've seen a build process multiple times already in the course. Let's go ahead and build our multi-stage Dockerfile and actually see the type of output that's generated in the log as it does the build. So I've already added a comment that you'll see right here, and I'm going to go ahead and copy that down. Now I want to emphasize, we're going to tag it. I'm not tagging it with a version or a username right now because I'm just going to be using it locally, but oftentimes you'll have a username in front of this, and then you could also have a version, so maybe this was version 1.0. Now I'll leave that up to you to add if you'd like, but that's all part of what we call the tag, and that way you can version the different images out there as you push them up to a repository. Now I also have a -f flag. Normally you'll see Dockerfiles named with just Dockerfile, that's it, no extension per se, just the word Dockerfile, and when that's the case you can just leave this out and put the dot, and that would just look for the Dockerfile, but in this case I have multiple dockerfiles, and a lot of times I have even more than that. So what you have to do is tell it which file we're actually trying to build, and then of course the dot means look in this folder for that file when we're going to run that in the root folder here. So by doing this, we can actually build our multi-stage Dockerfile and get an image called nginx-angular. Now just to show there's kind of nothing up my sleeves here, let's run docker images real quick. Let's go find our nginx-angular, it looks like 7de is the ID right here, and let's go ahead and remove that real quick. So we'll say docker rmi, and I'm just going to give it the first part of it. And it says it's unable to because the image is being used by a running container.
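The build command being described would look like this. The image and file names come from the course; the username and version shown in the second form are hypothetical, just to illustrate a fuller tag:

```shell
# Build the multi-stage Dockerfile into an image called nginx-angular.
# -f names the specific Dockerfile; the trailing dot is the build context.
docker build -t nginx-angular -f nginx.prod.dockerfile .

# The same build with a (hypothetical) username and version in the tag:
docker build -t danwahlin/nginx-angular:1.0 -f nginx.prod.dockerfile .
```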
Okay, well, that can happen sometimes, so let's do docker ps -a, and it looks like I still have a container going. First off I have one that's exited, so let me kill that one with docker rm 75a. We'll do docker ps -a again, and this one you'll notice is still running. It's been up for about an hour, so I'm going to say docker stop be7. Give it a second, alright, and then we can say docker rm be7. Okay, now if we do docker ps -a you'll see we're good to go. Let's go back to docker images and go back to the top. Now you'll notice I have a lot of what they call intermediate, or dangling, images left over. Let me show you a little trick we can do really quick here. You can do docker system prune, and it'll say this is going to remove all these things: all stopped containers, unused networks, and dangling images. I have these dangling images, so I'm going to say yes, and this will probably free up a ton of space on my hard drive. So we'll let it free these up and then we'll go and do our build here. Alright, so you can see I have apparently been using Docker a lot, and it freed up almost 1.5 GB, which is a pretty good savings for my hard drive. So let's go back to docker images. Now notice we don't have nearly as much junk in there, and there's my nginx-angular 7de, so let's remove that. Alright, so that one is untagged and removed. And if we go back to docker images you'll notice it's all gone; we don't have anything available. Okay, so now we can do the build, and let's see the output that we get. So let me grab this code right here, we'll paste that in, and here we go. It looks like now it's doing the webpack build for our code, so this should be generating the dist folder. So now Stage 1 is done, there's Stage 2, and that was super fast, because all it really had to do was copy that dist folder code into the nginx folder and then we're ready to go.
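The cleanup sequence just shown, condensed into one place (the ID prefixes be7, 75a, and 7de are from this demo session; yours will differ):

```sh
docker ps -a           # list all containers, including exited ones
docker rm 75a          # remove an exited container (an ID prefix is enough)
docker stop be7        # stop a running container
docker rm be7          # then remove it
docker rmi 7de         # remove an image by ID prefix
docker system prune    # remove stopped containers, unused networks,
                       # and dangling (intermediate) images
```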
Now I'm not going to run it yet, I'm going to wait until the next module because there's a few other things I want to show you, but that would be an example of a multi-stage build. So now if we go back to docker images, we should see our nginx-angular and we can actually see it was 20 seconds ago, and it's only about 18.4MB, pretty small overall. Now you'll notice there's a new dangling image here. That typically happens when you do some of these builds, so that's where I showed the docker system prune. We say y, and this will probably free up just a small space in this case, well, 331MB, a little bit. Alright, so there you go. There's an example of how we can not only build our code in a container, but also then take the output from Stage 1, copy that into an nginx image in this case, and then we can use that, which we're going to do a little bit later to actually run this application.
Using the VS Code Docker Extension
Although you can build your Docker images and run your containers using the command line like you've seen up to this point, there's actually a VS Code Docker extension you can use that can really speed things up and do a lot of the work for you, so I wanted to cover that real quick just so you know that this is an option if you're using VS Code. So if we go back into the project, and if I go to my extensions, you'll see that in addition to some Angular snippets I have, and other things, if I scroll on down I have one called Docker. Now you won't have this installed potentially, so if you don't, you can just come on up and type docker. Make sure you find the one from Microsoft here, and then install it. Now that, of course, will make you restart your VS Code editor. Once you restart it, you'd be ready to go. So the next thing we can do is access everything that we have on our system, our images, our containers, and anything else, using this Docker extension. So if I come on down to Docker here, I can expand it and get any running containers, I don't have any; any registries, well right now I'm hooked up to Docker Hub; and then all my images, there's our nginx latest. I can right-click on it and I can even inspect the image if I'd like to see that, and it's going to show me some details about that particular image. It does a data dump for you. Now if I wanted to get rid of it, I can remove the image, I can push it up to a repository, I can even run it if I want, and even retag it. If I hit Tag here and wanted to add, instead of latest, 1.0 or something like that, then I can do that as well. What you can also do is build. I'm going to come and right-click on my nginx multi-stage Dockerfile here, and when I do that, at the very bottom of the menu you're going to see a Build Image option. When I select that, a prompt will come up: what do you want to call your image? By default it will use the same name as my workspace here.
Of course, we want to name it something different. In this example we're using nginx-angular. Now what this will do, simply by pressing Enter (and again, I could version it in my tag and all that if I'd like), is run back through that same process you saw earlier: build my Angular app, copy the dist folder into the nginx image, and we'd be off and running. And you can do all of that without actually typing a single command. Very, very powerful, very easy to work with, and if you don't want to right-click you can even do Cmd+Shift+P or Ctrl+Shift+P, and if I just type docker up here, you'll see that the extension provides a lot of different options. So I can view my logs, stop containers and start containers, there's my system prune actually, you can do that as well, and not have to type these commands one by one. Now there are cases where there is some customization you want to do, like a volume, for example, and this may not help you there, and you may still end up typing, but in a lot of cases where you just need to build a file like the one we have here, and don't want to type that all out, it'll do that for you. So I wanted to show that really quickly, because I use that all the time to save myself a little bit of typing when I build and sometimes when I run my images and get those containers going.
Deploying the Image and Running the Container
In this module, we're going to take a look at how we can deploy the image that we created earlier, and then run that as a container, not only locally, but also up in the cloud. So we're going to start off by talking about how do we get the Angular container running locally. Now we did something like this earlier to start the course off, so this will be a quick review of what you saw before with docker run. From there we're going to use the VS Code Docker extension, and I'll show you how we can get a container up and running very quickly by just right-clicking on an image. We'll talk about some image registry options, including Docker Hub and some of the different cloud providers. And then from there we're going to take a look at how we can deploy the Angular runtime image up to Docker Hub, and that's going to be our registry for the demos I'm going to show. The next thing we'll do is use a service that's available on Azure to pull that image down into Azure, start up the container, and then we'll hit our application that's running inside of that container. So let's go ahead and start things off by first getting our image converted into a running container locally on our development machine.
Running the Angular Container Locally
In a previous module, we created our multi-stage Dockerfile, and then we converted that into an image using the docker build command. We showed how to do that through the command line and how to use the VS Code Docker extension. Well, in this section we're going to talk about how we can actually get that multi-stage container going, and remember this is the one where the Angular code is actually built into the image, so once we start up the container we should be off and running. So let's take a look at how we can do that. I'm going to come down to the command that you see here: docker run on port 8080, nginx is going to run on 80 internally within the container, and then we have the name of the image that we built earlier. So I'm going to go ahead and just paste that in. Now if I do this, it's going to lock up the console, and you may like that and you may not. A lot of times I don't like to do that, because now I have to start up yet another console. So let me go ahead and stop this real quick. Let's go to docker ps -a, and let's remove this. Okay, and what we can do is add a -d flag, which runs it in detached mode. So let me go ahead and we're just going to add a -d up here. And what this does is give you your console back, if you haven't seen that before. So notice it gives me the ID of the started container, and then I get my console back, and now of course I can do docker ps -a, and you can see it's been up for approximately 7 seconds. And there's my port forwarding going on. So now what we can do is start up the browser. I have it open here already, and notice we're going to port 8080. I'm going to a default route, although because we have the nginx configuration file in place I can go to the root or I can go to a client-side route. And recall earlier we talked about how you can do that redirect, and I'm doing it through the nginx config file.
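The run command being pasted in here would look something like this, assuming the image name nginx-angular from the earlier build:

```sh
# Run the container in detached mode (-d), forwarding host port 8080
# to port 80 inside the container, where nginx is listening
docker run -d -p 8080:80 nginx-angular
```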
So everything works as expected, and now we're actually running a container that I can get going anywhere very easily because I have this image, and that makes it very easy to work with. So that's how easy it is to get a container going. Just as we saw before, we can now do docker stop 32c, and then docker rm 32c if we want, and now it's as if that container was never there. Now of course my image is still going to be up here somewhere towards the top, and there we go. So now that we have that running locally, let's look at one more way we can do this, and then we'll start discussing how we can move this up to cloud repositories and maybe run it in a different location.
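The client-side route handling mentioned above typically relies on nginx's try_files directive. Here's a minimal sketch of the relevant part of such a config file; the root path matches the default nginx image layout, and the rest of the course's actual config file is not shown:

```nginx
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html;
        # Serve the requested file if it exists; otherwise fall back
        # to index.html so the Angular router can handle the route.
        try_files $uri $uri/ /index.html;
    }
}
```

Without the fallback, refreshing the browser on a client-side route like /customers would return a 404 from nginx, since no such file exists on disk.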
Running the Angular Container Using the VS Code Docker Extension
Earlier in the course, we saw how we could use the VS Code Docker extension to take a multi-stage Dockerfile and build the image simply by right-clicking. Now what we're going to take a look at is how we can get that image up and running as a container by also right-clicking so we don't actually have to type anything on the command line. So coming back into VS Code, as a reminder, I have the Docker extension installed. If you don't, you can click on your extension icon and then search for Docker. Make sure it's this one from Microsoft, because that's the one we'll be using. Now once that's installed, you're going to see a little area again down at the bottom for the Docker containers, and images, and registries. And right now I don't have any containers. We can refresh that, and notice everything is empty. But I do have several images. Here is the one we've been working with. This is our runtime image that has Angular inside of the image. As a reminder, we built this by right-clicking on the Dockerfile, and at the bottom we selected Build Image, and then I renamed it to nginx-angular, and we can do a very similar thing now to get this running. So I'm going to right-click on it, and I can either choose Run, which will give me my command line back, this is detached mode, or Run Interactive, which is not detached mode. So I'm going to go ahead and do Run, and now you'll notice we get detached mode, and then it defaulted the port to 80, and then internal 80. It gave us back the ID of the container. If we do docker ps -a, we can see it there. And then if we come back over to here, we can see it right there, and I can even right-click and remove, restart, show the logs if I'd like, not a whole lot going on there right now, but do all kinds of fun stuff that way, and it makes it really, really easy to work with this. 
So to show that this container is indeed running, we can come back to the browser, and instead of 8080, which no longer works since that container is not running, go to port 80, and there we go, we now have that running container. Now as of today, the extension doesn't allow you to change the port; there's no prompt when we right-click on the image. Maybe you want to enter a custom external port, for example. Well, as of today it doesn't do that, so then you're back to typing in the command line, which isn't that hard anyway, but it is a very nice and quick way to just try out a container when you're not so worried about that and you want it to default, in this case, to 80 on the external port. Now when I'm done with it I can right-click and say Stop. This will actually do two things: it'll stop it and it'll remove it. You'll notice if I refresh here it doesn't show anything, and then if we go back to docker ps -a it's gone. So that's a nice, quick and easy way that you can get an image up and running as a container. It's not going to work in every scenario, because sometimes when you start up a container you might have a volume, a different port, or other settings that you want to apply, but for simple test cases this provides a great way to just right-click and try out your container.
Image Registry Options
Deploying the Angular Runtime Image to a Registry
In this section, we're going to see how we can deploy our runtime image up to a registry and put it on the shelf so that we can then pull it down anywhere else that we'd like. So if I go back to docker images, we know that we have our nginx-angular image already created, and of course we can just come over to our Images here and see it as well. Now, when you're about to deploy to a registry, you want to associate your registry ID with this image, so you want to retag it, and you might also want to add a specific version to it. What I'm going to do is right-click on my image here, and I'm going to say Tag Image. Now we could do this through the command line as well, but it'll do the command for us. So let me clear the command line here, and now I'm going to go ahead and say Tag Image, and what I'm going to do is put my user account for Docker Hub, which is just danwahlin, then a slash, and then I'm going to put the version after a colon. So let's pretend that this was version 1.0, and what this will do is retag the image. So we'll hit Enter there, and now notice that we have our nginx-angular, but then I also have this danwahlin/nginx-angular:1.0. Now I can right-click and we can run that if we'd like and try it out. So let's go ahead and do that, and just make sure it's working the same. Okay, so let's go back to our browser, we'll refresh, and everything is looking good. Now, you'll notice that when it did the run here it actually used the username, followed by the name of the image, and then the version for that. Now you don't have to version it, but in a real-life scenario you'll definitely want to do that, because that way you can move forward or backward if you need to with your images and your containers.
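The Tag Image command in the extension generates the same thing you'd type by hand, which looks like this:

```sh
# Retag the local image with a Docker Hub username and a version.
# The new tag is an additional name for the same image; it doesn't
# create a copy.
docker tag nginx-angular danwahlin/nginx-angular:1.0
```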
And I normally take the approach that we never change a container; we make a new image, add that new version to it, and then replace the existing container with a new container, and that way you can always go forward and backward, whatever you'd like to do there. Okay, so that's working great, so let's go ahead and we'll stop the 228 here. So now what we're going to do is push this up. Now, I have this name that's tagged, and I could do a docker push and then type all that in, the name, danwahlin/nginx-angular:1.0, but I can very easily right-click over here and say Push. So I'm going to go ahead and do that. Now you'll notice it wrote out the command for us, just saved me a little bit of typing, and now this is going to push up these layers to Docker Hub, and it just did that. Now what's really nice about this is I can then come down to my Registries, and because I've already registered with Docker Hub here, I can expand my user account, and let me refresh, and there we go, there's my nginx-angular, and there's my 1.0 under it, and so now I can actually browse this on Docker Hub. And here we go, there's my tag and some basic information. Now it doesn't have a lot more here because we didn't do much, but now this is available to actually pull from Docker Hub if we wanted. In fact, if we go back to the Repo Info here it'll even have the command we could use to get this going. So let's say we came back; let's go to our Images. I'm actually going to come into this particular image here and we're going to remove it. Alright, and there we go. And now I can come into here, and I'm just going to paste in that docker pull command, and we can even pull the specific version. And notice that almost all of the layers already existed. This is what I was talking about earlier: based on the IDs each layer gets, if Docker doesn't detect a change then it just leaves that layer alone.
It only pulls the differences, if you will, between what you have and what's up in the registry. And then we already know we can right-click on this and run it, because we just did that. So that's how we can push up to a repository. Now, if you had your repository up in Azure, Google Cloud, or AWS, there'd be a little bit more to the name: the command line syntax lets you specify the registry by prefixing the image name with the registry host, and you can even do that through the Docker extension as well if you've already hooked up and logged in to your Azure registry. So that's how we can get our image up into a repository, and then pull that back down and run it. Now that we've done that, we can easily pull that image onto any server we'd like, whether it's staging, production, or somewhere else, and we can even easily move between a local system and a cloud system, because all we have to do is pull that image down and we'd have that Angular app up and running.
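The push and pull round trip just described, plus the registry-prefix variation for a cloud registry (myregistry.azurecr.io here is a hypothetical Azure Container Registry host, not one from the course):

```sh
# Push to Docker Hub (the default registry)
docker push danwahlin/nginx-angular:1.0

# Pull it back down; only layers that changed are transferred
docker pull danwahlin/nginx-angular:1.0

# For a private cloud registry, prefix the image name with the
# registry host when tagging, then push to that registry
docker tag nginx-angular myregistry.azurecr.io/nginx-angular:1.0
docker push myregistry.azurecr.io/nginx-angular:1.0
```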
Running the Angular Container in Azure
Now that we have our image up in a registry, we can pull it down to any server we'd like, and in this section I'm going to show you how we can do that with Azure and use the Web App for Containers service to get this container up and running very quickly and easily. So I've logged into my Azure portal, and I'm going to click on Create a resource. Now from here if you just type containers you'll see Web App for Containers comes up, so I'm going to click on that. And then you can read about the details here if you'd like, but I'm going to hit Create. Now what this'll do is let me give it a name, so I'm just going to say test-nginx-angular, go ahead and select my subscription, and I have a resource group, kind of a group where you put this in, called Sandbox that I like to use for these types of things. And then you can pick your location; I'm going to go ahead and leave the default. Now the last piece here is Configure container, so I'm going to click on that, and now I can choose whether I go to the Azure Container Registry, some private registry I have, or, as you can see, it defaulted to Docker Hub. Now from here I can say, alright, is this a public or a private image that's on Docker Hub? Well, we know it was public, so I'm going to go ahead and put danwahlin/nginx-angular:1.0 in this case. And this will be the name of the image that's up in my registry that I want to go ahead and grab and pull down to this Web App for Containers service. So now I can hit OK, and we're ready to go. So everything is configured. I have a name, I have my account, I have my resource group, the location where this is going to run, and then the container image that we're actually going to use. So now it's as simple as hitting Create, and then this will take a moment the first time as it starts to fire up. It's going to download the image, and then once it's done it'll let us know, and through the magic of video I'll jump ahead to when it's ready.
Alright, so my deployment has succeeded now, and I can get an overview of this. So here's my resource, I'm going to go ahead and click on that. Now I can come down to my Apps, and here's my test-nginx-angular. We'll go ahead and drill into that, and here we go. So now I can test it out and actually hit this container by going to the URL that you see here. So I'm going to click on that. That's now going to fire up the container, and once this is all ready to go we should see our Angular site come up. And there we have it. So now I'm running that exact setup. The image was pushed up to the registry, and instead of pulling it down to my local machine, in this case we pulled it over to Azure, but again, it could be any cloud service, it could even be any machine out there that supports Docker, and now we're able to run the app exactly as we would if we ran it on our machine. And the beauty of this is that now we can start and stop, we can swap containers out as the image changes, and do all kinds of fun stuff there, and as you can see it's very easy to work with. Now there's a lot more to the story here. Right now this is a self-contained Angular app; the data is hard-coded, and in a real-life app you would, of course, have RESTful services or some other endpoint that you're going to hit to actually get the data for the Angular app. So what we'll do in the next module is start to talk about what you do when you have multiple containers: you have your Angular container and then maybe one or more RESTful services or some other type of container that provides data.
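For reference, the same portal steps can also be done from the Azure CLI. This is a hypothetical equivalent, not from the course; the App Service plan name is made up, and the flag names can vary between az CLI versions, so check the az webapp documentation for your version:

```sh
# Create a Web App for Containers instance that pulls the public
# Docker Hub image; Sandbox is the resource group used in the demo,
# my-appservice-plan is a hypothetical existing App Service plan.
az webapp create \
  --resource-group Sandbox \
  --plan my-appservice-plan \
  --name test-nginx-angular \
  --deployment-container-image-name danwahlin/nginx-angular:1.0
```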
In this module, we've seen different ways to take an image and get a running container on whatever system we'd like. So we learned about docker run and also the VS Code Docker extension and how they can be used to start and stop a container. We talked about pushing an image up to a registry, and also talked about how, if you're going with a cloud provider, they have their own image registries you can use if you don't want to use Docker Hub. The push command can be used directly through the command line, or as you saw, we can do it right through the Docker extension in VS Code. And then we saw how Azure provides a way to get a container up and running with the Web App for Containers service. There are a lot of other options out there as well if you're using a different cloud provider. So that's a walkthrough of getting our Angular image converted to a running container, and running that locally as well as in the cloud. Next we're going to talk about how we work with multiple containers.
Running Multiple Containers
So up to this point in the course, you've seen how we can put an Angular app inside of a running Docker container, and that works really, really well, it's very easy to deploy that way, but what happens when you have multiple containers? How would I get those up and running? For example, I might have an Angular application that calls out to one or more RESTful services, and maybe each of those services is in its own container. Well that's the type of thing we're going to talk about in this module. So we're going to start out by talking about running an application with multiple containers using a tool called Docker Compose. And what I'll have for you in the project is the same Angular app we've been working with, which up to this point had the data kind of hard-coded into the app, but now we're going to change it so that we can call a separate RESTful service, and we're going to bring that up in another container. Once I show you how to get multiple containers up and running, we're going to talk about the Docker Compose yml files, and the services they define that will allow you to very easily build multiple images and run multiple containers. And then we're going to wrap up with some different options you can consider for deploying images and containers out to servers or out to different cloud providers. Let's go ahead and jump right in to getting the app up and running with multiple containers.
Running the Application with Docker Compose
Up to this point, we've run the Angular application in a container, but it's been in isolation. All the data was in that container, and so we didn't have to call out to any other services. If we brought up the container, and Angular called into services that were already up and running, either in dev, staging, or production, then that's really all you'd have to do. Maybe somebody else on your team or another team actually manages the services. But what do you do in cases where your team is not only writing the frontend Angular app, but also writing some of the services that the Angular app is calling? Now we might want to bring up multiple containers. So in this section we're going to take a look at a tool called Docker Compose that's part of Docker and allows us to orchestrate the process of bringing up multiple containers. It makes it very easy to build multiple images and then get those images converted to running containers very quickly and easily. So the first thing I'm going to do is show you just how to get the app up and running with multiple containers, and then a little later in this module we'll talk about the Docker Compose orchestration files; these are YAML files. So let's jump into the project here. The first thing I've done is gone into the src/app/core folder, and in the data.service file I've taken out the hard-coded URL. Before, when we built, it was just copying some JSON files into our assets here, and that's why the single-container solution was able to run okay. But now I would like to change that to actually call out to an endpoint, a RESTful service in this case. Now this is a very simple Node.js service that's included in the project; you'll see it here in the server folder. I'm not going to go through that, because it could be anything.
It could be Java, ASP.NET Core, Node.js, PHP, Python, whatever it is you use, but the point is I now have two containers, the Angular container, and then this Node.js container that I want to bring up. Now I want to bring those up in a way that they can talk to each other, not only in development, but possibly in staging and other environments as well. So that's the first thing I've done. Now, of course, we'd have to rebuild the project through an ng build; I've already done that, so we're ready to go. Now what I'm going to do is get both of these containers up and running with just a single command, and that command is docker-compose, if you haven't used it before. Now if you've watched my Docker for Web Developers course, or some of the other Docker courses on Pluralsight, there's a lot of info covered about Docker Compose. I cover it quite a bit in Docker for Web Developers, so if you do want more details on it feel free to check that course out. I'm just going to focus at this point on how we use it to build our images and then get our containers up and running. So I'm going to go ahead and say docker-compose build, and this is going to go in and build our images for the server and for the Angular container that we've been working with throughout the course. So let's go ahead and do that. Now this will be very fast because I've already done it earlier, and so it's just reusing the layers I had, but you'll notice that I have a Node image, and that's going to be for the Node.js RESTful service. And here's the nginx image that we've been working with up to this point in the course. Okay, so now our images are ready to go. Now how do I get both of these containers up and running? Well, now we can say docker-compose up, and that's going to start up the containers based on information that's in the docker-compose files, and I'll talk about those a little bit later in this module.
But now my containers are up and running, and I actually have three containers here. So I have the Angular container, which is going to call out to the Node.js RESTful service container. Then I added a third called cAdvisor, which is a container that provides a little web app for monitoring other containers. So let's see if everything is running properly in the browser now. I'm going to come into localhost, alright, and everything seems to load, and we can get the customers and the orders, but how do we know it's not just the hard-coded data being cached? Well, if I right-click, Inspect, we'll go to Network, XHR, and Refresh. Alright, so here's our customers. Notice it did call localhost port 3000, so let's try that out. This is the other container. And it called into api/customers, and there we go. And then if I go to the orders it's calling into orders, and there's the order JSON data, so it is working properly. We have multiple containers up and running, and these containers are actually communicating with each other, which is super nice and, as you can see, pretty easy to get going. Now I mentioned I threw in a third one, mainly just so I could show you that we can bring up as many containers as we'd like. This one is called cAdvisor, and it's on port 8080 by default, and what it allows us to do is monitor our Docker containers. Some of the containers it lists relate to something called Kubernetes, but here's my angular-node-service, and this is the one for the RESTful service. I can get to the CPU, the memory, the file system, all that fun stuff. And then if we go back I can also get some information about my nginx Angular container, and we can see that type of info here. Obviously it's not doing too much right now because I'm not hitting the website. But that's called cAdvisor, and it's a free image and container you can use when you'd like to monitor the containers that you have.
So that's an example of Docker Compose, and what we're going to do moving forward is take a closer look at the Docker Compose yml files. To stop this I'm going to hit Ctrl+C, and then I'm going to say docker-compose down, and this is going to take down my containers. Now I don't have to manually say docker stop and docker rm and all that fun stuff; it does it for me. Alright, so those containers are down. We can say docker ps -a, and everything is gone. And just as a side note, I locked up the console because I ran in interactive mode, but I could have used the -d flag here as well. I could have done docker-compose up -d, and what that does is bring those three containers up but then give me my console back, and then we can take those down the same way. And that's how easy it is to orchestrate containers on your development machine using Docker Compose. So now, as mentioned, let's take a closer look at these yml files.
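The whole workflow demonstrated in this clip boils down to a handful of commands, run from the folder containing the docker-compose.yml file:

```sh
docker-compose build    # build the images for every service that has a build context
docker-compose up -d    # start all the containers in detached mode
docker-compose ps       # list the running service containers
docker-compose logs     # view the combined container logs
docker-compose down     # stop and remove the containers (and the network)
```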
Exploring the Docker Compose File
The magic behind the docker-compose build and docker-compose up commands is a file called docker-compose.yml. So what we're going to do in this section is take a closer look at what a yml file is, and explain how we can create services, which equate to the images that we want to build, and then how we can bring those services up as containers. So going back into the project, we've already talked about the Dockerfiles for our nginx-angular image. In the server folder for our RESTful service, there's also a Dockerfile for that particular item that's being built, a very simple one actually that just copies the source code and then runs npm install. And then aside from that we have cAdvisor, but that's not an image that we're creating here. That's provided for us up on Docker Hub, so I'm just pulling that down and using it. Now when we went to the console and ran docker-compose build, there obviously had to be something there for it to build, and as I mentioned there's a docker-compose.yml file here. Now this file defines three services: the first is nginx-angular, the one we've been working with throughout the course; then it has node; and then here's cAdvisor. Now for the cAdvisor configuration settings you see here, this isn't something I memorized; I went and looked it up, and they provide some details on how to use it. But if we go back to the nginx and the node entries, these are services I had to define, so let's walk through them real quick. The first thing I did is say what the container name is going to be once this nginx service is brought up and used. I also said what the image name is going to be when we do a docker-compose build. Now to do the build, we have to tell it where to find the Dockerfile.
So I said that the context is the current folder where the docker-compose build command would be run, I'm running that at the root, and then the file that actually accomplishes the image build is our nginx Dockerfile, the one we talked about a little bit earlier. This one is the very basic one for development purposes. Now in addition to that, I'm actually setting up the volume through docker-compose. If you recall, we did this through docker run with a command line switch, -v. But with docker-compose, you can define this in the yml file, so that when I do docker-compose up to ultimately get the container up and running, it'll automatically create this volume from the container folder back to our dist folder. This provides a really convenient way to do that type of thing. I'm also defining external and internal ports. I even added 443, although we don't have an SSL certificate for this one. This container is going to depend on one called node, that's another service we'll get to in a moment, and then it's in a network called app-network. Now what you'll typically do when you use docker-compose is set up some type of a network. If I go to the very bottom, I set up what's called a bridge network, give it a name of app-network, and you'll notice that I have networks app-network for cadvisor, for node here, and for nginx, so basically that means all three of these containers are in the same group, if you will, and they can all talk to each other. Now if we move on to the node service, in this one we have, again, the name of the container once it gets running, and the name of the image once it builds. Now the build context is a little different. From where I'm currently at, go to the server folder, and then get this node.dockerfile, and so this gives it the context of where this file is.
I'm setting an environment variable in this case of NODE_ENV to development, setting my external/internal port, and then we've already talked about the network. So I do want to re-emphasize that I'm giving you a quick overview of this, so if you are new to Docker and Docker Compose, definitely go back and check out the Docker for Web Developers course, because I break this down in much more detail. I hope that even if you're new to this it helps give you an idea about what's going on with the build process, and then as the container is brought up. Now for the cadvisor, as mentioned, I'm setting the container name, and the image is actually an image that's already up on Docker Hub called google/cadvisor. So you'll notice there is no build context like we had up here, because I'm not actually building this image, I'm just using it, and then it has some volumes it requires you to define that you can find in the documentation for cadvisor. So in the end, when we do a docker-compose build it's building the nginx image and the node image that I have here; those are the two services that it has a build context for. The cadvisor it's not building because, as I mentioned, it just pulls it from Docker Hub. When we do a docker-compose up, it looks at all the services, finds the image, and then starts up the container and applies any extra settings like volumes, ports, and even environment variables if you'd like to pass those into the container. Now if we open up the docker-compose.prod.yml file, this is the one that we would use for testing, QA, things like that, and maybe even production, to build our images. So this is very, very similar. The main difference is that this one uses the prod version of the Dockerfile, which we covered earlier in the course, that actually copies the dist folder into the image. So I don't have a volume in this case, because I have the code in the actual running container once we get this up and running.
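Pulling the pieces of the walkthrough together, the development docker-compose.yml might look roughly like the sketch below. The exact container names, image names, ports, and volume paths here are assumptions based on the narration, so expect the course's actual file to differ in the details:

```yaml
version: "3"

services:

  nginx:
    container_name: nginx-angular
    image: nginx-angular
    build:
      context: .                         # build runs from the project root
      dockerfile: nginx.dockerfile       # the basic development Dockerfile
    volumes:
      - ./dist:/usr/share/nginx/html     # link the container folder back to local dist
    ports:
      - "80:80"
      - "443:443"                        # defined even though no SSL cert is set up
    depends_on:
      - node
    networks:
      - app-network

  node:
    container_name: node
    image: node-service
    build:
      context: ./server                  # the RESTful service lives in the server folder
      dockerfile: node.dockerfile
    environment:
      - NODE_ENV=development
    ports:
      - "8080:8080"
    networks:
      - app-network

  cadvisor:
    container_name: cadvisor
    image: google/cadvisor               # pulled from Docker Hub, so no build section
    volumes:                             # required mounts, per the cAdvisor docs
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8081:8080"
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
```

The prod variant described above would swap in the prod Dockerfile and drop the dist volume, since the built code is copied into the image itself.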
Now aside from that, everything is pretty much the same down here, except I've changed the NODE_ENV environment variable to production, and then cadvisor is identical. So if we wanted to build the images for this file, we could come on in and, instead of just saying docker-compose build, which would build from the default docker-compose.yml file, we would say -f, give it the prod file, and then say build. Now this would go and build all three of the images, but these would be more of what I'd call the runtime type images, the ones that might run in production or staging type environments. So now that it's done building, we can do the same type of thing. We say -f, give it the prod file, and now we can say up, and this'll bring up all three as you saw before, but now these will be the more official running containers. So if we jump on back and refresh, we'll see the same exact thing, but again these will be more of the production type of containers that we might want to run. So that's an example of how we can actually work with Docker Compose, and specifically the yml files that we have, and a quick run-through of how we can get multiple containers up and running with yml files and the docker-compose commands.
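The -f workflow just described boils down to a few commands. The prod file name here is an assumption (the course may use a slightly different name), and these require a running Docker daemon:

```shell
# Without -f, docker-compose reads the default docker-compose.yml.
# Point it at the prod file explicitly to build the runtime images:
docker-compose -f docker-compose.prod.yml build

# Bring up all three containers from the prod file, detached:
docker-compose -f docker-compose.prod.yml up -d

# Take the same set of containers back down.
# Note: -f must match the file used for "up":
docker-compose -f docker-compose.prod.yml down
```

Anything you can do against the default file works the same way with -f, so the dev and prod workflows stay symmetrical.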
Options for Deploying Multiple Images/Containers
The question that usually comes up at this point is, alright, Docker Compose is great because I can get the images built and run multiple containers on my system, but what do I do when I want to do that on my server, up in the cloud, or somewhere else? What are the different options for deploying multiple images and getting those containers up and running? Now this is a big topic that I can't fully address here because it's outside the scope of this course, so what I'm going to do is talk about some of the different options, so at least you know about them, and then you can go research those either on Pluralsight.com or through other avenues out there. So when it comes to deploying multiple images and containers, there are several options you can choose from. One of the easiest is you could just create a virtual machine, either on your server or up in the cloud, copy over your Docker Compose file for production, staging, QA, whatever it is, and run docker-compose up, assuming that you already have the images in a registry somewhere. Now that would be by far the easiest way, and there are many people that actually do that. The challenge is, you're not going to be able to scale out easily, so if you want to move those containers across multiple VMs, that would be very challenging; it's not designed for that. So if you just need to get the containers up and running on some server, then Docker Compose is definitely the easiest option to do that. Now the second option is to use one of the cloud providers' container management services. I showed one of those earlier on Azure, and that was Web Apps for Containers. Now the challenge there is there's no orchestration mechanism built in. It's kind of a one-by-one process of adding the containers. You can scale, so there is that benefit, but there's no orchestration mechanism, no orchestration files, things like that.
So it does provide a way, though, that you could get multiple containers up and running, and then on the one for Azure I showed you could just use HTTP to communicate between those different containers, and in the Angular world that would work perfectly. But it really depends on how many containers you have and what you're trying to do. Now the most robust solution that's extremely popular these days is to use Kubernetes. I'll talk a little bit more about Kubernetes to wrap this section up, because it is very popular, but it's also a very big topic and there are entire courses on Pluralsight just on Kubernetes, so let's talk about it just a little bit here. Here's the official definition from kubernetes.io: Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Now let me give you my summary. If you'd like to manage and orchestrate many containers, scale them out, replace containers as updates come out, and take advantage of self-healing and much, much more, then Kubernetes is the go-to mechanism as of today for managing containers. It's very popular across all the main cloud providers, and almost all of them provide some type of a managed Kubernetes service that you can subscribe to. So when it comes to these services, Azure has Azure Kubernetes Service, or AKS; AWS has Amazon Elastic Container Service for Kubernetes, or EKS; and Google Cloud has Google Kubernetes Engine, or GKE. These provide a way to get your containers up and running on one node, a node being like a VM, or scale out to many nodes. And they allow you to dynamically scale, and as I mentioned, Kubernetes has self-healing capabilities and much, much more when it comes to fixing issues that come up.
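To give you a rough feel for what that looks like in practice, a hypothetical Kubernetes Deployment and Service for the course's nginx-angular image might be sketched like this. The registry path, image tag, replica count, and port are all assumptions for illustration only; the course doesn't cover Kubernetes manifests:

```yaml
# Hypothetical Deployment: Kubernetes keeps the desired number of
# pod replicas running and replaces any that fail (self-healing)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-angular
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-angular
  template:
    metadata:
      labels:
        app: nginx-angular
    spec:
      containers:
        - name: nginx-angular
          image: myregistry/nginx-angular:1.0   # image pulled from a registry
          ports:
            - containerPort: 80
---
# Hypothetical Service: exposes the pods behind a single stable endpoint
apiVersion: v1
kind: Service
metadata:
  name: nginx-angular
spec:
  type: LoadBalancer
  selector:
    app: nginx-angular
  ports:
    - port: 80
      targetPort: 80
```

Scaling out is then a matter of changing the replica count, and the managed services mentioned above (AKS, EKS, GKE) handle provisioning the underlying nodes.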
So while this is a very big topic and outside the scope of this course, as mentioned, I did want to make you aware that this would be a good option to look into if you want an official way to manage multiple containers in an environment. Kubernetes can also be run in your local environment or on an on-premises server; you don't have to use one of the cloud providers. So if you'd like more information on that, you can go to the kubernetes.io link that I showed earlier, and they provide additional information, help documentation, commands you can run, and much more. Hopefully that gives you a nice starting point for how you can deploy multiple containers to a server, either on-premises or up in the cloud.
So in this module, we've talked about how Docker Compose provides excellent orchestration capabilities to get one or more containers up and running on your system quickly and easily. It also supports building your images, and in the end it just saves a ton of time. So while we can use docker build to manually build our images and even create an automated process if we have multiples to build, with Docker Compose we can build multiple images very quickly, and then we can run multiple containers very quickly. So we have the docker-compose build command we talked about, and docker-compose up. Finally, we talked about how cloud providers offer several different solutions for managing multiple containers. Kubernetes is definitely one of the most popular as of today. And you'll find a lot of great courses out there on Pluralsight, and documentation in other areas. So I hope you have a better feel for how we can work with and deploy multiple containers. Docker Compose definitely provides the easiest way, but when you want a more robust solution you can look to Kubernetes and other options that are out there.
Thank you for taking your valuable time to watch the Containerizing Angular Applications with Docker course. Let's review some of the key topics covered throughout the course to wrap things up. We started off by talking about the benefits that containers offer, and compared them to frontend application options such as directly deploying code to a server or using content delivery networks, or CDNs. You saw that containers bring several benefits to the table, including enabling team members to be productive quickly when first getting started with a project. Also, you can isolate application versions, and even allow multiple versions of a server or framework to run side by side. We also looked at consistent deployments between multiple environments, and how containers can generally simplify the process of deployment. From there we learned how to get an existing nginx Angular image running as a container using the docker run command. This included discussing the role of volumes and how they can be used to link a container to a local folder for development purposes. As a side note, volumes are also very common in some production scenarios where artifacts in a container need to be stored outside of the container. We also learned how to write custom Dockerfiles, and how a multi-stage Dockerfile can be used to build an Angular application and copy the final code into an image. This can result in a smaller image size, which is great for deployment, and multi-stage Dockerfiles can provide significant benefits in the deployment setups used in organizations. Once an image is created, we learned how to run the container using the docker run command, and by using the VS Code Docker extension. The extension provides a quick and easy way to work with images and containers without having to type any commands in many cases.
Finally, we discussed how to orchestrate the process of building multiple images and running multiple containers by using docker-compose.yml files and the docker-compose command. Many applications require multiple containers, so by using docker-compose you can greatly simplify the process of building and running containers. Now that you've watched the complete course, I hope that you have a solid understanding of the role that containers can play with Angular and many other types of applications. I've sincerely enjoyed creating the course, and hope that you'll be able to apply the knowledge that you've gained to your development projects back at work. So thanks again for watching the course, and I hope you'll check out my other courses on Pluralsight as well.
Released 26 Jul 2018