Containerizing Angular Applications with Docker
by Dan Wahlin
Learn how to build and run your Angular application code using Docker containers. Explore how to write Dockerfiles for custom images, leverage multi-stage Dockerfiles, orchestrate containers with Docker Compose, and much more.
Course Overview
Welcome to the Containerizing Angular Applications with Docker course. My name is Dan Wahlin, and I'm a software developer, architect, and trainer specializing in web technologies. I work a lot with Angular and Docker, so this course was especially fun to create, and I'm excited to share it with you. Throughout the course, you'll learn about the role of containers when it comes to containerizing Angular applications. We'll start off by answering the question, why use containers at all? I love working with containers, but with frontend applications there are several viable alternatives that can be used. So we'll answer the question by discussing the benefits that containers bring to the table, as well as some alternatives that exist. From there we'll jump into the role of Dockerfiles and how they can be used to create development and production images for Angular apps. This includes discussing multi-stage Dockerfiles and how they can be used to build an app and create a final image that's as small as possible. We'll also learn about a Docker extension that can be used in VS Code to simplify working with images and containers, and wrap up by showing how to run Angular with other containers by using a tool called Docker Compose. So let's get started and jump right in.
Angular and Containers
Course Agenda
Let's talk about the course agenda and the specific topics we're going to be covering throughout the course. So we'll start off by talking about some of the prerequisite knowledge you do need to have so that you can get the most out of this course. Now from there we're going to jump right into the question, why would you want to use containers with frontend applications like Angular? We'll discuss some of the benefits that containers bring to the table, but we'll also discuss some alternatives that exist that you might consider as well. From there, we're going to get an Angular container up and running so you can see that process, and then we'll move on to covering how to build custom images for development and for production. Specifically, we're going to focus on creating a multi-stage Dockerfile. This is a file that will ultimately allow us to not only build our Angular app, but also generate a very small final image that'll make it more efficient as we deploy between different servers. In the next module, we'll talk about that process of deploying an image, and then running a container, and then finally we're going to wrap up with how to run multiple containers. Oftentimes a team might not only be responsible for building the Angular part of the app, but also some of the services that Angular actually calls into. You might want to containerize all of that. We're going to talk about that process, discuss a tool called Docker Compose that you can use, and also discuss alternatives that exist as you start to move to servers or maybe to the cloud. Now, throughout the course we're going to cover a lot of different technologies. We'll learn about nginx, which is a very efficient reverse proxy server, we're going to talk about custom Docker images and Dockerfiles, talk about different ways to get containers up and running and how Angular fits into that, and we'll also talk about how to orchestrate multiple containers using Docker Compose.
So now that you've seen what we're going to cover, let's go ahead and jump into the first module and get started.
Angular and Containers
Let's start the course off by talking about some of the prerequisite knowledge you need to have to maximize your learning as you go through the course, and then address an important question we need to answer, which is, why would you want to use Angular with containers? And we'll address a few other questions along the way. So we're going to start off by talking about the prereqs to get the most out of this course. I always like to set that up front so that if you need to jump out and maybe view another course on Pluralsight you can do that and then come on back, but I'll explain what you need to know. From there, we're going to cover and review some of the key course concepts, some of the prereqs actually, but just do a quick review. So we're going to talk a little bit about what are containers, what are images, and a few key Docker commands we're going to be using throughout the course. Now from there we're going to address the question, why use containers at all? Maybe you've heard about containers, but you haven't actually seen the benefits they can bring, so I'm going to walk you through several scenarios and explain why containers make a lot of sense in many cases for applications, even Angular applications. From there we're going to get the sample app up and running. Now we're not going to be building any Angular code in this course because it's all about running Angular in containers, but I have a really simple little Angular app that we can demonstrate. We'll get it running locally with the CLI, and then we're going to get it up and running in a container. And then as we move through the course we're going to be talking about that process to build our Docker image with Angular, we're going to be using a server called nginx for that, by the way. And then we'll talk about how to run our container and do much more along the way. Now you can grab the sample code for the course at my GitHub repository that you see here. 
This will have a README so if you want to run things locally you can do that, if you want to run it within a container you can do that, and there will be some different options there. So feel free to grab that code, and then you can start following along with me as we go through the course. So let's jump right into some of the prereq knowledge you need to have to get the most out of this course.
Prerequisites to Maximize Learning
One of the things I always like to do to start off my courses is make sure that you know the prerequisite knowledge and expectations so that you can get the most out of the course and really feel like you get a lot of value. So for this particular course, the expectation is that you do know at least the basics of Docker, you know what an image and a container is and how they work. Now I am going to be reviewing some of the key course concepts a little bit later in this module, so if you are new to it I'll give you those basics, but really it would be even better if you have some experience working with images and containers. Now if you don't, I'll have a few options for you that I'll mention in just a moment. Now we're also going to be using some Docker commands, so it will help if you have worked with Docker before and run commands like docker run or docker build. We won't be doing anything super complex, but if you already have some experience with Docker commands that'll also be advantageous for you. And then finally, although we're not going to be building an Angular app in this course, we will be using the CLI a little bit, and we're also going to be talking about Angular somewhat, so having a foundation in some of the Angular fundamentals would also be advantageous. Now, if you don't have any knowledge of those topics, I would recommend checking out some of the other courses on Pluralsight.com. I have one called Docker for Web Developers, and this will give you all the details you'd need, even more than you'd need for this particular course, to learn about how Docker and images and containers could be used to provide a lot of benefits for various types of apps, not only Angular, but many others. Now if you want to see some Docker with Angular and other technologies, I also have an Integrating Angular with ASP.NET Core RESTful Services course, and one that's also for Node.js developers, so you could check those out as well.
And then if you need some Angular fundamentals type of courses, then Pluralsight.com has those as well. So as long as you have the foundational topics, the knowledge of images and containers, some of the key Docker commands and Angular fundamentals, you'll be fine. Even if you don't, I'm going to do a really quick intro to those momentarily, but that's what you need to get the most out of this course.
Key Course Concepts
In this section, I want to talk about some of the key course concepts that we're going to be covering throughout the course. I mentioned earlier the prerequisites. This will give you some of that, but the goal is really just to act as a review for some of these concepts in case maybe you've done them but haven't used them recently. If you're already familiar with Docker images, containers, and some of the key commands, feel free to skip ahead to the next section. So the first thing we're going to talk about is we need to work with Docker images in order to get an Angular application running in a container. Now an image is really like a layered cake. Think of a chocolate cake that has different layers, and each layer is an instruction; it could be code, it could be security settings that run, it could be many different things. And what we're going to be doing in this course is actually building a Docker image, and we're going to use something called nginx. Nginx is a reverse proxy, a very fast way to serve up static content like HTML, JavaScript, CSS, and those types of things. And we're actually going to host Angular in this container that we're going to run. Now, to do that, though, we have to build an image, so we're going to do that a little later. And then what we'll do is we'll take that image and we'll move it to a server, or in our case we can just run it on our local development machine, and we'll use it to create a running container. Now, a container is kind of like that cake with some frosting on top. The frosting creates what's called a readable, writable layer, and this allows us to do some other things at runtime. So what we're going to do is take our nginx image that has our Angular code embedded inside of it, and then we're going to make that into a running Docker container.
And this is how we'll be able to use this functionality on our local machines, or move it to a server, or even up to a cloud service, because once we have that image we can move it anywhere and be assured that when we run that container it's going to run the same. So, as mentioned, an image is really just a read-only template composed of a layered file system. It has these different instructions that are stacked up very similar to layers of a cake, and the image itself is not that useful on its own. But, we can push this image up to different repositories, it could be Docker Hub, it could be Azure, or Google Cloud, or AWS, or whatever it may be, and then we can use that image, as mentioned, to actually get a container running. Now this is like a container of products that go on a ship that ships across the ocean, right, it's very organized, we know exactly what's in that container, and we can use cranes and different machinery to easily pull that off the ships. So what's so nice about this, and we'll talk about this a little later, is that this is very reproducible across different environments, and you'll see that's going to be a big, big pro of using containers, even with Angular apps or other apps. Now, in addition to images and containers, and kind of knowing what those are, we're also going to be using some different Docker commands. So I'm going to do a quick review of those here, and then you'll see them in action. Now, these commands, if you do want more info, I would recommend checking out my Docker for Web Developers course, because I go into all of those, and many others, in great detail. So one of the commands you can use is called docker pull. I mentioned that we create an image, we put it up in a repository, and then we want to pull that down to a machine, a server, a virtual machine, whatever it may be, that runs the containers. Docker pull can do that. Docker images, this will list all the images on the system where you run this command.
Docker remove image, rmi. Alright, so if you do docker rmi, and then give it an image ID, which we'll be seeing a little bit later, then you can actually delete that image off of whatever system this command is run on. And then finally, for the key commands for images we're also going to be building images, and there's a docker build command. Now there's a little more to that one, and we'll be talking about that a little bit later as well. Now when we want to take an image and actually run it as a container, we can use docker run, and then give it the image name. Now there's a lot of other command line settings you can give as well, and we'll see some of those as we move forward throughout the course. Now once you get one or more containers running, you might want to list all of the containers, maybe running containers or stopped containers. We can do that with docker ps -a. That -a will show us all of the containers on that system, including running or even stopped containers. And then finally, just like removing images, you can also delete containers or remove containers, and that's docker rm. And then you give it the container ID and that will take care of actually removing that container off of your system. Those are some of the key things we're going to be talking about all through pretty much every module of the course. We're going to be building images that have Angular code in it, we're then going to get those images up and running as containers, and we're going to be using some of the commands that you see here to actually do the build, and the run, and deleting, and things like that. So now that we've talked about some of the key concepts we'll be covering throughout the course, let's answer the all-important question of why do we really want to use containers with Angular applications?
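For quick reference, the commands reviewed above look like this on the command line (the image name and the IDs in angle brackets are placeholders for illustration, not values from the course):

```shell
docker pull nginx            # pull an image down from a registry
docker images                # list all images on this machine
docker rmi <image-id>        # remove an image by its ID
docker build -t my-image .   # build an image from a Dockerfile
docker run my-image          # start a container from an image
docker ps -a                 # list all containers, running or stopped
docker rm <container-id>     # remove a container by its ID
```

You'll see each of these used for real in the demos later in the course.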
Why Use Containers?
Before I jump into any technology, I like to ask the question, why are we using it? There's a lot of great technologies out there, but does it solve a problem that you actually have in your job or in your company? Well, containers solve a lot of problems that I think we've traditionally had to worry about, so let me talk through why do we want to use containers for Angular or other applications? Now before officially answering that question, let's talk about some other options that we could use instead of containers. We, of course, could just deploy our source code normally, and we'll talk about that actually in a moment, but we could also use something called a content delivery network, and these are normally cloud services that you can subscribe to, and they allow you to host your static type content, your HTML, CSS, JavaScript, your images, your SVG, and pretty much everything that a frontend app would actually need. And what's nice about content delivery networks is they can deploy regionally and scale out automatically, so it's very easy to manage. You just upload your files, and you can have your app up and running very quickly, get caching benefits, and more. So you might say, why not use a CDN, Dan? And I think that's actually a good question to ask. There's never one-size-fits-all; every company does things in different ways. Here are the pros and cons, though, in my opinion, of CDNs. First off, there are a lot of pros. You don't, however, own the management of it. You are relying on a cloud service to actually manage deploying the bandwidth, all that fun stuff, for your static files. Now a lot of people are moving to the cloud, so you might not be afraid of that at all. With some companies, though, they might want more ownership, and they also might have a very high percentage of their apps that aren't actually public.
I think an external CDN makes sense if it's a public type of app, potentially, but if it's internal, you're probably not going to put your app there, because internal apps are typically geared toward your company, and in many cases you really don't want that app leaving your company. So then you may say, well let's set up an internal CDN. Okay, well I think that's another great idea, but now you still have to come up with a way to manage that CDN, and I think that's where containers can come into play. Now there are some other things as well that we're going to talk about with deploying our code, so let's move on to the next part of this. So when it comes to getting an application up and running, we're all accustomed to getting our Angular code built, we do ng build, for example, and then we ship that code off to some server somewhere. So if you boil it down, when we move between environments it's almost like getting in the car, and you have a road map, and it says, oh, okay, we're going to stop at the city called staging, and then once we get through that we're going to stop at the city called production, and we want to make sure that everything runs the same as we go through these different cities. Well, if you've been in this business very long, you'll know that that can actually be challenging, because whoever set up the server, whether it's a physical hardware server or it's virtual machines running on the hardware, there's a little bit of a challenge getting everything consistent between the different environments, even something as simple as just a JavaScript app with HTML. Maybe that's running on Apache, as an example, or nginx, or IIS. What version of IIS is it, what security settings does it have, what configuration does it have? There's so many different factors that go into this, and so we normally just worry about shipping the code, and we hope that the server is set up properly, but I'm going to show you that containers can do a lot more.
So option 1 I just kind of discussed. We write an Angular app, we build our code, we get our bundles and our HTML and everything ready, and then we just simply ship that through some deployment process off to a server. Now, again, that could be hardware, that could be a virtual machine sitting on top of that, it might be up to the cloud, it just depends. Now, this part for us as developers is actually pretty simple. You just get the code from the dist folder generated by ng build, and we, or somebody else, just ships that over maybe using a CI/CD type process, continuous integration, continuous delivery, or maybe even a manual process that we want to work with. So there's nothing wrong with this approach, it's what most of us have done for many, many years, but here's the thing. We have no control over the server part of it oftentimes. A lot of times, especially in larger companies, some other group is managing that, and then that's where the surprises come up. A service pack was applied that we didn't know about, we didn't have it in our development environment, that broke something. Security settings are different than we thought. I mean, who knows, there's all kinds of fun things that can come into play. So, to get back to the question of why use containers, let's talk about that now. So let's say that we, again, are kind of hopping in the car, and we want to get whatever's in the back of the trunk shipped from point A to point B, just like we've been doing here, well another technique would be, what if we build a Docker image that not only has our code in it, but also has the actual server that's going to run the code? Now that could be Apache, it could be IIS, it could be nginx, HAProxy, there's many things that can serve up our JavaScript and HTML, and then what if we simply moved that image onto whatever server we want to run it on, and then that is used to create the container. Here's what it would look like visually.
So we get our code built, we use that to build an image, the image moves down to the server, and then we use that image to create this running container. Now the difference here is very significant actually, if you haven't done this before, because what's inside of the container really, really matters. Now, if we were just putting the code in the container, that's really not a lot different than option 1 that I showed, right, because that just has the code. But what if everything needed by the app went into the container? And keep in mind, an app might have multiple containers, because Angular, likely, is going to call into RESTful services, or GraphQL, or something else, and those might also be containerized. So you could have an ASP.NET Core, or Node.js, or PHP, or Java container, whatever it may be. These different containers all have their own images, we copy everything over to the server, and now we can get that entire app up and running very consistently. So the difference here is what's in the container. What goes in it? Well, everything you need for that part of the app. So in the case of Angular, of course you're going to put the code, but you're also going to have the server, and a specific version of the server, any patches, you could have SSL certificates, configuration the server needs, environment variables the server needs, and on, and on, and on. The bottom line here is we're not just deploying the code, we're deploying a complete success strategy, meaning that we're deploying everything we need to run, in this case, the Angular app. And, again, that might entail running multiple containers as well. That's a big difference from option 1, where we simply deploy the code. Now, everything is very reproducible across different environments. So that gets us to the final piece of the benefits of containers.
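As a rough sketch, an image definition for this second option, where the code and the server ship together, might look something like the following. This is an assumption for illustration only, not the course's actual Dockerfile, and the build output folder name in particular will vary by project:

```dockerfile
# Start from a small nginx base image, so the server itself ships inside the image
FROM nginx:alpine

# Copy the Angular production build output into nginx's default web root
COPY dist/my-angular-app /usr/share/nginx/html

# nginx listens on port 80 inside the container
EXPOSE 80
```

The key point is that the server and the code travel together, so every environment that runs this image gets the same nginx version and configuration.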
Now this is something that I touch upon in much more detail in the Docker for Web Developers course, but I thought this would be a good opportunity to review some of these key benefits to wrap this section up. So, first off, how many times have you ever tried to get an application up and running only to find out that different team members built it, maybe some different technologies were used for pieces of it, and now you're spending a day or two getting security right, the proper server version installed, configuration, everything you need to get it up and running? Well, with images and containers, I can give you the images, you can start the containers up in a matter of seconds, and you're up and running. When you're done with them, you can remove those images, remove those containers, it's as if nothing ever happened. There's no uninstall process, there's no traces left on your system of the server or your code, or whatever it may be. So that's actually pretty compelling. It also can eliminate app conflicts. How many times have you deployed code, and maybe the code was a newer version of whatever framework you were using, let's say it was a server-side framework, and everybody is always scared about updating the server framework because they're afraid older apps might break. Well, with containers, because the server and all the settings and everything goes in that container, you could have version 1, version 2, version 3, all running on the same virtual machine with no problem. And then once everybody stops using version 1, you see there's no more traffic to it, you can just kill that container and you're good to go. So now you can completely isolate the different server versions, and security, and everything, and that's pretty compelling as well. Now I've mentioned environment consistency. How many times have you ever deployed between environments at your org, and there's what I call a surprise that happens?
Something was different between staging, or production, or something like that. Well, again, if we can build these images, and then deploy these images to our different environments, and that might even mean cloud environments, then we can get the app up and running very quickly, because there is nothing to install anymore on the server as far as the server version or things like that. The image has everything needed. I don't have to worry about keeping the server up to date. Now I'm worried about keeping the image up to date. That's going to give us the consistency to move between environments. For our company, I've had to move between different virtual machines, because I've maybe added a bigger one or scaled out or something, and because I use images and containers, it's very, very easy for me to do that. It's very minimal effort actually. Now all of this leads to shipping software faster, with less headache, I might add. So, there's a lot of benefits to this. It's not just shipping the code anymore, it's shipping the code plus everything needed to run that code successfully. So, circling back to why use containers and that all-important question, I hope that gives you some ideas about the benefits that containers can bring to the table.
Running the Angular Application Locally
Let's take a look at the Angular application that we're going to be working with. Now, this application is a simple one, it shows customers and orders, and really it just gives us a starting point so that we have an Angular app built with the CLI. And though I'm going to show you how to run it locally, throughout the rest of the course we're of course going to focus on containers, building images, and more. Let's take a look at how we can run the app locally. So the application itself is just called Angular-Core-Concepts. It just covers the fundamentals of any Angular app. And it's built with the Angular CLI, of course. So to get it running, we can go ahead and in VS Code here I'm just going to say Open in Terminal, or Open in Command Prompt on Windows, and from here we can go ahead and do our normal ng serve and -o to open the browser. Now this assumes that you have installed the Angular CLI. If you are not familiar with that, you'll probably want to go check out some of the different Angular courses out on Pluralsight as well. So it looks like the application is loaded, and you can see that we have some different customers. You can drill into these customers to view the orders, you can filter the customers, and we can also sort and do pretty standard operations there, pretty basic core concept type of Angular app. Now currently you'll notice it's on port 4200, and we have our routing and everything going, and this would be kind of the normal way we would develop. Now I'm going to show you how we could also develop a little later in the course, not only using ng serve, but also using a real server if you wanted, like nginx or Apache or others. Now we're going to focus on a server called nginx, which I'll be talking about a little bit later. But that's how easy it is to get the sample app going locally. So let me go ahead and we'll Ctrl+C here to stop it, and now what we're going to do is take a look at how to get this going in a container.
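The local workflow shown in this demo boils down to just a couple of commands, assuming the Angular CLI is installed globally (the npm install step is an assumption for a fresh clone, not something shown on screen):

```shell
npm install      # restore the project's dependencies on a fresh clone
ng serve -o      # start the dev server on http://localhost:4200 and open the browser
```

Press Ctrl+C in the terminal to stop the dev server when you're done.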
Running the Angular Application in a Container
In this section, we're going to take a look at how we can get the Angular application up and running in a container, and what we're going to do is run a real web server in the container but link it back to our local source code, so we're going to create something called a volume. Later on in the course, I'll show you how we can actually copy the source code into the container so we could deploy it to different environments. Now in order to run what I'm going to show you here, you will need a few things. First off you'll need an editor. Any editor will work. I'm going to be using VS Code throughout the course. You can download it from code.visualstudio.com. Now you're also going to need Docker Community Edition. Now this is the free edition of Docker that runs on Linux, Mac, and Windows, and you can go to docker.com to get details on how to install it. Now to run Community Edition you do need Windows 10 Pro or higher, if you're on Windows. There is an option if you're on Windows 7 or 8. I'm not going to be covering it here, that is covered in my Docker for Web Developers course, but that uses something called VirtualBox and Docker Toolbox. So it is possible to do that, but you do have to install something different. Now assuming you have Windows 10 Pro or higher, Mac, or Linux, you'll be able to run the latest version of Docker, which is Docker Community Edition. Okay, so now let's switch over to the code, and I've opened up a file called nginx.dockerfile. Now I'm not going to talk through what it's doing quite yet, I'm just going to jump down to how to run it, and the first thing we need to do is we need to build our Angular project. So the normal way to do that is with an ng build, and I might even add a watch on there so I can just leave it up and running. Now when you do this, it actually deletes the dist folder, and we don't want that in this case. We want to keep the distribution folder and just update the contents when it rebuilds.
So there's a command line switch you can add that not a lot of people know about, but it's there, it's called delete-output-path, and I'm going to say false. Now what that means is, do not delete the existing dist folder. Now the reason that's important is we're going to get a container up and running here in just a moment, and then we're going to link that container back to our local folder here, this Angular-Core-Concepts folder, using a Docker volume. Now the volume is really just an alias, inside of the container they're going to be looking for this folder I'm highlighting, but we're going to point that folder back to our local code. That way we can do local development and actually try out this real web server. Now you might kind of question why would I do that? I can just do ng serve and call it a day. And I would say when you first start getting going on a project, absolutely, just use ng serve, and maybe you have containers for your services, and maybe you don't, but you don't need a container to run ng serve of course. But what happens when you're ready to maybe move to staging or production or something like that? You might want to check, hey, how does the code act in the actual environment for the real HTTP server, and we can do that. That's why containers are so cool. Now I'm going to be using something called nginx. This is a very fast reverse proxy, HTTP server. It's very popular. A lot of sites you've gone to out there use nginx as the port 80 server, the initial point of contact when you hit the website. And then what we're going to do is nginx is going to be running in the container, but I'm going to link, like I said, back to my source code using a Docker volume. Now later I'll show you how we can actually copy the Angular source code from the dist folder into a container, into an image which gets created as a container, and that way we could deploy to staging, or production, or the cloud, or whatever you're doing. 
But let's go ahead and get this up and running now. So once you've done a build, I could go ahead and just do this if I wanted, let's leave the watch and the delete-output-path false. We'll let that start. Now you'll notice as I do this the dist folder won't magically disappear like it normally does when you do an ng build. Okay, perfect. Now I'm going to just hit the plus here to get a new console, and I'm just going to paste in a build statement. Now, we'll talk about this a little later in more detail, so if you're new to it, don't worry, we'll get to it, but right now I'm just going to use this to create an image. But the image doesn't have my source code in it, again, we're going to link back to the source code. So this'll be really fast because I've already pre-built this. Alright, so the image is ready. Now the next thing I'm going to do is run this, so we're going to do a docker run. Now before I do that, I want to point one little thing out. If you're on a Mac, you can use this syntax. This means your current working directory, this is an alias for the current working directory, and that's something that if you're on Mac just kind of works, and there's only kind of one way to do it on Mac. On Windows, there can be different ways to do this. So I have a blog post up here, and if you are on Windows, you'll want to check that out, because if you're using PowerShell you might do it one way versus just regular command prompt versus maybe Git Bash for Windows, something like that. Now I'm on a Mac in this demo, so I'm just going to copy this down. Now what this is going to do is say take this image, the image is nginx-angular, and run it. Now I'm not going to run in what's called detached mode, that would make it so my console comes back and I can use it. I'm going to lock it up, so notice it kind of sticks there, and now I'm going to switch over to the browser, and I'm on port 8080.
Now to kind of show you that again, you'll notice I'm running on port 8080, but nginx is running on 80 internally. So let's come on back, and let me refresh, and there we go. So now I'm actually able to do live development against this particular nginx server that's running, and that way I can literally simulate the exact environment on staging, test, QA, whatever you call that, production, a cloud environment, and if I got this to your machine, you could do the exact same thing of course. Very nice and easy to run. Now, to stop the container I'm just going to say Ctrl+C here, and then if we do docker ps -a, that'll list all containers, there we go. It looks like it exited about 8 seconds ago, and it is nginx that was running. And I'm going to say docker rm, and we'll give it just the first part of the ID here, c3, and now if I do docker ps -a, you'll see it's gone. Alright, and to kind of show that, we go on back, and notice I can no longer get to the site because my nginx server is down. So there's an example of how we can actually run the Angular code locally, but against a real server. So we're going to be talking about this as we move along, and as I mentioned, I'm also going to show you how to do a build for when you're ready to go to a staging environment or production environment later in the course.
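Put together, the commands in this demo look roughly like this, a sketch assuming the dockerfile name and volume paths used in the demo project (c3 stands in for the first characters of the container ID shown on my machine; yours will differ):

```shell
# Build the image from the custom dockerfile (covered in detail later).
docker build -t nginx-angular -f nginx.dockerfile .

# Run it: map local port 8080 to nginx's port 80 inside the container,
# and mount the local dist folder over nginx's static-file folder.
# $(pwd) works on macOS/Linux; Windows shells vary, as noted above.
docker run -p 8080:80 -v $(pwd)/dist:/usr/share/nginx/html nginx-angular

# After Ctrl+C: list all containers, then remove by ID prefix.
docker ps -a
docker rm c3
```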
Summary
To summarize this module, we've learned that containers provide several different advantages to us as developers, and also for deployment. Now, containers in general get most of the news and press when it comes to DevOps, and for good reason, because they make it easy, as we discussed, to move between environments, but as I showed a little bit earlier in this module, we can even develop against containers if we'd like. In some cases, you might do that full time; other times you might do it like we showed with Angular, just to make sure that everything is running in the environment where you know your app is going to run later for production. Now Angular can, of course, run in a container using any static file type of HTTP server. While I used nginx, there are many other options out there. You could use Apache, IIS, HAProxy, and many others. And then finally, we saw how Docker images and containers provide several different benefits to us as developers. I just mentioned that the ability to move our images and then start up our containers between environments is a great advantage, but in addition to that we talked about how bringing people up to speed on an app is much quicker if you have containers available for their local development machine. We talked about how you can isolate apps and even run different versions of a given framework or server on the same virtual machine very easily because we containerize those. And then, of course, there's the ability to move our images and run our containers across the different environments. So we have a lot more to cover. We're going to be discussing more details on Dockerfiles, building images, running containers, multi-stage Dockerfiles, and even how to run multiple containers later in the course, but now you have a good sense of what's possible, so let's move on to the next module.
Creating a Multi-stage Dockerfile
Creating a Multi-stage Dockerfile
In this module, we're going to take a look at multi-stage Dockerfiles, and I'll explain what that is, how you can create them, and some of the different benefits we can get from using these types of files. So we're going to start off by talking about how do we create an Angular development Dockerfile first, and that's going to entail using volumes. We saw a little bit of this earlier, but we'll spend a little more time discussing this. From there we're going to talk about multi-stage Dockerfiles, the benefits they offer, and the actual process and workflow that they allow you to create. They're great for build processes where you actually want to build the code in a container, but you don't want everything that that container has in it to be in the final version of the container that you're going to run at runtime, for example. So what we can do is use it to build a very small runtime image, and that's going to allow for easier deployment, faster deployment, and it's just better all the way around. So once we discuss what a multi-stage Dockerfile is, we're going to create our first stage of a Dockerfile that's going to be responsible for building our Angular code. So we're actually going to create an image that'll generate a running container as we build this multi-stage Dockerfile process, and that's going to have the Angular CLI in it so that we can run a build command. So instead of building the code locally, we're actually going to build it in a container. What we're going to do then is take the output from stage 1, copy it into another image, which is going to have nginx. Now nginx is a high-performance reverse proxy server, very, very popular out there, and very good at serving up static content like HTML, JavaScript, and CSS, so we'll talk about how to build that stage. 
From there we're actually going to build our multi-stage Dockerfile using our Docker build command, and we'll talk about some other things there like tagging and versioning, and things along those lines. And then finally I'm going to show you a VS Code Docker extension you can use that's free, and it will allow you to do things like build an image from a Dockerfile, run a container from an image, and much more without really typing any code at all, so it provides a nice way to avoid some of the more laborious and tedious Docker syntax that you might have to write on the command line. So let's go ahead and kick things off by talking about how we can build an Angular development Dockerfile and then convert that into an image.
Creating the Angular Development Dockerfile
Let's take a closer look at the Dockerfile that we built into an image earlier to start off the course, and see what it's doing and why it's doing it for Angular. So here's a file called nginx.dockerfile, and you'll notice it has the standard FROM nginx:alpine. Alpine is a very, very small variant of Linux. We have the LABEL, and then we have a COPY command. And that's really all we have. Everything else is already built into this base image that nginx provides. Now why am I doing the COPY command? Well, let me go ahead and comment this out first, and then I'm going to go ahead and rebuild this image. Alright, and that's all done. And then let's go ahead and start this up. Alright, so it's now started. Now let me run off to the browser here, and I'm going to hit Refresh. Now notice I'm going right to the route, okay, so that's going to be kind of the key here. And I get a 404. Hmm, not good. Now, if I take that off and go to the root it works, and you'll see everything is good there, but if the user had bookmarked this route and went to it directly, well, what's happening is the server sees that as a server-side route. It doesn't know what it is, and it gives a 404; so if I hit Enter I get a 404. So how do we fix this? Well, let me stop the container here, let's go ahead and remove it. Alright, and now what I'm going to do is come back in and put back this COPY command. Now the reason this is important is that it's telling nginx that I have a custom nginx configuration file, and we're going to copy that into the location where nginx looks for the default config. Now what this config is doing is something important with the location block. The rest of this you can look up in the nginx docs, don't worry too much about that, but you'll notice this try_files with the uri, index.html, and then a 404 here.
Well, this little index.html: what it's going to do is, if nginx finds a uri, a uniform resource identifier, or a URL you could say, that it doesn't know what to do with, it's actually going to redirect it back to index.html. That's what we want in our Angular application. Otherwise, we get that 404, because the server saw /customers, didn't know what to do, and then it errored. And if you don't know about this little trick, on any server it can be very painful actually. So in the nginx world we can add this little line, and there are some other ways you can do this as well, but it makes it so that when nginx sees that server-side route it doesn't error; instead it just sends the user back to index.html, and that way Angular can see the route and then show the appropriate view. Alright, so now that we have that, let's go ahead and rebuild our image. Alright, we're good there. And let's come on in and we'll run it one more time, and then we'll be back to what we had originally, but you'll see that this configuration file really matters. Okay, so the container is up, and I'm going to go directly to /customers. Let's refresh, and it works. Notice it actually sent us back to index.html for /customers. Now if I take that out it works, or if I just go to it directly it works as well, and we don't get that error. Now the other important part of this we discussed a little bit earlier, but I want to revisit this volume concept in case you haven't done much with it. So, normally when we're building an image we're copying the code into the image. The challenge with that is if the code changes, which obviously it does in development, then you would have to rebuild the image and restart the container every time, and that would get very annoying after the first one or two times of doing it.
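The key part of that custom configuration might look something like this. This is a minimal sketch; the actual file in the course can include more directives (caching, headers, and so on), so treat everything around the try_files line as an assumption:

```nginx
server {
    listen 80;

    # nginx's default folder for static files, where the Dockerfile
    # copies (or a volume supplies) the Angular dist output.
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Serve the file if the URI matches one; otherwise fall back
        # to index.html so the Angular router can handle routes like
        # /customers instead of the server returning a 404.
        try_files $uri $uri/ /index.html =404;
    }
}
```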
So by creating this volume, we're able to say again that, hey, nginx, when you look in the normal folder where it looks for static content, HTML, JavaScript, CSS, actually kind of like redirect it or alias it as my current folder, which is this Angular Core-Concepts/dist, and of course that's where our ng build code went. That way, as I showed earlier, we can actually go in and start this up, so let me go ahead and start it one more time here. Now that that's running, let's go back, and I'm going to do that ng build one more time. We're going to do a watch, and we're going to say delete-output-path false. Now I want to note before I go any further that if I go back and refresh it still should work. Had we not done the delete-output-path false, as I showed earlier, it would actually fail. So now what I want to do is kind of prove that we can do local development but actually run it against a container so that we can simulate a real production-type environment. Now, again, I don't use this every day necessarily for my Angular apps, I prefer ng serve whenever possible, but when I am close to saying, alright we're ready to deploy, we might actually want to start working with a real server to make sure we don't have any surprises. So let's go ahead and just change something really, really simple here in the app, so I'm going to go into a customers feature I have, and in the html, this is where we show the title, and I'm just going to put Edited so that we know. Now you'll see our bundle just built. Let's go back and refresh, sometimes I have to dump the cache, it looks like I might have to. So let's go ahead and empty cache and hard reload here, and there we go, we get Customers Edited. And that would be an example of where we could do live development, but against an actual real server. So we'll let that run one more time, let's come on back, and there we go, we have our change live. So this provides a nice way to work with this in a more realistic scenario. 
Now, again, you're not going to get, by default anyway, the live reload features that you get with ng serve, but, as I mentioned, when you're ready to test against a real server this can actually be very handy. So now that we've taken a look at that and examined that very basic Dockerfile and the volume, let's go ahead and look at how would we make an Angular image that could go to staging or production.
Multi-stage Dockerfiles
One of the more powerful features built into Docker is multi-stage Dockerfiles. This allows us to build an image in a way that keeps the image size as small as possible. All the other artifacts needed along the way we can get rid of, leaving a very bare-bones image that we can deploy to our different servers so we can run the container. So let's talk about what a multi-stage Dockerfile is, and why you really do care about this, especially if you're working with an Angular project that's going to run in a container. So earlier we talked about the route we travel to get code from point A to point B, and we talked about how we could take our code and copy that into an image, and that image could then be pushed down so we can run the container on the server. Now what if we wanted to actually do the build of the Angular code ourselves, in a container though? In other words, not locally on your machine with ng build, but actually in a container. And then what if we could copy the output of that to the final runtime image? Well, that's what a multi-stage Dockerfile gives us. Now before I show you that, let's take a look at what most people end up doing as they work with Angular or other types of frontend applications and containers. A lot of times they'll create a Dockerfile that looks like this. They'll go FROM node or FROM ASP.NET Core or FROM PHP, or something like that, give it a LABEL, give it a WORKDIR, and then they'll copy the dist folder into that working directory. Alright, pretty normal. Then, in the case of node you'd run npm install on the package.json to get express or other packages available, expose your port, and then set your entry point. Now this would assume that the node server in server.js has been configured to know about the /var/www working directory, and if this were ASP.NET Core or PHP, it'd be the same story; you'd have to configure where the static files go.
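That single-stage approach might look roughly like this. The exact port, paths, and server.js entry point aren't shown in the course, so treat them as illustrative assumptions:

```dockerfile
# Single-stage approach: serve the pre-built dist folder with Node.
# server.js is assumed to be an Express (or similar) server configured
# to serve static files out of /var/www.
FROM node:latest
LABEL author="Dan Wahlin"
WORKDIR /var/www
COPY ./dist /var/www
COPY package.json server.js /var/www/
RUN npm install
EXPOSE 3000
ENTRYPOINT ["node", "server.js"]
```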
Now, this would work; in fact, there would be nothing wrong with this. This is a totally viable Dockerfile, I mean, there would be a little more to it than is on this one slide here, but there's a better way in my opinion, a more efficient way. I really don't use Node or PHP or ASP.NET Core to serve up my frontend apps, because there are other servers like nginx, HAProxy, and others, that are very fast, that are geared for that, and a lot of the CDNs out there will actually serve their scripts using servers such as these. So what can we do, then, to not only get a workable server, but also build our Angular code in a container? That'd be kind of cool if we could do that. So that gets us to multi-stage Dockerfiles. Now multi-stage Dockerfiles are really just one Dockerfile, but they have a workflow to them. They have multiple stages that Docker will go through. Along the way Docker builds what are called intermediate images, something we used to have to do manually to accomplish the same thing, but now it happens automatically, and then we can take output from one image and copy it over to another so that we get a very small image in the end. So it'd look like this: in Stage 1 we might actually build our code. So we might take our Angular app, build it with a normal ng build process, but we're actually going to build it in the container. Now this is very cool, in my opinion, for multiple reasons. Number 1, you could automate this locally, but number 2, if you're using CI/CD, then you could also automate it on, say, a staging environment, and that's actually pretty compelling as well. Now that's going to generate an intermediate image, but that's not a runtime image; it just has our dist folder in it, but it also has all that Node stuff. It has all the node packages that we needed to do the build, the Angular CLI, Node itself, and you know, we really don't need that for runtime if we want to be as small and fast as possible.
So what we're going to do is add a second stage, and we're going to take the dist output that was generated by stage 1, and we'll copy it into a stage 2 server. Now this would be our nginx server, and then ultimately we'll get a final image that's really, really small. It won't have any of the Node packages, it won't have the Angular CLI; in fact, it won't even have Node.js installed at all. So in stage 1 we're going to use Node, because we need it with the Angular CLI to build our Angular app, but then we'll simply say, alright, from this image let's copy that dist folder into another one that ultimately becomes our staging or production image, and then we can run that container. And it's going to be very fast, and very small, very good for deployment. So the benefits of multi-stage builds include avoiding manually creating these intermediate images. We used to be able to do the same thing, but you had to do it all manually, and it was a little bit painful and much more complex. That's number 2: it reduces complexity. And then it allows us to selectively copy what we want, we'll call it an artifact, from one image into another image, and that way we only grab the stuff we want for, for instance, our production image. And that leads to smaller image sizes, of course, which is what we're after. That's great for deployment between environments, and it makes everything much easier to work with. So now that we've talked through what a multi-stage Dockerfile is, let's go ahead and start to build out pieces of one so you can see how this works.
Creating the Angular Build Stage
The first thing we're going to do in our multi-stage Dockerfile is build the Angular code. We want to generate that dist folder that has our script bundles and our CSS bundles, and everything we need to run the application. So to do that, I'm going to come over to a file I called nginx.prod.dockerfile. Now this has all the code completed for the different stages, but let me go ahead and break it down line by line for you. So one of the first things we need to do is build our Angular code. I'm going to say FROM node, because we need the node image so we can run npm install and get the CLI and things like that. And then you can give it a version here; I'm just going to say latest, which is the default. But what I'm going to do is alias this as node. Now this is going to make it so in the next stage we can refer back to what this stage does. The next thing I'm going to do is add our standard LABEL, and I always give the author. And then we're going to set the working directory. The working directory in this case I'm going to make very simple because this is just a build container; I'm just going to call it app, and then we're going to copy our package.json into the working directory. I'm going to just name it the same. Now, the reason we're going to do that is because the next step is to run npm install. So I'm going to use the RUN command and we're going to run npm install. Now, by putting this on its own layer, remember earlier in the course I talked about how images become layers of a cake, they're instructions, well, we're building up the cake now. And so by putting the package.json on its own layer, if you change it then Docker will automatically know that that layer changed, and if it doesn't change it won't have to do as much work, and so this is a way to be a little more efficient about how it actually does the installs of your package.json.
Now from there we're going to then copy the code in, so I'm going to grab everything in this folder, and we're going to copy it in. And then I'm going to run our build process, so I'm going to run npm run build. Now I'm going to run it in prod mode in this case, so the way we kind of have to escape this is by putting two dashes in front of it, otherwise it will treat this as a different command and it won't work right. There's other ways you could do this as well with strings, but I kind of like to keep it very clean. Okay, so that would be our initial build Dockerfile. Now this is our stage, so let me add a comment up here, and let's just put Stage 1. Now we could do this, but on its own it wouldn't be that useful because we'd run the container, it would build, it would generate a dist folder, and then what do you do? Unless you had a volume set up on your local machine, you really wouldn't have access to anything because it would all be in the container. So what we're going to do in the next section is talk about stage 2, where we're going to reference this alias and use it to access the dist folder that would have been generated from running this command.
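Assembled from the steps just described, the first stage of nginx.prod.dockerfile looks roughly like this:

```dockerfile
# Stage 1: build the Angular app inside a Node container.
FROM node:latest as node
LABEL author="Dan Wahlin"
WORKDIR /app

# Copy package.json on its own layer so the npm install layer is only
# rebuilt when the dependencies actually change.
COPY package.json package.json
RUN npm install

# Copy in the rest of the source and produce the dist folder.
COPY . .
RUN npm run build -- --prod
```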
Creating an nginx/Angular Stage
The next thing we need to do in our multi-stage Dockerfile is copy the dist folder that would be generated from running the build process over to a new image that's going to have nginx, and this'll be the runtime image, the one that we want to deploy between staging and production or other environments. So before we write this stage, recall that earlier we aliased our first stage as node. Now you can name that whatever you want; it's just a variable, an alias that you can use. Well, now what we're going to do is reference that, so that we can get to the dist folder and then copy that code into what's going to be an nginx image. So let me go ahead and put a comment here for Stage 2, and now we're going to say FROM nginx, and I'm going to do nginx:alpine because it's the very small variant I mentioned. Next I'm going to set up a volume here for some of the caching nginx does. This is optional, but it'll allow you to control where the nginx cache files get put, if you want to. It'll default that location otherwise, but I just wanted to put it in there so you know that you can change it. Now the next step is the most important. We're going to say COPY, but I need to get to the dist folder that the first stage would have generated. So the way we do that is we say --from=node. Now, of course, that references the stage up here, and ultimately once that stage runs it'll give us the output from it, and specifically we want to get to /app, because that was our working directory, /dist. Now I'm going to copy this into the folder that nginx looks in for static files by default, and this is the location, usr/share/nginx/html. Now that's the key part of this multi-stage Dockerfile.
Once this runs it'll get the output from that stage, then we're going to copy from that into the nginx folder, and then to wrap it up I also have a configuration file, so we're going to copy that in as well. You saw this earlier; this is going to help with the routes. There's a lot of other things you could do here too: you can put SSL certificates, your caching settings for the browsers, headers, all kinds of fun stuff. Okay, so we'll get that guy in there. And we're done; that is a multi-stage Dockerfile. So, again, this will build our Angular code, we'll take the output from the node stage, and then we'll copy it into this image that will ultimately run as a container at runtime. Now what's nice about this is when we do the build, Docker kind of hides everything from stage 1 from you, and you could have multiple stages. I only need two stages in this case, but we could have many. All we have to do is just a normal build, and then it will take care of everything we need. So let me, I realized I have one little thing to fix right there. Let me double-check this, that looks good, and we should be ready to go. So now we can move on to the next part of this where we actually build the image, but that would be an example of a multi-stage Dockerfile that builds Angular and then generates an nginx image that has the Angular code included.
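The second stage then comes together roughly like this (the nginx.conf path on the left-hand side of the last COPY is an assumption about how the demo project is laid out):

```dockerfile
# Stage 2: serve the built app with nginx.
FROM nginx:alpine

# Optional: control where nginx keeps its cache files.
VOLUME /var/cache/nginx

# Pull the dist output from the stage aliased "node" into nginx's
# default static-file folder.
COPY --from=node /app/dist /usr/share/nginx/html

# Custom config so unknown routes fall back to index.html.
COPY ./config/nginx.conf /etc/nginx/conf.d/default.conf
```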
Building the nginx/Angular Image
The next step is fairly straightforward since we've seen a build process multiple times already in the course. Let's go ahead and build our multi-stage Dockerfile and actually see the type of output that's generated in the log as it does the build. So I've already added a comment that you'll see right here, and I'm going to go ahead and copy that down. Now I want to emphasize, we're going to tag it. I'm not tagging it with a version or a username right now because I'm just going to be using it locally, but oftentimes you'll have a username in front of this, and then you could also have a version, so maybe this was version 1.0. Now I'll leave that up to you to add if you'd like, but that's all part of what we call the tag, and that way you can version the different images out there as you push them up to a repository. Now I also have a -f flag. Normally you'll see Dockerfiles named just Dockerfile, that's it, no extension per se, just the word Dockerfile, and when that's the case you can leave this flag out and put the dot, and it would look for the Dockerfile; but in this case I have multiple dockerfiles, and a lot of times I have even more than that. So what you have to do is tell it which file we're actually trying to build, and then of course the dot means look in this folder for that file, since we're going to run this in the root folder here. So by doing this, we can actually build our multi-stage Dockerfile and get an image called nginx-angular. Now just to show there's kind of nothing up my sleeves here, let's run docker images real quick. Let's go find our nginx-angular; it looks like 7de is the ID right here, and let's go ahead and remove that real quick. So we'll say docker rmi, and I'm just going to give it the first part of the ID. And it says unable, because the image is being used by a running container.
Okay, well, that can happen sometimes, so let's do docker ps -a, and it looks like I still have a container going. First off I have one that's exited, so let me just kill that one by doing docker rm 75a. We'll do docker ps -a again, and then this one you'll notice is still running. It's been up for about an hour, so I'm going to say docker stop be7. Give it a second, alright, and then we can say docker rm be7. Okay, now if we do docker ps -a you'll see we're good to go. Let's go back to docker images and go back to the top. Now you'll notice I have a lot of what they call intermediate images, a lot of left-over images. Let me show you a little trick we can do really quick here. You can do docker system prune. And it'll say this is going to remove all these things: all stopped containers, networks, dangling images. I have these intermediate images, or dangling images, as they call them. So I'm going to say yes, and this will probably free up a ton of space on my hard drive. So we'll let it free these up and then we'll go and do our build here. Alright, so you can see I have apparently been using Docker a lot, and it freed up almost 1.5 GB, which is a pretty good savings for my hard drive. So let's go back to docker images. Now notice we don't have nearly as much junk in there, and there's my nginx-angular 7de, so let's remove that. Alright, so that one is untagged and removed. And if we go back to docker images you'll notice it's all gone; we don't have anything available. Okay, so now we can do the build, and let's see the output that we get. So let me grab this code right here, we'll paste that in, and here we go. It looks like now it's doing the webpack build for our code, so this should be generating the dist folder. So now Stage 1 is done, there's Stage 2, and that was super fast, because all it really had to do was copy that dist folder code into the nginx folder, and then we're ready to go.
Now I'm not going to run it yet; I'm going to wait until the next module because there are a few other things I want to show you, but that would be an example of a multi-stage build. So now if we go back to docker images, we should see our nginx-angular, and we can actually see it was created 20 seconds ago, and it's only about 18.4 MB, pretty small overall. Now you'll notice there's a new dangling image here. That typically happens when you do some of these builds, so that's where I showed the docker system prune. We say y, and this will free up just a small amount of space in this case, well, 331 MB, a little bit. Alright, so there you go. There's an example of how we can not only build our code in a container, but also take the output from Stage 1, copy that into an nginx image in this case, and then use that, which we're going to do a little bit later, to actually run this application.
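The build and cleanup commands from this demo, gathered in one place (the `<username>` in the commented variant is a placeholder, not something shown in the course, and 7de stands in for the image ID on my machine):

```shell
# Build the multi-stage Dockerfile: -t names (tags) the image, and -f
# selects the file, since this folder contains several dockerfiles.
docker build -t nginx-angular -f nginx.prod.dockerfile .

# A registry-ready tag would typically add a username and a version:
# docker build -t <username>/nginx-angular:1.0 -f nginx.prod.dockerfile .

# Inspect images, remove one by ID prefix, and clean up dangling images.
docker images
docker rmi 7de
docker system prune
```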
Using the VS Code Docker Extension
Although you can build your docker images and run your containers using the command line like you've seen up to this point, there's actually a VS Code Docker extension you can use that can really speed things up and do a lot of the work for you, so I wanted to cover that real quick just so you know that this is an option if you're using VS Code. So if we go back into the project, and if I go to my extensions, you'll see that in addition to some Angular snippets I have, and other things, if I scroll on down I have one called Docker. Now you won't have this installed potentially, so if you don't, you can just come on up and type docker. Make sure you find the one from Microsoft here, and then install it. Now that, of course, will make you restart your VS Code editor. Once you restart it, you'd be ready to go. So the next thing we can do is actually access everything that we have on our system, our images, our containers, and anything else, using this Docker extension. So if I come on down to Docker here, I can expand it and get any running containers, I don't have any; any registries, well right now I'm hooked up to Docker Hub; and then all my images, there's our nginx latest. I can right-click on it and I can even inspect the image if I'd like to see that, and it's going to show me some details about that particular image. It does a data dump for you. Now if I wanted to get rid of it, I can even remove the image, I can push it up to Repository, I can even run it if I want, and even retag it. If I hit Tag here and wanted to add, instead of latest, 1.0 or something like that, then I can do that as well. So what you can also do is build. I'm going to come and right-click on my nginx multi-stage dockerfile here, and when I do that, at the very bottom of the menu you're going to see a build image. Now when I do that, this will come up, what do you want to call your image. Now, by default it will call it by the same name as my workspace here. 
Of course, we want to name it something different. In this example we're using nginx-angular. Now what this would do, simply by pressing Enter, and again, I could version it in my tag and all that if I'd like, is actually run back through that same process you saw earlier: build my Angular app, copy the dist folder into the nginx image, and we'd be off and running. And you can do all of that without actually typing a single command. Very, very powerful, very easy to work with, and if you don't want to right-click you can even press Cmd+Shift+P (or Ctrl+Shift+P), and if I just type docker up here, you'll see that the extension provides a lot of different options. So I can view my logs, stop containers and start containers, there's my system prune actually, you can do that as well, and not have to type these commands one by one. Now there are cases where there is some customization you want to do, like a volume, for example, and this may not help you there, and you may still end up typing, but in a lot of cases where you just need to build a file like the one we have here, and don't want to type that all out, it'll do that for you. So I wanted to show that really quickly, because I use it all the time to save myself a little bit of typing when I build, and sometimes when I run, my images and get those containers going.
Summary
We've covered a lot of ground in this module, starting with creating a development type of image where a volume can link back from a container to our running code, and that could be useful if you want to run Angular in an actual real server, but do live development while it's running in that live server in the container. We also took a look at multi-stage Dockerfiles and showed how they provide a very streamlined and much less complex way, especially compared to the old days, to build code, create images, and do other things. And although I showed two stages, you could have many stages in a build process. We used nginx in stage 2; that's a high-performance reverse proxy server often used for static files like HTML, JavaScript, and CSS. We also saw the docker build command, and how we can actually convert our multi-stage Dockerfile into an image. Then we wrapped up with a quick look at the VS Code Docker extension and showed a few ways it could be used to save yourself a little bit of typing, so that you can just right-click on a Dockerfile and have the extension actually build the image for you. So what we're going to look at next is we have an image with Angular in it, what do we do from here? Well, we're going to run it, and we might even want to move it to other environments.
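As a quick recap of the module, a two-stage Dockerfile along the lines of the one we built looks roughly like this. Treat it as a sketch; the base image tags and the dist path are assumptions you'd adjust for your own project:

```dockerfile
# Stage 1: build the Angular app inside a Node image
FROM node:alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: copy only the compiled output into a small nginx image;
# the Node layers from stage 1 don't end up in the final image
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /app/dist /usr/share/nginx/html
```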
Deploying the Image and Running the Container
Deploying the Image and Running the Container
In this module, we're going to take a look at how we can deploy the image that we created earlier, and then run that as a container, not only locally, but also up in the cloud. So we're going to start off by talking about how do we get the Angular container running locally. Now we did something like this earlier to start the course off, so this will be a quick review of what you saw before with docker run. From there we're going to use the VS Code Docker extension, and I'll show you how we can get a container up and running very quickly by just right-clicking on an image. We'll talk about some image registry options, including Docker Hub and some of the different cloud providers. And then from there we're going to take a look at how we can deploy the Angular runtime image up to Docker Hub, and that's going to be our registry for the demos I'm going to show. The next thing we'll do is use the Web App for Containers service that's available on Azure to pull that image down into Azure, start up the container, and then we'll hit our application that's running inside of that container. So let's go ahead and start things off by first getting our image converted into a running container locally on our development machine.
Running the Angular Container Locally
In a previous module, we created our multi-stage Dockerfile, and then we converted that into an image using the docker build command. We showed how to do that through the command line and how to use the VS Code Docker extension. Well, in this section we're going to talk about how we can actually get that multi-stage container going, and remember this is the one where the Angular code is actually built into the image, so once we start up the container we should be off and running. So let's go ahead and take a look at how we can do that. So I'm going to come down to the command that you see here, docker run on port 8080, nginx is going to run on 80 internally within the container, and then we're going to have the name that we built earlier. So I'm going to go ahead and just paste that in. Now if I do this, it's going to lock up the console, and you may like that and you may not. A lot of times I don't like to do that, because now I have to start up yet another console. So let me go ahead and stop this real quick. Let's go to docker ps -a, and let's remove this. Okay, and what we can do is add a -d flag, which will run it in detached mode. So let me go ahead and we're just going to add a -d up here. And what this will do is give you your console back if you haven't seen that before. So notice it gives me the ID of the started container, and then I get my console back, and now of course I can do docker ps -a, and you can see it's been up for approximately 7 seconds. And there's my port forwarding going on. So now what we can do is start up the browser. So I have it open here already, and notice we're going to port 8080, I'm going to a default route, although because we have the nginx configuration file in place I can go to the root or I can go to a client-side route. And recall earlier we talked about how you can do that redirect, and I'm doing it through the nginx config file.
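The client-side route fallback mentioned here is typically done in nginx with the try_files directive. A minimal sketch of that configuration (the root path is an assumption based on the standard nginx image layout) looks like:

```nginx
server {
    listen 80;
    root  /usr/share/nginx/html;
    index index.html;

    location / {
        # Serve the requested file if it exists; otherwise fall back to
        # index.html so the Angular router can resolve the client-side route
        try_files $uri $uri/ /index.html;
    }
}
```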
So everything works as expected, and now we're actually running a container that I can get going anywhere very easily because I have this image now, and that makes it very easy to work with. So that's how easy it would be to get a container actually going. So just as we saw before, we can go ahead now and we can do docker stop 32c, and then we can do docker rm 32c if we want. And now it's as if it was never there. Now of course my image is still going to be up here somewhere towards the top, and there we go. So now that we have that running locally, let's look at one more way we can do this, and then we'll start discussing how we can move this up to cloud repositories and maybe run it in a different location.
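To recap, the whole local lifecycle from this clip comes down to a handful of commands (the container ID prefix 32c is from the demo; yours will differ):

```shell
docker run -d -p 8080:80 nginx-angular   # detached; host port 8080 -> container port 80
docker ps -a                             # list containers and grab the ID, e.g. 32c...
docker stop 32c                          # stop the container by ID prefix
docker rm 32c                            # remove the stopped container
docker images                            # the image itself is still there
```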
Running the Angular Container Using the VS Code Docker Extension
Earlier in the course, we saw how we could use the VS Code Docker extension to take a multi-stage Dockerfile and build the image simply by right-clicking. Now what we're going to take a look at is how we can get that image up and running as a container by also right-clicking so we don't actually have to type anything on the command line. So coming back into VS Code, as a reminder, I have the Docker extension installed. If you don't, you can click on your extension icon and then search for Docker. Make sure it's this one from Microsoft, because that's the one we'll be using. Now once that's installed, you're going to see a little area again down at the bottom for the Docker containers, and images, and registries. And right now I don't have any containers. We can refresh that, and notice everything is empty. But I do have several images. Here is the one we've been working with. This is our runtime image that has Angular inside of the image. As a reminder, we built this by right-clicking on the Dockerfile, and at the bottom we selected Build Image, and then I renamed it to nginx-angular, and we can do a very similar thing now to get this running. So I'm going to right-click on it, and I can either choose Run, which will give me my command line back, this is detached mode, or Run Interactive, which is not detached mode. So I'm going to go ahead and do Run, and now you'll notice we get detached mode, and then it defaulted the port to 80, and then internal 80. It gave us back the ID of the container. If we do docker ps -a, we can see it there. And then if we come back over to here, we can see it right there, and I can even right-click and remove, restart, show the logs if I'd like, not a whole lot going on there right now, but do all kinds of fun stuff that way, and it makes it really, really easy to work with this. 
So to kind of show that this container is indeed running, we can come back to the browser and, instead of 8080, which no longer works now that the earlier container isn't running, go to port 80, and there we go, we now have that running container. Now as of today, the extension doesn't allow you to change the port, there's no prompt when we right-click on the image. Maybe you want to enter a custom external port, for example. Well, as of today it doesn't do that, so now you're kind of back to just typing in the command line, which isn't that hard anyway, but it is a very nice and quick way to just try out a container when you're not so worried about that and you want to default it, in this case, to 80 as an example, on the external port. Now when I'm done with it I can right-click and we can go ahead and say stop. Now this will actually do two things: it'll stop it and it'll remove it. You'll notice if I refresh here it doesn't show anything, and then if we go back to docker ps -a it's gone. So that's a nice kind of quick and easy way that you can actually get an image up and running as a container. It's not going to work in every scenario, because sometimes when you start up an image you might have a volume, a different port or other settings that you want to apply, but for simple test cases this provides a great way to just kind of right-click and try out your container.
Image Registry Options
Once you have your image created, and your container is running as you would like it to be, we'll want to move our image up to some type of repository. Now a repository is basically a place to store the image so that other servers could then pull that and then get that container up and running on a staging, or production, or some other type of server, or even just pull it down to a VM within your company potentially. So we know that we can create a Docker image using the docker build command. Now what we're going to talk about is how can we push that up to some type of a container registry that's actually going to store that image for us, and it's going to track that layered cake I talked about. So remember that every statement you have inside of your Dockerfile becomes a layer. Each layer gets an ID, and the registry will actually track that for each layer of your image. That way when you update an image later, if you only changed one part of the actual Dockerfile, for instance a new file was added, it won't have to re-upload everything, it'll only re-upload or update the changed layer. And that's actually really efficient, so that even though an image might be, let's say, 100 MB as an example, normally when you go to update that later, you're not actually going to upload all 100 MB, you might only upload a megabyte or less potentially. Of course, it all depends on what your build process does. So the way registries work is you can think of them like this. You're going to create images, as you see kind of in the middle here, and a registry is kind of like a shelf system that can track that. The registry that most people start out with at least is called Docker Hub, and we're going to integrate with that in just a little bit, and talk about it a little more.
Once I put something on the shelf of the registry, now a server that has access to that, especially if it's a public image, can pull that down, and then get that container up and running, and it should start up and run exactly as it did when you tried it out on your system or maybe another server for testing within your company. That's the beautiful thing we talked about earlier in the course with images and containers, you get that consistency between the environments. Now, earlier I talked about CDNs, content delivery networks for frontend code. And the question often comes up, why use a container when I could just use a CDN, and I mentioned that if you have a public app that everybody can get to, and when I say app I mean the Angular portion of the app, or any frontend for that matter, then using a CDN might be a viable solution for you. They work very well, and they have the regional distributed aspects, and the scaling and caching and all of that. But what if you want to host frontend code in your own CDN within your company? Well this is where you might have a private registry within the company, so a shelving system, if you will, for your images, and then you could use nginx or one of these other servers in a container to act as your private CDN. And that might even have a volume that just points to a folder where your static JavaScript, HTML, CSS code sits. So while a container could be on its own, either up in the cloud or maybe locally, privately within your company, you could also use it as a CDN type scenario, and the CDN would run in the container. That would make it much easier to update whatever server you're using down the road. So when it comes to container registries, I mentioned that Docker Hub is by far the most popular, but as you start digging more into images and publishing these, you now have to make a choice. Are we going to go up and make the image public? Well, probably not if it's a company type of image.
You're probably going to want to do a private image, and you can do that on Docker Hub, in Azure, AWS, Google Cloud, and others. Or, a lot of companies, what they'll do is set up a private registry, a private shelving system if you will, within the company, and that way your images never leave the company they just stay within it. It really depends on how cloud-oriented your company is. If you use the cloud a lot already, then you'll probably use a registry with your cloud provider. If you don't, you'll probably want to go with a private registry, at least initially, that's within your company. There's a lot of different options for creating private registries. There's some third-party paid solutions, and then Docker Hub even provides a free one. So I put a link here to the Docker Hub registry if you want to get a private hub going within your company, and they have a walkthrough on how to get that up and running. It's very simple, actually, to do, and within literally a matter of minutes you could have a private registry ready to go, and then you could push to that registry. Now you'll have to give that registry some additional info about it, what's the IP address or domain name, things like that, but it's very easy and they walk you through all of that in this document if you're interested. Now what we're going to do moving forward is just use Docker Hub, mainly because it's free, it's easy to get started with, but I do want to re-emphasize that there are many registries out there for Azure, Google Cloud, AWS, and others if you'd like. So now that we've taken a look at the registry scenario and what that's all about, let's see how we could actually deploy our image up to a registry.
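If you do want to try a self-hosted registry, Docker publishes the open-source registry as an image itself, so a minimal local experiment (the port and names here are just illustrative defaults) can look like:

```shell
# Run the open-source Docker registry locally on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Retag an existing image against that registry, then push to it
docker tag nginx-angular localhost:5000/nginx-angular:1.0
docker push localhost:5000/nginx-angular:1.0
```

A production setup would add TLS and authentication on top of this, which the registry documentation walks through.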
Deploying the Angular Runtime Image to a Registry
In this section, we're going to see how we can deploy our runtime image up to a registry and put it on the shelf so that we can then pull it down anywhere else that we'd like. So if I go back to docker images, we know that we have our nginx-angular image already created, and of course we can just come over to our Images here and see it as well. Now, when you're about to deploy to a registry, you want to associate your registry ID with this image, so you want to retag it, and you might also want to add a specific version to it. What I'm going to do is right-click on my image here, and I'm going to say Tag Image. Now we could do this through the command line as well, but it'll do the command for us. So let me clear the command line here, and now I'm going to go ahead and say Tag Image, and what I'm going to do is put my user account for Docker Hub, which is just danwahlin, then a slash and the image name, followed by a colon and the version. So let's kind of pretend that this was version 1.0, and what this will do is now re-tag the image. So we'll hit Enter there, and now notice that we have our nginx-angular, but then I also have this danwahlin/nginx-angular:1.0. Now I can right-click and we can run that if we'd like and try it out. So let's go ahead and do that, and just make sure it's working the same. Okay, so let's go back to our browser, we'll refresh, and everything is looking good. Now, you'll notice that when it did the run here it actually put the user ID followed by the name of the image, and then the version for that. Now you don't have to version it, but in a real-life scenario you'll definitely want to do that, because that way you can move forward or backward if you need to with your images and your containers.
And I normally take the approach that we never change a container, we make a new image, add that new version to it, and then replace the existing container with a new container, and that way you can always go forward and backwards, whatever you'd like to do there. Okay, so that's working great, so let's go ahead and we'll stop the 228 here. So now what we're going to do is push this up. Now, I have this name that's tagged, and I could do a docker push and then type all that in, the name, the danwahlin/nginx-angular:1.0, but I can very easily right-click over here and say Push. So I'm going to go ahead and do that. Now you'll notice it wrote out the command for us, just saved me a little bit of typing, and now this is going to push up, these are the layers, up to Docker Hub, and it just did that. Now what's really nice about this is I can then come down to my Registries, and because I've already registered with Docker Hub here, I can expand my user account, and let me refresh, and there we go, there's my nginx-angular, and there's my 1.0 under it, and so now I can actually browse this on Docker Hub. And here we go, there's my tag and some basic information. Now it doesn't have a lot more here because we didn't do much, but now this is available to actually pull from Docker Hub if we wanted. In fact, if we go back to the Repo Info here it'll even have the command we could put to get this going. So let's say we came back, let's go to our Images. I'm actually going to come into this particular image here and we're going to remove it. Alright, and there we go. And now I can come into here, and I'm just going to paste in that command docker pull, and we can even pull the specific version. And notice that almost all of the layers already existed. This is what I was talking about earlier, that based on these IDs each layer gets, if it doesn't detect a change then it just kind of leaves it alone.
It only pulls the differences, if you will, between what you have and what's up in the registry. And then we already know we can right-click on this and run it, because we just did that. So that's how we can actually push up to a repository. Now, if you had your repository up in Azure, Google Cloud, AWS, you would have to actually put a little bit more. There's a command line syntax you can do to specify the registry, and you can even do that through the Docker extension as well if you've already hooked up and logged in to your Azure registry. So that's how we can get our image up into a repository, and then pull that back down and run it. Now that we've done that, we can easily pull that image onto any server we'd like, whether it's staging, production or somewhere else, and we can even easily move between a local system and a cloud system, because all we have to do is pull that image down and we'd have that Angular app up and running.
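Put together, the tag/push/pull round trip from this section looks like this on the command line (substitute your own Docker Hub account for danwahlin):

```shell
docker tag nginx-angular danwahlin/nginx-angular:1.0   # associate account, name, and version
docker push danwahlin/nginx-angular:1.0                # upload; only changed layers transfer
docker rmi danwahlin/nginx-angular:1.0                 # remove the local copy
docker pull danwahlin/nginx-angular:1.0                # pull it back; unchanged layers are reused
```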
Running the Angular Container in Azure
Now that we have our image up in a registry, we can pull it down to any server we'd like, and in this section I'm going to show you how we can do that with Azure and use the Web App for Containers service to get this container up and running very quickly and easily. So I've logged into my Azure portal, and I'm going to click on Create a resource. Now from here if you just type containers you'll see Web App for Containers comes up, so I'm going to click on that. And then you can read about the details here if you'd like, but I'm going to hit Create. Now what this'll do is I can give it a name, so I'm just going to say test-nginx-angular, go ahead and select my subscription, and I have a resource group, kind of a group where you put things, called Sandbox that I like to use for these types of things. And then you can pick your location. I'm going to go ahead and leave the default. Now the last piece here is configure a container, so I'm going to click on that, and now I can choose if I go to the Azure Container Registry, some private registry I have, or you can see it defaulted to Docker Hub. Now from here I can say, alright, is this a public or a private image that's on Docker Hub? Well we know it was public, so I'm going to go ahead and put danwahlin/nginx-angular:1.0 in this case. And this will be the name of the image that's up in my registry that I want to go ahead and grab and pull down to this Web App for Containers service. So now I can hit OK, and we're ready to go. So everything is configured. I have a name, I have my account, I have my resource group, the location where this is going to run, and then the container image that we're actually going to use. So now it's as simple as hit Create, and then this will take a moment the first time it starts to kind of fire it up. It's going to download the image, and then once it's done it'll let us know, and through the magic of video I'll jump ahead to when it's ready.
Alright, so my deployment has succeeded now, and I can get an overview of this. So here's my resource, I'm going to go ahead and click on that. Now I can come down to my Apps, and here's my test-nginx-angular. We'll go ahead and drill into that, and here we go. So now I can test it out and actually hit this container by going to the URL that you see here. So I'm going to click on that. That's now going to fire up the container, and once this is all ready to go we should see our Angular site come up. And there we have it. So now I'm running that exact setup. The image was pushed up to the registry, and instead of pulling it down to my local machine, in this case we pulled it over to Azure, but again, it could be any cloud service, it could even be any machine out there that supports Docker, and now we're able to run the app exactly as we would if we ran it on our machine. And the beauty of this is now we can start and stop, we can swap them out as the image changes, and do all kinds of fun stuff there, and as you can see it's very easy to work with. Now there's a lot more to the story here. Right now this is a self-contained Angular app, the data is hard-coded, and in a real-life app you would, of course, have RESTful services or some other endpoint that you're going to hit to actually get the data for the Angular app. So what we'll do in the next module is we'll start to talk about what do you do when you have multiple containers, you have your Angular container and then maybe one or more RESTful services or some other type of container that provides data.
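The portal steps shown here can also be scripted. A rough Azure CLI sketch of the same deployment follows; the resource names are placeholders, and flag names can shift between CLI versions, so check az webapp create --help before relying on it:

```shell
az group create --name Sandbox --location westus
az appservice plan create --name sandbox-plan --resource-group Sandbox --is-linux --sku B1
az webapp create --name test-nginx-angular --resource-group Sandbox \
  --plan sandbox-plan --deployment-container-image-name danwahlin/nginx-angular:1.0
```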
Summary
In this module, we've seen different ways to take an image and get a running container on whatever system we'd like to get that up and running on. So we learned about docker run and also the VS Code Docker extension and how they can be used to start and stop a container. We talked about pushing an image up to a registry, and also talked about how if you're going with a cloud provider they have their own image registries you can use if you don't want to use Docker Hub. The push command can be used directly through the command line, or as you saw, we can do it right through the Docker extension in VS Code. And then we saw how Azure provides a way to get a container up and running with the Web App for Containers service. There's a lot of other options out there as well if you're using a different cloud provider. So that's a walkthrough of getting our Angular image converted to a running container, and running that locally, as well as in the cloud. Now we're going to talk about how do we work with multiple containers.
Running Multiple Containers
Running Multiple Containers
So up to this point in the course, you've seen how we can put an Angular app inside of a running Docker container, and that works really, really well, it's very easy to deploy that way, but what happens when you have multiple containers? How would I get those up and running? For example, I might have an Angular application that calls out to one or more RESTful services, and maybe each of those services is in its own container. Well that's the type of thing we're going to talk about in this module. So we're going to start out by talking about running an application with multiple containers using a tool called Docker Compose. And what I'll have for you in the project is the same Angular app we've been working with, which up to this point had the data kind of hard-coded into the app, but now we're going to change it so that we can call a separate RESTful service, and we're going to bring that up in another container. Once I show you how to get multiple containers up and running, we're going to talk about the Docker Compose yml files, and the services they define that will allow you to very easily build multiple images and run multiple containers. And then we're going to wrap up with some different options you can consider for deploying images and containers out to servers or out to different cloud providers. Let's go ahead and jump right in to getting the app up and running with multiple containers.
Running the Application with Docker Compose
Up to this point, we've run the Angular application in a container, but it's been in isolation. All the data was in that container, and so we didn't have to call out to any other services. If we brought up the container, and Angular called into services that were already up and running, either in dev, or staging, or production, then that's really all you'd have to do. Maybe somebody else on your team or another team actually manages the services. But what do you do in cases where your team is not only writing the frontend app, the Angular app, but also writing some of the services that the Angular app is calling? Now we might want to bring up multiple containers. So in this section we're going to take a look at a tool called Docker Compose that's part of Docker that allows us to orchestrate the process of bringing up multiple containers. It makes it very easy to build multiple images and then get those images converted to running containers very quickly and easily. So the first thing I'm going to do is show you just how to get the app up and running with multiple containers, and then a little later in this module we'll talk about some of the Docker Compose orchestration files, these are YAML files. So let's jump on into the project here. So the first thing I've done is I've gone into the src, app, core folder, and in the data.service file I've taken out the hard-coded URL. So before when we built it was just copying some JSON files into our assets here, and that's why the single-container solution was able to run okay. But now I would like to change that to actually call out to an endpoint, to a RESTful service in this case. Now this is a very simple Node.js service that's included in the project, you'll see it here in the server folder. So I'm not going to go through that, because it could be anything.
It could be Java, ASP.NET Core, Node.js, PHP, Python, whatever it is you use, but the point is I now have two containers, the Angular container, and then I have this Node.js container that I want to bring up. Now I want to bring those up in a way that they can talk to each other not only in development, but possibly in staging and other environments as well. So that's the first thing I've done. Now, of course, we'd have to rebuild the project through an ng build, I've already done that, and we're kind of ready to go. Now what I'm going to do is get both of these containers up and running with just a single command, using docker-compose, if you haven't seen it before. Now if you've watched my Docker for Web Developers course, or some of the other Docker courses on Pluralsight, there's a lot of info covered about Docker Compose. I cover it quite a bit in Docker for Web Developers, so if you do want more details on it feel free to check that course out. I'm just going to focus at this point on how do we use it to build our images and then get our containers up and running. So I'm going to go ahead and say docker-compose build, and this is going to go in and actually build our images for the server, and for our Angular container that we've been working with throughout the course. So let's go ahead and do that. Now this will be very fast because I've already done it earlier, and so it's just reusing the layers I had, but you'll notice that I have a Node image, and that's going to be for the Node.js RESTful service. And here's the nginx image that we've been working with up to this point in the course. Okay, so now our images are ready to go. Now how do I make it so I can get both of these containers up and running? Well, now we can say docker-compose up, and that's going to start up the containers based on some information that's in some docker-compose files, and I'll talk about those a little bit later in this module.
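As an aside, the data.service change described at the start of this section essentially swaps a bundled-assets URL for the Node endpoint. A hypothetical sketch of what that looks like with Angular's HttpClient (the URL, class, and method names here are assumptions, not the course's exact code):

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class DataService {
  // Was something like 'assets/customers.json' (hard-coded JSON in the image)
  private baseUrl = 'http://localhost:3000/api';

  constructor(private http: HttpClient) {}

  getCustomers(): Observable<any[]> {
    return this.http.get<any[]>(`${this.baseUrl}/customers`);
  }

  getOrders(): Observable<any[]> {
    return this.http.get<any[]>(`${this.baseUrl}/orders`);
  }
}
```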
But now my containers are up and running, and I actually have three containers here. So I have the Angular container. It's going to call out to the Node.js RESTful service container. Then I added a third called cadvisor, and cadvisor is a container that provides a little web app to monitor other containers. So let's see if everything is running properly in the browser now. So I'm going to come into localhost, alright, and everything seems to load, and we can get the customers and the orders, but how do we know it's maybe not the hard-coded data being cached? Well, if I right-click, Inspect, we'll go to Network, XHR. Let's Refresh. Alright, so here's our customers. Notice it did call localhost 3000, so let's try that out. This is the other container. And it called into api/customers, and there we go. And then if I go to the orders it's calling into orders, and there's the order JSON data, so it is working properly. We have multiple containers up and running, and now these containers are actually communicating with each other, which is super nice and pretty easy to get going you can see. Now I mentioned I threw in a third one, mainly just so I could show you that we can bring up as many containers as we'd like. This one is called cadvisor, and it's on port 8080 by default, and what it allows us to do is monitor our Docker containers. So some of these are for something called Kubernetes, but here's my angular-node-service, and this is the one for the RESTful service. I can get to the CPU, and the memory, file system, all that fun stuff. And then if we go back I can also get some information about my nginx Angular container, and we can see that type of info here. Obviously it's not doing too much right now because I'm not hitting it on the website. But that's called cadvisor, and that's just a free image and container you can use when you'd like to monitor containers that you actually have.
So that's an example of Docker Compose, and what we're going to do moving forward is take a closer look at the Docker Compose yml files. So to stop this I'm going to say Ctrl+C and now I'm going to say docker-compose down, and this is going to take down my containers and now I don't have to manually say docker stop, and docker remove, and all that fun stuff, it'll do it for me. Alright, so those containers are down, we can say docker ps -a, everything is gone, and just as a kind of side note, I locked up the console, so I ran in interactive mode, but I could have done the -d as well. I could have done docker-compose up -d, and what that'll do is bring those three containers up, but then I get my console back, and then we'll go ahead and take those down as well. And that's how easy it is to orchestrate containers on your development machine using Docker Compose. So now, as mentioned, let's take a closer look at these yml files.
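So the end-to-end Docker Compose workflow from this section comes down to just a few commands:

```shell
docker-compose build      # build an image for every service that has a build context
docker-compose up -d      # start all the containers, detached
docker-compose ps         # see what's running
docker-compose down       # stop and remove the containers (and the network)
```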
Exploring the Docker Compose File
The magic behind the docker-compose build and docker-compose up commands is a file called docker-compose.yml. So what we're going to do in this section is take a closer look at what a yml file is, and explain how we can create services which equate to the images that we want to build, and then how we can bring those services up as containers. So going back into the project, we've already talked about the Dockerfiles for our nginx-angular. In the server folder for our RESTful service, there's also a Dockerfile there for that particular item that's being built, a very simple one actually that just copies the source code and then runs npm install. And then aside from that we have cadvisor, but that's not an image that we're creating here. That's provided for us up on Docker Hub, so I'm just pulling that down and using it. Now when we went to the console, though, and ran docker-compose build, well there obviously had to be something there for it to build, and as I mentioned there's a docker-compose.yml file here. Now this file defines three services, as I mentioned, and the first is an nginx-angular, so that's the one we've been working with throughout the course. Then it has a node, and then here's cadvisor. Now for the cadvisor configuration settings you see here, this isn't something I memorized, I went and looked it up and they provide some details on how to use it, but if we go back to the nginx and the node, these are services I had to define, so let's walk through these real quick here. So the first thing I did is said what is the container name going to be once this nginx service is actually brought up and used. I also said what the image name is going to be when we do a docker-compose build. Now to do the build, we have to tell it where to find the Dockerfile.
So I said that the context is the current folder where the docker-compose build command would be run, I'm running that at the root, and then the file that actually accomplishes the image build is our nginx Dockerfile, the very basic one for development purposes that we talked about a little bit earlier. Now in addition to that, I'm setting up the volume through docker-compose. If you recall, we did this through docker run with the -v command-line switch, but with docker-compose you can define it in the yml file, so that when I do docker-compose up to get the container up and running, it'll automatically create this volume from the container folder back to our dist folder. This provides a really convenient way to do that type of thing. I'm also defining external and internal ports. I even added 443, although we don't have an SSL certificate for this one. This container is going to depend on one called node, another service we'll get to in a moment, and it's in a network called app-network. Now what you'll typically do when you use docker-compose is set up some type of a network. If I go to the very bottom, I set up what's called a bridge network, give it a name of app-network, and you'll notice that I have networks: app-network for cadvisor, for node, and for nginx, so basically all three of these containers are in the same group, if you will, and they can all talk to each other. Now if we move on to the node service, we again have the name of the container once it gets running and the name of the image once it builds. The build context is a little different: from where I'm currently at, go to the server folder, and then get this node.dockerfile, and so this gives it the context of where this file is.
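To make this concrete, here's a sketch of what the nginx service and the bridge network described above might look like. The service, image, Dockerfile, and folder names are assumptions based on the narration, not the course's exact file:

```yaml
# docker-compose.yml (illustrative sketch; names assumed, not the course's exact file)
version: "3"

services:
  nginx:
    container_name: nginx-angular       # name the running container gets
    image: nginx-angular                # image name produced by docker-compose build
    build:
      context: .                        # build from the root folder where docker-compose runs
      dockerfile: nginx.dockerfile      # the basic development Dockerfile
    volumes:
      - ./dist:/usr/share/nginx/html    # volume linking the container's web root to the local dist folder
    ports:
      - "80:80"                         # external:internal
      - "443:443"                       # defined even though no SSL certificate is set up here
    depends_on:
      - node                            # bring up the node service first
    networks:
      - app-network

networks:
  app-network:
    driver: bridge                      # services on this network can all talk to each other
```

The volume entry replaces the -v switch we used earlier with docker run, so docker-compose up wires it automatically.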
I'm setting an environment variable in this case, NODE_ENV, to development, setting my external/internal port, and then we've already talked about the network. So I do want to re-emphasize that I'm giving you a quick overview of this, so if you are new to Docker and Docker Compose, definitely go back and check out the Docker for Web Developers course, because I break this down in much more detail. I hope that even if you're new to this it helps give you an idea about what's going on with the build process, and then as the container is brought up. Now for cadvisor, as mentioned, I'm giving the container name, and the image is one that's already up on Docker Hub called google/cadvisor. So you'll notice there is no build context like we had up above, because I'm not actually building this image, I'm just using it, and then it has some volumes it requires you to define, which you can get from the cadvisor documentation. So in the end, when we do a docker-compose build, it's building the nginx image and the node image that I have here; those are the two services that it has a build context for. It's not building cadvisor because, as I mentioned, that's just pulled from Docker Hub. When we do a docker-compose up, it looks at all the services, finds each image, starts up the container, and then applies any extra settings like volumes, ports, and even environment variables if you'd like to pass those into the container. Now if we open up the docker-compose prod file, this is the one we would use for testing, QA, things like that, and maybe even production, to build our images. It's very, very similar. The main difference is that this one uses the prod version of the Dockerfile, which we covered earlier in the course, that actually copies the dist folder into the image. So I don't have a volume in this case, because the code is in the actual running container once we get this up and running.
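Following the same pattern, the node and cadvisor services just described might be sketched like this. The ports and paths are assumptions, and the cadvisor volume mounts follow its public documentation rather than anything shown in the course:

```yaml
# Sketch of the remaining two services (illustrative; ports and paths assumed)
services:
  node:
    container_name: node
    image: node-service
    build:
      context: ./server                 # build context is the server folder
      dockerfile: node.dockerfile       # simple Dockerfile: copy source, npm install
    environment:
      - NODE_ENV=development            # environment variable passed into the container
    ports:
      - "3000:3000"                     # external:internal
    networks:
      - app-network

  cadvisor:
    container_name: cadvisor
    image: google/cadvisor              # pulled from Docker Hub, not built locally (no build: section)
    volumes:                            # host mounts cadvisor requires, per its documentation
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
    networks:
      - app-network
```

Because only nginx and node have a build context, docker-compose build produces those two images and leaves cadvisor to be pulled at docker-compose up time.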
Now aside from that, everything is pretty much the same except that I've changed the NODE_ENV environment variable to production, and cadvisor is identical. So if we wanted to build the images for this file, we could come on in and, instead of saying docker-compose build, which would build from the default docker-compose.yml file, we would say -f, give it the prod file, and then say build. This will go and build all three of the images, but these would be more of what I'd call the runtime-type images, the ones that might run in production or staging environments. So now that it's done building, we can do the same type of thing: we say -f, give it the prod file, and now we can say up, and this'll bring up all three as you saw before, but now these will be the more official running containers. So if we jump on back and refresh, we'll see the same exact thing, but again these will be more production-type containers. So that's an example of how we can work with Docker Compose, and specifically the yml files, and a quick run-through of how we can get multiple containers up and running with yml files and the docker-compose commands.
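Since the production compose file is nearly identical to the development one, here's a sketch of just the parts that change. The file and Dockerfile names are assumptions; a file like this would be used with `docker-compose -f docker-compose.prod.yml build` and `docker-compose -f docker-compose.prod.yml up`:

```yaml
# docker-compose.prod.yml (illustrative sketch; only the differences from the dev file)
services:
  nginx:
    build:
      context: .
      dockerfile: nginx.prod.dockerfile # multi-stage prod Dockerfile that copies dist into the image
    # no volumes: section here, because the built code lives inside the image itself
  node:
    environment:
      - NODE_ENV=production             # switched from development
```

Keeping separate dev and prod yml files and selecting one with -f is a common convention; the dev file mounts a volume for live editing, while the prod file bakes the code into the image.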
Options for Deploying Multiple Images/Containers
The question that usually comes up at this point is, alright, Docker Compose is great because I can get the images built and run multiple containers on my system, but what do I do when I want to do that on my server, up in the cloud, or somewhere else? What are the different options for deploying multiple images and getting those containers up and running? Now this is a big topic that's way outside the scope of this class, so what I'm going to do here is talk about some of the different options, so at least you know about them, and then you can go research those either on Pluralsight.com or other avenues out there. So when it comes to deploying multiple images and containers, there are several options you can choose from. One of the easiest is to create a virtual machine, either on your server or up in the cloud, copy over your Docker Compose file for production, staging, QA, whatever it is, and run docker-compose up, assuming that you already have the images in a registry somewhere. That would be by far the easiest way, and there are many people that actually do that. The challenge is that you're not going to be able to scale out easily; if you want to run those containers across multiple VMs, that would be very challenging, because Docker Compose isn't designed for that. So if you just need to get the containers up and running on some server, then Docker Compose is definitely the easiest option. Now the second option is to use one of the cloud providers' container management services. I showed one of those earlier on Azure, and that was Web Apps for Containers. Now the challenge there is there's no orchestration mechanism built in; it's kind of a one-by-one process of adding the containers. You can scale, so there is that benefit, but there's no orchestration mechanism, no orchestration files, things like that.
So it does provide a way that you could get multiple containers up and running, and in the one I showed on Azure you could just use HTTP to communicate between those different containers, which in the Angular world would work perfectly. But it really depends on how many containers you have and what you're trying to do. Now the most robust solution, one that's extremely popular these days, is to use Kubernetes. I'll talk a little bit more about Kubernetes to wrap this section up, because it is very popular, but it's also a very big topic and there are entire courses on Pluralsight just on Kubernetes, so let's talk about it just a little bit here. Here's the official definition from kubernetes.io: Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Now let me give you my summary: if you'd like to manage and orchestrate many containers, scale them out, replace containers as updates come out, and get self-healing and much, much more, then Kubernetes is the go-to mechanism as of today for managing containers. It's very popular across all the main cloud providers, and almost all of them provide some type of managed Kubernetes service that you can subscribe to. Azure has Azure Kubernetes Service, or AKS; AWS has Elastic Container Service for Kubernetes, or EKS; and Google Cloud has Google Kubernetes Engine, or GKE. These provide a way to get your containers up and running on one node, a node being like a VM, or to scale out to many nodes. They allow you to dynamically scale, and as I mentioned, Kubernetes has self-healing capabilities and much, much more when it comes to fixing issues that come up.
So while this is a very big topic and outside the scope of this course, as mentioned, I did want to make you aware that Kubernetes would be a good option to look into if you want a robust way to manage multiple containers in an environment. Kubernetes can also run in your local environment or on your own servers; you don't have to use one of the cloud providers. So if you'd like more information, you can go to the kubernetes.io link that I showed earlier, and they provide additional information, help documentation, commands you can run, and much more. Hopefully that gives you a nice starting point for how you can deploy multiple containers to a server, either locally on-premises or up in the cloud.
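To give a flavor of what Kubernetes orchestration looks like compared to a compose file, here's a minimal, hypothetical Deployment manifest for an nginx/Angular image. The names, registry, image tag, and replica count are all assumptions for illustration, not something from the course:

```yaml
# deployment.yaml (hypothetical sketch, not from the course)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-angular
spec:
  replicas: 3                               # Kubernetes keeps three copies running across nodes
  selector:
    matchLabels:
      app: nginx-angular
  template:
    metadata:
      labels:
        app: nginx-angular
    spec:
      containers:
        - name: nginx-angular
          image: myregistry/nginx-angular:1.0   # image pulled from a registry (name assumed)
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, Kubernetes would schedule the replicas and restart any that fail, which is the self-healing behavior mentioned above.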
Summary
So in this module, we've talked about how Docker Compose provides excellent orchestration capabilities to get one or more containers up and running on your system quickly and easily. It also supports building your images, and in the end it just saves a ton of time. So while we can use docker build to manually build our images, and even create an automated process if we have multiple images to build, with Docker Compose we can build multiple images very quickly, and then run multiple containers very quickly, using the docker-compose build and docker-compose up commands we talked about. Finally, we talked about how cloud providers offer several different solutions for managing multiple containers; Kubernetes is definitely one of the most popular as of today, and you'll find a lot of great courses on Pluralsight, as well as documentation in other areas. So I hope you have a better feel for how we can work with and deploy multiple containers. Docker Compose definitely provides the easiest way, but when you want a more robust solution you can look to Kubernetes and other options that are out there.
Summary
Course Summary
Thank you for taking your valuable time to watch the Containerizing Angular Applications with Docker course. Let's review some of the key topics covered throughout the course to wrap things up. We started off by talking about the benefits that containers offer, and compared them to other frontend deployment options such as directly deploying code to a server or using content delivery networks, or CDNs. You saw that containers bring several benefits to the table, including enabling team members to be productive quickly when first getting started with a project. You can also isolate application versions, and even allow multiple versions of a server or framework to run side by side. We also looked at consistent deployments between multiple environments, and how containers can generally simplify the deployment process. From there we learned how to get an existing nginx Angular image running as a container using the docker run command. This included discussing the role of volumes and how they can be used to link a container to a local folder for development purposes. As a side note, volumes are also very common in production scenarios where artifacts in a container need to be stored outside of the container. We also learned how to write custom Dockerfiles, and how a multi-stage Dockerfile can be used to build an Angular application and copy the final code into an image. This can result in a smaller image size, which is great for deployment, and multi-stage Dockerfiles can provide significant benefits in the deployment setups used in organizations. Once an image is created, we learned how to run the container using the docker run command, and by using the VS Code Docker extension. The extension provides a quick and easy way to work with images and containers without having to type any commands in many cases.
Finally, we discussed how to orchestrate the process of building multiple images and running multiple containers by using docker-compose.yml files and the docker-compose command. Many applications require multiple containers, so by using docker-compose you can greatly simplify the process of building and running containers. Now that you've watched the complete course, I hope that you have a solid understanding of the role that containers can play with Angular and many other types of applications. I've sincerely enjoyed creating the course, and hope that you'll be able to apply the knowledge that you've gained to your development projects back at work. So thanks again for watching the course, and I hope you'll check out my other courses on Pluralsight as well.
Course author
Dan Wahlin
Dan Wahlin founded Wahlin Consulting, which provides consulting and training services on JavaScript, Angular, Node.js, C#, ASP.NET MVC, Web API, and Docker. He is a Google GDE, Microsoft MVP and...
Course info
LevelIntermediate
Duration1h 55m
Released26 Jul 2018