by Cory House
Editors and Configuration
Another common war on any dev team is editor configuration and stylistic choices like tabs versus spaces. I have great news: you don't need to fight this war any longer. There's now a handy way to maintain consistency. If you create a file called ".editorconfig" and place it in the root of your project, you can specify how your editor should handle common settings like tabs versus spaces, indent size, line feeds, character sets, and trailing whitespace. And your team doesn't even have to use the same editors to enjoy all of this consistency. The editors at the top have support built in, and those listed here at the bottom require a plug-in. So here's how this works: you create the ".editorconfig" file in your project root, and your editor picks up its settings. Here's the editorconfig that I typically use. Yes, I'm choosing spaces, but hey, if you prefer tabs, we can still be friends, I guess. Unless you start preaching to me. Anyway, that's the beauty of this file: although I standardize my code to use spaces via editorconfig, I honestly always hit Tab to indent my code, but with editorconfig in place, any tab that I hit is automatically converted into two spaces based on the indent style and indent size settings that you see here. How cool is that? This way, those who prefer hitting Tab can continue to do so, and the editor will quietly do the right thing. So handy.
Alright, enough talk, let's now set up an editorconfig for our project. Here I am in VS Code, the editor that I'll be using throughout the course. To assure that everyone uses the same basic editor settings, let's add an editorconfig file to the root of the project. Make sure that you call it ".editorconfig"; you need that leading dot, it is significant. And I'll just paste in the contents here. As you can see, I'm enforcing a two-space indentation style, line feeds for the end of line, and we'll be trimming trailing whitespace and inserting a final newline automatically. Now that I've added this file, all future files that I create will be formatted based on these editorconfig settings. However, as we just saw in the slides, some editors, like VS Code, require a plug-in to provide editorconfig support. To install the editorconfig plug-in for your editor, go over to editorconfig.org and click on "Download a Plugin," then scroll down to find your editor. I'll click on Visual Studio Code, in blue, since that's what I'm using, and copy the command that I need to paste into the command palette. I'll hit Cmd+P, paste this in, and when I hit Enter, you can see the top result is what we're looking for. I'll click Install, and after it's installed, I'll enable it. At this point, VS Code needs to be restarted; I'll hit OK, and when it fires back up, editorconfig will be enforced for any future files that I create. So when your team uses editorconfig, just be sure that everyone installs the appropriate plug-in for the editor that they're using so that the settings are enforced.
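For reference, here's a minimal .editorconfig matching the settings described above (a sketch; `root = true` and `charset` are common additions the transcript mentions only in passing):

```ini
# .editorconfig in the project root
root = true

# Apply to all files
[*]
indent_style = space
indent_size = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
```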
Demo: Install Node and npm Packages
Let's jump back into the editor and get a few foundational pieces set up. We're going to install Node and npm, and also create a package.json file that will store references to all the packages we'll use for our development environment. To get started, let's install the latest version of Node. I prefer the 6.x branch so that I can enjoy the latest features. Also, the 6.x branch loads modules four times faster than the 4.x version, but 4.x should work just fine for this course as well; just make sure that you're running at least version 4. Now, if you already have Node installed and you're concerned that upgrading will break existing projects, you can run multiple versions of Node using nvm on Mac or nvm-windows on Windows. When you install Node, npm (Node's package manager) comes bundled along with it, so in the next clip we'll add package.json to our project.

Node's package manifest is package.json. It stores the list of npm packages that we're using, as well as the npm scripts that we'll set up later. Now, instead of installing all these packages one by one, let's just use the package.json in this gist. The real URL is long, so use the shortened URL to get there; this saves us time because we won't have to manually type the names of all the packages that we'll be using. These are the packages that we'll use throughout this course. You likely don't recognize some of these names, and that's okay; I'll introduce them as we go along. I'm going to copy this, go back to the editor, and create package.json here in the root. Just paste this into the editor and save it. Now we have our references to the packages that we'll use in this course. If you've never seen a package.json file before, just understand that we have the name of the project, the initial version, a description, some scripts (which we'll add later to help automate our processes), the author's name, the license for this code, and then a list of production dependencies and development dependencies. Since we're building a development environment, our dependencies are all listed down here under devDependencies, and these are all npm packages with their versions.

Now that we have a package.json file in our project, let's install the necessary npm packages. To do that, we'll open up our command line, which in VS Code is integrated; I can open this integrated terminal. Of course, if I preferred, I could just use the terminal built into my operating system. On Mac it looks like this; on Windows you'll have a DOS-style command prompt instead, or you could install Git Bash if you prefer a Unix-style command line on Windows. I'm just going to use the built-in terminal in VS Code. The nice thing about using an editor's built-in terminal is that it opens by default in the root directory of your project, so you don't have to cd into it after opening a terminal. I really prefer these built-in terminals. To install our packages, I'll just type npm install. This should take a minute or two on your machine as it downloads all of these packages and places them within a folder called node_modules in your project, but instead of waiting, I'll just speed this along through pure magic. As you do your install, you may see a few warnings along the way, and if you look at the node_modules folder now, we can see all of the different modules that we'll potentially be using throughout this course.
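The gist itself isn't reproduced here, but the file has roughly this shape (the package names and versions below are illustrative placeholders, not the gist's actual list):

```json
{
  "name": "javascript-development-environment",
  "version": "1.0.0",
  "description": "JavaScript development environment",
  "scripts": {},
  "author": "Cory House",
  "license": "MIT",
  "dependencies": {},
  "devDependencies": {
    "chalk": "^1.1.3",
    "eslint": "^3.8.1",
    "express": "^4.14.0",
    "mocha": "^3.2.0",
    "npm-run-all": "^3.1.1"
  }
}
```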
All of these packages are dependencies, but each dependency has dependencies of its own, and each of those gets installed in its own separate folder here in node_modules. We just downloaded dozens of packages; wouldn't it be nice if we could scan them to help protect ourselves from known security vulnerabilities? Let's explore how we can do that in the next clip.
It's worth remembering that packages can be published to npm by anyone, so that might make you a little bit paranoid. Thankfully, there are people working on this exact problem. Retire.js and the Node Security Platform are two ways that you can check your project's dependencies for known vulnerabilities. Node Security Platform is my preference at this time. It offers a simple command-line interface so that you can check for security vulnerabilities automatically. All you do is call nsp check as part of your build, and it reports the number of vulnerabilities found, if any. We're going to run the Node Security Platform check by hand, but you may want to consider adding the check to your start script. There are a number of options for when to run this, and they all have their trade-offs. The first and most obvious is to manually run the security check, but that's easy to forget. The second option is to run it as part of npm install, but the packages you use may develop security issues later, so merely checking at install time isn't sufficient. You can run it during the production build, or, if you're using GitHub, automatically as part of your pull request, but both of these options are a bit late: by then you've already used the package, so you have a lot of potentially expensive rework ahead if you need to select an alternative package. The final option is to check as part of npm start. This way, each time you start development, the security status of your packages is checked. This has the downside of slowing start a bit and requiring a network connection to perform the check, but it has the advantage of notifying you quickly when a security issue exists. You'll see how to automate tasks like this in an upcoming module on automation.
Demo: Node Security Platform
Let's install the Node Security Platform so that we can automate security checks for our dependencies. To perform security scanning, let's install nsp globally so that we can run it directly on the command line. I'll say npm install -g nsp (nsp stands for Node Security Platform). We'll wait a moment for that to install, and now that we've installed it globally, we can run it right here on the command line by just saying nsp check, and it will check all of our packages for any known vulnerabilities. You can see that it comes back right away saying that no known vulnerabilities are found, so it looks like we're in good shape. Now, note that we installed this module globally so that we could easily run it directly on the command line. In an upcoming module on automation, I'll show how we can call this from an npm script and thus avoid having to install it globally. Now that our package manager is set up, let's wrap up this module with a short summary.
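The two commands from this demo, for reference:

```sh
# Install the Node Security Platform CLI globally so it's available on the command line
npm install -g nsp

# Scan the project's dependencies for known vulnerabilities
nsp check
```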
In this module, we saw that there are currently many package managers to choose from, but in reality, this choice is easy: npm has clearly become the de facto standard and is now the largest package manager in the world. We created a package.json file, which is the manifest that npm uses, and in future modules we'll augment this file with scripts that automate our development and production build processes. We wrapped up by looking at services for security scanning, and ultimately installed the Node Security Platform and ran it on the command line. In the next module, let's review the long list of interesting development web servers to choose from, and we'll set one up so that we can see our application shell run in a web browser.
Development Web Server
Development Web Servers
Demo: Set up Express
For this course, let's use Express as our development web server because it's powerful, highly configurable, and extremely popular. We're going to need a web server for development, so let's use Express. Now, we already installed it when we ran npm install earlier, because Express is one of the many packages listed in our package.json, so we just need to configure it. I like to keep all my build-related tools in a single folder, so let's create a folder called buildScripts in the root of our project. Inside, let's create a new file and call it srcServer.js; I'm following the popular convention of abbreviating source as src. This file will configure a web server that serves up the files in our source directory. To set this up, we'll require Express, and we'll need a reference to path as well, and finally a reference to open, which we'll use to open our site in the browser. I'm going to set a variable that stores the port we'll use in this course: port 3000. Of course, feel free to choose a different port if that one isn't available on your machine. Then I'll create an instance of Express and assign it to the variable app. Next, we want to tell Express which routes it should handle. For now, we'll say that any request for the root should be handled by this function, which takes a request and a response. In this function, we call res.sendFile, and I'll use path.join together with the special variable __dirname, which gives us the directory that we're currently running in, joined with the path to our source directory, which we haven't created yet. Within that source directory is where we'll place our index.html file. Now that we've declared our routing, we just need to tell Express to listen on the port we defined above. Then we'll add some error handling: if there's an error, just log it to the console; otherwise, open the website, with the address and port hardcoded in. Now that we're this far, we should be able to open the terminal and say node buildScripts/srcServer.js, referencing the path where we put this file; we're telling Node to run it. When we do, it tries to open the site, but it can't find the index file, because we haven't created it yet. So let's go back over and create a new folder called src; this is where we'll place all of our source code. And I should've placed it at the root, so let me drag it up to the root. Inside src, let's create a new file called index.html. I'll just paste a little boilerplate HTML in here to get us started, a little hello world, and hit save. Now I'll come back down to the terminal, hit Ctrl+C to kill the previous process, and re-run Express. Now we can see Express starting up on the port that we selected and saying hello world, so we know that Express is handling our first call successfully: the request comes in, Express sees that we're requesting the root, sends the file that we specified here, and opens the application at this address on this port. I'll just add a semicolon that I'm missing here. So great, we now have Express configured. In the next clip, let's talk about some easy ways to share our work-in-progress with other people.
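Assembled from the steps just described, buildScripts/srcServer.js looks roughly like this sketch:

```js
// buildScripts/srcServer.js
var express = require('express');
var path = require('path');
var open = require('open');

var port = 3000;
var app = express();

// Serve src/index.html for requests to the root
app.get('/', function(req, res) {
  res.sendFile(path.join(__dirname, '../src/index.html'));
});

app.listen(port, function(err) {
  if (err) {
    // Log any startup error to the console
    console.log(err);
  } else {
    // Open the site in the default browser
    open('http://localhost:' + port);
  }
});
```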
Demo: Sharing Work-in-progress
Let's try localtunnel to see how easy it is to share our work-in-progress. To quickly share my work-in-progress, I'm going to use localtunnel. To get started, let's install localtunnel globally: we'll say npm install localtunnel -g, and that -g flag signifies that it should be installed globally. Now, in the next module on automation, I'll show how you can run this and other tools without needing to install the package globally. Now that it's installed, we're ready to share whatever is running on our web server. Let's start our development web server using the command that we ran in the previous clip, which was node buildScripts/srcServer.js (I'm missing an s). Now that we have this pulled up, we'll keep it running, but I'm going to open a second terminal, so I'll just click the plus sign here. Of course, if you're using a terminal outside of VS Code, you could just open a second command line in this same directory. Now, to run localtunnel, I'll say lt and pass it the port that I'm using for my development web server, which is 3000. So we're just telling localtunnel to create a tunnel and expose port 3000. You can see that it returns a random URL to me. Now I can take this URL and share it with anybody, and anybody with internet access can see my locally running application at this URL. If I change the application in any way, say I come over to index.html, add a question mark, and hit save, then when I come back over here and reload, we can see that reflected on this public URL. So this is a great way to share my work-in-progress in a very low-friction way. And I can even make this more deterministic by adding a second parameter to specify the subdomain that I'd like. Let's say I'd like a subdomain of cory, so it's a little easier to type and tell other people than this random set of characters. Now, as long as no one else has requested this particular subdomain at the time that I'm using it, I should be able to come over here and hit it; if I had chosen a subdomain that someone else was using, localtunnel would warn me at this point. But this is a nice way to get a shorter URL that I can share with others. Now also, we're using Express for our web server, but I wanted to share an interesting trick: if you choose to use Browsersync for your web server instead, the synchronized browsing experience that it offers will continue to work when you're using localtunnel. So this way, even if your mobile devices and your development machine are on separate networks, they can still interact in a synchronized browsing experience. I'm not going to walk through it in this course, but it's worth noting that if you have a wall of devices that you want to test against your locally hosted work-in-progress, combining Browsersync with localtunnel is a compelling combination. And then once you're done with your work, just come back to the terminal where you're running localtunnel and hit Ctrl+C to stop it. Now that I've stopped it, if I come back over here and try to refresh, we can see that it no longer loads, and it says that there's no active client at this subdomain. Let's wrap this module up with a short summary.
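The localtunnel commands from this demo, for reference:

```sh
# Install localtunnel globally
npm install -g localtunnel

# Expose the dev server running on port 3000 at a random public URL
lt --port 3000

# Or request a specific subdomain (works as long as no one else has claimed it)
lt --port 3000 --subdomain cory
```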
In this module, we considered a variety of development web servers, including ultra-light options like http-server and live-server, more full-featured options like Express, bundler-specific options like budo and webpack-dev-server, and the amazing Browsersync, which integrates with various options like webpack-dev-server and Express. We also saw many low-friction and free ways to share our work-in-progress on the public web. We used localtunnel, since it offers a low-friction way to share our work, but ngrok, Now, and Surge are all great options for quickly sharing work-in-progress as well. And depending on your needs, Now and Surge are both potential options for production hosting also. Now it's time to begin automating tasks, so in the next module let's explore popular options for task automation.
Hey, we're developers, so of course we don't want to do things manually. Automation is a necessity to assure that development builds and related tooling are integrated and utilized in a consistent manner. If we do our job right, all the team will benefit from rapid feedback along the way. So here's the plan. In this module we'll review the options for automating our development environment and builds, and after discussing the options I'll introduce npm scripts which is the automation tool that I recommend for gluing all of our tooling together in an elegant and repeatable manner.
Demo: npm Scripts
So we've made our pick. Okay, you're right, I made the pick, but I think it was a pretty easy call. Okay, anyway, let's automate with npm scripts so you can see how straightforward this is. Now, npm scripts allow you to make command-line calls, utilize npm packages, or even call separate scripts that use Node, so they give us all the power that we need for creating a robust build process. When working with npm scripts, it's popular to call separate files from the scripts section of your package.json file, right here. In a previous module we created srcServer.js, which configures Express to serve up our index.html. Now let's create an npm script that starts our development environment. By convention, this script should be called start; that way we can just type npm start and it will run this command. What we're going to do is put the exact command that we've been typing manually right here within our package.json file, under our start script. And now we can open up our command line and type npm start. When we do, we can see that it starts up our application just like before, but now we have less typing to do and something repeatable that we can work with. It's also worth noting that you don't need to type the run keyword when calling npm start or npm test; both are so commonly used that npm doesn't require the run keyword for either of those scripts. In the next clip, let's augment our application startup by displaying a friendly message when our development environment starts up.
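The resulting scripts section is a one-liner (the path matches the buildScripts folder we created earlier):

```json
"scripts": {
  "start": "node buildScripts/srcServer.js"
}
```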
Demo: Pre/Post Hooks
It would be nice if we received a helpful message when starting up our development environment. To do that, let's create a file called startMessage.js here in buildScripts. I'll just paste in the content: you can see I'm referencing chalk, a library that allows me to specify the color of the output displayed to the console, and we'll just display a message saying that we're starting the app in dev mode. Now, I'd like to run this before our web server starts up, so I'm going to use a handy convention: if I add a script called prestart, then by convention it will run before our start script. I'll just place it right here, and don't forget the comma at the end, because this is JSON. See, npm scripts support convention-based pre and post hooks: any script that you prefix with the word pre will run before the script with the same name, and any script you prefix with the word post will run after the script with the same name. So if I created a script called poststart, it would run after start, and since this one is called prestart, it will run, by convention, before my start script. So now let's come down here, hit Ctrl+C to kill the previous process, and re-run npm start. The first thing we see is a nice message that says we're starting the app in dev mode. Great. And now that we have the basics of npm scripts down, let's use them to automate some of the other processes that we've been doing manually in previous clips.
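As a sketch, startMessage.js and the prestart hook look like this (the exact message text and color are assumptions; the transcript only says it prints a "starting app in dev mode" message via chalk):

```js
// buildScripts/startMessage.js
var chalk = require('chalk');

// Print a friendly, colored message when the dev environment starts
console.log(chalk.green('Starting app in dev mode...'));
```

```json
"scripts": {
  "prestart": "node buildScripts/startMessage.js",
  "start": "node buildScripts/srcServer.js"
}
```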
Demo: Create Security Check and Share Scripts
Now that we've chosen npm scripts as our method for handling automation, we can automate some of the commands that we've run manually in previous steps. First, let's create a script for running the Node Security Platform check; we can call the script security-check. Now, sure, typing npm run security-check is more typing than simply saying nsp check, so you might be wondering why this is beneficial. Well, let's think about the advantages of this approach. First, it's more descriptive: npm run security-check is a lot more descriptive than nsp check. Second, and this is the most important part, remember how we had to install nsp globally in a previous module so that we could run it directly on the command line? Well, with npm scripts you don't have to do that; you don't need to install tools globally to run them. So let me clarify why we don't need to install it globally anymore. When you install your node modules, just by saying npm install to install all the packages listed in package.json, their executables get added to a .bin folder right here, and everything in .bin is automatically on the path when called from npm scripts. You should see that nsp is down here, so this script, nsp, is on the path. All of the executables in the .bin folder are automatically added to the path, so nobody needs to install these packages globally. Creating this dedicated npm script will also allow us to automatically run the check as part of the application startup process, if desired; we'll see how to handle concurrent tasks in the next clip. So we just created a reusable script for running our security check; let's do the same thing now for localtunnel. We'll call this script share, and it will just run localtunnel and expose port 3000. Now, just to prove that it works, we can come down here and type npm run security-check and see that it runs nsp check for us. No known vulnerabilities are found, so we're in good shape. Of course, it would also be nice to be able to run certain scripts in parallel, so we'll look at that in the next clip.
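The two new scripts look like this (note that npm script names can't contain spaces, hence security-check):

```json
"scripts": {
  "prestart": "node buildScripts/startMessage.js",
  "start": "node buildScripts/srcServer.js",
  "security-check": "nsp check",
  "share": "lt --port 3000"
}
```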
Demo: Concurrent Tasks
The pre and post hooks are handy when you want to run something before or after a script, but you'll likely find that you also want to run multiple things at the same time. To do that, we'll use a package called npm-run-all. This is a handy way to run multiple items at the same time in a cross-platform way; in other words, it'll run on Linux, Unix, and Windows. Let's say that we'd like to run the security check each time we start the app, at the same time we start up the web server. To do this, we can change our start script to instead call npm-run-all and specify that we want to run some tasks in parallel. There are two tasks that we want to run here: the security-check task, and starting up our web server. So let's take the server command and put it in a well-named script below, which we'll call open:src, and then reference it right here in our start script. So now our start script says: run all the tasks listed over here on the right-hand side in parallel, using npm-run-all. Let's try this out. I'll come down here and type npm start, and when I do, we can see that it starts up our application as expected, and it also runs our security check as part of that start script. And of course, since we have a prestart script set up, it also said it was starting the app in dev mode. You'll see we also get some other noise here; if this noise bothers you, you can always type npm start -s, which means silent, and that silences most of the noise, so all we see now is the messaging that we've explicitly asked for, which is starting the app in dev mode, plus any output from our security check. And we can see that it starts our application just fine. So great, we're now running different npm scripts in parallel using npm-run-all. Now, there's another place that I'd like to run tasks in parallel for convenience: I'd like a single command that starts up the web server and shares my work via localtunnel, so that I don't have to manually open two separate terminals like we did in the previous module. To do this, I'm going to rename the share script to localtunnel, since running localtunnel is specifically what it does, and then create a new share script below it that handles the entire sharing process, because what we want to do is run two things in parallel: open our development web server, and start localtunnel so that it exposes port 3000. To test this out, I'll open up the terminal and say npm run share. We can see that it starts up our application running on port 3000, and it also starts localtunnel in parallel, so I can copy this URL and see that our application is also exposed publicly. So great, this is a pretty handy way to share our work-in-progress with a single command. And that's the power of npm scripts in a nutshell. All right, let's close out this module with a summary.
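Putting the whole clip together, the scripts section ends up roughly like this (the script names are spoken in the recording, so open:src and localtunnel are my reading of "open source" and "local tunnel"):

```json
"scripts": {
  "prestart": "node buildScripts/startMessage.js",
  "start": "npm-run-all --parallel security-check open:src",
  "open:src": "node buildScripts/srcServer.js",
  "security-check": "nsp check",
  "localtunnel": "lt --port 3000",
  "share": "npm-run-all --parallel open:src localtunnel"
}
```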
Transpiling Build Scripts
Demo: Set Up Babel
Why ES6 Modules?
Choosing a Bundler
Demo: Configuring Webpack
Demo: Configure Webpack with Express
Demo: Create App Entry Point
Demo: Handling CSS with Webpack
Now that we're bundling our code, there's another important tool that we need: Sourcemaps. Once we start bundling, minifying, and transpiling our code, we create a new problem. Our code becomes an impossible-to-read mess when it's running in the browser. So you might be wondering, how do I debug? Hey, that's commendable that you're concerned. Thankfully, this is a solvable problem. The solution is to generate a sourcemap. Sourcemaps map the bundled, transpiled, and minified code back to the original source. This means that when we open our browser developer tools and try to inspect our code, we'll see the original ES6 source code that we used. It's like magic. Now, the sourcemaps can be generated automatically as part of our build process. You might be wondering how minifying the code actually saves any bandwidth, if we have to generate a big map back to the original source. That's a good question. Credit to you, very perceptive. The beauty of sourcemaps is they're only downloaded if you open the developer tools. So this way, your users won't even download the sourcemaps, but they'll be available for you in case an issue arises, in either your development environment, or in production. So effectively, sourcemaps give you all the benefits of being able to read your original code, without any additional cost to regular users.
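Concretely, with webpack, generating sourcemaps as part of the build comes down to one setting in the config. A minimal sketch (everything else elided), using the value this course settles on in the next clip:

```js
// webpack.config.dev.js (sketch; entry, output, and loaders omitted)
module.exports = {
  // Generate sourcemaps inline within the bundle; see the next clip for trade-offs
  devtool: 'inline-source-map'
};
```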
Demo: Debugging via Sourcemaps
Alright. Now that we've set the stage, let's configure our build to automatically generate sourcemaps as part of the bundling process. When we set up Webpack for development, we told Webpack to generate a sourcemap by specifying the devtool setting that you see here. There are many potential settings to consider, but I'm using inline sourcemap for this course. I encourage you to experiment with the different settings to find one that's best for you. As you can see from this table, the basic trade-off is between sourcemap quality and speed. Higher quality sourcemaps take a little more time to create, so you'll notice a little more delay on larger apps. Let's jump back to our source code. My preferred approach for setting breakpoints, is to type the word debugger on the line where I'd like the breakpoint to hit. So let's set debugger right here. I'll hit save. And now, let's jump back over to the browser and reload. And when we do, we can see our breakpoint hit. As you can see, the browser sees the debugger statement, and it breaks on the line where I typed debugger. And since we're using sourcemaps, the original code that we wrote is displayed in the console. Even though the actual code that's running in the browser looks like the code that I showed you earlier, that was hard to read. And this is really handy, because in a later module, we'll set up a production build that minifies our code for production. And still, because of our sourcemap, we'll be able to easily read the code just like you see here. Because we'll be seeing our original source code. And again, if you want to see the source code that's running in the browser, you can click on bundle.js, to see the transpiled and bundled code that's actually being parsed by the browser. Alright, let's wrap up this module. Time for another summary.
In this module, we began by considering our options for bundling our code into usable and encapsulated modules. We briefly looked at IIFEs, AMD, UMD, and CommonJS, but we saw that ES6 modules are the future, because they're standardized and statically analyzable, which enables rich features such as autocompletion support, deterministic refactoring, and reduced bundle sizes via tree shaking, assuming that you select a bundler that supports it. Then we moved on to discussing bundlers. I chose Webpack for this course because it's very popular, extremely powerful, and highly configurable, but Browserify, Rollup, and JSPM are all excellent alternatives to consider. Then we implemented ES6 modules and bundled our code via Webpack. And we closed out this module by discussing and generating sourcemaps. Sourcemaps are awesome because they allow us to see our original source code in the browser when we open the developer tools. This way we can set breakpoints and debug in the browser, even after our code has been transpiled, bundled, and minified. And since they're only requested by the browser when the devtools are open, they don't slow down the customer's experience in any way. In the next module, let's protect ourselves from mistakes and enforce consistency in our codebase. It's time to set up an automated linting process, so that we're notified of typos and errors as soon as possible.
ESLint Configuration Decisions Overview
Decision 1: Configuration File Format
Let's begin with decision one: where should you put your configuration? The number of different ways to configure ESLint is just plain silly. There are five different file names that it currently supports for configuration, or you can add your configuration to package.json, so how do you decide? Well, the most common, universal approach is to create a dedicated .eslintrc file using one of the file names and formats mentioned on the previous slide, but, assuming you're already using npm, you can also configure ESLint in package.json. The advantage of configuring via a separate file is that it's universal; it's not tied to npm. Using package.json, on the other hand, avoids creating yet another file in your application. To configure ESLint via package.json, add a section called eslintConfig. The contents of this section are the same as in the .eslintrc approach, and we'll walk through the contents of .eslintrc in a moment.
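For example, configuring ESLint inside package.json looks roughly like this (the extends and env values here are illustrative assumptions, not the course's exact config):

```json
{
  "eslintConfig": {
    "extends": "eslint:recommended",
    "env": {
      "browser": true,
      "node": true
    }
  }
}
```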
Decision 2: Which Rules?
After choosing a configuration method, decision two is: which rules should we enable? ESLint catches dozens of potential errors out of the box: comma dangle, no duplicate arguments, no extra semicolons, and more. I suggest gathering as a team and deciding once and for all which of these rules are worth enabling. Yes, it will be a painful meeting, but once you get it done and into your starter kit, it's settled, and you can enjoy the benefits.
Decision 3: Warnings or Errors?
Now that you've decided which rules you want to enable, you have yet another decision to make: which of your rules should merely produce warnings, and which are a big enough deal to justify throwing errors instead? Let's consider the implications of warnings versus errors. A warning doesn't break the build, so you can continue with development without fixing the issue. In contrast, errors actually break the build, which can be helpful when the linter finds a more critical issue that should catch your attention immediately and keep you from moving forward. But since warnings don't stop development, they can ultimately be ignored. This is handy in the moment, when you're focused on implementing a feature and don't want to stop your flow to fix a minor issue, but it also means that warnings can be committed, because they don't break the build. Errors, in contrast, are a clear wall to moving forward; they can't be ignored. Due to these trade-offs, I've seen some shops use only warnings, because they favor moving as fast as possible, and I've seen other shops use only errors, because they favor stopping any work that isn't good enough to commit at that moment. Now, I suggest using both: warnings are good for minor stylistic issues, and errors are useful for items that are likely to produce bugs. The bottom line is, it's important that your development team agrees that committing warnings is not acceptable. If you commit code that produces warnings, then your linting output will quickly get so noisy that it's useless, and it will mask other helpful output, like test results, that displays on the same command line. If you choose errors, then you don't have to worry about people ignoring linting issues; they'll be forced to comply, because the application won't build. In summary, I recommend choosing carefully based on context. You'll likely decide that only some rules warrant throwing an error rather than a warning.
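In ESLint's rule configuration, 0 means off, 1 means warning, and 2 means error. So a mixed policy like the one recommended above might look like this (the rule choices are illustrative):

```json
"rules": {
  "no-console": 1,
  "no-debugger": 2
}
```

Here, a stray console statement only warns, while a leftover debugger statement, which is far more likely to cause trouble if shipped, fails the build.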
Decision 4: Plugins?
Now that you've configured ESLint's built-in rules, you have another decision to make: should you enhance ESLint with plugins for your library or framework of choice, and if so, which ones? For instance, I primarily work in React, and in my React course you can see how I use eslint-plugin-react to perform a number of additional checks that are specific to React. Similar plugins are available for other popular frameworks like Angular, as well as for Node.js. There's a handy list of ESLint configs, plugins, parsers, and more at this URL. As you'll see, there are plugins available for many popular frameworks and libraries. These plugins are useful because they help enforce a consistent style in the way that your team works with your framework of choice.
Decision 5: Preset
Watching Files with ESLint
Now that we've decided to use ESLint, there are a few different ways to actually run it. Of course, the simplest way to run ESLint is via the command line; however, there's one obvious limitation with this approach: ESLint doesn't currently include a built-in watch setting, so if you want to automatically run ESLint each time you hit save, running ESLint by itself won't work. Here are two ways to get around ESLint's lack of file-watching capability. First, since we're using webpack, one option is to use eslint-loader, so webpack will run ESLint each time we run our build. The advantage of this approach is that all files being bundled by webpack are re-linted every time you hit save, so you see an ongoing summary of any linting issues. However, I recommend going a different route and using an npm package called eslint-watch. This package is simply a wrapper around ESLint that adds file-watching capability. It stands alone and isn't tied to webpack in any way, so you can use this approach to linting regardless of the bundler that you choose. eslint-watch also adds some other nice tweaks, like better-looking warning and error messaging, and it displays a message when linting comes back clean, unlike eslint-loader, which is completely silent when there are no linting issues. But finally, the biggest win with this approach is that you can easily lint all your files, even if they're not being bundled as part of your app. This means you can lint your tests, webpack config, and any build scripts as well. I really like this, because it ensures that all the code in my project is held to the same standard and has a consistent style.
Linting Experimental Features
Why Lint Via an Automated Build?
Now, as we've been talking through this process, you might be wondering why we should bother to lint via an automated build process, because many editors offer ESLint integration built in; they just monitor your code and output results inline right there within the editor. However, I prefer to integrate ESLint with my build process for multiple reasons. First, outputting all feedback on my code to the command line gives me one single place to check for all the feedback related to my code quality. This means I have one place to check not just for linting issues, but also for any compile-time errors or any testing errors. This is especially helpful on teams where developers all use different editors: we all have the same development workflow, because we all utilize the same starter kit and a command line. Pair programming is also easier when everyone has the same development process. And most importantly, ESLint should be part of your build process so that the build is broken on your continuous integration server when someone commits any code that throws a linting error. This helps protect your application from slowly getting sloppy: even if a developer ignores ESLint locally, the build can be rejected automatically by your continuous integration server.
Demo: ESLint Set Up
Demo: Watching Files
Let's create another handy script. Oddly, eslint-watch doesn't watch our files by default; instead, you have to pass it a command-line flag to enable watch. So let's create a separate npm script that will watch our files. We'll call it lint:watch, and I'll place it right here below our lint script. In it we'll say npm run lint, and here's where things get weird: I'm going to add -- --watch, so we're passing the watch flag along up here to our lint script. This says: run the npm lint script, but pass the watch flag to eslint-watch. So let's hit save and see if it works: npm run lint:watch. Now, if I come over to srcServer.js and take out this disable comment and hit save, we can see ESLint reruns immediately and throws a warning about my use of a console statement in this file, right down here. And if I put the comment back in and hit save, then ESLint runs again and reports everything clean. Excellent. So now we have linting watching our files, and now that linting's set up, we'll know when we make many common mistakes in our code; the linting errors will display immediately in our console when we hit save. Now, there's one final piece that's missing here. We'd like ESLint to run every time we start our app, so we just need to add our lint:watch task here to our start script, and it will run in parallel, since we're already using npm-run-all and telling it to run the scripts we list in parallel. Now, if I type npm start -s, we should see that linting is part of our start: we get our message, we get our security check, and there our linting is running as well. So now, when we type npm start, it displays our start message, runs webpack, starts our development web server, opens the app in our default browser, lints our files, and reruns webpack and ESLint any time that we hit save. That's a lot of power in so little code. Let's wrap up this module with a quick summary.
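For context, the pair of scripts looks like this (esw is eslint-watch's binary; the file list in the lint script is an assumption based on the files linted in the course):

```json
"scripts": {
  "lint": "esw webpack.config.* src buildScripts",
  "lint:watch": "npm run lint -- --watch"
}
```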
In this module, we saw two core reasons to lint. Linting helps enforce consistency so that our code is easy to read, and it helps us avoid many common mistakes related to typos, globals, and accidental assignments. We saw that there are multiple linters, but we chose ESLint because it's currently the most popular, configurable, and extensible linter available. We reviewed a variety of configuration choices, including the config format, the rules that you enable, whether to use warnings versus errors, and which plugins you should add; and if you're overwhelmed, you can just select a preset instead, such as Airbnb's configuration or the Standard JS config. We wrapped up by enhancing our development environment to use ESLint's recommended rules and to run ESLint every time we hit save, using eslint-watch. Our development environment is really coming together now, but we haven't covered a critical aspect yet: what about testing and continuous integration? Let's explore these topics in the next module.
Testing and Continuous Integration
Test Decisions Overview
Decision 1: Testing Framework
Decision 2: Assertion Libraries
Many test frameworks, such as Jasmine and Jest, come with assertions built in, but some, such as Mocha, don't come with an assertion library, so we have to pick our own. So what's an assertion? An assertion is a way to declare what you expect. For example, here I'm asserting that two plus two should equal four. This is an assertion because I'm telling my test what I expect to happen; if the statement is false, the test fails. There are many potential ways to declare assertions in tests. Some frameworks might look like this example; others might use a keyword like assert instead of expect. Don't let the minor differences confuse you: most of the choice between assertion libraries comes down to minor syntactic differences. The most popular assertion library is Chai, but there are other assertion libraries out there to consider, like Should.js and expect. Again, the core differences between these are relatively minor syntax differences, so don't spend too much time worrying about this. In this course, we'll use Chai for assertions because it's popular and offers an array of assertion styles to choose from.
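As a concrete sketch of the two-plus-two assertion, using Mocha with Chai's expect style (the framework pairing this course uses):

```js
import { expect } from 'chai';

describe('addition', () => {
  it('should equal 4 when adding 2 + 2', () => {
    // If this expectation is false, the test fails
    expect(2 + 2).to.equal(4);
  });
});
```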
Decision 3: Helper Libraries
There's another question to answer before we start writing tests. Should we use a helper library? And if so, which one? JSDOM is one interesting library to consider. JSDOM is an implementation of the browser's DOM that you can run in Node.js. So with JSDOM, we can run tests that rely on the DOM without opening an actual browser. This keeps your testing configuration for automated tests simpler and often means that tests run faster because they're not reliant on running in the browser. So JSDOM is useful when you want to write tests that involve HTML and interactions in the browser using Node. We'll write a test using JSDOM later in this module. Cheerio is another interesting library worth mentioning. You can think of Cheerio as jQuery for the server. This is really handy if you're using JSDOM because you can write tests that assert that certain HTML is where you expect it. And the great news is, if you understand jQuery, you already know how to work with Cheerio, because it uses jQuery selectors for querying the DOM. Imagine you wrote a test that expects a specific DOM element to exist on the page. With Cheerio, you can query JSDOM's virtual DOM using jQuery's selectors. If you already know jQuery, this can save some typing compared to writing traditional DOM queries.
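Here's a minimal sketch of the two together. Note that this uses jsdom's current constructor API; course-era versions of jsdom used a callback-based jsdom.env function instead:

```js
const { JSDOM } = require('jsdom');
const cheerio = require('cheerio');

const html = '<h1 id="title">Hello World!</h1>';

// jsdom: a DOM implementation that runs in Node, no browser required
const dom = new JSDOM(html);
console.log(dom.window.document.getElementById('title').textContent); // Hello World!

// cheerio: jQuery-style selectors for asserting against that same HTML
const $ = cheerio.load(html);
console.log($('h1#title').text()); // Hello World!
```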
Decision 4: Where To Run Tests
Decision 5: Where Do Test Files Belong?
Decision 6: When Should Tests Run?
Okay, one final decision to make regarding testing. When should our tests run? Well, if we're talking about unit tests, the answer is simple. Unit tests should run every time that you hit save. This rapid feedback loop assures that you're notified immediately of any regressions. And running tests each time you hit save facilitates test-driven development, since you can quickly see your tests go from red to green just by hitting control S. If you run your tests manually, it creates unnecessary friction. When the test suite is run manually, it's easy to forget to run the test suite after making a change, so make it automatic and reduce friction. Finally, running tests on save increases the visibility of the tests that do exist. It helps to keep testing in the forefront of your mind, so I believe this is an easy decision. Your unit tests should run automatically when you hit save. I know what you're thinking, but I can't run my test suite every time I hit save, that'll be way too slow. Well, I should emphasize. I'm talking about unit tests here. Unit tests should run extremely fast. Integration tests are also useful and admittedly slower, so you'll want to run those separately. But your unit tests should be fast because they shouldn't hit external resources. Now let me back up for a moment and clarify the difference between unit tests and integration tests. Unit testing is about testing a single small unit of code in isolation. Integration testing is about testing the integration of multiple items. So unit testing often involves testing a single function all by itself while integration testing often means firing up a browser and clicking on the real UI using an automation tool like Selenium and often making actual calls to a web API, though you can of course write integration tests using just Node and JSDOM, for instance. And since unit tests seek to test a small portion of code in isolation, they run extremely quickly, quick enough that you should be able to run all your unit tests every time that you hit save. In contrast, integration tests are slower because they often require real external resources like browsers, web APIs, and databases, which take much longer to spin up and respond than native function calls. Now since unit tests run fast, they should be run every time you hit save, and if your unit tests don't run fast enough to rerun every time that you hit save, that's often a sign that they're not really unit tests. But since integration tests typically interact with slow external resources, they're often run on demand or in QA. In summary, the answer to when your tests should run comes down to whether you're writing unit tests or integration tests. We're going to write unit tests in this module, so we'll run our tests every time that we hit save.
Demo: Testing Setup
Demo: DOM Testing
Demo: Watching Tests
Another detail is missing in our tests: we shouldn't have to run them manually. So let's run them every time we hit save. To do that, we'll add a script down here called test:watch. You'll see this looks almost identical to lint:watch; we're using the same pattern of telling the test script to run, but passing another parameter to it with this -- --watch syntax. It's just as though I'd taken this flag and added it up here. Now, of course, we also want to call this as part of our start script. So let's run the app and see how it works: npm start -s. And we can see the app starts up just fine. Of course, there's much more to testing than this. You could set up mocking, code coverage, reporting, and more. Now that we have testing set up, it would be nice if we could fail the build when someone commits broken tests, so in the next clip, let's explore continuous integration.
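The new script follows the same pattern as lint:watch (the test script shown here reflects a Mocha setup like the one from the earlier testing demo and is an assumption as written):

```json
"scripts": {
  "test": "mocha --reporter progress buildScripts/testSetup.js \"src/**/*.test.js\"",
  "test:watch": "npm run test -- --watch"
}
```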
Why Continuous Integration?
While we're talking about assuring quality with testing, there's another important practice to consider: continuous integration. When your team commits code, it's handy to confirm immediately that the commit works as expected on another machine. That's what a continuous integration server, or CI server for short, is for. So let's wrap up this module by setting up a continuous integration server to assure that we're notified when someone breaks the build. I'm sure you've heard that annoying guy Jimmy say this to you multiple times: "Weird, it works on my machine!" Well, thanks Jimmy, that was super helpful. Wouldn't it be nice to find out right away when someone else has broken the build, or when you made a bad commit that has broken the build and ruined someone else's day? That's what a continuous integration server is for. Now, the question that you might be asking is: how do we end up in a situation where it works on our machine but breaks on the CI server? Well, the CI server catches a number of potential mistakes. Have you ever forgotten to commit a new dependency? Have you ever installed an npm package but forgotten to save the package reference to package.json? Or maybe you added a new step to the build script, but it doesn't run cross-platform. Perhaps the version of Node that you're running locally is different from the one you're using in production, so the app may build just fine with the version on your local machine but fail on the continuous integration server. Maybe someone just completed a merge but made a mistake along the way that broke the build. Finally, perhaps someone on your team committed a change without running the test suite. In this case, I typically recommend covering their entire desk in aluminum foil, but the good news is, with a CI server, you don't have to worry: it will catch the culprit and notify the developer of his or her transgression. These are just a few great reasons to run a continuous integration server. The point is, a CI server catches mistakes quickly.
What Does Continuous Integration Do?
So what does a CI server do to provide all these benefits? Well, first, it builds your application automatically the moment that you commit. This assures that your application builds on another machine, which sure beats the all-too-common alternative where, hours or days later, someone gets latest and complains that someone broke the build. A CI server makes it clear who broke the build by checking every commit. It also runs your test suite. Of course, you should be running your tests before committing, but a CI server assures it always happens, and it assures that the tests pass on multiple machines. If the tests don't pass, then your commit has issues, so it's important to have a separate server run your tests to confirm they pass on more than just your machine. A CI server can run tasks like code coverage and reject a commit if coverage is below a specified threshold. And finally, although it's not required, you can even consider automating deployment using a CI server. In this scenario, if all the aforementioned checks pass, your application is automatically deployed to production.
Choosing a CI Server
Demo: Travis CI
We just saw how to set up continuous integration on a Linux server using Travis CI. Now it's time for Windows, so we're going to use AppVeyor as an alternative to Travis CI that runs on Windows. As you can see here on appveyor.com, you can click to sign up for free, and once you do, you can sign in using your existing GitHub account. You will need to authorize it, so I'll click authorize application. Actually, I can't do that because I have an existing account, so I'll click sign in instead and log in. Once I do, we can see some projects that have run fairly recently; of course, with a new account you won't see any existing projects. I'm going to click new project, and as you can see, AppVeyor supports a number of different online repositories, but we're going to use GitHub. Now I can see a list of all my GitHub repos. I'll come down here; this should look pretty familiar compared to Travis CI. Here's what I'm looking for, and I'm just going to click add. Now we're redirected to a page where we can view the build history, deployments, and settings. Let's click on settings. Here you can change a long list of settings, but again, the defaults are just fine for our purposes. And just like Travis CI, we need to jump back into the editor to finish configuring AppVeyor. AppVeyor is configured with a file called appveyor.yml, and it should again reside in the root of your project. The recommended AppVeyor configuration is a little more involved, so I'll just paste this in and we can talk through it. As you can see, we're telling AppVeyor that we should be using Node.js version 6; we could add other versions below with another dash if desired. The rest of this boilerplate is recommended by AppVeyor: we declare that we want to install our npm packages and also run our tests, telling it the specific npm tasks that it should run. And the version output is just here because it's useful to see the Node and npm versions being run when we're trying to debug. With this file saved, we should be able to open the terminal and say git add ., to add this file to staging. If we say git status, we should see that it's now staged, so let's commit this file: git commit -m "Add AppVeyor CI", and hit enter. Now that it's committed locally, let's push it up to GitHub by saying git push. When we do, let's go back over here and click on latest build. We can already see that our build is in progress. Great, and it looks like our build succeeded; let me scroll back to the top, and we can see the green bar, so it all worked. Just like with Travis CI, if our build had failed, we'd have received an email notifying us that we had broken the build. And just like Travis CI, we can see that it installed Node, installed our dependencies, and ran our tests successfully. So great, we can now feel confident that our development environment runs properly on both Linux and Windows. All right, that's it for this module; let's summarize what we just learned.
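For reference, an appveyor.yml along the lines of AppVeyor's recommended Node.js configuration, matching what's described above (a sketch, not necessarily the course's exact file):

```yaml
# appveyor.yml, placed in the project root
environment:
  matrix:
    # Test against Node.js 6; add more "- nodejs_version" entries for other versions
    - nodejs_version: "6"

install:
  # Install the Node version declared above, then the project's npm packages
  - ps: Install-Product node $env:nodejs_version
  - npm install

test_script:
  # Output version info for debugging, then run the test suite
  - node --version
  - npm --version
  - npm test

# npm handles the build, so disable AppVeyor's default MSBuild step
build: off
```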
HTTP Call Approaches
Centralizing HTTP Requests
Here's an important point I see people often overlook when making API calls: make sure they're handled in a single spot. So why is this important? Because it centralizes key concerns. First, it gives you one place to configure all your calls; this way, you can handle important configuration like base URLs, the preferred response type, and whether to pass credentials, all in a single spot. It also ensures that all GET, PUT, POST, and DELETE calls are handled consistently. When asynchronous calls are in progress, it's important that the user is aware. This is often accomplished via a moving preloader icon, commonly called a spinner. By centralizing all your calls, you can keep track of how many asynchronous calls are in progress, which assures a preloader continues to display until all async calls are complete. Centralization also gives you a single place to handle all errors. This ensures that any time an error occurs, your application can handle it in a standardized way: perhaps you want to display an error dialog, or log the error via a separate HTTP request. By centralizing your API calls, a single method can assure that this occurs for all calls. Finally, centralizing your API calls gives you a single seam for mocking your API. Centralizing your calls means you can point to a mock API instead of a real one by changing a single line of code that points to a different base URL. We'll discuss this more in an upcoming clip.
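As a sketch of what such a central module might look like (the file name, base URL, and error-handling policy here are all assumptions for illustration):

```js
// api/api.js - one place for the base URL, response handling, and errors
const baseUrl = 'http://localhost:3001/';

function onSuccess(response) {
  // Standardize response handling for every call
  return response.json();
}

function onError(error) {
  // One spot to display a dialog or log the error for every call
  console.error('API call failed: ' + error);
  throw error;
}

// Every GET in the app funnels through here
export function get(url) {
  return fetch(baseUrl + url).then(onSuccess, onError);
}
```

With this in place, pointing the whole app at a mock API is a one-line change to baseUrl.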
Now you might be wondering if Fetch is already supported natively in some browsers, why are we sending our polyfill down to all browsers? Well in short, we did so because it was easy and it's also quite common. The idea is that you can remove the polyfill altogether later when all the browsers that you care about have added support for Fetch. But if you want to send a polyfill only to browsers that need it, there's a handy service called Polyfill.io which does just that, it offers a wide array of polyfills. Here's an example of using polyfill.io to polyfill only the Fetch feature, so if we put this at the top of index.html, Polyfill.io will read the user agent and use that information to determine if the browser requires a polyfill for the feature or features listed. Since I'm using Chrome it will send back an empty response since my browser doesn't need it, pretty slick. Now what if we need a wide variety of data to build our app and the services we need to call don't exist yet? That's just one of the reasons that you might want a robust mock API, so in the next clip, let's discuss approaches for mocking APIs and why it's so useful.
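The script tag in question looks like this (using the course-era polyfill.io v2 CDN URL):

```html
<!-- Only browsers whose user agent indicates missing fetch support receive any polyfill code -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=fetch"></script>
```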
Why Mock HTTP?
We've now set up our development environment to handle making HTTP requests, but it's often helpful to mock HTTP. Why? Well, maybe you want to unit test your code so that your tests run quickly and reliably. Or maybe the existing web services in your development or QA environment are slow or expensive to call; mocking HTTP means that you can receive consistently instantaneous responses. Or maybe the existing service is unreliable; with a mock API, you can keep working even when the services are down. Maybe you haven't even created any web services yet. If you haven't decided how to design your web services, mocking allows you to rapidly prototype different potential response shapes and see how they work with your app. Perhaps a separate team is creating the services for your app; by mocking the service calls, you can start coding immediately and switch to hitting real web services when they're ready. You just need to agree on the API's proposed design and mock it accordingly. Finally, maybe you need to work on a plane, on the road, or in other places where connectivity is poor. Mocking allows you to continue working while you're offline.
How to Mock HTTP
So does that sell you on the benefits of mocking HTTP? Assuming so, here are a few ways to get it done. If you're writing unit tests, then Nock is a handy way to mock HTTP calls in your tests. You tell Nock the specific URL that you want to mock and what it should return, and Nock will hijack any HTTP request to the URL that you specified and return what you specified instead (there's a short sketch of this at the end of this section). This way, your tests become deterministic and no longer make actual HTTP calls. But if you want to do day-to-day development against a mock API, you'll want something more. If you've already centralized all your API calls within your application, then you can use this centralization to your advantage by pointing to a static file of JSON-encoded data rather than making the actual HTTP call. Or you can, of course, create a web server that mocks out a real API. Thankfully, there are libraries that make this easy, such as api-mock and JSON Server. I personally use JSON Server. With JSON Server, you create a fake database using static JSON; then when you start up JSON Server, it creates a web service that works with your static JSON behind the scenes, so when you delete, add, or edit records, it actually updates the file. This provides a full simulation of a real working API, but against local mock data that's just sitting in a static file. This is really useful because the app feels fully responsive, and you don't have to go through the work of standing up a local database and web server by hand. What if you want to use dynamic data instead of the same hard-coded data? That's where JSON Schema Faker comes in handy. JSON Schema Faker generates fake data for you. You specify the data type you'd like, such as a string, number, or boolean, and it will generate random data, which you can write to a file. You can specify various settings that determine how it generates the data, such as ranges for numbers, or useful generators that create realistic names and emails. Finally, you can go all out and wire up a fake API yourself using your development web server of choice, such as Browsersync or Express. Of course, this is the most work, but it also provides you with the most power. So how do you decide between these options? In short, as you move to the right, you have to spend more time up front configuring, but in return you enjoy a more realistic experience and more power to customize. With static JSON, your app will load the same data every time, and if you try to manipulate that data in any way, it won't be reflected upon reload. JSON Server actually saves the changes that you make to the data, so it increases the realism of your mock API. You can make your mock API more dynamic by using JSON Schema Faker, which can create different fake data every time you start the app. This can be really helpful for catching edge cases in your designs, such as pagination, overflow, sorting, and formatting. And finally, setting up a full mock API from scratch using something like Express and a real-time database filled with mock data gives you all the power to customize as desired. But if you don't already have a service layer and a database, you're on the hook to do all that hard work up front before you can enjoy a rapid front-end development experience. In summary, if there's already a solid service layer available, then I suggest putting it to use.
But if a separate team is building the service layer and it isn't built yet, I suggest trying a mock API so that you can move quickly without being reliant on a real API backend. The lessons you learn with your mock API can often help guide your API design. So now that we've talked about the different decisions, in the next clip let's talk about our plan for mocking HTTP in our starter kit.
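As promised above, here's a minimal sketch of the Nock approach for tests. The URL and the response shape are hypothetical, not from the course:

    import nock from 'nock';

    // Hijack any GET to this exact host and path and return canned JSON instead
    nock('http://example.com')
      .get('/users')
      .reply(200, [
        { id: 1, firstName: 'Ada', lastName: 'Lovelace' }
      ]);

    // Code under test that requests http://example.com/users now receives the
    // array above, deterministically, without a real HTTP call being made.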
Our Plan for Mocking
For this course, let's use a three-step process to create a mock API, putting a few handy open source projects to use. First, we'll declare the schema for our mock API using JSON Schema Faker. This will allow us to declare exactly what our fake API should look like; we'll declare the objects and properties that it will expose, including the data types. Step two involves generating random data. JSON Schema Faker supports generating random data using a few open source libraries: faker.js, chance.js, and randexp.js. Faker and Chance are very similar; both offer a wide variety of functions for generating random data, including realistic names, addresses, phone numbers, emails, and much more. Randexp focuses on creating random data based on regular expressions. JSON Schema Faker allows us to use faker, chance, and randexp within our schema definitions, so we'll declare exactly how each property in our mock API should be generated. This will ultimately produce a big chunk of JSON, and the nice thing is that big chunk of JSON will contain different data every time we run JSON Schema Faker. And that's where our final piece comes in: JSON Server creates a realistic API using a static JSON file behind the scenes, so we'll point JSON Server at the mock data set that we dynamically generate. The beauty of JSON Server is that it actually supports creates, reads, updates, and deletes, so it saves changes to the JSON file generated by JSON Schema Faker. This way, the API feels just like a real API, but without making an actual over-the-network HTTP call or needing to stand up a real database. This means that to get started on development, we just need to agree on the calls that we want to make and the data shape that those calls should return. Then the UI team can move ahead without having to wait on a service team to actually create the associated services; everyone can code to an interface and get back together later. Now that we've talked about the high-level plan, let's explore the technologies that we're going to use in a little more detail in the next clip.
Demo: Creating a Mock API Data Schema
Alright, it's time to mock some HTTP. Here's the plan: we'll use JSON Schema Faker to declare a fake schema. It comes bundled with three handy libraries that we'll use to generate random data: faker, chance, and randexp. And once we've created our schema and generated our mock database file, we'll use JSON Server to serve it up and simulate a real API. Let's dive in. As we just discussed in the slides, we're going to use a combination of useful open source projects to create a mock API. To begin, let's define a schema that describes what our mock data should look like. Let's create a file called mockDataSchema within our buildScripts folder, and I'll just paste in the schema and then talk through the structure. If you don't want to type this, you can grab the snippet from this URL. As you can see, I'm exporting a chunk of JSON, and this JSON describes the shape of our mock data. I'm declaring at the top level that our data structure is an object, and that object has a set of properties. The first property is users, and that users property has a type of array. I'm specifying that I want that array to contain between three and five items, and then below, I define the shape of the items that should sit inside the users array. I'm saying that inside the users array I should find an object, and then again I define the properties for that object. As you can see, there are four properties I'm defining: id, firstName, lastName, and email. The id should be a number; I'm saying it should be unique because I'm trying to mimic a primary key in a database, and I want its minimum value to be 1, since I don't want any negative numbers. Then I have firstName with a type of string, and this is where things get interesting: I start using the faker library and ask for a fake first name. I do basically the same thing with lastName, and then I also use faker on email to say that I would like a fake email address returned. Finally, down here at the bottom, I say that all four of the properties we defined above are required, which means they will always be populated. If I leave one of them out of the required array, then JSON Schema Faker will occasionally omit that property, so that we can simulate an API that doesn't always send a property when it's not populated. And I also specified that my one top-level property, users, is required, so in this case our schema will always return all of these properties, since I've required them all. Pay close attention to these required properties; this really confused me at first when I was wondering why some of my properties were occasionally not showing up. That's it: with only 34 lines of JSON, we've declared detailed rules about how our mock data should be generated. And now that we've declared how it should look, let's use it to generate mock data in the next clip.
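If you'd rather not grab the snippet from the URL, the schema described above looks roughly like this. It's a trimmed sketch of the file from the demo, so treat the exact faker generator names as illustrative:

    export const schema = {
      type: 'object',
      properties: {
        users: {
          type: 'array',
          minItems: 3,
          maxItems: 5,
          items: {
            type: 'object',
            properties: {
              id: {
                type: 'number',
                unique: true,
                minimum: 1 // mimic a database primary key: no negatives
              },
              firstName: {
                type: 'string',
                faker: 'name.firstName' // realistic fake first names
              },
              lastName: {
                type: 'string',
                faker: 'name.lastName'
              },
              email: {
                type: 'string',
                faker: 'internet.email'
              }
            },
            required: ['id', 'firstName', 'lastName', 'email'] // always populated
          }
        }
      },
      required: ['users']
    };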
Demo: Generating Mock Data
We just wrote the schema that declares the shape of our mock data; now we can use JSON Schema Faker to generate some mock data using this schema. To do that, let's create another file in buildScripts and call it generateMockData. This file will use JSON Schema Faker to generate a mock data set and write it to a file. As you can see, I'm pulling in json-schema-faker, I'm referencing the mock data schema that we just created, and then I'm using fs, which comes with Node, and chalk, to be able to color our output. I begin by calling JSON.stringify on the result of JSON Schema Faker. As you can see, I pass the schema that we just defined to JSON Schema Faker, so effectively JSON Schema Faker is going to look at that schema, generate a bunch of randomized data based on it, and then I convert that into a JSON string using JSON.stringify. So now we have a string of JSON stored on line 14. Then we use Node's built-in fs to write our database file, which I'm going to place in the api folder; we'll call it db.json. If any error occurs, I log it to the console in red using chalk, and if it succeeds, I print "Mock data generated" in green using chalk. Now that this is set up, let's write an npm script that makes all of this easy to call. We can jump over to package.json, and inside let's create a new script called generate-mock-data. I use babel-node to call the generateMockData file that we just created, and of course I use babel-node because I wrote the script in ES6, so this makes sure that Node can parse it. When we run this script, it should write a random data set matching the schema we defined to our api folder. So let's save our changes and see if this works: npm run generate-mock-data. We got our green message, so that's a good sign, and now we can see that db.json was written to the api folder. If we open it up, we can see that random data was generated that honors the shape we just defined: we're getting randomized ids and realistic first names, last names, and email addresses. We can also see that an array of users was generated, as I requested. Great, so we now have a simple, repeatable way of generating random data that suits our specific needs. In the next clip, let's put this to use in a mock API.
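Here's a condensed sketch of the generateMockData script just described. The output path assumes your db.json belongs in an src/api folder, so adjust it to match your project:

    import jsf from 'json-schema-faker';
    import fs from 'fs';
    import chalk from 'chalk';
    import { schema } from './mockDataSchema';

    // Generate randomized data matching our schema, then stringify it
    const json = JSON.stringify(jsf(schema));

    fs.writeFile('./src/api/db.json', json, function (err) {
      if (err) {
        return console.log(chalk.red(err));
      }
      console.log(chalk.green('Mock data generated.'));
    });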
Demo: Serving Mock Data via JSON Server
Now that we have the mock data we need, let's start up JSON Server and tell it to use our mock data. The great thing about JSON Server is that it will parse our JSON file and create a mock API for each top-level object that it finds. So let's create a new npm script to start our mock API server. As you can see, I'm telling it to use the db.json file that we generated and to serve the API on port 3001. Again, pick a different port if 3001 isn't available on your machine, but I'm deliberately choosing a different port than port 3000, which we're using to host our app. So let's open the command line and try it out: npm run start-mockapi. When we do, we can see the list of resources that JSON Server is exposing. In this case it found our top-level object, users, but if we added more top-level objects, it would create an endpoint for each one. Slick. Now let's take this URL and go back to the browser. We'll open up a new tab and paste it in, and there we go, awesome: we can see an array of users being returned as expected. This is the mock data that's sitting in db.json, but now it's getting served up over HTTP on a mock API. Now, I prefer for my mock data to change every time I open the app; this way we're constantly viewing different potential edge cases in the system. Randomized data helps simulate the real world, and it catches issues in development such as edge cases, empty lists, long lists, and long values; it also provides data for testing filtering, sorting, and so on. So let's generate new mock data every time we start the app. To do that, let's go back to package.json and create a script that should run before we start the mock API. I'll place it right before start-mockapi and call it prestart-mockapi. Remember, by convention, because this starts with the word pre but otherwise has a matching name, it will run before start-mockapi, and what we're telling it to do is generate mock data before it runs start-mockapi. Finally, let's update the start script to start the mock API each time we start the app. Simple enough. So now, every time I start the app, it will generate new mock data and start up the mock API that serves that data. And the interesting thing about JSON Server is that if we manipulate the data by making calls to edit or delete records, it will actually update the data file behind the scenes. This means you can even use it for integration tests, or reload the page and see your changes reflected; it does a great job of mimicking a real API with an actual database behind the scenes. Of course, to see this in action, we need to update the application to hit the new mock API instead of the Express API call that we created earlier in this module. So let's assume that the Express server we set up here is our real production API, and that the mock API we just set up is what we want to use during development. What we need is for the application to intelligently point to the proper base URL in each environment. To do that, let's create a file called baseUrl.js in the api folder. This file will look at the hostname to determine whether the application is running in development. If it is, it will point at our mock API, which is hosted on port 3001; if it's in production, it will point at the production API that we set up, served from Express.
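Pulling these pieces together, the npm scripts and the baseUrl.js just described might look like this. The script names and ports follow the demo; the hostname check is a simple sketch of the idea:

    // package.json (scripts excerpt)
    "generate-mock-data": "babel-node buildScripts/generateMockData",
    "prestart-mockapi": "npm run generate-mock-data",
    "start-mockapi": "json-server --watch src/api/db.json --port 3001",

    // src/api/baseUrl.js
    export default function getBaseUrl() {
      // On localhost we're in development, so use the mock API on port 3001.
      // Anywhere else, hit the real production API.
      const inDevelopment = window.location.hostname === 'localhost';
      return inDevelopment ? 'http://localhost:3001/' : '/';
    }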
Alright, let's put this new file to use in our user API file. I'm going to add an import for getBaseUrl here at the top, and then I'll store the result in a constant right here. Of course, we need to use this information in the API call below, so I'll say baseUrl + url; this way, the URL will change based on the environment. Assuming this worked, we should now be able to start our app again and see that it's pointed at our mock API, because we're in development. And now that it's up, if we come over to the browser, we can see that our user data is displaying. A quick note: since we're starting Express and the mock API at the same time, the app may fail on the first load if it tries to call the mock API before it's up. If so, just hit F5 to refresh. We can see that it's different data than we were seeing before, so we know we're hitting our mock API. We can also confirm this by coming over to db.json and seeing that the first record is Kole Kessler, and that's what we're seeing right here, so we know we're getting the data from db.json served up into our app. We can also see, if we reload, that we're making a call to port 3001. Our application is hosted on 3000, our mock API is on 3001, and it's returning the mock data that we just generated. Of course, you'll have different mock data than me, because every time we run the application now, it's going to generate realistic-looking mock data. Now, you'll notice that Delete link? It doesn't work yet, because we haven't wired it up. But this is where things get really interesting: JSON Server supports manipulating data as well, so if we submit a request to delete or add records, it will write to the db.json file, and our changes persist on reload. The data will remain in db.json until we restart the app and regenerate the mock data. So let's wire up these Delete links in the next clip.
Demo: Manipulating Data via JSON Server
In this short module, we began by reviewing how to choose an HTTP library. We saw that http is the low-level option in Node, but you'll probably want to use request with Node due to its streamlined API. In the browser, you can choose old standards like XMLHttpRequest and jQuery, but Fetch is probably what you should reach for, since it's the new standard that streamlines the clunkiness of the old XMLHttpRequest. Just remember to pull in the Fetch polyfill so it will work properly across browsers. Or, if you're looking for a full-featured library, especially one that works in both Node and the browser, then isomorphic-fetch is the most future-friendly approach, since it utilizes the browser's built-in Fetch support if available. However, you can also consider using the xhr library on npm, SuperAgent, or Axios; all of these are excellent options regardless of whether you need to run on both Node and the browser. And we closed out this module by exploring HTTP call mocking. If you're testing, the way to get that done is Nock, and if you need to mock an API for development, then the simplest way to get that done is likely just some hard-coded JSON. If you have a small app, that's perhaps all you'll need, but if you want to simulate interactivity, then a custom web server approach involving JSON Schema Faker and JSON Server likely makes more sense. We saw that JSON Schema Faker is quite powerful and contains enough built-in intelligence to create realistic fake data for a wide variety of scenarios. And of course, if you want to go fully custom, you can configure a development web server of your choice to simulate a real API; this is certainly the most work, but it also offers the most complete flexibility. Now that we have HTTP requests taken care of, we have a powerful foundation for building real applications. So in the next module, let's put all this to use: we'll discuss key principles for project structure, we'll learn why demo apps are so important, and we'll build a quick demo app that helps convey best practices. And in the final module, we'll wrap up the course by creating an automated production build.
Why a Demo App?
Tip 1: JS Belongs in a .js File
Tip 2: Consider Organizing by Feature
Time for my second tip: on larger, more complex projects, consider organizing by feature instead of by file type. There are two popular ways to organize your code: by file type or by feature. When you organize by file type, all files that serve the same purpose are placed together. This is a popular approach when working with MVC frameworks, which commonly expect you to use model, view, and controller folders to organize your application. However, the downside of this approach is that you end up bouncing around the file system to open and work with related files. So I recommend organizing by feature on larger projects; the larger the project, the more organizing by feature pays off, because you can go directly to the feature that you're working on and find all the related files sitting inside.
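For example, a feature-based layout might look something like this; the feature and file names are purely illustrative:

    src/
      users/
        userApi.js
        userListPage.js
        userListPage.test.js
        userListPage.css
      orders/
        orderApi.js
        orderHistoryPage.js
        orderHistoryPage.test.js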
Tip 3: Extract Logic to POJOs
Of course, our application isn't very useful until we actually prepare it for production. So in this module, let's create an automated production build. We'll cover a variety of considerations, including minification to speed page loads, with source maps generated to support debugging in production; hey, let's be honest, you and I both know this happens. We'll set up dynamic HTML handling for production-specific concerns, and cache busting to ensure that users receive the latest version of our code upon deployment. We'll set up bundle splitting so that users don't have to download the entire application when just part of it changes. And finally, we'll set up error logging so that we know when bugs sneak their way into production. Now, this sounds like a lot of work, but as you'll see, it moves fast. Alright, let's dig in.
Minification and Sourcemaps
Demo: Production Webpack Configuration with Minification
Now let's put webpack to work bundling and minifying our app code for production. To begin configuring our application for production, let's make a copy of our development webpack config and call it webpack.config.prod.js. And now let's tweak some settings for production. First, we'll change the devtool setting. Remember that this setting specifies how source maps should be generated; we explored source maps earlier in the bundling module. Let's change the devtool setting to source-map, since that's what's recommended for production. It's a little slower to build, but it provides the highest-quality sourcemap experience. This will assure that we can still see our original source code in the browser, even though it's been minified, transpiled, and bundled. That's the beauty of source maps. And we're going to write our production build to a folder called dist, so let's change the output path. When building the app for production, we'll write physical files to the dist folder; this is a popular convention, and it stands for distribution. Next, let's set up minification. We want to minify our code for production, so let's add our first plugin to the array of plugins: UglifyJsPlugin. I like to put a comment above each of my plugin references, and as you can see, we're calling webpack.optimize.UglifyJsPlugin. Now that we're calling some specific webpack features, we need to add the webpack import up here. And before we minify, let's use another handy webpack plugin that eliminates any duplicate packages when generating the bundle. This plugin is called DedupePlugin; it will look through all the files that we're bundling and make sure that no duplicates are bundled. We'll enhance the webpack config with additional features throughout this module, but this is a good start. Let's now shift our focus to writing a script that will run our production webpack build. We'll go over to buildScripts and create a new file called build.js, and here's all it takes to run our webpack build for production. We're importing webpack, the production config that we just defined, and chalk so that we can color our output. Then we call webpack, passing it that config. We handle any errors that might occur; otherwise, we return zero, which signifies success. So this is pretty simple, but in the real world you'll likely want to add a little more. Let's enhance this script a bit. First, let's go up here above the call to webpack and declare that we're running Node in production mode. Although it's not strictly necessary for our setup, I'm adding this line because it's important if you create a dev-specific configuration for Babel in your .babelrc file. See, Babel, and potentially other libraries you may use, look for this environment variable to determine how they're built. And before the production build starts running, I like to output a message to the console so that we can see it has started. Since we're doing minification, as you'll see, the production build takes quite a few seconds to run, so it's nice to get some notification that it's doing its job. I also like to display some stats on the command line. This looks like a lot of code, and it isn't required, but it ensures that warnings, errors, and stats are displayed to the console, and at the bottom we display a success message if everything worked. So this part displays any errors that occur, and this part displays warnings.
I display the stats right here. Finally, we just output a message so that we know that our production build has succeeded. Great, so there's quite a bit of code here but it's really conceptually simple. This just runs our production webpack config. I've added some extra goodness here, just to improve our experience. Let's save our changes and in the next clip let's try this out by setting up an automated production build.
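Here's a condensed sketch of the two files described above, written in webpack 1 style since that's the version the course uses; plugin names like DedupePlugin differ in later webpack versions:

    // webpack.config.prod.js (excerpt)
    import webpack from 'webpack';
    import path from 'path';

    export default {
      devtool: 'source-map', // slower builds, highest-quality sourcemaps
      entry: path.resolve(__dirname, 'src/index'),
      output: {
        path: path.resolve(__dirname, 'dist'), // write physical files to /dist
        publicPath: '/',
        filename: 'bundle.js'
      },
      plugins: [
        // Eliminate duplicate packages when generating the bundle
        new webpack.optimize.DedupePlugin(),
        // Minify JS
        new webpack.optimize.UglifyJsPlugin()
      ],
      module: { /* same loaders as the dev config */ }
    };

    // buildScripts/build.js (skeleton)
    import webpack from 'webpack';
    import chalk from 'chalk';
    import config from '../webpack.config.prod';

    process.env.NODE_ENV = 'production'; // Babel and friends read this

    console.log(chalk.blue('Generating minified bundle for production. This will take a moment...'));

    webpack(config).run((error, stats) => {
      if (error) { // fatal error, e.g. misconfiguration
        console.log(chalk.red(error));
        return 1;
      }
      if (stats.hasErrors()) {
        stats.toJson().errors.forEach(err => console.log(chalk.red(err)));
      }
      if (stats.hasWarnings()) {
        stats.toJson().warnings.forEach(warn => console.log(chalk.yellow(warn)));
      }
      console.log(`Webpack stats: ${stats}`);
      console.log(chalk.green('Your app has been built for production and written to /dist!'));
      return 0;
    });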
Demo: Configure Local /dist Server
This isn't required, but I like to run the final production version of the app on my local machine, just so I can make sure everything looks good. This can be really helpful when you need to debug an issue with a production build. So let's create a file called distServer in buildScripts. We already have a srcServer that serves our src folder; now we'll have a distServer that serves our dist folder. Let's just copy the contents of srcServer and paste them into distServer, because we only need to make a few minor changes. First, let's remove the webpack calls, because we're no longer going to be interacting with webpack in our distServer; we're going to be serving up just the static built files. So we'll remove the two webpack-related imports at the top, the call to the compiler, and the calls that configure webpack dev middleware. Our file gets simpler. Then, the other thing we need to do is add support to Express for serving static files. We'll add a line saying app.use and tell it to serve static files from the dist directory. And for production, we'll serve index.html from the dist folder rather than the src folder. One final tweak that I like to make to our dist server is enabling gzip compression. Your production web server should be using gzip, and if it's not, pause this video and go turn it on. Anyway, I like to enable gzip so I can see the final gzipped file sizes when I'm serving the app locally; this gives me a clear understanding of the file sizes that will be sent over the wire to the user. To do this, let's import compression, and then down here, above our call to express.static, we'll add a line to use compression. Make sure that you add the parentheses so it's invoked, and with those two lines of code we've now enabled gzip compression in Express. And that's all we need to do to configure our dist server for serving up our production app locally. Again, this is not for use in a real production scenario; I'm only creating it so I can serve the app on my local machine and confirm that the production build works. It's a separate decision to move all these files up to some host, perhaps a cloud provider. And also, yes, I'm leaving in the hard-coded data for users; again, just pretend that this is hitting real data in production. And speaking of API calls, we also need to decide which API we'd like to use when we're checking out our production build locally, so let's work on that next.
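Here's a condensed sketch of the distServer just described. The hard-coded /users response stands in for a real API, as noted above:

    // buildScripts/distServer.js
    import express from 'express';
    import path from 'path';
    import open from 'open';
    import compression from 'compression';

    const port = 3000;
    const app = express();

    app.use(compression());          // gzip, so local file sizes match the wire
    app.use(express.static('dist')); // serve the static production build

    app.get('/', function (req, res) {
      res.sendFile(path.join(__dirname, '../dist/index.html'));
    });

    // Hard-coded data, standing in for a real production API
    app.get('/users', function (req, res) {
      res.json([
        { id: 1, firstName: 'Bob', lastName: 'Smith', email: 'bob@gmail.com' }
      ]);
    });

    app.listen(port, function (err) {
      if (err) {
        console.log(err);
      } else {
        open('http://localhost:' + port);
      }
    });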
Demo: Toggle Mock API
Demo: Production Build npm Scripts
Dynamic HTML Generation
When you bundle your code, you obviously need to reference it, and if you're doing web development, then of course you'll end up referencing your bundle in an HTML file. But what if you want to run slightly different HTML in production than in development? So why would you want to manipulate HTML for production? There are many potential reasons. If your bundler is generating a physical file for you, wouldn't it be nice to automatically reference the bundle in your HTML file? And as you'll see in a moment, we'd like to generate dynamic bundle names so that we can set far-future Expires headers in production in order to save HTTP requests; when bundle names are dynamic, we need a way to reference the dynamic bundle name in our HTML. And what if we want to inject some scripts or resources only in production? We'll see an example of this in a moment when we discuss error logging. Finally, maybe we'd just like to save a little bandwidth by minifying our HTML. The point is, there are a variety of reasons to manipulate HTML for production. Now, when you're generating a bundle, a common question is how to set up your index.html file to reference the bundle. The simplest approach is a hard-coded reference to bundle.js. This has been working great for us so far, but there are other, more powerful approaches to consider for handling all the issues we just discussed. I see three specific options for handling your HTML. If you have a simple setup, you might just want to hard-code a reference to bundle.js, as we've done so far in the course; this is the simplest approach. However, maybe you want to dynamically add some other code to the page for production. That's when you want to dynamically generate your HTML file. One obvious way to do so is via Node: you can write a Node script that copies your HTML file and uses regular expressions to manipulate sections or replace placeholders. Or, if you choose webpack as your bundler, there's a powerful approach that I prefer, called html-webpack-plugin. This plugin simplifies the creation of your application's HTML file. It can create the traditional HTML boilerplate for you, or you can declare a template, which is my preferred approach. This plugin is especially useful if you're doing cache busting in webpack by generating a different file name each time your assets change. We'll set that up in a moment.
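As a preview of where we're headed, a minimal html-webpack-plugin setup in the production config might look like this; the minify options shown are illustrative:

    // webpack.config.prod.js (excerpt)
    import HtmlWebpackPlugin from 'html-webpack-plugin';

    export default {
      // ...entry, output, and loaders as before
      plugins: [
        new HtmlWebpackPlugin({
          template: 'src/index.html', // use our existing HTML file as the template
          inject: true,               // inject a script tag for the generated bundle
          minify: {
            removeComments: true,
            collapseWhitespace: true
          }
        })
      ]
    };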
Demo: Dynamic HTML Generation
Demo: Bundle Splitting
Demo: Cache Busting
Demo: Extract and Minify CSS
Demo: Error Logging
Demo: HTML Templates via EmbeddedJS
We now have TrackJS logging errors, but it would be nice if it only ran in production, since logging errors in our development environment isn't useful and would just add noise to our error-logging reports. I want to use this opportunity to show you a way to add conditional logic to your HTML, so that this code is only added to index.html in production. Let's use the templating engine support that's built into html-webpack-plugin to add conditionals to our template. html-webpack-plugin supports a number of templating languages out of the box, including Jade, EJS, Underscore, Handlebars, and html-loader, and if you don't specify a loader, it defaults to Embedded JS, or EJS for short. So let's just use EJS, since it's the default and it's easy to use. You can read about the EJS syntax at embeddedjs.com, and there's a handy live demo on that page, so you can play around with the syntax and learn from rapid feedback that displays in this box. But for our purposes, we just need to declare a simple conditional: we want to inject the TrackJS code, but only during our production build. Let's do that with a little bit of EJS. First, let's store the TrackJS token that we were just assigned on the website in our webpack config; I'm going to add it right here, below the call to inject. Any properties that you define here within html-webpack-plugin will be available within our index.html file; you'll see how to call this as we shift our focus over to index.html. The token that I'm defining here is the token you should have received when you set up TrackJS. Now let's shift over to index.html. In here, we're going to use EJS to declare that this section should only be rendered when we have a TrackJS token defined in the webpack config. So let's say if htmlWebpackPlugin.options.trackJSToken, and then close our if statement with the closing curly braces right down here. Now we can reference this variable instead of the actual token right here; of course, to reference it as a variable, we need to wrap it in the angle-bracket-percent syntax. I'll close it out right here, and be sure to wrap this in single quotes. I'll close the sidebar just so we can see this better. So now, if a TrackJS token is defined within our webpack config, this section of code will be rendered into our index.html; otherwise, it won't exist. And since we've only defined our TrackJS token within our production config, this section will only render for production. This way, our errors are only tracked in production. Now of course, once you've added this code, your HTML file arguably isn't an HTML file anymore, it's now an EJS file. So you could consider changing the file extension to .ejs, but I prefer to keep the .html extension so that editors will properly apply code coloring to the file contents. Let's go ahead and add a useful comment at the top of our HTML file that explains what's going on; rather than changing the extension to .ejs, I figure this comment is sufficient. And yeah, it's a big comment, but who cares, it will be stripped out by the build process anyway. Okay, with that set up, we should be able to run npm run build and make sure this is getting injected as we expected. And if we look in the browser and view the page source, we can see that our call to TrackJS is here, and the token is getting injected into our page as expected. And with this, I think it's safe to say that we're at a point where we can confidently talk about shipping.
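Here's a sketch of the two pieces just described: the token passed through the webpack config, and the EJS conditional in index.html. The token value is a placeholder, and the script body stands in for the snippet you copy from trackjs.com:

    // webpack.config.prod.js (plugins excerpt)
    new HtmlWebpackPlugin({
      template: 'src/index.html',
      inject: true,
      trackJSToken: 'INSERT-YOUR-TOKEN-HERE' // any property here is visible to the template
    })

    <!-- src/index.html -->
    <% if (htmlWebpackPlugin.options.trackJSToken) { %>
      <!-- The TrackJS snippet from trackjs.com goes here, referencing the token as a variable: -->
      <script>window._trackJs = { token: '<%= htmlWebpackPlugin.options.trackJSToken %>' };</script>
    <% } %>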
Let's wrap up this module with a short summary.
Congratulations on making it to the final module; we're finally ready to discuss the last missing piece: production deployment. We'll begin this final module by discussing the merits of separating the user interface and your application's API into completely separate projects. We'll briefly discuss the wide variety of cloud hosting providers. Then we'll create an automated deployment for both the UI and the API, using popular cloud service providers. We'll wrap up the course by discussing approaches for keeping existing projects updated as your starter kit is enhanced over time, and I'll quickly provide some tips for further inspiration as you start designing your own development environment. I'll close out the course with a short challenge. Alright, let's dig in and wrap this up.
Separating the UI and API
Demo: Automated API Deploy via Heroku
Alright, back to the editor; let's set up an automated deployment of our API to Heroku. We're going to create a completely separate project for handling this, to show how we can host and manage our UI and API separately. As I just discussed, there are many benefits to completely separating your UI and API projects. In our example app, we created an API endpoint hosted via Express, so we need to select a Node-friendly host for the API. Let's host our API on Heroku. Heroku offers a really slick setup for automated deployments that integrates with GitHub, and it offers a free option that's perfect for showcasing an automated deployment. Heroku's docs already do an excellent job of walking you through setting up an account and creating a new Node.js project: go to their docs at devcenter.heroku.com, click on Node.js, and then click on Getting Started on Heroku with Node.js. So, if you want to follow along with me, please pause this video and go through the Introduction and Set up steps on the Node.js Getting Started page, then come back here to continue. OK, for the rest of this clip, I'm going to assume that you've signed up for Heroku and walked through the Introduction and Set up steps for Node.js. On step three of their setup process, which is called Prepare the app, Heroku provides a link to an example app that you can clone. However, instead of using this, I created a separate starter kit for you that will work well with Heroku. This repository contains a slightly modified version of Heroku's starter kit that includes our API, so it should help you get started quickly. If you want to use this repository to follow along, just click Fork up here to fork the repository. This will make sure that you have a completely separate copy, so that you have the proper rights to work with it in Heroku. Now, I already have this repository pulled down on my local machine, so let's jump back to VS Code and walk through it. As usual, be sure to run npm install after forking the API repo. There are only five files in this repository, so let's review each. First, package.json contains only two dependencies, Express and CORS. We'll use CORS to enable cross-origin calls, since we'll be calling our Heroku-based API from a different domain. Make sure that the repository field down here points to your repository if you create your own. And there's only one npm script necessary, which starts the app. index.js should look quite familiar to you; it's a slightly modified version of the dist server that we created in the previous module. To keep things simple, I'm using the CommonJS style up here on line one to require Express, since that's the syntax that Node understands. I'm also referencing the cors package, which we're enabling down here, to ensure that we can call the Heroku-based API from our UI, which will be hosted on a different URL. To clarify, this is necessary because cross-origin resource sharing must be enabled to make Ajax calls to a different domain. We could of course transpile and bundle our code in this project, but I'm keeping it as simple as possible so that you can see how to work with Heroku. If you diff this file with the dist server that we set up in the previous module, you'll also notice that we were calling open to open a browser there; this version just starts up Express and displays a message to the console, so to try it out, you'll need to open the URL in your browser manually.
I also left out gzip compression, again, just to keep this as simple as possible. OK, this leaves us with two new files that help us configure our app for Heroku. The first is app.json, which describes our app to Heroku. There are many potential properties that we can define here, but we're going to keep our app.json simple; we'll just define the name, a description, the repository where our project can be found, and a few keywords. The other new file is the Procfile, which declares the command that Heroku should run. That's why there's just one line here: we're telling Heroku to run Node on our index.js file. And this is all that Heroku needs to host our Node- and Express-based API. Now, of course, I'm deliberately leaving out all the complexities of testing, transpiling, linting, bundling, and more in this project, so that you can clearly see what it takes to create an automated deployment to Heroku, but feel free to start adding those in once you're comfortable with hosting on Heroku. So now we should be ready to complete an automated deploy to Heroku. I've already signed up for Heroku and installed the Heroku CLI, so let's open up the terminal and type heroku login. At this point you'll be prompted for your email and your Heroku password, and after entering your credentials you should see your email listed in blue, which shows that you're logged in successfully. Note that you may receive some warnings about file permissions, so consider tightening file permissions for security if you like. Now it's time to configure our app to work with Heroku, so we can type heroku create; this will prepare Heroku to receive our app. This command returns a URL, and if we load it, we can see a welcome message. Heroku generates a random name for your app, or you can pass a parameter to specify your own name. Now that we've created our app, we need to configure a git remote that points to our Heroku application. So let's go back to the command line and say heroku git:remote -a, and then pass it the name of the app we were assigned right up here. And now that we've set the git remote, we should be ready to publish the app: git push heroku master. We can see the deployment output as it builds the source and pushes our app out to Heroku. It displays the random URL where our app is now hosted, so you'll have a different URL than me if you're following along. Of course, for a real app you'll want to specify a domain name that you've registered. But now we should be able to take this URL, go over to the browser, and when I load it up, there we go, we can see our Hello World! And if I go to /users, I can see the JSON coming back for our users, so we now have our API hosted in production. Any time we make changes to our API, we'll just commit our changes and then run git push heroku master to push them up; Heroku will take the code and deploy it to our URL. Slick. And now that we have our API running in production via Heroku, let's jump over to our UI project and update it so that it will hit our Heroku-hosted API. To do that, let's open up baseUrl.js. Note that right now we're either using the mock API, which is hosted on port 3001, or we're assuming that we're hosting Express locally for production. Now, instead, we have a new URL to use for production, so I'll just paste in the Heroku URL that I was assigned in the previous step.
Make sure that you include the trailing slash on the end. Now we can also open up distServer.js and remove this section, because we're going to be hitting Heroku instead of the local API when we do our production build. This way our production build is more realistic: we know that our production API will be hosted on Heroku, and we're going to host our UI on a separate service in a moment. Now, when we do a production build of our UI, it will hit our production API hosted on Heroku. In the next clip, let's set up an automated deployment for the user interface, so that we can see all of this work together.
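For reference, the two Heroku-specific pieces boil down to something like this; the index.js is a trimmed sketch of the API described above:

    # Procfile - the one-line command Heroku runs
    web: node index.js

    // index.js - minimal Express + CORS API (CommonJS so Node runs it directly)
    const express = require('express');
    const cors = require('cors');

    const app = express();
    app.use(cors()); // allow the UI, hosted on a different domain, to call this API

    app.get('/', (req, res) => res.send('Hello World!'));

    app.get('/users', (req, res) => {
      res.json([
        { id: 1, firstName: 'Bob', lastName: 'Smith', email: 'bob@gmail.com' }
      ]);
    });

    const port = process.env.PORT || 3000; // Heroku assigns the port via an env var
    app.listen(port, () => console.log('Listening on port ' + port));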
Demo: Automated UI Deploy via Surge
Okay, now it's time to code our automated UI deployment. Here's our goal for the process that we're going to set up for the front-end; it's a three-step process to get code into production. First, we run npm start to do our development. Once we're done coding, we type npm run build to build the application in production mode; this opens the application's production build on our local machine so that we can verify that everything looks good. If we're happy, then it's time to deploy to production, so we type npm run deploy, which automatically pushes the final built application up to our production host. Of course, we already set up steps one and two in the previous modules, so now it's time to focus on the final step. Alright, it's time for our final coding session in this course, but this one is critical: it takes our front-end public. To do so, we'll host our static front-end on Surge. Let's make it happen. I'm a big fan of Surge because it's a low-friction way to deploy a static front-end, and for all the reasons that I mentioned earlier, I strive to build static front-ends. Getting started with Surge couldn't be easier. Typically you'd install it globally using npm, but we don't need to, because we already installed it at the beginning of the course, since it's listed in our package.json right down here. We also don't need to install it globally since we're going to run it via an npm script; remember, the node_modules/.bin folder is added to the path automatically for npm scripts. So, to set up Surge, I'm going to add one whopping line of code here in package.json. Yes, it's seriously that easy, and that's why I love Surge. Now, first of course, we need to call npm run build so that we have something to push out to production. And when our app starts up in production mode, we can see the data coming back as expected. If we inspect the Network tab and refresh, we can see that we're making a call to Heroku, as expected. With this set up, we can now hit Ctrl+C and type npm run deploy. We can see Surge gets called; it assigns a random domain, and when we hit Enter, it says success, and now we know that our app is up in production at this random URL. So if I open a new tab and load it, there we go: we can see our app loading in production and using the Heroku API for the data. Success! Of course, the Delete link won't work, since we never added that functionality to our Heroku API, but if you want a challenge, you could certainly add a database behind the scenes to support adds and deletes. If we open the browser tools and go to the Network tab, we can see that Surge serves our assets using gzip by default. And of course, you'll likely want to set up a custom domain; Surge will let you use your own domain for free, or if you just want to request a subdomain, you can specify it via the command line. There are quite a few nice tweaks that you can make, but you get the idea. We should strive to build static front-ends, and if you do, Surge is awesome. Of course, your starter kit is likely to change over time, so in the next clip, let's talk about how to keep existing projects updated as our starter kit changes.
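For reference, the one whopping line looks like this; the pinned subdomain in the second form is just an example, not one from the course:

    // package.json (scripts excerpt)
    "deploy": "surge ./dist"

    // or, pinned to a subdomain of your choosing:
    "deploy": "surge ./dist my-starter-kit-demo.surge.sh"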
Starter Kit Update Approaches
Now, once you've created your team's starter kit, how do you keep existing projects updated over time as you enhance your development environment down the road? Let's review a few approaches. Let me first run through a common scenario to help clarify the problem that we're discussing. Imagine that your team watches this course and creates a development environment that works great. In the first quarter, you launch your first project using your new starter kit. Then, in the second quarter, you launch another project successfully, using the same starter kit. In quarter three, you learn some lessons, upgrade some libraries, and tweak your starter kit with various enhancements and bug fixes. The question is, how do you easily get these enhancements into the projects that you launched earlier in the year? Of course, one way is simply to update these previous projects manually, and that works, but we're developers, so let's talk about some more automated approaches. Here are three more automated ways to handle updates to starter kits over time: Yeoman, GitHub, and npm. In the next few clips, let's discuss each of these approaches in more detail.
Option 1: Yeoman
Yeoman is a handy scaffolding tool for starting new projects, so once you're happy with your development environment, you can create a Yeoman generator. This makes it easy for people to create new projects by typing yo, and then the name of your generator. Yeoman hosts a long list of generators that will give you a head start on some of what we've covered in this course. Few will cover all of the features that we just implemented, but it's another great place to check for inspiration, or a good starting point for your framework or library. Assuming that you've created a Yeoman generator for your development environment, it's a three-step process to update an existing project later. First, be sure that you've committed all your code to your source control system. Then, rerun the generator on your existing project; Yeoman will prompt you for each file that's being overwritten. Finally, you can diff the files and manually resolve any conflicts that occur. Of course, there's much more to know about Yeoman, so if you want to learn more, check out the Yeoman Fundamentals course by Steve Michelotti.
Option 2: GitHub
Another approach for updating existing projects is to use GitHub. With this process, you begin by hosting your starter kit on GitHub, and then you fork it whenever you start a new project. This way, you can pull changes from master as the starter kit is enhanced over time.
Option 3: npm
Another approach is to wrap your starter kit in an npm package. With this approach, you abstract away the configuration and build scripts behind an npm package, and then you update the npm package to receive the latest changes. This approach has the advantage of abstracting complexity away, and it's also the simplest to update, since you don't need to resolve conflicts like in the other two approaches. However, this advantage also has an obvious downside: you're restricted from tweaking anything inside the npm package for a given project. So this approach is great if you want to programmatically enforce that all projects use the exact same config, but some may find it overly restrictive. Let's talk about this approach in a little more detail, because it's becoming increasingly popular. Depending on the complexity of your starter kit, updating an existing project manually isn't much work, so you might consider this hybrid approach. Let's walk through the files that are in our demo starter kit. The most significant piece is buildScripts; this is the easiest thing to centralize, since you can move all of buildScripts into an npm package. The other big piece is package.json. There are two sources of complexity here: the scripts and the dependencies. For the scripts, you can streamline your npm scripts to just call buildScripts instead. Since you can put all your buildScripts in an npm package, your scripts in package.json become nothing but a list of calls into those separate scripts. This effectively centralizes your buildScripts, allowing for bug fixes and tweaks in the future. Webpack's config is just a chunk of JavaScript, so it need not be stored in a webpack.config file; instead, you can move it into the buildScripts npm package as well, so that it's centralized. The ESLint configuration can be centralized by creating your own preset. This way, each project can define its own .eslintrc but use the preset that's stored in npm as the baseline. So this covers most of the moving parts in our starter kit; what's left? Well, the approaches I just outlined centralize all of those items using npm. That's a significant win, since it's the vast majority of the starter kit's code. So what files would we still have to update manually? Well, .editorconfig, which is unlikely to change much over time; .babelrc, which contains very little code; and the continuous integration server configuration, which is also unlikely to change over time. The final piece that we didn't centralize is the package references in package.json, but these are easy to update with existing npm tooling, so again, not necessarily that big of a deal. I'm a fan of this hybrid approach: it provides most of the benefits of centralization without the cost of creating and maintaining a Yeoman generator, and it gives us a lot of flexibility as well, since we can decide what's worth centralizing and what isn't. So that's three different ways to keep your projects updated over time. Let's close out the course by discussing some sources of inspiration as you move forward on creating your own starter kit.
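As a sketch of how slim a project can get with this hybrid approach, imagine a hypothetical shared build-scripts package and ESLint preset; neither name below is real, they're just stand-ins:

    // package.json (scripts excerpt) - each script delegates to the shared package
    "scripts": {
      "start": "acme-build-scripts start",
      "test": "acme-build-scripts test",
      "build": "acme-build-scripts build"
    }

    // .eslintrc - extend the centralized preset published to npm
    {
      "extends": "eslint-config-acme"
    }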
And that's a wrap! In this final module, we began by discussing why separating the UI and the API can be useful, including the ability to deploy the UI and the API independently; separation of concerns, which allows separate teams to manage each; the ability to select cheap hosts that specialize in static assets; and the flexibility to select whatever API technology you like. We reviewed a list of potential cloud hosting providers, but ultimately created an automated deployment using Heroku to host the API and Surge to host the UI. We discussed approaches for keeping your projects updated with bug fixes and enhancements as your starter kit improves over time, including Yeoman, GitHub, and npm. I quickly mentioned a few resources for inspiration, including some terms that you can use to help you search for starter kits specific to your preferred technology stack. And I wrapped up with a very simple challenge: set up a meeting with your team to discuss the path forward. Please share a link to your starter kit in the course discussion; I'm excited to see what you create. Thanks so much for watching.
Cory is an independent consultant with over 15 years of experience in software development. He is the principal consultant at reactjsconsulting.com and a Microsoft MVP.
Released10 Nov 2016