Node Application Patterns
by Rob Conery
Design and development patterns for building applications in Node.js.
Introduction
Welcome to Node Application Patterns here at Pluralsight. In this production, we'll go beyond the fun demo and get into real concerns that come up when building modern web applications with Node.js. For many developers, knowing what to put where and when is only a first step. Many want to know why. That's what I hope to do in this production, to help you see interesting ways to build your next Node application. We'll dabble with some theory, but then we'll put that theory into practice. Let's tackle this right up front. Why should you care about Node? Aside from being new and shiny, there are plenty of compelling reasons that Node makes sense as a development platform. Let's have a look at each one. Node is incredibly simple and easy to use. Working with the platform requires a quick install that takes seconds. Building on top of it is a joy, which I hope to show you in this course. Upgrading your installation is usually a matter of reinstalling or running your favorite package manager like Apt or Homebrew. Node's package manager, NPM, makes working with Node incredibly easy. In fact, Node's packages, or modules, are the main reason I like working with Node and are at the core of what we will be doing today. I like NPM so much it's spoiled me. Working in other platforms, such as Ruby on Rails or ASP.NET MVC, just feels clunky and disorganized by comparison. The Node ecosystem is on fire. You'll find all kinds of amazingly helpful tools and code online that solve every kind of problem, from hashing and encryption to date formatting. Node is the fastest growing platform right now, and there's no shortage of help out there. Node is scalable and fast. It's built on top of Google's V8 engine and uses an asynchronous style of execution, called the Event Loop. That means your CPU is free to focus on many tasks at once. So rather than take up an entire thread from your thread pool waiting for the execution of a long-running query, Node flexes JavaScript's asynchronous nature and goes off to do other things with that thread, like handle more incoming requests. Node is also single threaded, which means it won't have the deadlocks commonly associated with multithreaded programs on other platforms like Java or .NET. First class deployment platforms support Node out of the box. Azure, AWS, and Heroku all support Node deployments directly. Amazon recently released an Elastic Beanstalk implementation for Node, and Azure Websites supports Node. Same with Heroku. This means you can autoscale your deployments, as well as set up load balancing and redundancy. We'll take a look at deployment options more in this course. Finally, you get to work in a language that you already know, JavaScript. Now, I don't know if that's a plus or a minus. But, if you've gotten past the initial hurdles with JavaScript, you have probably come to like working with it. Node allows you to grow your JS skills and gives you an environment where it's easy to flex the better parts of JavaScript. So what are we going to do today? Well, I want to build a membership system based on Devise, which is a Ruby on Rails gem that allows you to build out a user membership system really easily. And I find it's the one thing that's missing from Node and Express. I built a system just like this for ASP.NET MVC for our Pragmatic BDD screencast, so now I'm going to build one just for Node. I've been down in the weeds with Node in many different ways over the last two years, and I'd like to share what I've learned.
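As a quick aside, here's a minimal sketch of the non-blocking style described above, using Node's built-in fs module; the file name is just an illustration.

```javascript
var fs = require("fs");

// Kick off a (potentially slow) read without blocking the thread.
fs.readFile("big-report.csv", "utf8", function (err, contents) {
  if (err) { return console.error(err); }
  console.log("File finished loading:", contents.length, "characters");
});

// Execution continues immediately; Node is free to handle other work
// (like more incoming requests) while the read completes.
console.log("Read requested, moving on...");
```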
We'll do this by understanding some concepts first and then putting those concepts to the test by writing up some code. When you finish this course, you'll be able to create a modular Node application, which you can share with others, understand effective patterns for working with JavaScript and Node itself, effectively test your work using BDD, and you'll become familiar with the basic tools and practices that Node developers use every day. To get the most out of this course, you should understand JavaScript enough to not be afraid of it. I'll touch on a few basic things as we go along here, but I won't get too deep into language constructs or the more basic parts of JavaScript. It's also helpful if you have a basic understanding of Node. I'll go into the concepts behind Node a lot, so if you've never used it, you should be okay. If you get lost at all, just pause the video, hit Google, and then come on back. You should have a firm grip on the web and how it works, as well as newer frameworks out there like ASP.NET MVC and Ruby on Rails. We'll be borrowing a few concepts along the way, and you should understand the references I'll be making. Finally, have an open mind. Node is likely to be a lot different than what you're used to, and your cheese, as they say, will likely be moved. Half of learning is letting go of preconceptions and bias. So if you've only done Rails or ASP.NET, it's time to learn something new. And with that, let's get started by installing some tools and setting up our environment.
WebStorm
One question I get asked quite often when it comes to Node development is what IDE do you use, and, well, to be honest with you, it varies. There are a lot of choices when it comes to working outside the Microsoft universe, and you can use just a plain old text editor with your shell and have a lot of functionality, or you can use a full-blown IDE. The neat thing is that the folks at JetBrains, the people who make ReSharper, have gotten into making dedicated IDEs for Python, PHP, JavaScript and Node, and Ruby on Rails, or just Ruby in general, I should say. Anyway, WebStorm is their IDE for JavaScript, and also for the web in general. And it works really well with Node. It's $49 for a personal license, and it is great. I bought it just because, you know, I'm working on Node; I want to have some functionality. Anyway, I want to show you a few features that I really, really like. But before I do, I did want to say it is JetBrains, so it is a smart, smart IDE, and there's a lot of extensibility that goes into it. It integrates with source control, and it also allows you to have these live templates and document templates that are so very helpful. Anyway, you can download a demo, and I'm not getting any money for this. I just really like it, so I suggest you do if you want to play around. It runs on Windows, Linux, and Mac. If you go over to JetBrains TV, it's got all these great videos from the team; select WebStorm and cruise on through and see some of the things that WebStorm can do. I love the CSS integration. It's pretty neat. And, yeah, so let's go take a look at what I dig about WebStorm in general. Here is just the demo site that I created, and I have a few modules set up for it. Notice inside of here, this is an Express web app, and if you don't know what you're looking at, we'll get to this later on, but this is the configuration for the Express web app. One thing I like about WebStorm is that it has IntelliSense, or code completion, and it's really extensible. For instance, here it notes that I'm talking about app, which is great, but if I hit ".", which for folks that use Visual Studio is a cue for "go find the methods and properties on app," well, it's not really working. It's kind of just giving me its best guess of what it thinks I want. It gives me some shortcuts for other things, but, in general, it's not really IntelliSense. But that's okay. What it allows me to do is say this file is actually using this library, and right down here, I can specify Use JavaScript Library. Inside of here, it says, well, I see you're using Node modules, so I'm going to allow you IntelliSense on all the modules that you've installed. It sees that I have Ember, jQuery, and Mocha, the test framework, and asks, are you using a version of Node? Yes, I am, actually. So this is going to get checked and popped in here, but it still doesn't help me, because app is Express, the Express framework. So I need to have IntelliSense for that, and I can configure that. Let's do it. I'll open up my preferences, and you can cruise through here and go to JavaScript and Libraries. Inside here, these are all the things that are configured for IntelliSense, if you will, and I don't have anything for Express, so I can hit Download. And then it says, well, here are the official libraries that we offer for IntelliSense: Modernizr, Knockout, which is cool, MooTools, Prototype, script.aculo.us, on and on, and Express is not in here.
Well, if I drop this open, here are community stubs, or community libraries, that folks have made. And there is a ton of stuff in here for all kinds of frameworks out there, including Bootstrap, which is kind of cool. But if I come on down, or if I just type in ex, we are here at Express. Download and install, and it's going to grab it, boom, pops it right in there. And hit OK. Now, notice that all the squigglies went away and things look a little bit different. I'll type in app., kaboom, now I have all kinds of interesting IntelliSense in here, request, response, mime, version, and if I say request, and hit a "." on that, it still gives me full IntelliSense. One other neat thing about WebStorm is the ability to have plugins that do all kinds of things. So, if I type in plugins in the preferences, you can see all of the plugins that have been installed, and these are the ones that I have active in here. Man, it does a lot. "Copy" on steroids is one that I installed myself. This allows me to select some text and right-click, and when I hit copy, it copies it as RTF in memory. And so I can drop it onto slides and have coloring, which is groovy. The Angular plugin is installed, so if I use Angular, I've got full IntelliSense and code completion, which is handy. CoffeeScript, CSS support, of course, Cucumber.js. It's the weirdest thing. Node, which is, again, very handy. So, all kinds of plugins here, but if I want to install another one, I have a couple of choices. I can go install from JetBrains, and I get a number of different plugins straight from the JetBrains folks, including Vim support. So if you like Vim, you can drop it right on top of the IDE, which I think is pretty cool. TFS integration, right on. If you're a TFS person, you can integrate WebStorm right with it. Okay, those are the JetBrains plugins, but there are a whole bunch more provided by people out there. And look at this. These are all community plugins that do everything from big things to little things. Look at this one. AWS Elastic Beanstalk integration. Man, that's pretty handy. It will allow you to push to Amazon Web Services Elastic Beanstalk directly. Wow! That's neat. Elastic Beanstalk is kind of like Azure Websites. There are a bunch of other things in here too. Pomodoro. A lot of people really like Pomodoro. It's a timer for keeping you efficient and not burning your brain up when you're working and writing code. I think that's kind of fun. I've never used it before, but anyway. Browsing through these things is a lot of fun. One of my very favorite new features of WebStorm 8 is the integration with the Mocha test framework. I'll talk about the details of Mocha later on when we start writing tests. For now, I just want to show you how we're going to hook it up to the IDE. So here is a Mocha test file. This DSL looks just like our spec. These individual calls are functions that call out to Mocha, and you can wrap things up in a very BDD sort of way. So this is just a, obviously, silly test, and we'll talk about the syntax of it later on. The test file is located inside a test directory, inside of my cart module, inside of a lib folder. We'll talk about the structure later on. None of this is terribly important, but I do want to show you how to hook up Mocha. What I'm going to do is I'll come over here to Edit Configurations, and you have all kinds of runtime configurations and debug configurations that you can work with. I'll show you really quickly, you can set up the defaults as you like.
And one of the defaults in here is Mocha, as you can see, and I have a bunch of default values. So let's create a brand new Mocha configuration, and I'm going to call this cart, oh, let's call it Cart Specs. The interpreter, that is where the Node executable is, so you just have to put that in there. That's /usr/local/bin/node if you're on a Mac by default using the installer. If you put it somewhere else, just point it to wherever you put it. The working directory is going to be the main directory of our site, which is just demo, so this defaults to the project directory. You can set any environment variables you need to in here, as well as any Node options. The Mocha package itself is exactly right there, the Mocha node module I installed before. You can check which interface you want. And this is neat. This is one of the reasons I really like Mocha, is here you have different ways of actually writing tests. So this is the BDD way of doing things, using 'describe' and 'it should' and all that stuff. You can also say I'm more used to a TDD style, and so it has the 'suite' keyword and 'test' and stuff like that. And if you want to organize tests per module with the exports interface, you certainly can do that. And then finally, there's a QUnit way of doing it. Anyway, that's just a little background for you. So if we head back over to WebStorm, we're just going to leave it on BDD. Any extra Mocha options you want, you can throw in there. And then, finally, we just need to tell it where our test directory is. So that will be in lib and cart and test, and we'll choose that, and then we're good. So, hit Apply, OK, and now that we have these cart specs configured, you can see that we have the option to run them. We can also debug them, which is really cool. So first, I'll run this. And you see the test runner pops up down below. And, yay, everything is green. And so you can see here how you might want to have some good naming in case something goes wrong. So let's change this to '2'. I'll save it, and I'll rerun the test, and you'll see it crash. And you can click to see some diffs, but you don't need to do that. And you can also click to see where the error is. So if I click on here, it's saying this is where I barfed. And it puts the cursor right there. The other thing that I really, really enjoy is to have this thing autorun, so we can click auto-test, and what that does is it just waits and sits, and anytime you make a file change and hit Save, it waits for a few seconds and then reruns the tests. So, we're still seeing the error. This is handy, but I'd like it to go a little bit faster. By default, it's 3 seconds. Let's set it to '1'. So now when I come in here, and I change this back to '1' and I hit Save, boom, we have green. Change it back there, and we have red. So this is almost an instantaneous change, which is such a great feedback loop. We can unpin this, so we can put this into floating mode and then put it over to the side if we want, which I think is pretty handy. And there are different ways of organizing this, like not showing the completed ones. There are ways you can filter and search and look and whatnot. And once all your tests are done, and you're super proud of yourself, you can choose to export them. You can export them in XML if you like, or HTML, and I'll just put it on the desktop here, hit OK, and say, yup, open it up for me.
You can show this to people if you want, print it out, say look, I'm a rockstar, look at all my tests and how well they're going. It's up to you. Another really nice feature of WebStorm is its integration with Git, and I really like that. It seems like the JetBrains people are really paying attention to how modern web developers are working. And so it goes far beyond just flagging files for you. Like, here we can see that product_spec is green, so it's newly added to source, and down here, these are blue, which means they've changed, which is neat to know. And, you know, we do have buttons that say Commit, Pull, and Push, but it goes a little bit beyond that. Let's take a look. So let's open up this tab down here that says Changes, and it shows you these are the files that have changed. We can push these changes up to our origin, if we'd like to do that. We can also come into each file. We can look at a diff. And the diff tells us, hey, you took a line out and added one here. Pretty neat. In addition, we can also go to the Version Control tab, and we can see all kinds of history for our site. Inside here, we can see that the last few changes have come in under the cart folder, and we can see the specific things that have happened in there. What's happened in our entire project, right here. What's happened for the index file. So these are the things that changed most recently. If we go back over to the Changes window, and we take a look at the log, we can see a really neat visual Git graph, I guess is what you'd call it, where you can see a lot of what's associated with Git. Anyway, we can at this point push. So we can come up here and just hit VCS push, and it'll show us all the files that are ready to be pushed. And we can say, yup, commit them all. And then we have a little message in here, Updated specs, let's put that, and here are some more fun things. We can specify a different author if we want to, though I don't know why we'd do that. We can reformat the code according to some coding standards, or rearrange code. It can perform code analysis for us before we check it in, and it'll also check for Todos that might have been left in the files that we're changing. Now Todos are one of those things that IDEs have been trying to get right for so long. WebStorm does a really nice job with it. I'll show you that in just two seconds. But for now, what we can do is commit, or commit and push. So I'm just going to hit Commit, and boom, up it goes. And you can see visually that everything has changed. Updated specs. That's where the HEAD's at. Like most modern IDEs, WebStorm has the notion of todo lists, and you can create a Todo simply by adding a comment and then saying TODO: Fix this mess. So if we do that, we can pop open the TODO tab here, and it'll show us app.js, Fix this mess, double-click it, boom, in we go. But you're probably also noticing that we have 54 Todo items in 32 files. Where did these come from? Well, some of them are in Mocha.js and file.js, stuff that we have no control over, which is a bit of a bummer, because these are all inside of our node modules right here. And, as much as I wish we could ignore these things, we can't really, but we do have an interesting workaround, which leads us down a better path. So let's take a look at filtering these things.
Right now, by default, we're showing all the Todos that are in our directory here, so if I want to edit the filters, let's pull this down, you can see that we have all kinds of filters in here, like Todo and Fixme. We can create another one, and we could call it Hack if we wanted to, but I think what I want to do is something specific, like todo and then, I've got to put a backslash there, todo\-rob, and then mark that as the end of the pattern. And let's just change the icon here to that red one, Rob in red, why not? And then we say OK. And then what we'll do now is add this filter down below, todo\-rob, check it, and call it Rob's Todos. Hit OK. So now if we open up our Todo list, and we want to apply a filter, where's Rob's Todos? Well, there are none. So if I come in here and say TODO and then -rob, there we go. Found one. App.js and Fix this mess. This is handy because what this will allow you to do is, of course, mark stuff for yourself, but you might want to have other things in here, like todo-hack, which means, man, this is a mess. Or, speaking of which, you might want to do todo-phil for another team member who might pull the code in, and they can take a look at their own Todos. WebStorm doesn't impose itself on your development process, meaning everything that you do doesn't have to be done through the IDE. So I showed you Git just a second ago. I certainly don't need to use the IDE if I don't want to. I can pop open the terminal down here, and I could run something like git status and do all the things that I would normally do using the terminal. This is handy. I mean, the terminal itself is just right over here, and I could also just use that if I want to. It's up to you. But there are other things that you can do in here as well, and the nice thing is that WebStorm will respond as if this is just part of the IDE itself. For instance, I could say, let's put in a README file, and I could say README.markdown, and if I come up here, boom, it autoloads it. So I will just delete this, or I could come down here and remove the README, and coming up here, boom, it's gone. The other thing that I like about having a terminal is installing and uninstalling Node modules. There are ways you can do it using WebStorm, but who needs that. I don't need that. It's easier just to come down here into the terminal and say npm install async, and save it please. And it'll go out and grab it and do the needful, and in it goes. And if we come up here to node_modules, you'll see that there is async. In addition, like most good text editors, you can get around in WebStorm really easily, so I could do the typical thing of entering a class name, and then off I go to that class. However, you'll notice I put Product. I don't have a class named Product, and that's because JavaScript doesn't have classes. I think if I was using CoffeeScript, then it would work, although I haven't tried that. However, if I do Command-Option-O and enter a symbol name, I can say take me to Product, there we go, take me to the capitalized Product. Boom, in I go. Or I could do Command-Shift-O, take me to the file, and if I wanted to go to routes/index, I could use partials in here, and it would respect partial names, so I could say routes index or routes user, and then in we go. It's really easy to get around and maneuver inside of WebStorm. And you can also, if you want, split the windows. I really enjoy splitting when you're running tests on one file, and you're doing something in the main code file.
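To make the filter idea concrete, here's a hedged sketch of what those custom TODO tags might look like in a source file; the todo-rob and todo-phil patterns are just the hypothetical filters described above.

```javascript
// TODO: Fix this mess                       <- caught by the built-in TODO filter
// TODO-rob: tighten up the cart validation  <- caught by the custom "Rob's Todos" filter (todo\-rob)
// TODO-phil: review this query              <- a per-teammate filter works the same way
```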
A good code editor allows you to be really efficient when writing code, and that usually involves speed. And for me, one of the key things that makes me a faster coder is snippets. I like my snippets, especially ones that are custom fit to the way I work and the things that I like to write. I find that WebStorm's code snippets system is really, really well done. So let's take a look at how they work. And to do this, I will write out some specifications for a shopping cart here that I'll be writing later on. The first thing I know that I need is an assertion library, so I'll use my little req snippet and require("assert"). Next, I want to define the feature that I'm going to be working on here. So I'll use my describe snippet, and we'll say Shopping Cart, and then I'm going to describe Adding an item, and it should up the quantity, it should add the product to the backing store, and what else should it do? It should set the last item added. This is another reason I like snippets. I'm the world's worst typist. All right. I got this file cracked out in pretty short order, and you can see that I have my snippets. I like the way they work. So how do you actually create a snippet in WebStorm? Well, if you find something that you type 100,000,000 times over and over and over, like require statements, you can just highlight it and go up to the Tools menu here, and you can say Save as Live Template. And then up pops this dialog. And down below here, you can see that it's taken my selection and already dropped it into the template text. So here I know I need a variable. And variables can be just ad hoc things, so I can just say name and surround it by dollar signs, and then here, I will just pop over and put it in there. So anytime I type name there, it goes in there. I can edit the variables. And in here, I can have default values if I want, and then also set it to an expression if I need to. By default it expands with the Tab key; I can have it expand with other things if I want. And finally, I can say it should be available in CoffeeScript, JavaScript, whatnot. Well, now that I've written all this out, we could jockey this around a little bit more. And let's do something like this. I could say describe A Feature and then describe A scenario. Good. And leave it like that. And maybe even give myself a little prompt here and say more specs here. Yeah, very good. So now I can go up here to Save File as Template, and then I just need to give it a name, so I can call this Spec, and the extension will be JavaScript. And I can just leave the rest as it is. I don't need variables or anything. So I hit Apply. There's Spec. OK. So now when I'm writing out some tests, I can come up here and say New, Spec, and what spec are we writing? I'll say product_spec. And then, yep, I'll add it to Git, and boom, all of this is now in here with little prompts to remind me.
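Pieced together from the narration above, the spec skeleton I'm cracking out with those snippets looks roughly like this; the names are the ones I dictated, so treat it as a sketch.

```javascript
var assert = require("assert");

describe("Shopping Cart", function () {
  describe("Adding an item", function () {
    // Specs without a callback body are reported as "pending" by Mocha
    it("ups the quantity");
    it("adds the product to the backing store");
    it("sets the last item added");
  });
});
```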
Grunt
For every project I've ever worked on, I've had the ability to run certain tasks. Even with the latest projects I've been doing with Massive and Biggy and other things in the .NET space, I'll always have a console app that has spikes in it or performance tests or maybe even building and dropping test databases. Who knows? These task runners are indispensable, and they're built right into other platforms, like Ruby. Ruby has Rake, which is kind of Ruby's take on Make. Anyway, you can just use Ruby code to automate whatever tasks you need. Grunt is a JavaScript task runner. There's also Jake, which is the corresponding thing to Rake. Anyway, Grunt is pretty well liked. It's got a ton of plugins, and it does a whole bunch of things. So, why Grunt? Why do people use Grunt? Well, it's a task runner, but it does things that maybe an IDE might do for you. That is one reason that people like to use it when working with JavaScript. If you're just using a straight up text editor, like TextMate or Sublime Text 2 or Vim, then something like Grunt is really, really helpful. For instance, here's one thing that it does. It runs this linter, jshint, and if you don't know what a linter is, basically it goes through and checks for errors in your code. If you're a .NET person, it's kind of like using, what's it called, FxCop, which goes through and looks at code policy. It's probably not as intense as FxCop, but anyway, it'll go through and say, hey, you forgot a semicolon there, or this is a global, did you mean to do this? Where's your var? Stuff like that. And you can have it run in the background watching your code, which is pretty neat. I use it for building out databases and just helping me do everyday tasks. So, this is the first tool I'll be using. Now I should also point out that people are starting to use another tool called Gulp. They find it very helpful, more helpful than Grunt for some reason. So it's up to you. Gulp, Grunt, either one. They both do the same thing.
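To give you a feel for Grunt as a plain task runner, here's a minimal, hypothetical Gruntfile that registers a simple custom task, the kind of everyday chore I'm talking about; the task name and message are made up for illustration.

```javascript
// Gruntfile.js - a minimal sketch of Grunt used like Rake
module.exports = function (grunt) {

  // A hand-rolled task, much like you'd write a Rake task in Ruby.
  grunt.registerTask("greet", "Say hello from the build", function () {
    grunt.log.writeln("Hello from Grunt!");
  });

  // Running `grunt` with no arguments runs the default task.
  grunt.registerTask("default", ["greet"]);
};
```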
Command Line
If you're a Microsoft developer, and you've been working with .NET for a while using Visual Studio, this might seem a little odd. Working in the terminal or the command line is not something that Microsoft developers normally do. However, working in Node or Ruby or Python or anything that's not Microsoft, you do end up working in the terminal. So let's just quickly take a look at some of the commands and some of the things I'll be doing in the terminal, because it will be a big tool that we'll be using. So this is the demo site that I just created and showed you with WebStorm. If I want to see the files in here, I have a shortcut called l, which is the same thing as ls -l. It shows who owns what. It shows the size of the files. It shows the file names and when they were last updated. It shows you just about everything. Now why is this important? Well, if you want to navigate around and take a look at things, you certainly can. For instance, if I wanted to run the tests without involving WebStorm, let's just say I quickly want to see whether certain tests are passing or failing, I can do that. So let's go into our module where we put some tests. I forgot the module name, but here I can see what it is. It's cart, and I come on in here, and I take a look and see there's the test directory. I also see that there's a new file called product.js, and I can take a look at that and, well, there's the code. By using the cat command, it just throws the file contents right out into the terminal window. All right, well, let's run our tests, and I do that by executing Mocha itself. It's an executable. It'll go in, take a look at any test directory or a test.js file that you happen to have, and it found our test directory and ran the files inside of it. And, unfortunately, it didn't find any tests, which is weird because I thought I wrote some. What's going on here? Well, let's find out. We'll go into the test directory and take a look. Do we have any specs? We do. We have two, product_spec and cart_spec. Well, I can do the same thing. Why isn't this showing up? And I can see here I've got no specs. What about the cart spec? I can see here, whoops, I can't go into it, sorry about that, fat fingers. And there, okay, so now I'm recalling that I backed all these things out, and these are just two files that have no specs in them. Well, this is all making sense. I was able to do this really quickly, and I was able to execute the tests and see that I had none. But I'm going to guess that maybe you are thinking, now that's neat, you're on a Mac, and it's a whole lot easier for you than it is for me. I'm running on Windows, and we don't really have the same kind of terminal you do. And I can dig that. But at the same time, it's not true. You actually have a better one. PowerShell is amazing. If you haven't spent any time working with PowerShell, please consider this to be a little prod from me to you. It is amazingly capable. So let's do something fun. Here's a Windows 7 VM I have running on my Mac, and what I'm going to do is take the demo folder I've been working in and just drop it right in, and there it is. Good. So now I have a demo folder, and I want to open up this demo folder in PowerShell. So I can come right up here into the address bar and type PowerShell, and kaboom, PowerShell is open and in this directory. Good. I've taken the time to make this look as close to my shell on my Mac as I can. I'm actually a little bit jealous. It's an amazing tool.
And just really quickly, to show you what I did: head on into Properties, and inside of here, take a moment and set the background colors to something pleasing to you. I just copied what I had here on my Mac and replaced the RGB values right over there. Simple to do. In addition, I set Consolas as my font, set it to 20, sometimes between 20 and 24, so I can see it a little bit better. Make sure to bold the fonts as well. Anyway, that makes it just a little bit more of a pleasing environment. And what we can do in here is check our version of Node. We can do all the same things with PowerShell as we can with regular Bash. In fact, a lot of the commands, like ls, work almost exactly the same. The "-l" argument evidently doesn't work, but ls will. And you can see LastWriteTime, Length, Name. A little bit different than what we saw over in Bash, but, anyway, it does work. So, what I can do here is say npm install, and it's going to go to my package.json. As you can tell, I have Node installed on my project here. And I can do roughly the same things. I can go into our lib directory, and I believe it was cart, and it corrected my forward slash to a backslash. That's pretty groovy. And then take a look around. There's our test, and I should be able to just run this. So let's do npm install mocha -g, since I don't have it installed globally. It'll take just a second. Here we go. And now if I run Mocha in the test directory, same exact thing. It runs the tests, and then if I want to take a look at the test files, let's go into our test directory, and I will do cat and then product_spec, and same exact thing, which is really, really handy. So, point being, as I'm cruising along here in the command line, crack open PowerShell. As a matter of fact, there is an introductory PowerShell course here at Pluralsight. It is outstanding. I watched it for a bit, and I couldn't believe some of the things you can do. To taunt you with it a little bit, you'll notice that I'm working on the C: drive here. One of the things that you can do with PowerShell is mount individual directories as drives, including your registry, which I think is pretty cool.
Web Frameworks
We're about ready to set things up, but before we do, let's take a look at the choices we have for web frameworks. Most people only know of one when it comes to Node: Express. But there are quite a few others, and we'll take a look at those. But first, why do so many people choose Express? The short answer is that it's lightweight, it's easy to use, it's well documented, and it's highly extensible. Express uses a middleware system called Connect, which is one of Express's many strengths. You can find middleware and modules for Express that'll allow you to plug in just about any functionality, composing your web application just the way you want it to be. Express embraces the Node philosophy of do one thing well, which itself was taken from Unix. Modules are the name of the game with Node and, therefore, how you use Express. Many developers find this to be exactly what they need for most of the web apps they create, myself included. This can work really well for large systems, but it often leads to refactoring things out as the system grows. Indeed, projects tend to split apart, and you have to learn new ways of doing things. Express, for instance, used to bundle a generator that built a site for you. But no longer. It has been split out into its own module. Grunt did something similar, breaking apart its core functionality into multiple separate projects. It confused a lot of people, but everybody just went along with it because that's how you do Node. Koa is another web framework, and it's considered to be the next rev of Express, created again by the Express team. It leans on new features coming in the next version of JavaScript, including generators, that promise to help get rid of all the callback nightmare craziness of asynchronous coding. And it is actually supposed to be smaller than Express, if that's even possible. It's still pretty raw, and I could spend a whole lot of time explaining it to you, but I'd rather focus on other things for this screencast, so I'm going to pass on Koa for now. Geddy, named after Geddy Lee, is an interesting web framework that is essentially patterned after Rails. It has generators, an ORM, and you follow pretty much the same workflow that you do with Rails or ASP.NET MVC. You create a model, and that model's conceptually tied to a controller, and that controller conveys the model's data using a set of views. Sails is another framework. Get it, Sails, Rails. It works the same sort of way, but it's a little bit more modern. It's a tad lighter weight than Geddy. And when you generate a model, you also generate API endpoints on controllers that are in your application, so you basically have a full RESTful back end that you can immediately hook up to a front-end system like Backbone. The developers who built Sails built it on top of Express specifically for the Backbone work they were doing. You can still have normal views, but it tries to make the single-page app thing somewhat seamless. I've used it, and I really, really like it. It's great for smaller sites, and the documentation is pretty good as well. So it may sound like Geddy and Sails are pretty robust. So why am I not using them? The simple answer is that Node favors modularity, in case you didn't see those other slides, and these projects don't.
Specifically, if I wanted to create an eCommerce system, I'd have 15 or so models and then 15 or so controllers and then 15 times four or five different views that deal with customers' invoices, sales orders, product catalogs, promotions, and we'd basically end up with a huge ball of mud. This is really the first major bit of understanding with Node. You have to think in terms of modules. It's really easy to take what you know from other frameworks, like Rails, Django, ASP.NET MVC, and port it right over to Node and say, ah hah, I have a great web framework, but you'd really be missing out. Node's modules are portable and self-contained, and you access them through a well-defined API that you construct using Node's CommonJS module pattern. If you take the time to break your application out into smaller applications, things become really, really easy really quickly. Your app can be built according to principles you already know, like domain-driven design, we'll talk about that later, and your team can stay focused on a smaller code base, where each code base has its own tests. We'll see all of this in action later on. For now, Express is our choice because it keeps us close to the metal with Node.
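Since Express is the pick, here's a minimal sketch of an Express app with one piece of middleware, just to make the Connect-style middleware idea concrete; the route and port are arbitrary placeholders.

```javascript
var express = require("express");
var app = express();

// Middleware: every request flows through functions like this one
// before (or instead of) hitting a route handler.
app.use(function (req, res, next) {
  console.log(req.method, req.url);
  next(); // hand off to the next middleware or route
});

app.get("/", function (req, res) {
  res.send("Hello from Express");
});

app.listen(3000, function () {
  console.log("Listening on port 3000");
});
```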
Setting Up Your Project
Setting Up Grunt
Let's get rolling and set up our environment. I'm working on a Mac using iTerm as my shell client. If you're on Windows, you can follow along in PowerShell using all the same commands you see me use here. If you haven't used PowerShell, this is the perfect time to get started. If you haven't installed Node yet, let's do it. Head over to nodejs.org and download the installer. I used the package installer on my Mac, and if you're on Windows, well, you don't have much choice. This will install the Node executable, as well as Node's package manager, NPM. Node runs just fine on Windows, but there are a number of Node modules that don't. And it's a pain to find out when you're deep into your development cycle. My recommendation to those just getting started on Node: if you have a Mac, use it. If you don't, kick up VirtualBox or your favorite VM host and install the latest version of Ubuntu. This will eliminate any surprises you might have with regard to Node and Windows, and it's a good chance to get more familiar with Unix. Now that Node is set up, let's set up our project directory. Our first task is to create a directory for our project. This can be harder than it looks, as it requires coming up with a name, and I'll choose one out of thin air, froggyfrog. He's a little guy that my daughters left sitting on my desk. Why not? Listing out the contents of this directory using ls (I have a shortcut, l), you can see this directory is empty. Now I'll open up WebStorm using a shortcut I created, and there is our empty project. One of the great things about Node is the amount of existing code out there that you can use in your project. GitHub is loaded with these modules, and searching through them can be daunting. But why so many? Let's talk about this before we do anything else. Node's philosophy follows that of Unix. Do one thing, do it well. This leads to needing more modules overall, but it also decreases the amount of bloat, if you will, in your final project. It's common for authors to create a useful module and then break it apart into smaller modules if they feel it's too big or doing too much. A good example of this is Grunt, a task runner tool that many JavaScript developers use to help them write their code, test it, build it, and in some cases deploy it. To install Grunt, you need to do it in two steps. The first step is to install the command line binary, grunt-cli. We do this by telling NPM to install grunt-cli and adding the -g flag to make sure it is a global install that will be available everywhere. Next, we need to install Grunt itself. We do this by using npm install grunt. NPM looks to the NPM registry using a basic GET request built from the name of the module itself. If that URL exists, NPM pulls the module down for us into a local node_modules directory that it creates for us. Taking a look in that directory, and there's our Grunt module. Grunt can do a number of useful things, and we'll get to a few of them later on. Right now, I want it to lint my JavaScript code, which is another way of saying that I want it to watch what I write and validate that it's not written with bad JavaScript coding habits, such as inadvertently creating a global variable or forgetting a semicolon. This might sound like overkill. But consider that the JavaScript code you write will be used in someone else's application. When they go to build that application, they will likely have a linter of their own, and if it trips on your code, killing the build, well, they won't be happy with you.
I don't want to be that person, so I'll set that up now with Grunt. The first thing I need to do is install a linter, and for this, I'll use jshint and install it with NPM. This used to be built into Grunt but was separated into its own project, keeping with the Node philosophy. I'm also going to install another module for watching code files and firing off tasks based on file changes. This is grunt-contrib-watch. I've just installed three modules into my app, and I'd like to remember that I'm using them. I'd also like to be sure that I have the right versions installed so I can share this app with my team and also work on it on other machines I have at home. NPM has a notion of a manifest for just this reason, and by convention, it's called package.json. It's a simple structure that at the very least should have the project name, its version, which I should have added here but didn't, whoops, I'll add it later, and what dependencies it requires. In our case, Grunt and its associated modules aren't really dependencies of the project. They're things I need as a developer. NPM allows you to specify that with the devDependencies key. I'll add my three modules here, specifying that the latest versions of each module will work. We'll be working with package.json throughout the development process. Using Grunt means creating a single file called a Gruntfile and placing it in the root of the directory in which you want it to work. You can have more than one Gruntfile if you like. This is usually your project, or your module's, root directory. The Gruntfile is a Node module, which we'll talk more about in just a minute, so I'll need to export some functionality from it. When this module is called, Grunt itself is passed in, so we can configure it and tell it what to do. We can configure Grunt by calling initConfig and passing in JSON config information as needed. Here, I'll tell the jshint module that I want it to lint all the files in my lib and models directories. Grunt is a task runner, which means for it to work, it needs to have tasks created for it. These can be simple functions you write, which I'll do later, or tasks that others have created and collected into Grunt modules, just like the jshint module I installed a few minutes ago. I can load those tasks in by calling grunt.loadNpmTasks and sending in the name of the module. We can see these tasks by cracking open node_modules, grunt-contrib-jshint, and its tasks directory. This is a refreshing thing about Node modules. They're right here and typically include documentation, examples, and a readme file in addition to their source code. Let's run our Grunt task and see what happens. As expected, no files are linted because we don't have a lib directory or a models directory. Let's add a lib directory and a file called index.js. (Typing) And I'll forget a semicolon. Oh no. Running the linter again and, boom, it finds a missing semicolon. It tells me where it is. Fixing the problem, and I'm lint free. We're lint free, but I know that if I have to run this command all the time, the lint will pile up. I'd rather have a system watch over the changes and let me know if I've done something bad. So let's set that up with the second module I installed, grunt-contrib-watch. Here I'll configure watch to stare at the same subset of files that I want linted. When they change, I want the jshint task run. Pretty easy. Let's see if it works. Nope. Can you guess why not? I forgot to load the tasks for the watch module. (Typing) Fixing that, and we're in business.
I even get to hear a fun little beep in my ear when I forget those semicolons. Tools like Grunt and jshint are indispensable when building JavaScript applications. Grunt can do so many things for you that, when coupled with your favorite text editor, it can approximate and in some ways surpass what an IDE can do. That said, I love WebStorm, so let's continue on and build out a module.
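Assembling the pieces from this clip, the Gruntfile ends up looking something like the sketch below; the exact file globs are my guess at what was shown on screen, so adjust them to taste.

```javascript
// Gruntfile.js - lint the lib and models directories and re-lint on change
module.exports = function (grunt) {

  grunt.initConfig({
    jshint: {
      all: ["lib/**/*.js", "models/**/*.js"]
    },
    watch: {
      scripts: {
        files: ["lib/**/*.js", "models/**/*.js"],
        tasks: ["jshint"]
      }
    }
  });

  // Pull in the tasks from the installed Grunt plugins.
  grunt.loadNpmTasks("grunt-contrib-jshint");
  grunt.loadNpmTasks("grunt-contrib-watch");

  grunt.registerTask("default", ["jshint"]);
};
```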
Understanding Modules
Working with Node modules is one of the joys of working with Node. The system is so simple, yet it's incredibly powerful at the same time. Let's take a look at the basics now. The first thing I'll do is create an app file. This will allow me to write some JavaScript code that Node can execute. Here, I'll just write a message back to the terminal. To run this, I'll run Node and pass it the file. There it is. Now I'll create a module called utility. Doesn't everyone have one of these lying around somewhere? And I'll drop it into my lib directory. I'll say 'hi' from this file as well, using the same code but with a lowercased 'hi.' To use this module, I need to require it. I do that by using the require function and assigning the result to a variable, which I'll call utility. Running this code, it works. This module will have one method, 'sayHi', and it will do the exact same thing. This is where the Node packaging system starts to shine. The notion of a module is not restricted to a single file. It can be one or more files contained in a directory. To see this, I'll create a directory called utility and drop my utility.js file in. And then I'll rename it to index.js. Rerunning our code, and it still works. This is because Node's lookup system will see that a directory is referenced and will look for a starting file, which by convention is named index.js. That's interesting, but it's hard to see why you'd want to have both a directory and a file rather than just a file. The answer is that index.js is just an entry point. Ideally, you'll have more than one file in your module. Let's expand what we're doing in here, and I'll add a models directory and move my index.js file into it. Next, I'll add a file we've seen before, package.json. This file is a manifest that describes your module. Taking a look at our project tree, you can see that we already have a package.json in the root. Believe it or not, that is okay. In fact, it's more than okay, because modules can and often do contain other modules, which themselves contain even more modules. Node's module system is recursive. We'll see this more later on. For now, I'll just tell Node which one of these files in my module to start with by specifying it with the main key. Running our code again, and it works. Notice too that we didn't change any code in our app.js file. We simply restructured our module, and everything worked just fine, leaning on convention. Specifying a main key, however, is not what most Node developers do. It's pretty common to see an index.js file that references other files within your module. It can also act as your module's API, letting you make public only what you wish to make public. Again, we'll see this more later on. This structure that you see here is very common for Node modules: an outer directory with the module name, a package.json with the descriptive information and dependencies, an index.js file as an entry point, and code files arranged in subdirectories. When working with Node, it's easiest to always think in terms of modules. These don't have to be public modules, just self-contained little bits of logic that, when combined with other modules, make up your app. That's at the core of what we're doing today, and I'll be going back to this idea repeatedly throughout the video. Node modules are elegant and powerful, but they can also surprise you in some ways, specifically their lifecycle. Up front, all Node modules are instantiated once and cached in memory for the life of the application.
This means basically that you can think of them as singletons. This can cause problems for you if you're not aware of it, so let's see an example. Here, I'll create a user module, and I'll keep it as a single file. I'll add some get/set methods for working with the notion of a name as well. Now let's head back over to our app.js file, and I'll require the user module. This will return the instance, the only instance, of the user module to my user variable. Just for fun, I'll require that same module again, and I think you know where this is going. But let's play it out just so you can see what happens. Running this, and both return Rob as the name. If I set the name value for the first user module and then ask the second user module what the name is, yeah, as you can see, the name Steve is now set as the name everywhere in your application. Okay, that might have been a little farfetched, but let me show you some code that I wrote just two weeks ago that had me groaning for days. I had to write a shopping cart, so I sat down and created a cart module, which had an items array backing it. It had the methods that you might expect, addItem, getItem, removeItem, and so on, and this of course won't work. The items array would be shared across your entire application because this module will be cached and reused everywhere. If you were to pass in a unique key, however, and work completely with a data source like Redis, this module would indeed work. This aspect of Node's module system bites everyone from time to time, and it's good to be aware of it right away.
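Here's a small sketch of the caching behavior just described: a hypothetical user.js with get/set methods, required twice from app.js. Both variables end up pointing at the same cached instance.

```javascript
// lib/user.js - a single-file module with simple get/set methods
var name = "Rob";

exports.setName = function (newName) { name = newName; };
exports.getName = function () { return name; };
```

```javascript
// app.js - require the module twice; Node hands back the same cached instance
var user = require("./lib/user");
var user2 = require("./lib/user");

console.log(user.getName());  // Rob
console.log(user2.getName()); // Rob

user.setName("Steve");
console.log(user2.getName()); // Steve - the change shows up everywhere
```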
Organization
It's easy to say something along the lines of think in terms of modules when discussing Node and how to organize your code. But what does it mean? Here's one way that I personally have found very helpful. Straight up front, I try not to have a parent project. In other words, if you're developing a web app, try to avoid putting all your logic inside of a single lib directory that in turn is in the root of your web app. What I've typically done is create a single directory for my project and then subdirectories for each module that goes into it. But what are these modules? How do you know what should go into a module? This is not a straightforward answer. The best possible advice I could give is to not over-think it from the start. That's easy to say, but what does it mean? Let's walk through this. This project is an eCommerce project that I am building for fun, eventually to open source it or something else. Right off the bat, I know that I need a website and that I'll need modules to handle all kinds of things, from shopping cart interactions to checking out, from order fulfillment to customer management. And that's a lot of stuff. My goal with slicing the project up into modules is not so that I can reuse them later on. It's so that development can become tightly focused, and future maintenance becomes really simple. I'm going to borrow from what I know of domain-driven design and think about this project in terms of business services, also known as bounded contexts. So that's how I'll start, and know this as well: you can always extract things out even more later on. I'll start with a membership module, as we'll have the notion of members as well as customers. I'll also add sales, accounting, and fulfillment. You certainly don't need to structure your folders in this way. However, being the sole developer on this one right now, I prefer simplicity, so I'm going to keep everything right in front of me. What's most important is that each module is separated from the others. This is what I want. Now what I can do is drag package.json, the Gruntfile, and node_modules into the membership directory. These files are now dedicated to my membership module. Before going any further, I'll rename my module to something a bit more specific. For this, I'll use froggyfrog-membership, knowing that I can rename it later. I'll also add an author and description while I'm here. As a side note, I should also have a version in here, but I overlooked it at the time I recorded this. But why did I do all of this? Node developers have adopted the Unix mantra, do one thing and do it well. I want to do that here: to develop a membership module and hopefully do it well. That means I'll need to have models and processes specific to a membership system and to expose them to calling code through some type of API in my index.js file. I also want to have tests specific to this module contained directly in the module itself. This approach is different from a larger, all-in-one approach like you might see in Rails or .NET, where your tests are in one place for all of your modules or projects. That makes things difficult to manage as a project grows and expands.
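For reference, the membership module's package.json at this point might look something like this; the description is a placeholder, the "*" ranges mean "latest version works" as discussed earlier, and, as noted, the version field got added later.

```json
{
  "name": "froggyfrog-membership",
  "version": "0.0.1",
  "description": "Membership and registration for the froggyfrog project",
  "author": "Rob Conery",
  "main": "index.js",
  "devDependencies": {
    "grunt": "*",
    "grunt-contrib-jshint": "*",
    "grunt-contrib-watch": "*"
  }
}
```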
Choosing a Test Framework
Let's set up testing for our module. There are a few test frameworks to choose from: Nodeunit, Vows, QUnit, Jasmine. Those are some of the more popular ones. But most people I know use Mocha. I'll install Mocha globally using the -g flag. This installs the Mocha test runner and executable file into my path. I'll also install the Should assertion library, making sure to save it in my dev dependencies. For many development frameworks, installing the test framework and runtime is the easy part. Getting everything configured and running and working correctly, well, that's the hard part. With Mocha and Node, we add a test directory and put our test files into that directory. Now we get to write a test, which I'll just stub out here with a bunch of silliness. (Typing) To see the results of this test, I just execute the Mocha command in the root of my module, and that's it. Tests run quite fast, though they don't always stay this fast. As your project grows, yes, it will take a while to run all these tests, but nowhere near as long as an un-tweaked Rails RSpec setup. But that is not the Node way. We want our module to do one thing and do it well. This means our test suite can stay smaller, and there's no large environment to load up. Looking at the output here from Mocha, it's okay, but it's not the nicest I've seen. Let's change that output by adding a few options to Mocha. I can do this with flags in the Mocha call itself, or I can create a file called mocha.opts in my test directory. Running Mocha again, much better.
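Here's a hedged sketch of the kind of throwaway stub test and mocha.opts tweak described here; the file names and the reporter choice are just examples of the options Mocha offers, not the exact values used on screen.

```javascript
// test/sample_spec.js - a silly stub test using the Should assertion library
var should = require("should");

describe("Sanity check", function () {
  it("knows that true is true", function () {
    (true).should.equal(true);
  });
});

// test/mocha.opts (plain text, one flag per line) might contain, for example:
//   --reporter spec
//   --ui bdd
```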
Flexing Git and GitHub
Let's get rid of this sample test file and commit our module to Git. This is very, very important. If you're working on a larger system as I am, you don't want to add the entire parent directory to Git. Each module should have its very own repository so that we can use NPM to load it. NPM works directly with Git and GitHub, believe it or not, and you'll see this in detail later on. I'm going to add a .gitignore file here, and this is something you don't want to forget, as well. You don't want to commit your Node modules to your source tree. Never do this. This adds a huge amount of bulk to your repository and makes it hard for people to work with your module. Your module will contain modules that contain even more modules, and this can go on for a very long time. That is a lot of files. Node developers are used to working from the package.json file, letting NPM install the dependencies for them using npm install. Let's add our files to Git. Oops, I forgot to initialize there. And we'll do our first commit. I've also added a remote up at GitHub so I can show you why it's important to have a single repository per module. Using NPM, I'll install the Express web framework and then ask Express to create a new site right here in our project's root, which I'll call froggyweb. Next, I'll crack open our Express app's package.json and tell it that I have a dependency at tekpub/froggymembership. NPM has the ability to search for your module in a lot of places. As we've seen, it will look for it on your local drive if it sees a file or directory reference, which must start with a '.' or a slash. If your reference isn't a local one, NPM will assume that you want a module listed in the NPM registry up at registry.npmjs.org. If it can't find your module there, it will parse your reference into a GitHub repo. I can tell it's trying to do this by reading the output of the command. So NPM found my repo and is about to install it, but we have a problem, one that I mentioned previously. I don't have a version number in my repo, which is very bad. This is how NPM knows which version to work with, and it will stop the installation if one isn't specified, because then it wouldn't work at all. Let's fix that by adding a version number to our package.json and pushing back to GitHub. Now let's run the installer again, and it works. Taking a look at our node_modules directory, there's a module called froggyfrog-membership.
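The GitHub-style dependency described here looks roughly like this in the web app's package.json; the repo path is the one mentioned above, and the version ranges are illustrative. The .gitignore mentioned earlier needs little more than a node_modules line.

```json
{
  "name": "froggyweb",
  "version": "0.0.1",
  "dependencies": {
    "express": "*",
    "froggyfrog-membership": "tekpub/froggymembership"
  }
}
```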
Building a Registration Module
The User
In this module, we're going to shift gears into live coding. We need to start somewhere, and for me it's usually building the thing that allows data into the database. For this project, that's the registration system. I'll be coding at a bit slower pace, and I've edited things just a bit, slowing down when important concepts need explaining. So in the next 40 or so minutes, we'll flip into behavior-driven development mode with Mocha, specifying how our registration should behave and then writing the code to fulfill those specs. As we create the tests, I'll show you common testing patterns to help you write faster, more targeted tests. You'll also see me screw up a few times, which is a good thing as some problems are rather common, and you'll see how to solve and avoid them. Finally, we'll talk about patterns you can use for building your own modules and various ways you can avoid callback hell. Let's get started writing some tests that we'll actually use. We know that we're going to need a user, so let's describe what a user is. I should note that I'm not going to be doing BDD here. This is just straight unit testing. Even though I'm not doing strict BDD, this is a familiar setup when using a DSL like Mocha. The outer describe block is your feature or your subject. The inner describe block is an aspect of the thing that you're testing. When we get to BDD later on, this will morph into features and scenarios. Each inner block should work against a single bit of data, which you structure just for the block. There's a mouthful of jargon to go along with what I'm doing, but I'm going to sidestep all of that and see if I can explain the process here in simple terms. We know we're working with a user, and we know that we want to spec out a user's default settings, so we set that up in the very beginning and write tests to confirm our design ideas below. If you find that you're creating new instances of a user within your tests, create a new subject block. You really only want to be testing one instance per block. This kind of thing may seem obvious when talking about users and default settings, but when you get to more complicated things, it gets a bit less clear. We'll touch on this again, but for now, this is a typical testing structure when using Mocha. Now let's write some tests to verify our design's specifications. I have some shortcuts wired into WebStorm, so I can quickly type these things out. Each specification in Mocha is created with an It function. How you write this specification is very important. It will convey what you're trying to do to the rest of your team, and, if done correctly, to the business people that care about this code. Use declarative statements, and stay away from the repetition of the word Should. For instance, in my first spec, I'm stating the email is rob@tekpub.com. Many BDD fans would prefer seeing the word Should in there somewhere, but that simply doesn't fit. And moreover, it's unclear and waffle-worded. Email either is the default sent in or it's not. This is the Yoda rule. Do or do not. There is no should. This is your notepad, the place where you're supposed to muse on what your code is going to do and how it will behave. Here, I'll jot down what I expect as specifications for the information a user contains. Before we run this, I'll need to add a user module so things don't error out, and I'll require it right at the top of the test suite. (Typing) Now let's run Mocha using the -w flag.
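Before we do, here's roughly the shape of what I've typed so far. The paths and field names are my own sketch; the it() calls have no callbacks yet, which is what Mocha reports as pending, and it assumes user.js exports a constructor function, which is exactly the error we're about to hit.

// test/user_spec.js -- subject on the outside, one aspect per inner describe,
// declarative it() statements with no callbacks yet, so Mocha marks them pending
var User = require("../models/user");   // the path is an assumption

describe("User", function () {
  describe("defaults", function () {
    var user = new User({ email: "rob@tekpub.com" });

    it("email is rob@tekpub.com");
    it("has a pending status");
    it("has a created date of today");
    it("has an authentication token");
  });
});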
This will watch our files, and when we change anything, it will rerun our tests. And we get an error. A very common error at that. And I am confident that it won't be the last time that we see this. TypeError: object is not a function. This error is telling us that we tried to invoke something thinking it was a function, and it wasn't a function, it was an object. Looking over our code, there is only one place we're doing that, and that's on line 10 where we try to use the new keyword with our user module. The new keyword only works if our user module is a function, which it isn't. So let's fix that. The simplest thing I can do is create a variable called User, assign it a function, and then export it. I should note that the naming here is not required. I could have called my function Monkey, but I chose User, proper cased, for consistency. Rerunning our tests, and the error is gone. Each of our tests is blue, indicating that they are pending, which is exactly what we want. By default, if you don't have an assertion function in Mocha, your test will be pending. Now for the fun part: making our specs pass one by one. I'll set up the windows side by side so you can see the changes as I make them. And before I get going, I'll take my coder hat off and put on my wonky architect's hat. I have a number of patterns I can follow here that come from basic software practice. Just because we're using a funky, dynamic, and somewhat encumbered language, in other words JavaScript, it doesn't mean that we can't apply the skills that we've learned so far in our careers. In my test code, I used a constructor. And this is a pattern in JavaScript, the constructor pattern. The idea here is that you take in some initial settings, apply them to an object, and then return that constructed object. If I didn't want to use the new keyword, I could use an explicit method, perhaps something like createEmptyUser or something, and that would be the factory pattern. I'll get more into this later on. For now, I want to keep it simple. I'll create an object and assign it to a variable, and then I'll assign an email field, assuming one exists on the arguments passed in. I don't care for assumptions. Email is required of all users, so let's make sure it's there and formalize this. For this, I'll use Node's built-in assertion library. If there's no email, we'll throw. Let's continue on with the optional parameters passed in, and for this, we can use JavaScript's truthy-falsey feature. Here, I'll set a createdAt field if one exists on the passed-in arguments. If it doesn't exist there, it will be undefined, which evaluates as false if it's put into a conditional statement. We can put this into a conditional by sticking an Or statement in the assignment. This is how you write a default in JavaScript. Here I am asking if there's a createdAt field on the passed-in parameters. If there isn't, assign a new date. Fast forwarding a bit, and here we have a reasonable clone of a Devise user. Now let's get our tests to pass. The first check here will be email, making sure it equals rob@tekpub.com. Half the battle in working with Node is remembering the assertion library syntax. Many developers I know just go with Node's assert library, which you've just seen me use, but I like to use Should. And our first test passed. It feels good. Let's fill all these out as we go along. I have a failing test with the authentication token, but I'll get there in just a second. (Typing) These last tests don't make much sense in terms of defaults.
I put them in because, as I mentioned, these are fields that you see with Devise. But I don't think they belong here, so I'm just going to remove them. And there we go. We still have one failing test, the one that checks the authentication token. I didn't set it because I don't have a way of creating a random string just yet, so let's do that. I'll stick it on the utility module. Yes, I know, but I'll see where I can push it later on. This is a function I've used a few times before, so I'll just snap it in here, and great. Now I just need to require it and use it. (Typing) Oops. That's what happens when you copy and paste stuff. Nice to have tests to cover your butt, isn't it? Fixing that, and we have some passing tests. Woohoo!
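Pulling the last few minutes together, the utility helper and the user constructor end up looking something like this sketch. crypto.randomBytes is Node core, the field names follow the Devise-style defaults, and everything else here is my assumption rather than the course's exact code.

// lib/utility.js -- the random-token helper (this exact function is my sketch)
var crypto = require("crypto");

exports.randomString = function (length) {
  length = length || 32;
  return crypto.randomBytes(length).toString("hex").slice(0, length);
};

// models/user.js -- the constructor pattern with a required email and || defaults
var assert = require("assert");
var utility = require("../lib/utility");

var User = function (args) {
  var user = {};   // explicit empty object, so we're not riding on another prototype

  assert.ok(args && args.email, "Must have an email");   // email is required, throw early
  user.email = args.email;

  // optional values fall back to defaults using the || trick
  user.createdAt = args.createdAt || new Date();
  user.status = args.status || "pending";
  user.signInCount = args.signInCount || 0;
  user.authenticationToken = args.authenticationToken || utility.randomString(32);

  return user;
};

module.exports = User;

On the test side, the Should version of the first assertion then reads user.email.should.equal("rob@tekpub.com").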
A Pattern Discussion
We just saw the use of a very simple pattern, the constructor. As a programmer, you are already familiar with this way of creating an instance of a class, and as mentioned, you can use all the same patterns you are used to in other languages in JavaScript. Let's quickly touch on that. Addy Osmani, a very active JavaScript community member and a developer at Google, wrote a great article on design patterns in JavaScript and how you can work with them. For this production, I'm only going to be using a constructor. I'll tell you that straight upfront so there are no surprises ahead. But there are a number of other creational patterns that I could use if and when complexity arose. These are patterns you are already, hopefully, familiar with, including Factory, Singleton, Builder, and Prototype. The idea here is that if you know your patterns in another language, such as C#, you can apply them to JavaScript as well to solve a particular problem. Have a read through Addy's post. It's brilliant, and you'll find yourself getting more and more comfortable with JavaScript if you're not already. There are three patterns that I want to talk more about, however, as you see them quite often with JavaScript and not so much with other languages like Ruby and C#. They are the module pattern, the revealing module pattern, and the Prototype pattern. The module pattern allows you to wrap functionality onto a JavaScript object. It's very simple, and you can reference it directly. You've already seen the module pattern in action. It's at the core of Node's module system. If you've been wondering why Node modules use the words exports and require, well, it all stems from the JavaScript module pattern. As you can imagine, there are many ways that one file can use the functionality from another file if we lean on software design patterns. JavaScript developers recognized this back in 2009 as JavaScript was making its way out of the browser and decided that rather than thrash about and reinvent many wheels, it might be good to agree on a common way that functionality from one module could be used within another. This is CommonJS, a project started by Kevin Dangoor that has a simple goal--formalize a common API for JavaScript modules and packages. This is the standard that Node has adopted. The system is profoundly simple. When building a module, export the functionality that should be part of your module object. This can be a function, a primitive value, or an object literal. When someone wants to use your module, they require it by using the require function. Your module will be instantiated, and every bit of functionality you've exported will be attached to a module object. You then assign that module to a variable, and off you go. The module pattern works well, but it can be unclear to other developers what is visible through your module's API. One way to formalize this is to wrap everything in a function. What's public and private is evident just by reading the code that is returned. This is a derivation of the module pattern, and many developers prefer it because it's explicit. If you've worked with any client-side JavaScript libraries like Backbone, you've probably seen something like this. This is how you declare a model in Backbone. The idea here is that Backbone.Model is an object that's being defined for you, and you're cloning and extending its prototype to create a new one.
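Before we get to prototypes, here are generic illustrations of the module pattern and the revealing module pattern to make the two flavors concrete. These are my sketches, not code from the course or from Addy's article.

// The module pattern: hang functionality on an object and reference it directly.
var calculator = {
  add: function (a, b) { return a + b; }
};

// The revealing module pattern: wrap everything in a function and return only
// what should be public -- what's private is evident just by reading the return.
var counter = (function () {
  var count = 0;                                   // private
  var increment = function () { return ++count; }; // private implementation
  return { increment: increment };                 // the public API
})();

// The CommonJS/Node flavor of the same idea: export what should be public...
// exports.increment = increment;
// ...and require it from the other side:
// var counter = require("./counter");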
You can access an object's prototype by using the Prototype Property, extending it explicitly, rather than through the use of a helper function, like we just did, with extend in Backbone. You'll see many Node developers favoring this style of development. It can be much clearer than inline function declaration in a constructor. For our purposes, our objects are very simple. So I'm going to leave it the way we have it. Speaking of, let's get back to our project. I know that my app will need to do logging, so I'll create a log function using the constructor pattern one more time. I'll follow the same process that I did when creating our user module, but this time, I'll go just a bit faster. I'll assert that a subject, entry, and user ID are passed in, and then return our constructed object. Now as a side note here, you often see a constructed object use the This keyword referring to the scope of the constructor function. If I'm creating an object that doesn't extend another object's prototype, I'll create an explicit empty object from which to build. That way, I know that I'm not working on top of another prototype. (Typing) That's it. We now have a logging object.
Defining a Service
Up to now, we've touched lightly on testing, using it to help us verify that a user and a log exist, and that we're setting default values correctly. Let's go a bit deeper now, getting into BDD as we describe the behavior of our registration module. I should note that there are many ways to do BDD. This is my attempt to adhere to the original idea and philosophy behind BDD without getting mired down in the syntax dances and testing ceremony that have since evolved. I'll start with a feature in the outer describe block, Registration; that's our feature. Next, I'll define some scenarios and specify how my application will behave in those scenarios. The first scenario, what I like to call the happy path, is a scenario where everything entered is correct and valid. The happy path should always work, and it's my habit to make this path pass straightaway and to make sure it keeps passing as I think of various ways to make my system fail and scenarios in which failure might happen. Speaking of, what happens when we enter an empty email or password? Let's specify that. Same with a password and confirmation mismatch, and an existing email. Now let's specify what should happen. In other words, how does our application react when a valid registration application comes in? The first thing that comes to mind is that it should be successful. Next, a user should be created, as well as a log entry. And finally, the user status should be approved. There are a lot more things to enter here, but this is a good start. For the bad paths down below, the results should be the same in every case: the result should not be successful, and the user should know why. So I'll copy and paste the specs on down the list here. (Typing) Running our specs, and they're all pending. Great. There are no errors in execution. For each scenario, I want an explicit object that represents that scenario so I can test what happened effectively. This means I'll need a Registration module with an explicit return value. I'll create a registration.js file and export a single function called applyForMembership, which will accept some arguments in the form of an object literal. This is a common pattern that you see with Node and JavaScript in general. You also see it in the Ruby and Python communities. Use an object when passing arguments as opposed to individual primitive parameters. It keeps your API flexible, allowing you to change arguments as needed as your library changes in the future. I'll create an object that will represent the result and default the success property to false. Now, this needs to be its own constructor, so I'll create it explicitly right here in the same file. If it gets complicated, I'll move it to its own file later on, but I think it's going to be pretty simple stuff. I'll declare a result object inline and return it, setting the defaults as you've seen me do. There are no arguments I need to set on this result, so I can just create an object straightaway and return it. There we go. This should be enough to get a first failing test. I'll require the Registration module and then call the Apply for Membership method, passing in typical registration arguments. Then I'll assign the result object to a variable only visible in the scope of this describe block. (Typing) My first test should fail as I'm defaulting success to be false, and, yeah, it fails. Now, I'll just default this to success with a welcome message. I know that this looks nuts, but it's up to my follow-on scenarios to bulletproof this, so hang in there.
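What we have at this point is not much more than this sketch. Treat the names as my own choices (I'm writing the method as applyForMembership), and remember the hard-coded success is deliberate bait for the failure scenarios.

// lib/registration.js -- first pass, module pattern, no real logic yet
var RegistrationResult = function () {
  var result = {};
  result.success = false;   // pessimistic default
  result.message = null;
  result.user = null;
  return result;
};

exports.applyForMembership = function (args) {
  var result = new RegistrationResult();
  // just enough to make the happy path pass; the other scenarios will force the real work
  result.success = true;
  result.message = "Welcome!";
  return result;
};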
Our test passed. I don't like this. Let's keep going and define a user as well, which is bringing something to mind. I don't like having random records pinned to objects for convenience. I'd much rather have an explicit object that does one thing, the Node way. So I'll create another object called Application, which will move through the registration process. Same constructor pattern and same setting of arguments passed in. I won't assert the presence of email and password here as I don't want to throw. I'll handle this explicitly with validators in just a bit. Right now, this looks good. But I'm forgetting something. Can you tell what it is? I'll require my Application prototype and then create an instance of it immediately in my Apply for Membership method. Boom! Hit an error. Do you know why? I forgot something, and you have seen this error message before. That's right. I forgot to export my Application prototype. That's a common error for me. Now let's put this Application object through its paces. My personal habit is to write out the things we need to do in comments, then one by one create the individual methods. This is typical registration stuff. The fun is in implementing it cleanly with an asynchronous language. Let's see how I do. The first thing I want to do is find out if my application is valid. If it is, I'll return a successful result. This will error as I don't have this function, so I'll add that to my Application object. As a side note, if I were to use This instead of an object called App, I could extend the prototype of my Application object, explicitly defining the isValid method below the body of the constructor. As I mentioned before, this follows a common style you see in Node, the prototype pattern. But it's up to you how you want to do it. This reads better to me as my application class is rather small. Here I'll just see if an internal status variable is set to Validated, which brings to mind that I'll need a way to set this variable, so I'll just go ahead and create a setInvalid function, as well, which sets the status. I'll also accept a message here that I can convey back to the user. And finally, I'd like to also have an explicit isInvalid method. Why not? So, now we have things failing all over the place. Good. This makes me feel better. I'll need to validate the input sent in, so I'll create a function that does just that, making sure email and password are present. I could return true or false here, but I'd rather set the application's status directly, so I'll do that. And there we go. If everything passes, I'll validate the application, setting the status as needed. (Typing) Okay. Things are still failing, which means I'll need to actually call this function. Once I do, it passes. Good. (Typing)
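The Application object I'm describing comes out looking roughly like this; the status strings and method bodies are my assumptions about the shape, not a copy of the course code.

// models/application.js -- the thing that moves through the registration process
var Application = function (args) {
  var app = {};
  app.email = args.email;
  app.password = args.password;
  app.confirm = args.confirm;
  app.status = "pending";
  app.message = null;

  app.isValid = function () {
    return app.status === "validated";
  };
  app.isInvalid = function () {
    return app.status === "invalid";
  };
  app.setInvalid = function (message) {
    app.status = "invalid";
    app.message = message;
  };
  app.validate = function () {
    app.status = "validated";
  };

  return app;
};

module.exports = Application;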
Dealing With Data
I don't mind testing against a test database. I find that many errors are avoided that way. There's a little bit of extra work involved, not much, and I know this will ruffle a lot of feathers, but in general I follow Rails' lead here and just stop worrying about it and get some work done. This is just a personal preference. It's never bothered me. There are ways to mock these things out, and I'll talk about that in just a bit. For now, let's install our database. I've been wanting to use RethinkDB for a while now, and this is my chance. It's MongoDB done right, according to the Rethink team, a sort of second-generation document database. And I love it. You can install it using Homebrew if you're on a Mac, or follow the instructions for installing it on Ubuntu. If you're on Windows, this will be a problem for you, because RethinkDB doesn't run on Windows yet. But, hopefully, you can take a second and kick up an Ubuntu VM. It's worth it, so just take a second and fire one up in VirtualBox or whatever you have there. I'll be working with a data access library that I created for working with RethinkDB called SecondThought, something I extracted from a Node project I was working on recently. It's a super light bit of abstraction that helps you work with things a bit more declaratively. You can install SecondThought using NPM, and if you get an error, don't worry. This is from the RethinkDB driver itself wanting to compile an optimizer. Things will work just fine, so you can ignore it. Once installed, you can fire up RethinkDB in our project directory by simply typing in rethinkdb. You should see the output that you see here. If you don't, feel free to send me an email at rob@tekpub.com, and I'll do my best to sort you out. Let's use Grunt to help us set up the DB, as well. Here, I'll create a task called installdb, which will use SecondThought to create our tables. I need to make this task asynchronous, and I can do that by setting a variable to the result of this.async right inside the registerTask callback. Now, I'll add the install code. And there we go. Running the task. Looks like we have an error, but I think everything installed. I forgot to require the Assert module. I do that often. But the error happened after everything ran, so I think that means that things were installed. Let's take a look. And, yeah, there we go. This is the RethinkDB web interface. From here, you can manage just about everything pertaining to your RethinkDB server and databases, including simple sharding and replication. It's really an amazing system. Now I can check the users table to see if a user exists. I just need to require SecondThought and use the Connect method. A callback will hand me a DB object populated with all the tables, and we can query it as needed. If I wanted the user record back, I could just use First. There's a better one I could use, which is Exists. So I'll use that. (Typing) If the record exists, I'll set the Application to invalid, passing a message back, and then I'll invoke the callback. I'm also going to assert that there was no error. I want this thrown right here and not leave it to the calling code to swallow it. If we have an error in this query, then there's something really wrong, and it should be thrown. But I have a bigger issue here, one that is starting to stink, and one that I'm sure many of you are scratching your heads over. Why, oh why, am I embedding the database connection information right here inside my method?
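Before we deal with that coupling question, here's roughly how the Grunt task from a minute ago comes out. The Grunt pieces (registerTask, this.async, grunt.fail) are the real API; the SecondThought calls and their argument shapes are my guesses at the API described here, so check them against the library's README.

// Gruntfile.js -- an installdb task that creates the membership tables
var secondThought = require("second-thought");   // package name is an assumption

module.exports = function (grunt) {
  grunt.registerTask("installdb", "Creates the membership DB tables", function () {
    var done = this.async();   // tell Grunt this task finishes asynchronously
    secondThought.connect({ db: "membership" }, function (err, db) {
      if (err) { grunt.fail.fatal(err); }
      db.install(["users", "logs", "sessions"], function (err, result) {
        if (err) { grunt.fail.fatal(err); }
        grunt.log.ok("Tables created");
        done();
      });
    });
  });
};

Now, back to that connection-string question.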
Indeed, I don't want to couple my Registration module to my database implementation. So let's make that a little bit better. We don't want to embed our connection string inside of our module. This should be configurable as far up the abstraction stack as possible, ideally at the application level or execution level as part of some configuration file. We can do this by leaning on good programming patterns that we already know, particularly dependency injection, where we pass data access into a constructor, which means that we need to move from a module pattern to a constructor pattern here in our Registration module. This is a small change at this point, so let's just do it. I'll copy and paste the code that we've written so far and then export our prototype. I'll also make sure I don't have scoping problems in the future and assign this to a self variable. There we go. And everything breaks, which is to be expected. In my test code, I'll require SecondThought, and I'll create a Before function that will go off before all the tests in my test suite. (Typing) I'll connect to the database and pass the live connection into my Registration module. This means, of course, that I'll need to work with Registration as an instance rather than a module. (Typing) We have an error. "Object has no method applyForMembership." I'm not returning self at the end of my constructor. Wait. That shouldn't be the problem, because self equals this by reference. Oh dear, do you see the problem here? If you're not used to working with asynchronous code, this might look fine to you. If you're a JavaScript person, you're probably throwing things at your computer screen right now. This function is asynchronous, but our test suite has no way of knowing this. Our reg object here is just an empty object literal. Our connection is happening, but we're not waiting for it to return using a callback. Oh boy. I can tell Mocha to wait for our Before function to finish before moving on in the same way that I did with Grunt, by passing in a flag function called done. Mocha will wait until that function is invoked before moving on from the Before function. Now things are working. Let's plug in our existing user check and remove the connection call from our method. We also don't need to have an explicit callback here. We can just pass the callback straight through, which simplifies our code dramatically. (Typing) Now I can add a check for our Apply for Membership method, being sure to assert for any errors. Oh boy. Something's still not working. Let's output our Exists variable. That's right. It's false. This is our problem. Our assertion is failing because we're expecting success to be true. Since no users exist, as our callback is telling us, it should be returning true, so what's the problem here? See the issue? Asynchronous programming. It takes a while to get used to this stuff. The problem is that because we've started using callbacks with our checkIfUserExists method, well, that means we need to start using callbacks all the way up the stack. Here, I'm simply returning a result object. But once again, it's completely ignoring everything inside the asynchronous checkIfUserExists function. I can fix this by having a callback passed into our Apply for Membership method, then invoking that after calling all the methods that we want. Okay, we're almost there. Now I just need to be sure our Before function in this describe block is asynchronous as well, and I do that by using done once again.
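Put together, the test setup that finally works looks something like this; as before, the SecondThought connect call is an assumption, and done is the flag Mocha waits on at both levels.

// test/registration_spec.js -- async setup done right
var assert = require("assert");
var should = require("should");
var secondThought = require("second-thought");   // package name assumed
var Registration = require("../lib/registration");
var registration;

before(function (done) {
  secondThought.connect({ db: "membership" }, function (err, db) {
    assert.ok(!err, err);
    registration = new Registration(db);   // inject the live connection
    done();                                // tell Mocha the async setup has finished
  });
});

describe("Registration", function () {
  describe("a valid application", function () {
    var result;
    before(function (done) {
      registration.applyForMembership({
        email: "rob@tekpub.com",
        password: "password",
        confirm: "password"
      }, function (err, regResult) {
        assert.ok(!err, err);
        result = regResult;
        done();   // the inner before is asynchronous too
      });
    });
    it("is successful", function () {
      result.success.should.equal(true);
    });
  });
});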
(Typing) What you just witnessed is enough to deter many developers from using JavaScript. This stuff is just not apparent if you come from a language like C#, Ruby or Python. But we did it. And hopefully, you understood all the dumb things that I just did. Now, we can go back and attach a New User to our result object so our next test passes.
Saving the User Record
The next step is to save the new user to the database. There is a bit of work that we'll need to do to prepare the new user record, so let's get started. When a user registers with our site, they provide us with a password and a confirmation, traditional stuff. It's in our best interest to never store this password in a retrievable format. This includes clear text, of course, obfuscation, and also encryption. What we need to do is hash the password, which is still not completely secure, but it's a reasonable best attempt. We're going to be using a respectable hashing algorithm. This is a very hot topic, and I'm not a crypto expert. I'm simply going to follow Devise's lead and go with Bcrypt. But I know that a few people think that Bcrypt is not as good as other solutions. So if you want to change this, feel free. I'll install the Bcrypt Node.js library, once again saving the reference to our package.json dependencies, and I'll reference the module in our registration module. Once again, when you install a module from NPM, it grabs the entire code base, including a README. Take some time and read through the module that's right in your node_modules directory if you want to learn how to use the API. It's very handy. This hash call is just what I need, a synchronous routine that I can call right inline. I'll use this routine and set the hashed password on the user. (Typing) Now I just need to create a save routine, and I'll follow the same pattern I did with the checkIfUserExists method and simply pass the callback to the save routine directly. (Typing) This means that my little JavaScript Christmas tree is going to grow a bit more as I have another callback to deal with. Oh man. I'll follow the same pattern here, as well, asserting that there is no error from my database and then assigning the new user object to our results object. By the way, I should mention that the new user you see here is complete with a new ID generated by RethinkDB, which is nice. I'll call next here as well. Oh my God, this is getting ugly. I want you to know what's going on when we create a user, so let's add a log entry for when the user was created. Logging is not specific to membership, and I really should be using a centralized module for this, and I very well may at some point. We can start the building of it here. And eventually we can move the log out to its own module if we want, or maybe see if someone else has created a better one. (Typing) But for now, I'll set up an asynchronous event and create a new log using the Log module that I created earlier. Next, I need to call that function. This means growing my little Christmas tree another level. (Typing) Looks like we have a reference error. I have a bad variable name here. Gotta rename this to bc, and we have another error. Does this look familiar to you? Can you guess what I did wrong? Hopefully, you can. It's the third time I've done this. My Log module isn't exporting anything, so let's fix that. And that error is dealt with. But now we have another. We now have a timeout error, and this is another one you're going to see often when working with JavaScript, especially when testing. This is happening in our Before call, which means that we're timing out before our function returns. This usually means that we've screwed up one of our callbacks, not returning anything at all. Since things were just working a minute ago, this means we're missing a callback invocation in the new stuff I just wrote. Looking it over, yup, typing too quickly there.
The constructor doesn't take a callback; the call to the database does. I'll do the same thing here and pass the callback into our save routine and see if that changes our problem. No, which is odd. When you encounter callback hell, which I'm slowly lowering myself into here, one way to figure out what's going on is to drop log messages to the screen. Now, this is hackish, but it works. I should note that WebStorm has a pretty good debugger that I could use if I wanted, but I find this to be faster, believe it or not. Let's find out if we're even getting to the Save User call, and we're not. Is the hash even going off? No. Let's move our console.log all the way to the top and see if we're even calling this function in the first place. And I'll output the Exists variable. Of course, one of our previous test runs created a user record. Of course it exists. This means I need to delete all user records before I run this registration routine. I'll do that here in the beginning. There we go. And it works. Let's open up our RethinkDB console and see the data. Yeah, there it is. Wiping the DB like this might seem ridiculous, but this one step is all I have to do. Now I could put it in an after block, which would go off after this block's tests have run, but it would wipe out my data, and I like to look at it sometimes, like right now. One thing that I'm seeing is that I haven't updated the status to Approved, so I'll do that here, and save. That will rerun my tests. Now I can rerun my query, and there is a proper user record. Let's finish up our tests, which should be a bit simpler to deal with. (Typing) There we go. I also want to increment the user's signInCount, so I'll do that too and add a test to make sure it's accurate. (Typing) Great!
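Stepping back for a second, the hashing and save steps we ended up with boil down to something like this. bcrypt's hashSync and genSaltSync are real calls, while the package name, the db.users.save signature, and the surrounding names are assumptions that follow the walkthrough.

// inside lib/registration.js
var bcrypt = require("bcrypt-nodejs");   // the exact package name is an assumption

// hash synchronously, right inline -- never store the raw password
var hashPassword = function (user, app) {
  user.hashedPassword = bcrypt.hashSync(app.password, bcrypt.genSaltSync(10));
};

// no inline anonymous function here -- hand the caller's callback straight through
// (db is the connection injected into the Registration constructor)
var saveUser = function (db, user, next) {
  db.users.save(user, next);
};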
Pyramid of Doom
Let's jump forward a little bit. I filled out the rest of the tests, which was rather uneventful, so I decided to spare you all of that. What I've ended up with is rather typical JavaScript written by someone who's just trying to make something work. That's me. There are better ways to do this. Let's take a look at a few. The simplest thing I can do is to avoid using inline anonymous functions as callbacks. This doesn't do much for the pyramid of pain, but it does make things a bit more discrete. I'd like to do something a bit more elegant, however. I could use promises, and they are, simply put, an object that represents the result of an asynchronous function call. Some frameworks call them futures or deferreds. And you often see them in platforms like jQuery, Angular, and Ember. Node removed promises early on with the feeling that if a platform did less, creative platform developers would actually do more and probably do it better. Some people like promises and others don't. It would work out pretty well here, and for that, I could use the Q library, chaining each function in a familiar try/catch-like way. Right here is why Node developers use the callback signature standard that they do and why you should too: error as the first argument, return data as the second. If you standardize on this, you'll be able to use helper libraries pretty easily. For me, this code reads well, and it's simple to use. But I'd like a bit more clarity and the ability to add extensibility later on. One of the libraries I like to use to handle flow-control issues with JavaScript is the async module. Generally, I like to lean on and embrace JavaScript's asynchronous nature, which can be hard to do if you come from a procedural, object-oriented background as I do. But sometimes, you need to do things in a step-wise fashion. Here's one spot where I needed to do just that. This is the Install method in my SecondThought library. Basically, I need to create a table for every string element passed into the tables array, and I do this by calling the createTable function. Async makes this easy using the Each method (there's a quick sketch of the shape just below). For every element in my array, that element is passed to the createTable function, and that function is invoked. All of these invocations are run in parallel, and only when they're all finished is the callback you see here finally invoked. Async also allows you to call functions one at a time using the series method, and you can pass result data from one function to the next using the waterfall method. It's a very useful library. Take some time and get familiar with it as you'll likely find it useful in your Node adventures. The async module would likely work for me, but I want to have a bit more control over extensibility, and I don't want to impose the knowledge of this library on future collaborators. I find it much easier to embrace the asynchronous nature of JavaScript, and Node for that matter, and just go with it, so to speak. Promises and flow libraries are interesting, but it feels like I'm bending JavaScript to be synchronous. What I'd rather do is write code that reacts to changes using events. That's how it's done and always has been done in the browser, and we can do it with Node, as well, using the EventEmitter. Let's take a look.
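As a quick aside before we get to the EventEmitter, the async.each shape I just described looks roughly like this; async.each is the library's real signature, while the createTable call and the surrounding names are illustrative.

// creating every table in parallel, with one final callback when they're all done
var async = require("async");

var install = function (db, tables, callback) {
  async.each(tables, function (tableName, next) {
    db.createTable(tableName, next);   // each invocation runs in parallel
  }, callback);                        // invoked once, after all finish (or on the first error)
};

async.series and async.waterfall follow the same idea when the calls have to happen one at a time or feed results forward.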
The EventEmitter
The first thing I need to do is require Node's Events module and set the module's EventEmitter prototype to an emitter variable. I'll also reference Node's Util library, which will help me with inheritance in just a minute. Next, I'll invoke the EventEmitter constructor and pass in this, which will be an instance of my Registration function. This step isn't necessarily required, but it's a good habit to be in. Finally, I'll use the Util library to graft the EventEmitter prototype onto my Registration prototype. This is one way to do inheritance in a classless language like JavaScript. It would be nice if I could use something like eventemitter.extend, but that would be a bit overbearing to me. This is nice and clean. We can now emit and listen for events on our Registration module. This can happen both externally and internally. I'm going to use them internally to wire things up the way I want them. I'll start at the top with the validateInputs function, and I'll remove the callback. If the app is invalid, I'll emit an Invalid event, passing the application along. If the app is valid, I'll emit a Validated event, and once again pass the application along. I'll do the same with each function in the chain here: remove the callback, and emit the appropriately named events. (Typing) Okay, so how does this all work together? I don't want to remove my callbacks completely. They're expected in Node, and I want to be sure my API is compliant with that. But how do I make my module use both events and a callback? I create a variable called continueWith and assign it the callback that's passed into my Apply for Membership method. This way, I can use it later as needed. Then I get to do something very fun, removing this horrible pyramid of pain. Okay, maybe a bit dramatic, but that's a lot of code that's not particularly easy to follow, and it makes me happy to slim this thing down. In its place, I'll emit a simple event, application-received, and I'll pass out the new application. How clean is that? How does this all go together? That's next. You can listen to events on an event emitter by using the On method. So here, I'll listen for application-received and then pass a reference to the validateInputs function. It's important that you don't invoke the function here. If you do, it will be invoked when the event is wired up, not when the event actually happens. The event emitter will automatically pass along whatever data is passed when a given event is emitted. So our new application will be passed directly to validateInputs as an argument, and this is precisely what we want. Now we just need to listen for each event and call the next function we want. There we have a nice process flow list. I like the way this reads, and I especially like the way my code is slimming down. Until we come to the very end. What do we do here? Let's create a new function called registrationOK, as well as registrationNotOK. These will be the final endpoints of our process chain. Move the instantiation of the registration result down to these functions and set the result up based on what's in the application object, which should be finalized at this point. (Typing) Finally, if continueWith isn't null, we have a callback. So I'll invoke it and send out the result. I'll do the same with registrationNotOK. This is a really simple way to handle callbacks cleanly if you're using events. (Typing) Okay, here comes the fun part. Let's see if I did this right. Nope. No, wait. This looks like it might not be that hard. Oh my goodness! It works.
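Condensed down, the event-driven registration module has this shape. It's a trimmed sketch: the event names match what I've described, but the bodies and the number of steps in the chain are simplified, and the Application require path is assumed.

// lib/registration.js -- EventEmitter grafted onto the constructor
var events = require("events");
var util = require("util");
var Application = require("../models/application");

var Registration = function (db) {
  var self = this;
  var continueWith = null;
  events.EventEmitter.call(self);   // invoke the EventEmitter constructor on this instance

  var validateInputs = function (app) {
    if (!app.email || !app.password) {
      app.setInvalid("Email and password are required");
      self.emit("invalid", app);
    } else {
      app.validate();
      self.emit("validated", app);
    }
  };

  var registrationOK = function (app) {
    // the full chain attaches the saved user to the app before this point
    var result = { success: true, message: "Welcome!", user: app.user || null };
    self.emit("registered", result);
    if (continueWith) { continueWith(null, result); }   // keep the standard Node callback contract
  };

  var registrationNotOK = function (app) {
    var result = { success: false, message: app.message, user: null };
    self.emit("not-registered", result);
    if (continueWith) { continueWith(null, result); }
  };

  // the process flow: listen first, pass function references, never invoke them here
  self.on("application-received", validateInputs);
  self.on("validated", registrationOK);     // the full chain wires the exists-check, save, and log steps in between
  self.on("invalid", registrationNotOK);

  self.applyForMembership = function (args, next) {
    continueWith = next;
    self.emit("application-received", new Application(args));
  };

  return self;
};

util.inherits(Registration, events.EventEmitter);
module.exports = Registration;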
Now here's a note. I won't lie to you. I got a bit lucky here. I was expecting this to be a little messier because working with events can be very tricky when it comes to tests and tracking down bugs. One problem you run into building something using events is that it's very difficult to do it incrementally. It's possible, but generally I end up writing specs first, then writing all of my implementation to fit it. I then spend some extra time debugging things that, frankly, look very strange. I'll go through this process one more time with authentication, and I'm sure that lightning won't strike twice. You'll get to see me debug internal eventing, and it is not fun. As weird as it can be, once you understand what's going on, debugging is rather straightforward. Also, if you're using an IDE like WebStorm or Visual Studio, just hit the debugger and walk through your code. In this module, we took things a bit slowly and built out a registration module that drops new users into our RethinkDB database. Along the way, we used simple, focused BDD to help us spec out our functionality. We looked at some common issues when working with Node, and finally basic patterns and techniques for working with Node modules, including using Node's EventEmitter to avoid callback hell. In the next module, we'll speed things up and do this all over again as we write our authentication system.
Building an Authentication Module at Full Speed
Again, in Real Time
In the last module, we took our time and discussed various patterns and pitfalls that arise when building a Node module. And if you're unfamiliar with Node or JavaScript, much of that could have passed you by. So let's do it all again. This time with the Authentication module that will log our users in. Repetition is a great teacher. And, hopefully, by watching the process through again, the patterns will become more familiar to you. Okay, let's do all of this again. But this time, I'll let this play out real time so you can watch me go, end-to-end and at real speed. I'll follow all the same patterns that I did with the Registration module. See if you can spot places that I might screw up, and if you get stuck, rewind as needed. (Typing)
Debugging Evented Code
Okay. Hopefully, nothing surprised you there. Let's pick this up as I've run into a bit of a wall. Somehow I broke my Registration module while working on my Authentication module. That's strange. Let's rewind the tape a bit here and start all over. I'll comment out the entire Authentication spec just to see if my registration specs will run. Yeah, as I expected. Un-commenting, and they break again. This is something that I hate to say is normal when events are involved with testing engines like Mocha. Somehow Mocha is getting confused about what's throwing when and where. The result here is unreliable. So let's slowly step through this. If I comment out the Before block in my Valid Login describe block, things work. I have a feeling that Mocha doesn't like the way I'm handling callbacks here. So let's separate this a bit. Just a general rule of thumb: if you see something in JavaScript that absolutely does not make sense, it's an asynchronous problem. And in this case, that's what we're seeing. And that is indeed what our problem is. I'll move the DB connection bits and the data cleaning bits into the outer Before block. This feels a bit better anyway. Then I'll move the authenticate call back down below. Here's another error you'll see when working with events and testing--done() called multiple times. This happens most often if you try to subscribe to events in your test suite. It's easy to forget to use a new instance for each block when subscribing to events. But that's not really what I'm doing here. There's an extra done call right there. Okay, now we're getting somewhere. But, wow, how's that for an error message? This isn't something Node is reporting, and it's not from my code. It must be from a module I'm using, so let's go spelunk this. Once again, I'll use console.log to output the AuthResult, which is the common object I'm using in my process chain here. And there it is. So, we're okay there. Let's try the next function in line, and there it is again. Works okay in findUser. I think I know what might be happening. If you pass null to the hashing routine---nope, that's not it. Moving on down to saving the user stats, it's in here somewhere. Let's take a look at the updates. They look okay. Let's take a look at the AuthResult. There's my user, and, yeah, there's no ID. This is the record that's coming out of the database. Right. I am not setting the ID on the returned object. I want to let RethinkDB create the ID for me, so I'll just check and see if an ID is present in the args, and if it is, I'll set it. Otherwise, no default here. And look at that. At this point, it should be a simple matter to fill out the rest of the tests. And, yup, we are good.
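For reference, the fix itself is tiny; in the user constructor, only set an id when one is handed in, so RethinkDB is free to generate it otherwise. A sketch:

// models/user.js -- let RethinkDB assign the id unless one was passed in
if (args.id) {
  user.id = args.id;
}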
Integrating Our Module into a Web App
Creating an API With Index.js
We're almost at the end of this course. This is the last module. All we have left to do is integrate what we've built into an Express web application and make sure it all works. We'll create an API for our Membership module so that other applications can use it. We'll set up another Express web application and then install our Membership module as you've seen me do, using a file reference. Then, we'll hook up Passport, an Express middleware package that controls access and handles user authentication and authorization. After that, we are done. So far, I've been working on the core bits of the module without worrying too much about how it will be used. Let's fix that by layering on an API using an index.js file. One reason I like working with events is that consumers of your API can subscribe to events and do what they like. Given that, I need those events to ripple up through my API, so I need to make the API event-driven as well. I have a handy template for that in WebStorm, which I'll use, and I'll call my API, surprise, Membership. The one difference here is that I'm going to allow calling code to pass in the name of the DB we need to connect to. This will likely change. As I mentioned, I don't like passing strings directly, but for now, this will work, so I'll leave it. I'll require my Registration and Authentication modules and then create an Authenticate method. And this is a tough choice. I just said that it's encouraged that you pass in a single object for your argument data. But this is a login routine, and I think for many people, it's common and somewhat expected to have a signature just like this. In just a minute when I plug this module into a web application, you'll see more of why I made my API this way. I'll connect to the DB and pass a live DB object to my Authentication module and then emit the events as they come back. (Typing) I was hoping to find a more elegant way to do this, but from what I can tell, this is the best way to ripple events up. Finally, I'll call Authenticate and pass along the callback. A quick note: you'll notice that I wired the events before I called Authenticate. You have to do it this way or you won't be subscribed to those events when they go off. It might seem obvious now that I've explained it, but it's not very intuitive when you're writing the code at development time, trust me. Let's do the same thing for Registration. And it's just occurred to me that I am not emitting anything when Registration is successful, so I'll fix that here as well. (Typing) Finally, I'll need a way to look up a user based on a cookie token, so I'll add that to the API too. I did things a bit backwards here. I don't like that. I don't have tests, so I'll add those right now. (Typing) And there we go. All passing. Now let's see how we can actually use this thing.
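Boiled down, the index.js API looks something like this; the event names, the findUserByToken lookup, and the SecondThought calls are assumptions that follow the walkthrough rather than the course's exact file.

// index.js -- the membership module's public face
var events = require("events");
var util = require("util");
var secondThought = require("second-thought");
var Authentication = require("./lib/authentication");
var Registration = require("./lib/registration");

var Membership = function (dbName) {
  var self = this;
  events.EventEmitter.call(self);

  self.authenticate = function (email, password, next) {
    secondThought.connect({ db: dbName }, function (err, db) {
      if (err) { return next(err); }
      var auth = new Authentication(db);
      // subscribe before kicking things off, then ripple the events up to the caller
      auth.on("authenticated", function (authResult) { self.emit("authenticated", authResult); });
      auth.on("not-authenticated", function (authResult) { self.emit("not-authenticated", authResult); });
      auth.authenticate({ email: email, password: password }, next);
    });
  };

  self.findUserByToken = function (token, next) {
    secondThought.connect({ db: dbName }, function (err, db) {
      if (err) { return next(err); }
      db.users.first({ authenticationToken: token }, next);   // assumed query shape
    });
  };

  return self;
};

util.inherits(Membership, events.EventEmitter);
module.exports = Membership;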
Plugging into a Web App
We are ready to work with our module in an external app, so I've changed the name of it to Membership and upped the version to 0.0.3. Whenever you make changes, you have to be sure you change this version. Otherwise NPM will ignore it when users go to update it. I'll install an Express web app in the root of our project once again and change the name of the app to Froggyweb. Next, I'll install our module by referencing it directly using a file system reference. You'll notice that it installed everything just fine, but I don't have a directory reference here. It's just the name of the module. This clearly will not work for us the way we want. If I were to use NPM update, this module would be overwritten by the module registered in the NPM registry under the name Membership, and there happens to be a module called Membership up there. If you're working on a team, and you don't want things like this to happen, be sure to use a Git or a Github reference here. I can update this manually by installing it again if I want to, and that's what I'll do for this demo. Next, I'll need a bit of middleware that will sense when a user is logged in and control the session bits, etc. And for this, most Node developers use something like Passport. I'll install that using NPM. Passport is fascinating, and if you're a .NET developer, you can think of it like forms authentication. Forms auth doesn't handle your persistence or the notion of a user. You just tell it to log someone in and out and whether to persist the session. Same deal with Passport. If you're a Rails developer, this is where Node and Rails part ways in terms of philosophy. Devise, in Rails fashion, does everything for you. It bolts itself onto Rails, creating routes, views, controllers, and so on. Passport, on the other hand, is a single piece of the puzzle. Our Membership module is the other piece. The Passport documentation is pretty good, but it can be a bit of work to figure out just what to do, so let's go through this step by step. I've just installed the Passport module. Now I need to wire it up to our application. The first thing to do is to set Passport's strategy. You can have it authenticate using external services like Facebook, Twitter, Google, and so on, using OAuth or OpenID. You can use basic authentication. Or you can use what they call a local strategy, which basically means you'll tell Passport what to do. For that, I need to create a local strategy. And the first function it wants from you is a way to log a user in, eventually returning an error or the user record. What you see here is the Passport template I pasted directly from their site. Let's comment that out and, in its place, I'll use our new Membership module. (Typing) That was pretty simple. Again, this is the function that Passport will use to log the user in. All we need to do is fire off the Done callback. Notice also that this function accepts an email, password, and a callback. This is why I wanted to keep my API call the way I did. Even though I've suggested that you pass objects instead of primitives, it keeps it in line with Passport, which I think is important. Okay, we're almost there. Now I need to tell my application to actually use Passport. The first step is to provide the cookie parser with a secret key. Next, I'll initialize Passport and pass that Passport instance to Express as middleware. Finally, I'll plug Passport's session handler into Express. Okay, let's run things and see where we're at. I can do that with Express by calling node app.js.
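Before we see how that run goes, here's roughly what the wiring we just did looks like. The local strategy actually lives in the passport-local package; the Membership calls and the authResult shape are assumptions, and the middleware lines assume the Express 3-style cookieParser and session that this generated app uses.

// app.js -- wiring Passport's local strategy to our Membership module (a sketch)
var express = require("express");
var passport = require("passport");
var LocalStrategy = require("passport-local").Strategy;
var Membership = require("membership");
var app = express();
var membership = new Membership("membership");   // DB name passed in, as above

passport.use(new LocalStrategy({ usernameField: "email" },
  function (email, password, done) {
    // Passport only needs to know whether the login worked; our module does the rest
    membership.authenticate(email, password, function (err, authResult) {
      if (err) { return done(err); }
      if (!authResult.success) { return done(null, false, { message: authResult.message }); }
      done(null, authResult.user);
    });
  }
));

app.use(express.cookieParser("some long random secret"));
app.use(express.session());
app.use(passport.initialize());
app.use(passport.session());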
I forgot to install the dependencies for Express. As I mentioned before, when you clone Github repositories or generate Node apps, the Node modules won't be there. You'll have to install the dependencies yourself using npm install -d. Now that that's done, let's run our app. And, no problems. Hey. Let's fast forward a bit here. I added some styling to this site using Bootstrap and put our login form right on the home page just for simplicity. Now I just need to plug in a login route. I'll remove the user.js route, which doesn't make much sense here, and I'll add an account.js file to the routes. Here's a handy tip I picked up from Geoffrey Grosenbach of Peepcode. When you work with routing, as we're doing here, just pass the entire app into the constructor. That way, I can wire the routes in one place as I need. (Typing) I'll require Passport here and set up some redirects that I'll use with Passport in just a second. Next, I'll create a post route for accepting login credentials, and I'll hand it off to Passport's Authenticate method along with the redirect manifest. And that's it. If this looks weird to you, it does take some getting used to. Express works with middleware in a very elegant way. You can chain whatever functions you want into the route definition, as long as the final argument is a callback that tells the response what to do. All that Express cares about is getting a callback at some point, and it will look for that as the last argument of the route registration. This pluggable model is what makes Express such a joy to use. But if you're not used to it, well, it can be a bit confusing. The final thing to do is require our account routes so that our new route is picked up in our app's configuration. I'll be sure to invoke the module and pass in our app object. Okay. Let's try to log in and see what's what. I can use the credentials I've been testing with, and... an error. Believe it or not, this is promising. Passport doesn't know what to do to remember who the user is, and that's because we never told it what to do. We do this by calling Passport.serializeUser and passing in the function we want used. This function is called after Authenticate. Whatever user record is returned from Authenticate is passed here. What we need to do is save some information into the session so we know which user to pull out later, and I'll do that by serializing the user's authentication token. Next, I'll just need to de-serialize the user so Passport can pull the user back out. The first argument passed in is the thing I asked Passport to serialize, which in our case is the authentication token. I'll use that to fetch the user out of the database and then pass the callback directly. Okay. Let's try this out. Logging in. No error, which is great, but how do we know if we're logged in? Let's head over to the index route and pass user information to the view. When you log in with Passport, the De-serialize User function will take the user record you provide and stick it on Express's Request object for you to use anywhere. So here, I can just send request.user down to the view. One thing I'm not showing here, which is very important, is that if you want to protect a given route, you need to plug a global function into the route definition. You can do this by flexing Express's extensible middleware approach. Here's a typical function that I grabbed from the Passport Github repository, ensureAuthenticated.
You can plug this into your route definition right after the route itself, and it will get called inline. If everything's okay, then the next function will be called. If not, the user will be redirected. Okay, let's head over to our view page and output our user's email to see if he's there. And, yes! Looks like we have the beginnings of a pretty neat system.
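Pulled together, the session hooks, the login route, and the guard look something like this. findUserByToken is the name I gave the token lookup on our Membership API, the redirects assume the login form lives on the home page, and the rest mirrors the standard Passport examples rather than this app's exact files.

// still in app.js -- what to remember in the session, and how to get the user back
passport.serializeUser(function (user, done) {
  done(null, user.authenticationToken);   // stash only the token in the session
});

passport.deserializeUser(function (token, done) {
  // Passport hands back whatever we serialized; use it to pull the user from the DB
  membership.findUserByToken(token, done);
});

// routes/account.js -- take the whole app and wire the routes in one place
var passport = require("passport");
module.exports = function (app) {
  app.post("/login", passport.authenticate("local", {
    successRedirect: "/",
    failureRedirect: "/"
  }));
};

// back in app.js -- pick the routes up by invoking the module with the app
require("./routes/account")(app);

// the typical guard from the Passport examples, plugged in right after the route path
function ensureAuthenticated(req, res, next) {
  if (req.isAuthenticated()) { return next(); }
  res.redirect("/");
}

app.get("/account", ensureAuthenticated, function (req, res) {
  res.render("account", { user: req.user });   // req.user is set by deserializeUser
});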
Summary and Goodbye
Developing Node apps can be frustrating, exciting, exhausting, and exhilarating. You can have all of those feelings within the same 20-minute coding period. If you work with Node and find yourself feeling frustrated and exhausted, remember that you're likely working in a system that's alien to you as a developer. If, like me, you come from a procedural, object-oriented background, making the transition to Node will take patience and effort. It also takes knowledge of various patterns, which is why you're watching this video. These are the patterns I've learned after working with Node for the last year and a half. As I mentioned, I'm not a Node expert, but I have been programming and working with the web for a very long time, and these are the patterns that I find most helpful. They keep me from getting overwhelmed and exhausted. I do hope you've enjoyed watching this production. I'm going to keep working on this module and others. And if you have any feedback, I'd love to hear it. You can drop me a line at the email onscreen or drop me a note on Twitter. Thanks again for watching. See you soon.