by Joe Eames
Table of contents
Introduction to QUnit
Integrating with the DOM
Integrating with CI
Asynchronous tests are a feature of QUnit that gives us the ability to do some things that would be very difficult otherwise. The basic purpose of asynchronous tests is to allow us to test our code when it contains setTimeout or setInterval calls. A second but less common purpose of asynchronous tests is to allow us to test UI effects that take time to actually occur, such as a fade out or fade in. And the third purpose of asynchronous tests is to allow us to test ajax calls. Let's first look at testing setTimeout and setInterval. The first test I'm going to show is a broken asynchronous test. Here I'm going to create a setTimeout call. Inside that setTimeout call I'm going to call assert. And I'm going to give it a timeout of 100 milliseconds. Now, in this example, I'm putting my assert inside of the setTimeout. But I could also reverse the code and put the actual code that I'm executing inside the setTimeout and put my assert outside, and we'd get the same situation. Let's run this in the browser and see what happens. Here you can see that the test has completed, but it says that we have no asserts. We know that's not true; we have our ok assert, but because it's inside the setTimeout, the test completes before the assert has its opportunity to run. So in essence, the code is being called out of order. The code after the setTimeout is being called before the code inside the setTimeout. If we had the situation where we had our code under test inside of our setTimeout and our asserts outside, the asserts would get called before the code under test had a chance to run. So let's use QUnit's asynchronous capabilities to fix this issue. The first thing I'll do is go up here and issue a call to the stop function at the beginning of my test. The stop function tells QUnit to pause running tests until it's notified to resume. Now, I'm going to go down inside my setTimeout call, and I'm going to issue a call to start. This tells QUnit to go ahead and resume running the tests.
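The out-of-order execution described above can be seen in plain JavaScript, without QUnit at all: code scheduled with setTimeout always runs after the current synchronous code finishes, which is exactly why the test completes before the assert inside the timeout ever runs.

```javascript
// Plain-JavaScript illustration of the ordering problem: the setTimeout
// callback is queued for later, while the line after it runs immediately.
const order = [];

setTimeout(function () {
  order.push('inside setTimeout');   // runs only after all sync code
}, 100);

order.push('after setTimeout');       // runs right away

// At this point only the synchronous push has happened, which is why a
// test that ends here never sees the assert inside the timeout:
console.log(order);                   // ['after setTimeout']
```

This is the same situation as the broken test: by the time the timeout fires, the test run has already moved on.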
Now, let's run this in the browser and see what we get. And we can see here that our tests are now passing. We have reversed the problem that we had before, where it was as if the code inside the setTimeout wasn't running until after the test was completed; now QUnit is waiting until after the code inside the timeout has run. Now, let's look at the situation where we have more than one setTimeout. I'm going to duplicate this test. And inside the second test I'm going to create a second setTimeout call with its corresponding assert. Now, in order to make it a little more obvious what's happening, I'm going to change the timeout on the first setTimeout to be something a little bit longer. And inside of it I'm going to write out to the console so that we can see what's happening. All right. Now run this test and watch the console here. See, the test has completed, but we still get our logging statement after the fact. So our second setTimeout isn't running until after the test is completed. Even though it showed us the test has passed, it's actually giving us a false positive because we aren't running all the code that we want to test. Fixing this situation is rather simple. We just need to go back into our code and add a second call to stop. Now QUnit knows that it's waiting for two calls to start before it can continue. Now, let's run this test in the browser, and watch what happens. Okay. You can see that the test actually paused for the full two seconds and didn't complete until the second setTimeout had run, which we see by the logging statement. Now, we've got a shorthand method for doing this. Instead of issuing a call to stop twice, we can just pass in a parameter to stop telling it how many starts to wait for until it can resume running the tests. Now, if we run this test in a browser, we can see that we get the same result.
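The counting behavior of stop and start can be modeled as a small semaphore. This is a sketch of the semantics only, not QUnit's real internals: stop(n) adds n to a pending count, and each start decrements it, with the run resuming only at zero.

```javascript
// Minimal model of the stop()/start() counting described above
// (a sketch of the semantics, not QUnit's actual implementation).
function makeRunner(onResume) {
  let pending = 0;
  return {
    stop(n) { pending += (n || 1); },            // stop(2) === two stop() calls
    start() { if (--pending === 0) onResume(); } // resume only at zero
  };
}

let resumed = false;
const runner = makeRunner(function () { resumed = true; });

runner.stop(2);                  // wait for two start() calls
runner.start();                  // first setTimeout callback finishes
const stillWaiting = !resumed;   // true: one start() is still outstanding
runner.start();                  // second callback finishes; run resumes
```

This is why a single stop followed by two asynchronous callbacks gives a false positive: the run resumes after the first start, before the second callback ever fires.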
Now, thankfully, QUnit has a little shorthand method for us that allows us to make writing these asynchronous tests just a little bit easier. Instead of calling the test function, we actually call the asyncTest function. And now that we've done that, we can actually remove our call to stop because it's now implied with asyncTest. We just have to retain our call to start. So for convenience's sake, let's set our long setTimeout to something a little bit shorter, then run this series of tests in the browser, and see that they are indeed passing. Now, the second reason we mentioned for asynchronous tests is to wait for UI effects. Calls to start and stop can help us with that as well. Let's look at a little sample code. Here I've written a simple function that will fade out a div over a given duration. I'm going to go back to my test suite and write a new test for this function call. So here inside my UI test I'm going to issue a call to that function, and I'm going to have it take half a second, and then I'm going to use setTimeout to check to make sure that the div is now invisible. I'll do that by grabbing the div and checking its visibility. Now, I'll need to set the duration on this timeout to something slightly longer than how long it takes to fade out the div; otherwise the div won't be completely faded out when I run my assert. Running this in the browser, we can see that this new test passes as well. Now, even though this works, there's a much better way to do this. Let's go back into our code under test. And we're going to add in a callback function that will be called as soon as the fade out is complete. Fortunately, jQuery already supports that callback function, so all we have to do is pass it in to our call to jQuery's fadeOut. So now that we've got that code written, we'll go back to our test, and instead of using the setTimeout, we will now pass in a second parameter that's going to be a callback function.
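The callback idea can be sketched without a DOM or jQuery at all. The fadeOut function here is hypothetical (a plain object stands in for the element, and the fade is collapsed to its final state), but it shows the shape of the improvement: instead of guessing at a timeout duration, the test passes a callback that fires exactly when the effect is done.

```javascript
// Hypothetical fadeOut with a completion callback (not jQuery's; a sketch
// of the pattern described above, with an object standing in for a div).
function fadeOut(el, duration, done) {
  el.opacity = 0;        // simplified: jump straight to the final state
  if (done) done(el);    // notify the caller the effect is complete
}

const div = { opacity: 1 };
let callbackRan = false;

fadeOut(div, 500, function (el) {
  // In a real test, the assert (plus the call to start) would live here,
  // so it runs exactly when the fade finishes, not on a guessed delay.
  callbackRan = (el.opacity === 0);
});
```

The design win is that the timing knowledge stays inside the code under test; the test no longer has to pad its own timeout to be "slightly longer" than the effect.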
Now, running this adjusted test we can see that it's still passing. Now, the last reason for asynchronous tests is to test ajax. I'm only going to mention this to be thorough; in reality, you should never unit test an ajax call. Instead we should be using some kind of test double, and we will go into one of the techniques for doing that later on in our module on mocking. So let's recap what we covered in this module. Using the asynchronous tests within QUnit we can test setTimeout and setInterval calls. We can test UI effects, and we can test our ajax code. But as I said before, you should avoid that at all costs. You should write some kind of abstraction layer over your ajax calls, and then you should use test doubles instead of trying to test your code that makes ajax calls.
In this last section of the QUnit module we're going to look at four last tidbits of information about QUnit. The first of these is the noglobals setting. The noglobals setting tells QUnit to fail a test if it introduces any global variables. Let's look at how that works. I'm going to write a quick little test which actually introduces a global variable. Inside this test, the first thing I'll do is create a new variable called globalVar and set it equal to 3. You can see that since I'm not putting var at the beginning of this, it's actually creating a global variable. Next, I'll create a quick assert that verifies that globalVar equals 3. Now let's run this test in the browser. We can see that it passes. Now I'm going to go up here, and I'm going to check the noglobals option. This immediately reruns the test, and we can see that the test is now failing. And we get this little message that says it introduced a global variable named globalVar. Now, to fix that I only have to go up here and put var in front of the variable name so it's not creating the global variable; rerun it, and of course it's now passing. The next thing we'll look at is the notrycatch setting. In order to show this, I'm going to need a new test. So I'll duplicate the existing test. I'll come in here and rename this to a more appropriate name. I'll call it hidden exception, and then I don't need the code that's in here. Now, let's take a look at this code that I've already written. I've got a class that has a function called doSomething. And you can see that all it does is throw an exception. So I'll go into my test, and I'm going to call that doSomething, which is just going to throw an exception. Now, let's run this in the browser, and you can see that our test is failing. But the problem is the error message isn't very helpful at all. It says died on test number 1, undefined, but that's not the actual problem. The problem is I'm getting an exception.
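Stepping back to the noglobals setting for a moment: the kind of check it performs can be modeled in plain JavaScript. This is a sketch of the idea, not QUnit's actual implementation: snapshot the global object's keys before the test, then see what appeared afterwards.

```javascript
// Sketch of a noglobals-style check (not QUnit's real code): compare the
// global object's keys before and after the test body runs.
const keysBefore = new Set(Object.keys(globalThis));

// A test body that forgets `var` effectively does this:
globalThis.globalVar = 3;

// Anything that appeared on the global object is a leak:
const leaked = Object.keys(globalThis).filter(function (k) {
  return !keysBefore.has(k);
});
console.log(leaked);

delete globalThis.globalVar;  // clean up after the demo
```

With the `var` in place, the assignment lands in function scope instead, the keys match, and the check passes.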
QUnit actually wraps all the tests inside of a try/catch block, so any exceptions that are thrown inside of your code are suppressed. So let's go up here and check the notrycatch option, and that will rerun our test. And now we can actually see the true reason why the test is failing. So let's go back to our code, and we will fix this by commenting out the call to doSomething, and then we'll throw in a quick assert. And now let's rerun the test. And everything's passing. Now, the next piece we're going to look at is the expect method. So let's go back into our code, and we'll create a new test. This one I'm going to name expect some asserts. And inside this test I'm going to put in another assert, and then I'm going to call the expect method, and I'm going to pass in 3. So I'm telling my test to expect three asserts. But when I run the test you can see that it's actually failing. And the reason it's failing is that it expected three assertions, but only two have run. So let's fix this by adding a third assert, and then let's rerun the test. And that's now passing. Now, there's a quicker way to do this. I can go over here and remove the call to expect, and instead my second parameter to test will be the number of asserts to expect. Now I'll comment out one of the asserts so there's only two. Run it again, and it's failing. Come back in and uncomment that assert. Run it again. And now we're passing again. So that's a quick way to verify that the number of asserts happening is what you expect. Now, you might be tempted to put this parameter in all the time, but this is definitely not something that you should put in most of your tests. It's really nice for asynchronous tests, to make sure that the asserts you need to run are actually running and that asynchronous code isn't getting delayed until after the test is finished. But in general, for most tests, it's just duplicating information and making your tests more brittle.
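What expect(n) verifies can be modeled in a few lines. This is a sketch of the idea, not QUnit's code: count the asserts that actually ran, and compare against the expected number at the end of the test.

```javascript
// Minimal model of expect(n): count asserts, compare at test end
// (a sketch of the concept, not QUnit's implementation).
function makeExpectation(expected) {
  let ran = 0;
  return {
    ok() { ran += 1; },                     // stands in for any assert
    verify() { return ran === expected; }   // checked when the test ends
  };
}

const exp = makeExpectation(3);
exp.ok();
exp.ok();
const failsWithTwo = !exp.verify();    // only two asserts ran: test fails
exp.ok();
const passesWithThree = exp.verify();  // third assert ran: test passes
```

This is exactly how the count guards against the asynchronous false positive from earlier: if a callback never fires, its assert never increments the count, and the test fails instead of silently passing.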
Now, the last piece of information we're going to look at about QUnit is the events in QUnit. There are quite a few events. Here's the list of events, and for the most part they're pretty self-explanatory. The only ones that could be a little bit confusing are the log event, which actually fires every time an assert runs, and then start and done, which fire at the very beginning of the test run and at the very end of the test run. So let's look at these events in action. I'm actually going to copy in some prewritten code that just goes and listens to each event, and then logs out a message based on the event. I'll comment out all but one test, and then we'll go and run this in the browser. I'm going to expand the area for Firebug so that we can see the console messages a little bit easier. After I run the tests, you can see that we're getting a message for when we started the whole run, for the module, for each test, for each assertion, and then when we're completely done. Now, the main reason you're going to use these events is actually when integrating with a CI system. Most of the time, listening to these events is not going to be useful in your testing. But if you are integrating with a CI system, printing out the right output when your tests pass or fail is going to be critical for letting the CI system know whether or not tests pass or fail. And that's the major reason why you would use events in QUnit. So in this section we covered four things. The noglobals setting, which makes sure we don't introduce any global variables. The notrycatch setting, which keeps QUnit's try/catch block from hiding any exceptions thrown inside our code. The expect method, and the corresponding expect parameter, which lets us verify the number of asserts that happen in a test. And last but not least, the events that QUnit fires, which we can listen to and then take an action based on what goes on in those events.
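The subscribe-and-report pattern behind CI integration can be sketched with a tiny event emitter. The emitter here is a stand-in for QUnit's callbacks (the event names and result shapes are illustrative, not QUnit's exact API), but the flow is the same: subscribe to lifecycle events, then emit CI-friendly output as results arrive.

```javascript
// Sketch of subscribing to test-runner lifecycle events; the emitter is a
// stand-in for QUnit, and the event names/payloads here are illustrative.
const handlers = {};
function on(event, fn) {
  (handlers[event] = handlers[event] || []).push(fn);
}
function emit(event, data) {
  (handlers[event] || []).forEach(function (fn) { fn(data); });
}

const ciLog = [];  // the lines we'd print for a CI system to parse
on('testDone', function (r) {
  ciLog.push(r.name + ': ' + (r.failed ? 'FAIL' : 'PASS'));
});
on('done', function (r) {
  ciLog.push('run finished, ' + r.failed + ' failed');
});

// Simulate a run firing its events:
emit('testDone', { name: 'adds two numbers', failed: 0 });
emit('done', { failed: 0 });
```

A CI server then watches the runner's output for those lines to decide whether the build passed.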
Introduction to Jasmine & TDD
Setting up Jasmine
Setting up Jasmine is a relatively simple process. Of course the very first step is always to go and get the source code. You can get the source code here, or as always, you can just Google for it. Once you've downloaded the source code as a zip file, you can extract that and then pull out the relevant libraries to use in your project, but there's a lot of stuff in the Jasmine zip file that actually can be very useful to you. So let's take a look at that zip file now. Here's the extracted zip file. You'll notice that it has three directories and an HTML file. That HTML file is a sample file for how to structure an HTML file to run your Jasmine tests. It's useful to note that in Behavior Driven Development, tests are usually called specs. So everywhere you see the word spec, like in the SpecRunner file, that really just means test. We'll take a look inside that HTML file in a minute. The lib directory actually has the Jasmine source, so let's go inside there. There's going to be a folder inside there for the specific version of Jasmine that you've downloaded. Within that, there are four files: the license file, which is not very interesting; a CSS file, which styles the page that shows your Jasmine tests; and then we've got two files here that actually have the Jasmine source code. The first one is the core of Jasmine itself; the second one, the jasmine-html file, is actually the HTML reporter for Jasmine. Jasmine is set up in such a way that reporting the results of your tests to an HTML page is only one of the supported ways to report the results of Jasmine tests. There are other reporters that you can find online that report to XML and other formats. Now, going back to the root of the zip file, we've got a couple of other directories that are of interest to us. The spec directory actually has sample tests that we can use as a reference point for when we're writing our own tests. There are two files in the spec directory.
The first one, PlayerSpec, contains a bunch of sample tests that you can use as a reference when writing your own tests. The second one, SpecHelper, is an example of how to write your own custom matcher, or assert, in Jasmine. And the last directory, src, contains a couple of source files that are used in those sample tests. Now let's take a moment and look at the SpecRunner file itself and how it's organized. The HTML file that you use in order to run your Jasmine tests looks like this. If you remember the test runner file from module one for QUnit, you can see that these two files are organized quite similarly. At the top, we've got our CSS file. Then we have the libraries that we use for testing. If there are any third party libraries that your source code will need to run, you'd put those here as well. Then Jasmine suggests that we include our spec files, or test files, next. And the last piece is to include our actual source files that we're going to be testing, our system under test. I suggest you swap the location of your spec files and your source files, because depending on how you write your code and how you write your tests, you might actually get into a situation where the test files need to be included after the source files. The very last piece of this file, which is quite different from QUnit, is a large section of code which is actually used in order to launch Jasmine. I suggest that you do yourself a favor and take all of this code and extract it out into a separate file and then just include that file in each of your SpecRunners. If nothing else, it'll just clean up your SpecRunner file. So after you've gotten your source code, the next thing you're going to do in order to run your Jasmine files is to create your own SpecRunner file. And of course it's easiest just to copy the sample included with the Jasmine source code.
Once you've got your SpecRunner file created and are including all the third party libraries that you need and all the source code that you're going to need to test, the next step is to actually create the test files that will test your code. Let's take a look at the simplest possible setup we can have for running Jasmine. I've created here an extremely simple Jasmine setup. I've got the Jasmine source files, I've got a spec file, and I've got a SpecRunner file. I don't actually have any system under test or source code files created, because they're not technically required in order to run a Jasmine test. Let's take a look inside the SpecRunner file. You can see in here I've still got the standard style sheet and Jasmine includes. My source code file section is blank. My spec files section just has that one spec file that I've created. Let's take a look at that spec file. Don't worry about what this code actually does or means. We'll cover that in depth. Let's just see how this looks when we run it in the browser. This is how the browser looks when running that one SpecRunner file. It's extremely simple and doesn't really have too much to it. The main part that you want to pay attention to is that green bar across the center, which tells us that we have one spec and zero failures. So that means that our spec passed. And that's how easy it can be to run Jasmine tests. Next, let's take a look at how to organize our tests.
When organizing our tests, the first thing we should consider is how we organize our folders and our files. Let's say that you've got the following project structure. Inside this project structure we've got a couple of directories for a couple of modules that we're using, and inside each of those modules we've got one source file. When setting up our test code we're going to want to use something like the following. It's nice to be able to create a single root directory for all of our test code. Within that, a library directory that lets us put all the third party test specific code, such as the Jasmine source code and its CSS file, etc. Then we want corresponding directories and files for each of our source code files. So you can see I've got a Module 1 and Module 2 directory, just like the original Module 1 and Module 2 for our source code. In addition, I've got a spec file for each of the source code files. Just like QUnit, there are several options for grouping our tests within our test runner files. We can go with the option of just having one test runner file and have all of our specs within that, or we can create multiple test runner files and run different specs for different source code files within each one. I've got another option showing over here where we have three different test runner files. In the first one we have one test file and two source files. In the second one we have two test files and one source file, and in the third, two test files and two source files. It's best if you choose the organization that makes the most sense to you and to your project. Now that we've got our files and our folders organized, the next thing we want to do is look at ways that we can organize our tests within Jasmine. Test fixtures within Jasmine are organized using a describe block. Describe blocks let us group up sections of code. One of the nice things about describe blocks is that they can be nested. Here's an example of a describe block.
It's a function that takes in two parameters. The first parameter is the name of the describe block, or a description of it; the second parameter is a callback function that actually contains the test code. Let's look at another example. In this example here, we've actually got two describe blocks, one nested within the other, to show exactly how we can nest describe blocks. In this one we have a user describe block, and the next describe block has the when new description to indicate that the tests within it correspond to when new users are created. In Behavior Driven Development, we should consider that we are taking all the descriptions of our describe blocks and concatenating them together with the names of our tests. So if we have a test called should have a blank user name, then when we concatenate all the descriptions together, we get user when new should have a blank user name. It's nice if we end up with sentences when we concatenate all of the descriptions together, because there are many tools out there that will take all those descriptions and put them together into a form of documentation for you.
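The concatenation described above is simple enough to sketch directly, using the names from the example (the helper function here is illustrative, not part of Jasmine's API):

```javascript
// Sketch of how nested describe names plus a test name join into a
// readable sentence (fullDescription is a hypothetical helper).
function fullDescription(path) {
  return path.join(' ');
}

const sentence = fullDescription([
  'user',                              // outer describe
  'when new',                          // nested describe
  'should have a blank user name'      // the it() name
]);
// 'user when new should have a blank user name'
```

Documentation tools walk the describe/it tree and emit exactly these joined sentences.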
Writing Jasmine tests is rather easy. Once we have created a describe block, the it function is how we create an actual test within Jasmine. In addition to the it function, which creates an actual test, we also have ways to group common setup and teardown for groups of tests. These are the beforeEach and afterEach functions. In Jasmine, expectations are called Matchers. Jasmine has a large set of built in Matchers, which we will go over, but sometimes it is beneficial for clarity to create a new Matcher. With Jasmine, it's simple to create custom Matchers, and we will look at how to do that in a minute. Let's take a look at some samples. The it function is the container for each unique test in Jasmine. It must be nested within a describe function. Let's look at an example from the sample code that is provided with the Jasmine zip file. Here we have a describe block with a description of player, and inside of it we have a single test called should be able to play a song. When we put those two descriptions together we get player should be able to play a song, which is a nice description of a feature of our system. You'll see that the it function works just like the describe function. It's a global function that takes two parameters, the first of which is a description of the test, and the second is a callback function that actually contains the code and assertions that we'll need in order for our test to work. The beforeEach and afterEach functions are also very simple to use. They're actually simpler than the it and describe functions because they don't even take in a description. You can see here I've got a describe for a class called user. And I've created a variable called SUT. I've created it outside the beforeEach so that it's in scope when I get down to my test. In my beforeEach function I'm creating the new SUT, and in my afterEach function I can put any cleanup code that I need.
This way the it function itself can assume that the SUT has been created and initialized. As I said before, Jasmine has a large set of built in Matchers. Let's take a look at them. Each of them works off of the global expect function. The first Matcher is the toEqual Matcher. The toEqual Matcher is a very complex Matcher that will check whether or not two objects are equal, or two arrays are equivalent, or other complex structures are equivalent. The toBe Matcher is much simpler. It simply uses the triple equals comparison. The toMatch Matcher uses regular expressions. The toBeDefined Matcher checks that a value is not undefined. The toBeUndefined Matcher compares against undefined. The toBeNull Matcher compares only against null. ToBeTruthy compares against any truthy value. ToBeFalsy compares against any falsy value. ToContain is used for finding items in an array. ToBeLessThan is the mathematical less than comparison. ToBeGreaterThan is the mathematical greater than comparison. And the last one is the toThrow comparison, where you pass a callback function and expect it to throw a particular exception. Now, it's important to know that you can negate each of these Matchers by adding a not in front of them, which really increases the versatility and the expressiveness of all of these Matchers. The next thing we'll look at is how to create custom Matchers. Custom Matchers are typically created in the beforeEach function. The this.addMatchers function is what we use to create a new Matcher. Let's look at a code example. In here, inside of a beforeEach function, I've created a new Matcher by calling this.addMatchers and passing in an object with the key toBeFive, with a value of a function that returns a boolean. The result of that boolean determines whether or not the Matcher passes or fails. Inside a Matcher, this.actual contains the actual value that you're comparing against, and even though this code example doesn't show it, you can also add parameters that are passed in to your Matcher function.
You can then use those parameters in your comparison. The Matcher that we created here will have a very unhelpful error message if it fails, so customizing the error message is very important. Thankfully that's very easy. In order to customize the message inside of your Matcher function, all you have to do is set this.message equal to a function that returns the string that you want to be your error message. Because of closures, you can use all the values that are available in the actual comparison of the Matcher. Now that we've seen how to write tests, let's give it a try. All right. So in order to start writing some tests, let's do a simple example. Here I've got a little calculator class, and I'm going to add a couple of methods to it. First we'll add an add function that takes in two arguments, and we'll just return a plus b. And next we'll add ourselves a little divide function. That'll take in two arguments as well, and we'll just return a divided by b. All right. So now that we've got our class ready, let's go ahead and write some tests for it. I'm going to switch over to the spec, and here I've got this empty describe function for my calculator class. I'll write a first test using the it function. In this test I'm just going to test that we can add two numbers. So I'll say that it should be able to add 1 and 1. And our second parameter of course is that callback function. Within that I'm going to set my expect, and here I'll call our calc. Well, we're going to have to declare our calc first and initialize it, and then we can expect that calc.add, when we put in 1 and 1, will be 2. And close this up. Now we're ready to run this in the browser and see how it works. Jump over to our browser, run the tests, and we can see that the test is passing. So let's go and try another variation on that. This time we will test the divide function, and we'll test that we can divide 6 by 2. So we'll create our calc again.
And then we'll expect that calc.divide of 6 and 2 will be 3. Now, we can see that we've got a problem here, because we've got a little bit of duplicated code in our test. So here's a great excuse to use the beforeEach function. In our beforeEach function we will move this line of code up there, and we just need to take the var and move it outside so that it's visible when it gets to the tests. And now we can delete this line of code, and let's see how that works in the browser. Okay. Both tests are passing. That's great. So let's try another variation. This time, let's say that we can divide two numbers that will produce a non-integer result, so we'll choose 1 and 3; that way we get non-integer division going on, and that'll give us another variation in our tests. So we'll divide 1 and 3. Now, in order to test this, if I try something like 0.33333, then we're going to get a failure, because it's not exactly what the result of the division is. So we can say that we will expect it toBeGreaterThan 0.3, and that passes. But that alone doesn't really make the test correct, so let's try to pin it down by also testing that it's less than 0.34. And now the test is passing. Unfortunately these two expectations kind of point out the fact that if we had a single expectation that did both parts of that, sort of like a between expectation, that would work out a lot better. So this is a good opportunity for us to write our own custom Matcher. In order to do that we just go up into our beforeEach function. We call this.addMatchers, and we'll call our Matcher toBeBetween, and that's going to take in two parameters, the low value and the high value. And it's going to return that this.actual is greater than or equal to the low value and that this.actual is less than or equal to the high value. Close that up. And now we can go down into our tests.
We'll change this expectation to use our new Matcher, and then we can remove that second Matcher, run it in the browser, and we can see that all of our tests are still passing. So here you can see that writing tests and writing our own custom Matchers is actually quite easy to do.
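The calculator and the toBeBetween matcher from this walkthrough can be sketched as plain JavaScript so the logic is checkable outside of Jasmine. The matcher body follows the Jasmine 1.x convention where `this.actual` holds the value under test; the surrounding harness here is just for the demo, not Jasmine itself.

```javascript
// The calculator from the walkthrough, sketched as plain JavaScript.
function Calculator() {}
Calculator.prototype.add = function (a, b) { return a + b; };
Calculator.prototype.divide = function (a, b) { return a / b; };

// The matcher body that addMatchers would register as toBeBetween;
// in Jasmine 1.x, this.actual is the value passed to expect().
function toBeBetween(low, high) {
  return this.actual >= low && this.actual <= high;
}

const calc = new Calculator();
const sum = calc.add(1, 1);                 // 2
const third = calc.divide(1, 3);            // non-integer result

// Standalone check of the matcher logic, binding this.actual by hand:
const inRange = toBeBetween.call({ actual: third }, 0.3, 0.34);
```

In the real spec, `expect(calc.divide(1, 3)).toBeBetween(0.3, 0.34)` replaces the pair of toBeGreaterThan/toBeLessThan expectations.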
Running Jasmine tests is a lot like running QUnit tests. As we discussed before, they run inside a browser. Also, by default, Jasmine doesn't show us passed tests, so in the Jasmine UI there's a check box that we can check that'll allow us to see what tests have actually passed rather than just the failing ones. Also, by default, Jasmine does not display skipped tests, so there's a check box for that as well. Jasmine also has some filter options so that we can run just one test at a time or run one describe block at a time. These filtering functions are implemented rather similarly to the way QUnit allows us to filter tests. Jasmine has a great little feature that makes it easy to skip tests. All you have to do is add an x in front of either the describe function or the it function. So let's look at an example of each of those. In this first one we've changed our it function to an xit function, and that will skip that test. In this next example we've added an x in front of our describe function and changed it to an xdescribe function, and now all the tests within that describe block are going to get skipped. Now that we've learned about running tests, let's actually try some of what we've learned. So I've taken the calculator that we built in the last exercise and I've modified it just a little bit. Now I've got three passing tests, but this test right here is going to fail. So let's run that in a browser. Okay. We can see that we've got four tests and one of them is failing, and you can also see that by default only the failing test is being shown. So let's go over here and click the passed check box, and now our three passing tests are visible. Now, if we come over here and click this skipped check box, it would normally show us any skipped tests, but with the current build of Jasmine, skipping a test with the x actually hides it completely from Jasmine, so this check box isn't really doing anything. Another thing we can do is run just a single test.
We do that by clicking on the name of the test. So if we want to run just our failing test, we can click on that. Now you can see it's only running one spec, and that spec is failing. We can also run all of the tests in the describe block by clicking on the describe label, and that again goes back to running all four of our tests with one of them being a failure. So now let's look at how we ignore tests. We'll go back into our code and we will change our failing test's it to an xit. Save that, go back to the browser, and refresh. And now we can see it's only running three specs and none of them are failing. So it's skipping our failing test. All right. Let's go back to our code and move the x all the way up to describe and rerun our tests, and now it's not running any tests, because we're skipping everything in our describe, which is the only describe within the entire test suite. So as you can see, there are a few options for running Jasmine tests and for filtering them.
Integrating with the DOM
Integrating Jasmine with the DOM is a lot like integrating QUnit with the DOM. Anytime you test the DOM, you really have to consider whether or not your test is appropriate. We cover this concept in depth in the QUnit module, so if you didn't watch that module, now might be a good time to go back and watch just the portion about testing the DOM with QUnit, specifically the part about when and why to test the DOM and when and why not to. Once you've decided that you really do want to write those kinds of tests, then you're going to have to write some additional setup and teardown. QUnit had one little facility that made this a bit easier. Jasmine does not ship with this, so it's something you have to do by hand, and we'll show you an example of how we can handle this manual setup and teardown so that we can manipulate the DOM. So let's go ahead and see what this looks like in code. We're going to take the calculator that we used in our last section and modify it just a little bit. In this version of the calculator, let's take in an element and hold on to it, and then we'll come down into our add function, and instead of returning the result, we're going to modify that element and set its html to the result of the operation. Let's do the same thing for our divide function as well. ( Pause ) All right. Now that we've got that, let's go look at our spec. The first thing we need to do is track the element that we're going to manipulate. So let's create a little variable that will hold the element's ID, and we'll call our element calc-fixture. Now that we know the ID, let's go ahead and get a handle to that element and pass it into our calculator constructor. Okay. So now our calculator is getting the element. Unfortunately, when we run our test, this element is not going to exist. So let's switch over to our SpecRunner. 
We'll go down to the bottom, and inside the body we're going to create a new DIV. ( Pause ) So here we've got our element that we can manipulate. Back in our spec, let's test the addition function using that element. We'll call calc.add, and then we test that the element has the text that we expect. ( Pause ) All right. Let's go ahead and run that and see how it looks. We run this in Chrome, and we can see that we are passing our test. But let's go back to our code and discuss a problem that we've got. In our test we're testing against just this one DIV down at the bottom; we're manipulating it directly. Since we're just changing the inner content, it's not really that big of a deal, but what if we were adding some events, some click handlers, all that sort of stuff? We'd have to clean up after ourselves so that any further tests weren't polluted. We can do that by going into our tests and creating an afterEach cleanup, but each different test might actually change the DOM in different ways, so trying to clean up after how each different test manipulates the DOM can get out of hand pretty quickly. A much better way is for us to create a sort of template, and then in every test we take that template, copy it, and insert it into the DOM. Then we know that we've got a fresh, pristine DOM element that looks the way we want in every test. So let's look at how to implement that. The first thing I want to do is go back into the SpecRunner and wrap the DOM element that I want inside of a different DOM element. ( Pause ) Then I'm going to change the ID on my template to end in -template. That way, when I add in a new copy of it, I won't have a duplicate ID. Okay. So now we've got this template. Let's go back into our spec, and in the beforeEach we'll manipulate the DOM. We grab the body, and we're going to append into it the contents of that template. 
( Pause ) We grab the html, and then we replace the html and remove that -template suffix. ( Pause ) Now that we've done that, it should create a new DOM element. Let's just check and see if our test is still passing. Okay. So we're still passing, but now we've got the problem that we're adding in a DOM element. Every time we run the tests it's going to add in another DOM element, and if we have two or three tests, we're going to keep adding in DOM elements. Duplicated ones. So we need to clean up after ourselves. We do that by going into our afterEach, where we just grab that DOM element ( Pause ) and call remove. ( Pause ) Let's go ahead and check that out in the browser. And now we've got a pattern whereby we can run tests against the DOM, manipulate it, and then clean up after ourselves and refresh the DOM for every test. It wouldn't be very difficult to add in different templates so that different tests can have their own template html, although after a while that will get a little unruly. So again, consider carefully when you do and do not want to be testing the DOM, but this is a good method for accomplishing just that.
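Under the assumptions above (jQuery is loaded, the template in SpecRunner.html has the id calc-fixture-template, and the calculator constructor is named Calculator), the setup and teardown might look roughly like the following browser fragment. Treat it as a sketch of the pattern, not the course's exact code:

```javascript
var fixtureId = 'calc-fixture';
var calc;

beforeEach(function () {
  // copy the template's markup into a fresh element without the -template suffix
  var html = $('#' + fixtureId + '-template').html();
  $('body').append('<div id="' + fixtureId + '">' + html + '</div>');
  calc = new Calculator($('#' + fixtureId));
});

afterEach(function () {
  // remove the copy so the next test starts from a pristine DOM
  $('#' + fixtureId).remove();
});
```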
Integrating with CI
Integrating Jasmine with CI is very similar to integrating QUnit with CI. A lot of the same considerations apply when using Jasmine with CI as when using QUnit, so I again recommend that if you haven't watched the QUnit module, you go back and review the portion on integrating QUnit with continuous integration. Let's go ahead and start looking at how we integrate Jasmine with continuous integration servers. The basic approach to integrating with CI is to use PhantomJS and capture the output. You can get PhantomJS at the URL listed here, or you can Google for it. You're going to need a couple more pieces before you're finished. The first one is the Jasmine TeamCity reporter. Remember, Jasmine uses reporters to output its results, and the TeamCity reporter outputs the results of tests in a format that TeamCity will work with. The other piece that you're going to need is the run-jasmine.js file, which comes with the PhantomJS distribution. You just need to grab that and copy it into your distribution directory. Let's go ahead and take a look at the code. Here are the files that I'm going to need. I put them all in a single directory for convenience's sake, but in a production system you'll almost certainly have some of these files in their own directories. I've still got my calculator and my calculator spec. I've simplified that calculator spec down to just a few tests that all pass. Then I've got my Jasmine JS and CSS, and then you can see the Jasmine TeamCity reporter. I still have the Jasmine html reporter in there; that makes it easy for me to run the SpecRunner both for TeamCity and for when I'm building and running my tests locally. Then I've got the PhantomJS executable and that run-jasmine.js file, which I downloaded from GitHub. And of course, the last thing is my SpecRunner. This is the only file I had to actually modify in order to get Jasmine running under CI, so let's take a look at that. 
I had to make a couple of changes in order to get Jasmine running under CI. The first thing I had to do was include the Jasmine TeamCity reporter as a script in my SpecRunner file. The second thing was to go down to where the reporters are actually declared, create a new TeamCity reporter, and add that reporter to the Jasmine environment. Now that that reporter is there, TeamCity will be able to parse the results of running the tests. So now let's take a look at how we actually configure TeamCity to run our Jasmine unit tests. You can see I've got a build configuration that I've created named Jasmine CI. I've gone in and added a command line build step, and I named that step run JS tests. It's an executable with parameters. I'm going to need to set my working directory, so I'll set that to where I've put the files. And then of course the executable that I'm going to run is PhantomJS, and the parameters are going to be that run-jasmine file and then the name of my SpecRunner file, which is SpecRunner.html. All right. Now I'll save that. And now that I've got that going, I can go back to my projects and run my Jasmine CI configuration. And you can see that all three tests have passed. Now let's go back into the source code, modify our spec, and make one of the tests fail, so we can see what it looks like when one of our tests is failing. Change that to be a 3. Come back, run our configuration again, and now our tests have failed. We can see that we've got one test failing and two passing. Let's go back in and fix it. Run it again. And we are back to green. So you can see that the steps to get Jasmine to run under continuous integration are pretty straightforward and rather simple. ( Pause )
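The two SpecRunner changes described above amount to something like the following. The script filename and the reporter constructor name are assumptions based on the jasmine-reporters project and may differ by version, so check against the files in your own distribution:

```javascript
// In SpecRunner.html, alongside the other <script> tags:
//   <script src="jasmine.teamcity_reporter.js"></script>

// Then, where the reporters are declared:
var jasmineEnv = jasmine.getEnv();
jasmineEnv.addReporter(new jasmine.TeamcityReporter()); // emits TeamCity service messages
jasmineEnv.execute();
```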
In the last section, we looked at how to test asynchronous code with Jasmine. As we saw there, the clock.useMock method is actually quite an elegant way to test code that uses setTimeout or setInterval. The resulting tests are really rather readable and a good way to test that code. Unfortunately, the same is not true of the runs and waitsFor methods. As you look at that code, it's quite obvious that you're hiding a lot of what the test does inside of those runs and waitsFor methods, really obscuring the intent of the test and making it a lot less readable and therefore a lot less maintainable. So in this section I'd like to show you an additional way to test that code that uses an add-on to Jasmine created by a community member named Derick Bailey. This add-on is called Jasmine.async, and this is the URL for it. The goal of this add-on is to make writing asynchronous tests just a little bit cleaner with Jasmine, and I think you'll agree that using it to replace runs and waitsFor actually makes your code quite a bit more readable. So let's go ahead and code up an example. Here I've got the calculator that we were using before. I've simplified it a little and left just the hideResult method in there. Everything else is pretty much the same. And in our calculator spec, I've basically left the same beforeEach and afterEach that we had before but removed the other tests. Now let's go ahead and see what a test looks like using Jasmine.async. The first thing that's a little bit different is that when writing an asynchronous test with Jasmine.async, we go up to the very top and create an async variable. ( Pause ) We're creating an instance of the AsyncSpec class, which is the main part of the Jasmine.async library. Now when I want to actually write a test, I'm going to call async.it instead. Other than that, the test works exactly the same. 
So here we're going to say it should make the result invisible. Now at this point we get our second main difference between Jasmine.async and the runs and waitsFor methods. We're actually going to include an argument called done. That argument is a function that we call in order to tell Jasmine.async that our test is complete and it can go ahead and run the next test. Now the code that we're going to be testing is calc.hideResult, so the first thing we'll do is put in our call to hideResult. Remember, we need to pass in a callback, so let's go ahead and define that up here. ( Pause ) As you remember, our callback is the last thing that gets called, after the fade out is complete and the element has been made invisible. So that's the correct point for us to call that done method and let Jasmine.async know that we're done. Now the other thing that's missing is our expectation that verifies that the element is now invisible. We can't put that expectation down after the call to hideResult, because the element hasn't finished fading out at that point, so the expectation would fail. But when this callback gets called, the element actually has finished fading out, so we can put our expectation right in here. ( Pause ) Now it is a little bit funny to be putting our asserts above the actual call to the code that we're testing, but when writing asynchronous tests we sometimes have to make some concessions. So let's go ahead and run this in the browser and see how it works. Okay. Our test has passed. Now it's still taking the time to let the element fade out, so with too many of these tests your test suite will take way too long to run, but the code that we wrote is a lot more readable. Let's go back and compare that code side by side with the test that uses runs and waitsFor. Here are those two methods. 
Now you can see, just glancing at this code, that not only is the Jasmine.async code a lot shorter but it's definitely a lot more readable and therefore a lot more maintainable. So when writing asynchronous tests, if you're using setTimeout or setInterval inside your code, use clock.useMock to test that code, since that will let you avoid waiting out the actual timeout intervals. But if your code doesn't use setTimeout or setInterval, then using Jasmine.async is definitely better than using the runs and waitsFor methods.
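Putting the pieces of this section together, a Jasmine.async test might look like the sketch below. The element id, the fixtureId variable, and the hideResult signature are assumptions carried over from the course's calculator example:

```javascript
// AsyncSpec comes from the Jasmine.async add-on
var async = new AsyncSpec(this);

async.it('should make the result invisible', function (done) {
  // the callback fires only after the fade out has finished,
  // so the expectation and done() both belong inside it
  var callback = function () {
    expect($('#' + fixtureId).is(':visible')).toBe(false);
    done(); // tell Jasmine.async this test is complete
  };
  calc.hideResult(callback);
});
```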
Setting up Mocha
Writing & Running Tests
Writing tests in mocha is pretty comparable to QUnit or Jasmine. In fact, one of the great things about mocha is that it can truly be like both QUnit and Jasmine. Mocha supports both TDD and BDD style tests, so you can write tests in a manner similar to how QUnit works with the TDD style, or in a style similar to Jasmine with the BDD style. Now it's important to note that TDD and BDD style tests in mocha really have very little to do with test driven development and behavior driven development; mostly it's just a name for how your tests look. You can write both TDD and BDD style tests with mocha without actually practicing test driven development or behavior driven development. So let's take a look at both of those testing interfaces. We'll look first at how TDD style tests look in mocha. These tests will look really similar to QUnit. The first thing we need to do is come into our spec runner file, and in the mocha.setup call we need to pass in the string 'tdd'. This tells mocha that we are intending to do TDD style testing. Then we come into our test file and we can begin writing our tests. Now, much like QUnit, we start off with a suite function, and we can give it a name and then pass in a callback. In the case of TDD style tests, which is a little different from QUnit, we actually wrap our entire set of tests within the suite function. Then we can write our first test by calling the test function, giving it a name, and passing in a callback, and in that callback we write our actual test. Now we can run this in our browser and see how it looks. And here we've got our first test passing. Now let's go ahead and look at some of the other options that we've got with TDD style testing. Just like QUnit, we've got the option to write our own setup and teardown functions, and just like QUnit, they're simple: they just take in a single callback. And let's write a teardown. 
And here I'll just log out again. Now mocha offers us a couple of additional options. First, it's got a before function, which gets called before any other test gets run, and it only gets called once, unlike setup, which gets called before every test. And it's got a similar function called after. So let's go ahead and put a log in our test, and let's see this in action. I'll refresh the page, and you can see that we're calling our before first, then setup, then the test, then teardown, then after. Now this interface is a little bit different from QUnit. If you actually want your tests to really look like QUnit, there's another switch you can give it: instead of 'tdd', you give it the string 'qunit'. For that to work, you change your suites to be just like they were in QUnit, so instead of wrapping the tests, you just have the suite call beforehand, and it just takes a name. In the qunit style, setup and teardown aren't supported; instead it's beforeEach and afterEach. I can run that, and we see that things are still exactly the same, only now our tests look exactly like QUnit. So if you are migrating from QUnit, this is an easy way to make that happen. Now let's switch back to the TDD style interface, and I'll show you one more feature. We're going to change these back to setup and teardown and make this back into a callback. And one of the things we can actually do is nest our suites. We're going to create an inner suite and give it a callback, and then in here I'll just create another test with its own callback, and this one's just going to have another simple expect. All right. Let's go ahead and run that in the browser and see how it looks. Okay. So now you can see that we're actually nesting our suites inside of each other. That's a nice feature of the TDD style tests that you don't get with the QUnit style. If we change this over to QUnit style, we have to go back and change our suites. 
And change my setup and teardown to a beforeEach and afterEach. I refresh my browser, and you can see that they've changed: now they're all on the same level. So now let's look at the BDD style tests. One of the things that changes is that instead of a suite we use describe; again, these are going to be a lot like Jasmine. We're going to wrap everything in a function with a callback, and instead of test we have the it function. Change that, change that to describe, and beforeEach, afterEach, before, and after are going to be the same in BDD style tests as they were in the TDD style tests. I'm going to change this other test to an it. Now of course we need to go over and change our style to 'bdd'. Let's run that in the browser. So if you look at the output, you can see we're still getting the same result. Before and after are called only once, at the very beginning and end. The beforeEach and afterEach are called before and after each test. And just like TDD style, we can nest our suites, or describe blocks, within each other. So that's how to write tests in the TDD style, its related qunit style, and the BDD style interface inside of mocha. So now that we know how to write tests in mocha, let's look at some of the other features that it supports. Mocha will allow you to filter your tests just like we saw in QUnit and Jasmine, and it's very similar to them. Let's go back to our same suite, and I'm just going to come in and click on the outer suite. Now you can see what it's done: up at the top it's added a grep URL parameter. This didn't actually do anything different, because it still includes all of my tests. So I'll click on this inner suite, and now you can see that we're being filtered and only the tests within the inner suite are being run. Now in addition to actually clicking on these, that URL parameter is hackable, so I can go in here and type in whatever I want. 
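To see that the two interfaces differ only in function names, here's a runnable sketch with hypothetical one-line shims standing in for mocha's real suite/test and describe/it globals (real mocha registers tests for later execution; these just record the calls so the shape of each style is visible):

```javascript
// Hypothetical shims that record each registration call.
var calls = [];
function suite(name, fn)    { calls.push('suite:' + name); fn(); }
function test(name, fn)     { calls.push('test:' + name); fn(); }
function describe(name, fn) { calls.push('describe:' + name); fn(); }
function it(name, fn)       { calls.push('it:' + name); fn(); }

// TDD style -- enabled with mocha.setup('tdd')
suite('calculator (tdd)', function () {
  test('adds two numbers', function () { /* assert.equal(...) */ });
});

// BDD style -- enabled with mocha.setup('bdd'); only the names change
describe('calculator (bdd)', function () {
  it('adds two numbers', function () { /* expect(...).to.equal(...) */ });
});

console.log(calls); // same structure registered under both interfaces
```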
I'll just type in my -- I'm going to go back to everything that starts with "my." And so this allows you to filter your tests by just typing in phrases that you want to match your tests against. So if I come up here and type in second, you can see it's only running my second test. So that's how you can filter your tests with mocha. Another thing that we can do with mocha is actually view the source code, right from the browser. So let's go back to the browser; I'm going to change my grep and take that off again. And now I'm going to come up here, and instead of clicking on a suite I'm going to click on one of these tests. You can see that it's expanded open. It's actually showing me the code within that test callback, and I can do this for any of the tests, opening and closing them however I want. So this is a convenient little way to remind yourself of what the code looks like in one of your tests. Now this is a change from some of the other libraries, because instead of clicking on a single test to run just that test, we're now seeing the source code. That means there's not quite as convenient a way to run just a single test. You have to use grep for that, or use another feature of mocha: its ability to run tests exclusively. The way you do that is you come into a test and add the only function to it. So let's run that in our browser. And you can see that it only ran the test that had only on it. Now of course you could write that only function on multiple tests, and all that mocha does in that case is run the last one that you wrote the only function on. We can also add that only function to a describe block. And let's go ahead and duplicate this test. And doing it that way, we're just running that one describe block. In addition to running an exclusive test, we can also do test skipping, which is very similar to the only function except in this case you use the skip function. 
Now you can see that we've actually skipped that inner describe block, and all the tests within it are showing up in blue without a checkmark, indicating that they've been skipped. Just like with only, we can put skip on one of the tests, and now it's just skipping that one test. Now there's an alternate way to do this that matches up with Jasmine: you can put an x in front of the test instead of .skip. That way, if you're converting from Jasmine, again, it's a little more convenient because it matches up a little bit more. Mocha also has a convenient way to write pending tests: if you have an idea for a test you want to write but you're not quite ready to implement it, you can just write the it function and then close it off with no callback. When we run it, we'll see that it's showing up as pending, which looks just like skipped. In any case, it's a reminder that you've got something you need to go in and finish. Mocha's also nice because it will detect global variable leaks. So let's go back into our code and create a global variable, say i = 3. Obviously this is an anti-pattern; we're creating a global variable, and that's something we almost never want to do. Thankfully, mocha will actually catch that for us. As you can see, it's throwing an error: a global leak was detected, the variable i. So we can go in and remove that variable, and then mocha's just fine. Now let's say that you actually have a valid global variable, because of some other library that you've included, and mocha's throwing an error on it, but you don't want it to. So let's put our i back in. You can easily configure this by calling the mocha.setup function and passing in an options object, and in this case I'm going to create a key called globals. 
The value that I pass into that is an array of strings, one for each of the globals that I want it to ignore. So I've got an i in there. I go back to my browser and run this again, and you can see it's not failing. If I remove that line of code, we're going to fail again because of that global. There's another way to do this, more of a shotgun approach: instead of globals, we call mocha.setup and pass in the key ignoreLeaks, set to true. When we do that, it doesn't matter how many global variables we've got; mocha will no longer detect them and throw errors. And the last feature of mocha that we're going to talk about is its ability to detect slow tests. So let's go to our browser, and we're going to set a break point on one of our tests. I'm going to put a break point right in here, so our first test, because of that break point, is actually going to take a long time. Run it again, and this time we'll wait and then continue, and now you can see I've got this little red area next to my test indicating how long it took. The red is an indicator that it took too long. So that's a nice way to quickly see which of your tests are taking too long. The threshold here is 75 milliseconds; anything over 75 milliseconds shows up. So let's run this again. And you can see 267. If you're kind of quick about it, you can actually get it into a range where it gives you a little warning saying that this is kind of taking a while, but not necessarily terribly long. So this is a way to detect tests that are slow, but not terribly so. As you can see, mocha has a lot of features for writing and running tests. Its flexibility is definitely one of its greatest strengths, and getting comfortable with these options will be beneficial as you write tests using mocha.
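The filtering, skipping, and leak-detection options from this section, collected in one place as a browser fragment (the spec names are made up for illustration):

```javascript
describe('outer suite', function () {
  it.only('runs exclusively', function () {});       // only this test runs
  it.skip('is skipped', function () {});             // shown without a checkmark
  xit('is also skipped (Jasmine-style prefix)', function () {});
  it('is pending: no callback supplied yet');        // pending test
});

// whitelist a known global so leak detection ignores it...
mocha.setup({ globals: ['i'] });
// ...or switch leak detection off entirely (the shotgun approach)
mocha.setup({ ignoreLeaks: true });
```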
Writing asynchronous tests with mocha is extremely smooth and easy. Mocha was built with asynchronous testing in mind, so the developers did their best to make testing async code as painless as possible. The process for writing an async test in mocha is extremely simple, and if you remember the Jasmine.async plug-in from the last module, it will look very familiar to you, since Jasmine.async was inspired by mocha. This first code example shows how a typical BDD style test looks in mocha, and the second example shows how we can make it into an async test. All we have to do is make our test callback take a done parameter and then at some point call that done function. This particular example is a bit nonsensical, since it doesn't actually show how it would work in a real async test. So let's take a look at a slightly more real world example of doing asynchronous testing with mocha. Here I've got an empty test. I want to make this test asynchronous by adding a setTimeout call. We'll just do a simple assert so we can get to the point, and I'm going to set my timeout to 10 milliseconds. Now if I were to run this as is, it would actually time out, because the done parameter is never called. Once you accept that done parameter, you have to call it; otherwise the test will time out regardless of what you do. So let's go into our setTimeout callback and call done. Let's run that in the browser, and we can see that our test is passing. So even though we have asynchronous code, and you can see here that our assert is actually inside of an asynchronous callback, the test is passing because we added the asynchronous functionality into our mocha test. An important feature of asynchronous testing is the ability to set a timeout for your tests. With asynchronous tests there's the possibility that an expected callback doesn't get called. In this case you want your test to report an error at some point rather than just hanging indefinitely. 
Mocha implements a timeout feature to handle this situation. By default, mocha will time out after 2 seconds, but that timeout is configurable. You can either set it globally, using the same mocha.setup call we saw earlier, or you can configure it per test. Let's look at how to do that. Here I've got pretty much the same asynchronous test, only I've changed the delay to be 2 and a half seconds instead of 2 seconds. Let's go ahead and run this as is. And you can see that after 2 seconds our test has timed out. Now if it had only waited another half second, the test would have passed. So let's change our timeout, and we'll do that globally by calling mocha.setup; we pass in that options object, and this time we're going to set the timeout key. We'll set it to 3 seconds. Let's run our test again, and now after the two and a half seconds the test is passing. In addition to setting this globally, we can also do it on a test by test basis. Let's say that we only want this test to have the 3 second timeout and everything else to have the default 2 seconds. You can set this.timeout to 3 seconds. Let's go ahead and run that in the browser. And our test is still passing. So we can set our timeout either way, based on whatever makes sense. In some cases you may even want to set your timeout shorter than 2 seconds, if you want a test to fail fast because it should be completing very quickly, and on the off chance the test does fail, you don't want to wait around for 2 seconds for it to finally time out. So that's it for asynchronous testing in mocha. As you can see, it's very simple, straightforward, and really easy to do, especially compared to some of the other testing frameworks that are available.
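Combined, the done parameter and the timeout options from this section look roughly like the browser fragment below. Here assert stands in for whatever assertion library you've loaded (such as Chai's), and calc for the course's calculator; both are assumptions:

```javascript
// set the timeout globally for every test...
mocha.setup({ timeout: 3000 });

describe('calculator', function () {
  it('reports its result asynchronously', function (done) {
    this.timeout(3000); // ...or override it per test (default is 2000 ms)
    setTimeout(function () {
      assert.equal(calc.add(1, 2), 3);
      done(); // without this call the test times out
    }, 2500);
  });
});
```

Note that this.timeout requires a regular function for the test callback, since it relies on mocha binding this to the test context.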
Integrating with CI
Integrating mocha with CI is no more difficult than it was with Jasmine or QUnit, and of course the process is pretty much identical. The first thing we're going to do is use PhantomJS and capture the output. So if you didn't watch the module on QUnit, you'll definitely want to go back and watch just the part of that module about integrating QUnit with CI, because that will give you the basic rundown. I'm not going to go into everything in depth here, and this is not a course about TeamCity, so if you don't know TeamCity very well, you'll just have to follow along as best you can. If you're using a different CI system, the steps should be pretty analogous, since most CI systems operate in fairly similar ways. Just like with QUnit and with Jasmine, we're going to start by using PhantomJS. PhantomJS is a headless WebKit browser, and it allows us to run our tests within the context of an actual browser execution environment. You can download it here. Then we're going to use the TeamCity reporter. Remember, mocha has a bunch of different reporters. We used the html reporter for our in-browser reporting, but we're going to use the TeamCity reporter as well, and I'm going to show you how to set that up. And we're going to make our own run-mocha.js file; the code for this is included in the demo code for this course. So let's start by looking at the simplest piece, which is our test. I'm just using a single BDD style test for this. The next thing we have to do is modify our index.html file. Instead of the normal call to mocha.setup where we just pass in the 'bdd' string, this time we're going to pass in a whole configuration object. 'bdd' goes with the key of ui, and then we pass in another key, reporter, and for that one we actually pass in a function. 
This function takes a runner parameter, and the body of the function creates two new mocha reporters, using new mocha.reporters.HTML and new mocha.reporters.TeamCity, passing in that runner parameter. This is the syntax that allows us to use multiple reporters at the same time. The last piece is our run-mocha.js file. I'm not going to explain the code to you, but this is the code that you'll need in order to run your tests under PhantomJS. So let's take a look at our directory and the files that we've got inside of it. Here I've got the directory that I'm going to be testing against. You can see I've got my PhantomJS file, my run-mocha file, the html file, the test file itself, the CSS, and Chai. Chai, of course, is required. The CSS isn't necessarily required, but if you want to run the html file manually to check that the tests are working correctly outside of the CI system, then you're going to want to have that CSS file available. So let's go look at our CI configuration. Under TeamCity, this is how I configured it. I set up my working directory; the command executable is phantomjs, and the command parameters are my run-mocha.js file with a second parameter of the index.html file that I want to run. This is pretty much like we did with Jasmine and QUnit. So now I'll go back to my projects and run that build configuration. And you can see that we've passed: build number 3 is up, and the test passed. Now let's go back in and change the code; I'll make the test fail, and run it again. We can see that indeed the tests are failing, and that's failing the build. So let's go back, make that test pass again, run it one more time, and now everything is passing again. So, as you can see, running client side mocha tests under continuous integration is rather simple and only requires a few steps.
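The mocha.setup call described here, as it might appear in index.html. The reporter constructor names follow the transcript; their exact capitalization can vary by mocha version, so check the mocha.reporters object in your build:

```javascript
mocha.setup({
  ui: 'bdd',
  reporter: function (runner) {
    // attach both reporters to the same runner:
    new mocha.reporters.HTML(runner);     // in-browser output
    new mocha.reporters.TeamCity(runner); // TeamCity service messages
  }
});
```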
Why do we Mock?
The first question we need to consider is why we should mock at all. There are four reasons to mock when writing tests. The first one is isolation. The second reason we might want to mock is because mocking gives us easy flow control in our tests. The next reason we might want to mock is in order to remove interaction with complex or external systems. And the last reason we might want to mock is when we need to test interactions between our system under test and other classes. Let's take a more detailed look at each of these reasons. When we decide we want to isolate some of our code for testing, our motivation is that we only want to test a specific part of our system. For that reason, if our system under test calls another component or class, as is shown here, we don't want that to actually happen in our test, because we decided that we are not testing that other component. So this first scenario is where we want to use a mock object. You can see in the second diagram that I have replaced component A with a mock version of component A, which allows me to test the system under test and only the system under test. Another reason to use a mock object is for flow control. Looking at this code, let us assume that the implementation of component A's checkSomething method is such that it is very difficult to set up the state to get it to return anything but true. So, to ease the burden of creating an initial state in our test that will cause it to return false, we can just use a mock object and tell it to return true when we want and false when we want. That makes our setup significantly simpler and makes our test a lot less brittle. Excluding complex systems is really just a variation of isolation, but our motivations are different. When we isolate a system under test we are trying to test it and only it. In this scenario, our system under test talks to a component that is extremely difficult, if not impossible, to test.
Perhaps it talks to a database or to the network, and setting up test versions of those systems and keeping them clean can be very time consuming and error prone. Or perhaps it talks to an external vendor system and we incur costs whenever we do that. Whatever the reason, if we have a difficult-to-test external component, then we will want to exclude it from our automated tests. So, in this case, a mock object will let us simulate the functionality without involving the live system. Sometimes, we want to test the interaction between two components, but the effect of the interaction may not be easy to observe externally. Take the following scenario. This is a sample save method on an object. In this case, the object needs to tell its persistence mechanism, represented by the persister parameter, to either insert or update based on the return value of the isNew function. Our test may want to assert that we always call insert if isNew returns true, but the real persister object won't be able to tell us if that is what happened. This is where a mock object comes in handy. Most mock object implementations will allow us to ask the implementation whether a certain method was called or not. So, we've covered the basic reasons to use mock objects. There are certainly more reasons to use a mock object than this, but mostly they are just variations of these four.
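To make the interaction-testing scenario concrete, here is a minimal hand-rolled sketch. The names save, persister, and isNew follow the example described above; the recording mock itself is illustrative, not from any library.

```javascript
// The system under test: decides between insert and update.
function save(record, persister) {
  if (record.isNew()) {
    persister.insert(record);
  } else {
    persister.update(record);
  }
}

// A hand-rolled mock persister that simply records which method ran,
// since the real persister can't tell us what happened.
const mockPersister = {
  insertCalled: false,
  updateCalled: false,
  insert() { this.insertCalled = true; },
  update() { this.updateCalled = true; }
};

save({ isNew: () => true }, mockPersister);
console.log(mockPersister.insertCalled); // → true: insert ran for a new record
console.log(mockPersister.updateCalled); // → false
```

Asking the mock "was insert called?" is exactly the question the real persister could never answer for us.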
Types of Mocks
So, now that we have learned about the motivations for using mock objects, let's talk about vocabulary and the different types of mock objects. In general, most programmers refer to all mock objects as mocks, but there are definitely several subtypes of mock objects. Martin Fowler, who was one of the original authors of the Agile Manifesto and a luminary on agile development, has codified these subtypes in a blog entry, and I will follow his classification. First of all, the general term he gives isn't mocks, but instead Test Double. Mocks are just one specific type of Test Double. Unfortunately, this is not a very commonly followed convention. Most programmers will say mock when what they mean is Test Double according to this breakdown. It is still important to know, since different mocking libraries will have different kinds of Test Doubles, and understanding what they mean when they have a mock versus a stub can make a great deal of difference as you are learning to use the library. In this course, I will do what most developers do and generally use the word mock when I am really referring to all kinds of Test Doubles. In most cases, this won't be ambiguous, but I will make every effort to be clear where it is necessary. The first kind of Test Double is the simplest. A dummy is simply an object that is passed around but never actually used. Mostly, dummies are used to fill a parameter list. At most, the type of a dummy is tested, but no properties or methods are ever actually called on a dummy. The next kind of Test Double is a fake. A fake object actually has a working implementation, but the implementation is generally a shortcut of some kind. This means that the fake actually does what it is supposed to do, but does it in a way so simplified that it can't be used in production. Fakes aren't very frequently used. Instead, most developers opt to use the next type of Test Double, a stub. Stubs provide canned answers to questions.
Perhaps every call to a given method returns the same answer, or perhaps it returns only a couple of different answers. For example, if the method was supposed to return a large prime number, a stub might randomly return one of three known large prime numbers. Stubs are usually programmed to return specific answers for a test. For this reason, stubs are frequently used to determine flow control in tests. Most mocking frameworks implement some kind of stub. A spy is an enhanced stub. A spy can not only provide canned answers, it can also record and hold information about how its methods are called. For example, how many times a method was called and what parameters it was called with. Most mocking frameworks don't differentiate between stubs and spies. The last type of Test Double is the infamous mock. A true mock is created with expectations on how it will be used and what calls will be made; a mock asserts behavior. A mock object is set up with specific expectations. Say, perhaps, it expects that a certain method will be called, and if that doesn't happen during a test, the mock will actually fail the test because of a failed expectation. One of the great uses for a mock is to make sure that a specific method was called on an object and no other methods.
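The stub and spy described above can be sketched by hand in a few lines. This is purely illustrative; real mocking frameworks generate these objects for you.

```javascript
// A stub: returns a canned answer every time. 104729 is a known
// large prime (the 10,000th), standing in for a real computation.
function primeStub() {
  return 104729;
}

// A spy: an enhanced stub that also records how it was called.
function makeSpy(returnValue) {
  function spy(...args) {
    spy.calls.push(args); // record the arguments of each invocation
    return returnValue;   // still provides a canned answer, like a stub
  }
  spy.calls = [];
  return spy;
}

const spy = makeSpy(42);
spy(1, 2);
spy(3);
console.log(primeStub());      // → 104729
console.log(spy.calls.length); // → 2, how many times it was called
console.log(spy.calls[0]);     // → [ 1, 2 ], the first call's arguments
```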
Mocking by Hand - Demo
Spying on Callbacks
So let's look at the code that we use to create a spy in Jasmine. The first scenario we'll look at is how to create a spy for a freestanding function, like a callback. Let's assume we have the following code that we want to test. This function takes in a callback, does some stuff, and if that stuff works right, it calls our callback. Now, we want to have a test that asserts that the callback is called in the right circumstances. This is perfect for a Jasmine spy: we would simply create a freestanding spy and use that as the callback. That code would look like this. Here in our test we create the spy using the createSpy method, then we call our method passing in the spy, and then we do an expect on it, making sure that it was called, and that's all there is to creating a freestanding spy in Jasmine. All right. The demo I'm going to show here is really simple. I've got this very simple function. It takes in a callback, and then just calls it. This lets us demonstrate what we want to do. So I'm going to create the spy using that Jasmine createSpy, and I'm going to give it a name of mySpy. Now, this name that you give it here really isn't very important, so you don't need to sweat what you're going to name it. Next I'm going to call the system under test, which is callMyCallback, passing in that spy. Now, of course, it's going to call it, and I want to assert that that happened, so I'll call expect that the spy toHaveBeenCalled. All right. Let's go over and run that in my browser, and we can see that we're passing our test because, indeed, it is checking that our callback got called. So that's the very basics of creating a spy to use in place of a freestanding function.
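The Jasmine spec from this demo looks roughly like the comment below. To keep the example runnable without Jasmine, the block also includes a hand-rolled approximation of what createSpy gives you; the approximation is mine, not Jasmine's actual implementation.

```javascript
// In a real Jasmine (1.x) spec this demo is roughly:
//
//   var spy = jasmine.createSpy('mySpy');
//   callMyCallback(spy);
//   expect(spy).toHaveBeenCalled();
//
// A minimal stand-in for createSpy so the idea runs anywhere:
function createSpy(name) {
  function spy(...args) {
    spy.wasCalled = true;   // the flag toHaveBeenCalled checks
    spy.calls.push({ args });
  }
  spy.identity = name;      // the (mostly cosmetic) spy name
  spy.wasCalled = false;
  spy.calls = [];
  return spy;
}

// The system under test from the demo: takes a callback and calls it.
function callMyCallback(callback) {
  callback();
}

const spy = createSpy('mySpy');
callMyCallback(spy);
console.log(spy.wasCalled); // → true
```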
Spying on Existing Functions
Creating Spy Objects
The next function we'll look at is the createSpyObj function. This function lets us create an object from scratch with spy methods. That way we don't have to have a pre-existing object to hijack. Instead, we can just create an object that looks the way we want. The syntax to do this is really simple. We call the jasmine.createSpyObj function. That function takes in two parameters: the name for the spy and then an array, which is a list of strings. Each of the elements of that array becomes the name of a spy method on that object. Let's dig right into the code and see how this works. So we'll start by creating a spy, but this time it's a spy object rather than just a spy method. We're going to call jasmine.createSpyObj, and again we pass the name, and then we give it an array of strings. So we're going to have two methods: a fake getName and save. So now, since getName obviously returns some kind of a name, we need to provide an implementation for that. So we're going to use the andReturn function. We're going to say spy.getName.andReturn, and it's going to return Bob. For save, let's provide a fake implementation that logs out to the console: andCallFake, and here we've got a function, console.log('save called'). All right. Now we can set up our expectations. So let's expect that when I call spy.getName, it's equal to Bob, and then we're going to call spy.save, and we'll just check that that's been called, using the toHaveBeenCalled method. Let's run that and see how that looks in the browser. We can see that our spec is passing. So it's really quite simple to create a full spy object and give it whatever methods you want, and provide fake implementations for all those methods or have them return the values that you want. Now, again, this isn't necessarily something that you would do when you're creating an object that you want to test.
This is for when the object that you want to test has some dependency with a lot of functions that you might be calling. Rather than calling spyOn over and over again, you can just call createSpyObj and create an object that has all the spies already set up in one easy call.
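The createSpyObj demo above can be sketched like this. The Jasmine 1.x calls appear in the comment, and the runnable stand-in below approximates their behavior; note that andReturn and andCallFake became and.returnValue and and.callFake in Jasmine 2.x, and the stand-in itself is illustrative, not Jasmine's code.

```javascript
// The Jasmine 1.x spec from the demo, roughly:
//
//   var spy = jasmine.createSpyObj('mySpy', ['getName', 'save']);
//   spy.getName.andReturn('Bob');
//   spy.save.andCallFake(function () { console.log('save called'); });
//   expect(spy.getName()).toEqual('Bob');
//   spy.save();
//   expect(spy.save).toHaveBeenCalled();
//
// A hand-rolled approximation of createSpyObj:
function createSpyObj(name, methodNames) {
  const obj = {};
  for (const m of methodNames) {
    const spyFn = (...args) => {
      spyFn.wasCalled = true;
      return spyFn.fakeImpl ? spyFn.fakeImpl(...args) : spyFn.returnValue;
    };
    spyFn.wasCalled = false;
    spyFn.andReturn = v => { spyFn.returnValue = v; };
    spyFn.andCallFake = fn => { spyFn.fakeImpl = fn; };
    obj[m] = spyFn;
  }
  return obj;
}

const spy = createSpyObj('mySpy', ['getName', 'save']);
spy.getName.andReturn('Bob');
spy.save.andCallFake(() => console.log('save called'));
console.log(spy.getName());      // → Bob
spy.save();                      // prints: save called
console.log(spy.save.wasCalled); // → true
```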
Jasmine Spy Matchers
In this section we will look at all the different assertions, or in Jasmine terminology, matchers, that we can use with a Jasmine spy. All these matchers are available from the expect method. We'll start by looking at the simplest one, which is the one we have already seen. This is the toHaveBeenCalled method. This matcher simply asserts that the spy was called at least once. Since we've already seen this one in action, we won't take a look at a code example. The next matcher we might want is to check to make sure that a spy was never called. This is really easy to do. We just add not right before toHaveBeenCalled. Again, this is so simple we won't look at a code example. Now, if we want to assert that our method was called with some specific arguments, we use the toHaveBeenCalledWith method, and we give it the arguments that we are expecting it to have received when it was called. Just like the toHaveBeenCalled method, this method only asserts that it was called at least once with these arguments, and makes no guarantees about multiple calls with the same arguments or calls with other arguments. The one thing that it does do is verify all the arguments in the call. So the arguments have to match exactly. We'll take a look at an example of that. Let's start with just the basics. We're going to create a test, and we'll call it 'should verify arguments'. In this one we'll start by creating a spy. I'll name this one mySpy, and I'm going to call that spy, and I'm going to pass it a 1. Now let's set up an expectation that spy toHaveBeenCalledWith, and then we'll pass it a 1. Okay, now let's look at that in the browser. Okay, so we can see that that's passing. Let's make this a little bit more complex. Let's just show that if we call spy with a different parameter, our test is still going to pass, because we're still calling it at least once with the parameter of 1, and that works even if we call spy with 1 and something else. It doesn't care about this call.
It only cares about this one, because that one matches. So we're still passing. But if we come in here and comment the second one out, even though we're calling spy with a 1 in its first parameter, it has a second parameter, and that's not what we're expecting. So our test is going to fail. You can see here the feedback, which is actually one of the really cool things about this: it shows you that we're expecting it to be called with just a 1, but it was actually called twice, once with a 2 and once with two 1s. So you can look at this and see why you might not be matching up when you think you should be. Very useful for diagnosis. Let's go up here and make this pass again, and that's the basics of verifying your parameter calls. Now, just like with toHaveBeenCalled, toHaveBeenCalledWith can also be negated with the not prefix. This will make sure that it was never called with a specific set of arguments. Let's go and create a new test for that. This one will start out the same way: I'm passing that 1, and this time we'll expect that spy not.toHaveBeenCalledWith, and pass it a 4. Now we can see that this test is passing as well, because we never called spy with just a 4. Again, if I go in here and call spy and pass in a 4 but also pass in a 1, it's still going to be fine, because it was never called with just 4. But if I call spy with 4, now it's going to fail, and again, just like the other scenario, it shows you what all the calls were so you can match that up. This is definitely one of those things that can get a little confusing when you're trying to diagnose why a test isn't behaving the way you want it to behave, when you're checking parameter values against what you expect them to be and what they actually are.
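The matching rule toHaveBeenCalledWith applies, at least one recorded call whose full argument list matches exactly, can be sketched by hand like this. This is illustrative only; Jasmine's real implementation lives in its matcher code.

```javascript
// Each recorded call is the array of arguments it received.
const calls = [];
const spy = (...args) => { calls.push(args); };

// The rule: some call's complete argument list equals the expected one.
function wasCalledWith(recordedCalls, ...expected) {
  return recordedCalls.some(
    args => JSON.stringify(args) === JSON.stringify(expected)
  );
}

spy(1);
spy(1, 2);
console.log(wasCalledWith(calls, 1));    // → true: the first call was exactly (1)
console.log(wasCalledWith(calls, 2));    // → false: no call was exactly (2)
console.log(wasCalledWith(calls, 1, 2)); // → true
console.log(!wasCalledWith(calls, 4));   // → true: the not.toHaveBeenCalledWith(4) case
```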
Jasmine Spy Metadata
Now, the rest of what we're going to cover here isn't so much matchers as it is metadata about the calls that we can use in our asserts. The first of these is the calls collection, and everything that we'll cover starts with that. The calls collection is an array of objects that represents each time your spy was called. So if a given spy was called three times, there will be three objects in the array. Each of these objects has two properties with information about that specific invocation. The first is the args property, which is itself an array holding each argument that was passed for that invocation. The second is the object that was the context of the call. Or in other words, the object that was the this when the spy was invoked. This can be really useful for callbacks, to determine what object they were called on. Let's look at some examples. I'm going to start off by creating a test to check that we can inspect metadata. Now in this test I'm going to do something pretty simple. I'm going to create an object, and I'll just give this object one method. Then I'll create a spy, spying on that object's one method, and now I'm going to call that method with a 1, and then I'll duplicate that and do it with a 2 as well as a 3. So now if I were to do an assert, spy.calls would be an array with three elements, because there are three calls. So I'm going to look at the first one. I'm just going to check the arguments on that, and of course I only have one argument. So I'm going to check that first argument; I want to check that it's equal to 1. Now let's go up and fix our other test so that it will pass, and then we can run this in the browser and see how it looks. And now all three of our asserts are passing. So let's check something else on this. Let's make use of that object property on calls. We'll check the first call again, and we'll check that the object that was the this is equal to myObj. Running that again.
It's still passing. So let's add one more thing. Since calls itself is just an array, we can actually check its length, and we can check that it was called exactly three times. Again, we're still working. So you can see there's quite a bit of checking we can do using the metadata on calls to verify that our code is working correctly. Now, there are also a few convenience properties available to us. The first one is callCount. This is just a convenience for calls.length. Next is mostRecentCall, and that, of course, just gets the last call; and lastly there is argsForCall, which is the args array off of calls, so you can use it instead of writing out calls with args. So let's look at a couple of examples of these as well. I'm going to add in an expectation that spy.callCount is equal to 3, and let's also check spy.mostRecentCall, and that its first argument is equal to 3; and lastly, spy.argsForCall. This is an array of arrays. So I want the arguments for call one, and I want to get the first argument, and then I can say that that's equal to 2. Now let's run all these, and things are still passing. So between the metadata available and the convenience properties on that metadata, there's a lot of really easy checking we can do to verify that our code is correct, that we are passing in the correct arguments to our calls, and that they are getting called the right number of times.
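The calls metadata demonstrated above can be sketched like this. It is a hand-rolled stand-in for what Jasmine 1.x records when you use spyOn; the comments map each check back to the Jasmine names from the transcript (spy.calls, callCount, mostRecentCall, argsForCall).

```javascript
// A stand-in spy that records the args and the `this` of each invocation.
function makeSpy() {
  function spy(...args) {
    spy.calls.push({ args, object: this });
  }
  spy.calls = [];
  return spy;
}

const myObj = { method: makeSpy() };
myObj.method(1);
myObj.method(2);
myObj.method(3);

const spy = myObj.method;
console.log(spy.calls.length);              // → 3   (Jasmine: spy.callCount)
console.log(spy.calls[0].args[0]);          // → 1   first call's first argument
console.log(spy.calls[0].object === myObj); // → true: the `this` of the call
console.log(spy.calls[2].args[0]);          // → 3   (Jasmine: mostRecentCall.args[0])
console.log(spy.calls[1].args[0]);          // → 2   (Jasmine: argsForCall[1][0])
```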
Now, the last thing we're going to cover about Jasmine is a couple of little utility methods that you may find useful when writing your Jasmine tests. The first one is the isSpy method, a little method on the jasmine object that lets you tell whether an object is a spy or not, and the other is the reset method, which is actually a method on spies that lets you reset them. This will be a lot easier to explain by just showing you some code, so let's go directly to a demo. I'm going to create a new test. I'm going to create a spy, and I'm going to expect that the spy is actually a Jasmine spy. Let's run this in a browser. You can see that all four of our tests in this suite are passing. So let's adjust this test as well, and we'll actually call the spy. Now, if we were to expect that spy.callCount was equal to 0, it would fail, because it's actually been called once. But if we come in here and say spy.reset, that's going to reset the call count. So at this point the call count will be zero. Let's run that, and our test is still passing. So there's a couple of utility methods that you can use in your Jasmine tests to check whether objects really are spies and to reset them if you need to. Both of these utility methods are really things that you probably won't end up using very much, if at all, but they are there if there is a real need for them.
In this module we covered Jasmine spies, which are Jasmine's mocking library. We started with the spyOn function, which lets us spy on a method of an object. With spyOn we can either set our own return value or let the spy call through and use the underlying implementation. We also saw the createSpyObj function, which lets us create an entire object that has only spies for methods. We also looked at the Jasmine spy matchers, which let us make assertions about spies. Jasmine spies offer a lot of simple and convenient functionality that lets you mock objects if you're using Jasmine as your testing framework.
We'll start our coverage of Sinon by looking at the simplest feature of Sinon: spies. Spies have 2 purposes in Sinon. The first is to provide a test double for a single function, for example when testing something that takes a callback. The second purpose is for watching an existing function or method of an object. In this case, the original implementation of the method will still be used. This is pretty much identical to Jasmine's andCallThrough function, which we saw earlier in this module. Spies are created with the sinon.spy function. There are 3 ways to call it. The first method is to call it without any parameters. This creates a simple anonymous function with no return value. This function is suitable to be used wherever a callback is needed for testing purposes. Let's take a look at a working example of that. Since Sinon is not an actual test framework but just a mocking library, we still need a test framework to use, so we'll continue to use Jasmine as our test framework, but we'll just add to our SpecRunner HTML file a reference to Sinon so that we can use Sinon instead of Jasmine for spies. I've written a little bit of code here. I've created a simple system under test object that has a single method called callCallback, which does exactly what it says: it takes in a callback and calls it. So let's write a test for that method. Since it's just calling the callback and doesn't care about the return value, I can use a simple spy by creating one with sinon.spy. Then I take my system under test and call its callCallback method, passing in the spy I just created. At that point the spy is going to get called, so we can do an assert on the spy.called property that it is true, and now if I run this in the browser, that's going to pass.
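The Sinon spec just described appears in the comment below. To keep the example runnable without Sinon, a tiny stand-in approximates the parameterless anonymous spy; the stand-in is mine, not Sinon's implementation.

```javascript
// With Sinon (inside a Jasmine spec) this demo is roughly:
//
//   var spy = sinon.spy();
//   mySUT.callCallback(spy);
//   expect(spy.called).toBe(true);
//
// The system under test from the demo:
const mySUT = {
  callCallback(callback) {
    callback(); // takes in a callback and calls it
  }
};

// A minimal stand-in for sinon.spy() with no parameters: an anonymous
// function with no return value that records having been called.
function anonymousSpy() {
  const spy = () => { spy.called = true; };
  spy.called = false;
  return spy;
}

const spy = anonymousSpy();
mySUT.callCallback(spy);
console.log(spy.called); // → true
```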
I'll actually forgo running this in the browser, since by now you should be pretty comfortable with how Jasmine looks, and that feedback isn't of great value as we're looking through this code. Instead, we'll just continue on. So now that I've got this code working, let's go back to our system under test and introduce another method. This one is going to be called callCallbackWithReturnValue. This is going to take in a callback as well, and this time it's going to return the value of that callback. So now that we've got that, things are going to be a little bit different. We can't just use a plain spy, because it has no return value, so instead we're going to look at the second usage of spies in Sinon, and that is spying on actual functions. So we're going to write a new test. In this one we'll again create a spy using the same spy method, but this time we're going to pass in an actual function. I'll call it realCallback. Let's go create that really quick, and we'll just have this one return 4. Now that we've got the spy based on that callback, we can call the system under test, and we'll capture the return value. And now at this point we can actually call some asserts: we can expect that spy.called is again true, but we can also expect that our return value is 4. And that's how we would test that our callback is actually getting called and that it's actually calling the real implementation, because the return value is going to be exactly what the actual function's return value was, but our spy is still able to watch that function. Now, the third usage of Sinon spies is to spy not just on a freestanding function but on a method of an object. So let's go back to our system under test and add another method.
This time we'll call it callDependency, and what this does is call somebody else: we'll call it myDep and someMethod. So what's going on in here is it's actually calling an external object. So let's create that external object. It's got a method called someMethod, and we'll have that return 10. What's going to happen up here in this call is it's going to call the someMethod method on the myDep object and then return that value. So let's write a test for that. We're going to create our spy again, and we're going to use the third parameter set, which is passing in the actual object that we want to spy on and then the name of the method we want to spy on on that object. Remember, someMethod is a method of myDep, not a method of mySUT. And then our return value is mySUT.callDependency. And then we can set our expectations: spy.called is true, and we can expect also that the return value is now 10. And this would all work. Again, it's just watching the actual someMethod method; it's not doing anything but watching it, so the implementation of someMethod is still going to be the same, which as we saw before returns 10, so this is going to return 10. Now, I just want to make one small note here. Of course, none of the code used in demonstrating these functions should be considered best practice. In fact, most of the code we use in this entire course is pretty much arbitrary and overly simplified. But I feel it's worth mentioning something that just happened here. The class mySUT has an implicit dependency on a global named myDep. You can see that right here. This type of thing is never good.
Implicit dependencies are the kind of invisible gotcha that just lurks about waiting to bite you as an invisible side effect, and they make your code brittle and more difficult to change. So let's go in and rewrite this method to make the dependency explicit by actually passing in the object. We pass it in like that and return dep.someMethod. Then we'll go back to our test and update the call to pass in that global. That's really made the code a lot more maintainable, because instead of these 2 objects having an implicit dependency on each other, the dependency has been made explicit by passing it in as a parameter to this callDependency function. And that's how you use spies in Sinon. Again, you can use them for 2 purposes: you can either provide a test double for a single function, or you can watch an existing method of an object. And there are 3 ways to call sinon.spy: without parameters, where it creates a function for you that has no return value; with a single parameter that is another function you're going to watch; and with an object and a method name that you're going to watch.
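The other two ways of calling sinon.spy from this section, wrapping a real function and wrapping a method of an object, can be sketched with a hand-rolled wrapper. This is illustrative only; the real calls are sinon.spy(fn) and sinon.spy(obj, 'method').

```javascript
// A stand-in wrapper: records the call, then runs the real implementation.
function wrapSpy(fn) {
  const spy = (...args) => {
    spy.called = true;
    return fn(...args); // the original implementation still runs
  };
  spy.called = false;
  return spy;
}

// Form 2: spying on a freestanding function (like sinon.spy(realCallback)).
const realCallback = () => 4;
const spy2 = wrapSpy(realCallback);
console.log(spy2(), spy2.called); // → 4 true

// Form 3, using the explicit dependency the transcript recommends:
const myDep = { someMethod: () => 10 };
const mySUT = {
  callDependency(dep) {
    return dep.someMethod(); // dependency passed in, not a global
  }
};
myDep.someMethod = wrapSpy(myDep.someMethod); // like sinon.spy(myDep, 'someMethod')
console.log(mySUT.callDependency(myDep)); // → 10, real implementation used
console.log(myDep.someMethod.called);     // → true
```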
The Sinon spy API is rather large. Spies have a lot of methods that you can use to determine how they ran and whether they ran the correct way. So let's go ahead and take a look at the spy API. We'll actually go through this fairly quickly because of the large number of methods within the spy API, but most of these have similarities with the others, so there are really just a few different methods with a bunch of variations on them. The first one we're going to look at is called. called is very straightforward: it just checks that the spy has been called at least once. The variations on called are calledOnce, calledTwice, and calledThrice. Those 3 check that the spy was called an exact number of times. Then we have the firstCall method. This actually gets you a call object, which we'll look at in a few minutes. This method gets you the call object that represents the very first time your spy was called. We've also got secondCall, thirdCall, and lastCall. The next method is calledBefore, which takes another spy as a parameter, and it checks that your spy was called before that other spy. And then there is the corresponding calledAfter. We've also got calledOn, which makes sure that your spy was called with a particular object as this, and alwaysCalledOn, which makes sure that it was called every time with that particular object as the context, or this object. calledWith takes a set of arguments and checks that the spy was called at least once with that set of arguments. Notice that this is a minimum match: a matching call must include these arguments, but it can have additional arguments beyond whatever you give it. alwaysCalledWith is the same as calledWith, but checks that every call on that spy included the arguments that you pass to it. calledWithExactly is an exact match. So if you give it 2 arguments, then any matching call must use those exact 2 arguments and no more.
alwaysCalledWithExactly does the same thing as calledWithExactly, but checks every call. notCalledWith checks that at least one time your spy was called without the arguments that were given. neverCalledWith checks that it was never called with the arguments that you give it. calledWithMatch takes in a set of matchers. We will look at matchers later, but they are ways to specify arguments without being quite so exact about what you want to match up. alwaysCalledWithMatch does the same thing as calledWithMatch, but checks that every call matches instead of just one call. notCalledWithMatch checks that it was called at least one time without the arguments that you specify, and neverCalledWithMatch checks that your spy was never called with arguments that match the arguments you supply. The next method is calledWithNew. That checks that your spy was called with the new operator and used as a constructor. The threw method checks that your spy threw an exception at least once. threw with a string parameter checks that your spy at least once threw an error whose type matches the string that was passed in; you might, for example, pass in the string 'SyntaxError'. threw with a non-string parameter checks that your spy threw a particular object. alwaysThrew checks that every call to your spy threw an exception of any kind, and then you've got 2 variations on that which match the variations on threw. returned checks that your spy at least once returned an object that matches the given object. alwaysReturned checks that every call returned the given object. And lastly, we have a set of methods that give us back metadata about each call our spy made. getCall gives you back the metadata about a specific call. It has 4 convenience properties, which we saw earlier: firstCall, secondCall, thirdCall, and lastCall. thisValues gets back an array of all the objects that were used as the context for each of your calls to that spy.
args gets back an array of arguments, very similar to what we saw in Jasmine. exceptions gives you back an array of the exceptions that were thrown. And returnValues gives you back an array of the return values from each call. We have the reset method, which is just like what we saw with Jasmine: if you call this, it will reset your spy. And lastly, we have printf, which is a debugging method that you can use whenever you are having trouble figuring out why you can't match up a particular call on a spy. With printf you can print out a whole bunch of metadata about each call. It takes a bunch of parameters; I'm not going to go into depth on this method here in this course. I'll leave it up to you to look at the documentation if you ever need to use printf to debug why you can't match up a call. Now, we talked about getCall and its convenience properties for the first, second, third, and last call. Those return a call object, and that call object has its own API, which we're going to look at next. So this is the Sinon call API. The first method is calledOn, which checks that a particular call was made with a given context. Then we have calledWith, where you give a list of arguments and it checks that the call was made with those arguments. Again, like the previous variations on this method we saw, these arguments are just the minimum set, and there can be other arguments on top of that. Of course, we have the corresponding calledWithExactly. Then calledWithMatch, again using matchers, which we will cover shortly, and notCalledWith, which checks that the call was not made with a particular set of arguments. notCalledWithMatch. Then we've got threw, which checks that this particular call threw an exception; threw with a type string; and threw with a particular object. Then we've got thisValue, which actually gives you back what the this value was for this call.
The args property gives you an array of all the arguments for this particular call, and exception gives you back the exception that was thrown in this call, if any. And lastly, we have returnValue, which gives you back the return value for that call. So this is the Sinon spy API. As you can see, it's really large. There are a lot of methods, but they're basically just a few methods with variations on each of them, and there are basically 2 sets of methods: one for the spy and one for each call on the spy. Because this API is so comprehensive, in combination with matchers, which we'll look at in a little bit, it's really easy to check whether a particular spy was invoked in a particular manner and verify that your code was being called correctly. It's also important to note that the spy API is actually supported by the other types of objects that Sinon gives us, which are stubs and mocks. Those 2 objects build upon this API and allow you to use it in conjunction with additional methods to assert that your code is correct.
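To make the call-metadata idea concrete, here is a minimal sketch of how a spy can record per-call information — args, return values, exceptions, and the this context — and support a subset-style calledWith check. This is a simplified model for illustration, not Sinon's actual implementation; createSpy is a hypothetical helper.

```javascript
// Minimal spy sketch: wraps a function and records metadata for each call.
function createSpy(fn) {
  const spy = function (...args) {
    const call = { thisValue: this, args, returnValue: undefined, exception: undefined };
    spy.calls.push(call);
    try {
      call.returnValue = fn ? fn.apply(this, args) : undefined;
      return call.returnValue;
    } catch (e) {
      call.exception = e;
      throw e;
    }
  };
  spy.calls = [];
  // calledWith treats the given arguments as a minimum subset, like Sinon's
  spy.calledWith = (...expected) =>
    spy.calls.some(c => expected.every((a, i) => c.args[i] === a));
  spy.getCall = i => spy.calls[i];
  return spy;
}

const spy = createSpy((a, b) => a + b);
spy(1, 2, 3);
console.log(spy.calledWith(1, 2));       // true: subset match, extra args allowed
console.log(spy.getCall(0).returnValue); // 3
```

The real spy API is much richer, but the core idea is the same: every call is captured as a record, and the assertion methods are just queries over that list of records.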
Sinon comes with an included set of assertions that you can use on Sinon objects. These assertions can be used instead of the assertions that come with your testing framework. At first, you may wonder why you'd use a different set of assertions instead of the ones that come included in your test framework, but there are a couple of really good reasons to use Sinon's assertions instead. First, the assertions included with Sinon are pretty specific, so they can make your tests a little more expressive and readable, but by far the biggest reason to use Sinon's assertions is that the error messages are significantly clearer when an assertion fails. Let's look at an example of that. I've already created a little bit of code here. I've got 2 tests. The first test uses a built-in assertion. We're using Jasmine again here, so it calls expect that spy.called is true. In the second test I created a spy, but this time, instead of using one of Jasmine's assertions, we're going to use one of Sinon's assertions. So I'm going to do the same kind of assertion. I'm going to assert that the spy was called, but to do that with Sinon's assertions, I call Sinon.assert.called and pass in the spy. Now, you can see that in both cases I actually haven't called the spy, I've only created it, so both of these assertions are going to fail. So let's see what kind of feedback we get from the browser when we run these. All right, looking at the built-in assert, the only error message that we get is 'expected false to be true', but using the Sinon assert, we get an assert error: 'expected spy to have been called at least once but was never called'. Obviously the feedback from the Sinon assert is much clearer and leads you to exactly what the problem is. So this is an example of why using Sinon's asserts is really advantageous when dealing with Sinon spies, stubs, and mocks. 
All right, so let's go back, and we'll actually take a look at the assert API. The first method we're going to look at is called, which we just saw an example of. Again, all these methods are off of the Sinon.assert object. In addition to that, we've got notCalled, which checks that a spy was never called. We've got calledOnce, calledTwice, calledThrice. Each of these methods takes in the spy as its single parameter. Then we've got callCount, which takes not only the spy but a number, and checks that the spy was called that exact number of times. CallOrder, which checks that a certain set of spies was called in a specific order. CalledOn, which checks that a spy was called with a given context. And alwaysCalledOn, which checks that every call to a spy was made with a certain context. We've got calledWith, which checks that a spy was called with a certain set of arguments. AlwaysCalledWith checks every call on that spy. NeverCalledWith checks that it was never called with those arguments. CalledWithExactly. Again, as we saw previously in the spy API, this checks the exact set of arguments, not just that the set of arguments given is a subset of the actual arguments used in the call. CalledWithMatch uses matchers, which again we will cover shortly. AlwaysCalledWithMatch, neverCalledWithMatch, and we've got threw, which checks that a spy threw an exception. Threw with a spy and an exception as the second parameter checks that the spy threw that specific exception, and finally alwaysThrew, which checks that a spy always threw a particular exception. So those are the built-in assertions included with Sinon. You don't need to use them. You can use the assertions that come with your own assertion library, and if you're really comfortable with it, maybe it just doesn't make sense to switch, but if you do use the Sinon assertions, as we saw before, you really will get a lot better feedback when you have a failing test.
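The difference in feedback quality is easy to demonstrate without any framework at all. Below is a hedged sketch: makeSpy, expectTrue, and assertCalled are hypothetical stand-ins for a Sinon spy, a generic framework assertion, and Sinon.assert.called respectively — the point is only to contrast what each style of assertion can say when it fails.

```javascript
// A tiny spy with a callCount and a derived 'called' flag.
function makeSpy() {
  const spy = function () { spy.callCount++; };
  spy.callCount = 0;
  Object.defineProperty(spy, 'called', { get: () => spy.callCount > 0 });
  return spy;
}

// A generic assertion only sees a boolean, so its message is vague.
function expectTrue(value) {
  if (value !== true) throw new Error('expected false to be true');
}

// A spy-aware assertion can describe what actually happened.
function assertCalled(spy) {
  if (!spy.called) {
    throw new Error('expected spy to have been called at least once but was never called');
  }
}

const spy = makeSpy(); // created but never invoked, so both assertions fail
let genericMsg, sinonStyleMsg;
try { expectTrue(spy.called); } catch (e) { genericMsg = e.message; }
try { assertCalled(spy); } catch (e) { sinonStyleMsg = e.message; }
console.log(genericMsg);
console.log(sinonStyleMsg);
```

The second message tells you exactly what went wrong, which is the whole argument for preferring spy-aware assertions.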
Stubs are the next Sinon object that we're going to take a look at. Stubs are basically just spies with preprogrammed behavior. As I mentioned before, stubs support the spy API, but they also contain additional methods to control behavior. Stubs can be anonymous, or they can wrap existing methods. When using a stub, unlike a spy, the original function is not called. There are 3 purposes for using a stub. The first is to control behavior. The second is to suppress existing behavior, because the underlying implementation is not called, and the third is behavior verification. Just like spies, you can check that stubs were called in a particular manner. There are 4 different ways to create a stub using Sinon. The first is to just call Sinon.stub. This creates an anonymous function that acts as a stub. The second is to call Sinon.stub and pass in an object and the name of a method. This is very similar to some syntax that we saw previously in Jasmine. This allows us to stub a given method on an object. Again, this replaces the existing implementation and uses the stub implementation instead. The third is to do the same but also give the stubbed implementation as the third parameter. This is useful when the behavior is just too difficult to adequately specify using the Sinon stub API. And the last way is to pass in an object, and Sinon will stub all of that object's methods. This is really convenient when you have a large object with a bunch of methods that you want to stub. Now, you may be wondering why there's no overload that simply takes in a function and stubs that function. Well, that really doesn't make sense, since stubs, unlike spies, actually suppress the underlying implementation, so having the original implementation as a parameter to construction doesn't give any value. The Sinon stub API really isn't very big, at least not in comparison with the spy API. It just adds a few methods that let you control behavior. The first one is returns. 
That takes in a single parameter, and it specifies that whenever you call that stub, it will return the given object. The next one is throws. This tells Sinon to throw a general exception whenever that stub is called. Throws with a type parameter tells Sinon to throw a particular type of exception. Throws with an object tells Sinon to throw a particular object. WithArgs is a cool little function. It lets you customize the behavior of a Sinon stub on a per-invocation basis, so you can basically say when the stub is called with these arguments, act this way, and when the stub is called with a different set of arguments, act that way. ReturnsArg tells Sinon to just take a particular argument that was given and return that argument out, and that is specified by the index parameter, which is zero-based. So, for example, if you pass in a 0, it will take the first argument and return that out. CallsArg is very similar to returnsArg, except that instead of returning the argument at a particular index, you're actually instructing Sinon to call the argument at a particular index, implying that that argument is a function. CallsArgOn is just like callsArg, except you can also pass in a context and tell Sinon to call that particular function with the given context. CallsArgWith is just like callsArg, except you can also supply a set of arguments to be passed to the function that gets called. CallsArgOnWith is a marriage of the 3 different methods, which lets you call a particular argument that was passed into the stub using a particular context and a particular set of arguments. In addition to all of these, there is another set of 8 methods on the stub API. These additional methods are all about having your stub method call one or more callbacks that are passed to it, very similar to callsArg but just a lot more complex. Because of the complexity of these methods, they are beyond the scope of this course, so I'll only mention them here for completeness. In the next clip, we'll see a demo of stubs.
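The three most common of these behaviors — returns, returnsArg, and callsArgWith — can be sketched in a few lines of plain JavaScript. This is a simplified stand-in to show the semantics, not Sinon's real stub; createStub is a hypothetical helper.

```javascript
// Minimal stub sketch: a callable whose behavior is swapped by configuration.
function createStub() {
  let behavior = () => undefined;
  const stub = (...args) => behavior(args);
  // returns(value): always return the given value
  stub.returns = value => { behavior = () => value; return stub; };
  // returnsArg(index): return the argument at the given zero-based index
  stub.returnsArg = index => { behavior = args => args[index]; return stub; };
  // callsArgWith(index, ...callArgs): invoke the argument at index as a callback
  stub.callsArgWith = (index, ...callArgs) => {
    behavior = args => args[index](...callArgs);
    return stub;
  };
  return stub;
}

const stub = createStub();
stub.returns(42);
console.log(stub('ignored'));     // 42

stub.returnsArg(1);
console.log(stub('a', 'b', 'c')); // b

stub.callsArgWith(0, 'hello');
stub(greeting => console.log(greeting)); // prints hello
```

Notice that nothing here ever calls an "original" implementation — that is the defining difference between a stub and a spy.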
Mocks are the third and final form of test double provided by Sinon.JS. Unlike stubs and spies, mocks contain preprogrammed expectations. Mocks will fail tests if these expectations are not met. Mocks are like stubs except the assertions are stated before instead of after. There are a few guidelines that we should keep in mind whenever using mocks. Try to only use one mock object per test. If you find yourself using more than one mock object, you're probably trying to do too much within your test. Minimize your use of expectations. Try to keep it down to just a couple of expectations per test. And lastly, minimize the use of assertions that are external to the mock. It's more common for the mock itself to be the only assertion used within that test and therefore the only reason that the test can fail. Now, since we've looked at spies, stubs, and now mocks, it can start to get a little confusing to remember the difference between all 3 of them and how they are used, so before we get deep into the usage of mocks, let's take a look at some diagrams that show the differences in architecture and usage between spies, stubs, and mocks. With a spy, we take an original function, wrap it in a new function, and then use this new function in place of the old one. With a stub, Sinon takes a method on an object and actually replaces the reference to the original method with a new method that has nothing to do with the original one. With mocks, instead of wrapping the original methods or replacing them, you create a mock of your object, and from that mock you create expectations on the methods. Each of those expectations then causes the original method to get replaced by a mock method, and again the original implementation is discarded. Under the hood, there's a little more that goes on, but this conceptual model is complete enough to understand the basics of mocks and how they differ from spies and stubs in Sinon. 
Now let's take a look at the code for creating and using mocks. First, you call Sinon.mock and pass in your object, which gives you back the mock object. From that, you can create expectations by calling mock.expects and passing in a method name. The API for mocks themselves is pretty bare. All it has, for the most part, is the ability to create expectations. The API for expectations is also fairly simple. The majority of the methods revolve around how many times the method was called. It's important to note that each of these methods returns the expectation so that they can easily be chained to create compound expressions. So first we have expectation.atLeast, where we pass in a number that tells the mock to fail the test if that particular mock function is not invoked at least the given number of times. Then we have its corresponding atMost. We've got never, which tells the mock to fail the test if the mock method is ever called. Then we have once, which tells the mock to fail the test if that particular expectation isn't called exactly 1 time. Twice, of course, thrice, and then of course we have exactly, if one of the above 4 doesn't work. Then we have withArgs, which tells the mock to fail the test if the mock method isn't called at least once with the arguments that are given, and withExactArgs, which tells it to fail the test if it's not called with those exact arguments. Then on tells the mock to fail the test if it's not called at least once with the given object as the context. And lastly, the key to it all is the verify method. You would typically call this last in your test, and it checks that all the expectations have been met; if there are any that haven't been met, the test fails.
Using the same code as in the stub example, we're going to write a different test. Since mocks and stubs can both be used to verify behavior, we're going to rewrite our same test, but this time we'll use a mock instead of a stub to assert that the code worked correctly. So we're giving it the same name. We're going to start out kind of similar. We're going to create a combat object. Then we'll create a defender. Now we're going to mock that defender, so I'm going to create a mockDefender variable. Remember, I've got to grab a reference to a new object, the mock defender. This is a little different than what we saw with stubs. With stubs, the functions are actually replaced in place, and we didn't have a separate stub object that we talked to, but with mocks, we do have a separate mock object that we're going to talk to. Now that we've got a handle on that mock object, we can create an expectation. We call the expects method, pass in the name of the method we want to mock, and we're going to say that we only want it to be called once, and we want it to be called withArgs of 5. Now we can create our attacker. We're going to do the same thing we did before and stub out the attacker. We still need to stub him out because we still have to control the flow of the method. We also need to set its damage, and then tell the stub to have its calculateHit return true. 
Finally, we call combat.attack, passing in the attacker and the defender, and notice we're not passing in the mock defender, we're passing in the actual defender. Then we do the magic expectation.verify, and at that point, it's going to verify that all the expectations we set, which are the once and the withArgs expectations, have been met; otherwise the test will fail. Let me close it up, and let's run this in the browser and see how it works. We can see that both of our tests are passing. Let's go back and look at our code. So here you have a really good comparison of verifying behavior using either a stub or a mock. And looking at these two, you can decide for yourself which you like better, using mocks or stubs to verify behavior. Probably the key takeaway and difference that you should pay attention to is the fact that with mocks, the expectations are set before the actual action takes place, but with stubs, those assertions are made afterwards. So it's the location within the test where we verify behavior that changes based on whether we use a stub or a mock.
Matchers in Sinon are a way to match up calls to a test double based on the arguments used, without specifying those arguments exactly. Let's say, for example, that you wanted to make sure that a certain spy was called with a number, but you didn't care which number was passed, so long as it was a number. You can use a matcher for that. Or perhaps you wanted to make sure that a certain argument was a function. A matcher can do that as well. Using Sinon matchers is pretty easy. Remember, they are used in place of arguments when you are checking to see if a test double was called correctly. Let's take a quick look at an example usage. The code here is very simple. We create a spy, call it, and then use the calledWithMatch method and pass in one or more matchers. Here I am using the number matcher, which will be true if the argument was a number. You can see in the second line that we called the spy with the value 3, which is indeed a number, so that calledWithMatch statement will return true. If instead of a 3 we had passed in the string 'hi', then calledWithMatch would have returned false. Let's look over the complete set of matchers provided by Sinon. All of the matchers are available through the Sinon.match function. For the first one, you just pass in a number as an argument. This matcher isn't as useful because it verifies that the argument matches that number exactly. The next one is string. This one's a little bit more vague because it allows the expectation that you provide to be a substring of the actual argument. Let's look at this in code. So here I've got a simple test where I've created a spy. I'm going to actually call that spy and pass in the string '1234'. Then I'm going to verify that that spy was called with at least a substring of what I expect. So I'm going to say spy.calledWithMatch, and here I'll call Sinon.match and pass in my match. So I'm going to just give it a substring. 
And this is actually going to work, because '1' is a substring of '1234'. And that's how Sinon.match with a string works. The next one is Sinon.match with a regular expression, which matches an argument if it was a string that matches the regular expression you give to the match. The next one is Sinon.match(object), which matches an argument if it was an object that matches up to the object that you give it. Sinon.match(function) is how you actually create a custom matcher, and we're going to go over that a little bit later. Sinon.match.any will match any argument at all. The next one is match.defined. That works as long as the argument was anything but undefined. Then Sinon.match.truthy, which matches any truthy value, and falsy matches any falsy value. Then we have Sinon.match.bool, which matches as long as the argument passed was a Boolean value, and Sinon.match.number matches if the value passed was a number. Sinon.match.string will match so long as the argument was any kind of a string at all. Sinon.match.object will match as long as the argument is some kind of an object, meaning not a primitive value. Sinon.match.func will match against a function. Sinon.match.array will match against an array. Sinon.match.regexp will match against any regular expression. Sinon.match.date will match so long as the value passed was a date. Sinon.match.same will only match so long as the object that was passed as the argument is the exact same object that you give to the same matcher. Let's look at an example of this too. I'm going to create an object. And I'm going to call the spy and pass in that object. Then if I call spy.calledWithMatch with Sinon.match.same, and I simply recreate the object, this would fail, because this object and the o object, although they're very similar, are not actually the same object. 
They might have the same properties with the same values, but they're not the same object. If I wanted this to pass, I'd actually have to pass in the actual o object here, and at that point, it would return true. The typeOf matcher will match as long as the argument is of the given type. The type that you pass in here is a string, and there's a set of values that you can give it: undefined, null, boolean, number, string, object, function, array, regexp, or date. There are already matchers for everything in that list besides undefined and null, so this is really not a matcher that you should use very often. The next one is instanceOf(type), and that requires the value to be an instance of the given type. The has matcher will match so long as the object in the argument has a property that matches the property name that you give here. In addition, you can provide a second parameter that is the value of the property. Let's take a look at that to clarify. I'm going to call my spy again, and this time I'm going to call it with an object. I'm going to call it with myProp, and it has a value of 42. Then I'm going to call spy.calledWithMatch with Sinon.match.has, and I pass in the string 'myProp'. That's going to return true because the object in the argument does have a property called myProp. Additionally, I can pass in a second parameter, the 42, and that would be just a little bit more exact. It would still match. Now, even if the object here actually had more properties than this, the matcher would still return true, because it has at least the one property given. HasOwn is the same as has, except that the property it matches has to be defined on the object itself and can't be inherited. And that is the entire set of matchers that Sinon provides. 
As you can see from this list, there is a lot you can do to match up arguments when you don't know exactly what the argument was, but you do know something about it. Now, in the rare case that none of the matchers we looked at cuts it, you can always create a custom matcher. This gives you the greatest amount of flexibility. Doing this is really easy. Here I have created a matcher that passes if the value given is less than 100. You can see that this is pretty simple. The second parameter is just a readable version of the name that will be used whenever an argument fails to match against this matcher. The last thing we'll cover about matchers is the ability to combine them. Let's say that you support passing in either a string or a function for the first parameter of a method. Combining matchers will let you check for that. You can use the and and or functions, which are available on every matcher, to combine matchers into a compound matcher. Here's the code to implement the example of a matcher that matches either a string or a function. This concludes our look at matchers. Matchers are a very powerful feature of Sinon that you can use to match arguments of calls to test doubles when you need a somewhat fuzzy match. The list of provided matchers is quite comprehensive and can handle just about any situation. For the rare case that a situation arises that can't be handled by one of the built-in matchers, you can use a custom matcher instead. And if you ever need to create compound matchers, that is done with the and and or functions.
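The custom-matcher and combinator ideas can be modeled as predicates with and/or combinators. The sketch below is a simplification of Sinon's matcher objects (the match helper here is hypothetical, not Sinon's), but it shows both the less-than-100 custom matcher and the string-or-function compound matcher from the discussion above.

```javascript
// A matcher is modeled as a predicate plus a readable failure message.
function match(test, message) {
  return {
    test,
    message,
    // and/or return new compound matchers, as Sinon's combinators do
    and: other => match(v => test(v) && other.test(v), message + ' and ' + other.message),
    or: other => match(v => test(v) || other.test(v), message + ' or ' + other.message),
  };
}

// Custom matcher: passes only for numbers less than 100.
const lessThan100 = match(v => typeof v === 'number' && v < 100, 'lessThan100');

// Compound matcher: the argument may be either a string or a function.
const stringOrFunc = match(v => typeof v === 'string', 'typeOf("string")')
  .or(match(v => typeof v === 'function', 'typeOf("function")'));

console.log(lessThan100.test(42));        // true
console.log(lessThan100.test(200));       // false
console.log(stringOrFunc.test('hi'));     // true
console.log(stringOrFunc.test(() => {})); // true
console.log(stringOrFunc.test(5));        // false
```

The message string plays the same role as the readable name in the custom-matcher example: it is what a failed match reports back to you.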
Sinon allows you to fake timers, very similarly to the way Jasmine's mock clock works. When you fake timers in Sinon, it allows you to control the clock and advance it when you wish, by as much as you wish. This works with both setTimeout and setInterval. If you're working in IE, you'll also have to include the Sinon IE library, which you can get from the Sinon site. Sinon will provide you with a clock object that you can use to control the passage of time in your tests. What's really cool about this is that it also lets you control dates, so that you can create dates with a specific timestamp. And another really cool advantage is that this works with jQuery animations. So if a jQuery call animates something over 500 msec, and you advance the clock by that much, then the animation will complete and fire any callbacks. Let's jump into some code and see fake timers in action.
Faking Timers Demo
Let's take a glance at the code we're going to use to see fake timers in action. I've created a little spec file here, and I've actually created a class inside the spec file. The reason I'm doing that is because this class isn't so much a system under test as it is a utility that we're going to use to highlight how fake timers work. The class myClass has a couple of methods. The first one here, doTimeout, just calls setTimeout with a 1-second timeout. It takes in a callback and calls that callback after the 1 second has passed. The second method, hide, actually uses a jQuery animation and makes this div with an ID of hideMe invisible. The reason I'm showing both of these methods is to demonstrate that fake timers work not only with timing functions like setTimeout but also with jQuery animations. I created a bare-bones test with a describe called timers, and inside here I've just started out my code with a spy, a callback that logs out to the console, and inside my beforeEach I've made sure that the div is visible and then initialized that spy to a Sinon spy that wraps around the callback. That way we can use the spy not only to watch how the callback gets called but also to see console output from it. So the first test I want to write just handles timeouts. All right. Inside this test, the first thing I want to do is create a clock. So what we do is create a clock by calling Sinon.useFakeTimers. Now that I have my clock, I'm going to call doTimeout on that myClass object, and I'm going to pass in our spy. Now, remember, the timeout takes 1 second, so I'm going to tell the clock to go for just over a second. I'm going to do that with the tick function on the clock. Then I'm going to expect that the spy was called. And I'm also going to restore the clock using clock.restore. 
And if we run this in the browser, we can see that the test is passing, and it is calling our callback. So, again, what's going on is that because we're using a fake timer, when we call doTimeout, it actually doesn't do anything until we tell the clock to tick. Once that tick has passed 1000 msec, the timer will fire and our spy will get called, and then of course we have to restore the clock by the end of the test, otherwise the setTimeout function will remain hijacked after the test is completed, and we always, always, always want to clean up our state after every test. We never want polluted state in between tests. Even if we know that the next test is going to use a faked-out clock, we always want to clean up our state. Now let's see what it's like to use fake timers for animations. I'm going to create another test. This one I'm going to start out the same way, creating a clock. This time I'm going to call the hide method on that object. And again I'm going to tell the clock to tick for just over a second. And I'm going to set an expectation that the div is now invisible. I'm going to do that by expecting that if we ask for the div only if it's visible, the array that we get back has a length of 0. And of course the last thing I need to do is restore the clock, so let's see how that looks in the browser. And now we've got 2 specs passing, so that new test is passing as well. And the final test that I'm going to write is going to show off how we can use fake timers to fake out dates. Inside this test, we're going to create an initial date. And I'm going to create a clock. And this time it's going to be just a little bit different, because I'm going to pass in that initial date. 
When I pass in the initial date to the fake timers, what it's going to do is set the clock to that exact time. Now if I come down here and create a date, that date will actually have the value of the initial date. So let's show that by logging it out to the console, and then let's tick the clock 1 second. Now let's create another date. And let's log that out. And then of course we need to restore the clock. And let's go view that in the browser. And you can see here, it's logged out those 2 different date timestamps. The first one is that initial value we created, and the second one is exactly 1000 msec later. So this is another advantage of Sinon, in that you can actually set dates to the values that you want. That can be really helpful when testing things such as some event that only happens at midnight or an event that happens every hour. So, as you can see from the code that we've written, it's really not very difficult to use fake timers in Sinon. Just always remember to restore your clock, and of course the right place to do that is in an afterEach function, not inline in the test like I've done here.
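The mechanics behind a fake clock are worth seeing once. The sketch below models the idea behind Sinon.useFakeTimers under simplified assumptions — it queues scheduled callbacks and fires them when tick() advances past their due time — and is a hypothetical illustration, not Sinon's actual implementation.

```javascript
// Minimal fake clock sketch: hijack setTimeout, fire callbacks on tick().
function useFakeClock() {
  const realSetTimeout = global.setTimeout;
  let now = 0;
  let timers = [];
  // Scheduled callbacks are queued instead of being run by the real timer.
  global.setTimeout = (fn, delay) => { timers.push({ fn, due: now + delay }); };
  return {
    tick(ms) {
      now += ms;
      const ready = timers.filter(t => t.due <= now);
      timers = timers.filter(t => t.due > now);
      ready.forEach(t => t.fn()); // fire every callback whose time has passed
    },
    restore() { global.setTimeout = realSetTimeout; }, // always clean up!
  };
}

const clock = useFakeClock();
let fired = false;
setTimeout(() => { fired = true; }, 1000);
console.log(fired); // false: no fake time has passed yet
clock.tick(1001);
console.log(fired); // true: the clock advanced past the 1000 ms due time
clock.restore();
```

This also makes it obvious why restoring matters: until restore() runs, the real setTimeout is hidden behind the fake one, which is exactly the polluted state the transcript warns about.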
Faking the XHR
Sinon allows us to create a test double for the XMLHttpRequest object. That way, if our system under test is making Ajax calls, we can write tests around them and verify their requests and responses, or we can use the fake XHR object to provide canned responses for our tests. This way, we can inspect each request and provide canned responses to them. If we are working in IE, we will need a custom library from Sinon. There are 2 ways in Sinon to fake requests. The first is with the lower-level useFakeXMLHttpRequest method, and the second is with the higher-level fakeServer. Let's look first at the useFakeXMLHttpRequest method. Here we have the basics of using it in tests. First we create an array to hold our requests, then we actually tell Sinon to use the fake XHR, and that creates an xhr object. Then we register with that xhr object to listen for whenever a request is made, and we grab that request and put it in our array. At this point, we can actually do something like make an Ajax call, and when we're all done, we restore the original XHR object. So this part of using fake XHRs is pretty simple, but these request objects that get created are where the meat is. Now, using the request object is all about what your purpose is in your test and what you're testing. If you need the fake XHR to respond a certain way, but your tests will be testing your own objects and how they deal with the result, then what you need to know is how to make the fake XHR object respond the way your server would. You might also want to test that your code is creating the requests correctly and sending the correct data to the server. If so, then you will want to be able to inspect the request object. So we will start by taking a look at how to do that. The request object has a few properties and a couple of methods on it that you can use when asserting that a particular request was made correctly. 
The url property gives you the URL with which the request was made. The method property gives you the HTTP method. RequestHeaders is all the headers, and requestBody is the body. And status, of course, is the status code. This property and all the ones that follow aren't information about the request itself but instead about the response to that request. How you set up Sinon to respond will determine the value of this and the rest of the members of the request object. We've got the status text, the user name, the password, then there's responseXML, which will sometimes have a value based on the headers. ResponseText will have the text of the response unless it's an XML-based response. And then you can also look at the response headers, either one at a time using getResponseHeader and passing in a string indicating which header you want to check, or getAllResponseHeaders, which will give you back all the headers as a string. Now, since Sinon is controlling the XHR object in the browser, just because you make a request doesn't mean that it will get responded to. If you want to send a response, you have to tell Sinon to do that. There are a few methods that you use to do that, but by far the simplest way is to just call request.respond. This method takes in 3 parameters: the status, which is an integer; the headers, which is an object; and the body or response data, which is a string. When you call this on a request object, Sinon will respond to that request. This will allow any callbacks to fire or promises on that request to complete. There are a couple of other methods that we can use. We can set our response headers separately by calling setResponseHeaders, and we can set the body separately by calling setResponseBody. Now let's go see how to actually use this in some real code. The first thing I want to do is put in that template code that we looked at earlier. 
So I've got to capture the xhr object, so I'm going to put that in an external variable, and I also need my array to grab the requests. Now I'm going to go into my beforeEach and initialize that xhr object by calling Sinon.useFakeXMLHttpRequest, and then I'm going to initialize that requests array as well. The next thing I need to do is listen to the onCreate event. Now that I've got that done, there's one more thing that I need to do: clean up after myself. So in my afterEach, I'm going to call xhr.restore, which will restore the original XHR object back to the browser. Now let's write a test to exercise this code. I'm going to do something very simple: create a responseData variable here and put in a little bit of JSON. Then I'm going to call jQuery's getJSON method, and I'm going to pass in a callback that will receive the data and just log it out to the console. Now that I've made a request, I need to tell the fake XHR to respond to that request, so I'm going to go to requests element 0 and tell it to respond with a 200, and the headers that I'll use are a Content-Type of application/json. And I'll send in the response data, and now I can do an assert, so I'll expect that the request's url property is equal to some/url. Let's go in here and clean this up so we're calling the right object, close it up, and now we can run this in the browser and see how it looks. Okay, you can see that our test is passing. We can also see that it's logging out an object, which has a property called myData with a value of 3.
Going back to our code, we can see that that's exactly what we passed in: an object with a property of myData with a value of 3. So that's how we use the fake XHR object in order to respond to requests. Again, this is kind of a low level API. It allows us to investigate each request object, respond to each one uniquely, and see the url on each one. The Sinon fakeServer is another API for hijacking the XHR object in a browser. The fakeServer is a higher level function than useFakeXMLHttpRequest. This interface is more about setting up a pattern for responses and less about being able to examine specific requests and responses. As such, its usage is quite a bit simpler. We can see here in the sample code that there are only three main parts to it. First, we create the server. Then we define how we want it to respond to requests, and finally we restore the server so that the original XHR object is back and available to the browser. The API of the fakeServer has a few methods centered around how it should respond to different kinds of requests. We already saw the respondWith method. There is an overload of that where you get to specify the url and the response, so when you request that url, you'll get that response. Then we have the method, url, and response overload, so that you can have custom responses based on method and url. And if you want to be a little more vague about which URLs you want to set up a response to, you can use a regular expression for the url in both of those overloads. And lastly, the respond method actually causes all these responses to fire. Until you call this method, the requests that have been received so far won't actually be responded to. So that API is a bit simpler, but let's still go take a look at it in an actual test. Okay, now that I've got my describe set up, I'm going to create a server variable, and then in my beforeEach...
I take that server variable and initialize it to Sinon.fakeServer.create. Then I'm going to tell the server to respond with the following information: I want it to respond with a 200, the headers are going to be the same as before, a Content-Type of application/json, and let's go with the same return value, changing the value just a tiny bit. Okay, close that up, and then of course in our afterEach we will restore that server. And let's write our test. Here I want to just create a blank spy, then I'm going to call jQuery's getJSON again with some/url, pass in the spy, and tell the server to respond. Then I'm going to use Sinon.assert to check that the spy was called with the object that was passed in, myProp: 35. Okay, let's go over to the browser and see if this works. So we have both of our specs passing, so indeed that new test is getting its spy called correctly by our server. So there are the basics of using the two methods available in Sinon for faking the XHR, allowing you to provide responses to your Ajax calls and to inspect your requests to make sure that they were made correctly.
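The queue-then-flush behavior of fakeServer can be illustrated with a plain-JavaScript sketch. Again, this is a hypothetical illustration of the pattern rather than Sinon's implementation; createMiniFakeServer and its request method are names invented here for the demonstration.

```javascript
// Illustrative sketch (not Sinon itself): canned responses are declared
// up front with respondWith, incoming requests are queued, and nothing
// is answered until respond() flushes the queue -- the same shape as
// Sinon's fakeServer.
function createMiniFakeServer() {
  const routes = [];   // { method, url, response } entries
  const pending = [];  // requests waiting for server.respond()
  return {
    respondWith(method, url, response) {
      routes.push({ method, url, response });
    },
    // Stand-in for the Ajax call the system under test would make.
    request(method, url, callback) {
      pending.push({ method, url, callback });
    },
    // Until this is called, no request is responded to.
    respond() {
      while (pending.length) {
        const req = pending.shift();
        const route = routes.find(
          r => r.method === req.method && r.url === req.url);
        if (route) req.callback(...route.response);
      }
    }
  };
}

const server = createMiniFakeServer();
server.respondWith('GET', 'some/url',
  [200, { 'Content-Type': 'application/json' }, '{ "myProp": 35 }']);

let received = null;
server.request('GET', 'some/url', (status, headers, body) => {
  received = JSON.parse(body);
});

server.respond(); // flush: the callback now fires with the canned data
console.log(received); // { myProp: 35 }
```

The key design point is the explicit respond call: your test makes its requests, then decides exactly when the "server" answers, which makes asynchronous-looking code deterministic to test.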
Sandboxing is a way in Sinon of making sure that any global objects you touch with Sinon are restored to their original state. This includes the following: spies, stubs, mocks, fake timers, and the fake XHR object. There are a couple of ways to implement sandboxing. The first is to use the Sinon.sandbox.create method right inside your test. The other way is to use the Sinon.test method as a wrapper around your test callback. Let's jump right into the code and look at how to implement sandboxing. You can see up at the top of this file I've created a little object called myXhrWrapper. I've got two methods in there, get and save, and each of those methods just logs out to the console. I've had them do this for demonstration purposes so we can see the timing of when things happen. So I'm going to go into this test, create a sandbox variable, and set it equal to Sinon.sandbox.create. The next thing I want to do is stub out that XhrWrapper, so I'm going to call sandbox.stub on myXhrWrapper, and then I'm going to call both get and save on it. Then I'm going to restore it by calling sandbox.restore. The other thing I want to do is put in a little bit of logging, so I'm going to add a console.log in the sandbox test. And remember, stubs won't actually use the underlying implementation, so when this gets stubbed, get and save are not going to return any values. So let's go and create an afterEach method, and we'll have it log out after the test, and we'll have it call the XhrWrapper's get and save as well. Remember, those are going to log out to the console. So what should happen, if this works, is this: because we're sandboxing, we're creating a stub and calling get and save, and they're not going to actually do anything because those are stubbed out methods.
Then when we call restore, everything is going to automatically get unstubbed, and so in the afterEach, when we call get and save, they're actually going to log out to the console. So let's go ahead and run this and see the output. All right, we can see down in the console that after we go into the sandbox test, nothing gets logged out until the sandbox has been restored, and then when we call get and save, they're actually logging out to the console again using their original implementations. Going back to the code, it's important to note that when we wanted to stub, we actually had to call sandbox.stub and not Sinon.stub. This links that stub to this particular sandbox, so when we call sandbox.restore, those stubs are actually restored. Now let's look at the other method that we can use in order to sandbox the test. In this one, we do the same thing, except this time I'm going to call Sinon.test and then pass in the callback function. And now, because this is actually creating the sandbox and restoring it for me, I won't need to include that code, but I will still log out to the console. And I'm going to call this.stub. So again: when I'm creating a normal Sinon stub, I call Sinon.stub; when I've created a sandbox by hand, I call sandbox.stub; and when I'm using the Sinon.test wrapper function, I call this.stub. And I'm going to call get and save on that. All right, let's go run this one in the browser and see the output. Okay, so we can see from the console output that we ran our first test, which used the sandbox we created by hand, and then we can see that our second test functioned just like the first. Even though we called get and save on them, nothing happened because they were stub implementations, but after the sandbox got restored, our get and save went back to their original implementations.
So sandboxing is a really great way, if you're going to modify any globals, to restore those globals back to their original state. Hopefully your tests don't have to deal with too many global objects, but if they do, this is the way that you can deal with them.
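The stub-then-restore behavior demonstrated above can be sketched in a few lines of plain JavaScript. This is an illustration of the idea, not Sinon's implementation; createSandbox and the myXhrWrapper object here are stand-ins modeled on the demo.

```javascript
// Illustrative sketch (not Sinon itself): a miniature sandbox that
// remembers every method it stubs and puts the originals back when
// restore() is called.
function createSandbox() {
  const originals = [];
  return {
    stub(obj, name) {
      originals.push({ obj, name, fn: obj[name] });
      obj[name] = () => undefined; // stubbed: does nothing at all
    },
    restore() {
      for (const { obj, name, fn } of originals) obj[name] = fn;
    }
  };
}

// An object like the myXhrWrapper from the demo file.
const myXhrWrapper = {
  get:  () => 'real get',
  save: () => 'real save'
};

const sandbox = createSandbox();
sandbox.stub(myXhrWrapper, 'get');
sandbox.stub(myXhrWrapper, 'save');

console.log(myXhrWrapper.get());  // undefined -- the stub is in place

sandbox.restore();                // one call undoes every stub
console.log(myXhrWrapper.get());  // real get
```

This is why you must call sandbox.stub rather than Sinon.stub inside a sandbox: only stubs created through the sandbox end up in its bookkeeping list, so only those are undone by sandbox.restore.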
The first thing we need to do is install Grunt. Since Grunt is built on Node, it is installed through npm, so we'll be using the command line to do our installation of Grunt and its plugins. Let's switch over to the command line now and get Grunt installed. One of the big changes between 0.3 and 0.4 of Grunt is that the command line part of Grunt has been separated out from the engine. In 0.4, the command line is installed globally, whereas the engine of Grunt is installed locally in each project in which you want to use it. So we're going to install the CLI first. We can do that with npm install grunt-cli, and we use the -g flag to make it global. After this downloads and installs the Grunt CLI, we're going to go into our demo directory, which is representative of the project directory that you will have, and here is where I'm going to install Grunt locally. I do that with the command npm install grunt. Now, as I said before, the default version of Grunt is 0.3, but I want to install the current version, 0.4. In order to do that, I just add @0.4 to the command, and that's going to install the 0.4 version of Grunt. It is worth noting that the version number is only required for right now; eventually 0.4 will become the default version of Grunt, so you won't need to put in the version number when you install. It's also worth mentioning that knowing npm a little better can help you troubleshoot any installation problems you might have when installing Grunt. That's not covered in this course, but again, Pluralsight has courses on Node, and the introduction to Node course has a very in-depth section on npm, so it's definitely worth your time to watch that course. Now that we have Grunt installed, we have to create two files. The first one is a package.json file and the second one is a Gruntfile.js file. For the package.json file, you can create one by hand or you can use npm to create it for you. We'll use npm.
We do that with npm init, and that's going to ask us a series of questions. I'm just going to hit enter for all of these and accept the defaults, except for author, where I'll type in my name. Then I type in yes, and that's created my package.json file for me. Since none of these settings really matter for us, you can pretty much answer anything you want to these questions. The second file we need to create is a Gruntfile.js. We're going to create that one by hand, so let's look at an initial version of that file. This file is ready to be filled in with the details that are needed for our project. You can just create one exactly like this by hand, or you can grab the sample one from the demo files. And now that we've got Grunt installed, we're ready to move on and do some testing.
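An initial Gruntfile along the lines described here might look like the following. This is a sketch of the standard Grunt 0.4 wrapper; the sample file in the demo may differ in detail.

```javascript
// Gruntfile.js -- a minimal starting point, ready to be filled in.
module.exports = function (grunt) {

  grunt.initConfig({
    // Task configuration sections (jasmine, qunit, mocha, ...) go here.
  });

  // grunt.loadNpmTasks(...) calls for each installed plugin go here.

  // The task that runs when you type "grunt" with no arguments.
  grunt.registerTask('default', []);
};
```

Grunt loads this file from the project root, calls the exported function, and then runs whatever tasks you ask for on the command line.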
Testing with Grunt
Testing with Grunt gives us a simple way to run our tests, much like refreshing a page, except it will let us run our tests on the command line. With Grunt you can test Jasmine, QUnit, and Mocha, and each of them is rather easy to do. So let's start with Jasmine. First off, I have a few files already created. The first is the code file that represents our system under test; that's here in this code directory. Then I have a test directory, and inside there I've created a directory for each of our test frameworks. Let's look at just the Jasmine directory for now. You can see that I only have one file in here. That's because with Grunt, you don't need any other files to run Jasmine tests. Naturally, you can still have your test runner file and all the Jasmine files, but they aren't required to run a Jasmine test with Grunt. Next we'll need to install the Jasmine plugin for Grunt, and we're going to go back to the CLI to do that. The command to install the Jasmine plugin for Grunt is npm install grunt-contrib-jasmine. We're going to add an optional flag here at the end, --save-dev, which is going to add it to our dependencies inside of our package.json file. Now, that one took a few extra seconds because, in addition to the Jasmine plugin, it also installed PhantomJS, which is how it's going to run those tests. And now that we've got Jasmine installed, all we need to do is configure our Gruntfile and we're ready to go. So to configure our Gruntfile with Jasmine, the first thing we're going to do is go up here and create a loadNpmTasks command. That's done with grunt.loadNpmTasks, and we're going to pass in grunt-contrib-jasmine. The next thing we do is go up into the initConfig and add a jasmine section. This has one section within it called pivotal, and within that we give it two parameters.
The first one is src, and here's where we're going to have our system under test; remember, we've got that in the code directory, so I'm just going to do code/*.js. Then our options, the second parameter, has one section within it, which is specs, and that's going to be tests/jasmine/tests.js. Again, I could use wildcards in here if I had more than one test file. And I'll close that up. Now, I'm going to do one more thing before I'm done. I'm going to go down here into the default task and set jasmine as the default task; I'll show you what the benefit of this is in just a second. Now we can save our file, go back to our command line, and Grunt will actually work. So let's type in grunt jasmine, and it's going to run our Jasmine task for us. We can see it ran one spec with no failures. And since I added jasmine to the default tasks, I can just type in grunt and it'll still run that Jasmine task for me. That's all there is to setting up Jasmine to run with Grunt. Next I want to install and run QUnit. The first thing we need to do is install the QUnit task for Grunt, and we do that just like Jasmine with npm install; the plugin for QUnit is called grunt-contrib-qunit. Now, unlike Jasmine, the QUnit plugin hasn't totally caught up to 0.4 yet, so we're going to need to install a particular version of it. I'm going to add @~0.1.1, and that will make sure that the right version gets installed. Then I add the --save-dev parameter to save it into the package.json file, and I'll let that install. That should take just a moment as well. So it installed another copy of PhantomJS, and now we can go into our Gruntfile and configure it. We'll go up and add an npm task for grunt-contrib-qunit, and then we can go up here and add a qunit section. This one's going to be a lot simpler than Jasmine was. All we need is an all, and that's an array with a list of the HTML runner files that we want to use.
That will be tests/qunit/startup.html, and then close that up. Let's go look at the contents of the qunit directory. You can see here I've got more than just the test file; I've actually got the HTML runner file, QUnit itself, and the CSS files. With these files inside this directory, I can actually open that HTML file and it will run my tests for me, in addition to running them on the command line with Grunt. So let's go back to our command line and type in grunt qunit, and we can see it's got one assertion passed and no errors. Now let's go back in and make one more modification: in addition to jasmine, let's also add qunit to the default task. We'll go back and this time type in just grunt, and you can see that it ran both our Jasmine tests and our QUnit tests. And there you have running QUnit with Grunt. Our final testing framework to configure with Grunt is going to be Mocha, and of course we have to install that plugin as well. So we do npm install grunt-mocha. This one is a little bit different from the other two plugins because it's not part of the grunt-contrib project; it's its own project. And of course we want to add our --save-dev and go ahead and install it. It'll take just a second as well, and just like the other two, this one installs a copy of PhantomJS. Okay, now that we've got that installed, we can go and configure it in our Gruntfile. We'll do the same thing that we did before, but we'll change this to grunt-mocha, and then we'll go up and add a mocha section. This one has an all, just like qunit, and that takes a src, which is going to be an array of files. I'm just going to put in the one file, which will be tests/mocha/index.html, my HTML runner file. Then it takes one more piece, which is options, and here you have to specify run: true. And close that up. Now there's one other change that I've got to make.
Let's go and look at the directory structure and look at our Mocha files. Here we've got the same Mocha files we've seen previously in this module, but there's one change that I had to make: I had to take the index.html and modify it. So let's look at that and see what we did. Down here on line 16, you can see I've got: if navigator.userAgent.indexOf of PhantomJS is less than zero, then mocha.run. What this does is prevent mocha.run from being called here when the page is being run through PhantomJS. The reason for that is that we need to specify in our Gruntfile configuration that we want Mocha to run there and not here. If we don't wrap our mocha.run with this if statement and then put run: true in the configuration for Mocha in our Gruntfile, then PhantomJS is actually going to time out on us whenever we run our Mocha tests. Now that I've got that, I'm going to go back to the command line and run grunt mocha, and we can see that one assertion passed. Now let's go in and make this a default task as well by adding a comma and mocha, and go back and just run grunt. And again, it's going to run all three testing frameworks, and all of them passed with no failures. So there we've seen how to configure Grunt to run Mocha, Jasmine, and QUnit tests.
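Putting the three framework sections together, the Gruntfile built up in this module looks roughly like this. It is a sketch assembled from the steps above; the paths match the demo project's layout and may differ in yours.

```javascript
// Gruntfile.js -- Jasmine, QUnit, and Mocha configured side by side.
module.exports = function (grunt) {

  grunt.initConfig({
    jasmine: {
      pivotal: {
        src: 'code/*.js',                      // system under test
        options: {
          specs: 'tests/jasmine/tests.js'      // wildcards also work here
        }
      }
    },
    qunit: {
      all: ['tests/qunit/startup.html']        // HTML runner files
    },
    mocha: {
      all: {
        src: ['tests/mocha/index.html'],       // HTML runner file
        options: { run: true }                 // Grunt triggers mocha.run
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jasmine');
  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.loadNpmTasks('grunt-mocha');

  // Typing "grunt" with no arguments now runs all three frameworks.
  grunt.registerTask('default', ['jasmine', 'qunit', 'mocha']);
};
```

Remember the companion change in the Mocha runner page: `if (navigator.userAgent.indexOf('PhantomJS') < 0) { mocha.run(); }` keeps the browser runner working while letting the `run: true` option take over under PhantomJS.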
Linting with JSHint
Watching Files with Grunt
Now we're going to tie everything we have done so far together by getting Grunt to watch our files; whenever any of them changes, we are going to lint and run our unit tests. Like all the other tasks we've added to Grunt so far, we're going to have to install the watch plugin, and we do that with npm install grunt-contrib-watch with the --save-dev option, and we let that install. Okay, now that we've got that installed, we can go ahead and configure it. The first thing, of course, is to go down and add a task for it, and then we can come up here and add a config section. Now, watch takes just two options. The first one is files, and that's an array of the files we want to watch, so we're going to do tests/**/*.js to watch every JS file inside of our tests directory, and then the same thing with code. The second option it takes is tasks, and this is an array of the tasks that we actually want to run whenever one of the files changes. So we want to run first our lint, which is going to be jshint, then jasmine, then qunit, and finally mocha. Okay, let's go ahead and run that on the command line and see what we get. It's very simple: I just type in grunt watch, and now you can see that watching is actually on and it's just sitting there waiting. So let's go and make a change. I'm just going to do something simple and add a space here. Go back, and you can see that as soon as I did that, it kicked off running all the tasks we have configured to run whenever watch triggers. So now, if I'm developing and writing tests, I can just take this command line, stick it off in the corner of my screen, and every time I make a change to one of my code files or one of my test files, it's going to rerun my tests and let me know if I broke anything.
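The watch section described above is a short config fragment inside grunt.initConfig. This is a sketch based on the options named in the walkthrough; the glob patterns assume the demo project's tests and code directories.

```javascript
// Fragment of grunt.initConfig({...}): the watch task's two options.
watch: {
  // Every .js file under tests/ and code/ is watched for changes.
  files: ['tests/**/*.js', 'code/**/*.js'],
  // On any change: lint first, then run all three test frameworks.
  tasks: ['jshint', 'jasmine', 'qunit', 'mocha']
}
```

Running jshint first is a deliberate ordering: a lint failure stops the chain before any test framework runs, so you get the cheapest feedback first.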
So just like the other test utilities we looked at, this allows me to keep something going all the time to let me know as soon as I happen to do anything that breaks one of my tests.
Released 12 Feb 2013