Testing Client-side JavaScript by Joe Eames. Learn the tools and techniques to write comprehensive unit tests for your client-side JavaScript code. Course Introduction Hello, welcome to Pluralsight's course on testing client-side JavaScript. My name is Joe Eames, and I'm excited to present this course to you. I'd like to give you just a very brief introduction of me so that you will know why I decided to author this course for Pluralsight. As I said already, my name is Joe Eames. I've been a professional software developer for almost two decades now. You can reach me most easily on Twitter at @josepheames. You can also feel free to connect with me on LinkedIn. If you happen to have any questions or comments about this course, please contact me. I'm the curator for the website test-driven js, and I'm also a panelist on the JavaScript Jabber podcast, as listed here. As to why I decided to author this course: back in 2005, I read Kent Beck's very influential book Test-Driven Development by Example. Since then, I've been a very vocal proponent of TDD, sometimes called test-first development. I personally believe that there is no other single thing you can do that will improve the quality of your code more than practicing test-driven development. Sadly, today there seems to be way too little JavaScript code being written with TDD. But before you can practice TDD, you need to be able to write tests. So this course is not about test-driven development. This course is about writing tests, specifically unit tests, although the tools and techniques you will learn will help you in writing other kinds of automated tests.
If you are interested in test-driven development, Pluralsight has several excellent courses on it; please check them out. This course is broken up into high-level areas. The first area, testing frameworks, comprises the first three modules. In those modules we will learn how to use the three popular testing frameworks -- QUnit, Jasmine, and Mocha. Each of them has its own strengths and weaknesses. Then in the second section we will discuss mocking. There will be a module on mocking in JavaScript in which we will introduce you to the concept and rationale of mocking. Then we will cover the two popular mocking frameworks for JavaScript: Jasmine spies and the feature-rich Sinon.JS. Then to finish the course, we will look at some testing utilities. These utilities will help you get quick feedback from your tests as you write them. Having been a software engineer for a lot of years now, the one thing that has caused me more professional grief than anything else has been poor quality code. Sadly, most of that was my own. My great hope is that after viewing this course you will write more tests around your JavaScript code, and therefore you will write better code. And that in turn will make the world just a little bit better. Introduction to QUnit Hello, this is Pluralsight's course on testing client-side JavaScript. In this module, we will be discussing the QUnit testing framework. We'll start with an introduction to QUnit. Then we'll go over how to organize your tests, then how to run your tests. Then we'll talk about how to integrate your tests with the DOM. Then we will talk about integrating QUnit with CI. Then how to test asynchronous code using QUnit, and we'll finish up our module with a discussion of a few miscellaneous QUnit tidbits. QUnit is a popular and full-featured unit testing framework for JavaScript. It is similar to server-side frameworks such as JUnit or NUnit, so if you're familiar with those, QUnit will make a lot of sense.
QUnit was built by the jQuery team in order to test jQuery itself. QUnit has no dependencies, and it can even be used to test your server-side JavaScript code. When setting up QUnit for ourselves, the first thing we'll need to do is get the source code. This is the URL where the source code currently lives, but you can always just Google for QUnit in order to find it. Let's take a look inside the source code after we've downloaded it. This is what the source code zip file looks like. In here you're going to find the QUnit library itself, which is this qunit.js file. Also, next to it is the qunit.css file. This style sheet lets us see the results of our tests in a more readable format. There are some alternative style sheets included that we can use as well. The next directory I want to look in is the test directory. Inside here we'll actually find tests for testing QUnit itself. Using some of the files in here as templates for the work that you will do can be very convenient for getting yourself started. And now let's look inside of this addons directory. This contains several different addons that are included with the actual distribution of QUnit. One of the more interesting addons you will find in here is this one in the canvas directory, which is actually a canvas pixel tester. You can use it to test that certain pixels on the canvas are a given color that you're expecting. This close-enough addon is used for testing numbers within a given range. The composite addon is used for combining multiple sets of tests together. That can be really convenient once your test suite grows large. And the last addon I want to mention is the themes addon. This one gives us a couple of additional themes for styling the results page when we're actually running our QUnit tests. So let's look at how we actually set up QUnit to run. The first thing I'm going to do is open up the index.html file found within the distribution of QUnit.
Then I'm going to copy the contents out of there and paste them into a new file that I'm going to use for my own tests. I call this file the test runner file. Now that I've got this file here as a start, I'm going to need to make some edits to it. The first thing I'm going to do is remove the references to those two files. This is the code that is actually being tested, or the system under test as it is frequently called. I'll often refer to this as the system under test or SUT. The next thing I need to do is go up here and change our reference to QUnit to actually point to where my qunit.js file is in relation to this HTML file. In this case, I've got them in the same directory. Then I'll throw in here a reference to our system under test file. This file doesn't actually exist, but I'm doing this as an example of how you would reference the actual file that you're going to test. Then I will need to change our reference to the test file. We're using a file called tests -- plural -- so I'll add the s here. And then the last thing that I need to do is change the reference to the CSS file, because that's also in the same directory. Now I'm going to go over into our tests file, and we're going to write our first test. In QUnit, the way that you write a test is you call a function called test. The first parameter that you pass is the name of your test. Now, one thing that's great about JavaScript testing, and QUnit specifically, is that your test names can actually have spaces in them, something that not very many server-side frameworks allow. Then the second parameter is going to be a function, and inside that function is going to be where we actually put the body of our test. Now, the simplest test that we can actually write is to call the ok function, which is just the truthy assert, similar to Assert.IsTrue in server-side frameworks. And we'll just pass in a true. And let's go ahead and run our test by opening up the test runner file in the browser.
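Based on that description, a first test file might look something like the sketch below. In a real runner page, qunit.js provides the global test and ok functions; the tiny stand-ins here exist only so the sketch runs outside a browser, and the tests.js file name is just the one assumed in this walkthrough.

```javascript
// tests.js -- the simplest possible QUnit test (hypothetical file name).
// qunit.js normally defines test() and ok(); these minimal stand-ins
// exist only so the sketch is self-contained.
var passed = [];
function test(name, body) {
  body();              // QUnit also catches failures and reports them; omitted here
  passed.push(name);
}
function ok(value, message) {
  if (!value) { throw new Error(message || "ok failed"); }
}

// Test names in QUnit can contain spaces -- something many server-side
// frameworks don't allow.
test("my first test", function () {
  ok(true, "a truthy value passes the ok assertion");
});
```

Opening the runner page in a browser would then show "my first test" in the results list, styled by qunit.css.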
And this is what the output of QUnit looks like after you've run a test. We can see that this test here is passing. If it was failing, it would be marked in red. Next let's look at the assertions that are available to us within QUnit. The first assertion is ok, which is just the truthy assertion. Then we have the equal assertion, which just compares two values for equality. We have the notEqual assertion, which compares two values for inequality. deepEqual will compare more complex objects like arrays or actual classes with nested classes and make sure that every value within them is the same. notDeepEqual just does the inverse. Then we have strictEqual. Now, it's important to know that strictEqual should be your default equality comparison. strictEqual uses the triple-equals comparison within JavaScript, so it won't allow types that don't match to pass, even though they're convertible to each other. Only use equal when you truly aren't sure whether you're going to be getting, say, the number 4 or the string "4", but you want the test to pass anyway. Then we have notStrictEqual, which is the not comparison for strictEqual, of course. And the last assertion we have is raises, which tests that an exception was thrown. And you can test what kind of exception was thrown, or the message of the exception. Now, notice that these assertions all have a message parameter as their last parameter, and that parameter is optional. Organizing Tests When writing tests, just like any other code, we must keep ourselves organized in order to keep our code maintainable. Server-side testing frameworks have long had several methods for organizing tests. The general structure for this organization is: we organize tests into folders, within those folders we organize tests into files, and within those files we group tests into fixtures. QUnit has the same organization features, although in QUnit fixtures are called modules. Let's start by looking at organizing our folders and files.
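Before moving on to organization, here is a quick sketch of the assertion list above in use. The small stand-ins replace what qunit.js provides; in particular, deepEqual is approximated with JSON serialization, which is cruder than QUnit's real deep comparison.

```javascript
// Minimal stand-ins for QUnit's assertions, for illustration only.
function ok(v, m) { if (!v) throw new Error(m || "ok failed"); }
function equal(a, b, m) { if (a != b) throw new Error(m || "equal failed"); }
function notEqual(a, b, m) { if (a == b) throw new Error(m || "notEqual failed"); }
function strictEqual(a, b, m) { if (a !== b) throw new Error(m || "strictEqual failed"); }
function notStrictEqual(a, b, m) { if (a === b) throw new Error(m || "notStrictEqual failed"); }
function deepEqual(a, b, m) {        // crude approximation of QUnit's deep compare
  if (JSON.stringify(a) !== JSON.stringify(b)) throw new Error(m || "deepEqual failed");
}

ok(1, "any truthy value passes");
equal("4", 4, "equal uses ==, so the string '4' matches the number 4");
strictEqual(4, 4, "strictEqual uses ===, so the types must match too");
notStrictEqual("4", 4, "same value, different types, so notStrictEqual passes");
deepEqual({ a: [1, 2] }, { a: [1, 2] }, "nested structures compared value by value");
notEqual(1, 2, "1 and 2 are not equal");
```

The equal/strictEqual pair illustrates the guidance above: strictEqual would reject the "4"-versus-4 comparison that equal lets through.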
Let's assume you're starting with this directory and file structure. It's always a good idea to keep your files and directories logically organized, and mimicking your source files and directories is usually the best idea. So you should create something like we have here on the right, where your files and folders for your tests mimic the structure of the files and folders of your source code. This is a generally accepted best practice for server-side tests as well. In any case, we're going to want to spend a minute beforehand figuring out how to organize our tests. Grouping tests in server-side testing frameworks involves using fixtures. As I mentioned before, in QUnit fixtures are called modules. The purpose of modules is to group tests for organization and to group common setup and teardown. In our first tests, we didn't have a module. That's because in QUnit modules are not required. You only need them if you want some common setup or teardown, or if you want to group your tests. So let's go ahead and add a module to our code. The way we do that is by calling a function called module. The first parameter of the module function is the name of the module. And again, you can include spaces in this name. Let's run that in the browser and see what we get. Notice that the name of the test is now prefixed with the name of the module. This allows us to easily see which tests belong to which modules. Now, let's go back, and we'll add another module to our test file. This one we'll call module 2. And we'll also add another test to the test file. We'll call this test my second test. And again, we use the simple ok assert, passing in a true. Viewing this in the browser, we can see that this second module is now prefixing the name of the second test. You'll also notice that the tests are not grouped within the modules; they simply follow the modules. Any test that follows a module belongs to that module.
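The two modules and two tests just described can be sketched like this. The stand-in module function simply records the current module name and prefixes it onto each subsequent test, which is roughly how QUnit labels results in the browser.

```javascript
// Stand-ins illustrating how a module() call prefixes subsequent test names.
// qunit.js provides the real module() and test(); this shows the labeling idea.
var currentModule = null;
var results = [];
function module(name) { currentModule = name; }
function test(name, body) {
  body();
  // Any test that follows a module() call belongs to that module.
  results.push(currentModule ? currentModule + ": " + name : name);
}
function ok(v, m) { if (!v) throw new Error(m || "ok failed"); }

module("module 1");
test("my first test", function () { ok(true); });

module("module 2");
test("my second test", function () { ok(true); });

// results now holds "module 1: my first test" and "module 2: my second test"
```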
Now, just like server-side frameworks, we can add a setup method to every module, which will be run before each test within that module, and a teardown method that will run after each test within that module. This is simple to do. This is what the second parameter of the module function is for. For the second parameter, instead of a function, this time we'll create an object. That object can have two properties -- one named setup, which will be a function that runs before every test, and a second one called teardown, which is a function that runs after every test. Both of these functions are optional. There are a couple of purposes for creating common setup functions. The first purpose is to create common objects that can be used by all the tests within the module. The second purpose is for setting up the DOM so that it looks the way you want it for your tests. The purpose of having common teardown methods is, for one, cleaning up the DOM after you've manipulated it, and two, for any generic cleanup. A typical server-side test fixture is simply a class, and you typically limit yourself to one test class per system-under-test class. But because modules are so easy to create in QUnit, you can easily find yourself creating multiple modules of tests for a single object. There are a few reasons why we would do this. The first one is for logically grouping our tests. The second reason is for grouping our tests by component. This is the usual method, where we create one module for each class that we're testing. The third reason is for grouping our tests by common setup or initial state. It's common in testing to follow the arrange-act-assert pattern, where first we arrange the initial state of the code, then we act upon the code, then we assert that certain state changes or calls to external collaborators have taken place.
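A module with lifecycle functions might look like the sketch below. The stand-in test runner invokes setup and teardown around each test body the way QUnit does, and shares a context object between them, which is how setup can create common objects for every test; the "cart" module is a made-up example.

```javascript
// Sketch of a module with setup/teardown; tiny stand-ins replace qunit.js.
var lifecycle = {};
var log = [];
function module(name, hooks) { lifecycle = hooks || {}; }
function test(name, body) {
  var context = {};                       // shared `this` for setup, test, teardown
  if (lifecycle.setup) { lifecycle.setup.call(context); }
  body.call(context);
  if (lifecycle.teardown) { lifecycle.teardown.call(context); }
}
function ok(v, m) { if (!v) throw new Error(m || "ok failed"); }

module("cart tests", {
  setup: function () {        // runs before every test in this module
    this.cart = { items: [] };
    log.push("setup");
  },
  teardown: function () {     // runs after every test in this module
    log.push("teardown");
  }
});

test("starts empty", function () {
  ok(this.cart.items.length === 0, "setup gave this test a fresh cart");
});
```

Both properties are optional, so a module can declare only the setup, only the teardown, or neither.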
To arrange our code, we can use the setup method in a module to create the initial state; then we can create several different tests which each test a different state change. So far, we've only discussed the scenario where we have one test file referenced within our test runner HTML file. Remember, when we first created our test runner HTML file, we referenced the code that we had under test and the test file that contained our tests. There was one of each that corresponded with the HTML file. Well, that isn't the only scenario that we can run. Over in this second diagram you can see some other scenarios. The first one shows an HTML file that references one test file, but it references two different source code files. Perhaps this test file contains tests for more than one class, so we would need to reference more than just one of our source code files. The second test runner file contains two test files and one source code file. In this scenario, perhaps our code contains more than one class, or perhaps we need more than one test file to adequately test the class we've created. In the third example, we have two test files and two source code files referenced by the HTML file. In this scenario, test file four might correspond with source file four, and test file five might correspond with source file five. So each test file just tests the classes within that one source file, but we want to group them together within one test runner HTML file so that we can run all of our tests together. But we also might be mixing and matching. Perhaps source file four contains two or three classes, source file five contains one, and in test file four we're testing two of them, and in test file five we're testing the two others. So you can see the ability for us to choose which source files and which test files are grouped together within a single test runner file is actually quite versatile.
When running our tests in QUnit, we have several different options as to which tests, and how many of them, we want to run. The first thing we can do is just run all tests within a test runner file. We've seen that already, where we just opened up the HTML file. The second option that we have is running just a single test within a test runner file. You'll notice here that there's a rerun button next to each test. If you click that, it'll rerun just that one test. The third option is a custom filter. If you look closely as we click the rerun button, you'll notice that the URL has changed, and it's added a filter parameter to the URL. Well, that filter parameter is simply just a string-based search for a matching string within the module name concatenated to the test name. So if we go up here and change the filter to something we want, then we can filter tests by matching strings within the name. The fourth option that we have for running tests is the composite addon. Now, the composite addon was built to handle situations like the one you see here. I've got three runner files and three test files. Each of these test runner files has a corresponding test file. So let's say I want to look at that first set of tests. I open up the first test runner file. Now, if I want to look at the second set of tests, I have to open the second test runner file. And for the third set of tests, of course, I have to look at the test runner file for that third set. Now, that can be problematic once my test suite gets to be pretty large. So this is where the composite addon comes into play -- let's open up this test composite HTML file and take a glance at it. You can see that I've got this little script that lists all the test runner files that I want to combine. Once I open up this HTML file in the browser, what it'll actually do is run each of those separate test runner files and then combine all the test results into one HTML page.
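The little script just mentioned is essentially one call listing the runner pages to combine. The file names below are hypothetical, and the stand-in QUnit object only records the list; the real composite addon (qunit.composite.js) loads each listed page and merges the results into one report.

```javascript
// Roughly what the composite addon's driver script looks like.
// Stand-in QUnit object so the sketch is self-contained; the real addon
// actually runs each runner page and combines the results.
var QUnit = {
  suites: [],
  testSuites: function (pages) { this.suites = pages; }
};

// Hypothetical runner pages, one per set of tests.
QUnit.testSuites([
  "tests1.html",
  "tests2.html",
  "tests3.html"
]);
```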
Now, the last option that I've got for running tests is to actually use ReSharper within Visual Studio. So if you are using Visual Studio for writing your JavaScript tests, this can be a very convenient option. A couple of the benefits of using ReSharper are that it is simple to set up, and of course it is within Visual Studio. Let's look at an example of how we make that happen. You can see I've got a test file here, and ReSharper has gone ahead and added these little marks on the left-hand side that indicate that I can run these unit tests within the ReSharper unit test runner. The only thing that I've done special is I've added this comment up at the top that indicates where my source code actually is. And ReSharper will load that source code whenever it runs the tests. And this is actually all the setup that you have to do to get this to run in ReSharper within Visual Studio. Now, let's actually run this test, and we'll see what happens. So what ReSharper has done is actually started its own web server, run our test within that, and opened it up in the browser. So as you can see, this is very convenient, but there are some significant drawbacks to using ReSharper to run your QUnit tests. To start off with the drawbacks, let's assume that I've manually changed either my code or the test, and I want to rerun the test. I come back here into the browser and just hit F5, and now we get no response. ReSharper is no longer running that web server. So in order to execute this test again, we've got to come back into ReSharper and run it again. Now that it's run again, let's assume we've changed it again. Let's go run it again. Now look up here in the browser at the top, and you can see I've got three tabs open. Every time I execute that unit test, it opens up in a new tab in the browser. So there's another drawback of ReSharper: it keeps opening up tabs every time we run and execute the test.
The last drawback that we'll talk about is how the code has to be structured within the project. Now, in order for ReSharper to actually run our unit tests, the project that contains the test class and the HTML test runner file has to also contain the code under test, which means that the project that has your production code in it now also has to have its own tests within it. This is definitely not a recommended structure for your projects. You should not be putting test code alongside live code within the same project. It certainly makes for difficulties when you're trying to do deployments. So because of that, using ReSharper to run your QUnit tests -- although it looks and appears convenient at first -- is definitely something that should not be done at this time with the current restrictions that are in place in ReSharper. Once they iron out a few of these issues, then ReSharper becomes a much more viable alternative. But for now, with version 6, I would not recommend that anybody use ReSharper for running their QUnit tests. Integrating with the DOM Unit testing the DOM is a unique capability of JavaScript. Almost no other technology can unit test the actual UI. Some other technologies have the ability to run tests against the UI, but these are not unit-level tests that abstract away the layers below the UI; those tests actually run everything from the UI down. JavaScript actually has the capability to unit test against the UI. So, for example, we can test that an element exists in the DOM. Here we do this by using a strictEqual, selecting that element, and checking that the length is equal to one. We're also testing that there is one, and only one, element that matches that selector. In addition to that, we can test that the text value of an element is exactly what we expect. We do this by calling the text function. In fact, we can use any jQuery method to test aspects of the DOM. But testing the DOM is a double-edged sword. DOM tests are prone to being brittle.
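The two DOM assertions just described look roughly like this in a test. Since jQuery and a browser DOM aren't available in this sketch, a stubbed $ returning a canned element stands in; in the actual test page, $ is real jQuery running against the live document, and the #header selector and its text are made-up examples.

```javascript
// Sketch of DOM assertions. The stubbed $() fakes jQuery's answer for one
// known selector; in a browser test page you would use real jQuery.
function strictEqual(a, b, m) { if (a !== b) throw new Error(m || "strictEqual failed"); }
function $(selector) {
  if (selector === "#header") {
    return { length: 1, text: function () { return "My To-Do List"; } };
  }
  return { length: 0, text: function () { return ""; } };
}

// One and only one element matches the selector...
strictEqual($("#header").length, 1, "the header element exists exactly once");
// ...and its text content is exactly what we expect.
strictEqual($("#header").text(), "My To-Do List", "header shows the right title");
```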
Any small change in the UI can break one of the UI tests. We'll discuss how to avoid this in a minute. Now, it's not useful to test the way that the page looks. It would take far too much code to test that the page lays out the way that we want it to. So why would we test the DOM? Well, there are a couple of reasons. First, we can test that our system under test correctly manipulates our UI. Second, we can test that our code correctly reads from the DOM. It is important to note that DOM tests don't replace UI tests. We can assure that our code works fine with test HTML, but that's not the same as live HTML. There is still a good place for UI-level tests using tools such as Selenium. Hand-coding our DOM into a test page is also a bad idea; it limits the scenarios that we can test. For example, we can't have two different elements with the same ID. So we can use a setup method to create DOM elements for us. Here I have a very simple example. This code reads the text value from a div with an ID of div1. If we wanted to test this code, one of the ways we might do that is to hand-code a div1 right into our test runner HTML page, and then we could just write a test like this that reads that text and compares it to a known value. If we did that, it would be rather brittle, since there can only be one div on the page with the ID of div1. Every time we wanted a test that reads a different set of text, we'd actually have to change the text at the beginning of our test. And if our DOM pieces were more complex, we might be creating elements, and at that point we'd have to remember to clean up at the end of every test. Now, we can create these elements inside of our setup method, which is what I'm going to do here. But this is kind of limiting, because if I ever want to have two different tests that test against two different sets of text, I'll have to create two different modules so that I can have two different versions to test against.
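Sketched out, the create-in-setup, clean-up-in-teardown approach looks like this. A one-object fake document replaces the browser DOM so the sketch runs anywhere, and readText is a hypothetical stand-in for the system under test that reads the text of div1.

```javascript
// Sketch: creating a DOM element in setup and removing it in teardown.
// fakeDom maps element ids to text; readText() is a stand-in system under test.
var fakeDom = {};
function readText(id) { return fakeDom[id]; }

var lifecycle = {};
function module(name, hooks) { lifecycle = hooks || {}; }
function test(name, body) {
  if (lifecycle.setup) { lifecycle.setup(); }
  body();
  if (lifecycle.teardown) { lifecycle.teardown(); }
}
function strictEqual(a, b, m) { if (a !== b) throw new Error(m || "strictEqual failed"); }

module("reading text", {
  setup: function () { fakeDom.div1 = "hello world"; },   // create the element
  teardown: function () { delete fakeDom.div1; }          // clean up after each test
});

test("reads the text of div1", function () {
  strictEqual(readText("div1"), "hello world");
});
```

Testing a second set of text would mean a second module with a different setup, which is exactly the limitation described above.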
I'll do that here really quickly so that we can see what that's like. I'll just go up here and change the name of the module a little bit and change the name of this test. But we can see already that this is kind of a lot of work just so that we can run two different variations of the test. Now that I've got the text changed up there, I'll change the expectation so the test matches, and there we go. Now we've got two different flavors of the test to make sure that we're correctly reading the text from the div. Now, of course, we also need to remember to clean up. This is one of those things where, if we ever forget to clean up a DOM element that we create in one of our setup methods or even in a test directly, we can really create a problem for ourselves, with tests that are unpredictable because they pollute the state of the next test. Thankfully, QUnit has thought of this, and it has a fairly decent solution. This div right here, which has an ID of qunit-fixture, will actually take a snapshot of itself at the very beginning of the test run, and then at the end of every test it will reset itself back to the original state it was in. Now, this won't allow us to do variations and have multiple divs with the same ID that have different content within them, but at least it means we don't have to remember to clean up after ourselves whenever we pollute the DOM, so long as we limit our changes to something inside of this div right here. Now, there are two main drawbacks to testing the DOM. The first is that it requires a lot of additional setup that can very quickly become unruly and become a maintenance problem in our testing. And by far the worst drawback is that our tests now become tied to the HTML and easily become very brittle. There are some techniques for minimizing the brittleness of tests that test the view, so let's take a look at those now. Let's say we're writing a simple to-do application.
In this application, in response to some event, the code will need to create a to-do item, and we will put that code into a createTodoItem method. So here's what we'll do. We'll go and grab a very specific div, we'll append another div to it and stick some text inside of that div, and that's how we'll implement our createTodoItem method -- very simple. Now, in order to test this method, we will want to write a test that will assure that the DOM node is created correctly. So what we'll do is we'll go into our HTML runner page, and we're going to write that div1 node right into that special fixture; that way we don't have to worry about setup and teardown and cleaning up after ourselves. Next we'll go into our test file, where I've got this empty test already prepared. I'm going to call createTodoItem, and I'm going to create a test that will verify that the DOM node was created correctly. Now, of course, I could use jQuery and test that the node that was created looks exactly the way that I expect. Maybe I could grab the HTML and compare it to a known string, but if I did that, it would be really brittle. What if I decided that my to-do items no longer needed to be divs -- they needed to be spans, or list items, something like that? If I made those changes, my tests would now be broken. So there's a better way to do that. Instead of testing the exact HTML, we'll just test that the node we're looking for exists. And the way that we'll do that is by checking for a node that has a specific class, rather than using, say, a selector that goes to div1 and looks for a div as a child of div1. Here I've chosen a class, js-todo-container. I like to use the prefix js- for any classes that I'm using specifically for programming purposes and not actually for styling purposes. And if I do that, it gives us a lot more flexibility to change around the HTML of our resulting code, rather than having to know that the HTML is structured in an exact way.
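Putting the technique together, here is a sketch of createTodoItem and the class-based test. A minimal fake DOM node replaces jQuery and the browser so the sketch is self-contained; the js-todo-container class name comes from the walkthrough above, and findByClass is a hypothetical stand-in for a jQuery class selector.

```javascript
// Sketch of the less-brittle, class-based DOM test.
function makeNode(tag, className) {
  return { tag: tag, className: className || "", text: "", children: [] };
}
var div1 = makeNode("div");                      // lives inside #qunit-fixture

// System under test: appends a to-do item, tagged with a js- class that is
// used purely for programmatic lookup, never for styling.
function createTodoItem(parent, text) {
  var item = makeNode("div", "js-todo-container");
  item.text = text;
  parent.children.push(item);
}

function findByClass(node, className) {          // stand-in for $(".js-todo-container")
  return node.children.filter(function (child) {
    return child.className === className;
  });
}
function strictEqual(a, b, m) { if (a !== b) throw new Error(m || "strictEqual failed"); }

// The test cares only that a node with the class exists -- the item could
// become a span or a list item later without breaking this test.
createTodoItem(div1, "buy milk");
strictEqual(findByClass(div1, "js-todo-container").length, 1, "to-do item was created");
```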
All we have to do is look for elements that actually have a specific class that we know they should have, and that way it can be a div, it could be a span, it could be an article, it could be a list item. We don't really care at that point what the HTML looks like. We just care that the node exists, not so much what the node looks like. And that allows our tests to be more about the functionality of the view and less about the presentation of the view, which is something that's far more brittle than just testing the functionality of our code. Then I'll go back to my system under test, and I'll change my createTodoItem method to append a new div that has that class, and go in here and fix my selector. And then we'll run our test and see how this looks. And our test is now passing. And those are some techniques for reducing the brittleness of testing the UI. Integrating with CI Integrating with continuous integration systems is very important when writing tests. Having tests is good, but having them under continuous integration is even better, because then no failing test can slip by for very long. Putting JavaScript under continuous integration has its own challenges. Unlike many other languages, the runtime environments for JavaScript -- browsers -- are big, heavy programs that have a lot of overhead, so running them under continuous integration can be problematic. The basic procedure for putting JavaScript under continuous integration is to run PhantomJS, which is a headless WebKit. WebKit is the engine behind Chrome. And PhantomJS is a program that wraps around the WebKit engine and gives us a runtime environment for JavaScript that doesn't have all the overhead of a browser. Once our JavaScript is running underneath PhantomJS, what we want to do is just write out to the console, and PhantomJS will relay that to the console output that the continuous integration server can capture.
And as long as we're putting our output in the format that whatever CI system we're using expects, then the CI system can capture that output and determine whether or not our tests are passing or failing. Now, using PhantomJS for your continuous integration does have one significant drawback: it is the WebKit engine, which is what Chrome runs, but it isn't the same engine that the other browsers run. So if your tests really, truly need to verify that your JavaScript runs in all browsers, then PhantomJS by itself isn't good enough, because you're not verifying that your JavaScript is actually going to run under a specific version of Internet Explorer, or under Firefox, or under any of the other browsers. Therefore, if you do need to truly test your JavaScript under different browsers, there are some programs out there that will allow you to run your tests under multiple different browser setups. But there are some unique challenges. I'm not going to go into this in depth, but running multiple browsers under continuous integration has some challenges because of all the different versions you may want to run. Internet Explorer is by far the most difficult of those, since it's nearly impossible to get multiple versions of Internet Explorer running on the same machine. So now you have to have multiple machines set up, each with a specific version of the browser, and you can use those machines to run your JavaScript against those specific versions of those browsers. But all that setup and maintenance can really be a nightmare. There are some third-party solutions out there that will run your JavaScript for you under multiple different versions of browsers, and they're very easy to find. All you need to do is Google cross-browser testing, and you can find several different options for third-party testing services where you can send your JavaScript in an automated fashion.
They'll run it for you, and then you can get the results back and see whether or not your JavaScript is actually running in different browsers. And you can do other things, like make sure that your HTML page is actually working under different browsers as well as rendering correctly. But running it yourself is not something that I would recommend doing, because of all the maintenance overhead that's involved. If you have a large team and a lot of IT resources, maybe that's more feasible. Now, let's look at how to get QUnit running under continuous integration. I'm going to use TeamCity for my demo. But even though I'm using TeamCity, the steps here are completely analogous to how you would get QUnit running under any CI system that you're using. So long as you're minimally comfortable with the CI system, you can follow the same steps here. This will work on pretty much any CI server out there. So now, the first thing we need is the PhantomJS executable. It's available here, but we can also just Google for it. From there we download the zip file, and inside the zip file there are two files that we care about. I've already grabbed both of the files and put them here in my demo directory. They are the PhantomJS executable file itself and this run-qunit.js file, which you will find inside the examples directory. For simplicity, I put them here with all my other files, but by no means do they need to be in the same directory as your test code and the system under test. The next file that we need is this qunit.teamcity.js file right here. This is a file that we're going to reference inside of our HTML test runner file, like this. You can get that file from this URL right here. Now, this is a URL that you probably want to take notice of and copy down; it's kind of hard to Google for. It was actually put there by a community member, but it's what allows the output of our tests to be interpreted by TeamCity.
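A reporter of this kind works by hooking QUnit's callback events and printing TeamCity service messages to the console. Here is a rough sketch, not the actual file: the stand-in QUnit object just invokes the registered callbacks so the sketch runs anywhere, and the ##teamcity[...] lines follow TeamCity's service-message syntax.

```javascript
// Rough sketch of a qunit.teamcity.js-style reporter: bind QUnit callback
// events, emit TeamCity service messages. Stand-in QUnit object included
// so the sketch is self-contained.
var output = [];
var QUnit = {
  callbacks: {},
  testDone: function (cb) { this.callbacks.testDone = cb; },
  done: function (cb) { this.callbacks.done = cb; }
};

QUnit.testDone(function (result) {
  output.push("##teamcity[testStarted name='" + result.name + "']");
  if (result.failed > 0) {
    output.push("##teamcity[testFailed name='" + result.name + "']");
  }
  output.push("##teamcity[testFinished name='" + result.name + "']");
});
QUnit.done(function (summary) {
  output.push("Tests run: " + summary.total + ", failed: " + summary.failed);
});

// Simulate what QUnit would do after running two tests:
QUnit.callbacks.testDone({ name: "passing test", failed: 0 });
QUnit.callbacks.testDone({ name: "failing test", failed: 1 });
QUnit.callbacks.done({ total: 2, failed: 1 });
```

To target a different CI system, you would keep the same event hooks and change only the message format to whatever that system parses.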
Let's take a look at the file and see what it actually looks like. You can see what it's doing: it's binding to events in QUnit and writing out messages to the console that will ultimately be interpreted by TeamCity, so that TeamCity will know whether or not our tests are passing or failing. In order to do this with a different CI system, you just need to figure out the format of the messages that your CI system will read and then match that. So you can either build it by hand, or search online and find one that's already been created by somebody and use that. Now that we have those files ready, we're going to use a couple of prebuilt tests. The first one here passes, and the second one fails. You can see that I'm actually calling my system under test here, to demonstrate that the code I have written for my system is truly being run by our CI server. So I'm going to go into my TeamCity. I've got a little project here called QUnit CI, and I'm going to add a new step to that -- make it a command line step. Come down here, and we'll name it "run our JavaScript tests". Now, the working directory I'm going to set to be the same directory where I've got all of the tests that I'm going to run. You'll have to figure this out according to wherever your code is being checked out and deployed to. Then for the command executable, it's going to be phantomjs.exe, and the command parameters are going to be that run-qunit.js file that we talked about earlier. And then the second parameter will be our test HTML file. Now, this is a point where you need to consider what test runner HTML files you have and how to combine them into the correct set that you want for running under CI. Since you can only specify a single test HTML file here, if you want to run more than one at a time, you're actually going to have to create more than one step.
So to avoid that -- to make it a lot easier -- either create one new test runner HTML file that encapsulates all of the JavaScript tests we've got, or you can use the composite plugin that we covered earlier -- this is a great place to use that addon to run all of your tests at the same time. So let's save this, and then we will go ahead and build. Going up here, clicking run -- and my results are going to show that indeed our build is failing. One test is passing; one's failing. So we'll go into the actual code, and we'll change that failing test so that it's passing. Run this again, and we can see now that our build is green. Both tests are passing. So integrating JavaScript tests into your CI system is just that simple. Asynchronous Tests Asynchronous tests are a feature of QUnit that gives us the ability to do some things that would be very difficult otherwise. The basic purpose of asynchronous tests is to allow us to test our code when it contains setTimeout or setInterval function calls. A second but less common purpose of asynchronous tests is to allow us to test UI effects that take time to actually occur, such as a fade out or fade in. And the third purpose of asynchronous tests is to allow us to test ajax calls. Let's first look at testing setTimeout and setInterval. The first test I'm going to show is a broken asynchronous test. Here I'm going to create a setTimeout call. Inside that setTimeout call I'm going to call an assert, and I'm going to give it a timeout of 100 milliseconds. Now, in this example, I'm putting my assert inside of the setTimeout. But I could also reverse the code and put the actual code that I'm executing inside the setTimeout and put my assert outside, and we'd get the same situation. Let's run this in the browser and see what happens. Here you can see that the test has completed, but it says that we have no asserts.
We know that's not true -- we have our ok assert -- but because it's inside the setTimeout, the test completes before the assert has its opportunity to run. So in essence, the code is being called out of order. The code after the setTimeout is being called before the code inside the setTimeout. If we had the situation where we had our code under test inside of our setTimeout and our asserts outside, the asserts would get called before the code in our tests had a chance to run. So let's use QUnit's asynchronous capabilities to fix this issue. The first thing I will do is go up here and issue a call to the stop function at the beginning of my test. The stop function tells QUnit to pause running tests until it's notified to continue. Now, I'm going to go down inside my setTimeout call, and I'm going to issue a call to start. This tells QUnit to go ahead and resume running the tests. Now, let's run this in the browser and see what we get. And we can see here that our tests are now passing. We have reversed the problem that we had before, where the code inside the setTimeout wasn't running until after the test was completed; now QUnit is waiting until after the code inside the timeout has run. Now, let's look at the situation where we have more than one setTimeout. I'm going to duplicate this test, and inside the second test I'm going to create a second setTimeout call with its corresponding assert. Now, in order to make it a little more obvious what's happening, I'm going to change the timeout on the first setTimeout to be something a little bit longer, and inside of it I'm going to write out to the console so that we can see what's happening. All right. Now run this test and watch the console here. See, the test has completed, but we still get our logging statement after the fact. So our second setTimeout isn't running until after the test is completed.
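A rough sketch of the bookkeeping that stop and start do may help here. The stubs below only stand in for QUnit so the idea is runnable outside a browser; in a real test you would call QUnit's own stop and start:

```javascript
// Minimal stand-ins for QUnit's stop()/start(): the test run may only
// complete once every pending stop() has been matched by a start().
let pendingStarts = 0;
let testRunComplete = false;

function stop(count) {              // stop(n) waits for n starts
  pendingStarts += (count || 1);
}
function start() {                  // one async piece has finished
  pendingStarts -= 1;
  if (pendingStarts === 0) testRunComplete = true;
}

stop(2);   // two setTimeout callbacks still need to run
start();   // first timeout's assert has run
// the run is still paused here: one start() is outstanding
start();   // second timeout's assert has run; now the run can finish
```

The count parameter on stop is the shorthand for calling stop once per pending callback, which the demo uses in a moment.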
Even though it showed us the test has passed, it's actually giving us a false positive, because we aren't running all the code that we want to test. Fixing this situation is rather simple. We just need to go back into our code and add in a second call to stop. Now QUnit knows that it's waiting for two calls to start before it can continue. Now, let's run this test in the browser and watch what happens. Okay. You can see that the test actually paused for the full two seconds and didn't complete until the second setTimeout had run, which we see by the logging statement. Now, we've got a shorthand method for doing this. Instead of issuing a call to stop twice, we can just pass in a parameter to stop telling it how many starts to wait for until it can resume running the tests. Now, if we run this test in the browser, we can see that we get the same result. Thankfully, QUnit has a little shorthand method for us that makes writing these asynchronous tests just a little bit easier. Instead of calling the test function, we actually call the asyncTest function. Now that we've done that, we can actually remove our call to stop, because it's now implied with asyncTest. We just have to retain our call to start. So for convenience's sake, let's set our long setTimeout to something a little bit shorter, then run this series of tests in the browser, and see that they are indeed passing. Now, the second reason we mentioned for asynchronous tests is to wait for UI effects. Calls to start and stop can help us with that as well. Let's look at a little sample code. Here I've written a simple function that will fade out a div over a given duration. I'm going to go back to my test suite and write a new test for this function call. So here inside my UI test I'm going to issue a call to that function, and I'm going to have it take half a second, and then I'm going to use setTimeout to check to make sure that the div is now invisible.
I'll do that by grabbing the div and checking its property. Now, I'll need to set the duration on this timeout to something slightly longer than how long it takes to fade out the div; otherwise the div won't be completely faded out when I run my assert. Running this in the browser, we can see that this new test passes as well. Now, even though this works, there's a much better way to do this. Let's go back into our code under test, and we're going to add in a callback function that will be called as soon as the fade out is complete. Fortunately, jQuery already supports that callback function, so all we have to do is pass it in to our call to jQuery's fadeOut. So now that we've got that code written, we'll go back to our test and change it so that instead of using the setTimeout, we now pass in a second parameter that's going to be a callback function. Running this adjusted test, we can see that it's still passing. Now, the last reason for asynchronous tests is to test with ajax. I'm only going to mention this to be thorough; in reality, you should never unit test an ajax call. Instead, we should be using some kind of test double, and we will go into one of the techniques for doing that later on in our module on mocking. So let's recap what we covered in this module. Using the asynchronous tests within QUnit, we can test setTimeout and setInterval calls, we can test UI effects, and we can test our ajax code. But as I said before, you should avoid that last one at all costs. You should write some kind of abstraction layer over your ajax calls, and then you should use test doubles instead of trying to test your code that makes ajax calls. QUnit Tidbits In this last section of the QUnit module we're going to look at four last tidbits of information about QUnit. The first of these is the noglobals setting. The noglobals setting tells QUnit to fail a test if it introduces any global variables. Let's look at how that works.
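As a quick sketch of the kind of leak noglobals catches: QUnit does this, roughly, by snapshotting the properties of the global object before a test and comparing afterwards. The findNewGlobals helper below is illustrative, not QUnit's actual code, and the leak is written explicitly so the sketch also runs in strict mode:

```javascript
// Diff the global object's keys before and after a "test" runs.
function findNewGlobals(globalObj, testFn) {
  const before = new Set(Object.keys(globalObj));
  testFn();
  return Object.keys(globalObj).filter(k => !before.has(k));
}

// A "test" that leaks: in sloppy mode, assigning without var/let/const
// does exactly this -- creates a property on the global object.
const introduced = findNewGlobals(globalThis, function () {
  globalThis.globalvar = 3;   // the accidental global from the demo
});
```

When noglobals is checked, any name showing up in that diff fails the test, which is exactly the message we're about to see in the demo.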
I'm going to write a quick little test which actually introduces a global variable. Inside this test, the first thing I'll do is create a new variable called globalvar and set it equal to 3. You can see that since I'm not putting var at the beginning of this, it's actually creating a global variable. Next, I'll create a quick assert that verifies that globalvar equals 3. Now let's run this test in the browser. We can see that it passes. Now I'm going to go up here and check the noglobals option. This immediately reruns the test, and we can see that the test is now failing, and we get this little message that says it introduced a global variable named globalvar. Now, to fix that, I only have to go up here and put var in front of the variable name so it's not creating the global variable -- rerun it, and of course it's now passing. The next thing we'll look at is the notrycatch setting. In order to show this, I'm going to need a new test, so I'll duplicate the existing test. I'll come in here and rename it to a more appropriate name -- I'll call it hidden exception -- and then I don't need the code that's in here. Now, let's take a look at this code that I've already written. I've got a class that has a function called doSomething, and you can see that all it does is throw an exception. So I'll go into my test, and I'm going to call that doSomething, which is just going to throw an exception. Now, let's run this in the browser, and you can see that our test is failing. But the problem is the error message isn't very helpful at all. It says died on test number 1, undefined -- which doesn't tell us the actual problem. The problem is I'm getting an exception. QUnit actually wraps all the tests inside of a try/catch block, so any exceptions that are thrown inside of your code are suppressed. So let's go up here and check the notrycatch option, and that will rerun our test. And now we can actually see the true reason why the test is failing.
So let's go back to our code, and we will fix this by commenting out the call to doSomething, and then we'll throw in a quick assert. And now let's rerun the test. And everything's passing. Now, the next piece we're going to look at is the expect method. So let's go back into our code, and we'll create a new test. This one I'm going to name expect some asserts. Inside this test I'm going to put in another assert, and then I'm going to call the expect method, and I'm going to pass in 3. So I'm telling my test to expect three asserts. But when I run the test, you can see that it's actually failing. And the reason it's failing is that it expected three assertions, but only two have run. So let's fix this by adding a third assert, and then let's rerun the test. And that's now passing. Now, there's a quicker way to do this. I can go over here and remove the call to expect; instead, my second parameter to the test function will be the number of asserts to expect. Now I'll comment out one of the asserts so there are only two. Run it again, and it's failing. Come back in and uncomment that assert. Run it again. And now we're passing again. So this is a quick way to verify that the number of asserts that happens is what you expect. Now, it may be tempting to put this parameter in all the time, but this is definitely not something that you should put in most of your tests. It's really nice for asynchronous tests, to make sure that the asserts you need to run are actually running and that no asynchronous code is getting delayed until after the test is finished. But in general, for most tests, it's just duplicating information and making your tests more brittle. Now, the last piece of information we're going to look at about QUnit is the events in QUnit. There are quite a few events. Here's the list of events, and for the most part they're pretty self-explanatory. The only one that could be a little bit confusing is the log event, which actually fires every time an assert runs.
And then start and done, which happen at the very beginning of the test run and at the very end of the test run. So let's look at these events in action. I'm actually going to copy in some prewritten code that just listens to each event and then logs out a message based on the event. I'll comment out all but one test, and then we'll go and run this in the browser. I'm going to expand the area for Firebug so that we can see the console messages a little bit easier. After I run the tests, you can see that we're getting a message for when we started the whole run, for the module, for each test, for each assertion, and then when we're completely done. Now, the main reason you're going to use these events is actually when integrating with a CI system. Most of the time, listening to these events is not going to be useful in your testing. But if you are integrating with a CI system, printing out the right messages for when your tests pass or fail is going to be critical for letting the CI system know whether or not tests pass or fail. And that's the major reason why you would use events in QUnit. So in this section we covered four things. The noglobals setting, which makes sure we don't introduce any new global variables. The notrycatch setting, which keeps QUnit's try/catch block from hiding any exceptions thrown inside our tests. The expect method, and the corresponding expect parameter, which lets us verify the number of asserts that happen in a test. And last but not least, the events that QUnit fires, which we can listen to and then take action based on what goes on in those events. Summary In this module, we have looked at the QUnit unit testing library. QUnit uses TDD-style tests. That means that it's very similar to a lot of server-side unit testing frameworks. QUnit has a very versatile HTML interface, allowing you to easily filter your tests to only run the tests that you want to run, or skip tests that you don't want to run.
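The event wiring used in that demo can be sketched like this. QUnit's real API exposes callbacks such as QUnit.begin, QUnit.log, QUnit.testDone, and QUnit.done; the fakeQUnit object below just stands in (immediately invoking the registered callbacks with sample data) so the wiring is runnable here:

```javascript
// Collect log lines the way a reporter or CI adapter would.
const messages = [];

// Stand-in for QUnit: each method registers a callback and fires it
// once with data shaped like QUnit's callback arguments.
const fakeQUnit = {
  begin(cb)    { cb({}); },
  log(cb)      { cb({ result: true, message: 'adds 1 and 1' }); },
  testDone(cb) { cb({ name: 'add', failed: 0, passed: 1 }); },
  done(cb)     { cb({ passed: 1, failed: 0, total: 1 }); }
};

fakeQUnit.begin(() => messages.push('run started'));
fakeQUnit.log(d => messages.push('assert: ' + d.message + ' -> ' + d.result));
fakeQUnit.testDone(d => messages.push('test done: ' + d.name));
fakeQUnit.done(d => messages.push('done: ' + d.passed + '/' + d.total + ' passed'));
```

Swap the pushes for console.log calls in a CI-specific format and you have the essence of an adapter like the qunit.teamcity.js file from earlier in the module.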
QUnit supports asynchronous tests, so you can test any asynchronous code that you've got. And QUnit integrates with CI. This lets you include your tests with your CI system so that if you break any tests in JavaScript, your entire build can fail, just as if you'd broken any server-side tests. QUnit is a really great unit testing framework. If you like doing TDD-style testing, QUnit is definitely one of the libraries you should look at for your client-side tests. Jasmine Introduction to Jasmine & TDD Joe Eames: Hello. This is Pluralsight's course on Testing Clientside JavaScript. In this module we will be discussing the Jasmine testing framework. In this module we're going to go over the following topics: we'll start with an introduction to the Jasmine unit testing framework, and since Jasmine is a BDD framework, we'll also go over a brief introduction to BDD, or Behavior Driven Development. Next we'll talk about how to set up Jasmine. Then we'll go over how to organize our tests. Now, we're not going to go as in depth as we did in the QUnit module, so if you didn't see that module, you might want to view at least that portion of it. Then we'll talk about running Jasmine tests and the different ways we can filter and run our tests. Then we'll talk about spies, which are a lightweight form of test doubles, or mock objects, included with Jasmine. Then we'll talk about how to integrate with the DOM. Then we'll go over how to integrate with continuous integration products. And lastly we're going to cover how to test asynchronous code, and we'll cover two different features that Jasmine has that allow us to test asynchronous code. Jasmine is probably the most popular unit-testing framework for JavaScript. It is an open source framework, and it was built on the principles of Behavior Driven Development, or BDD. Jasmine also supports both client-side and server-side testing. Behavior Driven Development is a process that was created in 2006 by Dan North.
It is also a superset of Test Driven Development, which means that although it includes Test Driven Development, it also adds many more pieces. Behavior Driven Development focuses on the language used in development, which they call the ubiquitous language -- which, incidentally, is also an important part of domain driven design. The basic process for Behavior Driven Development is to start with acceptance tests, which are higher-level tests than unit tests, and just like Test Driven Development, we first write a failing test. Once we have a failing acceptance test, the next step is to write a failing unit test that satisfies a piece of what that acceptance test needs in order to do its work. Once we have a failing unit test, then we go into our typical Test Driven Development process, where we write the code to pass the test, then we refactor if necessary, and then continue on. Once we have enough unit tests in place that our acceptance test is now passing, then we are free to write our next failing acceptance test and continue on in the process of writing unit tests to satisfy that acceptance test. Setting up Jasmine Setting up Jasmine is a relatively simple process. Of course, the very first step is always to go and get the source code. You can get the source code here, or, as always, you can just Google for it. Once you've downloaded the source code as a zip file, you can extract it and then pull out the relevant libraries to use in your project, but there's a lot of stuff in the Jasmine zip file that can actually be very useful to you. So let's take a look at that zip file now. Here's the extracted zip file. You'll notice that it has three directories and an HTML file. That HTML file is a sample file showing how to structure an HTML file to run your Jasmine tests. It's useful to note that in Behavior Driven Development, tests are usually called specs. So everywhere you see the word spec, like in the SpecRunner file, that really just means test.
We'll take a look inside that HTML file in a minute. The lib directory actually has the Jasmine source, so let's go inside there. There's going to be a folder inside there for the specific version of Jasmine that you've downloaded. Within that, there are four files: the license file, which is not very interesting; a CSS file to style the page that shows your Jasmine tests; and then two files that actually have the Jasmine source code. The first one is the core of Jasmine itself; the second one, the jasmine-html file, is actually the HTML reporter for Jasmine. Jasmine is set up in such a way that reporting the results of your tests to an HTML page is only one of the supported ways to report the results of Jasmine tests. There are other reporters that you can find online that report to XML and other formats. Now, going back to the root of the zip file, we've got a couple of other directories that are of interest to us. This one actually has sample tests that we can use as a reference point when we're writing our own tests. There are two files in the spec directory. The first one, PlayerSpec, contains a bunch of sample tests that you can use as a reference when writing your own tests. The second one, SpecHelper, is an example of how to write your own custom matcher, or assert, in Jasmine. And the last directory, src, contains a couple of source files that are used in those sample tests. Now let's take a moment and look at the SpecRunner file itself and how it's organized. The HTML file that you use in order to run your Jasmine tests looks like this. If you remember the test runner file from module one for QUnit, you can see that these two files are organized quite similarly. At the top, we've got our CSS file. Then we have the libraries that we use for testing. If there are any third-party libraries that your source code will need to run, you'd put those here as well. Then Jasmine suggests that we include our spec files, or test files, next.
And the last piece is to include our actual source files that we're going to be testing -- our system under test. I suggest you swap the location of your spec files and your source files, because depending on how you write your code and how you write your tests, you might actually get into a situation where the test files need to be included after the source files. The very last piece of this file, which is quite different from QUnit, is a large section of code which is actually used to launch Jasmine. I suggest that you do yourself a favor and take all of this code and extract it out into a separate file, and then just include that file in each of your SpecRunners. If nothing else, it'll clean up your SpecRunner file. So after you've gotten your source code, the next thing you're going to do in order to run your Jasmine files is to create your own SpecRunner file. And of course it's easiest just to copy the sample included with the Jasmine source code. Once you've got your SpecRunner file created and are including all the third-party libraries that you need and all the source code that you're going to need to test, the next step is to actually create the test files that will test your code. Let's take a look at the simplest possible setup we can have for running Jasmine. I've created here an extremely simple Jasmine setup. I've got the Jasmine source files, I've got a spec file, and I've got a SpecRunner file. I don't actually have any system under test or source code files created, because they're not technically required in order to run a Jasmine test. Let's take a look inside the SpecRunner file. You can see in here I've still got the standard style sheet and Jasmine includes. My source code file section is blank. My spec files section just has that one spec file that I've created. Let's take a look at that spec file. Don't worry about what this code actually does or means -- we'll cover that in depth later. Let's just see how this looks when we run it in the browser.
This is how the browser looks when running that one SpecRunner file. It's extremely simple and doesn't really have too much to it. The main part that you want to pay attention to is that green bar across the center, which tells us that we have one spec and zero failures. So that means that our spec passed. And that's how easy it can be to run Jasmine tests. Next, let's take a look at how to organize our tests. Organizing Tests When organizing our tests, the first thing we should consider is how we organize our folders and our files. Let's say that you've got the following project structure. Inside this project structure we've got a couple of directories for a couple of modules that we're using, and inside each of those modules we've got one source file. When setting up our test code, we're going to want to use something like the following. It's nice to be able to create a single root directory for all of our test code. Within that, a library directory lets us put all the third-party, test-specific code, such as the Jasmine source code and its CSS file, etc. Then we want corresponding directories and files for each of our source code files. So you can see I've got a Module1 and Module2 directory, just like the original Module1 and Module2 for our source code. In addition, I've got a spec file for each of the source code files. Just like QUnit, there are several options for grouping our tests within our test runner files. We can go with the option of just having one test runner file and have all of our specs within that, or we can create multiple test runner files and include different specs for different source code files within each one. I've got another option showing over here where we have three different test runner files. In the first one we have one test file and two source files. In the second one we have two test files and one source file, and in the third, two test files and two source files.
It's best if you choose the organization that makes the most sense to you and to your project. Now that we've got our files and our folders organized, the next thing we want to do is look at ways that we can organize our tests within Jasmine. Test fixtures within Jasmine are organized using a describe block. Describe blocks let us group up sections of code. One of the nice things about describe blocks is that they can be nested. Here's an example of the describe block. It's a function that takes in two parameters. The first parameter is the name of the describe block, or a description of it; the second parameter is a callback function that actually contains the test code. Let's look at another example. In this example here, we've actually got two describe blocks, one nested within the other, to show exactly how we can nest describe blocks. In this one we have a user describe block, and the nested describe block has the "when new" description to indicate that the tests within it correspond to when new users are created. In Behavior Driven Development, we should consider that we are taking all the descriptions of our describe blocks and concatenating them together with the names of our tests. So if we have a test called "should have a blank user name", then when we concatenate all the descriptions together, we get "user when new should have a blank user name". It's nice when we concatenate all of the descriptions together if we end up with sentences, because there are many tools out there that will take all those descriptions and put them together into a form of documentation for you. Writing Tests Writing Jasmine tests is rather easy. Once we have created a describe block, the it function is how we create an actual test within Jasmine. In addition to the it function, which creates an actual test, we also have ways to group common setup and teardown for groups of tests. These are the beforeEach and afterEach functions. In Jasmine, expectations are called matchers.
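The nesting and concatenation idea above can be sketched with a couple of tiny stand-ins for describe and it (real Jasmine does far more; these stubs only record the joined descriptions):

```javascript
const specNames = [];
const path = [];

// Minimal stand-ins: describe tracks nesting, it records the full
// concatenated description of each spec.
function describe(description, fn) {
  path.push(description);
  fn();
  path.pop();
}
function it(description, fn) {
  specNames.push(path.concat(description).join(' '));
}

// The nested example from above.
describe('user', function () {
  describe('when new', function () {
    it('should have a blank user name', function () {});
  });
});
```

The recorded name reads as the sentence "user when new should have a blank user name", which is exactly what documentation-generating tools build from these descriptions.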
Jasmine has a large set of built-in matchers, which we will go over, but sometimes it is beneficial for clarity to create a new matcher. With Jasmine, it's simple to create custom matchers, and we will look at how to do that in a minute. Let's take a look at some samples. The it function is the container for each unique test in Jasmine. It must be nested within a describe function. Let's look at an example from the sample code that is provided with the Jasmine zip file. Here we have a describe block with a description of "Player", and inside of it we have a single test called "should be able to play a song". When we put those two descriptions together we get "Player should be able to play a song", which is a nice description of a feature of our system. You'll see that the it function works just like the describe function. It's a global function that takes two parameters, the first of which is a description of the test, and the second is a callback function that actually contains the code and assertions that we'll need in order for our test to work. The beforeEach and afterEach functions are also very simple to use. They're actually simpler than the it and describe functions, because they don't even take in a description. You can see here I've got a describe for a class called user, and I've created a variable called sut. I've created it outside the beforeEach so that it's in scope when I get down to my test. In my beforeEach function I'm creating the new sut, and in my afterEach function I can put any cleanup code that I need. This way, the it function itself can assume that the sut has been created and initialized. As I said before, Jasmine has a large set of built-in matchers. Let's take a look at them. Each of them works off of the global expect function. The first matcher is the toEqual matcher. The toEqual matcher is a very complex matcher that will check whether or not two objects are equal, or two arrays are equivalent, or other complex structures are equivalent.
The toBe matcher is much simpler. It simply uses the triple equals comparison. The toMatch matcher uses regular expressions. The toBeDefined matcher checks that a value is not undefined. The toBeUndefined matcher compares against undefined. The toBeNull matcher compares only against null. ToBeTruthy compares against any truthy value. ToBeFalsy compares against any falsy value. ToContain is used for finding items in an array. ToBeLessThan is the mathematical less than comparison. ToBeGreaterThan is the mathematical greater than comparison. And the last one is the toThrow comparison, where you pass a callback function and expect it to throw a particular exception. Now, it's important to know that you can negate each of these matchers by adding a not in front of them, which really increases the versatility and the expressiveness of all of these matchers. The next thing we'll look at is how to create custom matchers. Custom matchers are typically created in the beforeEach function. The this.addMatchers function is what we use to create a new matcher. Let's look at a code example. In here, inside of a beforeEach function, I've created a new matcher by calling this.addMatchers and passing in an object with the key toBeFive, with a value of a function that returns a boolean. The result of that boolean determines whether the matcher passes or fails. Inside a matcher, this.actual contains the actual value that you're comparing against, and even though this code example doesn't show it, you can also add parameters that are passed in to your matcher function, which you can then use to compare against. The matcher that we created here will have a very unhelpful error message if it fails, so customizing the error message is very important. Thankfully, that's very easy. In order to customize the message inside of your matcher function, all you have to do is set this.message equal to a function that returns the string that you want to be your error message.
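Putting those two pieces together, a Jasmine 1.x-style custom matcher with a custom message looks roughly like this. The runMatcher helper is my own, added only to mimic how Jasmine supplies this.actual so the sketch is runnable on its own:

```javascript
// The matcher body, as it would be passed to this.addMatchers:
// returns a boolean, and sets this.message for the failure text.
const toBeFive = function () {
  this.message = function () {
    return 'Expected ' + this.actual + ' to be exactly 5';
  };
  return this.actual === 5;
};

// Illustrative harness: Jasmine calls matchers with this.actual set
// to the value that was passed to expect().
function runMatcher(matcher, actual) {
  const ctx = { actual: actual };
  const pass = matcher.apply(ctx);
  return { pass: pass, message: ctx.message ? ctx.message.call(ctx) : '' };
}
```

In real Jasmine code you would register toBeFive inside a beforeEach via this.addMatchers and then write expect(value).toBeFive() in a spec.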
Because of closures, you can use all the values that are available in the actual comparison of the matcher. Now that we've seen how to write tests, let's give it a try. All right. So in order to start writing some tests, let's do a simple example. Here I've got a little calculator class, and I'm going to add a couple of methods to it. First we'll add an add function that takes in two arguments, and we'll just return a plus b. Next we'll add ourselves a little divide function. That'll take in two arguments as well, and we'll just return a divided by b. All right. So now that we've got our class ready, let's go ahead and write some tests for it. I'm going to switch over to the spec, and here I've got this empty describe function for my calculator class. I'll write a first test using the it function. In this test I'm just going to test that we can add two numbers. So I'll say that it should be able to add 1 and 1. And our second parameter, of course, is that callback function. Within that I'm going to set up my expect, and here I'll call our calc. Well, we're going to have to declare our calc first and initialize it, and now we can expect that calc.add, when we put in 1 and 1, will be 2. And close this up. Now we're ready to run this in the browser and see how it works. Just jump over to our browser, run the tests, and we can see that the test is passing. So let's go and try another variation on that. This time we will test the divide function, and we'll test that we can divide 6 by 2. So we'll create our calc again, and then we'll expect that calc.divide of 6 and 2 will be 3. Now, we can see that we've got a problem here, because we've got a little bit of duplicated code in our test. So here's a great excuse to use the beforeEach function. In our beforeEach function we will move this line of code up there, and we just need to take the var and move it outside so that it's visible when it gets to the tests.
And now we can delete this line of code, and let's see how that works in the browser. Okay. Both tests are passing. That's great. So let's try another variation. This time, let's say that we can divide two numbers that will produce a non-integer result, and so we'll choose 1 and 3, and that way we can get a non-integer division going on, and that'll give us another variation in our tests. So we'll divide 1 and 3. And now in order to test this, if I try something like 0.33333, then we're going to get a failure because it's not exactly what the result of the division is. So we can say that we will expect it toBeGreaterThan 0.3, and that passes. But that doesn't really make the test correct, so let's try to pin it down by also testing that it's less than 0.34. And now the test is passing. Unfortunately these two expectations kind of point out the fact that if we had a single expectation that did both parts of that, sort of like a between expectation, that would work out a lot better. So this is a good opportunity for us to write our own custom Matcher. In order to do that we just go up into our beforeEach function. We call this.addMatchers, and we'll call the new Matcher toBeBetween, and that's going to take in two parameters, the low value and the high value. And that's going to return that this.actual is greater than or equal to the low value and that this.actual is less than or equal to the high value. Close that up. And now we can go down into our tests. We'll change this expectation to use our new Matcher ( Pause ) and then we can remove that second expectation, run it in the browser, and we can see that all of our tests are still passing. So here you can see that writing tests and writing our own custom Matchers is actually quite easy to do. Running Tests Running Jasmine tests is a lot like running QUnit tests. As we discussed before, they run inside a browser. 
Also by default, Jasmine doesn't show us passed tests, so in the Jasmine UI there's a check box that we can check that'll allow us to see what tests have actually passed rather than just the failing ones. Also by default Jasmine does not display skipped tests, so there's a check box for that as well. Also Jasmine has some filter options so that we can run just one test at a time or run one describe block at a time. These filtering functions are implemented rather similarly to the way QUnit allows us to filter tests. Jasmine has a great little feature that makes it easy to skip tests. All you have to do is add an x in front of either the describe function or the it function. So let's look at an example of each of those. In this first one we've changed our it function to an xit function, and that will skip that test. In this next example we've added an x in front of our describe function and changed it to an xdescribe function, and now all the tests within that describe block are going to get skipped. Now that we've learned about running tests, let's actually try some of what we've learned. So I've taken the calculator that we built in the last exercise and I've modified it just a little bit. Now I've got three passing tests, but this test right here is going to fail. So let's run that in a browser. Okay. So we can see that we've got four tests and one of them is failing, and you can also see that by default only the failing test is being shown. So let's go over here and click the passed check box, and now our three passing tests are visible. Now if we come over here and click this skipped check box, it would normally show us any skipped tests, but with the current build of Jasmine, skipping a test with the x actually hides it completely from Jasmine, so this check box isn't really doing anything. Another thing we can do is just run a single test. We do that by clicking on the name of the test. So if we want to run just our failing test, we can click on that. 
Now you can see it's only running one spec, and that spec is failing. We can also run all of the tests in the describe block by clicking on the describe label, and that again is going back to running all four of our tests with one of them being a failure. So now let's look at how we ignore tests. We'll go back into our code and we will change our failing test to an xit. Save that, and we'll go back to the browser and refresh. And now we can see it's only running three specs and none of them are failing. So it's skipping our failing test. All right. Let's go back to our code and move the x all the way up to the describe and rerun our tests, and now it's not running any tests, because we're skipping everything in our describe, which is our only describe within the entire test suite. So as you can see, there are a few options for running Jasmine tests and for filtering them. ( Pause ) Integrating with the DOM Integrating Jasmine with the DOM is a lot like integrating QUnit with the DOM. Anytime you test the DOM, you've really got to consider whether or not your test is appropriate. We cover this concept in depth in the QUnit module, so if you didn't watch that module, now might be a good time to go back and watch just the portion of that module about testing the DOM with QUnit, specifically the part about when and why to test the DOM and when and why not to test the DOM. Once you've decided that you really do want to write those kinds of tests, then you're going to have to write some additional setup and teardown. QUnit had one little facility that made this a little bit easier. Jasmine does not ship with this, so it's something you have to do by hand, and we'll go ahead and show you an example of how we can handle this manual setup and teardown so that we can manipulate the DOM. So let's go ahead and see what this looks like in code. We're going to take the calculator that we used in our last section. We're going to modify it just a little bit. 
In this version of the calculator, let's take in an element, and we're going to hold on to that, and then we'll come down in here in our add function, and instead of returning the result we're going to modify that element and set its html property to the result of the operation. Let's do the same thing for our divide function as well. ( Pause ) All right. Now that we've got that, we're going to go look at our spec. So the first thing we need to do is track the element that we're going to manipulate. So let's create a little variable that will hold the element's ID, and we'll call our element calc fixture. And now that we know the ID, let's go ahead and get a handle on that and pass it into our calculator constructor. Okay. So now our calculator is getting the element. Unfortunately when we run our test this element is not going to exist. So let's switch over to our SpecRunner. We'll go down to the bottom, and inside the body we're going to create a new DIV. ( Pause ) So here we've got our element that we can manipulate. Back in our spec, let's test the addition function using that element. So we'll call calc.add and then we test that that element has the text that we expect. So. ( Pause ) All right. Let's go ahead and run that and see how it looks. So we run this in Chrome and we can see that we are passing our test. But let's go back to our code and discuss a problem that we've got. In our test we're testing just this one DIV down here at the bottom. We're manipulating this. Since we're just changing the inner content it's not really that big of a deal, but what if we were to be adding some events, some click handlers, all that sort of stuff? We'd have to clean up after ourselves so that any further tests weren't polluted. We could do that by going in to our tests and creating an afterEach cleanup, but each different test might actually change the DOM in different ways. So that can get out of hand pretty quickly, trying to clean up how each different test manipulates the DOM. 
So a much better way is for us to create sort of a template, and then on every test, we will take that template, copy it, insert it into the DOM, and then we know that we've got a fresh, pristine DOM element that looks the way that we want on every test. So let's look at how to implement that. The first thing I want to do is go back in the SpecRunner, and I'm going to wrap this DOM element that I want inside of a different DOM element. ( Pause ) And then I'm going to change the ID on my template by adding a -template suffix; that way, when I manipulate and add in a new copy of this, I won't have a duplicate ID. Okay. So now we've got this template. Let's go back into our spec, and in the beforeEach we'll go in and we will manipulate the DOM. So grab the body. We're going to append into that the contents of that template. ( Pause ) Grab the html, and let's replace the html and remove that -template suffix. ( Pause ) So now that we've done that, it should create a new DOM element. Let's just check and see if our test is still passing. Okay. So we're still passing, but now we've got the problem that we're adding in a DOM element. Every time we run the test it's going to add in another DOM element, or if we have two or three tests we're going to keep adding in DOM elements. Duplicated ones. So we need to clean up after ourselves. We do that by going into our afterEach, and in here we just grab that DOM element ( Pause ) and we call remove. ( Pause ) Let's go ahead and check that out in the browser. And now we've got a pattern whereby we can do tests on the DOM and manipulate them and then clean up after ourselves and refresh the DOM for every test. And it wouldn't be very difficult to add in different templates in there so that different tests can have different template html. After a while this will get a little unruly. So again, consider carefully when you do and do not want to be testing the DOM, but this is a good method for accomplishing just that. 
Integrating with CI Integrating Jasmine with CI is very similar to integrating QUnit with CI. A lot of the same considerations apply when using Jasmine as when using QUnit with CI, so I again recommend that if you haven't watched the QUnit module, you go back and review the portion on integrating QUnit with continuous integration. Let's go ahead and start looking at how we integrate Jasmine with continuous integration servers. The basic approach to integrating with CI is to use PhantomJS and capture the output. You can get PhantomJS at the URL listed here, or you can Google for it. You're going to need a couple more pieces before you're finished. The first one is the Jasmine.TeamcityReporter. Remember, Jasmine uses reporters to output its results, and the TeamCity Reporter outputs the results of tests in a format that TeamCity will work with. The other piece that you're going to need is the run-jasmine.js file, which comes with the PhantomJS distribution. So you just need to grab that and copy it into your distribution directory. Let's go ahead and take a look at the code. So here are the files that I'm going to need. I put them all in a single directory for convenience's sake, but in a production system you'll almost definitely have some of these files in their own directories. I've still got my calculator and my calculator spec. I've simplified that calculator spec down to just a few tests that all pass. Then I've got my Jasmine JS and Jasmine CSS, and then you can see the Jasmine.TeamcityReporter, which I mentioned earlier. I still have the Jasmine html in there. That makes it easy for me to run the SpecRunner both for TeamCity and for when I'm building and running my tests. Then I've got the PhantomJS executable and that run-jasmine.js file, which I downloaded from GitHub. And of course, the last thing is my SpecRunner. This is the only file I had to actually modify in order to get Jasmine running under CI, so let's take a look at that. 
I had to make a couple of changes in order to get Jasmine running under CI. The first thing I had to do is include the Jasmine.TeamcityReporter as a script in my SpecRunner file. The second thing I had to do is go down here where the reporters are actually declared, create a new Jasmine.TeamcityReporter, and then add the reporter to the Jasmine environment. Now that that reporter is there, TeamCity will be able to parse the results of running tests. So now let's take a look at how we actually configure TeamCity to run our Jasmine unit tests. You can see I've got a build configuration that I've created named Jasmine CI. I've gone in and added a command line build step. I named that step run JS tests. And it's an executable with parameters. I'm going to need to set my working directory, so I'll set that to where I've put the files. And then of course the executable that I'm going to run is PhantomJS. And the parameters are going to be that run-jasmine.js file and then the name of my SpecRunner file, which is SpecRunner.html. All right. Now I'm going to save that. And now that I've got that going, I can go back to my projects and I'm going to run my Jasmine CI configuration. And you can see that all three tests have passed. Now let's go back into the source code, modify our spec, and make one of the tests fail, so we can see what it looks like when one of our tests is failing. Change that to be a 3. Come back. Run our configuration again, and now our tests have failed. We can see that we've got one test failing and two passing. Let's go back in and fix it. Run it again. And we are back to green. So you can see that the steps to get Jasmine to run under continuous integration are pretty straightforward and rather simple. 
( Pause ) Asynchronous Tests JavaScript, as we all know, is asynchronous by nature, so avoiding testing asynchronous code is something you're very unlikely to be able to do. Because of that, it's really a good idea to learn how to test asynchronous code and what methods are available in the different libraries for testing your asynchronous code. There are essentially three different kinds of asynchronous code that we can test: setTimeout and setInterval, UI effects such as jQuery's fadeIn and fadeOut, and finally ajax calls. Now as I said in the QUnit module, ajax calls are the kind of thing that you really should avoid testing if at all possible. It's much better to write your own abstraction over your ajax calls and then mock that out, which is something that we will cover in a later module. As for UI effects, testing them really means that you're testing the DOM, and so if you are going to test the DOM, again, you want to be very careful about how and when you test the DOM, so that you're not writing tests that are too brittle and there's still a high value in the tests that you write. Now Jasmine gives us essentially two techniques for testing asynchronous code. The first one is runs and waitsFor. This method allows us to test not only setTimeout and setInterval but also UI effects and ajax calls. The second method is the Clock.useMock, which is only useful for testing setTimeout and setInterval. With these asynchronous tests it's so much easier to understand what we're talking about by actually looking at examples. So let's go ahead and dig right into the code. All right. We're going to take our calculator and we're going to modify it quite a bit. I want to add a couple of methods to the calculator. The first function I want to add is going to have a visual effect on it, so we can show what it's like to test UI effects. So we're going to call this visual effect method hideResult. 
And this hideResult method will just assume that we've got a DOM element that contains our result, and we want to at some point hide it. So that's what this method is going to represent. Now, rather than just doing a regular old hide, in order to show what it's like to actually test UI effects that take some time, we're going to have it do a fade out. So let's assume that the calculator has a reference to the element contained in this.$el, and we'll call fadeOut on that, and we'll have it take a second, and you'll notice I put in a callback passed to this function. So we're going to call that callback when fadeOut is finished. Now we don't have a this.$el right now, so let's go ahead and add that. In the constructor we'll pass in the element, and then we're going to set this.$el equal to it and wrap it in a jQuery object. Even if it isn't already a jQuery object, that's okay. We're just going to make sure that it actually is wrapped in jQuery so that we can call fadeOut on it down here. So let's go ahead and go over to our tests now, and we're going to want a new test. So we will describe that we're testing some visual effects, so we'll call this FX tests. And we'll create a variable to hold that element that's going to hold the result, and now I need to create it. So I've already gone into the TestRunner html file and I've created a DIV with the ID of container. So let's create the element that's going to have the result in it that we're going to hide. We'll do that in our beforeEach, and we will say el equals a brand new DIV. Put some content in it. And coming down here, we'll grab our container that I made. And we'll just append that element into it. And we can't forget we need to new up our calculator. And close this off. And our beforeEach is done. 
Now of course, whenever we create DOM elements we have to clean up after ourselves and remove them at the end of each test, otherwise we can have our test environment corrupted. So in our afterEach, we will take that element and remove it. All right. That's done. Now we write our actual test. So in this one we're going to say that it should work with a visual effect, and in here, we actually need to write the code that's going to test that our visual effect works. Of course the first thing we're going to do is actually call calc.hideResult. All right. Now remember that takes a callback, so we need to define that callback. Let's define that callback here. And we'll just leave it empty because it doesn't do anything. And we'll pass the callback in. And then the last thing we need to do is run our assertions, and so we're going to expect that our element's got the CSS property of display set to none. Now that will happen after we do our fade-out, so let's close off our describe here. And we also need to go up here and close this off properly. Okay. Now if we were to run this test right now it would actually fail, because the hideResult method calls that fade-out function, and that takes one second. And so the element's display value is not going to be set to none until after that fade-out is finished, but as soon as it starts the fade-out, the test is going to continue on and call the expect method long before the fade-out is finished. And we can verify that by running our test. And you can see that we're failing now, because display is still set to block when we actually run our assert. So let's see how we address that with Jasmine. With Jasmine the way to handle this is to use the runs and waitsFor methods. So the first thing we want to do is take our actual action step and put it inside of a runs method. So call runs like this. ( Pause ) Okay. Then we need a waitsFor. 
And what waitsFor is going to do is cause the program to pause at this point until some boolean value has become true, which it is going to basically check continually through polling. So we call waitsFor, and in this case typically what you do is return a flag; you set the flag by default to false and then eventually set it to true when things are working correctly. Now this waitsFor method takes a couple of other arguments. The second one is an error message in case it times out. And we'll see how that gets plugged in in a minute. And the third parameter is a timeout value. And since our fade-out is taking a second, we'll give it just a little bit longer, give it 1100 milliseconds. Then the next thing we need to do is wrap our expect in a runs function. ( Pause ) And so now what we've got is we're running our hideResult, then we're waiting for this to become true, and then we finally run our asserts. So let's plug in how we're going to handle this flag. We'll start by defining the flag, and we initialize it to false. And then, since what we need to do is have that flag become true when the fade out is finished, we can just go right into our callback and set flag equal to true, and doing this is going to actually make our test pass. So we'll go back to our test, and we'll run it, and notice when I ran that test it took that whole second in order to wait for that flag value to be true. This is kind of an important point, because if you put a lot of UI tests into your code that are doing a lot of UI effects that take some time, your tests can actually get pretty slow, which is something that we don't want. And there are a couple of ways to make that better. Unfortunately they don't actually work with UI effects, but they will work with setTimeout and setInterval. So let's look at how to deal with setTimeout and setInterval next. So let's go back to our tests, and we'll go over to our calculator and we'll add our second method. 
This is going to be our final method. And this method is going to utilize setTimeout so that we can show what it's like to test that. So let's assume that before we actually want to hide our result, we need to make a pause. That way the user will see the result before it gets hidden. So we want to call this pauseBeforeHiding. It's a function again taking a callback, and this time we'll assume that the callback passed to pauseBeforeHiding is actually the hideResult. So it's going to call that after a short pause. We'll call setTimeout, and we'll call our callback, and this time we'll wait two seconds. Close that up. All right. Now since we're going to pass the call to hideResult into pauseBeforeHiding, that makes it convenient for us, because we can use that callback to determine whether or not pauseBeforeHiding is waiting the appropriate amount of time. So let's go and write a test for that. We're going to create a brand new describe. I'm going to call this pauseBeforeHiding. ( Pause ) And it's going to have a single test. It should call my callback function after two seconds. ( Pause ) All right. In here we're going to create our calculator. We're going to create our flag like before. And a callback; inside there, we'll set flag equal to true. And then a runs function. Inside there we'll call calc.pauseBeforeHiding, which is going to get the callback. And we're going to expect right here that the flag is false. This way we can assure that the callback hasn't been called quite yet. And we'll close that off. Now our waitsFor method. ( Pause ) That's going to return the flag, of course. And then our timeout message is going to say pauseBeforeHiding to return, and we're going to wait 2100 milliseconds on this one. And our final runs, which is going to have our expect in it. We're going to expect that flag is truthy, which indicates that our callback was called. ( Pause ) All right. Let's see how this runs in the browser. 
( Pause ) All right, both of our tests are passing, but you can see now it's actually taking 3 seconds to run both tests, which is starting to be a pretty noticeable amount of time. Imagine if we had hundreds of tests like this. So what could we do to possibly speed that up? Well, Jasmine actually provides us with one mechanism for testing setTimeout and setInterval that will let us dramatically speed up our tests. And let's look at that now. So we're going to go down here and we're going to create one more describe. I'm going to call this mock clock. I'll explain what that means. Jasmine has a feature that allows you to hijack setTimeout and setInterval and basically just determine how quickly they run, and fast-forward them to points in time that you want them to be at. So in order to do that, all we do is add another beforeEach. We call jasmine.Clock.useMock. All right. And that's turned on the magic. So now let's write our test again. It should call my callback function after two seconds, and we'll note in the name that this one uses the mock clock. All right. Creating our calculator again. And this time we still need a boolean to indicate that our callback's been called, but we can use something more expressive. Let's call it callbackCalled, set that equal to false, and we'll create our callback. Inside there we'll set callbackCalled to true. Now we can call calc.pauseBeforeHiding. That's going to get the callback. And this is where the real magic comes into play. What we can actually do is call jasmine.Clock.tick, and at that point we can fast forward the clock to wherever we want. We know that the timeout is 2000 milliseconds, so let's say 2001. And at that point it's actually going to have fired the setTimeout callback, and so then we can call our expectation toBeTruthy and test that our callback actually got called. So let's go back to our browser. Let's close that off. Close off our describe and go run that in the browser. Okay. Now everything's passing. 
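The mock clock's trick of hijacking setTimeout and fast-forwarding it can be sketched in plain JavaScript. This MockClock is a hypothetical stand-in for jasmine.Clock, written only to show the mechanism: queued timeouts do not fire on real time at all; they fire only when tick() advances the simulated clock past their due time.

```javascript
// A tiny mock clock: queued timeouts fire only when tick() advances time.
function MockClock() {
  this.now = 0;
  this.timeouts = [];
}

// Stand-in for a hijacked setTimeout: record the callback and its due time.
MockClock.prototype.setTimeout = function (fn, delay) {
  this.timeouts.push({ at: this.now + delay, fn: fn });
};

// Fast-forward the clock by ms and fire anything that came due.
MockClock.prototype.tick = function (ms) {
  this.now += ms;
  var due = this.timeouts.filter(function (t) { return t.at <= this.now; }, this);
  this.timeouts = this.timeouts.filter(function (t) { return t.at > this.now; }, this);
  due.forEach(function (t) { t.fn(); });
};

var clock = new MockClock();
var callbackCalled = false;
clock.setTimeout(function () { callbackCalled = true; }, 2000);

clock.tick(1990);
var beforeTimeout = callbackCalled; // still false: 1990 < 2000
clock.tick(100);                    // now at 2090, past the 2000 ms timeout
var afterTimeout = callbackCalled;  // true: the callback has fired
```

The real jasmine.Clock.useMock does essentially this, but it swaps the global setTimeout and setInterval out from under the code being tested, so the code under test needs no changes.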
Let's run just the mock clock test. Okay. See how that's instantly returning? It's not actually waiting for that setTimeout to fire; it's fast-forwarding it for us. And this gives us an extra ability. We can come back in here and verify that the callback didn't get called just before it was supposed to. So instead of having it tick 2001 milliseconds, let's have it tick 1990 milliseconds, and let's test that callbackCalled is still false after those 1990 milliseconds. And then we'll add in one more tick, and this time we'll just tick it forward 100 milliseconds, and at that point we should be safe and it should have been called. So let's run that and see how that goes, and that is still passing. So that's another cool feature that Jasmine has, the ability to hijack setTimeout and setInterval and run them forward to whatever point we want them to be. Jasmine.async In the last section, we looked at how to test asynchronous code with Jasmine. As we saw in that last section, the Clock.useMock method is actually quite an elegant way to test code that uses setTimeout or setInterval. The resulting tests are really rather readable and a good way to test that code. Unfortunately the same is not true of the runs and waitsFor methods. As you look at that code it's quite obvious that you're hiding a lot of what the test does inside of those runs and waitsFor methods and really obscuring the intent of the test, making it a lot less readable and therefore a lot less maintainable. So in this section I'd like to show you an additional way to test that code that uses an add-on to Jasmine created by a community member named Derick Bailey. This add-on is called Jasmine.async. This is the URL for that add-on. Now the goal of this add-on was to make writing asynchronous tests just a little bit cleaner with Jasmine. And I think you'll agree that using this add-on to replace runs and waitsFor actually makes your code quite a bit more readable. 
So let's go ahead and code up an example. Here I've got the calculator that we were using before. I've simplified it just a little bit and left just the hideResult method in there. Everything else is pretty much the same. And in our calculator spec, I've basically left the same exact beforeEach and afterEach that we had before, but I've removed those other tests. Now let's go ahead and see what a test looks like using Jasmine.async. The first thing that's a little bit different is that when writing an asynchronous test with Jasmine.async, we're going to need to go up here at the very top and create an async variable. ( Pause ) We're creating an instance of the AsyncSpec class, which is the main part of the Jasmine.async library. Now when I want to actually write a test, I'm going to call async.it when I write my test. Other than that the test works exactly the same. So here we're going to say it should make the result invisible. Now at this point is where we get our second main difference between Jasmine.async and the runs and waitsFor methods. We're actually going to include an argument called done. That argument is a function that we call in order to tell Jasmine.async that our test is complete and it can go ahead and run the next test. Now the code that we're going to be testing is calc.hideResult. So the first thing we'll do is put in our call to hideResult. Remember we needed to pass in a callback, so let's go ahead and define that up here. ( Pause ) Now as you remember, our callback is the last thing that gets called, after the fadeOut is complete and the element has been made invisible. So that's the correct point for us to call that done method and let Jasmine.async know that we're done. Now the other thing that's missing is our expectation that verifies that the element is now invisible. 
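Putting the pieces together, the done-callback pattern looks roughly like this. The asyncIt and hideResult functions here are toy stand-ins (the real ones come from Jasmine.async and the calculator under test, and the real hide completes asynchronously); the sketch only shows the shape of the pattern: the spec receives done, and the assertion and the done call both live inside the completion callback.

```javascript
// Toy stand-in for Jasmine.async's async.it: the spec function
// receives a `done` callback, and the runner considers the spec
// finished only once `done` has been called.
var specFinished = false;
function asyncIt(name, specFn) {
  specFn(function done() { specFinished = true; });
}

// Fake "hide" that invokes its callback when the work completes;
// it completes synchronously here so the sketch stays self-contained.
function hideResult(callback) {
  callback();
}

var sawCallback = false;
asyncIt('should make the result invisible', function (done) {
  hideResult(function () {
    sawCallback = true; // the expectations would go here, then...
    done();             // ...we tell the runner the spec is complete
  });
});
```

In the real spec the callback body would hold the expect that checks the element's display CSS property, exactly as the transcript goes on to describe.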
We can't put that expectation down after the call to hideResult, because it's not finished fading out at that point, and so the expectation would fail. But when this callback gets called, it actually has finished fading out, so we can go here and put our expectation right in here. ( Pause ) Now it is a little bit funny to be putting our asserts above the actual call to the code that we're testing, but when writing asynchronous code we sometimes have to make some concessions. So let's go ahead and run this in the browser and see how it works. Okay. Our test has passed. Now it's still taking the time to let the element fade out, so with too many of these tests your test suite will take way too long to run, but the code that we wrote is a lot more readable. Let's go back and compare that code side by side with the test that uses runs and waitsFor. Here's those two methods. Now you can see just glancing at this code that not only is the Jasmine.async code a lot shorter, but it's definitely a lot more readable and therefore a lot more maintainable. So when writing asynchronous tests, if you're using setTimeout or setInterval inside your code, use Clock.useMock in order to test that code, since that'll let you avoid waiting the actual timeout intervals that are in your code. But if your code doesn't use setTimeout or setInterval, then using Jasmine.async is definitely better than using the runs and waitsFor methods. Summary In this module we have taken a look at the Jasmine unit-testing library. Jasmine is a great library and is probably the most popular client-side unit-testing library for JavaScript. There are a lot of reasons for that, and certainly one of them is that Jasmine is well tested and well used. Jasmine is a BDD style unit-testing library. Jasmine has a very versatile test runner that allows you to easily skip tests and filter tests. 
Jasmine also lets you create custom Matchers, which is a really great feature for making your tests as expressive and readable as possible. Jasmine includes its own spies for mocking your components. Jasmine also integrates really well with the DOM and allows you to test code that integrates with the DOM very easily. Also, Jasmine supports asynchronous tests, although that support is much easier to use if you include the Jasmine.async library in your tests. Jasmine is a great unit-testing library, and it's not just because of its popularity that you should really seriously consider using it for your own client-side unit testing needs. Mocha Introduction Hello. This is Pluralsight's course on testing client-side JavaScript. In this module we will be discussing the mocha unit testing framework. This is our third and final module on unit testing frameworks. We're going to start with an introduction to mocha. Then we're going to talk about installing and setting it up. Then we're going to discuss assertion libraries, which are third-party libraries that let you choose from a variety of different styles of assertions in your tests. After that, we'll cover writing and running tests in mocha, and then how mocha handles asynchronous tests. And we'll finish up with a discussion on using mocha with a continuous integration server. Mocha is a very interesting library. It is possibly the most versatile JavaScript testing framework. It is open source and hosted on GitHub. It was primarily developed for node, but has gained widespread adoption for browser testing. Mocha supports both client-side and server-side testing, and it supports both BDD and TDD style tests. It can be run both from the browser and on the command line, although it requires node if you wish to run it on the command line. Mocha supports any assertion library. We'll discuss exactly what that means in the assertion library section of this module. Mocha also supports async testing right out of the box. 
The mocha implementation of asynchronous testing is what inspired the async Jasmine library which we saw in the last module. So, as you can see, mocha's really quite versatile and supports a lot of different styles of testing. Setting up Mocha There are two ways to install mocha. The first way is to simply get the source code. The source code is available at this URL. This method will only help you when you intend to run mocha inside the browser. This won't allow you to run mocha on the command line. The second way to install mocha is to install it using Node and NPM. Here we see the NPM command for installing mocha. The -g parameter installs it globally, which is the typical way to install something like this, although you shouldn't do a global install if you want to put your testing framework under source control. In that case, omit the -g. Downloading mocha from GitHub will let you run it in a browser, and installing mocha through NPM will let you run it through the command line. But if you want to be able to do both, then you'll either have to install it both ways, or install using NPM and then go and find the install directory and copy the relevant files out and into your project directory. Setting up mocha for the browser is pretty easy. This is pretty much the html file you'll want to create in order to run mocha from within a browser. You can see that the first step is to include the mocha CSS. Now pay close attention to the rest of the file. The order of the items is important. The first thing we have is this div with the ID of mocha. After that we include the mocha library itself. Then we call mocha.setup, which lets us decide whether we're going to do BDD or TDD style testing. After that we can include all the test files that we want to have tested. And the very last step is to make a call to mocha.run. 
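Putting that ordering together, a minimal runner file might look something like the following. The file names and the choice of the bdd interface here are illustrative placeholders, not fixed requirements:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- 1. The mocha CSS -->
  <link rel="stylesheet" href="mocha.css" />
</head>
<body>
  <!-- 2. The div mocha renders its results into -->
  <div id="mocha"></div>
  <!-- 3. The mocha library itself -->
  <script src="mocha.js"></script>
  <!-- 4. Choose the testing interface -->
  <script>mocha.setup('bdd');</script>
  <!-- 5. Your test files (names are placeholders) -->
  <script src="tests.js"></script>
  <!-- 6. Finally, kick off the run -->
  <script>mocha.run();</script>
</body>
</html>
```

Other libraries you depend on, such as jquery or an assertion library, would be added to this skeleton as well.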
Now there are likely to be other libraries that you're going to want to include inside of your test suite, such as jquery or something else, and you can include those pretty much anywhere that it's appropriate. But the order of these components relative to each other is something you'll need to follow in order to make mocha work. If you installed mocha using Node there's an easier way to set up an html file for testing. You simply type mocha init followed by the path. At that point mocha will create inside that path an html file that looks just like the one that we just saw. It will copy in both the mocha JS file and the mocha CSS file. Setting up mocha for the command line is a little bit different. First of all, you need to decide where you'll place your files. By convention mocha expects you to have your tests in a directory called test, which is just off the root. You don't have to follow this convention, but if you don't you'll have to give an additional parameter to the mocha command line. Now there are a couple things to note about running mocha from the command line. You can't have your mocha JS file in the same directory that you're going to call mocha from, and you also can't have it in the same directory as your tests. So if you're going to run mocha from both the command line and the browser, be careful where you place your mocha JS file. Now that we've seen how to install mocha for both use in the browser and on the command line, we won't be discussing mocha on the command line anymore. When writing client-side tests, other than using the mocha init command we saw earlier, there are really few reasons to run mocha on the command line, and there are several problems with it. Not the least of which is that instead of running your browser-hosted version of JavaScript you're now running your tests in Node, so the validity of your tests is now in question. So for the rest of this module we will only be discussing mocha in the browser. 
Assertion Libraries Assertion libraries are JavaScript libraries that allow you to choose your own assertion syntax. They are add-on libraries that you can use with just about any unit testing framework. The reason that we're discussing assertion libraries when discussing mocha is that mocha doesn't come with its own assertion library. So you are forced to choose one. But that doesn't mean that you can only use them with mocha. You can use assertion libraries with just about any unit testing framework. So let's take a quick look at using a different assertion library with Jasmine. You can see I've got a simple spec file here, and in the spec runner html file, which is not shown, I have included a reference to the popular Chai assertion library. Chai creates a global chai variable. That object has an assert object, and you can see that I'm calling the equal method on the assert object. In this case I'm comparing 3 and 4, and then that last parameter is an error message. I've started off by making this fail so that you can see that the assertion library will truly cause my tests to pass or fail. So here we can see that the assertion library has caused this test to fail. Let's go and correct it and run it again. And now our test is passing. So there's a simple example of using the Chai assertion library with Jasmine. There are several popular assertion libraries. I've listed six of them here, but this is by no means an exhaustive list. These are simply some of the more popular ones that exist. The last one that's listed, Chai.js, is the one that we will be focusing on the most. It is a very common library and it is quite versatile. Since we're going to be using Chai, let's take a quick look at it. Chai supports three different assertion syntaxes. Although this is a bit unusual, this flexibility is actually a huge strength of Chai. The first syntax it supports is the assert syntax. 
This is a bit similar to server-side xUnit testing framework assertions. If you want to use the assert syntax you should put the following line of code inside your spec runner file. This will give you access to a global assert object that you can use. Next is the expect syntax. This is a little bit like Jasmine's assertion syntax, and it has a corresponding line of code to create a global expect object. And lastly there's the should syntax. This is another BDD style syntax like the expect syntax, although this one extends Object.prototype. Also it has some issues with IE, so be careful when using it. In order to use should you use this line of code in your spec runner file. You notice that the should syntax is a little bit different. You actually have to execute the function. That's because, as I mentioned, it extends Object.prototype. Going in depth into all three of these syntaxes is beyond the scope of this course. We will be using the expect syntax. So that's the only one we care about. If you're interested in learning the other two syntaxes you can go to chaijs.com. The Chai.js expect syntax is fairly straightforward. After you put in the line of code from the previous slide, you can call the global expect method and pass in the actual object, and then there are a set of chained properties and methods that create a readable sentence of sorts. The first example I've shown, with to.equal, is by far the most common assertion and the one that we'll be using the most. Writing & Running Tests Writing tests in mocha is pretty comparable to QUnit or Jasmine. In fact, one of the great things about mocha is the fact that it can truly be like both QUnit and Jasmine. Mocha supports both TDD and BDD style tests. So you can write tests in a manner similar to how QUnit works with the TDD style tests, or you can write them in a style similar to Jasmine with BDD style tests. 
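To give a feel for how that chained "readable sentence" style works, here is a toy implementation of an expect function that supports to.equal. This is purely illustrative and is not Chai's actual code:

```javascript
// Toy version of the chained expect syntax -- NOT Chai's implementation,
// just an illustration of how the chaining reads like a sentence.
function expect(actual) {
  return {
    to: {
      equal: function (expected, message) {
        if (actual !== expected) {
          throw new Error(message || actual + ' is not equal to ' + expected);
        }
        return true;
      }
    }
  };
}

// The chained properties and methods read like a sentence:
expect(2 + 2).to.equal(4); // passes silently

try {
  expect(3).to.equal(4, 'numbers differ');
} catch (e) {
  console.log(e.message); // "numbers differ"
}
```

The real Chai library builds a much richer chain (to.be.a, to.have.length, and so on), but the underlying idea is the same: each link in the chain either returns more chainable assertions or performs a check.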
Now it's important to note that TDD and BDD style tests in mocha really have very little to do with test driven development and behavior driven development. Mostly it's just a name for how your tests look. You can actually write both TDD and BDD style tests with mocha, but not actually be practicing test driven development or behavior driven development. So let's take a look at both of those testing interfaces. We'll look first at how TDD style tests look in mocha. These tests will look really similar to how QUnit looks. The first thing we need to do is come into our spec runner file, and in the mocha.setup call we need to pass in the string 'tdd'. This tells mocha that we are intending to do TDD style testing. Then we come in to our test file and we can begin writing our tests. Now, much like QUnit, we start off with a suite function and we can give this a name. And then you pass in a callback. And in the case of TDD style tests, which is a little different than QUnit, we actually wrap our entire set of tests within the suite function. And then we can write our first test by calling the test function, giving it a name, and then passing in a callback. And then in that callback we can write our actual test. Now we can run this in our browser and see how it looks. And see here we've got our first test passing. Now let's go ahead and look at some of the other options that we've got with the TDD style testing. Just like QUnit we've got the option to write our own setup and teardown functions. And just like QUnit they're simple. They just take in a single callback. And let's write a teardown. And here I'll just log out again. Now mocha offers us a couple of additional options. First it's got a before function, and this function will actually get called before any other test gets run, and it only gets called once, unlike setup which gets called before every test. And it's got a similar function called after. So let's go ahead and put a log in our test. 
And let's see this in action. Let me refresh the page and you can see that we're calling our before first, then setup, then test, then teardown, then after. Now this interface is a little bit different than QUnit. If you actually want your tests to really look like QUnit, there's another switch you can give it instead of 'tdd'. You actually give it the string 'qunit'. And then for that to work you change your suites to be just like they were in QUnit. So instead of wrapping them, you actually just have the suite call beforehand and it's just a function. In the QUnit style, setup and teardown aren't supported. Instead it's beforeEach and afterEach. I can run that and we see that things are still exactly the same, only now our tests look exactly like QUnit. So if you are migrating from QUnit this is an easy way to make that happen. Now let's switch back to the TDD style interface, and I'll show you one more feature. We're going to change these back to setup and teardown. And make this back into a callback. And one of the things we can actually do is we can nest our suites. We're going to create an inner suite. I'll give that a callback. And then in here I'll just create another test. That gets its own callback. And this one is just going to have another simple expect. All right. Let's go ahead and run that in the browser and see how that looks. Okay. So now you can see that we're actually nesting our suites inside of each other. That's a nice feature of the TDD style tests you don't get with the QUnit style. And in the QUnit style -- if we change this over to QUnit style -- we have to go back and change our suites. And change my setup and teardown to a beforeEach and afterEach. And refresh my browser and you can see that they've changed. Now they're all on the same level. So now let's look at the BDD style tests. One of the things that has changed is instead of a suite we use describe -- again, these are going to be a lot like Jasmine. 
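Before moving on to the BDD interface, the hook ordering we just observed -- before once, then setup and teardown around each test, then after once -- can be captured in a standalone sketch. This is a tiny stand-in runner for illustration only, not mocha's implementation:

```javascript
// Minimal stand-in runner (NOT mocha itself) demonstrating the hook
// ordering we just saw in the browser: before runs once, setup/teardown
// wrap every test, and after runs once at the end.
function runSuite(suite) {
  var log = [];
  if (suite.before) { suite.before(); log.push('before'); }
  suite.tests.forEach(function (test) {
    if (suite.setup) { suite.setup(); log.push('setup'); }
    test();
    log.push('test');
    if (suite.teardown) { suite.teardown(); log.push('teardown'); }
  });
  if (suite.after) { suite.after(); log.push('after'); }
  return log;
}

var order = runSuite({
  before:   function () {},
  setup:    function () {},
  teardown: function () {},
  after:    function () {},
  tests: [function () {}]
});

console.log(order.join(' -> '));
// before -> setup -> test -> teardown -> after
```

With two tests in the array, you would see setup/test/teardown repeated twice between the single before and after, which matches what mocha logs in the browser demo.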
And we are going to wrap everything in a function and a callback. And instead of test we have the it function. Change that. Change that to describe. beforeEach and afterEach, and before and after, are going to be the same in BDD style tests as they were in the TDD style tests. I'm going to change this other test to an it. Now of course we need to go over and change our style to BDD. Let's run that in the browser. So if you look at the output you can see we're still getting the same result. Before and after are called only once, at the very beginning and end. Setup and teardown are called before and after each test. And just like TDD style we can nest our suites, or describe blocks, within each other. So that's how to write tests in the TDD and its related QUnit style, and the BDD style interface of tests inside of mocha. So now that we know how to write tests in mocha, let's look at some of the other features that it supports. Mocha will allow you to filter your tests just like we saw in QUnit and Jasmine. And it's very similar to them. Let's go back to our same suite, and I'm just going to come in and click on the outer suite. Now you can see what it's done is up in the top it's added a grep URL parameter. Now this didn't actually do anything different because it still includes all of my tests. So I'll click on this inner suite, and now you can see that we're being filtered and only the tests within the inner suite are being run. Now in addition to actually clicking on these, that URL parameter is hackable, so I can go in here and I can type in whatever I want. I'll just type in "my" -- I'm going to go back to everything that starts with "my." And so this allows you to filter your tests by just typing in phrases that you want to match your tests with. So if I come up in here and I type in second, you can see it's only running my second test. So that's how you can filter your tests with mocha. Another thing that we can do with mocha is actually view the source code. 
We can do this right from the browser. So let's go back to the browser, and I'm going to change my grep and take that off again. And then now I'm going to come up here and instead of clicking on a suite I'm going to click on one of these tests. And you can see that it's expanded open. It's actually showing me the code within that test callback, and I can do this for both the tests and I can open and close however I want. So this is a convenient little way to remind yourself of what the code looks like in one of your tests. Now this is a change from some of the other libraries, because instead of clicking on a single test to run just that one test, we're now seeing the source code. That means that there's not quite as convenient a way to run just a single test. You actually have to use grep for that, or another feature of mocha, which is its ability to run tests exclusively. And the way that you do that is you come in to a test and you add the only function to it. So let's run that in our browser. And you can see that it only ran the test that had only on it. Now of course you could write that only function on multiple tests, and all that mocha does in that case is just run the last one that you wrote the only function on. We can also add that only function to a describe block. And let's go ahead and duplicate this test. And doing it that way we're just running that one describe block. In addition to running only an exclusive test, we can actually do test skipping. And that's very similar to the only function, except in this case you use the skip function. Now you can see that we've actually skipped that inner describe block, and all the tests within it are showing up in blue without a checkmark, indicating that they've been skipped. Just like with only, we can actually use this on one of the tests. And now it's just skipping that one test. Now there's an alternate way to do this that matches up with Jasmine. You can actually put an x in front of the test instead of .skip. 
That way if you're converting from Jasmine, again, it's a little bit more convenient because it matches up a little bit more. Mocha also has a convenient way to write pending tests: if you have an idea for a test you want to write but you're not quite ready to implement it, you can just write the it function and then just close it off like that, with no callback. And then when we run it we'll see that it's showing up as pending, which looks just like skipped. In any case, it's a reminder to you that you've got something you need to go in and fix. Mocha's also nice because it will detect global variable leaks. So let's go back in to our code and let's create a global variable. Let's say i equals 3, and obviously this is an anti-pattern. We're creating a global variable and that's something we almost never want to do. Thankfully mocha will actually catch that for us. As you can see, it's throwing an error. A global leak is detected, the variable i. So if we comment that back out, then mocha's just fine. Now let's say that you actually have a global variable that is valid because of some other library that you've included, and mocha's throwing an error on it, but you don't want mocha to throw an error on it. So let's put our i back in. You can easily configure that with mocha by calling the mocha.setup function. And in here we pass in an options object. And in this case I'm going to create a key called globals. And the value that I pass into that is an array of strings that are each of the globals that I want it to ignore. So I've got an i in there. I go back to my browser, run this again. You can see it's not failing. If I remove that line of code we're going to be failing because of that global. There's another way to do this, more of a shotgun approach. Instead of globals, we call mocha.setup and we pass in the key ignoreLeaks. And set that to true. 
And when we do that it doesn't matter how many global variables we've got. Mocha will no longer detect them or throw errors about them. And the last feature of mocha that we're going to talk about is its ability to detect slow tests. So let's go to our browser. And we're going to set a break point on one of our tests. And I'm going to put a break point right in here. So our first test, because of that break point, is actually going to take a long time. Run it again, and this time we'll wait and then continue, and now you can see I've got this little red area next to my test indicating how long it took. And the red is an indicator that it took too long. So that's a nice way to quickly see which of your tests are taking too long. And the threshold here is 75 milliseconds. Anything over 75 milliseconds shows up. So let's run this again. And you can see 267. If you're kind of quick about it you can actually get it into a range where it gives you a little warning saying that, "This is kind of taking a while, but not necessarily terribly long." And so this is a way to detect tests that are slow, but not terribly so. So as you can see, mocha has a lot of features when writing and running tests. Its flexibility is definitely one of its greatest strengths. And getting comfortable with these options will be beneficial as you write tests using mocha. Asynchronous Tests Writing asynchronous tests with mocha is extremely smooth and easy. Mocha was built with asynchronous testing in mind, so the developers did their best to make testing async code as painless as possible. The process for writing an async test in mocha is extremely simple. And if you remember the Jasmine.async plug-in from the last module, it will look very familiar to you, since Jasmine.async was inspired by mocha. This first code example shows how a typical BDD style test looks in mocha, and the second example shows how we can make this into an async test. 
All we have to do is make our test callback take a done parameter and then at some point call that done parameter. This particular example is a bit nonsensical since it doesn't actually show how it would work in a real async test. So let's take a look at a slightly more real world example of doing asynchronous testing with mocha. Here I've got an empty test. I want to make this test asynchronous by adding a setTimeout call. I'll just do a simple assert so we can get to the point. And I'm going to set my timeout to 10 milliseconds. Now if I were to run this as is, it would actually time out, because the done parameter has never been called. Once you accept that done parameter you have to call it. Otherwise the test will time out regardless of what you do. So let's go in to our setTimeout callback and call done. So let's run that in the browser. And we can see that our test is passing. So even though we have asynchronous code -- and you can see here our assert is actually inside of an asynchronous callback -- the test is passing because we added the asynchronous functionality into our mocha test. An important feature of asynchronous testing is the ability to set a timeout for your tests. With asynchronous tests there's the possibility that an expected callback doesn't get called. In this case you want to be able to have your test report an error at some point rather than just hanging indefinitely. Mocha implements a timeout feature to handle this situation. By default mocha will time out after 2 seconds, but that timeout is configurable. You can either set it globally, using the same mocha.setup call we saw earlier, or you can configure it per test. Let's look at how to do that. Here I've got pretty much the same asynchronous test, only I've changed the delay to be 2 and a half seconds instead of 2 seconds. Let's go ahead and run this as is. And you can see that after 2 seconds our test has timed out. 
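As an aside on how this works under the hood: mocha decides whether a test is synchronous or asynchronous by inspecting the arity of the test callback -- a callback that declares a done parameter is treated as async. A simplified sketch of that idea (not mocha's actual code, and the done here is invoked synchronously just to keep the sketch self-contained):

```javascript
// Sketch of how a runner can distinguish sync from async tests by
// checking the callback's declared parameter count (fn.length).
function runTest(fn, onResult) {
  if (fn.length > 0) {
    // Async style: hand the test a done callback. A real runner would
    // also start a timeout clock here and fail if done never fires.
    fn(function done() { onResult('passed'); });
  } else {
    // Sync style: the test passes if it simply returns without throwing.
    fn();
    onResult('passed');
  }
}

var results = [];
runTest(function () {}, function (r) { results.push('sync ' + r); });
runTest(function (done) { done(); }, function (r) { results.push('async ' + r); });

console.log(results); // ['sync passed', 'async passed']
```

This is why accepting a done parameter and then never calling it makes the test time out: declaring the parameter is the signal that switches the runner into the waiting, async mode.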
Now if it only waited another half second the test would have passed. So let's go ahead and change our timeout, and we'll do that globally by calling mocha.setup; we pass in that object, and this time we're going to set the timeout. And we'll set it to 3 seconds. Let's run our test again. And now, after the two and a half seconds, the test is passing. Now in addition to setting this globally we can also do this on a test by test basis. Let's say that we only want this test to have the 3 second timeout and everything else to have the default 2 seconds. You can set this.timeout to 3 seconds. Let's go ahead and run that in the browser. And our test is still passing. So we can set our timeout either way, just based on whatever makes sense. In some cases you may want to set your timeout shorter than 2 seconds, if you want a test to fail fast because it should be completing very quickly, and on the off chance the test does fail you don't want to wait around for 2 seconds for it to finally time out. So that's it for asynchronous testing in mocha. As you can see, it's very simple and straightforward and really easy to do, especially compared to some of the other testing frameworks that are available. Integrating with CI Integrating mocha with CI is no more difficult than it was for Jasmine or QUnit. And, of course, the process is pretty much identical. The first thing we're going to do is use PhantomJS, and we're going to capture the output. So if you didn't watch the module on QUnit, you definitely have to go back and watch just the part of that module that's about integrating QUnit with CI, because that will give you the basic rundown. I'm not going to go in to everything in depth here, and this is not a course about TeamCity, so if you don't know TeamCity very well you'll just have to follow along as best you can. If you're using a different CI system the steps should be pretty analogous, since most CI systems operate in fairly similar ways. 
Just like with QUnit and with Jasmine, we're going to start by using PhantomJS. PhantomJS is a headless WebKit browser, and it allows us to run our tests within the context of an actual browser execution environment. You can download it here. Then we're going to use the TeamCity reporter. Remember, mocha has a bunch of different reporters. We used the HTML reporter for our browser reporting, but we're going to use the TeamCity reporter as well. I'm going to show you how to set that up. And we're going to make our own run mocha JS file. The code for this is going to be included in the demo code for this course. So let's start by looking at the simplest piece, which is our test. I'm just using a single BDD style test for this. The next thing we have to do is modify our index html file. Instead of the normal call to mocha.setup, where we just pass in the BDD string, this time we're actually going to pass in a whole configuration object. The BDD string goes with the key of ui, and then we pass in another key, reporter, and in that one we actually pass in a function. This function takes a runner parameter, and the body of this function creates two new mocha reporters, using new mocha.reporters.HTML and new mocha.reporters.Teamcity, and passing in that runner parameter. This is the syntax that allows us to use multiple reporters at the same time. The last piece is our run mocha JS file. I'm not going to explain the code to you, but this is the code that you'll need in order to run your tests under PhantomJS. So let's take a look at our directory and the files that we've got inside of there. Here I've got my directory that I'm going to be testing against. You can see I've got my PhantomJS file, my run mocha file, the html file, the test file itself, the CSS, and the Chai. Chai, of course, is required. 
The CSS isn't necessarily required, but if you want to run the html file manually to check the tests are working correctly outside of the CI system, then you're going to want to have that CSS file available. So let's go look at our CI configuration. Under TeamCity, this is how I configured it. I pointed it at my working directory. The command executable is phantomjs, and the command parameters are my run mocha JS file with a second parameter of the index.html file that I want to run. This is pretty much like we did with Jasmine and QUnit. So now I'll go back to my projects and run that build configuration. And you can see that we've passed. Build number 3 is up. The test passed. Now let's go back in and change the code. I'll make the test fail. And run it again. And we can see that indeed the tests are failing, and that's failing the build. So let's go back, make that test pass again, run it one more time, and now everything is passing again. So, as you can see, running client-side mocha tests under continuous integration is rather simple, and only requires a few steps. Summary So, as you've seen in this module, mocha is a great unit testing library. One of the most important things to remember about mocha is how versatile it is. Because of that versatility it's easy to use it the way that you want to. You can write the style of test that you want, using the kind of assertions that you want to use. That versatility also makes it easy to migrate from QUnit and Jasmine, or even any other unit testing framework in JavaScript. This versatility means that mocha supports both BDD and TDD style tests. In order to maximize versatility, mocha doesn't come with its own assertion library. Instead it lets you go out and choose one of the existing assertion libraries that's out there. Mocha's browser interface is fully featured, which allows you to filter tests, run single tests, or skip tests however you want. And it's also very easy to write asynchronous tests within mocha. 
In fact, it's so good that other unit testing frameworks have adopted mocha's pattern for writing asynchronous tests. So mocha is definitely one of the unit testing frameworks that you should consider seriously whenever you need to unit test your client-side JavaScript. Mocking Introduction Joe Eames: Hello, this is Pluralsight's course on Testing Clientside JavaScript. In this module and in the next two modules we will be discussing mocking in JavaScript. We'll start in this module with an introduction to mocking, in which we'll talk about mocking in general, why we do it, and what the benefits and drawbacks are. We'll discuss the different types of mocks and how mocking is different in JavaScript than in a language like Java or C#. Then, we'll look at how we might go about writing our own mocking library, so that we can understand a little better how JavaScript mocking libraries actually work. And in the following modules we will cover two mocking tools. We'll start with Jasmine and look at its spies functionality. Then we will move on to Sinon.JS, which is a library solely dedicated to mocking in JavaScript. This is a large library, so we will spend quite a bit of time looking at the different ways we can mock using Sinon. And a note about vocabulary: I have used the term system under test, or SUT, in previous modules, and I will continue to do so in this module. So whenever you hear the term system under test, I'm referring only to the specific part of an entire system or program which I want to test at that moment. Generally, this corresponds to a single class, but that isn't always true. Why do we Mock? The first question we need to consider is why we should mock at all. There are four reasons to mock when writing tests. The first one is isolation. The second reason we might want to mock is because mocking gives us easy flow control in our tests. The next reason we might want to mock is in order to remove interaction with complex or external systems. 
And the last reason we might want to mock is in case we need to test some interactions between our system under test and other classes. Let's take a more detailed look at each of these reasons. When we decide we want to isolate some of our code for testing, our motivation is that we only want to test a specific part of our system. For that reason, if our system under test calls another component or class, as is shown here, we don't want that to actually happen in our test, because we decided that we are not testing that other component. So this first scenario is where we want to use a mock object. You can see in the second diagram that I have replaced component A with a mock version of component A that allows me to test the system under test and only the system under test. Another reason to use a mock object is for flow control. Looking at this code, let us assume that the implementation of component A's checkSomething method is such that it is very difficult to set up the state to get it to return anything but true. So, to ease the burden of creating an initial state in our test that will cause it to return false, we can just use a mock object and tell it to return true when we want and false when we want. That makes our setup significantly simpler and makes our test a lot less brittle. Excluding complex systems is really just a variation of isolation, but our motivations are different. When we isolate a system under test we are trying to test it and only it. In this scenario, our system under test talks to a component that is extremely difficult, if not impossible, to test. Perhaps it talks to a database or to the network, and setting up test versions of those systems and keeping them clean can be very time consuming and error prone. Or perhaps it talks to an external vendor system and we incur costs whenever we do that. Whatever the reason, if we have a difficult to test external component, then we will want to exclude it from our automated tests. 
So, in this case, a mock object will let us simulate the functionality without involving the live system. Sometimes, we want to test the interaction between two components, but the effect of the interaction may not be easy to observe externally. Take the following scenario. This is a sample save method on an object. In this case, the object needs to tell its persistence mechanism, represented by the persister parameter, to either insert or update based on the return value of the isNew function. Our test may want to assert that we always call insert if isNew returns true, but the real persister object won't be able to tell us if that is what happened. This is where a mock object comes in handy. Most mock object implementations will allow us to ask the mock if a certain method was called or not. So, we've covered the basic reasons to use mock objects. There are certainly more reasons to use a mock object than this, but mostly they are just variations of those four reasons. Types of Mocks So, now that we have learned about the motivations for using mock objects, let's talk about vocabulary and the different types of mock objects. In general, most programmers refer to all mock objects as mocks, but there are definitely several subtypes of mock objects. Martin Fowler, who was one of the original authors of the Agile Manifesto and a luminary on agile development, has codified these subtypes in a blog entry, and I will follow his classification. First of all, the general term he gave isn't mocks, but instead, Test Double. Mocks are just a specific type of Test Double. Unfortunately, this is not a very commonly followed convention. Most programmers will say mock when what they mean is Test Double according to this breakdown. 
It is still important to know, since different mocking libraries will have different kinds of Test Doubles, and understanding what they mean when they have a mock versus a stub can make a great deal of difference as you are learning to use the library. In this course, I will do as most developers do and generally use the word mock when I am really referring to all kinds of Test Doubles. In most cases, this won't be ambiguous, but I will make every effort to be clear where it is necessary. The first kind of Test Double is the simplest. A dummy is simply an object that is passed around but never actually used. Most dummies are used to fill a parameter list. At most, the type of a dummy is tested, but no properties or methods are ever actually called on a dummy. The next kind of Test Double is a fake. A fake object actually has a working implementation, but the implementation is generally a shortcut of some kind. This means that the fake actually does what it is supposed to do, but does it in such a simplified way that it can't be used in production. Fakes aren't very frequently used. Instead, most developers opt to use the next type of Test Double, a stub. Stubs provide canned answers to questions. Perhaps every call to a given method returns the same answer, or perhaps it returns only a couple of different answers. For example, if the method was supposed to return a large prime number, a stub might randomly return one of three known large prime numbers. Stubs are usually programmed to return specific answers for a test. For this reason, stubs are frequently used to determine flow control in tests. Most mocking frameworks implement some kind of stub. A spy is an enhanced stub. A spy can not only provide canned answers, it can also record and hold information about how methods are called. For example, how many times a method was called and what parameters it was called with. Most mocking frameworks don't differentiate between stubs and spies.
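The stub-versus-spy distinction above can be shown in a few lines of hand-rolled code. The makeStub and makeSpy helpers are hypothetical, written just for this sketch; real libraries combine both roles.

```javascript
// A stub returns a canned answer and nothing more.
function makeStub(answer) {
  return function () { return answer; };
}

// A spy also returns a canned answer, but additionally records
// how it was called: how many times, and with what arguments.
function makeSpy(answer) {
  function spy() {
    spy.callCount += 1;
    spy.lastArgs = [].slice.call(arguments);
    return answer;
  }
  spy.callCount = 0;
  spy.lastArgs = null;
  return spy;
}

var getPrime = makeStub(104729);    // always returns a known large prime
var logSpy = makeSpy(undefined);

logSpy('hello', 42);
console.log(getPrime());       // 104729
console.log(logSpy.callCount); // 1
console.log(logSpy.lastArgs);  // [ 'hello', 42 ]
```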
The last type of Test Double is the infamous mock. A true mock is created with expectations on how it will be used and what calls will be made; a mock asserts behavior. A mock object is set up with specific expectations, say perhaps it expects that a certain method will be called, and if that doesn't happen during a test, the mock will actually fail the test because of a failed expectation. One of the great uses for a mock is to make sure that a specific method was called on an object and no other methods. Mocking in JavaScript There are several issues that create some special circumstances when mocking in JavaScript. This is especially true if you are familiar with any mocking frameworks on a typical statically typed server-side language like Java or C#. The first issue is that JavaScript doesn't have the concept of an interface. Most server-side mocking frameworks rely on interfaces to define the class they will be mocking. Some will also allow you to subclass a class that you want to mock. Neither option maps cleanly to JavaScript. The second issue is freestanding functions. Most object-oriented languages don't support a function outside of a containing class, but JavaScript sure does. Because of this, being able to mock a function is the basis of all JavaScript mocking libraries. Lastly, because JavaScript is a dynamic language, we can at any time change the implementation of an object and remove or replace any methods on it. This becomes our main method for mocking objects, as we will see in a moment. So, now that we have covered what's different about mocking in JavaScript, let's discuss the general approach for mocking in JavaScript. The simplest thing to mock in JavaScript is just a single function, and the implementation is just as simple. We simply replace the function with a Test Double function. Mocking objects is a bit more complex. Now, one option is to always just code up a Test Double yourself by hand and use that instead of the real object.
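Replacing a function with a Test Double really is just reassignment, as the paragraph above says. A minimal sketch, where api and fetchData are assumed names for illustration:

```javascript
// Replacement mocking of a single function: keep a reference to the real
// implementation, swap in a test double, and restore it afterwards.
var api = {
  fetchData: function () { throw new Error('would hit the real network'); }
};

var realFetch = api.fetchData;        // remember the original

api.fetchData = function () {         // the test double
  return ['canned', 'data'];
};

console.log(api.fetchData());         // [ 'canned', 'data' ] -- no network involved

api.fetchData = realFetch;            // restore after the test
```

Restoring the original after the test matters whenever the object outlives the test, which is why real libraries provide restore helpers.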
This lets you mimic whatever functionality you want, but often this gets to be a brittle approach, since as you change the implementation and interface of the real object that you are mocking, you have to update your Test Double. So, rather than code up something by hand, we'd like to be able to take the definition of an object and create a mock of that. Again, because of the lack of interfaces in JavaScript, the only way we have to define the interface of an object is to have an actual instance of that object. This means that we have to do one of two things. We either have to create a fake object with the same interface as the real object we want to mock, or we have to create an instance of the real object we want to mock and then mock it. So, let's take a closer look at those two options. Here I have a simple class with two methods. Let's assume I am using the typical constructor function and prototype modification to define that class. If I want to mock just that class, I can create a brand new object that just happens to have the same interface, then I can use that object as the basis for the mock object of class A. The primary drawback of this approach is that we have to do some extra coding initially to define another class that looks like the real class, and also, as the real class changes, we have to duplicate those changes in our fake interface class. This is actual duplication and something we would like to avoid if possible. The second method is slightly different. In this method, we create a real live instance of our class and then we use that as the interface definition for us to mock. This has its own set of drawbacks, the biggest of which is that if the constructor for our object has any side effects, then creating the real object might do things we don't want done in a test. So, these are our two options for creating an interface from which to create a mock object.
Now that we have an object to mock, let's look at exactly how we would go about creating the mock. Let's take a sample class with three methods. Remember that in JavaScript, functions are first class citizens. They are objects that can exist on their own, so the diagram is meaningful in showing each of the methods as being separate from our class. After running our object through a mocking framework, we replace all the existing methods with spy methods. Now whenever we call a method on the object, we don't get the real implementation. Instead we get a spy implementation. We can have this implementation return a value we want and have it keep track of how often it was called and with what parameters. Another option is to just insert the spy method between the class and the real implementation. That way we can get the actual code to run, but we can still track how often and with what parameters each method was called. Remember we are not mocking our system under test; we are mocking its collaborators. That lets us test how our system under test calls its collaborators, to assure that our code is working correctly by calling its collaborators correctly. What is awesome about JavaScript is how easy it is to mock an object. Let's look at exactly what it would take to create our own basic implementation of a mocking library. Mocking by Hand - Demo Here I have got a class with one method, and using a basic form of prototypal inheritance in JavaScript, I've got a subclass with one method as well. And then let's write a method that will allow us to take an instance of our subclass and create a spy based on that instance. So, what I am going to do is create an instance of the subclass to begin with. And I am going to call this method spyOn and I'll pass in the instance. Now, what would that spy method look like? Well, create the method that takes in the parameter, and all I have to do is loop with a for-in over each key in the class to spy on.
We'll do something with each key, and remember each key is going to be a method on the class itself, so in this case with our subclass, there are going to be two of them, method one and method two. So, let's take the class to spy on and we'll take the method at each location and we are going to reassign it a new value. Let's create a new method called createSpy that takes in that key. Okay, now that is going to look something like this. Inside here we'll return a new function. Inside that function we'll just simply do a console.log, and we'll pass in the message the key that we are spying on, which will be the name of the function. Okay. Now we have got our class and we are spying on it; we call our methods. And let's run this in the browser and see what we got. All right, so we are actually getting our console messages, I am a spy for method one and I am a spy for method two. We get that as we call each of those two methods. So, here in just a few lines of code we've already created a simple implementation for spying on a class. Of course we are missing a lot of functionality, like being able to count when a method was called and how often it was called and what parameters it was passed, but those are simple features that can be added on. Now remember, one of our scenarios was for a spy to not actually replace the original function calls, but to instead intercept them and then pass through the calls to the original methods themselves. So, let us go ahead and expand our createSpy function to do just exactly that. So, we'll call this new method createSpyPassThrough. Now, in addition to the name of the function we are going to call, we are actually going to need the context, which is the object itself, and the original function implementation. We'll go to our method and we'll change things a little bit. We are going to create the pass-through spy object, which is a function.
Now, let's log a message so that we know we're actually hitting our pass-through function. And then, let us call our original function and return that. Now, if you're confused at all, the details of the apply function are a little bit beyond the scope of this course, so I am not going to go into it, but you can easily Google this and understand why this code works. So, let's finish this up, and we are going to return that function we've created. And now, let's change our implementation on spyOn to call that method instead, and the other thing we are going to need to do is pass in as the context the class itself and the function. Now let's run this code in the browser and see how it looks. Okay, so now we can see that we are getting the message on the pass-through spy, and then we are actually seeing the inside of each of these methods. We are actually calling the real implementation; go back to the code and see that each of these methods actually calls console.log and says that they are inside themselves. So again, our implementation is a bit naive, but this is still the basics for how mocking libraries in JavaScript work, and understanding this code will help you understand how those libraries work when working with them. There is a very important thing to note here, and that is if we come down here and look again at my class instance, you can see that we created a new instance of the class, but as soon as we spy on it, it actually mutates the original object. It doesn't create a brand new object and leave this one untouched. This object itself is actually changed and becomes the spy. So, there you have the basic implementation of creating a spy in JavaScript. Summary In this module, we looked at mocking in JavaScript. We looked at the reasons why we would mock, which include isolation, flow control, removing complex systems, and testing interactions.
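Putting the whole demo together, the hand-rolled pass-through spy looks roughly like this. It is reconstructed from the narration, so names are approximate, and a simple object literal stands in for the prototype-based subclass from the demo:

```javascript
// Wraps one method: log that the spy ran, then forward the call to the
// original implementation with apply, preserving this and the arguments.
function createSpyPassThrough(key, context, originalFn) {
  return function () {
    console.log('I am a pass-through spy for ' + key);
    return originalFn.apply(context, arguments);
  };
}

// Reassign every method on the instance to a spy. Note that this MUTATES
// the original object; it does not create a new one.
function spyOn(instanceToSpyOn) {
  for (var key in instanceToSpyOn) {
    if (typeof instanceToSpyOn[key] === 'function') {
      instanceToSpyOn[key] =
        createSpyPassThrough(key, instanceToSpyOn, instanceToSpyOn[key]);
    }
  }
}

var instance = {
  method1: function () { console.log('inside method1'); return 1; },
  method2: function () { console.log('inside method2'); return 2; }
};

spyOn(instance);
instance.method1(); // logs the spy message, then 'inside method1'
instance.method2(); // logs the spy message, then 'inside method2'
```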
We also covered the different kinds of mocks, or Test Doubles, which include dummies, fakes, stubs, spies, and mocks. These different kinds of mock objects each serve different purposes when testing our code and allow us to accomplish the four goals that we discussed for mocking. We also talked about the different challenges of mocking in JavaScript, like the lack of interfaces. And finally, we looked at writing our own simple mocking library; although what we produced was extremely simplistic, it helped us to understand how mocking libraries work in JavaScript, which will be helpful in the coming modules. Jasmine Spies Introduction Joe Eames: Hello. This is Pluralsight's course on testing client-side JavaScript. In this module we will be looking at Jasmine Spies. In the introduction I mentioned two mocking libraries that we will look at, Jasmine Spies and Sinon.JS. We're going to start with Jasmine Spies first since their functionality is a bit simpler than what we will find in Sinon.JS. In fact, you might say that spies in Jasmine are pretty much a subset of the functionality in Sinon. Albeit, they have a nice tight integration with Jasmine, so there are built-in matchers for using spies that Sinon doesn't have, but other than that there is little functionality available in Jasmine Spies that isn't available in Sinon.JS. The first thing to note about spies in Jasmine is that, with only a single exception which we'll cover in a minute, spies in Jasmine are built around mocking a single function, whether that is a freestanding function or a single method on an object. Because of that, they are great for mocking callbacks and other functions that operate alone, but when you need to mock every function in an object, they can do this, but they are a bit unwieldy.
The next thing to take note of in Jasmine Spies is that they support both scenarios we discussed in our manual mocking, namely, replacement mocking, where the spy completely replaces the actual method, and pass-through mocking, where the spy only records the information about the call but still passes the execution off to the real method; and lastly, as I mentioned just a second ago, Jasmine Spies have a set of matchers built into Jasmine for working with them. So the integration is very nice, and the code to assert on a spy is very readable. Also, remember you can create your own custom matchers in Jasmine. So if there's one that you feel is missing, it's easy to create it. Spying on Callbacks So let's look at the code that we use to create a spy in Jasmine. The first scenario we'll look at is how to create a spy for a freestanding function, like a callback. Let's assume we have the following code that we want to test. This function takes in a callback, does some stuff, and if that stuff works right, it calls our callback. Now, we want to have a test that asserts that the callback is called in the right circumstances. This is perfect for a Jasmine spy: we would simply create a freestanding spy and use that as the callback. That code would look like this. Here in our test we create the spy using the createSpy method, and then we call our method passing in the spy, and then we do an expect on it, making sure that it was called, and that's all there is to creating a freestanding spy in Jasmine. All right. The demo I'm going to show here is really simple. I've got this very simple function. It takes in a callback, and then just calls it. This lets us demonstrate what we want to do. So I'm going to create the spy using that Jasmine createSpy, and I'm going to give it a name of myspy. Now, this name that you give it here really isn't very important. So you don't need to sweat what you're going to name it.
Then next I'm going to call the system under test, which is callMyCallback, passing in that spy. Now, of course, it's going to call it, and I want to assert that that happened, so I'll call expect that the spy toHaveBeenCalled. All right. Let's go over and run that in my browser, and we can see that we're passing our test because, indeed, it is checking that our callback got called. So that's the very basics of creating a spy for use in place of a freestanding function. Spying on Existing Functions The spyOn function is another way to create a Jasmine spy. Unlike the createSpy function, the spyOn function is used to spy on a method of an existing object. So we wouldn't use this so much for creating a spy to use in place of a concrete callback like we saw with the createSpy function, but instead we can take a dependency of a class that we are trying to test and create a spy for one of the dependency's functions. A spy created in this way will replace the function that it is spying on, so that the old function will no longer be called. The spyOn function is a global function, and it's attached to the window object. So you can use it anywhere. The basic usage for the spyOn function is really quite simple. We call the function, and the first argument is the object that has the method we want to fake, and the second argument is the name of the method as a string. So let's just take a look at this in a live test. So what I've done is I've created an object here that has two methods in it. The first one is save. Save takes no parameters and has no return value, and just does some work. All right. You'll notice I commented out the body. That's because what it does is really not important. Then we've got getQuantity. Again, taking no parameters but returning a 5. This is a very simple object. I've put this object in the actual same file as our tests just to make it easy to demo what's going on. Of course, you would never do this in a live system.
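The callback spec from this demo can be sketched as follows. The tiny createSpy and expect stand-ins below are assumptions written so the snippet runs on its own; inside a real SpecRunner you would use Jasmine's own jasmine.createSpy and expect instead.

```javascript
// Simplified stand-ins for jasmine.createSpy and expect(...).toHaveBeenCalled,
// just enough to show the shape of the test. Not Jasmine's real code.
function createSpy(name) {
  function spy() { spy.wasCalled = true; }
  spy.wasCalled = false;
  spy.identity = name; // the name really isn't important, as noted above
  return spy;
}
function expect(spy) {
  return { toHaveBeenCalled: function () { return spy.wasCalled; } };
}

// The system under test from the demo: takes a callback and just calls it.
function callMyCallback(callback) { callback(); }

var spy = createSpy('myspy');
callMyCallback(spy);
console.log(expect(spy).toHaveBeenCalled()); // true -- the callback was called
```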
So the first test we want to do is just a basic implementation of spying on an existing function. So our test will be should spy on save, and here we're going to create a spy, and this is going to be the object. Remember, our second parameter is the name of the function as a string. So I'll put in save as a string here. Now what I've got is that spy is a pointer to that function, but we've also replaced the actual implementation of the myObj.save function. So if I go ahead and call it, what I'm actually doing is calling the spy. Then I can just set my expectation, expect(spy).toHaveBeenCalled. Close up, and let's run this in the browser, and that's passing, no problem. So let's look at a more interesting example. This time we want to spy on getQuantity, but you'll notice we've got a problem. GetQuantity actually returns a value. If we replace getQuantity with an entirely different function, it's not going to, by default, return anything. So that's not really going to work out. So what we do to handle this case is we still create the spy the same way, but now we can tell it what we want it to return. So I'm going to tell it to return 10 by calling the andReturn function, and now I can expect that myObj.getQuantity is equal to 10. Let's go ahead and run that in the browser, and that is also passing. Now, notice if we hadn't replaced the function, it would have been returning 5. Because we compared it to 10, we know that we're actually using whatever our return value is. We can set this value to anything that we want, and as long as it matches here, the test is going to pass. So we've seen how to call a spy and check that it got called. We've also seen how to force a return value out of a spy that we create. Another thing that we can do is provide an actual implementation for a spy. So let's assume that we wanted to log out to the console whenever our system under test calls that method. That's quite simple to do.
Let's go back to our code, and we'll create our test, should spy on getQuantity using a fake, and this time create a spy -- again, using myObj and getQuantity. I'm going to call andCallFake, passing in a function, and this is the function that it's actually going to use as the implementation. I'll have it log out, and then I'm actually going to have it return 20, and my expectation will reflect that. ToEqual 20, and close up. Let's go run that in the browser, and we have all three of our specs passing, and if we look at the console output, we can see that it is logging out and returning 20. So this gives us a way to put a custom implementation in whenever we create a spy on a function. In addition to providing our fake implementations, either through the andReturn or the andCallFake methods, we can also create a spy that still dispatches the call off to the real implementation. This is useful if the real implementation is useful for our test, but we still want to be able to monitor how many times the function was called or monitor the arguments that were passed. So let's take a look at how we do that. So this time it should spy and call through. Our spy's going to be created the same way. Now we call the andCallThrough function. This tells it to watch the function but still use the original implementation. So we can expect that myObj.getQuantity is going to return 5. This is what the original implementation was. We can also expect that the spy, again, the handle that we created, that its toHaveBeenCalled matcher will assert that it actually got called and that the spy is tracking the call, and now four specs are passing. That new spec is, indeed, doing what we said it would do. Now, the last thing we might want to do is cause our spy to throw an error so that we can test our exception handling in our class. This is also really easy with Jasmine Spies.
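The three behaviors just demonstrated -- andReturn, andCallFake, and andCallThrough -- can be sketched with a simplified spyOn stand-in. This is not Jasmine's actual source, just an illustration of the semantics, using the same myObj/getQuantity shape as the demos:

```javascript
// Simplified stand-in for Jasmine 1.x's spyOn with its and* modifiers.
function spyOn(obj, methodName) {
  var original = obj[methodName]; // captured before we replace it
  function spy() {
    spy.wasCalled = true;
    if (spy._mode === 'return')  return spy._value;                    // canned value
    if (spy._mode === 'fake')    return spy._fake.apply(this, arguments); // fake impl
    if (spy._mode === 'through') return original.apply(this, arguments);  // real impl
    return undefined; // default: replacement spy with no return value
  }
  spy.wasCalled = false;
  spy.andReturn      = function (v) { spy._mode = 'return'; spy._value = v; return spy; };
  spy.andCallFake    = function (f) { spy._mode = 'fake';   spy._fake  = f; return spy; };
  spy.andCallThrough = function ()  { spy._mode = 'through';              return spy; };
  obj[methodName] = spy; // the spy replaces the real method
  return spy;
}

var myObj = { getQuantity: function () { return 5; } };

var spy = spyOn(myObj, 'getQuantity').andReturn(10);
console.log(myObj.getQuantity()); // 10 -- the canned value

spy.andCallFake(function () { return 20; });
console.log(myObj.getQuantity()); // 20 -- the fake implementation

spy.andCallThrough();
console.log(myObj.getQuantity()); // 5 -- the original implementation runs
console.log(spy.wasCalled);       // true -- but the spy still tracked the calls
```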
So we just go in here and we create a new test, and this time when we create our spy, we're going to call the andThrow function. I'll throw a new error, and I'm going to create a quantity variable outside of a try-catch block, and I'll put the call to getQuantity inside of a try, and I'm going to catch the exception, and if we catch the exception, I'll set quantity to 100. Now we can expect that quantity is going to equal 100, because that method is now going to throw an exception even though the original implementation of getQuantity did not. So let's run it again, and all five specs are passing. Even though we're actually throwing an exception, it's getting caught and our quantity's getting set to 100. So we have seen how to use Jasmine to spy on methods of existing objects. This is a really common scenario for mocking in JavaScript, and Jasmine makes it really easy to do with the spyOn method. Creating Spy Objects The next function we'll look at is the createSpyObj function. This function lets us create an object from scratch with spy methods. That way we don't have to have a pre-existing object to hijack. Instead, we can just create an object that looks the way we want it. The syntax to do this is really simple. We call the jasmine.createSpyObj function. That function takes in two parameters, the name for the spy and then an array, which is a list of strings. Each of the elements of that array is used to become the name of a spy method on that object. Let's dig right into the code and see how this works. So we'll start by creating a spy, but this time it's a spy object rather than just a spy method. We're going to call jasmine.createSpyObj, and again we pass the name, and then we give it an array of strings. So we're going to have two methods. We're going to fake getName and save. So now, since getName obviously returns some kind of a name, we need to provide an implementation for that. So we're going to use the andReturn function.
So we're going to say spy.getName.andReturn, and we'll have it return Bob. For save let's provide a fake implementation that logs out to the console, andCallFake, and here we've got a function, console.log('save called'). All right. Now we can set up our expectations. So let's expect that I'm going to call spy.getName, and it's equal to Bob, and then we're going to call spy.save, and we'll just check that that's been called, using the toHaveBeenCalled method. Let's run that and see how that looks in the browser. We can see that our spec is passing. So it's really quite simple to create a full spy object and give it whatever methods you want and provide fake implementations for all those methods or have them return the values that you want. Now, again, this isn't necessarily something that you would do when you're creating an object that you want to test. This is for -- if the object that you want to test has some dependency that has a lot of functions that you might be calling, rather than just calling spyOn over and over again, you can just call createSpyObj and create the object that has all the spies already set up in one easy call. Jasmine Spy Matchers In this section we will look at all the different assertions, or in Jasmine terminology, matchers, that we can use with a Jasmine spy. All these matchers are available from the expect method. We'll start by looking at the simplest one, which is the one we have already seen. This is the toHaveBeenCalled method. This matcher simply asserts that the spy was called at least once. Since we've already seen this one in action, we won't take a look at a code example. The next matcher we might want is to check to make sure that a spy was never called. This is really easy to do. We just add the not right before toHaveBeenCalled. Again, this is so simple we won't look at a code example.
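What createSpyObj builds can be sketched with a simplified stand-in: given a name and a list of method names, produce an object whose methods are all spies. This is an illustration of the idea, not Jasmine's real code, using the getName/save example from the demo:

```javascript
// Simplified stand-in for jasmine.createSpyObj.
function createSpyObj(name, methodNames) {
  var obj = {};
  methodNames.forEach(function (methodName) {
    function spy() {
      spy.wasCalled = true;
      if (spy._fake) return spy._fake.apply(this, arguments);
      return spy._return;
    }
    spy.wasCalled = false;
    spy.andReturn   = function (v) { spy._return = v; return spy; };
    spy.andCallFake = function (f) { spy._fake = f;   return spy; };
    obj[methodName] = spy; // every named method becomes its own spy
  });
  return obj;
}

var spy = createSpyObj('mySpy', ['getName', 'save']);
spy.getName.andReturn('Bob');
spy.save.andCallFake(function () { console.log('save called'); });

console.log(spy.getName());      // 'Bob'
spy.save();                      // logs 'save called'
console.log(spy.save.wasCalled); // true
```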
Now, if we want to assert that our method was called with some specific arguments, we use the toHaveBeenCalledWith method, and we give it the arguments that we are expecting it to have received when it was called. Just like the toHaveBeenCalled method, this method only asserts that it was called at least once with these arguments, and makes no guarantees about multiple calls with the same arguments or calls with other arguments. The one thing that it does do is verify all arguments in the call. So the arguments have to match exactly. We'll take a look at an example of that. Let's start with just the basics. We're going to create a test, and we'll call it should verify arguments. In this one we'll start by creating a spy. I'll name this one mySpy, and I'm going to call that spy, and I'm going to pass it a 1. Now let's set up an expectation that spy is toHaveBeenCalledWith, and then we'll pass it a 1. Okay, now let's look at that in the browser. Okay, so we can see that that's passing. Let's go make this a little bit more complex. Let's just show that if we call spy with a different parameter, then our test is still going to pass, because we're still calling it at least once with the parameter of 1, and that works even if we call spy with 1 and something else. It doesn't care about this call. It only cares about this one, because that one matches. So we're still passing, but if we come in here and comment the second one out, even though we're calling spy with a 1 in its first parameter, it has a second parameter, and that's not what we're expecting. So our test is going to fail. So you can see here the feedback, which is actually one of the really cool things about this, and it shows you that we're expecting it to be called with just a 1, but it was actually called twice, once with a 2 and once with two 1s. So you can look at this and see why you might not be matching up when you think you should be. Very useful for diagnosis.
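The matching rule just described -- pass if any one recorded call's argument list matches the expected arguments exactly -- can be sketched like this. The createSpy and wasCalledWith helpers are simplified stand-ins, not Jasmine's real matcher:

```javascript
// A spy that records every call's arguments.
function createSpy() {
  function spy() { spy.calls.push([].slice.call(arguments)); }
  spy.calls = [];
  return spy;
}

// toHaveBeenCalledWith semantics: true if ANY call matches exactly,
// in both argument count and argument values.
function wasCalledWith(spy, expectedArgs) {
  return spy.calls.some(function (args) {
    return args.length === expectedArgs.length &&
           args.every(function (arg, i) { return arg === expectedArgs[i]; });
  });
}

var spy = createSpy();
spy(1);
spy(2);
spy(1, 'something else');

console.log(wasCalledWith(spy, [1]));    // true  -- the first call matches exactly
console.log(wasCalledWith(spy, [4]));    // false -- never called with just a 4
console.log(wasCalledWith(spy, [1, 2])); // false -- arguments must match exactly
```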
Let's go up here and make this pass again, and that's the basics of verifying your parameter calls. Now, just like with toHaveBeenCalled, toHaveBeenCalledWith can also be negated with the not prefix. This will make sure that it was never called with a specific set of arguments. Let's go and create a new test for that. This one will start out the same way, and we'll call it, passing that 1, and this time we'll expect that spy is not.toHaveBeenCalledWith, and pass it a 4. All right. Now we can see that this test is passing as well, because we never called spy with just a 4. Again, if I go in here and call spy and pass in a 4 but also pass in a 1, it's still going to be fine, because it was never called with just 4. But if I call spy with 4, now it's going to fail, and again, just like the other scenario, it shows you what all the calls were so you can match that up. Because this is definitely one of those things that can get a little confusing when you're trying to diagnose why a test isn't behaving the way that you want it to behave, when you're checking parameter values against what you expect them to be and what they actually are. Jasmine Spy Metadata Now, the rest of what we're going to cover here isn't so much matchers as it is just metadata about the calls that we can use in our asserts. So the first of these is the calls collection, and everything that we'll cover starts with that. The calls collection is an array of objects that represents each time that your spy was called. So if a given spy was called three times, there will be three objects in the array. Each of these objects has two properties with information about that specific invocation. The first is the args property, which itself is an array that has each argument that was passed for that invocation. The second is the object that was the context of the call.
Or in other words, it was the object that was the this when the spy was invoked. This can be really useful for callbacks, to determine what object they were called on. Let's look at some examples. I'm going to start off by creating a test to check that we can test metadata. Now in this test I'm going to do something pretty simple. I'm going to create an object, and I'll just give this object one method. Then I'll create a spy, and I spy on that object, spying on its one method, and now I'm going to call that method, and I'm going to call it with 1, and then I'll duplicate that and do it with a 2 as well as a 3. So now if I were to just do an assert, spy.calls would be an array with three elements, because there are three calls. So I'm going to look at the first one. I'm just going to check the arguments on that, and of course I only have one argument. So I'm going to check that first argument. I want to check that it's equal to 1. Now let's go up and fix our other test so that it will pass, and then we can run this in the browser and see how it looks, and now all three of our asserts are passing. So let's check something else on this. Let's make use of that call's object property. Check the first call again, and we'll check that its object, the one that was the this, is equal to myObj. Running that again. It's still passing. So let's add one more thing. Now, since calls itself is just an array, we can actually check its length, and we can check that it was actually called exactly three times. Again, we're still working. So you can see there's quite a bit of checking we can do using the metadata on calls to verify that our code is working correctly. Now, there are also a few convenience methods available to us. The first one is callCount. This is just a convenience for calls.length. Then next is mostRecentCall, and that just gets, of course, the last call; and lastly is argsForCall, and this is the args array off of calls.
So you can use this instead of writing out calls.args. So let's look at a couple of examples of these as well. So I'm going to add in an expectation that spy.callCount is equal to 3, and let's also check spy.mostRecentCall, and that its first argument is equal to 3; and lastly spy.argsForCall. This is an array of arrays. So I want the arguments for call one, and I want to get the first argument, and then I can say that that's equal to 2. Now let's run all these, and things are still passing. So between the metadata available and the convenience methods on that metadata, there's a lot of really easy checking that we can do to verify that our code's correct, that we are passing in the correct arguments to our calls, and that they are getting called the right number of times. Utilities Now, the last thing we're going to cover about Jasmine is just a couple of little utility methods that you may find useful when writing your Jasmine tests. The first one is the isSpy method, which is a little method on the Jasmine object that lets you tell whether an object is a spy or not, and the last one is the reset method, which is actually a method on spies, as it lets you reset them. This will be a lot easier to explain by just showing you some code. So let's go directly to a demo. So I'm going to create a new test. I'm going to create a spy, and I'm going to expect that spy is actually a Jasmine spy, and let's run this in a browser. So you can see that all four of our tests in this suite are passing. So let's just adjust this test as well, and we'll actually call the spy, and now if we were to expect that spy.callCount was equal to 0, it would fail, because it's actually been called once; but if we come in here and say spy.reset, that's going to reset the call count. So at this point the call count will be zero. Let's run that, and our test is still passing.
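The metadata described in this section -- the calls array with its args and object properties, the callCount, mostRecentCall, and argsForCall conveniences, and reset -- can be sketched in one simplified stand-in. Again, this is an illustration of the Jasmine 1.x spy shape, not its real source:

```javascript
// A spy that keeps the metadata Jasmine 1.x spies expose.
function createSpy() {
  function spy() {
    var call = { args: [].slice.call(arguments), object: this };
    spy.calls.push(call);             // one entry per invocation
    spy.callCount = spy.calls.length; // convenience for calls.length
    spy.mostRecentCall = call;        // convenience for the last call
    spy.argsForCall.push(call.args);  // convenience array of args arrays
  }
  spy.calls = [];
  spy.callCount = 0;
  spy.argsForCall = [];
  spy.reset = function () {           // wipe everything the spy recorded
    spy.calls = [];
    spy.callCount = 0;
    spy.argsForCall = [];
    spy.mostRecentCall = undefined;
  };
  return spy;
}

var myObj = {};
myObj.method = createSpy();
myObj.method(1);
myObj.method(2);
myObj.method(3);

console.log(myObj.method.calls[0].args[0]);          // 1
console.log(myObj.method.calls[0].object === myObj); // true -- the call's this
console.log(myObj.method.callCount);                 // 3
console.log(myObj.method.mostRecentCall.args[0]);    // 3
console.log(myObj.method.argsForCall[1][0]);         // 2

myObj.method.reset();
console.log(myObj.method.callCount);                 // 0
```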
So there's a couple of utility methods that you can use in your Jasmine tests to make sure that an object really is a spy and to reset spies if you need to. Both of these utility methods are really things that you probably won't end up using very much, if at all, but they are there if there is a really big need for them. Summary In this module we covered Jasmine Spies, which is Jasmine's mocking library. We started with the spyOn function, which lets us spy on a method of an object. With spyOn we can either set our own return value or let the spy call through and use the underlying implementation. We also saw the createSpyObj function, which lets us create an entire object that has only spies for methods. We also looked at Jasmine spy matchers, which let us make assertions about spies. Jasmine Spies offer a lot of simple and convenient functionality that lets you mock objects if you're using Jasmine for your testing framework. Sinon Introduction Hello. This is Pluralsight's course on Testing Clientside JavaScript. In this module, we will be covering SinonJS. Sinon was created in 2010 by Christian Johansen as part of the work he did while writing the book, Test-Driven JavaScript Development. The website for that book is at tddjs.com. Sinon is a complete mocking library for JavaScript. Pretty much everything we saw with Jasmine spies is available with Sinon. In addition to that, Sinon can do quite a bit more. That doesn't mean that Jasmine spies aren't sufficient for your mocking needs while testing. Of course, that really depends on your usage, but again, Sinon is very comprehensive. It is a very popular mocking library. The official URL for SinonJS is here, and it has a link to download the latest version. The documentation on that site is really pretty good, so it's a great place to go for a refresher on how to do something with Sinon. Spies We'll start our coverage of Sinon by looking at the most simple feature of Sinon, spies.
Spies have 2 purposes in Sinon. The first is to provide a test double for a single function, for example when testing something that takes a callback. The second purpose is for watching an existing function or method of an object. In this case, the original implementation of the method will still be used. This is pretty much identical to Jasmine's andCallThrough function, which we saw earlier in this module. Spies are created with the sinon.spy function. There are 3 ways to call it. The first method is to call it without any parameters. This creates a simple anonymous function with no return value. This function is suitable to be used wherever a callback is needed for testing purposes. Let's take a look at a working example of that. Since Sinon is not an actual test framework but just a mocking library, we still need a test framework to use, so we'll continue to use Jasmine as our test framework, but we'll just add to our SpecRunner HTML file a reference to Sinon so that we can use Sinon instead of Jasmine for spies. I've written a little bit of code here. I've created a simple system under test object that has a single method called callCallback, that does exactly what it says. It takes in a callback and calls it. So let's write a test for that method. Since it's just calling the callback and doesn't care about the return value, I can just use a simple spy by creating a spy using sinon.spy. I take my system under test, and I call its callCallback method, passing in that spy I just created, and at that point the spy is going to get called, so we can do an assert on the spy.called property that it is true, and now if I run this in the browser, that's going to pass. I'll actually forgo running this in the browser, since by now you should be pretty comfortable with how Jasmine looks, and that feedback isn't terribly valuable for us as we're looking through this code.
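That first demo can be sketched in plain JavaScript. The makeSpy helper below is a hand-rolled stand-in for sinon.spy() called with no arguments, an anonymous function double that just records whether it was invoked; it is illustrative only, not Sinon's real implementation.

```javascript
// Hand-rolled stand-in for sinon.spy() with no arguments: an anonymous
// function double that records whether it was invoked. Illustrative only.
function makeSpy() {
  function spy() { spy.called = true; }
  spy.called = false;
  return spy;
}

// The system under test from the demo: takes a callback and calls it.
var mySUT = {
  callCallback: function (callback) { callback(); }
};

var spy = makeSpy();
mySUT.callCallback(spy);
console.log(spy.called); // true
```

Because the test only cares that the callback was invoked, a spy with no behavior and no return value is all that's needed here.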
Instead, we'll just continue on. So now that I've got this code working, let's go back to our system under test and introduce another method. This one is going to be called callCallbackWithReturnValue. This is going to take in a callback as well, and this time it's going to return the value of that callback. So now that we've got that, things are going to be a little bit different. We can't just use a plain spy because it has no return value, so instead we're going to look at the second usage of spies in Sinon, and that is spying on actual functions. So we're going to write a new test. And in this one we'll also again create a spy using the same spy method, but this time we're going to pass in an actual function. I'll call it realCallback. Let's go create that really quick. We'll just have this one return 4. Now that we've got the spy based on that callback, we can call the system under test, and we'll capture the return value output. And now at this point, we can actually call some asserts, so we can expect that spy.called is again true, but we can also expect that our return value is 4. And that's how we would test that our callback is actually getting called and that it's actually calling the real implementation, because the return value is going to be exactly what the actual function's return value was, but our spy is still able to watch that function. Now the third usage of Sinon spies is to spy on not just a freestanding function but instead a method of an object. So let's go back to our system under test and let's add in another method. This time we'll call it callDependency. And what this does is it's going to call somebody else. We'll call it myDep and someMethod.
So what's going on in here is it's actually calling an external object. So let's create that external object. It's got a method called someMethod, and we'll have that return 10. And what's going to happen up here in this call is it's going to call the someMethod method on the myDep object and then return that value. So let's write a test for that. We're going to create our spy again, and we're going to use the third parameter set, which is passing in the actual object that we want to spy on and then the name of the method we want to spy on on that object. Remember, someMethod is a method of myDep, not a method of mySUT. And then our return value is mySUT.callDependency. And then we can set our expectations: spy.called is true, and we can expect also that the return value is now 10. And this would all work. Again, it's just watching the actual someMethod method, it's not doing anything but watching it, so the implementation of someMethod is still going to be the same, which as we saw before returns 10, so this is going to return 10. Now, I just want to make one small note here. Of course none of the code that's used in demonstrating these functions should be considered best practices. In fact, most of the code we use in this entire course is pretty much arbitrary and overly simplified. But I feel it's worth mentioning something that just happened here. The class mySUT has an implicit dependency on a global named myDep. You can see that right here. This type of thing is never good. Implicit dependencies are the kind of invisible gotcha that just lurks about waiting to bite you as an invisible side effect, and it makes your code brittle and more difficult to change. So let's go in and rewrite this method to make the dependency explicit by actually passing in the object.
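The refactor being described can be sketched side by side. The names here (myDep, someMethod, callDependency) follow the demo; the implementations are illustrative stand-ins, not the course's actual files.

```javascript
// Sketch of the implicit-vs-explicit dependency refactor. Names follow the
// demo; the bodies are illustrative stand-ins.
var myDep = {
  someMethod: function () { return 10; }
};

// Before: implicit dependency on the global myDep.
var implicitSUT = {
  callDependency: function () { return myDep.someMethod(); }
};

// After: the dependency is explicit. Whoever calls callDependency supplies
// the collaborator, which also makes it trivial to hand in a spy or stub.
var explicitSUT = {
  callDependency: function (dep) { return dep.someMethod(); }
};

console.log(implicitSUT.callDependency());      // 10
console.log(explicitSUT.callDependency(myDep)); // 10
```

Note that the explicit version can be tested with any object that has a someMethod, with no global state involved.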
We'll pass it in like that and return dep.someMethod. And then we'll go back to our test, and we'll update the call to pass in that global, and now that's really made the code a lot more maintainable, because instead of these 2 objects having an implicit dependency, the dependency has been made explicit by passing it in as a parameter to this callDependency function. And that's how you use spies in Sinon. Again, you can use them for 2 purposes. You can either provide a test double for a single function or you can watch an existing function or method of an object, and there are 3 ways to call it: with no parameters, where it creates a function for you that has no return value; with a single parameter that is another function that you're going to watch; and with an object and a method name that you're going to watch. Spy API The Sinon spy API is rather large. Spies have a lot of methods that you can use to determine how they ran and whether they ran the correct way. So let's go ahead and take a look at the spy API. We'll actually go through this fairly quickly because of the large number of methods within the spy API, but most of these are going to have some similarities with others, so there's really just a few different methods with a bunch of variations on them. The first one we're going to look at is called. Called is very straightforward, it just checks that the spy has been called at least once. The variations on called are calledOnce, calledTwice, and calledThrice. Those 3 check to make sure that the spy was called an exact number of times. Then we have the firstCall method. This actually gets you a call object, which we'll look at in a few minutes. This method gets you the call object that represents the very first time that your spy was called. We've also got a secondCall, thirdCall, and lastCall.
The next method is calledBefore, which takes the parameter of another spy, and it checks to make sure that your spy was called before a different spy. And then there is the corresponding calledAfter. We've also got calledOn, which makes sure that your spy was called with a particular object as this, and alwaysCalledOn makes sure that it was called every time with that particular object as the context or the this object. CalledWith takes a set of arguments and checks that the spy was called at least once with that set of arguments. Notice that this is not an exact match: the call must include these arguments, but there can be additional arguments in addition to whatever you give it. AlwaysCalledWith is the same as calledWith but checks that every call on that spy included the arguments that you pass to it. CalledWithExactly is an exact match. So if you give it 2 arguments, then any matching call must be using those exact 2 arguments and no more. AlwaysCalledWithExactly does the same thing as calledWithExactly but checks every call. NotCalledWith checks that at least one time your spy was called without the arguments that were given. NeverCalledWith checks that it was never called with the arguments that you give it. CalledWithMatch takes in a set of matchers. We will look at matchers later, but they are ways to specify arguments without being quite so exact about what arguments you want to match up. AlwaysCalledWithMatch does the same thing as calledWithMatch but checks that every call matches instead of just one call. NotCalledWithMatch checks that it was called at least one time without the arguments that you specify, and neverCalledWithMatch checks that your spy was never called with arguments that match the arguments that you supply. The next method is calledWithNew. That checks that your spy was called with the new operator and used as a constructor. The threw method checks that your spy at least once threw an exception.
Threw with a string parameter checks that your spy at least once threw an error, and the type of the error has to match the string that was passed in. So you might for example pass in the string 'SyntaxError'. Threw with a nonstring parameter checks that your spy threw a particular object. AlwaysThrew checks that every call to your spy threw an exception of any kind, and then you've got 2 variations on that that match the variations on threw. Returned checks that your spy at least once returned an object that matches the given object. AlwaysReturned checks that every call returned the given object, and lastly we have a set of methods that give us back the metadata about each call made on our spies. GetCall gives you back the metadata about a specific call. It has 4 convenience methods, which we saw earlier: firstCall, secondCall, thirdCall, and lastCall. ThisValues gets back an array of all the objects that were used as the context for each of your calls to that spy. Args gets back an array of arguments, very similar to what we saw in Jasmine. Exceptions gives you back an array of exceptions that were thrown. And returnValues gives you back an array of the return values that were given for each call. We have the reset method, which is just like what we saw with Jasmine. If you call this, it will reset your spy. And lastly, we have printf, which is a debugging statement that you can use whenever you are just having trouble figuring out why you can't match up a particular call on a spy. You can use printf to print out a whole bunch of metadata about each call. It takes a bunch of parameters. I'm not going to go into depth on this method here in this course. I'll leave it up to you to look at the documentation if you ever need to use printf to debug why you can't match up a call. Now we've talked about getCall and its convenience methods for first, second, third, and lastCall.
Those methods return a call object, and that call object has its own API that we're going to look at next. So this is the Sinon call API. The first method is calledOn, which checks that a particular call was called with a given context. Then we have calledWith, where you give a list of arguments, and it checks that the call was called with those arguments. Again, like the previous variations on this method we saw, these arguments are just the minimum set, and there can be other arguments on top of that. Of course, we have the corresponding calledWithExactly. Then calledWithMatch, again using matchers, which we will cover shortly, and notCalledWith checks that the call was not called with a particular set of arguments. NotCalledWithMatch. Then we've got threw, which checks that this particular call threw an exception; threw with a type string; threw with a particular object. Then we've got thisValue, which actually gives you back what the thisValue was for this call. The args, which gives you an array of all the arguments for this particular call, and the exception, which gives you back the exception that was thrown in this call, if any. And lastly, we have the returnValue, which gives you back the return value for that call. So this is the Sinon spy API. As you see, it's really large. There are a lot of methods, but there are basically just a few methods with variations on each of them, and there are basically 2 sets of methods, one for the spy and one for each call on the spy. Because this API is so comprehensive, in combination with matchers, which we'll look at in a little bit, it's really easy to check and see if a particular spy was invoked in a particular manner and check that your code is being called correctly. It's also important to note that the spy API is actually supported by the other types of objects that Sinon gives us, which are stubs and mocks.
So those 2 objects build upon this API and allow you to use it in conjunction with additional methods to assert that your code is correct. Assertions Sinon comes with an included set of assertions that you can use on Sinon objects. These assertions can be used instead of the assertions that come with your testing framework. At first, you may wonder why you would use a different set of assertions instead of the ones that come included in your test framework, but there are a couple of really good reasons to use Sinon's assertions instead of the ones included with the test framework. First, the assertions included with Sinon are pretty specific, so they can make your tests a little more expressive and readable, but by far the biggest reason to use Sinon's assertions is because the error messages are significantly clearer when an assertion fails. Let's look at an example of that. I've already created a little bit of code here. I've got 2 tests. The first test uses a built-in assertion. We're using Jasmine again here. So it calls expect that spy.called is true. In the second test I created a spy, but this time instead of using one of Jasmine's assertions, we're going to use one of Sinon's assertions. So I'm going to do the same kind of assertion. I'm going to assert that the spy was called, but to do that with Sinon's assertions, I call sinon.assert.called and pass in the spy. Now, you can see in both cases I actually haven't called the spy, I've only created it. So both of these assertions are going to fail. So let's see what kind of feedback we'd get from the browser when we run these. All right, looking at the built-in assert, the only error message that we get is expected false to be true, but using the Sinon assert, we get an assert error: expected spy to have been called at least once but was never called. Obviously the feedback from the Sinon assert is much clearer and leads you to exactly what the problem is.
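The difference in feedback can be sketched in a few lines of plain JavaScript. The assertCalled helper below is a hypothetical stand-in for sinon.assert.called, written only to show why a spy-aware assertion can produce a far more specific failure message than a generic truthiness check.

```javascript
// assertCalled is a hand-rolled stand-in for sinon.assert.called, used only
// to illustrate why spy-aware assertions fail with clearer messages.
function makeSpy() {
  function spy() { spy.callCount++; }
  spy.callCount = 0;
  return spy;
}

// A generic assertion only knows it got `false` when it wanted `true`.
function assertTrue(value) {
  if (value !== true) throw new Error('expected false to be true');
}

// A spy-aware assertion knows what the value means and says so.
function assertCalled(spy) {
  if (spy.callCount === 0) {
    throw new Error('expected spy to have been called at least once but was never called');
  }
}

var spy = makeSpy(); // created but never invoked, so both assertions fail
try { assertTrue(spy.callCount > 0); } catch (e) { console.log(e.message); }
try { assertCalled(spy); } catch (e) { console.log(e.message); }
```

Both assertions check the same fact, but only the second failure message tells you that the problem is an uncalled spy rather than some arbitrary false value.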
So, this is an example of why using Sinon asserts is really advantageous when dealing with Sinon spies, stubs, and mocks. All right, so let's go back, and we'll actually take a look at the assert API. The first method we're going to look at is called, which we just saw an example of. Again, all these methods are off of the sinon.assert object. In addition to that, we've got notCalled, which checks that a spy was never called. We've got calledOnce, calledTwice, and calledThrice. Each of these methods takes in the spy as its single parameter. Then we've got callCount, which takes not only the spy but a number, and checks that that spy was called that exact number of times. CallOrder, which checks that a certain set of spies was called in a specific order. CalledOn, which checks that a spy was called with a given context. And alwaysCalledOn, which checks that every call to a spy was made with a certain context. We've got calledWith, which checks that a spy was called with a certain set of arguments. AlwaysCalledWith checks every call on that spy. NeverCalledWith checks that it was never called with those arguments. CalledWithExactly. Again, as we saw previously in the spy API, this checks the exact set of arguments and not just that the set of arguments given is a subset of the actual arguments used in the call. CalledWithMatch uses matchers, which again we will cover shortly. AlwaysCalledWithMatch, neverCalledWithMatch, and we've got threw, which checks that a spy threw an exception. Threw with a spy and an exception as the second parameter checks that that spy threw that specific exception, and finally alwaysThrew, which checks that a spy always threw a particular exception. So those are the built-in assertions included with Sinon. You don't need to use them.
You can use the assertions that come with your own assertion library, and if you're really comfortable with it, maybe it just doesn't make sense to switch, but if you do use the Sinon assertions, as we saw before, you really will get a lot better feedback when you have a failing test. Stubs Stubs are the next Sinon object that we're going to take a look at. Stubs are basically just spies with preprogrammed behavior. As I mentioned before, stubs support the spy API, but they also contain additional methods to control behavior. Stubs can be anonymous, or they can wrap existing methods. When using a stub, unlike a spy, the original function is not called. There are 3 purposes for using a stub. The first is to control behavior. The second is to suppress existing behavior, because the underlying implementation is not called, and the third is for behavior verification. Just like spies, you can check that stubs were called in a particular manner. There are 4 different ways to create a stub using Sinon. The first is to just call sinon.stub. This creates an anonymous function that acts as a stub. The second is to call sinon.stub and pass in an object and the name of a method. This is very similar to some syntax that we saw previously in Jasmine. This allows us to stub a given method on an object. Again, this replaces the existing implementation and uses the stub implementation instead. The third is to do the same but also give the stubbed implementation as the third parameter. This is useful when the behavior is just too difficult to adequately specify using the Sinon stub API. And the last way is to pass in an object, and Sinon will stub all of that object's methods. This is really convenient when you have a large object with a bunch of methods that you want to stub. Now you may be wondering why there's not an overload that simply takes in a function and stubs that function.
Well, that really doesn't make sense, since stubs, unlike spies, actually suppress the underlying implementation, so passing the original implementation as a parameter at construction does not give any value. The Sinon stub API really isn't very big, at least not in comparison with the spy API. It just adds a few methods that let you control behavior. The first one is returns. That takes in a single parameter, and it specifies that whenever you call that stub it will return the given object. The next one is throws. This tells Sinon to throw a general exception whenever that stub is called. Throws with a type parameter tells Sinon to throw a particular type of exception. Throws with an object tells Sinon to throw a particular object. WithArgs is a cool little function. It lets you customize the behavior of a Sinon stub on a per-invocation basis, so you can basically say when the stub is called with these arguments, act this way, and when the stub is called with a different set of arguments, act this other way. The returnsArg method tells Sinon to just take a particular argument that was given and return that argument out, and that is specified by the index parameter. So for example if you pass in 2, it will take the third argument and return that out, since the index is zero-based. The callsArg method is very similar to returnsArg, except instead of returning the argument at a particular index, you're actually instructing Sinon to call the argument at a particular index, implying that that argument is a function. CallsArgOn is just like callsArg except you can also pass in a context and tell Sinon to call that particular function with the given context. CallsArgWith is just like callsArg except you can also supply a set of arguments to be passed to the function that gets called. CallsArgOnWith is a marriage of the 3 different methods, which lets you call a particular argument that was passed into the stub, using a particular context and a particular set of arguments.
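These behaviors can be sketched with a hand-rolled stub. The makeStub helper below is a stand-in for sinon.stub(); the returns, throws, returnsArg, and callsArg names mirror Sinon's stub API, but the implementation here is illustrative only, not Sinon's code.

```javascript
// Hand-rolled stand-in for sinon.stub() illustrating a few of the
// behavior-control methods described above. Not Sinon's implementation.
function makeStub() {
  var behavior = function () { return undefined; };
  function stub() { return behavior.apply(this, arguments); }
  stub.returns = function (value) {
    behavior = function () { return value; };
    return stub;
  };
  stub.throws = function (err) {
    behavior = function () { throw err || new Error('stub threw'); };
    return stub;
  };
  stub.returnsArg = function (index) {
    // zero-based: returnsArg(0) returns the first argument
    behavior = function () { return arguments[index]; };
    return stub;
  };
  stub.callsArg = function (index) {
    // treat the argument at `index` as a function and invoke it
    behavior = function () { return arguments[index](); };
    return stub;
  };
  return stub;
}

var stub = makeStub();
stub.returns(42);
console.log(stub());              // 42

stub.returnsArg(2);
console.log(stub('a', 'b', 'c')); // c  (index 2 is the third argument)

var called = false;
stub.callsArg(1);
stub('ignored', function () { called = true; });
console.log(called);              // true
```

Each behavior method returns the stub itself, mirroring how Sinon lets these calls be chained.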
In addition to all of these, there is another set of 8 methods on the stub API. These additional methods are all about having your stub method call one or more callbacks that are passed to it, very similar to callsArg but just a lot more complex. Because of the complexity of these methods, they are beyond the scope of this course, so I'll only mention them here for completeness. In the next clip, we'll see a demo of stubs. Stub Demo We're going to take a look at how to use Sinon stubs. I've written a little bit of code here to use as a basis for a couple of tests. I've created 2 classes. The first is the Combat class. This is going to be our system under test, and we're going to check that the Combat class works correctly. I've also created a Character class. You'll notice that the Character class has no implementations for its 2 methods. That's because we're going to stub out any characters that we create. Now these 2 classes are part of what might be the beginning of a game that you would write in JavaScript. Let's take a brief look at the attack method on Combat so that we understand how it works. It takes in 2 parameters, an attacker and a defender, and it calls the calculateHit method on the attacker, passing in the defender. Presumably, what happens inside that method is some kind of complex calculation to determine whether or not the attacker actually hit the defender. Then, if it was successful and returns a Boolean value of true, we call the defender's takeDamage method, passing in the attacker's damage parameter. Again, this particular method might have a complex implementation that takes into account all kinds of factors. Since the test we're going to write is to determine whether or not the attack method works correctly, we don't really care about the implementations of calculateHit and takeDamage, which is why it's beneficial to stub them out.
And in this case, we've actually really done that, because we've even shown that the implementations don't matter in any way, shape, or form, so it's possible that I might not have even implemented these methods when I write the tests for the attack method. The test I'm going to use to demonstrate stubs is a test that checks that the defender should get damaged if the hit was successful. So my test is going to be: it should damage the defender if the hit is successful. All right, so the first thing I need to do is create a combat object. So I'll create one of those. I've also got to create a defender. Now the defender I'm going to stub out, so the way to do that is I use sinon.stub, creating a new character. And I need to do the same thing for an attacker. Now that I've got the attacker created, I've got to assign its damage, because that's an important part of the method. Now here's where we actually get to do something with our stub. We're going to control the behavior of the stub by telling it to return true whenever somebody calls calculateHit. The way that we do that is by using the returns method, which is now a method on the stubbed calculateHit. Now that I've got that, I can go ahead and run my combat attack, passing in the attacker and the defender, and then I can set up my expectations. So I'm going to expect that the defender's takeDamage method was called, and check that that's equal to true. Now the other thing I want to know is: did the defender's takeDamage method get called with the correct parameter?
So I can go ahead and duplicate this line, but this time, I'm going to use getCall and ask for the very first call, and I'm going to check that that one was calledWith the number 5, which matches up with the attacker's damage. Now this is everything that I need in order to check that the defender is actually damaged if the hit is successful, and this is a nice sample test with a couple of stubs doing 2 different things. We're using the stub for the attacker to determine whether or not calculateHit returns true or false. Now, again, calculateHit might be a really complex method, and setting it up to return true might be really complex, but using a stub this is very simple. I just tell the calculateHit method to return true. I'm using the defender for a different purpose. I'm actually using that stub to verify that the behavior was correct. I'm using some of the spy API that the stub also implements to get that particular call, and calling calledWith to verify that the stub was called correctly. Now let's run this in a browser and make sure that our test passes, and indeed our test is passing. So here's a simple example of using stubs to not only control behavior but also to verify it as well. Mocks Mocks are the third and final form of test double provided by SinonJS. Unlike stubs and spies, mocks contain preprogrammed expectations. Mocks will fail tests if these expectations are not met. Mocks are like stubs except the assertions are stated before instead of after. There are a few guidelines that we should keep in mind whenever using mocks. Try to only use one mock object per test. If you find yourself using more than one mock object, you're probably trying to do too much within your test. Minimize your use of expectations. Try to keep it down to just a couple of expectations per test. And lastly, minimize the use of assertions that are external to the mock.
It's more common for the mock itself to be the only assertion used within that test and therefore the only reason that that test can fail. Now, since we've looked at spies, stubs, and now mocks, it can start to get a little confusing to remember the difference between all 3 of them and how they are used, so before we get deep into the usage of mocks, let's take a look at some diagrams that show the differences in the architecture and usage between spies, stubs, and mocks. With a spy, we take an original function, and we wrap it in a new function, and then we use this new function in place of the old one. With a stub, Sinon takes a method on an object and actually replaces the reference to the original method with a new method that has nothing to do with the original one. With mocks, instead of wrapping the original methods or replacing them, you create a mock of your object, and from that mock you create expectations on the methods. Each of those expectations then causes the original method to get replaced by a mock method, and again the original implementation is discarded. Under the hood, there's a little more that goes on, but this conceptual model is complete enough to understand the basics of mocks and how they differ from spies and stubs in Sinon. Now let's take a look at the code for creating and using mocks. First, you call sinon.mock, and you pass in your object, which gives you back the mock object. From that, you can create expectations by calling mock.expects and passing in a method name. The API for mocks themselves is pretty bare. All it has for the most part is the ability to create expectations. The API for expectations is also fairly simple. The majority of the methods revolve around how many times the method was called. It's important to note that each of these methods returns the expectation so that they can easily be chained to create compound expressions.
So first we have expectation.atLeast, where we pass in a number that tells the mock to fail the test if that particular mock function is not invoked at least the given number of times. Then we have its corresponding atMost. We've got never, which tells the mock to fail the test if the mock method is ever called. Then we have once, which tells the mock to fail the test if that particular expectation isn't called exactly 1 time. Twice, of course, and thrice, and then we have exactly, if none of the above fit. Then we have withArgs, which tells the mock to fail the test if the mock method isn't called at least once with the arguments that are given, and withExactArgs, which tells it to fail the test if it's not called with these exact arguments. Then on tells the mock to fail the test if it's not called at least once with the given object as the context. And lastly, the key to it all is the verify method. You would typically call this last in your test, and it checks that all the expectations have been met, and if there are any that haven't been met, then the test fails. Next, we'll look at a simple example of using mocks. Mocks Demo Using the same code as in the stub example, we're going to write a different test. Since mocks and stubs can both be used to verify behavior, we're going to rewrite our same test, but this time we'll use a mock instead of a stub to assert that the code worked correctly. So we're giving it the same name. We're going to start out kind of similar. We're going to create a combat object. Then we'll create a defender. Now we're going to mock that defender, so I'm going to create a mock defender. Remember, I've got to grab a reference to a new object, mockDefender. This is a little different than what we saw with stubs.
With stubs, the functions are actually replaced in place, and we don't have a separate stub object that we talk to, but with mocks, we have a separate mock object that we're going to talk to. Now that we've got a handle on that mock object, we can create an expectation. We call the expects method, pass in the name of the method we want to mock, and we're going to say that we only want it to be called once, and we want it to be called withArgs of 5. Now we can create our attacker. We're going to do the same thing we did before and stub out the attacker. We still need to stub him out because we still have to control the flow of the method. We also need to set its damage, and then tell the stub to have its calculateHit return true. Finally we call combat.attack, passing in the attacker and the defender, and notice we're not passing in the mock defender, we're passing in the actual defender. Then we do the magic expectation.verify, and at that point it's going to verify that all the expectations we set, which are the once and the withArgs expectations, have been met; otherwise the test will fail. Let me close it up, and let's run this in the browser and see how it works. We can see that both of our tests are passing. Let's go back and look at our code. So here you have a really good comparison of verifying behavior using either a stub or a mock. And looking at these two, you can decide for yourself which you like better, using mocks or stubs to verify behavior.
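The expectation-then-verify flow from this demo can be imitated with a toy expectation object. This is only a conceptual sketch, assuming a made-up mockMethod helper and invented combat names; Sinon's real internals are richer:

```javascript
// Toy expectation: declared up front, and verify() throws if unmet.
function mockMethod(obj, name) {
  const exp = { min: 0, max: Infinity, args: null, calls: [] };
  obj[name] = (...args) => { exp.calls.push(args); }; // original discarded
  return {
    once()  { exp.min = 1; exp.max = 1; return this; }, // chainable
    never() { exp.max = 0; return this; },
    withArgs(...args) { exp.args = args; return this; },
    verify() {
      const n = exp.calls.length;
      if (n < exp.min || n > exp.max) throw new Error('call count not met');
      if (exp.args && !exp.calls.some(c =>
            JSON.stringify(c) === JSON.stringify(exp.args)))
        throw new Error('expected args never seen');
      return true;
    },
  };
}

const defender = { takeDamage(amount) { /* real logic would live here */ } };
const expectation = mockMethod(defender, 'takeDamage').once().withArgs(5);
defender.takeDamage(5);                // the action under test
console.log(expectation.verify());     // true: called exactly once, with 5
```

Note how the assertions (once, withArgs) are stated before the action runs, and verify is the single check at the end; that is exactly the shape of the mock-based test above.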
Probably the key takeaway and difference that you should pay attention to is that with mocks, the expectations that serve as our assertions are set before the actual action takes place, but with stubs, those expectations are set afterwards. So it's the location within the test where we verify behavior that changes based on whether we use a stub or a mock.

Matchers

Matchers in Sinon are a way to match up calls to a test double based on the arguments used, without specifying those arguments exactly. Let's say, for example, that you wanted to make sure that a certain spy was called with a number, but you didn't care which number was passed, so long as it was a number. You can use a matcher for that. Or perhaps you wanted to make sure that a certain argument was a function. A matcher can do that as well. Using Sinon matchers is pretty easy. Remember, they are used in place of arguments when you are checking to see if a test double was called correctly. Let's take a quick look at an example usage. The code here is very simple. We create a spy, call it, and then use the calledWithMatch method and pass in one or more matchers. Here I am using the number matcher, which will be true if the argument was a number. You can see in the second line that we called the spy with the value 3, which is indeed a number, so that calledWithMatch statement will return true. If instead of a 3 we had passed in the string 'hi', then calledWithMatch would have returned false. Let's look over the complete set of matchers provided by Sinon. All of the matchers are available through the Sinon.match function. For the first one, you just pass in a number as an argument. This matcher isn't as useful because it verifies that the argument matches that number exactly. The next one is string. This one's a little bit more vague because it matches as long as the string you provide is a substring of the actual argument. Let's look at this in code.
So here I've got a simple test where I've created a spy. I'm going to call that spy and pass in the string '1234'. Then I'm going to verify that that spy was called with at least a substring of what I expect. So I'm going to say spy.calledWithMatch, and here I'll call Sinon.match and pass in my match. I'm going to just give it a substring, and this is actually going to work because '1' is a substring of '1234'. And that's how Sinon.match with a string works. The next one is Sinon.match with a regular expression, which matches an argument if it was a string matching the regular expression that you give to the match. The next one is Sinon.match(object), which matches an argument if it was an object that matches up to the object that you give it. Sinon.match(function) is how you actually create a custom matcher, and we're going to go over that a little bit later. Sinon.match.any will match any argument at all. The next one is match.defined, which works as long as the argument was anything but undefined. Then Sinon.match.truthy, which matches any truthy value, and falsy, which matches any falsy value. Then we have Sinon.match.bool, which matches as long as the argument passed was a Boolean value, and Sinon.match.number, which matches if the value passed was a number. Sinon.match.string will match so long as the argument was any kind of string at all. Sinon.match.object will match as long as the argument is some kind of object, meaning not a primitive value. Sinon.match.func will match up against a function. Sinon.match.array will match against an array. Sinon.match.regexp will match against any regular expression. Sinon.match.date will match so long as the value passed was a date. Sinon.match.same will only match so long as the object that was passed as the argument is the exact same object that you give to the same matcher. Let's look at an example of this too.
I'm going to create an object and call the spy, passing in that object. Then if I call spy.calledWithMatch with Sinon.match.same and simply recreate the object, this would fail, because this object and the o object, although they're very similar, are not actually the same object. They might have the same properties with the same values, but they're not the same object. If I wanted this to pass, I'd actually have to pass in the actual o object here, and at that point it would return true. The typeOf matcher will match as long as the argument is of the given type. The type that you pass in here is a string, and there's a set of values that you can give it: undefined, null, boolean, number, string, object, function, array, regexp, or date. There are already matchers for everything in that list besides undefined and null, so this is really not a matcher that you should use very often. The next one is instanceOf(type), which requires the value to be an instance of the given type. The has matcher will match so long as the object in the argument has a property that matches the property name that you give here. In addition, you can provide a second parameter that is the value of the property. Let's take a look at that to clarify. I'm going to call my spy again, and this time I'm going to call it with an object with myProp, which has a value of 42. I'm going to call spy.calledWithMatch with Sinon.match.has and pass in the string 'myProp'. That's going to return true because the object in the argument does have a property called myProp. Additionally, I can pass in a second parameter, the 42, and that would be just a little bit more exact. These would still match.
Now even if the object here actually had more properties than this, this matcher would still return true, because it has at least the one property given. HasOwn is the same as has, except that the property it matches has to be defined on the object itself and can't be inherited. And that is the entire set of matchers that Sinon provides. As you can see from this list, there is a lot that you can do to match up arguments when you don't know exactly what the argument was, but you do know something about it. Now, in the rare case that one of the matchers we looked at just doesn't cut it, you can always create a custom matcher. This gives you the greatest amount of flexibility, and doing it is really easy. Here I have created a matcher that passes if the value given is less than 100. You can see that this is pretty simple. The second parameter is just a readable version of the name that will be used whenever an argument fails to match against this matcher. The last thing we'll cover about matchers is the ability to combine them. Let's say that you support passing in either a string or a function for the first parameter of a method. Combining matchers will let you do that. You can use the and and or functions, which are available on every matcher, to combine matchers into a compound matcher. Here's the code to implement the example of a matcher that matches either a string or a function. This concludes our look at matchers. Matchers are a very powerful feature of Sinon that you can use to match arguments of calls to test doubles when you need a somewhat fuzzy match. The list of provided matchers is quite comprehensive and can handle just about any situation. For the rare case that a situation arises that can't be handled by one of the built-in matchers, you can use a custom matcher instead. And if you ever need to create compound matchers, that is done with the and and or functions.
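As a rough sketch of how matchers, custom matchers, and the and/or combinators fit together, here is a hand-rolled illustration, not Sinon's API; matcher, isString, isFunction, and lessThan100 are all invented names:

```javascript
// A matcher is a predicate plus a readable message; and/or build
// compound matchers from two simpler ones.
function matcher(test, message) {
  return {
    test,
    message,
    and: other => matcher(v => test(v) && other.test(v),
                          message + ' and ' + other.message),
    or:  other => matcher(v => test(v) || other.test(v),
                          message + ' or ' + other.message),
  };
}

const isString    = matcher(v => typeof v === 'string',   'string');
const isFunction  = matcher(v => typeof v === 'function', 'function');
// a "custom matcher" in the sense above: any predicate you like
const lessThan100 = matcher(v => typeof v === 'number' && v < 100,
                            'lessThan100');

const stringOrFn = isString.or(isFunction);   // compound matcher
console.log(stringOrFn.test('hello'));        // true
console.log(stringOrFn.test(() => {}));       // true
console.log(stringOrFn.test(42));             // false
console.log(lessThan100.test(42));            // true
```

The message half is what a failure report would print, which is why custom matchers take a readable name as their second parameter.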
Faking Timers

Sinon allows you to fake timers, very similarly to the way that Jasmine's mock clock works. When you fake timers in Sinon, it allows you to control the clock and advance it when you wish, by as much as you wish. This works with both setTimeout and setInterval. If you're working in IE, you'll also have to include the Sinon IE library, which you can get from the Sinon site. Sinon will provide you with a clock object that you can use to control the passage of time in your tests. What's really cool about this is that it also lets you control dates, so that you can create dates with a specific timestamp. Another really cool advantage is that this works with jQuery animations. So if a jQuery call animates something over 500 msec and you advance the clock by that much, the animation will complete and fire any callbacks. Let's jump into some code and see fake timers in action.

Faking Timers Demo

Let's take a glance at the code we're going to use to see fake timers in action. I've created a little spec file here, and I've actually created a class inside the spec file. The reason I'm doing that is because this class isn't so much a system under test as it is a utility that we're going to use to highlight how fake timers work. The class myClass has a couple of methods. The first one here, doTimeout, just calls setTimeout with a 1-second timeout. It takes in a callback and calls that callback after the 1 second has passed. The second method, hide, actually uses a jQuery animation and makes the div with an ID of hideMe invisible. The reason I'm doing both of these methods is to show that fake timers work not only with timing functions like setTimeout but also with jQuery animations.
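The clock-and-tick mechanism can be sketched in a few lines. This is an illustration of the concept only, assuming a made-up createFakeClock; Sinon's useFakeTimers also handles setInterval, Date, and more:

```javascript
// A stripped-down fake clock: setTimeout calls are queued instead of
// scheduled, and tick() fires any callbacks whose time has been reached.
function createFakeClock() {
  const timers = [];
  let now = 0;
  const realSetTimeout = globalThis.setTimeout;
  globalThis.setTimeout = (fn, delay) => {
    timers.push({ fn, due: now + delay });
  };
  return {
    tick(ms) {
      now += ms;
      timers.sort((a, b) => a.due - b.due);
      // fire every queued timer that is now due, in schedule order
      while (timers.length && timers[0].due <= now) timers.shift().fn();
    },
    restore() { globalThis.setTimeout = realSetTimeout; },
  };
}

const clock = createFakeClock();
let fired = false;
setTimeout(() => { fired = true; }, 1000);
console.log(fired);   // false: no real time has passed
clock.tick(1001);     // advance the fake clock past the timeout
console.log(fired);   // true: the queued callback has fired
clock.restore();      // always restore, as the module stresses
```

This also makes clear why restoring matters: until restore runs, the global setTimeout stays hijacked for every test that follows.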
I created a barebones test with a describe called timers, and inside I've started out my code with a spy and a callback that logs out to the console. Inside my beforeEach I've made sure that that div is visible, and then I've initialized that spy to a Sinon spy that wraps around the callback. That way we can use the spy not only to watch how the callback gets called, but also to see console output from it. So the first test I want to write is just handling timeouts. Inside this test, the first thing I want to do is create a clock, which we do by calling Sinon.useFakeTimers. Now that I have my clock, I'm going to call doTimeout on that myClass object, and I'm going to pass in our spy. Now, remember, the timeout takes 1 second, so I'm going to tell the clock to go for just over a second. I do that with the tick function on the clock. Then I'm going to expect that the spy was called, and I'm also going to restore the clock using clock.restore. If we run this in the browser, we can see that the test is passing, and it is calling our callback. So, again, what's going on is that because we're using a fake timer, when we call doTimeout, it actually doesn't do anything until we tell the clock to tick. Once that tick has passed 1000 msec, that timer will fire and our spy will get called. Then of course we have to restore the clock by the end of the test, otherwise the setTimeout function will still be hijacked after the test is completed, and we always, always, always want to clean up our state after every test. We never want polluted state in between tests. Even if we know that the next test is going to use a faked-out clock, we always want to clean up our state. Now let's see what it's like to use fake timers for animations. I'm going to create another test.
This one I'm going to start out the same way, creating a clock. This time I'm going to call the hide method on that object, and again I'm going to tell the clock to tick for just over a second. Then I'm going to set an expectation that the div is now invisible. I'll do that by expecting that if we ask for the div only if it's visible, the array that we get back has a length of 0. And of course the last thing I need to do is restore the clock. Let's see how that looks in the browser. Now we've got 2 specs passing, so that new test is passing as well. The final test that I'm going to write is going to show off how we can use fake timers in order to fake out dates. Inside this test, we're going to create an initial date, and I'm going to create a clock. This time it's going to be just a little bit different, because I'm going to pass in that initial date. When I pass in the initial date to the fake timers, what it does is set the clock to that exact time. Now if I come down here and create a date, that date will actually be the value of the initial date. Let's show that by logging it to the console, and then let's tick the clock 1 second. Now let's create another date, and let's log that out too. And then of course we need to restore the clock. Let's go view that in the browser. You can see here that it's logged out those 2 different date timestamps. The first one is that initial value we created, and the second one is exactly 1000 msec later. So this is another advantage of Sinon: you can actually set dates to the values that you want.
That can be really helpful when testing things such as an event that only happens at midnight, or an event that happens every hour. So, as you can see from the code that we've written, it's really not very difficult to use fake timers in Sinon. Just always remember to restore your clock, and of course the right place to do that is in an afterEach function, not inline in the test like I've done it.

Faking the XHR

Sinon allows us to create a test double for the XMLHttpRequest object. That way, if our system under test is making Ajax calls, we can write tests around them and verify their requests and responses, or we can use the fake XHR object to provide canned responses for our tests. This way, we can inspect each request and provide canned responses to them. If we are working in IE, we will need a custom library from Sinon. There are 2 ways in Sinon to fake requests. The first is with the lower-level useFakeXMLHttpRequest method, and the second is with the higher-level fakeServer. Let's look first at the useFakeXMLHttpRequest method. Here we have the basics of using it in tests. First we create an array to hold our requests, then we actually tell Sinon to use the fake XHR, which creates an xhr object. Then we register with that xhr object to listen for whenever a request is made, and we grab that request and put it in our array. At this point, we can actually do something like make an Ajax call, and when we're all done, we restore the original XHR object. So this part of using fake XHRs is pretty simple, but these request objects that get created are where the meat is. Now, using the request object is all about what your purpose is in your test and what you're testing. If you need the fake XHR to respond a certain way, but your tests will be testing your own objects and how they deal with the result, then what you will need to know is how to make the fake xhr object respond the way that your server would.
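The inspect-then-respond contract described here can be sketched with a toy request object. fakeRequest is an invented name, and Sinon's real fake request has many more members, but the shape of the flow is the same: the request records what was sent, and nothing happens until the test explicitly responds:

```javascript
// Minimal fake-request sketch: inspectable properties plus a
// respond(status, headers, body) method that triggers the caller's callback.
function fakeRequest(method, url, onDone) {
  return {
    method,
    url,
    requestHeaders: {},
    requestBody: null,
    respond(status, headers, body) {
      this.status = status;
      this.responseHeaders = headers;
      this.responseText = body;
      onDone(status, body);   // now the "Ajax" callback fires
    },
  };
}

let received = null;
const request = fakeRequest('GET', 'some/url', (status, body) => {
  received = { status, data: JSON.parse(body) };
});

console.log(request.url);            // 'some/url' — inspectable before responding
request.respond(200,
  { 'Content-Type': 'application/json' },
  JSON.stringify({ myData: 3 }));
console.log(received.data.myData);   // 3
```

Because the response is withheld until respond is called, a test can first assert on the outgoing url, method, and body, then hand back a canned response, which is exactly the two uses the transcript describes.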
You might also want to test that your code is creating the requests correctly and sending the correct data to the server. If that is so, then you will want to be able to inspect the request object, so we will start by taking a look at how to do that. The request object has a few properties and a couple of methods on it that you can use when asserting that a particular request was made correctly. The url property gives you the URL with which the request was made. The method property gives you the HTTP method. requestHeaders is all the headers, and requestBody is the body. And status, of course, is the status code. This property and all the ones that follow aren't information about the request itself but instead about the response to that request. How you set up Sinon to respond will determine the value of this and the rest of the members of the request object. We've got the status text, the user name, the password. Then there's responseXML, which will sometimes have a value based on the headers. responseText will have the text of the response unless it's an XML-based response. And then you can also look at the response headers, either one at a time using getResponseHeader, passing in a string indicating which header you want to check, or getAllResponseHeaders, which will give you back all the headers as a string. Now, since Sinon is controlling the XHR object in the browser, just because you make a request doesn't mean that it will get responded to. If you want to send a response, you have to tell Sinon to do that. There are a few methods that you use to do that, but by far the simplest way is to just call request.respond. This method takes in 3 parameters: the status, which is an integer; the headers, which is an object; and the body or response data, which is a string. When you call this on a request object, Sinon will respond to that request. This will allow any callbacks to fire or promises on that request to complete. There are a couple of other methods that we can use.
We can set our response headers separately by calling setResponseHeaders, and we can set the body separately by calling setResponseBody. Now let's go see how to actually use this in some real code. The first thing I want to do is put in that template code that we looked at earlier. I've got to capture the xhr object, so I'm going to put that in an external variable, and I also need my array to grab the requests. Now I'm going to go into my beforeEach and initialize that xhr object by calling Sinon.useFakeXMLHttpRequest, and then I'm going to initialize that requests array as well. The next thing I need to do is listen to the onCreate event. And now that I've got that done, there's one more thing that I need to do: I need to clean up after myself, so in my afterEach I'm going to call xhr.restore. This will restore the original XHR object back to the browser. Now let's write a test to exercise this code. I'm going to do something very simple, and that's create a responseData variable here and put in a little bit of JSON. Then I'm going to call the jQuery.getJSON method, and I'm going to pass in a callback that will receive the data and just log it out to the console. Now that I've made a request, I need to tell the fake XHR to respond to that request, so I'm going to go to requests element 0, and I'm going to tell it to respond with a 200, and the headers that I'll use are a content type of application/json. I'll send in the response data, and now I can do an assert, so I'll expect that the request's url property is equal to some/url.
Let's go in here and clean this up so we're calling the right object, close it up, and now we can run this in the browser and see how it looks. Okay. You can see that our test is passing. We can also see that it's logging out an object which has a property called myData with a value of 3. Going back to our code, we can see that that's exactly what we passed in: that object with a property of myData with a value of 3. So that's how we use the fake XHR object in order to respond to requests. Again, this is kind of a low-level API. It allows us to investigate each request object, respond to each one uniquely, and see the URL on each one. The Sinon fakeServer is another API for hijacking the XHR object in a browser. The fakeServer is a higher-level function than useFakeXMLHttpRequest. This interface is more about setting up a pattern for responses and less about being able to examine specific requests and responses. As such, its usage is quite a bit simpler. We can see here in the sample code that there are only 3 main parts to it. First, we create the server. Then we define how we want it to respond to requests, and finally we restore the server so that the original XHR object is back and available to the browser. The API of the fakeServer has a few methods centered around how it should respond to different kinds of requests. We already saw the respondWith method. There is an overload of that where you get to specify the URL and the response, so when you request that URL, you'll get that response. Then we have the method, URL, and response overload, so that you can have custom responses based on method and URL. And if you want to be a little more vague about which URLs you want to set up a response to, you can use a regular expression for the URL in both of those overloads. And lastly, the respond method actually causes all these responses to fire.
Until you call this method, the requests that have been received so far won't actually be responded to. So that API is a bit simpler, but let's still go take a look at it in an actual test. Okay, now that I've got my describe set up, I'm going to create a server variable, and then in my beforeEach I take that server variable and initialize it to Sinon.fakeServer.create. Then I'm going to tell the server to respond with the following information: I want it to respond with a 200, and the headers I'm going to use are going to be the same as before, a content type of application/json, and let's go with the same return value. We'll change up the value just a tiny bit. Okay. Close that up, and then of course in our afterEach we will restore that server. And let's write our test. Here I want to just create a blank spy, then I'm going to call jQuery.getJSON again with some/url, and I'm going to pass in the spy and tell the server to respond. Then I'm going to use Sinon.assert to check that the spy was called with the object that was passed in, myProp 35. Okay. Let's go run this in the browser and see if this works. We have both of our specs passing, so indeed that new test is actually getting its spy called correctly by our server. So there are the basics of using the 2 methods available in Sinon for faking the XHR, allowing you to provide responses to your Ajax calls and to inspect your requests to make sure that they were made correctly.

Sandboxing

Sandboxing is a way in Sinon of making sure that any global objects you touch with Sinon are restored to their original state. This includes the following: spies, stubs, mocks, fake timers, and the fake XHR object.
There are a couple of ways to implement sandboxing. The first is to use the Sinon.sandbox.create method right inside your test. The other way is to use the Sinon.test method as a wrapper around your test callback. Let's jump right into the code and look at how to implement sandboxing. You can see up at the top of this file I've created a little object called myXhrWrapper. I've got 2 methods in there, get and save. Each of those methods just logs out to the console. I've had them do this for demonstration purposes so we can see the timing of when things happen. So I'm going to go into this test and create a sandbox, and I set it equal to Sinon.sandbox.create. The next thing I want to do is stub out that XhrWrapper, so I'm going to call sandbox.stub on myXhrWrapper, and then I'm going to call both get and save on it. Then I'm going to restore it by calling sandbox.restore. Now, the other thing I want to do is put in a little bit of logging, so I'm going to say console.log in the sandbox test. And remember, stubs won't actually use the underlying implementation, so when this gets stubbed, the get and save are not going to return any values. So let's go and create an afterEach method, and we'll have it log out after the test, and we'll have it call the XhrWrapper get and save as well. Remember, those are going to log out to the console. So what should happen, if this works, is that because we're sandboxing, we're creating a stub, we're calling get and save, and they're not going to actually do anything because those are stubbed-out methods. Then when we call restore, everything automatically gets unstubbed, and so in the afterEach, when we call get and save, they're actually going to log out to the console. So let's go ahead and run this and see the output. All right.
We can see down in the console that after we go into the sandbox test, nothing gets logged out until the sandbox has been restored, and now when we call get and save, they're actually logging out to the console again using their original implementations. Going back to the code, it's important to note that when we wanted to stub, we actually had to call sandbox.stub and not Sinon.stub. This links that stub to this particular sandbox, so when we call sandbox.restore, those stubs are actually restored. Now let's look at the other method that we can use in order to sandbox a test. In this one, we do the same thing, except this time I'm going to call Sinon.test and then pass in the callback function. And now, because this is actually creating the sandbox and restoring it for me, I won't need to include that code, but I will still log out to the console. And I'm going to call this.stub. So again: when I'm creating a normal Sinon stub, I call Sinon.stub. When I've created a sandbox by hand, I call sandbox.stub, and when I'm using the Sinon.test wrapper function, I call this.stub. And I'm going to call the get and save on that. All right, let's go run this one in the browser and see the output. Okay, we can see from the console output that we ran our first test, which uses the sandbox we created by hand, and then we can see that our second test functioned just like the first. Even though we called get and save on them, nothing happened because they were stub implementations, but after the sandbox got restored, our get and save went back to their original implementations. So sandboxing is a really great way, if you're going to modify any globals, to restore those globals back to their original state. Hopefully your tests don't have to deal with too many global objects, but if they do, this is the way that you can deal with them.
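The bookkeeping a sandbox does can be sketched by hand. This is a conceptual illustration, assuming an invented createSandbox; Sinon's sandboxes also track spies, mocks, timers, and the fake XHR:

```javascript
// Toy sandbox: every stub made through it is tracked, so a single
// restore() call undoes all of them at once.
function createSandbox() {
  const restores = [];
  return {
    stub(obj, method) {
      const original = obj[method];
      obj[method] = () => undefined;          // stubbed-out no-op
      restores.push(() => { obj[method] = original; });
    },
    restore() { restores.forEach(r => r()); restores.length = 0; },
  };
}

const myXhrWrapper = {
  get:  () => 'real get',
  save: () => 'real save',
};

const sandbox = createSandbox();
sandbox.stub(myXhrWrapper, 'get');
sandbox.stub(myXhrWrapper, 'save');
console.log(myXhrWrapper.get());   // undefined: stubbed, no real logic runs
sandbox.restore();                 // one call restores everything touched
console.log(myXhrWrapper.get());   // 'real get'
```

This is why the demo had to call sandbox.stub rather than Sinon.stub: only stubs created through the sandbox end up on its restore list.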
Summary

In this module, we took a look at the Sinon mocking library. We learned that Sinon has 3 kinds of objects: spies, which let us watch how a given function or method is called; stubs, which replace the implementation of a given method or function with a stubbed function that lets you determine the return value; and lastly mocks, which act like stubs with the exception that you can predefine expectations on them that fail your test if they aren't met. We learned about Sinon matchers, which can be used with just about any testing framework and give us better feedback about why a Sinon object failed an assertion than the built-in assertions do. We also saw how Sinon will let us take over the timing methods like setInterval and setTimeout so that we can essentially control the system clock in our tests. We also looked at Sinon's XHR mocking functionality, which lets us make Ajax calls and provide canned responses to those calls. We saw 2 ways to do this: the useFakeXMLHttpRequest method, which not only lets us provide responses but also lets us inspect each request, and the fakeServer, which is a higher-level interface for simply defining canned responses. And lastly, we took a look at sandboxing in Sinon, which lets us restore any global objects we may have modified during our test. Through this module, we have seen that Sinon is an extremely comprehensive mocking library for JavaScript.

Testing Utilities Introduction

Hello. I'm Joe Eames, and this is Pluralsight's course on Testing Clientside JavaScript. In this module we will be covering three utilities for testing clientside JavaScript. Now, by no means should you think for even a moment that these three utilities are the only testing utilities that you may find useful in your JavaScript testing.
Like all open source development, and especially JavaScript, there is a veritable plethora of tools and utilities available that you may find useful when testing clientside JavaScript. The reason I chose these three tools out of all the utilities out there is primarily my belief that they cover a decent range of options, and currently their popularity is growing, whereas other utilities may not be gaining as much in popularity, or are too new or not widely used enough to be considered mainstream. All these utilities revolve around one common thing: running your tests as you change your code. All of them offer one common piece of functionality. They will watch your files and rerun your tests whenever you change a file, be that a spec file or a code file. This ability is really nice when writing code because you will know at the soonest possible moment that you have broken one of your tests, and you can react to it immediately. The three utilities we will cover are Live Reload, Testacular, and Grunt. Live Reload is a simple way to make a browser reload whenever a file from a specified set of directories changes. Testacular is a testing utility that runs tests in multiple browsers. And Grunt is a build tool which has, among many features, the ability to run tests on the command line. So without further ado, let's look at our first tool.

Live Reload

Live Reload is a fantastic project which will let you develop without having to refresh your browser every time you make a change to one of your clientside files. This includes changes to your HTML files, CSS files, JavaScript files, and even clientside files that require a compile step, such as CoffeeScript, SASS, LESS, and several other compiled languages. And the list of languages that it will compile for you isn't fixed. There's a plugin system so that you can add your own compilers. Live Reload is available at livereload.com.
Currently there are two entirely different versions available for Mac and Windows. The Mac version is available on the Mac App Store and costs $9.99. The Windows version is still in alpha and is available from livereload.com. From there you can download an executable that you run to install the product. After we install Live Reload, the last step is to install browser extensions for any browsers we're going to use. There are extensions for Safari, Chrome and Firefox. Unfortunately, with Internet Explorer you can only use it with version 10 or later, and in order to use Live Reload even with the later versions of IE, you have to include a special script in your page that you can find right in the Live Reload product. To find the browser extensions to install, just go to the Live Reload page, click on the getting started article, and then from within there you can click on the use our browser extensions article. That takes you to a URL which has links to each of the browser extensions that are available. You want to hit this page from each of the browsers that you're going to use, click on the appropriate extension, and install it from there. Now, I've already installed the browser extensions for both Chrome and Firefox. But one thing that is very important when using Chrome is to set one particular setting. So I'm going to go into my tools and into my extensions in Chrome, go down to Live Reload, and make sure that allow access to file URLs is checked. By default this won't be checked. So let's get Live Reload working. We'll start by looking at the code that we're actually going to be using in our demo. Here's the test that I've created. It's a simple single test. I'm actually using an object from the system under test, which is over in that code directory that I showed you earlier. Because that's part of the code that we're running, it's going to matter when we choose folders to watch.
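As an aside on the special script IE needs: it is just a reference to the livereload.js file served by the Live Reload application, along these lines (35729 is Live Reload's default port — check the snippet shown inside the product itself, since the exact URL may differ on your setup):

```html
<script src="http://localhost:35729/livereload.js"></script>
```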
So let's configure Live Reload now. By default Live Reload looks pretty plain, but we're going to have to make one change to it. It says there's no folder selected, so we're going to go in and add a folder by clicking on the add folder icon. I'm going to go down and choose where my demo is located, and I'm going to choose the root directory rather than the directory where my HTML runner file is. That gives me access to both the system under test and the HTML file. Now that I've got that selected, I'm going to click on that folder so you can see what kind of options are available for customizing. Option one is a site URL. This lets us set a URL if we're running through a web server. For this first demo we're not going to run through a web server, we're just going to use file URLs, so we don't need to set that. Item two is a tag that we can insert in case we're using a browser that doesn't have an extension. Since both the browsers that we're using have an extension, we won't worry about that one. And item three is a checkbox to get it to compile scripts. For this demo we have no compilable scripts, so we'll leave that unchecked as well. Now let's go back to our browser and navigate to our demo file. So here I've got this one test, and it's passing. Now remember, if you look down at the bottom of Live Reload, it gives you a little bit of status. It says it's idle and that there are zero browsers connected. We're going to change that and connect this browser by going back into it and clicking this little button up here that enables Live Reload. Now that I've done that, when I look at Live Reload it says one browser connected. And now that my browser's connected, I can go ahead and make code changes and watch Live Reload work. So I'm going to change this so that the test is going to fail. And as soon as I hit save, you see that Live Reload has reloaded my browser in the background. And now let's go ahead and add Firefox into the mix.
I'm going to go back to my browser, put it on one side, and put Firefox up on the other. And now that I've got Firefox up, I've got to connect it as well by selecting this button. Now that I've got that, I can go back to the code and make another change. I click save, and it instantly updates both Firefox and Chrome. So you can see how effective this is if I'm writing tests where I want them to run in both browsers, or even if I'm doing HTML or CSS changes, I can load up my actual page and see how it changes as I change my HTML and CSS. Now let's do one more thing. Let's see how this works when we're actually using a web server. So I'm going to go back into Live Reload and delete that folder. I'm going to add a new folder pointing at that same directory, and this time I'm going to go in and set the URL to localhost:3001. I happen to have a little web server running off of that URL, and it's pointing to that same directory. So I'm going to go back to my browser and change the URL, and you can see that it's still hitting the same page even though it's using a different URL. And I'll make the same change in Firefox. Now that I've got those up, I can open up my code again and make another change. And again, they're both updated. Even though we're now running this over HTTP instead of using a file URL, it still watches the directory, looks for changes, and then updates the browsers. So that's a basic introduction to Live Reload. It is a really powerful tool and can really help you out when testing your JavaScript. Karma Testacular is a testing automation tool built by the Angular team at Google. It was built as part of the Angular project. Testacular is built on Node and allows you to watch a set of tests and have those tests run in multiple browsers, with the results sent back to a command line program that reports which browsers, if any, are failing and why.
Unlike Live Reload, Testacular won't help you if you are trying to see the visual changes in your code. It only works for tests; the browsers that it runs don't actually show you the pages. But a big advantage of Testacular is how fast it is. The results of running tests come back from those browsers very quickly. Another advantage of Testacular is that you don't need a test runner HTML file anymore. You just need your code, the tests for it, and a configuration file. Out of the box, Testacular supports QUnit, Jasmine and Mocha tests. Currently Testacular supports Travis and Jenkins for CI, but TeamCity integration is coming very soon and may even be released by the time this course is published. Now, one thing that Testacular isn't nearly as effective at is debugging. So once you find a problem, you may still want to run your code through a test runner interface to help you solve the issue. But for running your tests in multiple browsers, Testacular just can't be beat. Installing Testacular is really easy. You just use npm; the following npm command will get it installed for you. You'll have to install Node first if you haven't done so already, and make sure that you have the latest version of Node. After Testacular is installed, you may have to modify your environment a little bit. If you are on Windows, you will need to add two environment variables, one to point at the path to your Chrome executable and one to point to Firefox. You'll only need to do this if you intend to use Firefox and Chrome in your Testacular testing. For Chrome you need to create a variable named CHROME_BIN and give it the path to your Chrome executable. Here's the path to Chrome on my computer. For Firefox the variable's named FIREFOX_BIN, and here's the path to Firefox on my computer. So let's do that right now. Here's my system properties dialog. I'm going to click on environment variables, and then down in the system variables I'm going to click new.
At this point I'm going to type in CHROME_BIN, and then I'll go down here and type in the path to my Chrome. Of course, the path to Chrome on your computer may vary, so make sure that you verify where Chrome is actually installed. The last step for setup is to create a Testacular configuration file. Now, you can create one by hand, or you can copy one from Testacular's GitHub site and modify it. But by far the easiest way is to create one using Testacular itself. So let's see how that works. I've opened up a command line, and here I'm going to change directory into the place where I've got this demo. And now, to create that configuration file, I type in testacular init. This will walk me through a set of questions that will generate the configuration file. The first question is which testing framework we want to use. I'm going to go ahead and accept the default of Jasmine. The next question is which browsers we want to run our tests in. I'm going to use two: Chrome and Firefox. The next question is which files we want to test. I've got two files, spec.js and code.js. Now, this accepts wildcards, so you could type in, for example, specs/**/*.js, and this would include every JavaScript file inside of your specs directory. But for our demo we only need to watch two files, so I'm going to continue on. Next is a list of any files we want to exclude. For this demo we don't need to exclude any, but of course you can use the same wildcards and enter any files you want excluded. And the final question is whether to have Testacular watch your files. This is the setting that will have Testacular watch your files for changes and automatically rerun your tests whenever your files change. Now we're done; we've created testacular.conf.js. So let's open up that configuration file and look at it. Here's my Testacular configuration file. The first thing you may notice is that the base path looks a little funny.
So I'm going to go through here and fix this, changing it to just an empty string. This will make the base path the current directory, which is good enough for our demo. Next you can see the files setting. This includes the two files we specified, plus Jasmine and the Jasmine adapter, which are the files required in order to run these tests in Jasmine. After that we have our exclude, which you can see is empty. Then we have our reporters. By default it uses the progress reporter. This determines how the command line looks when Testacular runs and gives you feedback. The web server port and runner port are settings that Testacular uses in order to communicate with the browsers. The only reason you'll ever need to change these is if any of these ports are being used by something else on your computer. Then we've got colors set to true. That just allows us to see colors in the output in the console. Then we have our logging level, which is set to info. You can determine how verbose Testacular is by changing this value. Then we have auto watch. That's the setting that determines whether or not Testacular will watch our files for changes and rerun the tests. The next setting is the list of browsers that Testacular is going to run. It's put in Chrome and Firefox for us because we had it put those in when we created the configuration file. But I'm going to add one more: IE. Now, I've found that if I don't put IE first it tends to time out when trying to start up, so I'm going to put IE first and Chrome and Firefox last. The next setting is the timeout setting. This is the amount of time that Testacular will wait before it decides that a browser is dead and stops listening to it. Then the final setting is single run. This is set to false by default and is only used when we're doing continuous integration. If we set it to true, then it will run once, capture the tests, close down all the browsers and exit.
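Putting those settings together, the edited configuration file might look roughly like this sketch. The values shown are the generated defaults from this demo plus the edits described above; Testacular's early versions used plain global assignments like these, but your generated file and port numbers may differ:

```javascript
// testacular.conf.js -- a sketch of the generated config after our edits.
basePath = '';                 // empty string = the current directory

files = [
  JASMINE,                     // constants Testacular provides for Jasmine
  JASMINE_ADAPTER,             // and its adapter
  'code.js',                   // the system under test
  'spec.js'                    // the tests
];

exclude = [];                  // nothing excluded for this demo

reporters = ['progress'];      // how command line feedback is displayed

port = 8080;                   // web server port
runnerPort = 9100;             // runner port

colors = true;                 // colored console output
logLevel = LOG_INFO;           // how verbose Testacular is
autoWatch = true;              // rerun tests whenever a file changes

browsers = ['IE', 'Chrome', 'Firefox'];  // IE first, to avoid timeouts

captureTimeout = 10000;        // ten seconds, as set in the demo

singleRun = false;             // true is only useful for CI
```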
That's fine for CI, but since we want to watch our files and rerun our tests anytime they change, I'm going to leave that set to false. Now that I've got the settings the way that I want, I'm going to save this file and launch Testacular. I do that by typing in testacular start. Now you can see that it's opening up an Internet Explorer window and trying to connect to it. That flicker there shows that it failed to connect within the five seconds and tried again. It failed a second time, and a third time, at which point it gives up. So let's terminate Testacular. I'm going to go back into our configuration file, and we're going to set our timeout to ten seconds. Now we can go back to our command line and try it again. This time, of course, the time limit is a little bit longer, so it should have an easier time connecting to our Internet Explorer. Let's look at the command line and see what the output shows. All right, so it connected to Internet Explorer fine, but Firefox had a problem connecting. It retried and got Firefox successfully the second time. The next thing you'll notice is that the output shows it ran against IE, Chrome and Firefox, executed two tests, and was successful for both tests in all three browsers. Sometimes you may see extra lines here showing Chrome twice or Firefox twice. That will happen when it times out connecting: it opens up another instance but doesn't successfully close down the old instance. That situation's easy to fix. Just go find one of the extra browser windows that it opened up and close it down. Now let's look at our browser window. The output here shows us a big green Testacular connected message and some information about which other browsers are involved. This browser window isn't going to be very useful to look at while your tests are running, so we're going to go ahead and minimize it, because the command line is all that we want to see.
Now I'm going to take my command line and slide it over to the right, and then I'm going to take my text editor and move it over to the left. That way I can see both windows at the same time, so as I make code changes in the editor, the tests rerun in the command line. The next thing I'll do is open up my spec file in my text editor so I can make some changes. So here I've got my two tests that, as you saw, ran in each browser when it first launched. I'm going to change one of these tests so it's going to fail and see what happens. The output in IE is "expected 2 to be 3," and the same thing for Firefox and Chrome. And along the bottom it shows that each browser executed two tests and had one failure. I can fix that by changing my 3 back to a 2, rerunning, and we get three successes. Now let's do something that's only going to fail in Internet Explorer, because Internet Explorer has no console by default. Once I comment this line out and run it, we can see that IE failed whereas Chrome and Firefox succeeded. So here's an example of where being able to run your tests inside of multiple browsers can help you know when you've broken something, especially with IE and all of its unique requirements. This is a really big advantage. So that's Testacular. As you can see, it's pretty easy to set up, it's really fast, and the ability to run and show you the results of your tests so quickly as you change your code is a huge advantage as you write tests for your JavaScript. Grunt Grunt is a little utility that is getting a lot of attention right now in the JavaScript community. According to the gruntjs.com site, Grunt is a task-based command line build tool for JavaScript projects. But calling it that doesn't really do justice to how powerful Grunt is. Grunt was created by Ben Alman, whose Twitter handle is shown here. Ben is a rather prolific author of open source projects.
Grunt is built on top of Node, so you will need to have Node installed in order to run Grunt. If you don't have it installed already, Pluralsight has an excellent course on Node.js that I recommend you watch to get Node installed. As I said before, even their own description is a bit limiting. Grunt is an extremely comprehensive tool. You can think of it as an automation tool that is written in JavaScript. We are only going to look at a tiny fraction of the power of Grunt, but it is far more powerful than what we are going to see. By default, when you install Grunt it installs version 0.3, but we will be covering the current version of Grunt, which is version 0.4. Grunt has made some major changes in 0.4; the difference between 0.3 and 0.4 is rather striking. Now, don't worry that we are going to be covering an unstable version of Grunt. Version 0.4 is rather stable and is being used by a lot of different high profile projects with no real problems. But because 0.4 is still pretty new, many of the plugins for Grunt are still making the transition to 0.4. In addition, a lot of documentation on Grunt is targeted at 0.3, so keep that in mind whenever you read any articles on Grunt. As a result of this new version, it is very possible the details in what we cover may change. The main thing I expect to be different will be changes in the installation commands, which you will see in a minute. But it is possible that there will be other changes. So if you have any problems following along with the video in this course, before you give up out of frustration, be sure to check the demo files. I will keep those up to date with installation instructions that work and files that are configured correctly with the latest version of Grunt. And if you still can't get something to work, then contact me through Twitter or Pluralsight. Grunt Features As we look at Grunt, we are only going to cover a few of the things you can do with it.
These will be the features of Grunt that are directly related to testing your clientside JavaScript. The most obvious of these is the ability to actually test your code, so we will cover how to use Grunt to run tests in Jasmine, Mocha or QUnit. In addition, many problems you may have in your JavaScript come down to simple syntax errors, so we will cover how to lint your JavaScript files using Grunt. And like the other tools we are covering in this module, the big thing we want is to have our tests run whenever we change our files, so we will also cover how to watch files using Grunt. Installing Grunt The first thing we need to do is install Grunt. Since Grunt is built on Node, it is installed through npm, so we'll be using the command line to do our installation of Grunt and its plugins. Let's switch over to the command line now and get Grunt installed. One of the big changes between 0.3 and 0.4 of Grunt is that the command line part of Grunt has been separated out from the engine. In 0.4 the command line is installed globally, while the engine of Grunt is installed locally in each project in which you want to use it. So we're going to install the CLI first. We can do that with npm install grunt-cli, and we use the -g flag to make it global. Now, after this downloads and installs the Grunt CLI, we're going to go into our demo directory. This is representative of the project directory that you will have, and here is where I'm going to install Grunt locally. I do that with the command npm install grunt. Now, as I said before, the default version of Grunt is 0.3, but I want to install the current version, 0.4. In order to do that, I just add @0.4 to the command, and that's going to install the 0.4 version of Grunt. It is worth noting that the version number is only required for right now. Eventually 0.4 will become the default version of Grunt, so you won't need to put in the version number when you install.
It's also worth mentioning that knowing npm a little bit better can help you troubleshoot any installation problems you might have when installing Grunt. That's not covered in this course, but again, Pluralsight has courses on Node, and the introduction to Node course has a very in-depth section on npm, so it's definitely worth your time to watch that course. Now that we have Grunt installed, we have to create two files. The first one is a package.json file and the second one is a Gruntfile.js file. For the package.json file, you can create one by hand or you can use npm to create it for you. We'll use npm. We do that with npm init, and that's going to ask us a series of questions. I'm just going to hit enter for all of these and accept the defaults, except for author, where I'll type in my name. I type in yes, and that's created my package.json file for me. Since none of these settings really matter for us, you can pretty much answer anything you want to these questions. Now, the second file we need to create is a Gruntfile.js. We're going to create that one by hand. So let's look at an initial version of that file. This file is ready to be filled in with the details that are needed for our project. You can create one exactly like this by hand, or you can grab the sample one from the demo files. And now that we've got Grunt installed, we're ready to move on and do some testing. Testing with Grunt Testing with Grunt gives us a simple way to run our tests, much like refreshing a page, except it lets us run our tests on the command line. With Grunt you can test Jasmine, QUnit and Mocha, and each of them is rather easy to do. So let's start with Jasmine. First off, I have a few files already created. The first is the code file that represents our system under test. That's here in this code directory. Then I have a test directory, and inside there I've created a directory for each of our test frameworks. So let's look at just the Jasmine directory for now.
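For reference, the initial, empty Gruntfile.js mentioned above might look like this Grunt 0.4-style skeleton:

```javascript
// Gruntfile.js -- an empty Grunt 0.4 wrapper, ready to be filled in.
module.exports = function (grunt) {

  // Task configuration sections (jasmine, qunit, etc.) will go here.
  grunt.initConfig({});

  // grunt.loadNpmTasks('...') calls for each plugin will go here.

  // The tasks to run when you type plain `grunt` on the command line.
  grunt.registerTask('default', []);
};
```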
You can see that I only have one file in here. That's because with Grunt, you don't need any other files to run Jasmine tests. Naturally, you can still have your test runner file and all the Jasmine files, but they aren't required to run a Jasmine test with Grunt. Next we'll need to install the Jasmine plugin for Grunt. We're going to go back to the CLI to do that. The command to install the Jasmine plugin for Grunt is npm install grunt-contrib-jasmine. Now, we're going to add an optional flag at the end, --save-dev. That's going to add it to our dependencies inside of our package.json file. That one took a few extra seconds because, in addition to the Jasmine plugin, it also installed PhantomJS, which is how it's going to run those tests. And now that we've got Jasmine installed, all we need to do is configure our Gruntfile and we're ready to go. To configure our Gruntfile for Jasmine, the first thing we're going to do is go up here and add a load npm tasks command. That's done with grunt.loadNpmTasks, and we're going to pass in 'grunt-contrib-jasmine'. The next thing we do is go up into the init config and add a jasmine section. Within that it's got one section called pivotal, and within that we give it two parameters. The first one is src, and here's where our system under test goes; remember, that's in the code directory, so I'm just going to use code/*.js. Then there's options, the second parameter, which has one setting within it, specs, and that's going to be tests/jasmine/tests.js. Again, I could use wildcards in here if I had more than one test file. And I'll close that up. Now, I'm going to do one more thing before I'm done. I'm going to go down here into the default task and set jasmine as the default task. I'll show you what the benefit of this is in just a second. And now we can save our file.
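Assembled, the Jasmine-related pieces of the Gruntfile look roughly like this. The paths and the pivotal target name follow the demo; adjust them to your own layout:

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    jasmine: {
      pivotal: {
        src: 'code/*.js',                  // the system under test
        options: {
          specs: 'tests/jasmine/tests.js'  // wildcards work here too
        }
      }
    }
  });

  // Load the plugin that provides the jasmine task.
  grunt.loadNpmTasks('grunt-contrib-jasmine');

  // Make plain `grunt` run the Jasmine tests.
  grunt.registerTask('default', ['jasmine']);
};
```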
Go back to our command line, and Grunt will actually work. So let's type in grunt jasmine, and it's going to run our Jasmine task for us. We can see it ran one spec with no failures. Now, since I added jasmine to the default tasks, I can just type in grunt and it'll still run that Jasmine task for me. And that's all there is to setting up Jasmine to run with Grunt. Next I want to install and run QUnit. So the first thing we need to do is install the QUnit task for Grunt. We do that just like Jasmine, with npm install, and the plugin for QUnit is called grunt-contrib-qunit. Now, unlike Jasmine, QUnit hasn't totally caught up to 0.4 yet, so we're going to need to install a particular version of the QUnit plugin. I'm going to add @~0.1.1, and that will make sure that the right version gets installed. Then I need to add the parameter to save it into the package.json file, and I'll let that install. That should take just a moment as well. So it installed another copy of PhantomJS, and now we can go into our Gruntfile and configure that. We'll go up and add an npm task for grunt-contrib-qunit. Then we can go up here and add a qunit section. This one's going to be a lot simpler than Jasmine was. All we need is an all, and that's an array with a list of the HTML runner files that we want to use. That will be tests/qunit/startup.html, and then we close that up. Let's go look at the contents of the qunit directory. You can see here I've got more than just the test file. It's actually got the HTML runner file, QUnit itself and CSS files. With these files inside this directory, I can actually open that HTML file and it will run my tests for me, in addition to running them on the command line with Grunt. So let's go back to our command line and type in grunt qunit, and we can see it's got one assertion passed and no errors. Now let's go back in and make one more modification. In addition to Jasmine, let's also have the default task run QUnit.
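The QUnit additions to the Gruntfile amount to just this fragment (the runner file name follows the demo's layout):

```javascript
// Inside grunt.initConfig({ ... }):
qunit: {
  all: ['tests/qunit/startup.html']   // list of HTML runner files to execute
}

// And alongside the other plugin registrations:
// grunt.loadNpmTasks('grunt-contrib-qunit');
```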
We'll go back, and this time type in just grunt. You can see that it ran both our Jasmine tests and our QUnit tests. And there you have running QUnit with Grunt. The final testing framework we're going to configure with Grunt is Mocha. And of course we have to install that plugin as well. So we do npm install grunt-mocha. Now, this is a little bit different from the other two plugins, because it's not part of the grunt-contrib project; it's its own project. And of course we want to add our --save-dev and go ahead and install it. It'll take just a second as well, and just like the other two, this one installs a copy of PhantomJS. Okay, now that we've got that installed, we can go and configure it in our Gruntfile. We'll do the same thing that we did before, but we'll change this to mocha, and then we'll go up and add a mocha section. This one has an all just like qunit, and that takes a src, which is going to be an array of files. I'm just going to put in the one file, which will be tests/mocha/index.html, my HTML runner file. Then it takes one more piece, which is options, and here you have to specify run: true. And close that up. Now, there's one other change that I've got to make. So let's go and look at the directory structure and look at our Mocha files. Here we've got the same Mocha files we've seen previously in this module, but there's one change that I had to make. I had to take the index.html and make a modification to it. So let's look at that and see what we did. Down here on line 16 you can see I've got: if navigator.userAgent.indexOf of PhantomJS is less than zero, then mocha.run. What this does is prevent mocha.run from being called here if the page is being run through PhantomJS. The reason for that is that we need to specify in our Gruntfile configuration that we want Mocha to run there, and not here.
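The Mocha section of the Gruntfile, as described above, looks roughly like this fragment:

```javascript
// Inside grunt.initConfig({ ... }):
mocha: {
  all: {
    src: ['tests/mocha/index.html'],  // HTML runner files
    options: {
      run: true   // have grunt-mocha trigger mocha.run() inside PhantomJS
    }
  }
}

// And alongside the other plugin registrations:
// grunt.loadNpmTasks('grunt-mocha');
```

And the guard on line 16 of the runner, spelled out, is simply: `if (navigator.userAgent.indexOf('PhantomJS') < 0) { mocha.run(); }` — the page calls mocha.run() itself only in a real browser, leaving PhantomJS runs to the plugin.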
If we don't wrap our mocha.run with this if statement and then put run: true in the configuration for Mocha in our Gruntfile, then PhantomJS will actually time out on us whenever we run our Mocha tests. So now that I've got that, I'm going to go back to the command line and run grunt mocha. We can see that one assertion passed. Now let's make this a default task as well by adding mocha to the list, and go back and just run grunt. And again, it's going to run all three testing frameworks; all of them passed, no failures. So there we've seen how to configure Grunt to run Mocha, Jasmine and QUnit tests. Linting with JSHint Linting has become very popular lately with JavaScript. Because of the nature of the language, most developers are finding it helpful to have a linter check their code. Linting will find all kinds of problems in your JavaScript files, including but not limited to: syntax errors, like forgetting a closing bracket; style issues, like camel case versus underscores in identifiers; and code smells, like using double equals instead of triple equals. There are a couple of flavors of linters, and for this course we are going to use JSHint, which is a slightly more forgiving linter. The first thing we need to do is install JSHint. Of course we do that on the command line, just like we installed all the other plugins. We'll do that with npm install grunt-contrib-jshint, add our --save-dev option, and go ahead and install it. That should take just a second. Okay, now that that's installed, we can go ahead and configure it. We'll go to our Gruntfile and add an npm task for grunt-contrib-jshint. Then we can add our section for jshint. This section, like several of the others, takes an all, which is an array of files, and wildcards can be used here. So I'm going to put in tests/qunit/tests.js, which will make sure that our test file for QUnit gets linted.
I also want to lint our Mocha file and our Jasmine file, and finally our system under test, which is going to be code/**/*.js. That will ensure that any other files we add in that same directory will also get linted. Now that I've got that configured, I can go ahead and run it from the command line. So I'm going to call grunt jshint, and we can see that it's linted all four of those files and they're all lint free. So let's see what happens if we go into one of those files and create a problem. Let's go into our Mocha test file, open it up, and take the semicolon off this line. Go back to our command line and lint again, and now we actually get an error saying we've got a missing semicolon. Now, there are some options you can configure JSHint with. As I said before, it's very versatile and a little forgiving, so there are a few things that aren't turned on by default that we might want to turn on. In order to do that, I have to go and create a new section called options. That's an object, and we'll just demo the curly option, so we'll set curly to true. What curly does is make sure that you have curly braces wrapping all your blocks. For example, with if statements or while statements, it requires that you have curly braces around the blocks that apply to that if. So let's turn that on. Then we'll go back to our tests, and we'll say if 3 equals 3, then we want to run those tests. Close this up, and now if we go and run the linter, we can see that the error we get is "expected a curly brace and instead saw the word expect." We can fix that by going back to our test and adding curly braces around this block. Go back and run it again, and now everything's just fine. Let's undo that change. So that's how you can configure JSHint to lint your files.
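Collected together, the JSHint configuration from this demo looks roughly like this. The exact spec file names are assumptions based on the directory layout described; substitute your own:

```javascript
// Inside grunt.initConfig({ ... }):
jshint: {
  all: [
    'tests/qunit/tests.js',     // lint each framework's spec file...
    'tests/mocha/tests.js',
    'tests/jasmine/tests.js',
    'code/**/*.js'              // ...and the system under test
  ],
  options: {
    curly: true                 // require curly braces around all blocks
  }
}

// And alongside the other plugin registrations:
// grunt.loadNpmTasks('grunt-contrib-jshint');
```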
Watching Files with Grunt Now we're going to tie everything that we have done so far together by getting Grunt to watch our files; whenever any of them changes, we are going to lint and run our unit tests. Like all the other tasks we've added to Grunt so far, we're going to have to install Grunt watch. We do that with npm install grunt-contrib-watch with the --save-dev option, and we let that install. Okay, now that we've got that installed, we can go ahead and configure it. The first thing, of course, is to go down and add a task for it, and then we can come up here and add a config section. Now, watch just takes two options. The first one is files, and that's an array of files we want to watch, so we're going to use tests/**/*.js to watch every JS file inside of our tests directory, and then the same thing with code. The second option it takes is tasks, and this is an array of tasks that we actually want to run whenever one of the files changes. We want to run first our lint, which is jshint, then jasmine, then qunit, and finally mocha. Okay, let's go ahead and run that on the command line and see what we get. It's very simple: I just type in grunt watch, and now you can see that watching is on and it's just sitting there waiting. So let's go and make a change. I'm just going to do something simple and add a space here. Go back, and you can see that as soon as I did that, it kicked off running all the tasks we have configured to run whenever watch triggers. So now if I'm developing and writing tests, I can just take this command line, stick it off in the corner of my screen, and every time I make a change to one of my code files or one of my test files, it's going to rerun my tests and let me know if I broke anything.
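The watch configuration that ties everything together is just this fragment:

```javascript
// Inside grunt.initConfig({ ... }):
watch: {
  files: ['tests/**/*.js', 'code/**/*.js'],      // every spec and code file
  tasks: ['jshint', 'jasmine', 'qunit', 'mocha'] // run these on every change
}

// And alongside the other plugin registrations:
// grunt.loadNpmTasks('grunt-contrib-watch');
```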
So just like the other two utilities we looked at, this allows me to keep something going all the time to let me know as soon as I do anything that breaks one of my tests. Summary In this module we took a look at three JavaScript testing utilities: Live Reload, Testacular and Grunt. Each of these three has its own strengths when it comes to testing JavaScript code. Live Reload is the only one that lets you actually see your page, so in addition to unit testing, you can use it for visual changes to your project. Testacular is solely dedicated to testing, but that makes it the simplest one to set up and configure, and its speed is unmatched. Grunt is by far the most versatile of the tools, but it is quite a bit more difficult to install and configure than the others. All of these are fantastic tools, so when you're writing and testing your JavaScript, do yourself a favor and use some tools that will ease the burden of running your tests and seeing the results. Course author Joe Eames Joe has been a web developer for the last 13 of his 16+ years as a professional developer. He has specialized in front end and middle tier development. Although his greatest love is writing... Course info Level: Intermediate Duration: 4h 50m Released: 12 Feb 2013