Building Command Line Applications in Node.js by Paul O'Fallon Most of us use command line applications in our jobs every day. This course will introduce you to the basics of building a CLI in Node.js, including managing configuration, interacting with the user, and distributing your finished product. Course Overview Hi everyone, my name is Paul O'Fallon, and welcome to my course Building Command Line Applications in Node.js. I've been working in Node for more than 8 years and I'm really excited to spend some time focusing on command line applications. Most of us use command line applications every day, some of which have likely been written in Node.js. In this course, we're going to demystify the process of building a CLI in Node. We'll start by covering how to manage configuration information, including storing sensitive data. We'll then move to interacting with the user. We'll wrap up by publishing a finished product to both npm and Docker Hub. By the end of this course, you should have a good foundation for building your own Node.js-based CLI. Before beginning this course, you should be familiar with Node and JavaScript in general. I hope you'll join me on this journey to learn how to create your own CLI with the Building Command Line Applications in Node.js course at Pluralsight. Setting up Your CLI Project A Brief Overview of Command Line Applications Hello, my name is Paul O'Fallon, and I'd like to welcome you to the course Building Command Line Applications in Node.js. In this module, we'll discuss the fundamentals of command line applications and why you might want to build one yourself.
Then we'll dive right in and get started with the sample project we'll build together throughout the course. So what is a command line application? In the simplest terms, it's an application you invoke from a command line, of course. You might invoke these from a command prompt on Windows or from the prompt of a Unix shell. These programs are also called commands, a command line interface, or just a CLI. In this course, I'll most often refer to them as CLIs, plus it fits better on the slides. Command line applications are at the heart of the Unix philosophy of doing one thing and doing it well. They're not large, complicated monoliths that do everything under the sun. In a way, command line apps were microservices before microservices were cool. They're simple, concise applications that can be stitched together in a variety of ways to do new and interesting things. Some of these common execution patterns include combining multiple CLIs in a single batch file or shell script. You can also use something like cron on Mac or Linux, or schtasks on Windows, to run a command line application on a schedule. It's also very common to take the output of one command and use it as the input to another command; this is known as piping. Finally, you can, of course, redirect the output of a command to a file. Likewise, you can use a file as input to a command line application. I can almost guarantee that you're already using command line applications today. They are very common in developer tools. If you're a front-end developer, you may use tools like Gulp, Grunt, Yeoman, or Webpack. A server-side developer might use Maven, Gradle, or the dotnet command. Most cloud vendors provide CLIs for interacting with their infrastructure as well. This includes the AWS, Azure, and Heroku CLIs. Finally, there are also command line desktop tools, like Homebrew on a Mac or Chocolatey for Windows.
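These patterns are quick to try at any prompt. Here's a shell sketch of piping and redirection; the log lines and file name are invented for illustration:

```shell
# Piping: feed the output of one command into another,
# then redirect the result to a file
printf 'error: disk full\ninfo: all good\nerror: timeout\n' \
  | grep '^error' > errors.txt

# Redirection the other way: use the file as input to a command
sort < errors.txt
```

Batch files, shell scripts, and cron entries are largely these same one-liners saved and scheduled.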
Even with all that, if you're still one of the few who hasn't yet leveraged your command line, now is a great time to start. The reach of the CLI can be near or far. The command you run may just do its work directly on your computer, for example, a Java compiler or a bundler like Webpack. Other CLIs may reach out to a database or repository within your organization. An example of this might be a source control command that checks code out of a company repository. However, many of the most interesting and useful CLIs reach out to APIs or other services over the internet. And finally, of course, a CLI may do some combination of all three. So, we've talked a lot about existing CLIs, but why consider building your own? Well, there may be a few reasons. First, they can aid in developer productivity. Maybe there's a website that you, or others in your organization, interact with frequently to perform a very routine task. If this website has an API, you can write a CLI to help make that interaction more efficient and easily repeatable. Likewise, you may need to integrate two systems together. For example, let's say you want to post errors from your build server to your favorite chat program. If there's not an explicit integration between those two, but the build system can run commands, then having a command to post messages to your chat program would be a great way to integrate those two systems. And who knows, maybe you have an idea for the next great Webpack or Maven or Homebrew. Regardless of the reason, this course will equip you to turn those opportunities into reality. Throughout this course, we're going to be implementing a command line application from scratch. Our CLI will be called twine and it will be a tool for interacting with Twitter from the command line. There are some Twitter command line tools already out there, but this makes for a reasonable real-world example.
You might use a tool like twine if you have a common task of searching Twitter for tweets about your company. You might also use it to integrate with a monitoring tool, to post system availability or outage notifications to a status-oriented Twitter account. We'll only be scratching the surface, but you can read up on the details of Twitter's API at developer.twitter.com. Demo: Registering Your Sample Application with Twitter Before we can call any of Twitter's APIs, we need to get set up on their platform. First, we'll look at creating a Twitter account, if you don't have one already, and then we'll define an application. It's this application which will allow us to retrieve an API key and secret. Okay, let's dive right in. If you already have a Twitter account, you don't need to create a new one just for this course. I'm going to use my @paulofallon account for this course, so I don't need to create a new one. Before we define our application, let's swing by developer.twitter.com. We're going to go over to More and choose Pricing. Scrolling down, we see the list of features we have access to for free. We'll only be leveraging these free features during this course. So any API calls we make will come from this light blue column. Okay, now it's time to configure our application and retrieve our credentials. For that, we'll visit apps.twitter.com. Clicking on the Create New App button, we're taken to a page where we can enter the information about our application. The name of your application must be unique across all of Twitter. Unfortunately, the name twine was already taken, so I chose PSTwine. You'll need to pick something different that is unique to you. Next, we'll enter a description and then a website. If you notice the instructions here, we can use a placeholder for now, which is what I just did. This website doesn't actually exist. We don't need a callback URL since we're building a command line application, not a website.
Finally, you should examine the Twitter Developer Agreement, click the checkbox, and then the Submit button. Great, now we've successfully defined our application within Twitter. Scrolling down here, we see our access level, which is both read and write. And that's what we want. Next is our API key. More on that in a minute. Towards the bottom, we see a series of URLs. These are the URLs we'll be invoking from our command line application to retrieve a token. We'll need this token to call the Twitter API. Clicking on the Settings tab, we see much of the same information we supplied when we created our application. Nothing really interesting here. On the Keys and Access Tokens tab, things get interesting. Here, we see both our API key and secret. Even though I'm showing you mine here in the browser, you should protect your key and secret; treat them like you would a username and password. My credentials here will be deactivated before this course is published. Scrolling down, you'll see that you can regenerate your key and secret if you think they've been compromised. And you can also generate an access token. Remember those URLs we saw on the previous tab? We'll be using those to create our tokens, so we don't need to do that here. Finally, on the Permissions tab, we can see our read and write permission, and since we don't need any special permissions, there's nothing to change here. That's it. Our application has been created and our key and secret are ready to use. We'll come back here to get those credentials and use them in a later module. What Makes a Node.js Project a Command Line Application? If you have experience writing Node.js projects, those have most likely been web applications, maybe using the Express framework or deployed as an AWS Lambda. So what makes a Node.js project a CLI? First, there's a special package.json property called bin.
Values of this property are treated as executable scripts and, when the package is installed globally, are placed into the path so they can be run from any directory. The next item is more of a convention than it is a hard and fast rule. Many Node.js projects with complex command line requirements store these scripts in a bin directory within the project. It's not strictly necessary. You can point the package.json bin property to any script within your project. However, it's a nice way to let the structure of your project segment the scripts which are meant to be executed directly. When we put these two things together, what will we have? Well, we'll have a command twine that we can execute from anywhere on our machine. Let's go ahead and get started with our project and set that up. Demo: Initializing Your Node.js Project Now it's time to start our project. First, we'll use npm to initialize it. Then we'll make the necessary configuration changes to allow us to use it as a CLI. Let's get started. I'm using GitHub to manage the code for this project. I'll create a new project called twine, give it a description, and for now, since I'm still building this course, I'll make the repository private. By the time you're watching this course, the repository will be public. We'll let GitHub initialize the project with a README and a gitignore file configured for Node. Finally, we'll add an MIT license to the project, and we're good to go. Notice the URL here at the top of the screen. This is the URL where you can find the source code that goes along with this course. Now, I'll clone the project to my desktop. You may prefer using the Git command line, but I'm a big fan of GitHub Desktop, so I'll use that to do the cloning. Great. GitHub Desktop has placed this project in Documents\GitHub\twine. From here, we'll run the npm init command.
This command helps bootstrap a Node-based project by asking you a series of questions and generating a package.json file based on your answers. For now, we'll take the default for almost everything, just changing the license to MIT. Okay, now let's fire up Visual Studio Code. You can launch it from the Start menu, or if you've added it to your path, you can launch it with just code and dot for the current directory. Just as we saw in the previous slides, we'll start by following convention and creating a bin directory to hold our actual command. We'll name it twine.js and, for now, just have it print Hello world! to the console. If we go back to the command prompt and use Node to run our script, we'll see Hello world! However, that's not really what we want. We don't want everyone using our CLI to have to prepend node every time, nor do we want them to have to specify the actual name of a JavaScript file. To fix that, let's take a quick look at that bin property in package.json. Here we see that the value of the bin property can be just a single file, in which case the name of the module will be put in the path and, when invoked, execute this script. Alternately, you can provide an object as the value for bin. This allows you to specify multiple commands, each of which would invoke a script. We'll opt for this latter approach. Note, too, that in all these examples, there's no mention of a bin directory. Remember, that's not required, it's just a convention. Back in Visual Studio Code, we'll open up our package.json file that was created when we ran npm init. We'll add a bin property, define the command twine, and point it to our bin/twine.js. Note that even though we're making these changes on Windows, we're using forward slashes; that's fine, npm will take care of those distinctions for us. Okay, now that we have identified our command twine, we're going to use another npm feature called npm link.
This installs our project so that it can be invoked directly from the command line on our machine. However, instead of being installed directly from the npm repository, the install is linked to our project. This means that we can continue to build out our project, while still being able to invoke it directly from the command prompt. Okay, so let's try to run our CLI. Uh-oh, that doesn't look right. Why did we get an error? Hmm, let's go back to the npm link page and examine it more carefully. Look at that last statement: please make sure that your files referenced in bin start with #!/usr/bin/env node. If you've worked with scripts on Linux or on Mac, this shebang syntax will look familiar to you, and even more so, will look like it doesn't belong on Windows. In almost all cases you'd be right, but hey, let's give it a try here and see. We'll run npm unlink to undo our previous failed attempt and go back to twine.js and add the shebang syntax to the top. Okay, now let's try to link it again. What do you know, that actually fixed it. Let's go see what npm actually created when we ran npm link. Looking in my appdata\roaming\npm directory, we see two files that start with twine. The first one is a shell script; the second, twine.cmd, is a batch file designed for Windows. If we check out the contents of this file, we can see that npm has created a command file that will invoke Node and pass it our twine.js file. Cool. Just to prove to ourselves that this isn't a copy of our file, but our actual project file, let's change what we print to the console. Now if we go back to the command prompt and run twine, nice, we see our change. Now just to prove it to ourselves again that this isn't limited to just our project directory, let's go to the root of the C drive and run it there. Same thing. And actually, if you were building the CLI just for your own personal use, this might be good enough, just npm link and you can run it from anywhere on your system.
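At this point the whole CLI boils down to two artifacts: a bin entry in package.json mapping the command name to the script, for example "bin": { "twine": "bin/twine.js" }, and the script itself with the shebang line on top. A minimal sketch of bin/twine.js, using the greeting we switched to during the demo:

```javascript
#!/usr/bin/env node

// bin/twine.js: on Mac and Linux the shebang line above tells the shell to
// run this file with node; on Windows, npm generates a twine.cmd wrapper
// that invokes node for us, but npm link still insists on the shebang.
const greeting = 'Hello Pluralsight!'
console.log(greeting)
```

npm link reads that bin mapping and drops a twine shell script (and twine.cmd on Windows) into npm's global bin directory, which is why the command then works from any folder.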
However, we want others to be able to run our app as well, so we won't stop there. Now that we have things all set up, let's commit our changes back to GitHub. Refreshing the site, we see our changes. It may seem unusual to be doing most of our work for this course on Windows. If we have a CLI that can run on Windows, then the others, Mac and Linux, should be a no-brainer. However, let's verify that. Switching over to a Mac, we'll revisit our GitHub.com project and clone the source code to our local computer. Next, we'll cd into our project directory and run npm install. Next, we'll run the same npm link command we just ran on Windows. Now let's try twine; yep, it works with our Hello Pluralsight! Just like before, let's get out of the project directory and try it again. Nice, it works there too. We'll continue to pop back over to the Mac as we make each round of changes, just to be sure everything works there as well. Summary Okay, that's a good start. Let's wrap up this module. We started out by examining some common CLI execution patterns, such as piping to other commands or redirecting to a file. We then discussed why you might want to build your own CLI; maybe you have the next great idea or just want to improve a process within your own organization. We logged into Twitter and defined our application, which gave us our API key and secret. Finally, we initialized our Node.js project and linked it so we can continue to run it from the command line as we build it out. In the next module, we'll look at how to manage the configuration of our CLI and how to prompt the user for this information. We'll also start writing some tests. Stay tuned. Managing Configuration A Brief Overview of Configuration Information Hello. Welcome to module 2 of our course. My name is Paul O'Fallon and in this module, we'll discuss how to manage the configuration of your CLI.
We'll start by discussing the types of information you might want to store and where it's typically located on a user's computer. Then we'll look at one way to accomplish this in Node.js. Because not all configuration data is created equal, we'll also examine the security implications of storing sensitive data. And of course, we'll fold all of this into our sample application. Finally, because all good projects need tests, we'll add some of those as well. Let's get started. There are several types of configuration information you may come across when building your CLI project. First are credentials, which could be used to access some remote resource, like, in our case, the Twitter API. Rather than usernames and passwords, these often take the form of API keys or access tokens. Additionally, you may wish to store user preference information as well, allowing the user to customize their application experience. Finally, you may have some of your own settings, unbeknownst to the end user, that you wish to persist between invocations of the CLI. Since this configuration data is just written and read by the application itself, you could store it pretty much however you wish. That being said, there are some file formats that are commonly used for storing an application's configuration data. The first is the INI file format. To be honest, I've always associated this with Windows applications, but it's actually quite common. Git, Python, and Amazon Web Services all use this format for managing some of their CLI configuration. Next, and arguably most popular, is JSON. Visual Studio Code uses this format and even provides a way for the end user to directly override specific JSON properties with their own configuration values. The Yeoman scaffolding tool also uses this file format. Another format that's still alive and kicking is good old XML. You may see this used in both Maven and NuGet configurations. 
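To make the first of those formats concrete, here's what a small INI-style configuration can look like; the file name, sections, and keys below are invented for illustration and are not what our app will actually use:

```ini
; settings.ini - a hypothetical INI-format configuration file
[credentials]
api_key = abc123

[preferences]
color_output = true
```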
No matter what file format, this information is commonly scoped to the individual user. To accomplish this, many CLIs, and applications in general, will store their configuration information in the user's home directory. Taking a cue from Unix, these configuration files are often prefixed with a dot or period, or they're stored in a directory prefixed by a dot. This is because the ls list command on Unix will, by default, skip files and directories that start with a dot. Applications can safely create files using this naming scheme without cluttering up the user's view of the file system. Here is an example showing my home directory on Windows, which has many config files and directories that start with a dot. In fact, there's still even more. This screenshot is cut off at .m2, which is used by Maven. Okay, given all that, how can we manage our configuration data in Node.js? We'll need a way to read and write this information. Turning to npm, we discover a module called configstore. In fact, what you're looking at is a screenshot of that module from npmjs.org. This is probably a good time to take a minute and discuss things to look for when searching for modules on npm. If we look at the version and release count, we see that it's already on version 3.1.1 and has produced 26 releases. These numbers are pretty good. Even better, this is a module produced by the Yeoman project, a prominent node-based CLI. Moving down to the stats, this module has been downloaded a crazy number of times. That is a good indicator of wide usage and general acceptance within the Node community. Only two open issues and no open pull requests are another good indicator of a relatively stable and active project. Not every module you come across will have these kinds of numbers, that's okay, but I'll often consult these stats when picking between two otherwise similar modules. I'd rather use the one that's more popular and better supported. 
Focusing on the usage portion of the module's readme, we can see that configstore exposes a very simple interface, just get, set, and delete. It should be very simple to use. In order to have some configuration data to write down, we're going to need to ask the user for input. Going back to npmjs.org, we find another module called inquirer. This one has even better stats than before, more releases and more downloads, well, and maybe a few too many open issues, but that's okay. The documentation for inquirer shows that a simple prompt function will allow us to solicit the user for information, which will resolve a promise with the results. Nice. One last thing before we jump into our first demo. Since I'm using Node version 8, there are a few new features we have available to our twine application. One of these is ES6 classes. In fact, the first code we write will be a class. The second new feature, which I'm really excited about, represents an evolution in Node programming and JavaScript in general. Previously we were limited to callbacks from our asynchronous functions, which then evolved into returning promises. Node version 8 supports the async/await syntax for handling promises, which should give us very concise and easy-to-read code. You'll see both promises and async/await show up in our sample application. Okay, enough slides, let's get to the code. Demo: Storing the Twitter API Key and Secret in Your Project In this demo, we're going to introduce a Credential Manager module to our twine application. This will retrieve our API key and secret from our configuration. And if they don't exist, it will prompt the user for these values. If you're just joining us and didn't start with module 1, that's okay, you can visit the project source repository on GitHub, switch to the module-2-start branch and clone the project to your local computer. This will bring you up to speed with everything we've done so far in the course.
Okay, first, we'll create a new directory called lib, which will hold most of the code in our CLI. We'll then create our first file here, called credential-manager.js. Uh-oh, if you're using Visual Studio Code with ESLint running, like I am, you may see an error similar to mine below. Before we start writing our code, let's make ESLint happy and get this configured correctly. We can do that by running the eslint command described here, but first let's add it to our project. We can run npm install to add ESLint, while being sure to add --save-dev so that it's only added as a development dependency. Funny enough, ESLint is its own node-based CLI with a bin property in its package.json, just like we discussed in the first module. Because of this, when we added it to our project, npm created an eslint command in our project's node_modules\.bin directory. We can run our eslint init command directly from there. Of course, if you have ESLint installed globally on your system, you can run it without specifying the path, but I prefer to keep everything located within the project whenever possible. I don't have a specific opinion about coding style, so I'll just pick standard and JSON as the file format. Okay great, we have our ESLint configuration. In my experience, you need to restart Visual Studio Code to pick up these changes, so I'll do that now. Nice. Now ESLint is up and running. Revisiting our original twine.js, it's already picked up some issues. It doesn't like our double quotes, so we'll replace them with single quotes. Next, we'll add a new line to the end of the file. Okay, great, everybody's happy. Going back to our credential-manager, we'll start by importing the Configstore module. With that in place, we'll create our CredentialManager class by giving it a constructor that takes a name parameter. We'll use this name when instantiating our Configstore instance.
Configstore uses this name under the covers to name the JSON file that holds our configuration information. We'll export our newly created class and, there you have it, the very beginning of our first module. Before we get too far though, let's not forget to add Configstore to our project. We'll npm install it now. Great. The first method we'll add to our class is getKeyAndSecret. This will be responsible for returning the key and secret and prompting the user if we don't already have them. We'll start by looking for the apiKey in the Configstore. If we find it, then we'll also fetch the apiSecret and return both. Notice that we're returning them as an array. We'll see why in a minute. Now, we need to take care of the case where we don't find the apiKey in the Configstore. For that, we'll introduce inquirer. So let's install it now. Good. We can call the inquirer prompt function and pass it an array of questions we'd like it to ask the end user. Each question takes a type, to indicate the type of input being solicited, a name, which will be assigned to this value in the results, and a message property, to indicate what should be presented to the user when asking for this piece of information. Notice that in this case we have two different types, input and password. The latter will ensure that whatever the user types is not sent to the display for security purposes. We can then use the outcome of this function to set our two Configstore values, apiKey and apiSecret. Finally, we'll return these two values as an array just like before. One thing I glossed over here though is the await keyword. The prompt function returns a promise, and so we've added the await keyword here. However, we still need our corresponding async keyword. So we'll add that to our method definition here. Finally, we'll import inquirer and we should be good to go. With our credential-manager in place, let's revisit our twine.js file and update it to use our new module.
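Before we do, here is the getKeyAndSecret flow in one condensed, runnable sketch. To keep it self-contained, the Configstore instance and inquirer's prompt function are injected as constructor arguments instead of being imported; everything else mirrors the code we just wrote:

```javascript
// Condensed sketch of lib/credential-manager.js. The store argument stands
// in for a Configstore instance and prompt stands in for inquirer.prompt,
// so the control flow is visible without either dependency installed.
class CredentialManager {
  constructor (store, prompt) {
    this.store = store
    this.prompt = prompt
  }

  async getKeyAndSecret () {
    // Credentials already saved? Return them as an array so the caller
    // can destructure [key, secret].
    if (this.store.has('apiKey')) {
      return [this.store.get('apiKey'), this.store.get('apiSecret')]
    }
    // Otherwise ask the user; the 'password' type keeps the secret
    // off the screen as it's typed.
    const answers = await this.prompt([
      { type: 'input', name: 'key', message: 'Enter your API key: ' },
      { type: 'password', name: 'secret', message: 'Enter your API secret: ' }
    ])
    this.store.set('apiKey', answers.key)
    this.store.set('apiSecret', answers.secret)
    return [answers.key, answers.secret]
  }
}

// Tiny in-memory stand-in for Configstore, for demonstration only
function makeStore () {
  const data = new Map()
  return {
    has: (k) => data.has(k),
    get: (k) => data.get(k),
    set: (k, v) => data.set(k, v)
  }
}

module.exports = { CredentialManager, makeStore }
```

In the real module, the constructor does the equivalent of this.conf = new ConfigStore(name) and the prompts go straight to inquirer. And since await is only legal inside an async function, the caller wraps its calls in async function main () { ... } and invokes main().catch(err => console.error(err)).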
We'll start by importing our CredentialManager and instantiating it, passing in the name twine. Next, we'll call our getKeyAndSecret method and destructure the results into two variables, key and secret. I really like this approach, and this is why we returned the values as an array in our method. Finally, we'll output these two values to the console. It would be great if we could stop here, but notice that we also added the await keyword to our getKeyAndSecret method call, since it too returns a promise. We can't have an await without an async. So how do we pair those up here? Well, one way is to wrap this code in a function we'll call main. This main function will be defined with the async keyword. Now we can just call our main function and handle the promise that is returned in a more traditional way. For now, we'll just catch any errors and write them to the console. Okay, with all that in place, let's try running our application and see how it works. If you didn't start with us in module 1, you'll first need to run npm link with a dot or period to indicate the current directory. This will allow you to run twine as a command from anywhere on your system. Since I did that in module 1, I'll just run twine here. Notice that since we haven't previously specified a key and secret, we're being prompted by inquirer. Notice too, that when I enter the secret, the results are not printed to the display; that's pretty cool, and would actually be kind of a pain to do on our own. Once I've entered the values, they're logged to the console. Nice. Now let's run it again, having previously defined our key and secret. Awesome. It just finds them and outputs them to the console, just like we wanted. So where is this actually storing the information? It's in my home directory, in a .config\configstore directory, in a file named twine.json. This twine is the same name we passed into our credential-manager constructor.
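With the key and secret we just entered saved, that twine.json file would look something like this (the values here are placeholders):

```json
{
  "apiKey": "abc123",
  "apiSecret": "s3cr3t-value"
}
```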
Looking at the contents, it's pretty straightforward, a JSON object with two properties, apiKey and apiSecret. Super simple. Mocking and Testing Command Line Applications Okay, now that we've started writing some real code in our sample CLI, we should really add some tests. To do that, we'll leverage three popular Node.js projects. We'll use Mocha for our test framework, the Chai assertion library, and Sinon for our mocks and stubs. Thinking about how we're handling user input, we have our code, which is using the inquirer module, which in the end is reading from process.stdin. Our code, of course, is what we want to test. Early on, I focused on trying to mock process.stdin because I thought this would be the easiest and most straightforward. It actually turned out to be quite the opposite. After many failed attempts, I realized it was much easier to mock inquirer's prompt function. This makes more sense anyway, since we're not interested in testing inquirer, just our code. Let's stop now and add some tests to our project. Demo: Adding Tests to Your Sample Project In this demo, we're going to add some tests for our Credential Manager module. We'll start by adding Mocha, Chai, and Sinon to the project as development dependencies. Also, since ESLint will complain about the format of a Mocha test suite, we need to add the appropriate ESLint plugin for that as well. Back in our project, let's edit our .eslintrc.js file to enable this new plugin. We'll need to add a new env section, which specifies both node and mocha. Great. Next, we'll edit our package.json file and change the scripts test property to run our locally installed copy of mocha. Notice that I'm using forward slashes here, even though I'm on Windows. That's okay, npm will figure it out. Now, let's add a test directory and our first test JavaScript file. Inside our test, we'll start by importing our chai assertion library. Since we're using the BDD style, we'll create an expect constant.
We'll also import sinon, the inquirer module, and our own CredentialManager. We'll structure our tests around a top-level describe call for a credential manager. Before the enclosed tests are run, our before function will be invoked, which will instantiate a new CredentialManager. Note that now we're passing in twine tests instead of twine so that our test config file remains separate from any real twine config files that may be on our system. Our first test will be to verify that the user is prompted when there are no existing credentials. We'll start by using sinon to create a stub for inquirer's prompt. By using the stub's resolves function, we'll create a stub that returns a promise, a promise that, in this case, resolves to an object with two properties, key and secret. This mimics the object we receive after calling inquirer.prompt in our getKeyAndSecret method. After creating this stub, we can call our method like normal and get our two return values. We can add a couple of expect statements here to verify that the values returned from our method call match the values we provided to the stub. Finally, we'll call the restore function on our stub. This restores inquirer.prompt to its original state. That's a good first test. One thing to call out: our async/await syntax makes another appearance here. We're, of course, providing await with our call to getKeyAndSecret, which means we're adding async just above, alongside the function declaration. This is how to leverage async/await in a Mocha test. Okay, we'll round out our test suite by creating an after function that runs when all the tests in this suite have completed. This function reaches into the Credential Manager class and deletes the two Configstore values that were set during the test. Cool. Going back to the command prompt, we can run our tests with npm test. What do you know, the test passes. Great. So, we've added our stub, but do we really know that our stub was called?
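It can be verified, because every sinon stub records how it was called. This hand-rolled miniature illustrates the mechanics behind stub(...).resolves(...) and the call bookkeeping; sinon's real API is much richer, and nothing here is sinon itself:

```javascript
// Miniature version of sinon.stub(obj, method) plus .resolves(value):
// replaces a method with a fake that counts its calls and resolves to a
// fixed value, and exposes restore() to put the original back.
function stubResolves (obj, method, value) {
  const original = obj[method]
  const stub = async (...args) => {
    stub.callCount += 1
    return value
  }
  stub.callCount = 0
  stub.restore = () => { obj[method] = original }
  obj[method] = stub
  return stub
}

// Stand-in for inquirer: the real prompt would block waiting for a user.
const inquirer = {
  prompt: async () => { throw new Error('no user to prompt in a test') }
}

async function demo () {
  const stub = stubResolves(inquirer, 'prompt', { key: 'k', secret: 's' })
  const answers = await inquirer.prompt()
  const count = stub.callCount // exactly one recorded call at this point
  stub.restore() // inquirer.prompt is the original function again
  return { answers, count }
}
```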
We know that we weren't prompted, but that's about it. When Sinon creates a stub, it includes some helper properties that we can also inspect in our tests. We can expect the calledOnce property to be true. This property is true if the stub was called once and only once. Hmm. While this is valid syntax, ESLint apparently doesn't like it. We can fix that with a module called dirtyChai. Let's install that now as a development dependency. We can import this new module and then configure it as a plugin by passing it to Chai's use function. What dirtyChai gives us, among other things, is a true() function to use in place of the property assertion. So now, we can go back to our calledOnce expect statement and change it to a function call. There, that fixed it. Okay, now if we rerun our test, it still passes. Let's add a second test to verify that when the credentials already exist, they're just returned and the user is not prompted. In this case, we won't use Sinon because inquirer shouldn't be called and, frankly, if it is, we'd want our test to fail. In this much simpler case, we just call getKeyAndSecret and include our same expect calls on the results. Because the preceding test will have supplied the credentials, this one will simply return those same values. That's it, running our test, we see that it still passes. Great. Handling Sensitive Configuration Data We've made good progress so far, but something's not quite right. In this module's first demo, we looked at the JSON file underneath Configstore and saw that it was saving our API key and secret in plain text. Now, if you go back to module 1, you'll remember I specifically said you should treat your API key and secret like a username and password. We wouldn't want to store our password in plain text on the file system. Likewise, we shouldn't store our secret that way either. We need a way to store sensitive configuration data in a more secure fashion. Each operating system offers its own way to address this concern.
Mac OS has its keychain, Linux has libsecret, and Windows has Credential Manager. Fortunately, there's a Node module that provides easy access to all three. Keytar allows you to manage passwords in a system's keychain, providing a consistent interface across each of these OS-specific solutions. Notice too that this module is part of the Atom text editor project sponsored by GitHub. Let's give keytar a try in our sample CLI. Demo: Adding Your Twitter API Secret to the Keychain In this demo, we're going to update our Credential Manager module to securely store our API secret using keytar. To do that, we're going to start by adding a module to our project called keytar-prebuild. Installing the keytar native module directly requires additional development tooling to build the module. As CLI developers, this is probably fine for you and me, but depending on how you distribute your CLI, you could be passing on this build requirement to every one of your CLI users as well. Installing keytar-prebuild means that we can skip the build part by downloading a prebuilt binary specific to our OS. Note here, you can see, we installed a prebuilt keytar module for Windows. Okay, let's go back to our credential-manager module and import keytar-prebuild. First, we'll hold onto the name passed to our class constructor since we'll need it later for storing and retrieving from keytar. To store our API secret in keytar, we simply need to replace the Configstore set function with keytar's setPassword. Keytar requires some additional information, a service, which for us is the name passed to the constructor, and an account, which is our apiKey. These two pieces of information, along with the password, or apiSecret in our case, will store the sensitive information in the underlying keychain. Notice too that setPassword returns a promise, so we'll need another await keyword here. Likewise, we can retrieve the apiSecret from the keychain with the getPassword function.
Here we pass the same service and account that we used when calling setPassword. This too returns a promise, in fact, all the keytar functions return a promise. So we have another await keyword here as well. Okay, so let's run our tests again. Great, they still pass. That's what we like to see. Since we're running on Windows, let's check out the system Credential Manager and look for our apiSecret. Yep, there it is. There's some discussion on the internet about how secure the Windows Credential Manager is, especially since we're storing our apiSecret without any sort of hashing or added encryption. However, without getting into that debate, this approach is certainly preferable to storing our sensitive information directly on the file system. Also, each operating system will have its own unique approach for delivering this keychain functionality. Now that we've added keytar to our project, we should update our tests as well. Our after function needs to be updated to remove the secret from keytar. Instead of interacting with configstore directly from our test, let's call a new clearKeyAndSecret method in our CredentialManager class. Since this returns a promise, we'll need to be sure to add await and async. Now let's go back and create this new method. We'll retrieve the API key from configstore and then we'll delete it. Next, we'll call the keytar deletePassword function, passing in our service name and key. Running our tests again, and they still pass. Great. Just as in module 1, let's take a minute before we wrap up and verify our changes on some different operating systems. With the latest code on a Mac, we'll run npm install. Notice that this time keytar-prebuild installs the binary for Darwin. Running npm test, we see that our tests pass. Since we've already run npm link here, we can run twine and give it an API key and secret. If we check the Mac OS keychain, we see our twine entry. This time, let's add a third operating system and try it on Ubuntu Linux.
Here if we run npm install, we'll see the Linux version of keytar-prebuild. Npm test passes here as well, that's good news. Let's run npm link so we can try running twine interactively. Entering our API key and secret, we can see that inquirer has hidden the password on all three operating systems, across both the Windows command prompt and the Unix shell. That's a great piece of work. If we run the Passwords and Keys app, we see our twine API key here as well. Perfect. Summary We're at a good stopping point, so let's wrap up this module. We started with an overview of the common types of configuration information. We then discussed how this configuration data is typically stored on a per user basis. We added configstore to our application as a way to store the API key and secret. We then migrated our API secret to use keytar, which securely stores the sensitive information in the operating system's underlying keychain. And of course, we added our tests. Stick around for our next module where we'll dive much deeper into our user interaction models. Stay tuned. Interacting with the User CLI Option and Command Patterns Hello. My name is Paul O'Fallon, and welcome to module 3 of our course. In this module, we're going to be covering options and commands. We'll start by looking at the format of options supported by a common Unix command. We'll then examine the command pattern implemented by more elaborate CLIs. Along the way, we'll extend our twine application to embrace this pattern, and in doing so, we'll authorize our application with Twitter and start calling the Twitter API. Let's get started. Command line options are parameters that follow the name of the command and, in the case of Unix commands, are preceded by one or two hyphens. Here's the man, or manual, page for the Unix ls command. This command will list the contents of a directory. Originally, these options were specified by a single hyphen and a single letter, like -a and -b here.
Later, commands began to support options longer than a single letter, for example, --author and --block-size here. These longer options are denoted with a double hyphen. Note too that each single letter option also has an equivalent longer option, such as -a and --all. These options are synonymous with each other. Single letter options can be provided individually, as here. This command lists all, -a, the contents of the current directory, using the long listing format, -l. These single letter options can also be grouped together. However, longer options cannot be grouped together, as we see here when we substitute the equivalent --all for our -a. Long options that take parameters, such as --block-size, can have their value preceded by an equal sign, however, that's not required. They can also just be preceded by a space. Another pattern found in command line applications is the use of commands within the CLI itself. Npm is a good example. The npm CLI takes many commands, such as access and adduser, which perform very different functions. The Git CLI is another example of the command pattern. While the Git command itself has many options that can be applied irrespective of the command, these commands may also have options and arguments of their own. Finally, the Amazon Web Services CLI has a tremendous number of commands, and even the concept of subcommands. Almost every AWS service can be managed from this one CLI with the right combination of command and subcommand. So, how can we add support for options and commands to our twine application? There's a great module on npm called commander, which can provide these capabilities for us. It has some very impressive stats, just like several of the modules we've already added to our project. Here is the example usage from the project's readme. After requiring commander, we can call a version function to set the version of our application.
Then, by calling the option function multiple times, we can easily specify the options that our CLI understands. Note too, the built-in support for both the single character and longer option types. The fourth option here is particularly interesting because it takes an additional parameter, the type of cheese. Adding the extra marble parameter to the function call specifies the default value if one is not provided. Finally, calling parse takes these option descriptions and applies them to the actual values passed in when the program was invoked. With this information, we can inspect the program variable to figure out what was actually passed to us. That seems simple enough. True to its name, commander also supports the command pattern we discussed earlier, including subcommands. It does this by leveraging a file name convention, putting each command in its own file. For example, if we have a script using commander, we can invoke the command function to define a command, foo in this case. If that command is found when parsing the command line arguments, the JavaScript file matching the original script name plus the command, something-foo in this case, is loaded and executed. This may continue through multiple levels, as we see here with the bar command. At each step, the command is appended onto the previous file name. Finally, when we're ready to act, we can supply the description and define an action, as with the baz command. In our twine application, the first command we implement will build on work we've already done. We'll introduce a configure command, which will serve as a top level command. The first subcommand of configure will be consumer. Running twine configure consumer will allow a user to provide us their API key and secret, just as we did in the previous module. Why consumer you ask? Well, good question.
The Twitter documentation refers to these credentials as the consumer API key and secret because the application is the one consuming the Twitter APIs. We're going to adopt that term here as well. Additionally, as we add this first command to our twine application, we'll also be evolving the directory structure of our project, where up until now we've only had bin and lib directories, we'll now be adding a commands directory. Our bin directory will be where we do our options parsing, display any help text, and actually execute the requested command. The code for this command will live in the commands directory, and these commands will be responsible for user input, console output, and orchestrating the sequence of steps necessary to complete a command. Finally, our lib directory will continue to be where we store and load credentials, as well as where we make API calls and handle any other utilities we create along the way. Okay, let's jump in and start making some of these changes to our sample application. Demo: Adding Commands to Your Project, with Test Coverage In this demo, we're going to add the commander module to our project, then we'll refactor what we've already written to create our first command. Before we're done, we'll also start reporting the code coverage of our tests. Before we dive into the code, let's run npm install to add commander to our project. With that installed, we'll revisit our twine.js script, remove pretty much everything, and start from scratch. We'll start by requiring commander, and then we'll use a nifty trick of requiring our own package.json file. This is a handy way to get access to the information about your project stored in this file. With that, we'll invoke commander's version function, passing in the version of our project from the package.json file. For now, we'll stop here and call the parse function on the arguments passed into our CLI. Let's see what these few lines of code get us.
Commander gives us support for the -h help option by default. We don't even have to specify it. This displays the usage information for our application with the options it already knows about. This includes both help and version, since we called the version function in our twine.js script. Notice too, how it gives us both short and long option names for these built-in options as well. Calling twine -V will print out the version. Wiring this to package.json means it will always display the correct version, as long as we keep package.json up to date. Next, we're going to call command to define our first command configure. We're going to take the functionality from the previous module, prompting the user for an API key and secret, and turn that into a subcommand of configure. For that, we'll follow the pattern outlined earlier and create a new script, twine-configure. Here we'll call the same two functions, but with an alternate syntax. You can chain these calls together, as we did before, or you can make them distinct calls, as we've done here. Now if we go back and run twine -h again, we see our first command configure listed, as well as the built-in help command. If we actually invoke twine configure, it doesn't do anything yet, we'll fix that soon. However, if we pass in -h to twine configure, we'll see the default help for twine's configure command. The built-in help command does the same thing. If we run twine by itself, it prints the help by default, however, as we saw, running twine configure by itself does not. Let's fix that by adding some code to our twine-configure JavaScript file. This code is taken pretty much verbatim from the commander readme file. It uses commander's outputHelp function to generate the standard usage help message. Running twine configure again, without any additional parameters, now generates the help text we'd like. Let's create our commands directory and begin refactoring our existing code into our first command.
We'll create our first file, configure.js, to match the name of our top level command. This isn't required, but it helps me keep track of what's what. We'll define a constant object of the same name with one property matching the name of our first subcommand consumer. For now, we'll just print something to the console. Finally, we'll export this object, and there you go, our first command implementation. Now let's go back to our twine-configure script and use what we just wrote. We'll require commands/configure and use that to define our first command inside of twine-configure. Since we're not deferring to another third level script here, we'll call the command function, then add a description, and for the action we'll call our new consumer function. Now if we run twine configure consumer, we see our message printed to the console. Excellent. Okay, so now it's time to move some of our existing functionality over to this new function, essentially splitting our old CredentialManager in half. In our consumer function, we'll instantiate our CredentialManager, call inquirer.prompt, and then store the key and secret in our credentials store. This method doesn't yet exist, but we'll create it in just a minute. Also, notice that we've specified util.notEmpty as our validate function. We're going to add that to a separate utility module. So let's do that now. First, we'll add the import here. Next, we'll create our util.js file in the lib directory. Here we'll add our one line function and export it. Our CredentialManager takes on a new, more well defined role in our command-based CLI. Its job is really just to provide CRUD operations for credentials. Let's update getKeyAndSecret to reflect this. And also, add our new storeKeyAndSecret that we just referenced in our configure consumer command. Because our new consumer function returns a promise, we'll need to update our commander action function to use async and await and pass in the package name.
Incidentally, this is cleaner than our previous approach where we had to define our own main function. Note too that we can use the name from package.json instead of having to hard code twine here. Okay, we're off to a good start. Let's take a few minutes now to update our tests to reflect our new command, as well as the changes to our CredentialManager. We'll start by replicating our directory structure inside our test directory and moving our CredentialManager test down into the lib directory. Next, we'll refactor this existing test to reflect the new role of our CredentialManager. There's lots to remove, since we're not using inquirer or sinon here, and in fact, we'll just rewrite our tests from scratch. We can test the storage and retrieval of credentials here, which looks similar to our previous tests. Next, in order to test a failure scenario, we'll need another plugin for chai, chai-as-promised. This will give us some additional API functions specific to promises. When adding this to chai with the use statement, be sure to leave dirtyChai as the last one. This will help ensure that it can make all the conversions necessary, including those of other plugins, like chai-as-promised. Okay, now we can proceed with our failure scenario, ensuring that it rejects when no credentials are found. To do that, we'll clear our key and secret and then try to retrieve them again. The rejected function allows us to verify that the promise is, in fact, rejected. There's one last change we'll make here. In our after function, clearing the key and secret still leaves a JSON file lying around with an empty object. Let's change that to actually remove the file that's created by configstore. This does require a little inside information as to where the file is actually located, but if we really want to clean up after ourselves, this will remove the file in its entirety. 
Mocha's done function parameter provides an easy way to deal with functions, like unlink, that expect a callback. We'll also need to add the fs and path require statements, as well. Now, if we run npm test again, we get an error that no tests are found. What? That seems weird. Well, we did move our test down one directory, which means there are no longer any tests directly in the test directory. To address this, we can edit our test script in package.json. First, I realize that we don't actually need to use the project relative path here, npm figures that out for us. But more importantly, adding the recursive option tells Mocha to look recursively for test files. Now running npm test, we get an error. There are actually a couple problems with our test. Because we moved our test down one directory, we need to change our require statement to reach one level back up to find the CredentialManager. Also, I should have removed the async designation from our after function, since it no longer returns a promise. Okay, running npm test again and our tests pass. Whew. Next, we'll create a new test for our configure consumer command. Here we'll bring back many of the things we removed from our CredentialManager test, like inquirer and sinon. Our first test will look similar to the one we removed earlier. It stubs inquirer.prompt, but this time calls configure.consumer. Next, we'll retrieve the key and secret and make the same verifications we did before. Let's also write another test to ensure that it will overwrite existing credentials. This one is identical to the previous, except our stub returns two different values, which we expect to overwrite the two that were set in the first test. We'll add the same after function here, as in our previous test, with the necessary imports as well. Let's also not forget to load dirtyChai. Running our tests again, and those pass. Great.
Even though it's not specific to building CLIs, it would be nice to get an idea of the code coverage of our tests. For that, we can install the nyc module. Adding this is easy, we just prefix our test script with nyc and we're all set. Now when we run our tests, we get coverage results as well. Hmm. Look at that, I totally forgot to add a test for our util module. It's pretty trivial, and you might decide it's not worth the trouble, but just for the sake of completeness, let's write that real quick. For now, our util tests will just check to be sure that the notEmpty function works as expected, when given a string with contents and also an empty string. That's it. Not a big deal really. Now if we run our tests again, nyc shows that we have 100% test coverage. I like that. One last nitpicky detail and we'll be done with this demo. Our original CredentialManager test used the phrase a credential manager, but all the others used the word the. Let's make them consistent. Okay, now they all match, I can sleep better tonight. Before we wrap up, let's actually run our command and see how it looks. Running twine configure consumer, we're prompted for our key and secret, just like before. That may seem like a lot of work for the same functionality, but we've set ourselves up for adding several other commands throughout the rest of the course. Implementing Twitter's PIN-based OAuth Using our twine application requires three steps. First is to configure the consumer API key and secret. Next is to use these credentials to retrieve an account token and secret. Finally, with these account specific credentials, we can invoke the Twitter API. We introduced this consumer API key and secret in the previous module, while we'll address the account token and secret in this module. In a subsequent module, we'll begin adding support for invoking various Twitter APIs with these account credentials. It may seem unusual that we have two sets of credentials here, and in a way it is.
The first set of credentials are specific to the application, our twine application in this case, while the second set are specific to a user or account. Thinking about it another way, if twine were a web application and not a CLI, the consumer key and secret would be configured once by the app developer at deploy time. The account credentials, on the other hand, would be generated for each user of the web application. Because we're both the app developer and the user, in this case, we have both. In an ideal world, we'd be able to ship our CLI with the consumer key and secret already configured, however, since these could be extracted and abused, we need to rely on each user of our CLI to start with their own set of consumer credentials. Twitter provides many different types of authentication for third party developers. One less common approach, but perfect for our use case, is PIN-Based OAuth. In fact, Twitter's support for this method is specifically why I chose them for this course. As you can see here, PIN-Based OAuth is intended for applications which cannot access or embed a web browser. In fact, they even mention command line applications specifically. Any type of OAuth involves a lot of back and forth, and PIN-Based OAuth is no different. In this sequence diagram, we have the user, our twine application, the user's web browser, and Twitter itself. This may seem intimidating, but let's walk through it. The user runs our twine application with a new command configure account. This uses the consumer key and secret to invoke a Twitter request token API endpoint. This call returns a token and secret to be used for the next step. Here twine opens a browser window, pointing to a special Twitter authorize URL. The user views this webpage, provides their username and password, and authorizes our twine application. In return, they are shown a PIN number.
When the user returns to our twine application, they can enter this PIN, which we will then use to invoke an access token endpoint, giving us a set of credentials specific to the authenticating user. In the end, a process like this is designed to provide us with credentials we can use to invoke APIs on behalf of a user, without ever having to access their actual username and password. The PIN part is a handy way to do this when our application is not a web application. One comment that should have given you pause, however, is that twine opens a browser window. How are we going to do that? Once again, it's npm to the rescue. The opn module provides a cross-platform way to open files, or URLs in our case, in their default application. As you'll see in a minute, this is exactly what we need. Finally, it sure would be helpful if we could find an existing library for invoking the Twitter API. However, I found this experience to be quite the opposite of our earlier forays into npm. This issue from one of the libraries I found is a good example. There really wasn't a well maintained library that was a good fit for our use case. Either they didn't support PIN-Based authorization, there were lots of open issues, or the library was no longer maintained. This is a good counterbalance to our earlier experiences with npm. Don't feel like you have to use something from npm. There may be cases where it's not much more effort to just add it yourself. That's what we're going to do for our Twitter API client. Demo: Adding Another Command In this demo, we'll implement a simple Twitter API client for use in our command line application. We'll then use this to add a second command to our CLI. In doing so, we'll implement Twitter's PIN-Based authorization. Before we're done, we'll take a look at running twine in the node debugger. Let's dive in and get started. In order to call the Twitter API, we'll need to install a couple of helper modules, OAuth 1.0a and axios.
Axios is a nice, promise-based HTTP client that has a few tricks we can leverage to our advantage. Next, we'll go into our lib directory and create a twitter.js file. This will hold our Twitter API client. Much of this isn't specific to building a CLI, so we'll gloss over some of the details. We'll require the necessary modules, create a Twitter class, and export it. Our constructor will take our consumer API key and secret as parameters, set up a couple of properties, and configure our OAuth module. This use of the OAuth 1.0a module is taken almost verbatim from the project's readme. In fact, they've tested it against the Twitter API. So we're just going to use that as is here. Next, we're going to leverage one of those axios tricks I told you about. You can define interceptors in axios, which work a lot like middleware in Express. These are invoked in every request before it is sent to the server. In our case, we're going to use the OAuth authorize and toHeader functions to set the proper authorization header in our outgoing API request. Next, we'll set the Content-Type header to x-www-form-urlencoded. Finally, we'll set the axios base URL to the one we just configured above. When we defined our token above, we left it empty, so we'll create a setter for that here. With all that complexity in our constructor, the remaining methods are pretty simple. Our get method will just call the axios get function and return the data property of the response. Our post method is pretty much the same, except it takes the data to post as an additional parameter. That's really it. It took a little poking around to get all of that assembled just right, but once it's done, it's pretty straightforward. No need to adopt a poorly supported module just to get those capabilities. The next thing we're going to do is revisit our CredentialManager. Right now it's hardcoded to handle our consumer API key and secret.
However, aside from the hardcoded strings, there's really nothing specific in here about those credentials. Now that we're faced with storing an account specific key and secret as well, it sure would be nice if we could leverage this same module. Let's make it more generic by removing the hardcoded property names, making those dynamic. We'll start by introducing a new parameter to getKeyAndSecret called prop. This will be used by our configstore function call. We'll also add this parameter to storeKeyAndSecret and make the necessary changes there as well. Finally, we'll do the same to clearKeyAndSecret. Seems simple enough. Now let's go change how our current command calls these methods. We'll add apiKey to our consumer command's storeKeyAndSecret call. Easy peasy. We're not done yet though. Let's change the tests as well. We'll edit the tests for our CredentialManager to provide the apiKey parameter everywhere it's required. Then, we'll pop over to our configure tests and update those as well. Just to be sure we haven't broken anything, let's rerun our tests. Okay, we're still good. Before we start adding our new command, configure account, let's add that opn module we discussed earlier. Okay. Now back in our configure script, we'll add the new function account. This function is basically going to implement the sequence diagram we stepped through earlier in this module. We're going to start by retrieving our consumer API key and secret and using those to instantiate our Twitter class. Then we'll use that client to post to the request token Twitter API endpoint and parse the response. Since we know from the Twitter API documentation that this response comes back in the format of a query string, we can use Node's built-in querystring parsing function. This response gives us a token and secret, so we'll pass those to our setToken method. That way they'll be used by our Twitter class for the subsequent API calls.
The next thing to do is open a browser window for the user to sign in and authorize our application. We could just call opn directly, but that might be kind of jarring to the user, so we'll use inquirer here to add a prompt to let the user know what comes next. We'll ask them to press Enter and we'll give it the name continue, even though we don't intend to examine the contents of this answer. Once they've hit Enter, we'll use opn to open the user's default browser to the URL defined by Twitter for handling this authorization. Immediately afterwards, we'll use another prompt to ask the user for the PIN they received. This way, our application will wait while the user interacts with the web browser. Once the user has entered the PIN, we'll use it in our call to the access token API. We'll, again, set our Twitter class's token to the parsed response. At this point, we have our account token and secret. However, Twitter gives us another endpoint we can use to test these credentials. The verify credentials endpoint will give us some information about the user that belongs to these credentials. We'll call that here, and if we don't throw an exception, we'll store the token and secret and output the user's screen name returned by the verify credentials API call. Notice that we're able to leverage our new generic storeKeyAndSecret function by simply passing in a different property name, accountToken in this case. Finally, let's add the necessary imports for the modules we just used. Okay, if you need to, hit Pause, take a breath, and review what we just created. This is likely the most complicated code we will write in this course. It's not technically specific to building a CLI, however, it's not uncommon to have a linear set of steps like these executed in response to a CLI command. Demo: Adding Another Command, Part 2 Now that we have our second command, let's add some tests for it as well. We'll start by importing our new Twitter module.
Because we're going to do a lot of mocking in these tests, we'll leverage a neat feature of sinon. We can define a sandbox at the start of each test, then instead of calling sinon functions in each test, we'll call them on our new sandbox. In fact, we can change our existing tests to use this new sandbox. Finally, after each test is complete, we just restore our sandbox and everything is back to the way it started. We don't need to restore each mocked item individually. Okay, let's add our first test. Because most of what our account function does involves calling other things, we'll be doing a lot of mocking here. First, we'll mock our CredentialManager's getKeyAndSecret method. Then we'll stub our Twitter class. Since we call the post method twice, once for request token and again for access token, we can use sinon's onFirstCall and onSecondCall to return different values each time. Next, we'll also stub the get method call, which we use to invoke the verify credentials API endpoint. Finally, we'll stub inquirer, as we have in the past. Since we call prompt twice, we'll set the first one to an empty value representing when the user hits Return to open the browser, and a second one to a PIN that would be entered by the user. Okay, now we have a problem. We want to keep the opn command from opening a browser, but sinon can't stub a module whose export is itself a function. To get around this, we're going to add a new method to our utility module called openBrowser. This will simply run opn on the URL passed in. Nothing special really, but now it's something we can mock. Let's jump over to our configure script, change the opn call to util.openBrowser, and remove the opn import. Okay, back to our test. Let's import our util module, and now we can mock openBrowser to just return an empty string. Because our configure account command prints the results to the console, we'll add a spy to the console log function so we can examine how it's called. Okay.
After all that stubbing, we're ready to actually call our configure.account function. Next, we'll restore just our CredentialManager's getKeyAndSecret method to check and see if the account credentials have been stored as we expected. Finally, since we've been spying on console.log, we can check to see if our account function called this with the string we expect. One more thing I forgot to do, since we're using sandbox now, we don't need to restore anything in our first two tests. So let me remove those now. Okay, running our tests and everything passes. However, our coverage is pretty weak. We don't have any tests for our Twitter module and that's dragging down our numbers. Let's add some real quick. We'll create a twitter.js file in our test lib directory, and we'll drop in several require statements that you'd expect by now. We'll instantiate a Twitter class before our tests and first we'll test our setToken method. Nothing earth-shattering here. Next, we'll test our get method. We're going to stub the axios get method and just have it resolve to an object with a data property, since that's the only property of the axios response that we're using. Next, we'll do the same thing for the post method. Okay, let's rerun our tests. Well, our numbers are better, but not perfect. We could certainly spend a long time writing elaborate tests for our Twitter module, including verifying the OAuth headers, but that's beyond the scope of this course. This still leaves us with pretty good test coverage, and I'm okay with that. Before we move on, let's take a minute to pull up our account function and its corresponding test side by side, so we can reconcile all those mocks with the original function. We first mocked our getKeyAndSecret call, then our first Twitter post, then the two inquirer.prompt function calls, our call to openBrowser, then our second Twitter post, and finally, our Twitter get method call.
It's debatable, with this much mocking, how much of our account function we're actually testing. However, with working code and these tests, we can ensure that we don't break this linear process in the future. Well, we've written a lot of code, but haven't actually defined our new command, so let's do that now. We'll revisit our twine-configure script and add a new command. This one will look almost identical to our previous command, except that it will invoke the account function instead of consumer. It's worth pointing out here that one goal of putting all the logic in a separate configure module is to keep these bin directory scripts as simple as possible. The goal here is that we really shouldn't need to write any tests for these because they're just parsing the command line and invoking logic stored somewhere else. Okay, everything's been written and our tests pass. How about we take it out for a spin and see how it looks. First, let's run twine configure consumer. We'll go back to apps.twitter.com and look up the consumer key and secret we created in the first module. We'll copy and paste those here one at a time. Be careful when you paste your secret since you can't see it. If you get it wrong, the subsequent API calls will throw an exception, something we'll handle better in a later module. Next, we'll run twine configure account. The first prompt we see is to hit Enter to open Twitter in our default browser. In our browser, we see Twitter, asking us to authorize the PSTwine application to use our account. You've likely had to name your application something other than PSTwine, and that's what will show up here. By signing in and clicking on Authorize app, Twitter presents me with a PIN. Entering that PIN, I see the message account paulofallon was successfully added. This is the screen name returned from the verify credentials API call. Note that nowhere did I explicitly tell twine my username. 
By granting access to the PSTwine application, it was able to make that API call on my behalf and retrieve my screen name. We're not storing that anywhere, but it is a nice way to show the user that everything has worked as expected. Hmm, notice that the command never actually terminated. That's weird. I had to use Ctrl+C to exit out of the application. After doing some research, it appears that our call to opn takes a configuration object with a wait parameter that we can set to false. This parameter allows the opn promise to resolve immediately, instead of waiting for the spawned application to terminate. Admittedly this issue seemed to be hit and miss and this flag shouldn't matter on Windows, but after adding it I was unable to reproduce the error. Let's try configure account again. We'll authorize the application, enter the PIN, and our application terminates as we would expect. It seems like it's working okay, but we'll keep an eye on it. Anytime you want to view the list of applications you've authorized on Twitter, you can find those in the Settings and Privacy section of Twitter under Apps. Here we can see the newly authorized PSTwine alongside the other apps I've authorized. Demo: Using the Node Debugger Let's take a look at one more neat feature. Visual Studio Code makes it easy to debug your Node.js applications and this includes CLIs as well. By selecting the debug icon and choosing Add Configuration, we can define a new configuration for running our configure account command. We'll change program to run our twine-configure command and pass in the account argument. I tried setting program to just twine.js and passing in the arguments configure and account, but that didn't work. You should be sure to call the JavaScript file that's going to actually execute the command. The other key property to set here is console. Setting it to externalTerminal will give you a new terminal instance to interact with. 
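A launch configuration like the one just described might look roughly like this in .vscode/launch.json — the program path and configuration name here are assumptions based on the project layout, not the course's exact file:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "twine configure account",
      "program": "${workspaceFolder}/bin/twine-configure.js",
      "args": ["account"],
      "console": "externalTerminal"
    }
  ]
}
```

As noted above, program should point at the JavaScript file that actually executes the command, and console set to externalTerminal gives you a terminal you can type the PIN into.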
Now just for kicks, let's set a couple of breakpoints in our account function. Going back to debug and launching our new configuration, we see a new terminal window asking us to hit Enter to launch a browser. Hitting Enter, however, stops our program at its first breakpoint, the openBrowser function. We can see all the common debugging information, just as we'd expect. Moving on, our browser opens and we can authorize our application. When presented with the PIN, we can enter that here just as before. Next, we hit our second breakpoint, at the point of our console.log function. Continuing again and our command completes. Debugging like this can be helpful when working through those long series of linear statements used to fulfill a command. Before we wrap up this module, we should check our other operating systems as well. On a Mac, after updating our code from Git and running npm install, we need to unlink and relink our project. After that, we can see that our help text is rendered correctly. Running twine configure consumer, and we are prompted to enter our API key and secret, just as before. Running twine configure account, and we're again prompted to open a browser. Even though I'm already logged into Twitter here, we're still prompted to authorize the app, which gives us a PIN. Entering that PIN, we're told that the account was successfully added. Trying the same thing on Linux, we see the help text, and running configure consumer, we're prompted for our consumer key and secret. Running configure account, it's the same browser and PIN process as the other two operating systems. Pretty neat. We have a working CLI that talks to the Twitter API and runs on three different platforms. Okay, that's enough, I'm tired. Let's wrap things up. Summary We've made a lot of changes in this module, let's look at where we are now. We've introduced commander and defined our first commands, configure consumer and configure account. 
We have our implementations of these two commands in configure.js. We reworked our CredentialManager, removing the user interaction, and making it more generic so we could use it for all our credentials. We have our basic Twitter API client and our two utility functions, notEmpty, for use in our inquirer prompt calls, and openBrowser, which allows us to stub our use of opn. We also have all our tests with test coverage. Finally, our debugging configuration is stored in .vscode/launch.json. In this module, we started with a look at some common option and command patterns. We then restructured our project to support our new command-based interface. Along the way, we introduced some new modules, such as commander, axios, and opn. Out of necessity, we implemented our own minimal Twitter client. In doing this, we implemented the second of the three steps of our Twitter CLI. Stay tuned for the next module when we start adding commands that deliver Twitter functionality. Interacting with the Environment Handling Errors and Setting an Exit Status Hello. My name is Paul O'Fallon. In this module, we'll be looking at ways our CLI interacts with the environment. For starters, we'll revisit the error handling in our twine application. It's pretty minimal so far and we need to be more intentional, including returning the proper exit status. Next, we'll examine a pattern for leveraging environment variables to override our configuration settings. Finally, we'll make good use of both standard input and output, adding the final features to our CLI for this course. Let's get started. As we discussed in an earlier module, our application code is broken down into three directories. Our CLI commands are defined in the bin directory, while the actual implementations are in the commands directory. Finally, our commands make use of functions found in the lib directory. One way to think about it is that our commands cascade down from bin to commands to lib.
Our errors, on the other hand, will flow in the opposite direction: errors originating at the lower levels will bubble up through intermediate layers, finally reaching the scripts in our bin directory where they will be output to the user. An important part of handling errors in a CLI is communicating to the operating system that the command was terminated due to an error. This is done by returning an exit status. On Windows, the exit status of the previous command is available in the ERRORLEVEL environment variable. Here, a successful execution of the type command returns an ERRORLEVEL of 0. Next, when I invoke type with a nonexistent file, you see the error message returned by the command and now the ERRORLEVEL has been set to 1. Similarly, in a Unix shell, this exit status value is found in the $? variable. Here's a 0 return value for a successful cat command, and just like in Windows, a value of 1 to denote an error. One nice feature this enables on Unix is adjusting the prompt to reflect the last exit status. Here you can see that the arrow in my prompt is green, except when the previous command fails, in which case it turns red. That's handy. While these examples show an exit status of 1 representing an error, anything greater than 0 is an error. Technically the range of possible values is between 0 and 255, although a few values have special meanings in some shells. So, how do we return the proper exit status in our CLI? Well, the Node.js documentation has some helpful insight when it comes to exiting your program due to an error. Here is an example of what not to do. Calling process.exit with a value of 1 does in fact cause your program to exit with an exit status of 1. However, it does so immediately. Because writing to standard out, which is the usage in this case, sometimes happens asynchronously, it's possible that exiting like this will terminate the program before the usage is printed.
A safer way to exit on error is to just set the process.exitCode variable to the desired exit status and let the program terminate on its own. This way, the appropriate value is returned to the operating system while the Node.js application is allowed to properly exit. Let's see about adding support for these exit codes in our CLI. Demo: Add Error Handling and Proper Exit Statuses In this demo, we're going to catch and raise the appropriate errors in our CLI. And in doing so, we'll provide meaningful error messages to the end user. When we exit our CLI because of an error, we'll also set the appropriate exit status. Finally, to make all of this work, we'll need to do a little more light refactoring of our Credential Manager. Let's get started. So, actually we do already handle one error message in our Credential Manager, the case of a missing key. We're going to keep that, but change the error message to be more user friendly. This will be our overarching strategy: create the user-facing error message closest to where the error occurs and where we have the most context. Next, we'll replicate the same error handling for a missing secret. These error messages may seem a bit odd. We're using the prop passed in to render the error message, even using it to suggest a configure command. With credentials named apiKey and accountToken, as we have right now, those error messages won't make a lot of sense. However, there's no reason we can't store our consumer and account credentials under those exact key names. Doing so lets us leverage the key name in our error messages. Let's go change apiKey to consumer and accountToken to just account. There are a few places to change these in configure.js. Next, we'll swap these out in our configure tests. Finally, for consistency's sake, we'll update them in our credential manager tests as well. Next, we're going to fix a lingering issue left over from the last module.
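The error-message strategy above might look roughly like this — a hypothetical sketch, since the course's exact wording differs; the point is that the prop name ('consumer' or 'account') drives both the message and the suggested command:

```javascript
// Build a user-facing error close to where it occurs, using the credential
// name so the message can suggest the matching configure command.
function missingKeyError(prop) {
  return new Error(`Missing ${prop} key -- try running: twine configure ${prop}`);
}
```

With the credentials renamed to consumer and account, missingKeyError('consumer') produces a suggestion the user can act on directly.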
When running our tests, we were leaving some keytar passwords lying around in our various key stores. We're going to add a new method to our CredentialManager to help with this. But first, we're going to namespace our credentials stored in config store. We'll prepend 'keys.' to the property value when calling get, set, or delete. This prefix will store all of our credential-related information in a keys object within config store. You'll see why we want to do this in just a second. Now once we have real users, we can't just make changes like this to our internal structure without providing some sort of backwards compatibility, but since we're still building our CLI, we can safely make these breaking changes. With this namespacing in place, we're going to add a new method to our CredentialManager, clearAll. For now, this will only be leveraged in our tests, but its job is to remove all the credentials from our configuration. To do this, it's going to get the keys object from the config store. We'll iterate over the key names and call clearKeyAndSecret for each one. This will remove both the config store entry, as well as the keytar entry for each credential. Good. Let's go back to our CredentialManager tests. We owe it some improvements based on our error handling, so we'll do that first. We'll change the existing test to represent a missing key. Also, instead of just testing that the call is rejected, we'll use rejectedWith and provide a portion of the error message it should look for. Next, we'll add a similar test to verify the handling of a missing secret. We'll explicitly set a key directly in the configstore with no secret. This should cause getKeyAndSecret to fail with a missing secret. Finally, we'll remove that lone consumer key we added above. Okay, now we're almost ready to revisit our after function and clean up those leftover credentials. The first thing we're going to do is npm install fs-extra.
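The clearAll behavior described above can be sketched like this — a hedged, standalone sketch where `conf` stands in for the configstore instance and `clearKeyAndSecret` for the method we added earlier; the course folds this into the CredentialManager class itself:

```javascript
// Remove every stored credential: read the namespaced 'keys' object from the
// config store, then clear each credential (configstore + keytar entries).
async function clearAll(conf, clearKeyAndSecret) {
  const keys = conf.get('keys') || {};
  for (const prop of Object.keys(keys)) {
    await clearKeyAndSecret(prop); // removes both the config entry and the secret
  }
}
```

Because the credentials all live under one keys object, iterating its property names is enough to find everything that needs cleaning up.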
This provides a promise-based interface to Node's fs library. In fact, we'll just change our require statement, leaving the constant named fs. Before we introduce clearAll to our after method, let's write a specific test for it. We'll store a couple of credentials, one for consumer and another for account. Then we'll call the clearAll method. Finally, we'll try to get those credentials, ensuring that they've all been removed. Okay, let's just remove our existing after method and replace it altogether. We'll start by calling our new clearAll method. Now we can call fs.unlink to remove our empty configstore file. However, since we're using fs-extra, our call to unlink returns a promise that we can await. No more using the done function. Given how many of our tests are async, this makes our after function consistent with everything else. Next, we'll make the same change to the after function in our configure tests. While we're here, I'm going to make one additional tiny adjustment. By changing our console log spy to a stub, we can cause it to swallow the output, which will keep our test results neat and clean. Nice. Let's stop here and rerun our tests. Everything still passes. Great. Now that we've tackled raising errors in our CredentialManager, let's focus our attention on our Twitter client. Starting with our get method, we'll wrap the call to axios in a try/catch block. If we get an error, we'll call a handleTwitterError function that we haven't written yet. Next, we'll do the same for the post method. Good. Okay, now let's go write that handleTwitterError function. Basically, we're going to inspect the error message, looking for some common errors. If we find those, we'll craft our own error messages, allowing us to make suggestions, such as this 401 example. Also, because Twitter rate limits its APIs, we can trap for that too and throw a more helpful error message here as well.
Finally, for all other Twitter-related errors, we'll just prefix the error message with Twitter and throw that. Okay, that's good. But let's update our tests to verify that these errors are getting thrown. We'll start by stubbing the axios post method to reject with an error message that contains 401. Next, we'll call our twitter.post method and expect it to be rejected with the appropriate error message. We'll restore the post method and repeat this process for the get method. Next, we'll do pretty much the same thing for the 429 error message. And finally, we'll do this one more time for the generic Twitter error. Because we've introduced rejectedWith here for the first time, we need to import chai-as-promised and use it in chai before dirtyChai. Running our tests again, and they all pass. Okay, so we've done a lot of error throwing, but what should we do with those errors? Let's go back to the top, to our bin directory, and revisit our action function calls. We'll remove the async/await and simply add a catch onto the promise that's returned. This catch will call the util.handleError function. We'll repeat this for both of our action calls. Now let's go write this handleError function. First, we're going to install chalk. Chalk is a nice little module for colorizing our output. We'll use this to render our errors in red. After importing chalk, we'll begin our handleError function. This will simply log to console.error, but what it passes to error is the output of a chalk function, redBright, to which we pass our error message. This will cause the console to render our error message in a bright red color. Next, as we discussed earlier, we'll set the exit code to 1 to indicate that our CLI terminated because of an error. Finally, we'll export this function, and we're all set. We'll write a really simple test for this new function by first importing sinon, as we haven't mocked anything in this test yet.
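The handleError function described a moment ago can be sketched like this. It's a minimal sketch: the course uses chalk.redBright, while here a raw ANSI escape stands in so the sketch has no third-party dependency — the exit-status handling is the important part:

```javascript
// Print the error in bright red and record a failing exit status without
// terminating immediately (process.exitCode, not process.exit(1)).
function handleError(message) {
  console.error(`\x1b[91m${message}\x1b[0m`); // stand-in for chalk.redBright(message)
  process.exitCode = 1; // Node exits with status 1 once the event loop drains
}

module.exports = { handleError };
```

Setting process.exitCode instead of calling process.exit means any pending writes to standard error still flush before the program ends.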
Next, we'll add a new context for the handleError function. In our first test, we'll verify that the exit code is being set to 1. We'll stub console.error to keep it from being printed during our test and then call the handleError function. We'll also verify that the exit code is in fact set to 1. Finally, we'll verify that we're actually printing a message to console.error. Again, we'll stub the error function and call handleError, but this time we'll verify that the console.error method was called with the message we passed to handleError. Running npm test again, and we're all good. So, we haven't actually seen any of these errors, but now's a good time to try it. Since we changed the layout of the credentials in our configstore, we need to delete and recreate our configuration anyway. So let's manually remove our configuration file. Now, with no configuration in place, let's try running twine configure account. Nice. That's our helpful error message from down in CredentialManager using the prop name of consumer. It's also rendered in a nice shade of bright red. Perfect. A Pattern for Environment Variable Overrides Another way our CLI can interact with its environment is through the use of environment variables. You've probably seen these before, maybe when manipulating your PATH environment variable. Here's a subset of the environment variables set on my Windows machine. Similarly in a Unix shell, the env command will output a list of the currently active environment variables. These are variables that are accessible to any of the programs running on your machine, CLI or otherwise. One environment variable pattern I've seen in CLIs is as overrides for configuration information. Maybe your CLI has configuration information stored in a file, as our twine application does. However, the CLI may also look for specific environment variables; if they are set, then those values are used instead of what was found in the configuration file.
Finally, the CLI may provide command line options to again override both the environment variables, as well as the values found in the configuration file. One example of this is the Amazon Web Services (AWS) CLI. It recognizes several environment variables, including the four you see here, and values assigned to those variables will trump what's found in any of the AWS configuration files. So, how do we access these environment variables in Node.js? The documentation, again, shows us the process.env variable. This returns an object containing all of the variables and their values. You can add or override values in this object, but those are only valid within the program itself, not the shell that executed the command. Okay, is there a way we can leverage this pattern in our CLI? Well, it may be a bit of a contrived example, but we can adopt an approach similar to the AWS CLI and support environment variable overrides for our credentials. Here's the twine in three steps slide from an earlier module. We can support two new environment variables, twine_consumer_key and twine_consumer_secret, which we'll consult before using what's in our configuration. While we're at it, we can do the same for our account credentials, checking for twine_account_key and twine_account_secret. Why might you want to do this? Well, you can retrieve an access token and secret from within your application's definition on apps.twitter.com. Setting these values as environment variables would let you skip the whole PIN-based authorization we established in the previous module. Demo: Implementing Environment Overrides for Credentials In this demo, we'll add support for overriding our configured consumer and account credentials. We'll do so by checking a set of environment variables and using those values instead. Let's get started. There aren't many changes required to enable this, just an update to our CredentialManager and some extra tests.
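As a preview, the override pattern can be sketched like this — a minimal sketch where the helper name is illustrative; in the course this check lives inside CredentialManager itself:

```javascript
// Build the environment variable name from the service and credential name
// (e.g. 'twine' + 'consumer' -> twine_consumer_key) and prefer it, if set,
// over the value from the configuration store.
function resolveKey(service, prop, storedKey) {
  const envKey = process.env[`${service}_${prop}_key`];
  return envKey !== undefined ? envKey : storedKey;
}
```

The same shape works for the secret: consult the matching _secret variable first, and only fall back to keytar when it isn't set.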
We'll start by specifying the name of the environment variable we want to check. This is another case where using a property name that makes sense to the end user pays dividends. Our service is twine and our prop is either consumer or account, so we can easily build an environment variable named twine_consumer_key or twine_account_key here. Next, we'll check to see if this value is set. If it is, we'll use that for our key, otherwise we'll retrieve our key as we always have. Next, we'll replicate that same process for our secret. We'll craft the environment variable name, check to see if it's set, use it, if it is, or otherwise fetch it with keytar as before. That's it. Now let's write tests to validate this. Starting at the top, we'll add a test to ensure that credentials set in the environment are used. We'll do that by setting process.env for our two environment variables. Remember, we need to use twine-test here, since that's the service for our unit tests. Then, we'll call getKeyAndSecret as always. Finally, we'll verify that we have received the values that were set in the environment variables. Next, because we expect twine to actually prioritize these environment variables over any existing configuration, we'll test for that as well. With our environment variables still set from the previous test, we'll store a set of credentials and then retrieve them again. Here too, we still expect to see the credentials set in the environment. Running our tests, and they pass. Now let's try this for real by setting the environment variables to a bogus set of consumer credentials. Running twine configure account throws an error. In fact, we see that it threw one of our Twitter errors, since the Twitter API didn't like the incorrect consumer credentials we set in our environment. That's it. Easy peasy. Before we wrap up this demo, I want to encourage you, if you're following along, to close out your command window or shell and reopen it before continuing. 
I forgot to do this when recording this course and got really thrown for a loop later on when nothing was working anymore. Come to find out, it was still trying to use the leftover bogus environment variables. File Descriptors and Dealing with Unbounded Input Input and output file descriptors are important for getting information into and out of a CLI. An application may receive data by reading it from standard input. Similarly, an application typically writes its results to standard out. Although in the case of an error, it may write an error message to standard error instead. Standard out and standard error are typically shown to you in the console when you run a command, unless you've redirected them somewhere else. In Node.js, console.log writes to standard out, while console.error writes to standard error. In Node.js, these three file descriptors are also accessible from the process object as stdin, stdout, and stderr respectively. And Node treats these as streams. This means you can pipe from standard in to any other stream and you can also pipe to standard out. This one line of code here would simply pipe standard in straight through to standard out. So, how can we leverage what we know about standard in and standard out to improve our twine application? Well, we can support parameters either on the command line or piped to us from standard input. One tenet of the Unix philosophy is to, and I quote, expect the output of every program to become the input of another. And we can certainly abide by that here. For a single parameter, that means you can either call a twine command and include the parameter on the command line, or you can pass the single parameter to the command via standard in, using an echo here as an example. This becomes even more powerful though when we think about multiple parameters.
We can support passing in a set of comma-separated parameters on the command line, but we can also redirect a file of parameters to the program or, in the third example, pipe the output of some unknown parameter generator to our twine command. Let's take a real version of that last example. Say we want to implement a way to look up a Twitter user. We have no way of knowing how many users will be piped to standard in. Setting aside Twitter's API rate limits for a bit, let's see how we might handle an unbounded number of input parameters to our CLI. We'll accomplish this by stringing together a series of Node.js streams into a pipeline. We'll receive our input on the process.stdin stream. The first thing we'll do is split this input on newlines, so each chunk that passes through is one piece of input. Next, because some of Twitter's APIs accept parameters in batches of 100, we should be smart and batch up our input before making a Twitter API call. We can do this by including another stream that holds onto incoming data and only writes when it has accumulated a batch of inputs. As each batch is output, we can invoke the appropriate Twitter API. In fact, we can invoke several of these APIs in parallel, acting on multiple batches at once. Because the output of these API calls will be grouped by batch, we'll introduce another stream to flatten the results, removing these batch boundaries. Finally, we'll pass these individual Twitter API results to a JSON stream, which will stringify the results. Piping this to standard out will cause our results to appear in the console. Doing this with a series of streams provides an important benefit: the proper handling of back pressure. Each of these streams, especially calling the Twitter API, will have throughput rates that vary from one another. Handling the data flowing through our CLI with a series of streams allows us to leverage back pressure to correctly manage these differences.
We can use an almost identical pipeline to handle parameters passed in on the command line as well. Instead of starting with process.stdin, we'll split the comma-separated parameters into an array, and use that array to create a readable stream. The data coming from this stream can be handled via the same pipeline as before. To help us out with this streams-based implementation, we'll need several new modules. First is split2, which makes it easy to split streaming data on newlines. Next is parallel-transform. This module will help us call the Twitter API on several batches at once. Through2 is a module that makes it easy to build transform streams, those that take input, transform it, and pass it on. JSONStream will provide us with our stringify capability, allowing us to create a single JSON array of objects. When handling parameters passed in on the command line, from2-array makes it easy to create a readable stream based on an array of data. Finally, because we're making heavy use of promises, promise-streams will allow us to use promises in our error handling, just like we have everywhere else. Sounds good, let's give it a shot. Demo: Lookup Users via Standard In or Command Line In this demo, we're going to implement the lookup users command we just discussed, as well as one additional lookup command. In doing so, we'll support both parameters on the command line, as well as via standard in. As a little something extra, we'll introduce a utility that we can use to parse the JSON output of our commands. Let's get started. Before we dive into the code, let's install our new npm modules. These are the modules that we just discussed in the earlier slides. Okay, good. The first thing we'll do is implement our batch stream, creating a new file in the lib directory. Leveraging through2 makes writing this stream pretty simple. We'll define a constant batchStream, which takes an optional parameter to specify the size of the batch.
We'll initialize our batch variable to an empty array. We'll then return an invocation of through2.obj, which creates an object mode transform stream. This function takes two parameters, each of which is a function itself. The first is called on each chunk and the second is called just prior to the stream ending. When we receive each chunk, we first want to add it to our batch array. Next, we'll check to see if we've reached our batch size. If so, we'll copy the data into a new variable, reinitialize our batch, and then invoke the next function with our batch of data. If we haven't reached a batch boundary yet, we'll just call the next function without any data. When our stream is done, our second function will check to see if there are any remaining items to pass along. If there are, next will be invoked with those items. We'll export our batch stream and we should be good to go. We'll create our new lookup.js in the commands directory. We're naming this command lookup because the Twitter API uses that term for endpoints that allow you to look up a group of something. We'll start with several require statements. Six of these are the stream-related libraries we just installed. We'll also import our CredentialManager, our Twitter class, and our newly created batch stream. We're going to start by creating a function called doLookup. This is not tied to a specific subcommand and we'll see why in a bit. This function will take several parameters, the API we want to call, our standard name parameter, any items that were passed in on the command line, and one unusual parameter called inout. We're going to use this variable to access our standard input and output streams and setting the default value to the process object makes that easy. However, allowing this to be passed in will make mocking and testing much easier. First, we'll get our consumer key and secret, then we'll use those to initialize our Twitter client. Next, we'll retrieve our account credentials. 
We'll pass those to the Twitter.setToken method. Okay, now we're ready to get down to business. Our function will simply return a call to the promise-streams pipeline function. This function returns a promise, with any stream errors rejecting the promise. The parameters to this function call will be the steps in our pipeline, which will match the diagrams in our earlier slides. The first step in our pipeline will vary, depending on whether something was passed in on the command line or not. If it was, we'll split those items into an array and use that array to create a readable stream. If nothing was passed in, we'll pipe standard input to a stream created by split, which will split the incoming data on newlines. Now, regardless of how we started, everything else is the same. Our next stream will batch up the inputs with a batch size of 100. Each of these batches will be passed on to the stream created by parallel. This stream will invoke the following function with a concurrency of 2. We could easily raise this, or even make it a command line option, but this is enough to demonstrate the idea. The function invoked here will simply call the Twitter API we were given, appending a comma-separated list of parameters for this batch. Next in line is our flatten stream, which we'll write here using through2.obj again. We'll iterate over the API response array and output each item directly. Finally, we'll JSON stringify these results and pipe that to standard out. Okay, so we have this pipeline, now let's leverage it to actually create a lookup command. We'll define our lookup object and create an entry users, which will take in a series of arguments. This function will simply call our doLookup function, specifying the Twitter API endpoint for looking up a group of users. For the remainder of our doLookup parameters, we'll just pass on what we received here. Finally, exporting this lookup object, we have our first lookup command.
Let's write a corresponding lookup.js test. We'll start by requiring all the usual suspects and setting up chai. We'll start with a describe statement, leveraging sinon's sandbox capabilities again in our beforeEach and afterEach functions. So, if you remember when we first introduced our mocking, I talked about trying to avoid mocking standard in or out directly. Well, now we don't have much choice. Fortunately, there's a great module called stream-mock that we can install to help us with this. We'll require ReadableMock and WritableMock from this module. Next, we'll define a context for users, and in there we'll add another beforeEach function. Our first stub will be pretty simple, just stubbing our CredentialManager's getKeyAndSecret. The next line, however, will be something new. This time when we stub Twitter's get method, we're going to use callsFake to define our own function to be called instead. This one will operate on the URL that's passed in and take everything after the equal sign in the query string, split that on a comma, and map each of those entries to an object with a screen_name property. What this is basically doing is extracting the users we joined and added to the query string and turning them into a minimal response object from the API call. We can do this because the API call to users/lookup returns an array of user objects, each of which contains a screen name property. Our first test will verify that we can look up users that have been piped to standard in. We'll start by instantiating a ReadableMock with the two users we want to come from standard in, foo and bar. Next, we'll create a WritableMock to represent standard out. Now, we're going to call our lookup.users function, passing in twine-test and our mock readable and writable streams. Next, when our WritableMock has finished, we'll expect the data it received to match the user representations we mocked above. Then we'll call the done function.
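The callsFake logic just described, everything after the equal sign, split on a comma, mapped to minimal user objects, is a small pure function. This sketch shows it in isolation (the function name is ours, not the course's):

```javascript
// Sketch of the fake response builder used with callsFake: given the
// request URL, recover the requested screen names from the query
// string and return minimal user objects shaped like users/lookup
// results.
const fakeLookupResponse = (url) =>
  url
    .split('=')[1]                            // everything after the equal sign
    .split(',')                               // one entry per requested user
    .map((screen_name) => ({ screen_name })); // minimal API-shaped object
```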
Notice we're leveraging done here instead of async/await. We want to be sure this finish event gets fired and our expect line gets evaluated; that's the only way we'll tell this test we're done. So, while we've tested reading from standard in, we haven't really exercised our batching capability, since we only read two parameters from standard in. Let's add another test to submit more than 100 parameters, that way we know we have two batches, not just one. For that, we'll start by creating an array of 101 users with the names foo 0 through foo 100. Then we'll instantiate a new ReadableMock, passing in those 101 users, but not before adding a newline to each one. Next, we'll instantiate our WritableMock and call lookup.users as before. Finally, when standard out emits the finish event, we'll again compare the data written to the WritableMock with our expected output. However, this time we'll also construct this expected output dynamically based on our original array of users. Let's try our tests again. Great, everything still looks good. Okay, now let's add the easy test to verify that it will look up users passed in on the command line. We'll instantiate our WritableMock again to receive the output and we'll simply call lookup.users, pass in the two users, and just our standard out stream. On finish, we'll check our results, just as we did in the first test. Finally, just to be sure we're still handling our errors okay, let's test for that as well. We'll restore our Twitter's get method and stub it out to reject with a test error. We'll use our WritableMock here again, then we'll test our lookup.users to ensure it's rejected with our test error. Notice that here we pivoted back to async/await because we're not testing the output of a stream, but rather the promise returned by the function call itself. Running npm test again, and everything's still green. Okay, so we have our lookup.users function with all of our tests.
Now let's add the script to our bin directory, so we can actually call it. This one will look pretty much like our configure commands. We have our imports, then we handle the version option. We add our command, ensuring to pass any errors to handleError. We parse the arguments, and finally output help if we weren't passed a command. The last thing we need to do is add lookup to our main twine.js, which wires everything together. That's it. Once you've cleared out your old configstore JSON and rerun configure consumer and configure account, you'll be ready to try your new command. If I run twine lookup users paulofallon, I see the JSON for my user account returned to the console. If I try it again, passing both my account as well as Pluralsight, I get both back. Notice how the JSON is formatted in the console. JSONStream bookends our writes with square brackets to turn them into one large JSON array. Now let's try reading from standard in. We'll create a users.txt file and add those same two Twitter accounts. We can use the type command to send them to standard out. If we pipe the type command to our twine command, we see that we get the same output. Awesome. Demo: Piping from Your CLI to JQ and Back Again It's great that we can look up users, but we get this huge blob of JSON as output. What are we supposed to do with that? Well, there's a great command line utility called jq, which does an amazing job of parsing and extracting data from JSON. So for example, if we want to extract just the screen name from our twine output, we can pipe our standard out to jq with this query, which basically means for each array entry, extract the screen name. And there you can see the two screen names we started with. Notice how they're both in quotes. Well, that's because in JSON format, those values are in quotes. We can add the raw output parameter and get our data back without the quotes. The syntax of jq is very powerful, but it's also very complicated.
We can construct a new JSON object based on a subset of properties using this syntax. Although, notice that it's just giving us two separate JSON objects. If we tweak the query syntax a little more, we can get it to give us back these values as a new JSON array. There. Now I can actually query users from Twitter and extract just the information I'm looking for. If you want to try jq on Windows and you're using Chocolatey, you can install jq with choco install jq. We'll cover the other platforms at the end of this demo. Twitter's APIs follow a few patterns, which are repeated for multiple endpoints. The lookup pattern shows up again for statuses, or, as you and I might call them, tweets. With our existing doLookup function and its pipeline, it's really easy to add support for looking up statuses as well. We'll just add a new function to our lookup object, passing in the API endpoint for looking up statuses. We'll also update our twine lookup script so we can invoke this new command from our CLI. To try it out, I'm going to copy a status id from Twitter by hand. Passing this in on the command line and we get the JSON back for this tweet. Now, let's grab a second one and try passing in multiple. Yep, works just like we would expect. Just like we did with users, we can create a statuses.txt file with these two tweet ids. Piping these to our twine command, and we get the same output. Let's try looking up the statuses again and piping the output to jq with a more complex query. Each status contains a list of the users referenced in the tweet. In one tweet, I mentioned AWSreInvent, in the second tweet, I referenced Pluralsight and code. This jq query extracts all those user mentions and outputs them as a list, one user per line. Hey, that sounds familiar. Let's pipe that list into twine again and look up those users, then we'll pipe that to jq again and extract some information about those users. Wow, that's pretty cool. 
Now I have the name and location of the users that I mentioned in those tweets. Before we wrap up this demo, let's double check and be sure this still works on all platforms. Going to the Mac, we'll start by running our tests. Good. They pass here too. If you don't have it already, you can use Homebrew to install jq on a Mac. Next, after we've removed our configstore entries and rerun our configure commands, we can try the same piped set of commands as before. We still have our statuses.txt file, which we can output with cat here. If we pipe that to lookup statuses, we see the same JSON as before. Next, if we introduce jq, we can extract just the mentioned users. Finally, if we run the same long piped set of commands, we see the same output. Nice. On Ubuntu Linux, our tests still pass. You can install jq here with apt-get install jq. We still have our statuses, and piping them to lookup statuses returns our status JSON. Bringing in jq works here just like everywhere else. And finally, the long series of piped commands also works. Super. Future Integration Opportunities and Summary So, that last set of commands was pretty crazy. What did we just do? Well, we started by outputting a list of statuses. Then we used twine to look up those statuses. Jq was then able to extract the user mentions from that status JSON output. This list of users was piped back to another invocation of twine, which looked up each of those users. Then that final set of user JSON data was piped again to jq, which extracted the name and location of each user. Passing in status ids may seem odd, but what we've done is establish a set of primitives, users and statuses, with ids that can be read from standard in and full objects written to standard out. So what else could we do with this approach? Well, with a few extra commands, you could retrieve the list of your followers, find out the last time each of them updated their status, and then unfollow people who haven't tweeted in a while.
Or, go through your own tweets and delete the old ones that didn't receive any likes. You may eventually get throttled because of Twitter's API rate limits, but otherwise the sky is the limit. Because this twine project is open source and available on GitHub, you are welcome to add some of these additional commands and submit them as pull requests. I look forward to seeing what you come up with. To wrap up, in this module, we started by adding the proper error handling to our project, including setting the proper exit code. We added the capability to override our consumer and account credentials with environment variables. Then we added a couple of lookup commands that accepted input from standard in and used those to compose more elaborate scenarios. I hope you've enjoyed this module. Stick around for the next one where we'll be covering how to package and deploy your command line application. Stay tuned. Packaging and Distribution Scoped Packages, Publishing to npm, and Using npx Hello. My name is Paul O'Fallon, and welcome to the final module, entitled Packaging and Distribution. After all the work we've put into our command line application, we'll finally publish it to the world. But because we know this won't be the last time, we'll configure the app to notify users when an update is available. We'll also automate this process to make it easier to repeat in the future. Finally, we'll explore creating a Dockerized version of our application. Let's get started. Publishing to npm is very straightforward and there's little, if anything, about it that's specific to deploying a command line application. It's just two simple commands: npm login to authenticate ourselves and npm publish to upload our project to npmjs.org. There is one unique thing about our project though: we need a scoped package. This is because the name twine is already taken on npmjs.org. Therefore, we need to prefix this package with @pofallon, my npm username. This @pofallon is the scope.
There are several reasons to use scoped packages. The first is for deploying a private package to npmjs.org. All private packages are scoped. Another reason is for grouping. An organization may want to group a series of related modules together. For example, there are many Angular modules published under the @angular scope. A third reason, and the reason we're using scoped packages, is for namespacing. Not the best reason, I suppose. Finding a creative name for our application that was available and didn't require scoping would have been ideal. In any case, the scope of a module is either the name of the user publishing the module, as in our case, or an organization that user belongs to. A fairly recent addition to npm is a handy utility called npx. It specifically helps with executing npm modules designed to run as command line applications. Under normal circumstances, to use a node-based CLI, you would npm install it, and then use it. However, npx makes that even easier. You can just run npx, the package name, and the remaining command line arguments, and it will download the package from npm and execute it, all in one command. This can be helpful if you need to leverage the CLI in a shell script. You don't have to worry about whether it's already installed, just include it as a call to npx. Demo: Publishing Your CLI to npm, Running It with npx In this demo, we'll convert our project to a scoped package, which will involve fixing our commands. Finally, with that done, we'll publish twine to npm. We'll start by adding the scope to the name property in package.json. While we're here, we'll clean it up a bit and remove the main property, since the bin property is really what matters for our application. Now if we try running twine, after changing the name, it doesn't work. This is because all of our configstore work is done based on the name of the module, and now that's changed.
To fix that, we'll create a new utility function called extractName, which will simply return everything in the string after the forward slash. This will extract the name of our project without the scope. Next, we'll revisit twine configure and wrap each instance of pkg.name in a call to this new function. While we're here, we'll reformat our code a bit to match our twine lookup script. Speaking of that one, we'll make the same changes here, wrapping pkg.name in calls to extractName. One last thing while we're here: we need to go ahead and add one more section to our package.json file. Setting publishConfig access to public tells npm that even though we're using a scoped package, we really do want this to be public. Okay, running twine again and we're back in business. Now let's publish our module. First, we'll log in to npm. If you don't already have an npm account, you'll need to visit npmjs.org and create one. Here, I'll give it my username, my password, and my email address. Finally, the moment we've been waiting for: npm publish dot, and we're live. In order to try installing our newly published app, we first should unlink our development project. However, since we originally linked it under the name twine without a scope, we need to briefly change the name back, just so we can unlink it. Okay, good. Now we can run npm unlink dot, and we'll jump back and reapply the scope, so we don't forget. Trying our twine application again, we see that it's not found. So let's install it. We can run npm install -g @pofallon/twine, and there we go. Now, if we run lookup users, we see our results. Awesome. Let's uninstall it so we can try calling it with npx. Double checking to be sure it's really gone, okay, it is. Now we'll run npx @pofallon/twine lookup users pofallon. Notice that npx is installing our module and then executing it, same results. Perfect. Npx can also read from standard in, so let's try that as well. We have a text file with two Twitter user names.
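The extractName utility described above, everything after the forward slash, fits on one line. A sketch under that description:

```javascript
// Strip the npm scope from a package name so configstore keys stay
// stable: "@pofallon/twine" -> "twine". For an unscoped name there is
// no slash, indexOf returns -1, and the name comes back unchanged.
const extractName = (name) => name.slice(name.indexOf('/') + 1);

module.exports = { extractName };
```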
If we run our npx command again, piping in these user names, we see that both are returned. Great. Adding Update Notifications, Travis CI Automation One nice feature of npm, in fact, we saw it crop up in the last demo, is how it notifies you when there's an update available. It sure would be nice to have that feature in twine. Ideally, it would automatically check for a new version. And it would notify the user, not every time, but periodically. And most important of all, it should not annoy them. When we get to the demo, you're going to see this is easy peasy. There's a node module that makes enabling this feature one line of code. Also, we want to automate our releases. Me typing npm login and npm publish wasn't hard, but it would be even better if it happened without me having to do that. Let's start by anchoring our automation to creating a new tag in GitHub. These tags will coincide with new releases of our twine CLI. This tag will trigger TravisCI to test and deploy our application. Once this process is complete, npm will have the latest tagged version of our application. It's worth noting here that this process shouldn't be our only testing. TravisCI will definitely run our tests, but only on Linux. Before creating a tag in GitHub and initiating this process, we should be confident that our CLI works on all three platforms: Windows, Mac, and Linux. Demo: Automating Your npm Publish with GitHub and Travis CI In this demo, we're going to add that one line of code to check for updates, and then we'll configure our integration with TravisCI. The module we're using to check for updates is update-notifier. It's part of the yeoman project. Now, we'll go into our main twine.js script, import the module, and add our one line of code. That's it. In fact, isGlobal really isn't required since it defaults to true. But it helps to set it when we're developing via npm link. So we'll leave it this way.
We're going to exercise this feature in the next demo, since we need to publish this version to npm and then another version so that we have one to upgrade to. Let's move to TravisCI. We'll create a new file to hold our Travis configuration. We'll specify the language as node_js and the version as lts/carbon. Carbon is the 8.x version of node. We'll specify one add-on, libsecret, which is required simply to install the keytar dependency. Next, we'll visit TravisCI where I've already logged in with my GitHub credentials. We'll flick the switch on the twine repository here, then we'll commit all of our changes and push those back to GitHub. Quickly, we'll switch back over to Travis and watch it build. We see it's already started, that was quick. It starts by outputting a tremendous amount of information about the build environment. If we scroll to the bottom, we see where it's installing our lts/carbon version of node. Next, it runs npm install and npm test, which kicks off nyc and mocha. Uh-oh. We have failing tests. A lot of failing tests. What's going on? This error, cannot autolaunch D-Bus without X11 display, is due to our tests attempting to run keytar on a headless Linux instance. There are very elaborate hoops we could jump through to try and get this to work, but frankly, I'm not here to test keytar, I'm here to test twine. Let's fall back to some of our earlier principles and simply mock keytar in our tests; that will make the problem go away. Before we start, during the time I've been working on this course, the keytar project has added prebuilt support directly into their core module. So let's uninstall keytar-prebuild and simply install keytar. Cool. Moving down to our tests, we'll start with the first set of failing tests in the CredentialManager. Here we're going to do some good old mocking, like we've done before, importing sinon, the module we want to mock, keytar in this case, and lodash to help us out.
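The Travis configuration described above is small. A sketch of the .travis.yml it produces (the exact apt package name for libsecret is an assumption; check keytar's install notes for your distribution):

```yaml
language: node_js
node_js:
  - lts/carbon          # the 8.x LTS line of Node
addons:
  apt:
    packages:
      - libsecret-1-dev # needed just to install the keytar dependency
```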
We're going to create a new secrets variable, and in our before function, we're going to stub out the keytar setPassword function to call our own function instead. This function will use lodash to set a value in our secrets object. Because keytar's functions all return promises, we'll return a Promise.resolve here. Next, we'll do something similar for getPassword, although this time we'll either resolve the promise with the value we found, or reject it with an error, Missing consumer secret. We can specify that exact error because that's the only one we're testing here. Finally, we'll stub deletePassword and use lodash unset to remove the value from our secrets object, and again, return a resolved promise. In these few lines of code, we've basically written an in-memory version of keytar. Now, going down to our after function, we'll restore each of those stubbed functions. Okay, let's go visit our other set of failing tests, our configure command. Here we'll be doing almost the exact same thing, except we only need to import keytar and lodash. We'll add our secrets object and then the exact same three stubs as before. This should probably be extracted out into some sort of in-memory keytar mock module, but for now, we'll just leave it here in these two tests. Okay, now let's commit these fixes and push them back to GitHub. Switching over to Travis, and our new build has already begun. If we scroll down to see the results, all our tests pass. Awesome. Now that we've established our integration with Travis and have a passing set of tests, let's begin configuring it to do our npm deployments for us. We're going to use the Travis CLI to help with this. You can install it with these instructions here. The first command we'll run is travis login, which prompts for my GitHub username and password. The Travis CLI has explicit support for configuring npm deployments, so next we'll run travis setup npm. It's going to prompt me for my email address and an npm API key or token.
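The in-memory keytar the narration builds with sinon stubs and lodash can be sketched as a plain object whose three methods return promises, just like the real module. The method names match keytar's API; the factory function name and Map-based storage are ours:

```javascript
// An in-memory stand-in for keytar: the same three methods, each
// returning a promise like the real module. (The course wires this
// up with sinon.stub + lodash; this is a dependency-free sketch.)
const makeKeytarMock = () => {
  const secrets = new Map();
  const key = (service, account) => `${service}:${account}`;
  return {
    setPassword: (service, account, password) => {
      secrets.set(key(service, account), password);
      return Promise.resolve();
    },
    getPassword: (service, account) => {
      const value = secrets.get(key(service, account));
      return value !== undefined
        ? Promise.resolve(value)
        : Promise.reject(new Error('Missing consumer secret'));
    },
    deletePassword: (service, account) => {
      secrets.delete(key(service, account));
      return Promise.resolve();
    }
  };
};
```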
You can create a token on npmjs.org by going to your profile and choosing the Tokens tab. Here we'll create a new token for read and publish, since we want to use this token for publishing to npm, and we'll copy it to our clipboard. Next, we'll paste this into our console and answer yes to only releasing tagged commits. Remember, in our earlier diagram, we stated that a GitHub tag is what we'll use to kick off our deployment process. We definitely want to answer yes to the next question because we don't want deployments coming from any other fork of this project, just this one. And finally, yes, we want to encrypt the API key. Looking at the results of this command in our yml file, we see a new deploy section, which includes our encrypted API key. However, it also has my email here, unencrypted. The support for encrypted values extends beyond just the API key, so I'm going to apply the same process to my email address. First, let's remove the value from the yml file, and then use travis encrypt to generate an encrypted value from my email address and add it as the deploy.email property in the yml file. Now you can see we have two encrypted values, one for the API key and another for email. I like that better. There's no reason for us to include this yml file when we deploy to npm, so let's create an .npmignore file and specify the .travis.yml file here. While we're at it, let's include .vscode and our tests as well. Okay, let's bump the version number and get ready to try our automated deployment. We'll commit and push our changes to GitHub.com. And TravisCI immediately kicks off our new build. Our tests still pass. Good. Notice that extra message though: the deployment was skipped because that was not a tagged commit. Let's go back and create a new release to generate a tag. Back on GitHub, we'll do just that. We'll create a 1.0.4 release, targeting the master branch and giving it the name initial release.
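After both values are encrypted, the deploy section of the yml file looks roughly like this. This is a sketch following Travis's npm provider layout; the encrypted strings are elided, and the repo slug is an assumption:

```yaml
deploy:
  provider: npm
  email:
    secure: "<encrypted email>"
  api_key:
    secure: "<encrypted npm token>"
  on:
    tags: true          # only deploy tagged commits
    repo: pofallon/twine # and only from the original repository
```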
Now, if we check the tags section, we'll see that this release also created a corresponding tag with the same name. Jumping back to Travis, it's already begun working on our tagged commit. Notice the tag value in the build description. Scrolling down, we see our tests pass again, but this time the build doesn't stop. We see some extra steps, including deploying applications. Expanding this, we see that it successfully deployed our project to npm as version 1.0.4. If we go back to npmjs.org and look up our twine module, we see it here, as version 1.0.4, just as we would expect. Awesome. Creating a Docker Image of Your Command Line Application For our final segment, we're going to look at another distribution option for our CLI, shipping it as a Docker image. While this likely wouldn't be the only way you distribute your application, you may encounter some scenarios where this is a viable option. Building and publishing the image is a straightforward task. However, with the keytar issues we saw in our first Travis build, the most straightforward method here will be to use our environment variables support. So, for that, we'll create an env.list file to hold our keys and secrets. Finally, we can invoke docker run, pass in this environment list, and invoke our application, just as before. Another option would be to pass each environment variable to the docker run command individually, but for such long values, this is an easier way to demonstrate it. We should also update our automated deployments to build and publish this Docker image as well. We can edit our Travis config to publish to both npm, as well as Docker Hub. Let's give that a try now. Demo: Creating a Docker Image, Updating Your Travis Automation In this demo, we'll create a Dockerized version of our CLI. Next, we'll update Travis to do this for us automatically. Oh, and we'll check back in on our automatic update notifications.
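The env.list file is just one name=value pair per line, which docker run consumes via --env-file. The variable names below are illustrative only; use whatever names your CLI's environment-variable support actually reads:

```
# env.list -- one name=value per line; docker run --env-file ignores
# blank lines and lines starting with '#'.
# NOTE: these variable names are illustrative, not the course's exact names.
TWINE_CONSUMER_KEY=xxxxxxxx
TWINE_CONSUMER_SECRET=xxxxxxxx
TWINE_ACCOUNT_TOKEN_KEY=xxxxxxxx
TWINE_ACCOUNT_TOKEN_SECRET=xxxxxxxx
```

With that in place, something like `docker run --env-file ./env.list pofallon/twine lookup users pofallon` (image name assumed) runs the CLI with those credentials.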
Our Docker image will be based on the slim version of node 8 and will define an argument for the version of twine that we want to use in this image. Next, we'll install libsecret, just like we did for our TravisCI build. Even though we're not going to use keytar, we can't even install our app on Linux without libsecret. Next, we'll run the install of our CLI. Notice that we're not copying files from our project directly into our image, but simply installing from npm. This approach assumes that the version of twine we want is already out on npm. Also, because of how npm handles installing global packages as the root user, we need to provide an additional argument, unsafe-perm=true. Finally, we'll install twine, but use our version argument to target a specific version from npm. We'll define our entry point as simply our twine command. Okay. Now we can build our image, being careful to supply the version argument and the latest tag for our image. Next, to test our image, we'll create our env.list file. These consumer values are the same ones we've been using throughout the course, but where did I get the account values? In the case where a Twitter developer wants to run their own app as their own user, you can retrieve these account credentials from apps.twitter.com. On the Keys and Access Tokens page, if you scroll down to the bottom, you'll see an access token and secret. These are associated with a specific user, your Twitter account, and can be used in twine as the account credentials. So, with our image built and our credentials in place, let's try to run twine from a Docker image. Wow, it works. Nice. Next, let's add this deployment to our Travis process. We'll leverage the travis encrypt command to add these two environment variables to our file: one for my Docker Hub username, and another for my Docker Hub password. Going back, we now see two new entries in our yml file.
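Putting the narration together, the Dockerfile might look roughly like this. It is a sketch: the base image tag and apt package name are assumptions consistent with the description:

```dockerfile
# Base on the slim variant of Node 8
FROM node:8-slim

# keytar can't even be installed on Linux without libsecret,
# even though we never use it inside the container
RUN apt-get update \
 && apt-get install -y libsecret-1-dev \
 && rm -rf /var/lib/apt/lists/*

# which published version of twine to bake into this image
ARG version

# npm installs the global package as root during the build,
# hence --unsafe-perm=true
RUN npm install -g --unsafe-perm=true @pofallon/twine@${version}

ENTRYPOINT ["twine"]
```

Building then looks something like `docker build --build-arg version=1.0.4 -t pofallon/twine:latest .`, supplying both the version argument and the latest tag.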
Interestingly, notice how not even the name of the environment variable is listed; it's all in the encrypted value. Next, we'll tell Travis that we need the Docker service in our build. Now, Travis doesn't explicitly support deploying to Docker Hub like they do with npm, but they do have support for a script deployment, which will run a script of our choosing. This means we need to write a shell script that will deploy our project to Docker Hub. We'll create a scripts directory and one file called docker-deploy.sh. On the first line, we'll start by logging into Docker Hub with the credentials we just encrypted. Then we'll execute a docker build command, very similar to the one we just ran locally. However, notice that we're able to use a special Travis environment variable called TRAVIS_TAG, which is set to the tag associated with this build. As long as we name our tags correctly, this will get passed to npm, which will use it to target the right twine version for our Docker image. Next, we'll tag this new image with our twine version. Finally, we'll publish both the latest and our versioned image to Docker Hub. Let's also add our scripts directory to .npmignore, since that doesn't need to be published there either. Okay, back to our travis.yml file. We'll first need to ensure that our shell script is executable. This bit me earlier, maybe because I'm committing to git from Windows, but in any case, it's good to be explicit here just to be sure. Now we're going to add a second section under deploy. The provider in this case is script, and the script itself is bash scripts/docker-deploy.sh. We'll skip cleanup for this deployment and also configure it to run only on tagged releases from the original repository. Because we've added a second entry, let's update our npm provider, indenting it as necessary. Okay, we're ready to give it a try again. Let's bump our version number, commit everything except our environment list back to GitHub, and push it.
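The docker-deploy.sh script described above might be sketched like this, for CI use only. The DOCKER_USERNAME/DOCKER_PASSWORD variable names, the image names, and the stripping of the tag's leading "v" are assumptions layered on the narration:

```shell
#!/bin/bash
# scripts/docker-deploy.sh -- build and publish the Docker image from
# Travis. Credential variable names and tag handling are assumptions.
set -e

# log in with the credentials Travis decrypts into the environment
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"

# TRAVIS_TAG holds the git tag for this build (e.g. v1.0.5);
# strip a leading "v" to get the npm version to install
VERSION="${TRAVIS_TAG#v}"

# build against the version just published to npm
docker build --build-arg version="$VERSION" -t pofallon/twine:latest .

# tag and push both the versioned image and latest
docker tag pofallon/twine:latest "pofallon/twine:$VERSION"
docker push "pofallon/twine:$VERSION"
docker push pofallon/twine:latest
```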
Now, we'll go and create another release. Looking back, I really didn't do a good job the first time; I should have established a better pattern. Let's do that now. We'll prefix the release with a v, just like the npm version command does, and we'll also use that as the title. We'll note our changes in the description area. Going back to Travis, and we're off to the races. Scrolling down to the bottom, we see that our chmod command executed successfully. Jumping to our second deployment, we see it pulling down the base image, building our Docker image by installing libsecret, and then using our tag to install twine version 1.0.5. Finally, we see it pushing the new image up to Docker Hub. Going back just to be sure, we also see the same version published to npm, which technically must have succeeded, since we just used it to build our Docker image. Checking npmjs.org, we see our version 1.0.5, just as we would expect. Jumping over to Docker Hub, we see our twine image there as well. Looking at the tags, we see version 1.0.5, as well as latest, both published just 2 minutes ago. Now, let's remove our local twine Docker image so we can test downloading it from Docker Hub. We'll use the docker images command to get the ID of our image, and docker rmi to remove it. Running docker images again, we see that the image is gone. Using our same environment list with docker run again, this time, since it can't find the local image, it downloads it from Docker Hub and executes it. Before we leave Travis behind for good, let's take a quick look at another way to handle the sensitive information in our yml file. Right now, we have four encrypted entries: our npm api_key and email, and two environment variables holding our Docker credentials. Rather than store these long, encrypted strings in our source repository, we could instead enter them in the Travis UI.
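The local clean-up and re-test described above comes down to a few commands; the "username/twine" image name is an assumption.

```shell
docker images                        # list images and note the twine image ID
docker rmi <image-id>                # remove the local copy
docker images                        # confirm it's gone
docker run --env-file env.list username/twine   # no local image, so it pulls from Docker Hub
```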
Under the settings page for our twine application, there is a place to enter environment variables that are made available to each build. We can add those same four entries here. Now, going back to our yml file, we can replace the npm api_key with its environment variable, and then do the same for my email address. We can simply remove the two env global entries holding the Docker credentials, since those will now already be exposed as environment variables. That's it. This build will work exactly as it did before. Okay. Let's verify that this works on a Mac like we expect. Also, we'll use this as a chance to test our upgrade notifications. We'll unlink our project, and then we'll install version 1.0.4 of twine. This is the first version that introduced upgrade notifications. Now, if we run twine to look up a user, we get a pop-up saying that Node wants access to our keychain. This is keytar trying to access our consumer and account secrets. Now, let's look at the file that update-notifier uses to track its work. Notice that update-notifier uses configstore too. Because update-notifier only checks once a day by default, let's adjust this last update check timestamp so that it'll check again now. Repeating our command a couple of times, and there it is: it knows that there's a version 1.0.5 out on npm and suggests we upgrade to it. All that from one line of code. Running the suggested command, we're now at the latest version. Let's do one last check on Ubuntu Linux as well. We can install our package, just as we did everywhere else. I should say, though, that I'm using a local version of Node installed with nvm. I did have issues with keytar when trying to install the app using the package manager's installation of Node. Running twine again, and it works here too. Great. Summary So, in this module, we've modified our application to use a scoped package name and then we successfully published it to npm.
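The once-a-day behavior we just worked around boils down to comparing a stored timestamp against an interval. Here's a stdlib-only sketch of that logic; the real update-notifier package persists lastUpdateCheck through configstore, and the field names here mirror its config file but are otherwise an assumption.

```javascript
// Minimal sketch of update-notifier's once-a-day check, using only
// the Node standard library. Resetting lastUpdateCheck to 0 (as we
// did above) forces a fresh check on the next run.
const ONE_DAY = 1000 * 60 * 60 * 24;

function shouldCheckForUpdate(config, now = Date.now()) {
  // Check npm again only if the configured interval has elapsed
  const interval = config.updateCheckInterval || ONE_DAY;
  return now - (config.lastUpdateCheck || 0) >= interval;
}

console.log(shouldCheckForUpdate({ lastUpdateCheck: 0 }));          // true
console.log(shouldCheckForUpdate({ lastUpdateCheck: Date.now() })); // false
```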
With that in place, we added some automation, configuring Travis to publish to npm for us every time we create a new release in GitHub. We found a nice one-line solution to notify the user when updates are available. And for good measure, we created a Docker image from our CLI and published it to Docker Hub. This wraps up our entire course on building command line applications in Node.js. I've really enjoyed going through this with you, and I hope it's been beneficial to you as well. Course author: Paul O'Fallon. An Enterprise Architect by day and an open-source contributor by night, Paul has more than 19 years in the Information Technology industry spanning academic, start-up, and enterprise environments. Course info: Level Intermediate, Duration 3h 2m, Released 5 Jun 2018.