Building Command Line Applications in Node.js
by Paul O'Fallon
Most of us use command line applications in our jobs every day. This course will introduce you to the basics of building a CLI in Node.js, including managing configuration, interacting with the user, and distributing your finished product.
Table of contents
Setting up Your CLI Project
A Brief Overview of Command Line Applications
Hello, my name is Paul O'Fallon, and I'd like to welcome you to the course Building Command Line Applications in Node.js. In this module, we'll discuss the fundamentals of command line applications and why you might want to build one yourself. Then we'll dive right in and get started with the sample project we'll build together throughout the course. So what is a command line application? In the simplest terms, it's an application you invoke from a command line, of course. You might invoke these from a command prompt on Windows or from the prompt of a Unix shell. These programs are also called commands, a command line interface, or just a CLI. In this course, I'll most often refer to them as CLIs; plus, it fits better on the slides. Command line applications are at the heart of the Unix philosophy of doing one thing and doing it well. They're not large, complicated monoliths that do everything under the sun. In a way, command line apps are microservices, before microservices were cool. They're simple, concise applications that can be stitched together in a variety of ways to do new and interesting things. Some of these common execution patterns include combining multiple CLIs together in a single batch file or shell script. You can also use something like cron on Mac or Linux or schtasks on Windows to run a command line application on a schedule. It's also very common to take the output of one command and use it as the input to another command; this is known as piping. Finally, you can, of course, redirect the output of a command to a file. Likewise, you can use a file as input to a command line application. I can almost guarantee that you're already using command line applications today. They are very common in developer tools. If you're a front-end developer, you may use tools like Gulp, Grunt, Yeoman, or Webpack. A server-side developer might use Maven, Gradle, or the .NET CLI. Most cloud vendors provide CLIs for interacting with their infrastructure as well. 
This includes the AWS, Azure, and Heroku CLIs. Finally, there are also command line desktop tools, like Homebrew on a Mac or Chocolatey for Windows. Even with all that, if you're still one of the few who hasn't yet leveraged your command line, now is a great time to start. The reach of the CLI can be near or far. The command you run may just do its work directly on your computer, for example, a Java compiler or a bundler, like Webpack. Other CLIs may reach out to a database or repository within your organization. An example of this might be a source control command that checks code out of a company repository. However, many of the most interesting and useful CLIs reach out to APIs or other services over the internet. And finally, of course, a CLI may do some combination of all three. So, we've talked a lot about existing CLIs, but why consider building your own? Well, there may be a few reasons. First, they can aid in developer productivity. Maybe there's a website that you, or others in your organization, interact with frequently to perform a very routine task. If this website has an API, you can write a CLI to help make that interaction more efficient and easily repeatable. Likewise, you may need to integrate two systems together. For example, let's say you want to post errors from your build server to your favorite chat program. If there's not an explicit integration between those two, but the build system can run commands, then having a command to post messages to your chat program would be a great way to integrate those two systems together. And who knows, maybe you have an idea for the next great Webpack or Maven or Homebrew. Regardless of the reason, this course will equip you to turn those opportunities into reality. Throughout this course, we're going to be implementing a command line application from scratch. Our CLI will be called twine and it will be a tool for interacting with Twitter from the command line. 
There are some Twitter command line tools already out there, but this makes for a reasonable real world example. You might use a tool like twine if you have a common task of searching Twitter for tweets about your company. You might also use it to integrate with a monitoring tool, to post system availability or outage notifications to a status-oriented Twitter account. We'll only be scratching the surface, but you can read up on the details of Twitter's API at developer.twitter.com.
Demo: Registering Your Sample Application with Twitter
Before we can call any of Twitter's APIs, we need to get set up on their platform. First, we'll look at creating a Twitter account, if you don't have one already, and then we'll define an application. It's this application which will allow us to retrieve an API key and secret. Okay, let's dive right in. If you already have a Twitter account, you don't need to create a new one just for this course. I'm going to use my @paulofallon account for this course, so I don't need to create a new one. Before we define our application, let's swing by developer.twitter.com. We're going to go over to More and choose Pricing. Scrolling down, we see the list of features we have access to for free. We'll only be leveraging these free features during this course. So any API calls we make will come from this light blue column. Okay, now it's time to configure our application and retrieve our credentials. For that, we'll visit apps.twitter.com. Clicking on the Create New App button, we're taken to a page where we can enter the information about our application. The name of your application must be unique across all of Twitter. Unfortunately the name twine was already taken, so I chose PSTwine. You'll need to pick something different that is unique to you. Next, we'll enter a description and then a website. If you notice the instructions here, we can use a placeholder for now, which is what I just did. This website doesn't actually exist. We don't need a callback URL since we're building a command line application, not a website. Finally, you should examine the Twitter Developer Agreement, click the checkbox, and then the Submit button. Great, now we've successfully defined our application within Twitter. Scrolling down here, we see our access level, which is both read and write. And that's what we want. Next is our API key. More on that in a minute. Towards the bottom, we see a series of URLs. These are the URLs we'll be invoking from our command line application to retrieve a token. 
We'll need this token to call the Twitter API. Clicking on the Settings tab, we see much of the same information we supplied when we created our application. Nothing really interesting here. On the Keys and Access Tokens tab, things get interesting. Here, we see both our API key and secret. Even though I'm showing you mine here in the browser, you should protect your key and secret; treat them like you would a username and password. My credentials here will be deactivated before this course is published. Scrolling down, you'll see that you can regenerate your key and secret if you think they've been compromised. And you can also generate an access token. Remember those URLs we saw on the previous tab? We'll be using those to create our tokens, so we don't need to do that here. Finally, on the Permissions tab, we can see our read and write permission, and since we don't need any special permissions, there's nothing to change here. That's it. Our application has been created and our key and secret are ready to use. We'll come back here to get those credentials and use them in a later module.
What Makes a Node.js Project a Command Line Application?
If you have experience writing Node.js projects, those have most likely been web applications, maybe using the Express framework or deployed as an AWS Lambda. So what makes a Node.js project a CLI? First, there's a special package.json property called bin. Values of this property are treated as executable scripts and, when the package is installed globally, are placed into the path so they can be run from any directory. The next item is more of a convention than it is a hard and fast rule. Many Node.js projects with complex command line requirements store these scripts in a bin directory within the project. It's not strictly necessary. You can point the package.json bin property to any script within your project. However, it's a nice way to let the structure of your project segment the scripts, which are meant to be executed directly. When we put these two things together, what will we have? Well, we'll have a command twine that we can execute from anywhere on our machine. Let's go ahead and get started with our project and set that up.
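To make that concrete, here's a minimal sketch of what such a package.json might look like. The name, version, description, and script path are placeholder values for illustration; the key is the bin property, which maps the command name twine to a script in the project:

```json
{
  "name": "twine",
  "version": "1.0.0",
  "description": "A CLI for interacting with Twitter",
  "bin": {
    "twine": "bin/twine.js"
  }
}
```

One detail worth knowing: the script the bin entry points to should begin with the shebang line `#!/usr/bin/env node` so the operating system knows to run it with Node.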
Demo: Initializing Your Node.js Project
Okay, that's a good start. Let's wrap up this module. We started out by examining some common CLI execution patterns, such as piping to other commands or redirecting to a file. We then discussed why you might want to build your own CLI, maybe you have the next great idea or just want to improve a process within your own organization. We logged into Twitter and defined our application, which gave us our API key and secret. Finally, we initialized our Node.js project and linked it so we can continue to run it from the command line as we build it out. In the next module, we'll look at how to manage the configuration of our CLI and how to prompt the user for this information. We'll also start writing some tests. Stay tuned.
A Brief Overview of Configuration Information
Demo: Storing the Twitter API Key and Secret in Your Project
In this demo, we're going to introduce a Credential Manager module to our twine application. This will retrieve our API key and secret from our configuration. And if they don't exist, it will prompt the user for these values. If you're just joining us and didn't start with module 1, that's okay, you can visit the project source repository on GitHub, switch to the module-2-start branch and clone the project to your local computer. This will bring you up to speed with everything we've done so far in the course. Okay, first, we'll create a new directory called lib, which will hold most of the code in our CLI. We'll then create our first file here, called credential-manager.js. Uh-oh, if you're using Visual Studio Code with ESLint running, like I am, you may see an error similar to mine below. Before we start writing our code, let's make ESLint happy and get this configured correctly. We can do that by running the eslint command described here, but first let's add it to our project. We can run npm install to add ESLint, while being sure to add --save-dev so that it's only added as a development dependency. Funny enough, ESLint is its own node-based CLI with a bin property in its package.json, just like we discussed in the first module. Because of this, when we added it to our project, npm created an ESLint command in our project's node_modules\.bin directory. We can run our ESLint init command directly from there. Of course, if you have ESLint installed globally on your system, you can run it without specifying the path, but I prefer to keep everything located within the project whenever possible. I don't have a specific opinion about coding style, so I'll just pick standard and JSON as the file format. Okay great, we have our ESLint configuration. In my experience, you need to restart Visual Studio Code to pick up these changes, so I'll do that now. Nice. Now ESLint is up and running. Revisiting our original twine.js, it's already picked up some issues. 
It doesn't like our double quotes, so we'll replace them with single quotes. Next, we'll add a new line to the end of the file. Okay, great, everybody's happy. Going back to our credential-manager, we'll start by importing the Configstore module. With that in place, we'll create our CredentialManager class by giving it a constructor that takes a name parameter. We'll use this name when instantiating our Configstore instance. Configstore uses this name under the covers to name the JSON file that holds our configuration information. We'll export our newly created class and, there you have it, the very beginning of our first module. Before we get too far though, let's not forget to add Configstore to our project. We'll npm install it now. Great. The first method we'll add to our class is getKeyAndSecret. This will be responsible for returning the key and secret and prompting the user if we don't already have them. We'll start by looking for the apiKey in the Configstore. If we find it, then we'll also fetch the apiSecret and return both. Notice that we're returning them as an array. We'll see why in a minute. Now, we need to take care of the case where we don't find the apiKey in the Configstore. For that, we'll introduce inquirer. So let's install it now. Good. We can call the inquirer prompt function and pass it an array of questions we'd like it to ask the end user. Each question takes a type, to indicate the type of input being solicited, a name, which will be assigned to this value in the results, and a message property, to indicate what should be presented to the user when asking for this piece of information. Notice that in this case we have two different types, input and password. The latter will ensure that whatever the user types is not sent to the display for security purposes. We can then use the outcome of this function to set our two Configstore values, apiKey and apiSecret. Finally, we'll return these two values as an array just like before. 
One thing I glossed over here though is the await keyword. The prompt function returns a promise and so we've added the await keyword here. However, we still need our corresponding async keyword. So we'll add that to our method definition here. Finally, we'll import inquirer and we should be good to go. With our credential-manager in place, let's revisit our twine.js file and update it to use our new module. We'll start by importing our CredentialManager and instantiating it, passing in the name twine. Next, we'll call our getKeyAndSecret method and destructure the results into two variables, key and secret. I really like this approach and this is why we returned the values as an array in our method. Finally, we'll output these two values to the console. It would be great if we could stop here, but notice that we also added the await keyword to our getKeyAndSecret method call, since it too returns a promise. We can't have an await without an async. So how do we pair those up here? Well, one way is to wrap this code in a function we'll call main. This main function will be defined with the async keyword. Now we can just call our main function and handle the promise that is returned in a more traditional way. For now, we'll just catch any errors and write them to the console. Okay, with all that in place, let's try running our application and see how it works. If you didn't start with us in module 1, you'll first need to run npm link with a dot or period to indicate the current directory. This will allow you to run twine as a command from anywhere on your system. Since I did that in module 1, I'll just run twine here. Notice that since we haven't previously specified a key and secret, we're being prompted by inquirer. Notice too, that when I enter the secret, the results are not printed to the display, that's pretty cool, and would actually be kind of a pain to do on our own. Once I've entered the values, they're logged to the console. Nice. 
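The async main wrapper described above can be sketched like this. The getKeyAndSecret function here is a hypothetical stand-in for the real CredentialManager method, just so the pattern runs on its own:

```javascript
// Hypothetical stand-in for CredentialManager's getKeyAndSecret,
// which resolves to a [key, secret] array.
async function getKeyAndSecret () {
  return ['my-api-key', 'my-api-secret']
}

// Wrap the top-level logic in an async function so we can use await...
async function main () {
  const [key, secret] = await getKeyAndSecret()
  console.log(`Key: ${key}, Secret: ${secret}`)
  return [key, secret]
}

// ...then invoke it and handle the returned promise in the traditional
// way, catching any errors and writing them to the console.
main().catch(err => console.error(err))
```

The array return plus destructuring is what lets the caller pick up both values in one line.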
Now let's run it again, having previously defined our key and secret. Awesome. It just finds them and outputs them to the console, just like we wanted. So where is this actually storing the information? It's in my home directory in a .config\configstore directory in a file named twine.json. This twine is the same name we passed into our credential-manager constructor. Looking at the contents, it's pretty straightforward, a JSON object with two properties, apiKey and apiSecret. Super simple.
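For reference, the twine.json file that configstore writes ends up looking something like this (the values here are placeholders, not real credentials):

```json
{
  "apiKey": "xxxxxxxxxxxxxxxx",
  "apiSecret": "xxxxxxxxxxxxxxxx"
}
```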
Mocking and Testing Command Line Applications
Okay, now that we've started writing some real code in our sample CLI, we should really add some tests. To do that, we'll leverage three popular Node.js projects. We'll use Mocha for our test framework, the Chai assertion library, and Sinon for our mocks and stubs. Thinking about how we're handling user input, we have our code, which is using the inquirer module, which in the end is reading from process.stdin. Our code, of course, is what we want to test. Early on I focused on trying to mock process.stdin because I thought this would be the easiest and most straightforward. It actually turned out to be quite the opposite. After many failed attempts, I realized it was much easier to mock inquirer's prompt function. This makes more sense anyway, since we're not interested in testing inquirer, just our code. Let's stop now and add some tests to our project.
Demo: Adding Tests to Your Sample Project
Handling Sensitive Configuration Data
We've made good progress so far, but something's not quite right. In this module's first demo, we looked at the JSON file underneath Configstore and saw that it was saving our API key and secret in plain text. Now, if you go back to module 1, you'll remember I specifically said you should treat your API key and secret like a username and password. We wouldn't want to store our password in plain text on the file system. Likewise, we shouldn't store our secret that way either. We need a way to store sensitive configuration data in a more secure fashion. Each operating system offers its own way to address this concern. Mac OS has its keychain, Linux has libsecret, and Windows has Credential Manager. Fortunately, there's a Node module that provides easy access to all three. Keytar allows you to manage passwords in a system's keychain, providing a consistent interface across each of these OS-specific solutions. Notice too that this module is part of the Atom text editor project sponsored by GitHub. Let's give keytar a try in our sample CLI.
Demo: Adding Your Twitter API Secret to the Keychain
In this demo, we're going to update our Credential Manager module to securely store our API secret using keytar. To do that, we're going to start by adding a module to our project called keytar-prebuild. Installing the keytar native module directly requires additional development tooling to build the module. As CLI developers, this is probably fine for you and me, but depending on how you distribute your CLI, you could be passing on this build requirement to every one of your CLI users as well. Installing keytar-prebuild means that we can skip the build part by downloading a prebuilt binary specific to our OS. Note here, you can see, we installed a prebuilt keytar module for Windows. Okay, let's go back to our credential-manager module and import keytar-prebuild. First, we'll hold onto the name passed to our class constructor since we'll need it later for storing and retrieving from keytar. To store our API secret in keytar, we simply need to replace the Configstore set function with keytar's setPassword. Keytar requires some additional information, a service, which for us is the name passed to the constructor, and an account, which is our apiKey. These two pieces of information, along with the password, or apiSecret in our case, will store the sensitive information in the underlying keychain. Notice too that setPassword returns a promise, so we'll need another await keyword here. Likewise, we can retrieve the apiSecret from the keychain with the getPassword function. Here we pass the same service and account that we used when calling setPassword. This too returns a promise, in fact, all the keytar functions return a promise. So we have another await keyword here as well. Okay, so let's run our tests again. Great, they still pass. That's what we like to see. Since we're running on Windows, let's check out the system Credential Manager and look for our apiSecret. Yep, there it is. 
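The shape of those keytar calls, setPassword, getPassword, and deletePassword, each taking a service and account and returning a promise, can be sketched like this. Since the native keytar module can't be assumed everywhere, the keytar object below is an in-memory stand-in with the same signatures, and the storeSecret/getSecret/clearSecret method names are simplifications invented for this sketch (the course's real methods are getKeyAndSecret and friends):

```javascript
// In-memory stand-in for keytar; the real module's functions share
// these signatures and also return promises.
const keytar = {
  secrets: new Map(),
  async setPassword (service, account, password) {
    this.secrets.set(`${service}:${account}`, password)
  },
  async getPassword (service, account) {
    return this.secrets.get(`${service}:${account}`) || null
  },
  async deletePassword (service, account) {
    return this.secrets.delete(`${service}:${account}`)
  }
}

// Sketch of the CredentialManager change: the secret now goes to the
// system keychain, keyed by service (our app name) and account (apiKey).
class CredentialManager {
  constructor (name) {
    this.serviceName = name // held onto for the keytar calls
  }
  async storeSecret (apiKey, apiSecret) {
    await keytar.setPassword(this.serviceName, apiKey, apiSecret)
  }
  async getSecret (apiKey) {
    return keytar.getPassword(this.serviceName, apiKey)
  }
  async clearSecret (apiKey) {
    await keytar.deletePassword(this.serviceName, apiKey)
  }
}
```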
There's some discussion on the internet about how secure the Windows Credential Manager is, especially since we're storing our apiSecret without any sort of hashing or added encryption. However, without getting into that debate, this approach is certainly preferable to storing our sensitive information directly on the file system. Also, each operating system will have its own unique approach for delivering this keychain functionality. Now that we've added keytar to our project, we should update our tests as well. Our after function needs to be updated to remove the secret from keytar. Instead of interacting with configstore directly from our test, let's call a new clearKeyAndSecret method in our CredentialManager class. Since this returns a promise, we'll need to be sure to add await and async. Now let's go back and create this new method. We'll retrieve the API key from configstore and then we'll delete it. Next, we'll call the keytar deletePassword function, passing in our service name and key. Running our tests again, and they still pass. Great. Just as in module 1, let's take a minute before we wrap up and verify our changes on some different operating systems. With the latest code on a Mac, we'll run npm install. Notice that this time keytar-prebuild installs the binary for Darwin. Running npm test, we see that our tests pass. Since we've already run npm link here, we can run twine and give it an API key and secret. If we check the Mac OS keychain, we see our twine entry. This time, let's add a third operating system and try it on Ubuntu Linux. Here if we run npm install, we'll see the Linux version of keytar-prebuild. npm test passes here as well, that's good news. Let's run npm link so we can try running twine interactively. Entering our API key and secret, we can see that inquirer has hidden the password on all three operating systems, across both the Windows command prompt and the Unix shell. That's a great piece of work. 
If we run the Passwords and Keys app, we see our twine API key here as well. Perfect.
We're at a good stopping point, so let's wrap up this module. We started with an overview of the common types of configuration information. We then discussed how this configuration data is typically stored on a per user basis. We added configstore to our application as a way to store the API key and secret. We then migrated our API secret to use keytar, which securely stores the sensitive information in the operating system's underlying keychain. And of course, we added our tests. Stick around for our next module where we'll dive much deeper into our user interaction models. Stay tuned.
Interacting with the User
CLI Option and Command Patterns
Demo: Adding Commands to Your Project, with Test Coverage
Implementing Twitter's PIN-based OAuth
Using our twine application requires three steps. First is to configure the consumer API key and secret. Next is to use these credentials to retrieve an account token and secret. Finally, with these account-specific credentials, we can invoke the Twitter API. We introduced this consumer API key and secret in the previous module, while we'll address the account token and secret in this module. In a subsequent module, we'll begin adding support for invoking various Twitter APIs with these account credentials. It may seem unusual that we have two sets of credentials here, and in a way it is. The first set of credentials are specific to the application, our twine application in this case, while the second set are specific to a user or account. Thinking about it another way, if twine were a web application and not a CLI, the consumer key and secret would be configured once by the app developer at deploy time. The account credentials, on the other hand, would be generated for each user of the web application. Because we're both the app developer and the user, in this case, we have both. In an ideal world, we'd be able to ship our CLI with the consumer key and secret already configured, however, since these could be extracted and abused, we need to rely on each user of our CLI to start with their own set of consumer credentials. Twitter provides many different types of authentication for third party developers. One less common approach, but perfect for our use case, is PIN-based OAuth. In fact, Twitter's support for this method is specifically why I chose them for this course. As you can see here, PIN-based OAuth is intended for applications which cannot access or embed a web browser. In fact, they even mention command line applications specifically. Any type of OAuth involves a lot of back and forth, and PIN-based OAuth is no different. In this sequence diagram, we have the user, our twine application, the user's web browser, and Twitter itself. 
This may seem intimidating, but let's walk through it. The user runs our twine application with a new command, configure account. This uses the consumer key and secret to invoke Twitter's request token API endpoint. This call returns a token and secret to be used for the next step. Here twine opens a browser window, pointing to a special Twitter authorize URL. The user views this webpage, provides their username and password, and authorizes our twine application. In return, they are shown a PIN. When the user returns to our twine application, they can enter this PIN, which we will then use to invoke an access token endpoint, giving us a set of credentials specific to the authenticating user. In the end, a process like this is designed to provide us with credentials we can use to invoke APIs on behalf of a user, without ever having to access their actual username and password. The PIN part is a handy way to do this when our application is not a web application. One comment that should have given you pause, however, is that "twine opens a browser window." How are we going to do that? Once again, it's npm to the rescue. The opn module provides a cross-platform way to open files, or URLs in our case, in their default application. As you'll see in a minute, this is exactly what we need. Finally, it sure would be helpful if we could find an existing library for invoking the Twitter API. However, I found this experience to be quite the opposite of our earlier forays into npm. This issue from one of the libraries I found is a good example. There really wasn't a well maintained library that was a good fit for our use case. Either they didn't support PIN-based authorization, there were lots of open issues, or the library was no longer maintained. This is a good counterbalance to our earlier experiences with npm. Don't feel like you have to use something from npm. There may be cases where it's not much more effort to just add it yourself. 
That's what we're going to do for our Twitter API client.
Demo: Adding Another Command
In this demo, we'll implement a simple Twitter API client for use in our command line application. We'll then use this to add a second command to our CLI. In doing so, we'll implement Twitter's PIN-based authorization. Before we're done, we'll take a look at running twine in the node debugger. Let's dive in and get started. In order to call the Twitter API, we'll need to install a couple of helper modules, oauth-1.0a and axios. Axios is a nice, promise-based HTTP client that has a few tricks we can leverage to our advantage. Next, we'll go into our lib directory and create a twitter.js file. This will hold our Twitter API client. Much of this isn't specific to building a CLI, so we'll gloss over some of the details. We'll require the necessary modules, create a Twitter class, and export it. Our constructor will take our consumer API key and secret as parameters, set up a couple of properties, and configure our OAuth module. This use of the oauth-1.0a module is taken almost verbatim from the project's readme. In fact, they've tested it against the Twitter API. So we're just going to use that as is here. Next, we're going to leverage one of those axios tricks I told you about. You can define interceptors in axios, which work a lot like middleware in Express. These are invoked in every request before it is sent to the server. In our case, we're going to use the OAuth authorize and toHeader functions to set the proper authorization header in our outgoing API request. Next, we'll set the Content-Type header to x-www-form-urlencoded. Finally, we'll set the axios base URL to the one we just configured above. When we defined our token above, we left it empty, so we'll create a setter for that here. With all that complexity in our constructor, the remaining methods are pretty simple. Our get method will just call the axios get function and return the data property of the response. 
Our post method is pretty much the same, except it takes the data to post as an additional parameter. That's really it. It took a little poking around to get all of that assembled just right, but once it's done, it's pretty straightforward. No need to adopt a poorly supported module just to get those capabilities. The next thing we're going to do is revisit our CredentialManager. Right now it's hardcoded to handle our consumer API key and secret. However, aside from the hardcoded strings, there's really nothing specific in here about those credentials. Now that we're faced with storing an account-specific key and secret as well, it sure would be nice if we could leverage this same module. Let's make it more generic by removing the hardcoded property names, making those dynamic. We'll start by introducing a new parameter to getKeyAndSecret called prop. This will be used by our configstore function call. We'll also add this parameter to storeKeyAndSecret and make the necessary changes there as well. Finally, we'll do the same to clearKeyAndSecret. Seems simple enough. Now let's go change how our current command calls these methods. We'll add apiKey to our consumer command's storeKeyAndSecret call. Easy peasy. We're not done yet though. Let's change the tests as well. We'll edit the tests for our CredentialManager to provide the apiKey parameter everywhere it's required. Then, we'll pop over to our configure tests and update those as well. Just to be sure we haven't broken anything, let's rerun our tests. Okay, we're still good. Before we start adding our new command, configure account, let's add that opn module we discussed earlier. Okay. Now back in our configure script, we'll add the new function account. This function is basically going to implement the sequence diagram we stepped through earlier in this module. We're going to start by retrieving our consumer API key and secret and using those to instantiate our Twitter class. 
Then we'll use that client to post to the request token Twitter API endpoint and parse the response. Since we know from the Twitter API documentation that this response comes back in the format of a query string, we can use Node's built-in querystring parsing function. This response gives us a token and secret, so we'll pass those to our setToken method. That way they'll be used by our Twitter class for the subsequent API calls. The next thing to do is open a browser window for the user to sign in and authorize our application. We could just call opn directly, but that might be kind of jarring to the user, so we'll use inquirer here to add a prompt to let the user know what comes next. We'll ask them to press Enter and we'll give it the name continue, even though we don't intend to examine the contents of this answer. Once they've hit Enter, we'll use opn to open the user's default browser to the URL defined by Twitter for handling this authorization. Immediately afterwards, we'll use another prompt to ask the user for the PIN they received. This way, our application will wait while the user interacts with the web browser. Once the user has entered the PIN, we'll use it in our call to the access token API. We'll, again, set our Twitter class's token to the parsed response. At this point, we have our account token and secret. However, Twitter gives us another endpoint we can use to test these credentials. The verify credentials endpoint will give us some information about the user that belongs to these credentials. We'll call that here, and if we don't throw an exception, we'll store the token and secret and output the user's screen name returned by the verify credentials API call. Notice that we're able to leverage our new generic storeKeyAndSecret function by simply passing in a different property name, accountToken in this case. Finally, let's add the necessary imports for the modules we just used. 
Okay, if you need to, hit Pause, take a breath, and review what we just created. This is likely the most complicated code we will write in this course. It's not technically specific to building a CLI; however, it's not uncommon to have a linear set of steps like these executed in response to a CLI command.
Demo: Adding Another Command, Part 2
Now that we have our second command, let's add some tests for it as well. We'll start by importing our new Twitter module. Because we're going to do a lot of mocking in these tests, we'll leverage a neat feature of sinon. We can define a sandbox at the start of each test, then instead of calling sinon functions in each test, we'll call them on our new sandbox. In fact, we can change our existing tests to use this new sandbox. Finally, after each test is complete, we just restore our sandbox and everything is back to the way it started. We don't need to restore each mocked item individually. Okay, let's add our first test. Because most of what our account function does involves calling other things, we'll be doing a lot of mocking here. First, we'll mock our CredentialManager's getKeyAndSecret method. Then we'll stub our Twitter class. Since we call the post method twice, once for request token and again for access token, we can use sinon's onFirstCall and onSecondCall to return different values each time. Next, we'll also stub the get method call, which we use to invoke the verify credentials API endpoint. Finally, we'll stub inquirer, as we have in the past. Since we call prompt twice, we'll set the first one to an empty value representing when the user hits Return to open the browser, and a second one to a pin that would be entered by the user. Okay, now we have a problem. We want to keep the opn command from opening a browser, but sinon can't stub a module that returns a function. To get around this, we're going to add a new method to our utility module called openBrowser. This will simply run opn on the URL passed in. Nothing special really, but now it's something we can mock. Let's jump over to our configure script, change the opn call to util.openBrowser, and remove the opn import. Okay, back to our test. Let's import our util module, and now we can mock openBrowser to just return an empty string.
Because our configure account command prints the results to the console, we'll add a spy to the console log function so we can examine how it's called. Okay. After all that stubbing, we're ready to actually call our configure.account function. Next, we'll restore just our CredentialManager's getKeyAndSecret method to check and see if the account credentials have been stored as we expected. Finally, since we've been spying on console.log, we can check to see if our account function called this with the string we expect. One more thing I forgot to do, since we're using sandbox now, we don't need to restore anything in our first two tests. So let me remove those now. Okay, running our tests and everything passes. However, our coverage is pretty weak. We don't have any tests for our Twitter module and that's dragging down our numbers. Let's add some real quick. We'll create a twitter.js file in our test lib directory, and we'll drop in several require statements that you'd expect by now. We'll instantiate a Twitter class before our tests and first we'll test our setToken method. Nothing earth shattering here. Next, we'll test our get method. We're going to stub the axios get method and just have it resolve to an object with a data property, since that's the only property of the axios response that we're using. Next, we'll do the same thing for the post method. Okay, let's rerun our tests. Well, our numbers are better, but not perfect. We could certainly spend a long time writing elaborate tests for our Twitter module, including verifying the OAuth headers, but that's beyond the scope of this course. This still leaves us with pretty good test coverage, and I'm okay with that. Before we move on, let's take a minute to pull up our account function and its corresponding test side by side, so we can reconcile all those mocks with the original function.
We first mocked our getKeyAndSecret call, then our first Twitter post, then the two inquirer.prompt function calls, our call to openBrowser, then our second Twitter post, and finally, our Twitter get method call. It's debatable, with this much mocking, how much of our account function we're actually testing. However, with working code and these tests, we can ensure that we don't break this linear process in the future. Well, we've written a lot of code, but haven't actually defined our new command, so let's do that now. We'll revisit our twine-configure script and add a new command. This one will look almost identical to our previous command, except that it will invoke the account function instead of consumer. It's worth pointing out here that one goal of putting all the logic in a separate configure module is to keep these bin directory scripts as simple as possible. The goal here is that we really shouldn't need to write any tests for these because they're just parsing the command line and invoking logic stored somewhere else. Okay, everything's been written and our tests pass. How about we take it out for a spin and see how it looks. First, let's run twine configure consumer. We'll go back to apps.twitter.com and look up the consumer key and secret we created in the first module. We'll copy and paste those here one at a time. Be careful when you paste your secret since you can't see it. If you get it wrong, the subsequent API calls will throw an exception, something we'll handle better in a later module. Next, we'll run twine configure account. The first prompt we see is to hit Enter to open Twitter in our default browser. In our browser, we see Twitter, asking us to authorize the PSTwine application to use our account. You've likely had to name your application something other than PSTwine, and that's what will show up here. By signing in and clicking on Authorize app, Twitter presents me with a PIN. 
Entering that PIN, I see the message account paulofallon was successfully added. This is the screen name returned from the verify credentials API call. Note that nowhere did I explicitly tell twine my username. By granting access to the PSTwine application, it was able to make that API call on my behalf and retrieve my screen name. We're not storing that anywhere, but it is a nice way to show the user that everything has worked as expected. Hmm, notice that the command never actually terminated. That's weird. I had to use Ctrl+C to exit out of the application. After doing some research, it appears that our call to opn takes a configuration object with a wait parameter that we can set to false. This parameter allows the opn promise to resolve immediately, instead of waiting for the spawned application to terminate. Admittedly, this issue seemed to be hit-and-miss, and this flag shouldn't matter on Windows, but after adding it I was unable to reproduce the error. Let's try configure account again. We'll authorize the application, enter the PIN, and our application terminates as we would expect. It seems like it's working okay, but we'll keep an eye on it. Anytime you want to view the list of applications you've authorized on Twitter, you can find those in the Settings and Privacy section of Twitter under Apps. Here we can see the newly authorized PSTwine alongside the other apps I've authorized.
Demo: Using the Node Debugger
We've made a lot of changes in this module, let's look at where we are now. We've introduced commander and defined our first commands, configure consumer and configure account. We have our implementations of these two commands in configure.js. We reworked our CredentialManager, removing the user interaction, and making it more generic so we could use it for all our credentials. We have our basic Twitter API client and our two utility functions: one for validating non-empty input in our inquirer prompt calls, and openBrowser, which allows us to stub our use of opn. We also have all our tests with test coverage. Finally, our debugging configuration is stored in .vscode/launch.json. In this module, we started with a look at some common option and command patterns. We then restructured our project to support our new command-based interface. Along the way, we introduced some new modules, such as commander, axios, and opn. Out of necessity, we implemented our own minimal Twitter client. In doing this, we implemented the second of the three steps of our Twitter CLI. Stay tuned for the next module when we start adding commands that deliver Twitter functionality.
Interacting with the Environment
Handling Errors and Setting an Exit Status
Hello. My name is Paul O'Fallon. In this module, we'll be looking at ways our CLI interacts with the environment. For starters, we'll revisit the error handling in our twine application. It's pretty minimal so far and we need to be more intentional, including returning the proper exit status. Next, we'll examine a pattern for leveraging environment variables to override our configuration settings. Finally, we'll make good use of both standard input and output, adding the final features to our CLI for this course. Let's get started. As we discussed in an earlier module, our application code is broken down into three directories. Our CLI commands are defined in the bin directory, while the actual implementations are in the commands directory. Finally, our commands make use of functions found in the lib directory. One way to think about it is that our commands cascade down from bin to commands to lib. Our errors, on the other hand, will flow in the opposite direction, errors originating at the lower levels will bubble up through intermediate layers, finally reaching the scripts in our bin directory where they will be output to the user. An important part of handling errors in a CLI is communicating to the operating system that the command was terminated due to an error. This is done by returning an exit status. On Windows, the exit status of the previous command is available in the ERRORLEVEL environment variable. Here, a successful execution of the type command returns an ERRORLEVEL of 0. Next, when I invoke type with a nonexistent file, you see the error message returned by the command and now ERRORLEVEL has been set to 1. Similarly, in a Unix shell, this exit status value is found in the $? variable. Here's a 0 return value for a successful cat command, and just like in Windows, a value of 1 to denote an error. One nice feature this enables on Unix is adjusting the prompt to reflect the last exit status.
Here you can see that the arrow in my prompt is green, except when the previous command fails, in which case it turns red. That's handy. While these examples show an exit status of 1 representing an error, anything greater than 0 is an error. Technically the range of possible values is between 0 and 255, although a few values have special meanings in some shells. So, how do we return the proper exit status in our CLI? Well, the Node.js documentation has some helpful insight when it comes to exiting your program due to an error. Here is an example of what not to do. Calling process.exit with a value of 1 does in fact cause your program to exit with an exit status of 1. However, it does so immediately. Because writing to standard out, which is the usage in this case, sometimes happens asynchronously, it's possible that exiting like this will terminate the program before the usage is printed. A safer way to exit on error is to just set the process.exitCode variable to the desired exit status and let the program terminate on its own. This way, the appropriate value is returned to the operating system while the Node.js application is allowed to properly exit. Let's see about adding support for these exit codes in our CLI.
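Here's a small sketch of that safer pattern. The function name is illustrative, and the process object is injected as a parameter so the example can be demonstrated against a stand-in rather than the real process:

```javascript
// Safer exit-on-error: write the message, record the exit status, and let
// the program terminate naturally. Calling proc.exit(1) here instead could
// cut off an asynchronous stdout write before it flushes.
function failWithUsage(proc, usage) {
  proc.stdout.write(usage + '\n'); // may flush asynchronously
  proc.exitCode = 1;               // reported to the OS at natural exit
}

// Demonstrate against a stand-in for the real process object
const fakeProcess = {
  written: '',
  stdout: { write(chunk) { fakeProcess.written += chunk; } },
  exitCode: 0,
};
failWithUsage(fakeProcess, 'usage: twine <command>');
```

In real code you'd pass `process` itself; the injection just makes the behavior easy to observe and test.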
Demo: Add Error Handling and Proper Exit Statuses
In this demo, we're going to catch and raise the appropriate errors in our CLI. And in doing so, we'll provide meaningful error messages to the end user. When we exit our CLI because of an error, we'll also set the appropriate exit status. Finally, to make all of this work, we'll need to do a little more light refactoring of our Credential Manager. Let's get started. So, actually we do already handle one error message in our Credential Manager, the case of a missing key. We're going to keep that, but change the error message to be more user friendly. This will be our overarching strategy. Create the user facing error message closest to where the error occurs and where we have the most context. Next, we'll replicate the same error handling for a missing secret. These error messages may seem a bit odd. We're using the prop passed in to render the error message, even using it to suggest a configure command. With credentials named apiKey and accountToken, as we have right now, those error messages won't make a lot of sense. However, there's no reason we can't store our consumer and account credentials under those exact key names. Doing so lets us leverage the key name in our error messages. Let's go change apiKey to consumer and accountToken to just account. There are a few places to change these in configure.js. Next, we'll swap these out in our configure tests. Finally, for consistency sake, we'll update them in our credential manager tests as well. Next, we're going to fix a lingering issue left over from the last module. When running our tests, we were leaving some keytar passwords lying around in our various key stores. We're going to add a new method to our CredentialManager to help with this. But first, we're going to namespace our credentials stored in configstore. We'll prepend "keys." to the property name when calling get, set, or delete. This prefix will store all of our credential related information in a keys object within configstore.
You'll see why we want to do this in just a second. Now, once we have real users, we can't just make changes like this to our internal structure without providing some sort of backwards compatibility, but since we're still building our CLI, we can safely make these breaking changes. With this namespacing in place, we're going to add a new method to our CredentialManager, clearAll. For now, this will only be leveraged in our tests, but its job is to remove all the credentials from our configuration. To do this, it's going to get the keys object from configstore. We'll iterate over the key names and call clearKeyAndSecret for each one. This will remove both the configstore entry, as well as the keytar entry for each credential. Good. Let's go back to our CredentialManager tests. We owe it some improvements based on our error handling, so we'll do that first. We'll change the existing test to represent a missing key. Also, instead of just testing that the call is rejected, we'll use rejectedWith and provide a portion of the error message it should look for. Next, we'll add a similar test to verify the handling of a missing secret. We'll explicitly set a key directly in the configstore with no secret. This should cause getKeyAndSecret to fail with a missing secret. Finally, we'll remove that lone consumer key we added above. Okay, now we're almost ready to revisit our after function and clean up those leftover credentials. The first thing we're going to do is npm install fs-extra. This provides a promise-based interface to Node's fs library. In fact, we'll just change our require statement, leaving the constant named fs. Before we introduce clearAll to our after method, let's write a specific test for it. We'll store a couple of credentials, one for consumer and another for account. Then we'll call the clearAll method. Finally, we'll try to get those credentials, ensuring that they've all been removed.
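A sketch of what clearAll boils down to: here a plain nested object stands in for configstore's dot-notation storage (where "keys.consumer" lands under a keys object), a Map stands in for keytar, and the key names are illustrative.

```javascript
// Stand-in for configstore after namespacing: credentials live under `keys`
const store = { keys: { consumer: 'consumer-key', account: 'account-key' } };
// Stand-in for the OS keychain managed by keytar
const keychain = new Map([
  ['twine:consumer-key', 'consumer-secret'],
  ['twine:account-key', 'account-secret'],
]);

function clearKeyAndSecret(prop) {
  const key = store.keys[prop];
  delete store.keys[prop];          // remove the configstore entry
  keychain.delete(`twine:${key}`);  // remove the keytar entry
}

function clearAll() {
  // iterate over every credential under the keys namespace
  for (const prop of Object.keys(store.keys)) clearKeyAndSecret(prop);
}
```

The namespacing is what makes this loop possible: because everything credential-related sits under one `keys` object, clearAll can enumerate it without knowing the credential names in advance.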
Okay, let's just remove our existing after method and replace it altogether. We'll start by calling our new clearAll method. Now we can call fs.unlink to remove our empty configstore file. However, since we're using fs-extra, our call to unlink returns a promise that we can await. No more using the done function. Given how many of our tests are async, this makes our after function consistent with everything else. Next, we'll make the same change to the after function in our configure tests. While we're here, I'm going to make one additional tiny adjustment. By changing our console log spy to a stub, we can cause it to swallow the output, which will keep our test results neat and clean. Nice. Let's stop here and rerun our tests. Everything still passes. Great. Now that we've tackled raising errors in our CredentialManager, let's focus our attention on our Twitter client. Starting with our get method, we'll wrap the call to axios in a try catch block. If we get an error, we'll call a handleTwitterError function that we haven't written yet. Next, we'll do the same for the post method. Good. Okay, now let's go write that handleTwitterError function. Basically, we're going to inspect the error message, looking for some common errors. If we find those, we'll craft our own error messages, allowing us to make suggestions, such as this 401 example. Also, because Twitter rate limits its APIs, we can trap for that too and throw a more helpful error message here as well. Finally, for all other Twitter related errors, we'll just prefix the error message with Twitter and throw that. Okay, that's good. But let's update our tests to verify that these errors are getting thrown. We'll start by stubbing the axios post method to reject with an error message that contains 401. Next, we'll call our twitter.post method and expect it to be rejected with the appropriate error message. We'll restore the post method and repeat this process for the get method. 
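Stepping back, the handleTwitterError function just described might look something like this sketch. The exact error wording is illustrative, not the course's literal strings:

```javascript
// Inspect the underlying error message for common Twitter API failures and
// rethrow with a friendlier, more actionable message.
function handleTwitterError(err) {
  const message = (err && err.message) || String(err);
  if (message.includes('401')) {
    throw new Error('Twitter rejected our credentials; try running the configure command again');
  }
  if (message.includes('429')) {
    throw new Error('Twitter rate limit exceeded; wait a few minutes and try again');
  }
  // everything else: prefix with Twitter so the user knows the source
  throw new Error(`Twitter: ${message}`);
}
```

Because it always throws, the get and post methods can simply call it from their catch blocks and let the new error propagate up toward the bin scripts.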
Next, we'll do pretty much the same thing for the 429 error message. And finally, we'll do this one more time for the generic Twitter error. Because we've introduced rejected with here for the first time, we need to import chai-as-promised and use it in chai before dirtyChai. Running our tests again, and they all pass. Okay, so we've done a lot of error throwing, but what should we do with those errors? Let's go back to the top, to our bin directory, and revisit our action function calls. We'll remove the async await and simply add a catch onto the promise that's returned. This catch will call the util.handleError function. We'll repeat this for both of our action calls. Now let's go write this handleError function. First, we're going to install chalk. Chalk is a nice little module for colorizing our output. We'll use this to render our errors in red. After importing chalk, we'll begin our handleError function. This will simply log to console.error, but what it passes to error is the output of a chalk function, redBright, to which we pass our error message. This will cause the console to render our error message in a bright red color. Next, as we discussed earlier, we'll set the exit code to 1 to indicate that our CLI terminated because of an error. Finally, we'll export this function, and we're all set. We'll write a really simple test for this new function by first importing sinon, as we haven't mocked anything in this test yet. Next, we'll add a new context for the handleError function in our first test to verify that the exit code is being set to 1. We'll stub console.error to keep it from being printed during our test and then call the handleError function. We'll also verify that the exit code is in fact set to 1. Finally, we'll verify that we're actually printing a message to console.error. Again, we'll stub the error function and call handleError, but this time we'll verify that the console.error method was called with the message we passed to handleError. 
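Here's a sketch of that handleError utility. A raw ANSI escape code stands in for chalk.redBright, and the logger and process object are injectable parameters (an assumption for this sketch; the real function just uses console.error and process directly) so the example can run against stand-ins:

```javascript
// Stand-in for chalk.redBright: wrap text in the bright-red ANSI escape
const redBright = (text) => `\u001b[91m${text}\u001b[39m`;

function handleError(message, log = console.error, proc = process) {
  log(redBright(message)); // render the error in bright red on stderr
  proc.exitCode = 1;       // signal failure without exiting abruptly
}

// Exercise it against stand-ins so nothing real is touched
let logged = '';
const fakeProc = { exitCode: 0 };
handleError('Missing consumer key', (msg) => { logged = msg; }, fakeProc);
```

Note that it logs via console.error rather than console.log, so the red message goes to standard error and won't pollute any output the user has piped or redirected.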
Running npm test again, and we're all good. So, we haven't actually seen any of these errors, but now's a good time to try it. Since we changed the layout of the credentials in our configstore, we need to delete and recreate our configuration anyway. So let's manually remove our configuration file. Now, with no configuration in place, let's try running twine configure account. Nice. That's our helpful error message from down in CredentialManager using the prop name of consumer. It's also rendered in a nice shade of bright red. Perfect.
A Pattern for Environment Variable Overrides
Another way our CLI can interact with its environment is through the use of environment variables. You've probably seen these before, maybe when manipulating your path environment variable. Here's a subset of the environment variables set on my Windows machine. Similarly in a Unix shell, the env command will output a list of the currently active environment variables. These are variables that are accessible to any of the programs running on your machine, CLI or otherwise. One environment variable pattern I've seen in CLIs is as overrides for configuration information. Maybe your CLI has configuration information stored in a file, as our twine application does. However, the CLI may also look for specific environment variables; if they are set, those values are used instead of what was found in the configuration file. Finally, the CLI may provide command line options to again override both the environment variables, as well as the values found in the configuration file. One example of this is the Amazon Web Services (AWS) CLI. It recognizes several environment variables, including the four you see here, and values assigned to those variables will trump what's found in any of the AWS configuration files. So, how do we access these environment variables in Node.js? The documentation, again, shows us the process.env variable. This returns an object containing all of the variables and their values. You can add or override values in this object, but those are only valid within the program itself, not the shell that executed the command. Okay, is there a way we can leverage this pattern in our CLI? Well, it may be a bit of a contrived example, but we can adopt an approach similar to the AWS CLI and support environment variable overrides for our credentials. Here's the twine in three steps slide from an earlier module. We can support two new environment variables, twine_consumer_key and twine_consumer_secret, which we'll consult before using what's in our configuration.
While we're at it, we can do the same for our account credentials, checking for twine_account_key and twine_account_secret. Why might you want to do this? Well, you can retrieve an access token and secret from within your application's definition on apps.twitter.com. Setting these values as environment variables would let you skip the whole PIN-based authorization we established in the previous module.
Demo: Implementing Environment Overrides for Credentials
In this demo, we'll add support for overriding our configured consumer and account credentials. We'll do so by checking a set of environment variables and using those values instead. Let's get started. There aren't many changes required to enable this, just an update to our CredentialManager and some extra tests. We'll start by specifying the name of the environment variable we want to check. This is another case where using a property name that makes sense to the end user pays dividends. Our service is twine and our prop is either consumer or account, so we can easily build an environment variable named twine_consumer_key or twine_account_key here. Next, we'll check to see if this value is set. If it is, we'll use that for our key, otherwise we'll retrieve our key as we always have. Next, we'll replicate that same process for our secret. We'll craft the environment variable name, check to see if it's set, use it if it is, or otherwise fetch it with keytar as before. That's it. Now let's write tests to validate this. Starting at the top, we'll add a test to ensure that credentials set in the environment are used. We'll do that by setting process.env for our two environment variables. Remember, we need to use twine-test here, since that's the service for our unit tests. Then, we'll call getKeyAndSecret as always. Finally, we'll verify that we have received the values that were set in the environment variables. Next, because we expect twine to actually prioritize these environment variables over any existing configuration, we'll test for that as well. With our environment variables still set from the previous test, we'll store a set of credentials and then retrieve them again. Here too, we still expect to see the credentials set in the environment. Running our tests, and they pass. Now let's try this for real by setting the environment variables to a bogus set of consumer credentials. Running twine configure account throws an error.
In fact, we see that it threw one of our Twitter errors, since the Twitter API didn't like the incorrect consumer credentials we set in our environment. That's it. Easy peasy. Before we wrap up this demo, I want to encourage you, if you're following along, to close out your command window or shell and reopen it before continuing. I forgot to do this when recording this course and got really thrown for a loop later on when nothing was working anymore. Come to find out, it was still trying to use the leftover bogus environment variables.
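The override check we added to CredentialManager boils down to something like this sketch, assuming upper-cased variable names like TWINE_CONSUMER_KEY; resolveKey and getConfiguredKey are illustrative names rather than the course's actual functions:

```javascript
// Build the variable name from the service and prop names, and prefer the
// environment value over whatever the configured lookup would return.
function resolveKey(service, prop, getConfiguredKey) {
  const envName = `${service}_${prop}_key`.toUpperCase(); // e.g. TWINE_CONSUMER_KEY
  const fromEnv = process.env[envName];
  return fromEnv !== undefined ? fromEnv : getConfiguredKey();
}
```

The same shape works for the secret by swapping `_key` for `_secret` and falling back to the keytar lookup instead of configstore.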
File Descriptors and Dealing with Unbounded Input
Input and output file descriptors are important for getting information into and out of a CLI. An application may receive data by reading it from standard input. Similarly, an application typically writes its results to standard out. Although in the case of an error, it may write an error message to standard error instead. Standard out and standard error are typically shown to you in the console when you run a command, unless you've redirected them somewhere else. In Node.js, console.log writes to standard out, while console.error writes to standard error. In Node.js, these three file descriptors are also accessible from the process object as stdin, stdout, and stderr respectively. And Node treats these as streams. This means you can pipe from standard in to any other stream and you can also pipe to standard out. This one line of code here would simply pipe standard in straight through to standard out. So, how can we leverage what we know about standard in and standard out to improve our twine application? Well, we can support parameters either on the command line or piped to us from standard input. One tenet of the Unix philosophy is to, and I quote, expect the output of every program to become the input of another. And we can certainly abide by that here. For a single parameter, that means you can either call a twine command and include the parameter on the command line, or you can pass the single parameter to the command via standard in, using an echo here as an example. This becomes even more powerful though when we think about multiple parameters. We can support passing in a set of comma separated parameters on the command line, but we can also redirect a file of parameters to the program or, in the third example, pipe the output of some unknown parameter generator to our twine command. Let's take a real version of that last example. Say we want to implement a way to look up a Twitter user.
We have no way of knowing how many users will be piped to standard in. Setting aside Twitter's API rate limits for a bit, let's see how we might handle an unbounded number of input parameters to our CLI. We'll accomplish this by stringing together a series of Node.js streams into a pipeline. We'll receive our input on the process.stdin stream. The first thing we'll do is split this input on newlines, so each chunk that passes through is one piece of input. Next, because some of Twitter's APIs accept parameters in batches of 100, we should be smart and batch up our input before making a Twitter API call. We can do this by including another stream that holds onto incoming data and only writes when it has accumulated a batch of inputs. As each batch is output, we can invoke the appropriate Twitter API. In fact, we can invoke several of these APIs in parallel, acting on multiple batches at once. Because the output of these API calls will be grouped by batch, we'll introduce another stream to flatten the results, removing these batch boundaries. Finally, we'll pass these individual Twitter API results to a JSON stream, which will stringify the results. Piping this to standard out will cause our results to appear in the console. Doing this with a series of streams provides an important benefit, the proper handling of back pressure. Each of these streams, especially calling the Twitter API, will have throughput rates that vary from one another. Handling the data flowing through our CLI with a series of streams allows us to leverage back pressure to correctly manage these differences. We can use an almost identical pipeline to handle parameters passed in on the command line as well. Instead of starting with process.stdin, we'll split the comma separated parameters into an array, and use that array to create a readable stream. The data coming from this stream can be handled via the same pipeline as before.
To help us out with this streams-based implementation, we'll need several new modules. First is split2, which makes it easy to split streaming data on newlines. Next is parallel-transform. This module will help us call the Twitter API on several batches at once. Through2 is a module that makes it easy to build transform streams, those that take input, transform it, and pass it on. JSONStream will provide us with our stringify capability, allowing us to create a single JSON array of objects. When handling parameters passed in on the command line, from2-array makes it easy to create a readable stream based on an array of data. Finally, because we're making heavy use of promises, promise-streams will allow us to use promises in our error handling, just like we have everywhere else. Sounds good, let's give it a shot.
Demo: Lookup Users via Standard In or Command Line
In this demo, we're going to implement the lookup users command we just discussed, as well as one additional lookup command. In doing so, we'll support both parameters on the command line, as well as via standard in. As a little something extra, we'll introduce a utility that we can use to parse the JSON output of our commands. Let's get started. Before we dive into the code, let's install our new npm modules. These are the modules that we just discussed in the earlier slides. Okay, good. The first thing we'll do is implement our batch stream, creating a new file in the lib directory. Leveraging through2 makes writing this stream pretty simple. We'll define a constant batchStream, which takes an optional parameter to specify the size of the batch. We'll initialize our batch variable to an empty array. We'll then return an invocation of through2.obj, which creates an object mode transform stream. This function takes two parameters, each of which is a function itself. The first is called on each chunk and the second is called just prior to the stream ending. When we receive each chunk, we first want to add it to our batch array. Next, we'll check to see if we've reached our batch size. If so, we'll copy the data into a new variable, reinitialize our batch, and then invoke the next function with our batch of data. If we haven't reached a batch boundary yet, we'll just call the next function without any data. When our stream is done, our second function will check to see if there are any remaining items to pass along. If there are, next will be invoked with those items. We'll export our batch stream and we should be good to go. We'll create our new lookup.js in the commands directory. We're naming this command lookup because the Twitter API uses that term for endpoints that allow you to look up a group of something. We'll start with several require statements. Six of these are the stream-related libraries we just installed. 
We'll also import our CredentialManager, our Twitter class, and our newly created batch stream. We're going to start by creating a function called doLookup. This is not tied to a specific subcommand, and we'll see why in a bit. This function will take several parameters: the API we want to call, our standard name parameter, any items that were passed in on the command line, and one unusual parameter called inout. We're going to use this variable to access our standard input and output streams, and setting the default value to the process object makes that easy. However, allowing this to be passed in will make mocking and testing much easier. First, we'll get our consumer key and secret, then we'll use those to initialize our Twitter client. Next, we'll retrieve our account credentials. We'll pass those to the Twitter.setToken method. Okay, now we're ready to get down to business. Our function will simply return a call to the promise-streams pipeline function. This function returns a promise, with any stream errors rejecting the promise. The parameters to this function call will be the steps in our pipeline, which will match the diagrams in our earlier slides. The first step in our pipeline will vary, depending on whether something was passed in on the command line or not. If it was, we'll split those items into an array and use that array to create a readable stream. If nothing was passed in, we'll pipe standard input to a stream created by split, which will split the incoming data on newlines. Now, regardless of how we started, everything else is the same. Our next stream will batch up the inputs with a batch size of 100. Each of these batches will be passed on to the stream created by parallel. This stream will invoke the following function with a concurrency of 2. We could easily raise this, or even make it a command line option, but this is enough to demonstrate the idea. 
The function invoked here will simply call the Twitter API we were given, appending a comma-separated list of parameters for this batch. Next in line is our flatten stream, which we'll write here using through2.obj again. We'll iterate over the API response array and output each item directly. Finally, we'll JSON stringify these results and pipe that to standard out. Okay, so we have this pipeline, now let's leverage it to actually create a lookup command. We'll define our lookup object and create an entry users, which will take in a series of arguments. This function will simply call our doLookup function, specifying the Twitter API endpoint for looking up a group of users. For the remainder of our doLookup parameters, we'll just pass on what we received here. Finally, exporting this lookup object, we have our first lookup command. Let's write a corresponding lookup.js test. We'll start by requiring all the usual suspects and setting up chai. We'll start with a describe statement, leveraging sinon's sandbox capabilities again in our beforeEach and afterEach functions. So, if you remember when we first introduced our mocking, I talked about trying to avoid mocking standard in or out directly. Well, now we don't have much choice. Fortunately, there's a great module called stream-mock that we can install to help us with this. We'll require ReadableMock and WritableMock from this module. Next, we'll define a context for users, and in there we'll add another beforeEach function. Our first stub will be pretty simple, just stubbing our CredentialManager's getKeyAndSecret. The next line, however, will be something new. This time when we stub Twitter's get method, we're going to use callsFake to define our own function to be called instead. This one will operate on the URL that's passed in and take everything after the equal sign in the query string, split that on a comma, and map each of those entries to an object with a screen_name parameter. 
What this is basically doing is extracting the users we joined and added to the query string and turning them into a minimal response object from the API call. We can do this because the API call to users/lookup returns an array of user objects, each of which contains a screen_name property. Our first test will verify that we can look up users that have been piped to standard in. We'll start by instantiating a ReadableMock with the two users we want to come from standard in, foo and bar. Next, we'll create a WritableMock to represent standard out. Now, we're going to call our lookup.users function, passing in twine-test and our mock readable and writable streams. Next, when our WritableMock has finished, we'll expect the data it received to match the user representations we mocked above. Then we'll call the done function. Notice we're leveraging done here instead of async/await. We want to be sure this finish event gets fired and our expect line gets evaluated, that's the only way we'll tell this test we're done. So, while we've tested reading from standard in, we haven't really exercised our batching capability, since we only read two parameters from standard in. Let's add another test to submit more than 100 parameters, that way we know we have two batches, not just one. For that, we'll start by creating an array of 101 users with the names foo 0 through foo 100. Then we'll instantiate a new ReadableMock, passing in those 101 users, but not before adding a newline to each one. Next, we'll instantiate our WritableMock and call lookup.users as before. Finally, when standard out emits the finish event, we'll again compare the data written to the WritableMock with our expected output. However, this time we'll also construct this expected output dynamically based on our original array of users. Let's try our tests again. Great, everything still looks good. Okay, now let's add the easy test to verify that it will look up users passed in on the command line. 
We'll instantiate our WritableMock again to receive the output and we'll simply call lookup.users, pass in the two users, and just our standard out stream. On finish, we'll check our results, just as we did in the first test. Finally, just to be sure we're still handling our errors okay, let's test for that as well. We'll restore our Twitter's get method and stub it out to reject with a test error. We'll use our WritableMock here again, then we'll test our lookup.users to ensure it's rejected with our test error. Notice that here we pivoted back to async/await because we're not testing the output of a stream, but rather the promise returned by the function call itself. Checking npm again, and everything's still green. Okay, so we have our lookup.users function with all of our tests. Now let's add the script to our bin directory, so we can actually call it. This one will look pretty much like our configure commands. We have our imports, then we handle the version option. We add our command, making sure to pass any errors to handleError. We parse the arguments, and finally output help, if we weren't passed a command. The last thing we need to do is add lookup to our main twine.js, which wires everything together. That's it. Once you've cleared out your old configstore JSON and rerun configure consumer and configure account, you'll be ready to try your new command. If I run twine lookup users paulofallon, I see the JSON for my user account returned to the console. If I try it again, passing both my account, as well as Pluralsight, I get both back. Notice how the JSON is formatted in the console. JSONStream bookends our writes with the square brackets to turn them into a large JSON array. Now let's try reading from standard in. We'll create a users.txt file and add those same two Twitter accounts. We can use the type command to send them to standard out. If we pipe the type command to our twine command, we see that we get the same output. Awesome.
Demo: Piping from Your CLI to JQ and Back Again
It's great that we can look up users, but we get this huge blob of JSON as output. What are we supposed to do with that? Well, there's a great command line utility called jq, which does an amazing job of parsing and extracting data from JSON. So for example, if we want to extract just the screen name from our twine output, we can pipe our standard out to jq with this query, which basically means for each array entry, extract the screen name. And there you can see the two screen names we started with. Notice how they're both in quotes. Well, that's because in JSON format, those values are in quotes. We can add the raw output parameter and get our data back without the quotes. The syntax of jq is very powerful, and also very complicated. We can construct a new JSON object based on a subset of properties using this syntax. Although, notice that it's just giving us two separate JSON objects. If we tweak the query syntax a little more, we can get it to give us back these values as a new JSON array. There. Now I can actually query users from Twitter and extract just the information I'm looking for. If you want to try jq on Windows and you're using Chocolatey, you can install jq with choco install jq. We'll cover the other platforms at the end of this demo. Twitter's APIs follow a few patterns, which are repeated for multiple endpoints. The lookup pattern shows up again for statuses, or, as you and I might call them, tweets. With our existing doLookup function and its pipeline, it's really easy to add support for looking up statuses as well. We'll just add a new function to our lookup object, passing in the API endpoint for looking up statuses. We'll also update our twine lookup script so we can invoke this new command from our CLI. To try it out, I'm going to copy a status id from Twitter by hand. Passing this in on the command line, we get the JSON back for this tweet. Now, let's grab a second one and try passing in multiple ids. Yep, works just like we would expect. 
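To make those jq steps concrete, here is each query from this demo run against a small, made-up JSON array shaped like twine's output (jq must be installed, of course):

```shell
# A stand-in for twine's output: a JSON array of user objects
USERS='[{"screen_name":"foo","name":"Foo"},{"screen_name":"bar","name":"Bar"}]'

# For each array entry, extract the screen name (quoted JSON strings)
echo "$USERS" | jq '.[].screen_name'

# Same query with the raw output parameter (-r): no quotes
echo "$USERS" | jq -r '.[].screen_name'

# Construct new objects from a subset of properties
# (this emits two separate JSON objects)
echo "$USERS" | jq '.[] | {screen_name, name}'

# Tweak the query to wrap the results in a new JSON array
echo "$USERS" | jq '[.[] | {screen_name, name}]'
```
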
Just like we did with users, we can create a statuses.txt file with these two tweet ids. Piping these to our twine command, and we get the same output. Let's try looking up the statuses again and piping the output to jq with a more complex query. Each status contains a list of the users referenced in the tweet. In one tweet, I mentioned AWSreInvent; in the second, I referenced Pluralsight and code. This jq query extracts all those user mentions and outputs them as a list, one user per line. Hey, that sounds familiar. Let's pipe that list into twine again and look up those users, then we'll pipe that to jq again and extract some information about those users. Wow, that's pretty cool. Now I have the name and location of the users that I mentioned in those tweets. Before we wrap up this demo, let's double check and be sure this still works on all platforms. Going to the Mac, we'll start by running our tests. Good. They pass here too. If you don't have it already, you can use Homebrew to install jq on a Mac. Next, after we've removed our configstore entries and rerun our configure commands, we can try the same piped set of commands as before. We still have our statuses.txt file, which we can output with cat here. If we pipe that to lookup statuses, we see the same JSON as before. Next, if we introduce jq, we can extract just the mentioned users. Finally, if we run the same long piped set of commands, we see the same output. Nice. On Ubuntu Linux, our tests still pass. You can install jq here with apt-get install jq. We still have our statuses, and piping them to lookup statuses returns our status JSON. Bringing in jq works here just like everywhere else. And finally, the long series of piped commands also works. Super.
Future Integration Opportunities and Summary
So, that last set of commands was pretty crazy. What did we just do? Well, we started by outputting a list of statuses. Then we used twine to look up those statuses. Jq was then able to extract the user mentions from that status JSON output. This list of users was piped back to another invocation of twine, which looked up each of those users. Then that final set of user JSON data was piped again to jq, which extracted the name and location of each user. Passing in status ids may seem odd, but what we've done is establish a set of primitives (users and statuses) with ids that can be read from standard in and full objects written to standard out. So what else could we do with this approach? Well, with a few extra commands, you could retrieve the list of your followers, find out the last time each of them updated their status, and then unfollow people who haven't tweeted in a while. Or, go through your own tweets and delete the old ones that didn't receive any likes. You may eventually get throttled because of Twitter's API rate limits, but otherwise the sky is the limit. Because this twine project is open source and available on GitHub, you are welcome to add some of these additional commands and submit them as pull requests. I look forward to seeing what you come up with. To wrap up, in this module, we started by adding the proper error handling to our project, including setting the proper exit code. We added the capability to override our consumer and account credentials with environment variables. Then we added a couple of lookup commands that accepted input from standard in and used those to compose more elaborate scenarios. I hope you've enjoyed this module. Stick around for the next one where we'll be covering how to package and deploy your command line application. Stay tuned.
Packaging and Distribution
Scoped Packages, Publishing to npm, and Using npx
Hello. My name is Paul O'Fallon, and welcome to the final module, entitled Packaging and Distribution. After all the work we've put into our command line application, we'll finally publish it to the world. But because we know this won't be the last time, we'll configure the app to notify users when an update is available. We'll also automate this process to make it easier to repeat in the future. Finally, we'll explore creating a Dockerized version of our application. Let's get started. Publishing to npm is very straightforward and there's little, if any, about it that's specific to deploying a command line application. It's just two simple commands: npm login to authenticate ourselves and npm publish to upload our project to npmjs.org. There is one unique thing about our project though: we need a scoped package. This is because the name twine is already taken on npmjs.org. Therefore, we need to prefix this package with @pofallon, my npm username. This @pofallon is the scope. There are several reasons to use scoped packages. The first is for deploying a private package to npmjs.org. All private packages are scoped. Another reason is for grouping. An organization may want to group a series of related modules together. For example, there are many Angular modules published under the @angular scope. A third reason, and the reason we're using scoped packages, is for namespacing. Not the best reason, I suppose. Finding a creative name for our application that was available and didn't require scoping would have been ideal. In any case, the scope of a module is either the name of the user publishing the module, as in our case, or an organization that user belongs to. A fairly recent addition to npm is a handy utility called npx. It specifically helps with executing npm modules designed to run as command line applications. Under normal circumstances, to use a node-based CLI, you would npm install it, and then use it. However, npx makes that even easier. 
You can just run npx, the package name, and the remaining command line arguments, and it will download the package from npm and execute it, all in one command. This can be helpful if you need to leverage the CLI in a shell script. You don't have to worry about whether it's already installed, just include it as a call to npx.
Demo: Publishing Your CLI to npm, Running It with npx
In this demo, we'll convert our project to a scoped package, which will involve fixing our commands. Finally, with that done, we'll publish twine to npm. We'll start by adding the scope to the name property in package.json. While we're here, we'll clean it up a bit and remove the main property, since the bin property is really what matters for our application. Now if we try running twine, after changing the name, it doesn't work. This is because all of our configstore work is done based on the name of the module and now that's changed. To fix that, we'll create a new utility function called extractName, which will simply return everything in the string after the forward slash. This will extract the name of our project without the scope. Next, we'll revisit twine configure and wrap each instance of pkg.name in a call to this new function. While we're here, we'll reformat our code a bit to match our twine lookup script. Speaking of that one, we'll make the same changes here. Wrapping pkg.name in calls to extractName. One last thing, while we're here, we need to go ahead and add one more section to our package.json file: the publishConfig section, with access set to public, tells npm that even though we're using a scoped package, we really do want this to be public. Okay, running twine again and we're back in business. Now let's publish our module. First, we'll log in to npm. If you don't already have an npm account, you'll need to visit npmjs.org and create one. Here, I'll give it my username, my password, and my email address. Finally, the moment we've been waiting for, npm publish dot, and we're live. In order to try installing our newly published app, we first should unlink our development project. However, since we originally linked it under the name twine without a scope, we need to briefly change the name back, just so we can unlink it. Okay, good. Now we can run npm unlink dot, and we'll jump back and reapply the scope, so we don't forget. 
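A minimal sketch of that extractName utility (the exact implementation in the course may differ):

```javascript
// Return the package name without its scope:
// "@pofallon/twine" -> "twine", "twine" -> "twine".
// Used so configstore keys stay the same after adding the scope.
const extractName = (pkgName) => {
  const slash = pkgName.indexOf('/');
  return slash === -1 ? pkgName : pkgName.slice(slash + 1);
};

module.exports = extractName;
```

In the scripts, pkg.name then becomes extractName(pkg.name) wherever configstore needs the unscoped name.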
Trying our twine application again, we see that it's not found. So let's install it. We can run npm install -g @pofallon/twine, and there we go. Now, if we run lookup users, we see our results. Awesome. Let's uninstall it so we can try calling it with npx. Double checking to be sure it's really gone, okay, it is. Now we'll run npx @pofallon/twine lookup users pofallon. Notice that npx is installing our module and then executing it, same results. Perfect. The npx command can also read from standard in, so let's try that as well. We have a text file with two Twitter usernames. If we run our npx command again, piping in these usernames, we see that both are returned. Great.
Adding Update Notifications, Travis CI Automation
One nice feature of npm, in fact, we saw it crop up in the last demo, is how it notifies you when there's an update available. It sure would be nice to have that feature in twine. Ideally, it would automatically check for a new version. And it would notify the user, not every time, but periodically. And most important of all, it should not annoy them. When we get to the demo, you're going to see this is easy peasy. There's a node module that makes enabling this feature one line of code. Also, we want to automate our releases. Me typing npm login and npm publish wasn't hard, but it would be even better if it happened without me having to do that. Let's start by anchoring our automation to creating a new tag in GitHub. These tags will coincide with new releases of our twine CLI. This tag will trigger Travis CI to test and deploy our application. Once this process is complete, npm will have the latest tagged version of our application. It's worth noting here that this process shouldn't be our only testing. Travis CI will definitely run our tests, but only on Linux. Before creating a tag in GitHub and initiating this process, we should be confident that our CLI works on all three platforms: Windows, Mac, and Linux.
Demo: Automating Your npm Publish with GitHub and Travis CI
In this demo, we're going to add that one line of code to check for updates, and then we'll configure our integration with Travis CI. The module we're using to check for updates is update-notifier. It's part of the Yeoman project. Now, we'll go into our main twine.js script, import the module, and add our one line of code. That's it. In fact, isGlobal really isn't required since it defaults to true. But it helps to set it when we're developing via npm link. So we'll leave it this way. We're going to exercise this feature in the next demo, since we need to publish this version to npm and another version so that we have one to upgrade to. Let's move to Travis CI. We'll create a new file to hold our Travis configuration. We'll specify the language as node_js and the version as lts/carbon. Carbon is the 8.x version of node. We'll specify one add-on, libsecret, which is required simply to install the keytar dependency. Next, we'll visit Travis CI where I've already logged in with my GitHub credentials. We'll flick the switch on the twine repository here, then we'll commit all of our changes and push those back to GitHub. Quickly, we'll switch back over to Travis and watch it build. We see it's already started, that was quick. It starts by outputting a tremendous amount of information about the build environment. If we scroll to the bottom, we see where it's installing our lts/carbon version of node. Next, it runs npm install and npm test, which kicks off nyc and mocha. Uh-oh. We have failing tests. A lot of failing tests. What's going on? This error, 'Cannot autolaunch D-Bus without X11 display', is due to our tests attempting to run keytar on a headless Linux instance. There are very elaborate hoops we could jump through to try and get this to work, but frankly, I'm not here to test keytar, I'm here to test twine. Let's fall back to some of our earlier principles and simply mock keytar in our tests, that will make the problem go away. 
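The Travis configuration described here is small. A sketch of the initial .travis.yml (the apt package name for libsecret is my assumption; adjust for your distribution):

```yaml
language: node_js
node_js:
  - lts/carbon          # carbon is the 8.x LTS version of node
addons:
  apt:
    packages:
      - libsecret-1-dev # required only to install the keytar dependency
```
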
Before we start, during the time I've been working on this course, the keytar project has added prebuilt support directly into their core module. So let's uninstall keytar-prebuild and simply install keytar. Cool. Moving down to our tests, we'll start with the first set of failing tests in the CredentialManager. Here we're going to do some good old mocking, like we've done before, importing sinon, the module we want to mock, keytar in this case, and lodash to help us out. We're going to create a new secrets variable, and in our before function, we're going to stub out the keytar setPassword function to call our own function instead. This function will use lodash to set a value in our secrets object. Because keytar's functions all return promises, we'll return a Promise.resolve here. Next, we'll do something similar for getPassword, although this time we'll either resolve the promise with the value we found, or reject it with a 'Missing consumer secret' error. We can specify that exact error because that's the only one we're testing here. Finally, we'll stub deletePassword and use lodash's unset to remove the value from our secrets object, and again, return a resolved promise. In these few lines of code, we've basically written an in-memory version of keytar. Now, going down to our after function, we'll restore each of those stubbed functions. Okay, let's go visit our other set of failing tests, our configure command. Here we'll be doing almost the exact same thing, except we only need to import keytar and lodash. We'll add our secrets object and then the exact same three stubs as before. This should probably be extracted out into some sort of in-memory keytar mock module, but for now, we'll just leave it here in these two tests. Okay, now let's commit these fixes and push them back to GitHub. Switching over to Travis, and our new build has already begun. If we scroll down to see the results, all our tests pass. Awesome. 
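The in-memory keytar described above can be sketched as plain functions (the course wires these up as sinon stubs using lodash's set and unset; a plain object works the same way for the idea):

```javascript
// An in-memory stand-in for keytar, matching the three functions
// the tests stub. All of keytar's functions return promises.
const secrets = {};

const keytarMock = {
  setPassword(service, account, password) {
    secrets[`${service}:${account}`] = password;
    return Promise.resolve();
  },
  getPassword(service, account) {
    const value = secrets[`${service}:${account}`];
    // Resolve with the value we found, or reject with the one
    // error the tests check for
    return value !== undefined
      ? Promise.resolve(value)
      : Promise.reject(new Error('Missing consumer secret'));
  },
  deletePassword(service, account) {
    delete secrets[`${service}:${account}`];
    return Promise.resolve();
  }
};

module.exports = keytarMock;
```
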
Now that we've established our integration with Travis and have a passing set of tests, let's begin configuring it to do our npm deployments for us. We're going to use the Travis CLI to help with this. You can install it with these instructions here. The first command we'll run is travis login, which prompts for my GitHub username and password. The Travis CLI has explicit support for configuring npm deployments, so next we'll run travis setup npm. It's going to prompt me for my email address and an npm API key or token. You can create a token on npmjs.org by going to your profile and choosing the Tokens tab. Here we'll create a new token for read and publish, since we want to use this token for publishing to npm, and we'll copy it to our clipboard. Next, we'll paste this into our console and answer yes to only releasing tagged commits. Remember, in our earlier diagram, we stated that a GitHub tag is what we'll use to kick off our deployment process. We definitely want to answer yes to the next question because we don't want deployments coming from any other fork of this project, just this one. And finally, yes, we want to encrypt the API key. Looking at the results of this command in our yml file, we see a new deploy section, which includes our encrypted API key. However, it also has my email here, unencrypted. The support for encrypted values extends beyond just the API key, so I'm going to apply the same process to my email address. First, let's remove the value from the yml file, and then use travis encrypt to generate an encrypted value from my email address and add it as the deploy.email property in the yml file. Now you can see, we have two encrypted values, one for the API key and another for the email. I like that better. There's no reason for us to include this yml file when we deploy to npm, so let's create a .npmignore file and specify the .travis.yml file here. While we're at it, let's include .vscode and our tests as well. 
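After travis setup npm (plus the extra travis encrypt for the email), the deploy section of .travis.yml looks roughly like this; the secure values are long encrypted blobs, abbreviated here, and the repo name is illustrative:

```yaml
deploy:
  provider: npm
  email:
    secure: "…"   # encrypted with travis encrypt
  api_key:
    secure: "…"   # the npm token, encrypted by travis setup npm
  on:
    tags: true              # only release tagged commits
    repo: pofallon/twine    # never deploy from a fork
```
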
Okay, let's bump the version number and get ready to try our automated deployment. We'll commit and push our changes to GitHub.com. And Travis CI immediately kicks off our new build. Our tests still pass. Good. Notice that extra message though, the deployment was skipped because that was not a tagged commit. Let's go back and create a new release to generate a tag. Back on GitHub, we'll do just that. We'll create a 1.0.4 release, targeting the master branch and giving it the name initial release. Now, if we check the tags section, we'll see that this release also created a corresponding tag with the same name. Jumping back to Travis, it's already begun working on our tagged commit. Notice the tag value in the build description. Scrolling down, we see our tests pass again, but this time the build doesn't stop. We see some extra steps, including deploying applications. Expanding this, we see that it successfully deployed our project to npm as version 1.0.4. If we go back to npmjs.org and look up our twine module, we see it here, as version 1.0.4, just as we would expect. Awesome.
Creating a Docker Image of Your Command Line Application
For our final segment, we're going to look at another distribution option for our CLI, shipping it as a Docker image. While this likely wouldn't be the only way you distribute your application, you may encounter some scenarios where this is a viable option. Building and publishing the image is a straightforward task. However, with the keytar issues we saw in our first Travis build, the simplest method here will be to use our environment variable support. So, for that, we'll create an env.list file to hold our keys and secrets. Finally, we can invoke docker run, pass in this environment list, and invoke our application, just as before. Another option would be to pass each environment variable to the docker run command individually, but for such long values, this is an easier way to demonstrate it. We should also update our automated deployments to build and publish this Docker image as well. We can edit our Travis config to publish to both npm, as well as Docker Hub. Let's give that a try now.
Demo: Creating a Docker Image, Updating Your Travis Automation
In this demo, we'll create a Dockerized version of our CLI. Next, we'll update Travis to do this for us automatically. Oh, and we'll check back in on our automatic update notifications. Our Docker image will be based on the slim version of node 8 and will define an argument for the version of twine that we want to use in this image. Next, we'll install libsecret, just like we did for our Travis CI build. Even though we're not going to use keytar, we can't even install our app on Linux without libsecret. Next, we'll run the install of our CLI. Notice that we're not copying files from our project directly into our image, but simply installing from npm. This approach assumes that the version of twine we want is already out on npm. Also, because of how npm handles installing global packages as the root user, we need to provide an additional argument, --unsafe-perm=true. Finally, we'll install twine, but use our version argument to target a specific version from npm. We'll define our entry point as simply our twine command. Okay. Now we can build our image, being careful to supply the version argument and the latest tag for our image. Next, to test our image, we'll create our env.list file. These consumer values are the same ones we've been using throughout the course, but where did I get the account values? In the case where a Twitter developer wants to run their own app, as their own user, you can retrieve these account credentials from apps.twitter.com. On the Keys and Access Tokens page, if you scroll down to the bottom, you'll see an access token and secret. These are associated with a specific user, your Twitter account, and can be used in twine as the account credentials. So, with our image built and our credentials in place, let's try to run twine from a Docker image. Wow, it works. Nice. Next, let's add this deployment to our Travis process. We'll leverage the travis encrypt command to add these two environment variables to our file. 
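A sketch of the Dockerfile as narrated (the image tag and the libsecret package name are my best guesses from the description):

```dockerfile
# Based on the slim version of node 8
FROM node:8-slim

# The version of twine to bake into this image
ARG VERSION

# keytar's libsecret dependency is needed even though we
# won't use the keychain inside the container
RUN apt-get update && apt-get install -y libsecret-1-dev

# Install twine globally from npm, targeting the requested version;
# --unsafe-perm=true is needed because npm installs global packages as root here
RUN npm install -g --unsafe-perm=true @pofallon/twine@${VERSION}

ENTRYPOINT ["twine"]
```

Built and run roughly as: docker build --build-arg VERSION=1.0.4 -t pofallon/twine:latest . followed by docker run --env-file ./env.list pofallon/twine lookup users pofallon.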
One for our Docker Hub username, and another for the Docker Hub password. Going back, we now see two new entries in our yml file. Interestingly, notice how not even the name of the environment variable is listed, it's all in the encrypted value. Next, we'll tell Travis that we need the Docker service in our build. Now, Travis doesn't explicitly support deploying to Docker Hub like they do with npm, but they do have support for a script deployment, which will run a script of our choosing. This means we need to write a shell script that will deploy our project to Docker Hub. We'll create a scripts directory and one file called docker-deploy.sh. On the first line, we'll start by logging into Docker Hub with the credentials we just encrypted. Then we'll execute a docker build command, very similar to the one we just ran locally. However, notice that we're able to use a special Travis environment variable called TRAVIS_TAG, which is set to the tag associated with this build. As long as we name our tags correctly, this will get passed to npm, which will use it to target the right twine version for our Docker image. Next, we'll tag this new image with our twine version. Finally, we'll publish both the latest and our versioned image to Docker Hub. Let's also add our scripts directory to .npmignore, since that doesn't need to be published there either. Okay, back to our .travis.yml file. We'll first need to ensure that our shell script is executable. This bit me earlier, maybe because I'm committing to git from Windows, but in any case, it's good to be explicit here just to be sure. Now we're going to add a second section under deploy. The provider in this case is script, and the script itself is bash scripts/docker-deploy.sh. We'll skip cleanup for this deployment and also configure it to run only on tagged releases from the original repository. Because we've added a second entry, let's update our npm provider, indenting it as necessary. 
Okay, we're ready to give it a try again. Let's bump our version number, commit everything except our environment list, and push it back to GitHub. Now, we'll go and create another release. Looking back, I really didn't do a good job the first time. I should have established a better pattern. Let's do that now. We'll prefix the release with a v, just like the npm version command does, and we'll also use that as the title. We'll note our changes in the description area. Going back to Travis, and we're off to the races. Scrolling down to the bottom, we see that our chmod command executed successfully. Jumping to our second deployment, we see it pulling down the base image, building our Docker image by installing libsecret, and then using our tag to install twine version 1.0.5. Finally, we see it pushing the new image up to Docker Hub. Going back just to be sure, we also see the same version published to npm, which technically must have succeeded since we just used it to build our Docker image. Checking npmjs.org, we see our version 1.0.5 just as we would expect. Jumping over to Docker Hub, we see our twine image there as well. Looking at the tags, we see version 1.0.5, as well as latest, both published just 2 minutes ago. Now, let's remove our local twine Docker image so we can test downloading it from Docker Hub. We'll use the docker images command to get the id of our image, and docker rmi to remove it. Running docker images again, we see that the image is gone. Using our same environment list with docker run again, this time since it can't find the local image, it downloads it from Docker Hub and executes it. Before we leave Travis behind for good, let's take a quick look at another way to handle the sensitive information in our yml file. Right now, we have four encrypted entries, our npm api_key and email, and two environment variables holding our Docker credentials. 
Rather than store these long, encrypted strings in our source repository, we could instead enter them in the Travis UI. Under the settings page for our twine application, there is a place to enter environment variables that are made available to each build. We can add those same four entries here. Now, going back to our yml file, we can replace the npm api_key with its environment variable, and then do the same for the email address. We can simply remove the two env.global entries holding the Docker credentials, since those will now already be exposed as environment variables. That's it; this build will work exactly as it did before. Okay, let's verify that this works on a Mac like we expect. We'll also use this as a chance to test our upgrade notifications. We'll unlink our project, and then we'll install version 1.0.4 of twine, the first version that introduced upgrade notifications. Now, if we run twine to look up a user, we get a pop-up that Node wants access to our keychain. This is keytar trying to access our consumer and account secrets. Now, let's look at the file that update-notifier uses to track its work. Notice that update-notifier uses configstore too. Because update-notifier only checks once a day by default, let's adjust the last update check timestamp so that it'll check again now. Repeating our command a couple of times, and there it is: it knows that there's a version 1.0.5 out on npm and suggests we upgrade to it. All that from one line of code. Running the suggested command, we're now at the latest version. Let's do one last check on Ubuntu Linux as well. We can install our package just as we did everywhere else. I should say, though, that I'm using a local version of Node installed with nvm; I did have issues with keytar when trying to install the app using the package manager's installation of Node. Running twine again, and it works here too. Great.
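The "one line of code" here is the update-notifier call. A minimal sketch of how it is typically wired into a CLI's entry point is shown below; the require path for package.json is illustrative and depends on where the entry script lives in your project.

```javascript
// update-notifier persists its state with configstore and, at most once
// per day by default, checks npm for a newer published version.
const updateNotifier = require('update-notifier');
const pkg = require('./package.json'); // path relative to this file

// If a newer version was found on the last check, print an upgrade
// suggestion box to the terminal.
updateNotifier({ pkg }).notify();
```

The check runs in a detached background process, so it never slows down the command the user actually ran; the notification appears on a subsequent invocation, which is exactly the behavior seen in the demo above.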
So, in this module, we've modified our application to use a scoped package name, and then we successfully published it to npm. With that in place, we added some automation, configuring Travis to publish to npm for us every time we create a new release in GitHub. We found a nice one-line solution to notify the user when updates are available. And for good measure, we created a Docker image from our CLI and published it to Docker Hub. This wraps up our entire course on building command line applications in Node.js. I've really enjoyed going through this with you, and I hope it's been beneficial to you as well.
An Enterprise Architect by day and an open-source contributor by night, Paul has more than 19 years in the Information Technology industry spanning academic, start-up and enterprise environments.
Released 5 Jun 2018