Node.js: Getting Started by Samer Buna The Node.js runtime powers back-end servers for big players like PayPal, Netflix, LinkedIn, and even NASA. This course will teach you the fundamentals of this very popular runtime and get you comfortable writing code for Node. Course Overview Hello everyone. My name is Samer Buna. I love coding and I love teaching people to code. I work at AgileLabs.com where we create interactive content to help people learn coding. Welcome to this Node.js: Getting Started course from Pluralsight. Node is an amazing runtime. It's simply a game changer. Once I got comfortable working with Node, I never looked back to anything else I used before Node. I'm excited to show you how Node is going to make your life, as a back-end developer, so much easier. I mostly write code in this course and you should too. Learning to code is mostly a practical experience. Try to do and redo the examples I'll present in this course, try to expand their scope, and challenge yourself with every piece of knowledge that you learn. Some of the topics that we will cover in this course include the what, why, and how of Node.js, a review of modern JavaScript concepts, Node's REPL and command line interface, Node's patterns, globals, and utilities, Node's package manager, npm, working with CommonJS modules, Node's concurrency and event loop, working with web servers, and working with the operating system. By the end of this course, you should be comfortable creating and executing code for Node, understand the big picture of the runtime, and be able to build simple back-end applications with it.
I hope you'll join me on this journey to learn the basics of the excellent Node.js runtime in this Getting Started course at Pluralsight. Introduction Course Introduction Hello. Thanks for tuning in to this course. My name is Samer Buna. And this is the Node.js: Getting Started course at Pluralsight. This is a beginner course for the Node.js runtime. I won't be assuming that you know anything about Node, but I will be assuming that you know the basics of programming and a little bit of the JavaScript language itself. This course has a module about modern JavaScript, but I do not cover the basic concepts of the language there. If you've never worked with JavaScript before, this course might be a bit challenging for you, and if you get yourself a bit more familiar with the JavaScript language itself, this course will be a lot easier to digest. If you know JavaScript, but you don't know the modern JavaScript changes that happened in the past few years, that's okay. That module has you covered. If you're expecting this course to make you a professional Node developer, I need to set your expectations straight. This is a short course meant only to get you started on your path to learning Node. It's just a first step. Node is a big platform with many built-in modules and concepts which you need to learn, but those require much more time and effort. This course is designed to help you get ready for that. Here are the main topics covered in this course, and you can see them here in order. First, we'll go over some of the core features in Node and how to execute scripts and work with the command line. Then we'll do a review of the modern features in JavaScript that you can execute in Node.js today. After that, we'll talk about Node's package manager, npm. Then, in probably the most important part of this course, we'll start talking about modules in Node and how Node handles slow operations.
Then we'll go over some examples of working with web servers in Node, both natively and with external tools. And finally, we'll talk about how to work with the operating system's files and commands. A lot more features in the Node API will not be covered by this course. For example, this course will not cover C++ add-ons, buffers, and streams, modules like crypto, zlib, dns, net, and dgram, and many others that I classify as more advanced concepts. The good news is that I created an advanced Node course here at Pluralsight as well, so after finishing this Getting Started course, I think you'll be ready to take a deeper dive and explore the other advanced concepts that I cover in that Advanced Node.js course. I am recording this course on a MacBook. If you have a Windows machine, things are going to be a bit different for you. Node itself is a bit different on Windows than it is on Linux and Mac. You might run into problems that I don't. If you do run into problems that block your progress in this course, please don't hesitate to ask for help in the discussion section of this course. If you have a modern Windows machine, one option that might work a lot better for you is to install the Windows Subsystem for Linux. This option will give you the best of both worlds. You'll have your Windows operating system running Linux without needing to reboot, so you can work on a Linux file system with your Windows editor, for example, which I think is great. I've tested this option and I can confidently say that this will probably be the future of writing code for Node on Windows. Node is usually deployed on Linux machines in production, so by using a Linux environment on your Windows machine, you'll be closer to the way your applications run in production, and that's always a good win. I often get complaints that my Pluralsight courses are a bit fast and it's hard for people to keep up. This is true, and this course will be no exception.
It's not that I am a fast talker, it's the fact that these courses are tightly edited with no breaks. A lot of content is intentionally packed into a short course. That, however, does not mean that you can't manually give yourself breaks. The most important button in the Pluralsight video player is probably the Pause button, so use it often. For example, every time I ask a question, pause the video and think about it. Every time I use something that you've never seen before, pause the video and Google it. Rewind and watch things many times if you need to. If you're used to the pace and breaks of workshops, you'll find the pace here much faster. The Pause and Rewind buttons are your best friends. Also, in some of the modules of this course I'll be presenting you with challenges; pause the video and do these challenges. The best way to learn is really to do. I'll also be asking a lot of questions in this course and I'll answer these questions right after, but I want you to imagine yourself in an interview for a Node.js job and treat these questions as if they were your interview questions. Try to answer them first before you listen to me answering them. What Is Node? Okay, so you can probably answer this question, but this is a first-step course in Node, so let me start at the very beginning: what is Node.js? Here's probably the simplest definition of Node: it's JavaScript on your back-end servers. Before Node, that was not a common or easy thing to do. JavaScript was mainly a front-end thing. This isn't really a complete definition because Node offers a lot more than executing JavaScript on the server. In fact, the execution of JavaScript on the server is not done by Node at all; it's done by a virtual machine (VM) like V8 or Chakra. Node is just the coordinator; it's the one that instructs a VM like V8 to execute your JavaScript. So Node is better defined as a wrapper around a VM like V8.
I'm going to use the term V8 in this course because that's the default VM in Node, but you can totally run Node with other VMs if you need to. So when you write JavaScript for Node, Node will pass your JavaScript to V8, V8 will execute that JavaScript and tell Node what the result is, and Node will make the result available to you. That's the simple story. But Node is a bit more useful than just that. Besides the fact that it enables us to execute JavaScript on a server, which is done through a seamless integration with V8, Node comes with some handy built-in modules providing rich features through easy-to-use asynchronous APIs. In the next module, we'll talk about that and a few other reasons why developers are picking Node.js over many other options when it comes to creating servers on their back ends. Why Node? We learned that Node allows us to easily execute JavaScript code on servers, but that by itself is not really impressive. It was even possible to do that before Node. So let me put the big question out here: why Node? Since you're watching this course now, you probably know a reason or two why Node is a great thing to learn, but let me make sure everyone knows all the reasons before we commit to this journey. So besides being an easy way to execute and work with JavaScript on the server, Node comes with some feature-rich built-in modules. This makes it a great platform for tools, not just a platform to host back-end servers. But here's the big deal about the modules that are packaged with Node: all of them offer asynchronous APIs that you can just use without worrying about threads. Yes, you can do asynchronous programming in Node, doing things in parallel, without needing to deal with threads, which is probably the biggest benefit of using a runtime like Node. And if the built-in packages are not enough for you, you can build highly performing packages using C++.
Node is JavaScript, but it has first-class support for C++ addons, creating dynamically linked shared objects that you can use directly in Node. Of course, you can also write your addons in JavaScript too if you want. Node also ships with a powerful debugger, which I'll show you how to use in the last module of this course. Node also has some handy generic utilities that enhance the JavaScript language and provide extra APIs to work with data types, for example. And we're not done; in fact, these last two points on this slide are, in my opinion, the most important ones. And this is why this course will have full modules dedicated to these last two points. As a starter here, I wrote an article about this exact topic of why Node, with a lot more details around the last two points on the slide. Let me go over this article quickly and summarize it here for you. This article is about why React developers love Node, but it really applies to Node in general, why JavaScript front-end developers in general love Node. So Node is a platform for tools. Even if you don't want to host your whole application in Node, Node is a great platform for tools, and the reason it's great is that you have a lot of options. You have so many tools out there because Node was the first major JavaScript execution engine that shipped with a reliable package manager, which is called npm. We did not have a package manager for a long time in the JavaScript world, so npm is actually very revolutionary here because it changed the way we work with and share JavaScript, and Node was the enabler here because npm is part of Node. Npm is basically the world's largest collection of free and reusable code. You can make a feature-rich Node application just by using code that's freely available on npm.
Npm is a reliable package manager, which comes with a CLI that we're going to explore, and that CLI makes it really easy to install third-party packages and, in general, share your own code and reuse your own code. And the npm registry, where the packages get hosted, has so many options, and by so many I mean hundreds of thousands of options of free tools that you can just install and use in your system. The other big thing about Node is that it comes with a reliable module dependency manager, which is different from npm. This module dependency manager, which is often referred to as CommonJS, is another thing that we did not have in the JavaScript world for a long time. JavaScript today has what's known as ECMAScript modules, but these modules, despite being officially part of the language, are still a work in progress, as of 3 years after they were approved. They're still not completely supported by all implementations. Node's CommonJS dependency system has been available since Node was released and it opened the door to so much flexibility in how we code JavaScript. It is widely used, even for JavaScript that gets executed in the browser, because Node has many tools to bridge the gap between its CommonJS system and what browsers can work with today. Npm, along with CommonJS, makes a big difference when you work with any JavaScript system, not just the JavaScript that you execute on the server or in the browser. If, for example, you have a fancy fridge monitor that happens to run on JavaScript, you can use Node for the tools to package, to organize, to manage dependencies, and to bundle your code and ship it to your fridge. Node is also not just for hosting JavaScript servers; it's a very good option to do so because of its non-blocking asynchronous nature, which you can use without dealing with threads. This is the main point that made Node different from any other system that was popular before Node.
Node comes with first-class support and easy APIs for many asynchronous operations, like reading and writing files, consuming data over the network, and even compressing and encrypting data. You can do all these operations in Node asynchronously without blocking the main execution thread. This works great with V8 because V8 itself is single threaded. And this is true for both Node and browsers. You only get a single, precious thread to work with and it's very important not to block it. For example, in your browser, if your website blocks that single thread for, say, 2 seconds, the user cannot scroll up and down during these 2 seconds. In Node, if an incoming HTTP connection to your web server was handled synchronously rather than asynchronously, that would block the single thread and your whole web server could not handle any other connection while the synchronous operation is active, and that's a big deal. Node's asynchronous APIs also work great for cloud applications in general because a lot of these operations are asynchronous by nature. And of course, by using Node, you are committing to the flexible JavaScript language, which is used on every website today. It is the most popular programming language, and that statement will continue to be true for decades to come. And despite its problems, JavaScript is actually a good language today. With Node, you get to have a single language across the full stack. Use JavaScript in the browser and use it for the backend as well, which means less syntax to keep in your head and fewer mistakes overall. This also means that you can have a better integration between your frontend code and your backend code. You can actually share code between these two sides. And using JavaScript on the backend also means that teams can share responsibilities among different projects. Projects don't need a dedicated team for the frontend and a different team for the backend. You would eliminate some dependencies between teams.
The project can be a single team, the JavaScript people; they can develop APIs, they can develop web and network servers, and they can develop rich interactive websites. Node, of course, has a few cons, which are interestingly the same pro points if you just look at them with a different bias. For example, the non-blocking nature is just a completely different model of thinking and reasoning about code, and if you've never done it before, it is going to feel weird at first. You need time to get your head wrapped around this model and get used to it. And the big npm registry with so many options means that for every single thing you need to do, you have many options to pick from, and some people hate that. You need to constantly research these options and make a mental effort to pick the better ones. These options usually have major differences. And also, npm enabled shipping smaller and smaller code, which means you need to use more and more packages. It's not unusual for a Node application to use 300 or more packages. This is both a good thing and a bad thing, depending on who you ask. I think it's a good thing. Smaller packages are easier to control, maintain, and scale, but you do have to make peace with the fact that you'll be using a lot of them. Smaller code is actually why Node is named Node. In Node we build simple, small, single-process building blocks, nodes, that can be organized with good networking protocols to have them communicate with each other and scale up to build large distributed programs. Scaling a Node application is not an afterthought; it's built right into the name. Some Analogies About Node I love thinking about real-life analogies that compare with the coding world. One of my favorite analogies about coding in general is how it can be compared to writing cooking recipes. The recipe in this analogy is the program and the cook is the computer.
In some recipes, you can use ready items and tools, like the cake mix that you can use to make cupcakes, and that specially shaped pan that makes it easier to create cupcakes. When compared to coding, you can think of these ready items and tools as equivalent to including and using a package of code written by others in your own code. Node and its powerful npm package manager have their place in this analogy. Npm is a place where you can download code written by others. Within this analogy, you can think of npm as the store where you purchase the ready items and tools. You just bring them to your project with a simple command. And you can think of Node.js itself as your kitchen, as it allows you to execute lines in your coding recipes by using built-in modules, like the oven and the sink. Imagine trying to cook without these built-in modules in your kitchen; that would be a lot harder. And just because you have these built-in modules in your kitchen, that doesn't mean you have food ready to consume. Node is your kitchen; by itself it's not enough, but it does make your task of writing useful code much easier. Throughout this course, you'll hear me use the term callback all the time, so let me quickly tell you about it. A callback is just a fancy term for a function. In Node, we call a function a callback function if Node will call it back for us at a later point in the program. This is done through an asynchronous method. Here's a simple callback function, which usually receives data as its argument, and we just pass its reference here to some asynchronous method, and that asynchronous method will get the callback invoked when the data is ready. I have another analogy for you about callbacks. When you order a drink from Starbucks, in the store, not in the drive-through, they'll take your order and your name and tell you to wait to be called when your order is ready. After a while they call your name and give you what you asked for.
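The generic callback pattern just described can be sketched as a short runnable script. The placeOrder function and the timing below are made up for illustration; the error-first callback signature, though, is the standard Node convention.

```javascript
// A hypothetical asynchronous method: it accepts an order and a callback,
// and invokes that callback later, when the "drink" is ready.
const placeOrder = (drink, callback) => {
  // setTimeout stands in for the real work of preparing the order
  setTimeout(() => {
    callback(null, `Your ${drink} is ready`); // error-first convention: (err, data)
  }, 100);
};

// The callback function receives the data when it is ready.
placeOrder('latte', (err, message) => {
  if (err) throw err;
  console.log(message); // runs about 100ms later
});

console.log('Order placed, waiting...'); // prints first: placeOrder did not block
```

Note the order of the two console.log calls when you run this: the synchronous line prints first, which is exactly the non-blocking behavior discussed throughout this module.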
That name you gave them is the callback function here. They called it with the object that you requested, the drink. So let me modify this generic example of a callback to demonstrate the Starbucks example. The callback function is the name you give during your Starbucks order; it's a function that will be called with some data, which is your ready drink. When you place your order, that's an asynchronous method because your order will take some time and they don't want to block the order queue while your order is getting prepared, which is why they'll call you back. And that's the pattern of callback functions in Node. Now modern JavaScript and modern Node started adopting another pattern for asynchronous programming, which we call promises. And a promise is a little bit different from a callback. To understand promises, imagine that you ask someone to give you something, but they give you something else, call it a mystery object. They promise you that this mystery object might eventually turn into the thing that you originally asked for. This promise mystery object can always turn into one of two possible forms; one form is associated with success and the other with failure. This is like if we ask a chicken for a chick and the chicken gives us an egg instead. That egg might successfully turn into a chick or it might die and be useless. If you like analogies like these, I have a lot more of them. See this article on freeCodeCamp; most of these analogies are really related to Node. For example, see the reactive programming one and the errors versus exceptions one. What You Get When You Install Node Follow this video for instructions on how to install Node. If you've successfully installed Node, you should see the following three commands available globally in your system. First, the node command itself, which, if you run it without any argument, will get you into this interactive program where you can type and execute JavaScript.
Here's a quick test of how modern your Node is. I typed the test for you right here under this GitHub gist. Go find this code and then copy this line here, the first line, and paste it into Node's interactive session. Don't worry if you don't understand this. It's just a test of your Node capabilities. If this line was executed successfully without any errors, like it did for me here, you have a good version of Node. If you get an error here, you should upgrade your Node for this course. This course has a modern JavaScript module where I'm going to explain the modern concepts this simple line is using. You can also test if your Node has the new promises APIs using these two calls here. util.promisify should return a function like this, and requiring fs.promises should return an object with the full fs API, just like this. So if these calls are giving you errors or undefined objects, it would be a good idea for you to upgrade your Node before starting this course. The other two global commands that you should have are npm and npx. You get these as well when you install Node. We'll talk about these commands in the course module dedicated to npm. If you see these commands working successfully for you, you can skip the rest of this installation instructions video. To install Node on Windows, you can simply download the installer from the nodejs.org website and run it locally. This way you'll be installing Node natively on the Windows operating system. But remember what I said about Node on Windows in the overview video: Node natively on Windows is not really ideal, but if it's your only option, don't let that stop you. If you were able to get a Linux subsystem working for you on Windows, great, all you need is two commands. Let me show you how to find them here real quick. On this downloads page, scroll and find the article about installing Node via a package manager, and click that.
You can also just Google this exact title here if the UI of the Node website has changed. The Linux distribution that you used in your subsystem is probably Ubuntu here, so click on that link, and under this section here, you'll find the lines to install the recent version of Node. So these are the two lines that you need in your subsystem, or Linux in general. You should also invoke this other line here about build essentials, just in case you need to install any native add-ons in the future. On Mac, you can download and install Node directly, but if you're already using the excellent Homebrew package manager, you can just use that. All you need is brew install node. Now this will get you the latest Node. If you don't have Homebrew on your Mac, you should really take a look at it; it's pretty cool. If you have an older version of Node installed through Homebrew, the command you need here is brew upgrade node, just like that. There's another option that works on both Mac and Linux, and that is nvm, the Node version manager. I like this option because it allows you to run multiple versions of Node and switch between them using a simple command. If you find yourself in a situation where you need to work on projects that use different versions of Node, nvm will help you here. That's it for installing Node. Don't forget to make sure these lines work for you on the new Node you just installed. Example Files On the course overview slide, there are references to folders associated with the modules of this course. These folders can be found on GitHub or under the exercise files section. Here is the GitHub repository that I'll be using throughout this course. It will give us starting points for code templates and let us focus our time on concepts rather than wasting time typing simple scripts. This repo is hosted under GitHub.com/jscomplete/ngs. Ngs here is for Node getting started. You can see the different folders for the different modules right here.
The same content of this repo is also available under the exercise files on the course page, if you don't have access to GitHub. Your first step is to clone this repo locally to your machine. To clone this repo, copy the ngs repo URL here and type the following command in your terminal: git clone, then the repo URL. This will create a directory named ngs under your current working directory. So cd into this new ngs directory, and in here you can see the list of numbered folders. Let me open up an editor here on this repo. So within the top-level folders, which are associated with modules in the course, you'll see folders and files representing the videos of the course module in order. However, some of the course videos will not have folders, and some folders here might have multiple files representing the examples we are going to cover in the associated video. The files are usually numbered in the order of their example in the video. And all these files in the ngs repo are your starting points for each exercise we are going to go through in this course. One thing I'd like to point out before we begin: this course has many modules. We use the word module here to describe a section in the course. The word module is also heavily used with Node to refer to a file or folder of code. To make this a little bit less confusing, I will say course module every time I refer to a section of the course. So let's jump to the next course module and get you properly introduced to the wonderful Node.js runtime. Getting Started with Node Node’s REPL Mode In this Getting Started module, we'll explore the Node command, and a few global features like timers and the process object. If you type in the node command without a script for it to execute, Node will start a REPL session. REPL stands for Read, Eval, Print, Loop, and it's a very convenient way to quickly test simple JavaScript and Node commands. You can type in any JavaScript here in the REPL.
For example, a Math.random call, just like that, and then you press Enter and Node will read your line, evaluate it, print the result, and then go back to waiting for further lines. This is where the REPL got its name, read a line, evaluate it, print its output, and loop until the user exits the REPL. Note how the print step happened automatically. We didn't need to add any instructions to print the result. Node will just print the result of each line you type. This is cool, but keep in mind that some lines will not have any results at all. The Node REPL will print undefined in this case. For example, if you define a variable like this, let answer = 42, and you hit Enter, you'll see undefined. This is because this is a statement in JavaScript, it's not an expression. It does not have any output, and this is why the REPL printed undefined as the output of this statement. Don't let that confuse you. On the other hand, let's type out an expression, for example, 3 the number == 3 the character. This is a Boolean expression. By the way, quick question, do you think this expression evaluates to true or false? Well, this is one type of question that can be easily answered inside a Node REPL, you type it real quick and hit Enter, and there you go, this is surprisingly true. Well to be honest, if you're surprised by this answer, you might want to take a more basic course about JavaScript itself before this Node course. We are going to do a JavaScript review module, but the basics of JavaScript, like operations and types, will not be covered. If you need to brush up on your JavaScript basics, I recommend you check out the Pluralsight series Quick Start to JavaScript. So this line was a JavaScript expression and the REPL printed its result for us. Sometimes the expression that you need to test might need multiple lines. Let me clear this REPL session, and I do that by pressing Ctrl and L. This clears the session. And let's see an example of multiple lines. 
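As an aside, the statement-versus-expression distinction and that loose-equality surprise behave the same in a regular script, not just in the REPL:

```javascript
// A statement: it declares a variable but produces no value of its own,
// which is why the REPL prints undefined after it.
let answer = 42;

// Expressions evaluate to values. Loose equality (==) coerces types,
// so the string '3' is converted to the number 3 before comparing:
console.log(3 == '3');  // true
// Strict equality (===) also compares types, with no coercion:
console.log(3 === '3'); // false
console.log(answer === 42); // true
```

Strict equality is usually what you want in real code; the coercion rules of == are a common source of bugs.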
Say that you want to define a function that generates today's date and test it out. You'll start with a function name, I'm going to name it today, and then begin with a curly brace, right? You can hit Enter here. The Node REPL is smart enough to detect that your line is not done yet, and it will go into this multi-line mode for you to type more. So we can type return new Date semicolon, hit Enter again, then the end curly brace and Enter. And now, Node figured out that this code can be executed and it did execute it. We can now use the today function in the REPL session. This REPL multi-line mode is limited. For example, if on line 3 you realize that you made a mistake on line 1, you cannot go back and edit line 1. In addition, you cannot type out multiple expressions in the same multi-line session. Luckily, Node has a more featured editor right here inside the REPL. You type in .editor to open it up, and when you do, you can type as many lines as you need. For example, you can define multiple functions or paste code from the clipboard. When you're done typing, you hit Ctrl+D to have the REPL evaluate your code. All the functions that you defined in the editor will now be available in your REPL session. The .editor command is a REPL command, and there are actually a few other commands. You can see the list by typing the .help command. The .break command, or its .clear alias, lets you get out of some weird cases in the REPL session. For example, when you paste some code in Node's multi-line mode and you are not sure how many curly braces you need to get to an executable state, you can completely discard your pasted code by using a .break command. This saves you from killing the whole session to get yourself out of simple situations like this. The .exit command exits the REPL, or you can simply press Ctrl+D. The .load and .save commands can be used to generate and use external Node scripts inside your REPL. This can be a great time saver. Here is a fresh REPL session.
Now let's say I started typing in this REPL session. I defined one function, then I defined another function, and then I defined a third function, and now I have some history in this REPL session. And what I want to do is save all these functions to an external file, to maybe review them later or maybe commit them to Git. All I need to do is .save and give it a file name. I'm going to call it m7.js. And now if I inspect the content of m7.js, this file will have all the lines that we previously typed in the REPL session, which is really cool. More importantly, if later on we start a brand new REPL session and we want to redefine the functions we previously defined in m7.js, all we need to do is use the .load command with m7.js as the argument, and Node will load all the lines in the file and evaluate them, and now we have access to the functions and variables defined in that file. TAB and Underscore I need to emphasize the importance of the Tab key. If you're not familiar with this powerful key, you're in for a treat. The Tab character itself is not a very useful one, but the Tab key is the driver of a very powerful feature called tab completion. You might be familiar with that feature in your code editor, but I'd like you to also be aware that it works inside the Node REPL as well. A single tab in the Node REPL can be used for autocomplete, and a double tab, which is pressing the Tab key twice, can be used to see a list of possible things that you can type from whatever partially typed string you have. For example, if you type the character c and then double-tab on that, you'll see all the possible keywords and functions that start with c. You could be defining a constant or clearing a timer, and if you single-tab on something that matches only a single option, it'll be auto-completed. For example, crypto here is the only keyword that begins with cr, so if you single-tab on cr, crypto will be auto-completed.
This, by the way, is not about being lazy about typing the whole thing. The usefulness of this tab completion is about avoiding typing mistakes and discovering what is available. This latter point is important. For example, say I want to know what API functions and properties I can use on the Array class. I can type Array, note how I used a single tab for that, but then I can type the dot character and double-tab after that. And there you go, these are all the functions and properties that can be used from the Array class. This also works on objects. If I have an array object inside this REPL session, I can use the same dot and then double-tab trick to get a list of all the methods available on this object. If you can't remember the name of a method you need, this list is helpful. The tab completion discoverability works anywhere. Do you remember the special dot commands? Well, you can see them by double-tabbing on a single dot. This discoverability also works on a global level. If you double-tab on an empty line, everything that is globally available appears. This is a big list, but it's a useful one. The first section here has all the common globals in the JavaScript language itself, which you're probably familiar with, like the Array, Number, String, and Object classes, and built-in libraries like Math and JSON, and some top level functions. The other section here mostly contains the globals that are available in the Node runtime. A few of these are truly global in Node, like the Buffer class, the process object, and the various functions to set and clear timers. The lowercase variables here, for example dns, net, cluster, and http, represent the built-in modules in Node. These are Node's powerful libraries that you get out of the box. Note how these are available here directly in the REPL, but when working with a regular Node script, which we'll do next, you'll need to require these modules to be able to use them.
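Outside the REPL, you can get a similar list programmatically. This sketch uses Object.getOwnPropertyNames, which is my addition here, not something the course demonstrates:

```javascript
// Static methods and properties on the Array class
// (roughly what `Array.` followed by a double tab lists in the REPL):
console.log(Object.getOwnPropertyNames(Array));

// Methods available on array instances:
console.log(Object.getOwnPropertyNames(Array.prototype));
```

This is handy when you want the same discoverability inside a script instead of an interactive session.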
One of the useful REPL features that you can see here is the underscore variable. This is similar to the $? feature in bash. It stores the value of the last evaluated expression. For example, say that you executed a Math.random call, and after you did, you wanted to put the same value in a constant. You can do that with underscore because it automatically stores the last value. You can use the underscore variable in any place where you use a JavaScript expression. I could do something like const random = _ to place the same last random value in a constant. Executing Scripts Here is a Node Hello World example. This simple script represents a simple web server. You don't need to install anything to run this script. This is all Node's built-in power. Don't worry about figuring out what's going on in this script just yet. We'll get there eventually. To execute this script with Node, you just specify the location of the script as the argument for the node command. You can actually see the location of this script within the projects folder, right here in the corner of my Atom editor. And I can also click this region here to copy the full path for this file. So now I'll go ahead and paste this full path right here after the node command and this will execute the file. As you can see, it reported back that the server is running. The script location that you specify for a node command can also be relative. So if I'm inside this 1-getting-started, 1-executing-scripts directory and I specify node 1-hello-world, just like that, Node will look for the file in the current working directory. If the script that we're executing has a running task, like a web server listening for connections, for example, then Node will continue running. Note that if the script does not have a running task, like this other script here that just prints a message, Node will execute the script and exit immediately after that. A Node process exiting is a normal thing in Node.
Node will not idle and use resources unnecessarily. Okay, back to the simple web server script; let's execute it again. So now this simple web server script is running and its task is active, listening for HTTP connections. However, this Node process will only use V8 when there are HTTP connections, otherwise V8 will remain idle. Let's decipher this simple web server. The first line here is using the require function. This is the first thing you need to learn about Node's internals. The require function is what you use to manage the dependencies of your programs. You can use require to depend on any library, whether this library is a built-in one, like HTTP here, or a third party installed library. This program here depends on the built-in HTTP module. It's the module that has the features for creating a web server. There are many other libraries that you can use to create a web server, but this one is built-in. You don't need to install anything to use it, but you do need to require it. Remember that when we were in Node's REPL, this library was available immediately without needing to require it. This is not the case with executable scripts, like this one. You can't use any dependencies without requiring them first. This line here creates a server constant by invoking the createServer function on the HTTP module. This function is one of the many that are available under the HTTP module API. You can use it to create a web server and it accepts an argument that is known as the request listener. This is a simple function that Node will invoke every time there is a request to the created web server. This function is also known as a callback, but this term is a bit old at this point. Remember this argument as a listener. This server will listen to requests and it will invoke the listener function for each request. And this is why this listener function receives the request object as an argument.
It's named req here, but you can name it whatever you need. The other argument this listener function receives, named res here, is a response object. It is the other side of a request connection. We can use this object to write things back to the requester. It's exactly what this simple web server is doing. It's writing back using the .end method and the Hello World string. We'll learn about the .end method later in this course, but it can be used as a shortcut to write data and then end the connection. The createServer function only creates the server, it does not activate it. To activate this web server, we need to invoke the .listen function on the created server. This function accepts many arguments, like what OS port to use for this server. The last argument here is a function that will be invoked once the server is successfully running on that port. This example just logs a message to indicate that the server is now running. This listen function is what actually keeps the Node process running. It's the task that will keep the Node runtime busy and not exit. While the server is running, if we go to a browser and ask for an HTTP connection on localhost with the port that was used in this script, 4242 in this case, we will see the Hello World string that this example had in its request listener. Go ahead and try to change this line and return something else. You will need to restart the Node process to make that change work. Just kill the process with Ctrl+C, use the up arrow to get the last executed command, and execute that again. If you refresh the web browser now, you should see the newly returned string. When we build a full web server example, I'll show you how to avoid needing to restart the server every time you change something because this will be an annoying thing to keep doing manually. Working with Timers Some of the popular global functions in a browser environment are the timer functions, like setTimeout and setInterval. 
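For instance, a minimal setTimeout call in a Node script looks like this (the message text is my assumption based on the narration):

```javascript
// Delay a greeting by 4 seconds; the second argument is in milliseconds,
// which is why we multiply 4 by 1000:
setTimeout(() => {
  console.log('Hello after 4 seconds');
}, 4 * 1000);
```

Run with the node command, this script pauses for 4 seconds, prints the greeting, and then exits because no running task remains.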
Node.js has an API for these functions as well, and it exactly matches the browser's API. These timer functions can be used to delay or repeat the execution of other functions, which they receive as arguments. For example, this code uses setTimeout to delay the printing of this greeting message by 4 seconds. The second argument to setTimeout is the delay in milliseconds. This is why we are multiplying 4 by 1000 here to make it into 4 seconds. The first argument to setTimeout is the function whose execution will be delayed. If we execute this script normally with the node command, Node will pause for 4 seconds and then it'll print the greeting and exit after that. Note that this first argument to setTimeout is just a function reference. It does not have to be an inline function like this. For example, this code here will do the same job, but it uses a function defined before setTimeout. Note that if the function that we pass to setTimeout receives arguments, like this example here with arguments 1, 2, 3, and more, then we can use the remaining arguments in the setTimeout call to pass these arguments to the delayed function once it's executed. Here's an example. Take a look at this code and try to figure out what it will do. The rocks function, which is delayed by 2 seconds, accepts a who argument, and our setTimeout call relays the value Pluralsight as the who argument. Executing the script will print out Pluralsight rocks after 2 seconds. Time for a challenge. Ready? Using what you learned so far about setTimeout, print the following two messages after their corresponding delays. Print the message "Hello after 4 seconds" after 4 seconds, then print the message "Hello after 8 seconds" after 8 seconds. This would be an easy challenge without constraints, but we are not here to learn just the easy stuff. I have a constraint for you. You can only define a single function in your script, which includes inline functions.
This means many setTimeout calls will have to use the exact same function. Okay, pause here and try it out. I really hope that you will try to do these course challenges because, trust me on this, it is the best way to get comfortable with what you're learning. If you just watch, you'll learn, but it will be harder for you to retain and improve. You learn best when you start pushing your limits and put yourself outside your comfort zone. Here's how I'd solve this challenge. I've made theOneFunc receive a delay argument and used that delay argument in the printed-out message. This way the function can print different messages based on whatever delay we pass to it. I then used theOneFunc in two setTimeout calls, one that fires after 4 seconds, and another that fires after 8 seconds. Both of these setTimeout calls also get a third argument to represent the delay argument for theOneFunc. Executing this script will print out the challenge requirements, the first message after 4 seconds and the second message after 8 seconds. But what if I ask you to print the message every 4 seconds, forever? While you can put setTimeout in a loop, Node offers setInterval as well, which would accomplish exactly that. Just use setInterval instead of setTimeout. In this example, this code will print its message every 3 seconds. Node will print this message forever, until you kill the process with Ctrl+C. Another cool thing about these timers is that you can cancel them with code. A call to setTimeout returns a timerId and you can use it with a clearTimeout call to cancel that timer. Here's an example. This simple timer here is supposed to fire after 0 milliseconds, making it immediate, but it won't because we are capturing the timerId and cancelling it right after with a clearTimeout call. When we execute this script, Node will not print anything and the process will just exit.
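The cancelled-timer example can be sketched like this (the message string is my placeholder):

```javascript
// This timer is supposed to fire immediately (0 ms delay)...
const timerId = setTimeout(() => {
  console.log('You will never see this');
}, 0);

// ...but it never will, because we cancel it right away:
clearTimeout(timerId);
```

Executing this prints nothing, and since no task remains, the process exits right away.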
By the way, there is another way to do setTimeout with 0 milliseconds; the timers API has another function called setImmediate, and it's basically the same thing as a setTimeout with a 0 millisecond delay, but we don't have to specify a delay here, it's an immediate thing. We'll see a practical case for this function later in the course. And just like clearTimeout, there is also a clearInterval, which does the exact same thing but for setInterval calls, and there is also a clearImmediate, which does the same thing for setImmediate calls. So as you can hopefully see from this example, executing something with setTimeout after 0 milliseconds does not mean execute it right away, but rather execute it right after you're done with everything else in the script. Let me make this point clear with an example. Here's a simple setTimeout call that should fire after half a second, but it won't. Right after defining the timer, we block Node synchronously with a big loop. This is a 1 followed by 10 zeros, so this is a 10 billion ticks loop, which basically simulates a busy CPU. Node can do nothing while this loop is ticking. This is, of course, a very bad thing to do in practice, but it'll help you here to understand that the setTimeout delay is not a guaranteed thing, but rather a minimum thing. The 500 milliseconds here means a minimum delay of 500 milliseconds. In reality, this script will take a lot longer to print its greeting line. It will have to wait on the blocking loop to finish first. Let's do one more challenge on timers. Print the message "Hello World" every second, but only 5 times. After 5 times, print the message "Done" and let the Node process exit. And you cannot use a setTimeout call for this challenge. Little hint, you need a counter here. Okay, you can pause here. Hopefully this was an easy one. I initiated a counter value as 0 and then started a setInterval call, capturing its Id.
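A sketch of that solution, using the message strings from the challenge statement:

```javascript
let counter = 0;

const intervalId = setInterval(() => {
  console.log('Hello World');
  counter += 1;
  if (counter === 5) {
    console.log('Done');
    clearInterval(intervalId); // no more tasks, so Node exits
  }
}, 1000);
```

Clearing the interval is what lets the process exit; without it, Node would keep firing the interval forever.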
The delayed function will print the message and increment the counter each time. Inside the delayed function, an if statement will check if we're at 5 times by now; if so, it will print Done and clear the interval using the captured interval constant. The interval delay is 1000 milliseconds. In this file here, I've put a couple more challenges for you to practice timers. I will not solve these challenges here, to keep the course short, but I wrote an article on Medium about them, which you can read at this URL. I've also included the solutions in this course's folder. Node’s Command Line Interface So far we used the node command in two modes, to start a REPL with no arguments, and to execute a script by using a single file path argument. The node command, which is often called the CLI, also has options which make it work differently. For example, the -v option makes it output the version of the Node runtime, and the -p option makes it execute a string and print out its result, which I find super useful. For example, if I want to see how many CPUs this machine has, I can use a call to the built-in os module, which has a function called cpus, and I can do .length on that to see the size of the returned array. This machine has four cores. Similarly, here's another one-liner to see the version of the V8 used in the current Node installation. There are many other options. You can see a full list by using the -h option. This could be a big list, so you can pipe it into the less command or its Windows equivalent to paginate your way through it. Take a look at these options and familiarize yourself with them. Don't memorize anything, except for the handy -p one, but just be aware of all the things that you can do with this CLI. One of these options also opens the door to the options of V8 itself. Let's explore that list. Using node --v8-options, piped into less as well, Node will report all the V8 options that it supports. These options get passed to V8.
For example, to make V8 always execute your JavaScript in strict mode, you can pass the --use_strict option. I wish this one had a default value of true, but unfortunately it does not. However, many Node CLI wrappers set this one to true, which I think is great. The V8 options are a really big list, and it's mostly for advanced use, but let me tell you a few more things about it. You'll see some options that begin with the word harmony. These flags usually control the features that are still being tested. You can use them by including their flags, but just know that they are not final yet. You'll see many options for tracing. If you need V8 to give you more debugging power, you can make it output more information. Some options control how V8 behaves and also control its limits. Other options will report information or give you control over what's usually not possible without them. Do a quick scan of this list, and if you think you might find a helpful option here, you can grep the list to search for it. For example, you can grep for "in progress" to see all the in-progress harmony flags for this particular Node version. In addition to all these flags, the Node CLI also supports some environment variables that you can use to change the process behavior. You can see this list at the end of the -h output. For example, this NODE_DEBUG environment variable instructs core modules to print out any debug information they have. If we execute the Hello World script with NODE_DEBUG set to http, which is the core module used there, then on each incoming connection, that http module will print out some debugging messages. Most Node libraries, including the packages that you'd install and use, support a similar DEBUG environment variable. You can give these environment variables a comma separated list of the modules for which you want to enable debugging. Another handy environment variable is this NODE_PATH one.
By default, Node has certain paths it uses to look up the modules you require in your code, and you can use this environment variable to override that. I sometimes use this when developing local Node packages because I find it a lot simpler than the alternatives. These are the built-in environment variables, but you can have your Node process use any other environment variables as well. Let's talk about that next. The “process” Object You can use the node command with custom environment variables. For example, we can do something like VAL1=10 VAL2=20, note, no commas here, then continue to execute a Node script like this. In the Node script, we can access the values in the environment variables using the process object, which has many properties, but the one that you can use for this purpose is the env property, which is an object that has all the environment variables available through the operating system, like USER here, and it also includes the ones we just customized, VAL1 and VAL2. You can export environment variables prior to executing a script and Node will read those as well. For example, instead of this one-liner, we can do something like export VAL1=100, export VAL2=200, and then we can execute the script normally and it will pick up our 100 and 200 values. This is a handy bridge between the operating system and the Node process. You can use it to communicate dynamic configuration values. For example, if you want your script to run on development port 4242, but in production you want it to run on port 80 instead, you can use process.env to make the port dynamic and control it on different machines. There is another way to pass information to the execution context of the Node process and that's through the process.argv array. This array will have an item for every positional argument you specify when executing the Node script, in order.
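The dynamic-port pattern described above can be sketched like this; the PORT variable name is my choice, not something the course mandates:

```javascript
// Fall back to the development port when PORT is not set in the environment.
// Run with: PORT=80 node server.js  (or just: node server.js)
const port = process.env.PORT || 4242;
console.log(`Server would run on port ${port}`);
```

Note that values read from process.env are always strings, so convert them with Number() if you need arithmetic.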
For example, if we have a Node command to print process.argv, using the handy -p CLI option, all the arguments after this command will be set in the resulting array as strings, even if you pass numbers, this array will always assume strings. The first item here in the resulting array is the location of the Node command, and if we're executing a script, the second item in this array would be the script name, but in this case we're not executing a script, so the remaining items in this array are the arguments that we passed to the Node command. And they're all strings in this array. This is a cool feature, but I think I prefer the process.env method because I get to name the passed values there. With argv, we'd have to do more tricks to accomplish this named values feature. Other properties you should be aware of on this special process object are the standard input output streams. There are three of them, stdin for input, stdout for output, and stderr for error messages. These control the communication channel between the Node process and its operating system execution environment. We've been actually using them under the hood. When you use a console.log line, that line writes to the stdout stream. In fact, you can accomplish the same functionality of console.log by using a process.stdout.write line, just like this. Stdin can be used to read information from users. Here's an example to do that. The std I/O objects are all streams, which is a topic that we are yet to explore, but the gist of it is that we use events and methods to use these streams. In here we are listening for a readable event and using the read method to read a chunk of data, and then we print out the same chunk to stdout, making this effectively an echo utility. It will echo everything you type to it. There are multiple ways to consume and benefit from these I/O streams, and streams in general. 
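The readable-event echo utility described above, roughly as it appears in the course:

```javascript
// Echo everything typed on stdin back to stdout:
process.stdin.on('readable', () => {
  const chunk = process.stdin.read();
  if (chunk !== null) {
    process.stdout.write(chunk);
  }
});
```

Run it, type any line, and the same line is written back; press Ctrl+D to end the input stream and let the process exit.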
For example, the same echo example can be done using the excellent pipe function that's available on readable streams, like process.stdin. We pipe a readable stream into a writable one, like process.stdout, using the argument for the pipe function, and this makes the exact same echo utility. We'll have a bit more to learn about streams later in this course, but for now, just make a mental note that streams are awesome and you should utilize them in every possible way. Node's process object can also be used to terminate the process or do something when the process is terminated unexpectedly. Here's an example of that. This code has a timer that will fire after 2 seconds and it will call the exit function on the process object. This will manually terminate the process and make Node exit. As Node is exiting the process, it looks for any listeners registered on the exit event. We have done exactly that here, which means right before Node exits, it will execute this function to print out this message. Because of the nature of asynchronous code in Node, this Hello line will be executed first, then the timer will go next, and the exit listener will fire. This simple example demonstrates the power of Node's asynchronous nature and its event-based methodology. We will learn more about that in the concurrency module of this course. Wrap Up I hope you are now excited about Node. This module has been just a teaser about the many powerful features that come built into the Node runtime. These include things like its powerful REPL and CLI, and its built-in libraries, like HTTP, which we used to quickly create a web server. It also comes with plenty of customizable options and a built-in bridge to the operating system through the "process" object. However, we only scratched the surface in this module.
There are so many other things that I want to show you, but before I do, let me make sure that you are familiar with the modern JavaScript features that I'm starting to use and will continue to use throughout this course. This next module will be exactly that, a crash course on all the modern JavaScript features that were added to the language since 2015 and are now natively available in your Node environment. Features like template strings, arrow functions, classes, block scopes, destructuring, promises with async/await, and more. If you're already familiar with these features, you can skip this next module. Modern JavaScript EcmaScript and TC39 JavaScript is a very different language than it used to be just a few years ago. ECMAScript, which is the official specification that JavaScript conforms to, has improved a lot in the past few years after a rather long period of no updates to the language at all. Today the ECMAScript technical committee, which is known as TC39, makes yearly releases of ECMAScript, and JavaScript engines, like V8, shortly follow by implementing the new features introduced in those releases. This started with ECMAScript 2015, or its other commonly known name, ES6. Since then, we've had yearly releases named ES plus the current year. Some of these releases were big and others were very small, but the language now has a continuous update cycle that drives more innovative features and fades out the famous problems JavaScript had over the years. Anybody can propose features that they think should belong to the JavaScript language. The TC39 committee has a 5-stage process to filter and finalize the features that are considered for the language. A feature starts at stage 0, which is when anyone proposes anything to the committee, and if the proposed feature has a clear problem and a clear case for its need and someone is willing to back it up through the process, it gets labeled as stage 1.
Once the proposed feature has an initial spec document, it gets labeled as stage 2, draft. At this point, there is a strong chance the feature will be part of the language. When the spec of the feature is finalized and the designated reviewers of the feature sign off on it, the proposal is labeled stage 3, a candidate. At this stage, the feature is queued for more tests and the committee will accept the spec text into its main specifications repository, which gets the feature to stage 4, finished, and that feature will be included in the next yearly release of ECMAScript. In Node.js, you only have access to features that are finished and are already part of the language. However, V8 often has harmony flags for you to experiment with candidate and even draft features sometimes. You can also use the Babel compiler to write many of the in progress features in JavaScript and have Babel compile it to the good old supported JavaScript before you take your code to production. Babel is popular in the front end because many browsers are usually slow to add support for all the new features in the language, including the finalized ones, and Babel offers an escape hatch for developers to use the latest and greatest without risking their code not being compatible with older browsers. In this course, I'll only use features that are natively available in Node.js, including many of the modern features that started becoming available since 2015. In the next few videos, we'll go over some of these features and learn the value they bring to the language. Variables and Block Scopes First up, let's talk about variables and block scopes. Here is one of my favorite JavaScript trick questions. Is this a valid JavaScript line of code? Testing it in a Node REPL seems to work fine, which means it is valid. So the question is now, what did it do? These are nested block scopes. We could write code in here, var a = 42, and then access a right after and it would be acceptable. 
JavaScript does not really care about the spacing or new lines in here. A block scope is created with a pair of curly braces, just like this example here. This also applies to if statements and for statements as well. These also get their own block scopes. Block scopes are a bit different than function scopes, which are created for each function. You can see one difference when using the var keyword to define variables. Variables defined with var inside a function scope are okay, they don't leak out of that scope. If you try to access them outside of the scope, you can't. As you can see here, we could not access the result variable that was defined inside the sum function's scope. However, when we define variables with var in a block scope, we can totally access them outside that scope afterward, which is a bit problematic. And here's one practical example of that. This for loop has an index variable that goes from 1 to 10. You can access that variable inside the loop normally, but you can also access the same variable outside the loop. After all iterations are done, the value of i here will be reported as 11, and that's a bit weird. This is why the more recommended way to declare variables in modern JavaScript is by using the let keyword instead of the var keyword, because when defining variables with let we won't have this weird out-of-scope access problem. If we replace this var here with let and do the same test, we'll try to access i after the loop. We need to start a new Node session here, paste in the code, try to access i after that, and it will tell you that i is not defined, which makes sense because we're outside of the scope where it was defined. So this is much better. Block scopes, like function scopes, can also be nested, like the trick question that we started with. This is a nested block scope.
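The var versus let loop comparison described above, side by side:

```javascript
// With var, the loop variable leaks out of the block scope:
for (var i = 1; i <= 10; i++) {
  // ...
}
console.log(i); // 11

// With let, it does not; j only exists inside the loop:
for (let j = 1; j <= 10; j++) {
  // ...
}
// console.log(j); // ReferenceError: j is not defined
```

This leaking is exactly the problem that makes let the recommended choice for loop counters.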
Within each level of nesting, the scope will protect the variables defined in it, as long as we use the let keyword or the const keyword, which behaves just like the let keyword in this regard. We use const when the reference assigned is meant to be constant, because references assigned with const cannot be changed. Note how I'm saying references and not values here, because const does not mean immutability, it just means a constant reference, but if the reference is for an object, we can still change this object, just like we can do with functions that receive objects as arguments. So if the variable defined with const is a scalar one, like a string or an integer, you can think of it as an immutable thing here because these scalar values in JavaScript are immutable. We can't mutate the value of a string or an integer in JavaScript, and because we used const with these scalar values, we can't change the reference either. However, placing an array or object in a const is a different story. The const will guarantee that the variable is pointing to the same array or object, but the content of the array or object can still be mutated. So be careful here and keep that in mind. To accomplish immutability for objects, JavaScript offers an Object.freeze method, but it only freezes the first level of that object. So if you have a nested object within the first object, that nested object would not freeze. If you want to work with immutable objects, I'd recommend the Immutable.js library, which has an API that will guarantee object immutability. Variables defined with const are much better than those defined with let for scalar values and for functions because you get a guarantee that the value did not accidentally change. Looking at this code example here, and assuming that between the first and the last line there is a big program, on the last line we can still confidently say that the answer variable still holds the 42 value because the code ran without problems.
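Here's a quick sketch of these const rules in one place:

```javascript
// A const on a scalar: neither the reference nor the value can change.
const answer = 42;

// A const on an array: the reference is fixed, but the content is not.
const list = [1, 2, 3];
list.push(4);      // allowed
// list = [];      // TypeError: Assignment to constant variable.

// Object.freeze is shallow: nested objects stay mutable.
const frozen = Object.freeze({ nested: { value: 1 } });
frozen.nested.value = 2; // allowed, the nested object was not frozen

console.log(list);          // [ 1, 2, 3, 4 ]
console.log(frozen.nested); // { value: 2 }
```

So const guards the binding, not the data; for deep immutability you need something like Immutable.js, as mentioned above.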
While for the same example with let, we would have to parse through the code to figure out if the answer variable still holds the 42 value. If you need a variable to hold a changing scalar value, like a counter, for example, then using let is okay, but for most cases, it's probably much better for you to stick with using const. Arrow Functions There are many ways to define a function in JavaScript and the modern specification introduced a new way, arrow functions, a way to define a function without typing the keyword function, but rather by using an arrow symbol like this. This shorter syntax is preferable, not only because it's shorter, but also because it behaves more predictably with closures. So let me tell you about that. An arrow function does not care who calls it, while a regular function cares very much about that. A regular function, like x here, always binds the value of its this keyword to its caller, and if it didn't have an explicit caller, a regular function will use the value of undefined for its this keyword. An arrow function, on the other hand, like y here, not caring about who called it, will close over the value of the this keyword of its scope at the time it was defined, making it great for delayed execution cases like events and listeners because it gives easy access to the defining environment. This is important, so let's take a look at an example. In any Node module, like this one, the top level this keyword is associated with the special exports object, which I'm going to tell you more about soon, but in this example, I'm just giving this exports object a label to identify it because it's empty by default. And here I'm printing the value of the exports object for you to see. Testing this script with the node command, you should see an object with id exports. Now I prepared this testerObj, which defines two similar functions where in both I am printing the value of the this keyword.
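A sketch of that testerObj; the names and exact shape are my reconstruction of the on-screen code:

```javascript
const testerObj = {
  func1: function () {
    return this; // a regular function: this is bound to the caller
  },
  func2: () => {
    return this; // an arrow function: this is closed over from the defining scope
  },
};

console.log(testerObj.func1() === testerObj); // true: called through testerObj
console.log(testerObj.func2() === testerObj); // false: uses the enclosing scope's this
```

The original prints the this values directly; returning them here just makes the comparison explicit.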
Function 1 is defined with the regular syntax, while function 2 is defined with the arrow syntax. When function 1 is called, its this keyword will be associated with its caller, which in this case is the tester object itself, and this is why you see the printed value for the this keyword in function 1 representing the tester object itself. However, when function 2 is called, its this keyword will be associated with the same this keyword that was available in the function's scope when it was defined, which was the module's exports object, the one I gave a label, as you can see here. This is a big benefit when working with listeners in Node.js, and it's why you'll see me using arrow functions all over the place. One other cool thing about arrow functions is that if the function only has a single line that returns something, you can make it even more concise by removing the curly braces and the return keyword altogether. You can also remove the parentheses around the argument if the function receives a single argument, making it really short. This syntax is usually popular for functions that get passed to array methods, like map, reduce, and filter, and in functional programming in general. Object Literals You can create an object in JavaScript in a few different ways, but the most common way is to use an object literal. Here's an example of that. This is a lot easier than doing something like new Object(), which you can if you want to. Literal initialization is very common in JavaScript; we use it for objects, arrays, strings, numbers, and even things like regular expressions. The object literal syntax supports a few modern goodies. Here's a simple example where this object defines two regular properties. If you need to define a property that holds a function, you can use this shorter method syntax within the object literal. Of course, if you need an arrow function, you can still use the regular property syntax. 
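Here's the concise arrow form next to a basic object literal (a sketch; the names and values are illustrative):

```javascript
// Concise arrows: implicit return, optional parentheses for one argument
const square = x => x * x;
console.log([1, 2, 3].map(square)); // [ 1, 4, 9 ]

// An object literal with a regular property, a shorthand method,
// and an arrow function held in a regular property:
const obj = {
  p1: 10,
  f1() { return 'shorthand method'; },
  f2: () => 'arrow function',
};
console.log(obj.p1, obj.f1(), obj.f2()); // 10 shorthand method arrow function
```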
Modern object literals also support dynamic properties using this syntax, which looks like an array literal, but don't confuse it with that. JavaScript will evaluate what's within the square brackets and make the result of that the new property name. So, assuming we have a variable named mystery defined before this object, I'm going to go ahead and copy this code into a Node REPL session, and now here is your JavaScript interview question: what is obj.mystery? It's undefined, because this mystery property was defined with the dynamic property syntax, which means JavaScript will evaluate the mystery expression first, and whatever that expression evaluates to will become the object's property name. In this case, the object will have a property answer with the value of 42. Another widely popular feature of object literals is available to you when you need to define an object whose property names map to values that exist in the current scope with the exact same names. If you have a variable named PI and you would like obj to have a property named PI as well, holding the same value as the variable PI, instead of typing the name twice, like this, you can use the shorter syntax by omitting the second part. And this shorter syntax is equivalent to what I had before. Objects are very popular in JavaScript. They are used to manage and communicate data, and using these features will make the code a bit shorter and easier to read. Destructuring and Rest/Spread The destructuring syntax is really simple, but I've seen it confuse many people before. Let me make sure that does not happen to you. Destructuring works for both arrays and objects. Here's an example for objects using the built-in Math object in JavaScript. 
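The dynamic-property interview question and the property shorthand can be sketched like this (using the same mystery/PI idea from the examples):

```javascript
const mystery = 'answer';
const PI = Math.PI;

const obj = {
  [mystery]: 42, // computed property: the key is the value of `mystery`
  PI,            // property shorthand, equivalent to PI: PI
};

console.log(obj.mystery);        // undefined: there is no "mystery" key
console.log(obj.answer);         // 42: the evaluated expression became the key
console.log(obj.PI === Math.PI); // true
```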
When you have an object like Math and you want to extract values out of this object into the enclosing scope, for example, instead of using Math.PI, you'd like to just have a constant named PI to hold the value of Math.PI, which is easy because you can have a line like this for Math.PI and another one for E if you need the same for E, and so on. With the destructuring syntax, these three lines are equivalent to this single line. It destructures the three properties out of its right-hand object and into the current scope. This is useful when you need to use a few properties out of a bigger object. For example, here's a line to destructure just the readFile method out of Node's fs module. After this line, I can use the readFile method directly like this. Destructuring also works inside function arguments. If the argument passed to a function is an object, instead of using the name of the object every time you want to access its properties, you can use the destructuring syntax within the function parentheses to destructure just the properties that you are interested in and make them local to that function. This generally improves the readability of functions. So here we have a circleArea function, which expects an object as its argument, and it expects that object to have a radius property. We're destructuring the radius property out of that object and using it locally in the function. If we call this circleArea function with an object like circle, it will use its radius property inside for its calculation. Let's go ahead and test that. You'll see the circleArea calculation working as expected. Destructured arguments can also be defined with defaults, like regular arguments. Say I'd like to use a default value of 2 for a precision property here. Let's define a second options argument for this circleArea function and destructure precision out of that argument to use it in the function's body. 
If I'd like to use a default value of 2 for the precision property, I can just use the equal sign here after destructuring precision, and that means the default for precision, if not specified, will be 2. I can also make this whole second argument optional using an equal sign after the destructuring syntax. The same call here will use an empty object for the second argument of the function, and then it will use a default value of 2 for the precision property that is now used in the function. Of course, if you call the circleArea function with a second argument that has a precision property, that value will be used inside the function. As you can see, this destructuring feature offers a good alternative to using named arguments in functions, which is a much better thing than relying on positional arguments. Destructuring, whether you do it in function arguments or directly with variables, also works for arrays. If you have an array of values and you want to extract these values into local variables, you can use the items' positions to destructure their values into local variables, just like this. Note how I used double commas here to skip destructuring the third item in the array. The destructured variable fourth here will hold the value of 40. This is useful when combined with the rest operator, which I have an example of here. By using these three dots, we are asking JavaScript to destructure only one item out of this array and then create a new array under the name restOfItems to hold the rest of the items after removing the first one. Let's test that. So first here will be 10, and restOfItems will be an array of 20, 30, and 40. This is powerful for splitting an array, and it's even more powerful when working with objects to filter out certain properties from an object. Here is an example of that. Say that we have this data object, which has a few temp properties, and we'd like to create a new object that has the same data except for temp1 and temp2. 
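Here's how those defaults, array positions, and the rest operator can look in runnable form (a sketch; the property names in the data object are illustrative):

```javascript
// A default for a destructured property, plus a default {} for the
// whole second argument so it becomes optional:
const circleArea = ({ radius }, { precision = 2 } = {}) =>
  (Math.PI * radius * radius).toFixed(precision);

console.log(circleArea({ radius: 2 }));                   // '12.57'
console.log(circleArea({ radius: 2 }, { precision: 5 })); // '12.56637'

// Array destructuring: positions, a skipped item, and rest
const [first, , , fourth] = [10, 20, 30, 40];
console.log(first, fourth); // 10 40

const [head, ...restOfItems] = [10, 20, 30, 40];
console.log(restOfItems);   // [ 20, 30, 40 ]

// Filtering properties out of an object with rest:
const data = { temp1: 1, temp2: 2, name: 'Sam', age: 30 };
const { temp1, temp2, ...person } = data;
console.log(person);        // { name: 'Sam', age: 30 }
```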
We can destructure temp1 and temp2 and then use the rest operator to get the remaining properties into a new object called person. Just like the three dots of rest, you can use the three dots to spread one array or object into a new array or object. This is useful for copying arrays and objects. You can spread the items in an array into a newArray, like this example. The newArray here will be a copy of the restOfItems array that we destructured above. And similarly, you can also spread the key-value pairs of an object into a newObject, like this example. The newObject here will be a copy of the person object. Note that these copies are shallow copies. Any nested objects or arrays will be shared between these copies. Don't forget that. Template Strings Template strings are one of my favorite new features that were introduced to the JavaScript language a few years ago. Let me tell you about them. You can define a string in JavaScript using either single quotes or double quotes. These two ways to define string literals in JavaScript are equivalent. Modern JavaScript has a third way to define strings, and that's using the backtick character. On my keyboard, it's right above the Tab key. Strings defined within the backtick character are called template strings because they can be used as templates with dynamic values, as they support what we call interpolation. You can inject any dynamic JavaScript expression within these dollar-sign curly-brace placeholders. So for example, we can use Math.random here, and the final string will have the value of the expression included exactly where it was injected in the string. If you test this string in Node, you'll see a random value every time you do it. With template strings, you can also have multiple lines in the strings, something that was not possible with the regularly quoted strings. Backticks look very similar to single quotes, so make sure to train your eyes to spot template strings when they are used in examples. 
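Both the shallow-copy caveat and template-string interpolation are easy to see in a short sketch (the names and values here are illustrative):

```javascript
const restOfItems = [20, 30, 40];
const newArray = [...restOfItems]; // shallow copy of an array
console.log(newArray);             // [ 20, 30, 40 ]

const person = { name: 'Sam', emails: ['sam@example.com'] };
const newObject = { ...person };          // shallow copy of an object
newObject.name = 'Kim';                   // top-level value: independent
newObject.emails.push('kim@example.com'); // nested array: SHARED
console.log(person.name);                 // 'Sam' (unchanged)
console.log(person.emails.length);        // 2: the copy was shallow

// Template strings: interpolation and multi-line support
const html = `<p>
  The random value is ${Math.random()}
</p>`;
console.log(html);
```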
Classes JavaScript offers many programming paradigms, and object-oriented programming is one of them. Everything in JavaScript is an object, including functions. Modern JavaScript also added support for the class syntax. A class is a template or blueprint for you to define shared structure and behavior between similar objects. You can define new classes, make them extend other classes, and instantiate objects out of them using the new keyword. You can customize the construction of every object and define shared functions between these objects. Here is a standard class example that demonstrates all these features. We have a Person class and a Student class that extends the Person class. Every student is also a person. Both classes define a constructor function. The constructor function is a special one that gets called every time we instantiate an object out of the class, which we do using the new keyword, as you can see here. We are instantiating one object from the Person class and two other objects from the Student class. The arguments we pass here when we instantiate these objects are accessible in the classes' constructor functions. The Person class expects a name argument and stores its value on the instance using the this keyword here, and the Student class expects name and level arguments. It stores the level value on its instance, and since it extends the Person class, it calls the super method with the name argument, which will invoke the Person class's constructor function and store the name as well. Both classes define a greet function that uses the values they store on each instance. On the third object, which we instantiated from the Student class here, we also defined a greet function directly on the object. When we test this script, o1 will use the greet method from its class, the Person class, o2 will use the greet method from the Student class, and o3 will use its own directly defined greet method. Promises and Async/Await Node is event driven. 
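A sketch of the Person/Student example described above (the names, levels, and greeting strings are illustrative):

```javascript
class Person {
  constructor(name) {
    this.name = name; // store the constructor argument on the instance
  }
  greet() {
    return `Hello ${this.name}!`;
  }
}

class Student extends Person {
  constructor(name, level) {
    super(name); // invoke Person's constructor to store the name
    this.level = level;
  }
  greet() {
    return `Hello ${this.name} from ${this.level}`;
  }
}

const o1 = new Person('Max');
const o2 = new Student('Tina', '1st Grade');
const o3 = new Student('Mary', '2nd Grade');
o3.greet = () => 'I am special!'; // own property shadows the class method

console.log(o1.greet()); // Hello Max!
console.log(o2.greet()); // Hello Tina from 1st Grade
console.log(o3.greet()); // I am special!
```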
Most of the functions that you'll be working with in Node return promises, and you'll have to consume them using the promise syntax with .then and .catch. However, a more preferable way to consume promises is using the new async/await syntax, which makes your promise-consuming code a bit more readable and easier to follow, especially when you start dealing with loops and other complexities. Here's an example of code that consumes a Promise object using the regular promise syntax. Here we have a simple fetch function that reads an HTTPS response for a URL. Don't worry about the implementation of this function; just notice how it returns a Promise object. This is a modern alternative to using callbacks for the async nature of this function. To consume the fetch function, we use the .then syntax, which will expose the data available after the async action. Alternatively, we can also consume any promise using the async/await feature, as seen here. We use the keyword await before the promise, and that will give us access to the data available after the async action. We can use this data directly after the await line, just like this, which is a lot simpler than callbacks and using .then as well. However, to make this await feature work, you have to wrap your code with a function labeled with the async keyword and then call the function to execute the async action. Testing this code now, the same fetch promise will be consumed twice, once with the regular .then syntax, and another time with the new async/await syntax. The async/await syntax is definitely easier to read, and it will make your life especially easier if you need to work with promises that depend on each other or promises that need to be within a loop. Wrap Up This module was a quick review of the modern JavaScript features that were introduced to the language in the past few years and are currently available natively in Node. 
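Here's a runnable sketch of the two consumption styles. To keep it self-contained, the promise-returning function is simulated with a timer instead of a real HTTPS request (fetchAnswer and its value are hypothetical stand-ins for the fetch function in the video):

```javascript
// A hypothetical async function that returns a Promise; the timer
// stands in for a real HTTPS request so the sketch runs anywhere.
const fetchAnswer = () =>
  new Promise((resolve) => {
    setTimeout(() => resolve(42), 10);
  });

// 1) Consuming the promise with the regular .then syntax:
fetchAnswer().then((data) => {
  console.log('then:', data);
});

// 2) Consuming it with async/await (await must live in an async function):
const main = async () => {
  const data = await fetchAnswer();
  console.log('await:', data);
};
main();
```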
We've talked about block scopes and how they are a bit different than function scopes, and how it's wise to use the let and const keywords within them. We've seen how arrow functions are different than regular functions. We've explored the modern ways to work with object literals and talked about destructuring and the rest/spread properties. We saw how to work with template strings and how to create and use classes, and finally, we saw an example of consuming promises with the regular promise syntax and with async/await as well. In the next module, we'll explore the npm tool. Npm is Node's package manager, and it has a website hosting hundreds of thousands of Node packages. We'll explore the npm tool itself and talk about some of the most popular npm packages out there. NPM: Node Package Manager What Exactly Is NPM? Welcome back. Let's talk about npm, Node's package manager. Npm enables JavaScript developers to do three main things: share their code with others, re-use their own code across projects, and use code written by others in their projects. So npm is basically all about code sharing and reusability. If you have a piece of JavaScript code that you'd like to share with others or just re-use in other projects, npm is the tool you need to help with that. But npm is also about composability of bigger applications using smaller packages. You can use others' work to complete yours, so you don't have to start completely from scratch. In general, npm is an asset for any team that is working on any JavaScript project; it makes it easy to separate and manage the different versions of code. The npm project runs at npmjs.com. This is the site that hosts the many useful npm packages that you're going to love and appreciate. It also hosts a lot of empty and useless packages, because there is no quality control here. Anyone can publish anything. And we are about to publish something in this module, but I'll try to make it a bit useful. Npm is the official Node package manager. 
The npm project started with a small set of Node scripts to manage common tasks around folders that contain code for Node, and it has since evolved into a fully featured package manager that is super useful for all JavaScript code, not just Node. If you browse the registry of npm packages that are hosted on npmjs.com, you'll find packages that are for Node and packages that are libraries and frameworks meant to be used in a browser or a mobile application. If you dig deep enough, you'll even see examples of apps for robots, routers, and countless other places where JavaScript can be executed. A typical Node project will have tens, if not hundreds, of npm packages. Some npm packages represent big frameworks, like express or Sails. Some provide utility functions, like lodash here. Some just provide useful libraries; the request package here is an example of that. Many npm packages are small, specialized around one problem, and focused on how to solve that problem well. Let's give credit where credit is due. Npm simply revolutionized the way JavaScript developers work. The life of a JavaScript developer was much harder before npm. Npm is a very important part of Node, and a good understanding of how it works will prove to be very valuable for you as a Node developer. So what exactly is a package manager and why do we need it? Let's actually start with a more basic question: what is a package? The name package is what npm uses to label the bits of reusable code. A Node package is basically a folder that contains scripts that can be run by Node, which is to say that any folder that has some JavaScript code in it is basically a Node package. Another name that is commonly used to describe a code folder in Node is module. Some modules are built into Node, so npm is not needed to manage those, but most other modules that you'll be using are external to Node, and npm is what we can use to manage them. 
When you have a project that has a lot of these code folders, which we are going to start referring to as packages from now on, you'll need to manage them somehow, especially when these packages start depending on other packages, and when you start working with multiple versions and sources of these packages. That's where npm can be helpful. When developers talk about npm, they can be talking about one of many things. They can be talking about the npmjs website that hosts a public registry of the many open source npm packages. This website provides a few graphical features, like searching for packages, for example. Every package page has some meta information about the package, like the number of downloads and other information published with the package. If the package has a README file, it'll be displayed here as well. Developers could also be talking about the npm command line interface, the CLI tool that we can use in our projects to manage the packages. This tool has many commands that you can see here. We are going to learn a few of them in this course module. Npm is also the name of the company, npm, Inc., that hosts and maintains the npm registry and CLI tool, and is doing more business around the npm registry and tool. They offer private repositories and more enterprise-level services. Let's now talk about the npm CLI tool that gives us this global npm command. The NPM Command As a Node developer, you will be working with the npm command a lot. This command comes installed with Node itself, so you don't need to do anything to install it. If you have Node installed, you should have the global npm command installed as well. You can check the version with -v. Npm gets updated more frequently than Node, so sometimes you might need to update npm itself separately from Node to get the latest release. You do that using the command npm install -g, for global, npm. Yep, you can use npm to update npm. 
If you get an EACCES error here, that means you most likely installed Node itself with some admin privileges. You can use sudo in that case or start a whole terminal with admin privileges. On macOS, if you install Node through Homebrew or NVM, you usually don't need to sudo any Node or npm commands. If you are facing an EACCES error, you can also fix the npm permissions so that you don't have to run npm with sudo. There's an article on the npmjs documentation site, right here, that has some details on that. Let's quickly explore the first command of npm, the one that we just used, install. To do that, I'm going to create a test directory, test-npm, cd into that, and run npm install, and by the way, install has a shortcut, you can just do npm i, which is cool. And we're going to install one package called express. So this npm install command is our client that just downloaded the package from its source, which by default is the npmjs.com registry itself, but you can configure the command for other sources. And after it downloaded that package, npm placed the package in a special folder under the project named node_modules. So this is the process that's known as installing a package, but there is nothing magic about it really; it just places the package under this node_modules folder. Node itself is designed to use this node_modules folder by default to look for any packages it needs to use. When Node needs to require a package, like express here, it does not communicate with npm at all; it just uses the node_modules folder and the packages that were placed inside that node_modules folder. Did you notice how when I installed the express package here, npm warned me that this folder does not have a package.json file? This package.json file is important. So let's redo this example with a package.json file. I have a folder in the 3-npm folder here called 1-npm-command, and this folder has the package.json file. 
So let's cd into this folder and redo the npm install express command and note what happens to the package.json file. Did you see that? The package.json file has a new section here. This is the section where npm is documenting this new dependency for this project, express. And not only that, the npm install process will also take care of installing any sub-dependencies for the main installed package. Take a look at what you should now see under the node_modules folder. Although we asked npm to install a single dependency, express, npm installed many other dependencies. What exactly are these? These are the packages that express itself depends on, and since we made our project depend on express, these other packages are now in the project's sub-dependencies. Okay, so let me quickly review what happened when we invoked the command npm install express. Npm first created a new node_modules folder, because that folder did not exist before. The first npm install will do that. Npm then downloaded the express package from npmjs.com and placed it under the newly created node_modules folder. It then modified the package.json file to document this exact dependency and the version used for this dependency, and it also created a package-lock.json file. So let's talk about these two files, package.json and package-lock.json. The package.json and package-lock.json Files The package.json file is the one file that you'll see in every npm package. It's a JSON file that can be used to provide information about a package, and it's required by npm. This file is mostly modified by the npm command itself, but in a few cases, you'll need to manually edit this file as well. In the previous example, we started with a simple package.json file that only had the required properties, name and version. The name of an npm package is its unique identifier. If you need to publish this package, that name has to be unique, and not used before, across the whole registry. 
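For reference, a minimal package.json after such an install might look like this (the express version number is illustrative; npm writes whatever version it actually installed):

```json
{
  "name": "1-npm-command",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.3"
  }
}
```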
The version property is a semantic versioning string. We'll talk about this string in the next video. When we installed the express dependency, npm automatically modified our package.json and added a dependencies section documenting the version of express that it used. Here is the package.json file for the popular express package. As you can see, this file includes meta information about express, things like description, license, and keywords, but the most important information in this file is the dependencies section. These are the packages that express depends on, and this is the same list of packages that we got when we installed express locally in the previous test. This is really the most important benefit of the package.json file. This file makes the building of the project dependencies a reproducible task. This means that by sharing this package.json file with other developers or with build servers, the process of building the project dependencies on these other machines can be automated through the dependencies section of the file here. Let me show you how that works with a local example. Let me add one more dependency in the 1-npm folder example here. This time let's add the lodash package. The command that we need here is npm install lodash. Npm is now downloading lodash; it just placed it under the node_modules folder and updated the package.json file to document this new dependency. Now, the 1-npm folder has three things: the package.json file, the other package-lock.json file, and the node_modules folder. Now usually when you share your project with other developers, you don't share your node_modules folder. This is a big folder with code written by others. It does not belong in your team's Git repo, for example. So what your team will get when you share this project is just the JSON files. So let me remove this node_modules folder to simulate that. 
So a team member just pulled this code; they now have the package.json file, and to build their version of node_modules, all they have to do is run the npm install command without any arguments, just like this. This command will install all the dependencies listed in package.json, along with their sub-dependencies. In fact, thanks to the package-lock.json file, they will get the exact same versions, even for the sub-dependencies tree. For example, express depends on this bytes package here that was installed when we ran the npm install command. Let's assume that between the time you added the express dependency and the time a team member pulled your code to use it, a new version of this bytes npm package was released. Your team member will not get that new version when they run npm install. They are going to get the exact same version that you used, because of package-lock.json. So the version of bytes that was used here is 3.0.0. If you look at the content of this package-lock.json file, you'll see information not only about this project's direct dependencies, but rather the whole dependency tree for the project. Search for bytes, for example; you'll see how the exact version of the bytes package is included here, and you'll also understand how bytes was added to the project, because it's a dependency of body-parser, which is a dependency of one of your top-level dependencies, express in this case. While adding dependencies to package.json when you install them, you can also tell npm that a dependency is only a development dependency, which means it is not needed in a production environment. To do that, you run the same npm install command, but with a -D argument, D for development. For example, let me install the nodemon package with a -D argument here. You'll notice how npm added this nodemon package under a new section in package.json. This time it's devDependencies. 
This is where you should place things like your testing framework, your formatting tools, or anything else that you use only while developing your project. Now let's quickly take a look at the help page for the npm install command, and you do that using npm help install. In here, you'll see all the usage patterns and the options that you can use with the npm install command. And one of these options that you can see right here is the --production flag, or you can use the NODE_ENV environment variable and set that to production, and what that will do is completely skip your devDependencies, because these are development dependencies, so you don't need them in production. This is handy because this nodemon package is not something that you need in production. Its only use is in development, to automatically restart Node whenever a file is saved in the project. Nodemon is one solution to the problem that you need to manually restart Node when you make changes to a file, and it's a good compromise in development environments. But it's totally not needed in a production environment. Before we move on to the next topic, which is this semantic versioning string that we've been seeing in package.json, let me show you a command you can use to automatically create a package.json file for a brand new project. So let me create another test folder here. I can use the make directory command here on this machine, and I'm going to call this my-project, and cd into it. So this is a completely empty directory. Now instead of manually creating a package.json file, you can run the npm init command. This command will ask you a few questions about your project and use your answers to create an initial package.json file for you. It tries to use defaults from your folder; for example, if your project is already a Git repo, it'll figure that out and include the repo URL. You can also run this command with --yes to just use the defaults instead of interactively answering questions. 
I'll do that for this example. Check it out. It created a package.json file with the name of this directory, an initial version, and this scripts section, which is an important one that we're going to talk about in a few videos, and it's a very good one. But first, let's talk about these little version strings and understand the meanings of their elements. Semantic Versioning (SemVer) Npm uses semantic versioning, or SemVer for short, when it's time to update packages. Every package has a version. This is one of the required pieces of information about a package. That version in npm is written in the SemVer format. The SemVer string is basically a simple contract between a package author and the users of that package. When that number gets bumped up to release a new version of the package, the SemVer string communicates how big of a change this new release of the package is. The first number, which is called the major number, is used to communicate that breaking changes happened in the release. Those are changes that will require users to change their own code to make it work with the new release. The second number, which is called the minor number, is used to communicate that new features were added in this release, but nothing major. All the changes are still backward compatible, so it's safe for users to install these minor releases, and they will not require the users to change their code. The last number, which is called the patch number, is used to communicate that the release only contains bug fixes and security fixes, no new features and no breaking changes. You'll often see a few different special characters before the version strings in the package.json file. These special characters represent a range of acceptable versions and are put to use when you instruct npm to update your dependency tree. For example, the tilde character means that an update can install the most recent patch version. 
Remember, patch is the third number. For the version string on the screen here, it means npm can install any version that starts with 1.2 and is greater than or equal to 1.2.3. So a 1.2.4 version would be okay, and so would 1.2.5 or anything in that class; however, a 1.3.0 version will not be used. So basically this version range string is equivalent to 1.2.x where x is greater than or equal to 3. On the other hand, a caret symbol in front of a SemVer string is a more relaxed constraint. It will get the package updated to the most recent minor version. Remember, minor is the middle number. For example, caret 1.2.3 here will match version 1.2.3 itself and anything greater than that, as long as it's still under major version 1. So you might get a 1.3.0 or a 1.4.5, for example, but npm will not update all the way to 2.0. I'll admit, this might be confusing at first. Just remember that tilde is for safe patch-level updates, whereas caret is for more relaxed minor-level updates. Npm has a site that may make understanding this a bit easier; it's hosted at semver.npmjs.com. You can pick a package to see all of its available versions, then enter a range string to test. For lodash, for example, a tilde 4.16.3 matches all patch-level updates that start with 4.16 and have the patch number greater than or equal to 3, while caret 4.16.3 will match what the tilde version matched, but it will also include all the 4.17 versions. You might find it easier to use the x notation; for example, your version range string can simply be 4.16.x, or even 4.x. The tilde and caret are helpful in communicating the exact version a range started with. I think SemVer is great, and responsible npm developers should respect it when they release new versions of their code. But it's good to treat what it communicates as a promise rather than a guarantee, because even a patch release might leak breaking changes through its own dependencies. 
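To internalize the tilde/caret rules, here's a toy checker, not the real algorithm: it handles only plain x.y.z versions and ignores edge cases like 0.x ranges, which the real SemVer rules (implemented by the semver npm package) treat specially:

```javascript
// A simplified sketch of the '~' and '^' range operators
const satisfies = (version, range) => {
  const op = range[0]; // '~' or '^'
  const [maj, min, pat] = range.slice(1).split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);

  if (op === '~') {
    // tilde: same major.minor, patch can only go up
    return vMaj === maj && vMin === min && vPat >= pat;
  }
  if (op === '^') {
    // caret: same major, anything from the range's start upward
    return vMaj === maj && (vMin > min || (vMin === min && vPat >= pat));
  }
  return false;
};

console.log(satisfies('1.2.5', '~1.2.3')); // true  (patch update)
console.log(satisfies('1.3.0', '~1.2.3')); // false (minor update)
console.log(satisfies('1.3.0', '^1.2.3')); // true  (minor update is ok)
console.log(satisfies('2.0.0', '^1.2.3')); // false (major update)
```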
A minor version, for example, might introduce new elements that conflict with elements you previously assumed were safe to use. Testing your code is the only way to provide some form of guarantee that it's not broken. Installing and Using NPM Packages You have two options when it comes to installing an npm package. You can install it either locally or globally. When we worked under this 1-npm-command directory, all three packages that we installed under this folder were local. This is the default behavior of the npm install command. It'll just install the package locally under the project where you run the command. If the package is to be used within a Node project, basically if you need to use it just for one project, you should probably install it locally. I'd say 99% of the packages that you use should be installed locally. The only exception is when you need to install global tools. For example, create-react-app is a package hosting a tool. React developers use that tool to, well, create a fully configured React application. This is an example of a package that's usually installed globally. You don't need to be in a specific directory to use this tool. You can use it anywhere. Also, once your generated React application is running, you are not really depending on the create-react-app tool itself anymore. You still depend on other packages related to create-react-app, but not on the package hosting the command. To install and update packages globally, you add the -g flag, which is short for global. So for create-react-app, we npm install -g create-react-app. Once a tool package is installed globally, its command will be available for you to run from anywhere. The commands a package adds to your system are listed right here when you install that package. So this create-react-app command is now a command that I can run from anywhere.
Once a package is installed locally, you can require it from any Node script within that project (globally installed packages are meant for their commands; Node's require will not find them by default). Under this 2-usage folder here, we have a simple empty package.json file. Let's install the lodash package locally. Remember how to do that? Npm install lodash, and that's it. That will install lodash locally under the 2-usage folder, and place that package under the node_modules folder. Let's take a look under the node_modules folder; lodash is there. And note how this package has no sub-dependencies at all. The only package in our dependency tree right now is lodash. Now within the current folder, we can require the lodash package from any file. Here's a test that does exactly that. This test.js file requires the lodash library and then uses its sum function to sum the integers in an array, and it outputs the result after that. Since lodash is already installed under the node_modules folder, this file will run fine and output the result. Now try to run the same file after deleting the node_modules folder. So rm -rf node_modules, all of it, and now if you try to run node test.js, you'll find out that you can't run this file. Lodash is no longer available for Node to use. The node_modules folder is simply the connection between npm and Node. You can actually place the lodash dependency under a parent folder's node_modules folder. The node_modules folder doesn't have to be in the exact directory where you are; Node will check all the parents for their own node_modules folder. So you can basically make a directory in your home directory called node_modules, just like that, and that directory would satisfy all the dependencies for all your projects, but really this is not a recommended approach.
If your Node script requires lodash, like this, lodash should be a local dependency under the project's main node_modules folder, and the version you install should be documented in package.json so that all members on the team have a similar experience working with the project's dependencies. Creating and Publishing an NPM Package I think we're ready to bring out the big guns. Well actually, this is a lot easier than it sounds. Let's create and then publish an npm package. Try to do this exercise with me on your own. Pause the video as needed and mirror what I do. I've included a test file under the exercise files, right here under the 3-create-package directory. The goal is to make this file work and output what's expected, as you see here in the comment. So if we execute this file right now, it will not work, because the frame-print package does not exist. This is the package that we will be creating. In this line, we're requiring this package and capturing the top-level API of this package as the print variable. And then we're using the print variable as a function call. So the top-level API in the package that we need to create should be a function. And here is the expected output. It just prints the message in the argument in a frame of stars. Alright, so let's start with a mkdir command. Now usually you need to name the package directory with the same string that we are using here to require it. So mkdir frame-print. Under frame-print, we need to create a new file; let's call this file index.js. Now the name index.js is a bit special in Node and npm. By default, when you require a folder, Node will look for an index.js file under that folder, which will work in our case. So just to test this index.js, let's place a console.log statement here and just say Testing. So let me actually split this file in the editor here.
Okay, so we've got this index.js file under the frame-print directory, and we have the index.js file that is under the test directory, which we're going to use to test the frame-print package. Now to get this line here to work, this frame-print package should exist under the node_modules folder inside the test folder. We don't have that; we don't have a node_modules folder inside that folder. For testing purposes, instead of using this line, we can require the adjacent frame-print directory using relative paths. So instead of requiring this directly from node_modules, we can say: let's go up one level and then require frame-print from the same level as the parent of this test folder where we're testing. With that, we can run this index.js file under the test folder and it will require the frame-print that we are testing right here. Alright, let's test that, so node test/index.js, and you'll notice right away that we are in business. The console.log line is now showing up in our test. Very good. We're still seeing a problem that says print is not a function, because we did not export any API yet. So instead of console.log testing, let's go ahead and export an API. Now in a Node module, you can change the API using the special exports object, so I can do something like exports.a = 42 in here. However, if you need to change the top-level export, you need to use the module.exports = notation. So our top-level export here, which we are capturing in the print variable, is a function. So let's go ahead and define a function here. I'll name this function print. This function receives a single argument, so we'll call this argument msg. Now inside this function, let's console.log Testing from function, and let me go ahead and run the command again and make sure Testing from function appears. And it does. And you'll notice that the error about print not being a function is gone now, because our top-level API is now a function.
So now all we need to do is to make this function output the message within a frame of stars. For that, we can use a few console.log statements. So we can do stars, and we'll do another one, just like that, and inside the frame we'll just console.log the message, just like that. Let's go ahead and test: node test. The output now matches what was expected here. Okay, so this is a very simple package, and really the logic here doesn't matter. What matters is, we now need to make this package work without our modification to the test here, basically by restoring the original require line. And I'm going to assume that we're done developing this simple package. We need to publish this package and use it through npm. So what I want to do here: I'd like to go inside test and do something like npm install frame-print, and once this command is done, I should be able to run my test and see the exact same output that's expected. So, to get this npm install command to install our own frame-print, we need to publish that package. Now since package names are unique at npmjs.com, to avoid conflicts as we're all doing this exercise, I'm going to add a prefix to this package here and use my own username at npmjs.com. This way, when we npm install the package, you can use your own username and you can publish your own package as well. So you actually need to create an account at npmjs.com if you don't have one. To publish any package at npm, you need to have credentials. So go ahead and do that. Once you have an account, you can use your username and password to publish packages. So here's what you need to do in npm to accomplish that. I'm going to clear this command. We'll come back to that in just a little bit, and now from anywhere here in your shell you need to do npm login. Npm login is going to connect your npmjs.com credentials with your local npm tool here so that you can publish to your account. So npm login will ask you for a username and password. Go ahead and put these in, and the email as well.
This should match the email that you used when you created your account, and now I am logged in to npmjs. Very good. Now we can publish the package. This frame-print directory here is not yet a package because it does not have a package.json file. So we need to create a package.json file. Let me cd into it, frame-print, and to create a package.json we can simply use npm init. The package name is no longer frame-print, so type in your npm username, dash, frame-print. The version can be 1.0.0; this is the first release. You can add a description, entry point, test command, git repository, and I'm going to keep everything default here. Okay, we have a package.json, and guess what? Once you have a package.json, all you have to do is npm publish, just like that. This will publish your npm package to npmjs.com. To test that the publish process was successful, you can go to this URL: npmjs.com slash package slash the name of the package that you used, and you should see your package published. This means that npm can now install this package on any machine. So if you haven't done the first half of this exercise, you can just install this package from npm and use it. Now that we have a package published, we can go back to the test directory here, so I no longer need this index.js file. So focus on this index.js file under the test directory here; it is using the name of the package that I just published, but remember that this package only exists at npmjs.com right now. We need to bring it here locally using npm install for this file to access it. So the command that we need in this case is just npm install, the name of the package, and this will download the package and place it under the newly created node_modules directory right here, as you can see.
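For reference, here's a sketch of what the finished frame-print index.js could look like. The exact frame width is my own assumption; the course version prints directly with console.log, while this sketch also builds and returns the framed string so it's easy to test:

```javascript
// frame-print/index.js (sketch): the top-level API is a single function that
// prints its message inside a frame of stars.
function print(msg) {
  const stars = '*'.repeat(msg.length + 4);
  const framed = `${stars}\n* ${msg} *\n${stars}`;
  console.log(framed);
  return framed;
}

// Re-assign module.exports so the top-level API is the function itself.
module.exports = print;
```

With that as the module's index.js, require('frame-print') gives you the print function directly.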
This is our package, and it is exactly the code that we've written, but now this package was downloaded from npm, and once we have the package downloaded from npm we can go ahead and run the test script as is, and it will just work. So this is really how simple it is to publish and use an npm package. NPX and the NPM Run Scripts Let's talk about npx and the npm run scripts. Npx is another binary that you get when you install Node; it's part of the npm toolset. And the npm scripts feature is a useful and often underrated feature of npm. It enables you to automate the way your team uses your app. You can use npm scripts to start, stop, restart, or test your app in a standard way that encapsulates best practices. For this exercise, I prepared this 4-scripts directory here under the npm folder. In this folder you should see 4 files: package.json and package-lock.json, which are there so that you can install all the dependencies of this exercise; a server.js file, which creates a simple express web server; and a test.js file, which tests the Math square root function. Now you don't really need to test functions that are built into JavaScript like these. These functions are already tested, but this is just an exercise to see examples of how you can use npm scripts. Take a look at package.json; you'll see a scripts section here with 3 entries: start, test, and check. You can add more scripts in this section if you need to. These scripts use other CLI tools, like node, jest, and eslint in these examples. To run these scripts, we first need to satisfy the dependencies listed here in the files. We do that using the npm install command without any arguments, just like this. This will create a node_modules directory and place all the dependencies we have here under it. So let's test these scripts one at a time. The start script is going to run this command: node server.js. This is the same command that you can just run here to start server.js.
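Pieced together from the description above, that scripts section looks something like this (a sketch; the exact arguments in the course's file may differ):

```json
{
  "scripts": {
    "start": "node server.js",
    "test": "jest",
    "check": "eslint server.js"
  }
}
```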
In fact, go ahead and test that; it will say server is running. And the npm start script is going to do the exact same action. So to run the npm start script, you do npm run start, just like that. Now some of the special scripts, start is an example and test is another, actually have shortcuts, so you don't need the run command; you can just do npm start. So go ahead and test that. This is equivalent to running the node server.js command, and it will run the server for you. And you can go ahead and test the web server on its port, and it should be running. Let's try the other script: npm run test is going to run the jest command. Now if you try the jest command here directly, it will not work. It will say command not found, although jest was a dependency here, and the binary for jest exists somewhere under the node_modules folder. However, the jest CLI tool here is not a global one, because we installed it locally under this folder. The cool thing with npm scripts is you don't need to worry about that; as long as jest is installed locally under node_modules, the npm scripts section is going to find it. So if you run npm test (another one of the special npm commands that has a shortcut for npm scripts), this will actually fire up jest. It will find jest under node_modules, and it will fire it up and run the test.test.js file, which is a special name that jest is going to find. Now if you need to run jest right here, outside of npm scripts, you have a few options, but the easiest option of all is to use the npx command. So if you do npx jest (npx is short for npm execute), npx will also find the jest binary under the node_modules folder and it will just use it. So npx jest works as well. The eslint binary here in the check script is the same story. You can either run it with npx directly, or use it through the npm scripts; it'll work as well. Now the check script here is not a special npm script, so to run it you need to do npm run check.
So this will run the command eslint server.js, and it will check the server.js file for any eslint errors. Now usually when you run eslint you need an eslint configuration, but I think I have a global one here, so it'll just work. And it's working, flagging that I have a console statement in server.js. Now eslint is really helpful. If this command does not work for you, what you need to do is configure eslint, and to configure eslint, you need access to the eslint command itself. Now we don't have it globally; again, this is installed locally under the node_modules folder, so to configure eslint, we can use npx eslint. And what you need here is to initialize eslint with a configuration file. So you can use the --init argument here, and this will ask you a few questions about how you'd like to configure eslint. For example, I am using a popular style guide, so I can use the arrow keys here to pick "use a popular style guide", and I'm using airbnb. I like airbnb because it also supports React, and I use React. We'll go ahead and say No to React here, just airbnb. And the format can be anything; I like to have it in JavaScript. So what this command is going to do is install some other dependencies, as you can see here. Those dependencies will have the airbnb configuration, and it will also create an .eslintrc file for your project to work with eslint. So what's important to learn here is that we used npx to access the local eslint binary that's included in this project, and we were able to configure eslint through that. And if everything worked for you, when you run npm run check again, it should use your eslint configuration and say that the console.log statement is unexpected. It's a warning; you're not supposed to leave console.log statements around. So the npm scripts section in here is really a good way for a team of developers to have a standard way of running things.
This is how we run the server, this is how we test the server, this is how we check the server. And this is really just the basic usage, but there is more to the story. Certain names in the npm run section are special. You can see the full list of these special commands using npm help npm-scripts, so let's take a look at that. You'll see them here with prefixes like pre and post. So for example, you have pretest and posttest. These commands will be run by the npm test command, before and after it. So let's go ahead and test that. I'm going to add a posttest entry, and for this command I'll do something like node -e to execute an inline script, and I'll just console.log a string here: console.log "Done Testing", just like that. So guess what? Now when we run the npm test script itself, it will automatically run the posttest: npm test. It will run the jest command, and when it's done, it will execute the posttest script and output the console.log message that we're done testing. So these pre and post versions of the npm run scripts give you an easy way to add some flexibility around major npm events like installing, publishing, and testing. And you can use them to customize what's happening around these events. They are very helpful when you need to integrate your project with external servers like a build or continuous integration server, or if you need to do custom actions before or after you release or deploy something new. Updating NPM Packages In any Node project, it's usually a good idea to update your npm dependencies before testing each release to get the updates that have been made to your dependencies. You want to at least get any bug and security fixes in the patch number of these dependencies. It's also much easier to deal with these updates if you do them frequently. Npm update is the command you can use to do that. It respects all the semantic versioning strings that are in package.json.
For this exercise, I have this 5-update directory here under npm, and it has a package.json file with 2 exact dependencies, lodash and request. If you notice, these version strings for lodash and request do not have a caret or tilde prefix. This makes them exact; they're equivalent to using an equal sign here, which means I am interested in exactly this version. So the first step is to install these dependencies. We do that with npm install. This will download the exact versions that we're interested in and place them under the node_modules directory. You can verify what versions were installed using the npm ls command. Now the npm ls command is going to list the whole dependency tree, so basically all your top-level dependencies and all of their dependencies, and this request package has some sub-dependencies. But if you scroll up a little bit here, you should see the two top-level dependencies right here at the beginning of the tree. And npm installed the exact versions specified in package.json, which means, if I issue the npm update command, nothing is going to be updated, because package.json asked for exactly these versions. Now by default, when you install any npm package, let's, for example, install the express package, when npm installs that, it writes it with the caret notation. So this was added to package.json, but it was added with the caret notation, which means that any update in the minor section of the version is okay. We just got the latest version of express, so npm update will also not do anything. We don't have anything that can be updated. We have the exact versions that we're interested in. Before playing with the version strings, let me remove this express dependency that I just installed. You do that with npm uninstall express. This will remove it from under the node_modules directory, and it will also remove it from package.json, so we no longer depend on this express package.
To explore the update process, let's add a prefix here to the request package. I'm going to add a caret to the request package, so this is the default behavior when you install request, and for lodash, I am going to add a tilde. The caret in request means that any minor update is okay, so anything in the minor section will be updated, while the tilde in lodash means only patch updates are okay. You can see all the versions that are available for an npm package using the command npm show, then the name of the package, for example request, and this will show everything about request; if you're interested in just the versions, you add the word versions. So npm show request versions. And this will give us an array of all the versions available for request. Now the current version that we have is here, and the latest version right now is 2.87.0. Because we used the caret with request, the next update process is going to take us from 2.85.0 to 2.87.0, because this is a minor update and that's okay with the caret notation. What about lodash? Let's check that out. So npm show lodash versions, and the latest version for lodash right now is 4.17.10, and what I have here is 4.16.2 with the tilde notation, which means only patch updates are going to be applied. So we're not going to go all the way to 4.17, but we can get 4.16.6, because that's a security and bug level fix. So we'll go from 4.16.2 to 4.16.6. Before running the npm update command, I like to verify what packages are going to be updated and to what exact versions. And instead of doing this manually in your head and checking the versions, there is an npm command called npm outdated. This will tell you what packages will be updated. It will not update the packages, but it will tell you that if you run npm update, these wanted packages are what you're going to get. So the current packages are what you have.
The wanted packages are what you're going to get if you run the npm update command, and in here it will also show you the absolute latest packages if you're interested in those. Because of semantic versioning, in here, we're not really getting the latest package, but rather the latest bug fix that we can have. And again, this is the absolute minimum thing you should do when you're planning to update your packages. However, just be careful and test, because sometimes even patch-level updates might introduce new bugs in your system. Alright, so to update, we just run npm update, and this will update the packages according to semantic versioning. Once it's done, you can run npm ls (I'm going to pipe that to less), and this will give you the latest dependency tree, starting from your top-level dependencies here, which were updated to the latest according to semantic versioning. The package.json file was also updated to reflect the new versions that we're starting with right now. And note how the npm update command used the caret notation here, although I was using the tilde notation before, so be careful about that. Now what if I'm interested in an update that is beyond the version range string? I can do that with npm install, the name of the package, and I can actually place any version here. So I can say 4.17, or just 4, and that will give me the latest in the 4.x range. Or I can just say latest: npm install lodash at latest, and this will give me the absolute latest lodash library, as you can see here, 4.17.10. And this matches the latest version for lodash right here: with npm show lodash version, the singular version here is going to give me the latest version available for lodash, and it's the current version that I have in package.json, because I specified that I am interested in the latest version of lodash.
Wrap Up It's a wrap on this npm course module. We covered a lot of topics under npm, but they were really all important and will be needed when working with Node projects. I'd like to remind you though that npm is just one package manager that you can use; there are a few others, most notably the yarn project by Facebook. The commands for these other package managers might be slightly different, but the concepts are similar. Npm is what you get by default when you install Node, but you can easily switch to other managers if you like them better. So in this module, we talked about what npm is and why it's needed. We explored the basic commands for working with npm. We discussed the purpose of package.json and package-lock.json. We learned about semantic versioning and how the npm commands work with it. We learned how to locally and globally install npm packages, and how to use these installed packages in Node. We created a simple npm package and published it on npmjs.com. We learned about using npx to execute locally installed executable npm commands. And we saw how to use npm run-scripts to have a standard way to run and test the tasks in a project. And finally, we learned how to check and update your project dependency tree. In the next module, we'll talk about the most important aspect of Node.js, which is how it works with slow operations without blocking other operations. Modules and Concurrency Introduction In this module, we're going to talk about two of the core fundamentals of Node: its module system, and how to define and require modules using the exports and require objects. And we'll also talk about how Node handles slow operations and allows the execution of many things at once without using any threads, which is a big deal. Let's start with the simple module system and talk about the three important keywords here: the module keyword, the exports keyword, and the require keyword.
Defining and Using Node Modules Let's explore how to define and use modules in Node, but before we do that, let me start with a basic question. What exactly is a module? In Node, the word module is basically a fancy word that means a file, or a folder, that contains code. That's it. So this file here with this simple console.log line is a module. I am currently under the 1-define-use folder under the 4-modules folder. And the first file right there is a module. However, the reason the word module is a bit better than just the terms file or folder here is that Node does not just execute the file as is; it wraps the file with a function first. This is something that you need to remember, so let me show you what I mean. You know the arguments keyword in JavaScript? If you access that keyword inside a function, like what I am doing in this file here, the arguments keyword will represent all the arguments passed into that function, regardless of how many of them get passed to it. So if I execute this file, 2-arguments, under this folder, you'll notice how, right here, this is the representation of the arguments keyword, and it's representing all the arguments that we passed here to the dynamic arguments function. So this arguments keyword is a handy way to write functions that accept a dynamic number of arguments. Okay, here's a weird question now, and this is one of the questions I'd ask in a Node interview. What would Node do if you console.log the arguments keyword at the top level of a Node file like this? If your answer is that arguments is not defined here because we are not inside a function, you would not be getting that Node job. Your answer would be correct in a browser, but Node has a different story. Because Node internally wraps every file it executes with a function, this console.log line here will actually output something. Let's try it: node 3-wrapper.js.
It outputs exactly five arguments, which Node passed to that hidden wrapper function when it executed your file. You should remember that every time you hear the word module: this is not just a file, it's a function that receives arguments, and it will also return something. Let me write in comments here what Node does internally to this file. There is a function wrapper here, so basically here is the function call, and this function receives a set of arguments, and your code becomes the body of this function. And then Node will call this function. The arguments that this wrapping function receives are, in order: exports, require, module, __filename, and then __dirname. Have you used any of these arguments before in your Node files? You can use exports or module.exports to define the API of a module, you can use require to require other modules inside this one, __filename has the path of this file, and __dirname has the path of the folder hosting this file. All of these objects, which you can use inside this file, are not global objects. They are just arguments to the wrapping function. They are customized for each file, and they're different in each file. So when inside a file I do something like exports.a = 42, I am just using one of the arguments of the wrapping function. The exports keyword here is not some globally available keyword. It's just the first argument to the hidden wrapping function. This wrapping function is also the reason why, in Node, when we define a variable in any file at the top level, like, for example, let g = 1, this g variable will not be a global variable at all. It's just a variable inside a function. The scope of this variable is the hidden wrapping function. This is different from what a browser would do when you define a top-level variable like this. Browsers do not have this hidden wrapping function. So when you define a variable like this in a browser script, that variable will be global.
It will be available to all the scripts you include after defining it, because you're basically putting it on the global scope. But that's the browser, not Node. Node has the wrapping function, and this g here is not global at all. It's just a local variable inside the wrapping function. It is really important that you remember that. This variable here is just scoped to the built-in wrapping function. All right, so besides making five arguments available to you inside any file, the wrapping function also internally returns something. And the thing it returns is the module.exports property. This is what the built-in wrapping function returns by default, all the time, for every file: it will return module.exports. And module here is just the module argument that gets passed to the function as well, and it's the object that Node uses to manage the dependencies and the APIs of modules. Note that you don't need to use the return keyword here yourself; Node will always make this function return the module.exports object, and this is the object we can use to define the API of this module. The exports object here is just an alias for module.exports. When Node invokes the wrapping function, it simply passes module.exports as the value of the first argument. This is why we can use the exports keyword itself to export a new property on the API, but what we're really modifying when we do that is the module.exports object, which is the one being returned. So just like I did exports.a = 42 here, I can do module.exports, and let's export another property for this module's API; let's do b = 37. So both a and b are part of the API of this module, because exports is just an alias for module.exports, and we're returning module.exports. To use this API that we just defined in these 2 lines here, there is another file in the same folder, 4-require, and it is invoking the require function, passing in, as a string, the path of the module.
The result of this require call is really the module.exports object that Node returned from our module. So the module's API here becomes the module.exports that the file returned, and we can see the console.log in here. Let's go ahead and test that real quick, node 4-require, and as you can see here, we're getting the properties of the module that we defined here on lines 7 and 8. This alias relation between exports and module.exports is the reason why, if we re-assign the exports object directly, if we do exports = something, we're not really changing the module.exports object anymore; we're just re-assigning the alias, making this variable point to a new local object in here and no longer point to module.exports. So if, for example, you want your top-level API to be a function instead of an object, which is a valid case that we use all the time, you can't do it this way. This is not okay. This will not work, again, because you're not really modifying module.exports; you're just breaking the assignment reference between exports and module.exports. But you can totally do module.exports = a function, and that would be okay, because module.exports is what is being returned, and I can change the value of module.exports itself. So this line here is okay. And as you can see, the top-level API does not have to be an object. Next, I'll go over a few examples of modules in Node and show you how you can require them and use their API. Examples of Module APIs Here are some examples that define and use multiple types of API objects. I am now under the 2-examples folder under the 4-modules folder, and in here we have 8 files: 4 different API exports and, for each, a file showing how to use it. The first file here is the simple case. When you want your API to be a simple object, you don't really need to use module.exports; you can just use exports, and then you put properties on your API.
And to use this module, when you call the require function on this module, you get back a simple API object, so you can read the elements of the API as properties on that object: node 1-use.js. We are getting the values that are exported through the API here because the API is just an object. In the second example here, the top-level API is an array, so I needed to use module.exports, because I am reassigning the default export, which is usually an empty object, and now I'm saying my top-level API is an array. To use this kind of API, under 2-use.js here, when you require the module, you get back the value that you need. That value itself becomes your API, and note how you can just inline the require call inside some other function, because it's just a function call that returns a value. That value is your API, and in this case it's an array. So node 2-use.js and you get the array directly here from the console.log line. Your API can also be just a string. Look at this file. This API is returning a template string. Note that these characters are the backtick characters, not the single-quote characters, right? Because this is a multi-line template string, and the whole API is just this string. So if I want to use this API, when I require the file, I get back an object which is a string, so if I console.log this object, it will be a string. So node 3-use.js. That is the string that this API is exporting. But let's say you do want an HTML template, but you want it to be dynamic. You want to tell your module that you'd like it to generate an HTML template for a certain title, for example, instead of a generic title. How do you do that? You do it by exporting a function. Check this one out, 4-function here.
It's the exact same template text, and it's being done through a template string as well, but the difference is that the top-level API here is now a function. I used the shorthand notation for the arrow function here to have an arrow function that receives the title as a single argument and then returns a template string that injects the title value right here inside the title tag. To use this API that is basically a function, when you require this function here, in 4-use, right, you get back a function. I called it here templateGenerator. It's a function, and you can invoke this function with any argument that you want, because the top-level API is itself a function. I captured the result of the require call into a variable, and I am now executing that variable because it's just a function, and the result of executing this function is what the function returned here, which is the template. So this becomes my template, and I can console.log that. So node 4-use.js and you get the exact same template, but now it is being customized with a custom title that I get to pass to the function that this API is exporting. So this API is now a lot more flexible than the previous one, which just hard-coded a value for me. So these are just a few examples of how flexible this module.exports object is and how you can use it to return any kind of API in a Node module. Node's Global Object If defining a variable on the top-level scope in a Node module does not make it a global one, is there a way to define a global value in Node? The answer is yes, but you should really avoid doing that. The only global variables in Node are the variables attached to this special global object. This global object is the equivalent of the window object in browsers, and it's the only global concept in Node.
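Condensed into one file, the 4-function/4-use pair looks roughly like this (the exact markup of the course's template differs; this is a sketch):

```javascript
// The module's entire API is one arrow function: it takes a title and
// returns a multi-line template string with the title injected.
const templateGenerator = (title) => `
<html>
  <head><title>${title}</title></head>
  <body><h1>${title}</h1></body>
</html>`;

module.exports = templateGenerator;

// In 4-use.js, require('./4-function') would hand you this function:
console.log(templateGenerator('Hello Node'));
```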
If you inspect this global object (I'm using a console.dir trick here to only show the top-level properties on this object rather than inspecting it as a tree, because it's a big one) and execute this file, node 1-dir.js, you'll see a few things here on the global object. There is the process object, for example, and there are other features like buffers and timers. All of these here are globally available in all Node modules because they're attached to this special global object. So when you use a setTimeout call, you're actually using global.setTimeout, but because it's part of the global object, we can use it directly. We don't need to type global. because that's the concept of the global object in Node. Things will just be available globally. You can also attach new things to this global object yourself. In the 2-set.js file here I have a simple example. I am doing global.answer = 42, making answer a new global value that you can now access in ANY Node module. Here's how you can test that: in the 2-test.js file here, I am requiring the file that added the global value, and then console logging answer directly, not as part of an API. And guess what? If we execute 2-test here, because answer was set on the global object, you can access it from anywhere directly. Pretty cool, right? WRONG. Don't do that. Global values like this are bad. This is true here, and it's also true in the browser environment. You should not make your code depend on any global state. Just treat this global object as a built-in thing that Node itself uses, and you shouldn't. Do not attach anything to it. The Event Loop Let's work under folder 4-event-loop. The first file here, 1-log.js, has a single console.log line. Execute this simple file: node 1-log.js. What I want you to notice now is that Node started an operating system process here and it's now done with it: Node finished the work, and the operating system terminated that process.
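The 2-set.js/2-test.js pair collapses to this sketch (shown only to explain the mechanism, not as a recommendation):

```javascript
// Attaching a value to the global object makes it readable everywhere,
// with or without the global. prefix. Don't do this in real code.
global.answer = 42;

console.log(answer); // resolves through the global object
console.log(global.setTimeout === setTimeout); // true: timers live on global
```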
A Node process, by default, will not keep running in the background unless it has a reason to. This 1-liner script here did not really give the Node process a reason to continue running. Let's now give the process a reason to continue running. We can do so by starting an interval timer. We've learned about timers in a previous module. Under the file 2-interval, there is a simple interval call here that executes a console.log line every 5 seconds. Go ahead and execute this file now, node 2-interval, and note how the Node process did not exit. It is running, and it will continue running forever until it crashes with an unexpected error, or the user kills it manually with CTRL+C. The real reason that process did not exit is that Node's Event Loop is itself busy now. What is this Event Loop thing, you ask? It is your best friend in Node. It's the hidden magic that will take care of anything asynchronous for you, and you don't have to worry about working with threads. In other languages, if you need to do asynchronous work, you have to manage threads yourself: you have to start them, do the async work inside of them, monitor them, and make sure they don't access shared data, or if they do, make sure that there are no race conditions. It is a complicated task. Now, some languages make it easier than others, but I think the big winner in that domain is Node, because you just use Node's API and Mr. Event Loop here will do all the heavy lifting. The Event Loop is just a simple infinite loop that's built inside Node, and its main task is to monitor any asynchronous operations that need to be run and figure out when they're ready to be consumed. In this example, the Event Loop will monitor the setInterval timer, and every 5 seconds it'll take the interval's callback, which is the first argument to setInterval, the arrow function here, and it'll send this arrow function to V8, and V8 will execute what's inside that function.
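The 2-interval.js file amounts to just this. The pending timer is what keeps the event loop (and thus the process) alive:

```javascript
// As long as this interval is active, the event loop has work scheduled,
// so the Node process will not exit on its own.
const timer = setInterval(() => {
  console.log('Hello every 5 seconds');
}, 5000);

// clearInterval(timer) would remove that work and let the process exit.
```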
Because this is an every-5-seconds kind of thing, the Node process is going to continue to run forever, and it will not exit. So while this Node process is running, if you go to another terminal and try the command ps -ef to list all the running processes, and pipe the output of this command through grep node to filter the list down to any processes matching the word "node", you'll see the process hosting our script, right here. You see that? It is still running. Now, I have a few other processes that are matching "node"; I think these are actually the Atom editor itself here. Believe it or not, this Atom editor here is a Node application! It is a complicated one, but it simply runs with Node using an application called Electron, which allows you to use web and Node technologies to build cross-platform desktop applications for Mac, Windows, and Linux. How cool is that. Okay, so our process here is the last one, because we started it last, so it will have the highest process ID. You can kill our continuously-running Node process using the Linux kill command itself, giving it the process ID here, but you can also kill any active Node process using Ctrl+C on its output here, and that action will stop the event loop and then remove the process from the operating system. Run the same ps -ef | grep node command now, and you'll see that the process that was hosting the interval file is gone. Node's Event Loop works with multiple phases and queues. At this point, what I want you to remember is that EVERY Node process starts this infinite loop that we call the Event Loop, but when the process has no asynchronous operations to perform, the Event Loop will exit and the operating system will terminate that Node process. Errors vs. Exceptions Do you know the difference between errors and exceptions?
An error is simply a problem, so applications should not really catch that problem; they should just let it happen. An exception, on the other hand, is a condition, and applications usually catch that condition and do something with it. Let's talk about that in Node. In the 5-errors directory here, I have 3 files. The first file is 1-loop, and this file is simply looping over an array of files. Those are actual files that I have in my home directory, and they might be Mac-specific, so you'll want to change these files to something that you have in your local home directory. Then in this section, I am looping over this files array using a forEach call, and for each file, we are basically reading the content of the file. I use this path.resolve call to find the path of the file, and then I use the readFileSync method, which is probably the first time we're seeing this method. It's equivalent to readFile, but it will do the reading synchronously, not asynchronously. We're going to talk a little bit more about that in a future module, but for now, let's focus on thinking about what's going to happen if this code has an error, has a problem. So let's quickly run this code and make sure it doesn't have any problems as is. It is running, and it's reading two files' data. Very good. Now, what if I injected a file here that does not exist? So this file now, in the middle, does not exist in my home directory. What do you think the code is going to do? The code is going to try and read a file that does not exist, and Node will crash and exit. So after reading the first file successfully, for the second file, Node crashed and exited, so we're not reading the third file here. And this is normal. When Node encounters an error like this one, which is basically not planned for, it will just crash. And this is important to remember here, because although this process started the infinite event loop to do any async actions, it immediately stopped that loop when it encountered this unexpected error.
And I say unexpected error here because the code did not account for the possibility of an error. We can make this code account for the error by simply making an exception for it. We write code to represent this exception. So let's assume that trying to read a file that does not exist is not really a problem for this code. We're going to upgrade this case from a problem, which is an error, into a condition, which is an exception. And the easiest way to do that is through a try/catch statement. So if you look at the second file under this folder here, 2-try, you'll notice that this file is injecting the same not-found file, and it's exactly the same code, except now I've put the code inside a try/catch statement. So try this code, and then catch an error. And inside this catch block we can do whatever we want with this error object. Once you catch the error like this, the Node process will not crash and exit when it encounters any errors within the try block. When we run this file, node 2-try.js, because it is going to generate an error, but that error is wrapped inside a try/catch statement, the catch block will be executed. So note how the Node process did not crash and exit. We just got a file-not-found message for the second file, and then the Node process continued to read the third file, because the file-not-found is no longer a problem here; it's just an exception. It is okay to encounter files that do not exist, and just continue with your code. So this is a great feature, but it's also a dangerous one. This code means that we're okay with files not being found when we attempt to read them, but it also means that we're okay with any error that happens when we read a file, not just the error about files not being found. If, for example, the file is found but has corrupt data, we'll treat that other error exactly the same, making the console.log line here not really an accurate one. Let me simulate that.
The second argument for readFileSync accepts an encoding string. So you can pass in utf-8 here to say: I'd like to decode the content of this file, when I read it, as utf-8. Let's pass an invalid encoding label. If you execute the code now, because we did try/catch, and because we're generically saying file-not-found when we encounter an error, we're going to see the file-not-found message for all of them. But really, the error that happened was not file-not-found; the error that's happening here is that the second argument is an invalid encoding label. So this console.log message is no longer accurate. A more accurate message here would be "Something went wrong and we are going to ignore it", because this is just a generic catching of the error. And this is not good. Here's the thing: if you want your code to only catch the case when the file is not found, you should make this catch statement a bit more specific. And in the third file here you're going to see an example of that. So take a look at 3-throw, and you'll notice the exact same code, except we're now not generically catching the error message. We have an if statement. And this if statement is basically saying: only ignore the error if it's a file-not-found error. Otherwise, throw the error object again. And this throw line is what Node would have done with the error if we did not have a try/catch statement, making this code only ignore a specific case and perform the default behavior otherwise. In Node, the default behavior for an error is to let the process exit, because the state of that process is unknown, and it's not safe to continue running with an unknown state. So let's try this one: node 3-throw. This is fine because this is actually a file-not-found error. And now let's simulate a bad encoding string, run this process again, and the process will crash and exit, because this is a problem, this is an unknown error. It's a problem.
And the safest thing to do for a problem is to just let the process crash and exit. Node Clusters Okay, you might be wondering now: if it's a normal thing for a Node process to crash and exit, and we should definitely let it do so, how is this going to be an acceptable thing in production? Remember when we talked about why Node is named Node? Because you don't run a single Node process in production. You run many. One worker process exiting should not mean the system is down. You should be running a cluster of Node processes. And that cluster will have a master process, which should be monitoring your application worker processes. This is true even if you're running a Node application on a single server. It's often the case that production servers will have many CPU cores. You should be running a Node process for each core. If you're not running a cluster in that case, you're not really utilizing the full power of your machine. And even if your single production server has a single core, you should be running a cluster anyway, because that cluster has the simple job of monitoring the actual Node process and starting a new one when a process crashes and exits. You can run a simple cluster in Node using the built-in cluster module, and I go in depth about that topic in the Advanced Node.js course here at Pluralsight. Alternatively, you can use one of the many tools that wrap the cluster module to run your Node application in production. PM2 is one example of these tools. If you run your Node application through a tool like PM2, that tool can automatically use all the available cores in your server, and it will automatically create a new process every time an active process crashes and exits. PM2 can also reload your application without any downtime. A tool like PM2 is really a must in production. Node's Asynchronous Patterns Node originally used the callback pattern for everything asynchronous.
Today, Node is changing, and it's incrementally adopting other patterns as they surface in the JavaScript language itself. Let me show you some examples of that. Under the async-patterns folder here, there are 6 files. Let's start with 1-sync. This is the pattern usually used in other programming environments. Slow actions are usually handled with threads directly, and the code does not have any patterns to consume data coming from these slow actions. You can do that as well in Node. We've seen this readFileSync method in previous videos. It's the synchronous version that Node makes available to read a file. You don't need any pattern to consume its data. You get the data when you call that method, and the whole program here does not go through the Event Loop. We're directly using the operating system's synchronous file-reading API. When you execute this code, you'll see the File data console.log line before the TEST console.log line, because it's all synchronous. Now look at the 2-cb-pattern file here. This one is using the readFile method from the built-in fs module. This method is asynchronous. It needs to go through the Event Loop. We can't access the data directly after calling this method. This is why Node came up with this callback pattern, where the last argument of any asynchronous function, like readFile here, is itself a function, which is known as a callback function. This callback function is always the last argument of the host function. The Event Loop will have this callback executed when the operating system is ready with data for us, as I explained in the first module of this course. One thing I want you to notice about the callback pattern is how the callback function always receives an error object as its first argument. The data comes after that, in the second argument, and sometimes in many arguments after that.
This first error argument is standard in Node's idiomatic callback pattern, which is why that pattern is often referred to as the error-first callback pattern. If there is an error in the code, this first argument will be an error object. And if there is no error, this first argument will be passed as null. When we execute this file, you'll see the TEST line first, and then the File data line, because of how readFile is async. Basically, this code was executed in two iterations of the Event Loop. The first iteration executed readFile itself and the console.log TEST line, and that first iteration only defined the callback function. And also, in that same iteration, Node will ask the operating system for data. Later on, once the operating system is ready with data, and this could be a few seconds later, the event loop will go through another iteration, where it will now invoke the callback function and execute the console.log on line 4 here. This callback pattern is good, but limited, and it introduces some complexities and inconveniences. One famous inconvenience about callbacks is the fact that we need to nest them inside each other if we're to make asynchronous actions that depend on each other. Here's an example under 3-cb-nesting.js. We start with the same readFile method here to read the data of a file, and then, once we have that data, we'll use another fs method here, writeFile, to create a copy of this file. Note how I needed to nest the callback of the second operation within the callback of the first operation, and if I have yet another async operation to be done after the writeFile, I'll have to do more nesting here. This problem is famously known as the pyramid of doom in computer programming, and it's not ideal. It makes the code harder to write, read, and maintain. Luckily, since the JavaScript language adopted the promise pattern, which we talked about in the modern JavaScript course module, Node is doing that as well!
Look at 4-promisify.js. Node today comes with a tool that you can use to promisify any built-in asynchronous API method. And this is a very good start. In this example, I am using the promisify function to create a new version of readFile, one that does not need a callback, but rather returns a promise. And since it is returning a promise, I am able to consume this promise using the async/await feature. You can promisify any asynchronous action that's designed according to Node's idiomatic callback pattern (callback as the last argument, with an error-first argument within that callback). Not only that, Node also ships first-class support for promises in some modules as well. This fs module is one of them. You don't really need to use the promisify function here; you can just use the native promises returned by the fs module itself. Look at 5-fs-promises.js and compare it with the 4-promisify.js file: we're destructuring the readFile method here from a special object, the promises object that's attached to the top-level API of the fs module. By doing so, you get a promise-based readFile out of the box, and you can consume it with async/await as we did before. How cool is that. This is the near future of all of Node's APIs. Promises are better than callbacks. The code is easier to read here, and promises open the door to so much flexibility to nest operations and even loop over them. Take a look at 6-promise-nesting.js and compare it with 3-cb-nesting.js, and see how I made the exact same copy example, but with promises this time. And look how much more readable this is. We just await on readFile, and then we await on writeFile. And if we need to await further, we just do it line by line. There is no nesting going on here, and that makes this code a lot easier to deal with. Event Emitters There is a module in Node called Event Emitter, and it's an important one. So let's talk about it here in this video.
Under the 7-event-emitters folder here, there is an example.js for us to work through an event-emitting example. And this is how we use the Event Emitter. We require the events library, which is built in; we don't need to npm install anything here, and we usually name the result of requiring the events library the EventEmitter class, just like this. The events library is one of the most important built-in libraries in Node, because most of the other modules are built on the Event Emitter module. For example, streams in Node are event emitters. We've seen a few streams when we talked about the process object, remember? Those are event emitters as well, and in the next module, we'll work with more streams. So let's understand event emitters on their own first. After requiring the EventEmitter class, like this, we create an event emitter object. Let me put it in a constant, name it myEmitter, and we use the new keyword on the EventEmitter class. So myEmitter is now an object that can emit events. An event-emitting object has many methods, but here are the 2 most important ones. You can emit an event, identified by a string. This string is a name that can be anything, and you use it to identify a certain event. So let's emit a TEST_EVENT string here. Now, the other method on any emitter object is how you can subscribe to events emitted by this object, and you do that using the .on method. You say myEmitter.on, this TEST_EVENT string, and then you pass in a callback function here as a second argument. The myEmitter object will invoke this callback function every time the event represented by this string is fired. So let me put a console.log line here and log something like TEST_EVENT was fired. Here's how this event-emitting business is a lot more flexible than single callbacks: you can do this .on operation multiple times! This gives you the flexibility of defining different behaviors in different functions in response to a single event. Okay, here is another one of those interview questions.
If we invoke this file now, how many times will we see the "TEST_EVENT was fired" message? Think about it. And let me execute this file (node example.js) to test... and that console.log line was invoked ZERO times. The reason here is order. We subscribed to the TEST_EVENT three times, but that TEST_EVENT was not fired after we subscribed to it. It was emitted once BEFORE we subscribed, but no one was listening at that point. If you emit the event after subscribing to it, you should now see the three callbacks getting executed. Here is another interview question. If I don't emit this event after the subscriptions that I just did, and I just keep this line here, what can I do to this line to make it trigger the subscriber callbacks that happened after it, without moving it to after the subscribe calls? I can use the Event Loop. I can delay the execution of this line to a later iteration of the event loop by using a simple setImmediate call, for example. Here's how to do that. If we wrap this emit line with a setImmediate call, the callback of setImmediate will be placed on the event loop, and it'll be invoked after the rest of this program is executed. So these three subscribe calls will happen before the emit call in that case, and we see the "TEST_EVENT was fired" message. This is event emitting. Very simple, yet very powerful, because it allows modules to work together without depending on each other's APIs. Wrap Up This was an important module in the course. We talked about how to define and use modules in Node, and we've seen some examples that define and use multiple types of APIs. We talked about the special global object and how you should avoid using it. We then talked about Node's Event Loop, the hidden magic that allows you to easily do asynchronous programming in Node without using any threads. We briefly talked about error handling and how to make exceptions for problems.
We talked about the concept of clusters in Node and how a master process can restart other workers when they have problems. We looked at Node's asynchronous patterns and saw how the promise pattern is a lot more flexible and readable than the callback one. And finally, we went through a simple example for the Event Emitter module, which is an important one, because most other modules in Node use the Event Emitter module one way or another. In the next module of this course, we'll talk about web servers and how Node provides some utilities to program these types of servers. Working with Web Servers Hello World... The Node's Version Node makes it very easy for you to do network programming and create a full and customizable web server, and it's all built into the runtime. We've seen this hello world example in the second module of this course when we talked about requiring scripts. I've placed it here under the folder 5-web to talk a little bit more about the http module. As you know, this http module is a built-in one; it comes with Node. This is why we were able to run this code without needing to npm install anything. But if, instead of http here, we wanted to use another web framework, say express for example, we would need to npm install the express package in this folder before we could use it. But we don't need to install anything for the built-in http module. The require call here returned something. It's just a function call, and if you remember from the previous module of the course, this require call returns the API of the module that we are requiring. In here, we're capturing the API of the http module into a local variable, which we also named http. We don't have to, but that's usually the convention. The local http variable now has all the methods defined on the public API of the http module. For example, one of these methods is createServer, which we're using here. createServer is a function that accepts an argument, and that argument is also a function.
This is why we have an inline function reference here. This might be a bit confusing. If you've heard the term that functions are first-class citizens in JavaScript, this here is exactly what we mean by that. We can pass functions as arguments to other functions. Another term that you might've heard of here is the term higher-order functions. Those are functions that receive other functions as arguments, or return other functions. In our code sample here, the createServer function is a higher-order function because it receives another function as an argument. So let me improve this code a little bit by extracting this anonymous function that we're passing to createServer, and giving it a name instead. So I'm going to capture it in a const here, and let's name it requestListener, and I'm just going to paste the code that I just cut from createServer. Now the createServer method can be passed the requestListener variable itself, which holds a reference to the function. This is equivalent to what we had before. Important side note here: you pass in the function reference here; you do not call it, like that. This is a big difference. By calling the function, the argument value here becomes what the function returns, not the function itself. What we want here is to pass a reference to the function itself, which is basically a pointer to where the function is defined. Listener functions receive two objects: the request object and the response object. Now, I named them req and res here, but you can really name them anything. These are positional arguments: the first argument represents the request side of a request event, and the second argument represents the response side of a request event. They're famously named req and res. So the requestListener here is a function that will be invoked every time there is a request event. This is important. Remember when we talked about Event Emitters in the previous module? Well, guess what?
This server object that we get as a result of calling the createServer method is an event emitter, and one of the events it emits is named "request". In fact, this same code can be rewritten using the event emitter API. Instead of passing the requestListener function to createServer, we can use a call to server.on, where the event name is "request", and pass the requestListener right here. Every time we have a request event coming to this server, we're instructing the server object to execute our requestListener function. Are you connecting the dots now? I actually like this version better than using the shorthand notation of createServer. I think it's just more readable. Inside the request handling function here, we can read information about the request using this req object. For example, we can read what URL the user is requesting, what parameters they are sending along, what IP they are coming from, and many other things. We can write data back to the requester using the response object here, which is what we're doing using the .end method. This .end method is equivalent to calling .write and then calling .end. It's just a shorthand notation for writing a single line and ending the connection. Since this is HTTP communication, we need to follow the HTTP protocol. That protocol, for example, requires an explicit signal that the communication is over. This is exactly why we needed to use the .end method here. This .end method is not optional, because without it the HTTP session will think that we're still streaming data to it. And of course, we talked about how createServer only creates the server object; it does not make it actively listen to requests. To run the server and activate it, we needed to call the .listen method on the server object. This listen method accepts many arguments. 
The first argument is the operating system port on which we want the server to listen for incoming requests, and the last argument (if you remember, the idiomatic callback pattern) is our callback that will get invoked if the server reserved the port and started listening on it successfully. This last-argument callback can be used as a confirmation that the server is ready. That's why in here we're console logging the confirmation message. The reason we needed to use that callback for a confirmation here is because the listen method itself is an asynchronous one. Note also that when we ran this file, the Node process did not exit, because the event loop is now also busy listening to incoming connections on port 4242, and it will do that forever. Monitoring Files for Changes Node.js, in development, is a bit different than many other runtimes and frameworks. It does not auto-reload any changes. You're going to have to remember to restart it, but let me show you a better option. Remember in this same example, when we changed things here and saved, we needed to restart Node to see the new changes in the browser? This is not ideal. The popular solution to this problem in Node is to monitor the operating system files for save events, and auto restart Node when you get these events. There are many npm packages that are designed for this exact purpose. I usually use nodemon. You just npm i -g nodemon. It's probably a good idea to have this package installed globally, but it doesn't really have to be. Once you have access to the nodemon command, you run your code with nodemon instead of node. So: nodemon 1-hello-world.js. The nodemon command is a wrapper around the node command, so the nodemon command we just typed will run our server as if we're running it with the node command, but now it will monitor the files for any save events and reload itself when files are saved. 
So when we change this to "Node" and just hit the save button, nodemon automatically restarted itself, and you'll be able to see the new changes in effect. Of course, you don't need this nodemon package in production. This is just a development convenience. The “req” and “res” Objects The requestListener function receives these two arguments. Let's talk about them a bit more. How do we use them? What information and capabilities do they provide? How do we know what to do with them? The easiest way to explore them is to log them. For example, let's console.log the req object here. Now to get this line to execute, we don't need to restart Node, thanks to nodemon, but we do need to go make an HTTP request, because this is the function that gets executed per HTTP request. When we do a request, you'll see this big object here in the logs. This is the request object, and this is everything you can do on it. It has a lot of properties, and the printing here is also printing everything nested under the main properties of the request object, which is a bit hard to read. To get a smaller output about this request object, use dir instead of log, and pass in, as a second argument to it, an object with a depth property equal to 0. This means: do not print any nested objects. Only print the first level of properties for this req object. Node has restarted. Refresh to see the new output, and now we see only the first level of properties, and any nested objects will not be printed here. Note a couple of things about this output. The request object is of type IncomingMessage. That's the class that was used to internally instantiate a request object. This is good to know, because if you want to find the documentation about what you can do with the request object, you need to go to the IncomingMessage class under the HTTP documentation. So that little request object here belongs to the IncomingMessage class. 
All the properties and events you find in this documentation section apply to the request object. Because the popular name of this incoming message object is "request", one might confuse it with the ClientRequest class. This ClientRequest class is used when you want to use the http library as an agent to fetch information from an http server, rather than as a server, which is what our example is doing. So remember: the request object within an HTTP server listener function is of type IncomingMessage. The other important thing to notice here is that the console.dir line was executed twice. We have two request objects here, which means that for a single HTTP request coming from my Chrome browser, the listener function is executed twice, not once. If you want to investigate this a little bit further, and you should, drill down for more clues. For example, look at the request property values here. In particular, the url property. So instead of console.dir-ing the request here, let's just log req.url and see what that will output. So the server restarted, refresh this guy, and take a look at the two URLs we're getting here. One is the root request, and the other is a favicon request. My Chrome is automatically asking the server if it has a favicon. This url is a property on the req object, and it's actually the property that web frameworks use to implement their routing features. We'll talk about web frameworks in the next video. Let me now console.dir the response object and take a look at that. Refresh the browser. And the response object is of type ServerResponse, if you need to look it up in the documentation. We can use this response object to send a few things over to the requester. It can control things like the status code and the status message, the headers of the response, and any data we'd like to include in the response body, which is what our example is doing here. Both the request and response objects are streams here. 
The request object is a readable stream, while the response object is a writable one. Because streams are all event emitters, we can subscribe to events emitted by these objects too. We can also use them with other streams, which is a really good thing for you to learn right after this course. I cover it in detail in the Advanced Node.js course at Pluralsight. Node Web Frameworks While Node comes with built-in modules to work with HTTP and HTTPS, and even HTTP/2, these modules are low-level and they offer limited capabilities. For example, there is no API to read body parameters from a POST request; you have to manually parse the text, which you can do using other Node built-in modules. Because of this fact, a number of web frameworks have surfaced in the Node ecosystem to provide a richer interface for creating and working with web servers. These frameworks usually wrap the underlying built-in power in Node and give you a simpler API to create more complicated features in your web servers. The most popular of these options is the Express.js framework. So let's create an express-based web server example, and let's do it here under the 2-express folder, inside the 5-web directory. I have an index.js already there, which is the starting template of an express server. But the first thing that you need to do really is to bring express in as a dependency, and for that we need to use npm install. And to use npm install, you should have a package.json. So the first step is to create a package.json. We can use the "npm init" command here, and I'm just going to do --yes to get the defaults of npm init, and this command will create a package.json file for us. Here is package.json. Once we have package.json like that, we bring in the express dependency using npm install express. This will download express, place it under the node_modules folder, and as you can see, it documented that we started depending on express for this project. 
If you look under the node_modules folder now, you'll see a lot more dependencies than express; express is one of these guys, but just keep in mind that all of these packages are now part of your project dependencies. To use express, we require express using the require function. The express package exports its top-level API as a function. So when we capture the returned API into a local variable here, which is usually named express as well, that express variable in our script is now a function. And this is why, to create an express server, we just call this function. The result of calling this express function is an object that's usually named server (or you'll see it named "app" in some examples). Now, to make this server object listen to incoming requests from users of this web server, we call the .listen method on it, which is very similar to the method we used to create a web server using the built-in http module in Node. It takes a callback that gets invoked when the listen operation is done. Now here's where express is a bit different than the built-in module. We don't define a single request listener. We define many listeners. We actually define a listener per URL. For every URL and HTTP operation that we want our server to support, we define a listener. For example, usually the first URL that you want to support is the root URL. We use the following syntax to do that. We do server.get and specify the URL in the first argument for this get function. So, slash here as a string, and the second argument is the listener function that receives both the request and response objects. So I just inlined it here, and this guy receives request and response. Inside the listener function, these request and response objects are also a bit different. For example, instead of using the .write method on the response object, we can use the .send method. You can use the .send method to send anything to the requester, not just a string. 
You can, for example, send an object if you need to. And with .send, you don't need to invoke the .end method; express will do that automatically. Let me just send a string here, "Hello Express", and let's run this code with nodemon. Remember, nodemon index.js here should run our express server. So with this simple code, the server will start accepting connections and requests, and if the request matches one of the defined URLs, the listener for that URL will be invoked. So when we go to the root endpoint in a browser, using the port we passed in the listen function, you'll get the string we defined in the .send method. However, if we go to a different endpoint, say /about, for example, instead of /, the server will not have a listener for that. You're just getting a 404 here. To define a listener for /about, we just add one more get method here and define it for /about instead of /, and let's respond with a different message here. Nodemon automatically restarted Node, so let's test both endpoints: /about is going to respond with its message, and / is going to continue to respond with the original message. So as you can see, express provides a more pleasant syntax to implement more advanced features in our web servers. Without express, we would have to write custom logic to implement the same capabilities using just the native HTTP module. And of course, there are other frameworks that you can use besides express. I'll mention three of them here. There is Koa.js, which focuses on more modern features of the JavaScript language and the Node runtime. There is Sails.js, which is really inspired by Rails. It's a more featured web framework that provides working with models, auto-generated APIs, and many other cool features. And then there is Meteor, which offers a more integrated way to build web applications in general. 
And these are just some of the popular options; there are actually hundreds of other options that you can use to build web servers with Node.js, and you get to pick the best one based on what you need. Using Template Languages One of the most common needs when working with web servers is to work with a template language to deliver static HTML that's generated based on some data. Without a templating language, you would need to do a lot of string concatenation to accomplish that task. There are a few great options when it comes to template languages with Node, and most of these options work with the big web frameworks like Koa and Express. The most popular templating language in Node is probably pug, which used to be named Jade. This is a simple language implementing the offside rule, which means that the HTML nesting is based on indentation. I am not really a fan of the offside rule, but if it's something you like, check out pug. My favorite templating language in Node.js is Handlebars. This is the same language that the Ember framework uses, and I find it to be simple and feature-rich. But the simplest templating language that you can use is probably EJS, for Embedded JavaScript. You get a template language to write HTML views, and you can embed JavaScript while doing so. I prepared an example for you to see how EJS can be used with Express here under the 3-ejs folder. Look at the package.json; you'll see that we have two dependencies now, express and ejs, and to bring in these dependencies, you need to run the npm install command without any arguments, just like that. Now look at index.js. This is the same example that we've seen without ejs, and here are the differences. We instruct express to set the view engine to ejs. This is the only line really that's needed for configuring express to work with ejs: you set the view engine to ejs. And then, inside of the request listeners, instead of .send here, I am using .render('index'). 
This means: render one of the templates, which express finds by default under the views directory inside this folder. So if you look at the views directory, you'll notice that I have an index.ejs and an about.ejs, and these are two templates that are being used to render HTML to the requester when they go to / and /about. So the index.ejs has a "Hello EJS" h2 element, and the about.ejs has an "About" h2 element. And if you run the server, and after that go to the port that we just used, you'll see the index.ejs template, and if you go to /about, you'll see the about.ejs template. And this is all HTML that's coming from the templating language. And because it's all embedded JavaScript, you can embed JavaScript. Let's put a div here, and inside the div, I'm going to embed some JavaScript. You use this kind of weird syntax here (<%= … %>), and inside of that, you can put any JavaScript expression. So I'll do Math.random(), for example, and this will give me a random value when I refresh the page. There you go: an embedded JavaScript templating language. Very powerful. Now this syntax here is specific to ejs; other languages will have different syntax for embedding JavaScript. If you follow my online courses, you know that I am a big fan of the React front-end library. And the React library uses the JSX extension. So React with JSX can be used as another templating option when it comes to working with Node on the backend. And today this is actually what I use when I need templating on the server. I use React itself and JSX on the server, not just on the front-end. If you want to check out the React.js library, we have the React.js: Getting Started course on Pluralsight. Wrap Up Node was originally designed to help developers create web servers, and then it was modified to be a more generic runtime for JavaScript. When it comes to working with web servers in Node, you have options. Lots of them. 
We've seen how to work with the native built-in http module that comes with Node. We learned how to use tools like nodemon to automatically reload Node in development. We talked about the request and response objects within request listeners, and we explored the express framework and saw how it enables us to write more featured web servers using shorter and easier-to-maintain code. We've also looked at an example that uses the EJS templating language. In the next module, we'll explore three other built-in Node modules that we can use to work with the operating system. Working with the Operating System Introduction One of the common tasks in any backend program is to work with operating system resources: read information from the OS, and write information to it as well. Node has a few built-in modules that provide some core features around these tasks. There is the os module for general communication with the OS, and the fs module, which is specific to reading and writing to the OS file system. And there is also the child_process module that enables you to run any operating system command from within Node. Let's talk about all three of these handy modules. The os Module The os module is a simple one. It provides a number of operating-system-related utility methods. I wrote a few examples of these methods here under the 6-os folder, in the 1-os.js file. To use the os module, you require it first, like this. Then the os variable will have a few methods you can call. For example, you can read the operating system platform, as set during the compilation of Node, using this platform method. You can also read the architecture of this OS CPU. Is it 32-bit, 64-bit, ARM, or something else? You can read information about each logical CPU core with this cpus method. I am just counting them here. 
You can read information about the environment, like where the current user's home directory is, and you can do operating-system-agnostic operations, like, for example, creating a multiline string using the end-of-line character. Although this is really less important today, because JavaScript itself has template strings that can have multiple lines. There are a few other handy methods for this os module that are all well documented under the os module documentation page. The fs Module The fs module provides a big API for interacting with the file system. We've seen a few of its methods already. You've seen the basic readFile and writeFile methods, but there are so many other methods under this module. This is probably the biggest API among all the built-in Node.js modules. Using the fs module, not only can you read and write files as buffers, you can also work with files as streams, which is a lot more efficient when working with big files. You can also work with directories and do many operations on both directories and files. Let me show you a few examples of things you can do using the fs module. I am using the promise-based API for these examples. You can also use the original callback API for them, and most of the API methods here have synchronous versions that you can use as well. To use all of these promise-based methods, you just await on them within a function declared as async. Most of these methods are based on a filePath, and some of them take an optional configuration object. These square brackets here indicate that this options object is optional. If you do want to read and write files, I recommend looking into the createReadStream and createWriteStream methods. These are so much better than readFile and writeFile, because the regular readFile and writeFile use buffers to work and will use a lot more memory than the streaming-based ones. Using the fs module you can append data to a file; this will also create the file if it does not exist. 
You can copy files, and if you attempt to copy a file to a destination that already exists, it'll be overwritten. You can read information about files using the stat method. Using this method, you can get data about a file, like its size, for example, without needing to read the whole contents of the file, which is nice. This method will also include time-related data about the file, like when it was created and when it was last modified. You can read a user's permissions for files and directories using the access method, and you can change the permissions and even the owner of a file with the chmod and chown methods. You can link and unlink files, and you can even truncate a file's content if you need to. You can make new directories, read a directory's list of files, remove a directory, and rename directories and files as well. This is a very powerful module, even if you do not use Node as the host of your backend servers. For example, I recently used the fs module to generate test files for a project that had hundreds of UI components. It took me about 10 minutes to do that using the powerful methods you can see here for the fs module. The child_process Module The child_process module provides four main methods that allow you to execute any operating system command from within a Node process, using a sub-process, and then get the result of running that command in your main process. The sky's the limit here. Anything you can do in your operating system shell can be done from within Node. The four main methods are spawn, exec, execFile, and fork. Fork is a special one for creating sub-processes that run Node itself again. This is the concept that powers clusters in Node. The other three can be used to execute regular OS commands like ls or pwd, for example. These have a few pros and cons, but I am going to save you some research here and tell you that you should use the spawn method. Here are a few examples of things you can do with spawn. 
You destructure the spawn method out of the require call to child_process. Then you call spawn with a string representing the command, pwd here, for example. Once you get the result of that, you can use the pipe method to send its output to the process stdout, which will print the spawned process's output. In this second example, I am reading the content of a file under my home directory using the cat command. If you need to pass the command arguments, you can use the second argument of spawn, which is an array of all the arguments you wish to pass to the spawned process. Here's an example sending two arguments. This will list all the files in the current directory. And finally, if you need to use shell syntax in your spawned process, you can pass in a configuration object with shell set to true, and with that, you can get all the power of the shell. I am using the tilde here and the pipe operator, and passing arguments to commands directly. This is a powerful option, but it's also a dangerous one, especially if you do not trust the source of where this string is coming from. So only use this shell option if you need to, and if you control, or trust, the source of the commands. Debugging Node Applications Before we wrap up this course, I'd like to show you one handy trick about debugging Node applications, because I know you're going to run into problems. Node comes with a built-in debugging client, but that one is limited. The cool thing about Node's debugger is that it's beautifully integrated with Chrome dev tools! If you're familiar with these, you know how powerful they are. All that power is available to you when working with Node as well. To demonstrate this, I've prepared a file with a bug under the 6-os folder here. The file 4-bug has a simple function that is supposed to take an array and convert it into an object. So here is some test input, this is an array, and you can see the output that I am expecting here. 
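The transcript doesn't show the contents of 4-bug.js, but based on the description, the buggy function plausibly looks something like the first version below, with the fix that the debugging session arrives at shown second (the input array is my own placeholder):

```javascript
// Buggy sketch: reduce's callback receives (accumulator, current),
// but the two parameter names are swapped here, so the code ends up
// treating an array element as the accumulator.
const arrayToObjectBuggy = (arr) =>
  arr.reduce((current, acc) => {
    acc[current] = true;
    return acc;
  }, {});

// Fixed: accumulator first, current element second.
const arrayToObject = (arr) =>
  arr.reduce((acc, current) => {
    acc[current] = true;
    return acc;
  }, {});

console.log(arrayToObject(['a', 'b', 'c', 'd', 'e']));
// → { a: true, b: true, c: true, d: true, e: true }
```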
It's supposed to convert the array into an object. You can easily convert an array into anything using the powerful reduce function. However, when we run this file, node 4-bug, we don't get the expected output, but rather a weird one. So what is going on here? We can debug! Simply run this file with --inspect-brk. The brk here means break, so don't execute anything at first, but rather stop in a debugging state. When you run this, it starts the debugging utility. It doesn't run the file. Now in your Chrome, go to this special URL: chrome://inspect. You'll see your Node process listed here inside targets, and it is inspecting our file. Click on this inspect link. And just like that, you have the powerful Chrome dev tools working for your Node script. How cool is that? This is really a big deal. Go ahead and try it out and see how fast you can find bugs now. You can place and clear breakpoints, you can set watchers, you can step over and step into function calls, you can use the console to see values and structures, and many other powerful features, really. And before we debug this code, note how the wrapping function showed up here, because every Node module gets this wrapper. You don't just execute your own code. You execute your code within a function. So this is a good reminder here, right? All right, so let's debug this code. Let's place a breakpoint right here on line 3, which is inside the reduce function. My expectation here is that the line should be called five times, and each time it should have an accumulated version of the desired object. And now, click the resume button here and it should stop on the first call of this line. And it did. Now you can hover over the variables here to see their values, or you can just use console.log to see the values as well. So it looks like the current is an empty object, and the accumulator is not an empty object. So this right there is not matching my expectations. 
The accumulator should be an empty object at first, and the current should point to the current element that we're iterating over. So right away you realize that the current and the accumulator are swapped. The first argument is the accumulator, and the second argument is the current element. So to fix this bug, all we need to do is make the accumulator first, and the current second, and test to see if that works. And it did. Course Wrap Up In this last module, we talked about a few built-in Node modules. I went over a few of the capabilities of the os module, the fs module, and the child_process module, and I showed you how you can debug Node applications using Chrome dev tools, which is my favorite feature in modern Node. It's a wrap on this course! I hope you've enjoyed it and learned a thing or two. Please don't hesitate to leave me any feedback or ask any questions in the Discussions tab on the course page. If this course wasn't helpful to you in any way, it would be nice to hear why, to help us here at Pluralsight update this course for the better. If you're excited about Node and you're ready to learn more, check out the Advanced Node.js course here at Pluralsight. After understanding everything in this course, I think you're definitely ready for that advanced one. And if you like books, I wrote a book as well about some of the advanced concepts in Node. If you like smaller reads about Node, I actively blog on Medium.com, and a lot of what I write is about Node and JavaScript. Here's one other thing you can do: take the Pluralsight Skill IQ test for Node and see where your knowledge gaps are. Don't get distracted by my awesome score here; I can't brag about that because yours truly wrote the questions of this Skill IQ test. After this getting-started course, I am hoping you'll be able to score at least 150 on this test, and if you take the advanced course and finish that, you should be able to score at least 250. 
If you do take the test, tweet about your score and ping me on Twitter. Putting these courses together is long and hard work; it'll be nice to hear if that effort helped you! Good luck, and thanks for watching. Course author: Samer Buna, a polyglot coder with years of practical experience in designing, implementing, and testing software, including web and mobile applications development, API design, functional... Course info: Level: Beginner. Duration: 3h 29m. Released: 11 Sep 2018.