Advanced Node.js


  1. Node != JavaScript Is This Course for You? Welcome to the Advanced Node.js course from Pluralsight. My name is Samer Buna and this course is designed to take your Node skills to the next level. This course will have a complete focus on Node itself, not JavaScript, and not the NPM packages built with Node. Just pure native Node modules and APIs. Node's popularity is exploding. What started out as a server-side framework in 2009 is now used in IoT and mobile devices and desktop applications. Node today has millions of users and that number has been doubling every year. Behind Node, there is a great diverse community and a big growing list of contributors. The Node ecosystem is the largest and fastest growing among its peers. So why is Node so successful? In my opinion, three simple points: The Web made JavaScript really popular. Every developer learned and used JavaScript, and having an option to write JavaScript on the server is a golden opportunity, because it means with Node, you can truly have a full stack using the same language. Not only on servers and browsers, but also on tablets, phones, desktops, IoT devices, and cloud services. Node's non-blocking, event-driven architecture offered a measurable increase in performance. And working in a single-threaded environment was a much better option than manually dealing with the complexity of threads and shared state. But I am not here to tell you how awesome Node.js is. I am hoping that you're all set on that because I am hoping that you know the basics of Node.js already. This course is definitely not for the beginner, but don't be intimidated by the Advanced label either. If you know the basics of Node, you can survive this course. This course is for you if you are comfortable with JavaScript and you know the basics of Node, like how to create a simple web server, require modules, and use callbacks and events. But before you get half-way through this course only to discover that you already know everything we're talking about, let me save you some time. This course is NOT for you if: you understand how Node's event loop works with the V8 call stack, you're comfortable working with Node sockets, event emitters, and streams, and you know how to scale a Node application with clusters. I wrote an article on EdgeCoders.com with a set of specific questions you can try to answer to test your knowledge of Node.js. This course will have answers to all of these questions. Try to answer those questions before taking this course. It's hard to draw the line sometimes between what's basic and what's beyond the basics, and while I intend to not cover the basics in this course, the chance that you already know everything that's covered in this course is slim. There will be sections that you probably know already and others that you don't. Feel free to skip what you already feel comfortable about. This course will NOT teach you JavaScript. You certainly need to be comfortable with JavaScript itself before you can master Node. If you don't understand basic JavaScript concepts like closures and callbacks, you are not ready for this course. This course will mostly be about USING Node and not the underlying ways of how it works. And I tried to be as practical as possible with all the examples in this course. I try to avoid abstract examples when I can. There will be no foo or bar in this course, and I approach all module features to be covered with a practical need in mind.
For example, when we talk about the file system module functions, we'll implement actual practical use cases to work with them, and not just a boring example to demonstrate their API. Here are some examples of things that we will not be covering: The history of Node.js. JavaScript and modern JavaScript. Installing Node and the basics of Node, like how to install and use modules, or how to work with callbacks. Less important objects and functions in the Node API. This course will not be a reference style of all API methods, that's what the docs are for. We are not covering any NPM packages like Gulp, Express, or Socket.io. We will not cover how to work with external services like MongoDB or Redis. And we will not cover deploying Node applications. That's a topic for another course. I organized all the examples in this course into one GitHub repository. The examples are in folders that match the module number and the clip number in the course. Most clips have a folder which has all the examples presented in that clip. This course took a long time to develop. In fact, when I started the course Node 7 wasn't even released yet, and when I was done with the course, Node was at 7.4. So you'll see me using different versions of Node, but you should be okay using the latest. I tried to tightly edit this course so that I don't waste your time watching me slowly type something or talk at length about something that can be summed up briefly. You'll see me fast forward typing sometimes and paste code sections other times. I am hopeful that you're going to like the pace and content of this course, but if for any reason you don't, I really want to hear from you. Tell me what worked for you and what didn't. Email me directly or come chat with me at this Slack channel, which you can invite yourself to. This Slack channel is also the fastest way to ask me questions about this course. A tightly edited course might mean I could be going too fast in some places, or maybe I'll still be slow for your pace of learning, so don't forget that you can control the speed and pause the video where needed. You should pause the video often and actually try all the code examples that you're going to see in this course. If you just watch the course, chances are you'll forget the knowledge you gain here eventually, but if you challenge yourself to redo the examples and even try different examples with every concept you learn, the knowledge has a much better chance to stick in your brain.

  2. Course Overview This course has mostly independent clips. A few concepts are presented over a series of clips, but you can still skip a clip when you feel that you're comfortable with that topic and continue with the clip after that. I'll go over all the modules here and summarize them so that you have a better idea of what you want to watch and what you want to skip. We start by talking about Node's architecture, V8 and libuv. How Node uses V8 by default, but can run on different VMs like Chakra. We cover V8 feature groups and command-line options and talk about the interactions that happen between Node, V8, and libuv. We cover Node's REPL mode and some of its tips and tricks, and talk about some of the useful command-line options that we can use with the node command. We talk about the global object, Node's process object, and the Buffer module. We also talk about the require module and its five different steps. We'll explore the module object and the various ways a required module can be resolved. We cover the require function's support for loading JSON files and C++ addon files. We'll talk about how Node wraps every module with a function, and how the require function caches every module it loads. We then talk a little bit about NPM and its main commands and configurations and a few tips and tricks when working with it. In the second module of the course, we'll talk about Node's event loop and its event-based concurrency model. We define what I/O actually means and see the different options we have to handle slow operations, and how Node uses the event loop for that. We then talk about the different players that need to be understood in order for one to understand the event loop. We explore V8's call stack and see how it's a record of where in the code V8 is currently executing. We then look at how slow operations affect the single-threaded call stack and look at some of Node's APIs like setTimeout, setImmediate, and process.nextTick. In the third module of the course, we explore the event-driven nature of Node using its core EventEmitter module. We'll see the different ways we can deal with asynchronous code in JavaScript, and see how Node's standard way is to use an error-first callback. We'll also see how we can combine callbacks with promises and how we can consume these promises using the new async/await feature in JavaScript. We'll explore the EventEmitter module and see how events can be used for both synchronous and asynchronous code, how to use arguments with emitted events, how to handle the special error event, and what happens when we register multiple listeners to the same event. We'll then implement a practical example using Node's event emitter. In the fourth module of the course, we'll see Node's capabilities for working with TCP and UDP networking. We'll create a basic network server example using the net module and see how we can communicate with TCP sockets for read and write and how the sockets are event emitters. We'll gradually improve this network server to become a chat server with simple features like asking connected clients about their name first. We'll then explore the DNS module and its various methods and how to use it to translate network names into addresses. And then we'll see an example for working with UDP sockets in Node using the dgram module. In the fifth module of the course, we'll explore Node's HTTP and HTTPS support. We'll see how to create a simple HTTP server that's non-blocking and streaming-ready out of the box.
We'll see how to create an HTTPS server with a test self-signed certificate that we'll create using the OpenSSL kit. We'll then review the 5 major http module classes and identify them in the examples we use. We'll see how to use Node for requesting http and https data from the web. We'll also talk about how to support basic web routing using Node's HTTP module, how to respond with HTML files and JSON data, and how to handle redirects and not-found URLs. We'll then explore how to parse and format URLs and query strings using the native modules Node provides for them. In the sixth module of the course, we'll explore a few of the popular built-in modules in Node, starting with the os module that can be used for accessing information directly from the operating system. Then we'll explore the various synchronous and asynchronous capabilities of the fs module by implementing 3 practical tasks. We'll then explore the methods on the console object and see how we can create a custom console object, how the console object uses the util module internally, and how we can use the util module directly if we need to. We'll then debug a problem in a script using the built-in debugger client, and see how we can set breakpoints, define watchers, and inspect any point in the code using a built-in REPL. We'll also see how we can use Chrome DevTools to debug our Node scripts, which gives us a much better graphical client for debugging our code. In the seventh module of the course, we'll explore Node's streams. We'll look into how they're comparable to the Unix philosophy of doing single tasks, and then composing bigger tasks from smaller ones by chaining them together. We'll cover all four different types of streams with examples: Readable, Writable, Duplex, and Transform streams. We'll see how all streams are event emitters, and how we can consume them with either events or by using the pipe function, which can also be used to chain stream pipes. We'll talk about how there are two different tasks when working with streams in Node, implementing streams and consuming them, and how the two are different. We'll also talk about the two different modes of readable streams, paused vs flowing, and how those affect the way we can consume them. We'll see some examples to implement all types of streams. In the eighth module of the course, we'll talk about how scalability in Node is something that we start thinking about early on in the process, and talk about the reasons to scale an application, and the three different strategies that can be used to scale an application. We'll then look at the different ways we can create a child process using the child process module, starting with the spawn method. We'll see the various options and features we can use with that method, like using a shell, customizing the standard io objects, controlling environment variables, and detaching the process from its parent. We'll talk about the differences between spawn, exec, and execFile, and see an example of forking a process to do a long blocking computation without blocking the main process. We'll then scale the simple http server example using the cluster module, see an example of how to broadcast a message from the master process to all forked workers, how to restart failed workers automatically, and how to perform zero-downtime restarts as well. And finally, we'll talk about the problem of stateful communication when working with clusters and load balancers, and how to work around that. Let's dive right in.

  3. Node's Architecture: V8 and libuv The two most important players in Node's architecture are V8 and libuv. Node's default VM is, and will continue to be, V8, although V8 is one option out of many, and Node is on a path to be VM-agnostic. One other strong option for Node's VM is the Chakra engine by Microsoft. The Chakra engine is what powers the Microsoft Edge web browser, and there is active work in progress to get Node to work on ChakraCore, see this repo here. So Node uses a VM, which is V8 by default, to execute JavaScript code, which means the JavaScript features that are available in Node are really the JavaScript features supported by the V8 engine shipped with Node. This support is managed with three feature groups: shipping, staged, and in progress. Shipping features are on by default. Staged and in-progress features are not, but we can use command-line flags to enable them. Staged features are almost complete but not quite there yet. You can say the V8 team is not completely comfortable shipping them yet, but we can use the --harmony flag to enable them. For example, string padding methods will be part of the next EcmaScript version, so right now, they are available to my current Node, which is at version 7.1.0, under a harmony flag. If we try to execute one of the string padding methods, padEnd, it will not work; however, with a --harmony flag, it works. In-progress features are less stable, but we can still enable them if we want with specific flags. You can see a list of all the in-progress features available in the currently used node with this command. For example, in this version of V8 that's being used by Node 7.1, this is the list of all the in-progress features that we can test. Trailing commas in function parameters is one of them here. If we try to define a function with trailing commas, we'll get an error. But with the --harmony_trailing_commas flag here, V8 will gladly accept that. By the way, you should take a look at all the V8 options. One of my favorites is the --use_strict flag to enforce strict mode for all the executed code (instead of remembering to always use strict mode manually). You can also disable features that come enabled by default if you need to. It's a big list of options, and you probably don't need most of them, but know that you can easily grep this list to see if you can control a specific V8 behavior. For example, if we want to see what options we can control for the garbage collector, all we have to do is grep. For example, here's the popular --trace-gc option to get a log line every time the gc runs, and here's the --expose-gc option, which is useful when you're measuring memory usage, because it allows you to manually force garbage collection from within JavaScript. Although, just know that this will pause all other executions in your app, so don't use it too often. One final quick tip about V8 options. You can actually set them at run time using the v8 module. After you require it, there is a setFlagsFromString function that can be used to do so, but this is also to be used with care, and it'll probably not work for all the options. One of the more useful methods on this v8 module is the getHeapStatistics method, which can be used to get information about the heap memory, like its total size, the currently used size, old/new heap spaces, and more.
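As a small sketch of those last two points, here is what using the v8 module from inside a script can look like (the flag passed to setFlagsFromString is just an example):

```js
// a minimal sketch of inspecting and tweaking V8 from within Node
const v8 = require('v8');

// set a V8 flag at runtime (use with care; it will not work for every option)
v8.setFlagsFromString('--trace_gc');

// read heap statistics: total size, currently used size, heap space info, etc.
console.log(v8.getHeapStatistics());
```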
Node is more than a wrapper for V8, it provides APIs for working with operating system files, binary data, networking, and much more. It's useful to understand how V8 and Node interact and work together. First, Node uses V8 via V8's C++ API. Node itself has an API which we can use in JavaScript, and it allows us to interact with the filesystem, network, timers, and others. The Node API eventually executes C++ code using V8 object and function templates, but it's not part of V8 itself. Node also handles the waiting for asynchronous events for us using libuv. When Node is done waiting for I/O operations or timers, it usually has callback functions to invoke, and when it's time to invoke these callbacks, Node simply passes the control into the V8 engine. When V8 is done with the code in the callback, the control is passed back to Node. This is important to understand: since V8 is single-threaded, while the control is with V8, Node cannot execute any more JavaScript code, no matter how many callbacks have been registered. Node will wait until V8 can handle more operations. This is actually what makes programming in Node easy. We don't have to worry about locking or race conditions. There's only one thread where our JavaScript code runs. Libuv is a C library developed for Node, but it's now used by languages like Rust, Julia, and others. It's used to abstract the non-blocking I/O operations to a consistent interface across many operating systems. It's what handles operations on the file system, TCP/UDP sockets, child processes, and others. Libuv includes a thread pool to handle what can't be done asynchronously at the operating system level. Libuv is also what provides Node with the event loop, which we will cover in Module 2. Other than V8 and libuv, Node has a few more dependencies that we should talk about: http-parser is a small C library for parsing HTTP messages. It works for both requests and responses and it's designed to have a very small per-request memory footprint. We're covering HTTP requests and responses in Module 5. c-ares is what enables performing asynchronous DNS queries. OpenSSL is used mostly in the tls and crypto modules. It provides implementations for many cryptographic functions. Zlib is used for its fast async and streaming compression and decompression interfaces.
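Here is a tiny illustrative sketch of what that single thread means in practice: while synchronous JavaScript has control, no registered callback can run.

```js
// while this synchronous loop has control, the timer callback cannot run,
// even though it was scheduled to fire after 100 ms
setTimeout(() => console.log('timer fired'), 100);

const end = Date.now() + 1000;
while (Date.now() < end) {} // block the only JavaScript thread for about a second

console.log('loop done'); // this prints first; the timer callback runs after it
```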

  4. Node's CLI and REPL Node.js comes with a variety of CLI options. These options expose built-in debugging, multiple ways to execute scripts, and other helpful runtime options. Running the node command without arguments starts a REPL (Read, Eval, Print, Loop). I use Node as a REPL very often. It's really a convenient way to quickly test JavaScript, especially when you're already in a terminal. When you hit Enter in a REPL, it reads the command, executes it, and then prints its result to the screen, and then it waits for your next command. That's why you don't need to use console.log in the REPL, and sometimes you'll see it output undefined, because many commands, like defining a variable here, don't have any output. One of the most useful features of Node's REPL is auto-complete. If you just tab-tab on an empty line, you get this big list, which is equivalent to tabbing on the global object. Auto-complete works on any object. For example, say that we have an array in a variable like this one. Typing a . and then tab on that gives you everything you can do on the array object. Of course, this tab auto-complete will work when you type some characters too, and if there is only one matching method, it will auto-complete it for you; that's why it's called auto-complete. Auto-complete works in your terminal with all the built-in commands, and I even configure commands like git to have auto-complete. I now simply cannot live without auto-complete. Not only because I am a lazy typer, which I am, but because I get instant validation on the things I want to type. The auto-complete list for the global object is a special one. This is all the top-level functions and modules available in Node. In a Node REPL, a lot of the native Node modules are preloaded already and available on the global scope. This is not the case when you execute a normal script. If you want to see just the actual properties of the global object, you can just print them with a console.log line. Node's REPL remembers the lines you previously tested and you can navigate to them with the up/down arrows. However, my favorite feature to look up a previous command I typed in a terminal is the reverse search feature with Ctrl+R, but that's not available in Node's REPL. However, we can use the rlwrap utility to get that feature. You'll have to install this utility separately, but once you have it you can use Node this way. You'll have to specify the NODE_NO_READLINE environment variable to disable the built-in readline, and then use the readline feature and many others from the rlwrap utility. And now we can use reverse search with the Node REPL. Another helpful REPL feature is the underscore. You can use it to access the last evaluated value. This is handy if you want to capture this last output in a local variable like this. Node's REPL has special commands that all begin with a dot; typing a . and then tab will get you a list of those, and .help will give you a description of each. Let me mention a few handy ones: .break can be used to break out of a multiline session. Say you pasted something incomplete and you're stuck in the multiline mode, .break will simply get you out of it. .load can be used to load another script into the REPL session. You can use the .save command to save the history of your REPL session into a file. Very useful after a long session in the REPL. The .editor command gives you a multiline editor for your REPL commands, so you can use it, for example, to define a function as if you're writing it in an editor.
You can hit Ctrl+D when you're done, and everything you typed in the editor will be evaluated. This is much better than the default multiline mode in the REPL. Node has a built-in repl module, which is what it uses to give us the default REPL mode, but we can use this module to create custom REPL sessions; we just require it and invoke the start function. The start function options can be used to customize the session. We can customize things like the prompt and the readable and writable streams. For example, we can have a REPL that works with a socket instead of standard in/out. We can customize the eval function, colors, and many other things. Here is an example of a custom REPL, where I customized two options. First, don't print out the undefined values, and use strict mode. Both of these options I find helpful. We can also control the REPL's global context. So if you want a REPL where you preload your favorite libraries, just add them to this context object like this. Now lodash will be available globally in this custom REPL. Let's now take a look at some of the options we have for the node command itself. We've already seen the --v8-options argument, which lists a lot more options that we can use with the node command, but some of the other options here are important. The -c option is helpful, for example. We can use the -c in a pre-commit hook to make sure that we don't commit anything with bad syntax! We've also been using the -p option, which is another practical one. It evaluates a string and prints the result, just like in REPL mode. For example, here's a quick command to figure out what architecture node is running on, or the number of processors node sees. The -r option can be used to preload one or more modules before executing the script. This is useful to preload a certain customization without modifying the code itself. Say, for example, you want to run the code while keeping track of all registered timers. You can preload a script that modifies all timer functions to do so, and you can similarly modify loggers or any other globally accessible feature. The list of arguments you supply after the script name on the command line can be used to supply the process with direct input. All the arguments can be accessed using process.argv, which will be an array containing all of them. The first argument in the process.argv array is the node command itself, so usually this is to be excluded. Note that all the values in process.argv are strings.
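As a rough sketch of the custom REPL described above (the preloaded library is just an example and is assumed to be installed):

```js
// a minimal custom REPL: don't echo undefined results, evaluate in strict mode
const repl = require('repl');

const r = repl.start({
  prompt: '>> ',
  ignoreUndefined: true,
  replMode: repl.REPL_MODE_STRICT,
});

// preload a favorite library into the REPL's global context
r.context.lodash = require('lodash');
```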

  5. Global Object, Process, and Buffer The one and only true global object in Node is called global. When we declare top-level variables in Node, like the answer variable here, that answer variable is local to the util.js file, and we can't access it from other files, even after requiring the util.js module. However, when we define answer on the global object, the index file, which requires the util.js module, can now access that globally declared variable. It goes without saying that we should try to avoid defining things on the global object at any cost, but it helps to understand all the built-in properties defined on the global object. You can quote me on this: you're not a Node expert unless you recognize and know how to use everything you see in the global list of objects in REPL mode. In fact, if you do know everything in this list, this course is probably not for you, because this list represents most of what we'll be talking about in this course. But you should recognize some of the items here, like Object/String/Array/Function, to name a few. All the familiar top-level JavaScript functions that you know and love are listed here. But there is a lot more to discover. Note that most of these modules have to be required in a normal script before we can use them, and some are actually only valid in the REPL mode, like the underscore for example. Two of the most important things that are available on the global object in a normal process are the process object and the Buffer object. Let's talk about those first. The Node process object provides a bridge between a Node application and its running environment. It has many useful properties. Let's explore some of them: We can use process.versions to read the versions of the current node and its dependencies. We can use these version numbers to determine if we should run some custom code, for example, maybe for an older version of V8. One of the most useful properties on the process object is the env property. The env property exposes a copy of the user environment (which is the list of strings you get with the env command on Linux machines and the set command on Windows). The word copy here is key. If we modify process.env, which we can actually do, we won't be modifying the actual user environment, so keep that in mind. You should actually not read from process.env directly. We usually use the environment for configuration values like passwords or API keys, which ports to listen on, and which database URIs to connect to. You should put all of these behind a configuration or settings module, and always read from that module, not from process.env directly. This way, if you decide to change the way you read these configurations, you modify one module. Here's a new and interesting property, process.release.lts. This will have the LTS label of the used Node release, and it will be undefined if the currently used Node release is not an LTS one, so we can check this label and maybe show a warning if an application is being started in production on a non-LTS Node. The most useful thing we can do with the process object, though, is to communicate with the environment, and for that we use the standard streams: stdin for read, stdout for write, and stderr to write any errors. Those are pre-established ready streams, and we can't actually close them. We're going to talk about streams in detail in Module 7 of this course.
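For instance, here is a tiny sketch that touches all three standard streams:

```js
// echo.js: a tiny sketch using the three standard streams
process.stdout.write('Type something and press Enter:\n');

process.stdin.on('data', (chunk) => {
  process.stdout.write(`You typed: ${chunk}`);
});

process.stderr.write('(any errors would go to stderr)\n');
```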
The process object is an instance of EventEmitter. This means we can emit events from process and we can listen for certain events on the process. We're covering event emitters in Module 3 of this course, but let me show you one quick handy event that you should definitely be aware of: the exit event is emitted when Node's event loop has nothing else to do, or when a manual call to process.exit has been executed. We can't stop the node process from exiting here, but maybe we can log this somewhere, or send an alert that this particular process is gone and maybe someone should do something about it. However, we can only do synchronous operations inside this event handler; we can't use the event loop here. One other usually misused event on the process is the uncaughtException event, which is emitted whenever a JavaScript exception is not handled and it bubbles all the way to the event loop. In that case, and by default, Node will print the stack trace and exit. Unlike the exit event though, if we register a handler on uncaughtException, Node will not exit, and this is bad and unpredictable in some cases, so you should avoid doing this and just let the process exit. You can use another monitor process to restart the exiting process if needed. Here's an example to show the difference between exit and uncaughtException. Let's register handlers for both events. We can use process.stdin.resume to keep the event loop busy, because otherwise Node will exit anyway. Then let's trigger a TypeError exception by calling an undefined method on console. If we only had the exit event handler, the script would report the exit code and actually exit. But since we registered a handler for the uncaughtException event and did not manually exit the process, Node will keep running, often in an unpredictable state. The safest option here is to let the process exit anyway. When you get an uncaught exception, your code has a problem that needs to be fixed, so not letting the process exit might actually do a lot more damage. When the process exits, the exit event handler will be invoked. The Buffer class, also available on the global object, is used heavily in Node to work with binary streams of data. A buffer is essentially a chunk of memory allocated outside of the V8 heap, and we can put some data in that memory, and that data can be interpreted in one of many ways, depending on the length of a character, for example. That's why when there is a buffer, there is a character encoding, because whatever we place in a buffer does not have any character encoding, so to read it, we need to specify an encoding. When we read content from files or sockets, if we don't specify an encoding, we get back a buffer object. So a buffer is a lower-level data structure to represent a sequence of binary data, and unlike arrays, once a buffer is allocated, it cannot be resized. We can create a buffer in one of three major ways: alloc creates a filled buffer of a certain size, while allocUnsafe will not fill the created buffer, so it might contain old or sensitive data and needs to be filled right away. To fill a buffer we can use buffer.fill(). Let's have some fun with allocUnsafe. This command allocates an 800-byte buffer without initializing it, so if we try to read its content, we'll actually see some data. In fact, we might see things that we can recognize. If we keep trying, we might actually access previous things that we used the memory for. So while allocUnsafe has clear performance advantages, be careful with it.
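As a quick sketch of those allocation methods (the sizes here are just examples):

```js
// a minimal sketch of allocating buffers
const zeroed = Buffer.alloc(8);       // 8 bytes, filled with zeros
const unsafe = Buffer.allocUnsafe(8); // 8 bytes, NOT initialized; may hold old data

console.log(zeroed); // <Buffer 00 00 00 00 00 00 00 00>
console.log(unsafe); // whatever happened to be in that memory

unsafe.fill(0); // fill it before use
```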
We can also create a buffer using the from method, which accepts a few different types in its argument. Here's an example to understand the difference between buffered data and normal data. Using the same UTF-8 string, when we put it in a buffer, the buffer does not have a character encoding, so it represents the special character with its internal UTF-8 byte representation, and its length is the actual number of bytes used, while the string's length counts characters based on the default UTF-8 encoding. Buffers are useful when we need to read things like an image file from a TCP stream or a compressed file, or any other form of binary data access. Just like arrays and strings, on buffers we can use operations like includes, indexOf, and slice, but there are some differences with these methods when we use them on buffers. For example, when we do a slice operation on buffers, the sliced buffer shares the same memory with the original buffer. In this example, we have a conversion map, and we want to process a file and convert the last three bytes of the file according to the map, working with binary data directly. We read the file into a buffer, slice the buffer to get another binary buffer that holds only the last three bytes, and loop over this small buffer to do the conversion. Once done, not only has the sliced buffer changed, but the original buffer has changed too, because they share the same memory space. One final note on buffers: when converting streams of binary data, we should use the string_decoder module, because it handles multi-byte characters much better, especially incomplete multi-byte characters. The string decoder preserves the incomplete encoded characters internally until the character is complete and then returns the result. The default toString operation on a buffer does not do that. Let me explain this with an example. Say we have binary input coming from somewhere and we want to encode it into UTF-8 characters. We're receiving this input incrementally. For a simulation, I'll type bytes into the standard in stream, and this code will read what it gets, put it in a buffer, and then try to print it with both the toString method and the string decoder's write method. Very simple. Now I'm going to feed the script 3 UTF-8 encoded bytes, which represent the euro symbol. But you'll notice after every input, the toString method is clueless, while the string decoder is smartly trying to make sense of the input. When it discovers that what we entered so far is actually a euro symbol, it outputs that. So in general, if you're receiving UTF-8 bytes as chunks in a stream, you should always use StringDecoder.
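A condensed sketch of that behavior, feeding the euro sign's three UTF-8 bytes one at a time:

```js
// the euro sign is the 3-byte UTF-8 sequence 0xE2 0x82 0xAC
const { StringDecoder } = require('string_decoder');
const decoder = new StringDecoder('utf8');

[0xE2, 0x82, 0xAC].forEach((byte) => {
  const buf = Buffer.from([byte]);
  console.log('toString:', JSON.stringify(buf.toString('utf8')));
  console.log('decoder :', JSON.stringify(decoder.write(buf)));
});
// toString prints a replacement character for each incomplete byte,
// while the decoder buffers the bytes and only emits the euro sign on the last one
```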

  6. How require() Actually Works Modularity is a first-class concept in Node, and fully understanding how it works is a must. There are two core pieces involved: the require function, which is available on the global object (although each module gets its own require function), and the Module module, also available on the global object, which is used to manage all the modules we require with the require function. Requiring a module in Node is a very simple concept; to execute a require call, Node goes through the following sequence of steps: Resolving, to find the absolute file path of a module. Loading, which is determined by the content of the file at the resolved path. Wrapping, which is what gives every module its private scope, and what makes require local to every module. Evaluating, which is what the VM eventually does with the code. And then Caching, so that when we require this module again, we don't go over all the steps again. We'll talk about every step in more detail in this clip. Have you met the module object? It has some interesting properties! First, an id to identify it. The full path to the module file is usually used here, except for the root module, where a . is used instead. The full path to the file can be accessed with the filename property. Node modules have a one-to-one relation with files on the file system. We require a module by loading the content of a file into memory. However, before we can load the content of a file into memory, we need to find the location of the file. For example, if we require a "find-me" module from the index module, Node will look for find-me.js in these paths, which start from the current directory and go up all the way to the root directory. If it can't find find-me.js in any of these paths, it will throw a cannot find module error. To be exact, Node will actually look for find-me.js in more folders, but those are supported for mostly historic reasons and using them is no longer recommended. Also, core Node modules are an exception here. The resolve step returns immediately for core modules. Let's see how Node actually resolves a non-core module. We'll use the first path here, a node_modules directory under the current directory. We'll create one manually and simply add a find-me.js in there. Let's also add a console.log line here in this new file to identify it, and we'll do the same for index.js. When we now execute the index file, Node will find a path to find-me.js and it will load it. If we want to only resolve the module and not execute it, we can use the require.resolve method. This behaves exactly the same as require, but does not load the file. It will still throw an error if the file does not exist. This can be used, for example, to check whether an optional package is installed or not. If another find-me.js now existed in any of the other paths, for example, if we have a node_modules directory under the home directory, and we have a different find-me.js file in there, I'm going to put a different console.log line in there, and I'm going to change this back to require. Now if we execute the index file, the find-me.js file in this node_modules directory under the home directory will not be loaded, because Node already resolved find-me.js to the local file found under the local node_modules directory. But if we remove the local file and execute again, Node will actually just pick the next closest find-me.js, and that would be the one under the home directory's node_modules directory.
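Here is a small sketch of that require.resolve trick for checking whether an optional package is installed (the package names are just examples):

```js
// check for an optional dependency without loading it
function isInstalled(packageName) {
  try {
    require.resolve(packageName); // resolves the path, throws if it can't be found
    return true;
  } catch (err) {
    return false;
  }
}

console.log(isInstalled('lodash'));  // true only if lodash can be resolved
console.log(isInstalled('find-me')); // works for our local example module too
```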
Modules don't have to be files. We can also create a find-me folder under node_modules and place an index.js file in there. We'll add a console.log line here also to identify it. And now when we execute node index.js, it will actually load the index file under node_modules/find-me. index.js is the default file name, but we can control what file name to start with under the folder using the main property in package.json. For example, to make the require('find-me') line resolve to this start.js file instead of index.js, we just need to add this package.json file, which says that when the find-me folder gets required, start.js is the file that should be loaded. And let's test that. And now you can see how the require('find-me') line actually loaded the start.js file and not the default index.js file. Other than resolving modules from within the node_modules folder, we can also place the module anywhere we want and require it with either relative paths, ./ and ../, or with absolute paths, starting with /. If, for example, find-me was under a lib folder instead of a node_modules folder, we can still require it this way. Let me actually create a lib folder and put a find-me.js file in there, identify it with a console.log statement, and make sure things work. To see the paths in action here, let's console.log the module object. We'll also console.log the module object from within the index file to see the difference, because those two module objects, which appear to be global, are actually different. Let's see that. First, the id for the find-me module is the full path. You'll also see that our main index.js module is now listed as the parent of our find-me module. However, the find-me module was not listed as a child of the index module; instead, we have this weird Circular thing going on, because this is a circular reference. If Node printed the find-me module object here, it would go into an infinite loop. That's why it simply replaces the find-me content here with Circular. More importantly now, what happens if the find-me module required the main index module? This is where we get into what's known as a circular module dependency, which is actually allowed in Node. To understand it better, let's first understand a few other concepts on the module object. First, let's talk about exports. In any module, exports is a special object. Anything we put on the exports object we can see here, and it's actually what we get in the findMeExports constant here. I'll console.log this constant and put an id attribute on the exports object in the find-me module to identify it. When we test that, the findMeExports constant has that id attribute. Let's also talk about the loaded attribute here, which Node keeps false as long as there is still more content to be loaded. In our example, when we printed this line, Node was not done loading the modules and it printed loaded: false for both. If we simply put all this code in an immediate timer to be executed in the next round of Node's event loop, then both modules will be labeled loaded: true here, because at that point Node was done loading them. Notice how the findMe constant did not get the exported values, because this require line was done executing before the timer kicked in. We simply cannot use the exports object inside timers. Let's now try to answer the original question: what happens when module 1 requires module 2, and module 2 requires module 1? Let's find out. Let's create an m1 module under the lib folder. We'll require it in index.js and console.log it.
Under the lib folder, in m1.js, let's give it an id to identify it. Let's add some more content in here. We'll start an array with one value and then push two more values after that. And let's duplicate m1 as m2, and use different values in there. Now we'll make m1 require m2 somewhere before it's done loading. Now to the interesting part: if anywhere within m2 we require m1 and print it, we have a circular reference. Node will partially print m1 here. At this point of the lifecycle of m2, the m1 module is not ready yet, it still has 2 more lines to be processed, but Node was able to share a partial exports object from m1 to m2. In terms of the loaded attribute, on this line in m2, the m1 module has a loaded: false attribute, while on this line of the index file, the m1 module has a loaded: true attribute.
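To make the circular case concrete, here is a small sketch of the shape of that example (the three file contents are shown together for brevity, and the ids and values are illustrative, not the exact files from the course repo):

```js
// lib/m1.js
exports.id = 'm1';
const m2 = require('./m2');         // m2 starts loading before m1 is done
console.log('m2 in m1:', m2);       // m2 is fully loaded here: { id: 'm2', done: true }
exports.done = true;

// lib/m2.js
exports.id = 'm2';
const m1 = require('./m1');         // gets m1's *partial* exports: { id: 'm1' }
console.log('m1 in m2:', m1);       // done is not set yet at this point
exports.done = true;

// index.js
const m1 = require('./lib/m1');
console.log('m1 in index:', m1);    // both id and done are set by now
```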

  7. JSON and C++ Addons We can define and require JSON files and C++ addon files with the Node require function. The first thing Node will try to resolve is a .js file. If it can't find a .js file, it will try a .json file, and it will parse the file, if found, as a JSON text file. After that, it will try to find a binary .node file. Let's see examples for those extra extensions. If we have a data.json file under node_modules, and we have some valid JSON data in there, then we can require data in here, and Node will parse the content of data.json into our data constant. This is useful when, for example, all you need to export is some static configuration data. You can simply put that in a config.json file and just require that. I like to always use an extension when I'm not requiring a .js file, just to make that signal clear in the code, but you should know that it will also work without an extension. If Node can't find a .js or a .json file, it will look for a .node file and it will interpret the file as a compiled addon module. Let's do a quick addon example. The Node documentation site has a sample addon file, which is written in C++. It's a simple module that exposes a hello function, and the hello function outputs "world." If we have a require addon line like this, and we compile this hello.cc file into addon.node, we can use the addon.node file directly here and use its hello method. To get this to work, we need to compile hello.cc first, which is simple. Create an addon-src directory, and place hello.cc in there, and then on that same level, copy this build configuration into a binding.gyp file. This simply tells the compiler which file to compile and what target name to use for the compiled module. Then, to compile, we need the node-gyp package, so install that, and once we have it, from within the addon-src folder, we run node-gyp configure, which will create the make files under a build directory. We then run node-gyp build, and that should create the binary compiled addon.node file. This is the file that we can require. I'll copy it under the local node_modules folder, and now our require script here should work. This is the output we've seen in the C++ code. We can actually see the support for these three extensions by looking at require.extensions. These are the actual functions that get executed. For example, for a .js file, Node just compiles the content, while for a .json file it uses JSON.parse on the content. For a .node file, it uses process.dlopen.
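As a quick sketch of the JSON case (the file name and content here are just an example, mirroring the data.json file described above):

```js
// node_modules/data.json (example content):
//   { "name": "my-app", "port": 8080 }

const data = require('data.json'); // parsed with JSON.parse under the hood
console.log(data.port); // 8080

// the three supported extensions and the loader functions behind them
console.log(Object.keys(require.extensions)); // [ '.js', '.json', '.node' ]
```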

  8. Wrapping and Caching Modules Node's wrapping of modules is often misunderstood. To explain it, let me start by asking an important question. We can use the exports object to export properties, but we cannot replace the exports object directly. When we need to replace the exports object, we need to use the module.exports syntax. The question is... why? Also, we've seen how the variables we define in the module scope here will not be available outside the module. Only the things that we export are available. So how come the variables are magically scoped and the exports object can't be replaced directly? The answer is simple. Before compiling a module, Node wraps the module code in a function, which we can inspect using the wrapper property of the module module. This function has 5 arguments: exports, require, module, __filename, and __dirname. This function wrapping process is what keeps the top-level variables in any module scoped to that module, and it is what makes the module/exports/require variables appear to be global when, in fact, they are specific to each module. Same thing for the __filename/__dirname variables, which will contain the module's absolute file name and directory path. All of these variables are simply function arguments whose values are provided to the wrapped function by Node. You can see this wrapping in action if you run a script with a problem on its first line. This might actually depend on your environment, but on mine you can clearly see the anonymous wrapping function and its arguments here. Since we are in a function here, we can actually access the function's arguments, and you'll see how the first argument is the empty exports object, then we get the require object, then the module object, then the filename and dirname values. Note how both the require object and the module object are the copies associated with this index file. They are not global variables. The wrapping function's return value is the exports object reference, module.exports. Note how we have both exports and the module object itself passed in the wrapper function. exports is simply a variable reference to module.exports, so what happens here is equivalent to doing this line at the top of the module. So we can change the properties of the exports object, but if we reassign the whole exports object, it would no longer be a reference to module.exports. This is the way JavaScript reference objects work everywhere, not just in this context. There is nothing special about require. It's a function that takes a module name or path and returns the exports object. We can simply override the require function to do our own logic if we want to. Say, for example, for testing purposes, we want every require line to be mocked by default and just return an empty object instead of the required module's exports object. This simple reassignment of require will do the trick, and if we test that by requiring any module and console.logging that module, we'll get the mocked object by default. To explore this require object a bit more, let's say we have this simple print function. It takes a numeric argument, stars, and a string argument, header, and it prints the header between the number of stars that we specify. And we want to use this file in two ways. We want to use it on the command line, just like this, by passing the stars and the header arguments as command-line arguments, but we also want to use it with require. We just require the file and then use whatever the file exports as a function and call that function.
So those are two different usages here, and we need a way to determine if the file is being run as a script or if the file is being required. This is where we can use this simple if statement: if require.main is equal to the module object, we're running the file as a script. When we run it that way, this if statement will be true, so we can simply call the function here with the process.argv elements. Otherwise, if require.main does not equal module, it means that this file is being required by other files, and in this case, we can just change the exports object to be the print function. And now we can test this script. It should work both from the command line directly, and when we run the index file, which requires the print-stars file. Caching is important to understand. Here is a simple example to demonstrate it. Say that we have this ascii-art file that prints a cool-looking header, and we want to display this header every time we require the file. So if we require this line twice, we want the header to show up twice. But because of the module caching, that's not going to happen. Node caches the first call and does not load the file on the second call. We can see the cache using require.cache, and in there you will find an entry for the ascii-art file. Those entries are indexed by the full file path. We can actually remove the cache entry here if we want to, using this full file path, and Node will re-load the module fine after that. But this is not the most efficient solution for this problem. The simple solution for this case is to wrap the log line here in a function and export that instead. Then when we require the file, we just execute the exports object as a function, and every time we do so, the function will execute the console.log statement.
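Putting the main ideas of this clip together, here is a rough sketch of that dual-use pattern (the file and variable names are illustrative, not the exact course files):

```js
// print-stars.js: works as a script and as a required module
const print = (stars, header) => {
  console.log('*'.repeat(Number(stars)));
  console.log(header);
  console.log('*'.repeat(Number(stars)));
};

if (require.main === module) {
  // run directly: node print-stars.js 5 Hello
  print(process.argv[2], process.argv[3]);
} else {
  // being required: export the function instead of printing at load time,
  // so the caller decides when (and how often) to print, despite caching
  module.exports = print;
}
```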

  9. Know Your NPM NPM has a lot to offer. I want to make sure that you know all the great things you can do with NPM. NPM is not really part of Node. It just comes packaged with Node since it's the default and most popular package manager. It's also certainly not the only package manager. Recently, Facebook released another package manager, yarn, which claims to be a lot faster than npm, and if you compare them in the same project, that claim is probably true. But when we talk about NPM, there are two different things here: the npm CLI, and the npm registry at npmjs.com. They work together out of the box, but the npm CLI can actually work with different registries. It can be used with local files, folders, and private registries, and it can even be used with git repositories directly. For example, to install the express package directly from GitHub, we can do npm i expressjs/express, where expressjs is the name of the GitHub organization hosting the express project, and express is the name of the repo, and this will install express from the last commit on the master branch of the GitHub repo. We can see that with npm ls express; when you have a package installed from GitHub, the npm ls command will show the head commit for that install. We can also install the GitHub repo package from a specific commit, tag, or branch. For example, to install express version 4.14.0 directly from GitHub, we can do something like #4.14.0. This works with branches and direct commit hashes as well. So you have a lot of flexibility in installing the package at specific points in its history, which is helpful in many ways. If you want to check what packages a command will install without actually installing them, you can use the --dry-run argument, which will only report what will be installed. To see a list of all globally installed packages, we can use the command npm ls -g, but this by default will list all top-level packages and their dependencies, which is usually a big tree. To see only the top-level packages, we can control the depth of the tree with --depth=0. This shows only the first level of the tree. By the way, to see more information about installed packages, you can use the ll command instead of ls. This shows the description and more information about every package. If you're reading the output of npm ls from a program and you want to parse it, you can use the --json argument to get the output in JSON. When we installed express locally in this directory, a node_modules folder was created. I'm going to remove this node_modules folder, and I'll create a node_modules folder in the home directory. I'll then install express locally here. You'll notice that npm used the home node_modules folder and did not create a local one. Maybe you do need a single place to install all local dependencies, but this is usually a bad practice. The recommended way is to have a package.json file on this level, which instructs npm to manage the dependencies in a local node_modules folder. The bare minimum package.json file should have a name, which is one word without spaces, all lowercase, and a version, which should follow the x.x.x semantic versioning style: the first number is the major version, the second is the minor, and the third is the patch. Once we have a package.json, npm i express will create a local node_modules folder, even if a parent folder also has one. Using npm i directly like this is not recommended, though. If you're installing a dependency, you should document it in package.json.
You can document it under three main categories. If you use --save, or the short -S, the dependency will be considered a production dependency. Let's install request as a production dependency. If you use --save-dev, or -D for short, the dependency will be considered a development dependency. Let's install babel-cli as a development dependency. If you use --save-optional, or -O for short, it will be considered an optional dependency. You can use this for a recommended tool that's not required. For example, let's install nodemon as an optional dependency. For optional dependencies, the code should usually check for their existence and only use them if they're installed. After these commands, take a look at package.json and see how these dependencies are documented under the three different categories. You can use the npm update command to update either a single package, or all installed packages (when the package name is not provided). When there is a version range specified in package.json, the update command will work according to that. A version range is composed of an operator and a version, with operators like <, <=, or just =. The = operator is actually the default operator if no operator is specified. An x or a * can be used in one of the three semver numbers to cover the whole range for that level. This has the same effect as not specifying that level at all. Other operators include the tilde, which works just like an x in the last level, but only when that last number is greater than or equal to what's specified. So if we have 1.2.3 in here, this version range is equivalent to 1.2.x, but only for x greater than or equal to 3. And this version range is equivalent to 1.2.x for all x greater than or equal to 0. There is also the caret operator, which works well for packages with versions less than 1.0, because it only allows changes that do not modify the left-most non-zero digit in the semver array. To update the npm CLI itself using npm, you can do npm install npm -g, and this will update npm itself. To check whether any of the installed packages are outdated, you can use the npm outdated command. This works globally, too. Npm has a lot of configurations that we can control. We can see a list of all the configurations we can control with npm config list -l. For example, if you use npm init a lot, you might want to change the default values that it uses by changing the init configurations. For example, to set the default value for the package author's name, you can use a command like this one. Now we have a default value for the author name. And to delete that, you can use npm config delete. Another very useful configuration to set is the save=true configuration. This will make install always use the --save flag, which I think is great. We can search the npm registry using the search command right from the command line. Let me look for packages that have the word lint in them. And there are a lot of them. So you can grep this list and see which one you are looking for. The npm shrinkwrap command can be used to lock down all the dependency versions so that every time anyone runs the npm install command, they get the exact same versions for all the packages. And before we conclude, here are some of my favorite npm tricks. The npm home command can be used to open the home page of a package. How cool is that? There is also npm repo, which will open the repository page of a package.
If you have packages installed in node_modules that are not saved to package.json (for example, I think express was installed without saving it to package.json), we can clean that up with the npm prune command. So I think this removed express and its dependencies. And finally, npm has some easter eggs. My favorite is the Christmas easter egg.

  10. Summary In this module, we've first seen an overview of the full course, then started talking about Node's architecture and how Node interacts with V8 and libuv. We've seen the different ways to use the node command and its REPL mode, covering some tips and tricks to use and customize the REPL mode, and a few of the useful command line options we can use with the node command. Then we talked about the global object and what's available on it by default, and covered the process object and how it's a bridge between a Node process and its running environment. We also talked about the Buffer module and its role in Node applications, and also saw an example of using the string_decoder module. We then learned about the require module and its five different steps. We first explored the module object, which is used to manage required modules, and then learned about the resolving step and the various ways a required module can be resolved. We also learned about what happens when two modules require each other. We then learned how the require function supports loading JSON files and C++ addon files, and built a simple C++ addon module with the use of the node-gyp package. We also talked about the wrapping and caching of required modules and explained how Node wraps every module with a function that it uses to control some global arguments. We also talked about how the require function caches every module it loads and how to work around that if we need the code to be re-executed every time we require it. And finally, we talked about npm and its main commands and configurations and a few tips and tricks when working with it. In the next module, we'll talk about Node's non-blocking concurrency model and take a look at how the event loop works.

  11. Concurrency Model and Event Loop Introduction One of the most important concepts to understand about Node.js is its concurrency model for handling multiple connections and the use of callbacks. You might know this as the non-blocking nature of Node.js. In Node, this model is based on an event model, just like Ruby's Event Machine, or Python's Twisted. In Node, this event model is organized through what's known as the event loop. Slow I/O operations are handled with events and callbacks so that they don't block the main single-threaded execution runtime. Everything in Node depends on this concept so it's extremely important that you fully understand it.

  12. What Is I/O Anyway Okay, so we know that I/O is short for input/output, but what exactly does that mean? I/O is used to label a communication between a process in a computer CPU and anything external to that CPU, including memory, disk, network, and even another process. The process communicates with these external things with signals or messages. Those signals are input when they are received by the process, and output when they are sent out by the process. The term I/O is really overused, because naturally, almost every operation that happens inside and outside computers is an I/O operation, but when talking about Node's architecture, the term I/O is usually used to reference accessing disk and network resources, which is the most time-expensive part of all operations. Node's event loop is designed around the major fact that the largest waste in computer programming comes from waiting on such I/O operations to complete. We can handle requests for these slow operations in one of many ways. We can just execute things synchronously. This is the easiest way to go about it, but it's horrible because one request is going to hold up other requests. We can fork a new process from the OS to handle each request, but that probably won't scale very well with a lot of requests. The most popular method for handling these requests is threads: we can start a new thread to handle each request. But threaded programming can get very complicated when threads start accessing shared resources. A lot of popular libraries and frameworks use threads. For example, Apache is multithreaded and it usually creates a thread per request. On the other hand, its major alternative, Nginx, is single threaded, just like Node, which eliminates the overhead created by multiple threads and simplifies coding for shared resources. Single threaded frameworks like Node use an event loop to handle requests for slow I/O operations without blocking the main execution runtime. This is the most important concept to understand about Node, so how exactly does this event loop work? Let's find out.

  13. The Event Loop The simplest one-line definition of the event loop is this: it's the entity that handles external events and converts them into callback invocations. Helpful? Maybe not. Let's try another definition: it's the loop that picks events from the event queue and pushes their callbacks to the call stack. This definition is probably worse than the first one. Truth is, it's not easy to understand the event loop without first understanding the data structures it has to deal with. It's also a lot easier to understand the event loop with visuals rather than text, so the next few clips will explain the various things around the event loop with visuals. What I want you to understand first is that there is this thing called the event loop that Node automatically starts when it executes a script, so there is no need for us to manually start it. This event loop is what makes the asynchronous callback programming style possible. Node will exit the event loop when there are no more callbacks to perform. The event loop is also present in browsers and it's very similar to the one in Node. To understand the event loop, we need to understand all the players in this diagram and we need to understand how they interact. V8 has this thing called the call stack, which we're going to cover in detail in the next clip. It also has a heap. The heap is simple: it is where objects are stored in memory. It's basically the memory that gets allocated by the VM for various tasks. For example, when we invoke a function, an area in this heap is allocated to act as the local scope of that function. Both the stack and the heap are part of the run-time engine, not Node itself. Node adds APIs like timers, emitters, and wrappers around OS operations. It also provides the event queue and the event loop using the libuv library. The event loop, as the name clearly describes, is a simple loop that works between the event queue and the call stack. But again, we can't understand it without first understanding those other entities.

  14. The Call Stack The V8 call stack is simply a list of functions. A stack is a simple first-in-last-out data structure: the top element that we can pop out of the stack is the last element that we pushed into it. In the case of the V8 call stack, these elements are functions. Since JavaScript is single threaded, there is only one stack, and it can do one thing at a time. If the stack is executing something, nothing else will happen in that single thread. When we call multiple functions that call each other, we naturally form a stack, then we back-track the function invocations all the way back to the first caller. Remember how if you want to implement something that's recursive without recursion you need to use a stack? Well, that's because a normal recursive function will use a stack anyway. Let's walk through a simple example to demonstrate what happens in the stack when we call functions. Here we have three simple functions: add, double (which calls add), and printDouble, which calls double. Let's assume that all these functions are wrapped in an immediately invoked function expression. When we run this code, V8 uses the stack to record where in the program it is currently executing. Every time we step into a function, it gets pushed to the stack, and every time we return from a function, it gets popped out of the stack. It's really that simple. So we start with the IIFE call, which is an anonymous function. Push that to the stack. The IIFE function defines other functions, but only executes printDouble. That gets pushed to the stack. printDouble calls double, so we push double to the stack, double calls add, we push add to the stack, and so far, we're still executing all functions. We have not returned from any of them. When we return from add, we pop add out of the stack. We're done with it. Then we return from double, so double gets popped out of the stack, too. Now, the execution continues in printDouble. We get into a new function call, console.log, which gets pushed onto the stack and popped immediately, because it did not call any other functions. Then we implicitly return from printDouble, so we pop printDouble out of the stack and finally pop the anonymous IIFE itself out of the stack. Note how every time a function is added to the stack, its arguments and local variables are added too at that same level. You'll sometimes hear the term stack frame to reference the function and its arguments and local variables. I am pretty sure you have seen the call stack before, if not in Node then in the browser. Every time you get an error, the console will show the call stack. For example, I changed add to use an undefined variable here, which will raise an error. When we execute this code in Node or the browser, we'll get an error report which includes the call stack: the anonymous function calls printDouble, which calls double, which calls add. This is the state of the call stack when the error happened. And what do you think will happen if a function calls itself recursively without an exit condition? It's the equivalent of an infinite loop, but on the stack. We'll keep pushing the same function to the stack until we reach the V8 size limit for the stack, and V8 will error out with this error: Maximum call stack size exceeded.
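Here's a minimal sketch of the three functions walked through above; the exact code in the course repo may differ slightly:

```js
// add, double, and printDouble, all wrapped in an IIFE.
(function () {
  function add(a, b) {
    return a + b;              // add is pushed, returns, and gets popped
  }

  function double(a) {
    return add(a, a);          // double pushes add onto the stack
  }

  function printDouble(a) {
    const result = double(a);  // printDouble pushes double
    console.log(result);       // console.log is pushed and popped right away
  }

  printDouble(9);              // prints 18
})();
```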

  15. Handling Slow Operations As long as the operations we execute in the call stack are fast, there is no problem with having a single thread, but when we start dealing with slow operations, the fact that we have a single thread becomes a problem, because these slow operations will block the execution. Let me simulate that for you with an example. You do know that we can still write blocking code in Node, right? For example, a long for loop is a blocking operation. In here, the slowAdd function will take a few seconds to complete, depending on the hardware, of course. So what happens in the call stack when we step into a blocking function like slowAdd? Well, the first time we step into it, it gets pushed to the stack, and then we wait until V8 finishes that useless blocking loop and returns from slowAdd(3, 3), which gets popped out of the stack at that point. Then we step into slowAdd(4, 4), and we wait, done, return, pop. Same thing for slowAdd(5, 5): wait, done, return, pop. Then we get into the console.log lines, which are fast and thus non-blocking, so push, pop, push, pop, push, and pop. Let me actually show you how Node behaves when we execute this exact code: slowAdd, wait, slowAdd, wait, slowAdd, wait, print, print, print. While Node is waiting after every slowAdd here, it cannot do anything else. This is blocking programming, and Node's event loop exists to allow us to avoid doing this style of programming.
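Here's a sketch of a blocking slowAdd, assuming a long busy loop to simulate the slow work (the loop bound is arbitrary; adjust it for your hardware):

```js
function slowAdd(a, b) {
  // Burn CPU directly on the call stack; nothing else can run meanwhile.
  for (let i = 0; i < 1e9; i++) {}
  return a + b;
}

const a = slowAdd(3, 3); // the process is blocked here for a while
const b = slowAdd(4, 4); // ...and again here
const c = slowAdd(5, 5); // ...and again here

console.log(a); // 6
console.log(b); // 8
console.log(c); // 10
```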

  16. How Callbacks Actually Work We all know that Node's API is designed around callbacks. We pass functions to other functions as arguments, and those argument functions get executed at a later time, somehow. For example, if we change our slowAdd function to output the result in a setTimeout call after five seconds, the first argument to setTimeout is the function that is famously known as a callback. So let's see what happens on the stack for this case. We call slowAdd 3 and 3, push that to the stack; slowAdd calls setTimeout, so we push that to the stack; since setTimeout does not call any functions but rather has a function argument, it gets popped out of the stack immediately, and at that point, slowAdd 3 and 3 is done, so we pop that out of the stack too. Then we continue: slowAdd 4 and 4, push it to the stack, it calls setTimeout, push that to the stack and immediately pop it, then pop slowAdd 4 and 4. Then, somehow, console.log(6) gets added to the stack to be executed, and after that, console.log(8) gets added to the stack to be executed. To understand how the last two calls to console.log appeared in the call stack, let's take a look at the bigger picture here. First, it's important to understand that an API call like setTimeout is not part of V8. It's provided by Node itself, just like it's provided by browsers, too. It's wired in a way to work with the event loop asynchronously. That's why it behaves a bit weirdly on the normal call stack. Let's talk about the event queue, which is sometimes called the message queue or the callback queue. It's simply a list of things to be processed; let's call these things events. When we store an event on the queue, we sometimes store a plain-old function with it. This function is what we know as a callback. A queue data structure is a first-in-first-out structure, so the first event we queue will be the first event to get de-queued. To de-queue and process an event from the front of the queue, we just invoke the function associated with it. Invoking a function will push it to the stack. So we start with the anonymous function; it pushes slowAdd 3, 3 to the call stack, which in turn pushes setTimeout with cb1 and a delay of 5 seconds. The setTimeout callbacks here in this example are actually anonymous functions, but to simplify the visualization, I labeled them cb1 and cb2. At this point, Node sees a call to its setTimeout API, takes note of it, and instantiates a timer outside of the JavaScript runtime. The setTimeout call on the stack will be done and popped out, and while the Node timer is running, the stack is free to continue processing its items. It pops slowAdd 3, 3, pushes slowAdd 4, 4, which in turn pushes setTimeout with the second callback. Node kicks off another timer for this new setTimeout call and the stack continues to pop its done functions. After five seconds, both timers complete, and when they do they queue the callbacks associated with them into the event queue. Exactly at this moment, the event loop has something important to do. The event loop's job is super simple: it monitors the call stack and the event queue. When the stack is empty, and the queue is not empty (there are events waiting to be processed in the queue), it will de-queue one event from the queue and push its callback to the stack. It's called an event loop because it loops this simple logic until the event queue is empty. Right now, our example's call stack is empty and the queue is not. The event loop will pick cb1, and push it to the stack.
Cb1 will push console.log call to the stack, which returns immediately, and this marks cb1 as done. The stack is empty again, and we still have one event to process, so the event loop will push cb2 to the stack and cb2 pushes console.log, done, pop, cb2 is done, and we are at an idle state now for the event loop. The stack is empty and the queue is empty. Node will exit the process when we reach this state. All Node APIs work with this concept. Some process will go handle a certain I/O asynchronously, keeping track of a callback, and when it's done it will queue the callback into the event queue. Keep in mind that any slow code being executed on the stack directly will block the event loop. Similarly, if we're not careful about the amount of events that get queued in the event queue, we can overwhelm the queue and keep both the event loop and the call stack busy. As a Node developer, these are some of the most important things to understand about blocking vs non-blocking code.
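Here's a sketch of the asynchronous version of slowAdd from this walkthrough, assuming the five-second delay used above:

```js
function slowAdd(a, b) {
  setTimeout(() => {
    // This is the callback (cb1 or cb2 above) that goes through the event queue.
    console.log(a + b);
  }, 5000);
}

slowAdd(3, 3); // kicks off a Node timer, queues cb1 when it completes
slowAdd(4, 4); // kicks off another timer, queues cb2 when it completes

// The call stack is free right away; after about five seconds we see 6, then 8.
```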

  17. setImmediate and process.nextTick What happens when the timer delay is 0 milliseconds? Well, almost the same thing. Let's walk through it. The main function pushes slowAdd 3 3, which pushes the setTimeout, which creates the timer. The timer immediately queues its callback on the queue; however, the event loop will not process that event, because the stack is not empty. So the stack continues its normal flow until we get to the second timer, which also immediately queues its callback. The stack is still not empty. After we pop all calls from the stack, the event loop picks the first callback and pushes that to the stack, and then does the same thing with the second one. Because of this loop, the timers are not really executed after 0 milliseconds, but rather after we're done with the stack, so if there was a slow operation on the stack, those timers will have to wait. The delay we define in a timer is not a guaranteed time to execution, but rather a minimum time to execution. The timer will execute after a minimum of this delay. Node's event loop has multiple phases. The timers run in one of those phases while most I/O operations run in another phase. Node has a special timer, setImmediate, which runs in a separate phase of the event loop. It's mostly equivalent to a 0ms timer, except in some situations, setImmediate will actually take precedence over previously defined 0ms setTimeouts. For example, this code will always display immediate before timeout, although we kick off the 0ms timeout first. It's generally recommended to use setImmediate when you want something to get executed on the next iteration of the event loop. Ironically, Node has a process.nextTick API that is very similar to setImmediate, but Node actually does not execute its callback on the next tick of the event loop, so the name here is misleading, but it's unlikely to change. process.nextTick is not technically part of the event loop, and it does not care about the phases of the event loop. Node processes the callbacks registered with nextTick after the current operation completes and before the event loop continues. This is both useful and dangerous, so be careful about it, especially when using process.nextTick recursively. One good example of using nextTick is to keep a function's contract consistent. For example, in this script, the fileSize function receives a fileName argument and a callback. It first makes sure the fileName argument is a string, and it executes the callback with an error if not. Then, it executes the async function fs.stat and executes the callback with the file size. An example use of the fileSize function is here, where we log the file size. Very simple. If we execute it with the current file name, it should log the size of this file. This call was async, because we saw the Hello message first, which is expected because of our use of the async fs.stat function. But something is wrong with this fileSize function. Can you identify it? Let me give you a hint. Let's trigger the validation by passing a number instead of a string here. Validation is a go, but the console Hello line was not executed at all in this case, because the validation code here is synchronous. So the fileSize function could be both sync and async depending on the first argument. This is usually a bad design. A function should always be either sync or async.
To fix this problem, instead of directly calling the callback with an error here, we can use a process.nextTick call for the callback, passing it the error that we want. This way, when we actually trigger the validation, the error will happen asynchronously and the fileSize function is always going to be async.
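Here's a sketch of that fix; the error message and the fs.stat-based implementation follow the description above:

```js
const fs = require('fs');

function fileSize(fileName, cb) {
  if (typeof fileName !== 'string') {
    // Defer the validation error with process.nextTick so the
    // function is always asynchronous, never sometimes-sync.
    return process.nextTick(cb, new TypeError('argument should be string'));
  }

  fs.stat(fileName, (err, stats) => {
    if (err) return cb(err);
    cb(null, stats.size);
  });
}

fileSize(__filename, (err, size) => {
  if (err) throw err;
  console.log(`Size in bytes: ${size}`);
});

console.log('Hello'); // now always printed before the size (or the error)
```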

  18. Summary I hope this module helped you understand Node's event loop and its event-based concurrency model. We looked at V8's call stack and how it is a record of where in the code V8 is currently executing. We looked at how slow operations affect the single-threaded call stack, and looked at Node APIs like setTimeout, which does not really run in V8, but rather is a Node API that does not block the call stack. Node pushes asynchronous operations as events to its event queue, and the event loop monitors both the call stack and the event queue and de-queues callbacks from the event queue into the call stack, which gives control back to V8 to execute the content of those callbacks. We've also looked at two special API calls: setImmediate, which is similar to a 0ms timer, and process.nextTick, which pushes a callback immediately after the current operation and before the event loop continues. In the next module, we'll dive deeper into Node's event-driven architecture and see how events and listeners work with the event loop and how to use Node's EventEmitter class.

  19. Node's Event-driven Architecture Callbacks, Promises, and Async/Await The original way Node handled asynchronous calls is with callbacks. This was a long time ago, before JavaScript had native promises support and the async/await feature. We saw how asynchronous callbacks work with the event loop, but callbacks in the general definition are just functions that you pass to other functions, which, of course, is possible in JavaScript, because functions are first class objects. It's important to understand here that callbacks do not indicate an asynchronous call in the code. In Module 2, we saw how a function can call the callback both synchronously and asynchronously and talked about how to avoid doing that with process.nextTick. Let's see an example of a typical asynchronous Node function that's written with a callback style. readFileAsArray takes a file path and a callback, reads the file, splits the lines into an array of strings, and calls the callback with that array. Here's an example use for it. We use it on a file that has only numbers, parse the array of strings into numbers, and count the odd ones. Just one simple example of using readFileAsArray. Node's callback style is used in its pure form here: the callback has an error-first argument that's nullable, and we pass the callback as the last argument of the host function. Always do that in your functions, because Node users will always assume that. So don't pass the callback argument first here, or flip error and lines here. In modern JavaScript, we have promise objects. Promises can be an alternative to callbacks for asynchronous APIs. Instead of passing a callback as an argument and handling the error in the same place, a promise object allows us to handle success and error cases separately, and it also allows us to chain multiple asynchronous calls instead of nesting them. Let's convert this callback example to use promises. Instead of passing a callback as the last argument to readFileAsArray, we do a .then call on it. This .then will give us access to the lines array, and we can do our processing on it as before inside the .then call, and to handle the error, we do a .catch call on the result to log the error. To convert the host function definition to work with this code, we start by returning a promise object, which wraps the fs.readFile async call. The promise constructor gives us two functions, resolve and reject. We replace the error cb with a reject call and replace the success cb with a resolve call. We're simply replacing callback calls with promise calls. And this callback argument is no longer used. This will work exactly the same. Although this code is a little bit easier to understand and work with, Node's official method for working with asynchronous code is callbacks, and developers using your code might assume that you have a callback interface. If you want to work with promises, you can keep the callback mechanism and add a promise interface to it. Lots of popular Node packages started doing that. It's very simple: don't replace the callback calls, keep them and add a promise call. So we reject here with the error and also return a callback with the error. And in here we resolve with the lines and keep the callback invocation with the lines. The only other thing you need to do in this case is to have a default value for this callback argument in case the code is being used with the promise interface. We can use a simple default empty function in the argument here. Now this code will work both ways, with callbacks and with promises.
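Here's a sketch of readFileAsArray with the dual callback/promise interface described above. The numbers.txt file name is just for illustration; assume it's a file with one number per line:

```js
const fs = require('fs');

const readFileAsArray = function (file, cb = () => {}) {
  return new Promise((resolve, reject) => {
    fs.readFile(file, (err, data) => {
      if (err) {
        reject(err);         // promise interface
        return cb(err);      // callback interface
      }
      const lines = data.toString().trim().split('\n');
      resolve(lines);
      cb(null, lines);
    });
  });
};

// Callback interface (error-first callback, passed last):
readFileAsArray('./numbers.txt', (err, lines) => {
  if (err) throw err;
  const numbers = lines.map(Number);
  const oddCount = numbers.filter(n => n % 2 === 1).length;
  console.log('Odd numbers count:', oddCount);
});

// Promise interface:
readFileAsArray('./numbers.txt')
  .then(lines => console.log('Read', lines.length, 'lines'))
  .catch(console.error);
```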
Adding a promise interface makes your code a lot easier to work with when there is a need to loop over an async function. With callbacks, things become messy. Promises improve that a little bit, and generator functions improve on that a little bit more. But a more recent alternative for working with async code is to use async functions, which allow us to treat async code as if it was linear, making it a lot more readable when we need to process things in loops. In my current Node version, this feature is behind a harmony flag, but it'll be officially available soon. To consume our readFileAsArray method with async/await, we first create an async function, which is just a normal function with the word async before it. Inside the async function, we call the readFileAsArray function as if it returns the lines variable, and to make that work, we use the keyword await. After that, we continue the code as if the readFileAsArray call was synchronous. And to get things to run, we execute the async function. Very simple and more readable. However, to work with errors, we also need to wrap everything in a try/catch call like this. Don't forget that. To execute all this code, I currently need to run the file with the --harmony-async-await flag, and the readFileAsArray function example will work as expected. It's actually now working with all three methods: it exposes both a callback interface and a promise interface, and we can consume it using callbacks, promises, and the new async/await feature in JavaScript.
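And here's a sketch of the async/await consumer, assuming readFileAsArray from the previous sketch is in scope. On the Node 7 version used in the course you'd run this with --harmony-async-await; on current Node versions no flag is needed:

```js
// Assumes readFileAsArray (with its promise interface) is defined above.
async function countOdd() {
  try {
    const lines = await readFileAsArray('./numbers.txt');
    const numbers = lines.map(Number);
    const oddCount = numbers.filter(n => n % 2 === 1).length;
    console.log('Odd numbers count:', oddCount);
  } catch (err) {
    console.error(err);
  }
}

countOdd();
```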

  20. Event Emitter The EventEmitter is a module that facilitates communication between objects in Node. EventEmitter is at the core of Node's asynchronous event-driven architecture. Many of Node's built-in modules inherit from EventEmitter. The concept is simple: emitter objects emit named events that cause listeners to be called. So an emitter object has two main features: emitting named events, and registering listener functions. To work with the EventEmitter, we just create a class that extends EventEmitter. Emitter objects are what we instantiate from EventEmitter-based classes. At any point in their lifecycle, we can use the emit function to emit any named event we want. Emitting an event is the signal that some condition has occurred. This condition is usually about a state change in the emitting object. We can add listener functions using the on method, and those listener functions will simply be executed every time the emitter object emits their associated named event. Let's take a look at an example. Class WithLog is an event emitter. It defines one instance function, execute. Execute receives a task function and wraps its execution with log statements. It fires events before and after the execution. To see the sequence of what will happen here, we register listeners on both named events and actually execute a sample task here. What I want you to notice about the output here is that it all happens synchronously; there is nothing async about this code. We get the Before executing line first, the begin named event causes the about to execute log line, then we execute, the end named event causes the done with execute line, and then we get the after execute line. So just like plain-old callbacks, do not assume that events mean synchronous or asynchronous code. This is important, because say that the executed taskFunction was actually async. We can simulate that with a setTimeout call here instead of a direct function. Now, this line will be async, and the two lines after are not accurate anymore. To emit an event after an asynchronous function is done, we'll probably need to combine callbacks or promises with this event-based communication. One benefit of using events instead of regular callbacks is that we can react to the same signal multiple times by defining multiple listeners. To accomplish the same with callbacks, we have to write more logic inside the single available callback. Events are a great way for applications to allow multiple external plugins to build functionality on top of the application's core. You can think of them as hook points to allow for customizing the story around a state change. Let's convert this synchronous example into something asynchronous and a little bit more useful. The WithTime class executes an asyncFunction and reports the time that's taken by that asyncFunction using console.time and console.timeEnd calls. It emits the right sequence of events before and after the execution, and also emits error and data events to work with the usual signals of an asynchronous callback. We call it by simply flattening the arguments of an async method into the execute call. And instead of handling data with callbacks, we can listen to the data event now. When we execute this code now, we get the right sequence of events as expected, and we get a reported time for the execution, which is helpful.
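Here's a sketch of the WithTime class described above; the fs.readFile test call and the console.time label are just for illustration:

```js
const EventEmitter = require('events');
const fs = require('fs');

class WithTime extends EventEmitter {
  execute(asyncFunc, ...args) {
    this.emit('begin');
    console.time('execute');
    asyncFunc(...args, (err, data) => {
      if (err) {
        return this.emit('error', err); // the usual error signal, as an event
      }
      this.emit('data', data);          // the usual data signal, as an event
      console.timeEnd('execute');
      this.emit('end');
    });
  }
}

const withTime = new WithTime();

withTime.on('begin', () => console.log('About to execute'));
withTime.on('end', () => console.log('Done with execute'));
withTime.on('data', data => console.log('Length:', data.length));

// "Flatten" the async call: pass the function and its arguments to execute.
withTime.execute(fs.readFile, __filename);
```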

  21. Arguments, Errors, and Order of Listeners In the previous clip's example, there were two events that were emitted with extra arguments. The error event is emitted with an error object, and the data event is emitted with a data object. We can use as many arguments as we want after the named event, and all these arguments will be available inside the listener functions. For example, to work with this data event, the listener function that we register will get access to the data argument that was passed to the emitted event. And that data is exactly what the read file callback exposes. The other special event was the error event. The error event is a special one, because if we don't handle it with a listener, the Node process will actually exit. Let me show you what I mean. If we have two calls to the execute method, and the first call is triggering an error, the Node process is going to crash and exit. To prevent this behavior, we need to register a listener for the special error event, and if we do so, that error will be reported and both lines will be executed. So the Node process did not crash and exit in that case. The other way to handle exceptions from emitted errors is to register a listener for the global uncaughtException process event. This will essentially be the same, but remember the advice that on an uncaught exception, we should just let the process exit anyway. So usually when you use uncaughtException, we will do some kind of cleanup and then we'll have the process exit. However, imagine that multiple error events happen. This means the uncaughtException listener will be triggered multiple times, which might be a problem for any cleanup code we are doing there. The EventEmitter module exposes a once method that we can use instead of on. This once method means: invoke the listener just once, not every time the event happens. So this is a practical use case for the uncaughtException event, because with the first uncaught exception we should start doing the cleanup and exit the process anyway. If we register multiple listeners for the same event, the invocation of those listeners will be in order: the first listener that we register is the first listener that gets invoked. In here we have two listeners, and the length listener was invoked before the characters listener. If you need to define a new listener but have that listener invoked first, you can use the prependListener method, and if you need to remove a listener you can use the removeListener method.
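Here's a sketch that pulls these points together; the event names and messages are made up for illustration:

```js
const EventEmitter = require('events');

const emitter = new EventEmitter();

// Without this listener, an emitted 'error' event would crash the process.
emitter.on('error', err => console.log('Handled:', err.message));

// Clean up only once, no matter how many uncaught exceptions occur.
process.once('uncaughtException', err => {
  console.log('Cleaning up before exit...', err.message);
  process.exit(1);
});

// Listeners are invoked in registration order...
emitter.on('data', data => console.log('Length:', data.length));
emitter.on('data', data => console.log('Characters:', data.split('').length));

// ...unless we prepend one, which then runs first.
emitter.prependListener('data', () => console.log('Prepended listener'));

emitter.emit('error', new Error('something failed')); // handled, no crash
emitter.emit('data', 'hello');
```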

  22. Practical Example: Task List Manager Let's create a practical example using Node's event emitter. Let's create a simple task list manager. We'll keep the structure simple: we'll have a client file and a server file. The client emits a command event for the server, and the server emits a response event for the client. We'll make it support four commands: help to display the list of commands, ls to list the current tasks, add to add a new task, and delete to delete a task. To get started, I created a client.js file and a server.js file. We'll need to require the EventEmitter in both of them. In client.js, we need to read input from the user. For that, I'm going to use the readline module. To use the readline module, we create an interface with the input and output streams, and let's just use process.stdin and process.stdout. The client event emitter is going to be a simple one. There isn't a lot of logic in here, so instantiating an object directly from the EventEmitter should be okay. The server, on the other hand, is going to have some logic, so I'm going to create a class that extends EventEmitter, and I'll export an instance of that class. In the client file, we'll import this server object. However, since we're doing two-way communication here, the client and the server need access to each other. The client is going to emit events to the server, and the server is going to listen to those events. And the inverse is true as well. So in client.js, we have access to the server object, because we just required it. We could do the same for the client, but another way we could do that is to make server.js export a function and execute this function here with the client as an argument. This will allow us to go into server.js and, in here, assume that we have a function instead of just an object. This function is going to receive the client, and we can potentially instantiate the server object with the client object. So now in the server class, we can define the constructor function, and that constructor function is going to receive the client. Back in client.js, let's go ahead and use the readline interface. We can register a listener for the line event, which receives an input. Just to test that, let's go ahead and log this input line. We can test this with node client.js, and it's listening to the input and echoing it back. So now that we have access to input from the user, we can use the client to communicate this input to the server. So we can do client.emit the input, for example. Input is input. So now every time the user hits enter, the client is going to emit an input event to the server. So the idea here is to run this. We're going to do something like help here, and that would be the command for the server. In fact, let's name this event command, instead of input, to make it more clear. So, to test all this, in server.js let's go ahead and listen to any command event from the client. So on command, we're going to receive the command. We'll have a function here to do something with the command. I'm going to console.log it for now so we can test. Command is add. Perfect. So the commands that we want to implement are help, add, ls and delete. Let's implement the first command. I'm going to create an instance method here for every command, and every one of them is going to emit a response named event. So for now, we'll do placeholders and copy help for add, ls, and delete.
And to make sure we're only executing those supported commands, I'm going to do a switch statement here on the command, and inside it I'll list all the supported commands. And for these supported commands, we're going to go ahead and execute them. So I'll pick the command and execute it. And for the default case, this will be an unknown command, so I'm going to go ahead and emit a response event here and say something like unknown command. So now inside the client file, we're emitting the command to the server and the server is going to emit a response event. So what we need to do here now is listen to that response event. So server on response. The listener here is going to be a function, and this function gets access to the response. Let's just console.log the response. So I think we can actually test all that. Help, add, delete, ls and something unknown. Perfect. I'm going to do a trick here to make this more user friendly. So basically we write this string, which is a special escape sequence that will clear the terminal, then we write the response, and then we write a prompt to receive other commands. So now when we test this, it will be something like help, it says help, and then receives another command. Add, responds with add and receives another command. So we should probably do the same on the initialization of the server and instruct the user to type a command. Instead of doing it here manually, we can have the server respond with an initial response that says "type a command." So that's exactly what I'm going to do. Right after we initialize, we're going to do this.emit response, and in here say something like "type a command, help to list commands." However, this is not going to work for a very important reason. It's all about the sequence of what's happening: this emit line is going to be executed before the client registers its handler for the response event, so our handler for the response event would not be defined yet. To fix this problem, we can wrap this call in a process.nextTick. So in here, I'll do process.nextTick function, and in this function we'll do this.emit response, type a command. It's process.nextTick. Let's test now. Things are working. Type a command, help to list commands. Help, add, an unknown command. Perfect. Now that we have the structure of this communication between the client and the server, implementing those commands is actually easy. The help command will just emit a string: the available commands are add, ls or delete. The add command, on the other hand, is going to need to receive some arguments. We're going to say add something. So right now we don't have any arguments. We're actually calling this with an empty argument. We're only receiving command here. So we can either process the command for its arguments in here, or process it on the client side, which makes more sense to me. So I will initialize command and args variables inside the line listener. I'm going to read the command and the rest of the arguments from the input by splitting it on spaces. This way, the first token of this input is going to be the command, and the rest of the arguments will be in the args variable. So let's pass both the command and the args to the emitted event. Perfect. So now, inside server.js, we will have access to the command and an array of arguments. And let's pass this array of arguments to every command we execute. So now we do have a list of arguments, so let's just test by responding with that list of arguments and joining it back into a string. So we can test that.
If we do add hello there, it will display back hello there. Perfect. So the rest of the implementation here is purely JavaScript, so let me do it quick and fast forward. And let me review what I did. I added a tasks object to hold the tasks information, and a taskId variable to hold a unique ID for every new task. In the add method, we just add the new task to the tasks object and emit a response for the user that the task was added. For the delete method, we delete the task for whatever ID the user types in and give the user feedback that the task was deleted. And the ls method is going to use this new tasksString method to display the tasks. This is just a friendly output for the tasks object: we loop over the keys, and for every key we return a string with the task information, and then join everything with new lines. So this is the final product. Let's go ahead and test it. Node client. We have a help command, which lists all the available commands: add task, ls and delete. Let's add a few tasks: add do this, add do that. We'll see the feedback that the tasks were added. We can list the tasks, and we can delete a task. So let's delete one, and list the tasks after that, and we have two.
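Here's a condensed, single-file sketch of what we built; the course splits it into client.js and server.js (the exact files are in the course repo), but the event flow is the same:

```js
const EventEmitter = require('events');
const readline = require('readline');

class Server extends EventEmitter {
  constructor(client) {
    super();
    this.tasks = {};
    this.taskId = 1;

    // Deferred so the client has time to register its 'response' listener.
    process.nextTick(() =>
      this.emit('response', 'Type a command (help to list commands)')
    );

    client.on('command', (command, args) => {
      if (['help', 'add', 'ls', 'delete'].includes(command)) {
        this[command](args);
      } else {
        this.emit('response', 'Unknown command...');
      }
    });
  }

  tasksString() {
    return Object.keys(this.tasks)
      .map(id => `${id}: ${this.tasks[id]}`)
      .join('\n');
  }

  help() {
    this.emit('response', 'Commands: help | add <task> | ls | delete <id>');
  }

  add(args) {
    this.tasks[this.taskId] = args.join(' ');
    this.emit('response', `Added task ${this.taskId++}`);
  }

  ls() {
    this.emit('response', `Tasks:\n${this.tasksString()}`);
  }

  delete(args) {
    delete this.tasks[args[0]];
    this.emit('response', `Deleted task ${args[0]}`);
  }
}

// The client side: read lines, emit commands, print responses.
const client = new EventEmitter();
const server = new Server(client);

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

server.on('response', resp => {
  process.stdout.write('\u001B[2J\u001B[0;0f'); // a common escape sequence to clear the terminal
  process.stdout.write(`${resp}\n\n> `);
});

rl.on('line', input => {
  const [command, ...args] = input.trim().split(' ');
  client.emit('command', command, args);
});
```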

  23. Summary In this module, we explored the event-driven nature of Node using its core EventEmitter module. We first talked about the different ways we can deal with asynchronous code in JavaScript and how Node's standard way is to use an error-first callback as the last argument of an asynchronous function. We saw how we can combine callbacks with promises and how we can consume these promises using the new async/await feature in JavaScript. We then learned about the EventEmitter module and saw how emitter objects emit named events that cause listeners to be called. We also saw how events can be used for both asynchronous and synchronous code. We saw how to use arguments with emitted events, how to handle the special error events, and what happens when we register multiple listeners for the same event. We then created a practical example using Node's event emitter, which was a simple command-based task list manager with features to list, add, and delete tasks in a list of tasks. In the next module, we'll explore Node's capabilities for working with TCP and UDP networking.

  24. Node for Networking TCP Networking with the Net Module Let's create a basic network server example. We use the net module's createServer method. We then register a connection handler that fires every time a client connects to this server. When that happens, let's console.log that the client is connected. The handler also gives us access to the connected socket itself. This socket object implements a duplex stream interface, which means that we can read and write to it. So let's write "Welcome New Client." And to run the server, we need to listen to a port, and the callback here is just to confirm it. Let's test. Testing this is simple: we can use either telnet or netcat. For example, nc localhost 8000. We get the client is connected message in the server console, and the welcome message is sent to the client, and then Node keeps running, because we did not terminate that connection. Now the client can write to that socket, but we have not registered a handler to read from the socket. The socket being a duplex stream means that it's also an EventEmitter, so we can listen to the data event on the socket. The handler for this event gives us access to a buffer object. Let's console.log it. Now when the client types something, we get it as a buffer. This is great, because Node does not assume anything about encoding. The user can be typing here in any language. Let's now echo this data object back to the user using socket.write. When we do so, you'll notice that the data we are writing back is actually a string. This is because the write method on the socket assumes a utf8 encoding here. Its second argument can be used to control the encoding, and the default is utf8. We can also globally set the encoding if we need to, so the global encoding is now utf8 and the data argument here becomes a string instead of a buffer, because we now told Node to assume a utf8 encoding for anything received from this socket. Both the console.log line and the echoed data are assumed to be utf8. I'm going to keep the example without the global encoding, just in case we need access to the buffer object. What happens when the client disconnects? The socket emits an end event when the client disconnects. At that point, we can't write to the socket any more. Let's console.log that the client is disconnected. Let's test that now. Connect, and we can disconnect a netcat session with Ctrl+D. The server logs the line.
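Here's a sketch of this basic TCP server; test it with nc localhost 8000 or telnet:

```js
const net = require('net');

const server = net.createServer();

server.on('connection', socket => {
  console.log('Client is connected');
  socket.write('Welcome new client!\n');

  // Without a global encoding, data arrives as a Buffer.
  socket.on('data', data => {
    console.log('data is:', data);
    socket.write('data is: ');
    socket.write(data); // echoed back, assumed utf8 by default
  });

  socket.on('end', () => {
    console.log('Client is disconnected');
  });
});

server.listen(8000, () => console.log('Server is bound on port 8000'));
```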

  25. Working with Multiple Sockets Let's review what we did in the previous clip real quick. We created a network server using the net module's createServer method. This server object is an EventEmitter, and it emits a connection event every time a client connects to it. So we register the handler on this connection event. This handler gives us access to the connected socket. The connected socket is also an EventEmitter, and it emits a data event whenever the user types into that connection, and it also emits an end event if the user disconnects from the server. So now we have a server, we can connect to this server and type to it, and many users can connect to that server and also type to it. No problem there. They can both write to their sockets, and we can read from both sockets. These two connected clients get two different sockets. In fact, let's give every socket a unique id. We can simply use a counter. I'm going to define a counter, start it from 0, and every time a client connects, we'll define an id property for that socket and assign it counter++, which also increments the counter for the next socket. And now on data, when we receive the data from the user, let's identify the socket. Let me remove this console.log line here, we don't need it, and in here, instead of writing "data is," I am going to write ${socket.id} to identify it, just like that. So let's test: one client, two clients, and now when this client types, it says 0, and when this client types it says 1. So this makes the log clear. We have two different clients connecting here, and we can write to their sockets. To make these two clients chat with each other, all we need to do is, when we receive data from one of them, write it to all the connected sockets. So, we need to keep a record of all connected sockets and loop over them in the data event handler. We can use a simple object to keep a record of all connected sockets. Let's do that. Let's define a sockets variable and start it as an empty object. Every time a socket connects, let's store it in this sockets object, indexed with the id we just defined, just like that. Then on a data event, we can loop over this object. We can use Object.entries, which is actually a new method in JavaScript, just like keys, but it gives us both the key and the value. So, Object.entries of sockets, .forEach, which takes a handler. Inside the forEach handler argument we can destructure both the key and the socket object, which I'm going to alias as cs (client socket) here, so that it doesn't conflict with the outer socket that we're working with. And now inside the forEach, we can just use our previous code to write to the socket. But instead of socket here, it's going to be cs, the socket that we have in the loop, and actually key is not used, so we can just remove it. We only care about the connected socket. And I do want to keep this as socket.id because that's the socket that is actually sending the data, so it's good to identify who is writing in that case. So let's test all that. Connect multiple clients, and try typing in all of them. Hello, and you'll see how 0 said hello in both of them. And then hi, and you'll see how 1 said hi in both of them. So let's review. On data from any socket, we loop over this sockets object, and we write the received data to every socket in there. This simply acts like a chat server, because every connected client is receiving all the messages from every other connected client.
However, if one client disconnects, our chat application will simply crash, because now we're trying to write to a socket that was closed. So, to fix that, when the socket emits an end event, we should delete it from the sockets object: delete sockets[socket.id]. This deletes the disconnecting socket from the sockets object, and this way, if one client disconnects, the other client can still type and the server will not crash. So now we can say that we have a bare-bones chat server.
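Here's a sketch of the bare-bones chat server at this point, with socket ids, the sockets record, and the cleanup on disconnect:

```js
const net = require('net');

const server = net.createServer();
const sockets = {};
let counter = 0;

server.on('connection', socket => {
  socket.id = counter++;
  sockets[socket.id] = socket;
  socket.write('Welcome new client!\n');

  socket.on('data', data => {
    // Broadcast the received data to every connected socket.
    Object.entries(sockets).forEach(([, cs]) => {
      cs.write(`${socket.id}: `);
      cs.write(data);
    });
  });

  socket.on('end', () => {
    delete sockets[socket.id]; // avoid writing to a closed socket
  });
});

server.listen(8000, () => console.log('Server is bound on port 8000'));
```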

  26. Improving the Chat Server We created a very basic chat server, so how about we improve it a little bit? Right now, we can run the server, clients can connect, we can identify them with their IDs, and they all receive all the messages. Usually, when you chat, you don't really get your own messages echoed back to you, so let's add a condition here inside the forEach to implement that. Basically, if the socket that is sending the data, which we can identify with socket.id, equals the key of the current socket in the loop (so let's bring the key back here), then the current socket in the loop is the same socket that is sending the data, and we should just return here and not write back to the sender. So let's test that. Now when the first client types, clients 1 and 2 both get the hello message, but client 0 does not get the hello message. Let's try hi in here. Clients 0 and 2 both get the message, but client 1 does not. Perfect. So how about we identify our chatters with names instead of numbers? Maybe we can ask them for their name when they first connect. So instead of this welcome new client message, I'm going to type "please type your name," just like that. And this means that the first data event I'll receive is going to be the name of the client. So I need a condition to identify whether a client is just giving me their name for the first message, or whether this is a data event that's coming from a client who already gave me their name and they're just chatting now. So how about instead of adding every socket to the sockets object right away, we make that a condition. We only add the socket when we receive the name of the user. So in here, we'll do a condition. We'll say: if the socket is not part of the connected sockets, so if there is no sockets[socket.id], then this means that we have not registered this socket yet, and this current data message is the name of the user. So we'll go ahead and register this socket now, and we'll store the name of the user, which is basically the data object. And remember, the data object is a buffer, because we don't have the global encoding, so we'll do a toString on it to convert it into utf8, and we also trim it to remove the new line. And in this case, we want to return. We don't want to echo this name back to all connected users, because this is just a socket that's connecting and giving me its name. How about we give the user a welcome message here: socket.write, and say something like "welcome socket.name," just like that. So now that we have a name for every socket, we can actually use the name here instead of the ID to identify the socket: socket.name in here. And I think we can test. Connect three clients, and you'll notice how the server is asking for the name. Let's name the first chatter Alice, Bob for the second, Charlie for the third, and you see how the server welcomed them by name. So now when Alice types something, Bob and Charlie get the message Alice said "hello." Let's try it here. So Bob is saying "Hi." Charlie is saying "Hey." So that's a good improvement. How about we also show a timestamp on the message? Because without timestamps, it's not really a chat server. We can simply display the timestamp here right after the name, so let's make it into a variable, and how about we create a function for the timestamp, just like that, and let's go define this function. So function timestamp is a very simple function that will basically format the current date, so let's do now = new Date().
This just gives me the current date, and let's return a string that has the hours, which is now.getHours, and the minutes, which is now.getMinutes. Very simple. So this would be an hh:mm format. This is very simple. If you really want to do actual time formatting, you have to handle multiple cases, and you have to be aware of time zones and all that. I would recommend a library called moment if you want to actually work with timestamps, but this is just an example to improve our simple chat server. So let's test that. Alice, Bob and Charlie, and now we see that Alice said "Hello" at 10:47.
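Here's a sketch of the improved chat server with names and the simple hh:mm timestamp; details may differ a bit from the code in the course repo:

```js
const net = require('net');

const server = net.createServer();
const sockets = {};
let counter = 0;

function timestamp() {
  const now = new Date();
  return `${now.getHours()}:${now.getMinutes()}`; // simple hh:mm, no padding
}

server.on('connection', socket => {
  socket.id = counter++;
  socket.write('Please type your name: ');

  socket.on('data', data => {
    if (!sockets[socket.id]) {
      // First message from this client is their name.
      socket.name = data.toString().trim();
      socket.write(`Welcome ${socket.name}!\n`);
      sockets[socket.id] = socket;
      return;
    }

    Object.entries(sockets).forEach(([key, cs]) => {
      if (socket.id == key) return; // don't echo the message back to its sender
      cs.write(`${socket.name} ${timestamp()}: `);
      cs.write(data);
    });
  });

  socket.on('end', () => {
    delete sockets[socket.id];
  });
});

server.listen(8000, () => console.log('Server is bound on port 8000'));
```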

  27. The DNS Module Let's take a look at the DNS module. We can use it to translate network names to addresses and vice versa. For example, we can use the lookup method to look up a host, say Pluralsight.com. This gives us an error-first callback with the address for that host. So we can log the address here, and running this will give us the IP address for Pluralsight.com. The lookup method on the DNS module is actually a very special one, because it does not necessarily perform any network communication, and instead uses the underlying operating system facilities to perform the resolution. This means that it will be using libuv threads. All the other methods on the DNS module use the network directly and do not use libuv threads. For example, the equivalent network method for lookup is resolve4, where 4 is the IP address family, so we're interested in IPv4 addresses. This will give us an array of addresses in case the domain has multiple A records, which seems to be the case for Pluralsight.com. If we just use resolve instead of resolve4, the default second argument here is 'A', for A records, so that's what just happened, but we can also resolve other types. For example, if we're interested in MX records, we just do that, and here are all the MX records for Pluralsight.com. You can tell they're using Google apps. All the available types also have equivalent method names, so resolveMx here is the same as resolving with the MX argument. Another interesting method to know about is the reverse method. The reverse method takes in an IP, so let's pass one of the IPs for Pluralsight, and it gives us an error-first callback with hostnames, so it translates an IP back to its hostnames. Let's test that, and you'll notice that it translates this IP back to an Amazon AWS hostname. If we actually tried to open this in a browser it would probably go to Pluralsight.com. There you go.
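Here's a sketch of these DNS calls; the IP passed to reverse is just an example (in the clip it's one of the addresses returned by resolve4):

```js
const dns = require('dns');

// Uses the OS facilities (and libuv threads), not necessarily the network:
dns.lookup('pluralsight.com', (err, address) => {
  if (err) throw err;
  console.log('lookup:', address);
});

// Goes over the network and returns all A records:
dns.resolve4('pluralsight.com', (err, addresses) => {
  if (err) throw err;
  console.log('resolve4:', addresses);
});

// Equivalent to dns.resolveMx('pluralsight.com', ...):
dns.resolve('pluralsight.com', 'MX', (err, records) => {
  if (err) throw err;
  console.log('MX records:', records);
});

// Translate an IP back to hostnames:
dns.reverse('8.8.8.8', (err, hostnames) => {
  if (err) throw err;
  console.log('reverse:', hostnames);
});
```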

  28. UDP Datagram Sockets Let's see an example of working with UDP sockets in Node. To do so, we need to require the dgram module, which provides an implementation of UDP datagram sockets. We can then use the dgram createSocket method to create a new socket. The argument for createSocket can be udp4 or udp6, depending on the type of socket that you want to use. So we're using udp4 here. And to listen on this socket, we use server.bind and give it a port and host. I'm putting port and host in constants here, because I'm going to reuse them for the client. The server object here is an event emitter, and it has a few events that we can register handlers for. The first event is the listening event, which happens when the UDP server starts listening. The other event that we want to register a handler for is the message event. This event happens every time the socket receives a message. Its callback exposes the message itself and the remote information, so let's console.log the remote information address and port along with the message, so that we know where the message is coming from. And that's it for the UDP server. We can actually run this code and the server will start listening for connections. So this part is the server, and I'm going to create a client in the same file here for simplicity. Since we already have dgram required and we can share these variables, all we need to do is create a client socket with the same createSocket call. Let's call it client. And to send a UDP packet, we use client.send. The first argument to this method can be a string, so let's send "Pluralsight rocks." And we need to tell the UDP socket which port and host to send this message to, so we can use port and host here. The method also takes an optional callback in case we want to handle the error. So, if there's an error, we should throw it, and in here we can console.log something like UDP message sent, and after we send this message, we can close the client, in case this is the only UDP packet that we want to send. So we can test that. You'll see that the server is listening, the message is sent, and you'll see this handler getting fired with the right address and port in there. Every time you create a new socket, it will use a different port. For example, let's put all this code in an interval function, and every second do all of this sending. And let's see what happens now. You'll see that every second we're getting a different port in the remote information. Let's undo this. So, back to sending a message. This first argument here is currently a string, but it can also be a buffer, so let's create a new buffer. I'm going to call it msg and it will be Buffer.from this message, and I want to send this msg, which is a buffer in this case. And when we have a buffer, we can specify where to start from in this buffer, and how much to send. So 0 and msg.length will be equivalent to sending the whole string, and you'll see I got the same thing there. So we're sending the buffer from 0 to msg.length, and I can actually send the packet in multiple steps. For example, let's send the first 11 bytes, which is Pluralsight, and after we do that, let's send 6 characters starting from 11, which is going to give me the rocks part. So let's take a look at that. Now you'll see that the packet was sent in two steps: we have Pluralsight in the first step and then space rocks in the second step. Let me undo that, since we don't need it.
So specifying an offset and a length is only needed if you use a buffer. The first argument can actually be an array of messages if we need to send multiple things.
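As a rough sketch of the server/client pair above (the port and host values are assumptions standing in for the constants used in the clip):

    const dgram = require('dgram');

    const PORT = 3333;
    const HOST = '127.0.0.1';

    // Server
    const server = dgram.createSocket('udp4');

    server.on('listening', () => console.log('UDP Server listening'));

    server.on('message', (msg, rinfo) => {
      console.log(`${rinfo.address}:${rinfo.port} - ${msg}`);
    });

    server.bind(PORT, HOST);

    // Client: send a buffer, optionally in multiple chunks
    const client = dgram.createSocket('udp4');
    const msg = Buffer.from('Pluralsight rocks');

    client.send(msg, 0, 11, PORT, HOST, (err) => {     // "Pluralsight"
      if (err) throw err;
      client.send(msg, 11, 6, PORT, HOST, (err) => {   // " rocks"
        if (err) throw err;
        console.log('UDP message sent');
        client.close();
      });
    });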

  29. UDP Summary In this module, we explored Node capabilities for working with TCP and UDP networking. We created a basic network server example using the net module and saw how we can communicate with TCP sockets for read and write and how the sockets are event emitters. We saw how to give every socket an id and how to keep a record of all connected sockets, and then we used that structure to make our network server into a simple chat application. We then improved that chat application by asking connected clients about their names first and outputting a timestamp for every chat message. We then explored the DNS module and its various methods and how to use it to translate network names into addresses and vice versa. And finally, we saw an example for working with UDP sockets in Node using the dgram module. In the next course module, we'll talk about Node's HTTP and HTTPS capabilities.

  30. Node for Web The Basic Streaming HTTP Server HTTP is a first-class citizen in Node. In fact, Node started as a web server and evolved into the much more generalized framework it is today. Node's HTTP module is designed with streaming and low latency in mind. Node is today a very popular tool to create and run web servers. Let's look at the typical hello-world example for Node's http module. We create a server using the HTTP module's createServer method, which gives us, no surprise there, an event emitter object. That event emitter has many events, one of which is the request event, and this event happens every time a client connects to this HTTP server. It exposes a request object and a response object. We can use the response object to modify the response Node will be sending for that request. In here, we respond with 200 OK, content-type text, and the text "hello world." The server runs on port 8000. When we run this script, Node will not exit, because it has a listener handler and it will respond to any HTTP request on port 8000. If we inspect the headers this simple HTTP server is sending, we'll see HTTP/1.1, which is the version Node's http module speaks. The response code is 200 OK, the content-type is what we set it to, and we have a connection keep-alive and transfer-encoding chunked. Keep-alive means that the connection to the web server will be persisted. The TCP connection will not be killed after a requester receives a response, so that they can send multiple requests on the same connection. Transfer-encoding chunked is used to send a variable-length response body. It basically means that the response is being streamed! Node is okay with us sending partial chunked responses, because the response object is a writable stream. There is no response length value being sent. After Node sends the 200 OK here, it can do many other things before terminating the response, and instead of inefficiently buffering everything it wants to write in memory and then writing it all at once, it can just stream parts of the response as they're ready. So this very simple HTTP server can be used, for example, to stream video files out of the box. But remember that the connection is not terminated here, so how does the browser know that the content is done? Through the HTTP protocol itself. HTTP/1.1 has a way of terminating a message, and it's what happens when we use the response .end function. So if we don't actually use the end method, if we use the write method instead, then when a client connects, they get the "hello world," but the response is not terminated, because basically Node is still streaming. In fact, in here, we can define a timer function. Let's make that fire after one second. And inside this function, we'll go ahead and write another message. And how about we write yet another message after two seconds? Let's try it. Run the server. Connect. And you'll see "hello world." After one second you'll see another message, and after two seconds you'll see the third message. And the server is still streaming. We have not instructed HTTP to terminate the response object. Let's make the timeout periods here a little bit longer. Let's do 10 seconds and 20 seconds and test that. Initiate an HTTP request. I see the "hello world." Node is not really sleeping; it's just idling, and it can actually handle other client requests during this idling phase, thanks to the event loop. Both of these requests are being handled concurrently with the same Node process.
Please note, however, that terminating the response object with a call to the end method is not optional; you have to do it for every request. If you don't, the request will time out after the default timeout period, which is set to two minutes. As you can see, this write line did not happen, because it's delayed until after the default server timeout. We can actually control the server timeout using the timeout property, so this line will set the timeout to one second, and you'll see now how the server is going to time out right away.
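Here is a minimal sketch of the streaming hello-world server with the timer writes discussed above (the port, messages, and timings are assumptions in the spirit of the clip):

    const http = require('http');

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.write('Hello world\n');                          // the response is a writable stream

      setTimeout(() => res.write('Another message\n'), 1000);
      setTimeout(() => res.end('Last message\n'), 2000);   // .end() terminates the response
    });

    // The default request timeout is 2 minutes; it can be changed:
    // server.timeout = 1000;

    server.listen(8000);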

  31. Working with HTTPS HTTPS is the HTTP protocol over TLS/SSL. Node has a separate module to work with HTTPS, but it's very similar to the HTTP module. To convert the basic HTTP server example to work with HTTPS, all we need to do is require https instead and provide the createServer method with an options object. This object can be used to configure multiple things. Usually we pass a key and a certificate. Those can be buffers or strings representing a private key and the certificate. We can also pass a pfx option to combine them if we want. I'll just use key and cert, and let's walk through a complete example. We first need to generate a certificate. We can use the openssl toolkit for that, which will allow us to first generate a private key. We can have it encrypted or not encrypted. And it will allow us to generate a certificate signing request (CSR), and then self-sign this certificate to test it. Of course, browsers will not trust our self-signed certificate, but it's good for testing purposes. We can actually combine all these steps with one command that will output a private key and a certificate file for us. It will ask for some information, but since it's all just a test, you can use the default answers here. When this command is done, you should see a key.pem file and a cert.pem file in the working directory. Now that we have these files, we just need to pass them here to the createServer method. We can use the fs module to do that. We'll do readFileSync. Since these files are to be read once and used for creating a server, this is okay here. We'll do ./key.pem, and the same thing for the cert, so it's ./cert.pem. Of course, we need to require the fs module, and the last thing we need to do is change this port to 443, which is the default port for HTTPS communication. To test all that, we need to execute the script. In my environment I need to sudo this command to get access to port 443. So sudo node https.js, and we have an HTTPS server. So we can go to https://localhost and the browser is going to warn us about the self-signed certificate. It simply means that the browser does not recognize this certificate, but we can trust it manually. And there you go, very simple.
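A sketch of the HTTPS version of the server; the key and cert file names follow the clip, and the openssl command shown in the comment is one common way to produce them (the exact flags are an assumption, since the clip doesn't spell them out):

    // One way to generate the self-signed certificate:
    //   openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365
    const https = require('https');
    const fs = require('fs');

    const server = https.createServer({
      key: fs.readFileSync('./key.pem'),    // read once at startup, so sync is fine
      cert: fs.readFileSync('./cert.pem')
    }, (req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello HTTPS world\n');
    });

    server.listen(443); // may require sudo, or pick a port >= 1024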

  32. Requesting HTTP/HTTPS Data Node can also be used as a client for requesting HTTP and HTTPS data. There are five major classes of objects in Node's HTTP module. The Server class is what we use to create the basic server; it inherits from net.Server, so it's an EventEmitter. A ServerResponse object gets created internally by an HTTP server. The http.Agent class is used to manage the pooling of sockets used in HTTP client requests. Node uses a global agent by default, but we can create a different agent with different options when we need to. When we initiate an HTTP request, we will be working with a ClientRequest object. This is different from the request object we've seen in the server example. That request object is actually an IncomingMessage object, which we're going to see in a little bit. Both ClientRequest and ServerResponse implement the writable stream interface. IncomingMessage objects implement the readable stream interface, and all three objects are event emitters. We've seen the basic server example. Let's see a basic request example. Let's request the HTML page at google.com with Node. We can use the http.request method. This method takes an options argument, and it gives us access to a callback with a response for the host that we're going to request. We can specify many options here. To request google.com, we set hostname to google.com, and we can use multiple other options here. For example, the default method is GET, but we can use a method like POST if we need to post information to a hostname. I'll leave it at the default, and in here we can console log the response, which is going to be the HTML at google.com. You'll notice one thing about the handler that we define in the second argument here: it doesn't have an error argument. That's simply because this handler gets registered as an event listener, and we also handle the error with an event listener here. So this http.request method returns an object, and that object is an event emitter, so we can register a handler for the error event. We can do something like request.on('error') and handle the error in that case. Since this request object is a writable stream, we can do things like write, for example, if we're writing with a POST method. But for GET requests, we don't need that. We just need to terminate the stream, so we do .end here, and that should do it. Let's test. So node request, and I'll pipe it to less, and you'll see the response object here is an IncomingMessage. We can read information on this response object like statusCode. We can also read response.headers, for example, and this response object is an event emitter. It emits a data event when it receives data from the hostname. This data event gives us a callback, and that callback's argument is a buffer. So, for example, if we want to see the actual HTML, we do toString on it, so let's actually take a look at that. You'll see the status code 200, the headers object, and you'll see the actual HTML for google.com. If we're not writing any information or custom headers to the request, and we're not posting, putting, or deleting data, we could actually just use the get method here. This get method is simpler. Its first argument is just a string with the URL that we want to read, so we need to add the http protocol in here, and we don't need to do a request.end on this. That will be done for us. So this should be equivalent, and this request is done using the global HTTP agent.
So the http module has a globalAgent, which Node uses to manage the sockets here. It has some pre-configured options, and we can see that agent information if we do a request.agent. So in here you'll first see the agent information and then you'll see the printed information from the response. And all of this interface is exactly the same if we want to work with HTTPS instead of HTTP. So if we want to fetch https://google.com, all we need to do is replace http with https, and things will work exactly the same, but now the request is communicated over HTTPS. So let me now show you how to identify these objects in the examples we've seen so far. In this example, the request object here is from the ClientRequest class. This response object is of type IncomingMessage. And the agent that was used for the request is of type http.Agent. In the server example, the server object is from the http.Server class, the request object inside the request listener is from the IncomingMessage class, and the response object is from the ServerResponse class.
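Here's a rough sketch of the client calls described above, using http.request and the simpler http.get (the hostname matches the clip; error handling and output are kept minimal):

    const http = require('http');

    const req = http.request({ hostname: 'google.com' }, (res) => {
      console.log(res.statusCode, res.headers);
      res.on('data', (chunk) => console.log(chunk.toString()));
    });

    req.on('error', (err) => console.log(err));
    req.end(); // nothing to write for a GET; just terminate the request stream

    // The simpler form for plain GET requests (no .end() needed):
    http.get('http://google.com', (res) => {
      console.log(res.statusCode);
    });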

  33. Working with Routes Let's see how we can work with routes using Node's http module. We have a basic server here, and what I want to do is support some routes. Let's try to support a home route and an api route, and see how we can respond to different things in there. So, to support that, we first need to read the URL information, and this is easy. We can use req.url to do so. So now, when I request localhost, the req.url is /, and anything I put here I will be able to read. So, to handle routes, all we need is a switch statement on this req.url. For example, what should we do when it's /? Or what should we do when it's /home, and so on? So let's assume that we have a home.html file, a very basic HTML template, and we want to display this file when we go to /home. All we need to do is write the header, which in this case is text/html, and then in here we want to read home.html and write it to the response. So we can use fs.readFileSync with ./home.html. And of course, we need the fs module, so let's try it. There you go. Excellent. What if we have an about.html and now we also want to support /about? It's the exact same code as this portion, so we can actually make it dynamic. Let's make it support both home and about, because this part is exactly the req.url. So first convert this into a template string, and then this part becomes the variable req.url, just like that. Let's test. We can now see the about page and the home page. So what should we do on the root route? How about we redirect the user to /home? With Node, we just write a header for that. So it's writeHead, and we'll use 301 here to indicate a permanent move, and the header that we want to write here is the Location header. So, this is permanently moved to /home. And we still need to end the response, so now when we request the / route, it will tell us that it was moved permanently to /home. By the way, this first number here is the HTTP status code. We can actually take a look at all the status codes using http.STATUS_CODES. You'll see all of them here; 301 is moved permanently. If you need to work with JSON data, say that we have a route here. We'll make it /api, so this route is going to respond with some data. Let's assume that we have the data here as a constant. This data is some kind of object, and we want to respond with it as JSON, so we need to do two things. First, the content type in this case is application/json. And the information that we want to write here is a stringified version of the JSON object, so it's JSON.stringify on the data variable. And we can test that. So in here, if we go to /api, we'll get back a JSON response. Now what happens if we request something that does not exist? Right now, the server does not respond with anything and the request will eventually time out. So this would be the default case here, and we should probably respond with a 404 in this case. 404 means not found. We should still end the response here, so let's try it now. It will give me a 404 not found.
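Here is a minimal sketch of the routing logic described above; the home.html/about.html files and the api data object are assumptions standing in for the ones used in the clip:

    const http = require('http');
    const fs = require('fs');

    const data = { course: 'Advanced Node.js', topic: 'routes' }; // hypothetical payload

    const server = http.createServer((req, res) => {
      switch (req.url) {
        case '/home':
        case '/about':
          res.writeHead(200, { 'Content-Type': 'text/html' });
          res.write(fs.readFileSync(`.${req.url}.html`)); // ./home.html or ./about.html
          res.end();
          break;
        case '/':
          res.writeHead(301, { Location: '/home' });
          res.end();
          break;
        case '/api':
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(data));
          break;
        default:
          res.writeHead(404);
          res.end();
      }
    });

    server.listen(8000);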

  34. Parsing URLs and Query Strings If you need to work with URLs, especially if you need to parse them, Node has a url module that you can use. It has many methods, but the most important one is parse. The format method is also useful. So let me show you how to work with that. Before we do, let's take a look at the url API documentation on the nodejs.org website. This diagram here has details on all the elements of a URL. This example manually parses the URL in here. You can see we have the protocol, which is HTTP. There is the authentication part, which is the username and password in case we have those, and then there are the hostname and port, which together are the host. There's the pathname, which is what comes after the host and before the query string. For the query string, the question mark together with the query itself is called search, while without the question mark it's called query. And pathname + search is called path. And if we have a hash location here, it will be called hash. These are all the elements in any URL object. So if we have the URL as a string and we need to parse it into those elements, we can use the url.parse method. For example, I can use the url.parse method to parse this URL that I just grabbed from a Pluralsight search, and this call is going to give me all the elements of that URL. We have an HTTPS protocol. We don't have any authentication data. The host is pluralsight.com, and since the port is null, the host and hostname are the same. There's no hash, and there's a search part, a query part, a pathname, a path, and the href itself, which is the full URL. We can also pass a second argument of true here to parse the query string itself, so if we do that, the query string will be parsed, and reading information from the query string is as easy as doing .query.q, for example. This will give me exactly the search query for that URL. If, on the other hand, you have the inverse situation, where you have an object with all these elements detailed and you want to format this object into a URL, you can use the url.format method. This will give you back a string with all these URL object properties concatenated in the right way. If you only care about the query string, then you can use the querystring module, which has many methods, as you see, but the most important ones are the parse method and the stringify method. So if we have an object like this one and we want to convert it into a query string portion, all we need to do is call queryString.stringify on that object, and it will give me an actual string I can use in any URL query. And you can see how this stringify method escaped some special characters by default, which is great. If you have the inverse situation, where you have a query string and you want to parse it into an object, what you need here is queryString.parse. You give it the string that you want to parse, and this will give you back an object parsed from that query string.
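As a small sketch of these calls (the example URL and objects are stand-ins for the ones used in the clip):

    const url = require('url');
    const queryString = require('querystring');

    const parsed = url.parse('https://www.pluralsight.com/search?q=node.js', true);
    console.log(parsed.host);      // 'www.pluralsight.com'
    console.log(parsed.pathname);  // '/search'
    console.log(parsed.query.q);   // 'node.js' (the second argument true parses the query)

    // The inverse: format an object of URL elements back into a string
    console.log(url.format({
      protocol: 'https',
      host: 'www.pluralsight.com',
      pathname: '/search',
      query: { q: 'node.js' }
    }));

    // Working with just the query string portion
    console.log(queryString.stringify({ name: 'Node', tags: ['streams', 'http'] }));
    console.log(queryString.parse('name=Node&tags=streams&tags=http'));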

  35. Summary In this module, we talked about Node's basic HTTP server and how it's non-blocking and streaming-ready out of the box. We saw how to create an HTTPS server with a test self-signed certificate that we created using the OpenSSL toolkit. We reviewed the five major http module classes and identified them in the examples we used. We saw how to use Node for requesting HTTP and HTTPS data from the web. We also talked about how to support basic web routes using Node's HTTP module. We used that to respond with HTML files and JSON data, and to handle redirects and not-found URLs. Finally, we explored how to parse and format URLs and query strings using the native Node modules provided for them.

  36. Node's Common Built-in Libraries Working with the Operating System Let's explore some of the common built-in modules that come with Node. Node provides a number of utilities to access information directly from the operating system. We can use the os module for that. We can read information about CPUs, like their model, speed, and times. We can read information about network interfaces. We can read their IP addresses. We can also read the mac addresses, netmasks, and families. We can read information about total and free memory, where the OS temp directory is, and most importantly, what operating system Node was compiled for. The type method will return Linux, Windows_NT, or Darwin for OS X. We can use that method, for example, to write code specific to an operating system. We can also read the release version of the operating system with the release method. The userInfo method is a handy one. It returns an object with information about the current user: username, uid, gid, shell, and home directory. On Windows, the shell attribute is null and both uid and gid are -1. os.constants returns an object with all the operating system error codes and process signals. The signals list is a handy, quick way to see a reference of all process signals available in the underlying OS.
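A quick sketch touring the methods mentioned above:

    const os = require('os');

    console.log(os.cpus());               // model, speed, and times per core
    console.log(os.networkInterfaces());  // addresses, netmasks, mac, family
    console.log(os.totalmem(), os.freemem());
    console.log(os.tmpdir());
    console.log(os.type());               // 'Linux', 'Windows_NT', or 'Darwin'
    console.log(os.release());
    console.log(os.userInfo());           // username, uid, gid, shell, homedir
    console.log(Object.keys(os.constants.signals)); // all process signals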

  37. Working with the File System The fs module provides simple file system I/O functions to use with Node. All of the fs module functions have asynchronous and synchronous forms. You can pick either form depending on your code logic. For example, if you're reading a file during a server initialization process, readFileSync is probably okay, but if you're reading a file every time a user requests something from that server, you should probably stick with the asynchronous form. Other than being synchronous or asynchronous, those forms handle exceptions differently. The async functions pass any encountered errors normally as the first argument in the callback, while the synchronous functions immediately throw any errors. When using the synchronous functions, if you don't want the errors to bubble up, you'll need to use a try/catch statement to handle them. The readFile method, by the way, gives back a buffer if we don't specify a character encoding, which we can do in the second argument if needed. I usually default to working with buffers, unless I have specific needs to convert the raw data into a string. To explore the multiple capabilities of the fs module, I thought we'd go through some realistic tasks and demonstrate how to do them with the fs module. For the first task, we have a directory full of corrupted files. Someone accidentally ran a script that appended a copy of each file to itself. So each file has its data duplicated, and you're tasked to write a Node script to fix that, basically to truncate each file to half its content. I have prepared some bad data-doubled files for you in this task1/files directory. As you can see, the https.js file, which was our HTTPS example, is duplicated here, and all the other files in this directory have the same problem. Go ahead and try to write that script on your own first, then come back to see my version of the solution. Pause the video here. To fix a list of files, we first need to read that list. The readdir method can do that for us. I'm using the sync version of this method here, because basically this script has nothing to do unless we have a list of files. Once we have the list of files, this files constant will be an array of file names. Just the names, not the full path, so we'll use the path module to get the actual full path. Don't use string concatenation here. Always use path.join when working with file paths to make your code platform-agnostic. For every filePath, the goal is to truncate it to the right size, which is half of its current size, so we need to read each file's size. We don't need to read the file content, we just need its size, so instead of using readFile here, we can use the stat method, which gives us only meta information about the file, including its size. Note how we used the asynchronous versions of the methods here, because we have multiple files to process. Once we have the size, we can use the truncate method with exactly half the size of the file. Problem solved. To test this script, we just run it in this task1 directory, and then we'll make sure that the files were fixed correctly. And it looks like they were. For the second task, you have a directory with many log files, and you're tasked to write a Node script that is to be run every day on that directory, and it should delete all files older than 7 days. So just keep 1 week worth of logs in that directory. You don't have access to that directory, so your task includes generating test data to make sure your cleaning script is working correctly.
Want to try that task on your own first? Pause here. Here's our test data seed script. We first synchronously create a destination directory to work with. We can't do anything without that. Then, with a simple for loop, we'll create 10 sample files using the writeFile method, and then for each file we create, we use the utimes method to change its timestamp. This method takes two arguments after the file path: the access time and the modified time. We're just using the same value here. These two arguments expect a Unix timestamp in seconds. I wanted every file to have a different timestamp, starting from the current date and going back 9 days into the past, so I subtracted one day's worth of milliseconds multiplied by the current file index value, which is 0 through 9. I then divided the whole thing by 1000 to suit the arguments for the utimes function. The result of running this script is a new files directory with 10 files, each with a different timestamp. File0 has the current date, file1 is one day old, and so on. Now that we have our test data, let's write a script to clean any files older than 7 days. Basically, our script should remove files 7, 8, and 9. The solution is simple. Exactly like task 1, we need to loop over all the files in the directory, and we need to read the stats metadata for each file. What we're interested in now, however, is the mtime for every file. If the file was not modified in the last 7 days, we want to delete it, which we can do with the unlink function. The if statement here is simple. The mtime is a date object, so using the getTime method we read its Unix milliseconds value, subtract that from the current Unix millisecond value, and check if the result is greater than 7 days in milliseconds. When we run this script, files 7, 8, and 9 will be deleted. If we run the script again, no files will be deleted, and if we run it at the same time tomorrow, file 6 will be deleted. Task 3: you're tasked to write a script that watches one directory and logs three types of events on that directory. When a file is added, removed, or changed in the directory, your script should output a timestamped message about that event. We can simply use fs.watch for this task, but fs.watch does not give us enough detail to account for all three events. Both the add and delete events are reported by the watch method as a rename eventType. Here's what I did to work around that. For testing, the files directory is what we'll watch, and it has 3 empty files. We first read the names of the current files in the directory to be watched. Then, we start the watch listener. If the eventType is rename, it means a file was either added to or deleted from the directory. So if the file exists in the currentFiles array, it means the event is a remove event, and we update our currentFiles array to remove the removed file. The logWithTime method just outputs the message with a UTC timestamp. If we get to this point in the code, it means a file is being added, so we'll update the currentFiles array and log that message. If the eventType is not rename, the only other eventType supported in this listener is the change event, so in that case, the file is being changed. The fs.watch API is actually not completely consistent on different environments, so be sure to test any code written with it thoroughly first. To test this code, we simply run this file, and then we'll try to make some changes to our test files. We'll go ahead and change file0, and we get the "file0 was changed" message.
And let's go ahead and try to delete file0, and we'll get file0 was removed. And we'll create a new file and we get the file0 was added message.
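Here is a minimal sketch of the task-3 watcher described above, assuming a ./files directory to watch and a simple logWithTime helper like the one in the clip:

    const fs = require('fs');

    const dirToWatch = './files';
    const currentFiles = fs.readdirSync(dirToWatch);

    const logWithTime = (msg) => console.log(new Date().toUTCString(), msg);

    fs.watch(dirToWatch, (eventType, filename) => {
      if (eventType === 'rename') {
        // 'rename' covers both add and remove; use our snapshot to tell them apart
        const index = currentFiles.indexOf(filename);
        if (index >= 0) {
          currentFiles.splice(index, 1);
          return logWithTime(`${filename} was removed`);
        }
        currentFiles.push(filename);
        return logWithTime(`${filename} was added`);
      }
      // the only other eventType here is 'change'
      logWithTime(`${filename} was changed`);
    });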

  38. Console and Utilities The console module has some interesting, less-known parts that I want to make sure you know about. The console module is designed to match the console object provided by web browsers. In Node.js, there is a Console class that we can use to write to any Node.js stream, and there is a global console object already configured to write to stdout and stderr. Those are two different things. So if, for example, you want to use the console methods but, instead of writing to stdout and stderr, you want to write to, say, a socket or a file, all you need to do is instantiate a different console object from the Console class and pass the desired output and error streams as arguments. When we run this code, for example, it'll create out.log and err.log files and write to both of them every five seconds. Console.log uses the util module under the hood to format and output a message with a new line. We can use printf formatting elements in the message, or we can simply use multiple arguments to print multiple messages on the same line. Popular formatting elements are %d for a number and %s for a string. There is also %j for a JSON object. If you want to use the printf substitutions without console logging the result, you can use the util.format method. This just returns the formatted string. We can console log objects and Node will use the util.inspect method to print a string representation of those objects. Util.inspect has a second options argument that we can use to control the output. For example, to only use the first level of an object, we can use a depth: 0 option. Util.inspect just returns the string. If you want to use inspect's second argument and still print to stdout, you can use the console.dir function. It will pass the second argument option to util.inspect and print out the result. Console.info is just an alias to console.log. Console.error behaves exactly like console.log, but writes to stderr instead of stdout. Console.warn is an alias for console.error. The console object has an assert function for simple assertions to test if the argument is true. It will throw an AssertionError if it's not. The built-in assert module has a lot more features for assertions, but I usually use console.assert for quick checks; it's not perfect, but it's good enough for simple cases. The ifError function on the assert module is a particularly interesting one. It throws value if value is truthy, which is exactly what we usually do with the error argument in callbacks. Console.trace behaves just like console.error, but it also prints the call stack at the point where it is placed, which is handy when debugging problems. You can use console.time and console.timeEnd to start and stop timers and report the duration of an operation. The argument you pass to them should be a unique label for that operation. The util module has a few more handy functions. There is debuglog if you want to conditionally write debug messages to stderr based on the existence of the NODE_DEBUG environment variable. This debug line here will only be reported if this script was executed with NODE_DEBUG=web. The util.deprecate function can be used to wrap a function, say before you export it, to mark that function as deprecated. Users of that function will get a deprecation warning the first time they use it. Finally, the util.inherits method was heavily used before the introduction of ES6 classes to inherit prototype methods from one constructor to another.
However, with ES6 classes and the extends keyword, the use of this util.inherits method is no longer needed or recommended, but you'll probably see it a lot in older examples of Node code.
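Here's a small sketch of a custom Console instance plus a couple of the util helpers mentioned above (file names and the debuglog section name are assumptions):

    const { Console } = require('console');
    const fs = require('fs');
    const util = require('util');

    // Write console output to files instead of stdout/stderr
    const fileConsole = new Console(
      fs.createWriteStream('./out.log'),
      fs.createWriteStream('./err.log')
    );
    setInterval(() => {
      fileConsole.log('an info line');
      fileConsole.error('an error line');
    }, 5000);

    // printf-style formatting without logging
    const line = util.format('Request #%d from %s', 7, 'localhost');

    // Conditional debug output: only printed when run with NODE_DEBUG=web
    const debuglog = util.debuglog('web');
    debuglog('debugging a web request:', line);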

  39. Debugging Node.js Applications Node comes with a built-in debugger that supports single-stepping, breakpoints, watchers, and much more. In here, I have a simple function that I expect to calculate the negative sum of all its arguments, so when we call it with 1, 5, and 10, I expect the result to be 0 - 1 - 5 - 10, which is -16. The implementation is simple. We reduce the arguments starting from 0, and every time we just subtract the argument from the running total. However, the script is not working as expected, so I'm going to use the built-in debugger to step through this code and try to find the problem. We activate the debugger by simply adding the debug argument to the command line. When we do that, you'll notice how the debugger is listening on this port. Node basically communicates with the debugger on this port. You can type help to see what you can do. Familiar debugging commands like continue, next, and step are all available. Node starts the debugger session in a break state, so we always need to continue execution. If we type continue now, the script will simply run, output the wrong answer, and be done. We don't have any breakpoints. We can actually place a debugger line anywhere in the script, and when the debugger reaches that line, it will halt, allowing us to inspect that point in the code and issue other commands. Since we reached the end of this script and did not really step into anything, let's start over using the restart command. This time, before continuing the execution, we'll set a breakpoint on line 2 using the sb command. This is equivalent to adding a debugger line at that point. Now that we have a breakpoint, we can continue the execution with the continue command. The debugger will break at that point. We can now inspect anything accessible to the script at that point using the repl command. We can, for example, inspect the args variable, which appears to be correct, so it seems our problem might be inside the reduce callback; let's create a breakpoint in there, on line 3. Now, the debugger should break three times there, because we're looping over three arguments, but instead of manually inspecting the variables on every break, we can define watch expressions using the watch command. I'd like to watch the arg variable and the total variable. We now have two watchers that will be reported when the debugger breaks. When we continue, the watchers report arg to be 0 and total to be 1, which is actually flipped. I am expecting total to start with 0 and arg to be the first argument in the array, which is 1. So to verify, let's continue again. Total is now getting the 5 value, which is the second argument in the array. And if we continue again, it's getting the 10. So I definitely got the order of the callback arguments wrong. It should be total first and arg second. Easy fix. If while debugging you cleared your screen (which I can do with Ctrl+L here) and you can't remember where exactly the debugger is currently breaking, you can use the list command, which takes an argument of how many lines it should list before and after the current breakpoint. This debugger client is not perfect, but it's good enough for a quick interactive debugging session. But it gets better. Recent Node versions support an experimental feature to integrate Chrome DevTools to debug any script. All we need to do is use the --inspect argument with the script. But this feature does not break by default, so we can add the --debug-brk argument to make it break before the first execution.
When we add the --inspect argument, we get a URL to open in Chrome, and when we open that URL, DevTools will open our Node script and we can use all the familiar, fully featured tools here to inspect the script. We can step into and over functions. For example, to redo our debugging session here, I can simply step into negativeSum, and while I'm stepping over the three reduce calls, I can see the argument values right there without any need to watch or set breakpoints. But if I need to set breakpoints, I can simply do that by clicking on the line numbers here. I can also use the console to inspect any variables or expressions, and I can even hover over variables to see their current values. So all the features that we know and love in Chrome DevTools can now be used with Node scripts, and the best thing about it is that it's all built-in. I don't need to install any packages to get this functionality. When I resume the execution of this script, the console.log call on line seven will be reported here.
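To make the bug concrete, here is a reconstruction of the kind of script debugged above; the exact code isn't shown in the clip, so the names and structure here are assumptions that match the described behavior:

    // Buggy version: the reduce callback parameters are flipped, which is
    // exactly what the watchers revealed (arg held the accumulator).
    const negativeSum = (...args) =>
      args.reduce((arg, total) => total - arg, 0);

    console.log(negativeSum(1, 5, 10)); // 6, not the expected -16

    // Fixed version: the accumulator comes first in a reduce callback.
    const negativeSumFixed = (...args) =>
      args.reduce((total, arg) => total - arg, 0);

    console.log(negativeSumFixed(1, 5, 10)); // -16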

  40. Summary In this module, we explored the native os module that can be used to access information directly from the operating system. We then explored the various synchronous and asynchronous capabilities of the fs module by implementing three practical tasks. We've seen how to use its methods to create a directory, read directory information, read file stats, truncate a file, write to a file, update the last modified time of a file, delete a file, and watch a file or directory for changes. We explored the methods on the console object, like log/error/trace, and the use of time/timeEnd to report durations of executions. We saw how we can create a custom console object, and saw how the console object uses the util module internally and that we can use the util module directly if we need to. The util module has some extra utility functions that are helpful, like debuglog, for example, to control debug messages in your code, and the deprecate function to mark pieces of your code as deprecated and give users a warning when they use them. We debugged a problem in a script using the built-in debugger client and saw how we can set breakpoints, define watchers, and inspect any point in the code using a built-in REPL. We've also seen how we can use Chrome DevTools to debug our Node script using the --inspect argument, which gives us a much better graphical client to debug our Node code.

  41. Working with Streams Stream All the Things! Working with big amounts of data in Node.js means working with streams. A lot of people find it hard to understand and work with streams, and there are a lot of Node packages out there with the sole purpose of making working with streams easier. However, I'm going to focus on the native Node.js stream API here. Streams, most importantly, give you the power of composability in your code. Just like you can compose powerful Linux commands by piping other smaller commands, you can do exactly the same in Node with streams. Many of the built-in modules in Node implement the streaming interface. This list has some examples of readable and writable streams. Some of these items are both readable and writable streams, and you can notice how the items are closely related, so while an HTTP response is a readable stream on the client, it's a writable stream on the server, because basically one object generates the data and the other object consumes it. Note also how the stdin/stdout/stderr streams are the inverse type when it comes to child processes, which we'll talk about in the next module. Let's start by defining streams. Actually, pause the video and try to write down your own definition of streams. Streams are simply collections of data, just like arrays or strings, with the difference that they might not be available all at once and they don't have to fit in memory, which makes them really powerful for working with large amounts of data or data that's coming from an external source one chunk at a time. Let's jump right into an example to show you the difference streams can make in code when it comes to memory consumption. Let's create a simple web server that serves a really big text file. I've included a simple script for you to create a big file, and look what I used to do that: a writable stream! The fs module can be used to read from and write to files using a stream interface. Here, we're writing to this big.file through a writable stream, and we're just writing 1 million lines with a loop, using the .write method. Then, we use the .end method when we're done. Running this script generates a file that's about 400 MB. So now, we'll serve this file in a simple HTTP server using the readFile method and writing the response inside its callback. We've done this before. Let me run this script and I'll monitor the memory usage here using the Mac activity monitor. On Windows, you can use the Task Manager for that. Here's our Node process using 8.9 MB of memory. That's a normal number. And when a client requests the big file through our web server, the server's memory usage immediately jumps to over 400 MB of memory. That's because our code basically buffered the whole big file in memory before it wrote it out. This is very inefficient. Luckily, we don't have to do that in Node. The response object is also a writable stream. Remember how we streamed data with timers when we were talking about the HTTP server? We used .write and .end on that response object, the same methods we just used to create the big file with a stream. Since this response is a writable stream, if we have the big file as a readable stream, we can simply pipe one into the other and avoid filling up the memory. The fs module can give us a readable stream for any file using the createReadStream method, and then we simply pipe this readable stream into the response writable stream.
Not only is this code a more elegant version of what we had before, but it's also a much better way to deal with this data. Let me show you. We'll run the server and monitor the memory, which starts low as expected, and now, when a client asks for that big file that we're serving, we stream it one chunk at a time, which means we don't buffer it in memory at all. Look at the memory usage. It grew by about 20 to 25 MB, and that's it. In fact, let's push this example to the limit. I'll regenerate this big file with 5 million lines instead of just 1 million, which should take the file to well over 2 GB. And that's actually bigger than the default buffer limit in Node. So, if we try to serve this file using readFile, we simply can't by default. But with the streaming version, there is no problem at all streaming 2 GB of data to the requester. And best of all, the process memory usage is roughly the same.
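Here is a sketch of both scripts discussed above, shown together for brevity; the file name and line contents are assumptions, and in the clips these are two separate files:

    const fs = require('fs');
    const http = require('http');

    // createBigFile.js -- write 1 million lines through a writable stream
    const file = fs.createWriteStream('./big.file');
    for (let i = 0; i < 1e6; i++) {
      file.write('Lorem ipsum dolor sit amet, consectetur adipiscing elit.\n');
    }
    file.end();

    // server.js -- the streaming version: pipe the file into the response
    const server = http.createServer((req, res) => {
      fs.createReadStream('./big.file').pipe(res);
    });
    server.listen(8000);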

  42. Streams 101 There are four fundamental stream types in Node.js: Readable, Writable, Duplex, and Transform. A readable stream is an abstraction for a source from which data can be consumed. An example of that is the fs.createReadStream method. A writable stream is an abstraction for a destination to which data can be written. An example of that is the fs.createWriteStream method. Duplex streams are both readable and writable, like a socket, for example, and transform streams are basically duplex streams that can be used to modify or transform the data as it is written and read. An example of that is the zlib createGzip stream, which compresses the data using gzip. You can think of a transform stream as a function where the input is the writable stream part and the output is the readable stream part. You might also hear transform streams referred to as "through streams." All streams are instances of EventEmitter. They all emit events that we can use to write or read data from them. However, we can consume streams in a simpler way using the pipe method. In this simple line, we're piping the output of a readable stream, src, as the input of a writable stream, destination. Src has to be a readable stream and destination has to be a writable stream here. They can both, of course, be duplex streams as well. In fact, if we're piping into a duplex stream, we can chain pipe calls just like we do in Linux: a pipe into b, pipe into c, and then pipe into d, given that b and c are both duplex streams. This line is equivalent to a.pipe(b), then b.pipe(c), then c.pipe(d). It's generally recommended to either use pipe or events, but avoid mixing them, and usually when you're using pipe you don't need to use events. But if you need to consume the streams in more custom ways, then events would be the way to go. Note that when we talk about streams in Node, there are two main different tasks: there is the task of implementing the streams, and there is the task of consuming the streams. Stream implementers are usually the ones who use the stream module. For consuming, all we need to do is either use pipe or listen to the stream events. Readable and writable streams have events and functions that are somehow related, and we usually use them together. Some of the events are similar, like the error and close events, and others are different. The most important events on a readable stream are the data event, which is emitted whenever the stream passes a chunk of data to the consumer, and the end event, which is emitted when there is no more data to be consumed from the stream. The most important events on a writable stream are the drain event, which is a signal that the writable stream can receive more data, and the finish event, which is emitted when all data has been flushed to the underlying system. To consume a readable stream, we use either the pipe/unpipe methods, or the read/unshift/resume methods, and to consume a writable stream, we just write to it with the write method and call the end method when we're done. Readable streams have two main modes that affect the way we consume them. They can be either in paused mode or in flowing mode. Those are sometimes referred to as pull vs. push modes. All readable streams start in paused mode, but they can easily be switched into flowing mode and back to paused mode when needed. In paused mode, we have to use the read() method to read from the stream, while in flowing mode, the data is continuously flowing and we have to listen to events to consume it.
In flowing mode, data can actually be lost if no consumers are available to handle it. This is why, when we have a readable stream in flowing mode, we need a data event handler. In fact, just adding a data event handler switches a paused stream into flowing mode, and removing the data handler switches it back to paused mode. Some of this is done for backward compatibility with the older stream interface in Node. Usually, to switch between these two modes, we use the resume and pause methods.
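As a small sketch of the two consumption styles and the mode switching described above (it assumes a big.file like the one from the previous clip):

    const fs = require('fs');
    const readable = fs.createReadStream('./big.file');

    // 1) pipe: the simplest way to consume a readable stream
    // readable.pipe(process.stdout);

    // 2) events: adding a 'data' handler switches the stream into flowing mode
    readable.on('data', (chunk) => {
      console.log(`received ${chunk.length} bytes`);
    });
    readable.on('end', () => console.log('no more data'));

    // pause()/resume() switch between paused and flowing modes explicitly
    readable.pause();
    setTimeout(() => readable.resume(), 1000);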

  43. Implementing Readable and Writable Streams Let's implement a writable stream. We need to use the Writable constructor from the stream module. We can implement a writable stream in many ways. We can extend the Writable constructor if we want, but I prefer the simpler constructor approach. We just create an object from the Writable constructor and pass it a number of options. The only required option is a write function, which is what the stream uses to send data to the underlying resource. This write method takes three arguments. The chunk is usually a buffer unless we configure the stream differently. The encoding argument is needed in that case, but usually we can ignore it. And the callback is what we need to call after we're done processing the data chunk. Let's simply console log the chunk as a string and call the callback after that. So this is a very simple and probably not so useful echo stream. Anything it receives, it will echo back. To consume this stream, we can simply use it with process.stdin, which is a readable stream, so we can just pipe stdin into our outStream. When we run this file, anything we type into process.stdin will be echoed back using the outStream console.log line. This is not a very useful stream to implement, because it's already implemented and built in. It's very much equivalent to process.stdout. We can just pipe stdin into stdout and we'll get the exact same echo feature with just that line. Let's implement a readable stream. Similar to Writable, we require the Readable interface, construct an object from it, and for the readable stream we can simply push the data that we want the consumers to consume. When we push null, it means we want to signal that the stream does not have any more data. To consume this stream, we can simply pipe it into process.stdout. When we run this file, we're reading the data from this inStream and echoing it to standard out. Very simple, but also not very efficient. We're basically pushing all our data to the stream before piping it to process.stdout. The better way here is to push data on demand, when a consumer asks for it. We can do that by implementing the read method on the readable stream. When the read method is called on a readable stream, the implementation can push partial data to the queue. Let's push one letter at a time. We'll start with the character code 65, which is the letter A, and on every read, we'll push the letter and increment the character code. While the consumer is reading, the read method will continue to fire, and we'll push more letters. We'll have to stop this cycle somewhere, so let's stop it when the character code is greater than 90, which is the letter Z. To stop it, we just push null. Testing this code, it's exactly the same, but now we're pushing the data on demand, when the consumer asks to read it. Let's delay this pushing code a bit to explain this further. We'll just put these lines inside a timer and delay it by 100 ms. If we execute this code, the stream will stream characters every 100 ms, and we'll get an error here, because we basically created a race condition where one timer will push null and another timer will try to push data after that. To fix that, we can simply move this if statement above the push data line and just return from it if we hit the max condition. This should fix our problem. Let's now register a handler for the process exit event, and in there, let's write the current character code that we have in our inStream to the error console.
What I want to do now is read only three characters from our inStream and make sure that not all the data gets buffered. We can read three characters using the head command with -c3, just like this, and when we do that, we get our three letters, and the head command will cause the Node process to exit, so we'll see our currentCharCode value, which is only at 69, meaning we only pushed letters up to D. So data is only pushed to this readable stream on demand here. The head command causes an error on the standard out, which goes unhandled and forces the process to exit, but we can suppress this error message by simply registering a handler for the error event on stdout and just calling process.exit in there. This will make our script return the three characters cleanly.
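Here is a sketch of the two streams implemented in this clip, using the simpler constructor approach (the 100 ms timer variation is left out for brevity):

    const { Writable, Readable } = require('stream');

    // A simple echo writable stream
    const outStream = new Writable({
      write(chunk, encoding, callback) {
        console.log(chunk.toString());
        callback();
      }
    });

    // A readable stream that pushes the letters A-Z on demand
    const inStream = new Readable({
      read() {
        if (this.currentCharCode > 90) {     // stop after 'Z'
          return this.push(null);
        }
        this.push(String.fromCharCode(this.currentCharCode++));
      }
    });
    inStream.currentCharCode = 65;           // 'A'

    // Consume: pipe the letters into standard out, and echo anything typed
    inStream.pipe(process.stdout);
    process.stdin.pipe(outStream);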

  44. Duplex Streams and Transform Streams With duplex streams, we can implement both a readable and a writable stream with the same object. It's as if we can inherit from both interfaces. Here's an example duplex stream that combines the two examples we implemented in the previous clip. For every duplex stream, we have to implement both the write and read methods. By combining the streams here, we can use this duplex stream to read the letters from A to Z, and we can also use it for its echo feature. We pipe standard in into this duplex stream to use the echo feature, and we pipe the duplex stream itself into standard out to read the letters A through Z. It's extremely important to understand that the readable and writable sides of a duplex stream operate completely independently from one another. This is merely a grouping of two features into one object. A transform stream is the more interesting duplex stream, because its output is computed from its input. We don't have to implement read or write. We only need to implement a transform method, which combines both of them. It has the signature of the write method, and we can use it to push data as well. This transform stream, which we're consuming exactly like the previous example, only implements a transform method, and in that method, we convert the chunk into its uppercase version and then push that as the readable part. And we still need to call the callback, just like in the write method. This stream effectively implements the same echo feature, but it converts the input into uppercase first. Testing this script, anything we type will be echoed back in uppercase. Let's take a look at a more useful transform stream, the zlib createGzip() function. In this example, we read a filename from the argument, create a read stream from that file name, pipe that into the transform stream from zlib gzip, and pipe that into a writable stream on the file system using the same filename with a .gz extension. This script basically compresses a file and writes the compressed file back into the file system. I've included a test file here for testing. Let's compress that. We'll get a test.file.gz, and I can basically remove test.file and recover it by gunzipping the compressed version. The cool thing about using pipes is that we can actually combine them with events if we need to. Say, for example, I want to console log a done message when this script is done writing the compressed data into the file system. I'll need to use the finish event on the writable stream here, but since the pipe method returns the destination, I can just chain a .on call here, register the finish event handler, and console log my line in there. And say that we want to give the user a progress indicator. We can use the data event on the duplex stream, which indicates that we've made progress compressing some data. Let's just output a . in there, and now if we run the script, not only will it compress the file, but it'll give the progress indicator as well. So with pipe we get to easily consume streams, but we can still further customize our interaction with them using events where needed. What's great about pipe, though, is that we can use it to compose our program piece by piece. So, for example, instead of listening to the data event, we can simply create a transform stream to report progress. This progress stream would be a simple pass-through stream, but it reports the progress to standard out as well.
Note how I used the second argument in the callback to push the data, which is equivalent to pushing the data and then calling the callback. So now we can just pipe the zlib stream into the progress stream to make it report the progress, which is really cool. The applications here are endless. Say that we also want to encrypt the file after we gzip it. All we need to do is pipe another transform stream. The crypto module has a transform stream that does exactly what we want. All we need to do is add it to our pipe chain. This script now compresses and encrypts the file, and only those who have the secret can use the output file. It's actually not a .gz anymore, so we can use any extension here. And we can't unzip this file with a normal unzip utility, because it's encrypted. To actually be able to unzip anything zipped with this script, we need to use the opposite streams for crypto and zlib in the reverse order. So we create a read stream from the compressed version, pipe that into the crypto createDecipher stream, then pipe that into the gunzip stream, and then write things back out to a file without the extension part. We can now use this script to unzip our specially zipped file.
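Here is a rough sketch of the compress-and-encrypt pipeline described above, including a pass-through progress transform; the password, output extension, and progress dot are assumptions, and createCipher/createDecipher (as mentioned in the clip) are deprecated in newer Node versions in favor of createCipheriv/createDecipheriv:

    const fs = require('fs');
    const zlib = require('zlib');
    const crypto = require('crypto');
    const { Transform } = require('stream');

    const file = process.argv[2];

    // A pass-through transform that reports progress to standard out
    const reportProgress = new Transform({
      transform(chunk, encoding, callback) {
        process.stdout.write('.');
        callback(null, chunk);   // second argument pushes the chunk through
      }
    });

    fs.createReadStream(file)
      .pipe(zlib.createGzip())
      .pipe(reportProgress)
      .pipe(crypto.createCipher('aes192', 'a_secret'))
      .pipe(fs.createWriteStream(file + '.zz'))
      .on('finish', () => console.log('Done'));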

  45. Summary In this module, we explored Node.js streams. We first saw how they're comparable to the Unix philosophy of doing single tasks and then composing bigger tasks from smaller ones by chaining them together. We saw how a lot of the built-in modules in Node.js use streams internally in one way or another. And we saw how much of a difference streams make in terms of memory consumption when working with big files, and how sometimes using streams is actually your only option to do that. We talked about the four different types of streams (readable, writable, duplex, and transform) and how a transform stream can be compared to a normal function, since its output is computed from its input. We talked about how all streams are event emitters and how we can consume them with either events or the pipe method, which can also be used to chain stream pipes, just like in Linux. We talked about how there are two different tasks when working with streams in Node, the implementing and the consuming parts, which are completely different. We explored the various events and methods available on both readable and writable streams, and we talked about the two different modes for readable streams, paused vs. flowing, and how those affect the way we consume the streams. We then implemented a simple echo writable stream and a simple readable stream, which we used to read the English letters A to Z. We saw how to use the readable stream's read method to push data to consumers on demand. We then saw some examples of duplex and transform streams and saw how easy it is to compose rich functionality by piping transform streams into each other. The zlib and crypto modules have some very useful transform streams that can be easily used with each other.

  46. Clusters and Child Processes Scaling Node.js Applications While single-threaded, non-blocking performance is quite good, eventually, one process in one CPU is not going to be enough to handle an increasing workload of an application. No matter how powerful the server you use, what a single thread can support is limited. The fact that Node runs in a single thread does not mean that we can't take advantage of multiple processes, and of course, multiple machines as well. Using multiple processes is the only way to scale a Node.js application. Node.js is designed for building distributed applications with many nodes. This is why it's named Node.js. Scalability is baked into the platform, and it's not something you start thinking about later in the lifetime of an application. Workload is the most common reason we scale our applications, but it's not the only reason. We also scale our applications to increase their availability and tolerance to failure. Before we talk about scaling a Node.js application though, it's important to understand the different strategies of scalability. There are mainly three things we can do to scale an application. The easiest thing to do to scale a big application is to clone it multiple times and have each cloned instance handle part of the workload. This does not cost a lot in terms of development time, and it's highly effective. This course module will focus on the cloning strategy. But we can also scale an application by decomposing it based on functionalities and services. This means having multiple, different applications with different code bases and sometimes with their own dedicated databases and user interfaces. This strategy is commonly associated with the term microservices, where micro indicates that those services should be as small as possible, but in reality the size of the service is not what's important, but rather the enforcement of loose coupling and high cohesion between services. The implementation of this strategy is often not easy and could result in long-term unexpected problems, but when done right the advantages are great. The third scaling strategy is to split the application into multiple instances where each instance is responsible for only a portion of the application's data. This strategy is often named horizontal partitioning, or sharding, in databases. Data partitioning requires a lookup step before each operation to determine which instance of the application to use. For example, maybe we want to partition our users based on their country or language. We need to do a lookup of that information first. Successfully scaling a big application should eventually implement all three strategies, and Node.js makes it easy to do so. In the next few clips, we'll talk about the built-in tools available in Node.js to implement the cloning strategy.

  47. Child Processes Events and Standard IO We can easily spin up a child process using Node's child_process module, and those child processes can easily communicate with each other with a messaging system. The child_process module enables us to access operating system functionality by running any system command inside a child process, control its input stream, and listen in on its output stream. We can control the arguments to pass to the command and we can do whatever we want with its output. We can, for example, pipe the output of one command to another command, as all inputs and outputs of these commands can be presented to us using Node streams. The examples I'll be using in this module are all Linux-based. On Windows, you'll need to switch the commands I use with their Windows alternatives. There are four different functions we can use to create a child process in Node: spawn, fork, exec, and execFile. We're going to see the differences between those functions and when to use each. The spawn function launches a command in a new process and we can use it to pass that command any arguments. For example, here's code to spawn a new process that will execute the pwd command. We simply destructure the spawn function from the child_process module and execute it with the command as the first argument. The result of the spawn function is a ChildProcess instance, which implements the Node.js EventEmitter API. This means we can register handlers for events on this child object directly. For example, we can do something when the child process ends by registering a handler for the exit event, which gives us the exit code of the child process and the signal, if any, that was used to terminate it. This signal variable is null when the child process exits normally. Other events we can register handlers for on this ChildProcess instance are disconnect, error, message, and close. The disconnect event is emitted when the parent process manually calls the child.disconnect method. An error event is emitted if the process could not be spawned or killed. The message event is the most important one. It's emitted when the child process uses the process.send() method to send messages. This is how parent and child processes can communicate with each other. We will see examples of this in the upcoming clips. And finally, the close event is emitted when the stdio streams of a child process get closed. Every child process gets the three standard stdio streams, which we can access using child.stdin, child.stdout, and child.stderr. When those streams get closed, the child process that was using them will emit the close event. This close event is different from the exit event, because multiple child processes might share the same stdio streams, and one child process exiting does not mean the streams got closed. Since all streams are event emitters, we can listen to different events on those streams attached to every child process. Unlike in a normal process, in a child process the stdout/stderr streams are readable streams while the stdin stream is a writable one, which is the inverse of those types as found in the main process. The events we can use for those streams are the standard ones. Most importantly, on the readable streams we can listen to the data event, which will have the output of the command or any error encountered while executing the command. These two handlers here will log both cases to the main process standard out and error.
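Here's a minimal sketch of the spawn example just described; the exact file layout differs from the course repo, so treat the names here as assumptions:

const { spawn } = require('child_process');

const child = spawn('pwd');

// exit gives us the exit code and the signal used to terminate the child (null on a normal exit)
child.on('exit', (code, signal) => {
  console.log(`child process exited with code ${code} and signal ${signal}`);
});

// stdout and stderr on the child are readable streams, so we consume their data events
child.stdout.on('data', data => {
  console.log(`child stdout:\n${data}`);
});

child.stderr.on('data', data => {
  console.error(`child stderr:\n${data}`);
});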
When we execute this script, the output of the pwd command gets printed and the child process exits with code 0, which means no error occurred. We can pass the spawned command any arguments using the second argument of the spawn function, which is an array of all the arguments to be passed to the command. For example, to execute the find command on the current directory with a -type f argument (to list files only), we pass these three strings as values to the array here, and this will be equivalent to executing the command find . -type f, which will list all the files in all directories under the current one. If an error occurs in the execution of the command, for example, if we give find an invalid destination here, the stderr handler triggers and the exit event will report an exit code of 1, which signifies that an error has occurred. The error value here depends on the operating system and the type of error. A child process's stdin is a writable stream. We can use it to send a command some input. Just like any writable stream, the easiest way to consume it is using the pipe function. We pipe a readable stream into a writable stream. Since the top-level process stdin is a readable stream, we can pipe that into a child process's stdin stream. Here's an example. The child process here invokes the wc command, which counts lines, words, and characters in Linux. We then pipe the main process stdin (which is a readable stream) into the child process stdin (which is a writable stream). The result of this combination is that we get a standard input mode where we can type something, and when we hit Ctrl+D, what we typed will be used as the input of the wc command, and we get 1 line, 2 words, and 12 characters here. We can pipe the standard input/output of multiple processes into each other, just like we can do with Linux commands. For example, we can pipe the standard out of the find command in the first example to the standard in of the wc command in the second example to count all the files in the current directory. All we need to do is define both child processes and pipe the output of the first one to the input of the second one. I added the -l argument to the wc command to only count lines here. Executing this script will give us a count of all files in all directories under the current one, and so far we have three files here in this current directory.
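A minimal sketch of that last piping example, equivalent to running find . -type f | wc -l in a shell (again, the file layout is my assumption, not the exact course repo code):

const { spawn } = require('child_process');

// find . -type f  (list files only, under the current directory)
const find = spawn('find', ['.', '-type', 'f']);

// wc -l  (count lines, which here means one line per file found)
const wc = spawn('wc', ['-l']);

// pipe find's stdout (readable) into wc's stdin (writable)
find.stdout.pipe(wc.stdin);

wc.stdout.on('data', data => {
  console.log(`Number of files: ${data}`);
});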

  48. The Shell Syntax, exec(), and execFile() By default, the spawn function does not create a shell to execute the command we pass into it, making it slightly more efficient than the exec function, which does create a shell. The exec function has one other major difference: it buffers the command's generated output and passes the whole value to a callback function. Here's our previous example implemented with exec. Since exec uses a shell to execute the command, we can use the shell syntax directly here, making use of the shell pipe feature. Exec buffers the output and passes it here in the stdout argument to the callback, which is the output we want to print. If there is an error in the command, the err first argument will be set. Exec is a good choice if you need to use the shell syntax and the data returned from the command is not big, because exec will buffer the whole data before it returns it. The spawn function is a much better choice when the data returned from the command is big, because that data will be streamed with the standard I/O objects, and we can actually make the child process inherit the standard I/O objects of its parent if we want to. Here's an example to spawn the same find command, but inherit the process stdin, stdout, and stderr. So when we execute this command, the data event will be triggered on process.stdout, making the script output the result right away. If we want to use shell syntax and still take advantage of the streaming of data that the spawn function gives us, we can also use the shell option and set it to true. This way, spawn will use a shell, but it will still not buffer the data the way exec does. This is the best of both worlds. There are a few other options we can use in the last argument here besides shell and stdio. We can, for example, use the cwd option to change the working directory of the script. As an example, here's the same count-all-files script done with a spawn command using a shell and with a working directory set to my Downloads folder, which will count all files I have in Downloads, and I obviously need to clean that folder. One other option we can use is the env option to specify the environment variables that will be visible to the new process. The default for this option is process.env itself, so any command will have access to the current process environment by default. If we want to override that behavior, we can simply pass an empty object as the env option, and now the command will not have access to the parent process env object anymore. We can use this env option to manually define environment values and access them in a child command like this one. The last option I want to explain here is the detached option, which makes the child process run independently of its parent process, but the exact behavior depends on the OS. On Windows, the detached child process will have its own console window, while on Linux the detached child process will be made the leader of a new process group and session. If the unref method is called on the detached process, the parent process can exit independently of the child. This can be useful if the child is executing a long-running process, but to keep it running in the background, the child's stdio configurations also have to be independent of the parent. This example will run a node script in the background by detaching it and also ignoring its parent's stdio file descriptors, so that the parent can terminate while the child keeps running in the background.
When we run this, the parent process will exit, but the timer.js process will continue to run in the background. If you need to execute a file without using a shell, the execFile function is what you need. It behaves exactly like the exec function, but does not use a shell, which makes it a bit more efficient. But behaviors like I/O redirection and file globbing are not supported when using execFile. On Windows, some files also cannot be executed on their own, like .bat or .cmd files. Those files cannot be executed with execFile, and either exec or spawn with shell set to true is required to execute them. All of the child_process module functions have synchronous blocking versions that will wait until the spawned process exits. This is potentially useful for simplifying scripting tasks or any startup processing tasks, but they should be avoided otherwise.
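For reference, here's a minimal sketch of the variations covered in this clip: exec with shell syntax, spawn with the shell and stdio options, a detached child, and execFile. File names like timer.js are placeholders of my own, not the course repo's exact files.

const { spawn, exec, execFile } = require('child_process');

// exec runs the command in a shell and buffers the whole output,
// so shell syntax like the pipe character works directly here
exec('find . -type f | wc -l', (err, stdout, stderr) => {
  if (err) {
    console.error(`exec error: ${err}`);
    return;
  }
  console.log(`Number of files: ${stdout}`);
});

// spawn with shell: true gives us shell syntax without exec's buffering,
// and stdio: 'inherit' makes the child use the parent's stdin/stdout/stderr
spawn('find . -type f | wc -l', {
  shell: true,
  stdio: 'inherit',
});

// detached: run a long-running script (a hypothetical timer.js) in the background,
// ignore its stdio, and unref it so the parent can exit while the child keeps running
const child = spawn('node', ['timer.js'], {
  detached: true,
  stdio: 'ignore',
});
child.unref();

// execFile behaves like exec but without a shell
execFile('node', ['--version'], (err, stdout) => {
  if (!err) console.log(`node version: ${stdout}`);
});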

  49. The fork() Function The fork function is a variation on the spawn function for spawning node processes. The biggest difference between spawn and fork is that a communication channel is established to the child process when using fork, so we can use the send method on the forked process object, and the global process object in the child, to exchange messages between the parent and forked processes, with an interface similar to that of the EventEmitter module. Here's an example. In the parent module, we fork child.js and listen on the message event, which will be triggered whenever the child uses process.send, which we're doing every second. To pass messages down from the parent to the child, we can execute the send function on the forked object itself, and in the child we listen to the message event on the global process object. When running this code, the forked child will send an incremented counter value every second and the parent will just print that. Let's do a more practical example of using the fork function. Let's say we have an http server that handles multiple endpoints. One of these endpoints is computationally expensive and will take a few seconds to complete. I've simulated this with a long for loop here. This program as is has a big problem: when the /compute endpoint is requested, the server will not be able to handle any other requests because the event loop is busy with the long for loop operation. There are a few ways we can solve this problem depending on the nature of the long operation, but one solution that works for all operations is just to move the computational operation into another process using fork. We'll just move the whole function into its own file, call it compute.js. Then, instead of doing the operation here, we fork that file and use the messaging interface to communicate between the server and the forked process. When we get a request, we'll send a message to the forked process to start processing. The forked process can read that message using the message event on the global process object. In there, we'll call the long computation operation, and once done, we can send its result back to the parent process using process.send. In the parent process, we just listen to the message event on the forked process itself, and once we get that, we have a sum ready for us to send to the requesting user over http. We'll need to import fork for this code to work, and this code is, of course, limited by the number of processes we can fork, but when we execute this code now and request the long computation over http, the main server is not blocked at all and can take further requests, and when the computation is done, it's received normally as before. Node's cluster module, which we'll explore next, is based on this idea of child process forking and load balancing the requests among the many forks that we can create on any system.
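Here's a minimal sketch of both fork examples described above, the basic parent/child messaging and the /compute offloading; the file names, port, and loop size are assumptions rather than the exact repo code.

// ---- parent.js: basic messaging between a parent and a forked child ----
const { fork } = require('child_process');

const forked = fork('child.js');

// messages the child sends with process.send arrive here
forked.on('message', msg => {
  console.log('Message from child', msg);
});

// and we can send messages down to the child as well
forked.send({ hello: 'world' });

// ---- child.js ----
process.on('message', msg => {
  console.log('Message from parent:', msg);
});

let counter = 0;
setInterval(() => {
  process.send({ counter: counter++ });
}, 1000);

// ---- compute.js: the long blocking operation moved into its own process ----
const longComputation = () => {
  let sum = 0;
  for (let i = 0; i < 1e9; i++) {
    sum += i;
  }
  return sum;
};

process.on('message', () => {
  process.send(longComputation());
});

// ---- server.js: fork compute.js per request instead of blocking the event loop ----
const http = require('http');
const { fork } = require('child_process');

http.createServer((req, res) => {
  if (req.url === '/compute') {
    const compute = fork('compute.js');
    compute.send('start');
    compute.on('message', sum => res.end(`Sum is ${sum}\n`));
  } else {
    res.end('OK\n');
  }
}).listen(3000);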

  50. The Cluster Module The cluster module can be used to enable load balancing over an environment's multiple CPU cores. It's based on the fork function that we've seen in the previous clip, and it basically allows us to fork our main application process as many times as we have CPU cores, and then it will take over and load balance all requests to the main process across all forked processes. The cluster module is Node's helper for implementing the cloning scalability strategy, but only on one machine. So when you have a big machine with a lot of resources or when it's easier and cheaper to add more resources to one machine rather than adding new machines, the cluster module is a great option for a really quick implementation of the cloning strategy. Even small machines usually have multiple cores, and even if you're not worried about the load on your Node server, you should enable the cluster module anyway to increase your server's availability and fault tolerance. It's a simple step, and when using a process manager like pm2, for example, it becomes as simple as just providing an argument to the launch command! But I'm going to show you how to use the cluster module natively and explain how it works. Here's the structure of what the cluster module does. We create a master process, and that master process forks a number of worker processes and manages them. Each worker process represents an instance of the application that we want to scale. All incoming requests are handled by the master process, and the master process is the one that decides which worker process should handle an incoming request. The master process's job is easy, because it actually just uses a round-robin algorithm to pick a worker process. This is enabled by default on all platforms except Windows, and it can be globally modified to let the load balancing be handled by the operating system itself. The round-robin algorithm distributes the load evenly across all available processes on a rotational basis. The first request is forwarded to the first worker process, the second to the next worker process in the list, and so on. When the end of the list is reached, the algorithm starts again from the beginning. This is one of the simplest and most used load balancing algorithms; however, it's not the only one. More sophisticated algorithms allow assigning priorities and selecting the least loaded server or the one with the fastest response time.

  51. Load-balancing an HTTP Server Let's clone and load balance a simple HTTP server using the cluster module. Here's the simple hello-world http example server, slightly modified to simulate some CPU work before responding. To verify that the balancer we're going to create is going to work, I've included the process id in the HTTP response to identify which instance of the application is actually handling a request. We'll go ahead and run this server and test that it's working, and before we create a cluster to clone this server into multiple workers, let's do a simple benchmark of how many requests this server can handle per second. We can use the ApacheBench tool for that. I have that tool installed already, but you might need to install it on your system. This command will load the server with 200 concurrent connections for 10 seconds, and our single Node server was able to handle about 51 requests per second. Of course, the results here will be different on different platforms, and this is a very simplified test of performance that's not 100% accurate, but it will clearly show the difference that a cluster would make in a multi-core environment. Now that we have a reference benchmark, let's scale our application with the cloning strategy using the cluster module. We'll create a new file for the master process. I'll name it cluster.js. This is on the same level as our server.js. In cluster.js, we'll need both the cluster module and the os module. We'll use the os module to read the number of CPU cores we can work with. The cluster module gives us a handy boolean flag to determine if this cluster.js file is being loaded as a master process or not. The first time we execute this file, we will be executing the master process and the isMaster flag will be set to true. In this case, we can instruct the master process to fork our server as many times as we have CPU cores. This is simple: we just read the number of CPUs we have using the os module, then with a for loop over this number of CPUs, we call the cluster.fork method. This will simply create as many workers as the number of CPUs in the system to take advantage of all the available processing power. When the cluster.fork line is executed from the master process, the current main module, cluster.js, is run again, but this time in worker mode with the isMaster flag set to false. There is actually another flag set to true in this case if you need to use it, which is the isWorker flag. When the application runs as a worker, it can start doing the actual work. This is where we need to define our server, which we can do by requiring the server.js file that we have already. And that's basically it. That's how easy it is to take advantage of all the processing power in a machine. Let's test. We run cluster.js this time, and on this machine I have 8 cores, so it starts 8 processes. It's important to understand that these are completely different Node.js processes. Each worker process here will have its own event loop and memory space. When we hit our server multiple times now, the requests will start to get handled by different worker processes with different process ids. Note that those workers will not be exactly rotated in sequence, because the cluster module performs some optimizations when picking the next worker, but what we see here confirms that the load is being distributed among different worker processes. Now let's load test our server again using the same ab command.
The cluster I created on this machine was able to handle 181 requests per second in comparison to the 51 requests per second that we got using a single Node. The performance of this simple application tripled with just a few lines of code.
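For reference, here's a minimal sketch of the server.js/cluster.js pair described in this clip; the port, the fake CPU work, and the messages are placeholders rather than the course repo's exact code.

// ---- server.js: respond with the process id so we can see which worker answered ----
const http = require('http');
const pid = process.pid;

http.createServer((req, res) => {
  for (let i = 0; i < 1e7; i++); // simulate some CPU work
  res.end(`Handled by process ${pid}\n`);
}).listen(8080, () => {
  console.log(`Started process ${pid}`);
});

// ---- cluster.js: fork one worker per CPU core in master mode, run the server in worker mode ----
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  const cpus = os.cpus().length;
  console.log(`Forking for ${cpus} CPUs`);
  for (let i = 0; i < cpus; i++) {
    cluster.fork();
  }
} else {
  require('./server');
}

The benchmark command would look something like ab -c 200 -t 10 http://localhost:8080/ (adjust the port to whatever your server listens on).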

  52. Broadcasting Messages to All Workers The communication between the master process and the workers is simple, because under the hood, the cluster module is just using the child_process.fork API, which means we also have communication channels available between the master process and each worker. We can access the list of worker objects using cluster.workers, which is an object that holds a reference to all workers and can be used to read information about those workers. So since we have communication channels between the master process and all workers, to broadcast a message to all of them we just need a simple loop over all the workers. We can use the new Object.values function to get an array of all workers from the workers object. Then, for each worker, we use the send function to send over any primitive or object value. In server.js, to handle a message received from the master process, we register a handler for the message event on the global process object. Let me remove the previous console.log line here to simplify the output, and let's test. Every worker will now receive a message from the master process. Note how the workers here are not guaranteed to start in order. Let's make this communication example a little bit more practical. Let's say we want our server to reply with the number of users we have created in our database. We'll create a mock function that returns the number of users we have in the database and just have it double its value every time it's called. What we want to do here to avoid multiple DB requests is to cache this call for a certain period of time, let's say 10 seconds. But we still don't want our 8 workers to each do their own DB requests and end up with 8 DB requests every 10 seconds. We can have the master process do 1 request and tell all 8 workers about the new value for the user count using the communication interface. In the master process mode, we'll create a function, call it updateWorkers, and use the same loop to broadcast the userCount value instead of a text message. Then we'll invoke updateWorkers for the first time and invoke it again every 10 seconds using a setInterval. This way, every 10 seconds, all workers will receive the new value over the process communication channel. In the server code, we want to use this value, which we can read here using the message variable. We can simply create a variable global to this module and set it every time we receive a message, and we can now just use this variable anywhere we want. We'll respond with that variable as well, changing the response's end call into a write so we can output both messages. Let's review: the userCount module variable here is controlled by the message interface. The master process resets the value of this userCount variable every 10 seconds by invoking the mock DB call. Testing this now, for the first 10 seconds we get 25 from all workers, and we only did a single DB request here. Then, after 10 seconds, all workers start reporting the new user count, and we only did another single DB request. And this was all possible thanks to the communication channels between the master process and all workers.
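A minimal sketch of this broadcasting pattern, with a hypothetical mock DB call standing in for the real one; the cluster.js part goes inside the isMaster branch, where cluster is already required.

// ---- cluster.js (inside the isMaster branch): broadcast a cached user count ----
let usersCount = 5;
const numberOfUsersInDB = () => {
  usersCount = usersCount * 2; // hypothetical mock DB call: the count keeps changing
  return usersCount;
};

const updateWorkers = () => {
  const count = numberOfUsersInDB();
  Object.values(cluster.workers).forEach(worker => {
    worker.send({ userCount: count });
  });
};

updateWorkers();
setInterval(updateWorkers, 10000); // refresh the cached value every 10 seconds

// ---- server.js: keep a module-level copy of the broadcast value and use it in responses ----
let userCount;
process.on('message', msg => {
  userCount = msg.userCount;
});

// inside the request handler:
//   res.write(`Handled by process ${process.pid}\n`);
//   res.end(`Users: ${userCount}\n`);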

  53. Availability and Zero-downtime Restarts One of the problems in running a single instance of a Node application vs. many is that when that instance crashes, it has to be restarted, and there will be downtime between these two actions, even if the process was automated as it should be. This also applies to the case when the server has to be restarted to deploy new code. With one instance, there will be downtime that affects the availability of the system. When we have multiple instances, the availability of the system can be easily increased with just a few extra lines of code. To simulate a random crash in our server process, let's do a process.exit call inside a timer that fires after a random amount of time. When a worker process exits like this, the master can be notified using the exit event on the cluster object, so we can register a handler for that event, and in there we just fork a new worker process whenever any worker process exits. It's good to add a condition here to make sure that the worker process actually crashed and was not manually disconnected or killed by the master process, which we're not doing here, but might eventually need to. For example, the master process might decide that we are using too many resources based on the load patterns it sees, and it will need to kill a few workers in that case. To do so, we can use the .kill or .disconnect method on any worker, and the exitedAfterDisconnect flag will be set to true in that case, so with this if statement, our code will not fork a new worker for that case. If we run this code now, after a random number of seconds, workers will start to crash and the master process will immediately fork new workers to increase the availability of the system. We can actually measure the availability using the same ab command and see how many requests the server was not able to handle overall, because some of the unlucky requests will have to face the crash case, and that's hard to avoid. But it looks like only 17 requests failed out of over 1800 in this 10-second interval with 200 concurrent requests. That's over 99% availability. With these few lines of code, we don't have to worry about process crashes anymore. The master guardian will keep an eye on those processes for us. But what about the case when we want to restart all worker processes, for example, to deploy new code? Well, we have multiple instances running, so instead of restarting them together, we can simply restart them one at a time to allow other workers to continue to serve requests while one worker is being restarted. Implementing this with the cluster module is easy. Let me first comment out the random crash code; we're done with that example. Now, in the master process, we don't want to restart the master process once it's up, but we do need a way to send the master process a command to instruct it to restart its workers. This is easy on Linux systems, because we can simply listen to a user signal like SIGUSR2, for example, which we can trigger by using the kill command on the process id and passing that signal instead of killing the process. This way, the master process will not be killed and we have a way to instruct it to start doing something. SIGUSR2 is a proper signal to use here, because this will be a user command. If you're wondering why not SIGUSR1, it's because Node uses that for its debugger and we want to avoid any conflicts here.
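Before moving on to the restart signal (the SIGUSR2 handling continues below), here's a minimal sketch of the crash-and-refork handling just described; the timer range is a placeholder.

// ---- server.js: simulate a random crash (for this availability demo only) ----
setTimeout(() => {
  process.exit(1); // pretend something went terribly wrong
}, Math.random() * 10000);

// ---- cluster.js (inside the isMaster branch): refork only when a worker actually crashed ----
cluster.on('exit', (worker, code, signal) => {
  if (code !== 0 && !worker.exitedAfterDisconnect) {
    console.log(`Worker ${worker.id} crashed. Starting a new worker...`);
    cluster.fork();
  }
});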
Unfortunately, on Windows these process signals are not supported and we would have to find another way to command the master process to do something. There are some alternatives: we can, for example, use standard input or socket input, or we can monitor the existence of a process.pid file and watch it for a remove event. But to keep this example simple, we'll just assume this server is running on a Linux platform. Node works very well on Windows, but I think it's a much safer option to host production Node applications on a Linux platform, and this is not just because of Node itself; many other production tools are much more stable on Linux. This is just my personal opinion, and feel free to completely ignore it. By the way, on recent versions of Windows you can actually use a Linux subsystem, and it works very well. I've tested it myself and it was nothing short of impressive. If you're developing Node applications on Windows and you're not using this new feature, look for Bash on Windows and give it a try. So when the master process receives this signal, it's time for it to restart its workers, but we want to do that one worker at a time, which means we only restart the second worker when we're done restarting the first one. To begin this task, we'll get a reference to all current workers using the cluster.workers object and just store the workers in an array. Then, let's have a restartWorker function that receives the index of the worker to be restarted. This way we can do the restarting in sequence by having this function call itself when it's ready for the next worker. To start the sequence, we call this function with index 0 to make it pick the first worker. Inside the function, we get a reference to the worker to be restarted using the workers array and the received index, and since we will be calling this function recursively to form a sequence, we need a stop condition: when we no longer have a worker to restart, we just return. Then we basically want to disconnect this worker. Before restarting the next worker, we want to fork a new worker to replace the one that we just disconnected. We can use the exit event on the worker itself to be able to fork a new worker when this one exits, but we have to make sure that this exit was actually triggered after a disconnect call, so we can use the exitedAfterDisconnect flag. If this flag is not true, then this exit was caused by something other than our disconnect call and we should just return and do nothing in that case. But if it is set to true, then we go ahead and fork a new worker to replace the one that we just disconnected. When this new forked worker is ready, we can go ahead and restart the next one, but remember that this fork process is not synchronous, so we can't just restart the next worker after this call. Instead, we can monitor the listening event on the newly forked worker, which tells us that this new worker is connected and ready. When we get this event, we can safely restart the next worker in the sequence. That's all we need for a zero-downtime restart to work with the cluster module. To test it, we first log the master process id, because we need it to send the SIGUSR2 signal. Then, we start the cluster, copy the master process id, and restart the cluster from a new terminal. Actually, before doing so, I'm going to fire the same ab command to see what effect this restart will have on availability. Start this command first, and right away restart the cluster.
So the restart happened while the ab command was load testing our system here. The log here shows how all the workers were restarted in sequence, and this restart did not affect our availability at all. We have 0 failed requests here. Process monitors like PM2, for example, which I personally use in production, make all the tasks we went through so far extremely easy and give a lot more features to monitor the health of a Node.js application. For example, with PM2, to launch a cluster for any app, all you need to do is use the -i argument; to do a zero-downtime restart, you just issue this command; and you can scale the application instances up and down with this command. But I find it helpful to first understand what will actually happen under the hood here.
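Here's a minimal sketch of the zero-downtime restart logic described above, added inside the master branch of cluster.js (cluster already required); log messages are placeholders.

// ---- cluster.js (inside the isMaster branch): restart workers one at a time on SIGUSR2 ----
console.log(`Master PID: ${process.pid}`);

process.on('SIGUSR2', () => {
  // snapshot the current workers so we restart exactly this set, in sequence
  const workers = Object.values(cluster.workers);

  const restartWorker = workerIndex => {
    const worker = workers[workerIndex];
    if (!worker) return; // no more workers to restart

    worker.on('exit', () => {
      // ignore exits that were not caused by our disconnect call
      if (!worker.exitedAfterDisconnect) return;
      console.log(`Exited process ${worker.process.pid}`);

      // fork a replacement, and only move on once it's ready to accept connections
      cluster.fork().on('listening', () => {
        restartWorker(workerIndex + 1);
      });
    });

    worker.disconnect();
  };

  restartWorker(0);
});

To trigger the restart, send the signal to the master process from another terminal, for example with kill -SIGUSR2 <master pid>.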

  54. Shared State and Sticky Load Balancing Good things always come with a cost. When we load balance a Node application, we lose some features that are only suitable for a single process. This problem is kind of similar to what's known in other languages as thread safety, which is about sharing data between threads; in our case, it's about sharing data between worker processes. For example, with a cluster setup, we can no longer cache things in memory, because every worker process has its own memory space, so if we cache something in one worker's memory, other workers will not have access to it. If we need to cache things with a cluster setup, we have to use a separate entity and read/write to that entity's API from all workers. This entity can be a database server, or if you want to use an in-memory cache, you can use a server like Redis, or you can create a dedicated Node process with a read/write API for all other workers to communicate with. Don't look at this as a disadvantage though, because if you remember the scalability strategies, using a separate entity for your application's caching needs is part of decomposing your app for scalability, so you should probably do that even if you're running on a single-core machine. Other than caching, when we're running on a cluster, stateful communication in general becomes a problem. Since the communication is not guaranteed to be with the same worker, creating a stateful channel on any one worker is not an option. The most common example of this is authenticating users. With a cluster, the request for authentication comes to the load balancer, which sends it to one worker, assuming that to be worker A in this example. Worker A now recognizes the state of this user. But when the same user makes another request, the load balancer will eventually send them to other workers, which do not have them as authenticated. So keeping a reference to an authenticated user session in one instance's memory is not going to work anymore. This problem can be solved in many ways. We can simply share the state across the many workers we have by storing these sessions' information in a shared database or a Redis node, but applying this strategy requires some code changes, and that is sometimes not an option. If you can't make the code modifications needed for a shared storage of sessions here, there is a less invasive, but not as efficient, strategy. You can use what's known as sticky load balancing. This is much simpler to implement, as many load balancers support this strategy out of the box. The idea is simple: when a user authenticates with a worker instance, we keep a record of that relation at the load balancer level. Then, when the same user sends a new request, we do a lookup in this record to figure out which server has their session authenticated and keep sending them to that server instead of the normal distributed behavior. This way, the code on the server side does not have to be changed, but we don't really get the benefit of load balancing for authenticated users here, so only use sticky load balancing if you have no other option. The cluster module actually does not support sticky load balancing, but a few other load balancers can be configured to do sticky load balancing by default.
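A quick sketch to make the shared-state problem concrete: if each worker keeps a counter (or a cache, or a session map) in its own memory, different workers will report different values for what looks like the same variable; the port here is a placeholder.

// ---- server.js: per-worker in-memory state; every worker has its own copy of requestCount ----
const http = require('http');

let requestCount = 0;

http.createServer((req, res) => {
  requestCount++;
  res.end(`Process ${process.pid} has handled ${requestCount} requests\n`);
}).listen(8080);

Run this under the cluster from the previous clips and hit it repeatedly; the count you see depends on which worker answered, which is exactly why session data needs to live in a shared store instead of one worker's memory.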

  55. Summary In this module, we talked about how scalability in Node is something we start thinking about early on in the process, and we talked about the reasons to scale an application and the three different strategies that can be used to scale it: cloning, decomposing into services, and partitioning based on data. Then, to explore Node's built-in modules that help with the cloning strategy, we first looked at the different ways we can create a child process using the child_process module. We learned about the spawn method, its events, and how to pass it arguments. We looked at how to work with its standard I/O objects and pipe them together when spawning multiple commands. We've seen the various options we can use with the spawn function, like using a shell, customizing the standard I/O objects, controlling the environment variables, and detaching the process from its parent. Then we talked about the differences between spawn, exec, and execFile, and saw an example of forking a process to do a long blocking computation without blocking the main process. We then scaled the simple http server example using the cluster module and saw how a few lines of code tripled the performance of the server. We saw an example of how to broadcast a message from the master process to all forked workers, how to restart failed workers automatically with the cluster module, and how to perform zero-downtime restarts as well. And finally, we talked about the problem of stateful communication when working with clusters and load balancers and how to work around it.

  56. Course Wrap Up Course Wrap Up Thanks for taking this course. I hope you enjoyed it and it was a good educational experience for you, and that you now feel a lot more comfortable with the Node runtime itself. I've written a Medium article about tips and advice for coders, and I thought I'd share some of that advice with you here, as I think it's very relevant to this course. The key for any new experience to stick in your mind is PRACTICE. Go back to the clips that you were not completely comfortable with, challenge yourself with new, similar exercises, and implement them. And keep doing so until you're comfortable with the topic. Just like young children, code communicates by misbehaving. Change your attitude toward bugs and problems. It's always great to face them, because you'll probably learn something new, and it's easy to make progress right then and there. Don't just solve the problems, though; understand why they happened. Get comfortable reading and debugging foreign code that you're seeing for the first time. Use interactive debugging when you can, but do not underestimate the value of print debugging. In some cases, print debugging is far more effective than interactive debugging. Don't guess when debugging. Learn how to isolate the issue, perform sanity checks, and only after that, formulate a theory about what might be wrong. Your first instinct will always be to guess. Resist it. Get comfortable reading API documentation and learn the standard libraries for JavaScript and Node. Don't memorize how to use things; just be aware of the existence of the powerful features and know how to quickly look them up. Read as many good books as you can, and don't just read them: implement the examples in them with your hands, and modify those examples if you can. I say good books because, unfortunately, there are a lot of confusing and not completely accurate educational resources out there, so you need to pick the healthy ones. I plan on writing a course follow-up article on Medium, and I'll share my recommendations for books and other educational resources for both JavaScript and Node. Node has been changing fast. Very soon there might be a new streams API that's different from the current one. Follow the Node project on GitHub and its official site to stay up-to-date with the changes in Node and its ecosystem. And don't forget that you can always ask me any questions about any of my courses using this Slack account. Come say hi if you don't have any questions, or if you have any kind of feedback about this course, good or bad. Come chat with me and tell me what I could have done to make this course better for you, and what my next course should be about! Thanks again and I'll see you in the next course.