Getting Started with Node.js

Introduction

Paul O'Fallon: Hello. My name is Paul O'Fallon, and I'd like to welcome you to the course An Introduction to Node.js, module one, Getting Started with Node.js. In this module, we'll cover an overview of Node.js, building and installing Node.js, developing for Node with the Cloud9 IDE, an introduction to Node's event loop, and the ins and outs of writing code with callbacks. So with that said, let's jump right in and get started.

Node.js Background

"Node.js" is a server-side "JavaScript" platform. It was first introduced to the community by its creator, Ryan Dahl, at the "2009 JSConf.eu Conference". Its presentation was received with a standing ovation. Since that time, "Node" has continued to evolve, with contributions from the community as well as the project's primary sponsor, cloud computing Company Joyent. "Node" is among the most popular projects on "GitHub", often beating out other heavyweights such as "jQuery" and "Ruby on Rails". At a high level, "Node" is comprised of three building blocks. The first is "Lib UV", a high performance, cross-platform evented IO library. It's a fairly new addition to "Node", and replaces or abstracts several UNIX-only libraries once directly required by the project. "Lib UV" was built as a part of porting "Node.js" to the "Windows" environment. Next is "V8". This is Google's "JavaScript" engine, the same engine found in their Chrome web browser. The "Node" team makes every effort to leverage "V8" out of the box within "Node". This makes it easier for the team to include updated versions of "V8" in each release of "Node", and thereby benefit from Google's continuous innovation of their "JavaScript" engine. The last component of "Node" is the custom "C++" and "JavaScript" code developed specifically for the "Node" platform itself. These three things together make up the "Node.js" platform.

Getting Node.js

There are several ways to get Node.js. There are installers and binaries available for download at nodejs.org, and Node is available via many common Linux package managers. Of course, the source code is also available if you'd like to download and build it yourself. One handy way I've found to manage my Node installations in a Linux or Mac environment is with NVM, a tool developed by Tim Caswell in the spirit of Ruby's RVM. It can be cloned directly from GitHub and used to install and manage any number of Node versions. Because Node is evolving so rapidly, tools like NVM can be helpful not only with the manual task of downloading and building a new release, but also by allowing you to rapidly switch executables for testing your code against multiple versions of Node, all on one system. Here you see the steps to get started. After cloning, simply source the shell script and run nvm install with a version number. You can switch versions with nvm use, and you can set your default version of Node with nvm alias default. Let's take a look at using NVM to install Node.

Demo: Installing Node on Linux with NVM

So now we're going to take a look at installing Node on an Ubuntu Linux instance using NVM. Now, as I mentioned, the absolute easiest way to install Node on Linux is just by using the version that comes with the package manager. So let's take a look real quick at how we would do that. First, let's search for that package. And here you can see that there are several Node.js packages that we could install, so if we wanted to install Node, we could simply install the nodejs package, and it would be available here. Now if you want to manage multiple versions, or if you want to stay more current with the latest version of Node, you can use a tool like NVM to manage those yourself. So let's look at doing that. As I mentioned, you'll want to first clone the NVM repository from GitHub, and to do that you'll need to have the Git client installed on your machine. And when you do, you'll run a command like this. So here we're going to clone it into an nvm directory. Okay, so now we have it cloned; if we look in there, we have some shell scripts. And before we actually have the command available to use, we need to source the appropriate shell script, so let's do that. Okay, now nvm should work. Yes, and there are all the commands that are available with NVM. So if we do nvm ls, you'll see we don't have any versions installed yet, so let's actually install a version. You'll see that install went very quickly, and that's because for the more recent versions they actually have the binary for a lot of platforms available to download, so I didn't even have to compile it; all I had to do was download it. And that makes it really easy to install multiple versions, particularly when they are the more recent versions. And so if I do nvm ls now, you'll see I have two versions, and I'm currently using the last one I installed, which was 0.8.15. So if I want to install an older version, let's say a 0.6 version of Node, for that I'll have to compile it. In order to do that on Linux, I'll have to have the appropriate compilers and whatnot installed, much like you would expect for building any C++ project. So even though it is downloading and compiling, NVM takes care of all of that for you, as long as the appropriate packages are installed on your system. Now this build process is going to go on for quite some time. Okay, so the compilation is done. NVM is telling us that we're now using Node version 0.6.9. So now an nvm ls will show us three versions, where our current one again is the last one we installed. If I want to switch between versions -- well, first let's verify that what we see is true. So if I do node -v, I get 0.6.9. Now to switch versions of Node, I can simply type nvm use 0.8.15, and if I do node -v now, you'll see 0.8.15. So I can switch that easily. But if you're like me, 99% of the time you're going to want to use the same version of Node; you don't necessarily want to have to set it every time. Well, NVM makes that easy by giving you the ability to alias a default version. What that means is, whenever NVM is sourced, it will give you that version by default. Now you can use aliases for more than this, but this is really the case where I use aliases. So if I type nvm alias default 0.8.15, now the default is 0.8.15.
So now that we have Node installed, let's actually run it. If you execute node without any script name, it will dump you into a read-eval-print loop, or REPL, where you can interact with Node from the command prompt. Obviously, the most trivial example would be to just do a console.log, and it prints to the console. So that's one way to play around with it for very simple experiments with Node. So let's create a file and give ourselves something a little more substantial. We're going to try setTimeout, so let's write one of those here. We're going to have setTimeout call a function and print the word "world" after one second, but we're going to have it immediately print the word "hello", so let's try that. This script is almost identical to one that Ryan Dahl used in his 2009 demo of Node.

Okay, so now let's try to run an example that will do a little more than simply set a timeout. Let's try to spin up a very simple HTTP server. Now on the Node home page, you'll see the code for a very simple web server that just prints out "Hello, World". So what I've done is I've copied that and pasted it into a script called server.js. The only thing I've changed is that I've taken off the IP address and left just the port number. We'll go into more detail in a later module about what this is really doing, but just take it for granted that we're creating a web server, writing back a text/plain header, and sending the text "Hello, World". This web server is going to listen on port 1337, and it's also going to print something out to the console. So if we run that, and we go to the host name of this VM at that address and hit Return, you'll see we get "Hello, World". A very, very simple example, but it's a running web server in a few lines of code.

Demo: Developing for Node with Cloud9 IDE

You can, of course, edit JavaScript files for Node in your favorite text editor and run them from the command line. However, many of the examples in this course will be shown using the free version of the web-based Cloud9 IDE. It provides browser support for many of the common editing, running, and debugging tasks you'll do as a part of Node development. In fact, much of Cloud9 itself was written in Node.js. In this screenshot, you can see some of the syntax highlighting, and even code completion support, for Node.js applications. Let's take a quick look and familiarize ourselves with some of the features provided by Cloud9.

So to get started with Cloud9, you'll want to go to the home page, which is at c9.io. Once you're here, you can sign up by creating an account directly on Cloud9, or log in with your GitHub or Bitbucket account. Now, I chose to associate my login on Cloud9 with my GitHub account, and since I've been here before, it already knows who I am and is asking me to go to my dashboard, so let's click here and do that. So here you can see my dashboard. Down the left are the projects that I'm currently editing, or currently have active, on Cloud9. Because I've associated it with my GitHub account, it also has all of my projects that are on GitHub. But what we want to do here is go into the project I've created that has all of the sample code for this course, "PS Intro to Node", so let's click on that, and we'll want to start editing. Once we get into a particular workspace, it should look familiar if you've used any other IDE before; it shares a lot in common with, say, Visual Studio or Eclipse. It's got a series of menus across the top that you can use to do various functions within the IDE, but one in particular that's unique to Cloud9 is this Share function. You can invite someone by email, Twitter, or Facebook to come and co-develop a project with you. I've used this once before, and it's really impressive to see somebody else editing your file, running it, and watching the output in the console from someone else running it. Now, over here on the side you'll see the tree structure for our project, with its folders and files. But you can do several other things with this sidebar if that's not what you want to see: you can show only the open files, or you can set your run and debug options. And here is where, if you wanted to run your code in a different version of Node, you could do that. So here they have Node 0.8.x and 0.6.x, or if this was a PHP project, you could set that here as well. If you'll remember, on our Linux instance, where we were using NVM to switch between version 0.8 and version 0.6 of Node, you had to download and compile version 0.6; you don't have to do any of that here. You can just pick the one that you want to use, and it's there for you. And then, when you're ready to deploy, you can deploy to a couple of different providers, Heroku or Windows Azure, directly from the IDE. And then, of course, there are a lot of settings you can adjust to tailor the IDE to your liking. Then here is a tab-based editing window, which will be familiar from other IDEs. One other thing that's kind of interesting, in particular for Node, is that the Cloud9 IDE has pretty good support for code completion and syntax highlighting -- syntax highlighting for JavaScript and code completion for Node. So, for instance, if I go and type console.log, you'll see how it's giving me assistance there. In fact, if I just type "console." and then stop, I can scroll through the various functions that it has, get some extra information about those, and some documentation here as well. If I've left a syntax error here, I can get some guidance as well. Then down here at the bottom is a console and an output window. In the console you can enter a lot of different commands, some of which are UNIX-style commands, and some of which are special to this console. So if we type help, we can see the commands that are available; some of them, about navigating to tabs, are specific to Cloud9, but if you scroll up further, you'll see things like ls and mv, more UNIX-style commands. So if you're more comfortable typing things and want to do less mouse movement, you can do that. It also supports a lot of keyboard shortcuts, Ctrl-S to save a file and whatnot. If you run an application -- so let's run this one, just super simple; we'll save it first, now we'll run it -- you can see that we're now in the Output tab here, and we're seeing "Hello, world". Now, a couple of lines will show up whenever you run Node code on Cloud9. In this case, we're just printing to the console so they don't apply, but they're here anyway: it's reminding us that we should use process.env.PORT and process.env.IP as the port and host in our scripts. And then if I wanted to see the web app that I was writing in Cloud9, I could click on this link. So now let's run a slightly more complicated example: that setTimeout script we ran on the Linux VM. We'll run that, and there you go, you see "Hello, world" in the output. Now let's go back to the Node home page, grab that code, and try creating a simple HTTP server. So here we are again on the Node home page. Let's scroll down and grab this code again, and we will paste it here. Now, the reminder it gave us down here matters, so, since we're not going to be running it at that hard-coded host and IP, let's take that out. And let's grab those names they suggested we use for the port and the IP, paste those in, and now we'll run this. And you can see it actually pulled it up in a browser here. If we want to pull it up in a separate tab or separate window, we can just click on this, and there you go, "Hello, World".
The host name of the instance is on c9.io, built from my username and the name of the workspace. And it's "Hello, world", and that is what's running here. So it's a very simple way to come in, sign up, and get yourself a development environment and a web server. Most of the examples in this course we're going to do in Cloud9, so it would be great for you to go ahead and sign up for a free account and follow along.
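As a sketch, the Cloud9-friendly version of the home-page server just swaps the hard-coded address for those environment variables (a minimal sketch, assuming the variable names from the IDE's reminder; the fallback values are assumptions):

    var http = require('http');
    var port = process.env.PORT || 1337;      // Cloud9 supplies PORT...
    var ip = process.env.IP || '127.0.0.1';   // ...and IP at runtime
    http.createServer(function (req, res) {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('Hello, World\n');
    }).listen(port, ip);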

Node's Event Loop

One of the key concepts Node brings from the browser to JavaScript on the server is the event loop. In the browser, the event loop is constantly listening for DOM events, such as key presses or mouse clicks. Similarly, Node's event loop is constantly listening for events on the server side.

These events can be externally generated, such as incoming HTTP requests or TCP connections, or they can be timers and other internal events generated by your Node application itself. Additionally, other events may be triggered in response to a request against an external resource. For example, asking Node to open a file for reading will fire an event when the file is opened and ready. Sending a message to an external process will fire an event when the message has been sent. And making a request of a network resource, such as another web server, will fire an event when the HTTP response is received. A key point is that each of these is handled as a discrete event in Node. In fact, the events will very likely interleave with one another. For example, in this diagram, a timer event is received between the request for a file and when the file is ready for reading, and both TCP and HTTP events are received while we're sending a message to an external process. Node itself doesn't pause and wait for any of these requests to complete. It simply continues to react to events as they arrive. A common example to demonstrate this non-blocking, event-driven approach is a web application that fetches data from a database. The application raises an event when an HTTP request is received. This event generates a query to the database for some information. Once Node receives an event back from the database that the query is complete, an HTTP response is formulated and sent to the caller. While it is waiting for the response from the database, however, Node is not blocked and is free to handle additional requests. Here you can see that it receives a second request while still waiting for the first one to complete, and so on, and so forth. This non-blocking approach is fundamental to Node, and differentiates it from the more traditional server-side programming model that requires you to manage multiple threads to achieve this type of concurrency.
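As a tiny illustration of this interleaving (not from the course; a minimal sketch using Node's built-in fs module, with an assumed file name and error handling omitted for brevity):

    var fs = require('fs');

    fs.readFile('data.txt', function (err, contents) {
      console.log('file is ready');     // fires later, when the I/O completes
    });
    setTimeout(function () {
      console.log('timer fired');       // a timer event may arrive first
    }, 10);
    console.log('still running');       // prints immediately: Node never blocked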

Node Conventions for Writing Asynchronous Code

Writing code that operates in this non-blocking, asynchronous environment requires a different way of thinking. Here is a typical approach to querying a database, shown using some JavaScript-looking pseudo code. Each function returns a value before the next function is called, and each statement builds on the results of the prior one. First, we connect to a database. Then we use that connection to create a statement. From that statement, we execute a query and get back a set of results. Finally, we iterate over those results. Node.js code that accomplishes this task would look quite different, because in this case each function returns almost immediately, before the actual work has been done. We have to find other ways to convey the ordering of our statements. If you look at the getDbConnection function, you'll see that in this case it takes two parameters. The first is the same connection string as before, but the second parameter is a function. What we're saying, in effect, is, "Get a connection to the database, and once you have it, call this function and pass it the connection you just created." By crafting the statement this way, we've left Node free to do other work while it's waiting for the database connection to be established. The createStatement function is written similarly, except in this case the only parameter is the function to call once the statement has been created. These two functions are examples of using callbacks to write code that will run asynchronously. However, lest you think functions with return values have disappeared altogether, you'll notice that the executeQuery function does indeed return a value. In this case, the results object returned from the function does not immediately contain the results of the database query. It is a special object called an event emitter, which is capable of emitting events in the future, when each row of the query result becomes available. Here we're telling Node to invoke a function when each row event is emitted by the results object. We'll learn more about event emitters in a later module. Node has adopted some conventions around its use of callbacks, and it's important to cover those briefly here. They'll help set the stage for many of the examples we'll cover in upcoming modules. Let's start with a function called getStuff. It takes two parameters: the first is a regular input parameter, and the second is the callback function to invoke once the getStuff function has completed. A Node convention is that the callback parameter is always the last parameter passed to the asynchronous function. Here that callback is a named function called handleResults. Another convention around the use of callbacks is error handling. The first value passed to the callback should always be an error parameter. While Node does support JavaScript's try/catch syntax, it's much more common to report an error by passing a value as the first parameter to the callback. Additionally, it's very convenient to verify whether a function succeeded by checking for an undefined or falsy error value at the top of your callback function. In this example, we used a named variable, handleResults, to define the function that was later passed as a callback to getStuff. This is certainly viable; however, strictly abiding by this approach will leave you with many, many functions that are only used once in your code. For simple callbacks, or those that are only referenced once, it is very common to use anonymous functions as callbacks. In this case, the anonymous function is defined within the parameter list of the calling function. Here's an example of the previous handleResults function being added to the getStuff parameter list as an anonymous function. Another benefit I've found when using anonymous functions is that they can take advantage of JavaScript's support for closures: as you cascade down a series of functions and callbacks, you continue to have at your disposal all the variables created along the way. Let's take a look at some simple examples of writing asynchronous JavaScript in Node.js using callbacks.
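A minimal sketch of these conventions, using the slides' placeholder names getStuff and handleResults (the function bodies are assumptions):

    // the callback is the last parameter; its first argument is an error
    function handleResults(err, results) {
      if (err) {
        return console.log('Error: ' + err.message);  // check for a falsy error first
      }
      console.log('Got ' + results.length + ' results');
    }
    getStuff('some input', handleResults);            // named callback

    // the same call with an anonymous function as the callback
    getStuff('some input', function (err, results) {
      if (err) {
        return console.log('Error: ' + err.message);
      }
      console.log('Got ' + results.length + ' results');
    });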

Demo: Writing Asynchronous Code in Node.js

Okay, so now we're going to take a look at writing some asynchronous code for Node.js using callbacks. To start, we're going to keep it simple. For now, what I want to focus on is the function call and the callback, so what I have here is a function called evenDoubler. The way it works is that it will double the number you pass in, but only if that number is even; if you pass in an odd number, you're going to get an error. And each call to evenDoubler is going to take a random amount of time, something less than one second, but random each time. If I want to invoke the function evenDoubler, I can call it here. I pass in the number that I want to double, and then the callback to receive the results when the call is done. And this is where the random amount of time comes into play: that's when handleResults will be called. The function handleResults takes three parameters. Now, when we looked at the slides, the convention we saw is that the first parameter to the callback indicates whether or not there was an error, and you'll see here we have a variable called err. After that, you can really have as many parameters as you want in your callback, and in this case I have two more. One is the results, which will be the doubled number if there wasn't an error. The other is how long this particular invocation of evenDoubler took to run. Here we inspect err, and if we find an error, we log that to the console. Then, if there's not an error, we print the results and how long it took to calculate them. So if we invoke it here for an even number, let's see what we get. We'll run it. We got our dashed line here; that printed first. Even though we invoked the function first, the first thing to reach the console was our dashed line. That's because the invocation was going to take a certain amount of time to complete, and while that was completing, Node went ahead and printed the dashed line to the console. Then, when evenDoubler was done doing its work, it called the handleResults callback, which inspected the error value; it was null in this case, which is what we would want to see, because this is indeed an even number. And then it printed to the console the fact that the number 2 doubled is 4, and it took 604 milliseconds. Now, like I said, that number is random; if we run it again, you're going to get a different number. So that's what that looks like. Now let's add another call right underneath it. This time, we will give it an odd number, which should trigger an error. And we'll run both. Okay, the first call, with 2, came back with the result of 4 in 378 milliseconds, and the call where we passed in 3 generated an error. If you'll notice here, we checked for the error and printed out the message from that error object, and got "odd input"; so that was the error. Let's add another one, save that, and run it. Now this is kind of interesting: we got three results, because we invoked it three times, but the results came back in a different order than the one in which we invoked the functions. We called evenDoubler and passed in 2, then immediately called evenDoubler again with 3, and then with 10, and came down here; none of them had finished, so the console.log ran and gave us the five dashes. The first of the three to complete was the one where we passed in the number 2, which gave us back 4 in 737 milliseconds. The second one to complete -- between the invocation with 3 and the invocation with 10 -- was actually this one, and so that's what was printed to the console; and the third one to complete was the one where we passed in the number 3 and got the error. The reason they finish in a different order in this particular case is that each one waits a random amount of time, and it just so happened that the invocation where we passed in 3 randomly got the highest of the three wait times. When you're dealing with asynchronous code like this, you don't control when the callback is going to be invoked. And if you're executing a series of calls to a function, whether one after the other as I'm doing, or in a for loop, you can't count on the callbacks being invoked in the order in which you called the functions. Okay, so here what I've done is I've taken our evenDoubler function call and put it inside a for loop where I'm going to call it 10 times. Just so we'll know what's happening each time I call it, I'm printing to the console that I'm about to call it, and the value I'm going to call it with. So if we move this up, you'll see they're being invoked in order as the for loop iterates, and then here's our dashed line; but the results come back in a seemingly random order: this would be where we called it when the counter was 6, and 2, and 4, and 8. If you look at the timing, though, it confirms that the callbacks actually arrived in order of their wait times. So let's say I wanted to print the message "Done" at the bottom when all of this was completely finished. If I change this from a named function to an anonymous function and keep a count of the callbacks, I might have better luck with that, so let's give that a try.
Okay, so now you'll notice that the named function is gone, replaced here with an anonymous function that I defined as part of the call to evenDoubler. So now, as I go through the for loop, I'm calling evenDoubler, passing in the parameter, and then defining an anonymous function to handle the callback. This part of the callback is the same as before, but now I'm doing one more thing. I've initialized a counter to zero out here, outside of the for loop, and then as I loop through the invocations of the callback, I increment the count. If I'm on the 10th callback, I know I'm done. It doesn't matter which value any particular callback was invoked with; I'm simply counting the number of times the callback has been invoked, and if it's been invoked 10 times, then I'm done. So let's run this and see if that works. We see our invocations in order, as before, and we see our callbacks out of order with respect to the numbers they were invoked with, but in order by time, which matches what we saw before. And now our "Done" is printed at the bottom of the output. We can take a quick look at what's inside evenDoubler and see how it was written. What it's doing is calculating an amount of time to wait. If the number passed in is odd, it uses that setTimeout function to say, "After that period of time you just calculated, I want you to call back with an error." And if the number is even, it says, "After that amount of time you just calculated, I want you to call back, but don't pass back an error; in fact, pass null for the error parameter, double the number, and pass back the wait time as well." So this is where we get the multiple parameters in the successful callback.
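Putting the pieces of this demo together, a minimal reconstruction of evenDoubler and the counting loop might look like this (variable names beyond those mentioned in the narration are assumptions):

    var maxTime = 1000;   // each call waits a random amount of time under one second

    function evenDoubler(n, callback) {
      var waitTime = Math.floor(Math.random() * maxTime);
      if (n % 2 === 0) {
        setTimeout(function () {
          callback(null, n * 2, waitTime);    // no error: result and elapsed time
        }, waitTime);
      } else {
        setTimeout(function () {
          callback(new Error('odd input'));   // odd input: report an error
        }, waitTime);
      }
    }

    // invoke it in a loop, counting callbacks to know when we're done
    var count = 0;
    for (var i = 0; i < 10; i++) {
      console.log('calling evenDoubler with ' + i);
      evenDoubler(i, function (err, result, time) {
        if (err) {
          console.log('Error: ' + err.message);
        } else {
          console.log('doubled to ' + result + ' in ' + time + ' ms');
        }
        if (++count === 10) { console.log('Done'); }  // the 10th callback is the last
      });
    }
    console.log('-----');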

The "Christmas Tree" Problem, Conclusion

Now that we've learned how to write code using anonymous callbacks, we should also be careful not to overdo it. Selecting the right combination of approaches can be crucial in structuring your Node.js application. Anonymous functions are very common and very useful, but many novice Node developers who rely on them exclusively find themselves frustrated by the "Christmas tree" problem: callbacks nested within callbacks, indenting ever deeper. Code like this can be difficult to debug and maintain, and it is often cited as a shortcoming of Node's programming model. However, the smart use of named functions, as well as modules, event emitters, and streams, all of which we'll cover in subsequent videos, will give you the tools you need to write Node applications that are no more difficult to build and maintain than those of any other server-side programming language. And besides, they're much more fun. So to conclude: in this module, we began with a brief introduction to the origins of Node and its underpinnings. We next installed Node in a Linux environment using NVM. After that, we took a brief lap around the features offered by the Cloud9 web-based IDE. Diving into the fundamentals, we then discussed Node's event loop and non-blocking I/O. We finished with a discussion of using callbacks to write asynchronous code. I hope this module has been a useful introduction to Node.js. I encourage you to stick around as we dig deeper into this exciting server-side programming framework. Thank you.

Modules, require() and NPM

Introduction, Accessing Built-in Modules

Paul O'Fallon: Hello, my name is Paul O'Fallon and I'd like to welcome you to the course An Introduction to Node.js, Module 2, Modules, require() and NPM. In this module we're going to cover how to include Node modules in your application. We'll then look at the three most common sources of Node modules, and we'll wrap up with a discussion on how to create and publish your own Node modules. So let's get started. Modules are the way to bring external functionality to your Node application. The require function loads a module and assigns it to a variable for your application to use. Modules make their functionality available by explicitly exporting it for use in other applications. A module can export specific variables, and these variables can also be functions. Sometimes a module may export an object which you can instantiate in your code. Here too, you'll notice an informal naming convention. A module which simply exports a set of variables is often assigned to a camel case variable starting with a lowercase letter, foo in this example. However, a module which is designed to be instantiated will be assigned a camel case variable with an initial capital, Bar in this example. Finally, there may be cases where you only need a single variable or function out of a large module. You can import just the one function you need by specifying the variable name immediately after the require function call. There are three main sources of modules that you can bring into your project with the require function. The first is Node's built-in modules. While Node provides a few functions in its global namespace, such as setTimeout and setInterval, there are many modules that ship with Node but must be explicitly included in your project in order to be used. The module names passed to the require function are simple string identifiers, such as fs here. Some of the built-in Node modules include fs for accessing the file system, http for creating and responding to http requests, crypto for performing cryptographic functions, and os for accessing attributes of the underlying operating system. Let's take a look at some examples of using require to access Node's built-in modules.
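Summarizing those conventions in code form (foo and Bar are the slides' placeholder module names, not real packages):

    var foo = require('foo');               // module exporting plain variables/functions
    foo.doSomething();

    var Bar = require('bar');               // module designed to be instantiated
    var b = new Bar();

    var fs = require('fs');                 // a built-in module, by string identifier
    var readFile = require('fs').readFile;  // just one function out of a module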

Demo: Accessing Built-in Modules

Okay, so we're going to take a look at a very simple example of using Node's built-in modules and including those via require. Here at the top of our script you'll see the require function; in this case we're requiring the os module. Now, to get access to the os module we didn't have to do anything special in our Node installation; it comes with Node, but it's not included in your project by default, so we have to include it by calling require('os'), and we're assigning that to the os variable. This is a module that will give you some information about the operating system your script is running on. In this case we're going to print out the host name, the 15-minute load average, which is the third element in the load-average array, and the free memory and the total memory. And I have a little function here just to convert what it gives me back to megabytes. So, it's really as simple as that for the os module. Obviously, as we go through the remaining course modules we're going to be diving into more of Node's modules; os is one of the simpler ones, but it gives you an idea of require. So now let's run this and take a look at the console. You'll see the host name, the load average, and then the memory statistics here. The os module is a handy way to get access to this information in your Node application, but more importantly, this is a very simple example of how you would require an external module, assign it to a variable, and then use it in your script.
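A minimal reconstruction of that script (the name of the megabyte-conversion helper is an assumption):

    var os = require('os');

    // convert bytes to megabytes for friendlier output
    function inMB(bytes) {
      return (bytes / (1024 * 1024)).toFixed(2) + ' MB';
    }

    console.log('Hostname: ' + os.hostname());
    console.log('15-minute load average: ' + os.loadavg()[2]);
    console.log('Free memory: ' + inMB(os.freemem()));
    console.log('Total memory: ' + inMB(os.totalmem()));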

Using require() to Modularize Your Application

Another use of the require function is to access functionality located within other files in your project. In Node's module system, each of your JavaScript files is a module and can expose functionality to be required by other files. This is a great way to modularize your code, making it easier to develop and maintain. In this case the require syntax is richer and can include file-system-like semantics. For example, you can require a file in the same directory, in a subdirectory, or in another navigable directory. Note that the dot-slash prefix is always required in this case, and the .js suffix of the file is omitted. Aside from the syntax of the module name, require operates the same as before, meaning that you can still require a single variable from another JavaScript file like this. The way you make variables available to other JavaScript files is by assigning values to the module.exports object. For example, let's take a look at one.js. It has the variable count assigned the value of two, and doIt assigned a function. We've made the function available to external callers by adding it to module.exports. Similarly, the variable foo, with the value bar, is also exported. Now, in two.js we require the file one.js using this syntax. Next we can invoke the function doIt, since it was exported in one.js, and we have similar access to foo. However, since the count variable in one.js was not exported, it is not available in two.js. Only those variables defined as a part of module.exports are available externally. Let's take a look at an example of exporting and requiring variables between JavaScript files.
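A sketch of the two files just described (the body of doIt is an assumption):

    // one.js
    var count = 2;                        // never exported: private to this file
    module.exports.doIt = function () {
      console.log('doing it!');
    };
    module.exports.foo = 'bar';

    // two.js
    var one = require('./one');           // note the ./ prefix; no .js suffix
    one.doIt();                           // works: doIt was exported
    console.log(one.foo);                 // 'bar'
    console.log(one.count);               // undefined: count was not exported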

Demo: Accessing Application Files with require()

Okay, so now we're going to take a look at using require to import one JavaScript file into another. To start off, we're going to look at a file called mathfun.js. What we've done is take the evenDoubler function from our last module and bring it forward into mathfun.js; the maxTime variable and the function definition are identical to what we looked at last time. But now we've added a line here assigning evenDoubler to module.exports, which should make it available to other scripts that want to use it. And while we were here, we decided to go ahead and set a foo variable to the value of bar. Treated as a module, this file has exported two variables: evenDoubler and foo. So now let's go see what it looks like to require that in another script. Up here you can see we've called the require function and passed in ./mathfun. The dot-slash says look in the current directory and find me a JavaScript file named mathfun; we don't put the .js here, but that's assumed. The results handling and the for loop are really just carryovers from the prior video, so I'm not going to go through those. But what you'll notice here is that when I invoke evenDoubler now, I am prefixing it with mathfun, because that function has been surfaced through this mathfun variable. So we're calling mathfun.evenDoubler, just like we called evenDoubler before. And then down here we're also going to access the foo variable in mathfun, for which we should get back bar, and, even though it shouldn't work, we're going to go ahead and try to print out the value of maxTime. If you'll remember from mathfun.js, it is set there but not exported, so it should not be visible inside this file. So let's run this. Okay, I'm not going to go over the bulk of the output; it's just a carryover from last time. You'll see we printed out our invocations and then our results down here. But you'll notice that the function calls were made, so the calls to evenDoubler were made in the mathfun module; when we accessed the foo variable we got the value bar back, like we expected; and when we tried to access maxTime we got undefined, because that variable was not exported. So that is a quick look at importing one JavaScript file into another using the require statement.

Finding 3rd Party Modules via NPM

The final source for Node modules we're going to cover is the Node Package Manager, or NPM, registry. It is home to many third-party modules available for download and use in your Node applications. Modules are installed from the NPM registry by using the npm command that is installed with Node. Running npm install and then the module name will download and install that module into a node_modules folder inside your project. Modules that have been downloaded and installed this way can be required with the same simple string identifiers as Node's built-in modules. Node understands how to traverse the node_modules folder structure to load the appropriate JavaScript code. Similar to loading a single function from a file, you can also load a specific file from a module by calling require with the relative path to that file within the module's directory structure. While technically feasible, this should be done with extreme care: pulling a single file from deep within the directory structure of a third-party module may introduce a level of coupling that the module author never intended. Finally, some Node modules provide more than variables and functions you can access within your Node application; they also provide utilities you can invoke from the command line. Since these modules have a scope beyond any one application, you will want to install them outside of your current project's directory tree. You can do this by invoking npm install with the -g flag, for global. This will install the module, along with the appropriate command line executable, on your path, so they are available both inside and outside of your project. Some examples of Node modules that provide command line utilities include the express web framework, the Mocha test framework, and the module provided by Microsoft for their Azure cloud platform. Let's take a look at installing some third-party modules using NPM.

Demo: Installing and Using 3rd Party Modules

So now we're going to take a look at using the Node Package Manager, or NPM, registry to search for and install third-party modules for use within our Node applications. A great place to look for modules that might be of interest to you is the npmjs.org website, where you can see statistics about the module ecosystem in general and also some stats about particular modules: the most depended-upon modules, which arguably would be the most popular modules, since they're the ones that other modules depend on the most; the most recently updated; etcetera. Now, the module that we're interested in this time is the request module, which is the second most depended-upon module. When looking at any particular module, you'll be able to go to a webpage and see some download statistics and general information about the module, including its read-me file. And so we've decided we would like to install the request module, which is a simplified HTTP request client. So let's go back to our project. Here we have a very simple script which uses require to load the request module and assign it to the variable request. We won't go into a whole lot of detail about what it does; I do suggest you look into it further. It really is a great module, and it's also very simple to use and simple to explain. If you invoke the request function, the first parameter you pass is the URL that you would like to retrieve, and the second parameter is a callback. The callback takes three parameters: of course it follows the Node convention of taking the error parameter as the first parameter, and then it has two others, the response object and the body, which is the text of the response. Now, if we were to run this right now, we would get an error, because we haven't actually downloaded and installed the request module from the NPM registry. So we need to download and install it locally, and the way we'll do that is with the npm command from the command line. First we need to be sure that we're in the right directory, the one containing the script file we want to run, so let's go down into it. Now we can run the npm install command, and we want to install request. Okay, now that the module's been installed, we should be good to go, and we can try to run our script and see what we get. And what we have is the HTML and JavaScript that make up the Pluralsight homepage, but we got it by using the request function, which came as a part of the request module: a third-party module that we downloaded from the NPM registry.
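A minimal sketch of that script (the URL is assumed from the demo's description):

    var request = require('request');    // installed first with: npm install request

    // request() follows the error-first callback convention
    request('http://www.pluralsight.com', function (err, response, body) {
      if (err) {
        return console.log('Error: ' + err.message);
      }
      console.log(body);                 // the HTML of the page
    });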

Publishing Your Own Module, Conclusion

Publishing your own modules to the NPM registry is very easy and is a great way to share your work with others. It's as simple as adding one extra file to your Node project and a couple of extra npm commands. The file you need to add to your project root is package.json. This file describes your module to NPM and specifies how it should be installed when downloaded from the registry. It has only a couple of required fields: name and version. NPM encourages the use of semantic versioning in your version numbers. There is another set of optional fields that are primarily used to describe your module on the NPM website; these will help others find your module, as well as your source code repository. If your module has any dependencies, you should specify those here as well. You'll notice that exact version numbers are not required; ranges are also supported (see the sample package.json at the end of this section). When NPM installs your module, it will also download and install its required dependencies. Main is where you define the entry point into your module; this is what is executed when someone requires it. Once you have this file ready, you'll need to run npm adduser to create an account on the NPM registry. With that in place, you simply run npm publish . from your project's root directory to publish the module to NPM. One additional step I recommend is to move to an empty directory, run npm install on your module, and try to use it in a sample Node application. I know from experience that it is easy to upload a broken module and not realize it, because all of the unit tests passed in the working directory before you published it. So, to conclude: in this module we discussed how to include Node modules in your project using the require statement, and the three common sources of Node modules: the built-in modules, your own application's JavaScript files, and the NPM registry. And we wrapped up with a brief discussion of how to define and publish your own module to the NPM registry. I hope this module has been a useful introduction to Node's modules, the require statement, and NPM. Thank you.
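For reference, here is a minimal package.json along the lines described above (all names and values are hypothetical):

    {
      "name": "my-module",
      "version": "0.1.0",
      "description": "A short description for the NPM website",
      "repository": {
        "type": "git",
        "url": "https://github.com/someuser/my-module.git"
      },
      "dependencies": {
        "request": "2.x"
      },
      "main": "./index.js"
    }

After that, npm adduser (once) and npm publish . from the project root are all it takes.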

Events and Streams

Introduction

Paul O'Fallon: Hello. My name is Paul O'Fallon, and I'd like to welcome you to the course An Introduction to Node.js, Module 3, Events and Streams. In this module we'll discuss the differences between callbacks and events. We'll look at Node's EventEmitter class, as well as a couple of patterns for using event emitters. We'll then move on to readable and writable streams and piping between streams. So let's get started.

Events and the EventEmitter class

So far, we've looked at callbacks as a way to implement asynchronous, non-blocking code. Node provides another way to achieve this with events. Here is a callback example similar to what we've seen so far: a function which invokes a callback with an array of results. And here is a similar snippet of code written using events. In this case, the getThem function returns a value immediately. The value is an instance of the EventEmitter class. This results object has an on function. Here we are specifying that for each item event, execute this function, passing in the current item. Then, on the done event, or when there are no more results, invoke this function; and if there is an error, invoke this function and pass in the error that occurred. Some of the key differences between these two approaches are as follows. In the callback model, you make a request and provide a function to be called when the request is completed: one request, one reply. The event model, however, is more of a publish/subscribe approach. You can invoke the on function repeatedly to provide multiple functions to invoke on each event, in essence subscribing to the events. In the callback approach, you don't receive any results until you receive all the results. In the example above, the callback will not be invoked until the entire items array is ready. If these items arrive slowly, the callback will not be invoked until the last item has arrived. It also means that the getThem function will be storing the entire list of items in memory while accumulating them, prior to invoking the callback with the entire array. On the other hand, in the evented example above, functions associated with the item event will be invoked for each item. This gives you the opportunity to act on the first item as soon as it arrives, and the second item, and so forth. It also means that the getThem function is not accumulating the items in memory. Finally, the callback scenario, if only by convention, is an all-or-nothing proposition. While technically possible to invoke a callback with both error and items parameters, this is not the convention and would not be expected: if the error parameter is set, then the call is assumed to have failed. In the evented scenario, however, an error is emitted as a separate event. Notice that the item and done events do not pass an error parameter as the first value. In this evented approach, the error can be emitted instead of any item events, or after some item events have already been emitted. This access to partial results may be desirable in some situations. The EventEmitter class is provided by Node as a construct for building these event-driven interfaces. The code that is subscribing to events, like the code on our last slide, will call the on function of the EventEmitter instance and specify the event being subscribed to. And then the code publishing events will call the emit function and specify the event being emitted. Now, these events themselves are simply strings and can be of any value. In our previous example, we defined three events: item, done, and error. When emitting an event, you can also provide additional arguments after the event name; these will be passed as parameters to any functions subscribed to that event. In our previous example, this included the item itself that was passed to the item event, as well as the error object passed to the error event. This set of events and their arguments constitutes an interface, or contract, between the subscriber and the publisher, or emitter. There are two common patterns I've seen for using EventEmitters in Node. The first is as a return value from a function. This is what we saw on the earlier slide. In this case, an instance of EventEmitter is created directly and returned from a function.
Another common pattern is when an object extends EventEmitter and emits events while also providing other functions and values. Let's take a look at an example of both of these EventEmitter patterns.
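As a sketch, the two sides of that contract look roughly like this (getThem and the event names come from the slides; everything else is an assumption):

    // subscribing: getThem returns an EventEmitter immediately
    var results = getThem(query);
    results.on('item', function (item) { console.log('got: ' + item); });
    results.on('done', function () { console.log('no more items'); });
    results.on('error', function (err) { console.log('error: ' + err.message); });

    // publishing: somewhere inside getThem
    emitter.emit('item', item);          // one event per item, as it arrives
    emitter.emit('done');
    emitter.emit('error', err);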

Demo: Returning an EventEmitter from a function

So, in our EventEmitter examples, the first thing we're going to look at is the first pattern we talked about on the slides, where you actually instantiate an EventEmitter and return it from a function call. Here we have a getResource function which takes in a number and returns an instance of an EventEmitter that emits three events: a start event, a data event, and an end event. This is a good demonstration of how these event names can be really whatever you want; on our slides we had item, done, and error, and here we have start, data, and end. And what we're saying is: whenever I see the start event, I want you to log this to the console. Whenever I see the data event, I want you to log this to the console, including the data that was sent. And then, on the end event, let's log this to the console. Right now I've hidden the code inside this getResource function, since we're focusing on the subscribing part, so for now, let's run this and see what we get. So we'll bring this up here, and you can see we have the "I've started", which was the reaction to the start event, then five data events, and one end event that says I'm done. Now, the five is because we passed in five; that's the way this function is coded: whatever number you pass in, that's how many data events you're going to get back. That was us subscribing to those events and printing something out to the console. Now let's take a look at the getResource function itself, and see what it does. The first thing it does is instantiate a new EventEmitter. You'll notice that in order to do that, we had to require the EventEmitter, and to do so we used Node's built-in events module. Because we only wanted the EventEmitter from that events module, we specified that as part of the require and put it in our own EventEmitter variable. So we instantiate one of those, and then this process.nextTick is something we haven't seen before. In our first module we looked at setTimeout and setInterval; process.nextTick is similar, but what it really says is: on the very next tick of the event loop, I want you to run this function. In this example we're really using it to emulate an asynchronous function, because we want the return statement here to be reached before we start emitting events, as would normally be the case if you were, say, talking to the file system or a database. So what we say is: on the next tick of the event loop, I want you to emit a start event; it's our EventEmitter, and we're emitting a start event. And then we set an interval, saying that every 10 milliseconds I want you to execute this function, and in this function we're emitting our data event. We're keeping a count of how many data events we've emitted so far; we start at zero and keep counting. And if the count is equal to the number that was passed in, we emit an end event and then clear the interval to stop the function from being executed. This is an example of the first pattern of using an EventEmitter. Now let's take a look at the second pattern.
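A reconstruction of getResource, following the narration above (the logged strings are assumptions):

    var EventEmitter = require('events').EventEmitter;

    function getResource(count) {
      var e = new EventEmitter();
      process.nextTick(function () {          // start emitting only after we've returned
        var i = 0;
        e.emit('start');
        var timer = setInterval(function () { // every 10 ms, emit a data event
          e.emit('data', ++i);
          if (i === count) {                  // after 'count' data events, we're done
            e.emit('end');
            clearInterval(timer);
          }
        }, 10);
      });
      return e;
    }

    var r = getResource(5);
    r.on('start', function () { console.log("I've started!"); });
    r.on('data', function (d) { console.log('  I have data --> ' + d); });
    r.on('end', function () { console.log("I'm done!"); });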

Demo: Inheriting from EventEmitter

In this second example, we're going to take a look at the second pattern we discussed earlier, where we have an object that extends the EventEmitter class. What we've done is take our original example and basically cut it in half, separating the code that emits the events out into its own JavaScript file. So let's start by taking a look at that file. Here you'll see we have a function called Resource, which takes a number; I've hidden the contents of the function for now, and we'll get to that in just a minute. But what this function basically does -- this is our object, and our object extends, or inherits from, EventEmitter. In order to do that, we're using Node's util module, which we've required here. It's a built-in module, and it has an inherits function. What we've said is that we want our Resource object to inherit from the EventEmitter object, which gives us access to the on function and the emit function. And, because we want scripts that include this file as a module to have access to our Resource object, we set module.exports equal to Resource. Now, if we look inside the code for the Resource function, you'll see that it's almost identical to what we saw in our getResource function: process.nextTick is here, and the events are emitted here. The only difference is that, in our previous example, we were instantiating an EventEmitter and using that instantiated variable to call the emit function; in this case, because our Resource function inherits from EventEmitter, it is our Resource object itself that's doing the emitting of the events, and so we need to use the this variable to access the current Resource instance that needs to do the emitting. This is all packaged up in Resource.js. So now let's take a look at the JavaScript code that is using this. In our main script file we have a require statement, and here, because we're requiring another JavaScript file in our project, we have the dot-slash and then the name of the file, Resource.js, and we're storing it in a Resource variable. So in this example, instead of simply calling a function and getting an instance of an EventEmitter back, we're instantiating our Resource object and storing it here in the variable r. But once we do that, we're subscribing to the same events we did before. So let's run this and see if we get what we expect. And, yes, as you would expect, we're getting the "I've started" from the start event, then seven data events, because we passed in a seven, and then the end event. So the net effect, at least in the console, is the same. These are the two patterns that you'll see from time to time: you may have an actual object that you instantiate that emits events, or you may have a case where you call a function and get an EventEmitter back.
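A reconstruction of Resource.js and the script that uses it, per the narration (the self variable is an assumption, used to keep a reference to the Resource instance inside the nextTick callback):

    // Resource.js
    var EventEmitter = require('events').EventEmitter;
    var util = require('util');

    function Resource(count) {
      var self = this;                    // the Resource instance does the emitting
      process.nextTick(function () {
        var i = 0;
        self.emit('start');
        var timer = setInterval(function () {
          self.emit('data', ++i);
          if (i === count) {
            self.emit('end');
            clearInterval(timer);
          }
        }, 10);
      });
    }
    util.inherits(Resource, EventEmitter);  // Resource now has on() and emit()
    module.exports = Resource;

    // main script
    var Resource = require('./Resource');
    var r = new Resource(7);
    r.on('start', function () { console.log("I've started!"); });
    r.on('data', function (d) { console.log('  I have data --> ' + d); });
    r.on('end', function () { console.log("I'm done!"); });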

Readable and Writable Streams, the Pipe function

Building on the concept of an EventEmitter is something that Node calls a stream. A stream extends the EventEmitter class and implements an agreed-upon set of events and other functions. These events provide a unified abstraction for dealing with multiple types of dataflow, including network traffic such as HTTP and TCP traffic, file I/O, standard in/out and error, and more. Each stream is an instance of either a ReadableStream, meaning something that you would read from, a WriteableStream, something that you would write to, or both. Also, because of the standard events and functions exposed in both Readable and WriteableStreams, a ReadableStream can be piped to a WriteableStream. This is conceptually similar to piping commands in Unix, where the data read from the ReadableStream is piped to the WriteableStream. Node handles the backpressure, to address the scenario where a ReadableStream provides data faster than a WriteableStream can consume it. The interface of a ReadableStream includes a Boolean indicating whether the stream is currently readable or not; a series of events that are emitted when new data arrives or when there is no more data, etc.; a series of functions to pause, resume, and destroy the stream; as well as the pipe function. And the interface of a WriteableStream includes a similar Boolean indicating whether the stream is currently writeable; events that are emitted, such as drain when it is safe to write to the stream, and pipe when the stream has been passed to a ReadableStream's pipe function; and functions to write data to the stream and to terminate it. While you can certainly interact with streams directly by subscribing to events and invoking functions directly, the real power of streams comes from the pipe function. Now that we've briefly covered some of the functions and events in Readable and WriteableStreams, let's see how these work together to provide the pipe functionality. When you invoke the pipe function on a ReadableStream, you pass as a parameter the WriteableStream you want to pipe to. This in turn emits the pipe event on the WriteableStream. The pipe function then begins an orchestration of events and functions between the two streams. When data arrives at the ReadableStream, the data event is emitted and the write function on the WriteableStream is invoked with this data. If at some point the write function returns a false value, indicating that no more data should be written, the pause function of the ReadableStream is called to stop the flow of data. Then, once the WriteableStream is ready to receive more data, the drain event is emitted and the resume function on the ReadableStream is invoked. Once the ReadableStream is finished, the end event is emitted and the end function on the WriteableStream is invoked. This elaborate dance happens behind a very simple interface, and one that is consistent across network, file, and process communication. Let's take a look at some stream examples.
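As a rough, simplified sketch, this is approximately the wiring that readable.pipe(writable) manages for you (the file names here are assumptions, and error handling is omitted):

    var fs = require('fs');
    var readable = fs.createReadStream('input.txt');
    var writable = fs.createWriteStream('output.txt');

    readable.on('data', function (chunk) {
      if (writable.write(chunk) === false) {
        readable.pause();   // the writable side is backed up; stop the flow
      }
    });
    writable.on('drain', function () {
      readable.resume();    // safe to write again; restart the flow
    });
    readable.on('end', function () {
      writable.end();       // no more data; terminate the writable stream
    });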

Demo: Readable and Writable Streams

So first, we'll take a look at a ReadableStream. To do this, we're going to use the request module. And if you remember from Module 2, the request module is a third-party module that we've installed from the npm registry. We've already done that here, and we're going to require request and store it in the request variable. One of the nice things about request is that it understands and makes very good use of streams. And so here, by simply calling request and passing it a URL, it will return to us a stream. And then, because, if you remember, streams inherit from EventEmitters, we can use the On function to subscribe to some of the events that are emitted from a ReadableStream. So the Data event is emitted whenever new data has been received, and the End event is emitted when there's no more data to be read. And so we subscribe to these two events, and the stream that request gives you back is actually the body of the response to the request. So, in our example, it should be the HTML of the Pluralsight homepage, and as the data for that homepage comes back, we'll get some number of Data events; the parameter passed to the function we're asking it to invoke on each Data event is the actual data that was received. And so we want to log that to the console, along with the angle-bracket tags -- if you want to call them that -- that we added. So let's go back to the bottom. "Done" is there, so our End event was emitted and we printed Done. Now let's scroll back through and look for our data with the angle brackets. So if we see (scrolling sound) -- there's one. So that means this was the start of one of the chunks. (Scrolling sound) And we keep scrolling. And there's another one. (Scrolling sound) And so on and so forth. So what happens is, as the HTML is being returned from the HTTP request, those Data events are being fired, and pieces of the HTML are being sent to the function registered to that Data event. And, of course, we're just printing them to the console. This is how you would interact with a ReadableStream. Next we'll take a look at a WriteableStream. In this case, the example we're going to use for the WriteableStream is process.stdout. The process object has several streams that it makes available: one is standard in, one is standard out, and another is standard error. So, in this case, we wanted a WriteableStream, so we're going to use standard out. And what we're doing is simply writing out "Hello," and then writing out "World." We're also going to inspect the writable Boolean just to make sure that standard out really is writeable. So, let's run this. And it's simple enough. Is it writeable? Yes, it prints out true. And the write function does what you would expect: it takes each of those strings and simply writes them to standard out.
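A condensed sketch of both halves of this demo might look like the following (the URL and log text are assumptions):

    var request = require('request');   // third-party module from npm

    // Readable: request() returns a stream representing the response body
    var s = request('http://www.pluralsight.com');
    s.on('data', function (chunk) {
      console.log('<<<data>>> ' + chunk);   // our "tags" marking each chunk
    });
    s.on('end', function () {
      console.log('Done');
    });

    // Writable: process.stdout implements the writable-stream interface
    console.log(process.stdout.writable);   // true
    process.stdout.write('Hello, ');
    process.stdout.write('World!\n');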

Demo: Piping Between Streams

So now we're going to take a look at piping a ReadableStream to a WriteableStream, and so we're basically going to combine the previous two examples together. Here, in this case, we have our request module that we've required and stored in our request variable. And similar to our ReadableStream example, we're going to call request on the Pluralsight homepage and store the stream that's returned in the variable s. But now, instead of listening for events on that ReadableStream, we're going to simply call the pipe function, and we're going to pipe it to our WriteableStream -- in this case, the WriteableStream from our second example, process.stdout. And so what this should do is take the HTML returned from the request and simply pipe it to the console. So let's run this and see what we get. Piping the request to standard out simply dumps everything that came back straight to standard out. Now, of course, you could also simply chain these together: because request is going to return to you a stream, you could simply, on the end of the request call, do .pipe and then pipe it to the WriteableStream. This will, in essence, do the exact same thing as the code above it that's commented out. But let's just run it to prove that to ourselves. Yep, and so it does exactly the same thing. Now here is an example that's a little bit different than before. So now what we're going to do is take the same request of the same webpage, but now, instead of piping it to the console, we're going to pipe it to a file on the file system. And to do this, we're using the fs module, which is another built-in Node module; we're bringing it in and storing it as the fs variable, and fs is for the file system. And so here what we're doing is calling the createWriteStream function on the fs module, which says: I want you to create me a writeable stream that will store its contents in this file, and return me that stream. Because the createWriteStream function returns a stream, we've simply put that function invocation as the parameter to the pipe function. So here we're saying, okay, I want you to request this website, and I want you to pipe it to the stream returned by the createWriteStream function, which will create this file. So when we're done, we should be able to run this one line of code, download the HTML from the homepage, and write it to a file. Let's try that. ( Typing Sounds ) It ran and it's complete, so now let's do an ls and see what we have. So we have our pluralsight.html file. In fact, we can see that it does have a size to it, and if we want to print it to the console, we can... ( Typing Sounds ) We can see that the file does indeed contain the HTML from the Pluralsight homepage. So, in one line of code you can download a webpage and write it to a file. What we're going to do for our last example is take our previous one one step further. If you remember from our slides, we mentioned that a stream can be either a ReadableStream, a WriteableStream, or both. In this case we're going to show a stream that is both readable and writeable. So what we've done is taken our "request a webpage and write it to a file" example, and we've injected into the middle of it the ability to gzip the data on the way into the file. Node has another built-in module called zlib, and so we've required that. And what we have is our request function call, which returns a stream, and on that stream we're calling pipe; what we're passing to pipe is actually the result of zlib.createGzip. What createGzip does is return a stream that's both readable and writeable, one that will read in uncompressed content and output compressed content. So we call that function, get that stream back, and pass it to pipe. The pipe function, as its return value, will return the stream that you passed into pipe.
And the reason that it works that way is explicitly so you can chain these together. So, when we pass the stream that was returned from the createGzip function into pipe, the return value of that pipe function is actually the gzip stream. So on that stream, we can call pipe again and pipe that output to our createWriteStream function call. But in this case, we're going to name the file pluralsight.html.gz, because in this case it's gzipped. So, we'll execute our request, get the data back, pipe it through our gzip stream, and then pipe it on to a file. So, let's run this and see what we get. ( Typing Sounds ) Okay. So it's done. Now let's do an ls (typing sounds) and see what we have. Okay. So now we do have a pluralsight.html.gz. If you'll notice, it is a good bit smaller than the .html, which would lead us to assume that it is indeed compressed. But now, let's actually look at the contents of that file, just to prove it to ourselves. So if we use a zcat command... ( Typing Sounds ) ...to print that out. And there you go. You'll see that it is the actual HTML. You can get very, very creative with this chaining of pipes together, and it just shows how much you can do in a very small amount of code with the benefit of this pipe command. And if you remember from the slides all the back and forth with the pause, and drain, and resume, and all that -- all of that is in place here in between each one of these pipes to manage the flow of the data from one to the other.
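The whole chain described above fits in a few lines; here it is as a sketch (the URL and output file name are as in the demo):

    var fs = require('fs');
    var zlib = require('zlib');
    var request = require('request');

    // pipe() returns the stream you pass in, so the calls chain:
    // readable (http response) -> readable/writable (gzip) -> writable (file)
    request('http://www.pluralsight.com')
      .pipe(zlib.createGzip())
      .pipe(fs.createWriteStream('pluralsight.html.gz'));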

Conclusion

So, in conclusion, in this module we examined the differences between callbacks and events for implementing non-blocking asynchronous code. We then looked at the EventEmitter class and two common patterns for using it. From there, we moved on to Readable and WriteableStreams, including piping data from one to the other. I hope this module has been helpful in introducing you to the concepts of EventEmitters and streams in Node.js. Thank you. ( Silence )

Accessing the Local System

Introduction, The Process Object

Paul O'Fallon: Hello, my name is Paul O'Fallon and I'd like to welcome you to the course An Introduction to Node.js, Module 4, Accessing the Local System. In this module we'll examine the ways Node gives you to interact with your local environment. This includes Node's "process" object, interacting with the file system, which will lead us to a brief conversation on Node's buffer class, and then we'll wrap up with a look at the os module. So, let's get started (silence). The "process" object provides a way for your Node application to both manage its own process as well as other processes on the system. It's available by default in your Node application; it does not need to be required. The "process" object contains a variety of variables and functions, including a set of streams for accessing standard in, out and error. The first is a readable stream and the latter two are writable streams. It also provides a series of attributes about the current process, such as its set of environment variables, command line arguments, process ID and title, uptime and memory usage, current working directory and more. It provides a set of functions. Most of these, such as abort, change directory, and set gid and uid, act on the current running process. The kill function, however, requires a process ID as a parameter and can be used to terminate other processes on the system. Finally, the "process" object is an instance of the event emitter class. It emits an exit event when the process is about to exit. It can also emit an uncaughtException event if an exception bubbles all the way up to the event loop. It also emits all of the standard POSIX signal events, such as SIGINT, etcetera. Let's take a look at some examples of using Node's "process" object (silence).
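As a quick tour, here is a sketch of some of these attributes and events (the output will obviously vary from system to system, and the HOME environment variable is just an example):

    // "process" is a global -- no require needed
    console.log(process.pid);             // process ID
    console.log(process.title);           // process title
    console.log(process.argv);            // command line arguments
    console.log(process.env.HOME);        // one of the environment variables
    console.log(process.cwd());           // current working directory
    console.log(process.uptime());        // uptime, in seconds
    console.log(process.memoryUsage());   // heap and resident-set sizes

    process.on('exit', function () {
      console.log('about to exit');       // fired as the process shuts down
    });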

Demo: The Process object

So, for our first process example we're going to take a look at the three streams that are provided by the "process" object. Now, we spent a lot of time looking at streams in an earlier module, so I'm not going to go into the ins and outs of streams themselves, and in fact in that earlier module we did use the process.stdout stream in our examples. But what we're going to do here is look at a modified version of an example from the Node.js documentation, and we're going to try to hit on all three of the streams: standard in, standard out and standard error. And so what we're doing here is we're going to actually be listening for data coming in on standard in, and when it does, we're going to turn around and write it to standard out; and when the standard in stream is closed, we're going to write another message, but this time to standard error. So these are the underlying events and functions that make up the stream interface, but here we're interacting with them directly, as opposed to, say, piping to or from one of these. Another thing about standard in is that this stream starts paused, so you must call resume in order for it to begin receiving information. So we're going to run this, but in Cloud9 we're going to run this differently than we've run some of our previous examples. If you look down in the lower right hand corner you'll see an "open a terminal" button, and if you click on that, what you'll get in your tabbed interface is another tab that is an actual terminal session into your workspace. And since we want our process to read from standard in, we really need to run it in a shell. So this gives us the ability to do that, and we're down inside the appropriate directory, so let's run our script (typing). And so now as we type into standard in (typing), you'll see that we are triggering the event, which in turn is causing us to write to standard out. And for as long as we were to do this, we would continue to see the data be read from standard in and written to standard out. And then if we do a Ctrl-D to exit out of the program, you'll see the "ending" message, which was printed to standard error. This program worked like we expected: it was listening for when I typed in "hello" and "world," it triggered the data events on standard in where we turned around and wrote to standard out, and then my Ctrl-D ended the standard in stream, which caused the end event, where we wrote to standard error that we're ending. So now we're going to take a look at some of the events provided by the "process" object. We've taken most of the code from our earlier example and copied it here. So we still have our standard in, standard out, standard error code from before, but we've added a couple of extra items. Now we're also listening for one of the POSIX signals -- we're listening for SIGTERM -- and if we receive that event, we're going to write to standard error. We've also added a console.log statement here to log the process ID of this particular process. So let's run this and see if we can trigger the SIGTERM event. So we're going to run the program here (typing). And remember the one line we added for the console.log to print out the process ID -- that's what this is here. Now, if we interact with it, it works the same as it did before, all the way down to: if we do Ctrl-D it will end. But we don't want to do that, so let's run it again. So now what we'd like to do is see if we can cause the SIGTERM event to happen. To do that, we're going to go to another terminal window and issue a kill command to this process and see if we can trigger it.
So this is process 10200, and so if we go here (typing) and from the UNIX command line issue the kill command with -TERM and the process ID of our Node process, when we go back -- you can see here, "Why are you trying to terminate me?" So running the kill command from the command line and specifying -TERM caused the SIGTERM event to be emitted from the "process" object, which then invoked our function, and we printed this to standard error. Now, the program itself is still running, so we can still interact with it, and Ctrl-D will end it.
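Here is a minimal sketch of this demo, assuming the output strings described above:

    // Echo standard in to standard out; stdin starts paused, so resume it
    process.stdin.on('data', function (chunk) {
      process.stdout.write('data: ' + chunk);
    });
    process.stdin.on('end', function () {
      process.stderr.write('ending!\n');   // Ctrl-D closes stdin
    });
    process.stdin.resume();

    // React to the SIGTERM POSIX signal, e.g. from: kill -TERM <pid>
    process.on('SIGTERM', function () {
      process.stderr.write('Why are you trying to terminate me?\n');
    });

    console.log('Process ID: ' + process.pid);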

Interacting with the File System

Interacting with the file system in Node is done via the built-in fs module. Many of the functions provided by this module are wrappers around the POSIX functions. They come in both asynchronous and synchronous flavors. This is the one area of the Node standard library where you will see a significant collection of synchronous functions. The fs module also provides a couple of stream-oriented functions: createReadStream opens a file for reading and returns a readable stream, and createWriteStream opens a file for writing and returns a writable stream. These are useful for integrating with other streams, as we did in our last module. Lastly, the fs module provides a watch function which will watch a file or directory for changes. The function returns an event emitter which emits a change event whenever a file changes. Let's take a look at interacting with the file system using the fs module (silence).
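A small sketch of the stream and watch functions just mentioned (the file names are assumptions, and input.txt is assumed to already exist):

    var fs = require('fs');

    // Stream-oriented helpers: copy a file by piping one stream to the other
    var rs = fs.createReadStream('input.txt');
    var ws = fs.createWriteStream('output.txt');
    rs.pipe(ws);

    // watch() returns an event emitter that fires "change" events;
    // here we watch the current directory
    var watcher = fs.watch('.');
    watcher.on('change', function (event, filename) {
      console.log(event + ' detected on ' + filename);
    });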

Demo: Interacting with the File System

In demonstrating interacting with the file system, we're going to look at both the synchronous and asynchronous functions provided by the fs module, and we're going to start with the synchronous functions. Now, each of the synchronous functions has the word "sync" at the end of the function name, and so you'll see that repeated over and over again here. We start by requiring the fs module, and from there we check to see if the temp directory exists; if it does, we check to see if new.txt is in that directory, and if it is, we remove that new.txt file and then we remove the directory temp. This lets us run this script over and over again, because it cleans up after itself. We make the directory temp, we check to be sure that the directory exists, and if it does, we change into that directory. Then we create a file called test.txt and pass in this string to be the contents of the file, then we rename the file, then we print the size of the file and the contents of the file. And so this is very top-down; these are the synchronous functions, and so each one executes one after the other, no callbacks. So let's run this (silence). If we look at the output, it says "the directory exists, removing," which means that it did find the temp directory, and then it comes down here and creates the file, renames it, and then tells us that the file has a size of 35 bytes and that "This is some test text from the file" is the contents. So it's written to test.txt, then renamed to new.txt, and that's the contents. Now that we've seen what this logic looks like implemented with the synchronous functions, let's take a look at the same logic implemented using the asynchronous versions of these functions. So now we're looking at the asynchronous functions provided by the fs module. Here we've left our cleanup code at the top using the synchronous versions, because that's really just there to clean up after previous invocations of this script. So before we even go through each line of the code, you're probably thinking that this looks like the Christmas tree problem, and you would be right. This code, implemented just like this, all with anonymous functions as callbacks, does create a Christmas tree problem, and no, I would not want to maintain code that looks like this for anything longer than a demonstration. But it is a good example, and actually a tangible example, of what a Christmas tree problem can look like. You'll notice that the word "sync" is gone from all of the function names, and so here we're making the directory temp, and in the callback we are calling exists, to be sure that it exists. And in that callback we get whether or not it exists, so we'll check to be sure that it does exist, and if it does, we change into the directory, we write the file, we rename the file, we get the statistics for the file to get the size, print that to the console, and then we read the file, get the data, and print out the contents of the file. Each one of these is building on the one prior by being implemented in the prior function's callback, which gives us the Christmas tree. But let's run this and see what we get (silence). If we look at the results, we get basically the same output as before. This is an example of what it would look like to use the asynchronous functions in comparison to the synchronous functions for accessing the file system.
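Both flavors, condensed into a sketch (assuming Node 0.8-era APIs such as fs.existsSync, with the cleanup and error handling omitted for brevity):

    var fs = require('fs');

    // Synchronous flavor: top-down, no callbacks
    if (!fs.existsSync('temp')) {
      fs.mkdirSync('temp');
    }
    process.chdir('temp');
    fs.writeFileSync('test.txt', 'This is some test text from the file');
    fs.renameSync('test.txt', 'new.txt');
    console.log('File size: ' + fs.statSync('new.txt').size);
    console.log('Contents: ' + fs.readFileSync('new.txt').toString());

    // Asynchronous flavor: the same steps, each nested in the previous
    // callback -- the "Christmas tree" shape
    fs.mkdir('temp2', function (err) {
      fs.writeFile('temp2/test.txt', 'More test text', function (err) {
        fs.rename('temp2/test.txt', 'temp2/new.txt', function (err) {
          fs.stat('temp2/new.txt', function (err, stats) {
            console.log('File size: ' + stats.size);
            fs.readFile('temp2/new.txt', function (err, data) {
              console.log('Contents: ' + data.toString());
            });
          });
        });
      });
    });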

What is a Buffer?

In one of our last demos you may remember that we had to invoke a toString method on the value returned when we read a file from the file system. That's because the return value of this function was a buffer object. So what's a buffer? JavaScript has difficulty dealing with binary data, but interacting with the network and file system requires it. The buffer class provides a raw memory allocation for dealing with binary data directly. Buffers can be converted to and from strings by providing an encoding. The default encoding is utf8. When we called toString on the buffer returned from reading a file, we were using this default encoding to convert it to a string. This support for multiple encodings makes buffers a handy way to convert strings to and from base64. Let's take a look at a couple of examples using buffers (silence).

Demo: Buffers

So now we're going to take a look at a couple of examples of using buffers. But before we go through this code, I want to pop back to the file system examples from the last demo and point out where buffers were showing up. In either the asynchronous or the synchronous example, you'll notice that when we read the file from disk, we take the value that is returned by the readFileSync function and call toString on it. So that means what we're getting back is actually not a string by default, and so we're calling toString on the instance of the buffer object to get a string value to print out. This is an example of where you're getting a buffer and you need to be able to deal with buffers. We have a couple of canned examples here, where we are instantiating a new buffer and passing in the string "Hello"; this gives us a buffer b. If we want to log it to the console, just like we saw in our file system example, we'll convert it to a string. We can also convert it to a base64 string by simply passing the encoding base64 to the toString function. Because of what's returned from each of these function calls, you can actually chain this together. So here you see I'm creating a buffer, passing in "World," and then immediately converting it to base64. So if all you really wanted to do is get the word "World" as a base64-encoded string, you could do all that in one line. Something else you can do very efficiently with buffers is pull out certain subsections of a buffer. So in this case we're going to pull out the first few characters of this buffer and print them to the console. So now let's run this and see what we get. Looking at our output: first, we see "Hello," which is our buffer we instantiated here with the string "Hello" and then converted back to a string. The next console.log output is where we log the base64-encoded version of "Hello," which is here. And here we log to the console the toString where we only wanted the first two characters, which gives the "He" of "Hello."
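Here are those canned examples as a sketch, following the description above:

    // Instantiate a buffer from a string (utf8 is the default encoding)
    var b = new Buffer('Hello');
    console.log(b.toString());            // Hello
    console.log(b.toString('base64'));    // SGVsbG8=

    // Chain creation and conversion in one line
    console.log(new Buffer('World').toString('base64'));   // V29ybGQ=

    // Pull out a subsection: just the first two characters
    console.log(b.toString('utf8', 0, 2));   // He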

The OS Module, Conclusion

We first saw the os module back during our discussion of using require to include built-in modules in our Node applications. The os module provides several functions for examining the operating system on which your Node application is running. These include the default temp directory, the host name, the type of OS, the platform, architecture and release information. They also include the uptime of the system, the load average, total and free memory, the number of CPUs, as well as a list of network interfaces. Finally, the EOL variable contains the appropriate end-of-line marker for the operating system. So, to conclude: in this module we've looked at several ways Node allows you to interact with your local environment, including Node's "process" object, the fs module for the file system, a short diversion to discuss buffers, and a recap of the os module. I hope this has been a helpful look into the local system features provided by Node.js. Thank you.

Interacting with the Web

Introduction, Making Web Requests in Node

Paul O'Fallon: Hello. My name is Paul O'Fallon, and I'd like to welcome you to the course, "An Introduction to Node.js, Module 5 -- Interacting with the Web." In this module, we'll learn how to use Node as a web client, a web server, and how to extend that to include real-time integration using Socket.IO. So let's get started. First, we'll look at using Node to make requests of other websites. In a couple of our earlier videos, we used the third-party request module to fetch the HTML of the Pluralsight homepage. Here, we're going to look at doing the same thing using the HTTP module that is included with Node. It can be included in your application by requiring HTTP. The request function provided by the HTTP module takes two parameters -- an options parameter and a callback. The options parameter can be either a simple URL string or a more complex object specifying many options about the request being made. The request function returns a value, which is an instance of client request. This is a writable stream, and can be written or piped to for HTTP POST uploads. In addition to returning a value, the request function also takes a callback parameter. This callback, when invoked, is passed a single parameter, an instance of client response, which represents the results of the HTTP request. This client response is a readable stream, which can be read from or piped to a writable stream. Note that this callback is an example of where Node does not follow its own conventions, since the first parameter to the callback is not an error indicator. If you don't pass a callback to the request function, you can still retrieve the client response object: it is also provided by a response event emitted by the client request returned from the function call. If all you're doing is making a simple GET request, however, Node provides a simplified interface in http.get. Let's take a look at some examples of using Node's built-in HTTP module for making client requests.
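As a rough sketch of the shape of this API (the URL is just an example):

    var http = require('http');

    // The callback receives the response directly -- note it is NOT an
    // error-first callback, unlike most of Node's APIs
    var req = http.request('http://www.google.com/', function (res) {
      console.log(res.statusCode);     // e.g. 200
      res.pipe(process.stdout);        // the response is a readable stream
    });

    // The request object is a writable stream; end() closes it and
    // lets the request actually go out
    req.end();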

Demo: Making Web Requests in Node

So now we're going to take a look at using Node's HTTP module to make some HTTP client requests. In our first example, we're going to come down to this code here where we're calling http.request, and if you remember, I mentioned that the first parameter could be either a simple URL or an object. So for our first run, we're going to use a simple URL, and we're going to pass it a callback to be called when the response is ready, and it will pass us the response. Then we're going to log the status code of the response to the console -- which will give us a 200 or 404 or 500, the HTTP status code that is returned -- and we're also going to pipe the response to process.stdout. So when calling http.request and being returned the request object, in order to have the request actually invoked, you have to call request.end, basically closing the writeable stream it gives you when you create the request. Let's run this now and see what we get. And so here you can see in the output that we got the HTML from Google's homepage, and the JavaScript; and if we scroll (typing) up to the top we see 200, the status code that was returned, which is an okay status code. That is a very simple way to make a request and get your response back. So now let's make the same request, but instead of passing in a simple URL, let's pass in the options object that we've created here. It will end up making the same request, but you can see we've defined the values as a set of properties. So let's substitute the options variable for the simple URL and run it again, and we should get the same answer. And we do. We get the 200 response, and then if we scroll through, you'll see all of the HTML and JavaScript we got before. So if you know you're going to make a GET request, we can optimize this by just saying "get." Because you're doing a GET, the request that is returned does not have to be closed: in a GET scenario it knows you're not going to be uploading any data, so you can leave off the request.end; and in fact, you don't even really need to take the return value, so you can take that out as well. Now, let's run this. And you can see it did the same thing. Interestingly enough, I used Google in this example; in previous examples, I had used the Pluralsight homepage, so let's put that in here and see what we get. ( Typing ) Now, this is interesting. What we got back was a 301 status code, and HTML saying that this object has moved and can be found at a different URL. So a difference between the built-in HTTP request and get functions and the third-party module that we used is that these do not automatically follow redirects. If you remember, in our previous modules, when we simply used the third-party module to issue a request against Pluralsight, we got the HTML back; we didn't even really know that there was a redirect that needed to be followed. In this particular case, you're operating at a slightly lower level, where you would have to deal with those redirects on your own.
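A sketch of the GET form with an options object (the property values here are examples):

    var http = require('http');

    // The same request, expressed with an options object
    var options = {
      host: 'www.google.com',
      port: 80,
      path: '/'
    };

    // http.get ends the request for you -- no req.end() needed, and the
    // return value can be ignored for a simple fetch
    http.get(options, function (res) {
      console.log(res.statusCode);   // 200 here; a 301 redirect would NOT
      res.pipe(process.stdout);      // be followed automatically
    });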

Building a Web Server in Node

In our very first module of this course, we copied the example web server code from the nodejs.org homepage and used it to stand up a simple server. Now, we'll take another look at that code and how it leverages many of the features we've learned so far in this course. The createServer function is passed a single parameter: the callback to be invoked each time a request is received by the web server. Optionally, if no callback is provided, requests can also be received by listening for events on the server object that is returned. Even after the createServer function has returned, the server will not begin accepting HTTP requests until the listen function is called. This function can be called in several different ways, but the most common is with a port number and IP combination. When a request is made to the HTTP server and the callback is invoked, it is passed two parameters. The first is an instance of server request, which is a readable stream. It represents the request being made, and for uploads to the server it can be read from or piped to a writable stream. The second parameter passed to the callback on each web request is a server response object, which is a writable stream. This represents the response sent to the client. If you are returning stream-oriented data, such as a file from disk, you can pipe that stream to the server response writable stream. Support for SSL requests is addressed by a separate HTTPS module, which has a very similar createServer function. Let's take a look at some examples of using Node's built-in HTTP module for creating a simple web server.
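The example server referred to above is essentially this (the port and IP are the values from the nodejs.org sample):

    var http = require('http');

    // The callback runs once per request, with a readable request stream
    // and a writable response stream
    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World\n');
    });

    // Nothing is accepted until listen() is called
    server.listen(1337, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:1337/');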

Demo: Building a Web Server in Node

So now we're going to take a look at using Node to run a simple web server. Now, this will be a slight extension over what we looked at in our earlier modules, where we simply cut and pasted the code from the Node.js homepage, but we did use that as a starting point here. We are requiring the HTTP module, and then we're calling createServer just like before, and we're passing in a single function which will be invoked on each request. And if you remember, this function is passed a request and a response. Now, just like the example from the Node homepage, we're going to write out the header with a 200 status code for Okay, and a content type of "text/plain." But now we're going to do a little bit more. If the URL in the request is "/file.txt," we're going to go to the file system, open a readable stream for that file, and pipe it to the response. So if you remember, the response is a writeable stream; we're simply going to open a file and pipe it to the response. Otherwise, we're going to call response.end with "Hello world." And then, rather than the server that's returned being stored in a variable here, simply to turn around and call listen on it, the listen is chained to the end of createServer. So we're creating a server, and on the server that's returned we're calling listen; remember our variables for providing the port and IP -- we've put those here -- and then a console.log to show that the server is running. And just so we can double-check what the value of /file.txt is, let's take a look at that. So this is the file being read from the file system and piped to the response. Now, one more thing I want to point out real quick that may not be familiar to you yet is this "__dirname" (underscore, underscore, dirname). That is a variable that will tell you the directory the current script is running in. So in this case what I want to say is: my server.js is running in this directory, and I want to find /file.txt in the same directory. So let's run this and try a couple of different URLs. We're going to visit the URL here, (typing) and without anything added to the base URL, we simply get "Hello world" -- just what we would expect, because we didn't ask for /file.txt, so we're just going to get "Hello world." But now, let's ask for /file.txt. And now you can see here that we did get the contents of the file returned to the browser. And so this is a good example of using streams to go from the file system to the network, where we actually opened a file from disk and then simply piped it to the response.
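A sketch of the whole server just walked through (the process.env.PORT / process.env.IP fallbacks stand in for whatever the hosting environment provides, and file.txt is assumed to sit next to server.js):

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      if (req.url === '/file.txt') {
        // Stream the file from this script's own directory to the response
        fs.createReadStream(__dirname + '/file.txt').pipe(res);
      } else {
        res.end('Hello world\n');
      }
    }).listen(process.env.PORT || 1337, process.env.IP || '127.0.0.1');

    console.log('Server running');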

Realtime Interaction with Socket.IO

While not a core part of Node, it makes sense to take a few minutes and follow the web server scenario to its logical next step. Socket.IO provides an abstraction over the various methods used to maintain an active connection between a browser and a server. It will use web sockets where they are supported, and will transparently fall back to several other techniques in cases where web sockets are not yet supported, either due to browser or firewall limitations. In the case of Node.js, Socket.IO also provides a consistent interface for performing these socket-based communications in both the browser and the server. This is one tangible demonstration of the synergy between JavaScript on the client and the server. Here on the left, we have a snippet of code that would run in Node.js on a server. Notice that it starts by requiring the Socket.IO module and invoking the listen function. From there, it is using the event emitter construct to listen for and emit events. On the browser side, the Socket.IO JavaScript library is loaded from the server. Notice that there is no special configuration on the server to provide this JavaScript file; that is handled transparently by the Socket.IO Node.js module. Now, let's step through a typical back-and-forth scenario. The browser will issue io.connect to establish a connection to the Node.js server. The server receives a connection event and emits a news event with a payload, "Hello world." The browser receives this news event and invokes the appropriate function. Within this function, the browser emits an event entitled "my other event" and provides some data. This event is received on the Node.js server, and the appropriate function is invoked, which logs the payload data to the console. The powerful thing about this scenario is that both the browser and the server are using the same constructs for emitting and acting on messages being passed back and forth. The code on the server looks very similar to that in the browser, and vice versa. Doing this type of development in other server-side languages is certainly possible, but it will lack the symmetry in implementation and all the benefits that come with it. Let's take a quick look at a Socket.IO Node.js example.
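Reconstructed from the description above, the two snippets look roughly like this -- the port is an assumption, while the event names and payloads are the ones given on the slide:

    // server.js
    var io = require('socket.io').listen(8080);

    io.sockets.on('connection', function (socket) {
      socket.emit('news', { hello: 'world' });
      socket.on('my other event', function (data) {
        console.log(data);
      });
    });

And the browser side, with the Socket.IO client script served transparently by the module:

    <!-- index.html (browser side) -->
    <script src="/socket.io/socket.io.js"></script>
    <script>
      var socket = io.connect('http://localhost:8080');
      socket.on('news', function (data) {
        console.log(data);
        socket.emit('my other event', { my: 'data' });
      });
    </script>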

Demo: Socket.IO

So for the first part of our web socket example, we're going to go back to our Linux VM that we used in Module 1. Part of the reason for this is that Cloud9 does not support web sockets natively, so I wanted to show you this first in an environment that supports web sockets, and then we'll go back to Cloud9 and show you how to get it to work there. But here in our Linux VM, let's go through the Node server-side code that will be using the web sockets. We're starting off by requiring HTTP, Socket.IO and the file system. Now, this example will be a little bit different than what we saw on the slides for Socket.IO, because this one combines a web server and a Socket.IO server; our slide only had a Socket.IO server. Here, you'll see our HTTP createServer function, but in this case, instead of passing an anonymous callback, we're passing a named function called handler. And all handler really does is say, "Regardless of what URL you give me, I'm always going to give you back index.html"; it doesn't even really look at the request, it just simply opens up index.html and returns it. The server being returned from createServer, we're storing in an app variable and passing that to Socket.IO's listen function, so Socket.IO really adds itself to the server that was created by createServer. So for the Socket.IO server, we're saying, "Whenever I receive a connection event, meaning a browser connects to the server, I want you to set an interval every two seconds. And every two seconds I want you to capture a current timestamp, emit it to the log, and then over the socket I want you to emit a timer event to the browser, and pass along the timestamp that you just created." And also, we want to listen for a submit event that comes from the browser, and if we receive that, we want to take that data and just log it to the console. And then we're going to call app.listen on the server we created up here, which is what will serve the index.html page, and then something to log to the console. Now, before we run this, let's take a look at what is in index.html that will be returned to the browser. (Typing) So here's a view source of the index.html page. Here you'll see that onload we are making a connection to the server, we are listening for the timer event, and when we get the timer event, we're going to update a portion of the HTML page to show the data -- the timestamp that came with the timer event. And then in our submitData function, which gets invoked whenever we click on the button of the form on the page, we're going to get the input data, generate a submit event, and pass that data along with the event. So here we are listening for a timer event and emitting a submit event; and over on the server side, we are emitting a timer event and listening for a submit event. That's the kind of symmetry that you get when doing this between Node on the server and JavaScript in the browser. Now that we've taken a look at it, let's run it and see what it does. ( Typing ) So here it tells us that Socket.IO has been started, and here is our console.log statement telling us the server is running. So if we go to the test page and hit Refresh, the page has been loaded, and now you can see that we've begun receiving timer events from the server. If we go back and look at the server logs, you can see this is the console.log we have inside of that interval, where we're logging to the console as well, and so those timer events continue to be sent to the browser. Now, if we want to type in something here to generate the submit event on the server, we can do that. So if we type in a string here and press "Submit" and then jump back to the server, you'll see "Submitted: qwerty" right here, which is the event on the server triggered by the browser. This is an example of the back and forth that you can achieve between a server and a browser via web sockets and Socket.IO, without any page refreshes, without reloading a page or long polling or anything. So now we're back in Cloud9, and here I have the same JavaScript file as I had before on my Linux VM. I've folded a little bit of the code here just so that it would all fit in one window, but you can trust me that the code in here is the same as it was on our VM. The only thing I've changed here is this code here. Like I mentioned, Socket.IO will try to use web sockets, and if it can't, it will begin to fall back to other transports. What I've done here is given it an explicit instruction that says, "The only transport I want you to use here is xhr-polling." So it's going to use this mechanism to maintain the connection between the client and the server, since I can't depend on web sockets. These three lines are the only thing I've changed, so let's run it here and see what we get. You'll see here very similar output to what we had before, and now we can just click on this. ( Typing ) And we'll see we have our sample webpage, with the timer events continually updating. And if we come back here and look, we'll see that we're getting data continuously written to the output. Now, let's type something in here real quick. (Typing) Now if we jump back -- in fact, let's just go and stop the server -- and we look, I may have to hunt for it, but you'll see "Submitted: this is a test." So even if you know that in your production environment you'll be able to support web sockets, or you'll have a fallback strategy, in Cloud9 you can just set this, and it becomes much easier to do your development that way.
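For reference, the three added lines were most likely a configure block along these lines; this is an assumption based on the Socket.IO 0.9-era API:

    io.configure(function () {
      // Fall back to long-polling only; don't attempt web sockets
      io.set('transports', ['xhr-polling']);
    });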

Conclusion

To build a web application of any significance in Node.js, you'll likely want to leverage a web framework that runs on top of Node. One very popular framework is ExpressJS. I encourage you to check out the Pluralsight course on ExpressJS if you want to learn more about building web apps in Node. So in this module, we've covered using http.request as a web client, using http.createServer to build a web server, and extending that model to include Socket.IO for real-time communication. I hope this module has been helpful in introducing you to the benefits of using Node to interact with the web. Thank you.

Testing and Debugging

Introduction, The Assert Module

Paul O'Fallon: Hello, my name is Paul O'Fallon, and I'd like to welcome you to the course, An Introduction to Node.js, Module 6, Testing and Debugging. In this module, we'll discuss basic unit testing using Node's built-in "assert" module, and quickly move on to more advanced testing with the Mocha test framework and should.js. And we'll wrap up by using the Cloud9 IDE to debug Node.js applications. So let's get started. "Assert" is a module that comes with Node, but must be required by your application. It provides functions to perform a number of tests or assertions in your code. For example, you can test for equality or inequality between expected and actual values. You can test whether a block of code does or does not throw an exception. You can test for the truthiness of a value. Because of Node's conventions on the placement of the error parameter in a callback, there is also special support for testing whether an error was passed to a callback. And each of these assertions can take a message to output when the assertion fails. When testing for equality, the "assert" module provides for three different types of equality. The first is simply the equal function. This is a shallow, coercive test for equality, and is equivalent to using the double equal sign in JavaScript code. The second is strictEqual, which, as the name indicates, is strict equality, the equivalent of a triple equal sign in JavaScript. The third and final type of equality provided by "assert" is deepEqual. This allows you to test the equality of more complex data types. In this case, identical values are equal, and values that are not objects are evaluated with coercive equality. Date objects are considered equal if they contain the same date and time. And other objects, including arrays, are considered equal if they have the same number of owned properties, equivalent values for every key, and an identical prototype. Let's take a look at some examples of using "assert" to unit test our Node.js code.
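The three flavors of equality, as a quick sketch:

    var assert = require('assert');

    assert.equal(1, '1');         // shallow, coercive equality (==) -- passes
    assert.strictEqual(1, 1);     // strict equality (===) -- passes
    // assert.strictEqual(1, '1');   would throw an AssertionError

    // deepEqual walks complex values key by key
    assert.deepEqual({ a: [1, 2] }, { a: [1, 2] }, 'objects were not deep equal');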

Demo: The Assert Module

(Pause) So now we're going to take a look at an example of using "assert" to unit test some Node.js code, and to do that we're going to bring up our mathfun module that we created in Module 2 of this course. In fact, we've extended it a little to include a synchronous version of the same evenDoubler function that we used back then. So here in our mathfun.js file, you'll see the evenDoubler function, which I've folded here, but it's the exact same code from Module 2; and we've added a synchronous version of this function called evenDoublerSync, which has the same rules as far as requiring an even number, but instead of invoking a callback, it returns the value if it is passed an even number. And rather than passing an error to a callback in the case of an odd number, it throws an exception -- in this case, a new error with the text "odd input." I've added them both to module.exports here, so they're both available to my code. So now let's go back to the unit test for this code. We see here that I have required the "assert" module, and I've required my mathfun script from the same directory and assigned it to the variable "fun." The first test is that I am running the synchronous version of evenDoubler, passing it a two, and asserting that the result will be equal to four. And then, to test what will happen if I pass an odd number to the synchronous function, I'm asserting that calling evenDoublerSync with an odd number will throw an exception. And with this extra parameter here, I can actually specify a regular expression to match against the text of the error that's thrown. So if you remember, from the mathfun file I'm returning "odd input," and what I'm actually checking for here is that the exception that is thrown should have the word "odd" in it. To test the asynchronous evenDoubler function, I'm invoking it with a value of two, passing it an anonymous callback, and then checking to see that no error was returned. I'm also asserting that the results are indeed equal to four, as they should be. And in this case, I've actually supplied an extra message to the "assert" function -- this is what it should print out if this test fails. And lastly, I'm testing the asynchronous function when I pass it an odd value. In this case, I do expect the error to come back non-null, so I'm asserting that I'm going to get an error back. So let's run this code and see what we get. (Pause) As you can see, I didn't get any errors, but I really didn't get any output either. So the "assert" module is really just for testing your output and your exceptions and whatnot; it doesn't really give you any useful feedback, particularly if everything passes. So let's make some of these tests fail. Why don't we go here and change the regular expression that we're matching against the exception that's thrown, and run this. (Pause) You can see I got a rather messy error message, but what it's telling me is that "odd input" is what the error contained, and that is not what I was testing for -- I was looking for "odd2," and that was not there. So let's fix this one. And now let's go down and make this one fail. We're going to say that when I call evenDoubler with a value of two, I should get five back, which we know is not right, so that's going to fail. If we run this, here you'll see that we did get our message, "evenDoubler failed on an even number" -- that's our message here. It's kind of in the middle of some extra stuff; this output is certainly decipherable, but it's not very friendly.
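A sketch of the four assertions just described (assuming the mathfun module's exports as presented above):

    var assert = require('assert');
    var fun = require('./mathfun');

    // Synchronous flavor: assert on the return value, and on the exception
    assert.equal(fun.evenDoublerSync(2), 4);
    assert.throws(function () { fun.evenDoublerSync(3); }, /odd/);

    // Asynchronous flavor: assert inside the callback
    fun.evenDoubler(2, function (err, results) {
      assert.ifError(err);   // special support for error-first callbacks
      assert.equal(results, 4, 'evenDoubler failed on an even number');
    });

    fun.evenDoubler(3, function (err, results) {
      assert.ok(err, 'expected an error for odd input');
    });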

Testing with Mocha and Should.js

A Node application of any complexity will quickly outgrow the built-in test capabilities provided by the "assert" module. Many test frameworks have sprouted up in the Node community. A fairly popular one is Mocha, and it is common to see it paired with should.js, which provides a rich assertion syntax. Mocha provides many useful features for writing tests for your Node application. Even when testing asynchronous functions, it will run your tests serially. Like test frameworks for many other languages, the test cases are organized into test suites. There are hooks to execute arbitrary code before and after each test suite, as well as before and after each test. It includes support for pending, exclusive and inclusive tests. A pending test is one that has been stubbed out but not yet implemented. Exclusive and inclusive tests are helpful ways to isolate certain tests without having to comment out the others, which you might forget to uncomment later. It will time each one of your tests and let you know which ones are running slow. During your development cycle, it can be configured to watch your source directory and rerun the tests each time a file changes -- for example, when you save a file in your IDE or in your text editor. It even supports multiple syntax options for writing your tests and multiple options for rendering the test results. Should.js extends Node's "assert" module with behavior-driven development (BDD) style assertions. These can make your tests easier to construct and much more readable. Here is an example from the should.js readme file. You'll notice that we have a user object with two properties, name and pets. Should.js actually extends the object prototype and adds a should function. It also adds additional functions which serve as syntactic sugar to make the tests more readable. Here we are saying that the user should have a name property with the value "tj." Property is an enhanced assertion that understands how to validate whether an object has a certain property. You can also chain these assertions to make your tests more compact. Here we're saying that the user should have a property of pets, and that the length of that array should be four. In this case, the test will not blow up if the user object does not have a pets property; it will just fail. Even though should is added as a function on every object, you will still have cases where you need to check the existence of an object itself. In this scenario, you cannot simply append should to the end of the object name, because if the object is undefined, that would generate an error. Instead, you can invoke the should function statically and pass in the variable to be tested. The should function is available not only as a top-level function, but on properties of objects as well. Let's take a look at some examples of testing with Mocha and should.js.
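The readme example just described looks like this:

    var should = require('should');

    var user = {
      name: 'tj',
      pets: ['tobi', 'loki', 'jane', 'bandit']
    };

    // should is added to the object prototype; property() is an
    // enhanced assertion
    user.should.have.property('name', 'tj');
    user.should.have.property('pets').with.lengthOf(4);

    // When the object itself might be undefined, invoke should statically
    should.exist(user);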

Demo: Mocha and Should.js

So now let's take a look at writing the same tests we showed earlier using the "assert" module, but now using Mocha and should.js. So here is that test file, and you'll notice at the top that we are requiring should to bring it into this program, but we're not requiring Mocha. Mocha is actually installed globally on our system and is what we will run from the command line, but it's not something we explicitly require here. It is, however, what provides the describe and it functions here in the tests themselves. So let's walk through this. The describes are basically our test suites, and they can be nested. So here we're saying that I have some tests for mathfun, and the first set of tests is for when I'm using the module synchronously; and now I'm defining individual tests here. It should double even numbers correctly. This is kind of nice, because the tests are self-documenting, and by writing these descriptions, somebody coming along later should have a pretty good idea of what the code is supposed to do, even if they're still a little new to the code itself. So I'm saying it should double even numbers correctly. Here is where should.js comes in. If I call fun.evenDoublerSync, passing in the number two, I can immediately say, on the return value, should.equal(4); so when I call evenDoublerSync with two, it's going to return a value, and I'm simply saying that it should equal four -- I can do all that on one line. In my next test, I'm saying that on an odd number it should throw an exception. So here I'm executing this function, and I'm saying that that function should throw an exception, and that the exception's message should have the word "odd" in it; that regular expression is similar to our last example. Now in this case, I'm calling a done function, and done is what's passed into my test. This is the way that we let Mocha know that this particular test is done, and for our asynchronous tests, which we're about to get to, done becomes even more important. So let's look now -- we have another test suite here identified by describe, and we're saying that when evenDoubler is used asynchronously, it should double even numbers correctly. And here we're calling our asynchronous function evenDoubler, passing in a two and an anonymous callback. Now, if you remember from the slides, we mentioned that if you need to check the existence of a variable, you can use the should function statically, which is what we're doing in this case. So we're saying should.not.exist(err). We couldn't really say err.should.not.exist, because if err was undefined, that would blow up the script; but by saying should.not.exist(err), we can test to ensure that the error doesn't exist. And then the results should equal four. In this case, calling done, which was passed into our test, is our way of telling Mocha that this test is finished. And scrolling down a little further, we'll see that when invoked asynchronously, it should return an error on an odd number. We're calling evenDoubler again, passing in an odd number this time, and in our anonymous callback we're saying that the error should exist, but the results should not exist; and we're telling it that after those two assertions, we're done. Since most of our running so far has been from the run command at the top, to run Mocha from the command line we're going to go back to a terminal here in Cloud9. So here we have a terminal session.
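Before installing and running Mocha, here is a condensed sketch of the test file just walked through (assuming the mathfun module from the earlier demo, and using should.js's throwError assertion for the exception check):

    var should = require('should');
    var fun = require('./mathfun');   // the module under test

    describe('mathfun', function () {

      describe('when used synchronously', function () {
        it('should double even numbers correctly', function (done) {
          fun.evenDoublerSync(2).should.equal(4);
          done();
        });
        it('should throw on odd numbers', function (done) {
          (function () {
            fun.evenDoublerSync(3);
          }).should.throwError(/odd/);
          done();
        });
      });

      describe('when used asynchronously', function () {
        it('should double even numbers correctly', function (done) {
          fun.evenDoubler(2, function (err, results) {
            should.not.exist(err);
            results.should.equal(4);
            done();   // tells Mocha this async test is finished
          });
        });
        it('should return an error on an odd number', function (done) {
          fun.evenDoubler(3, function (err, results) {
            should.exist(err);
            should.not.exist(results);
            done();
          });
        });
      });

    });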
If you're doing this for the first time, you're going to need to install Mocha, and should, as well, actually, so you'll want to run mpm install dash gmocha to install it globally, so it'll be available from the command line. I also need to install the should module, which I just did. Now let's install Mocha locally. (Pause) Okay, so now I can run Mocha from the command line and I get all kinds of usage information back. So now let's run Mocha against the test that we wrote. (Typing sounds) Now you'll see that for the completed tests, we do get some information back there. It's hard to tell here, but there's four tiny little dots there. Because we had written four tests, and then at the bottom there's four tests completed, and how long those tests took to run, but even that

Even that might be a little hard to read, though, so as we mentioned in the slides, Mocha has multiple ways of outputting the test results. Let's take a look at another way to report those results. ( Typing on Keyboard ) ( Silence ) What we've done here is said, I want you to use the reporter called spec, and run the same tests. This is where the strings that we've passed to describe and it come in very handy: Mocha actually prints out that information along with whether each test passed or failed. And you'll see they all passed, so they all got green checks. Now, we also mentioned that Mocha will time your tests and let you know if it thinks they're running slowly. You can configure what "slowly" means, but here we've taken the default. Going all the way back to Module 2, you'll remember that our asynchronous evenDoubler function waits a random amount of time, up to one second. So in this case, Mocha is printing these times in red, saying, look, 383 milliseconds or 659 milliseconds is an awfully long time; you might want to take a look at that. So now that we've seen tests that pass, let's take a look at some that fail. Let's go back and make a couple of these fail. We will go back and change our testing of the error message that comes in when we throw an exception. We'll change that, and then we'll do the same thing we did before: we'll make the test say that evenDoublerSync of two should return five, even though we know that's not true. That will make it fail. So let's save this and then run those. ( Silence ) Let's scroll back up. You see right here, up at the top, we get the green check for the two that passed, and we get red for the two that failed. And then in our summary here it says that two of four tests failed. What Mocha has also done is, for the two that failed, it has reprinted them at the bottom here and included the stack trace with each one. It says this was the first test that failed, which is why it gives it a number one: it should have thrown on odd numbers. And it comes back down here and tells us exactly what happened: we were trying to match "odd two", but got "odd input". And then for our second failed test, if we scroll down, we'll see "expected 4 to equal 5". That's much more digestible information coming out of Mocha than we were getting just from the pure "assert" module. Not to say you couldn't have continued to add more and more text and verbiage to the "assert" module, but sticking with something like Mocha gives you much better output right off the bat. We'll take a look at one more feature. Remember when we talked about isolating tests? You can do that with .only. So for instance, if I really want to focus on this particular test right now, I can come in here and add .only. That is an inclusive modifier, meaning I only want to run this one test right now. So let's save that. Now, I haven't changed anything else in my test suite, just added that .only. And (typing sounds) now it runs just that one test. So that can be handy for focusing on one particular test. And we can go back and do the opposite (typing sounds) and tell it we want to skip that test with .skip. And if we run it again... ( Typing ) ( Silence ) When you skip a test, it actually turns into a pending test. Had I simply commented it out, I might have forgotten to uncomment it, and that test would just be gone, basically.
So if a test were failing right now, and I said, well, I'm going to get to it in a minute, I could simply go in and skip it, and Mocha will continue to tell me that it's in pending status, reminding me to go back and fix it. Once you have a few tests written, it can also be fun to experiment with some of the different reporters that come with Mocha. Here is one that's there just for fun, which shows your tests represented as a plane coming in for a landing. So as you can see, Mocha not only makes writing tests easier; it can actually make them fun as well.
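For reference, the .only and .skip modifiers used in this demo look roughly like this in the test file (a sketch; the test bodies are the ones from earlier):

    // Run only this one test and skip everything else in the suite:
    it.only('should double even numbers correctly', function () {
      fun.evenDoublerSync(2).should.equal(4);
    });

    // Mark this test as pending; Mocha reports it instead of silently dropping it:
    it.skip('should throw on odd numbers', function () {
      (function () { fun.evenDoublerSync(3); }).should.throwError(/odd/);
    });

Reporters, meanwhile, are chosen on the command line, for example mocha -R spec test.js or mocha -R landing test.js.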

Debugging with the Cloud9 IDE

Node.js provides hooks for debugging your applications, and the Cloud9 IDE makes good use of them in its debugging tools. Here in this screenshot, you can see the breakpoint on line 31, as well as the value of the data variable. The window on the right shows the current call stack. In fact, the buttons along the right side of the window make up Cloud9's debugging functions, including options to resume, as well as step into, over, and out of the code being executed; view the call stack, which you can see here in the screenshot; execute arbitrary JavaScript during your debug session; view the values of currently active variables; and manage the breakpoints in your application. Even more powerful is combining this server-side Node.js debugging with the client-side JavaScript debug capabilities of the Chrome web browser. The two of these together make a powerful JavaScript debug suite. Let's take a look at some examples of debugging Node applications.

Demo: Debugging with the Cloud9 IDE and Chrome

For our first debugging example, we're going to pull up some code that we looked at in the very first module: our setTimeout example. What we'll do here is add a breakpoint inside the named function that gets invoked by setTimeout. So we'll simply click here to add a breakpoint to line two. When you're running a program in Cloud9 and you want to run it in debug mode, you can come up here and choose run in debug mode, and now when we click the green start button, it will be in debug mode. So let's run this. Here we can see that "running" was printed to the console, which is our console.log statement here, and then after 200 milliseconds, when the handleTimeout function was invoked, our breakpoint was triggered on the console.log and execution stopped. And so here you can see, if we run our mouse over the code, we can inspect certain variables, and we now have this panel available over here with some debugging functions. Here are our resume, step over, step into, and step out options, as well as the call stack; this program doesn't have much going on, so the call stack is fairly small. And we can interact with the code, or view the variables or the breakpoints; I only have the one breakpoint in my code right now. So if we go back up and continue, then the console.log statement executes, and the program is done. Now let's pull up another one of our previous examples, a more complicated one, and look at the debugging functions we can use with that application. This is our WebSockets example from Module 5. Here we're going to scroll down and add a breakpoint where we log to the console upon receiving an event that something was submitted from the browser. So let's put a breakpoint there, and we'll run this once again in debug mode. And we'll launch our web page. There's our timer data being updated. Now let's submit something. ( Typing on Keyboard ) Now that we've submitted it, let's go back to the server. Our breakpoint was triggered and execution has stopped. We have the same options here as before, and a lot more going on in our call stack this time. And we have our variables. Here is the variable data set to "this is a test", and that is this variable right here. In fact, simply by running your mouse over it, you can see the value. This is what we typed into the web browser, and it's available to be inspected here. Now, if we just resume and then look down in the output (typing sounds), you'll see "submitted this is a test". That is an example of debugging just on the server side. But since we're running in Chrome, we actually have access to some very nice debugging capabilities on the client side as well, so let's take a look at using both of those together. We're going to leave our breakpoint where it is on the server side, and go take a look at the client side. If we load up the Chrome developer tools, we can see the HTML and JavaScript of our web page, and here I can add a breakpoint on the client side as well. What I'd like to do is add a breakpoint on the client side before we emit the submit event, so let's click here to add that breakpoint. And now, as our timer continues to update, let's go and enter a new piece of information here (typing sounds) and click submit. Okay, now you can see that we've triggered the breakpoint in the browser, right before we emit the data to the server. The server side is still running, but the browser side has paused right on the emit. If we want to continue through this breakpoint, we can come up here; and now if we go back to our server, we've tripped our breakpoint there, which was on the receipt of that submit event we just emitted from the browser. This looks like what we saw before, so if we resume here, we'll see our "submitted a second test" written to the output. So this is a demonstration of how you can use the server-side debugging capabilities of Cloud9 alongside the client-side debugging capabilities of the Chrome web browser to provide a full-featured debugging environment.
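To orient yourself, the two breakpoints in this demo sit on either side of a single Socket.IO event. The "submit" event name and the logged message come from the narration; everything else below, including the element id, is a hypothetical reconstruction of the Module 5 example:

    // Server side -- the Cloud9 breakpoint goes on the console.log line:
    io.sockets.on('connection', function (socket) {
      socket.on('submit', function (data) {
        console.log('submitted ' + data);   // pauses the Node process when hit
      });
    });

    // Client side -- the Chrome breakpoint goes on the emit:
    socket.emit('submit', document.getElementById('input').value);  // pauses the browser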

Conclusion

There are other ways to debug Node applications in cases where you are not using the Cloud9 IDE. There is a node-inspector module, which provides browser-based debugging of Node.js applications. I touch on this option in another Pluralsight course, Node on Windows in Azure. So to conclude: in this module, we started with an overview of Node's built-in "assert" module, used for writing unit tests of Node.js code. We then moved on to a more useful and feature-rich method of writing tests using Mocha and should.js. We wrapped up by debugging our Node applications using the features found in the Cloud9 IDE. I hope this module has been helpful in introducing you to the testing and debugging features of Node.js. Thank you. ( Silence )

Scaling Your Node Application

Introduction, The Child Process Module

Hello, my name is Paul O'Fallon, and I'd like to welcome you to the course An Introduction to Node.js, module 7, Scaling Your Node Application. In this module, we'll cover how to create child processes in your Node application, and then we'll look at Node's experimental "cluster" module for scaling your application across multiple processes. So, let's get started. One common criticism of Node applications is that they do not handle CPU-intensive tasks well. This is because spending too much CPU time on any one task in your Node app will block the event loop and prevent other work from being done. One strategy for dealing with this issue is the use of "child processes". A CPU-intensive task, resizing an image for example, can be deferred to a child process while the main Node application continues to process events. Node has four ways to launch child processes, and all are part of the child process module. The first is the spawn function, which will launch a new process and run the command specified in the first parameter. An optional array of arguments will be passed to the command. This function returns an instance of a ChildProcess object, which is an EventEmitter. It emits exit and close events, which can be listened for by the parent Node application. The ChildProcess return value also has streams for the standard in, standard out, and standard error of the child process. The parent or spawning process can pipe data to and from these streams, much like you would the similar streams provided by the process object. The second way to launch a child process in Node is with the exec function. This function runs the provided command in a shell. Any command-line arguments you wish to provide must be included in the string passed as the command to execute. You can even pipe between Unix commands within a single invocation of exec, for example, piping the output of ls to the Unix grep command. A third method is the execFile function. This is similar to exec, but instead of launching a shell and executing the command, the file parameter is executed directly. One final method for invoking a child process in Node is particularly optimized for spawning new child Node processes. This is done with the fork function. It is a specialized version of the spawn function especially for creating Node processes. Like spawn, it returns an instance of the ChildProcess object. However, in this case, it adds an additional send function and message event to facilitate message passing between the parent and child processes. Let's take a look at some sample code from the Node.js website that uses the fork function. Here we have a parent.js script which requires the child process module. It then calls the fork function to invoke Node again to run the child.js script. It listens for the message event from the returned ChildProcess object, logging the message to the console. It then attempts to send a message to the newly forked Node process. When the separate Node process runs child.js, it listens for the message event and logs the message to the console. It too attempts to send a message back to its parent. Each process sends its message, which is picked up by the other and logged to the console, as sketched below. Let's take a look at some examples of launching child processes in Node.js.
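That parent/child exchange reads roughly like this, a paraphrase of the sample from the Node.js documentation of that era:

    // parent.js
    var cp = require('child_process');
    var n = cp.fork(__dirname + '/child.js');   // fork a second Node process

    n.on('message', function (m) {
      console.log('PARENT got message:', m);
    });
    n.send({ hello: 'world' });                 // message-pass to the child

    // child.js
    process.on('message', function (m) {
      console.log('CHILD got message:', m);
    });
    process.send({ foo: 'bar' });               // message-pass back to the parent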

Demo: The "exec" function

For our first child process example, we're going to take a look at the exec function. Here you can see that we've required the child process module and, more specifically, the exec function directly. What we're going to do is run the uptime Unix command, and in the callback, if we get an error, we will log that to the console; otherwise, we will log the output to the console. This will be the output of the command once the command has completed. And down here, we're going to log to the console the process ID of the child process. So, starting with just a simple uptime command, let's run this. Before we get any results from the child process, we are logging the process ID to the console, and you can see that here. And then we get the output of the uptime command printed to the console, with our "output" label and then the results. But this can be more than just a simple Unix command; it could be any arbitrary Unix statement. So for instance, if we were to pipe the uptime command to the cut command and slice just a little bit of that output to display, we could do that here, and then everything else would work the same. So let's run this. And if you compare that to the last output, you can see that in our earlier run we received the entire output of the uptime command, but because the actual command we sent to the child process was uptime piped to cut, we only got a little bit of the output this time. Now let's change the command being run to force it to generate an error, so we can see what that looks like. If I change the name of this cut command to be 23cut, which doesn't exist, we'll run that and see what the error looks like. Here you can see we output the process ID again, but in this case, because we detected an error, instead of printing standard out, we printed standard error. And what standard error told us is that 23cut is a command that our shell does not know anything about. So this is an example of using exec to execute an arbitrary command on your system.
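A minimal sketch of this exec example; the "output" and "error" labels follow the narration, while the variable names and the exact cut arguments are assumptions:

    var exec = require('child_process').exec;

    // Run an arbitrary shell command; the callback fires once it has completed.
    var child = exec('uptime | cut -c1-20', function (err, stdout, stderr) {
      if (err) {
        console.log('error: ' + stderr);   // e.g. "23cut: command not found"
      } else {
        console.log('output: ' + stdout);  // the command's standard output
      }
    });

    // This runs before the callback, while the child is still executing.
    console.log('child pid: ' + child.pid);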

Demo: The "spawn" function

For our next child process example, we'll take a look at the spawn function. The spawn function gives you more control over the standard in, standard out, and standard error of your process while it's running. If you remember, in our exec example we had access to standard out and standard error once the process was complete. In this case, you can actually feed data to standard in and get data from standard out while the process is running. So in this example, we're spawning two processes: one is the Unix ps command, to get a list of processes, and the other is the grep command, where we want to look for the phrase node. Now, because the child processes expose standard out, standard error, and standard in as streams, we can actually pipe from one to the other. What we've done here is say: I'm going to take the standard out of the ps command and pipe it to the standard in of grep, in essence doing in Node what you would otherwise do from the Unix command line, and then I'm going to take the standard out of grep and send it to the console. But of course, even though we are piping up here, you can also watch for events on the streams as well. So here, we're looking for a data event on standard error to see if there's an error with the ps command, and if there is, we log it to the console; and the same for grep here. So let's run this. Here you can see the command has run, and what was printed to the console was the output of the ps command piped through grep. These are the lines that passed through grep because they contain node.
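Sketched out, the spawn example looks something like this (the exact ps arguments are an assumption):

    var spawn = require('child_process').spawn;

    var ps   = spawn('ps', ['aux']);       // list running processes
    var grep = spawn('grep', ['node']);    // keep only lines containing "node"

    ps.stdout.pipe(grep.stdin);            // wire ps's stdout into grep's stdin
    grep.stdout.pipe(process.stdout);      // send grep's output to the console

    // The streams still emit events, so errors can be watched for directly.
    ps.stderr.on('data', function (data) {
      console.log('ps error: ' + data);
    });
    grep.stderr.on('data', function (data) {
      console.log('grep error: ' + data);
    });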

Demo: The "fork" function

So now, for our final child process example, we're going to take a look at the fork function. If you'll remember, fork is built on spawn but is especially designed for spawning child processes that are also Node applications. Here in our example, we're calling fork and asking it to run honorstudent.js, which is another Node program. When it executes, fork returns us a ChildProcess object, and on that ChildProcess object we're going to invoke the send function, which means we're going to send it a message. The message we're going to send is this JSON object, which has a command called double and a number, 20; so we're basically asking the child to double the number 20. Now, we're also listening for a message from the child, and when we get a message, we're printing the answer variable to the console and then sending another message to the child; this time, the command we're sending is done. This notion of a command in the JSON object could be whatever you want. So now let's go look at the code that's being forked. Our evenDoubler function is showing up again here, but now we've wrapped it inside of a function that gets invoked whenever this child receives a message. In the child process, we're simply registering a function for the message event on the process object. So whenever this process receives a message, we're invoking this function. And in the function, we're going to look at the command; if you remember, part of the JSON object we were sending was a cmd variable. If the cmd variable is set to double, we're going to log to the console that we were asked to double this number, and then we'll call our evenDoubler function. When we get an answer back, we will use process.send to send a message back to the parent, and what we're going to send is a JSON object that contains the result. Now, if the command that we get is done, then this child process can exit, and that's what process.exit will do. So let's go back and run this now and see what we get. The first log to the console is from the honorstudent program, where it's telling us that it was asked to double the number 20, so it received the message with the command double. The second line is from the parent process, where it received the message back from the child with the answer, and it says the answer is 40. So the main takeaway from this particular example of the fork function is really the message passing between the parent and the child process.
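Here is a sketch of both sides of that exchange. The cmd/double/done protocol and the "answer" wording come from the narration; the field names and the simplified evenDoubler body are assumptions:

    // parent.js
    var cp = require('child_process');
    var child = cp.fork(__dirname + '/honorstudent.js');

    child.send({ cmd: 'double', number: 20 });       // ask the child to double 20

    child.on('message', function (m) {
      console.log('the answer is ' + m.answer);
      child.send({ cmd: 'done' });                   // tell the child to exit
    });

    // honorstudent.js
    function evenDoubler(n, cb) {                    // simplified stand-in for Module 2's version
      setTimeout(function () { cb(null, n * 2); }, Math.random() * 1000);
    }

    process.on('message', function (m) {
      if (m.cmd === 'double') {
        console.log('asked to double ' + m.number);
        evenDoubler(m.number, function (err, result) {
          process.send({ answer: result });          // message the result back to the parent
        });
      } else if (m.cmd === 'done') {
        process.exit();
      }
    });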

Scaling with Node's Cluster Module

Node has recently introduced an experimental cluster module. This builds on the child process fork function and introduces classes, functions, and events for managing a master application and a set of worker Nodes. In a typical cluster scenario, there will be a Node.js script which serves as the master application. The cluster module provides a variable, isMaster, which will tell you if your code is running in the master process. To create worker Nodes, which are actually separate Node.js processes, cluster provides a fork function. By default, fork will run the same Node.js script for the worker as the master. You can use the isMaster variable in a large if statement to segment the master application from the worker code. Executing the fork function will also emit a fork event in the master. Once the worker process has been spawned, the master will also emit an online event, indicating that the worker has been created. Additional workers can be created with subsequent invocations of the fork function. Cluster also provides an isWorker variable to indicate whether your code is executing inside a worker Node. A common cluster scenario is spawning multiple worker Nodes to create a scalable web server. If a worker executes a listen function, a listening event is emitted on the master. The arguments to this listen function are transmitted to the master, where it will create a listening server for that IP and port if one does not already exist, and pass a handle to this server back to the worker. If another worker process executes the same listen function, the master will send the same handle to this second worker. This allows both workers to listen on the same IP and port combination. It is then up to the operating system as to how incoming requests are distributed between worker processes; they do not proxy through the master. Requests are sent directly to worker processes. Let's take a look at an example of using the cluster module to set up a multi-worker web server.

Demo: Building a Clustered Web Server

We'll start our cluster example from the top. We begin by requiring the cluster module, as well as the HTTP module, because what we're going to build is a clustered web server. We've defined a variable here called numWorkers and set it to 2; this is how many worker processes we're going to spawn. Now, if you remember from the slides, I mentioned the isMaster variable and being able to structure your application by having a large if statement around the bulk of your code, and that's what we've done here. If this code is running on the master, it will execute this first branch; if not, it will execute this other branch, which means it's running on a worker process. When you're setting up a cluster, there are ways you can configure it to run an entirely separate Node.js file for the worker processes, but for a simple example like this, it's easier just to keep it all in one file. So what we're going to do in the master is iterate over a for loop for the number of workers and fork that many worker processes, and we'll also log something to the console. Everything else in the master is listening for events that happen on the worker processes. Here we're listening for the fork event, and we log that to the console; you'll notice that when the fork event is emitted, it's passed a reference to the worker that emitted it. The same is true with online: with the online event we're passed the worker, and we log that to the console. The listening event works the same way, although we've logged a little bit of extra information here: we've also logged the process ID, which is available at worker.process.pid, and the address and port that the listening process is listening on. And then finally, when a worker process terminates, it will emit the exit event, so we are listening for that as well and logging that data to the console. So that's what the master process will be doing. Let's scroll down and see what the worker process will do. The first thing the worker will do is log to the console that it is ready. We've also established a count here and set it to zero; we'll get to that in just a minute. Here in the worker is a typical HTTP createServer function, just like we studied in a prior module. We're writing out an OK status code, and then we're sending back to the client "hello world from worker", followed by the worker number, the process ID, and the count. The count works like this: each time the server fulfills a request, we increment the count, and once this particular worker has served three requests, it's going to terminate itself. This will be one way we can make sure that the operating system is taking care of routing requests to an available worker, and we'll see that play out in just a minute. Then, on the server that's returned, we're chaining the listen call, using our Cloud9 process.env.PORT and process.env.IP parameters, and that's really about it. So the worker is establishing a server, answering up to three requests, and then terminating itself. Before we run this, one thing you'll want to be sure of is that you're running on Node.js 0.8 and not 0.6; so come over here to the run and debug sidebar and be sure to choose Node 0.8. Let's run this code. We had a fair amount printed to the output, so let's take a look at that first. If you'll remember, in our for loop, as we were forking each worker process, we logged this to the console; this was logged twice, once for each worker. After that, the master began receiving events: the fork event from the first worker, and then the second worker; next, the online event from worker number one, and then number two. Then, in our worker code, we were explicitly printing to the console "worker", the number, and "ready". So we printed that, and then the master received the listening event for the first worker, and then the listening event for the second worker. Now, we printed a little bit of extra information here just to really drive home the point that these are two different processes at the operating system level, 76 and 77, but they are both listening on the same IP and port number. At this point, it's up to the operating system to decide which process to route each request to. So let's invoke a few requests against this web server and see what we get. An easy way to execute discrete requests against a web server is the cURL command-line tool, and that's what we're going to do here. cURL is available for Linux, Mac, and even Windows, and its simplest use is the word curl followed by the address you would like to request. So we're going to do that here against our running web server in Cloud9. Let's run that. Here you can see that the output from the web server is "hello world from worker number 2", the pid, and then the count. What this tells us is that worker number 2 answered this request, and it was its first request. If you remember, each worker will only answer three requests before terminating itself. So let's make another request. Okay, this was still worker number 2, and it was its second request.
And worker number 2 a third time, with its third request. If you look at the output, you'll notice that as we answered each request, we were incrementing the count and logging that to the console. And then, after the third request, the master received the exit event from the worker. Let's keep making requests of this web server. Now you'll notice worker number 1 has taken over handling requests, with a count of 1, 2, and 3. And if we go back to the output, you'll see the three requests handled by worker number 1, and now it has exited as well. This is a really good example of how you can spawn multiple Node processes to do the same work and actually share server handles between worker processes.
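Pulling the walkthrough together, the clustered web server might look roughly like this. This is a sketch in the style of Node 0.8; the exact log wording and the use of cluster.worker.id as the worker number are assumptions:

    var cluster = require('cluster');
    var http = require('http');
    var numWorkers = 2;

    if (cluster.isMaster) {
      // Master: fork the workers, then just listen for their lifecycle events.
      for (var i = 0; i < numWorkers; i++) {
        console.log('forking worker ' + i);
        cluster.fork();
      }
      cluster.on('fork', function (worker) {
        console.log('worker ' + worker.id + ' forked');
      });
      cluster.on('online', function (worker) {
        console.log('worker ' + worker.id + ' online');
      });
      cluster.on('listening', function (worker, address) {
        console.log('worker pid ' + worker.process.pid + ' listening on ' +
                    address.address + ':' + address.port);
      });
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.id + ' exited');
      });
    } else {
      // Worker: serve three requests, then terminate.
      console.log('worker ' + cluster.worker.id + ' ready');
      var count = 0;
      var server = http.createServer(function (req, res) {
        count++;
        res.writeHead(200);
        res.end('hello world from worker ' + cluster.worker.id +
                ' pid ' + process.pid + ' count ' + count + '\n');
        console.log('worker ' + cluster.worker.id + ' handled request ' + count);
        if (count === 3) {
          server.on('close', function () { process.exit(); });
          server.close();
        }
      }).listen(process.env.PORT || 8080, process.env.IP || '0.0.0.0');
    }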

Conclusion

So in conclusion, in this module we learned about creating child processes in Node with the spawn, exec, execFile, and fork functions of the child process module. We then looked at scaling your application using Node's experimental cluster module. I hope this has been a helpful introduction to scaling your Node applications. Thank you.
