Channel: Node.js – SitePoint

Using MySQL with Node.js & the mysql JavaScript Client


NoSQL databases are all the rage these days, and they're probably the preferred back end for Node.js applications. But you shouldn't architect your next project based on what's hip and trendy; rather, the type of database you use should depend on the project's requirements. If your project involves dynamic table creation, real-time inserts and so on, then NoSQL is the way to go. If, on the other hand, your project deals with complex queries and transactions, then a SQL database makes much more sense.

In this tutorial, we'll have a look at getting started with the mysql module — a Node.js driver for MySQL, written in JavaScript. I'll explain how to use the module to connect to a MySQL database and perform the usual CRUD operations, before examining stored procedures and the escaping of user input.

This popular tutorial was updated on 11.07.2017. Changes include updating to ES6 syntax, addressing the fact that the node-mysql module was renamed, adding more beginner-friendly instructions, and adding a section on ORMs.

Quick Start: How to Use MySQL in Node

Maybe you've arrived here looking for a quick leg up. If you're just after a way to get up and running with MySQL in Node in as little time as possible, we've got you covered!

Here's how to use MySQL in Node in 5 easy steps:

  1. Create a new project: mkdir mysql-test && cd mysql-test
  2. Create a package.json file: npm init -y
  3. Install the mysql module: npm install mysql --save
  4. Create an app.js file and copy in the snippet below.
  5. Run the file: node app.js. Observe a “Connected!” message.
//app.js

const mysql = require('mysql');
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'database name'
});
connection.connect((err) => {
  if (err) throw err;
  console.log('Connected!');
});

Installing the mysql Module

Now let's take a closer look at each of those steps. First of all, we're using the command line to create a new directory and navigate to it. Then we're creating a package.json file using the command npm init -y. The -y flag means that npm will use only defaults and not prompt you for any options.

This step also assumes that you have Node and npm installed on your system. If this is not the case, then check out this SitePoint article to find out how to do that: Install Multiple Versions of Node.js using nvm.

After that, we're installing the mysql module from npm and saving it as a project dependency. Project dependencies (as opposed to dev-dependencies) are those packages required for the application to run. You can read more about the differences between the two here.

mkdir mysql-test
cd mysql-test
npm install mysql --save

If you need further help using npm, then be sure to check out this guide, or ask in our forums.

Getting Started

Before we get on to connecting to a database, it's important that you have MySQL installed and configured on your machine. If this is not the case, please consult the installation instructions on their home page.

The next thing we need to do is to create a database and a database table to work with. You can do this using a graphical interface, such as phpMyAdmin, or using the command line. For this article I'll be using a database called sitepoint and a table called employees. Here's a dump of the database, so that you can get up and running quickly if you wish to follow along:

CREATE TABLE employees (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(50),
  location varchar(50),
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=5;

INSERT INTO employees (id, name, location) VALUES
(1, 'Jasmine', 'Australia'),
(2, 'Jay', 'India'),
(3, 'Jim', 'Germany'),
(4, 'Lesley', 'Scotland');


Connecting to the Database

Now, let's create a file called app.js in our mysql-test directory and see how to connect to MySQL from Node.js.
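The full article goes on to cover the CRUD operations. As a small taste of what's ahead, here's a minimal sketch of a parameterized read query against the employees table created above; the ? placeholder lets the driver escape the value for us (the hard-coded location stands in for user input):

// Assumes the `connection` object created in app.js above
const location = 'Australia'; // imagine this arrived as user input

connection.query(
  'SELECT * FROM employees WHERE location = ?',
  [location],
  (err, results) => {
    if (err) throw err;
    console.log(results); // e.g. [ { id: 1, name: 'Jasmine', location: 'Australia' } ]
  }
);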

Continue reading: Using MySQL with Node.js & the mysql JavaScript Client


Add Social Login via Google & Facebook to Your Single-page App


SPA Social Login: Authenticate Your Users via Google and Facebook

Increasingly, we're seeing web applications that are developed using a single-page architecture, where the entire application is loaded into the browser as JavaScript and all interactions with the server are carried out via HTTP-based APIs that return JSON documents. Often, these applications will require some level of user-restricted interaction, e.g. for storing user profile details. Whereas this was a relatively simple task to implement in a traditional HTML-based application, it's trickier in a single-page application that needs to authenticate every API request.

This article will demonstrate a technique that uses the Passport.js library to implement social logins with a variety of providers, and, leading on from that, token-based authentication for the subsequent API calls.

All of the source code for this article is available for download from our GitHub repository.

Why Use Social Sign-in for Your SPA?

When implementing a login mechanism on your web application, there are a number of concerns to take into account.

  • How should your UI handle authentication itself?
  • How should you store user information?
  • How should you best secure the user credentials?

These, and many more questions, need to be taken into consideration before you embark on writing a login portal. But, there is a better way.

Many sites, social networks chief among them, allow you to use their platforms for authentication of your own applications. This is achieved using a number of different APIs: OAuth 1.0, OAuth 2.0, OpenID, OpenID Connect, etc.

Implementing your login flow by using these social login technologies offers a number of advantages.

  • You are no longer responsible for rendering the UI for the user to authenticate with.
  • You are no longer responsible for storing and securing sensitive user details.
  • The user is able to use a single login for accessing multiple sites.
  • If the user feels their password has been compromised, they can reset it once and benefit across many sites.
  • Often, the service that provides the authentication functionality will make other details available. This can be used, for example, to automatically register users that have never used your site before, or to allow you to post updates to their profile on their behalf.

Why Use Token-based Authentication for Your API?

Any time a client requires access to your API, you will need some way to determine who they are and whether the access is permitted or not. There are several ways of achieving this, but the principal options are:

  • Session-based authentication
  • Cookie-based authentication
  • Token-based authentication

Session-based authentication requires some way for your API service to associate a session with the client. This is often very straightforward to set up, but can suffer if you are deploying your API across multiple servers. You are also at the mercy of the mechanism that your server uses for session management and expiry, which might be out of your control.

Cookie-based authentication is where you simply have some identifier stored in a cookie, and this is used to automatically identify the API request. This means that you need some mechanism for setting the cookie in the first place, and you risk leaking it on subsequent requests, since cookies are automatically included in all (suitable) requests to the same host.

Token-based authentication is a variation on cookie-based authentication, but with more control in your hands. Essentially, you generate a token in the same way as in a cookie-based system, but you include it with requests yourself — normally in the "Authorization" header, or else directly in the URL. This means that you're completely in control of storing the token, deciding which requests will include it, and so on.

Note: even though the HTTP header is called "Authorization", we are actually doing authentication with it. This is because we're using it to ascertain "who" the client is, not "what" the client is allowed to do.
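To make this concrete, here's a rough sketch of how a browser client might attach such a token to an API call, assuming the token was stashed in localStorage after login (the URL and storage key here are placeholders, not part of this article's code):

// Token previously saved after a successful login
const token = localStorage.getItem('authToken');

fetch('/api/profile', {
  headers: {
    // Despite the header's name, this tells the server who the client is
    'Authorization': 'Bearer ' + token
  }
})
  .then(res => res.json())
  .then(profile => console.log(profile));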

The strategy that is used for generating the token is important as well. These tokens can either be reference tokens, meaning they are nothing more than an identifier that the server uses to look up the real details, or complete tokens, meaning the token already contains all of the information needed.

Reference tokens have a significant security advantage in that there is absolutely no leakage of the user's credentials to the client. There is a performance penalty, though, in that you need to resolve the token into the actual credentials on every single request made.

Complete tokens are the opposite. They expose the user's credentials to anyone who can understand the token, but because the token is complete there's no performance penalty in looking it up.

Often, complete tokens will be implemented using the JSON Web Token (JWT) standard, since this has allowances in it for improving the security of the tokens. Specifically, JWTs allow for the token to be cryptographically signed, meaning that you can guarantee the token hasn't been tampered with. There's also provision for them to be encrypted, meaning that without the encryption key the token cannot even be decoded.

If you'd like a refresher on using JWTs in Node, check out our tutorial: Using JSON Web Tokens with Node.js.

The other downside to using a complete token is one of size. A reference token could be implemented, for example, using a UUID which would have a length of 36 characters. Conversely, a JWT can easily be hundreds of characters long.

For this article we are going to use JWT tokens to demonstrate how they can work. However, when you implement this for yourself you will need to decide on whether you wish to use reference or complete tokens, and what mechanism you will use for these.
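As an illustration of the complete-token approach, here's a minimal sketch of signing and verifying a JWT with the popular jsonwebtoken package (the secret and payload are placeholder values, not code from this article):

const jwt = require('jsonwebtoken');

// Sign a complete token: the user details travel inside the token itself
const token = jwt.sign({ sub: 'user-123', name: 'Jane' }, 'my-secret-key', {
  expiresIn: '1h'
});

// On a later request, verify the signature and decode the payload
try {
  const payload = jwt.verify(token, 'my-secret-key');
  console.log(payload.sub); // 'user-123'
} catch (err) {
  // The token was tampered with or has expired, so reject the request
}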

Continue reading: Add Social Login via Google & Facebook to Your Single-page App

A Practical Guide to Planning a MEAN Stack Application


A Practical Guide to Planning a MEAN Stack Application is an excerpt from Manning’s Getting MEAN with Mongo, Express, Angular, and Node, Second Edition. Getting MEAN, Second Edition is completely revised and updated to cover Angular 2, Node 6 and the latest mainstream release of JavaScript ES2015 (ES6). This Second Edition will walk you through how to develop web applications using this updated MEAN stack.

Planning a Real Application

For the purposes of this article, let’s imagine that we’re building a working application on the MEAN stack called Loc8r. Loc8r will list nearby places with WiFi where people can go and get some work done. It will also display facilities, opening times, a rating, and a location map for each place.

Planning the MEAN Stack Application at a High Level

The first step is to think about what screens we'll need in our application. We'll focus on the separate page views and the user journeys. We can do this at a high level, not concerning ourselves with the details of what's on each page. It's a good idea to sketch out this stage on a piece of paper or a whiteboard, as it helps to visualize the application in its entirety. It also helps with organizing the screens into collections and flows, serving as a good reference point when we build the application. As there's no data attached to the pages or application logic behind them, it's easy to add and remove parts, change what's displayed and where, and even change how many pages we want. The chances are that we won't get it right the first time; the key is to start, then iterate and improve until we're happy with the separate pages and overall user flow.

Planning the Screens

Let’s think about Loc8r. As stated our aim is as follows:

Loc8r will list nearby places with WiFi where people can go and get some work done. It displays facilities, opening times, a rating, and a location map for each place. Visitors can submit ratings and reviews.

From this we can get an idea about some of the screens we’re going to need:

  1. A screen that lists nearby places
  2. A screen that shows details about an individual place
  3. A screen for adding a review about a place

We’ll probably also want to tell visitors what Loc8r is for and why it exists, and we should add another screen to the list:

  4. A screen for “about us” information

Dividing the Screens into Collections

Next, we want to take the list of screens and collate them where they logically belong together. For example, the first three in the list are all dealing with locations. The About page doesn’t belong anywhere and it can go in a miscellaneous Others collection. Sketching this out brings us something like figure 1.


Figure 1: Collate the separate screens for our application into logical collections.

Having a quick sketch like this is the first stage in planning, and we need to go through this stage before we can start thinking about architecture. This stage gives us a chance to look at the basic pages, and to also think about the flow. Figure 1, for example, shows a basic user journey in the Locations collection, going from the List page, to a Details page, and then onto the form to add a review.

Continue reading: A Practical Guide to Planning a MEAN Stack Application

MEAN Stack: Developing an app with Angular 2+ and the Angular CLI


The MEAN stack comprises advanced technologies used to develop both the server side and the client side of a web application in a JavaScript environment. The components of the MEAN stack include the MongoDB database, Express.js (a web framework), Angular (a front-end framework), and the Node.js runtime environment. Taking control of the MEAN stack and familiarizing yourself with the different JavaScript technologies along the way will help you become a full-stack JavaScript developer.

JavaScript’s sphere of influence has dramatically grown over the years and with that growth, there is an ongoing desire to keep up with the latest trends in programming. New technologies have emerged and existing technologies have been rewritten from the ground up (I am looking at you, Angular).

This tutorial intends to create the MEAN application from scratch and serve as an update to the original MEAN stack tutorial. If you are familiar with MEAN and want to get started with the coding, you can skip to the overview section.

Introduction to MEAN Stack

Node.js - Node.js is a server-side runtime environment built on top of Chrome's V8 JavaScript engine. Node.js is based on an event-driven architecture that runs on a single thread with non-blocking I/O. These design choices allow you to build real-time web applications in JavaScript that scale well.

Express.js - Express is a minimalistic yet robust web application framework for Node.js. Express.js uses middleware functions to handle HTTP requests and then either return a response or pass the parameters on to another middleware. Application-level, router-level, and error-handling middleware are available in Express.js.

MongoDB - MongoDB is a document-oriented database program where documents are stored in a flexible, JSON-like format. Being a NoSQL database program, MongoDB frees you from the tabular jargon of the relational database.

Angular - Angular is an application framework developed by Google for building interactive single-page applications. Angular, originally AngularJS, was rewritten from scratch to shift to a component-based architecture from the age-old MVC approach. Angular recommends the use of TypeScript which, in my opinion, is a good idea because it enhances the development workflow.

Now that we are acquainted with the pieces of the MEAN puzzle, let’s see how we can fit them together, shall we?

Overview

Here is a high-level overview of our application.

High-level overview of our MEAN stack application

We will be building an Awesome Bucket List Application from the ground up without using any boilerplate template. The front-end will include a form that accepts your bucket list items and a view that updates and renders the whole bucket list in real-time.

Any update to the view will be interpreted as an event, and this will initiate an HTTP request. The server will process the request, update or query MongoDB if necessary, and then return a JSON object. The front-end will use this to update our view. By the end of this tutorial, you should have a bucket list application that looks like this:

Screenshot of the bucket list application that we are going to build

The entire code for the Bucket List application is available on GitHub.

Prerequisites

First things first, you need to have Node.js and MongoDB installed to get started. If you are entirely new to Node, I would recommend reading the Beginner’s Guide to Node to get things rolling. Likewise, setting up MongoDB is easy and you can check out their documentation for installation instructions specific to your platform.

$ node -v
# v8.0.0

Start the mongo daemon service using the command:

sudo service mongod start

To install the latest version of Angular, I would recommend using the Angular CLI. It offers everything you need to build and deploy your Angular application. If you are not familiar with the Angular CLI yet, make sure you check out The Ultimate Angular CLI Reference.

npm install -g @angular/cli

Create a new directory for our bucket list project. That’s where all your code will go, both the front end and the back end.

mkdir awesome-bucketlist
cd awesome-bucketlist

Creating the Backend Using Express.js and MongoDB

Express doesn’t impose any structural constraints on your web application. You can place the entire application code in a single file and get it to work, theoretically. However, your code base would be a complete mess. Instead, we are going to do this the MVC (Model, View, and Controller) way (minus the view part).

MVC is an architectural pattern that separates your models (the back end) and views (the UI) from the controller (everything in between), hence MVC. Since Angular will take care of the front end for us, we will have three directories: one for models, another for controllers, and a public directory where we will place the compiled Angular code.

In addition to this, we will create an app.js file that will serve as the entry point for running the Express server.

Directory structure of MEAN stack

Although using a model and controller architecture to build something trivial like our bucket list application might seem unnecessary, it will be helpful in building apps that are easier to maintain and refactor.

Initializing npm

We’re missing a package.json file for our back end. Type in npm init and, after you’ve answered the questions, you should have a package.json made for you.

We will declare our dependencies inside the package.json file. For this project we will need the following modules.

  • express: Express module for the web server
  • mongoose: A popular library for MongoDB
  • body-parser: Parses the body of incoming requests and makes it available under req.body
  • cors: CORS middleware enables cross-origin access control to our web server.

I’ve also added a start script so that we can start our server using npm start. The ~ prefix before each version number tells npm to install the most recent minor version, without any breaking changes.

{
  "name": "awesome-bucketlist",
  "version": "1.0.0",
  "description": "A simple bucketlist app using MEAN stack",
  "main": "app.js",
  "scripts": {
    "start": "node app"
  },
  "dependencies": {
    "express": "~4.15.3",
    "mongoose": "~4.11.0",
    "cors": "~2.8.3",
    "body-parser": "~1.17.2"
  },
  "author": "",
  "license": "ISC"
}

Now run npm install and that should take care of installing the dependencies.

Filling in app.js

First, we require all of the dependencies that we installed in the previous step.

// We will declare all our dependencies here
const express = require('express');
const path = require('path');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');

//Initialize our app variable
const app = express();

//Declaring Port
const port = 3000;

As you can see, we’ve also initialized the app variable and declared the port number. The app object gets instantiated on the creation of the Express web server. We can now load middleware into our Express server by registering it with app.use().

//Middleware for CORS
app.use(cors());

//Middleware for bodyparsing using both json and urlencoding
app.use(bodyParser.urlencoded({extended:true}));
app.use(bodyParser.json());

/* express.static is a built-in middleware function for serving static files.
   Here we tell the Express server that the public folder is the place to look for static files.
*/
app.use(express.static(path.join(__dirname, 'public')));

The app object can understand routes too.

app.get('/', (req,res) => {
    res.send("Invalid page");
})

Here, the get method invoked on the app corresponds to the GET HTTP method. It takes two parameters, the first being the path or route for which the middleware function should be applied.

The second is the actual middleware itself and it typically takes three arguments: The req argument corresponds to the HTTP Request; the res argument corresponds to the HTTP Response; and next is an optional callback argument that should be invoked if there are other subsequent middlewares that follow this one. We haven’t used next here since the res.send() ends the request-response cycle.
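To see next in action, here's a small, hypothetical logging middleware (not part of the tutorial's code); because it calls next(), the request carries on to whatever middleware or route handler follows:

// Logs every incoming request, then passes control down the chain
app.use((req, res, next) => {
  console.log(`${req.method} ${req.url}`);
  next();
});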

Add this line towards the end to make our app listen to the port that we had declared earlier.

//Listen to port 3000
app.listen(port, () => {
    console.log(`Starting the server at port ${port}`);
});

npm start should get our basic server up and running.

By default, npm doesn’t monitor your files/directories for any changes and you have to manually restart the server every time you’ve updated your code. I recommend using nodemon to monitor your files and automatically restart the server when any changes are detected. If you don't explicitly state which script to run, nodemon will run the file associated with the main property in your package.json.

npm install -g nodemon
nodemon

We are nearly done with our app.js file. What’s left to do? We need to

  1. Connect our server to the database.
  2. Create a controller which we can then import to our app.js.

Setting up mongoose

Setting up and connecting a database is straightforward with MongoDB. First, create a config directory and a file named database.js to store our configuration data. Export the database URI using module.exports.

// 27017 is the default port number.
module.exports = {
    database: 'mongodb://localhost:27017/bucketlist'
}

And establish a connection with the database in app.js using mongoose.connect().

// Connect mongoose to our database
const config = require('./config/database');
mongoose.connect(config.database);

"But what about creating the bucket list database?", you may ask. The database will be created automatically when you insert a document into a new collection on that database.

Working on the controller and the model

Now let’s move on to creating our bucket list controller. Create a bucketlist.js file inside the controllers directory. We also need to route all the /bucketlist requests to our bucketlist controller (in app.js).

const bucketlist = require('./controllers/bucketlist');

//Routing all HTTP requests to /bucketlist to bucketlist controller
app.use('/bucketlist',bucketlist);
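As a rough sketch of the shape such a controller might take (the BucketList model here is a hypothetical stand-in for the one built in the models directory), bucketlist.js could export an Express router:

// controllers/bucketlist.js (sketch)
const express = require('express');
const router = express.Router();

// Hypothetical mongoose model; the real one lives under models/
const BucketList = require('../models/bucketlist');

// GET /bucketlist: return every saved item
router.get('/', (req, res) => {
  BucketList.find({}, (err, items) => {
    if (err) return res.status(500).send(err);
    res.json(items);
  });
});

// POST /bucketlist: create a new item from the request body
router.post('/', (req, res) => {
  const newItem = new BucketList(req.body);
  newItem.save((err, item) => {
    if (err) return res.status(500).send(err);
    res.status(201).json(item);
  });
});

module.exports = router;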

Here is the final version of our app.js file.

Continue reading: MEAN Stack: Developing an app with Angular 2+ and the Angular CLI

Build a CRUD App Using React, Redux and FeathersJS


Building a modern project requires splitting the logic into front-end and back-end code. The reason behind this move is to promote code re-usability. For example, we may need to build a native mobile application that accesses the back-end API. Or we may be developing a module that will be part of a large modular platform.

The popular way of building a server-side API is to use a library like Express or Restify. These libraries make creating RESTful routes easy. The problem with these libraries is that we'll find ourselves writing a ton of repetitive code. We'll also need to write code for authorization and other middleware logic.

To escape this dilemma, we can use a framework like LoopBack or FeathersJS to help us generate an API.

At the time of writing, LoopBack has more GitHub stars and downloads than Feathers. LoopBack is a great library for generating RESTful CRUD endpoints in a short period of time. However, it does have a slight learning curve and the documentation isn't easy to get along with. It also has stringent framework requirements. For example, all models must inherit from one of its built-in model classes. If you need real-time capabilities in LoopBack, be prepared to do some additional coding to make it work.

FeathersJS, on the other hand, is much easier to get started with and has real-time support built in. Quite recently, the Auk version was released (because Feathers is so modular, they use bird names for version names), which introduced a vast number of changes and improvements in a number of areas. According to a post they published on their blog, they are now the 4th most popular real-time web framework. It has excellent documentation, and they have covered pretty much every area we can think of when it comes to building a real-time API.

What makes Feathers amazing is its simplicity. The entire framework is modular and we only need to install the features we need. Feathers itself is a thin wrapper built on top of Express that adds new features, namely services and hooks. Feathers also allows us to effortlessly send and receive data over WebSockets.
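As a flavor of what a service gives you (a hypothetical sketch; the tutorial's real contacts service is generated below), every registered service exposes the same CRUD methods over REST and WebSockets alike, and doubles as an event emitter:

const contacts = app.service('contacts');

// Equivalent to POST /contacts
contacts.create({ name: { first: 'Jane', last: 'Doe' } })
  .then(contact => contacts.get(contact._id)) // equivalent to GET /contacts/:id
  .then(contact => console.log(contact));

// Real-time: fires whenever any connected client creates a contact
contacts.on('created', contact => console.log('New contact:', contact));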

Prerequisites

Before you get started with the tutorial, you'll need to have a solid foundation in the following topics:

On your machine, you will need to have installed recent versions of:

  • Node.js 6+
  • MongoDB 3.4+
  • Yarn package manager (optional)
  • Chrome browser

If you have never written a database API in JavaScript before, I would recommend first taking a look at this tutorial on creating RESTful APIs.

Scaffold the App

We are going to build a CRUD contact manager application using React, Redux, Feathers and MongoDB. You can take a look at the completed project here.

In this tutorial, I'll show you how to build the application from the bottom up. We'll kick-start our project using the create-react-app tool.

# scaffold a new react project
create-react-app react-contact-manager
cd react-contact-manager

# delete unnecessary files
rm src/logo.svg src/App.css

Use your favorite code editor and remove all the content in index.css. Open App.js and rewrite the code like this:

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div>
        <h1>Contact Manager</h1>
      </div>
    );
  }
}

export default App;

Make sure to run yarn start to ensure the project is running as expected. Check the console tab to ensure that our project is running cleanly with no warnings or errors. If everything is running smoothly, use Ctrl+C to stop the server.

Build the API Server with Feathers

Let's proceed with generating the back-end API for our CRUD project using the feathers-cli tool.

# Install Feathers command-line tool
npm install -g feathers-cli

# Create directory for the back-end code
mkdir backend
cd backend

# Generate a feathers back-end API server
feathers generate app

? Project name | backend
? Description | contacts API server
? What folder should the source files live in? | src
? Which package manager are you using (has to be installed globally)? | Yarn
? What type of API are you making? | REST, Realtime via Socket.io

# Generate RESTful routes for Contact Model
feathers generate service

? What kind of service is it? | Mongoose
? What is the name of the service? | contact
? Which path should the service be registered on? | /contacts
? What is the database connection string? | mongodb://localhost:27017/backend


# Install email field type
yarn add mongoose-type-email

# Install the nodemon package
yarn add nodemon --dev

Open backend/package.json and update the start script to use nodemon so that the API server will restart automatically whenever we make changes.

// backend/package.json

....
"scripts": {
    ...
    "start": "nodemon src/",
    ...
  },
...

Let's open backend/config/default.json. This is where we can configure MongoDB connection parameters and other settings. I've also increased the default paginate value to 50, since in this tutorial we won't write front-end logic to deal with pagination.

{
  "host": "localhost",
  "port": 3030,
  "public": "../public/",
  "paginate": {
    "default": 50,
    "max": 50
  },
  "mongodb": "mongodb://localhost:27017/backend"
}

Open backend/src/models/contact.model.js and update the code as follows:

// backend/src/models/contact.model.js

require('mongoose-type-email');

module.exports = function (app) {
  const mongooseClient = app.get('mongooseClient');
  const contact = new mongooseClient.Schema({
    name : {
      first: {
        type: String,
        required: [true, 'First Name is required']
      },
      last: {
        type: String,
        required: false
      }
    },
    email : {
      type: mongooseClient.SchemaTypes.Email,
      required: [true, 'Email is required']
    },
    phone : {
      type: String,
      required: [true, 'Phone is required'],
      validate: {
        validator: function(v) {
          return /^\+(?:[0-9] ?){6,14}[0-9]$/.test(v);
        },
        message: '{VALUE} is not a valid international phone number!'
      }
    },
    createdAt: { type: Date, 'default': Date.now },
    updatedAt: { type: Date, 'default': Date.now }
  });

  return mongooseClient.model('contact', contact);
};

In addition to generating the contact service, Feathers has also generated a test case for us. We need to fix the service name first for it to pass:

// backend/test/services/contact.test.js

const assert = require('assert');
const app = require('../../src/app');

describe('\'contact\' service', () => {
  it('registered the service', () => {
    const service = app.service('contacts'); // change contact to contacts

    assert.ok(service, 'Registered the service');
  });
});

Open a new terminal and inside the backend directory, execute yarn test. You should have all the tests running successfully. Go ahead and execute yarn start to start the backend server. Once the server has finished starting it should print the line: 'Feathers application started on localhost:3030'.

Launch your browser and access the url: http://localhost:3030/contacts. You should expect to receive the following JSON response:

{"total":0,"limit":50,"skip":0,"data":[]}

Now let's use Postman to confirm that all of the CRUD RESTful routes are working.

If you are new to Postman, check out this tutorial. When you hit the SEND button, you should get your data back as the response, along with three additional fields: _id, createdAt and updatedAt.

Use the following JSON data to make a POST request using Postman. Paste this in the body and set content-type to application/json.

{
  "name": {
    "first": "Tony",
    "last": "Stark"
  },
  "phone": "+18138683770",
  "email": "tony@starkenterprises.com"
}

Build the UI

Let's start by installing the necessary front-end dependencies. We'll use semantic-ui-css and semantic-ui-react to style our pages, and react-router-dom to handle route navigation.

Important: make sure you run these installs outside the backend directory, i.e. in the project root.

# Install semantic-ui
yarn add semantic-ui-css semantic-ui-react

# Install react-router
yarn add react-router-dom

Update the project structure by adding the following directories and files:

|-- react-contact-manager
    |-- backend
    |-- node_modules
    |-- public
    |-- src
        |-- App.js
        |-- App.test.js
        |-- index.css
        |-- index.js
        |-- components
        |   |-- contact-form.js #(new)
        |   |-- contact-list.js #(new)
        |-- pages
            |-- contact-form-page.js #(new)
            |-- contact-list-page.js #(new)

Let's quickly populate the JS files with some placeholder code.

We'll write the contact-list.js component as a plain function, since it will be a purely presentational component.

// src/components/contact-list.js

import React from 'react';

export default function ContactList(){
  return (
    <div>
      <p>No contacts here</p>
    </div>
  )
}

For the top-level containers, I use pages. Let's provide some code for contact-list-page.js:

// src/pages/contact-list-page.js

import React, { Component} from 'react';
import ContactList from '../components/contact-list';

class ContactListPage extends Component {
  render() {
    return (
      <div>
        <h1>List of Contacts</h1>
        <ContactList/>
      </div>
    )
  }
}

export default ContactListPage;

For the contact-form component, it needs to be smart, since it is required to manage its own state, specifically form fields. For now, we'll place this placeholder code.

// src/components/contact-form.js
import React, { Component } from 'react';

class ContactForm extends Component {
  render() {
    return (
      <div>
        <p>Form under construction</p>
      </div>
    )
  }
}

export default ContactForm;

Populate the contact-form-page with this code:

// src/pages/contact-form-page.js

import React, { Component} from 'react';
import ContactForm from '../components/contact-form';

class ContactFormPage extends Component {
  render() {
    return (
      <div>
        <ContactForm/>
      </div>
    )
  }
}

export default ContactFormPage;

Now, let's create the navigation menu and define the routes for our App. App.js is often referred to as the 'layout template' for the Single Page Application.

Continue reading: Build a CRUD App Using React, Redux and FeathersJS

Building a React Universal Blog App: A Step-by-Step Guide


When the topic of single-page applications (SPAs) comes up, we tend to think of browsers, JavaScript, speed, and invisibility to search engines. This is because an SPA renders a page's content using JavaScript, and since web crawlers don't use a browser to view web pages, they can't view and index the content — or at least most of them can't.

This is a problem that some developers have tried to solve in various ways:

  1. Adding an escaped fragment version of a website, which requires all pages to be available in static form and adds a lot of extra work (now deprecated).
  2. Using a paid service to un-browserify an SPA into static markup for search engine spiders to crawl.
  3. Trusting that search engines are now advanced enough to read our JavaScript-only content. (I wouldn't count on it just yet.)

Using Node.js on the server and React on the client, we can build our JavaScript app to be universal (or isomorphic). This offers the benefits of both server-side and browser-side rendering, allowing both search engines and humans using browsers to view our SPA content.

In this step-by-step tutorial, I'll show you how to build a React Universal Blog App that will first render markup on the server side to make our content available to search engines. Then, it will let the browser take over in a single page application that is both fast and responsive.

Building a React Universal Blog App

Getting Started

Our universal blog app will make use of the following technologies and tools:

  1. Node.js for package management and server-side rendering
  2. React for UI views
  3. Express for an easy back-end JS server framework
  4. React Router for routing
  5. React Hot Loader for hot loading in development
  6. Flux for data flow
  7. Cosmic JS for content management

To start, run the following commands:

mkdir react-universal-blog
cd react-universal-blog

Now create a package.json file and add the following content:

{
  "name": "react-universal-blog",
  "version": "1.0.0",
  "engines": {
    "node": "4.1.2",
    "npm": "3.5.2"
  },
  "description": "",
  "main": "app-server.js",
  "dependencies": {
    "babel-cli": "^6.26.0",
    "babel-loader": "^7.1.2",
    "babel-preset-es2015": "^6.24.1",
    "babel-preset-es2017": "^6.24.1",
    "babel-preset-react": "^6.24.1",
    "babel-register": "^6.26.0",
    "cosmicjs": "^2.4.0",
    "flux": "^3.1.3",
    "history": "1.13.0",
    "hogan-express": "^0.5.2",
    "html-webpack-plugin": "^2.30.1",
    "path": "^0.12.7",
    "react": "^15.6.1",
    "react-dom": "^15.6.1",
    "react-router": "1.0.1",
    "webpack": "^3.5.6",
    "webpack-dev-server": "^2.7.1"
  },
  "scripts": {
    "webpack-dev-server": "NODE_ENV=development PORT=8080 webpack-dev-server --content-base public/ --hot --inline --devtool inline-source-map --history-api-fallback",
    "development": "cp views/index.html public/index.html && NODE_ENV=development webpack && npm run webpack-dev-server"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "react-hot-loader": "^1.3.0"
  }
}

In this file, you'll notice that we've added the following:

  1. Babel to package our CommonJS modules and convert our ES6 and React JSX into browser-compatible JavaScript
  2. The Cosmic JS official Node.js client to easily serve our blog content from the Cosmic JS cloud-hosted content API
  3. Flux for app data management (which is a very important element in our React application).
  4. React for UI management on server and browser
  5. React Router for routes on server and browser
  6. webpack for bringing everything together into a bundle.js file.

We've also added a script in our package.json file. When we run npm run development, the script copies the index.html file from our views folder into our public folder. Then, it sets the content base for our webpack-dev-server to public/ and enables hot reloading (on .js file save). Finally, it helps us debug our components at the source and gives us a fallback for pages it can't find (falls back to index.html).

Now let's set up our webpack configuration file by editing the file webpack.config.js:

// webpack.config.js
var webpack = require('webpack')

module.exports = {
  devtool: 'eval',
  entry: './app-client.js',
  output: {
    path: __dirname + '/public/dist',
    filename: 'bundle.js',
    publicPath: '/dist/'
  },
  module: {
    loaders: [
      { test: /\.js$/, loaders: 'babel-loader', exclude: /node_modules/ },
      { test: /\.jsx$/, loaders: 'babel-loader', exclude: /node_modules/ }
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      'process.env.COSMIC_BUCKET': JSON.stringify(process.env.COSMIC_BUCKET),
      'process.env.COSMIC_READ_KEY': JSON.stringify(process.env.COSMIC_READ_KEY),
      'process.env.COSMIC_WRITE_KEY': JSON.stringify(process.env.COSMIC_WRITE_KEY)
    })
 ]
};

You'll notice that we've added an entry property with a value of app-client.js. This file serves as our app client entry point, meaning that from this point webpack will bundle our application and output it to /public/dist/bundle.js (as specified in the output property). We also use loaders to let Babel work its magic on our ES6 and JSX code. React Hot Loader is used for hot-loading (no page refresh!) during development.

Before we jump into React-related stuff, let's get the look and feel of our blog ready to go. Since I'd like you to focus more on functionality than style in this tutorial, we'll use a pre-built front-end theme. I've chosen one from Start Bootstrap called Clean Blog.

Create a folder called views and inside it an index.html file. Open the HTML file and add the following code:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <meta name="description" content="">
  <meta name="author" content="">
  <title>{{ site.title }}{{# page }} | {{ page.title }}{{/ page }}</title>
  <!-- Bootstrap Core CSS -->
  <link href="/css/bootstrap.min.css" rel="stylesheet">
  <!-- Custom CSS -->
  <link href="/css/clean-blog.min.css" rel="stylesheet">
  <link href="/css/cosmic-custom.css" rel="stylesheet">
  <!-- Custom Fonts -->
  <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css" rel="stylesheet" type="text/css">
  <link href="//fonts.googleapis.com/css?family=Lora:400,700,400italic,700italic" rel="stylesheet" type="text/css">
  <link href="//fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800" rel="stylesheet" type="text/css">
  <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
  <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
  <!--[if lt IE 9]>
    <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
    <script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
  <![endif]-->
</head>
<body class="hidden">
  <div id="app">{{{ reactMarkup }}}</div>
  <script src="/js/jquery.min.js"></script>
  <script src="/js/bootstrap.min.js"></script>
  <script src="/js/clean-blog.min.js"></script>
  <script src="/dist/bundle.js"></script>
</body>
</html>

To get all of the JS and CSS files included in public, you can get them from the GitHub repository. Click here to download the files.

Generally I would use the fantastic React Bootstrap package and refrain from using jQuery. However, for the sake of brevity, we'll keep the theme's pre-built jQuery functionality.

In our index.html file, we'll have our React mount point set up at the div with id="app". The template variable {{{ reactMarkup }}} will be converted into our server-rendered markup, and then, once the browser kicks in, our React application will take over and mount to the div with id="app". To improve the user experience while our JavaScript loads everything, we add class="hidden" to our body, then remove this class once React has mounted. It might sound a bit complicated, but I'll show you how we'll do this in a minute.
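Here's a rough preview of the idea (a sketch only, assuming the routes module built later in this series and the package versions above; the article's actual app-client.js comes later). ReactDOM.render accepts a callback that fires once React has mounted, which is where we un-hide the body:

// app-client.js (sketch): mount the app, then reveal the page
import React from 'react'
import ReactDOM from 'react-dom'
import { Router } from 'react-router'
import createBrowserHistory from 'history/lib/createBrowserHistory'
import routes from './routes'

const app = document.getElementById('app')

ReactDOM.render(
  <Router routes={ routes } history={ createBrowserHistory() }/>,
  app,
  () => {
    // React has mounted, so remove the hidden class from the body
    document.body.className = ''
  }
)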

At this point, your app should have the following structure:

package.json
public
  |-css
    |-bootstrap.min.css
    |-cosmic-custom.css
  |-js
    |-jquery.min.js
    |-bootstrap.min.js
    |-clean-blog.min.js
views
  |-index.html
webpack.config.js

Now that we have our static pieces done, let's start building some React Components.

Continue reading: Building a React Universal Blog App: A Step-by-Step Guide

Building a React Universal Blog App: Implementing Flux


In the first part of this miniseries, we started digging into the world of React to see how we could use it, together with Node.js, to build a React Universal Blog App.

In this second and last part, we'll take our blog to the next level by learning how to add and edit content. We'll also get into the real meat of how to easily scale our React Universal Blog App using React organizational concepts and the Flux pattern.

Break It Down for Me

As we add more pages and content to our blog, our routes.js file will quickly become big. Since it's one of React's guiding principles to break things up into smaller, manageable pieces, let's separate our routes into different files.

Open your routes.js file and edit it so that it'll have the following code:

// routes.js
import React from 'react'
import { Route, IndexRoute } from 'react-router'

// Store
import AppStore from './stores/AppStore'

// Main component
import App from './components/App'

// Pages
import Blog from './components/Pages/Blog'
import Default from './components/Pages/Default'
import Work from './components/Pages/Work'
import NoMatch from './components/Pages/NoMatch'

export default (
  <Route path="/" data={AppStore.data} component={App}>
    <IndexRoute component={Blog}/>
    <Route path="about" component={Default}/>
    <Route path="contact" component={Default}/>
    <Route path="work" component={Work}/>
    <Route path="/work/:slug" component={Work}/>
    <Route path="/blog/:slug" component={Blog}/>
    <Route path="*" component={NoMatch}/>
  </Route>
)

We've added a few different pages to our blog and significantly reduced the size of our routes.js file by breaking the pages up into separate components. Moreover, note that we've added a Store by including AppStore, which is very important for the next steps in scaling out our React application.

The Store: the Single Source of Truth

In the Flux pattern, the Store is a very important piece, because it acts as the single source of truth for data management. This is a crucial concept in understanding how React development works, and one of the most touted benefits of React. The beauty of this discipline is that, at any given state of our app we can access the AppStore's data and know exactly what's going on within it. There are a few key things to keep in mind if you want to build a data-driven React application:

  1. We never manipulate the DOM directly.
  2. Our UI responds to data, and data lives in the store.
  3. If we need to change our UI, we go to the store, and the store creates the new data state of our app.
  4. New data is fed to higher-level components, then passed down to the lower-level components through props composing the new UI, based on the new data received.

With those four points, we basically have the foundation for a one-way data flow application. This also means that, at any state in our application, we can console.log(AppStore.data), and if we build our app correctly, we'll know exactly what we can expect to see. You'll experience how powerful this is for debugging as well.

Now let's create a store folder called stores. Inside it, create a file called AppStore.js with the following content:

// AppStore.js
import { EventEmitter } from 'events'
import _ from 'lodash'

export default _.extend({}, EventEmitter.prototype, {

  // Initial data
  data: {
    ready: false,
    globals: {},
    pages: [],
    item_num: 5
  },

  // Emit change event
  emitChange: function(){
    this.emit('change')
  },

  // Add change listener
  addChangeListener: function(callback){
    this.on('change', callback)
  },

  // Remove change listener
  removeChangeListener: function(callback) {
    this.removeListener('change', callback)
  }

})

You can see that we've attached an event emitter. This allows us to edit data in our store and then re-render our application using AppStore.emitChange(). This is a powerful tool that should only be used in certain places in our application. Otherwise, it can become hard to understand where AppStore data is being altered, which brings us to the next point…
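For instance, an action that reveals more articles might simply update the store's data and then broadcast the change. A simplified sketch of the pattern (the real actions come later):

// Somewhere in our actions: update the single source of truth...
AppStore.data.item_num = AppStore.data.item_num + 5

// ...then notify every listening component so the UI re-renders
AppStore.emitChange()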

React Components: Higher and Lower Level

Dan Abramov wrote a great post on the concept of smart and dumb components. The idea is to keep data-altering actions just in the higher-level (smart) components, while the lower-level (dumb) components take the data they're given through props and render UI based on that data. Any time there's an action performed on a lower-level component, that event is passed up through props to the higher-level components in order to be processed into an action. Then it redistributes the data (one-way data flow) back through the application.

That said, let's start building some components. To do that, create a folder called components. Inside it, create a file called App.js with this content:

// App.js
import React, { Component } from 'react'

// Dispatcher
import AppDispatcher from '../dispatcher/AppDispatcher'

// Store
import AppStore from '../stores/AppStore'

// Components
import Nav from './Partials/Nav'
import Footer from './Partials/Footer'
import Loading from './Partials/Loading'

export default class App extends Component {

  constructor(props){
    super(props)
    // Bind once, so the same function reference is added and removed
    this._onChange = this._onChange.bind(this)
  }

  // Add change listeners to stores
  componentDidMount(){
    AppStore.addChangeListener(this._onChange)
  }

  // Remove change listeners from stores
  componentWillUnmount(){
    AppStore.removeChangeListener(this._onChange)
  }

  _onChange(){
    this.setState(AppStore)
  }

  getStore(){
    AppDispatcher.dispatch({
      action: 'get-app-store'
    })
  }

  render(){

    const data = AppStore.data

    // Show loading for browser
    if(!data.ready){

      document.body.className = ''
      this.getStore()

      let style = {
        marginTop: 120
      }
      return (
        <div className="container text-center" style={ style }>
          <Loading />
        </div>
      )
    }

    // Server first
    const Routes = React.cloneElement(this.props.children, { data: data })

    return (
      <div>
        <Nav data={ data }/>
        { Routes }
        <Footer data={ data }/>
      </div>
    )
  }
}

In our App.js component, we've attached an event listener to our AppStore that will re-render the state when AppStore emits an onChange event. This re-rendered data will then be passed down as props to the child components. Also note that we've added a getStore method that will dispatch the get-app-store action to render our data on the client side. Once the data has been fetched from the Cosmic JS API, it will trigger an AppStore change that will include AppStore.data.ready set to true, remove the loading sign and render our content.

Page Components

To build the first page of our blog, create a Pages folder. Inside it, we'll create a file called Blog.js with the following code:

// Blog.js
import React, { Component } from 'react'
import _ from 'lodash'
import config from '../../config'

// Components
import Header from '../Partials/Header'
import BlogList from '../Partials/BlogList'
import BlogSingle from '../Partials/BlogSingle'

// Dispatcher
import AppDispatcher from '../../dispatcher/AppDispatcher'

export default class Blog extends Component {

  componentWillMount(){
    this.getPageData()
  }

  componentDidMount(){
    const data = this.props.data
    document.title = config.site.title + ' | ' + data.page.title
  }

  getPageData(){
    AppDispatcher.dispatch({
      action: 'get-page-data',
      page_slug: 'blog',
      post_slug: this.props.params.slug
    })
  }

  getMoreArticles(){
    AppDispatcher.dispatch({
      action: 'get-more-items'
    })
  }

  render(){

    const data = this.props.data
    const globals = data.globals
    const pages = data.pages
    let main_content

    if(!this.props.params.slug){

      main_content = <BlogList getMoreArticles={ this.getMoreArticles } data={ data }/>

    } else {

      const articles = data.articles

      // Get current page slug
      const slug = this.props.params.slug
      const articles_object = _.keyBy(articles, 'slug')
      const article = articles_object[slug]
      main_content = <BlogSingle article={ article } />

    }

    return (
      <div>
        <Header data={ data }/>
        <div id="main-content" className="container">
          <div className="row">
            <div className="col-lg-8 col-lg-offset-2 col-md-10 col-md-offset-1">
            { main_content }
            </div>
          </div>
        </div>
      </div>
    )
  }
}

This page is going to serve as a template for our blog list page (home) and our single blog pages. Here we've added a method to our component that will get the page data prior to the component mounting using the React lifecycle componentWillMount method. Then, once the component has mounted at componentDidMount(), we'll add the page title to the <title> tag of the document.

Along with some of the rendering logic in this higher-level component, we've included the getMoreArticles method. This is a good example of a call to action that's stored in a higher-level component and made available to lower-level components through props.

Let's now get into our BlogList component to see how this works.

Create a new folder called Partials. Then, inside it, create a file called BlogList.js with the following content:

// BlogList.js
import React, { Component } from 'react'
import _ from 'lodash'
import { Link } from 'react-router'

export default class BlogList extends Component {

  scrollTop(){
    $('html, body').animate({
      scrollTop: $("#main-content").offset().top
    }, 500)
  }

  render(){

    let data = this.props.data
    let item_num = data.item_num
    let articles = data.articles

    let load_more
    let show_more_text = 'Show More Articles'

    if(data.loading){
      show_more_text = 'Loading...'
    }

    if(articles && item_num <= articles.length){
      load_more = (
        <div>
          <button className="btn btn-default center-block" onClick={ this.props.getMoreArticles.bind(this) }>
            { show_more_text }
          </button>
        </div>
      )
    }

    articles = _.take(articles, item_num)

    let articles_html = articles.map(( article ) => {
      let date_obj = new Date(article.created)
      let created = (date_obj.getMonth()+1) + '/' + date_obj.getDate() + '/' + date_obj.getFullYear()
      return (
        <div key={ 'key-' + article.slug }>
          <div className="post-preview">
            <h2 className="post-title pointer">
              <Link to={ '/blog/' + article.slug } onClick={ this.scrollTop }>{ article.title }</Link>
            </h2>
            <p className="post-meta">Posted by <a href="https://cosmicjs.com" target="_blank">Cosmic JS</a> on { created }</p>
          </div>
          <hr/>
        </div>
      )
    })

    return (
      <div>
        <div>{ articles_html }</div>
        { load_more }
      </div>
    )
  }
}

In our BlogList component, we've added an onClick event to our Show More Articles button. The latter executes the getMoreArticles method that was passed down as props from the higher-level page component. When that button is clicked, the event bubbles up to the Blog component and then triggers an action on the AppDispatcher. AppDispatcher acts as the middleman between our higher-level components and our AppStore.

For the sake of brevity, we're not going to build out all of the Page and Partial components in this tutorial, so please download the GitHub repo and add them from the components folder.

AppDispatcher

The AppDispatcher is the operator in our application that accepts information from the higher-level components and distributes actions to the store, which then re-renders our application data.

To continue this tutorial, create a folder named dispatcher. Inside it, create a file called AppDispatcher.js, containing the following code:

// AppDispatcher.js
import { Dispatcher } from 'flux'
import { getStore, getPageData, getMoreItems } from '../actions/actions'

const AppDispatcher = new Dispatcher()

// Register callback with AppDispatcher
AppDispatcher.register((payload) => {

  let action = payload.action

  switch(action) {

    case 'get-app-store':
      getStore()
      break

    case 'get-page-data':
      getPageData(payload.page_slug, payload.post_slug)
      break

    case 'get-more-items':
      getMoreItems()
      break

    default:
      return true

  }

  return true

})

export default AppDispatcher

We've introduced the Flux module into this file to build our dispatcher. Let's add our actions now.
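The actions themselves are covered in the continuation. As a skeleton of the shape the dispatcher expects (stubs only; the real implementations fetch content from the Cosmic JS API):

// actions/actions.js (skeleton)
import AppStore from '../stores/AppStore'

export function getStore(){
  // Fetch globals, pages and articles from the Cosmic JS API,
  // write them to AppStore.data, set data.ready = true, then:
  AppStore.emitChange()
}

export function getPageData(page_slug, post_slug){
  // Look up the requested page/post and update AppStore.data accordingly
  AppStore.emitChange()
}

export function getMoreItems(){
  AppStore.data.item_num += 5 // reveal five more articles in BlogList
  AppStore.emitChange()
}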

Continue reading: Building a React Universal Blog App: Implementing Flux

How to Deploy Node Applications: Heroku vs Now.sh


As Node.js continues to gain in popularity, new tutorials pop up teaching you to write server-side JavaScript apps and APIs. Once you've built your shiny new Node app, though, what then?

In this article, I'm going to take a look at a couple of options for deploying your Node applications. We'll take a look at Now.sh and Heroku.

I'll explain how to deploy your code to each platform and we'll end the article with a short summary of the pros and cons. I'll pay attention to options for monitoring, ease of use, offered functionality and what the free hosting plan includes.

Deployment with Heroku

To be able to deploy apps to Heroku, you will have to sign up at Heroku and install the Heroku CLI for your machine. I prefer working from my terminal!

Before we can start, we need to add a Procfile to the root directory of the project. Heroku makes use of this file to determine how to execute the uploaded code.

The following code needs to be added to the file so Heroku knows what command should be executed to start the app:

web: node app.js

Once this is done, try to log in from the terminal by typing heroku login. Heroku will ask you to enter your login credentials.

Next, navigate to the root of your project and enter the command: heroku create. This creates an app on Heroku which is ready to receive the source code of your project. The name of the app on Heroku is randomly created.

To deploy our code to Heroku, simply use git push heroku master. We can visit the app with the command heroku open which will open the generated URL.

Pushing changes to Heroku

Changes can be pushed by following the normal Git workflow:

git add .
git commit -m "Changes made to app"
git push heroku master
heroku open

Useful Heroku Commands

  • To make sure that at least one instance of the app is running: heroku ps:scale web=1
    Because we are using the free platform, it is not possible to upscale your application. However, it is possible to downscale so no instances of the application are running: heroku ps:scale web=0

  • View the latest logs (stream) in chronological order generated by Heroku: heroku logs --tail
    It's also possible to show the app logs only. App logs are the output of console.log() statements in your code and can be viewed with heroku logs --source app

  • Heroku provides the possibility to run your app locally at http://localhost:5000: heroku local web

  • List all Heroku apps: heroku apps

  • Remove a deployment: heroku apps:destroy --app app-name

  • Add an owner (account) that can access the app: heroku access:add me@email.com; similarly, remove one with heroku access:remove me@email.com

Heroku Environment Variables

If you are working with a .env file locally, you might want to use other environment variables for your Heroku deployment. It is possible to set these with heroku config:set PORT=3001. These values overwrite the variables set in your .env file.

To see all defined Heroku environment variables, just use heroku config. If you want to remove an environment variable, e.g. PORT, use heroku config:unset PORT.
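
One related gotcha: Heroku assigns the port your app must listen on through the PORT environment variable, so avoid hard-coding it. A minimal sketch, assuming an Express app:

// app.js: read the port from the environment, falling back to 3000 locally
const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('Hello from Heroku!'));

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}`));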

Continue reading %How to Deploy Node Applications: Heroku vs Now.sh%


A Beginner’s Guide to npm — the Node Package Manager

Node.js makes it possible to write applications in JavaScript on the server. It's built on the V8 JavaScript runtime and written in C++ — so it's fast. Originally, it was intended as a server environment for applications, but developers started using it to create tools to aid them in local task automation. Since then, a whole new ecosystem of Node-based tools (such as Grunt, Gulp and Webpack) has evolved to transform the face of front-end development.

This popular article was updated on 08.06.2017 to reflect the current state of npm, as well as the changes introduced by the release of version 5.

To make use of these tools (or packages) in Node.js we need to be able to install and manage them in a useful way. This is where npm, the Node package manager, comes in. It installs the packages you want to use and provides a useful interface to work with them.

In this article I'm going to look at the basics of working with npm. I will show you how to install packages in local and global mode, as well as delete, update and install a certain version of a package. I'll also show you how to work with package.json to manage a project's dependencies. If you're more of a video person, why not sign up for SitePoint Premium and watch our free screencast: What is npm and How Can I Use It?.

But before we can start using npm, we first have to install Node.js on our system. Let's do that now...

Installing Node.js

Head to the Node.js download page and grab the version you need. There are Windows and Mac installers available, as well as pre-compiled Linux binaries and source code. For Linux, you can also install Node via the package manager, as outlined here.

For this tutorial we are going to use v6.10.3 Stable. At the time of writing, this is the current Long Term Support (LTS) version of Node.

Tip: You might also consider installing Node using a version manager. This negates the permissions issue raised in the next section.

Let's see where node was installed and check the version.

$ which node
/usr/bin/node
$ node --version
v6.10.3

To verify that your installation was successful let's give Node's REPL a try.

$ node
> console.log('Node is running');
Node is running
> .help
.break Sometimes you get stuck, this gets you out
.clear Alias for .break
.exit  Exit the repl
.help  Show repl options
.load  Load JS from a file into the REPL session
.save  Save all evaluated commands in this REPL session to a file
> .exit

The Node.js installation worked, so we can now focus our attention on npm, which was included in the install.

$ which npm
/usr/bin/npm
$ npm --version
3.10.10

Node Packaged Modules

npm can install packages in local or global mode. In local mode, it installs the package in a node_modules folder in your current working directory. This location is owned by the current user. Global packages are installed in {prefix}/lib/node_modules/ which is owned by root (where {prefix} is usually /usr/ or /usr/local). This means you would have to use sudo to install packages globally, which could cause permission errors when resolving third-party dependencies, as well as being a security concern. Let's change that:


Changing the Location of Global Packages

Let's see what output npm config gives us.

$ npm config list
; cli configs
user-agent = "npm/3.10.10 node/v6.10.3 linux x64"

; userconfig /home/sitepoint/.npmrc
prefix = "/home/sitepoint/.node_modules_global"

; node bin location = /usr/bin/nodejs
; cwd = /home/sitepoint
; HOME = /home/sitepoint
; "npm config ls -l" to show all defaults.

This gives us information about our install. For now it's important to get the current global location.

$ npm config get prefix
/usr

This is the prefix we want to change, so as to install global packages in our home directory. To do that create a new directory in your home folder.

$ cd ~ && mkdir .node_modules_global
$ npm config set prefix=$HOME/.node_modules_global

With this simple configuration change, we have altered the location to which global Node packages are installed. This also creates a .npmrc file in our home directory.

$ npm config get prefix
/home/sitepoint/.node_modules_global
$ cat .npmrc
prefix=/home/sitepoint/.node_modules_global

We still have npm installed in a location owned by root. But because we changed our global package location we can take advantage of that. We need to install npm again, but this time in the new user-owned location. This will also install the latest version of npm.

$ npm install npm --global
└─┬ npm@5.0.2
  ├── abbrev@1.1.0
  ├── ansi-regex@2.1.1
....
├── wrappy@1.0.2
└── write-file-atomic@2.1.0

Finally, we need to add .node_modules_global/bin to our $PATH environment variable, so that we can run global packages from the command line. Do this by appending the following line to your .profile, .bash_profile or .bashrc and restarting your terminal.

export PATH="$HOME/.node_modules_global/bin:$PATH"

Now our .node_modules_global/bin will be found first and the correct version of npm will be used.

$ which npm
/home/sitepoint/.node_modules_global/bin/npm
$ npm --version
5.0.2

Continue reading %A Beginner’s Guide to npm — the Node Package Manager%

Unit Test Your JavaScript Using Mocha and Chai

Have you ever made some changes to your code, and later found it caused something else to break?

I'm sure most of us have. This is almost inevitable, especially when you have a larger amount of code. One thing depends on another, and then changing it breaks something else as a result.

But what if that didn't happen? What if you had a way of knowing when something breaks as a result of some change? That would be pretty great. You could modify your code without having to worry about breaking anything, you'd have fewer bugs and you'd spend less time debugging.

That's where unit tests shine. They will automatically detect any problems in the code for you. Make a change, run your tests and if anything breaks, you'll immediately know what happened, where the problem is and what the correct behavior should be. This completely eliminates any guesswork!

In this article, I'll show you how to get started unit testing your JavaScript code. The examples and techniques shown in this article can be applied to both browser-based code and Node.js code.

The code for this tutorial is available from our GitHub repo.

What is Unit Testing

When you test your codebase, you take a piece of code — typically a function — and verify it behaves correctly in a specific situation. Unit testing is a structured and automated way of doing this. As a result, the more tests you write, the bigger the benefit you receive. You will also have a greater level of confidence in your codebase as you continue to develop it.

The core idea with unit testing is to test a function's behavior when giving it a certain set of inputs. You call a function with certain parameters, and check you got the correct result.

// Given 1 and 10 as inputs...
var result = Math.max(1, 10);

// ...we should receive 10 as the output
if(result !== 10) {
  throw new Error('Failed');
}

In practice, tests can sometimes be more complex. For example, if your function makes an Ajax request, the test needs some more set up, but the same principle of "given certain inputs, we expect a specific outcome" still applies.

Setting Up the Tools

For this article, we'll be using Mocha. It's easy to get started with, can be used for both browser-based testing and Node.js testing, and it plays nicely with other testing tools.

The easiest way to install Mocha is through npm (for which we also need to install Node.js). If you're unsure about how to install either npm or Node on your system, consult our tutorial: A Beginner's Guide to npm — the Node Package Manager

With Node installed, open up a terminal or command line in your project's directory.

  • If you want to test code in the browser, run npm install mocha chai --save-dev
  • If you want to test Node.js code, in addition to the above, run npm install -g mocha

This installs the packages mocha and chai. Mocha is the library that allows us to run tests, and Chai contains some helpful functions that we'll use to verify our test results.

Testing on Node.js vs Testing in the Browser

The examples that follow are designed to work if running the tests in a browser. If you want to unit test your Node.js application, follow these steps.

  • For Node, you don't need the test runner file.
  • To include Chai, add var chai = require('chai'); at the top of the test file.
  • Run the tests using the mocha command, instead of opening a browser.

Setting Up a Directory Structure

You should put your tests in a separate directory from your main code files. This makes it easier to structure them, for example if you want to add other types of tests in the future (such as integration tests or functional tests).

The most popular practice with JavaScript code is to have a directory called test/ in your project's root directory. Then, each test file is placed under test/someModuleTest.js. Optionally, you can also use directories inside test/, but I recommend keeping things simple — you can always change it later if necessary.

Setting Up a Test Runner

In order to run our tests in a browser, we need to set up a simple HTML page to be our test runner page. The page loads Mocha, the testing libraries and our actual test files. To run the tests, we'll simply open the runner in a browser.

If you're using Node.js, you can skip this step. Node.js unit tests can be run using the command mocha, assuming you've followed the recommended directory structure.

Below is the code we'll use for the test runner. I'll save this file as testrunner.html.

<!DOCTYPE html>
<html>
  <head>
    <title>Mocha Tests</title>
    <link rel="stylesheet" href="node_modules/mocha/mocha.css">
  </head>
  <body>
    <div id="mocha"></div>
    <script src="node_modules/mocha/mocha.js"></script>
    <script src="node_modules/chai/chai.js"></script>
    <script>mocha.setup('bdd')</script>

    <!-- load code you want to test here -->

    <!-- load your test files here -->

    <script>
      mocha.run();
    </script>
  </body>
</html>

The important bits in the test runner are:

  • We load Mocha's CSS styles to give our test results nice formatting.
  • We create a div with the ID mocha. This is where the test results are inserted.
  • We load Mocha and Chai. They are located in subfolders of the node_modules folder since we installed them via npm.
  • By calling mocha.setup, we make Mocha's testing helpers available.
  • Then, we load the code we want to test and the test files. We don't have anything here just yet.
  • Last, we call mocha.run to run the tests. Make sure you call this after loading the source and test files.

The Basic Test Building Blocks

Now that we can run tests, let's start writing some.

We'll begin by creating a new file test/arrayTest.js. An individual test file such as this one is known as a test case. I'm calling it arrayTest.js because for this example, we'll be testing some basic array functionality.

Every test case file follows the same basic pattern. First, you have a describe block:

describe('Array', function() {
  // Further code for tests goes here
});

describe is used to group individual tests. The first parameter should indicate what we're testing — in this case, since we're going to test array functions, I've passed in the string 'Array'.

Secondly, inside the describe, we'll have it blocks:

describe('Array', function() {
  it('should start empty', function() {
    // Test implementation goes here
  });

  // We can have more its here
});

it is used to create the actual tests. The first parameter to it should provide a human-readable description of the test. For example, we can read the above as "it should start empty", which is a good description of how arrays should behave. The code to implement the test is then written inside the function passed to it.

All Mocha tests are built from these same building blocks, and they follow this same basic pattern.

  • First, we use describe to say what we're testing - for example, "describe how array should work".
  • Then, we use a number of it functions to create the individual tests - each it should explain one specific behavior, such as "it should start empty" for our array case above.
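
Putting these building blocks together with an assertion from Chai, a complete test case might look something like the following. This is a minimal sketch using Chai's expect interface; the array behaviors tested are just illustrations:

// test/arrayTest.js (sketch)
// In the browser, chai is available globally via the script tag;
// in Node, require it first: var chai = require('chai');
var expect = chai.expect;

describe('Array', function() {
  it('should start empty', function() {
    var arr = [];

    expect(arr).to.be.empty;
  });

  it('should contain an element after a push', function() {
    var arr = [];
    arr.push('hello');

    expect(arr).to.have.lengthOf(1);
  });
});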

Continue reading %Unit Test Your JavaScript Using Mocha and Chai%

Debugging JavaScript with the Node Debugger

It’s a trap! You’ve spent a good amount of time making changes, and nothing works. Perusing the code shows no sign of errors. You go over the logic once, twice or thrice, and run it a few times more. Even unit tests can’t save you now; they too are failing. This feels like staring into an empty void without knowing what to do. You feel alone, in the dark, and starting to get pretty angry.

A natural response is to throw code quality out and litter everything that gets in the way. This means sprinkling a few print lines here and there and hoping something works. This is shooting in pitch black, and you know there isn’t much hope.

You think the darkness is your ally

Does this sound all too familiar? If you’ve ever written more than a few lines of JavaScript, you may have experienced this darkness. There will come a time when a scary program will leave you in an empty void. At some point, it is not smart to face peril alone with primitive tools and techniques. If you are not careful, you’ll find yourself wasting hours to identify trivial bugs.

The better approach is to equip yourself with good tooling. A good debugger shortens the feedback loop and makes you more effective. The good news is Node has a very good one out of the box. The Node debugger is versatile and works with any chunk of JavaScript.

Below are strategies that have saved me from wasting valuable time in JavaScript.

The Node CLI Debugger

The Node debugger command line is a useful tool. If you are ever in a bind and can’t access a fancy editor, for any reason, this will help. The tooling uses a TCP-based protocol to communicate between the debugged process and the debugging client. The command line client accesses the process via a port and gives you a debugging session.

You run the tool with node debug myScript.js (note the debug argument between the command and the script name). Here are a few commands that you’ll want to memorize:

  • sb('myScript.js', 1) sets a breakpoint on the first line of your script
  • c continues the paused process until you hit a breakpoint
  • repl opens the debugger’s Read-Eval-Print-Loop (REPL) for evaluation

Don’t Mind the Entry Point

When you set the initial breakpoint, one tip is that it's not necessary to set it at the entry point. Say myScript.js, for example, requires myOtherScript.js. The tool lets you set a breakpoint in myOtherScript.js although it is not the entry point.

For example:

// myScript.js
var otherScript = require('./myOtherScript');

var aDuck = otherScript();

Say that other script does:

// myOtherScript.js
module.exports = function myOtherScript() {
  var dabbler = {
    name: 'Dabbler',
    attributes: [
      { inSeaWater: false },
      { canDive: false }
    ]
  };

  return dabbler;
};

If myScript.js is the entry point, don’t worry. You can still set a breakpoint like this, for example, sb('myOtherScript.js', 10). The debugger does not care that the other module is not the entry point. Ignore the warning, if you see one, as long as the breakpoint is set right. The Node debugger may complain that the module hasn’t loaded yet.

Time for a Demo of Ducks

Time for a demo! Say you want to debug the following program:

function getAllDucks() {
  var ducks = { types: [
    {
      name: 'Dabbler',
      attributes: [
        { inSeaWater: false },
        { canDive: false }
      ]
    },
    {
      name: 'Eider',
      attributes: [
        { inSeaWater: true },
        { canDive: true }
      ]
    } ] };

  return ducks;
}

getAllDucks();

Using the CLI tooling, this is how you'd do a debugging session:

> node debug debuggingFun.js
> sb(18)
> c
> repl

Continue reading %Debugging JavaScript with the Node Debugger%

How to Build a File Upload Form with Express and Dropzone.js

Let’s face it, nobody likes forms. Developers don’t like building them, designers don’t particularly enjoy styling them and users certainly don’t like filling them in.

Of all the components that can make up a form, the file control could just be the most frustrating of the lot. A real pain to style, clunky and awkward to use and uploading a file will slow down the submission process of any form.

That’s why a plugin to enhance them is always worth a look, and DropzoneJS is just one such option. It will make your file upload controls look better, make them more user-friendly, and by using AJAX to upload the file in the background, at the very least make the process seem quicker. It also makes it easier to validate files before they even reach your server, providing near-instantaneous feedback to the user.

We’re going to take a look at DropzoneJS in some detail; show how to implement it and look at some of the ways in which it can be tweaked and customized. We’ll also implement a simple server-side upload mechanism using Node.js.

As ever, you can find the code for this tutorial on our GitHub repository.

Introducing DropzoneJS

As the name implies, DropzoneJS allows users to upload files using drag n’ drop. Whilst the usability benefits could justifiably be debated, it’s an increasingly common approach and one which is in tune with the way a lot of people work with files on their desktop. It’s also pretty well supported across major browsers.

DropzoneJS isn’t simply a drag n’ drop based widget, however; clicking the widget launches the more conventional file chooser dialog approach.

Here’s an animation of the widget in action:

The DropzoneJS widget in action

Alternatively, take a look at this most minimal of examples.

You can use DropzoneJS for any type of file, though the nice little thumbnail effect makes it ideally suited to uploading images in particular.

Features

To summarize some of the plugin’s features and characteristics:

  • Can be used with or without jQuery
  • Drag and drop support
  • Generates thumbnail images
  • Supports multiple uploads, optionally in parallel
  • Includes a progress bar
  • Fully themeable
  • Extensible file validation support
  • Available as an AMD module, for use with RequireJS and other module loaders
  • It comes in at around 33Kb when minified

Browser Support

Taken from the official documentation, browser support is as follows:

  • Chrome 7+
  • Firefox 4+
  • IE 10+
  • Opera 12+ (Version 12 for MacOS is disabled because their API is buggy)
  • Safari 6+

There are a couple of ways to handle fallbacks for when the plugin isn’t fully supported, which we’ll look at later.

Installation

The simplest way to install DropzoneJS is via Bower:

bower install dropzone

Alternatively you can grab it from GitHub, or simply download the standalone JavaScript file — though bear in mind you’ll also need the basic styles, also available in the GitHub repo.

There are also third-party packages providing support for ReactJS and implementing the widget as an Angular directive.

First Steps

If you’ve used the Bower or download method, make sure you include both the main JS file and the styles (or include them into your application’s stylesheet), e.g:

<link rel="stylesheet" href="/path/to/dropzone.css">
<script type="text/javascript" src="/path/to/dropzone.js"></script>

Pre-minified versions are also supplied in the package.

Tip: If you’re using RequireJS, use dropzone-amd-module.js instead.

Basic Usage

The simplest way to implement the plugin is to attach it to a form, although you can use any HTML such as a div tag. Using a form, however, means fewer options to set — most notably the URL, which is the most important configuration property.

You can initialize it simply by adding the dropzone class, for example:

<form id="upload-widget" method="post" action="/upload" class="dropzone">
</form>

Technically that’s all you need to do, though in most cases you’ll want to set some additional options. The format for that is as follows:

Dropzone.options.WIDGET_ID = {
  //
};

To derive the widget ID for setting the options, take the ID you defined in your HTML and camel-case it. For example, upload-widget becomes uploadWidget:

Dropzone.options.uploadWidget = {
  //
};

You can also create an instance programmatically:

var uploader = new Dropzone('#upload-widget', options);

Next up, we’ll look at some of the available configuration options.

Basic Configuration Options

The url option defines the target for the upload form, and is the only required parameter. That said, if you’re attaching it to a form element then it’ll simply use the form’s action attribute, in which case you don’t even need to specify that.

The method option sets the HTTP method and again, it will take this from the form element if you use that approach, or else it’ll simply default to POST, which should suit most scenarios.

The paramName option is used to set the name of the parameter for the uploaded file; were you using a file upload form element, it would match the name attribute. If you don’t include it then it defaults to file.

maxFiles sets the maximum number of files a user can upload, if it’s not set to null.

By default the widget will show a file dialog when it's clicked, though you can use the clickable parameter to disable this by setting it to false, or alternatively you can provide an HTML element or CSS selector to customize the clickable element.
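
Pulling those basic options together, a configuration might look something like this (a sketch; the values are purely illustrative):

Dropzone.options.uploadWidget = {
  url: '/upload',      // where files are sent; falls back to the form's action
  method: 'post',      // HTTP method; POST is the default
  paramName: 'file',   // name of the uploaded file parameter; 'file' is the default
  maxFiles: 5,         // reject anything beyond the fifth file
  clickable: true      // clicking the widget opens the file dialog (the default)
};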

Those are the basic options, but let’s now look at some of the more advanced options.

Enforcing Maximum File Size

The maxFilesize property determines the maximum file size in megabytes. When calculating sizes, a kilobyte is taken to be 1000 bytes by default, but using the filesizeBase property you could set this to another value, for example 1024. You may need to tweak this to ensure that your client and server code calculate any limits in precisely the same way.

Restricting to Certain File Types

The acceptedFiles parameter can be used to restrict the type of file you want to accept. This should be in the form of a comma-separated list of MIME-types, although you can also use wildcards.

For example, to only accept images:

acceptedFiles: 'image/*',

Modifying the Size of the Thumbnail

By default, the thumbnail is generated at 120px by 120px; i.e., it’s square. There are a couple of ways you can modify this behavior.

The first is to use the thumbnailWidth and/or the thumbnailHeight configuration options.

If you set both thumbnailWidth and thumbnailHeight to null, the thumbnail won’t be resized at all.

If you want to completely customize the thumbnail generation behavior, you can even override the resize function.

One important point about modifying the size of the thumbnail: the dz-image class provided by the package sets the thumbnail size in the CSS, so you’ll need to modify that accordingly as well.
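
As a quick sketch (values illustrative), generating larger thumbnails that keep their aspect ratio might look like this, with the dz-image CSS updated to match:

Dropzone.options.uploadWidget = {
  thumbnailWidth: 250,   // generate 250px-wide thumbnails...
  thumbnailHeight: null  // ...letting the height follow the image's ratio
};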

Continue reading %How to Build a File Upload Form with Express and Dropzone.js%

What Is Node and When Should I Use It?

So you’ve heard of Node.js, but aren’t quite sure what it is or where it fits into your development workflow. Or maybe you’ve heard people singing Node’s praises and now you’re wondering if it’s something you need to learn. Perhaps you’re familiar with another back-end technology and want to find out what’s different about Node. […]

Continue reading %What Is Node and When Should I Use It?%

How to Build and Structure a Node.js MVC Application

In a non-trivial application, the architecture is as important as the quality of the code itself. We can have well-written pieces of code, but if we don’t have a good organization, we’ll have a hard time as the complexity increases. There’s no need to wait until the project is half-way done to start thinking about the architecture. The best time is before starting, using our goals as beacons for our choices.

Node.js doesn't have a de facto framework with strong opinions on architecture and code organization in the same way that Ruby has the Rails framework, for example. As such, it can be difficult to get started with building full web applications with Node.

In this article, we are going to build the basic functionality of a note-taking app using the MVC architecture. To accomplish this we are going to employ the Hapi.js framework for Node.js and SQLite as a database, using Sequelize.js, plus other small utilities to speed up our development. We are going to build the views using Pug, the templating language.

What is MVC?

Model-View-Controller (or MVC) is probably one of the most popular architectures for applications. As with a lot of other cool things in computer history, the MVC model was conceived at PARC for the Smalltalk language as a solution to the problem of organizing applications with graphical user interfaces. It was created for desktop applications, but since then, the idea has been adapted to other mediums including the web.

We can describe the MVC architecture in simple words:

Model: The part of our application that will deal with the database or any data-related functionality.

View: Everything the user will see. Basically the pages that we are going to send to the client.

Controller: The logic of our site, and the glue between models and views. Here we call our models to get the data, then we put that data on our views to be sent to the users.

Our application will allow us to publish, see, edit and delete plain-text notes. It won’t have other functionality, but because we will have a solid architecture already defined, we won’t have much trouble adding things later.

You can check out the final application in the accompanying GitHub repository, so you get a general overview of the application structure.

Laying out the Foundation

The first step when building any Node.js application is to create a package.json file, which is going to contain all of our dependencies and scripts. Instead of creating this file manually, npm can do the job for us using the init command:

npm init -y

After the process is complete, we'll get a package.json file ready to use.

Note: If you're not familiar with these commands, check out our Beginner's Guide to npm.

We are going to proceed to install Hapi.js—the framework of choice for this tutorial. It provides a good balance between simplicity, stability and feature availability that will work well for our use case (although there are other options that would also work just fine).

npm install --save hapi hoek

This command will download the latest version of Hapi.js and add it to our package.json file as a dependency. It will also download the Hoek utility library that will help us write shorter error handlers, among other things.

Now we can create our entry file: the web server that will start everything. Go ahead and create a server.js file in your application directory and add all the following code to it:

'use strict';

const Hapi = require('hapi');
const Hoek = require('hoek');
const Settings = require('./settings');

const server = new Hapi.Server();
server.connection({ port: Settings.port });

server.route({
  method: 'GET',
  path: '/',
  handler: (request, reply) => {
    reply('Hello, world!');
  }
});

server.start((err) => {
  Hoek.assert(!err, err);

  console.log(`Server running at: ${server.info.uri}`);
});

This is going to be the foundation of our application.

First, we indicate that we are going to use strict mode, which is a common practice when using the Hapi.js framework.

Next, we include our dependencies and instantiate a new server object where we set the connection port to 3000 (the port can be any number above 1023 and below 65535.)

Our first route for our server will work as a test to see if everything is working, so a 'Hello, world!' message is enough for us. In each route, we have to define the HTTP method and path (URL) that it will respond to, and a handler, which is a function that will process the HTTP request. The handler function can take two arguments: request and reply. The first one contains information about the HTTP call, and the second will provide us with methods to handle our response to that call.

Finally, we start our server with the server.start method. As you can see, we can use Hoek to improve our error handling, making it shorter. This is completely optional, so feel free to omit it in your code, just be sure to handle any errors.
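
For comparison, here's a sketch of the same startup code without Hoek, handling the error by hand:

server.start((err) => {
  // Without Hoek, check and rethrow the startup error manually
  if (err) {
    throw err;
  }

  console.log(`Server running at: ${server.info.uri}`);
});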

Storing Our Settings

It is good practice to store our configuration variables in a dedicated file. This file exports a JSON object containing our data, where each key is assigned from an environment variable—but without forgetting a fallback value.

In this file, we can also have different settings depending on our environment (e.g. development or production). For example, we can have an in-memory instance of SQLite for development purposes, but a real SQLite database file on production.

Selecting the settings depending on the current environment is quite simple. Since we also have an env variable in our file which will contain either development or production, we can do something like the following to get the database settings (for example):

const dbSettings = Settings[Settings.env].db;

So dbSettings will contain the setting of an in-memory database when the env variable is development, or will contain the path of a database file when the env variable is production.

Also, we can add support for a .env file, where we can store our environment variables locally for development purposes; this is accomplished using a package like dotenv for Node.js, which will read a .env file from the root of our project and automatically add the found values to the environment. You can find an example in the dotenv repository.

Note: If you decide to also use a .env file, make sure you install the package with npm install -S dotenv and add it to .gitignore so you don’t publish any sensitive information.

Our settings.js file will look like this:

// This will load our .env file and add the values to process.env,
// IMPORTANT: Omit this line if you don't want to use this functionality
require('dotenv').config({silent: true});

module.exports = {
  port: process.env.PORT || 3000,
  env: process.env.ENV || 'development',

  // Environment-dependent settings
  development: {
    db: {
      dialect: 'sqlite',
      storage: ':memory:'
    }
  },
  production: {
    db: {
      dialect: 'sqlite',
      storage: 'db/database.sqlite'
    }
  }
};

Now we can start our application by executing the following command and navigating to localhost:3000 in our web browser.

node server.js

Note: This project was tested on Node v6. If you get any errors, ensure you have an updated installation.

Defining the Routes

The definition of routes gives us an overview of the functionality supported by our application. To create our additional routes, we just have to replicate the structure of the route that we already have in our server.js file, changing the content of each one.

Let’s start by creating a new directory called lib in our project. Here we are going to include all the JS components. Inside lib, let’s create a routes.js file and add the following content:

'use strict';

module.exports = [
  // We are going to define our routes here
];

In this file, we will export an array of objects that contain each route of our application. To define the first route, add the following object to the array:

{
  method: 'GET',
  path: '/',
  handler: (request, reply) => {
    reply('All the notes will appear here');
  },
  config: {
    description: 'Gets all the notes available'
  }
},

Our first route is for the home page (/), and since it will only return information, we assign it a GET method. For now, it will only give us the message All the notes will appear here, which we’ll replace later with a controller function. The description field in the config section is only for documentation purposes.

Then, we create the four routes for our notes under the /note/ path. Since we are building a CRUD application, we will need one route for each action with the corresponding HTTP method.

Add the following definitions next to the previous route:

{
  method: 'POST',
  path: '/note',
  handler: (request, reply) => {
    reply('New note');
  },
  config: {
    description: 'Adds a new note'
  }
},
{
  method: 'GET',
  path: '/note/{slug}',
  handler: (request, reply) => {
    reply('This is a note');
  },
  config: {
    description: 'Gets the content of a note'
  }
},
{
  method: 'PUT',
  path: '/note/{slug}',
  handler: (request, reply) => {
    reply('Edit a note');
  },
  config: {
    description: 'Updates the selected note'
  }
},
{
  method: 'GET',
  path: '/note/{slug}/delete',
  handler: (request, reply) => {
    reply('This note no longer exists');
  },
  config: {
    description: 'Deletes the selected note'
  }
},

We have done the same as in the previous route definition, but this time we have changed the method to match the action we want to execute.

The only exception is the delete route. In this case, we are going to define it with the GET method rather than DELETE and add an extra /delete in the path. This way, we can call the delete action just by visiting the corresponding URL.

Continue reading %How to Build and Structure a Node.js MVC Application%


How to Build a Simple Web Server with Node.js

The following is an excerpt from the book Get Programming with Node.js, published by manning.com. You can purchase the book here at a 37% discount by using the code fccwexler.

This article is a practical introduction to using Node.js. We’re going to go over installing Node.js, learn about npm, and then we’re going to build a Node.js module and jump right in to initializing a web server. Feel free to follow along at home as you read!

Installing Node.js

Node.js is growing in popularity and support. Because of this, new versions to download are being deployed quite frequently, and it’s important to stay up to date with the latest versions to see how they may benefit or otherwise impact the applications you’re building. At the time of writing, the version of Node.js to download is 7.6 or greater.

NOTE: The release of Node.js 7.6 comes with support for ES6 syntax. ES6 (ECMAScript 2015) is a recent update to JavaScript, with syntax improvements for defining variables, functions, and OOP code altogether. To keep up with updates to JavaScript, download the latest stable version of Node.js as your development progresses.

There are a couple of ways to download and install Node.js, all of which are listed on the Node.js main site.

Because Node.js is platform-independent, you can download and install it on macOS, Windows, or Linux and expect full functionality.

The simplest way to install Node.js is to go to the download link and follow the instructions and prompts to download the installer for the latest version of Node.js.

NOTE: When you install Node.js, you also get npm, the Node.js ecosystem of external libraries (multiple files of code other people wrote) that can be imported into your future projects. You’ll learn more about npm in the next section.

Figure 1. Node.js installer page

When the installer file is downloaded, double-click the file from your browser’s download panel or from your computer’s download folder. The installer will open a new window that looks like Figure 1 and write all necessary files and core Node.js libraries to your system. You may be asked to accept licensing agreements or give the installer permission to install Node.js onto your computer. Follow the prompts to click through the installation.

Figure 2. Node.js writing to your machine

Terminal and your PATH

You’ll be working mostly in your computer’s terminal, which is built-in software used to navigate and run commands on your computer without a graphical interface. This book teaches using the Unix terminal (Bash) commands. Those of you who are Windows users can follow along using Window’s CMD terminal window (you may need to look up command-equivalents throughout the book). You can reference this table comparing Windows and Unix commands. To make things easier on Windows, you can download and install an additional Bash terminal called GitBash from git-scm.com.

Make a note of where your version of Node.js and npm are installed on your machine. This information is shown in the final window in the installer. The installer attempts to add these directory locations to your system’s PATH.

Your computer’s PATH variable is the first place the terminal will look for resources used in development. Think of it like your computer’s index for quickly finding the tools you need. By adding these tools’ original file path or directory locations to the PATH variable, terminal won’t have any problems finding them. If you experience any problems starting Node.js in your terminal, follow the installation steps here.

Making sure everything's installed correctly

Now that you have Node.js installed, let’s use terminal to make sure everything is installed correctly. Open terminal (or GitBash) and type the following command at the prompt: node -v.

The output of this command should show you the version of Node.js you’ve just installed. Similarly, you can check the version of npm that you’ve installed by running the command npm -v at the command prompt.

NOTE: If your terminal responds with an error or with nothing at all, it’s possible that your installation of Node.js was not successful. In the case of an error, try copying and pasting that error into a search engine to look for common solutions or simply try repeating the installation process.

Now that you have Node.js installed and your terminal running, you need somewhere to write your code. Although text editors come in many different forms and can be used to make non-code files as well, text editors designed specifically for developers often come prepackaged with helpful tools and plugins. I recommend installing the Atom text editor, which you can download at atom.io.

TIP: If you ever forget where you installed Node.js or npm, you can open a command window and type either which node or which npm at the prompt to see the corresponding location. From a Windows command-line prompt, use where in place of which.

Planning Your App

Imagine that you want to build an application for your city’s community-supported agriculture (CSA) club. Through this application, users could subscribe to receive food from local farms and distributors. The application ensures that your community gets healthy food and stays connected. You plan to use Node.js to build this web application and you want to start by verifying users’ zip codes to see if they live close enough for delivery. The question is: will you need to build your own tool to make this possible?

Luckily for us, the answer is no: npm can be used to install Node.js packages, libraries of code others have written that you can use to add specific features to your application. In fact, there’s a package for verifying locations based on zip codes. We’ll take a closer look at that package and how to install it in a little bit.

Creating a Node.js Module

A Node.js application is ultimately made up of many JavaScript files. For your application to stay organized and efficient, these files need to have access to each other’s contents when necessary. Each file, whose code is collectively related, is called a module. Let’s look at our app again and add some positive messages to it. You can create a file called messages.js with the following code:

let messages = ["You are great!", "You can accomplish anything!", "Success is in your future!"];

Keeping these messages separate from the code you’ll write to display them will make your code more organized. To manage these messages in another file, you need to change the let variable definition to use the exports object, like so:

exports.messages = ["You are great!", "You can accomplish anything!", "Success is in your future!"];

Just like other JavaScript objects, you are adding a messages property on the Node.js exports object, which can be shared between modules.

NOTE: The exports object is actually a property of the module object. module is both the name of the code files in Node.js and one of its global objects. Using exports is essentially a shorthand for module.exports.

The module is ready to be required (imported) by another JavaScript file. You can test this by creating another file called printMessages.js, whose purpose is to loop through the messages and log them to your console with the code in listing 1. First, require the local module by using the require object and the module’s filename (with or without a .js extension). Then, refer to the module’s array by the variable set up in printMessages.js.

Listing 1. log messages to console in printMessages.js

const messageModule = require('./messages');  // 1

messageModule.messages.forEach((m) => {       // 2
  console.log(m);
});
  1. Require the local messages.js module.
  2. Refer to the module’s array through messageModule.messages.

require is another Node.js global object used to locally introduce methods and objects from other modules. Node.js interprets require('./messages'); to look for a module called messages.js within your project directory and allow code within printMessages.js to use any properties added to the exports object.

Next, we’ll use npm, another tool for adding modules to your project.

Running npm Commands

With your installation of Node.js, you also got Node Package Manager (npm). As the name suggests, npm is responsible for managing the external packages (modules others have built and made available online) in your application. Throughout application development, npm will be used to install, remove, and modify these packages. Entering npm -l in your terminal brings up a list of npm commands with brief explanations.

Listing 2 contains a few npm commands that you’ll want to know about.

Listing 2. Npm commands to know

  • npm init. Initializes a Node.js application and creates a package.json file
  • npm install <package>. Installs a Node.js package.
  • npm publish. Saves and uploads a package you built to the npm package community.
  • npm start. Runs your Node.js application (provided the package.json file is set up to use this command). npm stop will quit the running application.

When using npm install <package>, appending --save to your command installs the package as a dependency for your application. Appending --global installs the package globally on your computer to be used anywhere within terminal. These command extensions, called flags, have the shorthand forms -S and -g, respectively. npm uninstall <package> reverses the install action. Should a project call for it, npm install express -S can be used to install the Express.js framework, and npm install express-generator -g to install the Express.js generator for use as a command-line tool.

Continue reading %How to Build a Simple Web Server with Node.js%

MEAN Stack: Build an App with Angular 2+ and the Angular CLI

The MEAN stack comprises advanced technologies used to develop both the server-side and the client-side of a web application in a JavaScript environment. The components of the MEAN stack include the MongoDB database, Express.js (a web framework), Angular (a front-end framework), and the Node.js runtime environment. Taking control of the MEAN stack and familiarizing yourself with the different JavaScript technologies along the way will help you become a full-stack JavaScript developer.

JavaScript’s sphere of influence has dramatically grown over the years and with that growth, there’s an ongoing desire to keep up with the latest trends in programming. New technologies have emerged and existing technologies have been rewritten from the ground up (I’m looking at you, Angular).

This tutorial intends to create the MEAN application from scratch and serve as an update to the original MEAN stack tutorial. If you’re familiar with MEAN and want to get started with the coding, you can skip to the overview section.

Introduction to the MEAN Stack

Node.js - Node.js is a server-side runtime environment built on top of Chrome's V8 JavaScript engine. Node.js is based on an event-driven architecture that runs on a single thread with non-blocking IO. These design choices allow you to build real-time web applications in JavaScript that scale well.

Express.js - Express is a minimalistic yet robust web application framework for Node.js. Express.js uses middleware functions to handle HTTP requests and then either return a response or pass on the parameters to another middleware. Application-level, router-level, and error-handling middlewares are available in Express.js.

MongoDB - MongoDB is a document-oriented database program where the documents are stored in a flexible JSON-like format. Being a NoSQL database program, MongoDB relieves you of the tabular constraints of a relational database.

Angular - Angular is an application framework developed by Google for building interactive Single Page Applications. Angular, originally AngularJS, was rewritten from scratch to shift to a component-based architecture from the age-old MVC framework. Angular recommends the use of TypeScript which, in my opinion, is a good idea because it enhances the development workflow.

Now that we’re acquainted with the pieces of the MEAN puzzle, let’s see how we can fit them together, shall we?

Overview

Here’s a high-level overview of our application.

High-level overview of our MEAN stack application

We’ll be building an Awesome Bucket List Application from the ground up without using any boilerplate template. The front end will include a form that accepts your bucket list items and a view that updates and renders the whole bucket list in real time.

Any update to the view will be interpreted as an event and this will initiate an HTTP request. The server will process the request, update/fetch the MongoDB if necessary, and then return a JSON object. The front end will use this to update our view. By the end of this tutorial, you should have a bucket list application that looks like this.

Screenshot of the bucket list application that we are going to build

The entire code for the Bucket List application is available on GitHub.

Prerequisites

First things first, you need to have Node.js and MongoDB installed to get started. If you’re entirely new to Node, I would recommend reading the Beginner’s Guide to Node to get things rolling. Likewise, setting up MongoDB is easy and you can check out their documentation for installation instructions specific to your platform.

$ node -v
# v8.0.0

Start the mongo daemon service using the following command:

sudo service mongod start

To install the latest version of Angular, I would recommend using Angular CLI. It offers everything you need to build and deploy your Angular application. If you’re not familiar with the Angular CLI yet, make sure you check out The Ultimate Angular CLI Reference.

npm install -g @angular/cli

Create a new directory for our bucket list project. That’s where both the front-end and the back-end code will go.

mkdir awesome-bucketlist
cd awesome-bucketlist

Creating the Backend Using Express.js and MongoDB

Express doesn’t impose any structural constraints on your web application. You can place the entire application code in a single file and get it to work, theoretically. However, your codebase would be a complete mess. Instead, we’re going to do this the MVC way (Model, View, and Controller)—minus the view part.

MVC is an architectural pattern that separates your models (the back end) and views (the UI) from the controller (everything in between), hence MVC. Since Angular will take care of the front end for us, we’ll have three directories: one for models, another for controllers, and a public directory where we’ll place the compiled Angular code.

In addition to this, we’ll create an app.js file that will serve as the entry point for running the Express server.

Directory structure of MEAN stack

Although using a model and controller architecture to build something trivial like our bucket list application might seem essentially unnecessary, this will be helpful in building apps that are easier to maintain and refactor.

Initializing npm

We’re missing a package.json file for our back end. Type in npm init and, after you’ve answered the questions, you should have a package.json made for you.

We’ll declare our dependencies inside the package.json file. For this project we’ll need the following modules:

  • express: Express module for the web server
  • mongoose: A popular library for MongoDB
  • body-parser: Parses the body of the incoming requests and makes it available under req.body
  • cors: CORS middleware enables cross-origin access control to our web server.

I’ve also added a start script so that we can start our server using npm start. (In the dependencies below, the ~ before each version matches the most recent minor version, without any breaking changes.)

{
  "name": "awesome-bucketlist",
  "version": "1.0.0",
  "description": "A simple bucketlist app using MEAN stack",
  "main": "app.js",
  "scripts": {
    "start": "node app"
  },

  "dependencies": {
    "express": "~4.15.3",
    "mongoose": "~4.11.0",
    "cors": "~2.8.3",
    "body-parser": "~1.17.2"
  },

  "author": "",
  "license": "ISC"
}

Now run npm install and that should take care of installing the dependencies.

Filling in app.js

First, we require all of the dependencies that we installed in the previous step.

// We’ll declare all our dependencies here
const express = require('express');
const path = require('path');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');

//Initialize our app variable
const app = express();

//Declaring Port
const port = 3000;

As you can see, we’ve also initialized the app variable and declared the port number. The app object gets instantiated on the creation of the Express web server. We can now load middleware into our Express server by specifying them with app.use().

//Middleware for CORS
app.use(cors());

//Middleware for bodyparsing using both json and urlencoding
app.use(bodyParser.urlencoded({extended:true}));
app.use(bodyParser.json());

/*express.static is a built-in middleware function to serve static files.
 We are telling the Express server that the public folder is the place to look for static files
*/
app.use(express.static(path.join(__dirname, 'public')));

The app object can understand routes too.

app.get('/', (req,res) => {
    res.send("Invalid page");
})

Here, the get method invoked on the app corresponds to the GET HTTP method. It takes two parameters, the first being the path or route to which the middleware function applies.

The second is the actual middleware itself, and it typically takes three arguments: the req argument corresponds to the HTTP Request; the res argument corresponds to the HTTP Response; and next is an optional callback argument that should be invoked if there are other subsequent middlewares that follow this one. We haven’t used next here since the res.send() ends the request–response cycle.
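To see next in action, here’s a small hypothetical example (not part of our app) in which a logging middleware passes control on to the actual route handler:

app.get('/demo', (req, res, next) => {
    //Log the request, then hand over to the next middleware in the chain
    console.log(`${req.method} request received at ${req.url}`);
    next();
}, (req, res) => {
    res.send("Handled by the second middleware");
});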

Add this line towards the end to make our app listen to the port that we had declared earlier.

//Listen to port 3000
app.listen(port, () => {
    console.log(`Starting the server at port ${port}`);
});

npm start should get our basic server up and running.

By default, npm doesn’t monitor your files/directories for any changes, and you have to manually restart the server every time you’ve updated your code. I recommend using nodemon to monitor your files and automatically restart the server when any changes are detected. If you don't explicitly state which script to run, nodemon will run the file associated with the main property in your package.json.

npm install -g nodemon
nodemon
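Alternatively, if you’d rather not install nodemon globally, you could save it as a dev dependency and add a dev script to package.json (a matter of preference, not something this tutorial requires):

npm install nodemon --save-dev

Then, in package.json:

"scripts": {
  "start": "node app",
  "dev": "nodemon app"
}

Now npm run dev will start the server and restart it on every change.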

We’re nearly done with our app.js file. What’s left to do? We need to

  1. connect our server to the database
  2. create a controller, which we can then import into our app.js.

Setting up mongoose

Setting up and connecting a database is straightforward with MongoDB. First, create a config directory and a file named database.js to store our configuration data. Export the database URI using module.exports.

// 27017 is the default port number.
module.exports = {
    database: 'mongodb://localhost:27017/bucketlist'
}
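As an aside, if you ever deploy this app, you won’t want the URI hard-coded. A small variation (the MONGODB_URI variable name is my own choice, not part of this tutorial) keeps the local database as a fallback:

// Use the MONGODB_URI environment variable when available,
// falling back to the local database otherwise
module.exports = {
    database: process.env.MONGODB_URI || 'mongodb://localhost:27017/bucketlist'
}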

And establish a connection with the database in app.js using mongoose.connect().

// Connect mongoose to our database
const config = require('./config/database');
mongoose.connect(config.database);

“But what about creating the bucket list database?”, you may ask. The database will be created automatically when you insert a document into a new collection on that database.
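Note that mongoose.connect() doesn’t tell us on its own whether the connection succeeded. If you’d like some feedback, one option is to listen to Mongoose’s connection events, along these lines:

//Log a message once the connection is established
mongoose.connection.on('connected', () => {
    console.log(`Connected to database ${config.database}`);
});

//Log any connection error
mongoose.connection.on('error', (err) => {
    console.log(`Database error: ${err}`);
});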

Working on the controller and the model

Now let’s move on to creating our bucket list controller. Create a bucketlist.js file inside the controllers directory. We also need to route all the /bucketlist requests to our bucketlist controller (in app.js).

const bucketlist = require('./controllers/bucketlist');

//Routing all HTTP requests to /bucketlist to bucketlist controller
app.use('/bucketlist',bucketlist);

Here’s the final version of our app.js file.

// We’ll declare all our dependencies here
const express = require('express');
const path = require('path');
const bodyParser = require('body-parser');
const cors = require('cors');
const mongoose = require('mongoose');
const config = require('./config/database');
const bucketlist = require('./controllers/bucketlist');

//Connect mongoose to our database
mongoose.connect(config.database);

//Declaring Port
const port = 3000;

//Initialize our app variable
const app = express();

//Middleware for CORS
app.use(cors());

//Middleware for bodyparsing using both json and urlencoding
app.use(bodyParser.urlencoded({extended:true}));
app.use(bodyParser.json());

/*express.static is a built-in middleware function for serving static files.
  Here we're telling the Express server that the public folder is the place
  to look for static files.
*/
app.use(express.static(path.join(__dirname, 'public')));


app.get('/', (req,res) => {
    res.send("Invalid page");
})


//Routing all HTTP requests to /bucketlist to bucketlist controller
app.use('/bucketlist',bucketlist);


//Listen to port 3000
app.listen(port, () => {
    console.log(`Starting the server at port ${port}`);
});

As previously highlighted in the overview, our awesome bucket list app will have routes to handle HTTP requests with GET, POST, and DELETE methods. Here’s a bare-bones controller with routes defined for the GET, POST, and DELETE methods.

//Require the express package and use express.Router()
const express = require('express');
const router = express.Router();

//GET HTTP method to /bucketlist
router.get('/', (req, res) => {
    res.send("GET");
});

//POST HTTP method to /bucketlist
router.post('/', (req, res, next) => {
    res.send("POST");
});

//DELETE HTTP method to /bucketlist. Here we pass in a route parameter, which is the object id.
router.delete('/:id', (req, res, next) => {
    res.send("DELETE");
});

module.exports = router;

I’d recommend using the Postman app or something similar to test your server API. Postman provides a powerful GUI for making API development faster and easier. Try a GET request on http://localhost:3000/bucketlist and see whether you get the intended response.
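If you prefer the command line, curl will do the same job. Something like the following should print the placeholder response from our GET route:

curl http://localhost:3000/bucketlist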

And as obvious as it seems, our application lacks a model. At the moment, our app doesn’t have a mechanism to send data to and retrieve data from our database.

Create a List.js model inside the models directory and define the bucket list Schema as follows:

//Require mongoose package
const mongoose = require('mongoose');

//Define BucketlistSchema with title, description and category
const BucketlistSchema = new mongoose.Schema({
    title: {
        type: String,
        required: true
    },
    description: String,
    category: {
        type: String,
        required: true,
        enum: ['High', 'Medium', 'Low']
    }
});

When working with mongoose, you have to first define a Schema. We have defined a BucketlistSchema with three different keys (title, category, and description). Each key and its associated SchemaType defines a property in our MongoDB document. If you’re wondering about the lack of an id field, it's because we’ll be using the default _id that will be created by Mongoose.

Mongoose assigns each of your schemas an _id field by default if one is not passed into the Schema constructor. The type assigned is an ObjectId to coincide with MongoDB's default behavior.

You can read more about it in the Mongoose documentation.
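To make this concrete, a saved document would look roughly like this (a hypothetical example; the _id value is made up):

{
    "_id": "5a1e7c3b9d2f4a0012345678",
    "title": "Learn to surf",
    "description": "Catch a wave before turning 40",
    "category": "High"
}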

However, to use our Schema definition, we need to convert our BucketlistSchema to a model and export it using module.exports. The first argument of mongoose.model is the singular name of your model: Mongoose looks for its lowercased, pluralized form ('bucketlists' in our case) as the collection that stores the data in MongoDB.

const BucketList = module.exports = mongoose.model('BucketList', BucketlistSchema );

Apart from the schema, we can also host database queries inside our BucketList model and export them as methods.

//BucketList.find() returns all the lists
module.exports.getAllLists = (callback) => {
    BucketList.find(callback);
}

Here we invoke the BucketList.find method which queries the database and returns the BucketList collection. Since a callback function is used, the result will be passed over to the callback.

Let’s fill in the middleware corresponding to the GET method to see how this fits together.

const bucketlist = require('../models/List');

//GET HTTP method to /bucketlist
router.get('/', (req, res) => {
    bucketlist.getAllLists((err, lists) => {
        if(err) {
            res.json({success: false, message: `Failed to load all lists. Error: ${err}`});
        }
        else {
            res.write(JSON.stringify({success: true, lists: lists}, null, 2));
            res.end();
        }
    });
});

We’ve invoked the getAllLists method and the callback takes two arguments, error and result.

All callbacks in Mongoose use the pattern: callback(error, result). If an error occurs executing the query, the error parameter will contain an error document, and result will be null. If the query is successful, the error parameter will be null, and the result will be populated with the results of the query.

-- Mongoose Documentation

Similarly, let’s add the methods for inserting a new list and deleting an existing list from our model.

//newList.save is used to insert the document into MongoDB
module.exports.addList = (newList, callback) => {
    newList.save(callback);
}

//Here we need to pass an id parameter to BucketList.remove
module.exports.deleteListById = (id, callback) => {
    let query = {_id: id};
    BucketList.remove(query, callback);
}

We now need to update our controller’s middleware for POST and DELETE also.

//POST HTTP method to /bucketlist
router.post('/', (req, res, next) => {
    let newList = new bucketlist({
        title: req.body.title,
        description: req.body.description,
        category: req.body.category
    });
    bucketlist.addList(newList, (err, list) => {
        if(err) {
            res.json({success: false, message: `Failed to create a new list. Error: ${err}`});
        }
        else {
            res.json({success: true, message: "Added successfully."});
        }
    });
});

//DELETE HTTP method to /bucketlist. Here we pass in a route parameter, which is the object id.
router.delete('/:id', (req, res, next) => {
    //Access the parameter, which is the id of the item to be deleted
    let id = req.params.id;
    //Call the model method deleteListById
    bucketlist.deleteListById(id, (err, list) => {
        if(err) {
            res.json({success: false, message: `Failed to delete the list. Error: ${err}`});
        }
        else if(list) {
            res.json({success: true, message: "Deleted successfully"});
        }
        else {
            res.json({success: false});
        }
    });
});

With this, we have a working server API that lets us create, view, and delete the bucket list. You can confirm that everything is working as intended by using Postman.

POST HTTP request using Postman
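curl works here too, if you’d rather stay in the terminal. These hypothetical requests create an item, list the collection so you can grab its _id, and then delete it (the _id below is a placeholder):

# Create a new list item
curl -X POST -H "Content-Type: application/json" -d '{"title":"Learn to surf","description":"Catch a wave before turning 40","category":"High"}' http://localhost:3000/bucketlist

# Fetch all items and note the _id of the one to delete
curl http://localhost:3000/bucketlist

# Delete an item by its _id (placeholder value)
curl -X DELETE http://localhost:3000/bucketlist/5a1e7c3b9d2f4a0012345678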

We’ll now move on to the front end of the application using Angular.

Continue reading %MEAN Stack: Build an App with Angular 2+ and the Angular CLI%

Installing Multiple Versions of Node.js Using nvm


When developing Node.js applications, you might face situations where you need to install multiple versions of Node. This can happen when you have multiple projects and they have different requirements, or you have a deployable application which must be compatible with different Node versions. Without a good tool, this would mean a lot of work and effort to install the different versions manually, and basing a project on a specific version. Fortunately, there’s a better way!

Introducing nvm

nvm stands for Node Version Manager. As the name suggests, it helps you manage and switch between different Node versions with ease. It provides a command line interface where you can install different versions with a single command, set a default, switch between them and much more.

OS Support

nvm supports both Linux and macOS, but that's not to say that Windows users have to miss out. There’s a second project named nvm-windows which offers Windows users the possibility of easily managing Node environments. Despite the name, nvm-windows is not a clone of nvm, nor is it affiliated with it. However, the basic commands listed below (for installing, listing and switching between versions) should work for both nvm and nvm-windows.

Installation

Let’s first cover installation for Windows, macOS and Linux.

Windows

First, make sure you uninstall any Node.js version you might have on your system, as they can collide with the installation. After this, download the latest stable installer. Run the executable installer, follow the steps provided and you're good to go!

macOS/Linux

Removing previous Node installations is optional, although it’s advised you do so. There are plenty of good online resources for how you might do this (macOS, Linux). It’s also good if you remove any npm installation you might have, since it might collide with nvm's installation. You'll also need to have a C++ compiler installed on your system. For macOS, you can install the Xcode command line tools. You can do this by running the following command:

xcode-select --install

On Linux, you can install the build-essential package by running the following (assumes apt):

sudo apt-get update
sudo apt-get install build-essential

With the required C++ compiler in place, you can then install nvm using cURL or Wget. In your terminal, run the following:

With cURL:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash

Or with Wget:

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash

Note that the version number (v0.33.8) will change as the project develops, so it’s worth checking the relevant section of project's home page to find the most recent version.

This will clone the nvm repository to ~/.nvm and will make the required changes to your bash profile, so that nvm is available from anywhere in your terminal.

That's it, nvm is installed and ready to be used.

Using nvm

If installed correctly, the nvm command is available anywhere in your terminal. Let’s see how to use it to manage Node.js versions.

Install Multiple Versions of Node.js

One of the most important parts of nvm is, of course, installing different versions of Node.js. For this, nvm provides the nvm install command. You can install specific versions by running this command followed by the version you want. For example:

nvm install 8.9.4

By running the above in a terminal, nvm will install Node.js version 8.9.4. nvm follows SemVer, so if you want to install, for example, the latest 8.9 patch, you can do it by running:

nvm install 8.9

nvm will then install Node.js version 8.9.X, where X is the highest available version. At the time of writing, this is 4, so you'll have the 8.9.4 version installed on your system. You can see the full list of available versions by running:

nvm ls-remote

For nvm-windows, this is:

nvm ls available
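As a quick taste of the switching and defaults mentioned above, here are a few of nvm's documented commands:

# Switch the current shell to a specific installed version
nvm use 8.9.4

# Make that version the default for new shells
nvm alias default 8.9.4

# Confirm which version is active
node -v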

Continue reading %Installing Multiple Versions of Node.js Using nvm%

An Introduction to NodeBots


Starter Kits

  • SparkFun Inventor's Kit - This is the kit that started it all for me years ago! It comes with a range of standard components like colored LED lights, sensors, buttons, a motor, a tiny speaker and more. It also comes with a guide and sample projects you can use to build your skills. You can find it here - SparkFun Inventor's Kit.
  • Freetronics Experimenter's Kit for Arduino - This kit is by an Australian-based company called Freetronics. It has very similar components to the SparkFun one, with a few small differences. It also has its own guide with sample projects to try as well. For those based in Australia, these kits and other Freetronics parts are available at Jaycar. You can also order it online here: Freetronics Experimenter's Kit.
  • Seeed Studio ARDX starter kit - Seeed Studio have their own starter kit too which is also very similar to the SparkFun and Freetronics ones. It has its own guide and such too! You can find it here - ARDX - The starter kit for Arduino.
  • Adafruit ARDX Experimentation Kit for Arduino - This kit is also very similar to the ones above with its own guide. You can find it here - Adafruit ARDX Experimentation Kit for Arduino.
  • Arduino Starter Kit - The guys at Arduino.cc have their own official kit that is available too. The starter kit is similar to the ones above but has some interesting sample projects like a "Love-O-Meter". You can find it here and often at other resellers too - Arduino Starter Kit.

With all of the above kits, keep in mind that none of them are targeted towards NodeBot development. So the examples in booklets and such are written in the simplified C++ code that Arduino uses. For examples using Node, see the resources below.

Resources To Learn NodeBots

There are a few key spots where you can learn how to put together various NodeBot projects on the web. Here are a few recommendations:

  • Controlling an Arduino with Node.js and Johnny-Five - This is a free SitePoint screencast I recorded a little while ago that introduces the basics of connecting up an Arduino to Node.js and using the framework to turn an LED light on and off.
  • Arduino Experimenter's Guide for NodeJS - An adaptation by Anna Gerber and other members of the NodeBots community from the SparkFun version of .:oomlout:.'s ARDX Guide. It shows how to do many of the examples from the kits mentioned above in Node instead of the simplified C++ code from Arduino.
  • The official Johnny-Five website - Recently, the Johnny-Five framework had a whole new website released that has great documentation on how to use the framework on Arduino and other platforms too!
  • Make: JavaScript Robotics Book - A new book released by Rick Waldron and others in the NodeBot community that provides a range of JS projects using various devices. Great for those who've got the absolute basics down and want to explore some new projects!
  • NodeBots Official Site - Check this page out if you're looking for a local NodeBots meetup near you, or to read more about NodeBots in general.
  • NodeBots - The Rise of JS Robotics - A great post by Chris Williams on how NodeBots came to be. It is a good read for those interested.

The SimpleBot

Andrew Fisher, a fellow Aussie NodeBot enthusiast, put together a rather simple project for people to build for their first NodeBot experience. It is called a "SimpleBot" and lives up to its name. It is a NodeBot that you can typically build in a single day. If you're keen on getting an actual robot up and running, rather than just a basic set of sensors and lights going on and off, this is a great project choice to start with. It comes available to Aussie attendees of NodeBots Day in one of the ticket types for this very reason! It is a bot with wheels and an ultrasonic sensor to detect if it's about to run into things. Here's what my own finished version looks like that I've prepared as a sample for NodeBots Day this year:

A SimpleBot

A list of SimpleBot materials needed and some sample Node.js code is available at the SimpleBot GitHub repo. Andrew also has a YouTube video showing how to put the SimpleBot together.

Andrew also collaborated with the team at Freetronics to put together a SimpleBot Arduino shield that might also be useful to people who'd like to give it a go as a learning project without needing to solder anything: SimpleBot Shield Kit.

Conclusion

That concludes a simple introduction into the world of NodeBots! If you're interested in getting involved, you've got all the info you should need to begin your NodeBot experience. I'll be organising the International NodeBots Day event in Sydney, so if you're a Sydneysider, grab a ticket and come along - International NodeBots Day Sydney, July 25.

If you build yourself a pretty neat NodeBot with any of the above resources, leave a note in the comments or get in touch with me on Twitter (@thatpatrickguy), I'd love to check out your JavaScript powered robot!

Continue reading %An Introduction to NodeBots%

How to Use SSL/TLS with Node.js


It’s time! No more procrastination and poor excuses: Let’s Encrypt works beautifully, and having an SSL-secured site is easier than ever.

In this article, I’ll work through a practical example of how to add a Let’s Encrypt-generated certificate to your Express.js server.

But protecting our sites and apps with HTTPS isn’t enough. We should also demand encrypted connections from the servers we’re talking to. We’ll see that possibilities exist to activate the SSL/TLS layer even if it wouldn’t be enabled by default.

Let’s start with a short review of HTTPS’s current state.

HTTPS Everywhere

The HTTP/2 specification was published as RFC 7540 in May 2015, which means at this point it’s a part of the standard. This was a major milestone. Now we can all upgrade our servers to use HTTP/2. One of the most important aspects is the backwards compatibility with HTTP 1.1 and the negotiation mechanism to choose a different protocol. Although the standard doesn’t specify mandatory encryption, currently no browser supports HTTP/2 unencrypted. This gives HTTPS another boost. Finally we’ll get HTTPS everywhere!

What does our stack actually look like? From the perspective of a website running in the browser (application level) we have roughly the following layers to reach the IP level:

  1. Client browser
  2. HTTP
  3. SSL/TLS
  4. TCP
  5. IP

HTTPS is nothing more than the HTTP protocol on top of SSL/TLS, so all the rules of HTTP still apply. What does this additional layer actually give us? There are multiple advantages: we get authentication through keys and certificates; privacy and confidentiality are provided, since asymmetric cryptography secures the key exchange and the connection is then encrypted; and data integrity is preserved, meaning transmitted data can't be altered in transit.

One of the most common myths is that using SSL/TLS requires too many resources and slows down the server. This is certainly not true anymore. We also don’t need any specialized hardware with cryptography units. Even for Google, the SSL/TLS layer accounts for less than 1% of the CPU load. Furthermore, the network overhead of HTTPS as compared to HTTP is below 2%. All in all, it wouldn’t make sense to forgo HTTPS for having a little bit of overhead.

Is TLS Fast Yet?

The most recent version is TLS 1.3. TLS is the successor to SSL, whose final release was SSL 3.0. The changes from SSL to TLS preclude interoperability, but the basic procedure is unchanged. We have three different encrypted channels: the first is a public key infrastructure for certificate chains; the second provides public key cryptography for key exchanges; and the third uses symmetric cryptography for data transfers.

TLS 1.3 uses hashing for some important operations. Theoretically, it's possible to use any hashing algorithm, but it's highly recommended to use SHA-2 or a stronger algorithm. SHA-1 was the standard for a long time, but has recently been deprecated.

HTTPS is also gaining more attention for clients. Privacy and security concerns have always been around, but with the growing amount of online accessible data and services, people are getting more and more concerned. A useful browser plugin is “HTTPS Everywhere”, which encrypts our communications with most websites.

HTTPS-everywhere

The creators realized that many websites offer HTTPS only partially. The plugin allows us to rewrite requests for those sites, which offer only partial HTTPS support. Alternatively, we can also block HTTP altogether (see the screenshot above).

Basic Communication

The certificate’s validation process involves validating the certificate signature and expiration, verifying that it chains to a trusted root, and checking whether it has been revoked. There are dedicated trusted authorities in the world that grant certificates. If one of these were to be compromised, all certificates issued by that authority would have to be revoked.

The sequence diagram for a HTTPS handshake looks as follows. We start with the init from the client, which is followed by a message with the certificate and key exchange. After the server sends its completed package, the client can start the key exchange and cipher specification transmission. At this point, the client is finished. Finally the server confirms the cipher specification selection and closes the handshake.

HTTPS-sequence

The whole sequence is triggered independently of HTTP. If we decide to use HTTPS, only the socket handling is changed. The client is still issuing HTTP requests, but the socket will perform the previously described handshake and encrypt the content (header and body).
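If you’d like to watch this handshake happen, the openssl command-line tool can show the certificate chain and the negotiated cipher for any HTTPS server (using sitepoint.com here purely as an example):

openssl s_client -connect sitepoint.com:443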

So what do we need to make SSL/TLS work with an Express.js server?

HTTPS

By default, Node.js serves content over HTTP. But there’s also an HTTPS module which we have to use in order to communicate over a secure channel with the client. This is a built-in module, and the usage is very similar to how we use the HTTP module:

const express = require("express"),
  https = require("https"),
  fs = require("fs");

//Read the private key and certificate chain from disk
const options = {
  key: fs.readFileSync("/srv/www/keys/my-site-key.pem"),
  cert: fs.readFileSync("/srv/www/keys/chain.pem")
};

const app = express();

app.use((req, res) => {
  res.writeHead(200);
  res.end("hello world\n");
});

//Serve the app over plain HTTP on port 8000...
app.listen(8000);

//...and over HTTPS on port 8080
https.createServer(options, app).listen(8080);

Ignore the /srv/www/keys/my-site-key.pem and /srv/www/keys/chain.pem files for now. Those are the SSL certificates we need to generate, which we’ll do a bit later. This is the part that changed with Let’s Encrypt. Previously, we had to generate a private/public key pair, send it to a trusted authority, pay them and probably wait a bit in order to get an SSL certificate. Nowadays, Let’s Encrypt generates and validates your certificates for free and instantly!

Generating Certificates

Certbot

A certificate signed by a trusted certificate authority (CA) is demanded by the TLS specification. The CA ensures that the certificate holder is really who they claim to be. So basically, when you see the green lock icon (or any other greenish sign to the left of the URL in your browser), it means that the server you’re communicating with really is who it claims to be. If you’re on facebook.com and you see a green lock, it’s almost certain you really are communicating with Facebook and no one else can see your communication — or rather, no one else can read it.

It’s worth noting that this certificate doesn’t necessarily have to be verified by an authority such as Let’s Encrypt. There are other paid services as well. You can technically sign it yourself, but then the users visiting your site won’t get an approval from the CA when visiting and all modern browsers will show a big warning flag to the user and ask to be redirected “to safety”.

In the following example, we’ll use the Certbot, which is used to generate and manage certificates with Let’s Encrypt.

On the Certbot site you can find instructions on how to install Certbot on your OS. Here we’ll follow the macOS instructions. In order to install Certbot, run

brew install certbot

Webroot

Webroot is a Certbot plugin that, in addition to Certbot’s default functionality (automatically generating your public/private key pair and an SSL certificate for them), copies the certificates to your webroot folder and verifies your server by placing verification codes in a hidden temporary directory named .well-known. The plugin is installed with Certbot by default, and we’ll use it to skip doing some of these steps manually. To generate and verify our certificates, run the following:

certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com

You may have to run this command as sudo, as it will try to write to /var/log/letsencrypt.

You’ll also be asked for your email address. It’s a good idea to put in a real address you use often, as you’ll get a notification when your certificate is about to expire. The trade-off for Let’s Encrypt certificates being free is that they expire every three months. Luckily, renewal is as easy as running one simple command, which we can assign to a cron job and then not have to worry about expiration. Additionally, it’s good security practice to renew SSL certificates regularly, as it gives attackers less time to break the encryption. Some developers even set up this cron job to run daily, which is completely fine and even recommended.
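A minimal crontab entry for this might look as follows (the schedule and log path are my own choices; adjust to taste):

# Attempt renewal every day at 02:30. certbot only renews certificates
# that are close to expiry, so running this daily is safe.
30 2 * * * certbot renew --quiet >> /var/log/certbot-renew.log 2>&1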

Keep in mind that you have to run this command on a server to which the domain specified under the -d (for domain) flag resolves — that is, your production server. Even if you have the DNS resolution in your local hosts file, this won’t work, as the domain will be verified from outside. So if you’re doing this locally, it will most likely not work at all, unless you opened up a port from your local machine to the outside and have it running behind a domain name which resolves to your machine, which is a highly unlikely scenario.

Last but not least, after running this command, the output will contain paths to your private key and certificate files. Copy these values into the previous code snippet, into the cert property for certificate and key property for the key.

// ...

const options = {
  key: fs.readFileSync("/var/www/example/sslcert/privkey.pem"),
  cert: fs.readFileSync("/var/www/example/sslcert/fullchain.pem") // these paths might differ for you, make sure to copy from the certbot output
};

// ...
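While this article doesn’t cover it, a common companion to the HTTPS server is a small plain-HTTP listener (replacing the plain app.listen(8000) from the earlier snippet) whose only job is to redirect visitors to the secure URL. A minimal sketch, with the port numbers matching the earlier snippet (in production you’d listen on 80 and redirect to 443):

const http = require("http");

//Redirect all plain HTTP traffic to its HTTPS equivalent
http.createServer((req, res) => {
  const host = req.headers.host.replace(/:\d+$/, "");
  res.writeHead(301, { "Location": `https://${host}:8080${req.url}` });
  res.end();
}).listen(8000);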

Continue reading %How to Use SSL/TLS with Node.js%
