Using Grunt to Auto-Restart node.js with File Watchers

Written by Russell. Posted in App Dev, Deployment, Front End, Productivity, Software

Preprocessors have become a very important part of the development life cycle. In the past we would just write some HTML, JavaScript and CSS with a backend and deploy a website. Now, for better speed, a better development experience and more manageable outcomes, we have a multitude of languages that compile into these standards, e.g. CoffeeScript -> JavaScript, LESS -> CSS, Jade -> HTML. Then there is JS, CSS and HTML compression and minification after that.

Creating a development workflow to manage all of these transformations can be a very daunting task at the beginning of a new project. The complexity of the workflow can also make or break a project. Whenever I try to solve a problem, I look at the difficulty curve of reproduction and repeatability. In other words, it shouldn't be difficult to set up a new developer on the project, and it should be easy to run the workflow a thousand times a day without a loss in productivity.

Grunt has become a very important part of our development life cycle. For compiled projects such as Java, adding Grunt alongside ANT or Maven is relatively simple. For our node.js projects we wanted to not only run the node server, but also auto-restart the process while auto-building resource files like LESS and JS. Below you will find a lengthy Gruntfile with commentary interlaced. It should help you get started with your own development environment.

'use strict';

module.exports = function(grunt) {
    /**
        Map destination CSS files to source LESS files; referenced by the
        'less' targets below. Adjust the paths to match your project layout.
    **/
    var lessFiles = {
        "client/public/css/main.css": "client/resources/less/main.less"
    };

    grunt.initConfig({
        /**
            The concurrent task will let us spin up all of the required tasks. It is
            very important to list the 'watch' task last because it is blocking and
            nothing after it will be run.
        **/
        concurrent: {
            dev: ["less:dev", "nodemon", "watch"],
            options: {
                logConcurrentOutput: true
            }
        },

        /**
            The nodemon task will start your node server. The watch parameter tells
            nodemon which files to monitor; changes to those files trigger a restart.
            See the grunt-nodemon documentation for the full set of options.
        **/
        nodemon: {
            dev: {
                script: 'index.js',
                options: {
                    /** Environment variables required by the NODE application **/
                    env: {
                          "NODE_ENV": "development"
                        , "NODE_CONFIG": "dev"
                    },
                    watch: ["server"],
                    delay: 300,

                    callback: function (nodemon) {
                        nodemon.on('log', function (event) {
                            console.log(event.colour);
                        });

                        /** Opening the application in a new browser window is optional **/
                        nodemon.on('config:update', function () {
                            // Delay before server listens on port
                            setTimeout(function() {
                                require('open')('http://127.0.0.1:8000');
                            }, 1000);
                        });

                        /** Update .rebooted to fire Live-Reload **/
                        nodemon.on('restart', function () {
                            // Delay before server listens on port
                            setTimeout(function() {
                                require('fs').writeFileSync('.rebooted', 'rebooted');
                            }, 1000);
                        });
                    }
                }
            }
        },

        /**
            Watch the JS and LESS folders for changes; a change fires off the
            listed tasks. The 'server' target watches the .rebooted file that
            nodemon touches on restart, so the browser live-reloads once the
            server comes back up.
        **/
        watch: {
            js: {
                files: ["client/resources/js/**/*.js"],
                tasks: ['copy:dev'],
                options: { nospawn: true, livereload: true }
            },
            less: {
                files: ["client/resources/less/**/*.less"],
                tasks: ['less:dev'],
                options: { nospawn: true, livereload: true }
            },
            server: {
                files: ['.rebooted'],
                options: { livereload: true }
            }
        },

        /** 
            Less task to compile LESS into CSS.
            Different options for dev and prod
        **/
        less: {
            dev: {
                options: {
                    compress: false,
                    yuicompress: false,
                    strictMath: true,
                    strictUnits: true,
                    strictImports: true
                },
                files: lessFiles
            }, 
            prod: {
                options: {
                    compress: true,
                    yuicompress: true,
                    strictMath: true,
                    strictUnits: true,
                    strictImports: true
                },
                files: lessFiles
            }
        },

        /**
            Used in production mode: minify and uglify the JavaScript output
        **/
        uglify: {
            prod: {
                options: {
                    mangle: true,
                    sourceMap: true,
                    /** drop_console is an UglifyJS compress option **/
                    compress: { drop_console: true }
                },
                files: {
                    'client/public/js/main.js': ['client/resources/js/main.js']
                }
            }
        },

        /**
            The copy task has two targets. The 'libs' files are rarely updated and
            are only copied on startup (via the 'dev' registered task below). The
            'dev' target copies application-specific JS from resources to public,
            since it doesn't need a preprocessor in dev.
        **/
        copy: {
            dev: {
                files: [
                    {
                        src: ["client/resources/js/**/*.js"],
                        dest: "client/public/js",
                        expand: true,
                        flatten: true
                    }
                ]
            },
            libs: {
                files: [
                    /**
                        Array of file objects that reference bower libs
                    **/
                ]
            }
        }
    });

    /**
        Load all the GRUNT tasks
    **/
    grunt.loadNpmTasks("grunt-nodemon");
    grunt.loadNpmTasks("grunt-concurrent")
    grunt.loadNpmTasks("grunt-contrib-copy")
    grunt.loadNpmTasks("grunt-contrib-less")
    grunt.loadNpmTasks("grunt-contrib-watch");
    grunt.loadNpmTasks("grunt-contrib-uglify");

    /**
        Register tasks allowing you to run:
            grunt
            grunt run
            grunt dev
            grunt prod
    **/
    grunt.registerTask("run", ["concurrent:dev"]);
    grunt.registerTask("default", ["concurrent:dev"]);

    grunt.registerTask("dev", ["less:dev", "copy:dev"]);
    grunt.registerTask("prod", ["uglify:prod", "less:prod"]);
};

You will need to add a few entries to your package.json file to pull in the new dependencies:

    "devDependencies": {
          "open": "*"
        , "grunt-nodemon": "*"
        , "grunt-concurrent": "*"
        , "grunt-contrib-copy": "*"
        , "grunt-contrib-less": "*"
        , "grunt-contrib-watch": "*"
        , "grunt-contrib-uglify": "*"
    }
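With the Gruntfile and package.json in place, bringing a new developer up is a couple of commands. A minimal sketch, assuming grunt-cli is already installed globally:

    npm install    # pull in the devDependencies above
    grunt          # default task -> concurrent:dev (less, nodemon, watch)
    grunt prod     # build minified JS and CSS for production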

Scheduling EC2 Backups with Skeddly (automated EBS snapshots)

Written by Perry Woodin. Posted in Deployment

A recent topic of conversation in the office has been backups. Three of us have experienced catastrophic hardware failures on our local development machines (i.e. our laptops). Thankfully, we are all obsessive about backups so we all got back up and running in no time. If you aren’t backing up your local system, then you need to read Sean’s excellent post on Selecting Cloud Backup Software.

But what about your servers? And more specifically, what if you’re running an Amazon EC2 instance?

Like all things AWS, Amazon has many options for creating backups. If you set up your EC2 instance to use Elastic Block Storage (EBS), you can simply create a snapshot of your volume from the AWS Console. These EBS snapshots are incremental backups that persist in Amazon's S3. Incremental means that only the blocks that have changed since your last snapshot are saved. This is all really slick, but manually creating snapshots from the AWS Console isn't a good solution if your goal is to have daily, hourly, or whatever snapshots.

You could create your own service using command line tools. For example:

ec2-create-snapshot vol-id --description "Daily Backup"

Or, you could use Amazon’s API. For example:

https://ec2.amazonaws.com/
?Action=CreateSnapshot
&VolumeId=volume-id
&AUTHPARAMS
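If you go the command-line route, a cron entry can take care of the scheduling itself. A minimal sketch, assuming the EC2 API tools and your AWS credentials are already configured on the machine running cron, with vol-1a2b3c4d standing in for your volume id:

    # Snapshot the volume every night at 2:00 AM
    0 2 * * * ec2-create-snapshot vol-1a2b3c4d --description "Daily Backup"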

The above options are really useful, but it takes a bit of time and fiddling to get everything right. The easiest solution I have found for scheduling automated snapshots is a service called Skeddly (http://www.skeddly.com/). Skeddly can do more than automate snapshots, but that's what we're going to look at in this post.

Using Skeddly

Sign Up and Create a Skeddly Specific AWS User

As of this writing, you can get a 30-day trial of Skeddly, so go there now and sign up.

Before you sign up, I would suggest creating an access key specifically for Skeddly. You do this by creating a user from the AWS Console under Identity and Access Management (IAM). I created a user called skeddly with the following policy:

{
    "Statement": [
        {
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot",
                "ec2:CreateTags",
                "ec2:DescribeInstances",
                "ec2:DescribeSnapshots",
                "ec2:DescribeTags"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

If you need assistance creating a policy, you can use the AWS Policy Generator at http://awspolicygen.s3.amazonaws.com/policygen.html.
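If you prefer the command line to the console, the AWS CLI can attach the policy to the user directly. A sketch, assuming the AWS CLI is configured with credentials that can manage IAM, the user is named skeddly, and the JSON above is saved as skeddly-policy.json:

    aws iam put-user-policy --user-name skeddly --policy-name skeddly-snapshots --policy-document file://skeddly-policy.json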

Add an Instance

After logging into Skeddly, select the Managed Instances tab and Add Instance. This is the easiest way to create your automated snapshots. You will need to know the following before you can fill out the form:

Instance ID: Get this from your AWS Console under EC2. You will see an instance column that displays your instance ids. Your instance id should look something like i-1a2b3c4d.

Elastic IP: If you want Skeddly to Stop/Start your instance to ensure the snapshot is complete you will need to supply the Elastic IP address associated with your instance. This is optional, but recommended.

Access Key: This is the access key for the skeddly IAM user I suggested creating above.

After supplying the necessary instance information you can jump down to Backup Instance and Delete Backups.

Create your schedule.

Note there are macros you can use to name your snapshot. I use something like

$(VOLUMENAME)$(DATE)

I keep a week's worth of snapshots. And because my instance has three EBS volumes, I set the Minimum to Keep at 21. That's 3 volumes x 7 days = 21 snapshots.

Pricing

I think the pricing is incredibly reasonable. It costs $0.15 to create or delete a snapshot of each volume. That means I’m only spending $0.90/day to create three snapshots and delete three snapshots.

Scratching the Surface

Skeddly can do so much more. Once you get started, you may find yourself scheduling all sorts of tasks. Need to back up your RDS… Skeddly can do that. Create an AMI… let Skeddly handle it. Make a nice dinner… Skeddly can't do that, but with all the time you're saving, why not put on your chef's hat and prepare some grub.

Jenkins ChromeDriver plugin Killing my Nodes

Written by Perry Woodin. Posted in Deployment, Gotchas

I decided I wanted to learn how to use Selenium with Jenkins so I installed the ChromeDriver and Selenium plugins on Jenkins 1.486. Immediately after doing so, my Jenkins nodes started displaying the “Connection was broken” error message from the master Jenkins instance. Checking the connection from the node itself, everything looks fine. The node appears to be connected, but the master thinks otherwise.

I'll obviously have to do some troubleshooting. For now, I removed the ChromeDriver plugin and my node connections are working again.

Distributed Builds with Jenkins Nodes (master / slave setup)

Written by Perry Woodin. Posted in Deployment

When I first started using Jenkins (it was Hudson at the time), I would push my ColdFusion updates to various servers (e.g. staging and production) with an FTP sync ant task. It wasn’t a particularly great way to do things, but it saved a lot of time and it worked. I used this setup because I wanted to manage all of the Jenkins projects from a single Jenkins point of entry. I wanted to avoid setting up and managing Jenkins on multiple servers. I have since abandoned the FTP sync in favor of ant running on each server via a Jenkins node.

This post is geared towards ColdFusion developers because it assumes you do not need to compile your code and simply want to update your code base with changes from SCM. I will also note that Jenkins nodes are often used to distribute workload; I'm not going to cover that here. I'm simply going to explain how I set up Jenkins nodes on a Windows 2008 server. I will write up a more detailed post about the build process we use on some projects here at Troy Web.

If you don’t already have Jenkins running somewhere, you can go get a native installer at https://jenkins-ci.org/

Setting up a Jenkins Node on Windows 2008 Server

Log into Jenkins and go to Manage Jenkins => nodes.

  • Click New Node.
  • Give the node a name (e.g. Production), select Dumb Slave.
  • Click OK.

On the next page fill out the following:

  • # of executors. This controls the number of concurrent builds the node can run.
  • Remote FS root. This is the directory on your slave machine where Jenkins will install files necessary to run projects. Something like c:\Jenkins
  • Usage. Accept the default of Utilize this slave as much as possible.
  • Launch method. For Windows, select Launch slave agents via Java Web Start.
  • Availability. Accept the default of Keep this slave on-line as much as possible.

With the new node defined, log into the master Jenkins FROM the slave machine and go to Manage Jenkins => nodes. Click on your node and you will be presented with a page showing details on how to launch the node.

I haven't had any luck with the Launch button, so I recommend running the JNLP file from the command line, as shown below.
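A minimal sketch of that launch, where the master host, port, and node name (Production) are placeholders for your own setup:

    javaws http://your-jenkins-master:8080/computer/Production/slave-agent.jnlp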

Once the node has connected to the master you should see a window that indicates the slave has connected.

To install as a Windows service, select File => Install as Windows Service.

Assigning Projects to a Node

If you want a Jenkins project to run on your new node, go to a project configuration page and check the box next to Restrict where this project can be run. In the Label Expression field, enter the name of the node where the project should run.

Troubleshooting the Connection Between Master and Slave

You may need to set anonymous read access for users in Jenkins.

I had to open up a port in the firewall and then set the port to fixed under Manage Jenkins => Configure System => TCP port for JNLP slave agents.
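For reference, the firewall rule can be added from an elevated command prompt on Windows Server 2008. A sketch, assuming you chose 49187 as the fixed JNLP port:

    netsh advfirewall firewall add rule name="Jenkins JNLP" dir=in action=allow protocol=TCP localport=49187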