
Building the Skeleton in Hapi

After my last exploration went sour, I decided to do some more up-front research before diving into Hapi.js. In addition to the resources on Hapijs.com, I spent a lot of time googling for best practices. While there are a lot of blog posts and tutorials on using Hapi, there do not appear to be any standards around how applications are organized.

The only open-source boilerplate projects I found had few stars on GitHub and hadn't received updates in the past year. So the basic assumption from here on out is that I'm going it alone.

An Ocean of Choices

Since Hapi doesn't come with any underlying systems, I get to pick all of them myself. In case you were wondering: yes, there are at least 3 packages for everything in JavaScript. I already decided to use Sequelize and its CLI as my ORM. Since Sequelize has its own validator functionality, it will be interesting to see how it integrates with Joi, Hapi's validation library.

For the skeleton test, I'm not going to worry about any automation, which means I'm putting off the Grunt vs Gulp debate. So far the documentation for Grunt is way better than Gulp's, even though Gulp's chaining is nicer. I don't have a ton of things to automate, so this is low priority for now.

As for the JSON templating issue I ran across in my last post, I'm spending more time up front researching my options. Since JSON is "JavaScript Object Notation," one would think there would be copious libraries to help with shaping JSON output. But no: it seems the standard approach is to use the built-in JSON object (i.e. JSON.stringify) to convert a full JS object into JSON and send it over the wire. That is also not acceptable, so I have spent a lot of time trying to find a solution.

Why JSON templating matters

A brief interlude to explain why just calling JSON.stringify does not scale. This project is starting out small, but as I continue to add features I expect my models to get larger over time. Generally this is the case with software. As the models get larger, I want fine-grained control over how they are sent over the wire. Why? There are several reasons. As any project grows, the need to nest models also grows (sorry, but REST doesn't scale). And when the same object is sent in multiple API responses, it should be serialized consistently. A templating system, especially one that supports partials, ensures this is the case.

Also, I am primarily a mobile developer, and I know my hypothetical users live in a semi-connected world. They will not always have 5 bars of LTE, so I need to be conscious of how much data is sent over the wire. The likely problematic model will be Photos, which will always be sent in a list. If that model ends up having more than ~20 properties, I probably don't want to send the full object when it appears in a list.

Let me make this next point very clear: not sending the full model in a list is both a UX and a technical decision. It means that when a user wants to look at the details of a Photo, another request will have to be made, increasing the time it takes for the screen to finish loading. There is a point at which this tradeoff becomes worth it. I don't know exactly where that is, or if this project will reach it, but I most certainly don't want to put off solving this problem when it could have a significant impact on the rest of the project (especially when barely any libraries exist with this functionality!).
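To make that concrete, here is a rough sketch of the kind of control I want. The Photo fields below are hypothetical; the point is that JSON.stringify sends every property of the model, while a hand-picked (or templated) representation can differ between list and detail responses.

// Hypothetical Photo serializers -- field names are made up for illustration.
// JSON.stringify(photo) would dump every column; these pick fields instead.

function photoSummary(photo) {
  // what a list endpoint might send
  return {
    id: photo.id,
    title: photo.title,
    thumbnailUrl: photo.thumbnailUrl
  };
}

function photoDetail(photo) {
  // what the single-photo endpoint might send: the summary plus the heavy fields
  return Object.assign({}, photoSummary(photo), {
    description: photo.description,
    takenAt: photo.takenAt,
    fullUrl: photo.fullUrl
  });
}

// GET /photos      -> JSON.stringify(photos.map(photoSummary))
// GET /photos/:id  -> JSON.stringify(photoDetail(photo))

A templating library with partials does essentially this, just declaratively and reusably across endpoints.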

The current state of JSON templating in JavaScript

I found several packages that support JSON template creation. At first I looked at some of the more popular HTML templating frameworks, like dust and mustache, but found that they were too HTML-focused for my needs. After a significant amount of searching, I found 3 libraries: Kale, JSONPath-object-transform, and Hapi JSON View. The first two haven't been updated in a year, which does not bode well. The last one is both Hapi-specific and extremely new. I'm going to live on the edge and use that one.

Initial Hapi setup

Now that I have that all sorted, it's time to get Hapi up and running. As for setup, I ended up following most of this blog post. It walks through all the different files created and the CLI tools, which was useful since this is all new to me. Of course, I created my own models and business logic instead of the ones listed.
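For reference, the index.js at this stage is roughly the following. This is a sketch using the pre-v17 Hapi API that was current at the time; the port and the throwaway route are my own placeholders, not something from the walkthrough.

'use strict';

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ host: 'localhost', port: 3000 });

// a throwaway route just to prove the server is up
server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    reply({ status: 'ok' });
  }
});

server.start(function (err) {
  if (err) {
    throw err;
  }
  console.log('Server running at:', server.info.uri);
});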

One snag I ran into with sequelize-cli was that I wasn't aware of the defaults it adds. For instance, I wanted id and created_at on the Group model, so I ran the command node_modules/.bin/sequelize model:create --name Group --attributes id:integer,name:string,created_at:date. But when I looked at the migration it generated, it already contained both id and createdAt. Shouldn't two columns with the same name throw an error? Apparently not. So I removed my additions and learned not to add those fields in the future.

'use strict';
module.exports = {
  up: function(queryInterface, Sequelize) {
    return queryInterface.createTable('Groups', {
      // added automatically by sequelize-cli
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      // the duplicate id from my --attributes flag
      id: {
        type: Sequelize.INTEGER
      },
      name: {
        type: Sequelize.STRING
      },
      // created_at from my --attributes flag...
      created_at: {
        type: Sequelize.DATE
      },
      // ...alongside the createdAt/updatedAt timestamps sequelize-cli adds by default
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE
      }
    });
  },
  down: function(queryInterface, Sequelize) {
    return queryInterface.dropTable('Groups');
  }
};

And then came the infamous many-to-many relationship. Sequelize will automatically create a join table for me if I let it, but I need meta-information about those memberships, so I need to create the model myself. Thankfully I read the docs and saw the important note: "If you want additional attributes in your join table, you can define a model for the join table in sequelize, before you define the association." I hadn't done any migrations involving the user/group relationship yet, due to not having updated the development DB to SQLite, so this ended up going smoothly.
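In practice that means defining the Membership model first and then pointing both associations at it. A rough sketch in the Sequelize 3-era module style; the role column and its values are assumptions on my part, standing in for whatever meta-information the memberships actually carry:

// models/membership.js (sketch)
'use strict';
module.exports = function(sequelize, DataTypes) {
  var Membership = sequelize.define('Membership', {
    // example extra attribute on the join table; the ENUM values here are made up
    role: DataTypes.ENUM('member', 'admin')
  });
  return Membership;
};

// and wherever the associations are set up:
//   User.belongsToMany(Group, { through: Membership });
//   Group.belongsToMany(User, { through: Membership });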

Okay, next challenge: all of the Sequelize documentation is written assuming everything lives in one file, and it doesn't differentiate between migration and model files (the model side is sketched below the tree). This was incredibly confusing, because I had to add the ENUM value declarations in 2 places to get the membership migration to succeed. And then I realized that Sequelize didn't create the many-to-many keys, even though according to the docs I did everything right. Still, compared to DataMapper, which I fought with for over an hour, this was much easier. Models, check. Here's what my app looks like at this point (with node_modules omitted b/c omg that's redic):

├── ./Gulpfile.js
├── ./config
│   └── ./config/config.json
├── ./db.development.sqlite
├── ./index.js
├── ./migrations
│   ├── ./migrations/20160511001612-create-user.js
│   ├── ./migrations/20160511001808-create-group.js
│   └── ./migrations/20160511003357-create-membership.js
├── ./models
│   ├── ./models/group.js
│   ├── ./models/index.js
│   ├── ./models/membership.js
│   └── ./models/user.js
├── ./package.json
└── ./seeders
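To make the migration/model split concrete: models/group.js ends up holding only the attribute types and associations, while the migration shown earlier owns the actual table shape. This is roughly what sequelize-cli generates (trimmed, and it may differ slightly between versions):

// models/group.js (sketch)
'use strict';
module.exports = function(sequelize, DataTypes) {
  var Group = sequelize.define('Group', {
    name: DataTypes.STRING
  }, {
    classMethods: {
      associate: function(models) {
        // e.g. Group.belongsToMany(models.User, { through: models.Membership });
      }
    }
  });
  return Group;
};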

And for my package-happy friends: at this point there are 297 packages in the node_modules folder. This I do not like, but that's more of a problem with the thin JS standard library than anything else, IMO. I also said I'd ignore the Grunt vs Gulp debate, but auto-reloading the server is kind of awesome, so I added a single gulp task (sketched below) to restart the server when a file changes on disk. Clock time so far: 1.5 hours, which is 30 minutes less than the partial Padrino skeleton took!
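Here's the gist of that task: a sketch using gulp 3-style APIs, with placeholder paths for whatever the real Gulpfile watches.

// Gulpfile.js (sketch)
var gulp = require('gulp');
var spawn = require('child_process').spawn;

var server = null;

// (re)start the Hapi server as a child process
gulp.task('serve', function() {
  if (server) {
    server.kill();
  }
  server = spawn('node', ['index.js'], { stdio: 'inherit' });
});

// restart whenever a source file changes
gulp.task('watch', ['serve'], function() {
  gulp.watch(['index.js', 'models/**/*.js'], ['serve']);
});

gulp.task('default', ['watch']);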

Next week’s post will go into creating the controllers, and dealing with all sorts of issues with packages in Node.
