Michele Titolo | Blog | Speaking

Showing all items tagged: micheledoesweb


Reviving #micheledoesweb

I started a project earlier this year to re-familiarize myself with web technologies that have emerged in the last 2 or so years and to rejuvenate related rusty skills. The beginning was great: I wrote a blog post 5 weeks in a row. My life was pretty quiet at that point. I had finished up travel for the spring, I wasn’t on major projects at work that left me brain-drained at the end of the day, and there were no upcoming conferences I needed to prep for. But then all of that changed.

At the beginning of June, I was put on a Long and Difficult project. Don’t get me wrong, I’m really enjoying it. But it does mean that since June, most nights I get home and I don’t want to write code. I also wrote 3 new conference talks between June and the end of September. Each talk takes at least 15 hours of work, and many take quite a bit longer. None of this is a bad thing. I like that my job challenges me. I like that I can leave my job at the office and am rarely working outside my normal 9-5. I like exploring new topics and formats for talks. But I only have so much time in a day, and I enjoy having a life, so #micheledoesweb fell by the wayside.

That doesn’t mean I haven’t used bits of spare time to keep investigating. I’ve been learning plenty, just in smaller chunks. Could I have written a blog post on each of those? Possibly. Some of those blog posts from earlier this year took 4-5 hours to write, not counting the time actually learning (for reference, this one took 2 hours). I just haven’t had that time, and being out of practice writing doesn’t help either. But here’s an idea of what I have done.

I started learning Go. After dealing with issues in JS, I wanted types. Swift 3 wasn’t out yet (and still isn’t ready IMO), so Go was the next best thing. After wrestling with JS, Go was wonderful. There was a compiler! And actual objects! And types! But the whole $GOPATH thing felt like a hack. The package manager really isn’t great, and I was worried I’d run into versioning issues if I continued to use it. There are also lots of web frameworks in Go, but they had the same kinds of issues that something like Sinatra has in Ruby, where they were “micro frameworks”. It turns out I like the way I was able to structure my Hapi.js application. I stopped learning Go at this point.

I dug into AWS. I have access to several online learning platforms that cover AWS. Most chapters of online courses can be done in ~10-15 minutes. I’ve learned about everything from EC2, to RDS, to Lambda. I’ve deployed a static site backed by S3, an EC2 instance, and configured DNS with Route 53. I still need to figure out the whole EC2 <-> RDS thing, but my AWS skills are much better than when I started #micheledoesweb.

I learned about Flow, a JS type checker. My friends over at Artsy have been doing a lot of JS recently, and my Twitter feed now has more JS than I ever imagined. When I was starting #micheledoesweb, I looked into TypeScript, but I didn’t want to learn an entirely new language (which is funny because I started learning Go anyway). It also meant that I couldn’t use most of the JS tutorials without doing a translation from JS -> TypeScript. But recently I learned of Flow, which is a static type checker for JS. It doesn’t require a new language or tons of extra tools, but it does give just a little extra security that I felt was lacking from pure JS apps. I’m not sure if I’ll use it, or JS, for #micheledoesweb, but I am glad I took the time to learn about it.
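Part of the appeal is that Flow also has a comment-based syntax, so annotated files are still plain Javascript and run without any build step. A tiny sketch (the type and function here are hypothetical examples, not from any real project):

```javascript
// Flow's comment-based annotations: Node sees plain JS, the checker sees types.
// The User type and displayName function are hypothetical examples.
/*:: type User = { name: string, email: string }; */

function displayName(user /*: User */) /*: string */ {
  return user.name;
}

displayName({ name: 'Michele', email: 'michele@example.com' }); // → 'Michele'
```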

In addition, since the beginning of June I have:

  • Attended 5 conferences
  • Spoke at 4 of those conferences
  • Gone on 7 distinct trips
  • Flown over 20k miles
  • Helped with 4 app releases
  • Got my first direct report 🎉
  • Started working on my first hardware project for Halloween

The end of 2016 is looking to be much calmer, so I hope to be able to pick #micheledoesweb back up in earnest very soon 🙃

Building APIs in Hapi and Node.js

Controllers

This is where it’s going to get interesting. I really liked how Padrino combined routes and logic in the same place, so I attempted to do so with Hapi. After combing through the boilerplate projects, I found the magic to allow me to split up my routes into separate controllers. It’s a simple module.exports = server; in my main index.js file.

An interlude on module.exports

Admittedly, before I sat down and started working on this test I dug a bit into how Node.js works. In the tutorials and sample code I looked through, there were always these calls to module.exports. It’s a documented feature of Node, and represents what is returned when a file or set of files is required. This can be a function or a class. It also explains why, at the end of all my model files, there ends up being a return User; for instance.

// user.js
module.exports = function(sequelize, DataTypes) {
  // sketch of the model definition; name and email match the seed data
  var User = sequelize.define('User', {
    name: DataTypes.STRING,
    email: DataTypes.STRING
  });
  return User;
};

// index.js
var models = require('./models');

server.route({
  method: 'GET',
  path: '/api',
  handler: function(request, reply) {
    models.User.findAll()
      .then(function(users) {
        reply(users).code(200);
      });
  }
});

Now the export can be named whatever you like, but this pattern allows normalized access throughout the application. It’s still not a type system, but at least there is pseudo-namespacing.

The GET endpoints

I’m going to start by creating the simple routes for GET /users and GET /users/:id. The tutorial I followed had me essentially create /users but I didn’t have any data in my database, so it always returned an empty array []. This is already better than some of the other frameworks I tried out, as that is the expected behavior.

Seed data

In order to test all of these out, I first need data. I don’t want to go through the process of creating all the endpoints and then using them to add data. Sequelize has the ability to seed a database, so that is what I used. The CLI will create a seed file for you and store it in ./seeders. But it is empty.

The documentation for what goes in a seed file is also pitiful. It took copious amounts of searching to find the right format for the up and down actions. For some reason up refuses to work without returning an array. Over the course of about 5 hours, I managed to get seeders to run just once. There’s also no reset or drop command, so if something went awry I had to manually fix the db. That feature was requested in 2014 but nothing has come of it. It took me filing a bug, and actually finding bugs in the CLI library (hint: don’t use JSON seederStorage), for this to start working again. Final seed file:

'use strict';

module.exports = {
  up: function (queryInterface, Sequelize) {
    return queryInterface.bulkInsert('Users', [
      {email: "aphrodite@olympus.org", name: "Aphrodite", createdAt: Date.now(), updatedAt: Date.now()},
      {email: "athena@olympus.org", name: "Athena", createdAt: Date.now(), updatedAt: Date.now()},
      {email: "zeus@olympus.org", name: "Zeus", createdAt: Date.now(), updatedAt: Date.now()},
      {email: "apollo@olympus.org", name: "Apollo", createdAt: Date.now(), updatedAt: Date.now()}
    ], {});
  },

  down: function (queryInterface, Sequelize) {
    return queryInterface.bulkDelete({tableName: 'Users'}, null, {});
  }
};

I sunk at least 8 hours into getting this working, over the course of 3 nights. If you think I’m exaggerating, I’m not.

Writing tests

I don’t usually do TDD, but for this I’m going to use a test harness to enforce the contracts I set out in my Swagger document. This is also a super easy way to verify that as I change things, I don’t regress any endpoints. I’m not working on a full unit test suite, because the logic is simple. An integration suite covers all I need.

For this testing I decided to use Frisby.js. It runs on top of Jasmine, and spins up a server to serve requests and validate responses. It’s exactly what I need. My first test looks like this:

var frisby = require('frisby');

frisby.create('Ensure we get a list of users')
    .get('http://localhost:3000/users')
    .expectStatus(200)
    .expectHeaderContains('content-type', 'application/json')
    .expectJSONTypes('*', {
        name: String,
        id: Number,
        email: String
    })
    .toss();

This was also a struggle to get working, since there have been bugs fixed on the master branch, and no release has been made. It’s really frustrating to see abandonware, especially when so much work was done after the last release. After a lot of headaches I used the gulp-jasmine-phantom package instead of gulp-jasmine and managed to get it all working. All told getting the test harness setup was at least 5 hours of work, most of it debugging frisby and gulp.

Creating Controllers

The next part involves moving around some code, because I don’t want everything in the index.js file. Most of this worked as expected, and very little had to change to get it to be happy.

.
├── ./Gulpfile.js
├── ./app
│   ├── ./controllers
│   │   ├── ./index.js
│   │   └── ./user.js
│   └── ./models
│       ├── ./group.js
│       ├── ./index.js
│       ├── ./membership.js
│       └── ./user.js
├── ./db
│   ├── ./config
│   │   └── ./config.json
│   ├── ./db.development.sqlite
│   ├── ./migrations
│   │   ├── ./20160511001612-create-user.js
│   │   ├── ./20160511001808-create-group.js
│   │   └── ./20160511003357-create-membership.js
│   └── ./seeders
│       └── ./d20160511015238-users-and-groups.js
├── ./index.js
├── ./package.json
└── ./spec
    └── ./user_spec.js

The most challenging part of creating controllers was loading them. Since each Javascript file exists in a vacuum, controllers need to be require‘d in order to be used in other files. I could manually require every controller, but that would just get tedious. Thankfully this is a problem many have solved. After a lot of playing around, I ended up using the solution in this blog post, with a little modification for Hapi. It works quite well.

I thought about adding a prototype and actual controller objects, but automating the importing of separate methods used for different requests seemed like a pain, and I’ll be breaking standard REST conventions. I don’t want to dig myself into a hole. Each of my controllers module.exports one function that takes a Hapi server instance, and then assigns the routes. I can still create functions within the files to handle more complex logic, but most of these endpoints aren’t complicated.
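So each controller is just one function. A sketch of the shape (route and model names from earlier in this post; in the real file this function is the module.exports and models comes from require('../models'), but here models is a parameter so the sketch stands alone):

```javascript
// The shape of one controller: a single function that takes the Hapi server
// and registers routes on it. models is passed in to keep the sketch
// self-contained; the real app/controllers/user.js requires it instead.
function userController(server, models) {
  server.route({
    method: 'GET',
    path: '/users',
    handler: function (request, reply) {
      models.User.findAll().then(function (users) {
        reply(users).code(200);
      });
    }
  });
}
// In app/controllers/user.js: module.exports = userController;
```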

JSON Templates

Compared to everything else in this blog post, this took the least amount of time. It helps that the library is new, so the maintainer is still paying attention to it. The syntax is strange, as the same word ends up being repeated several times, e.g. reply.view('users', { users: users }). After a little oddness, everything was up and running smoothly. I created a partial for the user object, and then used that for both the list and the detail calls.
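hapi-json-view has its own template format, so this is just the shape of the partial idea in plain functions (all names hypothetical): one renderer decides what a user looks like on the wire, and both views reuse it.

```javascript
// The partial idea, stripped down to plain functions: the user partial is the
// single source of truth for a user's wire format, reused by both views.
function userPartial(user) {
  return { id: user.id, name: user.name, email: user.email };
}

function usersListView(users) {
  return users.map(userPartial);
}

function userDetailView(user) {
  return userPartial(user);
}
```

The payoff is that a field added to (or hidden from) the partial changes every endpoint that renders a user, consistently.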

Failing the test

Over the past two weeks I’ve put in over 20 hours of work to get two endpoints working (I stopped after GET /users and GET /users/:id). I filed a few issues on OSS projects and made one post on Stack Overflow. While the remainder of the endpoints would hopefully be trivial at this point, I am very disappointed by the tooling. There’s going to be a learning curve with a new language and the popular libraries in it, but 20 hours to get 2 API endpoints working is ridiculous. I came to a conclusion: I don’t want to do this project in Javascript. I’ve already lost too much sleep and been frustrated to the point of wanting to give up (which I actually did one night). I don’t know where I’ll go from here, but hopefully some space away from the horrors of Javascript will give me some insight.

Building the Skeleton in Hapi

After my last exploration went sour, I decided to do some more up-front research before diving into Hapi.js. In addition to the resources on Hapijs.com, I spent a lot of time googling for best practices. While there are a lot of blog posts and tutorials on using Hapi, there do not appear to be any standards around how applications are organized.

The only open source projects related to boilerplates I found had few stars on GitHub and haven’t received updates in the past year. So the basic assumption from here on out is that I’m going this alone.

An Ocean of Choices

Since Hapi doesn’t come with any underlying systems, I get to pick all of them myself. In case you were wondering: yes, there are at least 3 packages to do everything in Javascript. I already decided to use Sequelize and its CLI as my ORM. Since it has its own validator functionality, it will be interesting to integrate with Joi, Hapi’s validation library.

For the skeleton test, I’m not going to worry about any automation, which means I’m putting off the Grunt vs Gulp debate. So far the documentation for Grunt is way better than Gulp, even though the chaining in Gulp is nicer. I don’t have a ton of things to automate, so this is low priority for now.

As for the JSON templating issue I ran across in my last post, I’m spending more time upfront researching my options. Since JSON is “Javascript Object Notation,” one would think that there would be copious libraries to help with this. But no, it seems that the standard JSON library is used to convert any full JS object into JSON to send over the wire. That is not acceptable to me, so I have spent a lot of time trying to find a solution.

Why JSON templating matters

A brief interlude to explain why just calling JSON.stringify does not scale. This project is starting out small, but as I continue to add features I expect my models to get larger over time. Generally this is the case with software. As the models get larger, I want to have fine-grained control over how they are sent over the wire. Why? There are several reasons. As any project grows, the need to nest models also grows (sorry, but REST doesn’t scale). When sending the same object in multiple API requests, it should be sent consistently. A templating system, especially one that supports partials, ensures this is the case.

Also I am primarily a mobile developer and I know my hypothetical users live in a semi-connected world. They will not always have 5 bars of LTE, so I need to be conscious of how much data is being sent over the wire. The likely problematic model will be Photos, which will always be sent in a list. If that model ends up having more than ~20 properties, when it is in a list I probably don’t want to send the full object.

Let me make this next point very clear: Not sending the full model in a list is both a UX and technical decision. This will mean when a user wants to look at the details of a Photo, another request will have to be made, increasing the time for the screen to be finished loading. There is a point at which this tradeoff becomes worth it. I don’t know exactly where that is, or if this project will reach it, but I most certainly don’t want to have to put off solving this problem when it could have a significant impact to the rest of the project (especially when barely any libraries exist with this functionality!).
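To make the tradeoff concrete, here’s a sketch with a hypothetical Photo object: the naive serialization ships everything, while a hand-rolled list “template” ships only what the list screen needs.

```javascript
// A hypothetical Photo with more properties than a list screen needs
var photo = {
  id: 42,
  url: 'https://example.com/photos/42.jpg',
  caption: 'Sunset',
  takenAt: '2016-05-11T01:52:38Z',
  exif: { iso: 100, aperture: 'f/2.8', shutterSpeed: '1/250' }
};

// Naive: the whole object, nested EXIF and all, goes over the wire
var fullPayload = JSON.stringify(photo);

// Templated: the list representation picks a handful of fields
function photoListItem(p) {
  return { id: p.id, url: p.url, caption: p.caption };
}
var listPayload = JSON.stringify([photo].map(photoListItem));
```

The detail endpoint can still send the full object; the list stays small and, more importantly, consistent everywhere it appears.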

The current state of JSON templating in Javascript

I found several packages that support JSON template creation. At first I did look at some of the more popular HTML templating frameworks, like dust and mustache, but found that they were too HTML-focused for my needs. After a significant amount of searching, I found 3 libraries: Kale, JSONPath-object-transform, and Hapi JSON View. The first two haven’t been updated in a year, so that does not bode well. The last one is both Hapi specific and extremely new. I’m going to live on the edge and use that one.

Initial Hapi setup

Now that I have that all sorted, it’s time to get Hapi up and running. As for setup, I ended up following most of this blog post. It walks through all the different files created and the CLI tools, which was useful since this is all new to me. Of course I created my own models and business logic instead of the ones listed.

One snag I ran into with sequelize-cli is that I wasn’t aware of the defaults it adds. For instance, I wanted id and created_at on the Group model, so I ran the command node_modules/.bin/sequelize model:create --name Group --attributes id:integer,name:string,created_at:date. But when I looked at the migration it generated, there were already both an id and a createdAt. Shouldn’t two columns with the same name throw an error? Apparently not. So I removed my additions and learned not to add those fields in the future.

'use strict';
module.exports = {
  up: function(queryInterface, Sequelize) {
    return queryInterface.createTable('Groups', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      id: {
        type: Sequelize.INTEGER
      },
      name: {
        type: Sequelize.STRING
      },
      created_at: {
        type: Sequelize.DATE
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE
      }
    });
  },
  down: function(queryInterface, Sequelize) {
    return queryInterface.dropTable('Groups');
  }
};

And then came the infamous many-to-many relationship. Sequelize will automatically create a join table for me if I let it, but I need meta-information about those memberships, so I need to create the model myself. Thankfully I read the docs and saw the important “If you want additional attributes in your join table, you can define a model for the join table in sequelize, before you define the association.” I hadn’t done any migrations with the user/group relationship due to not having updated the development DB to sqlite yet. This ended up going smoothly.

Okay next challenge: all of the Sequelize documentation is built assuming everything is in one file and it doesn’t differentiate between migration and model files. This was incredibly confusing, because I had to add the ENUM value declarations in 2 places to get the membership migration to succeed. And then I realized that Sequelize didn’t create the many-to-many keys, even though according to the docs I did everything right. Compared to DataMapper, which I fought with for over an hour, this was much easier. Models, check. Here’s what my app looks like at this point (with node_modules omitted b/c omg that’s redic):

.
├── ./Gulpfile.js
├── ./config
│   └── ./config/config.json
├── ./db.development.sqlite
├── ./index.js
├── ./migrations
│   ├── ./migrations/20160511001612-create-user.js
│   ├── ./migrations/20160511001808-create-group.js
│   └── ./migrations/20160511003357-create-membership.js
├── ./models
│   ├── ./models/group.js
│   ├── ./models/index.js
│   ├── ./models/membership.js
│   └── ./models/user.js
├── ./package.json
└── ./seeders

And for my package-happy friends, at this point there are 297 packages in the node_modules folder. This I do not like, but that’s more of a problem with the JS stdlib than anything else IMO. I also said I’d ignore the grunt vs gulp debate, but auto-reloading the server is kind of awesome. So I added in a single gulp task to restart the server when a file changes on disk. Clock time so far: 1.5 hours, which is less than the partial Padrino skeleton by 30 mins!

Next week’s post will go into creating the controllers, and dealing with all sorts of issues with packages in Node.

Building the Skeleton in Padrino

So in order to get a real feel for how Padrino and Hapi.js will work, I’m going to create the site skeleton and the basic APIs. In addition to the server app framework, I’ve made a number of other technical decisions:

  • PicsNearMe has a relational database (sorry Mongo!). The database will be MySQL locally. I plan on taking advantage of Amazon’s new Aurora DB in production.
  • For now, the app will serve statically generated HTML for the web front-end. If I feel like getting fancy I’ll add Angular.js later.
  • APIs will be spec’d beforehand using Swagger.
  • The Swagger spec will be used for unit testing the application.
  • Authentication will be OAuth2, with tokens expiring after a day and a re-login needed after 30 days of inactivity (aka the refresh token is good for 30 days, but regular token just for 1). I’ve worked with this model in the past and when coupled with revoking permissions, it works out really well. I wouldn’t recommend this approach with particularly sensitive information.
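That last token policy can be sketched as a little state function (a hypothetical helper to illustrate the lifecycle, not real auth code):

```javascript
// A sketch of the token lifecycle described above: access tokens last a day,
// and the refresh token is good for 30 days of inactivity. Hypothetical helper.
var DAY = 24 * 60 * 60 * 1000;

function tokenState(issuedAt, lastActivityAt, now) {
  if (now - issuedAt < DAY) return 'valid';                  // access token still good
  if (now - lastActivityAt < 30 * DAY) return 'refreshable'; // silently re-issue
  return 'reauthenticate';                                   // 30 days idle: log in again
}
```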

Testing out the frameworks

For the purposes of giving Hapi and Padrino a “real world test” I won’t be creating the full application. Instead I’ll be creating the user/group system which at a simple level looks something like this:
User, GroupMembership, and Group table graphic

One thing to note is that I’ve already made the decision to have Photos owned by Groups, even for 1 user. So upon account creation there is automatically one group created just for the user to post pictures to on their own. If I expected this site to have millions of users, I would absolutely revisit this decision. But for now it is a trade-off I’m willing to make in order to have a single code path for posting photos.

I’ll also be skipping authentication, authorization, and permissions for this test, as that is highly language and framework dependent. I’ll get it working at some point.

Here are the endpoints I’ll be creating:

GET     /users
POST    /users
GET     /users/{id}
PUT     /users/{id}
DELETE  /users/{id}
GET     /users/{id}/invitations
PUT     /users/{id}/invitations/{id}
GET     /groups
POST    /groups
GET     /groups/{id}
PUT     /groups/{id}
POST    /groups/{id}/invite

Because I want friendly URLs, the GroupMembership table will be exposed via the invitations route. The /groups/{id}/invite route will actually be the invite creation, again because pretty URLs matter.

Padrino

Getting the Padrino boilerplate setup is pretty painless, except in 0.13.X all the models are in a separate folder from the rest of the application. This is unacceptable to me so I moved them into /app/models.

The GroupMembership model is many-to-many, and I’ve never done this with DataMapper before. I found a blog post that talks about how to create these intermediary models, but following it made the db commands stop running. So I disabled DataMapper.finalize in config/boot.rb and manually created/migrated the db. Starting the app still didn’t work until I took out the fancy many-to-many logic. This was very confusing and took almost an hour to sort out.

My GroupMembership model ends up looking like this:

class GroupMembership
  include DataMapper::Resource

  # property <name>, <type>
  property :id, Serial
  property :status, Enum[:pending, :accepted, :declined], default: :pending
  property :roles, Enum[:owner, :member], default: :member
  property :created_at, DateTime
  property :user_id, Integer
  property :group_id, Integer


  belongs_to :user
  belongs_to :group
end

After models and setting up the db, the next step is getting some output. Unlike more opinionated frameworks, Padrino doesn’t have generators that create all the things. One thing I like is that Padrino takes the HTTP verb and integrates it directly into the controller. There is no separate routes file to edit, which is wonderful. The downside is that for this API I end up writing commands like padrino g controller Groups get:index post:index put:index. There’s also no way to indicate params on the command line, so anything that needs an id, for instance, needs to be edited. This is the resulting empty controller:

Picsnearme::App.controllers :groups, :provides => [:json] do

  get :index do

  end

  post :index do

  end

  put :index, with: :id do

  end

end

I also need to add a custom route to handle /groups/{id}/invite. This also can’t be done by the command line generators. Thankfully, that is also intuitive:

post :invite, map: "/groups/:id/invite" do

end

The biggest downside here is that Padrino doesn’t do the pluralization handling like Rails does. So while my model is called User, I needed to create a Users controller so URLs would be correctly pluralized.

Fantastic. Now to actually have the routes serve some JSON! Except that doesn’t seem to work the way I expected it to. I tried render, I tried sinatra-style body, heck I even tried return, and I could not get the controllers to return just JSON without a view. I finally googled enough to try render json: @users, which then complained about a missing template. I do not want to template out my JSON responses in the same way I create view templates. Data is not a view, so I do not want JSON templates in the same place as views rendered as HTML. Those are separate concerns.

Oh and in the process of doing this, the server hung and I had to force-quit it via Activity Monitor. All the blog posts I could find online use the grape plugin for building APIs, but I don’t want to use grape.

Well, shit. I guess Padrino alone isn’t good enough to create APIs. I’m not going any further as I’ve learned enough about Padrino to know it won’t work for me. This is an incredibly huge letdown. Hapi will be covered next week (hopefully I get further)!

Languages, frameworks, and tools, oh my!

A lot has changed since I spent time building websites. Figuring out where to start is slightly overwhelming. Around every corner there is a decision: Less or Sass? Grunt or Gulp? Django or Rails? Handlebars vs Mustache vs Angular? Without too many restrictions it’s easy to get lost in the sea of choice. So here are my criteria for choosing the technology for this project:

  • Handles front and back end
  • APIs are first-class citizens
  • Connects with multiple kinds of databases
  • Uses a sane front-end templating system
  • Able to automate deployment

Language

Most people usually pick a language before picking a framework to use. That’s not how I am going to do things. I’m currently working in my 4th programming language, so adding another one is not a big deal. At this point the biggest hurdles for me end up being tooling and conventions rather than language syntax. Of course the tooling significantly differs depending on language.

Tooling is the main reason I’m not doing any of this in Swift. It’s far too new, and doesn’t have enough established conventions separate from the iOS and OS X world. Maybe in 5 years it’ll be viable, but right now it’s not ready for this project. I also have similar thoughts about Go, so that’s also not on the list.

Based on what I know about the common web world, the most popular languages are (in no particular order): PHP, Ruby, Java, Javascript, Python, and C#. Even though I work for a large enterprise, I don’t want to spend my free time working in enterprise software land, which rules out Java and C#. I also moved on from PHP many years ago and don’t really want to go back. So that leaves me with: Ruby, Python, and Javascript.

My biggest irk with all of these is that none are strongly typed. My second biggest irk is that connecting them to a debugger is difficult. To someone used to no debuggers or types, that might not sound like a big deal, but debugging is a lot easier with types and runtime introspection. Basically half of my debugging skills are simply not applicable in a loosely typed world. This is going to be all rainbows and butterflies.

Frameworks

These are all popular and (mostly) modern web languages with a plethora of tools and frameworks available to use. The frameworks in these three languages tend to fall into two camps: opinionated but contains everything, and micro but pluggable/extendable.

Do-it-all frameworks

These are the frameworks most likely to become a monolith: Rails, Django, and Sails.js. These all come with the whole kit and caboodle ready to go, including command line helpers. I’m a huge fan of automation, so having those commands is a big time saver.

Rails is a framework I’ve worked with off and on for several years, mostly on smaller projects. It’s currently my go-to for prototyping or as a stand-in for a real service. With Rails 5, creating an API-only site is much easier. The best part about Rails is that I am already familiar with it, even though some things have changed in Rails 4 and 5 that I need to catch up on. For some reason the asset pipeline just never stuck. Rails does hit all my bullet points, and is able to plug and play with many kinds of databases and front end languages. Rails 5 has done good work at cutting down on dependencies, but there are still a lot. The biggest drawback is that sometimes Rails is too opinionated and magical. I’d like to be a little closer to the metal this time around.

Django has been around for a while and is one of the most popular Python frameworks. Like Rails, it has command line generators for a number of different kinds of things, middleware to connect to a number of different databases, and a whole lot of plugins. I’m particularly fond of the django-admin plugin, which is just so much better than anything I’ve used with Rails. It took me less than 2 hours to get through the intro tutorial, including over a half hour trying to figure out why my code wasn’t working when it was the same as the tutorial’s. Stray whitespace and the loose types are not my friends.

Sails.js is a Javascript framework heavily based on Rails. The CLI is well developed and there are sensible defaults for the conventions. Unfortunately, there is no Sails.js introduction tutorial that walks through all of the basics on the official Sails.js site. I spent an hour trying to mimic the Django tutorial I did (which was fantastic) but ended up not getting very far. In addition to a lack of a good tutorial, when I had no items for a query, Sails returned a 404 instead of an empty collection. Needless to say I was not impressed.

Microframeworks

Each language that I’m considering has a microframework that’s more popular than all the others. Ruby has Sinatra, Python has Flask, and Node has Express.js. All of these on their own don’t really provide enough of a baseline for me to get started. I want more than a simple server that can respond to requests, and while a lot of middleware exists, there’s still a lack of conventions to follow. I don’t want to completely reinvent the wheel. Fortunately others have run across this problem, so I don’t need to build my own solution. Enter Padrino and Hapi.js.

Padrino is a Ruby framework based on Sinatra. It’s very much a plug-and-play system where you can add or not add modules as you choose. It does have a CLI for quickly creating new kinds of files, and it sticks with the common MVC paradigm for organization. After a little bit of playing, this definitely lacks the heft of Rails while providing a lot of the niceties. And here we have the first actual contender!

Hapi is one of the few popular Node.js frameworks that doesn’t depend on Express.js. It has a fairly small core with a lot of plugins. And the authentication/authorization model is the best I’ve seen so far. It doesn’t provide an ORM out of the box, but it integrates well with Sequelize, which does have a CLI for me to work with. Even though it is Javascript, I think I can make this work.

Next up: creating the basic site skeleton in Padrino and Hapi (yes, there will be code!)

#micheledoesweb

One of my goals for 2016 is to successfully complete a side project. I haven’t had a personal side project since college, and I’ve been wanting to round out my experience with more web development. This isn’t my first web rodeo, but I haven’t worked on a majority-web project since 2012. It was in Rails 3, used Bootstrap 2, and GOSH was it fantastic.

So what am I going to build? I was inspired by a hackathon project I did in 2012 and created a list of features this hypothetical product would need. Feast your eyes on PicsNearMe! It’s Snapchat meets Shutterfly! Visiting a museum, park, or just hanging out? Create a group with friends and then take and share your photos right in the moment. Afterwards, export your images to social networking sites like Twitter, Facebook, and Pinterest to show the rest of the world how much fun you had, or download a gorgeous photo album for nostalgia.

Why this terrible startup idea? It happens to cover some of the most common web application features:

  • Mobile-friendly authentication
  • Image download, upload, and processing
  • User, group, and event management
  • Image export to 3rd party services
  • PDF generation (for the photo albums)
  • Transactional emails

There are some additional technical aspects as well:

  • Securing uploads and downloads
  • Creating a continuous delivery pipeline
  • Creating performant image processing, album generation
  • Content filtering

I plan on writing weekly and putting about 5 hours into this per week. Luckily I’m starting this at a point where I have no conference talks for 3 months. Getting started is always the hardest part!

© 2017 Michele Titolo