&yet Blog

● posted by Henrik Joreteg

This last year, we’ve learned a lot about building scalable realtime web apps, most of which has come from shipping &bang.

&bang is the app we use to keep our team in sync. It helps us stay on the same page, bug each other less and just get stuff done as a team.

The process of actually trying to get something out the door on a bootstrapped budget helped us focus on the most important problems that needed to be solved to build a dynamic, interactive, real-time app in a scalable way.

A bit of history

I’ve written a couple of posts on backbone.js since discovering it. The first one introduces Backbone.js as a lightweight client-side framework for building clean, stateful client apps. In the second post I introduced Capsule.js, a tool that I built on top of Backbone that adds nested models and collections and also lets you keep a mirror of your client-side state on a node.js server to seamlessly synchronize state between different clients.

That approach was great for quickly prototyping an app. But as I pointed out in that post, that’s a lot of in memory state being stored on the server and simply doesn’t scale very well.

At the end of that post I hinted at what we were aiming to do to ultimately solve that problem. So this post is meant to be a bit of an update on those thoughts.

Our new approach

Redis is totally freakin’ amazing. Period. I can’t say enough good things about it. Salvatore Sanfilippo is a god among men, in my book.

Redis can scale.

Redis can do PubSub.

PubSub just means events. Just like you can listen for click events in JavaScript in a browser, you can listen for events in Redis.

Redis, however, is a generic tool. It’s purposely fairly low-level so as to be broadly applicable.

What makes Redis so interesting, from my perspective, is that you can treat it as a shared memory between processes, languages and platforms. What that means, in a practical sense, is that as long as each app that uses it interacts with it according to a pre-defined set of rules, you can write a whole ecosystem of functionality for an app in whatever language makes the most sense for that particular task.

Enter Thoonk

My co-worker, Nathan Fritz, is the closest thing you can get to being a veteran of realtime technologies.

He’s a member of the XSF council for the XMPP standard and probably wrote his first chat bot before you knew what chat was. His SleekXMPP Python library is iconic in the XMPP community. He has a self-declared unnatural love for XEP-0060, which describes the XMPP PubSub standard.

He took everything he learned from his work on that standard and built Thoonk. (In fact, he actually kept the PubSub spec open as he built the JavaScript and Python implementations of Thoonk.)

What is Thoonk?

Thoonk is an abstraction on Redis that provides higher-level datatypes for a more approachable interface. Essentially, staring at Redis as a newbie is a bit intimidating. Not that it’s hard to interface with, it’s just kind of tricky to figure out how to logically structure and retrieve your data. Thoonk simplifies that into a few datatypes that describe common use cases. Primarily “feeds”, “sorted feeds”, “queues” and “jobs”.

You can think of a feed as an ad-hoc database table. They’re “cheap” to create: you simply declare them in order to create or use them. For example, in &bang, we have all our users in a feed called “users” for looking up user info. But also, each user has a variety of individual feeds. For example, they have a “task” feed and a “shipped” feed. This is where it veers from what people are used to in a relational database model, because each user’s tasks are not a part of a global “tasks” feed. Instead, each user has a distinct feed of tasks because that’s the entity we want to be able to subscribe to.

So rather than simply breaking down a model into types of data, we end up breaking things into groups of items (a.k.a. “feeds”) that we want to be able to track changes to. So, as an example, we may have something like this:

// our main user feed
var userFeed = thoonk.feed('users');

// an individual task feed for a user
var userTaskFeed = thoonk.sortedFeed('team.andyet.members.{{memberID}}.tasks');

Marrying Thoonk and Capsule

Capsule was actually written with Thoonk in mind. In fact, that’s why they were named the way they were: You know these lovely pneumatic tube systems they use to send cash to bank tellers and at Costco? (PPSHHHHHHH—THOONK! And here’s your capsule.)

Anyway, the integration didn’t end up being quite as tight as we had originally thought, but it still works quite well. Loose coupling is better anyway, right?

The core problem I was trying to solve with Capsule was unifying the models that are used to represent the state of the app in the browser and the models you use to describe your data on the server—ideally, not just unifying the data structure, but also letting me share behavior of those objects.

Let me explain.

As I mentioned, we recently shipped &bang. It lets a group of people share their task lists and what they’re actively working on with each other.

It spares you from a lot of “what are you working on?” conversations and increases accountability by making your work quite public to the team.

It’s a realtime, keyboard-driven, web app that is designed to feel like a desktop app. &bang is a node.js application built entirely with the methods described here.

So, in &bang, a team model has attributes as well as a couple of nested backbone collections such as members and chat messages. Each member has attributes and other nested collections, tasks, shipped items, etc.

Initial state push

When a user first logs in we have to send the entire model state for the team(s) they’re on so we can build out the interface (see my previous post for more on that). So, the first thing we do when a user logs in is subscribe them to the relevant Thoonk feeds and perform the initial state transfer to the client.

To do this, we init an empty team model on the client (a backbone/capsule model shared between client/server). Then we recurse through our Thoonk feed structures on the server to export the data from the relevant feeds into a data structure that Capsule can use to import that data. The team model is inflated with the data from the server and we draw the interface.

From there, the application is kept in sync using events from Thoonk that get sent over websockets and applied to the client interface. Events like “publish”, “change”, “retract” and “position”.
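On the client side, a tiny dispatcher is enough to fan those events out by name (a sketch; the handler bodies are where the real view updates would happen):

```javascript
// handler names mirror the Thoonk events listed above
var feedHandlers = {
  publish: function (msg) { /* add the new item to the collection */ },
  change: function (msg) { /* update the matching model's attributes */ },
  retract: function (msg) { /* remove the item */ },
  position: function (msg) { /* re-order the sorted feed */ }
};

function handleFeedEvent(msg) {
  var handler = feedHandlers[msg.event];
  if (handler) handler(msg);
}
```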

Once we got the app to the point where this was all working, it was kind of a magical moment, because at this point, any edits that happen in Thoonk will simply get pushed out through the event propagation all the way to the client. Essentially, the interface that a user sees is largely a slave to the server. Except, of course, the portions of state that we let the user manipulate locally.

At this point, user interactions with the app that change data are all handled through RPC calls. Let’s jump back to the server and you’ll see what I mean.

I thought you were still using Capsule on the server?

We do, but differently. Here’s how that’s handled.

In short… it’s a job system.

Sounds intimidating, right? As someone who started in business school, then gradually got into front-end dev, then back-end dev, then a pile of JS, job systems sounded scary. In my mind, they were for “hardcore” programmers like Fritzy or Nate or Lance from our team. Job systems don’t have to be that scary.

At a very high level you can think of a “job” as a function call. The key difference being, you don’t necessarily expect an immediate result. To continue with examples from &bang: a job may be to “ship a task”. So, what do we need to know to complete that action? We need the following:

  • the member id of the user shipping the task
  • the task id being completed (we call this “shipping”, because it’s cooler, and it’s a reminder that finishing is what’s important)

We can derive everything else we need from those key pieces of information.

So, rather than call a function somewhere:

shipTask(memberId, taskId)

We can just describe a job as a simple JSON object:

{
    userId: <user requesting the job>,
    taskId: <id of task to 'ship'>,
    memberId: <id of team member>
}

Then we can add that to our “shipTask” job queue like so:


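A hedged sketch of that enqueue (the queue object here stands in for a Thoonk job queue, and the “put” method name is an assumption about its API, not a documented call):

```javascript
// jobQueue stands in for something like thoonk.job('shipTask')
function queueShipTask(jobQueue, sessionUserId, taskId, memberId) {
  var job = {
    userId: sessionUserId, // taken from the session, never from the client
    taskId: taskId,
    memberId: memberId
  };
  jobQueue.put(JSON.stringify(job));
  return job;
}
```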
The cool part about the event propagation I talked about above is we really don’t care so much when that job gets done. Obviously fast is key, but what I mean is, we don’t have to sit around and wait for a synchronous result because the event propagation we’ve set up will handle all the application state changes.

So, now we can write a worker that listens for jobs from that job queue. In that worker we’ll perform all the necessary related logic. Specifically stuff like:

  • Validating that the job is properly formatted (contains required fields of the right type)
  • Validating that the user is the owner of that task and is therefore allowed to “ship” it.
  • Modifying Thoonk feeds accordingly.

Encapsulating and reusing model logic

You’ll notice that part of that list requires some logic. Specifically, checking to see if the user requesting the action is allowed to perform it. We could certainly write that logic right here, in this worker. But, in the client we’re also going to want to know if a user is allowed to ship a given task, right? Why write that logic twice?

Instead we write that logic as a method of a Capsule model that describes a task. Then, we can use the same method to determine whether to show the UI that lets the user perform the action in the browser as we use on the back end to actually perform the validation. We do that by re-inflating a Capsule model for that task in our worker code, calling the canEdit() method on it, and passing it the user id requesting the action. The only difference is that on the server side we don’t trust the user to tell us who they are. On the server we roll the user id we have for that session into the job when it’s created rather than trust the client.
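As a toy illustration of that shared ownership check (the real canEdit lives on the Capsule task model; the body here is an assumption):

```javascript
function Task(attrs) {
  this.attrs = attrs;
}

// the same method runs in the browser (to decide whether to show the UI)
// and in the worker (to actually enforce ownership)
Task.prototype.canEdit = function (userId) {
  return this.attrs.memberId === userId;
};
```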


One other, hugely important thing that we get by using Capsule models on the server is added security. There are some model attributes that are read-only as far as the client is concerned. What if we get a job that tries to edit a user’s ID? In a backbone model, if I call:

backboneModelInstance.set({id: 'newId'});

That will change the ID of the object. Clearly that’s not good in a server environment when you’re trusting that to be a unique ID. There are also lots of other fields you may want on the client but you don’t want to let users edit.

Again, we can encapsulate that logic in our Capsule models. Capsule models have a safeSet method that assumes all inputs are evil. Unless an attribute is whitelisted as clientEditable it won’t set it. So when we go to set attributes within the worker on the server we use safeSet when dealing with untrusted input.
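A minimal sketch of that whitelisting idea (not Capsule’s actual implementation):

```javascript
// assumes the model declares which attributes clients may touch,
// e.g. clientEditable: ['taskText', 'done']
function safeSet(model, attrs) {
  var clean = {};
  Object.keys(attrs).forEach(function (key) {
    if (model.clientEditable.indexOf(key) !== -1) {
      clean[key] = attrs[key];
    }
    // anything else (like "id") is silently dropped
  });
  model.set(clean);
  return clean;
}
```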

The other important piece of securing a system that lets users indirectly add jobs to your job system is ensuring that the jobs you receive validate against your schema. I’m using a node implementation of JSON Schema for this. I’ve heard some complaints about that proposed standard, but it works really well for the fairly simple use case I need it for.
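Stripped to its essence, the required-field check amounts to something like this toy validator (the real app uses a full JSON Schema implementation; this just shows the shape of the check):

```javascript
function validateJob(job, schema) {
  var errors = [];
  Object.keys(schema.properties).forEach(function (key) {
    var rules = schema.properties[key];
    if (rules.required && !(key in job)) {
      errors.push(key + ' is required');
    } else if (key in job && typeof job[key] !== rules.type) {
      errors.push(key + ' must be a ' + rules.type);
    }
  });
  return errors;
}
```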

A typical worker may look something like this:

workers.editTeam = function () {
  var schema = {
    type: "object",
    properties: {
      user: {
        type: 'string',
        required: true
      },
      id: {
        type: 'string',
        required: true
      },
      data: {
        type: 'object',
        required: true
      }
    }
  };

  editTeamJob.get(0, function (err, json, jobId, timeout) {
    var feed = thoonk.feed('teams'),
      result, team, newAttributes, inflated;

    async.waterfall([
      function (cb) {
        // validate our job
        validateSchema(json, schema, cb);
      },
      function (clean, cb) {
        // store some variables from our cleaned job
        result = clean;
        team = result.id;
        newAttributes = result.data;
        verifyOwnerTeam(team, cb);
      },
      function (teamData, cb) {
        // inflate our capsule model
        inflated = new Team(teamData);
        // this input came from a client, so use safeSet
        // (if it came from the server we'd use a normal 'set')
        inflated.safeSet(newAttributes);
        cb(null);
      },
      function (cb) {
        // do the edit, all we're doing is storing JSON strings w/ ids
        feed.edit(JSON.stringify(inflated.toJSON()), result.id, cb);
      }
    ], function (err) {
      var code;
      if (!err) {
        code = 200;
        logger.info('edited team', {team: team, attrs: newAttributes});
      } else if (err === 'notAllowed') {
        code = 403;
        logger.warn('not allowed to edit');
      } else {
        code = 500;
        logger.error('error editing team', {err: err, job: json});
      }
      // finish the job
      editTeamJob.finish(jobId, null, JSON.stringify({code: code}));
      // keep the loop crankin'
      workers.editTeam();
    });
  });
};

Sounds like a lot of work

Granted, writing a worker for each type of action a user can perform in the app, with all the related job and validation code, is not an insignificant amount of work. However, it worked rather well for us to use the state-syncing stuff in Capsule while we were still in the prototyping stage, and then to convert the server-side code to a Thoonk-based solution when we were ready to roll out to production.

So why does any of this matter?

It works.

What this ultimately means is that we now push the system until Redis is our bottleneck. We can spin up as many workers as we want to crank through jobs and we can write those workers in any language we want. We can put our node app behind HAProxy or Bouncy and spin up a bunch of ’em. Do we have all of this solved and done? No. But the core ideas and scaling paths seem fairly clear and doable.

[update: Just to add a bit more detail here, from our tests we feel confident that we can scale to tens of thousands of users on a single server and we believe we can scale horizontally after doing some intelligent sharding with multiple servers.]

Is this the “Rails of Realtime”?


Personally, I’m not convinced there ever will be one. Even Owen Barnes (who originally set out to build just that with SocketStream) said at KRTConf: “There will not be a black box type framework for realtime.” His new approach is to build a set of interconnected modules for structuring out a realtime app based on the unique needs of its specific goals.

The kinds of web apps being built these days don’t fit into a neat little box. We’re talking to multiple web services, multiple databases, and pushing state to the client.

Mikeal Rogers gave a great talk at KRTConf about that exact problem. It’s going to be really, really hard to create a framework that solves all those problems in the same way that Rails or Django can solve 90% of the common problems with routes and MVC.

Can you support a BAJILLION users?

No, but a single Redis db can handle a fairly ridiculous number of users. At the point that actually becomes our bottleneck, (1) we can split out different feeds to different databases, and (2) we’d have a user base that would make the app wildly profitable—certainly more than enough to spend some more time on engineering. What’s more, Salvatore and the Redis team are putting a lot of work into clustering and scaling solutions for Redis that very well may outpace our need for sharding, etc.

Have you thought about X, Y, Z?

Maybe not! The point of this post is simply to share what we’ve learned so far.

You’ll notice this isn’t a “use our new framework” post. We would still need to do a lot of work to cleanly extract and document a complete realtime app solution from what we’ve done in &bang—particularly if we were trying to provide a tool that can be used to quickly spin up an app. If your goal is to find a tool like that, definitely check out what Owen and team are doing with SocketStream and what Nate and Brian are doing with Derby.

We love the web, and love the kinds of apps that can be built with modern web technologies. It’s our hope that by sharing what we’ve done, we can push things forward. If you find this post helpful, we’d love your feedback.

Technology is just a tool; ultimately, it’s all about building cool stuff. Check out &bang and follow me @HenrikJoreteg, Adam @AdamBrault and the whole @andyet team on the twitterwebz.

If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg

Quick intro, the hype and awesomeness that is Node

Node.js is pretty freakin’ awesome, yes. But it’s also been hyped up more than an Apple gadget. As pointed out by Eric Florenzano on his blog, a LOT of the original excitement of server-side JS was due to the ability to share code between client and server. However, instead, the first thing everybody did is start porting all the existing tools and frameworks to node. Faster and better, perhaps, but it’s still largely the same ol’ thing. Where’s the paradigm shift? Where’s the code reuse?!

Basically, Node.js runs V8, the same JS engine as Chrome, and as such, it has fairly decent ECMAScript 5 support. Some of the stuff in “5” is super handy, such as all the iterator stuff: forEach, map, etc. But – and it’s a big “but” indeed – if you use those methods you’re no longer able to use ANY of your code in older browsers (read: “IE”).

So, that is what makes underscore.js so magical. It gives you simple JS fallbacks for non-supported ECMAScript 5 stuff. This means that if you use it in node (or a modern browser), it will still use the faster native implementations where available, but your code will keep working in browsers that lack them. Code REUSE FTW!
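The fallback pattern underscore uses can be shown in miniature (a sketch, not underscore’s actual source): prefer the native ES5 method when it exists, otherwise shim it in plain JS:

```javascript
var nativeForEach = Array.prototype.forEach;

function each(list, iterator) {
  if (nativeForEach && list.forEach === nativeForEach) {
    // modern browser or node: use the fast native implementation
    list.forEach(iterator);
  } else {
    // old IE: plain loop fallback
    for (var i = 0; i < list.length; i++) {
      iterator(list[i], i, list);
    }
  }
}
```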

So what kind of stuff would we want to share between client and server?

Enter Backbone.js

A few months ago I got really into Backbone and wrote this introductory post about it that made the front page of HN. Apparently, a LOT of other people were interested as well, and rightfully so; it’s awesome. Luckily for us, Jeremy Ashkenas (primary author of Backbone, Underscore, CoffeeScript, and all-around JS magician) is also a bit of a node guru and had the foresight to make both backbone and underscore usable in node, as modules. So once you’ve installed ’em with npm you can just do this to use them on the server:

var _ = require('underscore')._,
    backbone = require('backbone');

So what?! How is this useful?

State! What do I mean? As I mentioned in my introductory backbone.js post, if you’ve structured your app “correctly” (granted, this is my subjective opinion of “correct”), ALL your application state lives in the backbone models. In my code I go the extra step and store all the models for my app in a sort of “root” app model. I use this to store application settings as attributes and then any other models or collections that I’m using in my app will be properties of this model. For example:

var AppModel = Backbone.Model.extend({
  defaults: {
    attribution: "built by &yet",
    tooSexy: true
  },

  initialize: function () {
    // some backbone collections
    this.members = new MembersCollection();
    this.coders = new CodersCollection();

    // another child backbone model
    this.user = new User();
  }
});

Unifying Application State

By taking this approach and storing all the application state in a single Backbone model, it’s possible to write a serializer/deserializer to extract and re-inflate your entire application state. So that’s what I did. I created two recursive functions that can export and import all the attributes of a nested backbone structure and I put them into a base class that looks something like this:

var BaseModel = Backbone.Model.extend({
  // builds and returns a simple object ready to be JSON stringified
  xport: function (opt) {
    var result = {},
      settings = _({
        recurse: true
      }).extend(opt || {});

    function process(targetObj, source) {
      targetObj.id = source.id || null;
      targetObj.cid = source.cid || null;
      targetObj.attrs = source.toJSON();
      _.each(source, function (value, key) {
        // since models store a reference to their collection
        // we need to make sure we don't create a circular reference
        if (settings.recurse) {
          if (key !== 'collection' && source[key] instanceof Backbone.Collection) {
            targetObj.collections = targetObj.collections || {};
            targetObj.collections[key] = {};
            targetObj.collections[key].models = [];
            targetObj.collections[key].id = source[key].id || null;
            _.each(source[key].models, function (value, index) {
              process(targetObj.collections[key].models[index] = {}, value);
            });
          } else if (source[key] instanceof Backbone.Model) {
            targetObj.models = targetObj.models || {};
            process(targetObj.models[key] = {}, value);
          }
        }
      });
    }

    process(result, this);

    return result;
  },

  // rebuild the nested objects/collections from data created by the xport method
  mport: function (data, silent) {
    function process(targetObj, data) {
      targetObj.id = data.id || null;
      targetObj.set(data.attrs, {silent: silent});
      // loop through each collection
      if (data.collections) {
        _.each(data.collections, function (collection, name) {
          targetObj[name].id = collection.id;
          Skeleton.models[collection.id] = targetObj[name];
          _.each(collection.models, function (modelData, index) {
            var newObj = targetObj[name]._add({}, {silent: silent});
            process(newObj, modelData);
          });
        });
      }

      if (data.models) {
        _.each(data.models, function (modelData, name) {
          process(targetObj[name], modelData);
        });
      }
    }

    process(this, data);

    return this;
  }
});
So, now we can quickly and easily turn an entire application’s state into a simple JS object that can be JSON stringified and restored or persisted in a database, in localStorage, or sent across the wire. Also, if we have these serialization functions in our base model we can selectively serialize any portion of the nested application structure.
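For reference, the kind of plain structure xport produces (shapes taken from the code above; the sample values are made up) survives a JSON round trip intact:

```javascript
// the shape xport builds: attrs plus nested collections/models, all plain data
var exported = {
  id: 'team1',
  cid: 'c1',
  attrs: {name: 'andyet'},
  collections: {
    members: {
      id: 'members1',
      models: [
        {id: 'u1', cid: 'c2', attrs: {name: 'henrik'}}
      ]
    }
  }
};

// safe to persist, put in localStorage, or send across the wire
var restored = JSON.parse(JSON.stringify(exported));
```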

Backbone models are a great way to store and observe state.

So, here’s the kicker: USE IT ON THE SERVER!

How to build models that work on the server and the client

The trick here is to include some logic that lets the file figure out whether it’s being used as a CommonJS module or if it’s just in a script tag.

There are a few different ways of doing this. For example you can do something like this in your models file:

(function () {
  var server = false,
    MyModels;

  if (typeof exports !== 'undefined') {
    MyModels = exports;
    server = true;
  } else {
    MyModels = this.MyModels = {};
  }

  // model definitions go here, attached to MyModels...

})();

Just be aware that any external dependencies will be available if you’re in the browser and you’ve got other <script> tags defining those globals, but anything you need on the server will have to be explicitly imported.

Also, notice that I’m setting a server variable. This is because there are certain things I may want to do in my code on the server that won’t happen in the client. Doing this will make it easy to check where I am (we try to keep this to a minimum though, code-reuse is the goal).

State syncing

So, if we go back to thinking about the client/server relationship, we can now keep an inflated Backbone model living in memory on the server and if the server gets a page request from the browser we can export the state from the server and use that to rebuild the page to match the current state on the server. Also, if we set up event listeners properly on our models we can actually listen for changes and send changes back and forth between client/server to keep the two in sync.

Taking this puppy realtime

None of this is particularly interesting unless we have the ability to send data both ways – from client to server and more importantly from server to client. We build real-time web apps at &yet–that’s what we do. Historically, that’s all been XMPP based. XMPP is awesome, but XMPP speaks XML. While JavaScript can do XML, it’s certainly simpler to not have to do that translation of XMPP stanzas into something JS can deal with. These days, we’ve been doing more and more with Socket.io.

The magical Socket.io

Socket.io is to Websockets what jQuery is to the DOM. Basically, it handles browser shortcomings for you and gives you a simple unified API. In short, socket.io is a seamless transport mechanism from node.js to the browser. It will use websockets if supported and fall back to one of 5 transport mechanisms. Ultimately, it goes all the way back to IE 5.5! Which is just freakin’ ridiculous, but at the same time, awesome.

Once you figure out how to set up socket.io, it’s fairly straightforward to send messages back and forth.

So on the server-side we do something like this on the new connection:

io.on('connection', function (client) {
  var re = /(?:connect.sid\=)[\.\w\%]+/;
  var cookieId = re.exec(client.request.headers.cookie)[0].split('=')[1];
  var clientModel = clients.get(cookieId);

  if (!clientModel) {
    clientModel = new models.ClientModel({id: cookieId});
    clients.add(clientModel);
  }

  // store some useful info
  clientModel.client = client;

  client.send(JSON.stringify({
    event: 'initial',
    data: clientModel.xport()
  }));
});

So, on the server when a new client connection is made, we immediately send the full app state:

io.on('connection', function (client) {
  client.send(JSON.stringify({
    event: 'initial',
    data: appModel.xport()
  }));
});

For simplicity, I’ve decided to keep the convention of sending a simple event name and the data just so my client can know what to do with the data.

So, the client then has something like this in its message handler.

socket.on('message', function (data) {
  data = JSON.parse(data);
  switch (data.event) {
    case 'initial':
      // inflate the full state we were sent
      appModel.mport(data.data);
      break;
    case 'change':
      // look up the model by id and set the new attributes on it
      break;
  }
});

So, in one fell swoop, we’ve completely synced state from the server to the client. In order to handle multiple connections and shared state, you’ll obviously have to add some additional complexity in your server logic so you send the right state to the right user. You can also wait for the client to send some other identifying information, or whatnot. For the purposes of this post I’m trying to keep it simple (it’s long already).

Syncing changes

JS is built to be event driven and frankly, that’s the magic of Backbone models and views. There may be multiple views that respond to events, but ultimately, all your state information lives in one place. This is really important to understand. If you don’t know what I mean, go back and read my previous post.

So, now what if something changes on the server? Well, one option would be to just send the full state to the clients we want to sync each time. In some cases that may not be so bad – especially if the app is fairly light, the raw state data is pretty small as well. But still, that seems like overkill to me. So what I’ve been doing is just sending the model that changed. So I added the following publishChange method to my base model:

publishChange: function (model, collection) {
  var event = {};

  if (model instanceof Backbone.Model) {
    event = {
      event: 'change',
      model: {
        data: model.xport({recurse: false}),
        id: model.id
      }
    };
  } else {
    console.log('event was not a model', model);
  }

  this.trigger('publish', event);
},

Then I added something like this to each model’s init method:

initialize: function () {
  this.bind('change', _(this.publishChange).bind(this));
}

So now, we have an event type, in this case change, and then we’ve got the model information. Now you may be wondering how we’d know which model to update on the other end of the connection. The trick is the id. What I’ve done to solve this problem is to always generate a UUID and set it as the id when any model or collection is instantiated on the server. Then, always register models and collections in a global lookup hash by their id. That way we can look up any model or collection in the hash and just set all our data on it. Now my client controller can listen for publish events and send them across the wire with just an id. Here’s my register function on my base model (warning, it’s a bit hackish):

register: function () {
  var self = this;
  if (server) {
    var id = uuid();
    this.id = id;
    this.set({id: id});
  }

  if (this.id && !Skeleton.models[this.id]) Skeleton.models[this.id] = this;

  this.bind('change:id', function (model) {
    if (!Skeleton.models[this.id]) Skeleton.models[model.id] = self;
  });
},

Then, in each model’s initialize method, I call register:

initialize: function () {
  this.register();
}

So now, my server will generate a UUID and when the model is sent to the client that id will be the same. Now we can always get any model, no matter how far it’s nested by checking the Skeleton.models hash. It’s not hard to deduce that you could take a similar approach for handling add and remove events as long as you’ve got a way to look up the collections on the other end.
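Stripped of Backbone, the registry idea is just a hash keyed by UUID (the uuid function here is a toy stand-in for a real UUID library):

```javascript
var Skeleton = {models: {}};

// stand-in for a real UUID generator, not RFC 4122
function uuid() {
  return 'id-' + Math.random().toString(16).slice(2);
}

function register(model) {
  if (!model.id) model.id = uuid();
  if (!Skeleton.models[model.id]) Skeleton.models[model.id] = model;
  return model.id;
}

// applying a change event from the wire is then a single lookup
function applyChange(event) {
  var model = Skeleton.models[event.model.id];
  if (model) model.attrs = event.model.data.attrs;
}
```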

So how should this be used?

Well, there are three choices that I see.

  1. Send model changes from either the server or the client in the same way. Imagine we’re starting with an identical state on the server and client. If we now modify the model in place on the client, the publish event would be triggered and its change event would be sent to the server. The change would be set to the corresponding model on the server, which would then immediately trigger another change event, this time on the server echoing back the change to the client. At that point the loop would die because the change isn’t actually different from the current state, so no event would be triggered. The downside with this approach is that it’s not as fault-tolerant of flaky connections and it’s a bit on the noisy side since each change is getting sent and then echoed back. The advantage of this approach is that you can simply change the local model like you normally would in backbone and your changes would just be synced. Also, the local view would immediately reflect the change since it’s happening locally.

  2. The other, possibly superior, approach is to treat the server as the authority and broadcast all the changes from the server. Essentially, you would just build the change event in the client rather than actually setting it locally. That way you leave it up to the server to actually make changes and then the real change events would all flow from the server to the client. With this approach, you’d actually set the change events you got from the server on the client-side, your views would use those changes to update, but your controller on the client-side wouldn’t send changes back across the wire.

  3. The last approach is just a hybrid of the other two. Essentially, there’s nothing stopping you from selectively doing both. In theory, you can sync trivial state information, for example simple UI state (whether an item in a list is selected or not), using method #1 and then do more important interactions by sending commands to the server.

In my experiments option 2 seems to work the best. By treating the server as the ultimate authority, you save yourself a lot of headaches. To accommodate this I simply added one more method to my base model class called setServer. It builds a change event and sends it through our socket. So now, in my views on the client, when I’m responding to a user action instead of calling set on the model I simply call setServer and pass it a hash of key/value pairs just like I would for a normal set.

setServer: function (attrs, options) {
  socket.send(JSON.stringify({
    event: 'set',
    id: this.id,
    change: attrs
  }));
}
Why is this whole thing awesome?

It lets you build really awesome stuff! Using this approach we send very small changes over an already established connection, we can very quickly synchronize state from one client to the other or the server can get updates from an external data source, modify the model on the server and those changes would immediately be sent to the connected clients.

Best of all – it’s fast. Now, you can just write your views like you normally would in a Backbone.js app.

Obviously, there are other problems to be solved. For example, it all gets a little bit trickier when dealing with multiple states. Say, for instance, you have a portion of application state that you want to sync globally with all users and a portion that you only want to sync with other instances of the same user, or the same team, etc. Then you either have to do multiple socket channels (which I understand Guillermo is working on), or you have to sync all the state and let your views sort out what to respond to.

Also, there are persistence and scaling questions, some of which we’ve got solutions for and some of which we don’t. I’ll save that for another post. This architecture is clearly not perfect for every application. However, in the use cases where it fits, it’s quite powerful. I’m neck-deep in a couple of projects where I’m exploring the possibilities of this approach and I’ve gotta say, I’m very excited about the results. I’m also working on putting together a bit of a real-time framework built on the ideas in this post. I’m certainly not alone in these pursuits; it’s just so cool to see more and more people innovating and building cool stuff with real-time technologies. I’m thankful for any feedback you’ve got, good or bad.

If you have thoughts or questions, I’m @HenrikJoreteg on twitter. Also, my buddy/co-worker @fritzy and I have started doing a video podcast about this sort of stuff called Keeping It Realtime. And, be sure to follow @andyet and honestly, the whole &yet team for more stuff related to real-time web dev. We’re planning some interesting things that we’ll be announcing shortly. Cheers.

If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg


We’ve been finding ourselves building more and more JS heavy apps here at &yet. Until recently, we’ve pretty much invented a custom app architecture for each one.

Not surprisingly, we’re finding ourselves solving similar problems repeatedly.

On the server side, we use Django to give us an MVC structure to follow. But there’s no obvious structure for your client-side code. There are some larger libraries that give you this, but they usually come with a ton of widgets, etc. I’m talking about solutions like SproutCore, YUI, or Google Closure, and there are toolkits like GWT and Cappuccino that let you compile other code to JS.

But for those of us who want to lovingly craft UIs exactly how we want them in the JavaScript we know and love, yet crave quick, lightweight solutions, those toolkits feel like overkill.

Recently, something called Backbone.js hit my “three tweet threshold,” so I decided to take a look. Turns out it’s a winner in my book, and I’ll explain why and how we used it.

The Problem

There are definitely some challenges that come with building complex, single-page apps, not the least of which is managing an ever-increasing amount of code running in the same page. Also, since JavaScript has no formal classes, there is no self-evident approach for structuring an entire application.

As a result of these problems, new JS devs trying to build these apps typically go through a series of realizations that goes something like this:

  1. Get all excited about jQuery and attempt to use the DOM to store data and application state.
  2. Realizing the first approach gets really tricky once you have more than a few things to keep track of, so instead you attempt to store that state in some type of JS model.
  3. Realizing that binding model changes to the UI can get messy if you call functions directly from your model setters/getters. You know you have to react to model changes in the UI somehow, but you don’t really know where to do those DOM manipulations, and you end up with something that looks more and more like spaghetti code.
  4. Building some type of app structure/framework to solve these problems.
  5. And… finally realizing that someone’s already solved many of these problems for you (and open sourced their code).

The Goals

So, how do we want our app to behave? Here are the ideals as I see them.

  1. All state/models for your app should live in one place.
  2. Any change in that model should be automatically reflected in the UI, whether that’s in one place or many.
  3. Clean/maintainable code structure.
  4. Writing as little “glue code” as possible.
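To make goal #2 concrete, here’s roughly the glue you’d otherwise write by hand. This is a made-up Observable, not Backbone code: a model that notifies any number of listeners whenever it changes, so the UI update in one place or many follows automatically.

```javascript
// A hand-rolled observable model: the pattern Backbone formalizes.
function Observable(attrs) {
    this.attrs = attrs || {};
    this.listeners = [];
}
// register a callback to run on every change
Observable.prototype.bind = function (fn) {
    this.listeners.push(fn);
};
// set attributes and notify every listener of what changed
Observable.prototype.set = function (changes) {
    for (var k in changes) this.attrs[k] = changes[k];
    this.listeners.forEach(function (fn) { fn(changes); });
};

var movie = new Observable({title: "The Matrix"});
var log = [];

// two "views" reacting to the same model change (goal #2: one place or many)
movie.bind(function (c) { log.push('list updated: ' + c.title); });
movie.bind(function (c) { log.push('detail updated: ' + c.title); });

movie.set({title: "The Matrix Reloaded"});
console.log(log.length); // 2
```

Once every change flows through set(), the model really is the single home for state (goal #1), and the views never need to poll or be told to refresh.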

Enter Backbone.js

Backbone doesn’t attempt to give you widgets or application objects or even really give you views. It basically gives you a few key objects to help you structure your code. Namely, Models, Collections and Views. Ultimately what it provides is some basic tools that you can use to build a clean MVC app in the client. We get some useful base objects for those and an event architecture for handling changes. Let’s take a look at each of those.

The Model object

The model object just gives you a way to set and retrieve arbitrary attributes. So, all you really need to create a fully functioning and useful model is the following:

var Movie = Backbone.Model.extend({});

Now you can instantiate, and set and get attributes all you want:

matrix = new Movie();

matrix.set({
    title: "The Matrix",
    format: "dvd"
});


You can also pass it attributes directly when you instantiate like so:

matrix = new Movie({
    title: "The Matrix",
    format: "dvd"
});

If you need to enforce certain required attributes when you build it, you can do so by providing an initialize() function to perform some initial checks. By convention, the initialize function gets called with the arguments you pass the constructor.

var Movie = Backbone.Model.extend({
    initialize: function (spec) {
        if (!spec || !spec.title || !spec.format) {
            throw "InvalidConstructArgs";
        }

        // we may also want to store something else as an attribute
        // for example a unique ID we can use in the HTML to identify this
        // item's element. We can use the model's 'cid' or 'client id' for this.
        this.set({
            htmlId: 'movie_' + this.cid
        });
    }
});

You can also define a validate() method. This will get called any time you set attributes, and you can use it to validate your attributes (surprise!). If the validate() method returns something, the set will be rejected.

var Movie = Backbone.Model.extend({
    validate: function (attrs) {
        if (attrs.title) {
            if (!_.isString(attrs.title) || attrs.title.length === 0) {
                return "Title must be a string with a length";
            }
        }
    }
});
Ok, so there are quite a few more goodies you get for free from the models. But I’m trying to give an overview, not replace the documentation (which is quite good). Let’s move on.


The Collection object

Backbone collections are just ordered collections of models of a certain type. Rather than just storing your models in a JS array, a collection gives you a lot of other nice functionality for free: conveniences for retrieving models, and a way to always keep them sorted according to the rules you define in a comparator() function.

Also, after you tell a collection which type of model it holds, adding a new item to the collection is as simple as:

// define our collection
var MovieLibrary = Backbone.Collection.extend({
    model: Movie,

    initialize: function () {
        // something
    }
});

var library = new MovieLibrary();

// you can add stuff by creating the model first
var dumb_and_dumber = new Movie({
    title: "Dumb and Dumber",
    format: "dvd"
});

library.add(dumb_and_dumber);

// or even by adding the raw attributes
library.add({
    title: "The Big Lebowski",
    format: "VHS"
});

Again, there’s a lot more goodies in Collections, but their main thing is solving a lot of the common problems for maintaining an ordered collection of models.


The View object

Here’s where your DOM manipulation (read: jQuery) takes place. In fact, I use that as a compliance check: the only files that should have jQuery as a dependency are views.

A view is simply a convention for drawing changes to a model to the browser. This is where you directly manipulate the HTML. For the initial rendering (when you first add a new model) you really need some sort of useful client-side templating solution. My personal, biased preference is to use ICanHaz.js and Mustache.js to store and retrieve the templates. (If you’re interested, there’s more on ICanHaz.js on github.) But then, your view just listens and responds to changes in the model.

Here’s a simple view for our Movie items:

var MovieView = Backbone.View.extend({
    initialize: function (args) {
        _.bindAll(this, 'changeTitle');

        this.model.bind('change:title', this.changeTitle);
    },

    events: {
        'click .title': 'handleTitleClick'
    },

    render: function () {
        // "ich" is ICanHaz.js magic
        this.el = ich.movie(this.model.toJSON());

        return this;
    },

    changeTitle: function () {
        // update the title text in place when the model changes
        this.$('.title').text(this.model.get('title'));
    },

    handleTitleClick: function () {
        alert('you clicked the title: ' + this.model.get('title'));
    }
});
So this view handles two kinds of events. First, the events attribute links user events to handlers; in this case, handling the click for anything with a class of title in the template. Second, the model binding makes sure that any changes to the model will automatically update the HTML, and therein lies a lot of the power of backbone.

Putting it all together

So far we’ve talked about the various pieces. Now let’s talk about an approach for assembling an entire app.

The global controller object

Although you may be able to get away with just having your main app controller live inside the AppView object, I didn’t like storing my model objects in the view. So I created a global controller object to store everything. For this I create a simple singleton object named whatever my app is named. So, to continue our example, it might look something like this:

var MovieAppController = {
    init: function (spec) {
        // default config
        this.config = {
            connect: true
        };

        // extend our default config with passed in object attributes
        _.extend(this.config, spec);

        this.model = new MovieAppModel({
            nick: this.config.nick,
            account: this.config.account,
            jid: this.config.jid,
            boshUrl: this.config.boshUrl
        });
        this.view = new MovieAppView({model: this.model});

        // standalone modules that respond to document events
        this.sm = new SoundMachine();

        return this;
    },

    // any other functions here should be event handlers that respond to
    // document level events. In my case I was using this to respond to incoming XMPP
    // events. So the logic for knowing what those meant and creating or updating our
    // models and collections lived here.
    handlePubSubUpdate: function () {}
};

Here you can see that we’re storing our application model (which holds all our other models and collections) as well as our application view.

Our app model, in this example, would hold any collections we may have, as well as store any attributes that our application view may want to respond to:

var MovieAppModel = Backbone.Model.extend({
    initialize: function () {
        // init and store our MovieCollection in our app object
        this.movies = new MovieCollection();
    }
});

Our application view would look something like this:

var MovieAppView = Backbone.View.extend({
    initialize: function () {
        // bind handler context so 'this' is the view when the collection calls them
        _.bindAll(this, 'addMovie', 'removeMovie');

        // this.model refers to the model we pass to the view when we
        // first init our view. So here we listen for changes to the movie collection.
        this.model.movies.bind('add', this.addMovie);
        this.model.movies.bind('remove', this.removeMovie);
    },

    events: {
        // any user events (clicks etc) we want to respond to
    },

    // grab and populate our main template
    render: function () {
        // once again this is using ICanHaz.js, but you can use whatever
        this.el = ich.app(this.model.toJSON());

        // store a reference to our movie list
        this.movieList = this.$('#movieList');

        return this;
    },

    addMovie: function (movie) {
        var view = new MovieView({model: movie});

        // here we use our stored reference to the movie list element and
        // append our rendered movie view.
        this.movieList.append(view.render().el);
    },

    removeMovie: function (movie) {
        // here we can use the html ID we stored to easily find
        // and remove the correct element/elements from the view if the
        // collection tells us it's been removed.
        this.$('#' + movie.get('htmlId')).remove();
    }
});
Ok so now for a snapshot of the entire app. I’ve included all my dependencies, each in their own file (read notes on this below). We also include the ICanHaz.js templates. Then on $(document).ready() I would simply call the init function and pass in whatever variables the server-side code may have written to my template. Then we draw our app view by calling its render() method and appending that to our <body> element like so:

<!DOCTYPE html>
<html>
    <head>
        <title>Movies App</title>

        <!-- libs -->
        <script src="jquery.js"></script>
        <script src="underscore.js"></script>
        <script src="backbone.js"></script>

        <!-- client templating -->
        <script src="mustache.js"></script>
        <script src="ICanHaz.js"></script>

        <!-- your app -->
        <script src="Movie.js"></script>
        <script src="MovieCollection.js"></script>
        <script src="MovieView.js"></script>
        <script src="MovieAppModel.js"></script>
        <script src="MovieAppView.js"></script>
        <script src="MovieAppController.js"></script>

        <!-- ICanHaz templates -->
        <script id="app" type="text/html">
            <h1>Movie App</h1>
            <ul id="movieList"></ul>
        </script>

        <script id="movie" type="text/html">
            <li id="{{ htmlId }}"><span class="title">{{ title }}</span> <span>{{ format }}</span></li>
        </script>

        <script>
            $(document).ready(function () {
                // init our app
                window.app = MovieAppController.init({
                    account: '{{ user.get_profile.account.slug }}',
                    // etc, etc
                });

                // draw our main view
                $('body').append(app.view.render().el);
            });
        </script>
    </head>
    <body>
    </body>
</html>

All said and done, now if we add or remove any movies from our collection, or change their titles in the model, those changes will just be reflected in the HTML like magic, with all the proper behaviors you’ve defined for them in your MovieView.

General Tips

  1. Store all your objects in their own files for development.
  2. Compress and minify all your JS into one file for production.
  3. Use JSLint.
  4. Extend underscore.js for any of your global utility type functions instead of modifying native objects. More on that in this gist.
  5. jQuery should only be used in your views. That way you can potentially unit test your model and collections on the server side.
  6. Use generic jQuery events to signal other views in your app. That way you don’t tightly couple them.
  7. Keep your models as simple as possible.
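To illustrate tip #6 without a browser, here’s a tiny event bus standing in for jQuery’s $(document).trigger() and .bind(). The names and shape here are mine, purely for illustration: the listening view never needs a reference to the view that fires the event.

```javascript
// A minimal pub/sub bus playing the role of $(document) as a signal hub.
var bus = {
    handlers: {},
    bind: function (name, fn) {
        (this.handlers[name] = this.handlers[name] || []).push(fn);
    },
    trigger: function (name, data) {
        (this.handlers[name] || []).forEach(function (fn) { fn(data); });
    }
};

var received = [];

// some other view listens without knowing who fires the event
bus.bind('movie:selected', function (movie) {
    received.push(movie.title);
});

// the MovieView just announces what happened
bus.trigger('movie:selected', {title: "The Big Lebowski"});
console.log(received); // ['The Big Lebowski']
```

Because both sides only agree on the event name and payload, you can add, remove, or rewrite either view without touching the other, which is exactly the decoupling the tip is after.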

I know I’ve covered a lot of stuff here, and some of it duplicates what’s available in the backbone.js documentation, but when I started working on building an app with it, I would have wanted a complete, yet somewhat high-level, walkthrough like this. So I figured I’d write one. Big props to DocumentCloud and Jeremy Ashkenas for creating and sharing backbone with us.

If you have any thoughts, comments or reactions I’m @HenrikJoreteg on twitter. Thanks and good luck.

If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.