Blog

● posted by Nathan LaFreniere

Last week, Eran Hammer came to the &yet office to introduce Hapi 2.0.

Hapi is a very powerful and highly modular web framework created by Eran and his team at Walmart Labs. It currently powers the mobile walmart.com site, as well as some portions of the desktop site. With that kind of traffic, you could definitely say Hapi is battle-tested.

Hapi’s quickly becoming a popular framework among Node developers. Since mid-2013, &yet has been using Hapi for all new projects and we’ve begun porting several old projects to use it, too.

Before he started his presentation, Eran casually mentioned that he planned to at least touch on every feature in Hapi, and boy did he succeed.

From creating your server and adding routes and their handlers, to writing and utilizing plugins, and even configuring some options to help your Ops team keep your application running smoothly, everything is covered. As he talks about features, Eran also points out each breaking change along the way to facilitate updating your applications from Hapi 1.

If you’re currently using Hapi, are considering using it in the future, or are even a little bit curious about it, I highly recommend watching.


&yet presents Eran Hammer on Hapi 2.0 from &yet on Vimeo.

● posted by Nathan LaFreniere

Protocol buffer encoding is hard.

I really wanted to use them for talking to Riak, though, seeing as there’s a pretty significant speed increase when you don’t have the overhead of HTTP.

Unfortunately, no one had written a node.js library for it. A couple of C bindings existed, but when I tried to use them, they either didn’t even compile or I couldn’t get them to work. That’s when I had one of my all-too-common breakdowns, and decided to write my own. After all, anything for the sake of increased performance, right?

Using Google’s specifications, I got started. In order to use protocol buffer encoding in any language, you have to start by writing a definition file to describe what messages exist, and what they contain. That definition is used for both encoding and decoding packets.

Obviously parsing these definitions is important, so that’s where I started. The format of these files is reasonably consistent, so figuring out how to translate them into JavaScript objects wasn’t too difficult. A couple of hours’ worth of work, and one big ugly loop later, I was feeling pretty good about myself.

Once I had the definition parsed, I could start on writing actual data. This took a bit more effort; the protocol defines several different data types, and each of them is stored in its own unique way. To add to the confusion, you can also specify both repeated and optional elements.

Since I was only writing this library to use with Riak, I decided to only use the data types that Riak uses in their definitions. Luckily for me, there are only two: varints and bytes. But what the heck is a varint? Back to the documentation I went. I learned that the varint is a fancy way to serialize numbers. According to Google’s documentation:

“Each byte in a varint, except the last byte, has the most significant bit (msb) set – this indicates that there are further bytes to come. The lower 7 bits of each byte are used to store the two’s complement representation of the number in groups of 7 bits, least significant group first.”

Wow, that’s not confusing at all. It took some time for my brain to wrap around this one. Eventually, it came to me. Split your number up into 7 bit chunks, reverse the order, then set the first bit of each group to 1, except for the very last one. If the number you serialize is 127 or less (that fits in 7 bits), this is very easy since the result is just the number itself.

Need to serialize 10? No problem, it’s 10. That’s the answer. 10. But what if you need to encode something much larger, say 500? This is where things get tricky. First things first, let’s represent this in bits so it’s easier to visualize:

111110100

Now we know we can only use 7 bits out of each byte since that first one is reserved, so let’s break that into two almost bytes:

0000011 1110100

Now, you’ll remember the documentation says “least significant group first.” Well, that just means things are backwards, so let’s flip those around:

1110100 0000011

Okay, but we also have to tell the varint that we’re using multiple bytes, so that means we have to set the most significant bit on the first byte to a 1. Tack a 1 on to the beginning of the first byte, and a 0 on the beginning of the second, taking them from almost bytes to actual bytes, and we’ve got our varint representation of the number 500.

11110100 00000011

Wee, bits!

Deserializing is the reverse: the first byte that doesn’t have the most significant bit set is the last byte of your number.

Drop the first bit of everything, and reverse all the bytes. Tada, there’s your number.

If you want to see what the code that actually does this in protobuf.js looks like, check out my butils project at https://github.com/nlf/node-butils.
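
If you just want the gist, here’s a rough sketch of the varint logic in plain JavaScript. This is not the butils code, just the worked example above translated into a couple of functions:

function writeVarint(num) {
  var bytes = [];
  while (num >= 0x80) {
    bytes.push((num & 0x7f) | 0x80); // lower 7 bits, with the msb set: more bytes follow
    num = Math.floor(num / 128);     // move on to the next 7 bit group
  }
  bytes.push(num);                   // final byte, msb left clear
  return new Buffer(bytes);
}

function readVarint(buf) {
  var result = 0, shift = 0, pos = 0, byte;
  do {
    byte = buf[pos++];
    result += (byte & 0x7f) * Math.pow(2, shift); // least significant group first
    shift += 7;
  } while (byte & 0x80);
  return result;
}

writeVarint(500);                     // <Buffer f4 03>, i.e. 11110100 00000011
readVarint(new Buffer([0xf4, 0x03])); // 500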

Anyway, now that I’ve probably lost 90% of everyone that was reading this with confusing math stuffs, I’ll also explain how bytes or strings work. They’re a much simpler critter – first we varint serialize the length of the string, then we append the bytes. That’s it.
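
In code (same caveat: just a sketch, reusing the writeVarint above):

function writeString(str) {
  var data = new Buffer(str, 'utf8');
  return Buffer.concat([writeVarint(data.length), data]); // length prefix, then the raw bytes
}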

Now, remember how I said that protocol buffer applications have a file that describes the available messages?

Well, we have to store that information in the data stream too, so we know what data we actually have. How do you do that?

Easy peasy.

Each message is made up of several fields, each field has a number assigned to it as well as a type. An example from Google’s documentation:

message Test1 {
  required int32 a = 1;
  required string b = 2;
}

Here, there are two fields, an int32 (that’s a varint) named “a” at number 1, and a string named “b” at number 2. To build the header for each piece of data, you store the type of the data in the last three bits, and the field number in the other 5. Field number 1 is a varint, which is numerically a type of 0. So the header for that field would look like:

00001000

Field 2, however, is a string which is numerically a type of 2, so its header would look like this:

00010010

Append the actual data to the header, and you’ve got a protocol buffer encoded message. Good for you!
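
Put another way (again a sketch, not the real protobuf.js code), the header is just a varint of the field number shifted left three bits, ORed with the wire type:

function fieldHeader(fieldNumber, wireType) {
  // field number in the upper bits, wire type (0 = varint, 2 = length-delimited) in the lowest three
  return writeVarint((fieldNumber << 3) | wireType);
}

fieldHeader(1, 0); // <Buffer 08>, i.e. 00001000: field "a", a varint
fieldHeader(2, 2); // <Buffer 12>, i.e. 00010010: field "b", a string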

For repeated fields, you simply use the same header on each item. For optional fields that aren’t present, you just leave them out of the data stream entirely. Now we’ve got all of the protocol buffer implementation that we need.

But what about Riak? I guess that’ll be a separate library. Let’s call this one protobuf.js, and move on shall we?

Riak prepends data messages with its own header: a 32 bit length (the length of the 8 bit message code plus the length of the protocol buffer message), then the 8 bit message code itself, and then the actual protocol buffer message. There’s a small table of message codes in the Riak documentation, so this was really easy to add.
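
As a sketch (mine, not riakpbc’s actual code), the framing looks like this, where messageCode comes from that table and encoded is the protocol buffer message as a Buffer:

function frameRiakMessage(messageCode, encoded) {
  var header = new Buffer(5);
  header.writeUInt32BE(encoded.length + 1, 0); // length of the message code byte plus the payload
  header.writeUInt8(messageCode, 4);           // the 8 bit internal Riak message code
  return Buffer.concat([header, encoded]);
}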

Now we know how to encode data into protocol buffers, we know how to decode that data, and we know how to add the appropriate headers; all that’s left is to slap an interface on it and we’ve got a client!

The client simply translates the function call to a Riak message code, so client.get() becomes RpbGetReq. We encode the parameters as a protocol buffer message, we prepend the Riak header, and send it off to the server. The reply comes back with the message code RpbGetResp, so we make sure we’ve got the right length of data (check your message length in the header!) then drop the Riak header, and decode the protocol buffer message.
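
Reading a response back off the wire is the same framing in reverse; a sketch:

function parseRiakFrame(buf) {
  var length = buf.readUInt32BE(0);       // message code byte + payload length
  var messageCode = buf.readUInt8(4);     // e.g. the code for RpbGetResp
  var payload = buf.slice(5, 4 + length); // the protocol buffer message itself
  return { code: messageCode, payload: payload };
}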

That’s all there is to it. The Riak client portion of the code is relatively simple. A little connection management magic, some packet queueing to make sure we don’t send another request before we get a response for the last one, and it’s done.

And that, my friends, is how a Riak protocol buffer library is born.

After I “finished”, I went ahead and published what I had to npm, and basically forgot about it. Several months later, I got a pull request from Mathias Meyer (@roidrage on twitter and github) fixing a bug.

Then another.

After the third pull request, he told me it was because he intended to integrate riakpbc as an alternative to the HTTP backend in riak-js.

We collaborated some more over the next several weeks, and as of version 0.10 riak-js supports Riak’s protocol buffer interface through riakpbc.

Now that riakpbc and protobuf.js are being used in an already fairly widespread project, it’s quite a bit more likely that bugs will be found and features will be requested.

I’ve seen a report from a user who was able to gain a 30% performance increase in his application just by using protocol buffer encoding instead of HTTP. Who wouldn’t want that kind of performance for free?

● posted by Nate Vander Wilt

In support of an upcoming &yet product (ssssssh!), I was asked to create a JavaScript wrapper around a REST-based API we’re using from node.js.

If you’ve been there, you might know how it goes: guess which API features the current project actually needs, make up some sort of “native” object representation, implement some bridge code that kinda works, and as a finishing touch, slap a link to the service’s real documentation atop the code you left stubbed out for later.

Or, you find someone else’s wrapper library. They took the time to implement most features, and even wrote their own version of the documentation — but the project they needed it for was cancelled years ago, so their native library still wraps the previous version of the server API, without the new features you need.

FACTS

On one hand, the HTTP REST server offers all the newest features, with official usage documented by the service provider. On the other hand, your JavaScript code should be fluently written, following the native programming language idioms. Can we keep it that way?

It’s a paradox, really:

  • the REST interface is the BEST interface.

  • the best interface is a native interface.

Do you see where this is going? It took me a while, but after wrestling with the design of yet another web service wrapper, I finally saw the whole coin that I’d been flipping. It’s called Fermata, because when I finally put the two sides together I was working from a cheerful Italian caffè and needed a lively but REST-ful word.

In REST you have nouns and verbs — resources and methods — URLs and GET/PUT/POSTs. In JavaScript you have objects and methods — nouns and verbs. So in Fermata, URLs are objects, and methods are, well…methods on those objects:

var rest_server = fermata.api({url:"http://couchdb.example.com:5984"});
var my_document = rest_server.mydata.sample_doc;
my_document.put({title:"Fermata blog post", content:"?"}, function (err, response) {
    if (!err) console.log("Relax, your data is in good hands.");
});

Hey, presto, abracadabra! Pretty simple, eh?

So…is it magic?

Yes, it is magic.

Explain this sleightly hand. Or die.

Easy there, fair Internet reader person! Your dollar was just hiding right there in your ear.

To make the dot syntax work without having to know all the paths available on the server, Fermata uses a feature of an upcoming JavaScript Harmony proposal called catch-all proxies. Proxy-fied objects finally give JavaScript developers a way to intercept all access to an object or function, injecting custom behaviour that would otherwise be impossible.

ECMAScript 5 (the latest JavaScript standard; you can tell it is a Web Standard since it ends in 5) lets us define property descriptors to handle the actual fetching and storing of pre-declared object keys — that is, you could have custom behaviour for specific properties only if their names were known beforehand:

var myObject = {}
Object.defineProperty(myObject, 'someSpecificProperty', {get: function () { return "someSpecificProperty has this value"; }});
myObject.someSpecificProperty === "someSpecificProperty has this value";
myObject.someOtherProperty === undefined;

ECMAScript Harmony (an in-progress proposal for the next version of JavaScript) wants to take this a step further: rather than just controlling a few pre-defined properties of an object, you can control access to any key — the property’s keyname is handed to a completely generic “get” function trap on the object:

var myObject = Proxy.create({get: function (obj, keyName) { return keyName + " has this value"; }}, {});
myObject.someSpecificProperty === "someSpecificProperty has this value";
myObject.someOtherProperty === "someOtherProperty has this value";

So the subpath keys of a Fermata URL don’t actually exist. (I told you it was magic.) Instead, when you assign var obj = url.path, the JavaScript engine calls a proxy “trap” handler function that Fermata provides: “hey, for the key named path on the object url, what should I say the value is?” Fermata says: “I’ll make a new, slightly longer, URL proxy”, and that’s what the JavaScript engine assigns to var obj. If you then access a property on obj, Fermata just returns yet another object created via Proxy. Smoke and mirrors.
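
A stripped-down sketch of the trick, using the same Proxy.create API as the example above (this is not Fermata’s actual code):

function makeUrlProxy(base) {
  return Proxy.create({
    get: function (obj, keyName) {
      // every property access just hands back a new, slightly longer URL proxy
      return makeUrlProxy(base + "/" + keyName);
    }
  }, {});
}

var url = makeUrlProxy("http://couchdb.example.com:5984");
var doc = url.mydata.sample_doc;   // another proxy, now carrying ".../mydata/sample_doc"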

Of course, where there’s smoke and mirrors there must be fire and medicine cabinets. I said ECMAScript Harmony is a proposal that “wants to” standardize Proxy objects in JavaScript — a future version of JavaScript. Fortunately for us impatient types, an intrepid developer named Sam Shull has stocked the node.js medicine cabinet with node-proxy.

While it differs a little from the official Harmony proposal, his V8 Proxy library made Fermata possible. Made Fermata magic.

dramatic pause

But my web browser isn’t magic, yet

Firefox 4’s JavaScript engine implements the new Proxy object feature, but to reliably use Fermata’s magic on the web we’d have to wait for broader support. (Chrome might be next; the race is on, fellas!) In the meantime, I’ve designed Fermata so that anything you can do with dots and brackets you can do with parentheses, and more!

var homebase = fermata.api({url:"", user:"webapp", password:SESSION_ID});
var latestMessages = homebase('api')('user')('messages.json');
latestMessages() === "/api/user/messages.json";    // use empty parens for the URL as a string
latestMessages.get(function (e, messages) { console.log(messages); });

You can also use the parenthesis syntax to pass an array, which is how you prevent the automatic URL component escaping Fermata normally does. Starting from the CouchDB rest_server example above:

recent_docs = rest_server('mydata')(['_design/app/_view/by_date'], {reduce:false, descending:true, limit:10});  // keep a view query handy
recent_docs() === "http://couchdb.example.com:5984/mydata/_design/app/_view/by_date?reduce=false&descending=true&limit=10";
recent_docs.get(...you know the drill...);

Volunteer from the audience

I’d encourage you to give Fermata a spin the next time you only want magic cutting between you and your favorite REST service. It’s hosted on github and installable via npm, under the terms of your friendly local MIT License.

After writing and using various REST wrapper interfaces through the years, I’m excited that I can finally speak both fluent HTTP and native JavaScript at the same time. In the office next door, Henrik is already using it from node.js to access several REST service APIs via the one consistent interface Fermata provides. As web applications move more code to the client, and more services implement careful CORS support, Fermata can provide a high-level AJAX microframework in the browser too.

One next step for Fermata is to add plug-in support for taking care of things like default URLs, setting required headers, converting from XML instead of JSON, and signing OAuth access. The idea is not to wrap the wrapper. More like a musical key signature: do some initial site-specific setup, and the plugin will take care of any API-specific themes while the rest of your JavaScript notation is consistent. Something along the lines of:

var twitter_client = fermata.api({twitter:CLIENT_KEY, user:ID, solemn_developer_promise:"I accept and do acknowledge Tweetie's forever victory, it was a fantastic app while earning its overlord status."});
twitter_client.statuses.user_timeline.get(...);    // same ol' Fermata, but plugin is handling OAuth and format stuff

…maybe? Feedback on the plug-in interface, and anything really, is always appreciated!

● posted by Henrik Joreteg

Quick intro, the hype and awesomeness that is Node

Node.js is pretty freakin’ awesome, yes. But it’s also been hyped up more than an Apple gadget. As pointed out by Eric Florenzano on his blog, a LOT of the original excitement about server-side JS was due to the ability to share code between client and server. However, instead, the first thing everybody did was start porting all the existing tools and frameworks to node. Faster and better, perhaps, but it’s still largely the same ol’ thing. Where’s the paradigm shift? Where’s the code reuse?!

Basically, Node.js runs V8, the same JS engine as Chrome, and as such, it has fairly decent ECMAScript 5 support. Some of the stuff in “5” is super handy, such as all the iterator stuff: forEach, map, etc. But – and it’s a big “but” indeed – if you use those methods you’re no longer able to use ANY of your code in older browsers (read “IE”).

So, that is what makes underscore.js so magical. It gives you simple JS fallbacks for non-supported ECMAScript 5 stuff. Which means that if you use it in node (or a modern browser), it will still use the faster native stuff, if available, but if you use it in a browser that doesn’t support that stuff, your code will still work. Code REUSE FTW!
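
For example, this line runs unchanged in node, a modern browser, or crusty old IE, with _ pulled in via require on the server (as shown below) or a plain script tag in the browser:

// underscore uses the native Array.prototype.map when the engine provides it,
// and falls back to a plain JavaScript loop when it doesn't
_.map([1, 2, 3], function (n) { return n * 2; }); // [2, 4, 6]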

So what kind of stuff would we want to share between client and server?

Enter Backbone.js

A few months ago I got really into Backbone and wrote this introductory post about it that made the frontpage of HN. Apparently, a LOT of other people were interested as well, and rightfully so; it’s awesome. Luckily for us, Jeremy Ashkenas (primary author of backbone, underscore, coffeescript and all around JS magician) is also a bit of a node guru and had the foresight to make both backbone and underscore usable in node, as modules. So once you’ve installed ’em with npm you can just do this to use them on the server:

var _ = require('underscore')._,
    backbone = require('backbone');

So what?! How is this useful?

State! What do I mean? As I mentioned in my introductory backbone.js post, if you’ve structured your app “correctly” (granted, this is my subjective opinion of “correct”), ALL your application state lives in the backbone models. In my code I go the extra step and store all the models for my app in a sort of “root” app model. I use this to store application settings as attributes and then any other models or collections that I’m using in my app will be properties of this model. For example:

var AppModel = Backbone.Model.extend({
  defaults: {
    attribution: "built by &yet",
    tooSexy: true
  },

  initialize: function () {
    // some backbone collections
    this.members = new MembersCollection();
    this.coders = new CodersCollection();

    // another child backbone model
    this.user = new User();
  }
});

Unifying Application State

By taking this approach and storing all the application state in a single Backbone model, it’s possible to write a serializer/deserializer to extract and re-inflate your entire application state. So that’s what I did. I created two recursive functions that can export and import all the attributes of a nested backbone structure and I put them into a base class that looks something like this:

var BaseModel = Backbone.Model.extend({
  // builds and returns a simple object ready to be JSON stringified
  xport: function (opt) {
    var result = {},
      settings = _({
        recurse: true
      }).extend(opt || {});

    function process(targetObj, source) {
      targetObj.id = source.id || null;
      targetObj.cid = source.cid || null;
      targetObj.attrs = source.toJSON();
      _.each(source, function (value, key) {
        // since models store a reference to their collection
        // we need to make sure we don't create a circular reference
        if (settings.recurse) {
          if (key !== 'collection' && source[key] instanceof Backbone.Collection) {
            targetObj.collections = targetObj.collections || {};
            targetObj.collections[key] = {};
            targetObj.collections[key].models = [];
            targetObj.collections[key].id = source[key].id || null;
            _.each(source[key].models, function (value, index) {
              process(targetObj.collections[key].models[index] = {}, value);
            });
          } else if (source[key] instanceof Backbone.Model) {
            targetObj.models = targetObj.models || {};
            process(targetObj.models[key] = {}, value);
          }
        }
      });
    }

    process(result, this);

    return result;
  },

  // rebuild the nested objects/collections from data created by the xport method
  mport: function (data, silent) {
    function process(targetObj, data) {
      targetObj.id = data.id || null;
      targetObj.set(data.attrs, {silent: silent});
      // loop through each collection
      if (data.collections) {
        _.each(data.collections, function (collection, name) {
          targetObj[name].id = collection.id;
          Skeleton.models[collection.id] = targetObj[name];
          _.each(collection.models, function (modelData, index) {
            var newObj = targetObj[name]._add({}, {silent: silent});
            process(newObj, modelData);
          });
        });
      }

      if (data.models) {
        _.each(data.models, function (modelData, name) {
          process(targetObj[name], modelData);
        });
      }
    }

    process(this, data);

    return this;
  }
});

So, now we can quickly and easily turn an entire application’s state into a simple JS object that can be JSON stringified and restored or persisted in a database, or in localStorage, or sent across the wire. Also, if we have these serialization functions in our base model we can selectively serialize any portion of the nested application structure.
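
For example, assuming an appModel instance of an AppModel that extends the BaseModel above, usage looks something like this:

var state = appModel.xport();     // a plain object, safe to JSON.stringify
var json = JSON.stringify(state); // persist it, cache it, or send it over the wire

// ...later, or on the other end of the connection:
var restored = new AppModel();
restored.mport(JSON.parse(json)); // re-inflates the nested models and collections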

Backbone models are a great way to store and observe state.

So, here’s the kicker: USE IT ON THE SERVER!

How to build models that work on the server and the client

The trick here is to include some logic that lets the file figure out whether it’s being used as a CommonJS module or if it’s just in a script tag.

There are a few different ways of doing this. For example you can do something like this in your models file:

(function () {
  var server = false,
    MyModels;
  if (typeof exports !== 'undefined') {
    MyModels = exports;
    server = true;
  } else {
    MyModels = this.MyModels = {};
  }

  MyModels.AppModel...

})()

Just be aware that any external dependencies will be available if you’re in the browser and you’ve got other <script> tags defining those globals, but anything you need on the server will have to be explicitly imported.

Also, notice that I’m setting a server variable. This is because there are certain things I may want to do in my code on the server that won’t happen in the client. Doing this will make it easy to check where I am (we try to keep this to a minimum though, code-reuse is the goal).
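
With that wrapper in place, pulling the shared file in looks roughly like this (the './mymodels' file name is just for illustration):

// on the server
var MyModels = require('./mymodels');
var app = new MyModels.AppModel();

// in the browser, the same file is loaded with a <script> tag, which defines
// window.MyModels, so the second line above works there unchanged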

State syncing

So, if we go back to thinking about the client/server relationship, we can now keep an inflated Backbone model living in memory on the server and if the server gets a page request from the browser we can export the state from the server and use that to rebuild the page to match the current state on the server. Also, if we set up event listeners properly on our models we can actually listen for changes and send changes back and forth between client/server to keep the two in sync.

Taking this puppy realtime

None of this is particularly interesting unless we have the ability to send data both ways – from client to server and more importantly from server to client. We build real-time web apps at &yet–that’s what we do. Historically, that’s all been XMPP based. XMPP is awesome, but XMPP speaks XML. While JavaScript can do XML, it’s certainly simpler to not have to do that translation of XMPP stanzas into something JS can deal with. These days, we’ve been doing more and more with Socket.io.

The magical Socket.io

Socket.io is to Websockets what jQuery is to the DOM. Basically, it handles browser shortcomings for you and gives you a simple unified API. In short, socket.io is a seamless transport mechanism from node.js to the browser. It will use websockets if supported and fall back to one of 5 transport mechanisms. Ultimately, it goes all the way back to IE 5.5! Which is just freakin’ ridiculous, but at the same time, awesome.

Once you figure out how to set up socket.io, it’s fairly straightforward to send messages back and forth.
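
For reference, wiring socket.io up looked something like this at the time (a sketch; check the socket.io docs for the version you’re on):

var http = require('http'),
    sio = require('socket.io');

var server = http.createServer();
server.listen(8000);

// this `io` is what the connection handlers below hang off of
var io = sio.listen(server);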

So on the server-side we do something like this on the new connection:

io.on('connection', function(client){
  var re = /(?:connect.sid\=)[\.\w\%]+/;
  var cookieId = re.exec(client.request.headers.cookie)[0].split('=')[1]
  var clientModel = clients.get(cookieId)

  if (!clientModel) {
    clientModel = new models.ClientModel({id: cookieId});
    clients.add(clientModel);
  }

  // store some useful info
  clientModel.client = client;

  client.send({
    event: 'initial',
    data: clientModel.xport(),
    templates: 
  });

...

So, on the server when a new client connection is made, we immediately send the full app state:

io.on('connection', function(client) {
  client.send({
    event: 'initial',
    data: appModel.xport()
  });
});

For simplicity, I’ve decided to keep the convention of sending a simple event name and the data just so my client can know what to do with the data.

So, the client then has something like this in its message handler.

socket.on('message', function (data) { 
  switch (data.event) {
    case 'initial':
      app.model.mport(data.data);
      break;
    case 'change':
      // ... handle an individual model change (covered below)
      break;
  }
});

So, in one fell swoop, we’ve completely synced state from the server to the client. In order to handle multiple connections and shared state, you’ll obviously have to add some additional complexity in your server logic so you send the right state to the right user. You can also wait for the client to send some other identifying information, or whatnot. For the purposes of this post I’m trying to keep it simple (it’s long already).

Syncing changes

JS is built to be event driven and frankly, that’s the magic of Backbone models and views. There may be multiple views that respond to events, but ultimately, all your state information lives in one place. This is really important to understand. If you don’t know what I mean, go back and read my previous post.

So, now what if something changes on the server? Well, one option would be to just send the full state to the clients we want to sync each time. In some cases that may not be so bad – especially if the app is fairly light, since then the raw state data is pretty small as well. But still, that seems like overkill to me. So what I’ve been doing is just sending the model that changed. So I added the following publishChange method to my base model:

publishChange: function (model, collection) {
  var event = {};

  if (model instanceof Backbone.Model) {
    event = {
      event: 'change',
      model: {
        data: model.xport({recurse: false}),
        id: model.id
      }
    };
  } else {
    console.log('event was not a model', model);
  }

  this.trigger('publish', event);
},

Then I added something like this to each model’s initialize method:

initialize: function () {
  this.bind('change', _(this.publishChange).bind(this));
}

So now, we have an event type, in this case change, and then we’ve got the model information. Now you may be wondering how we’d know which model to update on the other end of the connection. The trick is the id. What I’ve done to solve this problem is to always generate a UUID and set it as the id when any model or collection is instantiated on the server. Then, always register models and collections in a global lookup hash by their id. That way we can look up any model or collection in the hash and just set all our data on it. Now my client controller can listen for publish events and send them across the wire with just an id. Here’s my register function on my base model (warning, it’s a bit hackish):

register: function () {
  var self = this;
  if (server) {
    var id = uuid();
    this.id = id;
    this.set({id: id});
  }

  if (this.id && !Skeleton.models[this.id]) Skeleton.models[this.id] = this;

  this.bind('change:id', function (model) {
    if (!Skeleton.models[this.id]) Skeleton.models[model.id] = self;
  });
},

Then, in each model’s initialize method, I call register and I have a lookup:

initialize: function () {
    this.register();    
}

So now, my server will generate a UUID and when the model is sent to the client that id will be the same. Now we can always get any model, no matter how far it’s nested by checking the Skeleton.models hash. It’s not hard to deduce that you could take a similar approach for handling add and remove events as long as you’ve got a way to look up the collections on the other end.
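
On the receiving end, applying one of those change events is then just a lookup and a set. A sketch (not verbatim from my code):

function applyChange(event) {
  var target = Skeleton.models[event.model.id]; // the same global lookup hash
  if (target) {
    target.mport(event.model.data);             // sets the attrs from the xport'ed data
  }
}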

So how should this be used?

Well, there are three choices that I see.

  1. Send model changes from either the server or the client in the same way. Imagine we’re starting with an identical state on the server and client. If we now modify the model in place on the client, the publish event would be triggered and its change event would be sent to the server. The change would be set on the corresponding model on the server, which would then immediately trigger another change event, this time on the server, echoing the change back to the client. At that point the loop would die because the change isn’t actually different from the current state, so no event would be triggered. The downside of this approach is that it’s not as tolerant of flaky connections, and it’s a bit on the noisy side since each change is getting sent and then echoed back. The advantage is that you can simply change the local model like you normally would in backbone and your changes would just be synced. Also, the local view would immediately reflect the change since it’s happening locally.

  2. The other, possibly superior, approach is to treat the server as the authority and broadcast all the changes from the server. Essentially, you would just build the change event in the client rather than actually setting it locally. That way you leave it up to the server to actually make changes and then the real change events would all flow from the server to the client. With this approach, you’d actually set the change events you got from the server on the client-side, your views would use those changes to update, but your controller on the client-side wouldn’t send changes back across the wire.

  3. The last approach is just a hybrid of the other two. Essentially, there’s nothing stopping you from selectively doing both. In theory, you can sync trivial state information, for example simple UI state (whether an item in a list is selected or not), using method #1 and then handle more important interactions by sending commands to the server.

In my experiments option 2 seems to work the best. By treating the server as the ultimate authority, you save yourself a lot of headaches. To accommodate this I simply added one more method to my base model class called setServer. It builds a change event and sends it through our socket. So now, in my views on the client, when I’m responding to a user action instead of calling set on the model I simply call setServer and pass it a hash of key/value pairs just like I would for a normal set.

setServer: function(attrs, options) {
  socket.send({
    event: 'set',
    id: this.id,
    change: attrs
  });
}
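
The matching piece on the server (a sketch, my assumption of the shape) just looks the model up by id and does the real set, which triggers publishChange and broadcasts the change back out:

client.on('message', function (message) {
  if (message.event === 'set') {
    var model = Skeleton.models[message.id];
    if (model) model.set(message.change); // fires 'change', which fires publishChange
  }
});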

Why is this whole thing awesome?

It lets you build really awesome stuff! Using this approach, we send very small changes over an already established connection, so we can very quickly synchronize state from one client to another. The server can also get updates from an external data source, modify the model on the server, and have those changes immediately sent to the connected clients.

Best of all – it’s fast. Now, you can just write your views like you normally would in a Backbone.js app.

Obviously, there are other problems to be solved. For example, it all gets a little bit trickier when dealing with multiple states. Say, for instance, you have a portion of application state that you want to sync globally with all users and a portion that you just want to sync with other instances of the same user, or the same team, etc. Then you have to either do multiple socket channels (which I understand Guillermo is working on), or you have to sync all the state and let your views sort out what to respond to.

Also, there are persistence and scaling questions, some of which we’ve got solutions for and some of which we don’t. I’ll save that for another post. This architecture is clearly not perfect for every application. However, in the use cases where it fits, it’s quite powerful. I’m neck-deep in a couple of projects where I’m exploring the possibilities of this approach and I’ve gotta say, I’m very excited about the results. I’m also working on putting together a bit of a real-time framework built on the ideas in this post. I’m certainly not alone in these pursuits, and it’s just so cool to see more and more people innovating and building cool stuff with real-time technologies. I’m thankful for any feedback you’ve got, good or bad.

If you have thoughts or questions, I’m @HenrikJoreteg on twitter. Also, my buddy/co-worker @fritzy and I have started doing a video podcast about this sort of stuff called Keeping It Realtime. And, be sure to follow @andyet and honestly, the whole &yet team for more stuff related to real-time web dev. We’re planning some interesting things that we’ll be announcing shortly. Cheers.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.