Quick intro, the hype and awesomeness that is Node
Node.js is pretty freakin' awesome, yes. But it's also been hyped up more than an Apple gadget. As Eric Florenzano pointed out on his blog, a LOT of the original excitement about server-side JS was due to the ability to share code between client and server. Instead, though, the first thing everybody did was start porting all the existing tools and frameworks to node. Faster and better, perhaps, but it's still largely the same ol' thing. Where's the paradigm shift? Where's the code reuse?!
Basically, Node.js runs V8, the same JS engine as Chrome, and as such it has fairly decent ECMAScript 5 support. Some of the stuff in "5" is super handy, such as all the iterator stuff: forEach, map, etc. But – and it's a big "but" indeed – if you use those methods, you're no longer able to use ANY of your code in older browsers (read: "IE").
So, that is what makes underscore.js so magical. It gives you simple JS fallbacks for non-supported ECMAScript 5 stuff. This means that if you use it in node (or a modern browser), it will still use the faster native stuff when available, but if you use it in a browser that doesn't support that stuff, your code will still work. Code REUSE FTW!
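For example, this runs unchanged on the server and in old browsers, because _.each delegates to the native forEach where it exists and falls back to a plain loop where it doesn't:

var _ = require('underscore')._; // in the browser, underscore just defines a global _

// the ECMAScript 5 iterator style, everywhere
_.each(['node', 'chrome', 'ie6'], function (name) {
    console.log(name);
});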
So what kind of stuff would we want to share between client and server?
Enter Backbone.js
A few months ago I got really into Backbone and wrote this introductory post about it that made the frontpage of HN. Apparently a LOT of other people were interested as well, and rightfully so; it's awesome. Luckily for us, Jeremy Ashkenas (primary author of backbone, underscore, coffeescript and all-around JS magician) is also a bit of a node guru and had the foresight to make both backbone and underscore usable in node, as modules. So once you've installed 'em with npm, you can just do this to use them on the server:
var _ = require('underscore')._,
    backbone = require('backbone');
So what?! How is this useful?
State! What do I mean? As I mentioned in my introductory backbone.js post, if you've structured your app "correctly" (granted, this is my subjective opinion of "correct"), ALL your application state lives in the backbone models. In my code I go the extra step and store all the models for my app in a sort of "root" app model. I use this to store application settings as attributes, and then any other models or collections that I'm using in my app will be properties of this model. For example:
var AppModel = Backbone.Model.extend({
    defaults: {
        attribution: "built by &yet",
        tooSexy: true
    },
    initialize: function () {
        // some backbone collections
        this.members = new MembersCollection();
        this.coders = new CodersCollection();
        // another child backbone model
        this.user = new User();
    }
});
Unifying Application State
By taking this approach and storing all the application state in a single Backbone model, it's possible to write a serializer/deserializer to extract and re-inflate your entire application state. So that's what I did. I created two recursive functions that can export and import all the attributes of a nested backbone structure and I put them into a base class that looks something like this:
var BaseModel = Backbone.Model.extend({
    // builds and returns a simple object ready to be JSON stringified
    xport: function (opt) {
        var result = {},
            settings = _({
                recurse: true
            }).extend(opt || {});

        function process(targetObj, source) {
            targetObj.id = source.id || null;
            targetObj.cid = source.cid || null;
            targetObj.attrs = source.toJSON();
            _.each(source, function (value, key) {
                // since models store a reference to their collection
                // we need to make sure we don't create a circular reference
                if (settings.recurse) {
                    if (key !== 'collection' && source[key] instanceof Backbone.Collection) {
                        targetObj.collections = targetObj.collections || {};
                        targetObj.collections[key] = {};
                        targetObj.collections[key].models = [];
                        targetObj.collections[key].id = source[key].id || null;
                        _.each(source[key].models, function (value, index) {
                            process(targetObj.collections[key].models[index] = {}, value);
                        });
                    } else if (source[key] instanceof Backbone.Model) {
                        targetObj.models = targetObj.models || {};
                        process(targetObj.models[key] = {}, value);
                    }
                }
            });
        }

        process(result, this);

        return result;
    },

    // rebuild the nested objects/collections from data created by the xport method
    mport: function (data, silent) {
        function process(targetObj, data) {
            targetObj.id = data.id || null;
            targetObj.set(data.attrs, {silent: silent});
            // loop through each collection
            if (data.collections) {
                _.each(data.collections, function (collection, name) {
                    targetObj[name].id = collection.id;
                    Skeleton.models[collection.id] = targetObj[name];
                    _.each(collection.models, function (modelData, index) {
                        var newObj = targetObj[name].add({}, {silent: silent});
                        process(newObj, modelData);
                    });
                });
            }
            if (data.models) {
                _.each(data.models, function (modelData, name) {
                    process(targetObj[name], modelData);
                });
            }
        }

        process(this, data);

        return this;
    }
});
So, now we can quickly and easily turn an entire application's state into a simple JS object that can be JSON stringified and restored, or persisted in a database, or in localStorage, or sent across the wire. Also, since we have these serialization functions in our base model, we can selectively serialize any portion of the nested application structure.
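For example, a full round trip looks something like this, assuming AppModel extends BaseModel:

var app = new AppModel();
var snapshot = JSON.stringify(app.xport());

// later, or on the other end of the wire:
var restored = new AppModel();
restored.mport(JSON.parse(snapshot));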
Backbone models are a great way to store and observe state.
So, here's the kicker: USE IT ON THE SERVER!
How to build models that work on the server and the client
The trick here is to include some logic that lets the file figure out whether it's being used as a CommonJS module or if it's just in a script tag.
There are a few different ways of doing this. For example you can do something like this in your models file:
(function () {
    var server = false,
        MyModels;
    if (typeof exports !== 'undefined') {
        MyModels = exports;
        server = true;
    } else {
        MyModels = this.MyModels = {};
    }

    MyModels.AppModel...
})();
Just be aware that any external dependencies will be available if you're in the browser and you've got other <script> tags defining those globals, but anything you need on the server will have to be explicitly imported.
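On the server side of that branch, the explicit imports are just the requires from earlier, something like this sketch:

if (typeof exports !== 'undefined') {
    var _ = require('underscore')._,
        Backbone = require('backbone');
}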
Also, notice that I'm setting a server variable. This is because there are certain things I may want to do in my code on the server that won't happen in the client. Doing this makes it easy to check where I am (we try to keep this to a minimum though; code reuse is the goal).
State syncing
So, if we go back to thinking about the client/server relationship, we can now keep an inflated Backbone model living in memory on the server, and when the server gets a page request from the browser, we can export that state and use it to build a page that matches the current state on the server. Also, if we set up event listeners properly on our models, we can actually listen for changes and send them back and forth between client and server to keep the two in sync.
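To make that concrete, here's a sketch of the page-request side using an Express-style route (any web framework would do):

app.get('/', function (req, res) {
    // embed the serialized app state in the rendered page;
    // client-side code can JSON.parse it and mport() it to match the server
    res.render('app', {
        initialState: JSON.stringify(appModel.xport())
    });
});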
Taking this puppy realtime
None of this is particularly interesting unless we have the ability to send data both ways – from client to server and, more importantly, from server to client. We build real-time web apps at &yet – that's what we do. Historically, that's all been XMPP based. XMPP is awesome, but XMPP speaks XML. While JavaScript can do XML, it's certainly simpler not to have to translate XMPP stanzas into something JS can deal with. These days, we've been doing more and more with Socket.io.
The magical Socket.io
Socket.io is to WebSockets what jQuery is to the DOM. Basically, it handles browser shortcomings for you and gives you a simple unified API. In short, socket.io is a seamless transport mechanism from node.js to the browser. It will use WebSockets if supported and fall back to one of five other transport mechanisms. Ultimately, it goes all the way back to IE 5.5! Which is just freakin' ridiculous, but at the same time, awesome.
Once you figure out how to set up socket.io, it's fairly straightforward to send messages back and forth.
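Once it's set up, the most minimal possible round trip looks something like this (using the same send/message calls as the real code below):

// server: push a message to each client that connects
io.on('connection', function (client) {
    client.send({event: 'hello'});
});

// client: react to whatever the server pushes
socket.on('message', function (data) {
    console.log(data.event); // logs 'hello'
});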
So on the server side, we do something like this on each new connection:

io.on('connection', function (client) {
    // pull the session id out of the connect.sid cookie
    var re = /(?:connect\.sid=)[.\w%]+/;
    var cookieId = re.exec(client.request.headers.cookie)[0].split('=')[1];
    var clientModel = clients.get(cookieId);
    if (!clientModel) {
        clientModel = new models.ClientModel({id: cookieId});
        clients.add(clientModel);
    }
    // store some useful info
    clientModel.client = client;
    client.send({
        event: 'initial',
        data: clientModel.xport(),
        templates: ...
    });
    ...
So, at its simplest, when a new client connection is made, we immediately send the full app state:
io.on('connection', function (client) {
    client.send({
        event: 'initial',
        data: appModel.xport()
    });
});
For simplicity, I've decided to keep the convention of sending a simple event name along with the data, just so my client knows what to do with the data.
So, the client then has something like this in its message handler:
socket.on('message', function (data) {
    switch (data.event) {
        case 'initial':
            app.model.mport(data.data);
            break;
        case 'change':
            ...
    }
});
So, in one fell swoop, we've completely synced state from the server to the client. In order to handle multiple connections and shared state, you'll obviously have to add some additional complexity in your server logic so you send the right state to the right user. You can also wait for the client to send some other identifying information, or whatnot. For the purposes of this post I'm trying to keep it simple (it's long already).
Syncing changes
JS is built to be event-driven and frankly, that's the magic of Backbone models and views. There may be multiple views that respond to events, but ultimately, all your state information lives in one place. This is really important to understand. If you don't know what I mean, go back and read my previous post.
So, now what if something changes on the server? Well, one option would be to just send the full state to the clients we want to sync each time. In some cases that may not be so bad – especially if the app is fairly light, since the raw state data is pretty small as well. But still, that seems like overkill to me. So what I've been doing is just sending the model that changed. I added the following publishChange method to my base model:
publishChange: function (model, collection) {
    var event = {};
    if (model instanceof Backbone.Model) {
        event = {
            event: 'change',
            model: {
                data: model.xport({recurse: false}),
                id: model.id
            }
        };
    } else {
        console.log('event was not a model', model);
    }
    this.trigger('publish', event);
},
Then I added something like this to each model's initialize method:
initialize: function () {
this.bind('change', _(this.publishChange).bind(this));
}
So now we have an event type, in this case change, and then we've got the model information. Now you may be wondering how we'd know which model to update on the other end of the connection. The trick is the id. What I've done to solve this problem is to always generate a UUID and set it as the id when any model or collection is instantiated on the server. Then, always register models and collections in a global lookup hash by their id. That way we can look up any model or collection in the hash and just set all our data on it. Now my client controller can listen for publish events and send them across the wire with just an id. Here's my register function on my base model (warning, it's a bit hackish):
register: function () {
var self = this;
if (server) {
var id = uuid();
this.id = id;
this.set({id: id});
}
if (this.id && !Skeleton.models[this.id]) Skeleton.models[this.id] = this;
this.bind('change:id', function (model) {
if (!Skeleton.models[this.id]) Skeleton.models[model.id] = self;
});
},
Then, in each model's initialize method, I call register so the model adds itself to the lookup:
initialize: function () {
this.register();
}
So now, my server will generate a UUID, and when the model is sent to the client that id will be the same. Now we can always get any model, no matter how far it's nested, by checking the Skeleton.models hash. It's not hard to deduce that you could take a similar approach for handling add and remove events, as long as you've got a way to look up the collections on the other end.
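For example, an add publisher on a base collection could be sketched like this (publishAdd is a hypothetical companion to publishChange, not part of the code above; Backbone passes the model and collection to add handlers):

// hypothetical: bound to the collection's 'add' event
publishAdd: function (model, collection) {
    this.trigger('publish', {
        event: 'add',
        collection: collection.id,
        model: model.xport({recurse: false})
    });
},

The receiving side would look the collection up by its id in the Skeleton.models hash and add the inflated model to it.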
So how should this be used?
Well, there are three choices that I see:
1. Send model changes from either the server or the client in the same way. Imagine we're starting with an identical state on the server and client. If we now modify the model in place on the client, the publish event would be triggered and its change event would be sent to the server. The change would be set on the corresponding model on the server, which would then immediately trigger another change event, this time on the server, echoing the change back to the client. At that point the loop would die because the change isn't actually different from the current state, so no event would be triggered. The downside with this approach is that it's not as fault tolerant of flaky connections, and it's a bit on the noisy side since each change is getting sent and then echoed back. The advantage is that you can simply change the local model like you normally would in backbone and your changes would just be synced. Also, the local view would immediately reflect the change since it's happening locally.

2. The other, possibly superior, approach is to treat the server as the authority and broadcast all the changes from the server. Essentially, you would just build the change event in the client rather than actually setting it locally. That way you leave it up to the server to actually make changes, and then the real change events would all flow from the server to the client. With this approach, you'd actually set the change events you got from the server on the client side, your views would use those changes to update, but your controller on the client side wouldn't send changes back across the wire.

3. The last approach is just a hybrid of the other two. Essentially, there's nothing stopping you from selectively doing both. In theory, you can sync trivial state information, for example simple UI state (whether an item in a list is selected or not), using method #1, and then do more important interactions by sending commands to the server.
In my experiments, option 2 seems to work the best. By treating the server as the ultimate authority, you save yourself a lot of headaches. To accommodate this I simply added one more method to my base model class called setServer. It builds a change event and sends it through our socket. So now, in my views on the client, when I'm responding to a user action, instead of calling set on the model I simply call setServer and pass it a hash of key/value pairs just like I would for a normal set.
setServer: function(attrs, options) {
socket.send({
event: 'set',
id: this.id,
change: attrs
});
}
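On the server, the handler for these set events is straightforward: look the model up by id in the global hash and set the changes there, which fires publishChange and broadcasts the real change event back out. A sketch:

client.on('message', function (message) {
    if (message.event === 'set') {
        var model = Skeleton.models[message.id];
        // setting here triggers 'change' -> publishChange -> broadcast
        if (model) model.set(message.change);
    }
});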
Why is this whole thing awesome?
It lets you build really awesome stuff! Using this approach, we send very small changes over an already-established connection, we can very quickly synchronize state from one client to another, and the server can get updates from an external data source, modify the model on the server, and have those changes immediately sent to the connected clients.
Best of all – it's fast. Now, you can just write your views like you normally would in a Backbone.js app.
Obviously, there are other problems to be solved. For example, it all gets a little bit trickier when dealing with multiple states. Say, for instance, you have a portion of application state that you want to sync globally with all users and a portion that you just want to sync with other instances of the same user, or the same team, etc. Then you have to either do multiple socket channels (which I understand Guillermo is working on), or you have to sync all the state and let your views sort out what to respond to.
Also, there are persistence and scaling questions, some of which we've got solutions for, some of which we don't. I'll save those for another post. This architecture is clearly not perfect for every application. However, in the use cases where it fits, it's quite powerful. I'm neck-deep in a couple of projects where I'm exploring the possibilities of this approach, and I've gotta say, I'm very excited about the results. I'm also working on putting together a bit of a real-time framework built on the ideas in this post. I'm certainly not alone in these pursuits; it's just so cool to see more and more people innovating and building cool stuff with real-time technologies. I'm thankful for any feedback you've got, good or bad.
If you have thoughts or questions, I'm @HenrikJoreteg on twitter. Also, my buddy/co-worker @fritzy and I have started doing a video podcast about this sort of stuff called Keeping It Realtime. And, be sure to follow @andyet and honestly, the whole &yet team for more stuff related to real-time web dev. We're planning some interesting things that we'll be announcing shortly. Cheers.
If you're building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.