● posted by Henrik Joreteg

As part of our training events, I give a short talk about JS frameworks. I’ve shied away from posting many of my opinions about frameworks online because it tends to stir the pot, hurt people’s feelings, and, unlike talking face to face, there’s no really great, bi-directional channel for rebuttals.

But I’ve been told the talk was very useful and helped provide a nice, quick overview of some of the most popular JS tools and frameworks for building single page apps. So I decided to flesh it out and publish it as A Thing™. But please remember that you’re just reading opinions; I’m not telling you what to do, and you should do what works for you and your team. Feel free to disagree with me on Twitter or, even better, write a post explaining your position.

Angular.js

angular pros

  1. Super easy to start. You just drop a script tag into your document, add some ng- attributes to your app, and you magically get behavior.

  2. It’s well-supported by a core team, many of whom are full time Google employees.

  3. Big userbase / community.

angular cons

  1. Picking Angular means you’re learning Angular the framework rather than how to solve problems in JavaScript. If I were to encourage our team to build apps using Angular, what happens when {insert hot new JS framework} comes along? Or we discover that, for a certain need, Angular can’t quite do the thing we want and we’d rather build it with something else? At that point, how well will those Angular skills translate to something else? Instead, I’ve got developers whose primary skill is Angular, not necessarily JavaScript.

  2. Violates separation of concerns. Call me old school, but I still believe CSS is for style, HTML is for structure, and JavaScript is for app logic. In Angular, though, you spend a lot of time describing behavior in HTML instead of JS. For me personally, this is the deal breaker with Angular. I don’t want to describe application logic in HTML; it’s simply not expressive enough, because it’s a markup language for structuring documents, not for describing application logic. To get around this, Angular has had to create what is arguably another language inside HTML, while still requiring a bit of JS to describe additional details. Now, rather than learning how to build applications in JavaScript, you’re learning Angular, and things seem to have a tendency to get complex. That’s why my friend Ari’s Angular book is 600 pages!

  3. Too much magic. Magic comes at a cost. When you’re working with something that’s highly abstracted, it becomes a lot more difficult to figure out what’s wrong when something goes awry. And of course, when you veer off the beaten path, you’re on your own. I could be wrong, but I would guess most Angular users lack enough understanding of the framework to really feel confident modifying or debugging Angular itself.

  4. Provides very little structure. I’m not sure a canonical way to build a single page app in Angular exists. Don’t get me wrong, I think that’s fine; there’s nothing wrong with non-prescriptive toolkits. But it does mean that it’s harder to jump into someone else’s Angular app, or add someone to yours, because styles are likely to be very different.

my fallible conclusion

There’s simply too much logic described in a quasi-language in HTML rather than in JS and it all feels too abstract and too magical.

I’d rather our team get good at JS and DOM instead of learning a high-level abstraction.

Ember.js

ember pros

  1. Heavy emphasis on doing things “The Ember Way” (also note item #1 in the “cons” section). This is a double-edged sword. If you have a huge team and expect lots of churn, having rigid structure can be the difference between having a transferable codebase and every new developer wanting to throw it all away. If they are all Ember devs, they can probably jump in and help on an Ember project.

  2. You get to outsource many of the hard problems of building single page apps to some incredibly smart people who will make a lot of the hard tradeoff decisions for you (also note item #2 in the “cons” section).

  3. Big, helpful community.

  4. Nice docs site.

  5. A good amount of existing solved problems and components to use.

ember cons

  1. Heavy emphasis on doing things “The Ember Way”. Note this is also in the “pros” section. It’s very prescriptive. While, from the sound of it, you can veer from the standard path, many do not. For example, you don’t have to use handlebars with Ember, but I would be surprised if there are many production Ember apps out there that don’t.

  2. Ember codifies a lot of opinions. If you don’t agree with those opinions and decide to replace pieces of functionality with your own, you’re still sending all the unused code to the browser. Byte counting isn’t a core value of mine, but conceptually it’s nicer to be able to send only what you use. In addition, when you’re only sending what you’re using, there’s less code to sift through to locate a bug.

  3. Memory usage can be a bit of an issue, especially when running Ember on mobile devices.

  4. Ember is intentionally, and structurally, inflexible. Don’t believe me? Take Yehuda’s word for it instead (the surrounding conversation is interesting too).

my fallible conclusion

The lack of flexibility and feeling like in order to use Ember you have to go all or nothing is a deal breaker for me.

React

It’s worth noting that it’s not really fair to include React in this list. It’s not a framework; it’s a view layer. But there’s so much discussion around it that I decided to add it here anyway. Arguably, when you mix in Facebook’s Flux dispatcher stuff, it’s more of a framework.

react pros

  1. You can blindly re-render without worrying about DOM thrashing; React will “diff” the virtual DOM you render against what it knows the real DOM to be and perform minimal changes to bring them in sync.

  2. The virtual DOM approach also resolves cross-browser eventing issues by abstracting events into a standards-compliant event-emitting/bubbling model. As a result, you get a consistent event model across any browser.

  3. It’s just a view layer, not a complete framework. This means you can use it with whatever application orchestration you’d like to do. It does seem to pair nicely with Backbone, since Backbone doesn’t give you a view binding solution out of the box and encourages you to simply re-render on model changes, which is exactly what React encourages and deals with.
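To make the “diffing” idea from item 1 concrete, here’s a tiny, hypothetical sketch of diffing two virtual trees into a patch list. This is not React’s actual algorithm or API, just an illustration of the concept: only the parts of the tree that actually changed produce work for the real DOM.

```javascript
// Minimal virtual DOM diff sketch (NOT React's real algorithm).
// A "virtual node" is { tag, children } or a plain string for text.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  // Text nodes (or added/removed nodes): replace if they differ.
  if (oldNode === undefined || newNode === undefined ||
      typeof oldNode === 'string' || typeof newNode === 'string') {
    if (oldNode !== newNode) patches.push({ path: path, type: 'replace', node: newNode });
    return patches;
  }
  // Different tags: replace the whole subtree.
  if (oldNode.tag !== newNode.tag) {
    patches.push({ path: path, type: 'replace', node: newNode });
    return patches;
  }
  // Same tag: recurse into children looking for smaller changes.
  var len = Math.max(oldNode.children.length, newNode.children.length);
  for (var i = 0; i < len; i++) {
    diff(oldNode.children[i], newNode.children[i], path + '.' + i, patches);
  }
  return patches;
}

var before = { tag: 'div', children: ['Hello'] };
var after  = { tag: 'div', children: ['Goodbye'] };
console.log(diff(before, after));
// only the changed text node produces a patch; the <div> itself is untouched
```

The point is that you can “re-render” the whole virtual tree on every change, and the amount of real DOM mutation stays proportional to what actually changed.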

react cons

  1. The template syntax and the way you create DOM (with JSX) are a bit odd for a JS developer, because you put unquoted HTML right into your JavaScript as if it were valid to do so. And yes, JSX is optional, but the alternative, React.DOM.div(null, "Hello ", this.props.name);, isn’t much better, IMO.

  2. If you want really fine-grained and explicit control over how things get applied to the DOM, you don’t really have it anymore. For example, if you want very specific control over how things are bound to style attributes for creating touch-draggable UIs, you can’t easily time the order in which classes get applied, etc. (Please note this is something I’d assumed would be an issue but have not run into myself; it was, however, confirmed by a dev I was talking to who was struggling with exactly this. Take it with a grain of salt.)

  3. While you can just re-render the entire React view, depending on the complexity of the component, it sure seems like there can be a lot of diffing to do. I’ve heard of React devs choosing to update only the known changed components, which, to me, takes away from the whole idea of not having to care. Again, note that I’m speaking from very limited experience.

my fallible conclusion

I think React is very cool. If I had to build a single page app that supported old browsers I’d look closely at using Backbone + React.

A note on the “FLUX” architecture: To me this is not new information or even a new idea, just a new name. Apparently I’m not alone in that opinion.

The way I understand it, conceptually FLUX is the same as having an intelligently evented model layer in something like Ampersand or Backbone and turning all user actions and server data updates into changes to that state.

By ensuring that user actions never directly manipulate the DOM, you end up with the same unidirectional event propagation flow as Flux + React. We intentionally didn’t include any sort of two-way bindings in Ampersand for that reason. In my opinion, two-way bindings are fraught with peril. Having a single layer deal with incoming events, be they from the server or from user actions, is what we’ve been doing for years.

Polymer

This one is a bit strange to me. There’s a standard being developed for defining custom elements (document.registerElement, for creating new HTML tags with built-in behavior), doing HTML imports (<link rel='import'>, for importing those custom elements into other documents), and shadow DOM (for isolating CSS from the rest of the document).

Those things are great (except HTML imports, IMO).

But, judging by Polymer’s introduction, it sounds like a panacea for making all web development easy and amazing and that it’s good for everything. Here’s what the opening line says:

Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.

While I think being able to create custom elements and encapsulating style and behavior is fantastic, I’m frustrated with the way it’s being positioned. It sounds like you should use this for everything now.

Here’s the kicker: I don’t know of any significant Google app that uses polymer for anything.

That’s a red flag for me. Please don’t misunderstand, obviously this is all new stuff and change takes time. My issue is just that the messaging on the site and from the Google engineers working on this doesn’t convey that newness.

In addition, even if you were to create custom elements for all the view code in your single page app, something has to manage the creation/destruction of those elements. You still have to manage state and orchestrate an app, which means your custom elements are really just another way to write the equivalent of a Backbone view. In the single page app world, I don’t see what we would actually gain by switching those things to custom elements.

polymer pros

  1. Being able to create things like custom form inputs without them being baked into the browser is awesome.

  2. Polymer polyfills enough so you can start using and experimenting with this functionality now.

  3. Proper isolation of styles when building widgets has been a problem on the web for years. The new standards solve that problem at the browser level, which is awesome.

polymer cons

  1. I personally feel like one of Google’s main motivations for doing this is to make it dead simple to drop Google services, complete with behavior, style, and functionality, into a web page without having to know any JS. I could be completely off base here, but I can’t help but feel like the marketing is largely a big hype campaign to help push the standards through.

  2. HTML imports seem like a bad idea to me. It feels like the CSS @import problem all over again: if you import a thing, you have to wait to get it back before the browser notices that it imports another thing, and so on. So if you actually take the fully componentized approach to building a page that’s being promoted, you’ll end up with a ton of back-and-forth network requests. They do have a tool called the “vulcanizer” for flattening these things out, but inlining doesn’t seem to be an option. There was a whole post written yesterday about the problems with HTML imports that discusses this and other issues.

  3. I simply don’t understand why Google is pushing this stuff so hard, as if it’s some kind of panacea, when the only example I can find of Google using it themselves is the Polymer site itself. The site claims “Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.” In my experimentation, that simply wasn’t the case. I smell hype.

my fallible conclusion

Google doesn’t seem to be eating their own dog food here. The document.registerElement spec is exciting, but beyond polyfilling that, I see no use for Polymer, sorry.

Backbone.js

There is no more broadly production-deployed single page app framework than Backbone that I’m aware of. The examples section of the Backbone docs lists a lot of big names, and that list is far from exhaustive.

backbone pros

  1. It’s a small and flexible set of well-tested building blocks.

    1. Models
    2. Collections
    3. Views
    4. Router
  2. It solves a lot of the basic problems.

  3. Its limited scope makes it easy to understand. As a result, I always make new front end developers read the Backbone.js documentation as a first task when they join &yet.

backbone cons

  1. It doesn’t provide solutions for all the problems you’ll encounter. This is why every major user of Backbone that I’m aware of has built their own “framework” on top of Backbone’s base.

  2. The things you’ll most notably find yourself missing when using plain Backbone are:

    1. A way to create derived properties on models.
    2. A way to bind properties and derived properties to views.
    3. A way to render a collection of views within an element.
    4. A way to cleanly handle “subviews” and nested layouts, etc.
  3. As much as Backbone is minimalistic, its pieces are also arguably too coupled to each other. For example, until my merged pull request is released, you couldn’t use any other type of model within a Backbone Collection without monkey patching internal methods. This may not matter for some apps, but it does matter if I want to, for example, use a model to store some observable data in a library intended for use by other code that may or may not be a Backbone app. The only way to use Backbone Models is to include all of Backbone, which feels odd and inefficient to me.
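To give a concrete sense of the first missing piece in item 2 (derived properties on models), here’s a tiny hand-rolled sketch using an ES5 getter. This is hypothetical code, not Backbone’s API; it’s roughly the kind of thing people end up bolting onto Backbone models themselves.

```javascript
// A derived ("computed") property via an ES5 getter: something plain
// Backbone models don't give you out of the box.
function Person(first, last) {
  this.first = first;
  this.last = last;
}

// fullName is computed on access, never stored, so it can't go stale.
Object.defineProperty(Person.prototype, 'fullName', {
  get: function () {
    return this.first + ' ' + this.last;
  }
});

var p = new Person('Henrik', 'Joreteg');
console.log(p.fullName); // "Henrik Joreteg"
p.first = 'H.';
console.log(p.fullName); // "H. Joreteg" -- always in sync with its inputs
```

A real solution also needs change events for derived values so views can bind to them, which is exactly the kind of plumbing every team keeps rebuilding on top of Backbone.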

my fallible conclusion

Backbone pioneered a lot of amazing things. I’ve been using it since 0.3 and I strongly agree with its minimalistic philosophy.

It’s helped spawn a new generation of applications that treat the browser as a runtime, not just a document rendering engine. But its narrow scope left people to invent solutions on top of Backbone. While this isn’t a bad thing per se, it brings to light that there are more problems to be solved.

Not using a framework

There’s a subset of developers who think you shouldn’t use frameworks for anything, ever. While I appreciate the sentiment and generally find myself very much in line with them, to me it’s simply not pragmatic, especially in a team scenario.

I tend to agree with Ryan Florence’s post on this topic, which is best summed up by this one quote:

When you decide to not pick a public framework, you will end up with a framework anyway: your own.

He goes on to say that doing this is not inherently bad, just that you should be serious about it and maintain it. I highly recommend the post; it’s excellent.

no framework pros

  • Ultimate flexibility

  • You’ll tend to include only the exact code that you need in your app.

no framework cons

  • Massive cost of re-inventing things.

  • Knowing which modules to use, and finding the right ones, is hard

  • No clear documentation or conventions for new developers

  • Really hard to transfer and re-use code for your next project

  • You’ll generally end up having to learn from your own mistakes instead of benefiting from others’ code.

The GIANT gap

In doing our trainings, in writing my book, Human JavaScript, and within our team itself, we’ve come to realize there is a huge gap between picking a tool, framework, or library and actually building a complete application.

Not to mention, there are huge problems surrounding how to actually build an app as a team without stomping on each other.

There are sooooo many options and patterns on how to structure, build, and deploy applications beyond just picking a framework.

Few people seem to be talking about how to do all of that, which is just as big of a rabbit hole as picking a framework!

What we actually want

  • Clear starting point

  • A clear, but not enforced, standard way to do things

  • Explicitly clear separation of concerns, so we can mix and match and replace as needed

  • Easy dependency management

  • A way to use existing solutions so we don’t have to re-invent everything

  • A development workflow where we can switch from development mode to production with a simple boolean in a config.

How we’ve addressed all of these things

So, in case you hadn’t already heard, we did the unspeakable thing in JavaScript: we made a “new” framework, Ampersand.js. It’s a bit like a redux or derivation of Backbone.

The response so far has been overwhelmingly positive. We only announced it about a month ago, and all these folks have jumped in to contribute. People have been giving talks about it at meetups, and Jeremy Ashkenas, the creator of Backbone.js, Underscore.js, and CoffeeScript, invited me to give a keynote at BackboneConf 2014 about Ampersand.js.

So how did we address all my critiques about the other tools?

  1. Flexible but cohesive

    • It comes with a set of “core” modules (documented here) that roughly line up with the components in Backbone, but they are all installed and used individually. No assumptions are made that you’re using a RESTful or even Ajax-powered API. If you don’t want that stuff, you just use Ampersand-State instead of Ampersand-Model, the decorated version of State that adds the RESTful methods.

    • It doesn’t come with a templating language. Templates can be as simple as a string of HTML, a function that returns a string of HTML, or a function that returns DOM. The sample app includes some more advanced templating with templatizer, but it truly could be anything. One awesome approach for doing handlebars/htmlbars + Ember-style in-template binding declarations is domthing by Philip Roberts. There are also people using React with Ampersand views.

    • Views have a way to declare bindings separately from the template engine. So if you want, you can use HTML strings for templates and still get full control of bindings. The nice thing about not bundling a templating engine is that you can write componentized/reusable views without also shipping a templating system.

  2. There has to be a clear starting point and some idiomatic way to structure the app as a whole that can be used as a reference, but those standard approaches should not be enforced. We did this by building a CLI that can spin up a new app following all these conventions, which can serve either as a starting point or simply as a reference. See the quick start guide for more.

  3. We wanted to build on something proven rather than starting something new for the sake of doing it. This is why we built on Backbone as a base instead of starting from scratch entirely.

  4. We wanted a more complete reference guide to fill that gap I mentioned that explains all the surrounding ideas, tools, and philosophies. We did this by writing a book on the topic: Human JavaScript. It’s free to read online in its entirety and available as an ebook.

  5. We wanted to make it easy to use “solved problems” so we don’t have to re-invent the wheel all the time. We did this by using npm for all package management, and by creating a quick-searchable directory of our favorite clientside modules.

  6. We wanted a painless development-to-production workflow. We did this with a tool called moonboots that adds some dev and deployment workflow functionality to browserify. Moonboots has plugins for hapi.js and express.js, where the only thing you have to do to switch between production mode (minified, cached, uniquely named static assets) and dev mode (re-built on each request, not minified, not cached) is toggle a single boolean.

  7. We didn’t just want this to be an &yet project; it has to be bigger than that. We’ve already had over 40 contributors in the short time Ampersand.js has been public, and we just added the first of hopefully many non-&yet contributors to core. Everything uses the very permissive MIT license, and its modular, loosely coupled structure lends itself quite well to extending or replacing any piece of it to fit your needs. For clarity, we’ve also set it up as its own organization on GitHub.

  8. We wanted additional training and support to be available if needed. For this, we’ve made the #&yet IRC channel on freenode open to questions and support. In addition, there are people and companies who need paid training opportunities to be available in order to feel comfortable adopting a technology. They want to know that more information and help is available, so in addition to the free resources, we’ve also put together a Human JavaScript code-along online training and offer in-person training events to provide hands-on training and support.
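The “templates are just functions” idea from item 1 can be sketched in a few lines. This is hypothetical code, not the actual ampersand-view API; it just shows why a view doesn’t need to care where its template came from.

```javascript
// A template is just a function from state to an HTML string; the "view"
// doesn't care whether it was handwritten, precompiled by templatizer,
// or generated some other way.
function template(state) {
  return '<li class="person">' + state.name + '</li>';
}

function View(tmpl, state) {
  this.tmpl = tmpl;
  this.state = state;
}
// render() delegates entirely to the template function.
View.prototype.render = function () {
  this.html = this.tmpl(this.state);
  return this;
};

var view = new View(template, { name: 'Henrik' }).render();
console.log(view.html); // <li class="person">Henrik</li>
```

Swapping templating systems then means swapping one function, with no changes to the view layer itself.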

So are you saying Ampersand is the best choice for everyone?

Nope. Not at all. It certainly has its own set of tradeoffs. Here are some I’m aware of, there are probably others:

  • Unsurprisingly, it is still a somewhat immature codebase compared to some of these other tools. Having said that, however, we use it for all our single page app projects at &yet, and the core modules all have thorough test suites. It’s also worth noting that if you do run into a problem, odds are it won’t be as debilitating. Its open, hackable, pluggable nature makes it different from many frameworks in that you don’t have to jump through a bunch of hoops to fix or overwrite something in your app. The small modules typically make it easier to isolate, patch, and quickly publish bugfixes. In fact, we often publish a patched version to npm as soon as a pull request is merged. Our strict adherence to semver makes it possible to do that while mitigating the odds of breaking any existing code. I think that’s part of the reason it has gotten as many pull requests as it has already. Even still, if you have a different idea of how something should work, it’s easy to use your own module instead. We’re also trying to increase the number of core committers to make sure patches get in even when other core devs are busy.

  • It doesn’t have the rich tooling and giant communities built up around it yet. That stuff takes time, but as I said, we’re encouraged by the level of participation we’ve had thus far. Please file bugs and help create the things you wish existed.

  • Old browser support is a rough spot. We intentionally drew a line saying we won’t support IE8. We’re not alone there: jQuery 2.0 doesn’t either, Google has said they’ll only support the latest two versions of IE for Apps and recently dropped IE9 too, and Microsoft themselves just announced their plan to phase out support for all older browsers. Why did we do this? It’s because we’re using getters and setters for the state management stuff. It was a hard decision, but it felt like enough of a win to make it worth it. Unfortunately, since that is a language-level feature, it’s not easily shimmable (at least not that I’m aware of). Sadly, for some companies, not supporting IE8 is a dealbreaker. Perhaps someone has already written a transpiler in a browserify transform that can solve this problem, but I’m not aware of one. If you are, please let me know. I would love it if Ampersand-State could support IE 7 and 8.
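The language feature in question looks like this. It’s what lets a state object detect changes on plain property assignment, and because IE8’s JS engine has no equivalent, there’s nothing for a shim to hook into. This is a sketch of the idea, not Ampersand-State’s actual internals.

```javascript
// Intercepting plain assignment with an ES5 setter. In IE8 there is no
// way to run code when `model.name = ...` happens, which is why this
// style of state management can't be shimmed there.
function makeObservable(obj, key, onChange) {
  var value = obj[key];
  Object.defineProperty(obj, key, {
    get: function () { return value; },
    set: function (newValue) {
      value = newValue;
      onChange(key, newValue); // fire a change notification
    }
  });
}

var changes = [];
var model = { name: 'unnamed' };
makeObservable(model, 'name', function (key, val) {
  changes.push(key + '=' + val);
});

model.name = 'Henrik'; // plain assignment, but the setter runs
console.log(changes);  // [ 'name=Henrik' ]
```

Without accessors, the only alternatives are explicit `get()`/`set()` methods (Backbone’s approach) or dirty-checking on a timer, which is exactly the tradeoff being described.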

Final thoughts

Hopefully this explanation was useful. If you have any feedback or thoughts, or if there’s something I missed or got wrong, I’m @HenrikJoreteg on Twitter; please let me know.

Also please help us make these tools better. We love getting more people involved in the project. File bugs or grab one of the open issues and help us patch ‘em.

Want to start using Ampersand?

Check the learning guides, API reference, or read Human JavaScript online for free.

For hands-on learning jump into the Human JavaScript code-along online training, or for the ultimate kickstart come hang out in person at our training events where you’ll build an app from scratch together with us.

See you on the Interwebz <3

● posted by Henrik Joreteg

It used to all make sense.

The web was once nothing but documents.

Just like you’d want some type of file browser UI to dig through files on your operating system, obviously, you need some type of document browser to view all these web-addressable “documents”.

But over time, those “documents” have become a lot more. A. lot. more.

I can now use one of these “documents” to have a 4-person video/audio conference on Talky with people anywhere in the world, play incredible full-screen first-person shooters at 60fps, write code in a full-fledged editor, or {{ the reader may insert any number of amazing web apps here }}, all using nothing but this “document viewer”.

Does calling them “documents” seem ridiculous to anyone else? Of course it does. Calling them “sites” is pretty silly too, actually, because a “site” implies a document with links and a URL.

I know the “app” vs. “site” debate is tired and worn.

Save for public, content-heavy sites, all of the apps that I’m asked to write by clients these days at &yet are fully client-side rendered.

The browser is not an HTML renderer for me; it’s the world’s most ubiquitous, yet capable, runtime. With the amazing capabilities of the modern web platform, it’s to the point where referring to a browser as a document viewer is an insult to the engineers who built it.

There is a fundamental difference when you treat the browser as a runtime instead of a document renderer.

I typically send it nothing but a doctype, a script tag, and a stylesheet with permanent cache headers. HTML just happens to be the way I tell the browser to download my app. I deal with the initial latency issues by all-but-ensuring visitors hit the app with a primed cache. This is pretty easy for apps that are opened frequently or are behind a static login page in which you prefetch the app resources. With proper cache headers the browser won’t even do the 304 not-modified dance. It will simply start executing code.
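Concretely, the entire initial payload for such an app can be as small as something like this (a sketch; the filenames are made up):

```html
<!DOCTYPE html>
<!-- Served with far-future cache headers; the app itself lives in app.js -->
<html>
  <head>
    <link rel="stylesheet" href="/app.css">
  </head>
  <body>
    <script src="/app.js"></script>
  </body>
</html>
```

Everything else, including all markup the user ever sees, is generated by the JavaScript once it’s running.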

This makes some people cringe, and many web purists (luddites?! #burn) would argue that everything should gracefully degrade and that there isn’t, or at least shouldn’t be, any distinction between a JavaScript app and a site. When I went to EdgeConf in NYC, the “progressive enhancement” panel said a lot of things like “your app should still be usable without JS enabled.” Often, “JavaScript is disabled” really just describes the moment when the browser is still downloading your JavaScript. To this I say:


It simply cannot be done. Like it or not, the web has moved on from that myopic view of it. The blanket graceful degradation view of the web no longer makes sense when you can now build apps whose core use case is fully dependent on a robust JavaScript runtime.

I had a great time at Chrome Dev Summit, but again, the core message of the “Instant Mobile Apps” talk was: “render your HTML on the server to avoid having your render-blocking code require downloading your JS before it can start executing.”

For simple content-driven sites, I agree. Completely. The demo in that particular talk was the Chrome developer documentation. But it’s a ridiculously easy choice to render documentation server side. (In fact the notion that there was ever a client-side rendered version to begin with was surprising to me.)

If your view of the web lacks a distinction between clientside apps and sites/documents, I’d go as far as to say that you’re now part of the problem.


Because that view enables corporate IT departments to argue for running old browsers without getting laughed out of the building.

Because that view keeps some decision makers from adopting 100% JavaScript apps and instead spending money on native apps with web connectivity.

Because that view wastes precious developer time inventing and promoting hacks and workarounds for shitty browsers when they could be building next-generation apps.

Because that view enables you to argue that your proficiency of browser CSS hacks for IE7 is still relevant.

Because that view will always keep the web locked into the browser.

What about offline?

I’m writing this on a plane without wifi and of course, using a native app to do so. There are two primary reasons for this:

  1. The offline web is still crap. See this post for more.
  2. All my favorite web-based tools are still stuck in the browser.

The majority of users will never ever open a browser without an Internet connection, type in a URL and expect ANYTHING to happen.

Don’t get me wrong, I’m very supportive of the offline first efforts; they are crucial for changing that expectation.

We have a very different view of apps that exist outside of the browser. In fact, the expectation is often reversed: “Oh right, I do need a connection for this to work”.

Chrome OS is one approach, but I think its 100% cloud-based approach is more hardcore than the world is ready to adopt and certainly is never going to fly with the indie data crowd or the otherwise Google-averse.

So, have I ranted enough yet?

According to Jake Archibald from Google, ServiceWorkers will land in Canary sometime early 2014. This work is going to fundamentally change what the web can do.

If you’re unfamiliar with ServiceWorkers (previously called Navigation Controllers), they let you write your own cache-control layer in JavaScript for your web application. ServiceWorkers promise to serve the purpose that appcache was intended for: truly offline web apps.

At a high level, they let JavaScript developers building clientside apps treat the existence of a network connection as an enhancement rather than an expectation.

You may think, “Oh, well, the reason we use the web is because access to the network provides our core value as an app.”

While I’d tend to agree that most apps fundamentally require data from the Internet to be truly useful, you’re missing the point.

Even if the value of your app depends entirely on a network connection, you can now intercept requests and choose to answer them from caches that you control, while in parallel attempting to fetch newer versions of those resources from the network.
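That “answer from cache while refreshing in parallel” strategy, which a ServiceWorker’s fetch handler makes possible, can be sketched in plain JS. The `network` function here is a stand-in for a real fetch, and the whole thing is a conceptual sketch rather than the actual ServiceWorker API:

```javascript
// Plain-JS sketch of the "stale-while-revalidate" idea: respond from
// cache immediately when possible, and refresh the cache in parallel.
var cache = {};

function staleWhileRevalidate(url, network, respond) {
  var cached = cache[url];
  if (cached) respond(cached);      // answer instantly from cache...
  network(url, function (fresh) {   // ...while fetching a newer copy
    cache[url] = fresh;
    if (!cached) respond(fresh);    // cold cache: answer from network
  });
}

// Fake network for demonstration:
function fakeNetwork(url, cb) { cb('fresh body for ' + url); }

staleWhileRevalidate('/timeline', fakeNetwork, function (body) {
  console.log(body); // first visit: comes from the network
});
staleWhileRevalidate('/timeline', fakeNetwork, function (body) {
  console.log(body); // second visit: answered from cache instantly
});
```

In a real ServiceWorker, the same logic lives in a fetch event handler backed by the Cache API, but the control flow is exactly this.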

If you think about it, that capability is no different than something like Facebook for iOS or Android.

That Facebook app’s core value unquestionably derives from seeing your friends’ latest updates and photos, which you’re obviously not going to get without a connection. But the fundamental difference is this: the native app will still open and show you all the cached content it has. As a result (and for other reasons), the OS has given those types of apps a privileged status.

With the full programmatic cache control that ServiceWorkers will offer, you’ll be able to load your app and whatever content you last downloaded from cache first, while optionally trying to connect and download new things from the network. The addition of a controllable cache layer in web apps means that an app like Facebook really has no compelling reason to be a native app. I mean, really. If you break it down, that app is mostly a friend timeline browser, right? (The key word there being browser.)

BUT, even with the addition of ServiceWorkers, there’s another extremely important difference: user perception.

We’ve spent years teaching users that the things they use in their web browser simply do not work offline. Users understand (at least on some unconscious level) that the browser is the native app that gets sites/documents from the Internet. From a user experience standpoint, trying to teach the average user anything different is like rolling a quarry full of rocks up a hill.

This is where it becomes apparent that failing to draw a distinction between fully clientside “apps” and websites is a disservice to all these new capabilities of the web platform. It doesn’t matter how good the web stack becomes; it will never compete with native apps in the “native” space while it stays stuck in the browser.

The addition of “packaged” Chrome apps is an admirable but, in my opinion, still inadequate attempt at addressing this issue.

At the point where a user on a mobile device opts to “add to home screen” the intent from the user is more than just a damn bookmark, they’re saying: “I want access to this on the same level as my native apps”. It’s a user’s request for an installation of that app, but in reality it’s treated as a shitty, half-assed install that’s really just a bookmark. But the intent from the user is clear: “I want a special level of quick and easy access to this specific app“.

So why not just embrace that what they’re actually trying to do is “install” that web application into their operating system?

Apple sort of does this for Mac Apps. After you first “sideload” (a.k.a. download from the web and try to run) a native Mac desktop app, they treat it a bit like an awkward stepchild when you first open it. They warn you and tell you: hey, this was an app downloaded from the Internet, are you sure you want to let this thing run?

While I’m not a fan of the language or the FUD involved with that, the timing makes perfect sense to me. At the point I’ve opted to “install” something to my homescreen on my mobile device (or the equivalent to that for desktop), that seems like the proper inflection point to verify with the user that they do, in fact, want to let this app have access to specific “privileged” OS APIs.

Without a simple way to install and authorize a clientside web app, these kinds of apps will always get stuck in the uncanny valley of half-assed, semi-installed apps.

So why bother in the first place? Why not just do native whenever you want to build an “app”? Beyond providing a way to build for multiple platforms, there’s one more thing the web has that native apps don’t have: a URL.

The UNIFORM RESOURCE LOCATOR concept is easy to take for granted, but it’s extremely useful to be able to reference things like links to emails inside Gmail, or a tweet, or a very specific portion of documentation. Being able to naturally link between apps on the web is what gives the web its power. It’s unfortunate that many developers, when they first start building single page applications, don’t update URLs as they go and fail to respect the “back” button, thus breaking the web.

But when done properly, blending the rich interactivity of native apps with the addressability and ubiquity of the web is a thing of beauty.

I cannot overstate how excited I am about Service Workers. Because finally, we’ll have the ability to build web applications that treat network resources the same way that good native applications do: as an enhancement.

Of course, the big IF is whether platforms play along and actually treat these types of apps as first class citizens.

Call me an optimist, but I think the capabilities that ServiceWorkers promise us will shine a light on the bizarre awkwardness of the concept of opening a browser to access offline apps.

The web platform’s capabilities have outgrown the browser.

Let’s help the web to make its next big push.

I’m @HenrikJoreteg on twitter. I’d love to hear your thoughts on this.

For further reading on ServiceWorkers, here is a great explainer doc.

Also, check out my book on building sanely structured single page applications.

● posted by Henrik Joreteg

I had the privilege to attend EdgeConf 2013 as a panelist and opening speaker for the Realtime Data discussion.

It was an incredible, deeply technical conference with an interesting discussion/debate format.

Here’s the video from the panel:

The slides from my talk can be found on speakerdeck.

It was a privilege to attend — I’m very grateful to Andrew Betts and FT Labs for the opportunity to be there.

● posted by Melanie Brown

We asked Portlandians about realtime technologies—and, um, they answered!

DISCLAIMER: No hipsters feelings were harmed in the making of this video.

Film by Miss Melanie Brown
Music by YACHT

If you enjoyed this, be sure to check out last year’s video, too. :)

● posted by Henrik Joreteg

These days, more and more HTML is rendered on the client instead of sent pre-rendered by the server. So if you’re building a web app that uses a lot of client-side JavaScript, you’ll doubtless want to create some HTML in the browser.

How we used to do it

First a bit of history. When I first wrote ICanHaz.js I was just trying to ease a pain point I was having: generating a bunch of HTML in a browser is a pain.

Why is it a pain? Primarily because JS doesn’t cleanly support multi-line strings, but also because there isn’t an awesome string interpolation system built into JS.

To work around that, ICanHaz.js, like lots of other clientside template systems, uses a hack to make it easier to send arbitrary strings to the browser. As it turns out, browsers ignore content in <script> tags if you give them a type attribute that isn’t text/javascript. So, ICanHaz reads the content of tags on the page that say <script type="text/html">, which can contain templates or any other multi-line strings for that matter. ICanHaz then turns each of those templates into a function that you can call to render that string with your data mixed into it. For example:

This html:

<script id="user" type="text/html">
    <p class="name">Hello I'm {{ name }}</p>
    <p><a href="{{ twitter }}">@{{ twitter }}</a></p>
</script>

Is read by ICanHaz and turned into a function you call with your own data, like this:

// your data
var data = {
  name: "Henrik",
  twitter: "HenrikJoreteg"
};

// I can has user??
var html = ich.user(data);

This works, and lots of people clearly thought the same, as it’s been quite a popular library.
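Stripped of Mustache’s feature set, the underlying trick is tiny. Here’s a simplified stand-in for what ICanHaz does (not its actual source); in the browser the template string would come from the script tag’s innerHTML, but it’s inlined here so the sketch stands alone:

```javascript
// toy version of the script-tag template trick
var template = '<p class="name">Hello I\'m {{ name }}</p>';

// turn a template string into a reusable render function
function makeRenderer(tmpl) {
  return function (data) {
    return tmpl.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, key) {
      return data[key] == null ? '' : data[key];
    });
  };
}

var user = makeRenderer(template);
var html = user({ name: 'Henrik' });
// html is now: <p class="name">Hello I'm Henrik</p>
```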

Why that’s less-than-ideal

It totally works, but if you think about it, it’s a bit silly. It’s not super fast and you’re making the client do a bunch of extra parsing just to turn text into a function. You also have to send the entire template engine to the browser which is a bunch of wasted bandwidth.

How we’re doing it now

What I finally realized is that all you actually want when doing templating on the client is the end result that ICanHaz gives you: a function that you call with your data that returns your HTML.

Typically, smart template engines, like the newer versions of Mustache.js, do this for you. Once the template has been read, it gets compiled into a function that is cached and used for subsequent renderings of that same template.

Thinking about this leaves me asking: why don’t we just send the javascript template function to the client instead of doing all the template parsing/compiling on the client?

Well, frankly, because I didn’t really know of a great way to do it.

I started looking around and realized that Jade (which we already use quite a bit at &yet) has support for compiling as a separate process and, in combination with a small little runtime snippet, this lets you create JS functions that don’t need the whole template engine to render. Which is totally awesome!

So, to make it easier to work with, I wrote a little tool: templatizer that you can run on the server-side (using node.js) to take a folder full of Jade templates and turn them into a JavaScript file that you can include in your app that just has the template rendering functions.
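To illustrate the “compile ahead of time, ship only functions” idea (this is not templatizer’s actual code, which compiles Jade; it’s just a miniature of the concept), here’s a build-step sketch that turns a `{{ var }}` template into a standalone function. Writing `greet.toString()` into a JS file would give you a template that needs no engine at runtime:

```javascript
// compile a {{ var }} template string into a standalone function
function compile(template) {
  // split with a capture group: odd indices are variable names,
  // even indices are literal chunks of the template
  var parts = template.split(/\{\{\s*(\w+)\s*\}\}/);
  var body = 'return ' + parts.map(function (part, i) {
    return i % 2 ? 'data.' + part : JSON.stringify(part);
  }).join(' + ') + ';';
  // the generated function has no dependency on any template engine
  return new Function('data', body);
}

var greet = compile("<p>Hello I'm {{ name }}</p>");
greet({ name: 'Henrik' }); // "<p>Hello I'm Henrik</p>"
```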

The end result

From my tests the actual rendering of templates is 6 to 10 times faster. In addition you’re sending way less code to the browser (because you’re not sending a whole templating engine) and you’re not making the browser do a bunch of work you could have already done ahead of time.

I still need to write more docs and use it for a few more projects before we have supreme confidence in it, but I’ve been quite happy with the results so far and wanted to share it.

I’d love to hear your thoughts. I’m @HenrikJoreteg on twitter and you should follow @andyet as well and check out our awesome team same-pagification tool And Bang.

See you on the Internet. Go build awesome stuff!

● posted by Henrik Joreteg

The single biggest challenge you’ll have when building complex clientside applications is keeping your code base from becoming a garbled pile of mess.

If it’s a longer running project that you plan on maintaining and changing over time, it’s even harder. Features come and go. You’ll experiment with something only to find it’s not the right call.

I write lots of single page apps and I absolutely despise messy code. Here are a few techniques, crutches, coping mechanisms, and semi-pro tips for staying sane.

Separating views and state

This is the biggest lesson I’ve learned building lots of single page apps. Your view (the DOM) should just be a blind slave to the model state of your application. For this you could use any number of tools and frameworks. I’d recommend starting with Backbone.js (by the awesome Mr. @jashkenas) as it’s the easiest to understand, IMO.

Essentially, you’ll build up a set of models and collections in memory in the browser. These models should be completely oblivious to how they’re used. Then you have views that listen for changes in the models and update the DOM. This could be a whole giant blog post in and of itself, but this core principle of separating your views and your application state is vital when building large apps.
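The principle doesn’t depend on Backbone specifically. Here’s a framework-free sketch of the same split (Backbone gives you a much richer version of both halves): the model only stores state and announces changes; the “view” just re-renders in response:

```javascript
// a model that stores state and announces changes; it knows
// nothing about rendering
function Model(attrs) {
  this.attrs = attrs || {};
  this.handlers = [];
}
Model.prototype.onChange = function (fn) {
  this.handlers.push(fn);
};
Model.prototype.set = function (key, value) {
  this.attrs[key] = value;
  // notify whoever cares; the model doesn't know who that is
  this.handlers.forEach(function (fn) { fn(key, value); });
};

// the "view" is a blind slave to model state
var task = new Model({ description: 'write docs' });
var rendered = '';
task.onChange(function () {
  rendered = '<li>' + task.attrs.description + '</li>';
});

task.set('description', 'ship it');
// rendered is now '<li>ship it</li>'
```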

Common JS Modules

I’m not going to get into a debate about module styles and script loaders. But I can tell you this: I haven’t seen any cleaner, simpler mechanism for splitting your code into nice isolated chunks than Common JS modules.

It’s the same style/concept that is used in node.js. By following this style I get the additional benefit of being able to re-use modules written for the client on the server and vice versa.

If you’re unfamiliar with the Common JS modules style, your files end up looking something like this:

// you import things by using the special `require` function and you can
// assign the result to a variable

var StrictModel = require('strictModel'),
    _ = require('underscore');

// you expose functionality to other modules by declaring your main export
// like this.
module.exports = StrictModel.extend({
    type: 'navItem',
    props: {
        active: ['boolean', true, false],
        url: ['string', true, ''],
        position: ['number', true, 200]
    },
    init: function () {
        // do something
    }
});

Of course, browsers don’t have support for these kinds of modules out of the box (there is no window.require). But, luckily, that can be fixed. I use a clever little tool called Stitch written by Sam Stephenson of 37signals. There’s also another one by @substack called browserify that lets you use a lot of the node.js utils on the client as well.

What they do is create a require function and bundle up a folder of modules into an app package.

Stitch is written for node.js but you could just as easily use another server-side language and only use node to build your client package. Ultimately it’s just creating a single JS file, and at that point you can serve it like any other static file.

You set up Stitch in a simple express server like this:

// require express and stitch
var express = require('express'),
    stitch = require('stitch');

// define our stitch package
var appPackage = stitch.createPackage({
    // you add the folders whose contents you want to be "require-able"
    paths: [
        __dirname + '/clientmodules', // this is where i put my standalone modules
        __dirname + '/clientapp' // this is where i put my modules that compose the app
    ],
    // you can also include normal dependencies that are not written in the
    // CommonJS style
    dependencies: [
        somepath + '/jquery.js',
        somepath + '/bootstrap.js'
    ]
});

// init express
var app = express.createServer();

// define a path where you want your JS package to be served
app.get('/myAwesomeApp.js', appPackage.createServer());

// start listening for requests
app.listen(3000);
At this point you can just go to http://localhost:3000/myAwesomeApp.js in a browser and you should see your whole JS package.

This is handy while developing because you don’t have to re-start or recompile anything when you make changes to the files in your package.

Once you’re ready to go to production you can use the package and UglifyJS to write a minified file to disk to be served statically:

var uglifyjs = require('uglify-js'),
    fs = require('fs');

function uglify(code) {
    var ast = uglifyjs.parser.parse(code);
    ast = uglifyjs.uglify.ast_mangle(ast);
    ast = uglifyjs.uglify.ast_squeeze(ast);
    return uglifyjs.uglify.gen_code(ast);
}

// assuming `appPackage` is in scope of course, this is just a demo
appPackage.compile(function (err, source) {
    if (err) throw err;
    fs.writeFileSync('build/myAwesomeApp.js', uglify(source));
});

Objection! It’s a huge single file, that’s going to load slow!

Two things. First, don’t write a huge app with loads and loads of giant dependencies. Second, cache it! If you do your job right, your users will only download that file once, and you can probably do it while they’re not even paying attention. If you’re clever you can even prime their cache by lazy-loading the app on the login screen, or some other such cleverness.

Not to mention, for single page apps, speed once your app has loaded is much more important than the time it takes to do the initial load.

Code Linting

If you’re building large JS apps and not doing some form of static analysis on your code, you’re asking for trouble. It helps catch silly errors and forces code style consistency. Ideally, no one should be able to tell who wrote what part of your app. If you’re on a team, it should all be uniform within a project. How do you do that? We use a slick tool written by Nathan LaFreniere on our team called, simply, precommit-hook. So all we have to do is:

npm install precommit-hook

What that will do is create a git pre-commit hook that uses JSHint to check your project for code style consistency before each commit. Once upon a time there was a tool called JSLint written by Mr. Crockford. Nowadays (love that silly word) there’s a less strict, more configurable version of the same project called JSHint.

The neat thing about the npm version of JSHint is that if you run it from the command line it will look for a configuration file (.jshintrc) and an ignore file (.jshintignore), both of which the precommit hook will create for you if they don’t exist. You can use these files to configure JSHint to follow the code style rules that you’ve defined for the project. This means that you can now run jshint . at the root of your project and lint the entire thing to make sure it follows the code styles you’ve defined in the .jshintrc file. Awesome, right!?!

Our .jshintrc files usually look something like this:

{
    "asi": false,
    "expr": true,
    "loopfunc": true,
    "curly": false,
    "evil": true,
    "white": true,
    "undef": true,
    "predef": ["app"]
}

The awesome thing about this approach is that you can enforce consistency, and the rules for the project are contained and actually checked into the project repo itself. So if you decide to have a different set of rules for the next project, fine. It’s not a global setting; it’s defined and set by whomever runs the project.

Creating an “app” global

So what makes a module? Ideally, I’d suggest each module being in its own file and only exporting one piece of functionality. Only having a single export helps you keep clear what purpose the module has and keeps it focused on just that task. The goal is having lots of modules that do one thing really well, so that your app just combines modules into a coherent story.

When I’m building an app, I intentionally have one main controller object of sorts. It’s attached to the window as “app” just for my own convenience. For modules that I’ve written specifically for this app (stuff that’s in the clientapp folder) I allow myself the use of that global to perform app-level actions like navigating, etc.

Using events: Modules talking to modules

How do you keep your modules cleanly separated? Sometimes modules are dependent on other modules. How do you keep them loosely coupled? One good technique is triggering lots of events that can be used as hooks by other code. Many of the core components in node.js are extensions of EventEmitter. The reason is that you can register handlers for stuff that happens to those items, just like you can register a handler for someone clicking a link in the browser. This pattern is really useful when building re-usable components yourself. Exporting things that inherit from event emitters means that the code using your module can specify what it cares about, rather than the module having to know. For example, see the super simplified version of the And Bang js library below.

There are lots of implementations of event emitters. We use a modified version of one from the LearnBoost guys: @tjholowaychuk, @rauchg and company. It’s wildemitter on my github if you’re curious. But the same concept works for any of the available emitters. See below:

// require our emitter
var Emitter = require('wildemitter');

// Our main constructor function
var AndBang = function (config) {
    // extend with emitter
    Emitter.call(this);
};

// inherit from emitter
AndBang.prototype = new Emitter();

// Other methods
AndBang.prototype.setName = function (newName) { = newName;
    // we can trigger arbitrary events
    // these are just hooks that other
    // code could choose to listen to.
    this.emit('nameChanged', newName);
};

// export it to the world
module.exports = AndBang;

Then, other code that wants to use this module can listen for events like so:

var AndBang = require('andbang'),
    api = new AndBang();

// now this handler will get called any time the event gets triggered
api.on('nameChanged',  function (newName) { /* do something cool */ });

This pattern makes it easy to expose functionality without having to know anything about the consuming code.


I’m tired of typing so that’s all for now. :)

But I just thought I’d share some of the tools, techniques, and knowledge we’ve acquired through blood, sweat, and mistakes. If you found it helpful or useful, or if you want to yell at me, you can follow me on twitter: @HenrikJoreteg.

See ya on the interwebs! Build cool stuff!

● posted by Henrik Joreteg

The other day, DHH[1] tweeted this:

Forcing your web ui to be “just another client” of your API violates the first rule of distributed systems: Don’t write distributed systems.

— DHH (@dhh) June 12, 2012

In building the new Basecamp, 37signals chose to do much of the rendering on the server-side and have been rather vocal about that, bucking the recent trend to build richly interactive, client-heavy apps. They cite speed, simplicity, and cleanliness. I quote DHH, again:

It’s a perversion to think that responding to Ajax with HTML fragments instead of JSON is somehow dirty. It’s simple, clean, and fast.

— DHH (@dhh) June 12, 2012

Personally, I think this generalization is a bit short-sighted.

The “rule” cited in the first tweet about distributed systems is from Martin Fowler, who says:

First Law of Distributed Object Design: Don’t distribute your objects!

So, yes, duplicating state into the client is essentially just that: you’re distributing your objects. I’m not saying I’m wiser than Mr. Fowler, but I do know that keeping client state can make an app much more useful and friendly.

Take Path for iPhone. It caches state, so if you’re offline you can read posts from your friends. You can also post new updates while offline that just seamlessly get posted when you’re back on a network. That kind of use case is simply impossible unless you’re willing to duplicate state to the client.

As application developers we’re not trying to dogmatically enforce “best practices” of computer science just for the sake of dogma, we’re trying to build great experiences. So as soon as we want to support that type of use case, we have to agree that it’s OK to do it in some circumstances.

As some have pointed out and DHH acknowledged later, even Basecamp goes against his point with the calendar. In order to add the type of user experience they want, they do clientside MVC. They store some state in the client and do some client-side rendering. So, what’s the difference in that case?

I’m not saying all server side rendering is bad. I’m just saying, why not pick one or the other? It seems to me (and I actually speak from experience here) that things get really messy once you start mixing presentation and domain logic.

As it turns out, Martin Fowler actually wrote A WHOLE PAPER about separating presentation from domain logic.

The other point I’d like to make is this: What successful, interesting web application do you know/use/love that doesn’t have multiple clients?

As soon as you have any non-web client, such as an iPhone app, or a dashboard widget, or a CLI, or some other webapp that another developer built, you need a separate data API anyway.

Obviously, 37signals has an API. But, gauging by the docs, there are pieces of the API that are incomplete. Another benefit of dog-fooding your own API is that you can’t ship with an incomplete API if you built your whole app on it.

We’re heads-down on the next version of And Bang which is built entirely on what will be our public API. This re-engineering has been no small undertaking, but we feel it will be well worth the effort.

The most interesting apps we use are not merely experienced through a browser anymore. APIs are the one true cross-platform future you can safely bank on.

I’m all ears if you have a differing opinion. Hit me up on twitter @HenrikJoreteg and follow @andyet and @andbang if you’re curious about what else we’re up to.

[1] DHH (David Heinemeier Hansson) of Ruby on Rails and 37signals fame is not scared to state his opinions. I think everyone would agree that his accomplishments give him the right to do so. To be clear, I have nothing but respect for 37 Signals. Frankly, their example is a huge source of inspiration for bootstrapped companies like ours at &yet.

● posted by Nathan Fritz

Now you’re thinking with feeds!

When I look at a single-page webapp, all I see are feeds; I don’t even see the UI anymore. I just see lists of items that I care about. Some of which only I have access to and some of which other groups have access to. I can change, delete, re-position, and add to the items on these feeds and they’ll propagate to the people and entities that have access to them (even if it is just me on another device or at a later date).

I’ve seen it this way for years, but I haven’t grokked it enough to articulate what I was seeing until now.

What Thoonk Is

Thoonk is a series of higher-level objects built on Redis that send publish, edit, delete, and position events when their data changes. These objects are feeds for making real-time applications and feed services.

What is a Thoonk feed?

A Thoonk feed is a list of indexed data objects that are limited by topic and by what a single entity might subscribe to. An RSS/ATOM feed qualifies. What makes a Thoonk feed different from a table? A table is limited to a topic, but lacks single entity interest limitations. A Thoonk feed isn’t just a message broker, it’s a database-store that sends out events when the data changes.

Let’s use &bang as an example. Each team-member has a list of tasks. In a relational database we might have a table that looks like this:


id | team_id | member_id | description | complete bool | etc.

Whenever a user renders their list, I would query that list, limiting by a specific user and a specific team.

If we converted this table, without changing it, into a Thoonk feed, then we would only be able to subscribe to ALL tasks and not just the tasks of a particular team or member. So, instead, a Thoonk feed might look like:


{description: "", completed: false, etc, etc}

Now when the user wants a rendered list of tasks, I can do one index look-up rather than three, and I am able to subscribe to changes on the specific team member’s tasks, or even to team:353:member:*:tasks to subscribe to all of that team’s tasks.

[Note: I suppose you could arrange a relational database this way, but it wouldn’t really be able to take advantage of SQL, nor could you subscribe to the table to get changes.]
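The `*` in a key like `team:353:member:*:tasks` works like Redis’s glob-style patterns for PSUBSCRIBE. Here’s a rough sketch of the matching, for illustration only (Redis does this internally, and real patterns also support `?` and `[...]`):

```javascript
// match a glob-style pattern (only "*" handled here) against a key name
function matchesPattern(pattern, key) {
  var re = new RegExp('^' + pattern.split('*').map(function (chunk) {
    // escape regex metacharacters in the literal chunks
    return chunk.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  }).join('.*') + '$');
  return re.test(key);
}

matchesPattern('team:353:member:*:tasks', 'team:353:member:42:tasks'); // true
matchesPattern('team:353:member:*:tasks', 'team:999:member:42:tasks'); // false
```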

It’s Feeds All the Way Up

If I use Thoonk subscribe-able feeds as my data-storage engine, life gets so much easier. When a user logs in, I can subscribe contextualized callbacks just for them to the feeds of data that they have access to read from. This way, if their data changes for any reason, by any process, by any server, it can bubble all the way up to the user without having to run any queries. I can also subscribe separate processes that can automatically scrub, pre-index, cull, or any number of tasks to any Thoonk feed a particular process cares about. I can use processes in mixed languages to provide monitoring and additional API’s to the feeds.

But What About Writes?

Let’s not think in terms of writes. Writes are just changes to feed items (publishing, editing, deleting, repositioning) that write the data to ram/disk and inform any subscribers of the change. Let’s instead think in terms of user-actions. A user-action (such as delegating a task to another user in &bang) needs ACL and may affect multiple feeds in a single call. If we defer user-actions to jobs (a special kind of Thoonk feed), we can easily isolate, scale, share, and distribute the business-logic involved in dealing with a user-action.

What Are Thoonk Jobs?

Thoonk Jobs are items that represent business-logic needing to be done reliably, a single time, by any available worker. Jobs are consumed as fast as a worker-pool can consume them. A job feed is a list of job items, each of which may exist in the state of available, in-flight, or stalled. Available jobs are taken and placed in an in-flight set while they are being processed. When the job is done, it is removed from the in-flight set and its item is deleted. If the worker fails to complete the job (either because of an error, distaste, or a monitoring process deciding that the job has timed out), the job may be placed back on the available list or into the stalled set.

Why use Thoonk Jobs for User-Actions?

  • User-actions that fail for some reason can be retried (you can also limit the # of retries).
  • The work can be distributed across processes and servers.
  • User-actions can burst much faster than the workers can handle them.
  • A user-action that ultimately fails can be stalled, where an admin is informed to investigate and potentially edit and/or retry when the issue that caused it has been resolved or to test said resolution.
  • Any process in any language can contribute jobs (and get results from them) without having to re-implement the business logic or ACL.

The Last One is a Doozy

Scaling, reliability, monitoring and all of that is nice, but being able to build your application out rather than up is, I believe, the greatest reason for this approach. &bang is written in node.js, but if I have a favorite library for implementing a REST interface or an XMPP interface written in Python or Ruby (or any other language), I can quickly put that together and add it as a process. In fact, I can pretty much add any piece of functionality as a process without having to reload the rest of the application server, and really isolate a feature as its own process. User-actions from this process can be published to Thoonk Job feeds without having to worry about request validation or ACL since that is handled by the worker itself.

Rather than having a very large, complex application, I can have a series of very small processes that automatically cluster and are informed of changes in areas of their specific concerns.

Scaling Beyond Redis

Our testing indicates that Redis will not be a choke point until we have nearly 100,000 active users. The plan to scale beyond that is to shard &bang by teams. A quick look-up will tell us which server a team resides on, and users and processes can subscribe callbacks to connections on those servers. In that way, we can run many Redis servers and scale horizontally. High-availability is handled by a slave for each shard and a gossip protocol for promoting slaves.
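The look-up itself can be trivially simple. Here’s a toy sketch (the server names and table are made up; in practice the routing table would live somewhere every process can reach):

```javascript
// toy routing table mapping team ids to the Redis server that holds them
var shardForTeam = {
  353: 'redis-1:6379',
  710: 'redis-2:6379'
};

function serverFor(teamId) {
  return shardForTeam[teamId];
}

serverFor(353); // 'redis-1:6379'
```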

Conflict Resolution and Missed Updates

Henrik’s recent post spawned a couple of questions about conflict resolution. First I’ll give a deflection, and then I’ll give a real answer.

&bang doesn’t yet need conflict resolution. None of the writes are actually done on the client, as they are all RPC calls which go into a job queue. Then the workers validate the payload, check the ACL, and update some feeds, at which point the data bubbles back up to the client. The feed updates are atomic, and happen quite quickly. Also, two users being able to edit the same item only comes up with delegated tasks, in which case the most recent edit wins.

Ok, now the real answer. Thoonk is going to have revision history and incrementing revision numbers for 1.0. Each historical item is the same as the publish/edit/delete/reposition updates that are sent via pubsub. When a user-change job is done, the client can send its current revision numbers for the feeds involved, and thus conflicts on an edit can be detected. The historical data should be enough to facilitate some form of conflict resolution (determined by the application implementer). The revision numbers can also bubble up to the client, so the client can detect missed updates and ask for a replay from a given revision number.

Currently we’re punting on missed items. Anytime the &bang user is disconnected, the app is disabled and refreshed when it is able to reconnect. A more elaborate solution using the new Thoonk features I just listed is probably coming and perhaps some real offline-mode support with local “dirty” changes that get resolved when you come back online.

All Combined

Using Thoonk, we were able to make &bang scale to 10s of thousands of active users on a single server, burst user-activity beyond our choke-points, isolate user-action business-logic and ACL, automatically cluster to more servers and processes, choose any Redis client library supported language for individual features and interfaces, bubble data changes all the way up to the user regardless of the source of change, provide an easy way of iterating, and generally create a kick-ass, realtime, single-page webapp.

Can I Use Thoonk Now?

Thoonk.js and are MIT licensed, and free to use. While we are using Thoonk.js in production and it is stable there, the API is not final. Currently I’m moving the feed logic to Redis Lua scripts, which will be officially supported in Redis 2.6 with an RC1 promised for this December. I plan to be ready for that. The Lua scripting will give us performance gains and remove unnecessary extra logic to keep publish/edit/delete/reposition commands atomic, but most importantly it will allow us to share the core code with all implementations of Thoonk, allowing us to easily add and support more languages. As mentioned previously, as I do the Redis Lua scripting, I’ll be adding revision history and revision numbers to feeds, which will facilitate conflict detection and replay of missed events.

That said, feel free to comment, contribute, steal, or abuse the project in the meantime. A 1.0 release will indicate API stability, and I will encourage its use in production at that point. I will soon be breaking out the Lua scripts to their own git repo for easy implementation.

If you want to keep an eye on what we’re doing, follow me @fritzy and @andyet on twitter. Also be sure to check out &bang for getting stuff done with your team.

If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Shoot Henrik an email ( and tell us what we can do to help.

● posted by Henrik Joreteg

This last year, we’ve learned a lot about building scalable realtime web apps, most of which has come from shipping &bang.

&bang is the app we use to keep our team in sync. It helps us stay on the same page, bug each other less and just get stuff done as a team.

The process of actually trying to get something out the door on a bootstrapped budget helped us focus on the most important problems that needed to be solved to build a dynamic, interactive, realtime app in a scalable way.

A bit of history

I’ve written a couple of posts on Backbone.js since discovering it. The first one introduces Backbone.js as a lightweight client-side framework for building clean, stateful client apps. In the second post I introduced Capsule.js, a tool I built on top of Backbone that adds nested models and collections and also lets you keep a mirror of your client-side state on a node.js server to seamlessly synchronize state between different clients.

That approach was great for quickly prototyping an app. But as I pointed out in that post, that’s a lot of in memory state being stored on the server and simply doesn’t scale very well.

At the end of that post I hinted at what we were aiming to do to ultimately solve that problem. So this post is meant to be a bit of an update on those thoughts.

Our new approach

Redis is totally freakin’ amazing. Period. I can’t say enough good things about it. Salvatore Sanfilippo is a god among men, in my book.

Redis can scale.

Redis can do PubSub.

PubSub just means events. Just like you can listen for click events in JavaScript in a browser, you can listen for events in Redis.

Redis, however, is a generic tool. It’s purposely fairly low-level so as to be broadly applicable.

What makes Redis so interesting, from my perspective, is that you can treat it as a shared memory between processes, languages and platforms. What that means, in a practical sense, is that as long as each app that uses it interacts with it according to a pre-defined set of rules, you can write a whole ecosystem of functionality for an app in whatever language makes the most sense for that particular task.

Enter Thoonk

My co-worker, Nathan Fritz, is the closest thing you can get to being a veteran of realtime technologies.

He’s a member of the XSF council for the XMPP standard and probably wrote his first chat bot before you knew what chat was. His Sleek XMPP Python library is iconic in the XMPP community. He has a self-declared un-natural love for XEP-60 which describes the XMPP PubSub standard.

He took everything he learned from his work on that standard and built Thoonk. (In fact, he actually kept the PubSub spec open as he built the Javascript and Python implementations of Thoonk.)

What is Thoonk??

Thoonk is an abstraction on Redis that provides higher-level datatypes for a more approachable interface. Essentially, staring at Redis as a newbie is a bit intimidating. Not that it’s hard to interface with, it’s just kind of tricky to figure out how to logically structure and retrieve your data. Thoonk simplifies that into a few datatypes that describe common use cases: primarily “feeds”, “sorted feeds”, “queues” and “jobs”.

You can think of a feed as an ad-hoc database table. They’re “cheap” to create and you simply declare them to make them or use them. For example, in &bang, we have all our users in a feed called “users” for looking up user info. But also, each user has a variety of individual feeds. For example, they have a “task” feed and a “shipped” feed. This is where it veers from what people are used to in a relational database model, because each user’s tasks are not a part of a global “tasks” feed. Instead, each user has a distinct feed of tasks because that’s the entity we want to be able to subscribe to.

So rather than simply breaking down a model into types of data, we end up breaking things into groups of items (a.k.a. “feeds”) that we want to be able to track changes to. So, as an example, we may have something like this:

// our main user feed
var userFeed = thoonk.feed('users');

// an individual task feed for a user
var userTaskFeed = thoonk.sortedFeed('team.andyet.members.{{memberID}}.tasks');

Marrying Thoonk and Capsule

Capsule was actually written with Thoonk in mind. In fact, that’s why they were named the way they were: you know those lovely pneumatic tube systems they use to send cash to bank tellers and at Costco? (PPSHHHHHHH—THOONK! And here’s your capsule.)

Anyway, the integration didn’t end up being quite as tight as we had originally thought but it still works quite well. Loose coupling is better anyway right?

The core problem I was trying to solve with Capsule was unifying the models that are used to represent the state of the app in the browser and the models you use to describe your data on the server—ideally, not just unifying the data structure, but also letting me share behavior of those objects.

Let me explain.

As I mentioned, we recently shipped &bang. It lets a group of people share their task lists and what they’re actively working on with each other.

It spares you from a lot of “what are you working on?” conversations and increases accountability by making your work quite public to the team.

It’s a realtime, keyboard-driven, web app that is designed to feel like a desktop app. &bang is a node.js application built entirely with the methods described here.

So, in &bang, a team model has attributes as well as a couple of nested backbone collections such as members and chat messages. Each member has attributes and other nested collections, tasks, shipped items, etc.

Initial state push

When a user first logs in we have to send the entire model state for the team(s) they’re on so we can build out the interface (see my previous post for more on that). So, the first thing we do when a user logs in is subscribe them to the relevant Thoonk feeds and perform the initial state transfer to the client.

To do this, we init an empty team model on the client (a backbone/capsule model shared between client/server). Then we recurse through our Thoonk feed structures on the server to export the data from the relevant feeds into a data structure that Capsule can use to import that data. The team model is inflated with the data from the server and we draw the interface.

From there, the application is kept in sync using events from Thoonk that get sent over websockets and applied to the client interface. Events like “publish”, “change”, “retract” and “position”.

Once we got the app to the point where this was all working, it was kind of a magical moment, because at this point, any edits that happen in Thoonk simply get pushed out through the event propagation all the way to the client. Essentially, the interface that a user sees is largely a slave to the server, except, of course, for the portions of state that we let the user manipulate locally.

At this point, user interactions with the app that change data are all handled through RPC calls. Let’s jump back to the server and you’ll see what I mean.

I thought you were still using Capsule on the server?

We do, but differently. Here’s how that’s handled.

In short… it’s a job system.

Sounds intimidating right? As someone who started in business school, then gradually got into front-end dev, then back-end dev, then a pile of JS, job systems sounded scary. In my mind they’re for “hardcore” programmers like Fritzy or Nate or Lance from our team. Job systems don’t have to be that scary.

At a very high level you can think of a “job” as a function call. The key difference being, you don’t necessarily expect an immediate result. To continue with examples from &bang: a job may be to “ship a task”. So, what do we need to know to complete that action? We need the following:

  • member Id of the user shipping the task
  • the task id being completed (we call this “shipping”, because it’s cooler, and it’s a reminder that finishing is what’s important)

We can derive everything else we need from those key pieces of information.

So, rather than call a function somewhere:

shipTask(memberId, taskId)

We can just describe a job as a simple JSON object:

    {
        userId: <user requesting the job>,
        taskId: <id of task to 'ship'>,
        memberId: <id of team member>
    }

Then we can add that to our “shipTask” job queue.
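With Thoonk’s job type, that might look something like this (the method names here are assumptions based on the feed API shown above, not a verbatim excerpt):

```javascript
// a job queue, declared the same way as a feed
var shipTaskQueue = thoonk.job('shipTask');

// queue the job; a worker picks it up whenever it's ready
shipTaskQueue.put(JSON.stringify({
  userId: 'member42', // rolled in from the session server-side, never trusted from the client
  taskId: 'task1',
  memberId: 'member42'
}));
```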


The cool part about the event propagation I talked about above is we really don’t care so much when that job gets done. Obviously fast is key, but what I mean is, we don’t have to sit around and wait for a synchronous result because the event propagation we’ve set up will handle all the application state changes.

So, now we can write a worker that listens for jobs from that job queue. In that worker we’ll perform all the necessary related logic. Specifically stuff like:

  • Validating that the job is properly formatted (contains required fields of the right type)
  • Validating that the user is the owner of that task and is therefore allowed to “ship” it.
  • Modifying Thoonk feeds accordingly.

Encapsulating and reusing model logic

You’ll notice that part of that list requires some logic. Specifically, checking to see if the user requesting the action is allowed to perform it. We could certainly write that logic right here, in this worker. But, in the client we’re also going to want to know if a user is allowed to ship a given task, right? Why write that logic twice?

Instead we write that logic as a method of a Capsule model that describes a task. Then we can use the same method to determine whether to show the UI that lets the user perform the action in the browser as we use on the back end to actually perform the validation. We do that by re-inflating a Capsule model for that task in our worker code, calling the canEdit() method on it, and passing it the user id requesting the action. The only difference is that on the server side we don’t trust the user to tell us who they are: on the server we roll the user id we have for that session into the job when it’s created, rather than trust the client.
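As a sketch of that pattern (the class and method bodies here are illustrative, not Capsule’s actual internals):

```javascript
// a minimal Task model; in the real app this would be a Capsule model
function Task(attrs) {
  this.attributes = attrs || {};
}

// shared rule: only the task's owner may ship it
Task.prototype.canEdit = function (userId) {
  return this.attributes.memberId === userId;
};

// browser: decide whether to even show the "ship" action
var task = new Task({id: 'task1', memberId: 'member42'});
console.log(task.canEdit('member42')); // true

// worker: re-inflate the task from the feed's stored JSON and enforce the
// same rule, using the session's user id rather than anything the client sent
var inflated = new Task(JSON.parse('{"id":"task1","memberId":"member42"}'));
console.log(inflated.canEdit('intruder')); // false
```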


One other, hugely important thing that we get by using Capsule models on the server is some security features. There are some model attributes that are read-only as far as the client is concerned. What if we get a job that tries to edit a user’s ID? In a backbone model if I call:

backboneModelInstance.set({id: 'newId'});

That will change the ID of the object. Clearly that’s not good in a server environment when you’re trusting that to be a unique ID. There are also lots of other fields you may want on the client but you don’t want to let users edit.

Again, we can encapsulate that logic in our Capsule models. Capsule models have a safeSet method that assumes all inputs are evil. Unless an attribute is whitelisted as clientEditable it won’t set it. So when we go to set attributes within the worker on the server we use safeSet when dealing with untrusted input.
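The idea behind safeSet can be sketched like this (the model, attribute names, and whitelist are made up for illustration):

```javascript
function Member(attrs) {
  this.attributes = attrs || {}; // trusted, server-side construction
}

// attributes the client is allowed to change
Member.prototype.clientEditable = ['name', 'status'];

// trusted, server-originated writes: set anything
Member.prototype.set = function (attrs) {
  for (var key in attrs) this.attributes[key] = attrs[key];
};

// untrusted, client-originated writes: assume all input is evil and
// only copy over whitelisted attributes
Member.prototype.safeSet = function (attrs) {
  for (var key in attrs) {
    if (this.clientEditable.indexOf(key) !== -1) {
      this.attributes[key] = attrs[key];
    }
  }
};

var member = new Member({id: 'member42', name: 'Henrik'});

// evil input: tries to change its own id
member.safeSet({id: 'evil-new-id', name: 'Hank'});

console.log(member.attributes.id);   // 'member42' -- untouched
console.log(member.attributes.name); // 'Hank'
```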

The other important piece of securing a system that lets users indirectly add jobs to your job system is ensuring that the jobs you receive validate against your schema. I’m using a node implementation of JSON Schema for this. I’ve heard some complaints about that proposed standard, but it works really well for the fairly simple use case I need it for.

A typical worker may look something like this:

workers.editTeam = function () {
  var schema = {
    type: "object",
    properties: {
      user: {
        type: 'string',
        required: true
      },
      id: {
        type: 'string',
        required: true
      },
      data: {
        type: 'object',
        required: true
      }
    }
  };

  editTeamJob.get(0, function (err, json, jobId, timeout) {
    var feed = thoonk.feed('teams'),
      result, team, newAttributes, inflated;

    async.waterfall([
      function (cb) {
        // validate our job
        validateSchema(json, schema, cb);
      },
      function (clean, cb) {
        // store some variables from our cleaned job
        result = clean;
        team = result.id;
        newAttributes = result.data;
        verifyOwnerTeam(team, cb);
      },
      function (teamData, cb) {
        // inflate our capsule model
        inflated = new Team(teamData);
        // untrusted input from the client, so use 'safeSet'
        // (if it came from the server we'd use normal 'set')
        inflated.safeSet(newAttributes);
        cb(null);
      },
      function (cb) {
        // do the edit, all we're doing is storing JSON strings w/ ids
        feed.edit(JSON.stringify(inflated.toJSON()), team, cb);
      }
    ], function (err) {
      var code;
      if (!err) {
        code = 200;
        logger.info('edited team', {team: team, attrs: newAttributes});
      } else if (err === 'notAllowed') {
        code = 403;
        logger.warn('not allowed to edit');
      } else {
        code = 500;
        logger.error('error editing team', {err: err, job: json});
      }
      // finish the job
      editTeamJob.finish(jobId, null, JSON.stringify({code: code}));
      // keep the loop crankin'
      workers.editTeam();
    });
  });
};
Sounds like a lot of work

Granted, writing a worker for each type of action a user can perform in the app, with all the related job and validation logic, is not an insignificant amount of work. However, it worked rather well for us to use the state-syncing stuff in Capsule while we were still in the prototyping stage, then convert the server-side code to a Thoonk-based solution when we were ready to roll out to production.

So why does any of this matter?

It works.

What this ultimately means is that we can now push the system until Redis is our bottleneck. We can spin up as many workers as we want to crank through jobs, and we can write those workers in any language we want. We can put our node app behind HAProxy or Bouncy and spin up a bunch of ’em. Do we have all of this solved and done? No. But the core ideas and scaling paths seem fairly clear and doable.

[update: Just to add a bit more detail here, from our tests we feel confident that we can scale to tens of thousands of users on a single server, and we believe we can scale horizontally after doing some intelligent sharding with multiple servers.]

Is this the “Rails of Realtime?”


Personally, I’m not convinced there ever will be one. Even Owen Barnes (who originally set out to build just that with SocketStream) said at KRTConf: “There will not be a black box type framework for realtime.” His new approach is to build a set of interconnected modules for structuring out a realtime app based on the unique needs of its specific goals.

The kinds of web apps being built these days don’t fit into a neat little box. We’re talking to multiple web services, multiple databases, and pushing state to the client.

Mikeal Rogers gave a great talk at KRTConf about that exact problem. It’s going to be really, really hard to create a framework that solves all those problems in the same way that Rails or Django can solve 90% of the common problems with routes and MVC.

Can you support a BAJILLION users?

No, but a single Redis db can handle a fairly ridiculous amount of users. At the point that actually becomes our bottleneck, (1) we can split out different feeds for different databases, and (2) we’d have a user base that would make the app wildly profitable at that point—certainly more than enough to spend some more time on engineering. What’s more, Salvatore and the Redis team are putting a lot of work into clustering and scaling solutions for Redis that very well may outpace our need for sharding, etc.

Have you thought about X, Y, Z?

Maybe not! The point of this post is simply to share what we’ve learned so far.

You’ll notice this isn’t a “use our new framework” post. We would still need to do a lot of work to cleanly extract and document a complete realtime app solution from what we’ve done in &bang—particularly if we were trying to provide a tool that can be used to quickly spin up an app. If your goal is to find a tool like that, definitely check out what Owen and team are doing with SocketStream and what Nate and Brian are doing with Derby.

We love the web, and love the kinds of apps that can be built with modern web technologies. It’s our hope that by sharing what we’ve done, we can push things forward. If you find this post helpful, we’d love your feedback.

Technology is just a tool; ultimately, it’s all about building cool stuff. Check out &bang and follow me @HenrikJoreteg, Adam @AdamBrault and the whole @andyet team on the twitterwebz.

If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up and tell us what we can do to help.

● posted by Henrik Joreteg

Last week we launched our newest product, &!, at KRTConf. It’s a realtime, single-page app that empowers teams to bug each other less and get more done as a team.

One of our speakers, Scott Hanselman from Microsoft, tried to open the app in IE9 and was immediately redirected to a page that tells users they need WebSockets to use the app. He then wrote a post criticizing this choice, his argument being that users don’t care about the underlying technology, they just want it to work. He thinks we should provide reasonable fallbacks so that it works for as wide an audience as possible.

I completely agree with his basic premise: users don’t care about the technology.

Users care about their experience.

I think this is something the web has ignored for far too long so I’ll say it again:

Users only care about their experience.

In this case, we’re not building a website with content. We’re building an experience.

We didn’t require WebSockets because we’re enamored with the technology; we actually require it precisely because it provides the best user experience.

The app simply doesn’t feel as responsive when long-polling. There’s enough of a difference in lag and responsiveness that we made the choice to eliminate the other available transports. (We’re doing a lot more with our data transport than simply sending chats.) Additionally, we’re also using advanced HTML5 and CSS3 that simply isn’t available yet in IE9. It turns out that checking for WebSockets is a fairly good litmus test for support of those other features (namely CSS3 transitions and animations). The app is just plain more fun to use because of those features.
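The check itself can be tiny; something along these lines (the redirect target here is hypothetical):

```javascript
// run in the browser before booting the app: no WebSocket, no app
if (!window.WebSocket) {
  window.location.href = '/supported-browsers';
}
```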

Apple beat Microsoft by focusing on user experience. They unapologetically enforced minimum system requirements and made backward incompatible changes. Why is it considered “acceptable” to require minimum hardware (which costs money), but it’s somehow not acceptable to require users to download a free browser?

I’ve said this over and over again: web developers who are building single-page applications are in direct competition with native applications.

If we as web developers continue to limp along support for less-than-top-notch browsers, the web will continue to lose ground to the platforms that build for user experience first. Why should we, as a small bootstrapped company invest our limited resources building less-than-ideal fallbacks?

All this, of course, depends on your audience. We created &! for small, forward-thinking teams, not necessarily their moms. :)