Blog

● posted by Henrik Joreteg

It used to all make sense.

The web was once nothing but documents.

Just like you’d want some type of file browser UI to dig through files on your operating system, you obviously need some type of document browser to view all these web-addressable “documents”.

But over time, those “documents” have become a lot more. A. lot. more.

Now I can use one of these “documents” to have a 4-person video/audio conference on Talky with people anywhere in the world, play incredible full-screen first-person shooters at 60fps, write code in a full-fledged editor, or {{ the reader may insert any number of amazing web apps here }}, all using nothing but this “document viewer”.

Does calling them “documents” seem ridiculous to anyone else? Of course it does. Calling them “sites” is pretty silly too, actually, because a “site” implies a document with links and a URL.

I know the “app” vs. “site” debate is tired and worn.

Save for public, content-heavy sites, all of the apps that I’m asked to write by clients these days at &yet are fully client-side rendered.

The browser is not an HTML renderer for me, it’s the world’s most ubiquitous, yet capable, runtime. With the amazing capabilities of the modern web platform, it’s to the point where referring to a browser as a document viewer is an insult to the engineers who built it.

There is a fundamental difference when you treat the browser as a runtime instead of a document renderer.

I typically send it nothing but a doctype, a script tag, and a stylesheet with permanent cache headers. HTML just happens to be the way I tell the browser to download my app. I deal with the initial latency issues by all-but-ensuring visitors hit the app with a primed cache. This is pretty easy for apps that are opened frequently or are behind a static login page from which you prefetch the app resources. With proper cache headers the browser won’t even do the 304 not-modified dance. It will simply start executing code.
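
As a rough sketch of what that looks like (the express setup and fingerprinted filenames here are illustrative, not the exact code we ship):

// serve fingerprinted assets with far-future cache headers; this is safe
// because the filenames change with every release
var express = require('express');
var app = express();

app.use('/assets', express.static(__dirname + '/public', {
    maxAge: 365 * 24 * 60 * 60 * 1000 // one year, in milliseconds
}));

// the entire "document": a doctype, a stylesheet, and a script tag
app.get('*', function (req, res) {
    res.send('<!DOCTYPE html>' +
        '<link rel="stylesheet" href="/assets/app.abc123.css">' +
        '<script src="/assets/app.abc123.js"></script>');
});

app.listen(3000);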

This makes some people cringe, and many web purists (luddites?! #burn) would argue that everything should gracefully degrade and that there isn’t, or at least there shouldn’t be, any distinction between a JavaScript app and a site. When I went to EdgeConf in NYC, the “progressive enhancement” panel said a lot of things like “your app should still be usable without JS enabled”. But often, “JavaScript is disabled” really just means the browser is still downloading your JavaScript. To this I say:

WELL, THEN SHOW ME A TALKY.IO CLONE THAT GRACEFULLY DEGRADES!

It simply cannot be done. Like it or not, the web has moved on from that myopic view of it. The blanket graceful degradation view of the web no longer makes sense when you can now build apps whose core use case is fully dependent on a robust JavaScript runtime.

I had a great time at Chrome Dev Summit, but again, the core message of the “Instant Mobile Apps” talk was: “render your html on the server to avoid having your render blocking code require downloading your JS before it can start executing.”

For simple content-driven sites, I agree. Completely. The demo in that particular talk was the Chrome developer documentation. But it’s a ridiculously easy choice to render documentation server side. (In fact the notion that there was ever a client-side rendered version to begin with was surprising to me.)

If your view of the web lacks a distinction between clientside apps and sites/documents, I’d go as far as to say that you’re now part of the problem.

Why?

Because that view enables corporate IT departments to argue for running old browsers without getting laughed out of the building.

Because that view keeps some decision makers from adopting 100% JavaScript apps and instead spending money on native apps with web connectivity.

Because that view wastes precious developer time inventing and promoting hacks and workarounds for shitty browsers when they could be building next-generation apps.

Because that view enables you to argue that your proficiency of browser CSS hacks for IE7 is still relevant.

Because that view will always keep the web locked into the browser.

What about offline?

I’m writing this on a plane without wifi and of course, using a native app to do so. There are two primary reasons for this:

  1. The offline web is still crap. See offlinefirst.org and this hood.ie post for more.
  2. All my favorite web-based tools are still stuck in the browser.

The majority of users will never ever open a browser without an Internet connection, type in a URL and expect ANYTHING to happen.

Don’t get me wrong, I’m very supportive of the offline first efforts, and they are crucial for changing that expectation.

We have a very different view of apps that exist outside of the browser. In fact, the expectation is often reversed: “Oh right, I do need a connection for this to work”.

Chrome OS is one approach, but I think its 100% cloud-based approach is more hardcore than the world is ready to adopt and certainly is never going to fly with the indie data crowd or the otherwise Google-averse.

So, have I ranted enough yet?

According to Jake Archibald from Google, ServiceWorkers will land in Canary sometime early 2014. This work is going to fundamentally change what the web can do.

If you’re unfamiliar with ServiceWorkers (previously called Navigation Controllers), they let you write your own cache control layer in javascript for your web application. ServiceWorkers promise to serve the purpose that appcache was intended for: truly offline web apps.

At a high level, they let javascript developers building clientside apps treat the existence of a network connection as an enhancement rather than an expectation.

You may think, “Oh, well, the reason we use the web is because access to the network provides our core value as an app.”

While I’d tend to agree that most interesting apps fundamentally require data from the Internet to be truly useful, you’re missing the point.

Even if the value of your app depends entirely on a network connection, you can now intercept requests and choose to answer them from caches that you control, while in parallel attempting to fetch newer versions of those resources from the network.
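
Here’s a rough sketch of that pattern (the ServiceWorker API is still in flux as of this writing, so treat the exact method names as approximate): answer from your own cache immediately, and refresh that cache from the network in parallel.

// inside the ServiceWorker: cache-first, update-in-parallel
self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.open('app-v1').then(function (cache) {
            return cache.match(event.request).then(function (cached) {
                // kick off a network fetch and stash the result for next time
                var fresh = fetch(event.request).then(function (response) {
                    cache.put(event.request, response.clone());
                    return response;
                });
                // serve the cached copy if we have one, the network otherwise
                return cached || fresh;
            });
        })
    );
});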

If you think about it, that capability is no different than something like Facebook for iOS or Android.

That Facebook app’s core value is unquestionably derived from seeing your friends’ latest updates and photos, which you’re obviously not going to get without a connection. But the fundamental difference is this: the native app will still open and show you all the cached content it has. As a result (and for other reasons) the OS has given those types of apps a privileged status.

With the full programmatic cache control that ServiceWorkers will offer, you’ll be able to load your app and whatever content you last downloaded from cache first, while optionally trying to connect and download new things from the network. The addition of a controllable cache layer in web apps means that an app like Facebook really has no compelling reason to be a native app. I mean, really. If you break it down, that app is mostly a friend timeline browser, right? (the key word there being browser).

BUT, even with the addition of ServiceWorkers, there’s another extremely important difference: user perception.

We’ve spent years teaching users that things they use in their web browser simply do not work offline. Users understand (at least on some unconscious level) that the browser is the native app that gets sites/documents from the Internet. From a user experience standpoint, trying to teach the average user anything different is attempting to roll a quarry full of rocks up a hill.

This is where it becomes apparent that failing to draw a distinction between fully clientside “apps” and websites is a disservice to all these new capabilities of the web platform. It doesn’t matter how good the web stack becomes, it will never compete with native apps in the “native” space while it stays stuck in the browser.

The addition of “packaged” chrome apps is an admirable but, in my opinion, still inadequate attempt at addressing this issue.

At the point where a user on a mobile device opts to “add to home screen”, the intent from the user is more than just a damn bookmark; they’re saying: “I want access to this on the same level as my native apps.” It’s a user’s request for an installation of that app, but in reality it’s treated as a shitty, half-assed install that’s really just a bookmark. The user’s intent is clear: “I want a special level of quick and easy access to this specific app.”

So why not just embrace that what they’re actually trying to do is “install” that web application into their operating system?

Apple sort of does this for Mac Apps. After you first “sideload” (a.k.a. download from the web and try to run) a native Mac desktop app, they treat it a bit like an awkward stepchild when you first open it. They warn you and tell you: hey, this was an app downloaded from the Internet, are you sure you want to let this thing run?

While I’m not a fan of the language or the FUD involved with that, the timing makes perfect sense to me. At the point I’ve opted to “install” something to my homescreen on my mobile device (or the equivalent to that for desktop), that seems like the proper inflection point to verify with the user that they do, in fact, want to let this app have access to specific “privileged” OS APIs.

Without a simple way to install and authorize a clientside web app, these kinds of apps will always get stuck in the uncanny valley of half-assed, semi-installed apps.

So why bother in the first place? Why not just do native whenever you want to build an “app”? Beyond providing a way to build for multiple platforms, there’s one more thing the web has that native apps don’t have: a URL.

The UNIFORM RESOURCE LOCATOR concept is easy to take for granted, but it’s insanely useful to be able to reference things like links to emails inside gmail, or a tweet, or a very specific portion of documentation. Being able to naturally link between apps on the web is what gives the web its power. It’s unfortunate that many people, when they first start building single page applications, don’t update URLs as they go and fail to respect the “back” button, thus breaking the web.

But when done properly, blending the rich interactivity of native apps with the addressability and ubiquity of the web is a thing of beauty.

I cannot overstate how excited I am about ServiceWorkers. Because finally, we’ll have the ability to build web applications that treat network resources the same way that good native applications do: as an enhancement.

Of course, the big IF is whether platforms play along and actually treat these types of apps as first class citizens.

Call me an optimist, but I think the capabilities that ServiceWorkers promise us will shine a light on the bizarre awkwardness of the concept of opening a browser to access offline apps.

The web platform’s capabilities have outgrown the browser.

Let’s help the web to make its next big push.

I’m @HenrikJoreteg on twitter. I’d love to hear your thoughts on this.

For further reading on ServiceWorkers, here is a great explainer doc.

Also, check out my book on building sanely structured single page applications.

● posted by Henrik Joreteg

I had the privilege to attend EdgeConf 2013 as a panelist and opening speaker for the Realtime Data discussion.

It was an incredible, deeply technical conference with an interesting discussion/debate format.

Here’s the video from the panel:

The slides from my talk can be found on speakerdeck.

It was a privilege to attend — I’m very grateful to Andrew Betts and FT Labs for the opportunity to be there.

● posted by Melanie Brown

We asked Portlandians about realtime technologies—and, um, they answered!

DISCLAIMER: No hipsters’ feelings were harmed in the making of this video.

Film by Miss Melanie Brown
Music by YACHT

If you enjoyed this, be sure to check out last year’s video, too. :)


● posted by Henrik Joreteg

These days, more and more HTML is rendered on the client instead of sent pre-rendered by the server. So if you’re building a web app that uses a lot of client side javascript you’ll doubtlessly want to create some HTML in the browser.

How we used to do it

First a bit of history. When I first wrote ICanHaz.js I was just trying to ease a pain point I was having: generating a bunch of HTML in a browser is a pain.

Why is it a pain? Primarily because JS doesn’t cleanly support multi-line strings, but also because there isn’t an awesome string interpolation system built into JS.

To work around that, ICanHaz.js, as lots of other clientside template systems do, uses a hack to make it easier to send arbitrary strings to the browser. As it turns out, browsers ignore content in <script> tags if you give them a type attribute that isn’t text/javascript. So ICanHaz reads the content of tags on the page that say <script type="text/html">, which can contain templates or any other multi-line strings for that matter, and turns each of them into a function that you can call to render that string with your data mixed into it. For example:

This HTML:

<script id="user" type="text/html">
  <li>
    <p class="name">Hello I'm {{ name }}</p>
    <p><a href="http://twitter.com/{{ twitter }}">@{{ twitter }}</a></p>
  </li>
</script>

Is read by ICanHaz and turned into a function you call with your own data like this:

// your data
var data = {
  name: "Henrik",
  twitter: "HenrikJoreteg"
}

// I can has user??
html = ich.user(data)

This works, and lots of people clearly thought the same, as it’s been quite a popular library.

Why that’s less-than-ideal

It totally works, but if you think about it, it’s a bit silly. It’s not super fast and you’re making the client do a bunch of extra parsing just to turn text into a function. You also have to send the entire template engine to the browser, which is a bunch of wasted bandwidth.

How we’re doing it now

What I finally realized is that all you actually want when doing templating on the client is the end result that ICanHaz gives you: a function that you call with your data that returns your HTML.

Typically, smart template engines, like the newer versions of Mustache.js, do this for you. Once the template has been read, it gets compiled into a function that is cached and used for subsequent rendering of that same template.

Thinking about this leaves me asking: why don’t we just send the javascript template function to the client instead of doing all the template parsing/compiling on the client?

Well, frankly, because I didn’t really know of a great way to do it.

I started looking around and realized that Jade (which we already use quite a bit at &yet) has support for compiling as a separate process and, in combination with a small runtime snippet, this lets you create JS functions that don’t need the whole template engine to render. Which is totally awesome!

So, to make it easier to work with, I wrote a little tool: templatizer that you can run on the server-side (using node.js) to take a folder full of jade templates and turn them into a javascript file you can include in your app that has just the template rendering functions.
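
Usage ends up being roughly a one-liner (the paths here are illustrative):

// build step, run with node
var templatizer = require('templatizer');

// compiles clienttemplates/*.jade into one JS file of render functions
templatizer(__dirname + '/clienttemplates', __dirname + '/public/templates.js');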

The end result

From my tests the actual rendering of templates is 6 to 10 times faster. In addition you’re sending way less code to the browser (because you’re not sending a whole templating engine) and you’re not making the browser do a bunch of work you could have already done ahead of time.

I still need to write more docs and use it for a few more projects before we have supreme confidence in it, but I’ve been quite happy with the results so far and wanted to share it.

I’d love to hear your thoughts. I’m @HenrikJoreteg on twitter and you should follow @andyet as well and check out our awesome team same-pagification tool And Bang.

See you on the Internet. Go build awesome stuff!


● posted by Henrik Joreteg

The single biggest challenge you’ll have when building complex clientside applications is keeping your code base from becoming a garbled pile of mess.

If it’s a longer running project that you plan on maintaining and changing over time, it’s even harder. Features come and go. You’ll experiment with something only to find it’s not the right call.

I write lots of single page apps and I absolutely despise messy code. Here are a few techniques, crutches, coping mechanisms, and semi-pro tips for staying sane.

Separating views and state

This is the biggest lesson I’ve learned building lots of single page apps. Your view (the DOM) should just be a blind slave to the model state of your application. For this you could use any number of tools and frameworks. I’d recommend starting with Backbone.js (by the awesome Mr. @jashkenas) as it’s the easiest to understand, IMO.

Essentially, you’ll build up a set of models and collections in memory in the browser. These models should be completely oblivious to how they’re used. Then you have views that listen for changes in the models and update the DOM. This could be a whole giant blog post in and of itself. But this core principle of separating your views and your application state is vital when building large apps.
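
Here’s a tiny sketch of what I mean in Backbone (the names are made up for illustration): the model knows nothing about the DOM, and the view just re-renders itself whenever the model changes.

// the model is oblivious to how it's used
var user = new Backbone.Model({name: 'Henrik'});

var UserView = Backbone.View.extend({
    initialize: function () {
        // the view observes the model; it never owns the state itself
        this.model.on('change', this.render, this);
    },
    render: function () {
        this.$el.html('<p>Hello I\'m ' + this.model.get('name') + '</p>');
        return this;
    }
});

var view = new UserView({model: user}).render();

// changing state anywhere updates the DOM automatically
user.set('name', 'Hank');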

Common JS Modules

I’m not going to get into a debate about module styles and script loaders. But I can tell you this: I haven’t seen any cleaner, simpler mechanism for splitting your code into nice isolated chunks than Common JS modules.

It’s the same style/concept that is used in node.js. By following this style I get the additional benefit of being able to re-use modules written for the client on the server and vice versa.

If you’re unfamiliar with the Common JS modules style, your files end up looking something like this:

// you import things by using the special `require` function and you can
// assign the result to a variable

var StrictModel = require('strictModel'),
    _ = require('underscore');

// you expose functionality to other modules by declaring your main export
// like this.
module.exports = StrictModel.extend({
    type: 'navItem',
    props: {
        active: ['boolean', true, false],
        url: ['string', true, ''],
        position: ['number', true, 200]
    },
    init: function () {
        // some, something
    }
});

Of course, browsers don’t have support for these kinds of modules out of the box (there is no window.require). But, luckily that can be fixed. I use a clever little tool called stitch written by Sam Stephenson of 37signals. There’s also another one by @substack called browserify that lets you use a lot of the node.js utils on the client as well.

What they do is create a require function and bundle up a folder of modules into an app package.

Stitch is written for node.js but you could just as easily use another server-side language and use node only to build your client package. Ultimately it’s just creating a single JS file, and of course at that point you can serve it like any other static file.

You set up Stitch in a simple express server like this:

// require express and stitch
var express = require('express'),
    stitch = require('stitch');

// define our stitch package
var appPackage = stitch.createPackage({
    // you add the folders whose contents you want to be "require-able"
    paths: [
        __dirname + '/clientmodules',  // this is where i put my standalone modules
        __dirname + '/clientapp' // this is where i put my modules that compose the app
    ],
    // you can also include normal dependencies that are not written in the 
    // commonJS style
    dependencies: [
        somepath + '/jquery.js',
        somepath + '/bootstrap.js'
    ]
});

// init express
var app = express.createServer();

// define a path where you want your JS package to be served
app.get('/myAwesomeApp.js', appPackage.createServer());

// start listening for requests
app.listen(3000);

At this point you can just go to http://localhost:3000/myAwesomeApp.js in a browser and you should see your whole JS package.

This is handy while developing because you don’t have to re-start or recompile anything when you make changes to the files in your package.

Once you’re ready to go to production you can use the package and UglifyJS to write a minified file to disk to be served statically:

var uglifyjs = require('uglify-js'),
    fs = require('fs');

function uglify(code) {
    var ast = uglifyjs.parser.parse(code);
    ast = uglifyjs.uglify.ast_mangle(ast);
    ast = uglifyjs.uglify.ast_squeeze(ast);
    return uglifyjs.uglify.gen_code(ast);
}

// assuming `appPackage` is in scope of course, this is just a demo
appPackage.compile(function (err, source) {
    fs.writeFileSync('build/myAwesomeApp.js', uglify(source));
});

Objection! It’s a huge single file, that’s going to load slow!

Two things. First, don’t write a huge app with loads and loads of giant dependencies. Second, cache it! If you do your job right, your users will only download that file once and you can probably do it while they’re not even paying attention. If you’re clever you can even prime their cache by lazy-loading the app on the login screen, or some other such cleverness.

Not to mention, for single page apps, speed once your app has loaded is much more important than the time it takes to do the initial load.

Code Linting

If you’re building large JS apps and not doing some form of static analysis on your code, you’re asking for trouble. It helps catch silly errors and forces code style consistency. Ideally, no one should be able to tell who wrote what part of your app. If you’re on a team, it should all be uniform within a project. How do you do that? We use a slick tool written by Nathan LaFreniere on our team called, simply, precommit-hook. So all we have to do is:

npm install precommit-hook

What that will do is create a git pre-commit hook that uses JSHint to check your project for code style consistency before each commit. Once upon a time there was a tool called JSLint written by Mr. Crockford. Nowadays (love that silly word) there’s a less strict, more configurable version of the same project called JSHint.

The neat thing about the npm version of JSHint is that if you run it from the command line it will look for a configuration file (.jshintrc) and an ignore file (.jshintignore), both of which the precommit hook will create for you if they don’t exist. You can use these files to configure JSHint to follow the code style rules that you’ve defined for the project. This means that you can now run jshint . at the root of your project and lint the entire thing to make sure it follows the code styles you’ve defined in the .jshintrc file. Awesome, right!?!

Our .jshintrc files usually look something like this:

{
    "asi": false,
    "expr": true,
    "loopfunc": true,
    "curly": false,
    "evil": true,
    "white": true,
    "undef": true,
    "predef": [
        "app",
        "$",
        "require",
        "__dirname",
        "process",
        "exports",
        "module"
    ]
}

The awesome thing about this approach is that you can enforce consistency, and the rules for the project are contained and actually checked into the project repo itself. So if you decide to have a different set of rules for the next project, fine. It’s not a global setting; it’s defined and set by whoever runs the project.

Creating an “app” global

So what makes a module? Ideally, I’d suggest each module being in its own file and only exporting one piece of functionality. Only having a single export helps you keep clear what purpose the module has and keeps it focused on just that task. The goal is having lots of modules that do one thing really well so that your app just combines modules into a coherent story.

When I’m building an app, I intentionally have one main controller object of sorts. It’s attached to the window as “app” just for my own convenience. For modules that I’ve written specifically for this app (stuff that’s in the clientapp folder) I allow myself the use of that global to perform app-level actions like navigating, etc.
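
As a sketch (the shape here is illustrative, not a prescription), that global is just a plain object with a few app-level methods on it:

// one app-level controller, intentionally attached to the window
window.app = {
    init: function () {
        this.router = new Router(); // a Backbone.Router subclass, say
        Backbone.history.start();
    },
    // app-level actions that modules in the clientapp folder may call
    navigate: function (url) {
        this.router.navigate(url, {trigger: true});
    }
};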

Using events: Modules talking to modules

How do you keep your modules cleanly separated? Sometimes modules are dependent on other modules. How do you keep them loosely coupled? One good technique is triggering lots of events that can be used as hooks by other code. Many of the core components in node.js are extensions of EventEmitter. The reason is that you can register handlers for stuff that happens to those items, just like you can register a handler for someone clicking a link in the browser. This pattern is really useful when building re-usable components yourself. Exporting things that inherit from event emitters means that the code using your module can specify what it cares about, rather than the module having to know. For example, see the super simplified version of the And Bang js library below.

There are lots of implementations of event emitters. We use a modified version of one from the LearnBoost guys: @tjholowaychuk, @rauchg and company. It’s wildemitter on my github if you’re curious. But the same concept works for any of the available emitters. See below:

// require our emitter
var Emitter = require('wildemitter');

// Our main constructor function
var AndBang = function (config) {
    // extend with emitter
    Emitter.call(this);
};

// inherit from emitter
AndBang.prototype = new Emitter();

// Other methods
AndBang.prototype.setName = function (newName) {
    this.name = newName;
    // we can trigger arbitrary events
    // these are just hooks that other
    // code could chose to listen to.
    this.emit('nameChanged', newName);
};

// export it to the world
module.exports = AndBang;

Then, other code that wants to use this module can listen for events like so:

var AndBang = require('andbang'),
    api = new AndBang();

// now this handler will get called any time the event gets triggered
api.on('nameChanged',  function (newName) { /* do something cool */ });

This pattern makes it easy to expose functionality without having to know anything about the consuming code.

More?

I’m tired of typing so that’s all for now. :)

But I just thought I’d share some of the tools, techniques and knowledge we’ve acquired through blood, sweat and mistakes. If you found it helpful or useful, or if you want to yell at me, you can follow me on twitter: @HenrikJoreteg.

See ya on the interwebs! Build cool stuff!


● posted by Henrik Joreteg

The other day, DHH[1] tweeted this:

Forcing your web ui to be “just another client” of your API violates the first rule of distributed systems: Don’t write distributed systems.

— DHH (@dhh) June 12, 2012

In building the new Basecamp, 37signals chose to do much of the rendering on the server-side and have been rather vocal about that, bucking the recent trend to build richly interactive, client-heavy apps. They cite speed, simplicity and cleanliness. I quote DHH, again:

It’s a perversion to think that responding to Ajax with HTML fragments instead of JSON is somehow dirty. It’s simple, clean, and fast.

— DHH (@dhh) June 12, 2012

Personally, I think this generalization is a bit short-sighted.

The “rule” that is cited in the first tweet about distributed systems is from Martin Fowler, who says:

First Law of Distributed Object Design: Don’t distribute your objects!

So, yes, duplicating state into the client is essentially just that: you’re distributing your objects. I’m not saying I’m wiser than Mr. Fowler, but I do know that keeping client state can make an app much more useful and friendly.

Take Path for iPhone. It caches state, so if you’re offline you can read posts from your friends. You can also post new updates while offline that just seamlessly get posted when you’re back on a network. That kind of use case is simply impossible unless you’re willing to duplicate state to the client.

As application developers we’re not trying to dogmatically enforce “best practices” of computer science just for the sake of dogma, we’re trying to build great experiences. So as soon as we want to support that type of use case, we have to agree that it’s OK to do it in some circumstances.

As some have pointed out and DHH acknowledged later, even Basecamp goes against his point with the calendar. In order to add the type of user experience they want, they do clientside MVC. They store some state in the client and do some client-side rendering. So, what’s the difference in that case?

I’m not saying all server side rendering is bad. I’m just saying, why not pick one or the other? It seems to me (and I actually speak from experience here) that things get really messy once you start mixing presentation and domain logic.

As it turns out, Martin Fowler actually wrote A WHOLE PAPER about separating presentation from domain logic.

The other point I’d like to make is this: What successful, interesting web application do you know/use/love that doesn’t have multiple clients?

As soon as you have any non-web client, such as an iPhone app, or a dashboard widget or a CLI or some other webapp that another developer built, you now need a separate data API anyway.

Obviously, 37signals has an API. But, gauging by the docs, there are pieces of the API that are incomplete. Another benefit of dog-fooding your own API is that you can’t ship with an incomplete API if you built your whole app on it.

We’re heads-down on the next version of And Bang which is built entirely on what will be our public API. This re-engineering has been no small undertaking, but we feel it will be well worth the effort.

The most interesting apps we use are not merely experienced through a browser anymore. APIs are the one true cross-platform future you can safely bank on.

I’m all ears if you have a differing opinion. Hit me up on twitter @HenrikJoreteg and follow @andyet and @andbang if you’re curious about what else we’re up to.


[1] DHH (David Heinemeier Hansson) of Ruby on Rails and 37signals fame is not scared to state his opinions. I think everyone would agree that his accomplishments give him the right to do so. To be clear, I have nothing but respect for 37signals. Frankly, their example is a huge source of inspiration for bootstrapped companies like ours at &yet.

● posted by Nathan Fritz

Now you’re thinking with feeds!

When I look at a single-page webapp, all I see are feeds; I don’t even see the UI anymore. I just see lists of items that I care about. Some of which only I have access to and some of which other groups have access to. I can change, delete, re-position, and add to the items on these feeds and they’ll propagate to the people and entities that have access to them (even if it is just me on another device or at a later date).

I’ve seen it this way for years, but I haven’t grokked it enough to articulate what I was seeing until now.

What Thoonk Is

Thoonk is a series of higher-level objects built on Redis that sends publish, edit, delete, and position events when they are changed. These objects are feeds for making real-time applications and feed services.

What is a Thoonk feed?

A Thoonk feed is a list of indexed data objects that are limited by topic and by what a single entity might subscribe to. An RSS/ATOM feed qualifies. What makes a Thoonk feed different from a table? A table is limited to a topic, but lacks single entity interest limitations. A Thoonk feed isn’t just a message broker, it’s a database-store that sends out events when the data changes.

Let’s use &bang as an example. Each team-member has a list of tasks. In a relational database we might have a table that looks like this:

team_member_tasks

id | team_id | member_id | description | complete bool | etc.

Whenever a user renders their list, I would query that list, limiting by a specific user and a specific team.

If we converted this table, without changing it, into a Thoonk feed, then we would only be able to subscribe to ALL tasks and not just the tasks of a particular team or member. So, instead, a Thoonk feed might look like:

team:<team_id>:member:<member_id>:tasks

{description: "", completed: false, etc, etc}

Now when the user wants a rendered list of tasks, I can do one index look-up rather than three, and I am able to subscribe to changes on the specific team member’s tasks, or even to team:353:member:*:tasks to subscribe to all of that team’s tasks.

[Note: I suppose you could arrange a relational database this way, but it wouldn’t really be able to take advantage of SQL, nor could you subscribe to the table to get changes.]
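
As an illustrative sketch (the exact subscribe signature is approximate; check the Thoonk docs for the real one), working with one member’s task feed looks something like this:

// look up one member's task feed and stay in sync with it
var taskFeed = thoonk.feed('team:353:member:42:tasks');

taskFeed.subscribe({
    publish: function (id, item) { /* a task was added: render it */ },
    edit: function (id, item) { /* a task changed: update it in place */ },
    retract: function (id) { /* a task was deleted: remove it */ }
});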

It’s Feeds All the Way Up

If I use Thoonk subscribe-able feeds as my data-storage engine, life gets so much easier. When a user logs in, I can subscribe contextualized callbacks just for them to the feeds of data that they have access to read from. This way, if their data changes for any reason, by any process, by any server, it can bubble all the way up to the user without having to run any queries. I can also subscribe separate processes that can automatically scrub, pre-index, cull, or any number of tasks to any Thoonk feed a particular process cares about. I can use processes in mixed languages to provide monitoring and additional API’s to the feeds.

But What About Writes?

Let’s not think in terms of writes. Writes are just changes to feed items (publishing, editing, deleting, repositioning) that write the data to ram/disk and inform any subscribers of the change. Let’s instead think in terms of user-actions. A user-action (such as delegating a task to another user in &bang) needs ACL and may affect multiple feeds in a single call. If we defer user-actions to jobs (a special kind of Thoonk feed), we can easily isolate, scale, share, and distribute the business-logic involved in dealing with a user-action.

What Are Thoonk Jobs?

Thoonk Jobs are items that represent business-logic needing to be done reliably, a single time, by any available worker. Jobs are consumed as fast as a worker-pool can consume them. A job feed is a list of job items, each of which may be in one of three states: available, in-flight, or stalled. Available jobs are taken and placed in an in-flight set while they are being processed. When the job is done, it is removed from the in-flight set and its item is deleted. If the worker fails to complete the job (either because of an error, distaste, or a monitoring process deciding that the job has timed out), the job may be placed back on the available list or into the stalled set.
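
Sketched in code (using the same calls that appear later in this series; the payload and doTheWork are hypothetical):

var shipTaskJob = thoonk.job('shipTask');

// any process can publish work to the queue...
shipTaskJob.put(JSON.stringify({memberId: 42, taskId: 99}));

// ...and any available worker can claim it, do it once, and finish it
shipTaskJob.get(0, function (err, json, jobId) {
    doTheWork(JSON.parse(json), function (err) {
        // on error, leave the job in-flight so a monitor can retry or stall it
        if (err) return;
        shipTaskJob.finish(jobId);
    });
});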

Why use Thoonk Jobs for User-Actions?

  • User-actions that fail for some reason can be retried (you can also limit the # of retries).
  • The work can be distributed across processes and servers.
  • User-actions can burst much faster than the workers can handle them.
  • A user-action that ultimately fails can be stalled, where an admin is informed to investigate and potentially edit and/or retry when the issue that caused it has been resolved or to test said resolution.
  • Any process in any language can contribute jobs (and get results from them) without having to re-implement the business logic or ACL.

The Last One is a Doozy

Scaling, reliability, monitoring and all of that is nice, but being able to build your application out rather than up is, I believe, the greatest reason for this approach. &bang is written in node.js, but if I have a favorite library for implementing a REST interface or an XMPP interface written in Python or Ruby (or any other language), I can quickly put that together and add it as a process. In fact, I can pretty much add any piece of functionality as a process without having to reload the rest of the application server, and really isolate a feature as its own process. User-actions from this process can be published to Thoonk Job feeds without having to worry about request validation or ACL since that is handled by the worker itself.

Rather than having a very large, complex application, I can have a series of very small processes that automatically cluster and are informed of changes in areas of their specific concerns.

Scaling Beyond Redis

Our testing indicates that Redis will not be a choke point until we have nearly 100,000 active users. The plan to scale beyond that is to shard &bang by teams. A quick look-up will tell us which server a team resides on, and users and processes can subscribe callbacks to connections on those servers. In that way, we can run many Redis servers, and theoretically scale horizontally. High-availability is handled by a slave for each shard and a gossip protocol for promoting slaves.

Conflict Resolution and Missed Updates

Henrik’s recent post spawned a couple of questions about conflict resolution. First I’ll give a deflection, and then I’ll give a real answer.

&bang doesn’t yet need conflict resolution. None of the writes are actually done on the client, as they are all RPC calls which go into a job queue. Then the workers validate the payload, check the ACL, and update some feeds, at which point the data bubbles back up to the client. The feed updates are atomic, and happen quite quickly. Also, two users being able to edit the same item only comes up with delegated tasks, in which case the most recent edit wins.

Ok, now the real answer. Thoonk is going to have revision history and incrementing revision numbers for 1.0. Each historical item is the same as the publish/edit/delete/reposition updates that are sent via pubsub. When a user change job is done, the client can send its current revision numbers for the feeds involved, and thus conflicts on an edit can be detected. The historical data should be enough to facilitate some form of conflict resolution (determined by the application implementer). The revision numbers can also bubble up to the client, so the client can detect missing updates and ask for a replay from a given revision number.

Currently we’re punting on missed items. Anytime the &bang user is disconnected, the app is disabled and refreshed when it is able to reconnect. A more elaborate solution using the new Thoonk features I just listed is probably coming and perhaps some real offline-mode support with local “dirty” changes that get resolved when you come back online.

All Combined

Using Thoonk, we were able to make &bang scale to 10s of thousands of active users on a single server, burst user-activity beyond our choke-points, isolate user-action business-logic and ACL, automatically cluster to more servers and processes, write individual features and interfaces in any language with a supported Redis client library, bubble data changes all the way up to the user regardless of the source of change, provide an easy way of iterating, and generally create a kick-ass, realtime, single-page webapp.

Can I Use Thoonk Now?

Thoonk.js and Thoonk.py are MIT licensed, and free to use. While we are using Thoonk.js in production and it is stable there, the API is not final. Currently I’m moving the feed logic to Redis Lua scripts, which will be officially supported in Redis 2.6, with an RC1 promised for this December. I plan to be ready for that. The Lua scripting will give us performance gains, and remove unnecessary extra logic to keep publish/edit/delete/reposition commands atomic, but most importantly it will allow us to share the core code with all implementations of Thoonk, allowing us to easily add and support more languages. As mentioned previously, as I do the Redis Lua scripting, I’ll be adding revision history and revision numbers to feeds, which will facilitate conflict detection and replay of missed events.

That said, feel free to comment, contribute, steal, or abuse the project in the meantime. A 1.0 release will indicate API stability, and I will encourage its use in production at that point. I will soon be breaking out the Lua scripts to their own git repo for easy implementation.

If you want to keep an eye on what we’re doing, follow me @fritzy and @andyet on twitter. Also be sure to check out &bang for getting stuff done with your team.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Shoot Henrik an email (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg

This last year, we’ve learned a lot about building scalable realtime web apps, most of which has come from shipping &bang.

&bang is the app we use to keep our team in sync. It helps us stay on the same page, bug each other less and just get stuff done as a team.

The process of actually trying to get something out the door on a bootstrapped budget helped us focus on the most important problems that needed to be solved to build a dynamic, interactive, real-time app in a scalable way.

A bit of history

I’ve written a couple of posts on backbone.js since discovering it. The first one introduces Backbone.js as a lightweight client-side framework for building clean, stateful client apps. In the second post I introduced Capsule.js, a tool I built on top of Backbone that adds nested models and collections and also lets you keep a mirror of your client-side state on a node.js server to seamlessly synchronize state between different clients.

That approach was great for quickly prototyping an app. But as I pointed out in that post, that’s a lot of in-memory state being stored on the server, and it simply doesn’t scale very well.

At the end of that post I hinted at what we were aiming to do to ultimately solve that problem. So this post is meant to be a bit of an update on those thoughts.

Our new approach

Redis is totally freakin’ amazing. Period. I can’t say enough good things about it. Salvatore Sanfilippo is a god among men, in my book.

Redis can scale.

Redis can do PubSub.

PubSub just means events. Just like you can listen for click events in Javascript in a browser you can listen for events in Redis.
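
A minimal sketch with the node redis client (the channel name is made up):

var redis = require('redis');

// a subscribed connection can't issue other commands, so use two clients
var sub = redis.createClient();
var pub = redis.createClient();

sub.on('message', function (channel, message) {
    console.log('got', message, 'on', channel);
});
sub.subscribe('team:updates');

pub.publish('team:updates', 'task shipped!');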

Redis, however, is a generic tool. It’s purposely fairly low-level so as to be broadly applicable.

What makes Redis so interesting, from my perspective, is that you can treat it as a shared memory between processes, languages and platforms. What that means, in a practical sense, is that as long as each app that uses it interacts with it according to a pre-defined set of rules, you can write a whole ecosystem of functionality for an app in whatever language makes the most sense for that particular task.

Enter Thoonk

My co-worker, Nathan Fritz, is the closest thing you can get to a veteran of realtime technologies.

He’s a member of the XSF council for the XMPP standard and probably wrote his first chat bot before you knew what chat was. His SleekXMPP Python library is iconic in the XMPP community. He has a self-declared un-natural love for XEP-0060, which describes the XMPP PubSub standard.

He took everything he learned from his work on that standard and built Thoonk. (In fact, he actually kept the PubSub spec open as he built the Javascript and Python implementations of Thoonk.)

What is Thoonk??

Thoonk is an abstraction on Redis that provides higher-level datatypes for a more approachable interface. Essentially, staring at Redis as a newbie is a bit intimidating. Not that it’s hard to interface with, it’s just kind of tricky to figure out how to logically structure and retrieve your data. Thoonk simplifies that into a few datatypes that describe common use cases: primarily “feeds”, “sorted feeds”, “queues” and “jobs”.

You can think of a feed as an ad-hoc database table. They’re “cheap” to create and you simply declare them to make them or use them. For example, in &bang, we have all our users in a feed called “users” for looking up user info. But also, each user has a variety of individual feeds. For example, they have a “task” feed and a “shipped” feed. This is where it veers from what people are used to in a relational database model, because each user’s tasks are not a part of a global “tasks” feed. Instead, each user has a distinct feed of tasks because that’s the entity we want to be able to subscribe to.

So rather than simply breaking down a model into types of data, we end up breaking things into groups of items (a.k.a. “feeds”) that we want to be able to track changes to. So, as an example, we may have something like this:

// our main user feed
var userFeed = thoonk.feed('users');

// an individual task feed for a user
var userTaskFeed = thoonk.sortedFeed('team.andyet.members.{{memberID}}.tasks');

Marrying Thoonk and Capsule

Capsule was actually written with Thoonk in mind. In fact, that’s how they got their names: You know those lovely pneumatic tube systems they use to send cash to bank tellers and at Costco? (PPSHHHHHHH—THOONK! And here’s your capsule.)

Anyway, the integration didn’t end up being quite as tight as we had originally thought but it still works quite well. Loose coupling is better anyway right?

The core problem I was trying to solve with Capsule was unifying the models that are used to represent the state of the app in the browser and the models you use to describe your data on the server—ideally, not just unifying the data structure, but also letting me share behavior of those objects.

Let me explain.

As I mentioned, we recently shipped &bang. It lets a group of people share their task lists and what they’re actively working on with each other.

It spares you from a lot of “what are you working on?” conversations and increases accountability by making your work quite public to the team.

It’s a realtime, keyboard-driven, web app that is designed to feel like a desktop app. &bang is a node.js application built entirely with the methods described here.

So, in &bang, a team model has attributes as well as a couple of nested backbone collections such as members and chat messages. Each member has attributes and other nested collections, tasks, shipped items, etc.

Initial state push

When a user first logs in we have to send the entire model state for the team(s) they’re on so we can build out the interface (see my previous post for more on that). So, the first thing we do when a user logs in is subscribe them to the relevant Thoonk feeds and perform the initial state transfer to the client.

To do this, we init an empty team model on the client (a backbone/capsule model shared between client/server). Then we recurse through our Thoonk feed structures on the server to export the data from the relevant feeds into a data structure that Capsule can use to import that data. The team model is inflated with the data from the server and we draw the interface.

From there, the application is kept in sync using events from Thoonk that get sent over websockets and applied to the client interface. Events like “publish”, “change”, “retract” and “position”.

Once we got the app to the point where this was all working, it was kind of a magical moment, because at this point, any edits that happen in Thoonk will simply get pushed out through the event propagation all the way to the client. Essentially, the interface that a user sees is largely a slave to the server, except, of course, the portions of state that we let the user manipulate locally.

At this point, user interactions with the app that change data are all handled through RPC calls. Let’s jump back to the server and you’ll see what I mean.

I thought you were still using Capsule on the server?

We do, but differently. Here’s how that is handled.

In short… it’s a job system.

Sounds intimidating right? As someone who started in business school, then gradually got into front-end dev, then back-end dev, then a pile of JS, job systems sounded scary. In my mind they’re for “hardcore” programmers like Fritzy or Nate or Lance from our team. Job systems don’t have to be that scary.

At a very high level you can think of a “job” as a function call. The key difference being, you don’t necessarily expect an immediate result. To continue with examples from &bang: a job may be to “ship a task”. So, what do we need to know to complete that action? We need the following:

  • member Id of the user shipping the task
  • the task id being completed (we call this “shipping”, because it’s cooler, and it’s a reminder that finishing is what’s important)

We can derive everything else we need from those key pieces of information.

So, rather than call a function somewhere:

shipTask(memberId, taskId)

We can just describe a job as a simple JSON object:

{
    userId: <user requesting the job>,
    taskId: <id of task to 'ship'>,
    memberId: <id of team member>
}

Then we can add that to our “shipTask” job queue like so:

thoonk.job('shipTask').put(JSON.stringify(jobObject));

The cool part about the event propagation I talked about above is that we really don’t care so much when that job gets done. Obviously fast is key, but what I mean is, we don’t have to sit around and wait for a synchronous result because the event propagation we’ve set up will handle all the application state changes.

So, now we can write a worker that listens for jobs from that job queue. In that worker we’ll perform all the necessary related logic. Specifically stuff like:

  • Validating that the job is properly formatted (contains required fields of the right type)
  • Validating that the user is the owner of that task and is therefore allowed to “ship” it.
  • Modifying Thoonk feeds accordingly.

Encapsulating and reusing model logic

You’ll notice that part of that list requires some logic. Specifically, checking to see if the user requesting the action is allowed to perform it. We could certainly write that logic right here, in this worker. But, in the client we’re also going to want to know if a user is allowed to ship a given task, right? Why write that logic twice?

Instead we write that logic as a method of a Capsule model that describes a task. Then, we can use the same method to determine whether to show the UI that lets the user perform the action in the browser as we use on the back end to actually perform the validation. We do that by re-inflating a Capsule model for that task in our worker code, calling the canEdit() method on it, and passing it the user id requesting the action. The only difference being, on the server-side we don’t trust the user to tell us who they are. On the server we roll the user id we have for that session into the job when it’s created, rather than trust the client.
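
As a simplified sketch (not Capsule’s literal API), the shared method might look like this, with the browser and the worker both calling the same function:

// shared between client and server as a model method
var Task = Capsule.Model.extend({
    canEdit: function (userId) {
        // only the task's owner may ship or edit it
        return this.get('ownerId') === userId;
    }
});

// browser: decide whether to show the edit UI
// worker: decide whether to perform the job at all
if (task.canEdit(requestingUserId)) { /* proceed */ }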

Security

One other, hugely important thing that we get by using Capsule models on the server is some security features. There are some model attributes that are read-only as far as the client is concerned. What if we get a job that tries to edit a user’s ID? In a backbone model if I call:

backboneModelInstance.set({id: 'newId'});

That will change the ID of the object. Clearly that’s not good in a server environment when you’re trusting that to be a unique ID. There are also lots of other fields you may want on the client but you don’t want to let users edit.

Again, we can encapsulate that logic in our Capsule models. Capsule models have a safeSet method that assumes all inputs are evil. Unless an attribute is whitelisted as clientEditable, safeSet won’t set it. So when we go to set attributes within the worker on the server, we use safeSet when dealing with untrusted input.
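
The idea behind safeSet is simple enough to sketch (this is illustrative, not Capsule’s actual implementation):

// assume every input is evil: only whitelisted attributes get through
TaskModel.prototype.safeSet = function (attrs) {
    var safe = {};
    for (var key in attrs) {
        if (this.clientEditable.indexOf(key) !== -1) {
            safe[key] = attrs[key];
        }
    }
    return this.set(safe);
};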

The other important piece of securing a system that lets users indirectly add jobs to your job system is ensuring that the jobs you receive validate against a schema. I’m using a node implementation of JSON Schema for this. I’ve heard some complaints about that proposed standard, but it works really well for the fairly simple use case I need it for.

A typical worker may look something like this:

workers.editTeam = function () {
  var schema = {
    type: "object",
    properties: {
      user: {
        type: 'string',
        required: true
      },
      id: {
        type: 'string',
        required: true
      },
      data: {
        type: 'object',
        required: true
      }
    }
  };

  editTeamJob.get(0, function (err, json, jobId, timeout) {
    var feed = thoonk.feed('teams'), 
      result,
      team,
      newAttributes,
      inflated;

    async.waterfall([
      function (cb) {
        // validate our job
        validateSchema(json, schema, cb);
      },
      function (clean, cb) {
        // store some variables from our cleaned job
        result = clean;
        team = result.id;
        newAttributes = result.data;
        verifyOwnerTeam(team, cb);
      },
      function (teamData, cb) {
        // inflate our capsule model
        inflated = new Team(teamData);
        // safeSet ignores attributes the client isn't allowed to edit
        // (if this came from the server we'd use the normal 'set')
        inflated.safeSet(newAttributes);
        // continue the waterfall
        cb();
      },
      function (cb) {
        // do the edit, all we're doing is storing JSON strings w/ ids
        feed.edit(JSON.stringify(inflated.toJSON()), result.id, cb);
      }
    ], function (err) {
      var code;
      if (!err) {
        code = 200;
        logger.info('edited team', {team: team, attrs: newAttributes});
      } else if (err === 'notAllowed') {
        code = 403;
        logger.warn('not allowed to edit');
      } else {
        code = 500;
        logger.error('error editing team', {err: err, job: json});
      }
      // finish the job 
      editTeamJob.finish(jobId, null, JSON.stringify({code: code}));
      // keep the loop crankin'
      process.nextTick(workers.editTeam);
    });
  });
};

Sounds like a lot of work

Granted, writing a worker for each type of action a user can perform in the app, with all the related job and validation code, is not an insignificant amount of work. However, it worked rather well for us to use the state syncing stuff in Capsule while we were still in the prototyping stage, then convert the server-side code to a Thoonk-based solution when we were ready to roll out to production.

So why does any of this matter?

It works.

What this ultimately means is that we now push the system until Redis is our bottleneck. We can spin up as many workers as we want to crank through jobs and we can write those workers in any language we want. We can put our node app behind HAProxy or Bouncy and spin up a bunch of ’em. Do we have all of this solved and done? No. But the core ideas and scaling paths seem fairly clear and doable.

[update: Just to add a bit more detail here, from our tests we feel confident that we can scale to tens of thousands of users on a single server, and we believe we can scale horizontally after doing some intelligent sharding with multiple servers.]

Is this the “Rails of Realtime?”

Nope.

Personally, I’m not convinced there ever will be one. Even Owen Barnes (who originally set out to build just that with SocketStream) said at KRTConf: “There will not be a black box type framework for realtime.” His new approach is to build a set of interconnected modules for structuring out a realtime app based on the unique needs of its specific goals.

The kinds of web apps being built these days don’t fit into a neat little box. We’re talking to multiple web services, multiple databases, and pushing state to the client.

Mikeal Rogers gave a great talk at KRTConf about that exact problem. It’s going to be really, really hard to create a framework that solves all those problems in the same way that Rails or Django can solve 90% of the common problems with routes and MVC.

Can you support a BAJILLION users?

No, but a single Redis db can handle a fairly ridiculous amount of users. At the point that actually becomes our bottleneck, (1) we can split out different feeds for different databases, and (2) we’d have a user base that would make the app wildly profitable at that point—certainly more than enough to spend some more time on engineering. What’s more, Salvatore and the Redis team are putting a lot of work into clustering and scaling solutions for Redis that very well may outpace our need for sharding, etc.

Have you thought about X, Y, Z?

Maybe not! The point of this post is simply to share what we’ve learned so far.

You’ll notice this isn’t a “use our new framework” post. We would still need to do a lot of work to cleanly extract and document a complete realtime app solution from what we’ve done in &bang—particularly if we were trying to provide a tool that can be used to quickly spin up an app. If your goal is to find a tool like that, definitely check out what Owen and team are doing with SocketStream and what Nate and Brian are doing with Derby.

We love the web, and love the kinds of apps that can be built with modern web technologies. It’s our hope that by sharing what we’ve done, we can push things forward. If you find this post helpful, we’d love your feedback.

Technology is just a tool; ultimately, it’s all about building cool stuff. Check out &bang and follow me @HenrikJoreteg, Adam @AdamBrault, and the whole @andyet team on the twitterwebz.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg

Last week we launched our newest product, &!, at KRTConf. It’s a realtime, single-page app that empowers teams to bug each other less and get more done as a team.

One of our speakers, Scott Hanselman from Microsoft, tried to open the app in IE9 and was immediately redirected to a page telling users they need WebSockets to use the app. He then wrote a post criticizing this choice, his argument being that users don’t care about the underlying technology; they just want it to work. He thinks we should provide reasonable fallbacks so that the app works for as wide an audience as possible.

I completely agree with his basic premise: users don’t care about the technology.

Users care about their experience.

I think this is something the web has ignored for far too long so I’ll say it again:

Users only care about their experience.

In this case, we’re not building a website with content. We’re building an experience.

We didn’t require WebSockets because we’re enamored with the technology; we require it precisely because it provides the best user experience.

The app simply doesn’t feel as responsive when long-polling. There’s enough of a difference in lag and responsiveness that we made the choice to eliminate the other available transports in Socket.io. (We’re doing a lot more with our data transport than simply sending chats.) We’re also using advanced HTML5 and CSS3 that simply isn’t available in IE9. It turns out that checking for WebSockets is a fairly good litmus test for support of those other features (namely CSS3 transitions and animations). The app is just plain more fun to use because of those features.
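For what it’s worth, the check itself is tiny. Here’s a minimal sketch of that litmus test in the browser (the redirect URL is hypothetical, and a production check might also account for prefixed implementations like Firefox’s MozWebSocket at the time):

if (!window.WebSocket) {
  // No native WebSockets. In browsers of this era, that also correlates
  // strongly with missing CSS3 transitions and animations, so one check
  // covers both. Send the user to a page explaining what we require.
  window.location = '/websockets-required';
}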

Apple beat Microsoft by focusing on user experience. They unapologetically enforced minimum system requirements and made backward-incompatible changes. Why is it considered “acceptable” to require minimum hardware (which costs money), but somehow not acceptable to require users to download a free browser?

I’ve said this over and over again: web developers who are building single-page applications are in direct competition with native applications.

If we as web developers continue to limp along support for less-than-top-notch browsers, the web will continue to lose ground to the platforms that build for user experience first. Why should we, as a small bootstrapped company invest our limited resources building less-than-ideal fallbacks?

All this, of course, depends on your audience. We created &! for small, forward-thinking teams, not necessarily their moms. :)

● posted by Nathan Fritz

As application developers, we persist data in tables that are constantly updated, leaving most of the application’s components and user interface in the dark until they ask for the data.

[Movie trailer voice] Imagine a world where these tables push change-events to any piece of your application stack, in diverse languages and on multiple servers.[/Movie trailer voice]

Enter Thoonk.

Clustering Node.js instances, communicating between service components in different languages and on different machines, forking off asynchronous jobs for reliability and queuing of work, communicating between APIs and views, and sending events to real-time webapps are all problems that can be solved with messaging.

Thoonk solves these problems more gracefully than simple messaging because the messages are change-events on persisted data.

Thoonk is a Redis schema for manipulating advanced, live objects (feeds, sorted-feeds, queues, job-queues, etc.). Thoonk is also a couple of implementations of this schema (currently thoonk.js for Node.js and thoonk.py for Python).

Thoonk is a lot of things, which I’ll describe below, but what I’d really like you to take away is what the concept is useful for.

A feed is a list of data entries that have publish, edit, retract, and other events associated with those entries. A feed brings to mind Atom or RSS for most people, but I think feeds are most useful when the associated events are broadcast on publish-subscribe channels so that data can be synchronized. Redis contains both of the necessary components (object storage and publish-subscribe channels).

Thoonk feeds enable our “live tables” fantasy.
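To make the idea concrete, here’s a minimal sketch in Node.js using the node_redis client. This is not the actual Thoonk schema; the key and channel names are made up for illustration. The point is the pairing: every write to the stored object is immediately broadcast as an event, so any subscriber, in any language, sees the change as it happens.

var redis = require('redis');

var client = redis.createClient();   // connection for storage writes
var listener = redis.createClient(); // second connection: a subscribed
                                     // connection can't issue other commands

// Any component of the stack can subscribe to the change events.
listener.subscribe('feed.publish:teams');
listener.on('message', function (channel, message) {
  console.log('live table changed:', JSON.parse(message));
});

// Publishing an item: persist it, then broadcast the change event.
function publish(id, item) {
  client.hset('feed.items:teams', id, JSON.stringify(item));
  client.publish('feed.publish:teams', JSON.stringify({id: id, item: item}));
}

publish('team1', {name: 'andyet'});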

Let’s get specific about Thoonk feed-types.

Please refer to the Thoonk.js and Thoonk.py documentation for examples.

The basic feed is a list of items sorted by publish time. Verbs on these objects include publish, edit, and retract. Feeds may be configured with a maximum number of items; when that limit is exceeded, the oldest items are dropped. Every item may have a unique assigned id, or Thoonk will generate one for you.

Sorted-Feeds are similar to feeds, but they have no item limit (beyond practical memory limitations) and are sorted by publishing items relative to existing item ids. Verbs for sorted-feeds include append, prepend, publishBefore, publishAfter, move, edit, and retract. Sorted-feeds emit position updates when an item is published or moved in addition to publish, edit, and retract events.

Queues contain items that can be placed at the beginning or end, producing FIFO and LIFO queues. A queue get is a blocking operation with an optional timeout that pops an item off of the end. Queues can be used for simple messaging and task distribution.

Job channels distribute items in a guaranteed-completion manner. Jobs consist of three queues: available jobs, in-flight jobs, and stalled jobs. Like queues, jobs can be pushed to the beginning or end of available jobs, and getting a job is a blocking operation with a timeout. Job verbs include: publish, retract, get, cancel (place an in-flight job back into available jobs), stall (set aside a job that has been causing problems), and retry (make a stalled job available again).
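As a rough illustration of the mechanics (again, these queue names and the doWork function are made up, not the actual Thoonk schema), the classic Redis pattern behind this is BRPOPLPUSH, which atomically moves a job from the available queue to the in-flight queue. A worker that dies mid-job leaves the job visible in in-flight, where it can later be retried or stalled instead of silently lost:

var redis = require('redis');
var client = redis.createClient(); // dedicated connection: blocking pops
                                   // tie a connection up while they wait

function doWork(job) { /* real work would happen here */ }

function nextJob() {
  // Blocking get with a 30 second timeout. The atomic move from
  // available to in-flight is what makes completion guaranteeable.
  client.brpoplpush('jobs:available', 'jobs:in-flight', 30, function (err, job) {
    if (err || !job) return nextJob(); // timed out; wait for the next one
    try {
      doWork(JSON.parse(job));
      client.lrem('jobs:in-flight', 1, job); // finished: clear it
    } catch (e) {
      client.lrem('jobs:in-flight', 1, job); // problem job: stall it
      client.lpush('jobs:stalled', job);     // for inspection and retry
    }
    nextJob();
  });
}

nextJob();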

Sets will be added in the near future as a means of maintaining live filters/queries for feeds and other data.

An example Thoonk ecosystem:

Thoonk is a tool that allows you to create an Internet service as a wide ecosystem rather than a deep application. Say we provide a series of 8 Node.js processes to take advantage of the number of CPU threads available. This Node.js application provides a websocket interface to a browser-side JS application, with live events coming from Thoonk feeds on Redis, organized by individual users and teams. In another process, we might run a Ruby service that provides a REST interface for manipulating and querying objects within users and groups. Say also that we want to peer certain data with other services: we can run a Python process which provides XMPP Publish-Subscribe (XEP-0060) and a Java interface which provides a PubSubHubbub interface. In addition, background jobs that absolutely have to be done can be pushed through a job system with workers running in C.

All of these separate components subscribe to the feeds pertinent to their function, as well as providing the relevant ACLs and interfaces to the end-points. You are now free to use the most appropriate tools for the job, distribute load, organize application data, and selectively synchronize state with ease. Of course, if you don’t need a lot of processes on a lot of servers in a lot of languages, you can still take advantage of compartmentalizing and duplicating your components.

Backstory

I find messaging to be an interesting problem, particularly when machines communicate to share state, make requests, and so on. However, messaging has limited use without persistent data, which is why I like XMPP Publish-Subscribe (XEP-0060) so much. Feeds of data, combining persistence with publish-subscribe events about changes to that data, are incredibly valuable in machine-to-machine communication.

This is something I’ve been applying to clustering, configuration distribution, job distribution and management, real-time webapps, and other problems for years now in my consulting work.

Then I discovered Redis, a very fast key-value store with container types that also includes publish-subscribe, and I immediately knew what I had to build.

I’m publishing this under the MIT license because I not only want to share it, but I also want your feedback, harsh criticism, and contributions. We need more implementations in other languages, and I’d love to see people publish tools that build on Thoonk interfaces. Please also point out flaws in the contract.txt (schema) document, and show us your extensions and your own object types.

Just hit me up @fritzy and/or Lance Stout @lancestout on twitter, follow the github projects (Thoonk.js and Thoonk.py), and watch http://thoonk.com.

Our team at &yet always seems to find its way to interesting work, so be sure to follow us on Twitter for the latest.

-Nathan Fritz, &yet Chief Architect