Blog

● posted by Adam Brault

Because we are huge fans of human namespace collisions and amazing people, we’re adding two new members to our team: Adam Baldwin and Nathan LaFreniere, both in transition from nGenuity, the security company Adam Baldwin co-founded and built into a well-respected consultancy that has advised the likes of GitHub, AirBNB, and LastPass on security.

We have relied on Adam and Nathan’s services through nGenuity to inform, improve, and check our development process, validating and invalidating our team’s work and process, providing education and correction along the way. We are thrilled to be able to bring these resources to bear with greater influence, while giving Adam Baldwin the authority to improve the areas that need it.

Adam Baldwin

Adam Baldwin has served as &yet’s most essential advisor since our first year, providing me with confidence in venturing more into development as an addition to my initial web design freelance business, playing “panoptic debugger” when I struggled with it, helping us establish good policy and process as we built our team, improving our system operations, and always, always, bludgeoning us about the head regarding security.

It really can’t be expressed how much respect our team at &yet and I have for Adam and his work.

He’s uncovered Basecamp vulnerabilities that encouraged 37Signals to change their policies for handling reported vulnerabilities, found huge holes in Sprint/Verizon MiFi (which made for one of the most hilarious stories I’ve been a part of), twice published vulnerabilities that could root Rackspace, shared research with uberhackers at DEFCON, and has provided security advice for a number of first-class web apps, including ones you’re using today and conceivably right now.

Adam Baldwin will be joining our team at &yet as CSO—it’s a double title: Chief of Software Operations and Chief Security Officer.

Adam will be adding his security consultancy, alongside &yet’s other consulting services, but will also be overseeing our team’s software processes, something he has informed, shaped, and helped externally verify since, I think, before most of our team was born.

On a personal note (a longer version of which is here), I must say it’s a real joy to be able to welcome one of my best friends into helping lead a business he helped build as much as anyone on our team.

Nathan LaFreniere

As excited as I am personally to add Adam Baldwin, our dev team is even more thrilled about adding Nathan, whose services we’ve come to rely on through our contract with nGenuity and on a large project where we’ve served a mutual customer.

Nathan is a multitalented dev/ops badass well-versed in automated deployment tools.

He solves operations problems with a combination of experience, innovation, and willingness to learn new tools and approaches.

He’s already gained a significant depth of experience building custom production systems for Node.js, including some tools we’ve come to rely on heavily for &bang.

Nathan’s passion for well-architected, smoothly running, and meticulously monitored servers has helped our developers sleep at night, very literally.

I know the luxury of having a huge amount of Nathan’s time at our developers’ disposal sounds to them like diving into a pool of soft kittens who don’t mind you diving on them and aren’t hurt at all by it either oh and they’re declawed and maybe wear dentures but took them out.

So that’s what we have for you today.

We think you’re gonna love it.

● posted by Nathan Fritz

Now you’re thinking with feeds!

When I look at a single-page webapp, all I see are feeds; I don’t even see the UI anymore. I just see lists of items that I care about. Some of which only I have access to and some of which other groups have access to. I can change, delete, re-position, and add to the items on these feeds and they’ll propagate to the people and entities that have access to them (even if it is just me on another device or at a later date).

I’ve seen it this way for years, but I haven’t grokked it enough to articulate what I was seeing until now.

What Thoonk Is

Thoonk is a series of higher-level objects built on Redis that send publish, edit, delete, and position events when they change. These objects are feeds for making real-time applications and feed services.

What is a Thoonk feed?

A Thoonk feed is a list of indexed data objects that are limited by topic and by what a single entity might subscribe to. An RSS/ATOM feed qualifies. What makes a Thoonk feed different from a table? A table is limited to a topic, but lacks single entity interest limitations. A Thoonk feed isn’t just a message broker, it’s a database-store that sends out events when the data changes.

Let’s use &bang as an example. Each team-member has a list of tasks. In a relational database we might have a table that looks like this:

team_member_tasks

id | team_id | member_id | description | complete bool | etc.

Whenever a user renders their list, we’d query that table, limiting to a specific user and a specific team.

If we converted this table, without changing it, into a Thoonk feed, then we would only be able to subscribe to ALL tasks and not just the tasks of a particular team or member. So, instead, a Thoonk feed might look like:

team:<team_id>:member:<member_id>:tasks

{description: "", completed: false, etc, etc}

Now when the user wants a rendered list of tasks, I can do one index look-up rather than three, and I am able to subscribe to changes on the specific team member’s tasks, or even to team:353:member:*:tasks to subscribe to all of that team’s tasks.

[Note: I suppose you could arrange a relational database this way, but it wouldn’t really be able to take advantage of SQL, nor could you subscribe to the table to get changes.]
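To make that concrete, here’s a minimal sketch (in JavaScript) of publishing a task into one of these per-member feeds. Assume thoonk is an initialized Thoonk client connected to Redis; the exact publish signature is my assumption, modeled on the publish/edit/delete/reposition verbs described above:

// assume `thoonk` is an initialized Thoonk client connected to Redis
var tasks = thoonk.feed('team:353:member:42:tasks');

// publish a task item under an explicit id; the (item, id, callback)
// signature here is an assumption
tasks.publish(JSON.stringify({
    description: 'Write the launch post',
    completed: false
}), 'task1001', function (err) {
    if (err) console.log('publish failed: ' + err);
});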

It’s Feeds All the Way Up

If I use Thoonk subscribe-able feeds as my data-storage engine, life gets so much easier. When a user logs in, I can subscribe callbacks contextualized just for them to the feeds of data they have access to read. This way, if their data changes for any reason, by any process, on any server, it can bubble all the way up to the user without having to run any queries. I can also subscribe separate processes that automatically scrub, pre-index, cull, or perform any number of tasks on any Thoonk feed a particular process cares about. I can use processes in mixed languages to provide monitoring and additional APIs for the feeds.
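As a rough sketch, subscribing those contextualized callbacks might look like this (the subscribe handler shape is my assumption; the event names mirror the publish/edit/delete/position events above, and a Socket.io-style socket stands in for the transport):

// hypothetical sketch: on login, subscribe callbacks scoped to one
// user's feeds and forward every change over their socket
function subscribeUserFeeds(thoonk, socket, teamId, memberId) {
    var feed = thoonk.feed('team:' + teamId + ':member:' + memberId + ':tasks');
    // the handler-object form of subscribe is an assumption
    feed.subscribe({
        'publish': function (id, item) { socket.emit('task:publish', id, item); },
        'edit': function (id, item) { socket.emit('task:edit', id, item); },
        'delete': function (id) { socket.emit('task:delete', id); },
        'position': function (id, pos) { socket.emit('task:position', id, pos); }
    });
}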

But What About Writes?

Let’s not think in terms of writes. Writes are just changes to feed items (publishing, editing, deleting, repositioning) that write the data to RAM/disk and inform any subscribers of the change. Let’s instead think in terms of user-actions. A user-action (such as delegating a task to another user in &bang) needs ACL checks and may affect multiple feeds in a single call. If we defer user-actions to jobs (a special kind of Thoonk feed), we can easily isolate, scale, share, and distribute the business-logic involved in dealing with a user-action.
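For example, delegating a task might be queued rather than written directly to feeds. A sketch, with the payload fields invented for illustration:

// defer the user-action to a job instead of writing feeds directly;
// the payload fields are illustrative
thoonk.job('delegateTask').put(JSON.stringify({
    userId: 'user17',      // requester, taken from the trusted server session
    taskId: 'task1001',    // the task being delegated
    toMemberId: 'member9'  // who receives it
}));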

What Are Thoonk Jobs?

Thoonk Jobs are items that represent business-logic needing to be done reliably, a single time, by any available worker. Jobs are consumed as fast as a worker-pool can consume them. A job feed is a list of job items, each of which is in one of three states: available, in-flight, or stalled. Available jobs are taken by a worker and placed in an in-flight set while they are being processed. When the job is done, it is removed from the in-flight set and its item is deleted. If the worker fails to complete the job (because of an error, distaste, or a monitoring process deciding that the job has timed out), the job may be placed back on the available list or moved to the stalled set.
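A minimal worker loop for such a feed might look like this sketch (get and finish follow Thoonk’s job API; handleDelegateTask is a hypothetical stand-in for the business logic, and retry/stall handling is left to a monitoring process):

var jobs = thoonk.job('delegateTask');

function work() {
    // a get timeout of 0 blocks until a job becomes available
    jobs.get(0, function (err, json, jobId, timeout) {
        if (err) return process.nextTick(work);
        handleDelegateTask(JSON.parse(json), function (err, result) {
            // finishing removes the job from the in-flight set and deletes it;
            // if we never finish it, a monitor can retry or stall the job
            if (!err) jobs.finish(jobId, null, JSON.stringify(result));
            process.nextTick(work);
        });
    });
}
work();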

Why use Thoonk Jobs for User-Actions?

  • User-actions that fail for some reason can be retried (you can also limit the # of retries).
  • The work can be distributed across processes and servers.
  • User-actions can burst much faster than the workers can handle them.
  • A user-action that ultimately fails can be stalled; an admin is notified to investigate and can edit and/or retry the job once the underlying issue has been resolved, or use it to test the fix.
  • Any process in any language can contribute jobs (and get results from them) without having to re-implement the business logic or ACL.

The Last One is a Doozy

Scaling, reliability, monitoring and all of that is nice, but being able to build your application out rather than up is, I believe, the greatest reason for this approach. &bang is written in node.js, but if I have a favorite library for implementing a REST interface or an XMPP interface written in Python or Ruby (or any other language), I can quickly put that together and add it as a process. In fact, I can pretty much add any piece of functionality as a process without having to reload the rest of the application server, and really isolate a feature as its own process. User-actions from this process can be published to Thoonk Job feeds without having to worry about request validation or ACL since that is handled by the worker itself.

Rather than having a very large, complex application, I can have a series of very small processes that automatically cluster and are informed of changes in areas of their specific concerns.

Scaling Beyond Redis

Our testing indicates that Redis will not be a choke point until we have nearly 100,000 active users. The plan to scale beyond that is to shard &bang by teams. A quick look-up will tell us which server a team resides on, and users and processes can subscribe callbacks to connections on those servers. In that way, we can run many Redis servers and theoretically scale horizontally. High-availability is handled by a slave for each shard and a gossip protocol for promoting slaves.
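In sketch form, that look-up could be as simple as the following (entirely hypothetical: shardDirectory and the per-shard Thoonk clients are assumptions about how we’d wire things up):

// hypothetical: a directory maps each team to the Redis shard holding it
function feedForTeam(teamId, memberId, callback) {
    shardDirectory.lookup(teamId, function (err, host) {
        if (err) return callback(err);
        var thoonk = thoonkClients[host]; // one Thoonk client per shard
        callback(null, thoonk.feed('team:' + teamId + ':member:' + memberId + ':tasks'));
    });
}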

Conflict Resolution and Missed Updates

Henrik’s recent post spawned a couple of questions about conflict resolution. First I’ll give a deflection, and then I’ll give a real answer.

&bang doesn’t yet need conflict resolution. None of the writes are actually done on the client; they are all RPC calls which go into a job queue. Then the workers validate the payload, check the ACL, and update some feeds, at which point the data bubbles back up to the client. The feed updates are atomic and happen quite quickly. Also, two users being able to edit the same item only comes up with delegated tasks, in which case the most recent edit wins.

Ok, now the real answer. Thoonk is going to have revision history and incrementing revision numbers for 1.0. Each historical item is the same as the publish/edit/delete/reposition updates that are sent via pubsub. When a user-change job is done, the client can send its current revision numbers for the feeds involved, and thus conflicts on an edit can be detected. The historical data should be enough to facilitate some form of conflict resolution (determined by the application implementer). The revision numbers can also bubble up to the client, so the client can detect missing updates and ask for a replay from a given revision number.

Currently we’re punting on missed items. Anytime the &bang user is disconnected, the app is disabled and refreshed when it is able to reconnect. A more elaborate solution using the new Thoonk features I just listed is probably coming, and perhaps some real offline-mode support with local “dirty” changes that get resolved when you come back online.

All Combined

Using Thoonk, we were able to make &bang scale to tens of thousands of active users on a single server, burst user-activity beyond our choke-points, isolate user-action business-logic and ACL, automatically cluster to more servers and processes, choose any language with a supported Redis client library for individual features and interfaces, bubble data changes all the way up to the user regardless of the source of the change, provide an easy way of iterating, and generally create a kick-ass, realtime, single-page webapp.

Can I Use Thoonk Now?

Thoonk.js and Thoonk.py are MIT licensed, and free to use. While we are using Thoonk.js in production and it is stable there, the API is not final. Currently I’m moving the feed logic to Redis Lua scripts, which will be officially supported in Redis 2.6, with an RC1 promised for this December. I plan to be ready for that. The Lua scripting will give us performance gains and remove unnecessary extra logic to keep publish/edit/delete/reposition commands atomic, but most importantly it will allow us to share the core code with all implementations of Thoonk, allowing us to easily add and support more languages. As mentioned previously, as I do the Redis Lua scripting, I’ll be adding revision history and revision numbers to feeds, which will facilitate conflict detection and replay of missed events.

That said, feel free to comment, contribute, steal, or abuse the project in the meantime. A 1.0 release will indicate API stability, and I will encourage its use in production at that point. I will soon be breaking out the Lua scripts to their own git repo for easy implementation.

If you want to keep an eye on what we’re doing, follow me @fritzy and @andyet on twitter. Also be sure to check out &bang for getting stuff done with your team.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Shoot Henrik an email (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg

This last year, we’ve learned a lot about building scalable realtime web apps, most of which has come from shipping &bang.

&bang is the app we use to keep our team in sync. It helps us stay on the same page, bug each other less and just get stuff done as a team.

The process of actually trying to get something out the door on a bootstrapped budget helped us focus on the most important problems that needed to be solved to build a dynamic, interactive, real-time app in a scalable way.

A bit of history

I’ve written a couple of posts on backbone.js since discovering it. The first one introduces Backbone.js as a lightweight client-side framework for building clean, stateful client apps. In the second post I introduced Capsule.js, a tool I built on top of Backbone that adds nested models and collections and also lets you keep a mirror of your client-side state on a node.js server to seamlessly synchronize state between different clients.

That approach was great for quickly prototyping an app. But as I pointed out in that post, that’s a lot of in-memory state being stored on the server, and it simply doesn’t scale very well.

At the end of that post I hinted at what we were aiming to do to ultimately solve that problem. So this post is meant to be a bit of an update on those thoughts.

Our new approach

Redis is totally freakin’ amazing. Period. I can’t say enough good things about it. Salvatore Sanfilippo is a god among men, in my book.

Redis can scale.

Redis can do PubSub.

PubSub just means events. Just like you can listen for click events in JavaScript in a browser, you can listen for events in Redis.
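For instance, with the node_redis client, wiring up events takes only a few lines (a sketch; the channel name is made up):

var redis = require('redis');

// pub/sub needs a dedicated connection: once a client subscribes,
// it can't issue regular commands, so we create one for each role
var sub = redis.createClient();
var pub = redis.createClient();

sub.on('message', function (channel, message) {
    console.log('event on ' + channel + ': ' + message);
});
sub.subscribe('example:events');

pub.publish('example:events', 'something changed');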

Redis, however, is a generic tool. It’s purposely fairly low-level so as to be broadly applicable.

What makes Redis so interesting, from my perspective, is that you can treat it as a shared memory between processes, languages and platforms. What that means, in a practical sense, is that as long as each app that uses it interacts with it according to a pre-defined set of rules, you can write a whole ecosystem of functionality for an app in whatever language makes the most sense for that particular task.

Enter Thoonk

My co-worker, Nathan Fritz, is the closest thing you can get to being a veteran of realtime technologies.

He’s a member of the XSF council for the XMPP standard and probably wrote his first chat bot before you knew what chat was. His SleekXMPP Python library is iconic in the XMPP community. He has a self-declared unnatural love for XEP-0060, which describes the XMPP PubSub standard.

He took everything he learned from his work on that standard and built Thoonk. (In fact, he actually kept the PubSub spec open as he built the JavaScript and Python implementations of Thoonk.)

What is Thoonk?

Thoonk is an abstraction on Redis that provides higher-level datatypes for a more approachable interface. Essentially, staring at Redis as a newbie is a bit intimidating. Not that it’s hard to interface with, it’s just kind of tricky to figure out how to logically structure and retrieve your data. Thoonk simplifies that into a few datatypes that describe common use cases: primarily “feeds”, “sorted feeds”, “queues” and “jobs”.

You can think of a feed as an ad-hoc database table. They’re “cheap” to create; you simply declare one to create or use it. For example, in &bang, we have all our users in a feed called “users” for looking up user info. But also, each user has a variety of individual feeds. For example, they have a “task” feed and a “shipped” feed. This is where it veers from what people are used to in a relational database model, because each user’s tasks are not part of a global “tasks” feed. Instead, each user has a distinct feed of tasks because that’s the entity we want to be able to subscribe to.

So rather than simply breaking down a model into types of data, we end up breaking things into groups of items (a.k.a. “feeds”) that we want to be able to track changes to. So, as an example, we may have something like this:

// our main user feed
var userFeed = thoonk.feed('users');

// an individual task feed for a user
var userTaskFeed = thoonk.sortedFeed('team.andyet.members.{{memberID}}.tasks');

Marrying Thoonk and Capsule

Capsule was actually written with Thoonk in mind. In fact, that’s why they were named the way they were: you know those lovely pneumatic tube systems they use to send cash to bank tellers and at Costco? (PPSHHHHHHH—THOONK! And here’s your capsule.)

Anyway, the integration didn’t end up being quite as tight as we had originally thought, but it still works quite well. Loose coupling is better anyway, right?

The core problem I was trying to solve with Capsule was unifying the models that are used to represent the state of the app in the browser and the models you use to describe your data on the server—ideally, not just unifying the data structure, but also letting me share behavior of those objects.

Let me explain.

As I mentioned, we recently shipped &bang. It lets a group of people share their task lists and what they’re actively working on with each other.

It spares you from a lot of “what are you working on?” conversations and increases accountability by making your work quite public to the team.

It’s a realtime, keyboard-driven web app that is designed to feel like a desktop app. &bang is a node.js application built entirely with the methods described here.

So, in &bang, a team model has attributes as well as a couple of nested backbone collections such as members and chat messages. Each member has attributes and other nested collections, tasks, shipped items, etc.

Initial state push

When a user first logs in we have to send the entire model state for the team(s) they’re on so we can build out the interface (see my previous post for more on that). So, the first thing we do when a user logs in is subscribe them to the relevant Thoonk feeds and perform the initial state transfer to the client.

To do this, we init an empty team model on the client (a backbone/capsule model shared between client/server). Then we recurse through our Thoonk feed structures on the server to export the data from the relevant feeds into a data structure that Capsule can use to import that data. The team model is inflated with the data from the server and we draw the interface.

From there, the application is kept in sync using events from Thoonk that get sent over websockets and applied to the client interface. Events like “publish”, “change”, “retract” and “position”.
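On the client, that boils down to mapping each event to a collection operation, roughly like this sketch (lookupCollection, which maps a feed name to the right nested collection, and the socket event names are both assumptions):

// translate feed events arriving over the websocket into Backbone
// collection operations on the client
socket.on('publish', function (feedName, id, json) {
    var attrs = JSON.parse(json);
    attrs.id = id;
    lookupCollection(feedName).add(attrs);
});
socket.on('change', function (feedName, id, json) {
    lookupCollection(feedName).get(id).set(JSON.parse(json));
});
socket.on('retract', function (feedName, id) {
    var collection = lookupCollection(feedName);
    collection.remove(collection.get(id));
});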

Once we got the app to the point where this was all working, it was kind of a magical moment, because at this point, any edits that happen in Thoonk will simply get pushed out through the event propagation all the way to the client. Essentially, the interface that a user sees is largely a slave to the server. Except, of course, the portions of state that we let the user manipulate locally.

At this point, user interactions with the app that change data are all handled through RPC calls. Let’s jump back to the server and you’ll see what I mean.

I thought you were still using Capsule on the server?

We do, but differently. Here’s how that’s handled.

In short… it’s a job system.

Sounds intimidating right? As someone who started in business school, then gradually got into front-end dev, then back-end dev, then a pile of JS, job systems sounded scary. In my mind they’re for “hardcore” programmers like Fritzy or Nate or Lance from our team. Job systems don’t have to be that scary.

At a very high level you can think of a “job” as a function call. The key difference being, you don’t necessarily expect an immediate result. To continue with examples from &bang: a job may be to “ship a task”. So, what do we need to know to complete that action? We need the following:

  • member Id of the user shipping the task
  • the task id being completed (we call this “shipping”, because it’s cooler, and it’s a reminder that finishing is what’s important)

We can derive everything else we need from those key pieces of information.

So, rather than call a function somewhere:

shipTask(memberId, taskId)

We can just describe a job as a simple JSON object:

{
    userId: <user requesting the job>,
    taskId: <id of task to 'ship'>,
    memberId: <id of team member>
}

Then we can add that to our “shipTask” job queue like so:

thoonk.job('shipTask').put(JSON.stringify(jobObject));

The cool part about the event propagation I talked about above is we really don’t care so much when that job gets done. Obviously fast is key, but what I mean is, we don’t have to sit around and wait for a synchronous result because the event propagation we’ve set up will handle all the application state changes.

So, now we can write a worker that listens for jobs from that job queue. In that worker we’ll perform all the necessary related logic. Specifically stuff like:

  • Validating that the job is properly formatted (contains required fields of the right type)
  • Validating that the user is the owner of that task and is therefore allowed to “ship” it.
  • Modifying Thoonk feeds accordingly.

Encapsulating and reusing model logic

You’ll notice that part of that list requires some logic. Specifically, checking to see if the user requesting the action is allowed to perform it. We could certainly write that logic right here, in this worker. But, in the client we’re also going to want to know if a user is allowed to ship a given task, right? Why write that logic twice?

Instead we write that logic as a method of a Capsule model that describes a task. Then, we can use the same method to determine whether to show the UI that lets the user perform the action in the browser as we use on the back end to actually perform the validation. We do that by re-inflating a Capsule model for that task in our worker code, calling the canEdit() method on it, and passing it the user id requesting the action. The only difference is that on the server side we don’t trust the user to tell us who they are. On the server we roll the user id we have for that session into the job when it’s created rather than trust the client.
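In sketch form, that shared logic might look like this (plain Backbone shown for brevity, since Capsule models build on Backbone’s; the attribute names are illustrative):

// shared model: the same canEdit logic runs in the browser (to decide
// whether to show the UI) and in the worker (to enforce the rule)
var Task = Backbone.Model.extend({
    canEdit: function (userId) {
        // only the task's owner may edit or ship it
        return this.get('memberId') === userId;
    }
});

// in the worker, using the trusted user id rolled into the job:
var task = new Task(taskData);
if (!task.canEdit(job.userId)) {
    // reject the job with a 403-style result
}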

Security

One other, hugely important thing that we get by using Capsule models on the server is some security features. There are some model attributes that are read-only as far as the client is concerned. What if we get a job that tries to edit a user’s ID? In a backbone model if I call:

backboneModelInstance.set({id: 'newId'});

That will change the ID of the object. Clearly that’s not good in a server environment when you’re trusting that to be a unique ID. There are also lots of other fields you may want on the client but you don’t want to let users edit.

Again, we can encapsulate that logic in our Capsule models. Capsule models have a safeSet method that assumes all inputs are evil. Unless an attribute is whitelisted as clientEditable, safeSet won’t set it. So when we go to set attributes within the worker on the server, we use safeSet when dealing with untrusted input.
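Usage-wise, the difference is a single method call. A sketch, where lastModified is an invented example of a trusted, server-set attribute:

var team = new Team(teamData);  // inflated from the feed, as in the worker
team.safeSet(newAttributes);    // untrusted input: non-whitelisted keys (like id) are ignored
team.set({lastModified: Date.now()}); // trusted server-side value, so plain set is fine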

The other important piece of securing a system that lets users indirectly add jobs to your job system is ensuring that the jobs you receive validate against your schema. I’m using a node implementation of JSON Schema for this. I’ve heard some complaints about that proposed standard, but it works really well for the fairly simple use case I need it for.

A typical worker may look something like this:

workers.editTeam = function () {
  var schema = {
    type: "object",
    properties: {
      user: {
        type: 'string',
        required: true
      },
      id: {
        type: 'string',
        required: true
      },
      data: {
        type: 'object',
        required: true
      }
    }
  };

  // `editTeamJob` is this worker's job feed, created elsewhere with
  // something like: var editTeamJob = thoonk.job('editTeam');
  editTeamJob.get(0, function (err, json, jobId, timeout) {
    var feed = thoonk.feed('teams'), 
      result,
      team,
      newAttributes,
      inflated;

    async.waterfall([
      function (cb) {
        // validate our job
        validateSchema(json, schema, cb);
      },
      function (clean, cb) {
        // store some variables from our cleaned job
        result = clean;
        team = result.id;
        newAttributes = result.data;
        verifyOwnerTeam(team, cb);
      },
      function (teamData, cb) {
        // inflate our capsule model
        inflated = new Team(teamData);
        // untrusted input, so use safeSet; data coming from the
        // server could use normal 'set'
        inflated.safeSet(newAttributes);
        // move on to the next waterfall step
        cb(null);
      },
      function (cb) {
        // do the edit, all we're doing is storing JSON strings w/ ids
        feed.edit(JSON.stringify(inflated.toJSON()), result.id, cb);
      }
    ], function (err) {
      var code;
      if (!err) {
        code = 200;
        logger.info('edited team', {team: team, attrs: newAttributes});
      } else if (err === 'notAllowed') {
        code = 403;
        logger.warn('not allowed to edit');
      } else {
        code = 500;
        logger.error('error editing team', {err: err, job: json});
      }
      // finish the job 
      editTeamJob.finish(jobId, null, JSON.stringify({code: code}));
      // keep the loop crankin'
      process.nextTick(workers.editTeam);
    });
  });
};

Sounds like a lot of work

Granted, writing a worker for each type of action a user can perform in the app, with all the related job and validation code, is not an insignificant amount of work. However, it worked rather well for us to use the state-syncing stuff in Capsule while we were still in the prototyping stage, then convert the server-side code to a Thoonk-based solution when we were ready to roll out to production.

So why does any of this matter?

It works.

What this ultimately means is that we can now push the system until Redis is our bottleneck. We can spin up as many workers as we want to crank through jobs and we can write those workers in any language we want. We can put our node app behind HAProxy or Bouncy and spin up a bunch of ‘em. Do we have all of this solved and done? No. But the core ideas and scaling paths seem fairly clear and doable.

[update: Just to add a bit more detail here, from our tests we feel confident that we can scale to tens of thousands of users on a single server, and we believe we can scale horizontally after doing some intelligent sharding across multiple servers.]

Is this the “Rails of Realtime?”

Nope.

Personally, I’m not convinced there ever will be one. Even Owen Barnes (who originally set out to build just that with SocketStream) said at KRTConf: “There will not be a black box type framework for realtime.” His new approach is to build a set of interconnected modules for structuring out a realtime app based on the unique needs of its specific goals.

The kinds of web apps being built these days don’t fit into a neat little box. We’re talking to multiple web services, multiple databases, and pushing state to the client.

Mikeal Rogers gave a great talk at KRTConf about that exact problem. It’s going to be really, really hard to create a framework that solves all those problems in the same way that Rails or Django can solve 90% of the common problems with routes and MVC.

Can you support a BAJILLION users?

No, but a single Redis db can handle a fairly ridiculous amount of users. At the point that actually becomes our bottleneck, (1) we can split out different feeds to different databases, and (2) we’d have a user base that would make the app wildly profitable—certainly more than enough to spend some more time on engineering. What’s more, Salvatore and the Redis team are putting a lot of work into clustering and scaling solutions for Redis that very well may outpace our need for sharding, etc.

Have you thought about X, Y, Z?

Maybe not! The point of this post is simply to share what we’ve learned so far.

You’ll notice this isn’t a “use our new framework” post. We would still need to do a lot of work to cleanly extract and document a complete realtime app solution from what we’ve done in &bang—particularly if we were trying to provide a tool that can be used to quickly spin up an app. If your goal is to find a tool like that, definitely check out what Owen and team are doing with SocketStream and what Nate and Brian are doing with Derby.

We love the web, and love the kinds of apps that can be built with modern web technologies. It’s our hope that by sharing what we’ve done, we can push things forward. If you find this post helpful, we’d love your feedback.

Technology is just a tool; ultimately, it’s all about building cool stuff. Check out &bang and follow me @HenrikJoreteg, Adam @AdamBrault and the whole @andyet team on the twitterwebz.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Hit us up (henrik@andyet.net) and tell us what we can do to help.

● posted by Henrik Joreteg

Last week we launched our newest product, &!, at KRTConf. It’s a realtime, single-page app that empowers teams to bug each other less and get more done as a team.

One of our speakers, Scott Hanselman from Microsoft, tried to open the app in IE9 and was immediately redirected to a page that tells users they need WebSockets to use the app. He then wrote a post criticizing this choice, his argument being that users don’t care about the underlying technology, they just want it to work. He thinks we should provide reasonable fallbacks so that it works for as wide an audience as possible.

I completely agree with his basic premise: users don’t care about the technology.

Users care about their experience.

I think this is something the web has ignored for far too long so I’ll say it again:

Users only care about their experience.

In this case, we’re not building a website with content. We’re building an experience.

We didn’t require WebSockets because we’re enamored with the technology; we require it precisely because it provides the best user experience.

The app simply doesn’t feel as responsive when long-polling. There’s enough of a difference in lag and responsiveness that we made the choice to eliminate the other available transports in Socket.io. (We’re doing a lot more with our data transport than simply sending chats.) Additionally, we’re also using advanced HTML5 and CSS3 that simply isn’t available yet in IE9. It turns out that checking for WebSockets is a fairly good litmus test of the support of those other features (namely CSS3 transitions and animations). The app is just plain more fun to use because of those features.
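For what it’s worth, the check itself is tiny (a sketch; the redirect target is made up):

// gate the app on WebSocket support as a proxy for the other
// HTML5/CSS3 features the app relies on
if (!window.WebSocket && !window.MozWebSocket) {
    window.location = '/websockets-required';
}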

Apple beat Microsoft by focusing on user experience. They unapologetically enforced minimum system requirements and made backward incompatible changes. Why is it considered “acceptable” to require minimum hardware (which costs money), but it’s somehow not acceptable to require users to download a free browser?

I’ve said this over and over again: web developers who are building single-page applications are in direct competition with native applications.

If we as web developers continue to limp along, supporting less-than-top-notch browsers, the web will continue to lose ground to the platforms that build for user experience first. Why should we, as a small bootstrapped company, invest our limited resources in building less-than-ideal fallbacks?

All this, of course, depends on your audience. We created &! for small, forward-thinking teams, not necessarily their moms. :)

● posted by Eric Zanol

It’s our first podcast, or maybe &cast, and what a start we’re off to.

James displays a knack for not preparing, being distracted, and wiping sweat off his face. He does, however, know what he’s talking about when it comes to CSS specs. Eric asks James to explain the newly proposed subject selectors, link pseudo-classes, and whether or not anyone could become Batman, realistically.

Let us know what you think about the CSS4 proposals and how excited you are about the “parent” selector. Because as you can tell, we’re wicked excited about it over here.

Credits:

“Talent”: @ericzanol (left) and @jamesmenera (right).

Video filmed and produced by the awesome Ms. Mel.

● posted by Nate Vander Wilt

A good software development framework should make the common things easy and make the uncommon things possible.

Unfortunately, Django sometimes makes the simple things easy and the hard things possible — and security is hard!

What Django does well

The Django community does take security very seriously.

The ORM makes it really difficult to expose your app to SQL injection attacks. The template processing system makes it hard to enable cross-site scripting. It takes work to avoid Django’s CSRF protection, and it’d be rare to subvert its well-tested session handling.

Not only that, but Django’s documentation and release notes go the extra mile, discouraging many poor practices and even warning against problems outside of Django that could affect the security of a web app.

Django’s target market

So what’s the problem?

Django has its roots in the publishing industry and got its wings as a basis for sharing-oriented “Web 2.0” sites. When a majority of resources are publicly available, or shared among all logged-in users, it’s possible to focus on securing a few private corners.

What Django’s design considers uncommon is “multitenant” apps — imagine that instead of adding a blog to your company website, you are building a corporate blog–hosting service.

With a single-tenant app, there’s generally some level of trust among all users. Maybe an intern is only supposed to edit customer support documents, but discovers a bug in the custom CMS built on Django that lets him post a funny picture of his boss on the homepage. Sure it was technically the Django-using programmer who let it happen, but it was the intern who betrayed the tenant’s trust.

With multiple tenants, the responsibility of trust is upon the developers. When some computer-savvy ACME Corp employees find a hole that lets them access Wonder Inc’s draft blog posts, they’re just doing their job. If Wonder Inc imagined their exciting product announcements were safe inside your Django app, they won’t care how easy it was to make that security mistake.

What Django makes easy

Most days, most developers are struggling valiantly just to get their code to work. Getting it to “compile”, getting it to “run”, getting it to run “on the production database”. Fixing it to stay running even when a user clicks Y before they click X.

Security is hard because you still have to do all the “just getting it to work”, but you also have to make sure it doesn’t work even if a different user clicks X, fakes Y and then does Z with a little help from A.

Let me make this clear: security mistakes are too common to be a problem of “stupid developers”. Leave the PEBKAC mentality for the poor techs who have to support what they can’t fix — we are developers and designers, busy developing our designs. Engineering, Enforcement and Education are wonderful, but usability is cheaper.

Django’s ORM makes it easy — too easy — to expose database rows to users who shouldn’t have access. It provides a very user-friendly mapping from SQL to model objects. The catch is, the database doesn’t give a rat’s rooty-tooty about your app’s permission model, and neither does the ORM. The database’s job is to be the floppy disk for your spreadsheets; the ORM’s job is to pretend the spreadsheet rows are documents. Fair enough. But the tools Django provides for validating data access are too difficult to customize for an app where every table is shared among mutually untrusted tenants. Remember that developers are naturally inclined to code until it works for them — not to prove that the same code won’t work when an attacker calls it up.

The template processing and file handling infrastructure encourage developers to expose private user uploads via statically hosted media directories. This is fine for a blog, but when a user notices their private upload got renamed to “/media/userimages/image__.jpg” they might start figuring out that Apache will gladly let them see “image_.jpg” (and “image.php”!) in that directory too.

Finally, while most of Django’s middleware does enhance web app security, the error debugging system can lead to inadvertent storage of sensitive user data if an exception catches it mid-flight. This issue is being addressed for Django 1.4, although the design is opt-in and may be a bit fragile in practice — but this particular problem is both hard and uncommon. In this last case I suspect the solution being built in is a good enough design.

How to make secure apps more common

That leaves us with Django’s ORM and file handling — which I’m convinced are not good enough designs for a multitenant web app framework.

In a multitenant app it is very common that model lookups and form validation must be contained to a stricter subset of data than Django encourages.

The very best solution to this problem is to partition your app. Give each tenant their own virtual system, their own database — in short, their own copy of the hosted app. Partitioning does take more work to configure up-front, but that’s the best place for investments like that. It also complicates cross-account administration features: which is exactly the point. Make the uncommon use cases the harder ones, so that the normal stuff is more secure by default.

If you’re not ready or it’s too late to partition, do your whole team a favor and stop using Django’s ORM and ModelForms directly in a multitenant codebase. You need to write an API and force all your code to use it, instead of the ORM. Django’s views are too presentation-focused; they’re not the place to expect secure code. When coding up a working user interface, it’s too easy to say “My code needs this object!” when you mean “Some user would like to access this data?”. Give day-to-day development the freedom to wholeheartedly fight For the user. Build an internal Python data access API for the sole purpose of standing between the user request and the ORM or filesystem; a good gate on this border can keep a thousand welcome mats safe.

Whether you partition your app into single-tenant instances or use an API to isolate data access, you should develop tests primarily for security. If a commit breaks functionality, it’s an obvious bug. Someone will complain soon enough. If a code change only adds “functionality” that isn’t supposed to exist, it’s a zero-day. Will you notice the mistake in time?

Interestingly enough, our security tests do tend to catch functionality regressions too, since they really must check that Mallory can’t do something Alice and Bob can. That’s a nice benefit, especially since you’re still updating tests because the app is getting better and its security needs to as well. (Having to maintain tests that only lock in functionality as it continually changes sucks.)

Focus your programmatic testing efforts on permissions enforcement. Your time is precious — don’t bother with automated tests for anything less valuable than earning trust!

Make boring mistakes hard

Django is a great traditional web framework that makes many customizations easy. It’s possible to build secure multitenant apps using the pieces Django provides, although certain built-in features and certain patterns encouraged by the documentation need to be avoided.

I suspect this is also the case with many other web frameworks. And security might not be the only area where developers’ toolkits make doing things “the wrong way” the easy way.

Pay attention to design decisions at the framework level that distract your team from delivering a great user experience at a higher level.

Avoid shooting yourselves in the foot (feets?) by only picking fights on fronts where the troops will stay engaged. Make solving interesting problems the only uphill battle for your developers. Then level the field for your customers. (That’s what usability is about.)

It may not be self-evident from watching us work, but nerds invented computers to avoid tedious, mistake-prone work. Like end-users, developers have lives and are busy and are experts only in their own passions. Assume security will be taken for granted by users and developers alike!

If secure web apps should be common, vulnerable code must be made hard to write. It is a good workman’s responsibility to blame his tools every now and then — occasionally we get something as useful as Django as a result!

● posted by Henrik Joreteg

Realtime is becoming a central part of Internet technology.

It’s sneaking its way into our lives already with push notifications, Facebook and Google’s web chats, and it’s a core focus for startups like Convore, Pusher, Superfeedr, Browserling, NowJS, Urban Airship, Learnboost, our own &! (andbang), and many more.

What’s most interesting to me is how accessible this is all becoming for developers. In my presentation at NodeConf I mentioned that the adoption of new technology seems directly related to how easy it is to tinker with it. So, as realtime apps get easier and easier to build, I’m convinced that we’re going to see a whole slew of new applications that tap this power in new, amazing ways.

We at &yet have built five or so realtime apps in the past year, and we’re super excited about this stuff. We’ve also discovered that there are a slew of different methods and tools for building these kinds of apps—we’ve used a number of them. Different developer communities have been solving the same problems with different tools and it’s been amazing to see how much mindblowingly awesome code has been so freely shared. However, there’s still a bit of a disconnect, because it often happens within a given dev community. We always find that we learn the most when we talk to and learn from people who are doing things differently than we are.

So what can we do to encourage more of this?

That’s exactly the conversation Adam and I were having when we went to the XMPP Summit in Brussels, Belgium. That conversation culminated in a crazy idea: we should put on a conference entirely focused on realtime web stuff!

It’s crazy, for a couple of reasons. First, we’ve never organized a conference before, and second, we’re in Eastern Washington, not exactly a tech hotspot (although we’re working on that too). Luckily, we’re fortunate to have made some awesome friends as we’ve attended conferences, written blogposts, and worked on pretty cool projects for our clients.

We’re teaming up with Julien Genestoux and Superfeedr to make this all happen. Julien is a pioneer and incredible visionary when it comes to realtime technology. Superfeedr was one of the early startups in the realtime web world. Whether you know it or not, you’ve probably benefited from Superfeedr’s technology while using other services like gowalla, tumblr, etsy, posterous and many more.

Together we’ve managed to line up a ridiculously awesome list of speakers that we’re gradually announcing. So far, we’ve announced Guillermo Rauch (creator of socket.io), Leah Culver (founder of Convore and previously pownce.com), and James Halliday (JS hacker and creator of dnode, browserify, and a bunch of other awesome stuff under the alias “substack”). Also, we’ve just added realtime veteran Jack Moffitt (@metajack) and NowJS’s Sridatta Thatipamala (@sridatta).

Personally, I’m way more excited about attending this event than I am about organizing it. These people are my heroes. We’ve got several more really interesting folks on the TBA list as well.

We’ve been getting some great advice from Chris Williams (JSConf‘s daddy) on how to put on a kick-ass conference. We don’t know if we’ll make any money, in fact, our main goal is just to not lose money. We just want to bring together all of these amazing people from various communities that are pushing the envelope of what can be done in a browser. We need to listen to each other, learn from each other and push each other to solve the problems that can make more awesome apps a possibility.

In order for attendees to get the most value possible, we’re going to do a presentation track (on the top floor) and then a hack-track (on the lower floor), where the presenters can do smaller, follow-up sessions, how-to’s, training, etc. Multiple hack-tracks will be going on simultaneously. The goal being for people to be able to get more in-depth knowledge on the topics that interest them most.

We’re also trying hard to get representatives of various dev communities, so that no one stack is touted as the “One True Way”. That’s just silly. We all have our favorites, I get that, but ultimately we’re better off if we learn from each other, especially from those who are not using our tools of choice. There’s a whole batch of new problems to solve in building (and scaling) rich, real-time applications that work on as many devices as possible.

The details

KRTConf will be Nov. 7-8 in Portland, OR, all the details are on krtconf.com and new stuff is being announced as it happens on twitter at @krtconf and on this blog.

If you wanna be there, you can get tickets on eventbrite and if you’re interested in speaking, sponsoring or otherwise being involved in the event, email Adam adam@krtconf.com or myself henrik@krtconf.com or hit us up on twitter @adambrault @henrikjoreteg.

I’m super excited to be a part of this and hopefully I’ll see you there!

● posted by Adam Brault

Monday will be Melani Brown’s first day as a full-time &yet team member—we can’t wait!

Melani is a talented filmmaker and photographer who will be doing awesome stuff of that sort with us.

She has worked on Kill Bill, Desperate Housewives, Nike commercials, and the online Old Spice social media ad campaign. She has photographed Bon Iver, Sallie Ford & the Sound Outside, and numerous indie bands.

Through her longtime friendship with the equally talented Amy Lynn Taylor, we were privileged to have Mel provide our team’s photography a couple of years ago. We’ve enjoyed several one-off collaborations with her since, including inviting her to participate in our team’s month-long stay in an Italian castle this spring.

It’s been clear for some time that she’s an unofficial member of our team, more than anything because she comfortably fits our approach and values: she’s talented, creative, passionate, and has an attitude of encouraging those around her to grow and succeed.

In her many years of travels across the globe, she could best be described as an itinerant blesser. We feel blessed to officially make her a part of our team.

In addition to the great short film she made about our Italy adventure, here are a couple more examples of Mel’s great work:

coding & designing from around the world from Melani Brown on Vimeo.

Pocket Portrait: 02 Ritchie Young from Melani Brown on Vimeo.

● posted by Adam Brault

We are excited to add Shenoa Lawrence to the &yet team. She began serving part-time as &yet’s Community Coordinator last week.

Shenoa has taken a strong leadership role in our local tech community: <!doctype society>, Room to Think (our local coworking movement), and TriConf (a local barcamp &yet helped sponsor last weekend). She’s also in the process of putting together weCreate, a local directory of people, projects, and products that make up our community. Her dedication and contributions have been a major part of the continued success of all of the above.

We want to affirm that dedication and empower her to continue it.

Shenoa is a veteran web developer and designer, and previously served as a leader of her developer community in the San Francisco Bay Area. Members of our community have huge respect for Shenoa as an individual and as a contributor to the big success of our local dev community.

Since its first days, &yet has invested time and money in helping build our area’s designer and developer community. Our team considers it one of the most important things we’ve been privileged to contribute to.

This is a continuation of those efforts.

We take a realistic view that community is something that emerges from intentionally cultivated soil—there are both mechanic and organic aspects to a good community, and both require hard work.

Since it began in February, <!doctype society> has gained over 70 members and drawn participants from Walla Walla, Yakima, and Spokane—but we know there are many more who should be a part of our local web development and creative community. And our aspirations for these groups are bigger than mere social gatherings—we want to spark the founding of numerous startups in our area and help provide resources for them to succeed.

We’re excited about what Shenoa has contributed so far to that end, thrilled to be able to team up with her further, and eagerly anticipate what’s next.

Please thank Shenoa for her hard work and dedication, and for being willing to take on this new challenge as a continuation of what she’s already helped to build.