Blog

● posted by Henrik Joreteg

TodoMVC is a neat project with a simple idea: to build the same application with a whole slew of different frameworks so we can compare how they solve the same problems.

The project’s maintainers asked me to contribute an example in Ampersand.js, so that’s what we did.

There are a few aspects of the implementation that I thought were worth writing up.

First, some highlights

  1. filesize: The total filesize of all the JS assets required for the app is only 24kb (minified and gzipped), which is smaller than jQuery by itself. By comparison, the Ember.js version is 165kb, and that's without including compiled templates.

  2. super efficient DOM updating: Nothing is re-rendered to the DOM just because the underlying model changed. Only state changes that result in a different outcome touch the DOM at all, and when we do need to update the DOM because of a state change, it’s done using specific DOM methods such as .setAttribute, .innerText, .classList.add, etc. and generally not with .innerHTML. This matters because innerHTML is slower: it requires the browser to parse HTML. The point is that after the initial render, it does the absolute minimum number of DOM updates required, and does them as efficiently as possible.

  3. good code hygiene: Maintainable, readable code. All state is stored in models, zero state in the DOM. Fully valid HTML (I’m looking at you, Angular). Call me old school, but behavior is in JS, style is in CSS, and structure is in HTML.

  4. fully template language agnostic: We’re using Jade here, but it really doesn’t matter, because the bindings are all handled outside of the templating, as we’ll see later. You could easily use the template language of your choice, or even plain HTML strings.

Ok, now let’s get into some more of the details.

Persisting todos to localStorage

The TodoMVC project’s app spec specifies:

Your app should dynamically persist the todos to localStorage. If the framework has capabilities for persisting data (i.e. Backbone.sync), use that, otherwise use vanilla localStorage. If possible, use the keys id, title, completed for each item. Make sure to use this format for the localStorage name: todos-[framework]. Editing mode should not be persisted.

This is ridiculously easy in Ampersand. It could be done as a mixin, so we could use Backbone-esque .save() methods on the models. But given how straightforward this use case is, it’s simpler to just do it directly. We simply create two methods.

One to write the data to localStorage:

writeToLocalStorage: function () {
  localStorage[STORAGE_KEY] = JSON.stringify(this);
}

One to retrieve it:

readFromLocalStorage: function () {
  var existingData = localStorage[STORAGE_KEY];
  if (existingData) {
    this.set(JSON.parse(existingData));
  }
}
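
In both methods, STORAGE_KEY is just a constant defined at the top of the module. Per the spec’s todos-[framework] naming format, it would be something like this:

var STORAGE_KEY = 'todos-ampersand';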

You’ll notice we’re just passing this to JSON.stringify. This works because ampersand collection has a toJSON() method and the spec for the browser’s built-in JSON interface states that it will look for and call a toJSON method on the object passed in, if present. So rather than doing JSON.stringify(this.toJSON()), we can just do JSON.stringify(this). Ampersand collection’s toJSON is simply an alias to serialize which loops through the models it contains and calls each of their serialize methods and returns them all as a serializable array.
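
Here’s a quick illustration of that built-in behavior (nothing Ampersand-specific going on here):

var person = {
  name: 'Henrik',
  toJSON: function () {
    return { name: this.name };
  }
};

// JSON.stringify finds and calls toJSON for us, so both
// of these produce the string '{"name":"Henrik"}'
JSON.stringify(person);
JSON.stringify(person.toJSON());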

So far we’ve just created the methods and not actually used them, so how do we wire that up?

Well, given how simple the app requirements are in this case: “save everything when stuff changes,” we can just have the collection watch itself and persist when it changes. Our models don’t even have to know or care how they get persisted; whenever we add/remove or change something, the collection re-saves itself. This keeps our logic nicely encapsulated in the collection, whose responsibility it is to deal with models. Makes sense, right?

Turns out, that’s quite easy too. Inside our collection’s initialize method, we’ll do as follows. See line comments below:

initialize: function () {
  // Attempt to read from localStorage right away
  // this also adds them to the collection
  this.readFromLocalStorage();

  // We put a slight debounce on this since it could possibly
  // be called in rapid succession. We're using a small npm package
  // called 'debounce' for this: 
  // https://www.npmjs.org/package/debounce
  this.writeToLocalStorage = debounce(this.writeToLocalStorage, 100);

  // We listen for changes to the collection
  // and persist on change
  this.on('all', this.writeToLocalStorage, this);
}

Syncing between multiple open tabs

Even though it’s not specified in the spec, we went ahead and handled the case where you’ve got the app open in multiple tabs in the same browser. In most of the other implementations, this case isn’t covered, but it feels like it should be. Turns out, this is quite simple as well.

We simply add the following line to our initialize method in our collection, which listens for storage events from the window:

window.addEventListener('storage', this.handleStorageEvent.bind(this));

The corresponding handler inside our collection looks like this:

handleStorageEvent: function (event) {
  if (event.key === STORAGE_KEY) {
    this.readFromLocalStorage();
  }
}

The event argument passed to our storage event handler includes a key property which we can use to determine which localStorage value changed. These storage events don’t fire in the tab that caused them, and they only fire in other tabs if the data is actually different. This seems perfect for our case. So we simply check whether the change was to the key we’re storing to, run readFromLocalStorage, and we’re good.

That’s it! Here’s the final collection code.

note: It’s worth noting that the app spec for TodoMVC is a bit contrived (understandably). If you’re going to use localStorage in a real app you should beware that it is shared by all open tabs of your app, and that your data schema may change in a future version. To address these issues, consider namespacing your localStorage keys with a version number to avoid conflicts. While all these problems can be solved, in most production cases you probably shouldn’t treat localStorage as anything other than a somewhat untrustworthy cache. If you use it to store something important and the user clears their browser data, it’s all gone. Also, you can’t always trust that you’ll get valid JSON back, so a try/catch would probably be wise as well.
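
To make that concrete, here’s a sketch of a more defensive version of the read method, assuming a version-namespaced key (the key name here is hypothetical):

readFromLocalStorage: function () {
  // 'todos-ampersand-v1' is a made-up, version-namespaced key
  var existingData = localStorage['todos-ampersand-v1'];
  if (existingData) {
    try {
      this.set(JSON.parse(existingData));
    } catch (e) {
      // invalid JSON: treat it as a cache miss and clear the bad entry
      delete localStorage['todos-ampersand-v1'];
    }
  }
}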

Session properties for editing state

If you paid close attention, you noticed the TodoMVC application spec also says we shouldn’t persist the editing state of a todo. This refers to the fact that you can double-click a task to put it into edit mode.

One thing that’s a bit unique in Ampersand is its use of what we call “session properties” to store things like the editing state.

If you look at the other examples, both Ember and Backbone only reference the “editing” state in the view code or the view controller; there’s no reference to it in the models. Compare that to our todo model:

var State = require('ampersand-state');


module.exports = State.extend({
  // These properties get persisted to localStorage
  // because they'll be included when serializing
  props: {
    title: {
      type: 'string',
      default: ''
    },
    completed: {
      type: 'boolean',
      default: false
    }
  },

  // session properties are *identical* to `props`
  // *except* that they're not included when serializing.
  session: {
    // here we declare the editing state, just like
    // the properties above.
    editing: {
      type: 'boolean',
      default: false
    }
  },

  // This is a convenience method that gives
  // our view a simple method to call when
  // it wants to trash a todo.
  destroy: function () {
    if (this.collection) {
      this.collection.remove(this);
    }
  }
});

You might be thinking, WHAT!? You’re storing view state in the models?!

Yes. Well… sort of.

If you think about it, is it really view state? I’d argue it’s “application state,” or really “session state” that’s very clearly tied to that particular model instance.

Conceptually, at least to me, it’s clear that it’s actually a state of the model. The view is not in “editing” mode; the model is.

How the view or the rest of the app deals with that information is irrelevant. The fact is, when a user edits a todo, they have put that particular todo into an editing state. That has nothing to do with a particular view of that model.

This distinction becomes even more apparent if your app needs to do something else based on that state information, such as disabling application-wide keyboard shortcuts, or applying a class to the todo-list container element when it’s in edit mode.

Even if you disagree with that, what about readability? Let’s say you’re working with a team on this app: where can they go to see all the state we’re storing related to a single todo?

In the Backbone.js example the model code reads like this:

app.Todo = Backbone.Model.extend({
  // Default attributes for the todo
  // and ensure that each todo created has `title` and `completed` keys.
  defaults: {
    title: '',
    completed: false
  },

  // Toggle the `completed` state of this todo item.
  toggle: function () {
    this.save({
      completed: !this.get('completed')
    });
  }
});

and in Ember:

Todos.Todo = DS.Model.extend({
  title: DS.attr('string'),
  isCompleted: DS.attr('boolean')
});

Neither of these gives any indication that we also care about whether a model is in editing mode or not. We’d have to dig into the view to see that. In an app this simple, it’s not a big deal. In a big app this kind of thing gets problematic very quickly.

It feels so much clearer to see all the types of state related to that model in a single place.

Using a subcollection to get filtered views of the todos

The spec says we should have 3 different view modes for our todos:

  1. All todos
  2. Remaining todos
  3. Completed todos

There are a few different ways we could go about this. We’ve got our trusty ampersand-collection-view which will take a collection and render a view for each item in the collection. It also takes care of adding and removing items if the collection changes, as well as cleaning up event handlers if the parent view is destroyed.

That collection view is included in ampersand-view and is exposed as a simple method: renderCollection.

One way to accomplish what’s being asked in the spec would be to create three different collections and shuffle todos around between them based on their completed state, but that feels a bit weird because we really only have one item type. We could also have a single base collection and request a new filtered list of todos from that collection each time any of them changes, which is how the Backbone.js implementation does it. But then it’s no longer just a rendered collection; instead we’d have to re-render a view for each todo in the matching set, which doesn’t feel very clean or efficient.

It seems cleaner/easier to just have a single todos collection and then render a “filtered view,” if you will. Ideally, we’d just be able to set a mode of that filtered view and have it add/remove as necessary.

So we want something that behaves like a normal collection, but which is really just a subset of that collection.

Then we could still just call renderCollection once, using that subcollection.

Then if we change the filtering rules of the subcollection, things would Just Work™. In Ampersand we’ve got just such a thing: ampersand-subcollection.

If you give it a collection to use as a base and a set of rules like filters, a max length, or its own sorting order, it pretends to be a “real” collection. It has a models array of its current models, a length property, and its own comparator, and it will fire events like add/remove/change/sort as the underlying data in the base collection changes, but it will fire those events based on its own defined filters and rules.
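
For example, here’s a sketch (not code from the app) of a filtered, sorted, length-capped view of an existing todos collection, using ampersand-subcollection’s documented options:

var SubCollection = require('ampersand-subcollection');

// `todos` is assumed to be an existing ampersand-collection instance
var completedPreview = new SubCollection(todos, {
  where: { completed: true }, // filter rule
  comparator: 'title',        // its own sort order
  limit: 10                   // max length
});

// behaves like a collection: has .models, .length,
// and fires its own add/remove/change/sort events
console.log(completedPreview.length);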

So, let’s use that. In this case we need a single subcollection, so we’ll create it and attach it to the collection as part of its initialize method:

var Collection = require('ampersand-collection');
var SubCollection = require('ampersand-subcollection');
var Todo = require('./todo');


module.exports = Collection.extend({
  model: Todo,
  initialize: function () {
    ...
    // This is what we'll actually render
    // it's a subcollection of the whole todo collection
    // that we'll add/remove filters to accordingly.
    this.subset = new SubCollection(this);
    ...
  },
  ...
});

Now, rather than just rendering our collection, in our main view we’ll render the subcollection instead:

this.renderCollection(app.me.todos.subset, TodoView, this.queryByHook('todo-container'));

We’ll talk about model structure in just a minute, but for now just know that app.me.todos is our todos collection and app.me.todos.subset is the subcollection we just created above.

The TodoView is the constructor (a.k.a. view class) for the view we want to use to render the items in the collection and this.queryByHook('todo-container') will return the DOM element we want to render these into. If you’re curious about queryByHook, see this explanation of why we use data-hook.

So, now we can just re-configure that subcollection and it will fire add/remove events for changes based on those filters, and our collection renderer will update accordingly.

There are three valid states for the view mode we’re in. It can be "active", "completed", or "all". So now we create a simple helper method on the collection that configures it based on the mode:

setMode: function (mode) {
  if (mode === 'all') {
    this.subset.clearFilters();
  } else {
    this.subset.configure({
      where: {
        completed: mode === 'completed'
      }
    }, true);
  }
}
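
One detail these excerpts don’t show is what calls setMode when the mode changes. As a hypothetical sketch (the deployed app may wire this up differently), it could be done in the me model’s initialize:

// inside the `me` model's initialize: re-configure the
// subcollection whenever the mode property changes
this.on('change:mode', function (me, mode) {
  this.todos.setMode(mode);
}, this);

// apply the initial value too
this.todos.setMode(this.mode);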

So where does that mode come from? Let’s look at our model structure.

Modeling state

In Ampersand a common pattern is to create a me model to represent state for the user of the app. If the user is logged in and has a username or other attributes, we’d store them as props on the me model. In this app there are no persisted me properties, but we still have a user of the app we want to model, and that user has a set of todos that belong to them. So we’ll create that as a collection property on the me object like so:

var State = require('ampersand-state');
var Todos = require('./todos');


module.exports = State.extend({
  ...
  collections: {
    todos: Todos
  },
  ...  
});

Things that otherwise represent “session state” or other cached data related to the user can be attached to the me model as session properties as we described above.

Something like the mode we described above fits into that category.

Ideally, we should be able to simply change the mode on the me model and everything else should just happen.

And, since we’re using ampersand-state, we can change the entire mode of the app with a simple assignment, as follows:

app.me.mode = 'all';

Go ahead and open a console on the app page and try setting it to various things. Note that it will only let you set it to a valid value. If you try app.me.mode = 'garbage' you’ll get a TypeError telling you the value isn’t one of the allowed ones.

This type of defensive programming is hugely helpful for catching errors in other parts of your app.

This works because we’ve defined mode as a session property on our me model like this:

mode: {
  type: 'string',
  values: [
    'all',
    'completed',
    'active'
  ],
  default: 'all'
}

It’s readable and behaves as you’d expect.

Calculating various lengths/totals

The app spec states we must show counts of “items left” and “items completed,” plus we have to be able to know if there aren’t any items at all in the collection so we can hide the header and footer.

This means we need to track 3 different calculated totals at all times.

Ultimately, if this is state we care about, we want it to be easily readable as part of a model definition. Since we have a me model that contains the mode and has a child collection of todos, it makes sense for it to care about and track those totals. So we’ll create session properties for all of those totals too.

In the me model’s initialize we can listen to events in our collection that we know will affect these totals, and then we have a single method handleTodosUpdate that calculates and sets those totals.

The totals are quite easy; we check todos.length for totalCount, loop through once to calculate how many items are completed for completedCount, then use simple arithmetic for activeCount.

Just for clarity, we also set a boolean value for whether all of them are completed or not. This is because the spec states that if you go through and check all the items in the list, the “check all” checkbox at the top should check itself too. Tracking that state as a separate boolean makes it nice and clear.

So, now our me model looks something like this:

...
initialize: function () {
  // Listen to changes to the todos collection that will
  // affect lengths we want to calculate.
  this.listenTo(this.todos, 'change:completed change:title add remove', this.handleTodosUpdate);

  // We also want to calculate these values once on init
  this.handleTodosUpdate();
  ...
},
// Define our session properties
session: {
  activeCount: {
    type: 'number',
    default: 0
  },
  completedCount: {
    type: 'number',
    default: 0
  },
  totalCount: {
    type: 'number',
    default: 0
  },
  allCompleted: {
    type: 'boolean',
    default: false
  },
  mode: {
    type: 'string',
    values: [
      'all',
      'completed',
      'active'
    ],
    default: 'all'
  }
},
// Calculate and set various lengths we're
// tracking. We set them as session properties
// so they're easy to listen to and bind to DOM
// where needed.
handleTodosUpdate: function () {
  var completed = 0;
  var todos = this.todos;
  todos.each(function (todo) {
    if (todo.completed) {
      completed++;
    }
  });
  // Here we set all our session properties
  this.set({
    completedCount: completed,
    activeCount: todos.length - completed,
    totalCount: todos.length,
    allCompleted: todos.length === completed
  });
},
...

At this point we have all the state we want to track for the entire app. None of it is mixed into any of the view logic. We’ve got an entirely de-coupled data layer that tracks all state for the app.

You can see the me model in its entirety as currently deployed on GitHub.

Routing

Once we’ve done all of this state management, the router becomes super simple.

We’ve already created a mode flag on the me model that actually controls everything.

So all we have to do is set the proper mode based on the URL, which we can do like so:


var Router = require('ampersand-router');


module.exports = Router.extend({
  routes: {
    // this matches all urls
    '*filter': 'setFilter'
  },
  setFilter: function (arg) {
    // if we passed one, set it
    // if not set it to "all"
    app.me.mode = arg || 'all';
  }
});

Views

At this point it’s really all a matter of wiring things up to the views. The views contain very little actual logic. They simply declare how things should be rendered, what data should be bound where, and turn user actions into changes in our state layer.

For this app, the index.html file contains the layout HTML already. So the main view is just going to attach itself to the <body> tag as you can see in our app.js file, below. We simply hand it the existing document.body and never call render() because it’s already there.

var MainView = require('./views/main');
var Me = require('./models/me');
var Router = require('./router');


window.app = {
  init: function () {
    // Model representing state for
    // user using the app. Calling it
    // 'me' is a bit of convention but
    // it's basically 'app state'.
    this.me = new Me();

    // Our main view
    this.view = new MainView({
      el: document.body,
      model: this.me
    });

    // Create and fire up the router
    this.router = new Router();
    this.router.history.start();
  }

};

window.app.init();

The views in this particular app handle all bindings declaratively, as described by the bindings property of the views. It might feel a tad verbose, but it’s also very precise. This way you, as the developer, can decide whether you want to just render things into the template on first render, or whether you want to bind things. It’s also useful for publishing re-usable views, because you don’t have to include any templating library as part of them.

Templates and views are easily the most debate-inducing portion of modern JS apps, but the main point is that Ampersand.js gives you an agnostic way of doing data binding that’s there if you want it, but completely gets out of your way if you’d rather use something like Handlebars or React to handle your view layer.

That’s the whole point of the modular architecture of Ampersand.js: optimize for flexibility, install only what you want to use.

For a full reference of all the data binding types you can use, see the reference documentation.

Below are the declarative bindings from the main view with comments describing what each does.

Note that model in this case is the me model. So model.totalCount, for example, is referencing the me.totalCount session property discussed above. If you really prefer tracking state in your view code, it’s easy to do so. Simply add props or session properties to the view, just like you would in a model, and everything still works.

It’s worth noting that, with the way we’ve declared bindings in the app, they still work if you replace this.el, or if this.model changes or doesn’t exist at the time of first render; everything would still be set and updated accordingly.

In real apps these binding declarations are often simpler than this, but on the plus side this example serves as a good demo of the types of bindings that are available. Here’s the data binding section from our js/views/main.js view:


...

bindings: {
  // Toggles visibility of main and footer
  // based on truthiness of totalCount.
  // Since zero is falsy it won't show if
  // total is zero.
  'model.totalCount': {
    // this is the binding type
    type: 'toggle',
    // this is just a CSS selector
    selector: '#main, #footer'
  },
  // This is how you do multiple bindings
  // to a single property. Just pass an 
  // array of bindings.
  'model.completedCount': [
    // Hides the clear-completed span
    // when there are no completed items
    {
      type: 'toggle',
      // "hook" here is shortcut for 
      // selector: '[data-hook=clear-completed]'
      hook: 'clear-completed'
    },
    // Inserts completed count as text
    // into the span
    {
      type: 'text',
      hook: 'completed-count'
    }
  ],
  // This is an HTML string that we made
  // as a derived (a.k.a. computed) property
  // of the `me` model. This was done this way
  // for simplicity because the target HTML
  // looks like this: 
  // "<strong>5</strong> items left"
  // where "items" has to be correctly pluralized
  // since it's not just text, but not really
  // a bunch of nested HTML it was easier to just
  // bind this as `innerHTML`.
  'model.itemsLeftHtml': {
    type: 'innerHTML',
    hook: 'todo-count'
  },
  // This adds the 'selected' class to the right
  // element in the footer
  'model.mode': {
    type: 'switchClass',
    name: 'selected',
    cases: {
      'all': '[data-hook=all-mode]',
      'active': '[data-hook=active-mode]',
      'completed': '[data-hook=completed-mode]'
    }
  },
  // Bind 'checked' state of `mark-all`
  // checkbox at the top
  'model.allCompleted': {
    type: 'booleanAttribute',
    name: 'checked',
    hook: 'mark-all'
  }
},
...
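
For reference, the itemsLeftHtml property mentioned above is a derived property on the me model. Declaring one looks roughly like this (a sketch; the exact markup in the deployed app may differ):

derived: {
  itemsLeftHtml: {
    // recalculated (and change events fired) whenever activeCount changes
    deps: ['activeCount'],
    fn: function () {
      var plural = this.activeCount === 1 ? 'item' : 'items';
      return '<strong>' + this.activeCount + '</strong> ' + plural + ' left';
    }
  }
}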

A few closing thoughts

I’m excited that we were asked to contribute an example to TodoMVC. Big thanks to Luke Karrys, Philip Roberts and Gar for their help/feedback on building the app and to Sindre Sorhus, Addy Osmani, and Pascal Hartig for their hard work on the TodoMVC project, as it’s quite useful for comparing available tools.

If you have any feedback, ping me, @HenrikJoreteg on Twitter, or any of the other core contributors for that matter. You can also jump into the #&yet IRC channel on freenode and tell us what you think. We’re always working to improve.

We think we’ve created something that strikes a good balance between flexibility, expressiveness, readability, and power, and we’re thrilled about the fast adoption and massive community contribution we’ve seen in just a few short months since releasing Ampersand.js.

I’ll be speaking about frameworks in Brighton at FullFrontal in a few weeks, and then about Ampersand.js at BackboneConf in December. Hope to see you then.

If you like the philosophy and approaches described here, you might also enjoy my book, Human JavaScript. If you want Ampersand.js training for your team get in touch with our training coordinator.

See you on the Interwebz! <3

● posted by Peter Saint-Andre

Two of our core values on the &yet team are curiosity and generosity. That’s why you’ll so often find my yeti colleagues at the forefront of various open-source projects, and also sharing their knowledge at technology and design conferences around the world.

An outstanding example is the work that Philip Roberts has done to understand how JavaScript really works within the browser, and to explain what he has discovered in a talk entitled “What the Heck is the Event Loop, Anyway?” (delivered at both ScotlandJS and JSConf EU in recent months).

If you’d like to know more about the inner workings of JavaScript, I highly recommend that you spend 30 minutes watching this video - it is fascinating and educational and entertaining all at the same time. (Because another yeti value is humility, you won’t find Philip boasting about this talk, but I have no such reservations because it is seriously great stuff.)

What the Heck is the Event Loop, Anyway?

● posted by Henrik Joreteg

As a portion of our training events I give a short talk about JS frameworks. I’ve shied away from posting many of my opinions about frameworks online because it tends to stir the pot and hurt people’s feelings, and, unlike talking face to face, there’s no really great bi-directional channel for rebuttals.

But, I’ve been told that it was very useful and helped provide a nice, quick overview of some of the most popular JS tools and frameworks for building single page apps. So, I decided to flesh it out and publish it as A Thing™, but please remember that you’re just reading opinions; I’m not telling you what to do, and you should do what works for you and your team. Feel free to disagree with me on Twitter, or even better, write a post explaining your position.

Angular.js

pros

  1. Super easy to start. You just drop a script tag into your document, add some ng- attributes to your app, and you magically get behavior.

  2. It’s well-supported by a core team, many of whom are full time Google employees.

  3. Big userbase / community.

cons

  1. Picking Angular means you’re learning Angular the framework instead of how to solve problems in JavaScript. If I were to encourage our team to build apps using Angular, what happens when {insert hot new JS framework} comes along? Or we discover that for a certain need Angular can’t quite do the thing we want, and we want to build it with something else? At that point how well will those Angular skills translate to something else? Instead, I’ve got developers whose primary skill is Angular, not necessarily JavaScript.

  2. Violates separation of concerns. Call me old school, but I still believe CSS is for style, HTML is for structure, and JavaScript is for app logic. But in Angular you spend a lot of time describing behavior in HTML instead of JS. For me personally, this is the deal breaker with Angular. I don’t want to describe application logic in HTML; it’s simply not expressive enough, because it’s a markup language for structuring documents, not for describing application logic. To get around this, Angular has had to create what is arguably another language inside HTML, and you still end up writing a bit of JS to describe additional details. Now, rather than learning how to build applications in JavaScript, you’re learning Angular, and things seem to have a tendency to get complex. That’s why my friend Ari’s Angular book is 600 pages!

  3. Too much magic. Magic comes at a cost. When you’re working with something that’s highly abstracted, it becomes a lot more difficult to figure out what’s wrong when something goes awry. And of course, when you veer off the beaten path, you’re on your own. I could be wrong, but I would guess most Angular users lack enough understanding of the framework to feel confident modifying or debugging Angular itself.

  4. Provides very little structure. I’m not sure a canonical way to build a single page app in Angular exists. Don’t get me wrong, I think that’s fine; there’s nothing wrong with non-prescriptive toolkits, but it does mean that it’s harder to jump into someone else’s Angular app, or add someone to yours, because styles are likely very different.

my fallible conclusion

There’s simply too much logic described in a quasi-language in HTML rather than in JS and it all feels too abstract and too magical.

I’d rather our team get good at JS and DOM instead of learning a high-level abstraction.

Ember.js

pros

  1. Heavy emphasis on doing things “The Ember Way” (also note item #1 in the “cons” section). This is a double-edged sword. If you have a huge team and expect lots of churn, having rigid structure can be the difference between having a transferable codebase and every new developer wanting to throw it all away. If they are all Ember devs, they can probably jump in and help on an Ember project.

  2. Outsource many of the hard problems of building single page apps to some incredibly smart people who will make a lot of the hard tradeoff decisions for you. (also note item #2 in the “cons” section.)

  3. Big, helpful community.

  4. Nice docs site.

  5. A good amount of existing solved problems and components to use.

cons

  1. Heavy emphasis on doing things “The Ember Way”. Note this is also in the “pros” section. It’s very prescriptive. While you can veer from the standard path, from the sound of it many do not. For example, you don’t have to use Handlebars with Ember, but I would be surprised if there are many production Ember apps out there that don’t.

  2. Ember codifies a lot of opinions. If you don’t agree with those opinions and decide to replace pieces of functionality with your own, you’re still sending all the unused code to the browser. Byte counting isn’t a core value of mine, but conceptually it’s nicer to be able to only send what you use. In addition, when you’re only sending what you’re using, there’s less code to sift through to locate the bug.

  3. Memory usage can be a bit of an issue, especially when running Ember on mobile.

  4. Ember is intentionally, and structurally inflexible. Don’t believe me? Take Yehuda’s word for it instead (the surrounding conversation is interesting too).

my fallible conclusion

The lack of flexibility and feeling like in order to use Ember you have to go all or nothing is a deal breaker for me.

React

It’s worth noting that it’s not really fair to include React in this list. It’s not a framework, it’s a view layer. But there’s so much discussion on this that I decided to add it here anyway. Arguably, when you mix in Facebook’s flux dispatcher stuff, it’s more of a framework.

pros

  1. You can blindly re-render without worrying about DOM thrashing; React “diffs” the virtual DOM you render against what it knows the real DOM to be, and performs the minimal changes needed to get them in sync.

  2. Their virtual DOM also resolves issues with eventing across browsers by abstracting it to a standards-compliant event-emitting/bubbling model. As a result, you get a consistent event model across any browser.

  3. It’s just a view layer, not a complete framework. This means you can use it with whatever application orchestration you’d like to do. It does seem to pair nicely with Backbone, since Backbone doesn’t give you a view binding solution out of the box and encourages you to simply re-render on model changes, which is exactly what React encourages and deals with.

cons

  1. The template syntax and the way you create DOM (with JSX) is a bit odd for a JS developer, because you put unquoted HTML right into your JavaScript as if it were valid to do so. And yes, JSX is optional, but the alternative, React.DOM.div(null, "Hello ", this.props.name);, isn’t much better, IMO.

  2. If you want really fine-grained and explicit control over how things get applied to the DOM, you don’t really have it anymore. For example, you might want very specific control over how things are bound to style attributes when creating touch-draggable UIs, and you can’t easily time the order in which classes get applied, etc. (Please note this is something I’d assumed would be an issue but have not run into myself; it was confirmed by a dev I was talking to who was struggling with exactly this. Take it with a grain of salt.)

  3. While you can just re-render the entire React view, depending on the complexity of the component, it sure seems like there can be a lot of diffing to do. I’ve heard of React devs choosing to update only the known changed components, which, to me, takes away from the whole idea of not having to care. Again, note that I’m speaking from very limited experience.

my fallible conclusion

I think React is very cool. If I had to build a single page app that supported old browsers I’d look closely at using Backbone + React.

A note on the “FLUX” architecture: To me this is not new information or even a new idea, just a new name. Apparently I’m not alone in that opinion.

The way I understand it, conceptually FLUX is the same as having an intelligently evented model layer in something like Ampersand or Backbone and turning all user actions and server data updates into changes to that state.

By ensuring that the user actions never result in directly manipulating the DOM you end up with the same unidirectional event propagation flow as FLUX + React. We intentionally didn’t include any sort of two-way bindings in Ampersand for that reason. In my opinion two-way bindings are fraught with peril. Having a single layer deal with incoming events, be they from the server or user action is what we’ve been doing for years.

Polymer

This one is a bit strange to me. There’s a standard being developed for defining custom elements (document.registerElement for creating new HTML tags with built-in behavior), doing HTML imports (<link rel='import'> for importing those custom elements into other documents), and shadow DOM (for isolating CSS from the rest of the document).

Those things are great (except HTML imports, IMO).
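
For reference, here’s roughly what the proposed document.registerElement API looks like (a sketch of the v0 proposal, nothing Polymer-specific):

// define a prototype for the new element
var proto = Object.create(HTMLElement.prototype);

// createdCallback runs when an instance of the element is created
proto.createdCallback = function () {
  this.textContent = 'Hello from a custom element';
};

// register the tag; custom element names must contain a hyphen
var MyElement = document.registerElement('my-element', { prototype: proto });

document.body.appendChild(new MyElement());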

But, judging by Polymer’s introduction, it sounds like a panacea for making all web development easy and amazing and that it’s good for everything. Here’s what the opening line says:

Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.

While I think being able to create custom elements and encapsulating style and behavior is fantastic, I’m frustrated with the way it’s being positioned. It sounds like you should use this for everything now.

Here’s the kicker: I don’t know of any significant Google app that uses Polymer for anything.

That’s a red flag for me. Please don’t misunderstand, obviously this is all new stuff and change takes time. My issue is just that the messaging on the site and from the Google engineers working on this doesn’t convey that newness.

In addition, even if you were to create custom elements for all the view code in your single page app, something has to manage the creation/destruction of those elements. You still have to manage state and orchestrate an app, which means your custom elements are really just another way to write the equivalent of a Backbone view. In the single page app world, I don’t see what we would actually gain by switching those things to custom elements.

pros

  1. Being able to create things like custom form inputs without them being baked into the browser is awesome.

  2. Polymer polyfills enough so you can start using and experimenting with this functionality now.

  3. Proper isolation of styles when building widgets has been a problem on the web for years. The new standards solve that problem at the browser level, which is awesome.

cons

  1. I personally feel like one of Google’s main motivations for doing this is to make it dead simple to drop Google services that include behavior, style, and functionality into a web page without having to know any JS. I could be completely off base here, but I can’t help but feel like the marketing push is largely hype to help push the standards through.

  2. HTML Imports seem like a bad idea to me. It feels like the CSS @import problem all over again. If you import a thing, you have to wait to get it back before the browser notices that it imports another thing, and so on. So if you actually take the fully componentized approach to building a page that is being promoted, you’ll end up with a ton of back and forth network requests. They do have a tool called the “vulcanizer” for flattening these things out, however. But inlining doesn’t seem to be an option. There was a whole post written yesterday about the problems with HTML imports that discusses this and other issues.

  3. I simply don’t understand why Google is pushing this stuff so hard as if it’s some kind of panacea when the only example I can find of Google using it themselves is the Polymer site itself. The site claims “Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.” In my experimentation, that simply wasn’t the case. I smell hype.

my fallible conclusion

Google doesn’t seem to be eating their own dog food here. The document.registerElement spec is exciting; beyond polyfilling that, I see no use for Polymer, sorry.

Backbone

There is no more broadly production deployed single page app framework than Backbone that I’m aware of. The examples section of the backbone docs lists a lot of big names and that list is far from exhaustive.

pros

  1. It’s a small and flexible set of well-tested building blocks.

    1. Models
    2. Collections
    3. Views
    4. Router
  2. It solves a lot of the basic problems.

  3. Its limited scope makes it easy to understand. As a result I always make new front end developers read the Backbone.js documentation as a first task when they join &yet.

cons

  1. It doesn’t provide solutions for all the problems you’ll encounter. This is why every major user of backbone that I’m aware of has built their own “framework” on top of Backbone’s base.

  2. The things you most notably find yourself missing when using plain Backbone are:

    1. A way to create derived properties on models.
    2. A way to bind properties and derived properties to views.
    3. A way to render a collection of views within an element.
    4. A way to cleanly handle “subviews” and nested layouts, etc.
  3. As much as Backbone is minimalistic, its pieces are also arguably too coupled to each other. For example, until my merged pull request is released you couldn’t use any other type of Model within a Backbone Collection without monkey patching internal methods. This may not matter for some apps, but it does matter if I want to, for example, use a model to store some observable data in a library intended for use by other code that may or may not be a Backbone app. The only way to use Backbone Models is to include all of Backbone, which feels odd and inefficient to me.

my fallible conclusion

Backbone pioneered a lot of amazing things. I’ve been using it since 0.3 and I strongly agree with its minimalistic philosophy.

It’s helped spawn a new generation of applications that treat the browser as a runtime, not just a document rendering engine. But its narrow scope left people to invent solutions on top of Backbone. While this isn’t a bad thing, per se, it just brings to light that there are more problems to be solved.

Not using a framework

There’s a subset of developers who think you shouldn’t use frameworks for anything, ever. While I appreciate the sentiment and find myself largely in line with many of them, to me it’s simply not pragmatic, especially in a team scenario.

I tend to agree with Ryan Florence’s post on this topic, which is best summed up by this one quote:

When you decide to not pick a public framework, you will end up with a framework anyway: your own.

He goes on to say that doing this is not inherently bad, just that you should be serious about it and maintain it. I highly recommend the post; it’s excellent.

pros

  • Ultimate flexibility

  • You’ll tend to include only the exact code that you need in your app.

cons

  • Massive cost in re-inventing things.

  • Finding and choosing the right modules is hard

  • No clear documentation or conventions for new developers

  • Really hard to transfer and re-use code for your next project

  • You’ll generally end up having to learn from your own mistakes instead of benefiting from others’ code.

The GIANT gap

In doing our trainings, in writing my book Human JavaScript, and within our team itself, we’ve come to realize there is a huge gap between picking a tool, framework, or library and actually building a complete application.

Not to mention, there are huge problems surrounding how to actually build an app as a team without stomping on each other.

There are sooooo many options and patterns on how to structure, build, and deploy applications beyond just picking a framework.

Few people seem to be talking about how to do all of that, which is just as big of a rabbit hole as picking a framework!

What we actually want

  • Clear starting point

  • A clear, but not enforced, standard way to do things

  • Explicitly clear separation of concerns, so we can mix and match and replace as needed

  • Easy dependency management

  • A way to use existing solutions so we don’t have to re-invent everything

  • A development workflow where we can switch from development mode to production with a simple boolean in a config.

How we’ve addressed all of these things

So, in case you hadn’t already heard, we did the unspeakable thing in JavaScript. We made a “new” framework: Ampersand.js. It’s a bit like a redux or derivation of Backbone.

The response so far has been overwhelmingly positive; we only announced it about a month ago and all these folks have jumped in to contribute. People have been giving talks about it at meetups, and Jeremy Ashkenas, the creator of Backbone.js, Underscore.js, and CoffeeScript, invited me to give a keynote at BackboneConf 2014 about Ampersand.js.

So how did we address all my critiques about the other tools?

  1. Flexible but cohesive

    • It comes with a set of “core” modules (documented here) that roughly line up with the components in Backbone. But they are all installed and used individually. No assumptions are made that you’re using a RESTful or even Ajax powered API. If you don’t want that stuff, you just use Ampersand-State instead of the decorated version of State we call Ampersand-Model that adds the restful methods.

    • It doesn’t come with a templating language. Templates can be as simple as a string of HTML, a function that returns a string of HTML, or a function that returns DOM. The sample app includes some more advanced templating with templatizer, but it truly could be anything. One awesome approach for doing handlebars/htmlbars + Ember style in-template binding declarations is domthing by Philip Roberts. There are also people using React with Ampersand views.

    • Views have a way to declare bindings separate from the template engine. So if you want, you can use HTML strings for templates and still get full control of bindings. The nice thing about not bundling a templating engine means that you can write componentized/reusable views without needing to also include a templating system.

  2. There has to be a clear starting point and some idiomatic way to structure the app as a whole that can be used as a reference, but those standard approaches should not be enforced. We did this by building a CLI that can spin up a new app following all these conventions, which can serve either as a starting point or simply as a reference. See the quick start guide for more.

  3. We wanted to build on something proven, not just start something new for the sake of it. This is why we built on Backbone as a base instead of starting from scratch entirely.

  4. We wanted a more complete reference guide to fill that gap I mentioned that explains all the surrounding ideas, tools, and philosophies. We did this by writing a book on the topic: Human JavaScript. It’s free to read online in its entirety and available as an ebook.

  5. We wanted to make it easy to use “solved problems” so we don’t have to re-invent the wheel all the time. We did this by using npm for all package management, and by creating a quick-searchable directory of our favorite clientside modules.

  6. We wanted a painless development-to-production workflow. We did this with a tool called moonboots that adds some dev and deployment workflow functionality to browserify. Moonboots has plugins for hapi.js and express.js, where the only thing you have to do to switch between production mode (minified, cached, uniquely named static assets) and dev mode (re-built on each request, not minified, not cached) is toggle a single boolean.

  7. We didn’t just want this to be an &yet project; it has to be bigger than that. We’ve already had over 40 contributors in the short time Ampersand.js has been public, and we just added the first of hopefully many non-&yet contributors to core. Everything uses the very permissive MIT license, and its modular, loosely coupled structure lends itself quite well to extending or replacing any piece of it to fit your needs. For clarity we’ve also set it up as its own organization on GitHub.

  8. We wanted additional training and support to be available if needed. For this we’ve made the #&yet IRC channel on freenode open to questions and support. In addition, there are people and companies who need paid training opportunities to be available in order to feel comfortable adopting a technology. They want to know that more information and help is available, so in addition to the free resources, we’ve also put together a Human JavaScript code-along online training and offer in-person training events to provide hands-on training and support.

So are you saying Ampersand is the best choice for everyone?

Nope. Not at all. It certainly has its own set of tradeoffs. Here are some I’m aware of, there are probably others:

  • Unsurprisingly, it is still a somewhat immature codebase compared to some of these other tools. Having said that, however, we use it for all our single page app projects at &yet, and the core modules all have thorough test suites. It’s also worth noting that if you do run into a problem, odds are it won’t be as debilitating. Its open, hackable, pluggable nature makes it different from many frameworks in that you don’t have to jump through a bunch of hoops to fix or overwrite something in your app. The small modules typically make it easier to isolate, patch, and quickly publish bugfixes. In fact, we often publish a patched version to npm as soon as a pull request is merged. Our strict adherence to semver makes it possible to do that while mitigating the odds of breaking any existing code. I think that’s part of the reason it has gotten as many pull requests as it has already. Even still, if you have a different idea of how something should work, it’s easy to use your own module instead. We’re also trying to increase the number of core committers to make sure patches are getting in even if other core devs are busy.

  • It doesn’t have the rich tooling and giant communities built up around it yet. That stuff takes time, but as I said, we’re encouraged by the level of participation we’ve had thus far. Please file bugs and help create the things you wish existed.

  • Old browser support is a rough spot. We intentionally drew a line saying we won’t support IE8. We’re not alone there: jQuery 2.0 doesn’t either, Google has said they’ll only support the latest two versions of IE for Apps and recently dropped IE9 too, and Microsoft themselves just announced their plan to phase out support for all older browsers. Why did we do this? It’s because we’re using getters and setters for the state management stuff. It was a hard decision but felt like enough of a win to make it worth it. Unfortunately, since that is a language-level feature, it’s not easily shimmable (at least not that I’m aware of). Sadly, for some companies not supporting IE8 is a dealbreaker. Perhaps someone has already written a transpiler in a browserify transform that can solve this problem, but I’m not aware of one. If you are, please let me know. I would love it if Ampersand-State could support IE 7 and 8.

Final thoughts

Hopefully this explanation was useful. If you have any feedback, thoughts or if there’s something I missed or got wrong I’m @HenrikJoreteg on twitter, please let me know.

Also please help us make these tools better. We love getting more people involved in the project. File bugs or grab one of the open issues and help us patch ‘em.

Want to start using Ampersand?

Check the learning guides, API reference, or read Human JavaScript online for free.

For hands-on learning jump into the Human JavaScript code-along online training, or for the ultimate kickstart come hang out in person at our training events where you’ll build an app from scratch together with us.

See you on the Interwebz <3

● posted by Henrik Joreteg

It used to all make sense.

The web was once nothing but documents.

Just like you’d want some type of file browser UI to dig through files on your operating system, obviously, you need some type of document browser to view all these web-addressable “documents”.

But over time, those “documents” have become a lot more. A. lot. more.

Yet I can now use one of these “documents” to have a 4-person video/audio conference on Talky with people anywhere in the world, play incredible full-screen first-person shooters at 60fps, write code in a full-fledged editor, or {{ the reader may insert any number of amazing web apps here }} using nothing but this “document viewer”.

Does calling them “documents” seem ridiculous to anyone else? Of course it does. Calling them “sites” is pretty silly too, actually, because a “site” implies a document with links and a URL.

I know the “app” vs. “site” debate is tired and worn.

Save for public, content-heavy sites, all of the apps that I’m asked to write by clients these days at &yet are fully client-side rendered.

The browser is not an HTML renderer for me; it’s the world’s most ubiquitous yet capable runtime. With the amazing capabilities of the modern web platform, it’s to the point where referring to a browser as a document viewer is an insult to the engineers who built it.

There is a fundamental difference when you treat the browser as a runtime instead of a document renderer.

I typically send it nothing but a doctype, a script tag, and a stylesheet with permanent cache headers. HTML just happens to be the way I tell the browser to download my app. I deal with the initial latency issues by all-but-ensuring visitors hit the app with a primed cache. This is pretty easy for apps that are opened frequently or are behind a static login page in which you prefetch the app resources. With proper cache headers the browser won’t even do the 304 not-modified dance. It will simply start executing code.
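
As a sketch of that setup, assuming a Node server using Express and a public/ directory holding the built assets (all names here are illustrative):

var express = require('express');
var app = express();

// serve the doctype/script/stylesheet payload with far-future
// cache headers so returning visitors skip even the 304 dance
app.use(express.static(__dirname + '/public', { maxAge: '365d' }));

app.listen(3000);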

This makes some people cringe, and many web purists (luddites?! #burn) would argue that everything should gracefully degrade and that there isn’t, or at least there shouldn’t be, any distinction between a JavaScript app and a site. When I went to EdgeConf in NYC, the “progressive enhancement” panel said a lot of things like “your app should still be usable without JS enabled”. Often, “JavaScript is disabled” really means the browser is still downloading your JavaScript. To this I say:

WELL, THEN SHOW ME A TALKY.IO CLONE THAT GRACEFULLY DEGRADES!

It simply cannot be done. Like it or not, the web has moved on from that myopic view of it. The blanket graceful degradation view of the web no longer makes sense when you can now build apps whose core use case is fully dependent on a robust JavaScript runtime.

I had a great time at Chrome Dev Summit, but again, the core message of the “Instant Mobile Apps” talk was: “render your html on the server to avoid having your render blocking code require downloading your JS before it can start executing.”

For simple content-driven sites, I agree. Completely. The demo in that particular talk was the Chrome developer documentation. But it’s a ridiculously easy choice to render documentation server side. (In fact the notion that there was ever a client-side rendered version to begin with was surprising to me.)

If your view of the web lacks a distinction between clientside apps and sites/documents, I’d go as far as to say that you’re now part of the problem.

Why?

Because that view enables corporate IT departments to argue for running old browsers without getting laughed out of the building.

Because that view keeps some decision makers from adopting 100% JavaScript apps and instead spending money on native apps with web connectivity.

Because that view wastes precious developer time inventing and promoting hacks and workarounds for shitty browsers when they could be building next-generation apps.

Because that view enables you to argue that your proficiency with browser CSS hacks for IE7 is still relevant.

Because that view will always keep the web locked into the browser.

What about offline?

I’m writing this on a plane without wifi and of course, using a native app to do so. There are two primary reasons for this:

  1. The offline web is still crap. See offlinefirst.org and this hood.ie post for more.
  2. All my favorite web-based tools are still stuck in the browser.

The majority of users will never ever open a browser without an Internet connection, type in a URL and expect ANYTHING to happen.

Don’t get me wrong, I’m very supportive of the offline first efforts; they are crucial for changing this.

We have a very different view of apps that exist outside of the browser. In fact, the expectation is often reversed: “Oh right, I do need a connection for this to work”.

Chrome OS is one approach, but I think its 100% cloud-based approach is more hardcore than the world is ready to adopt and certainly is never going to fly with the indie data crowd or the otherwise Google-averse.

So, have I ranted enough yet?

According to Jake Archibald from Google, ServiceWorkers will land in Canary sometime early 2014. This work is going to fundamentally change what the web can do.

If you’re unfamiliar with ServiceWorkers (previously called Navigation Controllers), they let you write your own cache control layer in JavaScript for your web application. ServiceWorkers promise to serve the purpose that appcache was intended for: truly offline web apps.

At a high level, they let JavaScript developers building clientside apps treat the existence of a network connection as an enhancement rather than an expectation.

You may think, “Oh, well, the reason we use the web is because access to the network provides our core value as an app.”

While I’d tend to agree that most apps fundamentally require data from the Internet to be truly useful, that’s missing the point.

Even if the value of your app depends entirely on a network connection, you can now intercept requests and choose to answer them from caches that you control, while in parallel attempting to fetch newer versions of those resources from the network.
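
For a taste of what that could look like, here’s a rough sketch of a cache-first, update-in-the-background fetch handler (the ServiceWorker API is still in flux as of this writing, so treat this as illustrative rather than definitive):

// sw.js
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      // always try the network in parallel to refresh the cache
      var fresh = fetch(event.request).then(function (response) {
        var copy = response.clone();
        caches.open('app-v1').then(function (cache) {
          cache.put(event.request, copy);
        });
        return response;
      });
      // answer from cache immediately when possible
      return cached || fresh;
    })
  );
});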

If you think about it, that capability is no different than something like Facebook for iOS or Android.

That Facebook app’s core value is unquestionably derived from seeing your friends’ latest updates and photos, which you’re obviously not going to get without a connection. But the fundamental difference is this: the native app will still open and show you all the cached content it has. As a result (and for other reasons) the OS has given those types of apps a privileged status.

With the full programmatic cache control for the web that ServiceWorkers will offer, you’ll be able to load your app, and whatever content it last downloaded, from cache first, while optionally trying to fetch newer things from the network. The addition of a controllable cache layer in web apps means that an app like Facebook really has no compelling reason to be a native app. I mean, really. If you break it down, that app is mostly a friend timeline browser, right? (The key word there being browser.)

BUT, even with the addition of ServiceWorkers, there’s another extremely important difference: user perception.

We’ve spent years teaching users that things they use in their web browser simply do not work offline. Users understand (at least on some unconscious level) that the browser is the native app that gets sites/documents from the Internet. From a user experience standpoint, trying to teach the average user anything different is like attempting to roll a quarry full of rocks up a hill.

This is where it becomes apparent that failing to draw a distinction between fully clientside apps and websites does a disservice to all these new capabilities of the web platform. It doesn’t matter how good the web stack becomes; it will never compete with native apps in the “native” space while it stays stuck in the browser.

The addition of “packaged” chrome apps is an admirable but, in my opinion, still inadequate attempt at addressing this issue.

At the point where a user on a mobile device opts to “add to home screen,” the intent is more than just a damn bookmark. They’re saying: “I want access to this on the same level as my native apps.” It’s a user’s request for an installation of that app, but in reality it’s treated as a shitty, half-assed install that’s really just a bookmark. The intent, though, is clear: “I want a special level of quick and easy access to this specific app.”

So why not just embrace that what they’re actually trying to do is “install” that web application into their operating system?

Apple sort of does this for Mac apps. After you “sideload” (a.k.a. download from the web and try to run) a native Mac desktop app, they treat it a bit like an awkward stepchild the first time you open it, warning you: hey, this app was downloaded from the Internet, are you sure you want to let this thing run?

While I’m not a fan of the language or the FUD involved with that, the timing makes perfect sense to me. At the point I’ve opted to “install” something to my homescreen on my mobile device (or the equivalent to that for desktop), that seems like the proper inflection point to verify with the user that they do, in fact, want to let this app have access to specific “privileged” OS APIs.

Without a simple way to install and authorize a clientside web app, these kinds of apps will always get stuck in the uncanny valley of half-assed, semi-installed apps.

So why bother in the first place? Why not just do native whenever you want to build an “app”? Beyond providing a way to build for multiple platforms, there’s one more thing the web has that native apps don’t have: a URL.

The UNIFORM RESOURCE LOCATOR concept is easy to take for granted, but it’s extremely useful to be able to reference things like links to emails inside gmail, or a tweet, or a very specific portion of documentation. Being able to naturally link between apps on the web is what gives the web its power. It’s unfortunate that many developers, when they first start building single page applications, don’t update URLs as they go and fail to respect the “back” button, thus breaking the web.

But when done properly, blending the rich interactivity of native apps with the addressability and ubiquity of the web is a thing of beauty.

I cannot overstate how excited I am about ServiceWorkers. Because finally, we’ll have the ability to build web applications that treat network resources the same way that good native applications do: as an enhancement.

Of course, the big IF is whether platforms play along and actually treat these types of apps as first class citizens.

Call me an optimist, but I think the capabilities that ServiceWorkers promise us will shine a light on the bizarre awkwardness of the concept of opening a browser to access offline apps.

The web platform’s capabilities have outgrown the browser.

Let’s help the web to make its next big push.

I’m @HenrikJoreteg on twitter. I’d love to hear your thoughts on this.

For further reading on ServiceWorkers, here is a great explainer doc.

Also, check out my book on building sanely structured single page applications.

● posted by Henrik Joreteg

I had the privilege to attend EdgeConf 2013 as a panelist and opening speaker for the Realtime Data discussion.

It was an incredible, deeply technical conference with an interesting discussion/debate format.

Here’s the video from the panel:

The slides from my talk can be found on speakerdeck.

It was a privilege to attend — I’m very grateful to Andrew Betts and FT Labs for the opportunity to be there.

● posted by Melanie Brown

We asked Portlandians about realtime technologies—and, um, they answered!

DISCLAIMER: No hipsters’ feelings were harmed in the making of this video.

Film by Miss Melanie Brown
Music by YACHT

If you enjoyed this, be sure to check out last year’s video, too. :)


● posted by Henrik Joreteg

These days, more and more HTML is rendered on the client instead of sent pre-rendered by the server. So if you’re building a web app that uses a lot of clientside javascript, you’ll doubtless want to create some HTML in the browser.

How we used to do it

First a bit of history. When I first wrote ICanHaz.js I was just trying to ease a pain point I was having: generating a bunch of HTML in a browser is a pain.

Why is it a pain? Primarily because JS doesn’t cleanly support multi-line strings, but also because there isn’t an awesome string interpolation system built into JS.

To work around that, ICanHaz.js, like lots of other clientside template systems, uses a hack to make it easier to send arbitrary strings to the browser. As it turns out, browsers ignore content in <script> tags if you give them a type attribute that isn’t text/javascript. So, ICanHaz reads the content of tags on the page that say <script type="text/html">, which can contain templates or any other multi-line strings for that matter. It then turns each of them into a function that you can call to render that string with your data mixed into it. For example:

This html:

<script id="user" type="text/html">
  <li>
    <p class="name">Hello I'm {{ name }}</p>
    <p><a href="http://twitter.com/{{ twitter }}">@{{ twitter }}</a></p>
  </li>
</script>

Is read by ICanHaz and turned into a function you call with your own data, like this:

// your data
var data = {
  name: "Henrik",
  twitter: "HenrikJoreteg"
}

// I can has user??
html = ich.user(data)

This works, and lots of people clearly thought the same, as it’s been quite a popular library.

Why that’s less-than-ideal

It totally works, but if you think about it, it’s a bit silly. It’s not super fast, and you’re making the client do a bunch of extra parsing just to turn text into a function. You also have to send the entire template engine to the browser, which is a bunch of wasted bandwidth.

How we’re doing it now

What I finally realized is that all you actually want when doing templating on the client is the end result that ICanHaz gives you: a function that you call with your data that returns your HTML.

Typically, smart template engines, like the newer versions of Mustache.js, do this for you. Once the template has been read, it gets compiled into a function that is cached and used for subsequent renderings of that same template.

Thinking about this leaves me asking: why don’t we just send the javascript template function to the client instead of doing all the template parsing/compiling on the client?

Well, frankly, because I didn’t really know of a great way to do it.

I started looking around and realized that Jade (which we already use quite a bit at &yet) has support for compiling as a separate process and, in combination with a small runtime snippet, this lets you create JS functions that don’t need the whole template engine to render. Which is totally awesome!

So, to make it easier to work with, I wrote a little tool: templatizer. You run it on the server-side (using node.js) to take a folder full of jade templates and turn them into a javascript file you can include in your app that contains just the template rendering functions.
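The workflow could look roughly like this (a sketch; the folder names are made up, and the output file is just a regular module you can bundle and require):

// build step, run with node.js
var templatizer = require('templatizer');

// compile every jade file in ./templates into plain functions,
// written out as a single javascript file
templatizer(__dirname + '/templates', __dirname + '/clientapp/templates.js');

// later, in the browser, no template engine required:
var templates = require('templates');
var html = templates.user({ name: 'Henrik' });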

The end result

From my tests, the actual rendering of templates is 6 to 10 times faster. In addition, you’re sending way less code to the browser (because you’re not sending a whole templating engine) and you’re not making the browser do a bunch of work you could have already done ahead of time.

I still need to write more docs and use it for a few more projects before we have supreme confidence in it, but I’ve been quite happy with the results so far and wanted to share it.

I’d love to hear your thoughts. I’m @HenrikJoreteg on twitter and you should follow @andyet as well and check out our awesome team same-pagification tool And Bang.

See you on the Internet. Go build awesome stuff!


● posted by Henrik Joreteg

The single biggest challenge you’ll have when building complex clientside applications is keeping your code base from becoming a garbled pile of mess.

If it’s a longer running project that you plan on maintaining and changing over time, it’s even harder. Features come and go. You’ll experiment with something only to find it’s not the right call.

I write lots of single page apps and I absolutely despise messy code. Here are a few techniques, crutches, coping mechanisms, and semi-pro tips for staying sane.

Separating views and state

This is the biggest lesson I’ve learned building lots of single page apps. Your view (the DOM) should just be a blind slave to the model state of your application. For this you could use any number of tools and frameworks. I’d recommend starting with Backbone.js (by the awesome Mr. @jashkenas) as it’s the easiest to understand, IMO.

Essentially, you’ll build up a set of models and collections in memory in the browser. These models should be completely oblivious to how they’re used. Then you have views that listen for changes in the models and update the DOM. This could be a whole giant blog post in and of itself, but this core principle of separating your views and your application state is vital when building large apps.
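As a tiny sketch of what that looks like with Backbone (the model and view here are made-up examples):

// the model knows nothing about how it gets rendered
var user = new Backbone.Model({ name: 'Henrik' });

var UserView = Backbone.View.extend({
    initialize: function () {
        // the view just listens for state changes and reacts
        this.model.on('change:name', this.render, this);
    },
    render: function () {
        this.$el.text(this.model.get('name'));
        return this;
    }
});

var view = new UserView({ model: user }).render();

// changing model state updates the DOM; we never poke the view directly
user.set('name', 'Bob');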

Common JS Modules

I’m not going to get into a debate about module styles and script loaders. But I can tell you this: I haven’t seen any cleaner, simpler mechanism for splitting your code into nice isolated chunks than Common JS modules.

It’s the same style/concept that is used in node.js. By following this style I get the additional benefit of being able to re-use modules written for the client on the server and vice versa.

If you’re unfamiliar with the Common JS modules style, your files end up looking something like this:

// you import things by using the special `require` function and you can
// assign the result to a variable

var StrictModel = require('strictModel'),
    _ = require('underscore');

// you expose functionality to other modules by declaring your main export
// like this.
module.exports = StrictModel.extend({
    type: 'navItem',
    props: {
        active: ['boolean', true, false],
        url: ['string', true, ''],
        position: ['number', true, 200]
    },
    init: function () {
        // do some setup or something
    }
});

Of course, browsers don’t have support for these kinds of modules out of the box (there is no window.require). But, luckily that can be fixed. I use a clever little tool called stitch written by Sam Stephenson of 37signals. There’s also another one by @substack called browserify that lets you use a lot of the node.js utils on the client as well.

What they do is create a require function and bundle up a folder of modules into an app package.

Stitch is written for node.js, but you could just as easily use another server-side language and use node only to build your client package. Ultimately it’s just creating a single JS file, and of course at that point you can serve it like any other static file.

You set up Stitch in a simple express server like this:

// require express and stitch
var express = require('express'),
    stitch = require('stitch');

// define our stitch package
var appPackage = stitch.createPackage({
    // you add the folders whose contents you want to be “require-able”
    paths: [
        __dirname + '/clientmodules',  // this is where i put my standalone modules
        __dirname + '/clientapp' // this is where i put my modules that compose the app
    ],
    // you can also include normal dependencies that are not written in the 
    // commonJS style
    dependencies: [
        somepath + '/jquery.js',
        somepath + '/bootstrap.js'
    ]
});

// init express
var app = express.createServer();

// define a path where you want your JS package to be served
app.get('/myAwesomeApp.js', appPackage.createServer());

// start listening for requests
app.listen(3000);

At this point you can just go to http://localhost:3000/myAwesomeApp.js in a browser and you should see your whole JS package.

This is handy while developing because you don’t have to re-start or recompile anything when you make changes to the files in your package.

Once you’re ready to go to production, you can use the package and UglifyJS to write a minified file to disk to be served statically:

var uglifyjs = require('uglify-js'),
    fs = require('fs');

function uglify(code) {
    var ast = uglifyjs.parser.parse(code);
    ast = uglifyjs.uglify.ast_mangle(ast);
    ast = uglifyjs.uglify.ast_squeeze(ast);
    return uglifyjs.uglify.gen_code(ast);
}

// assuming `appPackage` is in scope of course, this is just a demo
appPackage.compile(function (err, source) {
    fs.writeFileSync('build/myAwesomeApp.js', uglify(source));
});

Objection! It’s a huge single file, that’s going to load slow!

Two things. First, don’t write a huge app with loads and loads of giant dependencies. Second, cache it! If you do your job right, your users will only download that file once, and you can probably do it while they’re not even paying attention. If you’re clever you can even prime their cache by lazy-loading the app on the login screen, or some other such cleverness.

Not to mention, for single page apps, speed once your app has loaded is much more important than the time it takes to do the initial load.

Code Linting

If you’re building large JS apps and not doing some form of static analysis on your code, you’re asking for trouble. It helps catch silly errors and forces code style consistency. Ideally, no one should be able to tell who wrote what part of your app. If you’re on a team, it should all be uniform within a project. How do you do that? We use a slick tool written by Nathan LaFreniere on our team called, simply, precommit-hook. So all we have to do is:

npm install precommit-hook

What that will do is create a git pre-commit hook that uses JSHint to check your project for code style consistency before each commit. Once upon a time there was a tool called JSLint written by Mr. Crockford. Nowadays (love that silly word) there’s a less strict, more configurable version of the same project called JSHint.

The neat thing about the npm version of JSHint is that if you run it from the command line it will look for a configuration file (.jshintrc) and an ignore file (.jshintignore), both of which the precommit hook will create for you if they don’t exist. You can use these files to configure JSHint to follow the code style rules that you’ve defined for the project. This means that you can now run jshint . at the root of your project and lint the entire thing to make sure it follows the code styles you’ve defined in the .jshintrc file. Awesome, right!?!

Our .jshintrc files usually look something like this:

{
    "asi": false,
    "expr": true,
    "loopfunc": true,
    "curly": false,
    "evil": true,
    "white": true,
    "undef": true,
    "predef": [
        "app",
        "$",
        "require",
        "__dirname",
        "process",
        "exports",
        "module"
    ]
}

The awesome thing about this approach is that you can enforce consistency, and the rules for the project are contained and actually checked into the project repo itself. So if you decide to have a different set of rules for the next project, fine. It’s not a global setting; it’s defined and set by whoever runs the project.

Creating an “app” global

So what makes a module? Ideally, I’d suggest each module be in its own file and export only one piece of functionality. Only having a single export helps you keep clear what purpose the module has and keeps it focused on just that task. The goal is having lots of modules that do one thing really well, so that your app just combines modules into a coherent story.

When I’m building an app, I intentionally have one main controller object of sorts. It’s attached to the window as “app” just for my own convenience. For modules that I’ve written specifically for this app (stuff that’s in the clientapp folder) I allow myself the use of that global to perform app-level actions like navigating, etc.

Using events: Modules talking to modules

How do you keep your modules cleanly separated? Sometimes modules are dependent on other modules, so how do you keep them loosely coupled? One good technique is triggering lots of events that can be used as hooks by other code. Many of the core components in node.js are extensions of EventEmitter; the reason is that you can register handlers for stuff that happens to those items, just like you can register a handler for someone clicking a link in the browser. This pattern is really useful when building re-usable components yourself. Exporting things that inherit from event emitters means the code using your module can specify what it cares about, rather than the module having to know. For example, see the super simplified version of the And Bang js library below.

There are lots of implementations of event emitters. We use a modified version of one from the LearnBoost guys: @tjholowaychuk, @rauchg and company. It’s wildemitter on my github if you’re curious. But the same concept works for any of the available emitters. See below:

// require our emitter
var Emitter = require('wildemitter');

// Our main constructor function
var AndBang = function (config) {
    // extend with emitter
    Emitter.call(this);
};

// inherit from emitter
AndBang.prototype = new Emitter();

// Other methods
AndBang.prototype.setName = function (newName) {
    this.name = newName;
    // we can trigger arbitrary events
    // these are just hooks that other
    // code could choose to listen to.
    this.emit('nameChanged', newName);
};

// export it to the world
module.exports = AndBang;

Then, other code that wants to use this module can listen for events like so:

var AndBang = require('andbang'),
    api = new AndBang();

// now this handler will get called any time the event gets triggered
api.on('nameChanged',  function (newName) { /* do something cool */ });

This pattern makes it easy to expose functionality without having to know anything about the consuming code.

More?

I’m tired of typing so that’s all for now. :)

But I just thought I’d share some of the tools, techniques, and knowledge we’ve acquired through blood, sweat, and mistakes. If you found it helpful or useful, or if you want to yell at me, you can follow me on twitter: @HenrikJoreteg.

See ya on the interwebs! Build cool stuff!


● posted by Henrik Joreteg

The other day, DHH[1] tweeted this:

Forcing your web ui to be “just another client” of your API violates the first rule of distributed systems: Don’t write distributed systems.

— DHH (@dhh) June 12, 2012

In building the new Basecamp, 37signals chose to do much of the rendering on the server-side and have been rather vocal about that, bucking the recent trend to build richly interactive, client-heavy apps. They cite speed, simplicity and cleanliness. I quote DHH, again:

It’s a perversion to think that responding to Ajax with HTML fragments instead of JSON is somehow dirty. It’s simple, clean, and fast.

— DHH (@dhh) June 12, 2012

Personally, I think this generalization is a bit short-sighted.

The “rule” cited in the first tweet about distributed systems is from Martin Fowler, who says:

First Law of Distributed Object Design: Don’t distribute your objects!

So, yes, duplicating state into the client is essentially just that: you’re distributing your objects. I’m not saying I’m wiser than Mr. Fowler, but I do know that keeping client state can make an app much more useful and friendly.

Take Path for iPhone. It caches state, so if you’re offline you can read posts from your friends. You can also post new updates while offline that seamlessly get posted when you’re back on a network. That kind of use case is simply impossible unless you’re willing to duplicate state to the client.

As application developers we’re not trying to dogmatically enforce “best practices” of computer science just for the sake of dogma, we’re trying to build great experiences. So as soon as we want to support that type of use case, we have to agree that it’s OK to do it in some circumstances.

As some have pointed out and DHH acknowledged later, even Basecamp goes against his point with the calendar. In order to add the type of user experience they want, they do clientside MVC. They store some state in the client and do some client-side rendering. So, what’s the difference in that case?

I’m not saying all server side rendering is bad. I’m just saying, why not pick one or the other? It seems to me (and I actually speak from experience here) that things get really messy once you start mixing presentation and domain logic.

As it turns out, Martin Fowler actually wrote A WHOLE PAPER about separating presentation from domain logic.

The other point I’d like to make is this: What successful, interesting web application do you know/use/love that doesn’t have multiple clients?

As soon as you have any non-web client, such as an iPhone app, or a dashboard widget or a CLI or some other webapp that another developer built, you now need a separate data API anyway.

Obviously, 37signals has an API. But, gauging by the docs, there are pieces of the API that are incomplete. Another benefit of dog-fooding your own API is that you can’t ship with an incomplete API if you built your whole app on it.

We’re heads-down on the next version of And Bang which is built entirely on what will be our public API. This re-engineering has been no small undertaking, but we feel it will be well worth the effort.

The most interesting apps we use are not merely experienced through a browser anymore. APIs are the one true cross-platform future you can safely bank on.

I’m all ears if you have a differing opinion. Hit me up on twitter @HenrikJoreteg and follow @andyet and @andbang if you’re curious about what else we’re up to.


[1] DHH (David Heinemeier Hansson) of Ruby on Rails and 37signals fame is not scared to state his opinions. I think everyone would agree that his accomplishments give him the right to do so. To be clear, I have nothing but respect for 37signals. Frankly, their example is a huge source of inspiration for bootstrapped companies like ours at &yet.

● posted by Nathan Fritz

Now you’re thinking with feeds!

When I look at a single-page webapp, all I see are feeds; I don’t even see the UI anymore. I just see lists of items that I care about. Some of which only I have access to and some of which other groups have access to. I can change, delete, re-position, and add to the items on these feeds and they’ll propagate to the people and entities that have access to them (even if it is just me on another device or at a later date).

I’ve seen it this way for years, but I haven’t grokked it enough to articulate what I was seeing until now.

What Thoonk Is

Thoonk is a series of higher-level objects built on Redis that send publish, edit, delete, and position events when their data changes. These objects are feeds for making real-time applications and feed services.

What is a Thoonk feed?

A Thoonk feed is a list of indexed data objects that are limited by topic and by what a single entity might subscribe to. An RSS/ATOM feed qualifies. What makes a Thoonk feed different from a table? A table is limited to a topic, but lacks single entity interest limitations. A Thoonk feed isn’t just a message broker, it’s a database-store that sends out events when the data changes.

Let’s use &bang as an example. Each team-member has a list of tasks. In a relational database we might have a table that looks like this:

team_member_tasks

id | team_id | member_id | description | complete bool | etc.

Whenever a user renders their list, I would query that list, limiting by a specific user and a specific team.

If we converted this table, without changing it, into a Thoonk feed, then we would only be able to subscribe to ALL tasks and not just the tasks of a particular team or member. So, instead, a Thoonk feed might look like:

team:<team_id>:member:<member_id>:tasks

{description: "", completed: false, etc, etc}

Now when the user wants a rendered list of tasks, I can do one index look-up rather than three, and I am able to subscribe to changes on the specific team member’s tasks, or even to team:353:member:*:tasks to subscribe to all of that team’s tasks.

[Note: I suppose you could arrange a relational database this way, but it wouldn’t really be able to take advantage of SQL, nor could you subscribe to the table to get changes.]
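To give a feel for the subscription side, here’s a rough sketch using plain Redis pub/sub (which is what Thoonk builds on); the channel pattern mirrors the example above, and the payload shape is illustrative:

// node.js, using the plain redis client
var redis = require('redis');
var sub = redis.createClient();

// subscribe to task changes for every member of team 353
sub.psubscribe('team:353:member:*:tasks');

sub.on('pmessage', function (pattern, channel, message) {
  // message carries the publish/edit/delete/position event
  console.log('change on', channel, message);
});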

It’s Feeds All the Way Up

If I use Thoonk subscribe-able feeds as my data-storage engine, life gets so much easier. When a user logs in, I can subscribe contextualized callbacks, just for them, to the feeds they have access to read. This way, if their data changes for any reason, by any process, on any server, it can bubble all the way up to the user without having to run any queries. I can also subscribe separate processes that automatically scrub, pre-index, cull, or perform any number of tasks on any Thoonk feed a particular process cares about. I can use processes in mixed languages to provide monitoring and additional APIs to the feeds.

But What About Writes?

Let’s not think in terms of writes. Writes are just changes to feed items (publishing, editing, deleting, repositioning) that write the data to RAM/disk and inform any subscribers of the change. Let’s instead think in terms of user-actions. A user-action (such as delegating a task to another user in &bang) needs ACL and may affect multiple feeds in a single call. If we defer user-actions to jobs (a special kind of Thoonk feed), we can easily isolate, scale, share, and distribute the business-logic involved in dealing with a user-action.

What Are Thoonk Jobs?

Thoonk Jobs are items that represent business-logic needing to be done reliably, a single time, by any available worker. Jobs are consumed as fast as a worker-pool can consume them. A job feed is a list of job items, each of which exists in one of three states: available, in-flight, or stalled. Available jobs are taken and placed in an in-flight set while they are being processed. When the job is done, it is removed from the in-flight set and its item is deleted. If the worker fails to complete the job (because of an error, distaste, or a monitoring process deciding that the job has timed out), the job may be placed back on the available list or into the stalled set.
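That available/in-flight mechanic is easy to picture with the plain redis client. To be clear, this is not the Thoonk API, just a sketch of the pattern it implements; the key names and handle() are made up:

var redis = require('redis');
var client = redis.createClient();

function handle(job) {
  // validate the payload, check ACL, update feeds...
}

function work() {
  // atomically move one job from "available" to "in-flight",
  // blocking until a job shows up
  client.brpoplpush('jobs:available', 'jobs:inflight', 0, function (err, raw) {
    if (err) throw err;
    try {
      handle(JSON.parse(raw));
      // success: the job is done, drop it from the in-flight list
      client.lrem('jobs:inflight', 1, raw);
    } catch (e) {
      // failure: put it back so it can be retried (or stalled)
      client.lrem('jobs:inflight', 1, raw);
      client.lpush('jobs:available', raw);
    }
    work(); // take the next job
  });
}

work();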

Why use Thoonk Jobs for User-Actions?

  • User-actions that fail for some reason can be retried (you can also limit the # of retries).
  • The work can be distributed across processes and servers.
  • User-actions can burst much faster than the workers can handle them.
  • A user-action that ultimately fails can be stalled, where an admin is informed to investigate and potentially edit and/or retry when the issue that caused it has been resolved or to test said resolution.
  • Any process in any language can contribute jobs (and get results from them) without having to re-implement the business logic or ACL.

The Last One is a Doozy

Scaling, reliability, monitoring and all of that is nice, but being able to build your application out rather than up is, I believe, the greatest reason for this approach. &bang is written in node.js, but if I have a favorite library for implementing a REST interface or an XMPP interface written in Python or Ruby (or any other language), I can quickly put that together and add it as a process. In fact, I can pretty much add any piece of functionality as a process without having to reload the rest of the application server, and really isolate a feature as its own process. User-actions from this process can be published to Thoonk Job feeds without having to worry about request validation or ACL since that is handled by the worker itself.

Rather than having a very large, complex application, I can have a series of very small processes that automatically cluster and are informed of changes in areas of their specific concerns.

Scaling Beyond Redis

Our testing indicates that Redis will not be a choke point until we have nearly 100,000 active users. The plan to scale beyond that is to shard &bang by teams. A quick look-up will tell us which server a team resides on, and users and processes can subscribe callbacks to connections on those servers. In that way, we can run many Redis servers and theoretically scale horizontally. High-availability is handled by a slave for each shard and a gossip protocol for promoting slaves.
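The look-up could be as simple as a hash that maps a team to its shard; the hosts and key names here are hypothetical:

var redis = require('redis');

// connections to each shard (made-up hosts)
var shards = {
  shard1: redis.createClient(6379, 'redis1.internal'),
  shard2: redis.createClient(6379, 'redis2.internal')
};

// a small central client that only stores the team -> shard directory
var directory = redis.createClient(6379, 'directory.internal');

function clientForTeam(teamId, cb) {
  directory.hget('team:shards', teamId, function (err, name) {
    cb(err, shards[name]);
  });
}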

Conflict Resolution and Missed Updates

Henrik’s recent post spawned a couple of questions about conflict resolution. First I’ll give a deflection, and then I’ll give a real answer.

&bang doesn’t yet need conflict resolution. None of the writes are actually done on the client, as they are all RPC calls which go into a job queue. Then the workers validate the payload, check the ACL, and update some feeds, at which point the data bubbles back up to the client. The feed updates are atomic and happen quite quickly. Also, two users being able to edit the same item only comes up with delegated tasks, in which case the most recent edit wins.

Ok, now the real answer. Thoonk is going to have revision history and incrementing revision numbers for 1.0. Each historical item is the same as the publish/edit/delete/reposition updates that are sent via pubsub. When a user-change job is done, the client can send its current revision numbers for the feeds involved, and thus conflicts on an edit can be detected. The historical data should be enough to facilitate some form of conflict resolution (determined by the application implementer). The revision numbers can also bubble up to the client, so the client can detect missing updates and ask for a replay from a given revision number.

Currently we’re punting on missed items. Anytime the &bang user is disconnected, the app is disabled and refreshed when it is able to reconnect. A more elaborate solution using the new Thoonk features I just listed is probably coming, and perhaps some real offline-mode support with local “dirty” changes that get resolved when you come back online.

All Combined

Using Thoonk, we were able to make &bang scale to 10s of thousands of active users on a single server, burst user-activity beyond our choke-points, isolate user-action business-logic and ACL, automatically cluster to more servers and processes, choose any Redis client library supported language for individual features and interfaces, bubble data changes all the way up to the user regardless of the source of change, provide an easy way of iterating, and generally create a kick-ass, realtime, single-page webapp.

Can I Use Thoonk Now?

Thoonk.js and Thoonk.py are MIT licensed and free to use. While we are using Thoonk.js in production and it is stable there, the API is not final. Currently I’m moving the feed logic to Redis Lua scripts, which will be officially supported in Redis 2.6, with an RC1 promised for this December. I plan to be ready for that. The Lua scripting will give us performance gains and remove unnecessary extra logic to keep publish/edit/delete/reposition commands atomic, but most importantly it will allow us to share the core code with all implementations of Thoonk, allowing us to easily add and support more languages. As mentioned previously, as I do the Redis Lua scripting, I’ll be adding revision history and revision numbers to feeds, which will facilitate conflict detection and replay of missed events.

That said, feel free to comment, contribute, steal, or abuse the project in the meantime. A 1.0 release will indicate API stability, and I will encourage its use in production at that point. I will soon be breaking out the Lua scripts to their own git repo for easy implementation.

If you want to keep an eye on what we’re doing, follow me @fritzy and @andyet on twitter. Also be sure to check out &bang for getting stuff done with your team.


If you’re building a single page app, keep in mind that &yet offers consulting, training and development services. Shoot Henrik an email (henrik@andyet.net) and tell us what we can do to help.