Blog

● posted by Marcus Stong

On the &yet Ops Team, we use Docker for various purposes on some of the servers we run. We also make extensive use of iptables so that we have consistent firewall rules to protect those servers against attacks.

Unfortunately, we recently ran into an issue that prevented us from building Dockerfiles from behind an iptables firewall.

Here’s a bit more information about the problem, and how we solved it.

The Problem

When trying to run docker build on a host that uses our default DROP-policy iptables rules, apt-get was unable to resolve the package repository hosts in Dockerfiles built FROM ubuntu or debian images.

Any apt-get command would result in something like this:

Step 1 : RUN apt-get update
 ---> Running in 64a37c06d1f4
Err http://http.debian.net wheezy Release.gpg
  Could not resolve 'http.debian.net'
Err http://http.debian.net wheezy-updates Release.gpg
  Could not resolve 'http.debian.net'
Err http://security.debian.org wheezy/updates Release.gpg
  Could not resolve 'security.debian.org'

To figure out what was going wrong, we logged all dropped packets in iptables to syslog like this:

# Log dropped packets on the INPUT, OUTPUT, and FORWARD chains
iptables -N LOGGING
iptables -A OUTPUT -j LOGGING
iptables -A INPUT -j LOGGING
iptables -A FORWARD -j LOGGING
iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
iptables -A LOGGING -j DROP

The logs quickly showed that the docker0 interface was trying to FORWARD port 53 to the eth0 interface. In our case, the default FORWARD policy is DROP, so essentially iptables was dropping Docker’s requests to forward the DNS port to the public interface and Internet at large.

Since the Docker build containers couldn’t resolve the domain names of the package repositories, they couldn’t retrieve the data they needed.

A Solution

Hmm, so we needed to allow forwarding between docker0 and eth0, eh? That’s easy! We just added the following rules to our iptables set:

# Forward chain between docker0 and eth0
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -j ACCEPT

# IPv6 chain if needed
ip6tables -A FORWARD -i docker0 -o eth0 -j ACCEPT
ip6tables -A FORWARD -i eth0 -o docker0 -j ACCEPT

Add or alter these rules as needed, and you too will be able to build Dockerfiles properly behind an iptables firewall.
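If you’d rather not open the forward path unconditionally in both directions, a slightly tighter variant (just a sketch, using the same interface names as above) only lets reply traffic back in to the containers:

# Allow new outbound connections from containers, but only replies inbound
iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o docker0 -m state --state ESTABLISHED,RELATED -j ACCEPT

Either way, the essential fix is the same: the FORWARD chain has to permit traffic between docker0 and the public interface.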

● posted by Bear

One of the best tools to use every day for locking down your servers is iptables. (You do lock down your servers, right? ;-)

Not using iptables is akin to having fancy locks on a plywood door - the locks may be secure, but you can’t be confident that someone won’t simply break through the door.

To this end I use a small set of bash scripts that ensure I always have a baseline iptables configuration, and that let items be added or removed quickly.

Let me outline what they are before we get to the fiddly bits…

  • checkiptables.sh — A script to compare your saved iptables config in /etc/iptables.rules to what is currently being used. Very handy to see if you have any local changes before modifying the global config.
  • iptables-pre-up — A Debian/Ubuntu centric script that runs when your network interface comes online to ensure that your rules are active on restart. RedHat/CentOS folks don’t need this.
  • iptables.sh — The master script that sets certain defaults and then loads any inbound/outbound scripts.
  • iptables_*.sh — Bash scripts that are very easy to generate using templates, one for each rule needed to allow inbound/outbound traffic. I use a naming pattern to make them unique within the directory.

These scripts should be placed into your favourite local binary directory, for example

/opt/sbin
  /checkiptables.sh
  /iptables.sh
  /iptables_conf.d/
    iptables_*.sh

checkiptables.sh

#!/bin/bash
# generate a list of active rules and remove all the cruft
iptables-save | sed -e '/^[#:]/d' > /tmp/iptables.check
if [ -e /etc/iptables.rules ]; then
  sed -e '/^[#:]/d' /etc/iptables.rules > /tmp/iptables.rules
  diff -q /tmp/iptables.rules /tmp/iptables.check
else
  echo "unable to check, /etc/iptables.rules does not exist"
fi

That is really it - the magic is in the sed portion, which removes all of the stuff that iptables-save outputs that isn’t related to rules and that often changes between runs. The rest of the script diffs the saved state against the current state. If the current state has been modified, you will see this output:

Files /tmp/iptables.rules and /tmp/iptables.check differ

iptables.sh

#!/bin/bash
PUBLICNET=eth2

iptables -F
ip6tables -F
ip6tables -X
ip6tables -t mangle -F
ip6tables -t mangle -X

# Default policy is drop
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP

ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP

# send SSH through fail2ban's chain (create it here if fail2ban hasn't yet)
iptables -N fail2ban-ssh 2> /dev/null
iptables -A INPUT -p tcp -m multiport --dports 22 -j fail2ban-ssh
iptables -A fail2ban-ssh -j RETURN
# Allow localhost
iptables -A INPUT  -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
ip6tables -A INPUT  -i lo -j ACCEPT
ip6tables -A OUTPUT -o lo -j ACCEPT
# Allow inbound ipv6 ICMP so we can be seen by neighbors
ip6tables -A INPUT  -i ${PUBLICNET} -p ipv6-icmp -j ACCEPT
ip6tables -A OUTPUT -o ${PUBLICNET} -p ipv6-icmp -j ACCEPT
# Allow incoming SSH
iptables -A INPUT  -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow outbound DNS
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT  -p udp --sport 53 -j ACCEPT
# Only allow NTP if it’s our request
iptables -A INPUT -s 0/0 -d 0/0 -p udp --source-port 123:123 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -s 0/0 -d 0/0 -p udp --destination-port 123:123 -m state --state NEW,ESTABLISHED -j ACCEPT

for s in /opt/sbin/iptables_conf.d/iptables_*.sh ; do
  if [ -e "${s}" ]; then
    source "${s}"
  fi
done

There is a lot going on here - flushing all current rules, setting the default policy to DROP so nothing gets through until you explicitly allow it, and then allowing all localhost traffic.

After the boilerplate code, the remainder is setting up rules for SSH, DNS and other ports that are common to all server deploys. It’s the last five lines where the fun is - they loop through the files found in the iptables_conf.d directory and load any iptables_*.sh script they find. Here’s an example rule that would be in iptables_conf.d/ - this one allows outbound Etcd:

# Allow outgoing etcd
iptables -A OUTPUT -o eth2 -p tcp --dport 4001 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT  -i eth2 -p tcp --sport 4001 -m state --state ESTABLISHED -j ACCEPT

Having each rule defined by a script allows you to create the scripts using templates from your configuration system.
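For example, a hypothetical generator (the service name, port, and interface below are purely illustrative, not part of these scripts) could stamp out one rule script per service:

#!/bin/bash
# hypothetical generator: writes an allow rule script for one service
SERVICE=etcd
PORT=4001
IFACE=eth2

cat > /opt/sbin/iptables_conf.d/iptables_${SERVICE}.sh <<EOF
# Allow outgoing ${SERVICE}
iptables -A OUTPUT -o ${IFACE} -p tcp --dport ${PORT} -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT  -i ${IFACE} -p tcp --sport ${PORT} -m state --state ESTABLISHED -j ACCEPT
EOF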

With the above you now have a very flexible way to manage iptables that is also self-documenting - how cool is that!

● posted by Peter Saint-Andre

Building collaboration apps has always been too hard, especially when audio and video are involved. The promise of WebRTC - building audio and video support directly into the browser - has been that it would make collaboration as easy and open as the web itself. Unfortunately, that hasn’t quite happened yet.

A big part of the problem is that we lack the kind of open platforms and frameworks that have made the web such a success - things like, say, express and hapi in the Node.js community. Instead of open standards and open source, web developers coming to the collaboration space find themselves faced with a plethora of proprietary platforms tied to specific vendors.

Don’t get us wrong: some of those platforms are excellent. However, if you’re dependent on a particular service provider for the platform that powers your entire real-time collaboration app then you’ve taken on some significant risks. Vendor lock-in and high switching costs are two that come quickly to mind. Not to mention the possibility that the startup service you depend on will go out of business, get bought by one of your competitors, or be taken off the market by an acquiring company.

That’s not a recipe for happy application developers.

We’re working to change that by creating a completely open set of front-end libraries and back-end services we’re calling Otalk. The goal is to create a “soup to nuts” technology base that we can run for ourselves (since it’s the basis for our Talky videochat service), that we can offer to organizations building their own realtime applications, and that other organizations can take in-house at any time (since it’s completely open-source).

Why would we do this? Why not build yet another silo and reap the benefits?

First, we care deeply about the open web and we have a long-time commitment to open-source technologies because we believe they’re more sustainable and more secure. Openness is at the very heart of our values as a team.

Second, we’re a fully bootstrapped company and we don’t have any investors to please, so locking in developers isn’t something we need to do for the next funding round or to sell the company.

Third, we tend to make money through consulting, training, and custom app development. The more that folks use the Otalk platform, the bigger the potential market for our core services.

Fourth, we’re running Otalk anyway as the foundation for collaboration apps like Talky. Why not offer it to the world so that web developers everywhere can more easily build their own applications?

Right now we’re using a particular technology stack:

  • XMPP over WebSockets for signaling, using Prosody on the back end
  • JS-friendly libraries like stanza.io for the web, plus libraries we’re working on with Steamclock for mobile
  • the Jitsi Videobridge for media bridging and improved scalability
  • restund for STUN and TURN NAT traversal

(More about those pieces and how they fit together in future blog posts.)

Much as we like this approach, we recognize that not everyone is sold on XMPP or the particular implementations we use. Thus we’re also actively investigating ways to make Otalk more modular, so that you could use, say, SIP for signaling, your preferred STUN/TURN server, or a third-party media bridging service. Just want to use our developer-friendly front-end libraries like SimpleWebRTC with your existing platform? Let’s make it happen! Oh, and we’re working to standardize a few XMPP extensions for multi-party signaling and such, so that the entire XMPP community can play in the realtime collaboration space.

As you can see, we’re already working with multiple teams to make Otalk a reality. But we need your help to make it a real success.

Of course, developers are welcome to contribute via the Otalk org at GitHub. Dive right in!

Perhaps even more important are organizations that require realtime technologies they can build upon in a sustainable way over the long term. Together, we’re stronger than we could ever be apart. If you would like to partner with us or if you have any questions about the emerging Otalk coalition, please drop us a line and we’ll get right back to you.

<3

● posted by Peter Saint-Andre

About a year ago, our friends at TokBox published a blog post entitled WebRTC and Signaling: What Two Years Has Taught Us. It’s a great overview of their experience with technologies like SIP and XMPP for stitching together WebRTC endpoints. And we couldn’t agree more with their observation that trying to settle on one signaling method for WebRTC would have been even more contentious than the endless video codec debates. However, our experience with XMPP differs enough from theirs that we thought it would be good to set our thoughts down in writing.

First, we’re always a bit suspicious when someone says they abandoned XMPP because it didn’t scale for them. As a general rule, protocols don’t scale - implementations do. We know of several XMPP server implementations that can route, distribute, and deliver tens of thousands of messages per second without breaking a sweat. After all, that’s what a messaging server is designed to do. Sure, some server implementations don’t scale up that high; the answer is to use a better server. We’re partial to Prosody, but Tigase and a few others are well worth researching, too.

Second, pure messaging is the easy part. The hard part is defining the payloads for those messages to build out a wide variety of features and functionality. What we like best about the XMPP community is that over the last 15 years they have defined just about everything you need for a modern collaboration system: presence, contact lists, 1:1 messaging, group messaging, pubsub notifications, service discovery, device capabilities, strong authentication and encryption, audio/video session negotiation, file transfer, you name it. Why put a lot of work into recreating those wheels when they’re ready-made for you?

An added benefit of having so many building blocks is that it’s straightforward to put them all together in productive ways. For example, XMPP includes extensions for both multi-user chat (“MUC”) and multimedia session management (Jingle). If we need multiparty signaling for video conferencing, we can easily combine the two to create what we need. Plus we get a number of advanced solutions for free this way, since MUC includes in-room presence along with a helpful authorization model and Jingle supports helpful features like renegotiation and file transfer. Not to mention that the ability to communicate device capabilities in XMPP enables us to avoid monstrosities like SDP bundling.

Yes, we know: angle brackets. Web developers hate ‘em. That’s why we’ve put so much work into making XMPP love the web, with open-source code like stanza.io and jingle.js that communicates over the WebSocket binding for XMPP (co-authored by yeti Lance Stout). This gives us a completely web-developer-friendly technology for everything we need to build performant, feature-rich, and beautiful collaboration apps for the web.
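To give a flavor of how web-friendly that ends up being, here’s a minimal stanza.io client sketched from its documented usage (the JID, password, and URLs are placeholders):

var XMPP = require('stanza.io');

// connect over the XMPP WebSocket binding
var client = XMPP.createClient({
  jid: 'demo@example.com',
  password: 'hunter2',
  wsURL: 'wss://example.com:5281/xmpp-websocket'
});

client.on('session:started', function () {
  client.sendMessage({to: 'friend@example.com', body: 'Hello from the web!'});
});

client.connect();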

A perfect example close to our hearts is Talky, our video chat service. The next generation of Talky will be based on XMPP, enabling us to easily build a wide range of similar applications by mixing and matching various XMPP features and extensions. But more about that in our next post…

● posted by Henrik Joreteg

As part of our training events I give a short talk about JS frameworks. I’ve shied away from posting many of my opinions about frameworks online because it tends to stir the pot, hurt people’s feelings, and, unlike talking face to face, there’s no really great, bi-directional channel for rebuttals.

But, I’ve been told that it was very useful and helped provide a nice, quick overview of some of the most popular JS tools and frameworks for building single page apps. So, I decided to flesh it out and publish it as A Thing™, but please remember that you’re just reading opinions. I’m not telling you what to do, and you should do what works for you and your team. Feel free to disagree with me on twitter or, even better, write a post explaining your position.

Angular.js

pros

  1. Super easy to start. You just drop a script tag into your document, add some ng- attributes to your app, and you magically get behavior.

  2. It’s well-supported by a core team, many of whom are full time Google employees.

  3. Big userbase / community.

cons

  1. Picking Angular means you’re learning Angular the framework instead of how to solve problems in JavaScript. If I were to encourage our team to build apps using Angular, what happens when {insert hot new JS framework} comes along? Or we discover that for a certain need, Angular can’t quite do the thing we want it to and we want to build it with something else? At that point how well will those Angular skills translate to something else? Instead, I’ve got developers whose primary skill is Angular, not necessarily JavaScript.

  2. Violates separation of concerns. Call me old school, but I still believe CSS is for style, HTML is for structure, and JavaScript is for app logic. But, in Angular you spend a lot of time describing behavior in HTML instead of JS. For me personally, this is the deal breaker with Angular. I don’t want to describe application logic in HTML; it’s simply not expressive enough, because it’s a markup language for structuring documents, not describing application logic. To get around this, Angular has had to create what is arguably another language inside HTML, while still requiring you to write a bit of JS to describe additional details. Now, rather than learning how to build applications in JavaScript, you’re learning Angular, and things seem to have a tendency to get complex. That’s why my friend Ari’s Angular book is 600 pages!

  3. Too much magic. Magic comes at a cost. When you’re working with something that’s highly abstracted, it becomes a lot more difficult to figure out what’s wrong when something goes awry. And of course, when you veer off the beaten path, you’re on your own. I could be wrong, but I would guess most Angular users lack enough understanding of the framework itself to really feel confident modifying or debugging Angular itself.

  4. Provides very little structure. I’m not sure a canonical way to build a single page app in Angular exists. Don’t get me wrong, I think that’s fine; there’s nothing wrong with non-prescriptive toolkits, but it does mean that it’s harder to jump into someone else’s Angular app, or add someone to yours, because styles are likely very different.

my fallible conclusion

There’s simply too much logic described in a quasi-language in HTML rather than in JS and it all feels too abstract and too magical.

I’d rather our team get good at JS and DOM instead of learning a high-level abstraction.

Ember.js

pros

  1. Heavy emphasis on doing things “The Ember Way” (also note item #1 in the “cons” section). This is a double edged sword. If you have a huge team and expect lots of churn, having rigid structure can be the difference between having a transferable codebase and every new developer wanting to throw it all away. If they are all Ember devs, they can probably jump in and help on an Ember project.

  2. Outsource many of the hard problems of building single page apps to some incredibly smart people who will make a lot of the hard tradeoff decisions for you. (also note item #2 in the “cons” section.)

  3. Big, helpful community.

  4. Nice docs site.

  5. A good amount of existing solved problems and components to use.

cons

  1. Heavy emphasis on doing things “The Ember Way”. Note this is also in the “pros” section. It’s very prescriptive. While you can veer from the standard path, from the sound of it many do not. For example, you don’t have to use handlebars with Ember, but I would be surprised if there are many production Ember apps out there that don’t.

  2. Ember codifies a lot of opinions. If you don’t agree with those opinions and decide to replace pieces of functionality with your own, you’re still sending all the unused code to the browser. Byte counting isn’t a core value of mine, but conceptually it’s nicer to be able to only send what you use. In addition, when you’re only sending what you’re using, there’s less code to sift through to locate the bug.

  3. Memory usage can be a bit of an issue, especially when running Ember on mobile.

  4. Ember is intentionally and structurally inflexible. Don’t believe me? Take Yehuda’s word for it instead (the surrounding conversation is interesting too).

my fallible conclusion

The lack of flexibility and feeling like in order to use Ember you have to go all or nothing is a deal breaker for me.

React

It’s worth noting that it’s not really fair to include React in this list. It’s not a framework, it’s a view layer. But there’s so much discussion on this that I decided to add it here anyway. Arguably, when you mix in Facebook’s flux dispatcher stuff, it’s more of a framework.

pros

  1. You can blindly re-render without worrying about DOM thrashing: React will “diff” the virtual DOM that you render against what it knows the DOM to be, and will perform minimal changes to get them in sync.

  2. Their virtual DOM also resolves issues with eventing across browsers by abstracting it to a standards-compliant event-emitting/bubbling model. As a result, you get a consistent event model across any browser.

  3. It’s just a view layer, not a complete framework. This means you can use it with whatever application orchestration you’d like to do. It does seem to pair nicely with Backbone, since Backbone doesn’t give you a view binding solution out of the box and encourages you to simply re-render on model changes, which is exactly what React encourages and deals with.

cons

  1. The template syntax and the way you create DOM (with JSX) is a bit odd for a JS developer because you put unquoted HTML right into your JavaScript as if it were valid to do so. And yes, JSX is optional, but the alternative: React.DOM.div(null, "Hello ", this.props.name); isn’t much better, IMO.

  2. If you want really fine-grained and explicit control over how things get applied to the DOM, you don’t really have it anymore. For example, you may want very specific control over how things are bound to style attributes for creating touch-draggable UIs, or over the order in which classes get applied, and you can’t easily time that. (Please note this is something I’ve assumed would be an issue but have not run into myself; it was, however, confirmed by a dev I was talking to who was struggling with exactly this. But take it with a grain of salt.)

  3. While you can just re-render the entire React view, depending on the complexity of the component, it sure seems like there can be a lot of diffing to do. I’ve heard of React devs choosing to update only the known changed components, which, to me, takes away from the whole idea of not having to care. Again, note that I’m speaking from very limited experience.

my fallible conclusion

I think React is very cool. If I had to build a single page app that supported old browsers I’d look closely at using Backbone + React.

A note on the “FLUX” architecture: To me this is not new information or even a new idea, just a new name. Apparently I’m not alone in that opinion.

The way I understand it, conceptually FLUX is the same as having an intelligently evented model layer in something like Ampersand or Backbone and turning all user actions and server data updates into changes to that state.

By ensuring that user actions never result in directly manipulating the DOM, you end up with the same unidirectional event propagation flow as FLUX + React. We intentionally didn’t include any sort of two-way bindings in Ampersand for that reason. In my opinion two-way bindings are fraught with peril. Having a single layer deal with incoming events, be they from the server or user actions, is what we’ve been doing for years.
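Here’s a minimal sketch of that flow with an evented model (AppState, button, and counterEl are hypothetical stand-ins):

var model = new AppState(); // an ampersand-state or Backbone-style model

// user actions never touch the DOM directly; they only update model state
button.addEventListener('click', function () {
  model.set('count', model.get('count') + 1);
});

// the DOM is only updated in response to model change events
model.on('change:count', function () {
  counterEl.textContent = model.get('count');
});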

Polymer

This one is a bit strange to me. There’s a standard being developed for being able to define custom elements (document.registerElement for creating new HTML tags with built in behavior), doing HTML imports (<link rel='import'> for being able to import those custom elements into other documents), and shadow DOM (for isolating CSS from the rest of the document).

Those things are great (except HTML imports, IMO).
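For instance, registering a custom element with the (then-current) document.registerElement API looks roughly like this:

// prototype with lifecycle callbacks for the new element
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {
  this.textContent = 'Hello from a custom element!';
};

var MyElement = document.registerElement('my-element', {prototype: proto});
document.body.appendChild(new MyElement());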

But, judging by Polymer’s introduction, it sounds like a panacea for making all web development easy and amazing and that it’s good for everything. Here’s what the opening line says:

Web Components usher in a new era of web development based on encapsulated and interoperable custom elements that extend HTML itself. Built atop these new standards, Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.

While I think being able to create custom elements and encapsulating style and behavior is fantastic, I’m frustrated with the way it’s being positioned. It sounds like you should use this for everything now.

Here’s the kicker: I don’t know of any significant Google app that uses Polymer for anything.

That’s a red flag for me. Please don’t misunderstand, obviously this is all new stuff and change takes time. My issue is just that the messaging on the site and from the Google engineers working on this doesn’t convey that newness.

In addition, even if you were to create custom elements for all the view code in your single page app, something has to manage the creation/destruction of those elements. You still have to manage state and orchestrate an app, which means your custom elements are really just another way to write the equivalent of a Backbone view. In the single page app world, I don’t see what we would actually gain by switching those things to custom elements.

pros

  1. Being able to create things like custom form inputs without them being baked into the browser is awesome.

  2. Polymer polyfills enough so you can start using and experimenting with this functionality now.

  3. Proper isolation of styles when building widgets has been a problem on the web for years. The new standards solve that problem at the browser level, which is awesome.

cons

  1. I personally feel like one of Google’s main motivations for doing this is to make it dead simple to drop Google services that include behavior, style, and functionality into a web page without having to know any JS. I could be completely off base here, but I can’t help but feel like the marketing is largely a big hype push to help get the standards through.

  2. HTML Imports seem like a bad idea to me. It feels like the CSS @import problem all over again. If you import a thing, you have to wait to get it back before the browser notices that it imports another thing, etc. So if you actually take the fully componentized approach to building a page that is being promoted, then you’ll end up with a ton of back and forth network requests. They do have a tool called the “vulcanizer” for flattening these things out; inlining, however, doesn’t seem to be an option. There was a whole post written yesterday about the problems with HTML imports that discusses this and other issues.

  3. I simply don’t understand why Google is pushing this stuff so hard as if it’s some kind of panacea when the only example I can find of Google using it themselves is on the Polymer site itself. The site claims “Polymer makes it easier and faster to create anything from a button to a complete application across desktop, mobile, and beyond.” In my experimentation, that simply wasn’t the case. I smell hype.

my fallible conclusion

Google doesn’t seem to be eating their own dog food here. The document.registerElement spec is exciting, but beyond polyfilling that, I see no use for Polymer, sorry.

Backbone

There is no single page app framework more broadly deployed in production than Backbone, as far as I’m aware. The examples section of the Backbone docs lists a lot of big names, and that list is far from exhaustive.

pros

  1. It’s a small and flexible set of well-tested building blocks.

    1. Models
    2. Collections
    3. Views
    4. Router
  2. It solves a lot of the basic problems.

  3. Its limited scope makes it easy to understand. As a result I always make new front end developers read the Backbone.js documentation as a first task when they join &yet.

cons

  1. It doesn’t provide solutions for all the problems you’ll encounter. This is why every major user of backbone that I’m aware of has built their own “framework” on top of Backbone’s base.

  2. The things you most notably find yourself missing when using plain Backbone are:

    1. A way to create derived properties on models.
    2. A way to bind properties and derived properties to views.
    3. A way to render a collection of views within an element.
    4. A way to cleanly handle “subviews” and nested layouts, etc.
  3. As much as Backbone is minimalistic, its pieces are also arguably too coupled to each other. For example, until my merged pull request is released, you couldn’t use any other type of Model within a Backbone Collection without monkey patching internal methods. This may not matter for some apps, but it does matter if I want to, for example, use a model to store some observable data in a library intended for use by other code that may or may not be a Backbone app. The only way to use Backbone Models is to include all of Backbone, which feels odd and inefficient to me.

my fallible conclusion

Backbone pioneered a lot of amazing things. I’ve been using it since 0.3 and I strongly agree with its minimalistic philosophy.

It’s helped spawn a new generation of applications that treat the browser as a runtime, not just a document rendering engine. But its narrow scope left people to invent solutions on top of Backbone. While this isn’t a bad thing per se, it just brings to light that there are more problems to be solved.

Not using a framework

There’s a subset of developers who think you shouldn’t use frameworks, for anything ever. While I appreciate the sentiment and find myself very in line with many of them generally, to me it’s simply not pragmatic, especially in a team scenario.

I tend to agree with Ryan Florence’s post on this topic, which is best summed up by this one quote:

When you decide to not pick a public framework, you will end up with a framework anyway: your own.

He goes on to say that doing this is not inherently bad, just that you should be serious about it and maintain it, etc. I highly recommend the post; it’s excellent.

pros

  • Ultimate flexibility

  • You’ll tend to include only the exact code that you need in your app.

cons

  • Massive re-inventing of things, at a cost.

  • Knowing what modules to use and finding the right modules is hard

  • No clear documentation or conventions for new developers

  • Really hard to transfer and re-use code for your next project

  • You’ll generally end up having to learn from your own mistakes instead of benefiting from others’ code.

The GIANT gap

In doing our trainings, in writing my book Human JavaScript, and within our team itself, we’ve come to realize there is a huge gap between picking a tool, framework, or library and actually building a complete application.

Not to mention, there are huge problems surrounding how to actually build an app as a team without stomping on each other.

There are sooooo many options and patterns on how to structure, build, and deploy applications beyond just picking a framework.

Few people seem to be talking about how to do all of that, which is just as big of a rabbit hole as picking a framework!

What we actually want

  • Clear starting point

  • A clear, but not enforced, standard way to do things

  • Explicitly clear separation of concerns, so we can mix and match and replace as needed

  • Easy dependency management

  • A way to use existing solutions so we don’t have to re-invent everything

  • A development workflow where we can switch from development mode to production with a simple boolean in a config.

How we’ve addressed all of these things

So, in case you hadn’t already heard, we did the unspeakable thing in JavaScript. We made a “new” framework: Ampersand.js. It’s a bit of a redux or derivation of Backbone.

The response so far has been overwhelmingly positive. We only announced it about a month ago, and all these folks have already jumped in to contribute. People have been giving talks about it at meetups, and Jeremy Ashkenas, the creator of Backbone.js, Underscore.js, and CoffeeScript, invited me to give a keynote at BackboneConf 2014 about Ampersand.js.

So how did we address all my critiques about the other tools?

  1. Flexible but cohesive

    • It comes with a set of “core” modules (documented here) that roughly line up with the components in Backbone. But they are all installed and used individually. No assumptions are made that you’re using a RESTful or even Ajax-powered API. If you don’t want that stuff, you just use Ampersand-State instead of the decorated version of State we call Ampersand-Model, which adds the RESTful methods.

    • It doesn’t come with a templating language. Templates can be as simple as a string of HTML, a function that returns a string of HTML, or a function that returns DOM. The sample app includes some more advanced templating with templatizer, but it truly could be anything. One awesome approach for doing handlebars/htmlbars + Ember style in-template binding declarations is domthing by Philip Roberts. There are also people using React with Ampersand views.

    • Views have a way to declare bindings separate from the template engine. So if you want, you can use HTML strings for templates and still get full control of bindings. The nice thing about not bundling a templating engine means that you can write componentized/reusable views without needing to also include a templating system.

  2. There has to be a clear starting point and some idiomatic way to structure the app as a whole that can be used as a reference, but those standard approaches should not be enforced. We did this by building a CLI that can help you spin up a new app that follows all these conventions and can serve either as a starting point or simply as a reference. See the quick start guide for more.

  3. We wanted to build on something proven not just start something new for the sake of doing it. This is why we built on Backbone as a base instead of starting from scratch entirely.

  4. We wanted a more complete reference guide to fill that gap I mentioned that explains all the surrounding ideas, tools, and philosophies. We did this by writing a book on the topic: Human JavaScript. It’s free to read online in its entirety and available as an ebook.

  5. We wanted to make it easy to use “solved problems” so we don’t have to re-invent the wheel all the time. We did this by using npm for all package management, and by creating a quick-searchable directory of our favorite clientside modules.

  6. We wanted a painless development-to-production workflow. We did this with a tool called moonboots that adds some dev and deployment workflow functionality to browserify. Moonboots has a plugin for hapi.js and express.js where the only thing you have to do to go from dev mode (re-built on each request, not minified, not cached) to production mode (minified, cached, uniquely named static assets) is toggle a single boolean.

  7. We didn’t just want this to be an &yet project; it has to be bigger than that. We’ve already had over 40 contributors in the short time Ampersand.js has been public, and we just added the first of hopefully many non-&yet contributors to core. Everything uses the very permissive MIT license, and its modular, loosely coupled structure lends itself quite well to extending or replacing any piece of it to fit your needs. For clarity we’ve also set it up as its own organization on GitHub.

  8. We wanted additional training and support to be available if needed. For this we’ve made the #&yet IRC channel on freenode open to questions and support. In addition, there are people and companies who want paid training opportunities to be available in order for them to even feel comfortable adopting a technology. They want to know that more information and help is available, so in addition to the free resources, we’ve also put together a Human JavaScript code-along online training and offer in-person training events to provide hands-on training and support.

So are you saying Ampersand is the best choice for everyone?

Nope. Not at all. It certainly has its own set of tradeoffs. Here are some I’m aware of, there are probably others:

  • Unsurprisingly, it is still a somewhat immature codebase compared to some of these other tools. Having said that, however, we use it for all our single page app projects at &yet, and the core modules all have thorough test suites. It’s also worth noting that if you do run into a problem, odds are it won’t be as debilitating. Its open, hackable, pluggable nature makes it different from many frameworks in that you don’t have to jump through a bunch of hoops to fix or overwrite something in your app. The small modules typically make it easier to isolate, patch, and quickly publish bugfixes. In fact, we often publish a patched version to npm as soon as a pull request is merged. Our strict adherence to semver makes it possible to do that while mitigating the odds of breaking any existing code. I think that’s part of the reason it has gotten as many pull requests as it has already. Even still, if you have a different idea of how something should work, it’s easy to use your own module instead. We’re also trying to increase the number of core committers to make sure patches get in even if other core devs are busy.

  • It doesn’t have the rich tooling and giant communities built up around it yet. That stuff takes time, but as I said, we’re encouraged by the level of participation we’ve had thus far. Please file bugs and help create the things you wish existed.

  • Old browser support is a rough spot. We intentionally drew a line saying we won’t support IE8. We’re not alone there: jQuery 2.0 doesn’t support it either, Google has said they’ll only support the latest two versions of IE for Apps and recently dropped IE9 too, and Microsoft themselves just announced their plan to phase out support for all older browsers. Why did we do this? It’s because we’re using getters and setters for the state management stuff. It was a hard decision, but it felt like enough of a win to make it worth it. Unfortunately, since that is a language-level feature, it’s not easily shimmable (at least not that I’m aware of). Sadly, for some companies not supporting IE8 is a dealbreaker. Perhaps someone has already written a transpiler in a browserify transform that can solve this problem, but I’m not aware of one. If you are, please let me know. I would love it if Ampersand-State could support IE 7 and 8.

Final thoughts

Hopefully this explanation was useful. If you have any feedback, thoughts or if there’s something I missed or got wrong I’m @HenrikJoreteg on twitter, please let me know.

Also please help us make these tools better. We love getting more people involved in the project. File bugs or grab one of the open issues and help us patch ‘em.

Want to start using Ampersand?

Check the learning guides, API reference, or read Human JavaScript online for free.

For hands-on learning jump into the Human JavaScript code-along online training, or for the ultimate kickstart come hang out in person at our training events where you’ll build an app from scratch together with us.

See you on the Interwebz <3

● posted by Julie Ann Horvath

In an effort to build a more inclusive community around the events we’re a part of, we’d like to announce our very first (but certainly not last) Human JavaScript Training Scholarship.

We understand that very few people, both in tech and in the world, have access to the resources needed to level-up in their careers. This is especially true of marginalized groups, who are consistently underrepresented and often even pushed out of our industry without the opportunity to thrive here.

We also understand that there are serious barriers to entry in our industry that keep people who are marginalized by race and/or gender from entering and actively participating in our field.

With this in mind, we’ll be covering one person’s trip and tuition to participate in Human JavaScript: LIVE!, our two-day, intensive JavaScript workshop for JS developers who are looking to level-up in building clientside, single-page web apps. This workshop focuses on writing modular and maintainable code, while emphasizing the importance of code collaboration.

What’s included?

We’ll cover your round-trip flight to Washington state and your tuition to our Human JavaScript: LIVE! training workshop, August 26-27. All attendee transportation, meals, and hotel stays are 100% covered by the cost of tuition and will be handled by our event team here in Richland. As a part of the scholarship, you’ll also receive access to all of our Human JavaScript training videos online (forever) and the Human JavaScript book (also forever). We also thought it’d be rad to send you some of our favorite &yet goodies, hand-crafted by our design team.

“Ok, so how do I apply?”

We want to hear your Developer Origin Story™. How’d you get started in tech? What do you wish you had known when you got started? We want to know what you love about being a developer and what, as a community, we can do to lower the barriers to entry marginalized groups often face.

You can submit your story privately to us here, or you can publish it as a blog post, webpage, or video and link us to it here.

Requirements:

  • You want to be a great JavaScript developer.
  • You are a person of color and/or you identify significantly as a woman.
  • You agree to honor our Code of Conduct, because people come first at our events.

All of &yet’s events and workshops are trans-inclusive.

Human JavaScript: LIVE!’s venue is completely ADA accessible and we are happy to provide ASL resources for anyone who needs them. Please let us know what we can do to help make you feel at home.

Apply now for the Human JavaScript Scholarship

The deadline to apply is next Wednesday, August 13th, 2014.

Need some help getting started with your origin story?

One of our favorite hashtags this year was #mynerdstory, started by Crystal Beasley to encourage women and other marginalized groups in tech to share how they got started in the tech industry.

Here’s a few example origin stories from #mynerdstory:

In addition to being considered for the scholarship to participate in Human JavaScript: LIVE!, everyone who applies will receive a $300 discount to attend the workshop, just to say thank you for being awesome.

We want to thank our community for continuing to inspire and guide us toward making our events more inclusive. We appreciate you all so much.

If you’re looking to help us spread the word please use the hashtag #HJSLScholarship or tweet us → @andyet.

● posted by Stephanie Maier

We are very excited to announce that the iOS app for Talky is now available for download on the App Store.

(Can I get a woohoo?)

Take a quick look

You’ll use the same approach to starting a conversation as Talky on the web.

One quick tap and you’ll be able to copy the room’s URL, or just click the “+” and send an invite via text or email.

In addition to other iOS users, the people you invite can use Talky on Chrome, Firefox, or Opera on the desktop, or on Chrome or Firefox on Android devices.

(Unfortunately, we regret to inform you that not all Talky conversations will feature the most delightfully amazing Leslie!)

The rest of the story

Not long ago, in January 2014, the conversation around integrating iOS with WebRTC started to gain momentum here at &yet.

With the addition of Peter Saint-Andre, formerly an architect on Cisco’s WebEx service and an area director at the IETF, and WebRTC expert Philipp Hancke to our team, we were poised to take Talky development to the next level.

When we first released Talky in 2013, it was already solving a big problem for our team. Talky gave us the ability to communicate more naturally and effectively, especially as a distributed team. It was originally a demo for SimpleWebRTC, a JavaScript library you can use to quickly build apps like Talky. But we had no idea how well it would be received. Talky continues to steadily grow and we love hearing from all our users.

The initial prototype for Talky iOS was created by &yet’s quietly awesome iOS developer, Jon Hjelle, who, if you know him well, was quite displeased with the mention of his name in this blog post. (Hjon, please accept my most sincere apologies!)

The final Talky iOS app was completed in partnership with Steamclock Software. The dev team at Steamclock polished the prototype through some major changes to the WebRTC library, including the very important addition of resilient video support. The end result is a more featureful app and delightful user experience.

The iOS app also paves the way for our forthcoming Talky Pro service, which will give users the same experience as Talky, but optimized for business, complete with personalized branding.

Talky is built on top of the Otalk platform, a suite of completely open and standards-based tools for making modern communication a delightful experience for developers and users alike.

As a team we’re excited about WebRTC and its future contributions to open communication.

If you have a WebRTC project you think we could help with, we’d love to hear about it.

Just want to chat? We’d love to hear from you, too! =)

● posted by Henrik Joreteg

Introducing Ampersand.js, a highly modular, loosely coupled, non-frameworky framework for building advanced JavaScript apps.

Why!?!

We <3 Backbone.js at &yet. It’s brilliantly simple code and it solves many common problems in developing clientside applications.

But we missed the focused simplicity of tiny modules in node-land. We wanted something similar in style and philosophy, but that fully embraced tiny modules, npm, and browserify.

So we made Ampersand.js, a well-defined approach to combining (get it?) a series of intentionally tiny, and loosely coupled modules for building JS apps.

Post-Backbone

Backbone has been praised for its flexibility and simplicity. The fact that Backbone’s author Jeremy Ashkenas and his fellow maintainers haven’t tried to solve every problem has kept it usable for a broad range of application types. Its effectiveness is evidenced by its incredible popularity.

I built my first Backbone app when it was still version 0.3.1, and our whole team has been avid users and supporters of the project for quite some time. I even got a chance to speak at the first BackboneConf.

Philip Roberts, who has built a big portion of Ampersand.js, got a lot of experience building an incredibly complex Backbone app at his previous company Float. He certainly pushed Backbone to its limits in building complex spreadsheet-esque accounting tools for the web.

Not long after discovering Backbone at &yet, we got really into node.js, which brought with it a module approach and what became an awesome way of managing dependencies that we’ve fallen deeply in like with: npm.

Nothing has done more for our team’s ability to write clean, maintainable clientside applications than having a really awesome dependency management system and substack’s browserify that allows us to quickly declare/install external dependencies and know that things will Just Work™.

npm has also been the catalyst that enables what has been referred to as the “tiny modules movement”, the basic philosophy of which is that no matter how small or insignificant the problem, you shouldn’t have to solve it more than once.

By giving a module narrow scope and functionality you can actually maintain it without burning out. Also, knowing about and fixing gotchas in a single location means that all modules depending on it also benefit.

After getting addicted to this way of working, many developers, ourselves included, have developed an allergic reaction to libraries and plugins that don’t work that way. Unfortunately, despite its lightweight, flexible approach, Backbone itself doesn’t follow that pattern.

“What? I thought you said Backbone was flexible and modular?”

Yes, but only to a point.

“But, Backbone is on npm!”

Yes, but stay with me…

One of the problems we’ve had at &yet, especially when working on large Backbone applications, is the lack of a sane way to document the types of properties a model is supposed to contain.

Backbone models, by default, don’t enforce any structure. You don’t have to declare anywhere what properties you’re going to store. As a result, people inevitably start saving miscellaneous properties on models from within a view somewhere, and there’s no good way for a new dev starting in on the project to be able to read the models and see exactly what state is being tracked.

To solve this problem and to enforce additional structure, I wrote a replacement model called “HumanModel” that is consistent with the philosophy explored in depth in the book Human JavaScript. This model, which has now morphed into ampersand-model, forces you to declare the properties you’re going to store, and also allows you to declare derived properties, etc.
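For a sense of what that enforced structure looks like, here’s a small sketch using ampersand-model (the property names are just for illustration):

var AmpersandModel = require('ampersand-model');

var Person = AmpersandModel.extend({
  // every stored property is declared up front, with a type
  props: {
    firstName: 'string',
    lastName: 'string'
  },
  // derived properties are computed from other properties and cached
  derived: {
    fullName: {
      deps: ['firstName', 'lastName'],
      fn: function () {
        return this.firstName + ' ' + this.lastName;
      }
    }
  }
});

var person = new Person({firstName: 'Jane', lastName: 'Doe'});
console.log(person.fullName); // 'Jane Doe'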

Originally we used our replacement models within Backbone Collections, but we started running into problems. Backbone generally assumes that you’re storing Backbone.Model models in collections. So when adding an instantiated model to a collection, Backbone would fail to realize that it’s already a model. My patch to Backbone was merged and fixed this, but there have been other areas where we’ve wanted more flexibility.

For example, at times we wanted RESTful collections where data is coming from an API, but other times, we just wanted something like a Backbone collection/model system for managing state in another module, that perhaps had nothing to do with getting data from a REST API. In those cases we didn’t want to make all of Backbone a dependency of our module, just to get evented models.

Over time while building a ton of apps with it, for clients and for ourselves, we’ve kept running into these same types of problems that we attributed to the coupling/bundling of Backbone.

So we started ripping things apart into their own independently published, managed, and versioned modules.

Thus, Ampersand.js was born.

Ampersand.js splits things apart as much as possible. For example, ampersand-collection makes no assumptions about how you’re going to put data into it, what types of objects you’re going to store, or what indices you’re going to want to use to retrieve them. It follows the tiny module pattern.

But, what if you want that stuff?

Well, that’s easy, we just have another tiny module that layers in that functionality.

There’s a RESTful ampersand-rest-collection that we pre-bundle and publish as a module for convenience; the code that combines them is hilariously simple.
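From memory, the whole module amounts to roughly this (treat the exact mixin module names as approximate):

var Collection = require('ampersand-collection');
var underscoreMixin = require('ampersand-collection-underscore-mixin');
var restMixin = require('ampersand-collection-rest-mixin');

// a REST collection is just the base collection with REST methods
// (and underscore helpers) mixed in
module.exports = Collection.extend(underscoreMixin, restMixin);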

You see the exact same pattern in ampersand-state and ampersand-model. “State” is the base object that “model” is built on. But model goes the additional step of including the RESTful methods.

So what exactly is Ampersand.js? What makes it unique?

In starting to toy with the concept of building out these tools, we wrote a few guiding principles, some of which we’ll no doubt get some flack for. Here they are:

1. Everything is a CommonJS module

No AMD, UMD, or bundling of any kind is included by default. The clarity, simplicity, and flexibility of CommonJS just won. Clear dependencies, no unnecessary wrapping/indenting, no extra cruft. Just a clearly declared set of dependencies in package.json.

Any sort of bundling for any other module system is easy enough to do with any number of tools like grunt or gulp.

2. Everything is installed via npm

This isn’t a diss toward the other package management approaches, it’s just a choice to maximize simplicity. Especially given point #1.

3. Modern browsers by default

We’re unapologetically supporting only IE9+. There are many features of ES5 that enable dramatic simplifications of code that simply were not present in IE before IE9. For reference, check out kangax’s ES5 compatibility table. Not having to shim each and every feature and completely avoiding non-shimmable ones saves you so many headaches that we decided to just draw that line. Bring the haters :)
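As one concrete example, ampersand-state’s property access leans on ES5 accessors, which can’t be shimmed in IE8. A bare-bones sketch of the idea:

var person = {firstName: 'Jane', lastName: 'Doe'};

// define a computed property with an ES5 getter
Object.defineProperty(person, 'fullName', {
  get: function () {
    return this.firstName + ' ' + this.lastName;
  }
});

console.log(person.fullName); // 'Jane Doe'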

But again, remember this isn’t an all-or-nothing “framework”. In fact, very arguably it’s not a framework at all. There are pieces here that don’t require IE9 and others that could be converted to solve those problems if they matter to you. It’s just a line we chose to draw in the sand so we could focus our efforts on building for the web’s present and future instead of its past.

4. Strict semver all the things

If you’re unfamiliar with semver, the semver homepage summarizes it in about three sentences. In short, it’s a strict adherence to a versioning scheme for modules that, if followed, allows you to trust minor and patch version updates to not break your code. So, for a dependency you can specify a version like this: “^1.1.0” and know that your code will not break if the underlying dependency is upgraded from 1.1.0 to 1.2.8 because the versioning scheme prohibits breaking changes without bumping the major version number.

This flexibility is very important in clientside code because we don’t want to send 5 different versions of the same dependency to the browser. Loosely declaring dependencies of the building blocks and strictly declaring them in your app’s main package.json can help you avoid a lot of these problems. Combining the way npm manages dependencies with this approach, we can get minimal duplication of shared dependencies.
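In package.json terms that looks like this (the module name and version are just examples):

{
  "dependencies": {
    "some-tiny-module": "^1.1.0"
  }
}

With that range, npm is free to install anything from 1.1.0 up to, but not including, 2.0.0.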

5. Tiny module all the things!

The smaller the feature set of the low-level modules, the easier it is to avoid breaking changes. Higher-level modules should still exist, but, should primarily be pulling together small modules in a way that makes them more usable. For example: ampersand-rest-collection, component’s “events” module, or component’s “classes” module.

6. Expose the simplest API possible.

Simplicity is a core value. If you don’t actively fight for simplicity in software, complexity will win, and it will suck. This means things like pruning unneeded features and giving everything descriptive names even if they’re longer. That’s what minification is for. We are not compilers, so we should optimize for readability and use tools for optimizations.

While this is going to be a bit controversial, for us the focus on simplicity also means avoiding using promises. There are enough things that are new and intimidating to those building clientside apps. Adding promises makes for an unnecessarily tall cognitive leap.

Not that promises are bad, but the truth is there isn’t as much need for complex flow-control for most clientside things.

And, if you want to use promises it’d be easy enough to write a version of ampersand-sync or ampersand-router that used bluebird or another promise library and slip that into your app.

That’s the whole point of the modularity concept and still: you only include what you ultimately are using!

7. Optimize for minimal DOM manipulation and performance.

It should be easy to create rich user experiences.

There’s a lot of buzz and talk around rendering performance for JS apps. Mostly the answer to these types of performance issues is: “Don’t touch the DOM any more than you have to.”

That’s one of the core premises of libraries like Facebook’s React: only performing minimal changes and batching those changes into RAF loops.

(Note: you could very easily use React with Ampersand.js, btw.)

In canonical Backbone apps you often re-render the contents of a view if the related model or models change. But if you’re trying to do things like smooth dragging and dropping, you don’t want to re-render the contents of a view each time properties change. And even if you’re using CSS3 transitions, re-rendering a section of the DOM and adding a class won’t ever trigger the CSS3 transition, because nothing was actually transitioned; it was just replaced with another piece of DOM that had that class. So, pretty soon in those scenarios you find yourself writing a bunch of “glue code” to bind things to the DOM and only perform minimal edits.

The point is, there are valid uses of both approaches. So the goal with ampersand-view is a simple way to declare your bindings in your view code. Check out the declarative bindings section of the docs.
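As a rough sketch of that style (the selector, class name, and model properties here are illustrative):

var View = require('ampersand-view');

module.exports = View.extend({
  template: '<li><span class="name"></span></li>',
  bindings: {
    // keep the span's text in sync with the model's fullName property
    'model.fullName': {
      type: 'text',
      selector: '.name'
    },
    // toggle an 'active' class from a boolean model property
    'model.active': {
      type: 'booleanClass',
      name: 'active'
    }
  }
});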

You can also just mix and match. In certain cases it may be easier to re-render everything, but declaring very specific binding behavior is also simple without tying you to a template system. It gives you ultimate control. Modularity FTW!

8. Mobile is in the DNA

Think small and light. Optimize and build tools for touch interfaces. Help build the web as the go-to platform for mobile. (You can expect more tools to be released here in the future toward this end.)

9. Unapologetically designed for rich “app” experiences.

These ain’t no websites, pal. If you’re building content sites or sites you want thoroughly crawled this is not the tool for you.

This is for clientside JavaScript applications where the browser is treated as a runtime, not as a document viewer. For more on that, you can read about how we believe the web has outgrown the browser.

10. Embrace offline-first mentality and ServiceWorker all the things as soon as we can.

Yup. These are apps, they should compete with native apps. The thing that’s missing for web to truly be a viable alternative to native apps is good tools for building offline web apps. Again, for more on that read the post mentioned above.

But the point is, in order for an app to work offline it needs to be a true self-contained JavaScript app that can run entirely in the client. Since that’s how Ampersand.js is designed to work, it would be a nice complement to an offline-first backend like hood.ie.
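
As a rough sketch of where that’s headed, pre-caching an app shell with a ServiceWorker might look something like this (the cache name and file list are placeholders):

// in the app: register the worker (feature-detect, since support is still landing)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// in sw.js: cache the app shell on install, then serve from cache when offline
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('app-v1').then(function (cache) {
      return cache.addAll(['/', '/app.js', '/app.css']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});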

11. Everything is MIT licensed

Software licensing can suck, especially when you’re trying to manage the licenses of dependencies for a large enterprise project. Picking MIT for all of this stuff keeps things as simple as we can make them.

12. Love the developer

Don’t ignore developer workflow! The app the cli builds includes a few nice touches: flip a single “developmentMode” boolean to put your app into a live-reloaded, unminified development mode, or back into a minified production mode (more below).
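
Conceptually it’s one flag driving how the server treats client assets; this illustrative sketch is not the generated app’s exact code:

// config.js (illustrative): one boolean picks dev or production behavior
module.exports = {
  developmentMode: true
};

// elsewhere (illustrative): the server derives its serving options from the flag
var config = require('./config');

var serveOptions = {
  minify: !config.developmentMode, // minified only in production
  cache: !config.developmentMode,  // cache built assets only in production
  watch: config.developmentMode    // rebuild and live-reload in development
};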

The problems with tiny modules

It’s not a silver bullet. One of the biggest challenges for the “tiny module approach” is knowing which tiny modules exist and which ones to use. This can be quite daunting for someone who’s used to grabbing a few jQuery plugins and is new to all of this.

Most of the tiny modules are, well… tiny. These are small pieces of code that aren’t heavily marketed, because they’re not necessarily the pride and joy of the developer. Many of them are rather boring and don’t do very much; they’re infrequently updated and often even look unmaintained because, frankly, they represent a solved problem that doesn’t need to be re-solved!

Seriously, having published a ton of tiny modules, I sometimes forget about my own modules!

This can make it incredibly hard to get started and this is where frameworks really shine.

So, we’re doing a couple of things to solve that problem for ourselves and others building Ampersand.js apps.

  1. A better starting point: The ampersand cli is a scaffolding tool. It helps you build out a fully working starter app, including a hapi or express Node server to serve your application. It includes the patterns and approaches we use at &yet for structuring and serving single page apps, which we’ve defined in Human JavaScript.

  2. The tools site: tools.ampersandjs.com. This is a site with quick-searchable, hand-picked tools for building Ampersand-style apps. A grab bag of “solved problems” for single page apps, if you will. In addition, it updates its URL as you search, so it’s deep-linkable. For example, if you’re looking to do WebRTC stuff: http://ampersandjs.github.io/tools.ampersandjs.com/?q=webrtc

  3. A book describing the philosophy: If you’re looking for deeper explanations of the philosophy and approaches used in the generated app, those are described in a lot more detail in my book Human JavaScript, which along with releasing the framework, we’ve now made available to read online for free.

Massive props to Jeremy Ashkenas and the rest of the Backbone.js authors

Many of the individual modules contain copy-and-pasted code from Backbone.js.

We’re incredibly grateful for Jeremy’s work and for the generous MIT licensing that made Ampersand.js possible.

The future

There’s still a lot to do.

Now that we’ve removed our dependency on Backbone we’re free to edit other things in “core” that we’ve had alternate ideas about.

With the flexibility that comes with the tiny modules approach, it’s easier to do a lot more exploration without having to change core items.

A few examples:

  • domthing - Philip Roberts has built an incredibly awesome DOM-based templating language and a mixin to work with Ampersand.js.

  • bind-transforms - A way to elegantly bind styles like CSS transforms to models. Combined with the cached, evented, derived properties of ampersand-state, it lets you build amazing things, like smooth drag-n-drop views (see the sketch after this list).

  • ampersand-forms - A set of tools for building rich, interactive forms.
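
For context, here’s roughly what a cached, evented, derived property looks like with ampersand-state (the Dragger example is ours, not bind-transforms’ actual code):

var State = require('ampersand-state');

var Dragger = State.extend({
  props: {
    x: ['number', true, 0],
    y: ['number', true, 0]
  },
  derived: {
    // recalculated only when x or y changes (cached), and a
    // change:transform event fires whenever the result differs (evented)
    transform: {
      deps: ['x', 'y'],
      fn: function () {
        return 'translate(' + this.x + 'px, ' + this.y + 'px)';
      }
    }
  }
});

// usage sketch: bind the derived value to an element's style while dragging
var dragger = new Dragger();
dragger.on('change:transform', function (state, value) {
  // el.style.transform = value;
});
dragger.set({x: 10, y: 20});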

We’d encourage you to get involved.

For simplicity, all the “core” stuff is on GitHub as its own organization: https://github.com/ampersandjs.

Send pull requests, file issues, and tell the core team that we’re wrong on Twitter: @HenrikJoreteg, @philip_roberts, @lynnandtonic, @lancestout, @lukekarrys, and @wraithgar.

For more cool stuff, follow the whole @andyet team on Twitter.

Learning even more

To learn more about building advanced JavaScript applications that are as maintainable as they are awesome, learn directly from the folks behind Ampersand at our bound-to-be-memorable upcoming training adventure, JS for Teams: “It’s Aliiive!”

● posted by Sarah Bray

JS for Teams: It’s ALIIIIVE! is a two-day training adventure happening July 24 & 25 focused on teaching teams how to build advanced single-page apps in a highly maintainable way. Tickets on sale today!

To celebrate, we’re offering $200 off per ticket for the next 5 tickets – use the discount code AMPERSAND at check-out.

The tickets we set aside for our email subscribers already sold out, so don’t miss your chance. Seats are extremely limited.

Enroll now

● posted by Adam Brault

Is there an inherent business risk in letting your top JavaScript developers do their best work?

(What a painful thought!)

It’s one thing to build with the latest tools and techniques, but what happens when the developers who led the way on a given app move on to greener pastures?

JavaScript apps are notorious for being largely written by one “rockstar,” ending up dominated by the most experienced JS dev, the most charismatic person, or at least by the fastest typer.

And the first thing the people who inherit an app want to do is undertake a costly rewrite.

How do you overcome that tendency?

The problem isn’t devs doing their best work. It’s that the software they work on will outlive their attention span and maintenance capability.

So, are JS frameworks the answer?

One key reason enterprises choose highly constrained tools for building with JS is that they leave less room for the kind of creativity and innovation that makes a single developer’s work capable of being both high return and high risk.

But tools like those can also disengage veteran JS developers who prefer flexibility and modularity.

In my years of experience working with and managing developers, I’ve learned that developers who are mentally engaged are most capable of amazing work. And developers who are engaged in their work have a stronger desire to keep doing it.

It’s a tradeoff: the same tools that mitigate the risk of losing (or depending on) a very good developer also increase the likelihood that they’ll leave, or mentally check out.

The tools that tend to make collaboration and consistency easy can leave very good developers hamstrung.

Learning a framework provides a lot of instant gratification, but we’ve seen frameworks and approaches come and go as the web has rapidly evolved.

Developers who end up learning a framework instead of how to solve problems in JavaScript can limit their long-term potential.

What’s more, JavaScript does not lend itself well to one-size-fits-all frameworks. There are certain types of apps that make sense for certain frameworks, but it’s undeniable that no JS framework is a panacea.

So how do we solve this problem at &yet?

I’d love to say we’ve had this down for years, but we actually stumbled upon the answer.

Our first Node.js-based single-page app product, And Bang 1.0, was built largely by Henrik Joreteg and myself; I wrote the CSS and Henrik wrote the rest of the app, both server and client.

At a certain point, we decided to do a major refactor, creating And Bang 2.0.

While building the API for And Bang 2.0 was a full-team effort, getting people involved in the JS app proved tremendously challenging. Folks could contribute to parts of the app, but in the end, it was fully dependent on Henrik because not enough of our team understood the approaches Henrik was taking in building the app.

This presented a huge long-term risk. It wasn’t good for the team, and it certainly wasn’t good for Henrik. We all knew action was needed.

At one point, I recall a few of us taking Henrik out to a really painful lunch.

We told him that despite being the most productive developer on the project, he was no longer allowed to write JS on the app. He could only open issues, write documentation, and educate.

Soon, Henrik’s work documenting the approaches he’d taken on the app sparked involvement from others on the team. Things rapidly got clearer, easier to understand, simpler, more consistent.

The conversations that emerged resulted in many of the philosophies and conventions he eventually explained in his book, Human JavaScript.

Then something amazing started happening.

Where we had previously experienced frustration onboarding people to work on our advanced JS apps, we suddenly found that these apps “made sense” to everyone from veterans to new developers, and collaboratively authored code started to look like the work of one person.

Our team has loved working this way.

Here’s what Philip Roberts said on his first day working on one of our JS apps: “This code is a dream. This is the most organized and understandable codebase I’ve ever seen.”

We released Human JavaScript a year ago to great acclaim beyond our team.

We’ve since decided to follow it up with a highly experiential and interactive training that goes even deeper than Human JavaScript.

JS for Teams: “It’s Aliiive!” will provide comprehensive training on the approaches we’ve developed over the years.

JS for Teams will help your team build complex but maintainable single-page applications with a modular JS approach.

Registration is extremely limited. Tickets are now on sale. Don’t miss out!

P.S. If you’re interested in custom JS for Teams training for your organization, reach out to us at training@andyet.com.