
In just a few weeks, on April 30th, the ^lift security team will host their first secure development training on building secure Node.js web applications in Portland, Oregon.

The ^lift team has designed this training to help you understand the security challenges you will face when developing Node.js web applications, and to help you build habits that turn security from a worry or an annoyance into a comfortable part of writing your code from the very beginning.

Seats at this first class are extremely limited, so grab your spot with the team that’s been trusted to secure tools you use every day like npm, GitHub, and Ginger, and that leads the Node Security Project. Also, discounted tickets are available if you want to bring your dev team (or hack the system and bring a couple of friends; we won’t tell anyone).

If you can’t attend this training and you would like the ^lift team to bring it to your area, or even bring it to your dev team, please let us know by sending an email to training@liftsecurity.io.

Tickets! Tickets! Get your tickets now!

So Heartbleed happened, and if you’re a company or individual who has public facing assets that are behind anything using OpenSSL, you need to respond to this now.

The first thing we had to do at &yet was determine what was actually impacted by this disclosure. We had to make a list of which services are public facing, which services use OpenSSL directly or indirectly, and which services use keys/tokens that are cryptographically generated. It’s easy to update only your web servers, but really that is just one of many steps.

Here is a list of what you can do to respond to this event.

  1. Upgrade your servers to the most recent version of OpenSSL, specifically v1.0.1g. The OS packages that include that version are too numerous to list, so double-check with your package manager for the appropriate version.

  2. Restart any instance of Nginx, Apache, HAProxy, Varnish, your XMPP server, or any other tool that dynamically links to OpenSSL.

  3. For any public facing service you are running that statically links OpenSSL (or one of its libraries), you will need to rebuild the code and deploy. For us that was “restund”, which we use for STUN/TURN.

  4. For any authentication database you have that uses OAuth tokens or cryptographically generated keys, you should mark all of those tokens and keys as invalid and force them to be regenerated.

  5. Any public facing SSL certificate you have should be revoked, and new certificates with new keys should be generated as well.

The last two items can be somewhat daunting: due to the nature of this exploit, we don’t know whether our certs or keys (or really anything in memory) were compromised. The responsible thing to do is to assume that they were, and replace them.
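One more Node.js-specific check worth doing (a quick sketch, assuming you run Node's official builds, which bundle their own copy of OpenSSL): upgrading the system package alone doesn't cover your Node processes. You can see which OpenSSL version a given Node binary carries with:

console.log(process.versions.openssl); // '1.0.1g' or later means this binary is patched

Anything older means the Node binary itself needs to be upgraded, and your Node services restarted.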


The setup

Eleven days ago, Jon Lamendola was asking Adam Baldwin in our team chat how to do something with JavaScript. The discussion went like this:

Chat discussion


A quick aside for any non-devs: the longer version of what Baldwin could have written was:

if (b==1 && b===2 && b===herpderp) {
   // do stuff here
}

This is an if-statement in JavaScript. JavaScript checks the “condition,” which is the code between the first set of ( parentheses ), and if, and only if, it is true, runs the code between the { curly braces }.

The == / === are like an equals sign in math: they check if b is equal to 1, if b is equal to 2, and if b is equal to herpderp.

b and herpderp are just placeholders for other values; they could be anything, depending on what code was written before these lines.

Finally && means “and.” So in this case, for the condition to be true, b must equal 1 AND b must equal 2 AND b must equal herpderp.


At first glance, what Jon said seems to be true. Whatever b is, it can’t be both equal to 1 and 2 at the same time, so the condition can never be true, and the code inside { } will never execute.

However, it is at this moment I realise I am well and truly screwed. Jon has used two words which always get me: “can’t” and “never.” Something in the back of my mind niggles: what he said is basically true, but I am also pretty sure there’s a way that someone could break JavaScript subtly enough to make it not true. And I know it’s going to bother me until I can figure it out. This leads to my single contribution to the conversation:

  • Phil: Oh man, you’ve totally nerdsniped me.

If you haven’t heard the term nerdsniping before, this xkcd comic lays it out pretty well:

xkcd nerdsniping comic

Attempt one, == vs ===

I immediately pull up a text editor and start to experiment with how I can possibly get something to equal one and two at the same time.

The first thing my brain latches onto is the possibility of exploiting a typo in Baldwin’s code. He wrote b==1 && b===2 (note 2 = vs 3 =). This is still valid JavaScript, and the two checks mean roughly the same thing, but == is subtly different from ===. Roughly, === checks whether two things are the same thing, while == checks whether two things are “equivalent.”

  • Checking if something is exactly equal to 2 is easy: first we check whether something is a number, and if it is, we check whether it’s the number 2. If so, then yes, something === 2. If it's not a number then it can't possibly be exactly equal to 2, no matter what it is.

  • But checking if something is “equivalent” to 1 is more tricky. If something is a number then we just check if it is the number 1. But if it’s not a number, it still might be “equivalent” to 1. To figure out whether something is equivalent to 1, JavaScript calls the valueOf() method on something, and if that returns 1, then something == 1.

So I did some experiments; here are just a few of the more interesting things I tried:

  • I started by verifying that valueOf works like I expected for == comparisons: if I start with a b that is just a random thing, I can change its valueOf function and have it return 1, which works.

      var b = { }; //a something
    
      b.valueOf = function () {
        return 1;
      };
    
      if (b==1) {
        console.log("This code is executed"); //yup, this works
      }
  • I then think, ah ha, I can just start with a 2 and change its valueOf function to return 1, but that doesn’t seem to work :(

      var b = 2;
    
      // (setting a property on a primitive number only touches a temporary
      // wrapper object that is immediately thrown away)
      b.valueOf = function () {
          return 1;
      };
    
      if (b==1) {
          console.log('This aint called');
      }
  • Then I remember that there are two slightly different ways to declare a number in JavaScript: var b = 1 is different to var b = new Number(1). Ah ha! Maybe I change the prototype? And it works. Welp, I’ve just made a number 2 that is equivalent to 1. Way to break things! But breaking things is what I’m trying to do, so I guess that’s a win.

      Number.prototype.valueOf = function () {
          return 1;
      };
    
      var b = new Number(2);
    
      if (b==1) {
          console.log('This is called'); //Woo!
      }

Alas, now b===2 is no longer true. It seems new Number gives us a wrapper object rather than a plain value, and JavaScript isn’t convinced an object is really a proper number anymore, no matter what we do to the prototype.

Attempt two, read the specs

A few days pass, and the problem comes back to mind. I start digging even further into the difference between == and === to see how I might be able to exploit it. A question on stackoverflow.com discussing the differences between the two links to the original specification PDF for JavaScript. Yes, thanks Jon: at this point I am now reading the ECMAScript spec in pursuit of pedantry.

What’s possibly amusing is I’ve been a JavaScript developer for a number of years, and this is the first time I’ve ever even opened the spec document.

  • I read 11.9.3 The Abstract Equality Comparison Algorithm
  • I read 11.9.6 The Strict Equality Comparison Algorithm
  • I read 9.3 ToNumber and 9.3.1 ToNumber Applied to the String Type as they were mentioned by the Abstract Equality Comparison Algorithm.

I read those, and spend another hour or so hacking and experimenting, but alas, nothing of interest is revealed. To make b===2, b has to be a number, but to make b==1 I have to modify b's valueOf method, which immediately makes it not a proper number anymore.
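To make that concrete, here's a quick illustration (my own sketch, written after the fact) of what those two algorithms boil down to for an object compared against a number:

var thing = {
  valueOf: function () { return 42; }
};

console.log(thing == 42);  // true:  the Abstract Equality Comparison converts thing via its valueOf
console.log(thing === 42); // false: the Strict Equality Comparison sees object vs number and stops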

Frustrated, I give it up for the day. I’m still pretty sure it’s doable; I just haven’t seen how yet.

11 days later... SUCCESS!

Eleven days after the original post, and I’ve all but forgotten about my little nerdsnipe failure.

I'm sitting reading a post by Reginald Braithwaite. I always enjoy Reg’s writing because he likes to break JavaScript down to its components and think about how and why it works. And somehow reading this post, and thinking about Reg, reminds me of my little problem.

I fire up a text editor again. For some reason the answer, this time, is obvious immediately.

var b = {};
var herpderp = 2;

b.valueOf = function () {
    b = 2;
    return 1;
};

if (b==1 && b===2 && b===herpderp) {
    console.log('This code runs!!!');
}

I run it, it works, success! So how does it work? Well first of all I had to realise the one little mistake in Jon’s phrasing. He said “b can't be 1, 2 and herpderp at the same time, so that code would never execute.” But that’s not quite what’s happening here. It’s not happening “at the same time”: first JavaScript checks if b==1, then it checks if b===2 and then it checks if b===herpderp.

First we make b something that is not a number, but give it a valueOf function that returns 1, so that b==1 calls the valueOf function, sees that it got a 1 back, and passes the first test. But we also did something sneaky in that valueOf call: we changed b to be the number 2. Now when JavaScript gets to the next check, b===2, it sees that b is 2, and that passes. Finally it gets to b===herpderp, which is easy since we can make herpderp whatever we like; if we make it 2, we're all good and all three tests pass.

It should be noted that I got very lucky. If Baldwin hadn’t missed that third = off the first check, or even if he’d put the two-= version in one of the other checks instead, this solution wouldn’t have worked.

And there we have it

Eleven days later, and that little nerdsnipe can slip from my mind. Have I learned anything directly useful? Probably not. Have I learned something that might come in handy one day? Just maybe I’ll have a handy little tool I can use to pick away at some particularly fiddly bug. Or maybe my next nerdsnipe will be a little easier to solve. At the very least I now know where the ECMAScript spec document is!


Postscript:

After realising it had taken me 11 days to solve this nerdsnipe, I threw it out to Twitter and got a hint from @jaz303 about another way of solving this that would have worked even if Baldwin hadn’t typo’d that first ===:

var n = 1;
var herpderp = 3;

Object.defineProperty(global, 'b', {
    get: function () {
        return n++;
    }
});

if (b===1 && b===2 && b===herpderp) {
    console.log('this runs');
}

Here, instead of creating a local variable b with var b, we are defining a property on the global object (in the browser this would be window). The way JavaScript looks up variable references means that if it can’t find a local variable named b, it will try the global object: so b===1 is the same as global.b===1.

Also, instead of just defining global.b = 1, we’ve defined it using Object.defineProperty(), which lets us create a getter that increments the returned value every time it is called, allowing all three tests to pass if herpderp is 3.

Today we’re honored to welcome a few new amazing individuals to the &yet team.

Here at &yet, we strongly believe that each person who joins our team should fundamentally improve what it’s like to work here. We also count on our new teammates to help lead us toward being the type of company we want to see ourselves become. So you can bet that we take extra care and consideration when adding new folks to the team.

Here’s a tiny (but brilliant) glimpse of the direction we’re heading, represented by the newest additions to &yet team:

David Dias

Many of us came to know David (@daviddias) through his role as organizer of the incredible LXJS. We were lucky enough to have him provide some extra help pulling off the madness of RealtimeConf, too. We’ve been so deeply inspired by his enthusiasm and genuine positivity that we asked David to join the team. Considering his zeal for security and his knowledge of Node, David has taken on a central role in further developing and pushing forward the capabilities of ^Lift Security and the Node Security Project, starting with the training class happening this May in Portland.

Lynn Fisher

We’re grateful Luke introduced us to his friend and fellow Arizonan Lynn Fisher (@lynnandtonic) about a year ago. We were immediately blown away by her artwork’s sense of humor, the expressiveness of her design, and the versatility of her talent. Thankfully, several of us got to meet her in person, too, at last year’s CSSConf. We fell in love with her relaxed personality that seemed a perfect fit with her unique aesthetic and jaw-dropping talent as an illustrator. In the short time she’s been on the team she’s managed to astound us a few hundred times with her creativity. Plus, Lynn is just super cool.

Philipp Hancke

WebRTC developer Philipp (“Fippo”) Hancke has been a longtime associate/collaborator with members of our team in the thick of the XMPP world. He has pioneered making XMPP/Jingle <3 WebRTC, which has enormously contributed to Lance’s work on Stanza.io (a JavaScript API for an XML-free XMPP interface) and Jingle.js. You might know Fippo from his role as Hornsby Cornflower at last year’s RealtimeConf. (A name which he enjoyed enough to continue to tweet from as @hcornflower!) Fippo has already had a huge impact on our team’s WebRTC work, including SimpleWebRTC and Talky, and we can’t wait to see what he builds next.

Julie Ann Horvath

Like many of our new team members, we’ve been in talks with Julie about joining our team for a long time (in her case, a year and a half!). Julie (@nrrrdcore) is a terrific designer, an outstanding communicator, and a person of tremendous character who we’re proud to (finally!) be able to work with. We’ve been watching with awe what she’s accomplished with Passion Projects, and several members of our team had a chance to attend the public speaking workshop she organized, which many who were present considered a revolutionary moment in empowering women as conference speakers in the tech community.

Peter Saint-Andre

We previously introduced Peter as our new CTO. He rounds out this group of remarkable team members we’ve been able to add this year. We have been thrilled with the leadership he’s provided thus far in pushing forward our WebRTC efforts, and in thinking through how the additional structure we’re adding to our team can be in service of our ethos rather than in opposition to it.

Welcome David, Lynn, Philipp, Julie, and Peter!

Good luck. We’re all counting on you.

Are you frustrated over how much of your JavaScript code is dependent on too few members of your team?

Our team was there too. Over time, we’ve built a set of practices that have helped our team and clients write complex but sane JavaScript apps without depending heavily on one or two people.

Using approaches Henrik Joreteg and &yet introduced in Human JavaScript, after just two days you and your dev team will walk away with a practical, more sensible path to building JS apps. And your code base will look like it was written by one solid JS dev.

Introducing JS for Teams, a clear and simple approach to building complex JS apps—but it’s a bit more interesting than that.

JS for Teams will be an unforgettable experience that brings to training the multisensory magic of our last conference, RealtimeConf. (If you missed out, RealtimeConf was an immersive experience tech conference—incorporating music, storytelling, and interactive theatre.)

Training of this kind is very attentive and hands-on, so seats are limited.

Be the first to get your team’s spot secured by signing up for the announcement list.

When I was 17, two things occurred which changed my life forever.

My grandfather passed away and left me a book by John Lomax entitled Cowboy Songs, and I discovered Pete Seeger’s seminal “American Favorite Ballads” record series produced by Smithsonian Folkways.

American Favorite Ballads, Cowboy Songs

Growing up as a ranch-hand in Silver City, New Mexico, the “real” history of the American cowboy was always important to my grandfather, and Cowboy Songs was one of the only genuinely untainted collections of that oral tradition with lyrical content that wasn’t screened or edited by its publishers to be “safe.”

As a musician wanting to know more about my personal heritage in American folk music, I soon inevitably discovered the work of Pete Seeger. With access to his live and candid albums made for Folkways, and his early traditional hits with The Weavers, these songs that had only existed on the pages of the folk anthologies I was reading actually came to life.

With Seeger maintaining a simplicity, passion, and voice that sounded as if he were singing a traditional ballad on the day it was written, I began to internalize the fact that everything I had been taught about engaging, appreciating, learning from, and making music was derived from a very different system not much older than my grandfather.

I began to see that great music didn’t have to be made by just coming up with “material” for live and recorded performances; there was a much deeper root of common song that had been occurring naturally, and changing regionally, for as long as the human race has existed.

This was music birthed out of need for sharing common ideas or struggles, popular or unpopular, disposed to the circumstances of the writer’s family, community, national identity, or lack thereof.

The term “folk music” seemed like it had been culturally hijacked from me and interpreted for me as a commercial genre coupled to recordings based on the sound of a pretty acoustic guitar and a soft voice.

Essentially, what I saw through Pete was the not-so-long-dead antithesis of music made as a means of expanding personal fame and empire; as he’d say, “I feel that my whole life is a contribution,” not just a sound, feeling, or recorded product.

Music could actually become a wonderful vehicle to bring people together and build the spirit and depth of their community—not only inspire the listening individual, as I had been so accustomed to.

Letters from Pete

Becoming more familiar with Pete’s work, I initially assumed he had long passed on as his contemporaries Woody Guthrie, Cisco Houston, and Lead Belly had done decades before. In discovering that he was very much alive and active, as one working musician to another, I quickly wrote him a letter with a head full of questions concerning direction on initiating community-oriented music in an entirely post-indigenous city.

Not expecting anything in return, I was surprised to find a simple postcard in the mail with just a couple sentences of thanks and that he was “a lucky old guy to receive a letter like” mine.

Maybe it was youthful ignorance of the fact that he was a busy man, but I wasn’t satisfied with a simple reply so I kept writing with similar questions, and the responses grew from a sentence or two to a couple paragraphs with each exchange.

He continued to do his best to summarize solutions for my concerns with a couple nuggets of wisdom at a time, and a pointer to helpful individuals and resources.

Eventually, I ended up receiving an invite to come out and join in the music at the annual Beacon Strawberry Festival near his home in upstate New York which Pete and his family had supported for years.

Pete and Ben at Beacon Strawberry Festival

The indelible lessons I acquired in tracking his work from the time of our first correspondence, to spending a week of music and conversation with him and his family in Beacon, are thus:

The commercial industry does not have to define the way you do things. You have to define how to do what you know is best by changing the system you live in daily, a step (or song) at a time.

If you listen to any of Seeger’s live recordings, you won’t be able to go more than five seconds without realizing that most of what he’s trying to do is simply get people to sing together.

If you listen just a little more, you begin to realize that he’s also effectively making historic tunes relevant to the environment he’s in.

PBS aptly titled their 2008 biographical documentary, The Power of Song, essentially encompassing the philosophy of his work. It could easily have been called “The Power of Pete,” or “The Power of Folk” or something individually focused, but it becomes relatively hard to interpret his work that way when he’d constantly quote individuals like Aunt Molly Jackson.

Jackson was a miner’s widow from Clay County, Kentucky whose songs written from her painfully home grown experience helped fuel the founding of the National Miner’s Union, making a clean break from living under the continually deadly circumstances their former employers had provided.

Under his title chapter “What can a song do?” from his autobiography The Incompleat Folksinger, Pete recalled her passion about the matter, saying “Protest songs? Even the singer of dirty songs is protesting sanctimoniousness. … Propaganda or proper goose; the truth is what matters.”

To someone living in the era before the commercialization of music, who’d seen common songs change the entire way of life for her community, the fact that songs have meaning and can actually do something (no matter what label they’re given) was as real as the fact that she could talk about it. The label must’ve seemed like an inconsequential stereotype of her passion for doing something good.

Truly, recording technology is what changed music forever.

By the time that Pete hit the pop-charts with The Weavers in 1950 with a song like “Goodnight Irene,” the task he had taken before him was to broaden the spectrum of the topical song for the American public to know that they could engage with real, life-changing topics besides the only one that was safe (mostly) in all its various forms: love.

As their follow-up best-seller, “Tzena, Tzena” became another unlikely release during the American post-war era, bringing to light the fact that fantastic music and culture was happening around the world, not just in the very powerful USA.

Pop-music by this time was on a fast train away from producing urban hymns that meant more to families and individuals than the one who performed them, building up an immensely viable market through the perfect image of crooning singers like Bing Crosby and Frank Sinatra.

Yet what made the market even exist goes back to one point in history, the proliferation of the Victor Talking Machine Company’s “Victrola” record player. With it, two completely new doors opened for the public: the ability to listen to anything without having to go anywhere or engage anyone, and the ability to be sold ‘music’ as a commercial product in the form of a medium other than a human performing it directly.

This literally marked the beginning of the end for the indigenous oral-traditions of the western world, drawing a hard line between the old and new era, and Pete knew it.

When one comes to the conclusion that there’s something they can see that they feel others aren’t able to and should know about, they can do one of two things: get pridefully cynical about their “enlightenment,” building up unnecessary animosity for themselves, or bring a helpful idea to the table in a relevantly relatable way so that the community can grow and learn from it.

Weavers 78rpm records, blacklisted Decca radio station copy

“Don't Play,” followed by a radio station manager’s signature. A leftover from the Weavers’ blacklisting during the McCarthy era.

Though never compromising his core conviction of enabling communities by bringing back their old songs and standardizing new ones, Pete chose to tie himself to the commercial industry knowing that there was something he could do: not attempt to undermine it with useless rhetorical attacks, but thoroughly organize with others like him to be a clear example of how much life can be found in the old alternative.

The simplicity of getting people of all religions, creeds, and associations together in the same room singing something they all relate to and/or believe in could be much more exciting than a hall packed with faceless individuals quietly focused on a person-centric performance.

Even when I met him, Pete hated having himself as a central focus, always redirecting conversations to the great work the community was doing together.

A neighbor of his from Beacon told me, “The best way to get to know Pete is to work alongside him doing something useful, like picking up trash.”

No matter what form it took, contributing to something community-centric was his way of “turning over the temple tables” that be, if you will.

As a younger man in an age where members versed in the way people’s music used to be grew old, Pete was able to maintain direct access to quite a few of America’s surviving known and unknown members of folk music history before recording existed—many of whose songs (or arrangements) made it into our national archives at the Library of Congress, and the priceless anthologies collected first hand by John and Alan Lomax, Carl Sandburg, and William Doerflinger, among others.

This culture of preservation fell into two camps: publisher, and performer. You could read all you wanted to about the way songs used to be sung and what they meant, or you kept on where the songwriters left off by doing it yourself.

Songsters like Woody Guthrie and Huddie “Lead Belly” Ledbetter became Pete’s counterparts in showing the world that folk music, in the truest sense of the word, was still very much alive and at work apart from the ever expanding recording industry.

Woody Guthrie and Lead Belly

From small town juke joints to Carnegie Hall they played for all audiences, turning performances into opportunities to get people singing together, and familiar with songs important to either their heritage or collective identity.

By blazing a trail for others to think critically via music and to personally do something for the growth of their local human experience, Pete left a legacy that contemporary musicians of 2014, in our post culturally-homogenized, global era, must wrestle with at some point.

In contrast, the results from our more popularly prominent school of music have facilitated patterns that have not only fed a self-centric mentality, but completely redesigned the conventions of music interaction on an individual and communal level. This is why…

Being a Rock-Star is bullshit. If your work doesn’t tackle the concerns of anyone more than yourself, it’s not worth your time. Building community isn’t glamorous, but the rewards are better than fame.

The creation of music as content for individual consumption is really only an entertainment ecosystem that has existed for arguably less than a century, resulting in an economy of hedonistic escapism. So far as to say: as long as the product has just enough broad-spectrum emotional familiarity, multitudes will buy it or be satisfied with it, as the bar for creativity and critical thought in music is continually lowered to whatever may be tolerable.

With such great emotionalism produced at such a rapid rate, this makes way for individually-targeted ‘microwaved’ empathetic experiences without a much deeper purpose than to feel something, rather than encouraging communally involved ones with a common cause.

This continually recurring process becomes naturally sustainable in the common exhibition of what I call the “headphone-lifestyle.”

There is no greater self-contained experience than one only available to the body of a listening individual. If one comes to know only a sound, rather than an individual artist making music, they are destined to remove the human element from the sound’s creation and associate it instead with an idea, emotion, or memory.

Interaction with others becomes interaction with an extension of self, which can be beneficial with many other tools, yet is dangerous to the identity of music itself.

Instead this produces a reversal of shared interaction: the individual buys a ticket to see a show from the band they had been listening to alone, and either silently watches or is lost in an excitedly loud myriad of people.

Ultimately, this may show them that they are not as important as the ones exalted on the stage. It can become not only an unhealthy manifestation of worshipping the idealization of what a rock-star is portrayed in image to be, but also an intentional or unintentional dismissal of their worth in an associated “scene,” driving many to vie for acceptance within an imaginary demographic much “cooler” than the one they come from.

To clarify, there is a tremendous difference between this model and making the inspiring creative work of talented groups and individuals available to everyone, but what can be challenged here is our popular recording industry’s facilitation of a fantastically cool image, creating an unachievable fantasy for the common individual to acquire, who then may vicariously live it out through the products and services the industry provides.

If those products and services are accepted by a majority, then a purposed cultural-homogenization can be achieved. How did we get here? Circumstances and clever marketing.

By the 1960s, recording methodology reached a creative apex in available technology and artist ingenuity that will never again be repeated in this way and has been only emulated since (whether by use or building of instrumentation).

The market was ready to produce its Elvis, The Beatles, and Bob Dylan. Each of their stories is different, and each of their musical contributions within the industry and to the public was huge, of course, but their total glorification has left us generationally found wanting.

It’s done two things.

One, provided us with what I call Recursive Photocopy Theory (RPT), which is a core method of post-modern music crafting: with every proceeding generation of pop-musician copying and pasting past elements of earlier pop-music production for further use, the overall integrity of the end result becomes a degraded image of something before it. Just like a photocopy of a photocopy.

And two, artists feel culturally pressured to build a viable personal image, more so even than to grow in compositional ability and discovery, to maintain a hopeful relevance in letting others continually think they’re a “big deal” within the forefront of current or new trends.

This then perpetuates a “hit-it-big” mentality, eventually burning artists out by their late 20s when they never become commercially popular, yet the opportunity to do something communally relevant with their talents exists as long as they do.

The ever increasing need for the alternative to be resurrected (ironically at Pete’s death) is now more important than ever, if not only to have an alternative to fitting into total cultural-homogeny.

Pete really hit it right with his simple phrase he repeated to me and many others throughout the past few decades, “Think globally, act locally.”

Building small, globally minded, intentional communities that are individually authentic, but equipped to interact with any other one around the world seems to far exceed a lack of resistance to assimilation into an esoteric monster of one globalized society, making way for some universal “elite.”

Pete in Army uniform surrounded by people singing, 1941

Pete was able to jump into the process of global music-commercialization during his time in this way, organically growing another path culturally and environmentally with the door to access it from the inside-out. His collective work was done only with the help of willing communities, small and large, across the US (and the world).

This proves to be a reminder that, though wonderful to most of our senses, music itself isn’t as important as what you do with it. The artist has no idea of the repercussions of their work, but it’s very healthy to stay mindful that it can easily ripple into a global influence, positive or negative.

This draws us back to an attempt at making a solid definition about something generally believed to be utterly subjective.

Pete said, “A good song reminds us of what we’re fighting for.”

Tackling community issues in song is at the heart of traditional folk-music, and has become a relatively lost art, or has been generated from RPT so that it topically contains a deluge of emotional struggle with no physically or historically concrete subject matter.

Those who make statements with their new or rearranged traditional lyricism are generally pushed to the fringes of relevant culture like languages that die. When English became the first language of the U.K., Welsh and its dialects were pushed to the edges of the islands, and now exist nearly entirely as a showpiece for historic reminiscence, sung at genre-specific events or recited in special-interest groups.

It would be ludicrous to say that folk music is the only music that serves a legitimate purpose, as one can draw incredible inspiration from the unending variety of forms music composition itself has acquired at this point, but unless it’s profitable, our culturally-uniform music industry is pushing critical ideas and purposes out to the point of total sterilization.

At the present time, our youth are either not aware of the power available to them in the methods behind making folk-music, or they seem to have to do their best to match up to producing something universally relevant by rooting themselves in trending commercial methods.

What did Seeger think? Well, he’d say: “Participation – that’s what’s gonna save the human race.”

No matter what you believe about that statement, I can’t forget about what stemmed from this belief in another thing he taught me: Never refuse an audience; always get them singing together no matter who they are.

In Pete’s words, he said “I have sung in hobo jungles, and I have sung for the Rockefellers, and I am proud that I have never refused to sing for anybody.”

With participation being one of the major themes of his work, he aptly summarized this entire concern to me in this way in one of our letters: “Sure young people are clobbered by the music business. But go to the kids in schools, in summer camps. Show ‘em what fun it is to join in. Later they’ll find out that songs helped us get rid of slavery, and work together!”

Without even saying it outright, he was able to encapsulate the fact that if a song is simple enough to bring children together and help them learn something while enjoying it, the performer is well on their way to giving that generation a better start than to only be subject to the industry’s influences.

In his autobiography, he also brought this up saying: “Singing with children in the schools has been the most rewarding experience of my life.”

To continue in the spirit of this: if we collectively focused less of our attention on national shock-value entertainment, and more on songs and stories of community identity, we could turn our music into something that actually equips, rather than distracts, our youth.

Singing to kids in school for free is nice, but how then is one supposed to make a living at music? You don’t—not directly.

American Folk Songs for Children Album

Unless the opportunity clearly presents itself, living in an all-or-nothing mentality typically becomes a drain on the artist’s sanity, and on the pockets of the others they depend on, unless they go to work to support themselves. Pete reiterated this thought, recalling he was told “that someone once asked Doc Watson for advice on whether or not he should become a professional singer of folk songs. Doc was reported to have answered in his usual grave way: ‘Do it as a last resort, when you’ve failed in every other way to make a living.’”

If you don’t know about Doc, the first thing to note is that being blind didn’t stop his commercial success which began in his 40s, and he was just as similarly interested in being an auto-mechanic as a musician.

Similarly, music is not the only good thing in the world, and not everyone is naturally disposed to make it. The place of importance that music itself is often lifted to culturally can also cheat the progress of other important working talents that artists possess that can benefit the growth of their community.

In Pete’s case, he quickly moved from organizing music performance and publication, to organizing political and environmental rallies, with one of his best known successful contributions being the effort to clean up the Hudson River, which had slowly become an industrial sewer that passed right by his home. In fact, the first conversation that he and I had in person was about water chestnuts that were native to the Hudson, and had begun to flourish again.

The way things were doesn’t have to be better than the way things are. Knowing the past well can really help you change the course of what you don’t like about where we’re headed.

The world has changed a bit since Pete and Doc were in their prime.

How is it possible to maintain an honest approach to giving authentic folk-music in a new era that seems to have moved on entirely?

Just as Pete and others thoroughly held integrity during an earlier iteration of the recording industry’s expansion, so can anyone of this age keep on before our cultural experience is almost entirely wrapped into endeavors that can change at an executive whim.

Fear is the general enemy of doing anything, and if we aren’t afraid of the powers that be or the way others respond to doing something “real” with our time, we might actually be able to fulfill some goals that involve them.

To say that the era of indigenous western music is over and that its methods have become irrelevant is a contemporarily logical thought, yet in essence is also the worst enemy of our oral tradition’s existence and survival. We are at the door to letting our story in music become either an optional parallel path, or one changed on the peak of a parabolic track.

Using our tools to create experiences that draw others communally together rather than to build isolated, yet entertained lives is a great problem before us now.

The general roots-informed approaches I’ve been witness to are polarized in either an attempt to mirror the exact recreation of folk musics of the early to mid-century modern eras, or produce work in a more polished singer-songwriter market which is viable to a commercial niche.

Though wonderfully talented and enjoyable artists are making meaningful music in both ways, the separation between the identity of our cultural folk idiom and its ability to adapt and change to the induction of relevant digital tools and the community-building opportunities within the Internet holds the key to finally let it breathe beyond the attraction of what it once was in simpler times.

To maintain natural growth and relevance, just as in the historic cultural evolution of the 20th century, our common expression must be willing to adopt new tools as desired, necessary, and needed, while fully utilizing the power of its traditional methods.

Our tragedy happens when identity is defined by image, rather than image being a relatively unimportant byproduct of executing the method.

Additionally, the means of execution are then wrapped up in a process of maintaining an image easily stereotyped to fit an existing commercial genre. Applicable influence on our growth can then be hindered by the fact that the tools we are using are so coupled to what others think we should look like that exploring the use of new mediums in the folk idiom is met with grave opposition, because folk music’s aesthetic is now forever cemented in time at its earliest intersections with audio and film recording.

The opportunity of global cultural integration and building strong communities on the Internet is not an enemy to folk music; the dehumanization and dissolution of its powerful components for market viability in advertising is.

A second culture now parallels the importance of regional community expression: the global.

Both are important, and neither of which deserve to be completely left behind for the other. Though not yet fully realized, vital tools are being further developed daily for making the possibility of advancement in indigenous collaboration available within the Internet.

Direct browser-to-browser audio interfacing (WebRTC) and new possibilities of in-browser sound generation through audio-specific programming interfaces (the web audio APIs), among other tools, are lifting constraints that previously held back realtime collaborative music projects.
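As a tiny taste of what's already possible (a sketch of mine, not part of any finished tool), a browser can synthesize a tone with just a few lines of JavaScript using the web audio API:

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator(); // a basic tone generator
osc.frequency.value = 440;        // concert A
osc.connect(ctx.destination);     // wire it to the speakers
osc.start();
osc.stop(ctx.currentTime + 1);    // let it ring for one second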

What I foresee being immediately necessary as a next step into our universally-accessible paradigm of common folk expression is the iterated development of an organic ecosystem that takes the form of an application with a gravely simple interface. In simpler terms: the GitHub method applied to collaborative music making.

It could be an in-browser digital audio workstation that allows for the open sourcing or private working of current recording projects, with an ability to browse and collaborate on any project listed as open, or available to take its current form and make something else with it by request to the original author.

Audacity screenshot

Crucially simple UI example: Audacity. I've successfully taught this software in condensed form to multiple classes of gradeschoolers.

It could also take the form of mixing various instrument and voice inputs from around the world, adjusted for the latency introduced by physical distance, and sent as one audio output for internationally live performances in one or many physical locations. The palette of synthesized instruments and sound generation within would be limited only by our time and creativity in making them.

As constraints are continually lifted, and further problems are solved, the possibilities in collaboration and creation are only as limited as our imagination.

I feel the repercussions of making incredibly simple tools like this easily available to everyone would have the possibility of bridging many socio-economic gaps in music making which have been previously unchangeable.

In the voice of Pete, “Any darn fool can make something complex; it takes a genius to make something simple.” This will only work if it’s simple enough, and not sacrificed to the current gods of advertising.

While introducing the old spiritual “You Gotta Walk That Lonesome Valley” at a Carnegie Hall concert with Arlo Guthrie in the mid 70s, Pete summarized the reality of ingenuity in folk music by repeating Woody Guthrie’s take on the matter: “One of the most important things which Woody taught me and a lot of others is that: you can make a combination between the best of the old and the new. It doesn’t have to be either one or the other, you can mix ‘em up!”

I would dare to say that this sentiment can be applied not only to communally forged lyricism and balladry, but to the mediums and methods of our shared expression in this era and beyond. As the web enables accessible community-oriented environments, and what once was musical hardware is continually fully emulated in the browser, the accessibility of creative interaction is as simple as having access to a computer with a modern browser installed, and the Internet.

I cannot say that this approach applies to indigenous communities outside of commercialized Western civilization who do not need it, but the creation within could lead to a new renaissance of tools, interaction, and the sound of our common song itself. This of course will look very different than regional old-world music, but the path is clearing to move forward in working within what we have rather than only by what we’ve seen in old films and recordings of the last century, taking much needed influence from those before us, but using their lessons to empower the critical growth of where we’re headed.

Facing similarly expansive changes in his own time, Pete dealt with the tension of the artist staying traditionally informed yet still inventive and good at what they do by relating this idea to one of his greatest influences, Lead Belly: “Nowadays when the artist becomes a virtuoso, there is a greater tendency to cease being ‘folk.’”

“When Lead Belly rearranged a folk melody he had come across—he often did—he did it in line with his own great folk traditions.” Pete draws the line here for us between the words “folk” and “authentic.” As we look to what can be done in the contemporary, holding fast to the understanding of where our authentic identity came from and what it is now, could in our own way, enable us to “turn the clock back to when people lived in small villages and took care of each other.”

The spirit of this truly has the chance to thrive in our physical and inter-networked world, or even within a collision of both. It’s become much easier to sacrifice our time and efforts doing something self-indulgent, but sacrificing our lives in building authentic community will continue to make our story one that grows into something greater than commercial artifice.

The recent passing of Pete Seeger concludes the swan song of an era in the American oral tradition that now almost entirely exists only in our written and recorded archives dedicated to such things.

We’re now physically left without that link to the way things were, and missing a direct conversation as to why people did such great things in a long line of their oral traditions and folk expressions.

Still, with the prolific gift of Pete’s body of work along with others like him, we’ve been given the great opportunity to preserve, foster, and build what the community music of our indigenous culture will look like in the era to come.

This doesn’t have to stop with the death of a folk champion; let’s do something together with our craft that matters now.

Pete Seeger, 1984

As more and more people enjoy the Internet as part of their everyday lives, they are also experiencing its negative aspects. One such aspect is that sometimes the web site you are trying to reach is not accessible. While sites can be out of reach for many reasons, recently one of the more obscure causes has moved out of the shadows: the Denial of Service attack, also known as a DoS attack. It has a bigger sibling, too: the Distributed Denial of Service attack.

Why these attacks are able to take web sites offline is right there in their name: they deny you access to a web site. But how they cause web sites to become unavailable varies, and quickly gets into the more technical aspects of how the Internet works. My goal is to describe what happens during these attacks and to identify and clarify key aspects of the problem.

First we need to define some terms:

A Web Site -- When you open your browser and type in (or click on) a link, that link tells the browser how to locate and interact with a web site. A link is made up of a number of pieces along with the site address. Other parts include how to talk to the computers that provide that service and also what type of interaction you want with the web site.

Web Address, aka the Uniform Resource Locator -- A link, to be geeky for a moment, is what is known as a Uniform Resource Locator (URL). Although most people think of a URL as "how web sites are addressed," it is actually part of a much wider method of access to any service on the Internet. That said, the vast majority of URLs provide a way to navigate web sites.

This link, https://en.wikipedia.org/wiki/Url contains the following:

  • https:// The browser needs to talk to the web site using the https scheme - a secure web browsing web request
  • en.wikipedia.org The address of the computer (or computers) that will provide the information and content of the web site
  • /wiki/Url The resource you want to get from the web site

The address portion of a link, the "en.wikipedia.org" part above, is itself made up of various parts known as the Top Level Domain (TLD) and the hostname. For our example the TLD is ".org" and the hostname is "en.wikipedia" - the two pieces are then used by the browser to make a query to the Domain Name System (DNS). This request takes the name, determines which Name Server is the authority for that name, and then returns an IP address for the name.

IP Address -- Each computer that is connected to the Internet is given a unique address so it can be identified and contacted. This unique Internet Protocol (IP) address allows clients (such as web browsers) and other computers to find and access it. Once your browser retrieves the IP address for a web site it can then begin to contact a computer using the appropriate protocol style to get the contents of the web site and display it for you.
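For the curious, here is how those pieces look in code. This is a quick sketch using Node.js (the resolved IP address is just an example; yours will vary):

var url = require('url');
var dns = require('dns');

var parts = url.parse('https://en.wikipedia.org/wiki/Url');
console.log(parts.protocol); // 'https:' - the scheme
console.log(parts.hostname); // 'en.wikipedia.org' - the address of the computer(s)
console.log(parts.pathname); // '/wiki/Url' - the resource you want

// The same kind of DNS query a browser makes behind the scenes:
dns.lookup(parts.hostname, function (err, address) {
  if (err) throw err;
  console.log(address); // the IP address for the name
});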

Now that's a lot of little things all happening behind the scenes when you go visit a web site :) - but now that we know what we're working with, it will make describing what a DoS is easier.

When someone launches a Denial of Service attack, they are trying to make the computers providing a service unable to perform their duties. The difference between a DoS and a DDoS is in how many outside computers are helping perform the attack. A Distributed Denial of Service attack is, as the name implies, distributed across many, many computers, all of which are making requests to the target over and over again.

That is the crux of a DoS: one group of computers overloads another group of computers by making the target process so many requests that it cannot keep up. Let's work through two examples of what a DoS attack would look like in the real world:

Example 1 -- Think of the Internet as a highway, with your browser trying to access it via the on-ramp. While a small stream of cars is trying to get onto the highway, things go well; but when the flow of cars gets to be too much, all hell breaks loose and everyone comes to a halt and sits in traffic.

Example 2 -- During home games, Denver Broncos quarterback Peyton Manning is able to call plays out to his team. The players can hear him just fine, since hometown crowds stay quiet during plays. However, once the Broncos traveled to the Super Bowl to take on the Seattle Seahawks, things changed. The Seattle fans, known for being very loud, were able to act as a "12th man" for the Seattle defense, so much so that Manning's teammates could not hear him above the noise! They were suffering from a Distributed Denial of Service attack from more than one fan at a time.

As with any attack, you immediately begin to wonder how they can be prevented and how to deal with them while they are active. The answer varies, not only because of the nature of each attack, but also because there are quite a few different kinds of DoS attacks. This little essay is getting rather long already, so we may discuss counter-measures in a future blog post.

However, now when your browser is giving you an error message or the "spinner" is doing its best to annoy you, you will at least have the information to understand what is happening when the IT or Support people say "Yes, we are being DDoS'd."

Last week, Eran Hammer came to the &yet office to introduce Hapi 2.0.

Hapi is a very powerful and highly modular web framework created by Eran and his team at Walmart Labs. It currently powers the mobile walmart.com site, as well as some portions of the desktop site. With that kind of traffic, you could definitely say Hapi is battle-tested.

Hapi's quickly becoming a popular framework among Node developers. Since mid-2013, &yet has been using Hapi for all new projects and we've begun porting several old projects to use it, too.

Before he started his presentation, Eran casually mentioned that he planned to at least touch on every feature in Hapi, and boy did he succeed.

From creating your server and adding routes and their handlers, to writing and utilizing plugins, and even configuring options to help your Ops team keep your application running smoothly, everything is covered. As he talks through the features, Eran also points out each breaking change along the way, to help you update your applications from Hapi 1.
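To give a flavor of where the talk starts, here is a minimal sketch of the basics (from memory of the Hapi 2.x API, so treat the details as approximate): create a server, add a route with a handler, and start listening.

var Hapi = require('hapi');

var server = new Hapi.Server('localhost', 8000);

server.route({
  method: 'GET',
  path: '/hello',
  handler: function (request, reply) {
    // the reply() interface is one of the changes Eran covers
    reply({ greeting: 'hello world' });
  }
});

server.start(function () {
  console.log('Server running at', server.info.uri);
});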

If you're currently using Hapi, are considering using it in the future, or are even a little bit curious about it, I highly recommend watching.

&yet presents Eran Hammer on Hapi 2.0 from &yet on Vimeo.

It's an honor to introduce Peter Saint-Andre as a new member of our team and as the CTO of &yet.

Peter has a long history of leadership in Internet standards as an IETF Area Director, as Executive Director of the XMPP Standards Foundation, and through his involvement in standardizing technologies like WebSockets and OAuth. He's among a handful of people who've (with quite little fanfare) helped pave the Information Superhighway™.

His experience and involvement with Internet security, distributed systems, and collaboration is a boon to our team as well.

Peter's one of the original members of the Jabber, Inc. team who created the most widely distributed protocol for realtime communication (XMPP). He's given over a decade of deep consideration to the ways people use technology to collaborate and has a personal passion for making that better.

Peter has an incredible ability to digest complexity and produce clarity. He persistently works to build consensus among teams and effectively communicate deeply technical subjects.

As our CTO, Peter will help us use the knowledge and creations of our team to solve important, interesting problems for our customers and the open source community.

But we didn't merely recruit Peter because of his technical aptitude and accomplishments. Many of our team members have worked with Peter in the XMPP community and have experienced a level of patient, unselfish servant leadership that has long been an inspiration to our entire team. He is an outstanding listener, thoughtful, and endlessly positive.

As Bear, our most seasoned developer, puts it: "I want to be Peter when I grow up." And Bear isn't alone in that sentiment.

We believe Peter is the best possible choice to help lead our team of leaders and help us continue to forge a distributed team that is increasingly reflective of our values of what an organization should be.

It used to all make sense.

The web was once nothing but documents.

Just like you'd want some type of file browser UI to dig through files on your operating system, obviously, you need some type of document browser to view all these web-addressable "documents".

But over time, those "documents" have become a lot more. A. lot. more.

I can now use one of these "documents" to have a 4-person video/audio conference on Talky with people anywhere in the world, play incredible full-screen first-person shooters at 60fps, write code in a full-fledged editor, or {{ the reader may insert any number of amazing web apps here }} using nothing but this "document viewer".

Does calling them "documents" seem ridiculous to anyone else? Of course it does. Calling them "sites" is pretty silly too, actually, because a "site" implies a document with links and a URL.

I know the "app" vs. "site" debate is tired and worn.

Save for public, content-heavy sites, all of the apps that I'm asked to write by clients these days at &yet are fully client-side rendered.

The browser is not an HTML renderer for me; it's the world's most ubiquitous, yet capable, runtime. With the amazing capabilities of the modern web platform, it's to the point where referring to a browser as a document viewer is an insult to the engineers who built it.

There is a fundamental difference when you treat the browser as a runtime instead of a document renderer.

I typically send it nothing but a doctype, a script tag, and a stylesheet with permanent cache headers. HTML just happens to be the way I tell the browser to download my app. I deal with the initial latency issues by all-but-ensuring visitors hit the app with a primed cache. This is pretty easy for apps that are opened frequently or that sit behind a static login page from which you prefetch the app resources. With proper cache headers the browser won't even do the 304 not-modified dance. It will simply start executing code.
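In practice that can be as simple as something like this (a hypothetical sketch using Express; the file names and cache lifetime are made up):

var express = require('express');
var app = express();

// The "document" is nothing but a bootstrap for the app.
var shell = '<!doctype html>' +
  '<html><head><link rel="stylesheet" href="/public/app.css"></head>' +
  '<body><script src="/public/app.js"></script></body></html>';

app.get('/', function (req, res) {
  res.send(shell);
});

// Far-future cache headers (one year, in milliseconds) on the app resources,
// so a primed cache skips even the 304 not-modified dance.
app.use('/public', express.static(__dirname + '/public', { maxAge: 31536000000 }));

app.listen(3000);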

This makes some people cringe, and many web purists (luddites?! #burn) would argue that everything should gracefully degrade and that there isn't, or at least there shouldn't be, any distinction between a JavaScript app and a site. When I went to EdgeConf in NYC, the "progressive enhancement" panel said a lot of things like "your app should still be usable without JS enabled". Often, "JavaScript is disabled" really just means the browser is still downloading your JavaScript. To this I say:

WELL, THEN SHOW ME A TALKY.IO CLONE THAT GRACEFULLY DEGRADES!

It simply cannot be done. Like it or not, the web has moved on from that myopic view of it. The blanket graceful degradation view of the web no longer makes sense when you can now build apps whose core use case is fully dependent on a robust JavaScript runtime.

I had a great time at Chrome Dev Summit, but again, the core message of the "Instant Mobile Apps" talk was: "render your HTML on the server to avoid having your render-blocking code require downloading your JS before it can start executing."

For simple content-driven sites, I agree. Completely. The demo in that particular talk was the Chrome developer documentation. But it's a ridiculously easy choice to render documentation server side. (In fact, the notion that there was ever a client-side rendered version to begin with was surprising to me.)

If your view of the web lacks a distinction between clientside apps and sites/documents, I'd go as far as to say that you're now part of the problem.

Why?

Because that view enables corporate IT departments to argue for running old browsers without getting laughed out of the building.

Because that view keeps some decision makers from adopting 100% JavaScript apps and instead spending money on native apps with web connectivity.

Because that view wastes precious developer time inventing and promoting hacks and workarounds for shitty browsers when they could be building next-generation apps.

Because that view enables you to argue that your proficiency with browser CSS hacks for IE7 is still relevant.

Because that view will always keep the web locked into the browser.

What about offline?

I'm writing this on a plane without wifi and of course, using a native app to do so. There are two primary reasons for this:

  1. The offline web is still crap. See offlinefirst.org and this hood.ie post for more.
  2. All my favorite web-based tools are still stuck in the browser.

The majority of users will never ever open a browser without an Internet connection, type in a URL and expect ANYTHING to happen.

Don't get me wrong: I'm very supportive of the offline first efforts, and they are crucial to changing that expectation.

We have a very different view of apps that exist outside of the browser. In fact, the expectation is often reversed: "Oh right, I do need a connection for this to work".

Chrome OS is one approach, but I think its 100% cloud-based approach is more hardcore than the world is ready to adopt and certainly is never going to fly with the indie data crowd or the otherwise Google-averse.

So, have I ranted enough yet?

According to Jake Archibald from Google, ServiceWorkers will land in Canary sometime in early 2014. This work is going to fundamentally change what the web can do.

If you're unfamiliar with ServiceWorkers (previously called Navigation Controllers), they let you write your own cache control layer in JavaScript for your web application. ServiceWorkers promise to serve the purpose that appcache was intended for: truly offline web apps.

At a high level, they let JavaScript developers building clientside apps treat the existence of a network connection as an enhancement rather than an expectation.
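The API is still in flux, so treat this as a sketch of the proposed shape rather than a recipe, but a worker that caches an app shell at install time looks roughly like this (the file and cache names are hypothetical):

// sw.js: based on the current ServiceWorker drafts; method names
// may well change before this actually ships.
var SHELL = [
  '/',
  '/assets/app.js',
  '/assets/app.css'
];

self.addEventListener('install', function (event) {
  // The worker isn't considered installed until the shell is cached.
  event.waitUntil(
    caches.open('shell-v1').then(function (cache) {
      return cache.addAll(SHELL);
    })
  );
});

The page registers it with something like navigator.serviceWorker.register('/sw.js'), feature-detected, since nothing supports it yet.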

You may think, "Oh, well, the reason we use the web is because access to the network provides our core value as an app."

While I'd tend to agree that most apps fundamentally require data from the Internet to be truly useful, you're missing the point.

Even if the value of your app depends entirely on a network connection, you can now intercept requests and choose to answer them from caches that you control, while in parallel attempting to fetch newer versions of those resources from the network.
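In ServiceWorker terms (again, assuming the draft API holds), that's a fetch handler that answers from a cache you control and refreshes that cache in the background:

// Inside the worker: serve cached responses right away, and update
// the cache from the network in parallel for next time.
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('shell-v1').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        var fresh = fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone());
          return response;
        });
        // Serve the cached copy if we have one (a failed network fetch
        // is fine in that case); otherwise wait on the network.
        return cached || fresh;
      });
    })
  );
});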

If you think about it, that capability is no different from what something like Facebook for iOS or Android already does.

The Facebook app's core value is unquestionably derived from seeing your friends' latest updates and photos, which you're obviously not going to get without a connection. But the fundamental difference is this: the native app will still open and show you all the cached content it has. As a result (and for other reasons) the OS has given those types of apps a privileged status.

With the full programmatic cache control that ServiceWorkers will offer, you'll be able to load your app, and whatever content you last downloaded, from cache first, while optionally trying to connect and download new things from the network. The addition of a controllable cache layer in web apps means that an app like Facebook really has no compelling reason to be a native app. I mean, really. If you break it down, that app is mostly a friend timeline browser, right? (The key word there being browser.)

BUT, even with the addition of ServiceWorkers, there's another extremely important difference: user perception.

We've spent years teaching users that things they use in their web browser simply do not work offline. Users understand (at least on some unconscious level) that the browser is the native app that gets sites/documents from the Internet. From a user experience standpoint, trying to teach the average user anything different is like attempting to roll a quarry full of rocks up a hill.

This is where it becomes apparent that failing to draw a distinction between fully clientside "apps" and websites is a real disservice to all these new capabilities of the web platform. It doesn't matter how good the web stack becomes; it will never compete with native apps in the "native" space while it stays stuck in the browser.

The addition of "packaged" Chrome apps is an admirable but, in my opinion, still inadequate attempt at addressing this issue.

At the point where a user on a mobile device opts to "add to home screen", the intent is more than just a damn bookmark; they're saying: "I want access to this on the same level as my native apps." It's a user's request for an installation of that app, but in reality it's treated as a shitty, half-assed install that's really just a bookmark, even though the user's intent is clear: "I want a special level of quick and easy access to this specific app."

So why not just embrace that what they're actually trying to do is "install" that web application into their operating system?

Apple sort of does this for Mac apps. When you first open a native Mac desktop app that you "sideloaded" (a.k.a. downloaded from the web), Apple treats it a bit like an awkward stepchild, warning you: hey, this app was downloaded from the Internet, are you sure you want to let it run?

While I'm not a fan of the language or the FUD involved with that, the timing makes perfect sense to me. The point where I've opted to "install" something to my homescreen on my mobile device (or the desktop equivalent) seems like the proper inflection point to verify with the user that they do, in fact, want to let this app have access to specific "privileged" OS APIs.

Without a simple way to install and authorize a clientside web app, these kinds of apps will always get stuck in the uncanny valley of half-assed, semi-installed apps.

So why bother in the first place? Why not just do native whenever you want to build an "app"? Beyond providing a way to build for multiple platforms, there's one more thing the web has that native apps don't have: a URL.

The UNIFORM RESOURCE LOCATOR concept is easy to take for granted, but it's insanely useful to be able to reference things like a link to an email inside Gmail, a tweet, or a very specific portion of documentation. Being able to naturally link between apps on the web is what gives the web its power. It's unfortunate that many developers, when they first start building single page applications, don't update URLs as they go and fail to respect the "back" button, thus breaking the web.
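Doing it right doesn't take much. Here's a minimal sketch, where renderRoute is a hypothetical stand-in for whatever view switching your app already does:

// Update the address bar as the user navigates, without a reload.
function navigate(path) {
  renderRoute(path);               // your app's own view logic
  history.pushState({}, '', path); // keep the URL honest
}

// Respect the back/forward buttons: popstate fires when the user
// moves through history, so re-render for the restored URL.
window.addEventListener('popstate', function () {
  renderRoute(window.location.pathname);
});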

But when done properly, blending the rich interactivity of native apps with the addressability and ubiquity of the web is a thing of beauty.

I cannot overstate how excited I am about ServiceWorkers. Because finally, we'll have the ability to build web applications that treat network resources the same way that good native applications do: as an enhancement.

Of course, the big IF is whether platforms play along and actually treat these types of apps as first class citizens.

Call me an optimist, but I think the capabilities that ServiceWorkers promise us will shine a light on the bizarre awkwardness of opening a browser to access offline apps.

The web platform's capabilities have outgrown the browser.

Let's help the web make its next big push.

--

I'm @HenrikJoreteg on Twitter. I'd love to hear your thoughts on this.

For further reading on ServiceWorkers, here is a great explainer doc.

Also, check out my book on building sanely structured single page applications.
