We believe WebRTC is one of the most important technologies to hit the web in many years. There is a significant opportunity for WebRTC to help deliver on the promise of the open web, this time for communication and collaboration.

In several of his talks over the past couple of years, our colleague Henrik Joreteg has said, "WebRTC needs more Open Web hackers!" It's very true.

But WebRTC is much more complicated than other browser APIs that web developers deal with.

From our vantage point, it seems that a large portion of WebRTC's users have not been Open Web hackers.

Continue reading »

At &yet we have become big fans of CoreOS and Docker, and we've recently moved a bunch of our Node applications there.

We wanted to share a little tidbit for something that seems like it should be easy, but ended up being a minor stumbling block: making CoreOS's Etcd cluster available in Docker.

Here's a little background for those not up to speed on CoreOS or Etcd. CoreOS bills itself as "Linux for Massive Server Deployments." CoreOS is usually deployed with Etcd, "a highly-available key value store for shared configuration and service discovery," giving servers in a cluster the ability to share configuration data. We have been using Etcd and Confd for some time, so we naturally thought: "wouldn't it be cool to configure applications and services within Docker from Etcd?"

This can be easily accomplished by passing an environment variable to Docker at runtime and tweaking a few iptables rules. CoreOS makes the Etcd HTTP API available on its private IP address at port 4001 by default. CoreOS also sets an environment variable called COREOS_PRIVATE_IPV4 in /etc/environment with said IP. Fortunately, Docker is networked to the host's private IP, so this is the easiest point of access for Etcd.
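Concretely, the wiring looks something like the following sketch. The iptables rule and the ETCD_HOST variable name are illustrative; adjust the interface, port, and image name to your own setup:

```shell
# On the CoreOS host: let containers on the Docker bridge reach
# Etcd's HTTP API on the host's private address.
iptables -A INPUT -i docker0 -p tcp --dport 4001 -j ACCEPT

# COREOS_PRIVATE_IPV4 is written to /etc/environment by CoreOS.
source /etc/environment

# Hand the address to the container; the app inside can then reach
# Etcd at http://$ETCD_HOST:4001/v2/keys/...
docker run -e ETCD_HOST="${COREOS_PRIVATE_IPV4}" my-node-app
```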

Continue reading »

Hey folks! It's the holiday season, the goose is getting fat, the decorations are filling the air with glimmery goodness, and the yeti is plodding through the snow with gifts for all its web-loving friends.

But oh no! What if it doesn't know where to go?!

The yeti could shower its gifts onto the twitters, sure, but then how would it know you received them? It could leave them on this here blog, but what if–gasp–you FORGET? You might accidentally leave the yeti's gift out in the cold, cold snow where it could develop abandonment issues.

Hang on hang on. I have an idea. What about &you?

Continue reading »

Enterprise Week is the name of a week-long activity for high school seniors in the Pasco School District. It's loosely affiliated with Washington Business Week, which is our state's incarnation of a program run in many states aimed at exposing young folks to the business world.

During Enterprise Week, every senior from the three high schools in the district is pulled out of school and dropped into their "offices" in a local convention center. Volunteers from the local business community are asked to be a "Company Advisor" for each group of about a dozen students. I volunteered to be one, and I had no idea what to expect.

Luckily there was a manual.

The first, most important thing a Company Advisor (CA) should do, we were told, is to sit back and let the kids do the work. Each "company" would have a CEO, a COO, and heads of other various departments, much as you'd expect from a real business. It was the CA's job to select the CEO, and then let them run the show.

Continue reading »

Henrik follows up on his "Opinionated rundown of JS frameworks" blog post with a presentation at FFConf, in which he explored topics related to single-page apps, including:

  • Should we build apps that require JavaScript to run?
  • What is a "native web app"?
  • What about progressive enhancement?
  • The performance implications of clientside apps
  • Twitter’s move away from clientside back to server-rendered
  • The two classes of web apps
  • User expectations of modern applications
  • Installable web apps
  • True offline support for web apps: ServiceWorker
  • Isomorphic (dual-rendered) applications
  • Picking tools for a rapidly changing environment
  • Optimizing for change
  • Building for the future of the web
Continue reading »

Yesterday Fippo talked about applying the principles of the open web to realtime communication with WebRTC. Consider this post the first installment of providing in-depth descriptions of what that means in practice.

Where codecs fit in

A key component of realtime audio and video systems (like Talky) consists of methods for encoding the light waves and sound waves that our eyes and ears can understand into digital information that computers can process. Such a method is called a codec (shorthand for coder/decoder).

Because there are many different audio codecs and video codecs, if we want interoperability, then it's important for different software implementations to "speak the same language" by having at least one codec in common. In the parlance of industry standards, such a codec is specified as "mandatory to implement" or MTI. (Often an MTI codec is a lowest common denominator, but software can use better codecs if available, a model that has also worked well for cipher suites in SSL/TLS and other technologies.)
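To make the MTI idea concrete, here's a minimal sketch (the function and codec lists are illustrative, not a real browser API):

```javascript
// Sketch: choosing a shared codec between two endpoints. Each side
// advertises the codecs it supports, in order of preference; "VP8"
// plays the MTI role here, guaranteeing a match always exists.
function chooseCodec(ours, theirs) {
  // First codec in our preference list that the other side also supports.
  const common = ours.find((codec) => theirs.includes(codec));
  if (!common) {
    throw new Error('no codec in common: interoperability fails');
  }
  return common;
}

// Both sides implement the MTI codec, so even when their preferred
// codecs don't overlap, the call still works.
chooseCodec(['H.264', 'VP8'], ['VP9', 'VP8']); // falls back to 'VP8'
```

Because every implementation carries the MTI codec, the `find` above can never come up empty between two spec-compliant endpoints.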

Continue reading »

There's a lot of talk about this topic of "the web we want," and a lot of it has focused around WebRTC lately.

I have been working with WebRTC since mid-2012, both in the Chrome and Firefox browsers, as well as the native webrtc.org library. So far I have filed more than sixty issues in the WebRTC project issue tracker and the Chrome bugtracker. I'm especially proud that I've crashed the production version of Chrome eight times.

I am among the top non-Google people to file WebRTC issues. And I managed to get quite a few of them fixed, too. I visited Google's Stockholm office in September and had a conversation with the team there about how I use the issue tracker and how that process works. Full disclosure: I got a t-shirt (even though it turned out to be too large). And I even started reviewing the Chrome WebRTC release notes before they're sent out.

Justin Uberti's description of WebRTC as a "project to bring realtime communication to the open web platform" is still the vision I cling to.

Continue reading »

For those of you who don't know, hapi is a web framework, led by Eran Hammer, with a rapidly growing community.

Over the last month, a lot of work has gone into it to prepare for the release of version 8.0.0. hapi 8 represents the biggest release since the start of the framework, and with it come quite a few changes.

No more packs

That's right, those confusing pack things are gone. If you used them, though, don't worry. The functionality still exists, just in different ways. Instead of a pack that contains servers, we now have a server that contains connections. You can still create a server with multiple connections, but if you only need one, everything will feel much more straightforward and intuitive.
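In code, the hapi 8 shape looks roughly like this (the port and route are illustrative):

```javascript
var Hapi = require('hapi');

// One server, one connection -- no pack in sight.
var server = new Hapi.Server();
server.connection({ port: 8080 });

server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    reply('hello world');
  }
});

server.start(function () {
  console.log('Server running at', server.info.uri);
});
```

If you do need multiple connections, just call `server.connection()` more than once on the same server.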

Continue reading »

As a semi-official part of the &yet Blog Team and a super-official, semi-professional antagonizer, I spend a lot of time kicking in office doors and demanding that people write things. Some of those folks (once they’ve come to the realization that I will not stop making this pose in their doorframe)

DO THEM

…will buckle down and whip out some words about JavaScript or Node or NodeScript or JavaNode or BackBonemBerGular or whatever in a jiffy, if only to dislodge my presence from their immediate vicinity for one more day. Others flatly refuse, and that's okay too.

But there are others.

Continue reading »

Though we mostly share what we've learned about making great software, software really isn't the point of what we do at &yet. Writing code and designing interfaces and helping build software products and teaching what we know is all just an excuse to spend time on what we really care about – which is people.

Getting to be with our favorite people while we figure out challenging, interesting problems together is the whole point. If we were good at building ocean liners, that's what we'd be doing. It just happens that we're good at building software.

Last month, we started sending out a bi-weekly dispatch that we're calling &you. Close to 4,000 of you are signed up, and we're really grateful for the conversations we've been having with many of you.

So far, we've shared stories about what's on our minds these days, asked questions about what you're doing, spotlighted people in the &you community and their projects, and shared some of our favorite reads and resources.

Continue reading »

You're looking at your todo list and pondering what code to write during one of the brief moments of free time that appear on your daily schedule, when all of a sudden you get a message in team chat: Is the site down for anyone else?

It can be a frustrating experience, but never fear; you're not alone. We here at &yet experienced this type of outage once before, and then again this week. In fact, nearly every operations team has experienced at least a variation on the above nightmare. It is just a matter of time before you have to deal with people thinking your site or service is down when the problem is really with the Domain Name System (DNS). Even shops that spend a lot of money to work with DNS vendors who themselves have serious redundancy and scale will eventually fall prey to an orchestrated Distributed Denial of Service (DDoS) attack.

So what did we learn when we were faced with an attack this week? Mainly, a reminder of the importance of redundancy. The best solution is still the simplest: have more than one DNS vendor. Now don't be fooled by the word "simple": while redundancy is the simplest solution, it is not at all simple to implement. Let's walk through what we will need.

Pick two (or more) vendors. The crucial part is that both vendors have to offer an API for changes to your DNS zone records. If they don't have an API, you will be forced to make updates using their web interface, and that is not a recipe for success. Another criterion is that both vendors should have solid track records dealing with DDoS events; there's no use picking a vendor that falls over at the slightest attack.
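The heart of keeping two vendors in sync is computing which records differ so you can replay changes through the secondary vendor's API. Here's a minimal sketch (the record shape and function names are illustrative; real vendor APIs each have their own formats):

```javascript
// Sketch: diff two zones so the secondary vendor can be brought in line
// with the primary. Records are modeled as { name, type, value }.
function diffZones(primary, secondary) {
  const key = (r) => `${r.name}|${r.type}|${r.value}`;
  const have = new Set(secondary.map(key));
  const want = new Set(primary.map(key));
  return {
    add: primary.filter((r) => !have.has(key(r))),
    remove: secondary.filter((r) => !want.has(key(r))),
  };
}

const changes = diffZones(
  [{ name: 'www', type: 'A', value: '203.0.113.10' }],
  [{ name: 'www', type: 'A', value: '203.0.113.9' }]
);
// changes.add and changes.remove can then be pushed through the second
// vendor's API so both serve identical answers.
```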

Continue reading »

a turkey wearing headphones listening to a phonograph, the music is probably dubstep

Six years ago, around this time of year, I met this turkey for the first time.

It was on a flyer for a little local music venue called The Red Room that had received quite a bit of acclaim. I'd heard of the venue, but hadn't been there yet. I hadn't met anyone who'd been to it yet, so somehow it wasn't really real.

The flyer sat on the counter of a tiny print shop on the far side of Pasco. I say "shop," but this was a warehouse. I could hear the giant analog four-color press churning out Spanish-language phone books, going "chkunk-shhhh-chkunk-shhhh-chkunk!" faster than I could possibly onomatopoeticate.

Continue reading »

It's one thing to have a web application in production that requires active monitoring (you are monitoring it, right?), but it's another issue completely when that web application contains a "contact us" form. All good teams will use various tools to gather emails so they can manage their subscriber lists appropriately, and that's the rub: what happens when code in the app changes in a way that impacts the form?

Nothing – why? Because you will be blissfully unaware your form is failing unless you test it.

We're going to demonstrate testing a web form using Python and the Mechanize library. Mechanize allows you to load a page, inspect any forms on the page, and then manipulate a form just as a user would.

Let's see some code!

Continue reading »

Last week, Google Chrome started to support the next-generation video codec VP9 for WebRTC. (It's highly experimental, available only in the developer version, and you need to enable it, so the ETA for this is probably going to be mid-2015.) That is good news, because VP9 offers better video quality at the same bandwidth. Another way to look at it is that VP9 gives you the same quality at lower bandwidth, although at the expense of computational power; in other words, your mobile device gets warmer.

I immediately wanted to implement and test the feature, if only so I could add Chrome 41 to the long list of versions that I managed to crash. However, this turned out to be harder than it initially seemed.

Let me try to explain the issue with an example of a videochat that has three participants, two of whom support VP9: Suppose that those clients use an XMPP MUC for having the conference, supported by an SFU like the Jitsi videobridge (which is what we do in the next version of Talky). The browsers announce their capabilities ("I implement VP8," "I implement VP9," a Firefox browser might even do H.264 and VP8) through Entity Capabilities when joining the MUC.

So the first two browsers know they both implement VP8 and VP9 and use that list of codecs when calling the RtpSender's send method for the first time. That RtpSender object is another improvement that came from ORTC. If everything goes well, they will enjoy the superior quality of VP9.
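The trouble starts when the third, VP8-only participant joins: with a single stream relayed through the SFU, the whole conference has to settle on a codec everyone decodes. A tiny sketch of that dynamic (names illustrative):

```javascript
// Sketch: pick the best codec that EVERY participant in the conference
// supports, in preference order. One VP8-only participant drags the
// whole room down to VP8.
function conferenceCodec(participants, preference = ['VP9', 'VP8']) {
  return preference.find((codec) =>
    participants.every((caps) => caps.includes(codec))
  );
}

conferenceCodec([['VP9', 'VP8'], ['VP9', 'VP8']]);          // 'VP9'
conferenceCodec([['VP9', 'VP8'], ['VP9', 'VP8'], ['VP8']]); // 'VP8'
```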

Continue reading »

Articulating our decision making is a huge part of our jobs as designers. Every day we should be asking ourselves “Why did I decide to do it this way?” Our coworkers, clients, and users will be asking the same question, so we may as well be prepared.

Everything we add or leave out is the result of decision making. Sometimes we’re called to explain decisions around entire layouts and other times it’s just the exact shade of grey we chose for a horizontal rule. Big or small, it’s important to understand why we land on the solutions we do.

Below are six common decision patterns I’ve seen in my time as a designer. Note that these aren’t specifically ordered, and I’m not suggesting any one is best. It’s important first to recognize our behavior before deciding what makes the most sense for our project, which, as always, depends.

Decisions made by other people

Continue reading »

Eons ago when our story first began, I told you how I needed to make a client app more consistent and efficient by implementing optimistic concurrency and JSON Patch in our model layer.

As I said before, in our app, we combine both of these forces for an efficiency and consistency one-two punch. As I worked through the integration and some other requirements, I realized that a third module that combined the previous two and added some sane defaults for conflict resolution would be really helpful, so I built one. It's called ampersand-model-optimistic-update-mixin. Say that five times fast, or just call it AMOU (pronounced "ammo").

What it does

Let's recall our good buddy Franco from last time, and suppose that his data is edited by two different people working from the same base version:
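The core of the conflict check can be sketched like so (a simplification for illustration, not AMOU's actual algorithm): each editor's changes are expressed as JSON Patch-style ops against the same base version, and ops that touch the same path with different values are conflicts.

```javascript
// Sketch: find conflicting edits between two patch sets produced from
// the same base version of a record.
function findConflicts(oursPatch, theirsPatch) {
  const theirs = new Map(theirsPatch.map((op) => [op.path, op]));
  return oursPatch.filter((op) => {
    const other = theirs.get(op.path);
    return other && JSON.stringify(other.value) !== JSON.stringify(op.value);
  });
}

findConflicts(
  [{ op: 'replace', path: '/name', value: 'Franco' },
   { op: 'replace', path: '/age', value: 30 }],
  [{ op: 'replace', path: '/age', value: 31 }]
);
// → one conflict, on '/age'
```

Edits to different paths merge cleanly; only the overlapping, disagreeing ones need a resolution strategy.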

Continue reading »

A core tenet of any Operations Team is that you must enable developers to change their code with confidence. For the developer this means they have the flexibility to try new things or to change old, broken code. Unfortunately, however, with every code change comes the risk of breaking production systems, which is something Operations has to manage. A great way to balance these needs is to continuously test new code as close to the point of change as possible by reacting to code commits as they happen.

At &yet the majority of the code that is deployed to production servers is written in NodeJS, so that's the example I'll use. NodeJS uses npm as its package manager, and one feature of npm is the ability to define scripts to be run at various stages of the package's lifecycle. To make full use of this feature we need a way to run the defined scripts at the point a developer is committing code, as that is the best time to validate and test the newly changed code.

Fortunately an npm package exists that will do just that: precommit-hook. It installs a pre-commit hook into your project's .git metadata, so that just before git performs a commit it runs a defined set of scripts (by default: lint, validate, and test). We can use this to run any check we need, but for now I will describe how to run a script that scans the project's dependencies for known security vulnerabilities using retire.js.

First we need to add retire.js to the project's package.json and add a reference to it so the pre-commit hook will run it:
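A sketch of what that might look like (the version numbers and the lint/test commands are illustrative; precommit-hook reads the "pre-commit" array to decide which scripts to run):

```json
{
  "scripts": {
    "lint": "jshint .",
    "validate": "retire -n",
    "test": "lab"
  },
  "pre-commit": ["lint", "validate", "test"],
  "devDependencies": {
    "jshint": "^2.5.0",
    "lab": "^5.0.0",
    "precommit-hook": "^3.0.0",
    "retire": "^1.0.0"
  }
}
```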

Continue reading »

Microsoft recently announced they will support Object RTC and now everyone is talking about ORTC and how they will support it.

What is this all about and what is ORTC anyway?

In essence, ORTC is an alternative API for WebRTC. It is object-oriented and protects developers from all that ugly Session Description Protocol (SDP) madness. Some people call it WebRTC 1.1, or maybe WebRTC 2.0.

So... will &yet (and Otalk) (and Talky) support ORTC? Of course!

Continue reading »

The most important job of a leader is to listen and listen well. What sets a great leader apart is her willingness to give of her time and energy. And although listening requires a large amount of both time and energy, it makes people feel valued and needed, a goal which all leaders should aspire to.

Leaders need to have–or learn to develop–the humility it takes to truly listen. Not just to hear what people are saying directly, but to be an investigative listener: to pay attention and discover things in the organization that may be unseen, whether positive or negative. And if you do uncover a dark, hard problem, fear not! Chances are good that the solution is shrouded in wisdom which will serve you well in the future.

What other good can you do by listening? Well, the list is probably way longer than this, but here are a few gems I've discovered along the way. I'm sure you'll find your own treasure, too.

  • You create and build trust
  • You learn to trust the team
  • You build confidence
  • You discover and support people's passion for their own personal growth
  • You inspire collaboration
  • You empower the PEOPLE
Continue reading »

Today's entry: Building the Mixins!

This post is the second in a three-part series that I started with a little bit of background last week.

Building the optimistic concurrency mixin

Following the Human way, I made the optimistic concurrency mixin a CommonJS module and published it with npm. It's called ampersand-optimistic-sync, but we'll call it AOS here. AOS replaces the sync method on your Backbone or Ampersand models. Since sync is the core Ajax method, extending there allows AOS to read and write the versioning headers it needs.
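The version-header dance at the heart of this looks roughly like the following sketch (illustrative, not AOS's exact code; AOS hooks the real sync/Ajax layer, and the transport here is a stand-in): remember the version a read returned, and send it back on writes so the server can reject stale updates.

```javascript
// Sketch: a sync wrapper that tracks the ETag from reads and sends it
// as If-Match on updates, so the server can answer 412 Precondition
// Failed when someone else changed the record in the meantime.
function makeSync(transport) {
  let version = null;
  return {
    read(url) {
      const res = transport.get(url);
      version = res.headers.etag; // remember the version we last saw
      return res.body;
    },
    update(url, body) {
      return transport.put(url, body, { 'If-Match': version });
    },
  };
}
```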

Continue reading »

"If you don't monitor it, you can't manage it."

In the last installment of our Tao of Ops series I pointed out the above maxim as being a variation on the business management saying, "You can't manage what you can't measure" (often attributed to Peter Drucker). This has become one of the core principles I try to keep in mind while walking the Operations path.

Keeping this in mind, today I want to tackle testing the TLS certificates that can be found everywhere in any shop doing web-related production work - something that needs to be done, and that can be rather involved to do properly.
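The core of such a check can be kept separate from the network plumbing, which makes it easy to test. A sketch (in a real monitor you'd obtain `valid_to` from the certificate returned by Node's `tls.connect()` via `socket.getPeerCertificate()`; the threshold is illustrative):

```javascript
// Sketch: classify a certificate by how close it is to expiring.
function daysUntilExpiry(validTo, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((new Date(validTo) - now) / msPerDay);
}

function certStatus(validTo, warnDays = 30, now = new Date()) {
  const days = daysUntilExpiry(validTo, now);
  if (days < 0) return 'expired';
  if (days <= warnDays) return 'warning';
  return 'ok';
}
```

Wire the 'warning' state into your alerting and you'll hear about an expiring certificate a month before your users do.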

According to Wikipedia, TLS certificates are:

Continue reading »

Maintaining code quality is hard. That's why a little over two years ago, I created precommit-hook to help automate things like linting and running tests.

Over the years, precommit-hook has evolved, but it's always had the same basic functionality. Run a simple npm i --save-dev precommit-hook and the module takes care of the rest. It's worked great for a long time, and been adopted by quite a few people. So then what's the problem?

Customization. If you want to change the behavior of the hook, you have to either fork it, make the changes yourself, and publish a new module, or make manual changes to your project's package.json. For a module whose goal is making things as simple as possible, that's kind of a bummer.

Enter git-validate. The idea behind git-validate isn't to automatically do all the things for you, but rather to provide a very simple framework for creating your own modules that do as much or as little as you want them to.

Continue reading »

While working on a line of business application for a client recently, I was asked to research and implement two different approaches towards improving data updating efficiency and consistency.

The first is JSON Patch. The idea here is to reduce data transfer by only sending the operations needed to make the remote resource identical to the local one. Even though both resources are represented as JSON objects, applying patches means we don't have to replace the entire entity on every update. This also reduces the risk of accidental changes to data that stays the same.
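To show the shape of the idea, here's a toy patch applier covering only "replace" on top-level keys (real libraries such as fast-json-patch implement the full RFC 6902 operation set and nested paths):

```javascript
// Sketch: apply a (very restricted) JSON Patch to a document without
// replacing the whole entity. Only top-level "replace" ops supported.
function applyPatch(doc, patch) {
  const out = { ...doc };
  for (const { op, path, value } of patch) {
    if (op !== 'replace') throw new Error('toy applier: replace only');
    out[path.slice(1)] = value; // '/age' -> 'age'
  }
  return out;
}

applyPatch({ name: 'Franco', age: 30 }, [
  { op: 'replace', path: '/age', value: 31 },
]);
// → { name: 'Franco', age: 31 }
```

The wire payload is just the ops array, which is usually far smaller than the document it updates, and untouched fields can't be clobbered by accident.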

The second is optimistic concurrency control. This approach allows multiple users to open a data record for editing at the same time, and determines whether there are any conflicts at save time.

Our working hypothesis was that combining these two approaches would enable us to build a more bandwidth-efficient, data-consistent application while also providing a more pleasant user experience.

Continue reading »

"If you don't monitor it, you can't manage it."

That's a variation on the business management saying "you can't manage what you can't measure" (often attributed to Peter Drucker). The saying might not always apply to business, but it definitely applies to Operations.

There are a lot of tools you can bring into your organization to help with monitoring your infrastructure, but they usually look at things only from the "inside perspective" of your own systems. To truly know if the path your Operations team is walking is sane, you need to also check on things from the user's point of view. Otherwise you are missing the best chance to fix something before it becomes a problem that leads your customers to take their business elsewhere.

Active testing of your systems from the outside is crucial, and it is easy enough to set up. For each internal system you are monitoring, ask yourself how you would create a query or request from the outside that exercises that internal system.
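One way to organize this is to describe each outside-in check as data and let a small runner report failures. A sketch (the probe names and checks are placeholders; real checks would make HTTP requests, send test emails, and so on):

```javascript
// Sketch: run a set of outside-in probes, one per internal system,
// and report which ones failed.
function runProbes(probes) {
  return probes
    .filter((probe) => !probe.check())
    .map((probe) => probe.name);
}

const failing = runProbes([
  { name: 'web', check: () => true },            // e.g. GET / returned 200
  { name: 'signup-email', check: () => false },  // e.g. test signup bounced
]);
// failing → ['signup-email']
```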

Continue reading »

When you think about a software project, and specifically the people that are involved with it, you probably think about developers. After all, the code itself is what makes up the project. I submit to you that we have a perception problem in the software world. In fact I think we have it backwards. The software is the least important thing in your software project.

Currently, code commits get all the attention and metrics. They are typically what a project will use to measure progress, complexity, and really anything that is considered meaningful to the work as a whole. The fact is, though, that they're the last thing anyone who uses your software actually sees. It doesn't matter if you're writing client-side code or a backend helper library: the first thing anyone will likely see, and the thing they will interact with the most, is the documentation.

In today's software ecosystem, code is cheap. Problems are relatively easily solved. What language you choose and what approach you take can often be a matter of personal preference and style. There are of course exceptions, but these are far from the vast majority of situations. What really matters is how quickly the code you write can be useful to anyone besides yourself. Chances are you are not writing code in a vacuum (if you are, hello, you have a weird setup and should probably join us in the 21st century, it's nice here). Think about the last time you used any software at all. Did you just intuitively know how to run it? No, you had to read the documentation. It's strange, then, that the first thing anyone sees has ended up so low on our priority list.

Visibility is important

Continue reading »

The Green Notebook

I stepped through the door as an official yeti almost exactly two months ago. I’ve changed jobs before, but somehow this time it felt a bit different. Sort of a cross between moving to a country where you don’t know the language, and walking into the cafeteria on your first day of 7th grade. While at the store purchasing a handful of requisite office items, I felt compelled to toss a little green notebook in the basket. I’m not sure why, but it just seemed necessary.

Socially Acceptable Security Blanket

My first few weeks on the job, I had a ton of conversations with a ton of other people. Yetis, by nature, tend to constantly burble ideas, and I didn’t want to miss any of it. Having made the transition from designer to front-end developer, and now to back-end developer, I was tasked with sponging new languages, terminologies, ways of thinking, processes, programs, and people. As a way of coping, I just cracked open that green notebook and started scribbling. I talked to people and scribbled, I worked on projects and scribbled, I read articles and scribbled…I scribbled myself off cliffs of anxiety, and I scribbled my way out of mental blocks. There were even times when I just clung to it and fiddled with the ribbon bookmark and elastic closure strap just to give my fidgety hands something to do while I made sense of what I was feeling. You could say that it was akin to a socially acceptable security blanket.

Continue reading »

For those of you playing along at home, you may have heard me mention the novel we here at &yet worked on this year, Something Greater than Artifice, like a jillion times. For those of you who haven't: Hello! Welcome to the Internet. Please enjoy the heady melange of cultural experiences but for God's sake don't read the bottom half of anything.

Anyway. Something Greater than Artifice (or SGtA for you TL;DR folks). If you don't know the story behind it I'm sure it's floating around somewhere (subtle hint: that link takes you to the RealtimeConf site, which is both cool and awesome). Main thing is that I wrote a pretty good book and a bunch of cool people helped me turn it into a pretty very good book. Because without Amy illustrating and Jenn editing and Adam occasionally saying "that part with the thing doesn't make sense" this thing would be not as pretty very good as it is.

Okay, fine. More than pretty very good. Doubleplus pretty very good. Because–if you'll abide a moment of hubris–the book was actually selected as the Kirkus Reviews Indie Book of the Month Selection (caps theirs). Which got us to thinking that maybe, just maybe, we could take this thing which started as a conversation between Adam, Amy, and me and turn it into something greater.

So we decided to get the dang thing published.

Continue reading »

Way back in 2008, my friend Jack Moffitt wrote a blog post entitled XMPP Is Better With BOSH. In those ancient days of long polling, BOSH was the state of the art for sending XMPP traffic over an HTTP transport because we needed something like the Comet model for bidirectional communication over HTTP. Even at the time, we knew it was an ugly and temporary hack to send multiple HTTP request-response pairs via long polling, but we didn't have anything better.

Since then, bidirectional communication between web browser and web service has come a long way, thanks to WebSocket. Nowadays, you start with an HTTP connection but use the HTTP Upgrade mechanism to bootstrap directly into a long-lived bidirectional session (for this reason, WebSocket has been likened to "TCP for the web"). WebSocket has its warts too, but compared to BOSH it significantly reduces the overhead of maintaining HTTP-based connections for XMPP. Even better, it has become a truly standard foundation for building real-time web apps, with support in all the modern languages and frameworks for web development.

The benefits of communicating XMPP over WebSocket encompass and extend the ones that Jack enumerated years ago for BOSH:

  • Greater resilience in the face of unreliable networks — here WebSocket does pretty much what BOSH and other "Comet" approaches did 10 years ago, but in a more network-friendly way by removing the need for long polling.
  • The ability to recover from data loss — the BOSH model of recovering from network outages and communication glitches was generalized with the XMPP stream management extension; this can be used with XMPP over WebSocket, too.
  • Compression for free — well, it turns out that the free compression we got by sending XMPP over HTTP wasn't so free after all (cf. the CRIME and BREACH attacks), but there's a native compression scheme for WebSocket which so far appears to avoid the security problems that emerged with application-layer compression in HTTP.
  • Firewall friendliness — in this case WebSocket isn't quite as network-agnostic as BOSH, since it's known that some mobile networks especially prevent WebSocket from working well (usually because they don't handle the HTTP Upgrade mechanism very well). Hopefully that will improve over time, but in the meantime we can always fall back to BOSH if needed.
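The framing itself is pleasantly simple: instead of one long XML stream, each WebSocket message carries a complete element, and the session begins with an open frame on a socket negotiated with the 'xmpp' subprotocol. A sketch of that first frame (per the WebSocket binding for XMPP, RFC 7395; the URL and domain below are placeholders):

```javascript
// Sketch: the opening frame of an XMPP-over-WebSocket session.
function openFrame(domain) {
  return '<open xmlns="urn:ietf:params:xml:ns:xmpp-framing" ' +
         `to="${domain}" version="1.0"/>`;
}

// In a browser this would be wired up roughly like:
//   const ws = new WebSocket('wss://example.com/xmpp-websocket', 'xmpp');
//   ws.onopen = () => ws.send(openFrame('example.com'));
openFrame('example.com');
```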
Continue reading »

There’s a bit of a kerfuffle right now in Angular.js land because, lo and behold, the 2.0 release contains drastic differences and there isn’t really an upgrade path.

If you want to upgrade you'll likely need to completely re-write your app!

The structural updates they're proposing all sound like good improvements, but if you built a large app on 1.x and want to upgrade it doesn't really seem like you'll be able to use much of your code.

Losing your religion

Continue reading »

Our general approach to consulting at &yet goes something like this: If we have a knack for something and we think it can help make you better at what you do, help your team eliminate risk, or move more confidently down the right path, we should do it. Starting today, we're offering 3 new WebRTC consulting packages.

In addition to building products and open source software, our team has offered consulting services as long as we've been a company. But now we've started focusing in on how to better package the skills and expertise our community (that's you!) needs.

After reaching out to and talking with teams actively working with WebRTC, we're hearing a lot of the same questions that need answering. Questions like:

  • What open source tools are out there and where should we go to get started with them?
  • How do we configure TURN/STUN servers so our WebRTC service can work consistently across firewalls?
  • What is the best way to go about implementing WebRTC on iOS?
  • How can we ensure we're providing a secure and private service? (Including HIPAA-compliance)
  • Is it possible to build and scale a WebRTC service on our own infrastructure?
  • How can we scale beyond a couple people in a group conversation?
  • How could we add chat or whiteboarding alongside our video solution?
  • How would we create a massive broadcast live video service?
Continue reading »

For the past year and a half, it's been our pleasure and privilege to serve CAA, an agency representing many of the most successful professionals in film, television, music, sports, and theater.

Glenn Scott leads the team there. Over the past few years, they've transitioned their IT to building custom applications in Node. We're proud to say we've been able to partner with Glenn's great team at CAA, playing a key role in their work during that time.

Recently, Glenn gave a nice presentation as part of Joyent's Node on the Road series. In it, he described the way CAA builds applications.

Glenn talks about the challenge of maintenance in traditional IT and how building Node and JS web apps makes that much less painful.

Continue reading »

Two of our core values on the &yet team are curiosity and generosity. That's why you'll so often find my yeti colleagues at the forefront of various open-source projects, and also sharing their knowledge at technology and design conferences around the world.

An outstanding example is the work that Philip Roberts has done to understand how JavaScript really works within the browser, and to explain what he has discovered in a talk entitled "What the Heck is the Event Loop, Anyway?" (delivered at both ScotlandJS and JSConf EU in recent months).

If you'd like to know more about the inner workings of JavaScript, I highly recommend that you spend 30 minutes watching this video - it is fascinating and educational and entertaining all at the same time. (Because another yeti value is humility, you won't find Philip boasting about this talk, but I have no such reservations because it is seriously great stuff.)

Continue reading »

&yet has long been a wandering band of souls—like a mix of the A-Team and the Island of Misfit Toys from Rudolph the Red-Nosed Reindeer.

Over five years, one constant for &yet has been realtime. We've worked our way through many technologies—some ancient, some nascent, and many of our own. We've stayed focused on the users of those technologies—end users and developers.

Our path forward has become clearer and more focused than it's ever been. Some of the terrific people we've added to our team this last year have had tremendous influence on honing our focus.

We know the type of company we aspire to be: people first, and always determined to make things better for humans on all sides of our work.

Continue reading »

A web application is not the same as the service it uses, even if you wrote them both. If your service has an API, you should not make assumptions about how the client application is going to use the API or the data.

A successful API will likely have more than one client written for it, and the authors of those clients will have very different ideas about how to use the data. Maybe you'll consume the API internally for other uses as well. You can't predict the future, so part of separating your API concerns from your clients should be feature detection.

For a realtime application, feature detection is a great way to manage client subscriptions to data.

As I discussed a few weeks ago, for realtime apps it's better to send hints, not data. When a client deliberately subscribes to a data channel for updates on changes, that is an explicit subscription. By contrast, an implicit subscription occurs when a client advertises the features and data types it is capable of dealing with, and the server automatically subscribes it to the relevant data channels.
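As a sketch of how an implicit subscription might work (the channel names and feature names here are illustrative, not from any particular &yet library), the server can inspect the features a client advertises and subscribe it to the matching data channels automatically:

```javascript
// Map advertised client features to the data channels they imply.
var channelsByFeature = {
  chat: ['messages'],
  presence: ['roster-updates'],
  fileSharing: ['file-offers']
};

// Given the features a client advertises, return the channels the
// server should implicitly subscribe it to (no duplicates).
function implicitSubscriptions(advertisedFeatures) {
  var channels = [];
  advertisedFeatures.forEach(function (feature) {
    (channelsByFeature[feature] || []).forEach(function (channel) {
      if (channels.indexOf(channel) === -1) channels.push(channel);
    });
  });
  return channels;
}

// A client that advertises chat and presence support gets subscribed
// to both sets of channels without ever asking for them explicitly.
console.log(implicitSubscriptions(['chat', 'presence']));
// → [ 'messages', 'roster-updates' ]
```

The client never issues a subscribe request; it simply declares what it can handle, and the server does the rest.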

Continue reading »

Every Operations Team needs to maintain the system packages installed on their servers. There are various paths toward that goal, with one extreme being to track the packages manually - a tedious, soul-crushing endeavor even if you automate it using Puppet, Fabric, Chef, or (our favorite at &yet) Ansible.

Why? Because even when you automate, you have to be aware of what packages need to be updated. Automating "apt-get upgrade" will work, yes - but you won't discover any regression issues (and related surprises) until the next time you cycle an app or service.

A more balanced approach is to automate the tedious aspects and let the Operations Team handle the parts that require a purposeful decision. How the upgrade step is performed, via automation or manually, is beyond the scope of this brief post. Instead, I'll focus on the first step: how to gather data that can be used to make the required decisions.

Gathering Data
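As a hedged sketch of what this first step can look like in Node: run a simulated upgrade ("apt-get -s upgrade") and parse its output to see which packages would change, without touching the system. The line format assumed here ("Inst <pkg> [old] (new ...)") is based on apt's simulated-install output and may vary between releases.

```javascript
// Parse the "Inst ..." lines from a simulated apt-get upgrade into
// objects describing each pending package change.
function parseSimulatedUpgrade(output) {
  return output.split('\n')
    .filter(function (line) { return line.indexOf('Inst ') === 0; })
    .map(function (line) {
      var parts = line.split(' ');
      return {
        name: parts[1],
        current: parts[2] ? parts[2].replace(/[\[\]]/g, '') : null,
        candidate: parts[3] ? parts[3].replace(/[()]/g, '') : null
      };
    });
}

// Sample output from "apt-get -s upgrade" (illustrative):
var sample = 'Inst openssl [1.0.1f-1] (1.0.1g-1 Ubuntu:14.04)\n' +
             'Conf openssl (1.0.1g-1 Ubuntu:14.04)';

console.log(parseSimulatedUpgrade(sample));
// → [ { name: 'openssl', current: '1.0.1f-1', candidate: '1.0.1g-1' } ]
```

With data in this shape, the Operations Team can review what would change and make a purposeful decision before anything is actually upgraded.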

Continue reading »

The NoSQL "movement" in database design has been motivated by many factors (such as simplicity and scalability) and has resulted in many more choices among storage and retrieval solutions. Instead of a one-size-fits-all approach, you can choose a database that is optimized for your specific needs.

So what are your needs, and how do you choose the right database for the job?

If you don't need a database cluster, and can live with a single node and snapshot backups, then you can pretty much do whatever you want.

The CAP Theorem

Continue reading »

As you probably know, we run Talky, a free videochat service powered by WebRTC. Since WebRTC is still evolving quickly, we add new features to Talky roughly every two weeks. So far, this has required manual testing in Chrome, Opera, and Firefox each time to verify that the deployed changes are working. Since the goal of any deploy is to avoid breaking the system, each time we make a change we run it through a post-commit set of unit tests, as well as an integration test using a browser test-runner script as outlined in this post.

All that manual testing is pretty old-fashioned, though. Since WebRTC is supposed to be for the web, we decided it was time to apply modern web testing methods to the problem.

The trigger was reading two blog posts published recently by Patrik Höglund of the Google WebRTC team, describing how they do automated interop testing between Chrome and Firefox. This motivated me to spend some time on the post-deploy process of testing we do for Talky. The result is now available on GitHub.

Let's review how Talky works and what we need to test. Basically we need to verify that two browsers can connect to our signaling service and establish a direct connection. The test consists of three simple steps:

Continue reading »

More and more application developers have come to rely on platform-as-a-service providers for building and scaling software.

WebRTC's complexity makes it ripe for this kind of approach, so it's no surprise that so many early WebRTC companies have been platform service providers. Unfortunately for customers, the nascent Rent-Your-WebRTC-Solution market has proven pretty unstable.

News came yesterday that yet another provider of WebRTC hosted services—in this case, Requestec—has been acquired. We've seen this movie before with Snapchat's acquisition of AddLive and we'll probably see it again, maybe multiple times.

At &yet, we've been working steadily at creating open source software and approaches to infrastructure to help our clients avoid the volatile WebRTC rental market.

Continue reading »

At &yet, we've always specialized in realtime web apps. We've implemented them in a wide variety of ways and we've consulted with numerous customers to help them understand the challenges of building such apps. A key difference from traditional web apps is that realtime apps need a way of updating themselves without direct intervention from the user.

Growing Pains

What data you send, and how much you send, is completely contextual to the application itself. Your choice of transport (polling, long-polling, WebSockets, server-sent events, etc.) is inconsequential as far as updating the page is concerned. App experience and performance are all about the data.

In our earliest experiments, we tightly coupled client logic with the updates, allowing the server side to orchestrate the application entirely. This seems rather "cool," but it ends up being a pain due to lack of separation of concerns. Having a tightly-coupled relationship between client and server means a lot of back and forth, nearly infinite amounts of pain (especially with flaky connections), and too much application orchestration logic.
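A minimal sketch of the hints-not-data idea mentioned above (all names here are illustrative): instead of pushing full objects over the socket, the server pushes a tiny hint, and the client decides whether and when to fetch the actual data from the API.

```javascript
// What a tightly-coupled server might push over the socket:
// { type: 'chapter', id: 42, title: '...', body: '<kilobytes of data>' }

// What a hint-based server pushes instead: just enough to act on.
var hint = { type: 'chapter', id: 42, action: 'updated' };

// The client owns the logic for reacting to hints, keeping the
// concerns of client and server cleanly separated.
function handleHint(hint, api) {
  if (hint.type === 'chapter' && hint.action === 'updated') {
    return api.fetch('/chapters/' + hint.id); // fetch only if we care
  }
  return null; // ignore hints for data this client doesn't display
}

// Stub API client, for illustration only:
var api = { fetch: function (url) { return 'GET ' + url; } };

console.log(handleHint(hint, api)); // → GET /chapters/42
```

Because the server only ever says "something changed," flaky connections cost a missed hint rather than corrupted application state, and the server needs no knowledge of client orchestration.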

Continue reading »

On the &yet Ops Team, we use Docker for various purposes on some of the servers we run. We also make extensive use of iptables so that we have consistent firewall rules to protect those servers against attacks.

Unfortunately, we recently ran into an issue that prevented us from building Dockerfiles from behind an iptables firewall.

Here's a bit more information about the problem, and how we solved it.

The Problem

Continue reading »

One of the best tools to use every day for locking down your servers is iptables. (You do lock down your servers, right? ;-)

Not using iptables is akin to putting fancy locks on a plywood door: it may look secure, but you can't count on it keeping anyone from breaking through.

To this end I use a small set of bash scripts that ensure I always have a baseline iptables configuration, and that let me add or remove rules quickly.

Let me outline what they are before we get to the fiddly bits...

Continue reading »

Building collaboration apps has always been too hard, especially when audio and video are involved. The promise of WebRTC - building audio and video support directly into the browser - has been that it would make collaboration as easy and open as the web itself. Unfortunately, that hasn't quite happened yet.

A big part of the problem is that we lack the kind of open platforms and frameworks that have made the web such a success - things like, say, express and hapi in the Node.js community. Instead of open standards and open source, web developers coming to the collaboration space find themselves faced with a plethora of proprietary platforms tied to specific vendors.

Don't get us wrong: some of those platforms are excellent. However, if you're dependent on a particular service provider for the platform that powers your entire real-time collaboration app then you've taken on some significant risks. Vendor lock-in and high switching costs are two that come quickly to mind. Not to mention the possibility that the startup service you depend on will go out of business, get bought by one of your competitors, or be taken off the market by an acquiring company.

That's not a recipe for happy application developers.

Continue reading »

About a year ago, our friends at TokBox published a blog post entitled WebRTC and Signaling: What Two Years Has Taught Us. It's a great overview of their experience with technologies like SIP and XMPP for stitching together WebRTC endpoints. And we couldn't agree more with their observation that trying to settle on one signaling method for WebRTC would have been even more contentious than the endless video codec debates. However, our experience with XMPP differs enough from theirs that we thought it would be good to set our thoughts down in writing.

First, we're always a bit suspicious when someone says they abandoned XMPP because it didn't scale for them. As a general rule, protocols don't scale - implementations do. We know of several XMPP server implementations that can route, distribute, and deliver tens of thousands of messages per second without breaking a sweat. After all, that's what a messaging server is designed to do. Sure, some server implementations don't scale up that high; the answer is to use a better server. We're partial to Prosody, but Tigase and a few others are well worth researching, too.

Second, pure messaging is the easy part. The hard part is defining the payloads for those messages to build out a wide variety of features and functionality. What we like best about the XMPP community is that over the last 15 years they have defined just about everything you need for a modern collaboration system: presence, contact lists, 1:1 messaging, group messaging, pubsub notifications, service discovery, device capabilities, strong authentication and encryption, audio/video session negotiation, file transfer, you name it. Why put a lot of work into recreating those wheels when they're ready-made for you?

An added benefit of having so many building blocks is that it's straightforward to put them all together in productive ways. For example, XMPP includes extensions for both multi-user chat ("MUC") and multimedia session management (Jingle). If we need multiparty signaling for video conferencing, we can easily combine the two to create what we need. Plus we get a number of advanced solutions for free this way, since MUC includes in-room presence along with a helpful authorization model and Jingle supports helpful features like renegotiation and file transfer. Not to mention that the ability to communicate device capabilities in XMPP enables us to avoid monstrosities like SDP bundling.

Continue reading »

As part of our training events, I give a short talk about JS frameworks. I've shied away from posting many of my opinions about frameworks online because it tends to stir the pot and hurt people's feelings, and unlike talking face to face, there's no really great, bi-directional channel for rebuttals.

But I've been told the talk was very useful and provided a nice, quick overview of some of the most popular JS tools and frameworks for building single-page apps. So I decided to flesh it out and publish it as A Thing™. Please remember that you're just reading opinions; I'm not telling you what to do, and you should do what works for you and your team. Feel free to disagree with me on Twitter or, even better, write a post explaining your position.

Angular.js

pros

Continue reading »

In an effort to build a more inclusive community around the events we're a part of, we'd like to announce our very first (but certainly not last) Human JavaScript Training Scholarship.

We understand that very few people, both in tech and in the world, have access to the resources needed to level-up in their careers. This is especially true of marginalized groups, who are consistently underrepresented and often even pushed out of our industry without the opportunity to thrive here.

We also understand that there are serious barriers to entry in our industry that keep people who are marginalized by race and/or gender from entering and actively participating in our field.

With this in mind, we'll be covering one person's trip and tuition to participate in Human JavaScript: LIVE!, our two-day, intensive JavaScript workshop for JS developers who are looking to level-up in building clientside, single-page web apps. This workshop focuses on writing modular and maintainable code, while emphasizing the importance of code collaboration.

Continue reading »

Introducing Ampersand.js: a highly modular, loosely coupled, non-frameworky framework for building advanced JavaScript apps.

Why!?!

We <3 Backbone.js at &yet. It’s brilliantly simple code and it solves many common problems in developing clientside applications.

But we missed the focused simplicity of tiny modules in node-land. We wanted something similar in style and philosophy, but that fully embraced tiny modules, npm, and browserify.

Continue reading »

JS for Teams: It's ALIIIIVE! is a two-day training adventure happening July 24 & 25 focused on teaching teams how to build advanced single-page apps in a highly maintainable way. Tickets on sale today!

To celebrate, we're offering $200 off per ticket for the next 5 tickets – use the discount code AMPERSAND at check-out.

The tickets we set aside for our email subscribers already sold out, so don't miss your chance. Seats are extremely limited.

Enroll now

Continue reading »

Is there an inherent business risk in letting your top JavaScript developers do their best work?

(What a painful thought!)

It's one thing to build with the latest tools and techniques, but what happens when the developers who led the way on a given app move on to greener pastures?

JavaScript apps are notorious for being largely written by one "rockstar," ending up dominated by the most experienced JS dev, the most charismatic person, or at least by the fastest typer.

Continue reading »

A few months ago, WebRTC agitator and yeti-friend Chris Koehnke wrote an excellent blog post explaining that browser-based videochat won't work 100% of the time without a little help from something called a "TURN Server". As he put it:

If you're going to launch a WebRTC powered service for financial 
gain, then you need to have done everything within your power to
ensure it works reliably across as many cases as possible.

Chris was satisfied when a few simple tests worked and stopped after that. Well, he skipped the next step. But that's reasonable because he was probably bored already (does anyone get excited about TURN servers?) and he doesn't run a WebRTC powered service himself.

The next step is looking for cases where things did not work and figuring out what we can do about it. But hey, we run a WebRTC service called Talky, and connection failures are frustrating, so we decided to dig a little deeper.
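As a sketch of where TURN fits in (the server URLs and credentials below are placeholders, not real infrastructure): the browser's RTCPeerConnection takes a list of ICE servers, and a TURN entry gives it a relay of last resort when a direct peer-to-peer path can't be established.

```javascript
// ICE server configuration for a WebRTC peer connection.
var config = {
  iceServers: [
    // STUN helps peers discover their public addresses (cheap; tried first).
    { urls: 'stun:stun.example.com:3478' },
    // TURN relays media when nothing else works (reliable, but it
    // costs you server bandwidth, so credentials gate access).
    {
      urls: 'turn:turn.example.com:3478',
      username: 'talky-user',
      credential: 'secret'
    }
  ]
};

// In the browser, this config would be passed straight to the connection:
// var pc = new RTCPeerConnection(config);
```

With no TURN entry, calls between peers stuck behind restrictive firewalls simply fail; with one, they fall back to the relay and connect.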

Continue reading »

Our partners in the opposite of crime, ^Lift Security, are proud to welcome the newest member of their team: builder and breaker of things, Tom Steele.

Besides having the coolest name ever, Tom brings his knowledge of varied languages, passion for open source work, and a strong desire to help empower developer communities through security education and collaboration.

His experience creating the open source project Lair, as well as his early enthusiasm for and contributions to the Node Security Project, are just two of the many reasons we’re glad he joined the ^Lift team.

We’re very excited to have Tom onboard, and for all of the awesome things he’s going to do with the team to help push ^Lift Security to the next level.

Continue reading »

Last October, Mike Speegle introduced us to the world of the Tech Republic and the narrative behind RealtimeConf 2013 in his novel, “Something Greater than Artifice.” The book is now available in its entirety for free download at RealtimeConf.com.

Download your copy in Kindle, ePub, or PDF format before Monday, May 5, after which it will only be available for purchase on Amazon.

If you haven’t explored the world of “Something Greater than Artifice,” here’s what people are saying:

“[Something Greater than Artifice] examines in a new way the implications of our use of technology, while still remaining hopeful–something that is often forgotten in futuristic novels.”

Continue reading »

the cover of the novel

We are thrilled to release "Something Greater than Artifice", a tremendous work created by our friend and colleague, Mike Speegle. It's now available in all kinds of formats. (The beautiful cover was designed by Amy Lynn Taylor.)

Mike put months of effort into the first half of the novel, which was released in serial form in advance of RealtimeConf, where the story was concluded as a live stage play.

Immediately after RealtimeConf, Mike went to work concluding the novel. The ending has so much more than was visible in the play at the conference. I highly recommend reading it.

Continue reading »

RealtimeConf may be over, but now the experience can live on somewhere other than in the hearts and minds of the people who were there: RealtimeConf.com.

Over the past few months, our team has collected memories from the ambitious event, recorded the original music featured there, and discovered the fates of Ros and Gregor in the phenomenal conclusion to Mike Speegle’s novel, Something Greater than Artifice.

We’ve even started planning our next epic adventure, JS for Teams. Sign up here to find out about pre-registration.

It was a long road to the Tech Republic, so we hope you enjoy the trip down memory lane, and that you’ll join us on our future treks around the universe.

Continue reading »

node security training logo

In just a few weeks, on April 30th, the ^lift security team will host their first secure development training on building secure Node.js web applications in Portland, Oregon.

The ^lift team has designed this training to help you understand the security challenges you will face when developing Node.js web applications and help you build habits that turn security from a worry or an annoyance, into a comfortable part of writing your code from the very beginning.

Seats at this first class are extremely limited, so grab your spot with the team that’s been trusted to secure tools you use every day, like npm, GitHub, and Ginger, and that leads the Node Security Project. Discounted tickets are also available if you want to bring your dev team (or hack the system and bring a couple of friends; we won’t tell anyone).

Continue reading »

So Heartbleed happened, and if you’re a company or individual with public-facing assets behind anything using OpenSSL, you need to respond to this now.

xkcd comic about heartbleed

The first thing we had to do at &yet was determine what was actually impacted by this disclosure. We had to make a list of which services are public-facing, which services use OpenSSL directly or indirectly, and which services use cryptographically generated keys or tokens. It’s easy to only update your web servers, but really that is just one of many steps.

Here is a list of what you can do to respond to this event.

Continue reading »

Today we’re honored to welcome a few new amazing individuals to the &yet team.

Here at &yet, we strongly believe that each person who joins our team should fundamentally improve what it’s like to work here. We also count on our new teammates to help lead us toward being the type of company we want to see ourselves become. So you can bet that we take extra care and consideration when adding new folks to the team.

Here’s a tiny (but brilliant) glimpse of the direction we’re heading, represented by the newest additions to &yet team:

David Dias

Continue reading »

Are you frustrated over how much of your JavaScript code is dependent on too few members of your team?

Our team was there too. Over time, we’ve built a set of practices that have helped our team and clients write complex but sane JavaScript apps without depending heavily on one or two people.

Using approaches Henrik Joreteg and &yet introduced in Human JavaScript, after just two days you and your dev team will walk away with a practical, more sensible path to building JS apps. And your code base will look like it was written by one solid JS dev.

Introducing JS for Teams, a clear and simple approach to building complex JS apps—but it’s a bit more interesting than that.

Continue reading »

When I was 17, two things occurred which changed my life forever.

My grandfather passed away and left me a book by John Lomax entitled, Cowboy Songs, and I discovered Pete Seeger’s seminal “American Favorite Ballads” record series produced by Smithsonian Folkways.

American Favorite Ballads, Cowboy Songs

Growing up as a ranch-hand in Silver City, New Mexico, the “real” history of the American cowboy was always important to my grandfather, and Cowboy Songs was one of the only genuinely untainted collections of that oral tradition with lyrical content that wasn’t screened or edited by its publishers to be “safe.”

Continue reading »

As more and more people are enjoying the Internet as part of their everyday lives, so too are they experiencing its negative aspects. One such aspect is that sometimes the web site you are trying to reach is not accessible. While sites can be out of reach for many reasons, recently one of the more obscure causes has moved out of the shadows: the Denial of Service attack, also known as a DoS attack. It also has a bigger sibling, the Distributed Denial of Service attack.

Why these attacks are able to take web sites offline is right there in their name, since they deny you access to a web site. But how they cause web sites to become unavailable varies and quickly gets into more technical aspects of how the Internet works. My goal is to help describe what happens during these attacks and to identify and clarify key aspects of the problem.

First we need to define some terms:

A Web Site -- When you open your browser and type in (or click on) a link, that link tells the browser how to locate and interact with a web site. A link is made up of a number of pieces along with the site address. Other parts include how to talk to the computers that provide that service and also what type of interaction you want with the web site.

Continue reading »

Last week, Eran Hammer came to the &yet office to introduce Hapi 2.0.

Hapi is a very powerful and highly modular web framework created by Eran and his team at Walmart Labs. It currently powers the mobile walmart.com site, as well as some portions of the desktop site. With that kind of traffic, you could definitely say Hapi is battle-tested.

Hapi's quickly becoming a popular framework among Node developers. Since mid-2013, &yet has been using Hapi for all new projects and we've begun porting several old projects to use it, too.

Before he started his presentation, Eran casually mentioned that he planned to at least touch on every feature in Hapi, and boy did he succeed.

Continue reading »

It's an honor to introduce Peter Saint-Andre as a new member of our team and as the CTO of &yet.

Peter has a long history of leadership in Internet standards as an IETF Area Director and Executive Director of the XMPP Standards Foundation, and through his involvement in standardizing technologies like WebSockets and OAuth. He's among a handful of people who've (with quite little fanfare) helped pave the Information Superhighway™.

His experience and involvement with Internet security, distributed systems, and collaboration is a boon to our team as well.

Peter's one of the original members of the Jabber, Inc. team that created the most widely deployed protocol for realtime communication (XMPP). He's given over a decade of deep consideration to the ways people use technology to collaborate and has a personal passion for making that better.

Continue reading »
