&yet Blog

● posted by Lynn Fisher

Articulating our decision making is a huge part of our jobs as designers. Every day we should be asking ourselves “Why did I decide to do it this way?” Our coworkers, clients, and users will be asking the same question, so we may as well be prepared.

Everything we add or leave out is the result of decision making. Sometimes we’re called to explain decisions around entire layouts and other times it’s just the exact shade of grey we chose for a horizontal rule. Big or small, it’s important to understand why we land on the solutions we do.

Below are six common decision patterns I’ve seen in my time as a designer. Note that these aren’t specifically ordered and I’m not suggesting any one is best. It’s important first to recognize our behavior before deciding what makes the most sense for our project which, as always, depends.

Decisions made by other people

On every project there will be plenty of decisions that have already been made for us. Most commonly this shows up as existing branding, usage rules, and style guides. An important decision we make is whether or not to adhere to these guidelines, for better or worse.

Another way this pops up in our work is a client insisting on a certain direction, typeface, or various other treatment. How many times have you heard a designer say “That was the client’s decision” about negative feedback? Funny how we rarely use that one when the reviews are great. The truth is we made the decision that implementing their idea was better than continuing to push our idea or even to walk away from the project entirely. Decisions made by others are our decisions too.

Decisions based on systems

Designers and artists have been studying forms, trying new methods, and establishing systems for centuries. Many designers today use these proven systems to inform their decisions. Typographic scales determine proper size measurements and rhythm, the Golden Ratio influences aesthetic proportions, and the Rule of Thirds helps create balanced and dynamic compositions. I like to call this type of decision leaning on history.

Decisions based on personal history

Another type of history we lean on is our own project history. We sometimes make decisions based on what we’ve personally seen succeed or fail in the past. What worked on a previous project may work swimmingly for this one or horribly for another. As we know, every project has its own set of constraints and complexities.

Bandwagon decisions

Many design decisions we make are influenced by what other people are doing. Copying what one designer is doing can be called plagiarism, but copying what lots of other designers are doing can be called “following industry trends.” There’s safety in numbers and making a bandwagon decision (following the path others have tested and approved) can make a lot of sense and save time. However, bandwagon decisions that reach ubiquity can lead to poor decisions about your specific project in the name of perceived standards. Love it or hate it, the hamburger menu is a great example.

Decisions based on data

“Designed with science!” That’s how it can feel making design decisions based on data. Data gathered through user observation, interviews, and A/B testing can show us what’s working as intended and what isn’t. Decisions about which features to change or eliminate can naturally follow. Sometimes though, the data is poorly collected or it might not be very meaningful. Designing contrary to what the data might suggest is a decision too.

Decisions based on intuition

This type of decision can be hard to explain. It’s when we say something looks or feels “right.” A designer’s intuition is the sum of the observation, training, and practice they’ve accumulated over the length of their career. I suspect this is what people are referring to when they say someone has “a natural eye.” Sometimes there isn’t a specific system or data set or past project to point to. Sometimes the concept is brand new. This is when we make decisions based on intuition.

Decide to decide

Any single decision can be informed by a combination of these approaches and many more not listed here. It’s important to know why we make decisions, but also to not let that knowledge paralyze us. No decision might seem better than a bad one, but at least with a bad decision we’ve learned something.

If you’re a leader of a team, empower your designers (and your entire team, really) to make these hard decisions and to own the consequences. As much as indecision can damage a project or team, so can constantly asking for permission.

What! That was awesome! Oh man, if only there was some way to get updates when more stuff like this comes out. Wait. Wait! I have an idea: sign up for our mailing list! Whew. That was a close one.

● posted by Aaron "Amac" McCall

Eons ago when our story first began, I told you how I needed to make a client app more consistent and efficient by implementing optimistic concurrency and JSON Patch in our model layer.

As I said before, in our app, we combine both of these forces for an efficiency and consistency one-two punch. As I worked through the integration and some other requirements, I realized that a third module that combined the previous two and added some sane defaults for conflict resolution would be really helpful, so I built one. It’s called ampersand-model-optimistic-update-mixin. Say that five times fast, or just call it AMOU (pronounced “ammo”).

What it does

Let’s recall our good buddy Franco from last time, and suppose that his data is edited by two different people working from the same base version:

// The original that both edits are based on
{
    "id": 1,
    "name": "Franco Witherspoon",
    "age": 32,
    "lastModified": "Mon, 10 Nov 2014 14:32:08 GMT",
    "createdBy": 1,
    "car": {"id": 1, "make": "Honda", "model": "CRX", "modelYear": "2006"},
    "pants": [{
        "id": 1, "manufacturer": "Levis", "style": "501",
        "size": "32", "color": "Indigo"
    },
    {
        "id": 2, "manufacturer": "Bonobos", "style": "Washed Chino",
        "size": "32", "color": "Jet Blue"
    },
    {
        "id": 3, "manufacturer": "IZOD", "style": "Cotton Lounge",
        "size": "32", "color": "Navy"   
    }]
}

// Edit #1 (the one that gets saved while #2 is still editing)
[
    {op: "replace", path: "/name", value: "Francis Withings"},
    {op: "replace", path: "/car/model", value: "CRX SiR"},
    {op: "add", path: "/pants/-", value: {
        manufacturer: "Alfani", style: "RED Slim-fit",
        size: "32", color: "Grey Sharkskin"
    }}
]

// Edit #2
[
    {op: "replace", path: "/name", value: "Frank Withers"},
    {op: "replace", path: "/car/model", value: "CRX SiR"},
    {op: "remove", path: "/pants/2"},
    {op: "add", path: "/pants/-", value: {
        manufacturer: "Joe Boxer", style: "Fleece Pajama",
        size: "32", color: "Red Plaid"
    }}
]

AMOU combines the features of AOS (ampersand-optimistic-sync) and AMP (ampersand-model-patch-mixin) with its own special conflict-detection-and-resolution sauce. It uses AOS’s version tracking and AMP’s difference tracking to handle all ordinary situations, then breaks out its Einstein-like problem-solving skills when that rare, but deadly, sync:invalid-version event arises. When AMOU receives a sync:invalid-version, it will, by default, detect the differences between the current client and server states and trigger a sync:conflict event with a payload something like this:

person._conflict = {
    conflicts: [{
        client: {op: "replace", path: "/name", value: "Frank Withers"},
        server: {op: "replace", path: "/name", value: "Francis Withings"},
        original: "Franco Witherspoon"
    }],
    serverState: {
        "id": 1,
        "name": "Francis Withings", // <-- conflicting change
        "age": 32,
        "lastModified": "Wed, 12 Nov 2014 04:58:08 GMT",
        "createdBy": 1,
        "car": {
            "id": 1, "make": "Honda", "model": "CRX SiR", // <-- matching change
            "modelYear": "2006"
        },
        "pants": [{
            "id": 1, "manufacturer": "Levis", "style": "501",
            "size": "32", "color": "Indigo"
        },
        {
            "id": 2, "manufacturer": "Bonobos", "style": "Washed Chino",
            "size": "32", "color": "Jet Blue"
        },
        {
            "id": 3, "manufacturer": "IZOD", "style": "Cotton Lounge",
            "size": "32", "color": "Navy"   
        },
        {
            "id": 4, "manufacturer": "Alfani", "style": "RED Slim-fit",
            "size": "32", "color": "Grey Sharkskin"
        }]
    },
    resolved: [],
    unsaved: [
        {op: "replace", path: "/name", value: "Frank Withers", original: "Franco Witherspoon"},
        {op: "remove", path: "/pants/2", original: {
            "id": 3, "manufacturer": "IZOD", "style": "Cotton Lounge",
            "size": "32", "color": "Navy"   
        }},
        {op: "add", path: "/pants/-", value: {
            manufacturer: "Joe Boxer", style: "Fleece Pajama", size: "32",
            color: "Red Plaid"
        }, original: null}
    ]
};

This event payload gives you enough information to programmatically resolve the conflict or present the user with a dialog to allow them to manually resolve the conflict. AMOU doesn’t stop there, though. With one or two configuration tweaks, AMOU can resolve most conflicts for you and your users, making life easier still.
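As a sketch of what consuming this payload might look like (the helper name and resolution policy here are my own, not part of AMOU’s API), the conflicts array pairs the client’s op with the server’s op for the same path, so a resolver just has to pick a side:

```javascript
// Hypothetical helper: given an AMOU-style _conflict payload, choose a
// winning op for each conflict. Illustrative only, not part of AMOU.
function chooseResolutions(conflict, preferServer) {
    return conflict.conflicts.map(function (pair) {
        // Each entry pairs the client's op and the server's op for
        // the same path; pick one according to the policy.
        return preferServer ? pair.server : pair.client;
    });
}

var payload = {
    conflicts: [{
        client: {op: "replace", path: "/name", value: "Frank Withers"},
        server: {op: "replace", path: "/name", value: "Francis Withings"},
        original: "Franco Witherspoon"
    }]
};

console.log(chooseResolutions(payload, true)[0].value);
// => Francis Withings
```

A user-facing dialog would work the same way, except the preferServer flag would come from the user’s choice rather than a business rule.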

How to use it

The default functionality is very simple:

var AMOU = require('ampersand-model-optimistic-update-mixin');
var BaseModel = require('ampersand-model'); // OR require('backbone').Model;

module.exports = AMOU(BaseModel, {
    props: {id: 'number', name: 'string', age: 'number'},
    children: {car: Car},
    collections: {pants: Pants}
});

If you need to adjust the config for AOS, AMP, or AMOU itself, you’ll need to pass in an _optimisticUpdate config object:

var AMOU = require('ampersand-model-optimistic-update-mixin');
var BaseModel = require('ampersand-model'); // OR require('backbone').Model;

module.exports = AMOU(BaseModel, {
    props: {id: 'number', name: 'string', age: 'number'},
    children: {car: Car},
    collections: {pants: Pants},
    _optimisticUpdate: {
        patcher: {/* AMP config */},
        optimistic: {/* AOS config */},
        autoResolve: false,
        JSONPatch: true,
        ignoreProps: [],
        collectionSort: {},
        customCompare: {} // prop/child/collection: function (original, current) {} map where the func returns true when considered equal, false when not, or an array of operations to make original match current
    }
});

Things really get interesting when you set autoResolve to true or 'server'. With autoResolve set to true, AMOU resolves all non-conflicting changes, then triggers a sync:conflict-autoResolved event when none of the differences conflict, or a sync:conflict event when conflicts remain. With autoResolve set to 'server', all conflicts are resolved in favor of the server’s version and the sync:conflict-autoResolved event is triggered.

When autoResolve is set to true, the sync:conflict payload from above would be a little different, because the new pair of Alfani pants would automatically be added to the local pants collection and be recorded in the payload’s resolved array.

person._conflict = {
    conflicts: [/* same as above */],
    serverState: {/* same as above */},
    resolved: [{op: "add", path: "/pants/-", value: {
        manufacturer: "Alfani", style: "RED Slim-fit",
        size: "32", color: "Grey Sharkskin"},
        client: undefined, clientDiscarded: undefined
    }],
    unsaved: [/* same as above */]
}

If your business rules say that the server should always win in case of conflicts, as they did in my project, simply set autoResolve to 'server'. The server’s version will overwrite any conflicting local changes, and a sync:conflict-autoResolved event will fire with this payload:

person._conflict = {
    conflicts: [/* none because we've resolved them in the server's favor */],
    serverState: {/* same as above */},
    resolved: [
        {op: "replace", path: "/name", value: "Francis Withings",
            client: "Frank Withers", clientDiscarded: true},
        {op: "add", path: "/pants/-", value: {
            manufacturer: "Alfani", style: "RED Slim-fit",
            size: "32", color: "Grey Sharkskin"},
            client: undefined, clientDiscarded: undefined}
    ],
    unsaved: [
        {op: "remove", path: "/pants/2", original: {
            "id": 3, "manufacturer": "IZOD", "style": "Cotton Lounge",
            "size": "32", "color": "Navy"   
        }},
        {op: "add", path: "/pants/-", value: {
            manufacturer: "Joe Boxer", style: "Fleece Pajama", size: "32",
            color: "Red Plaid"
        }, original: null}
    ]
}

In addition to autoResolve there are a few other configuration directives that may be helpful to you.

  • JSONPatch: if your server doesn’t support JSON Patch, you can disable that feature and still get the optimistic concurrency and conflict resolution benefits by setting this directive to JSONPatch: false
  • ignoreProps: does your server create, update, and send data that your client-side ignores? Add those props to this directive: ignoreProps: ['createdBy', 'lastModified']
  • collectionSort: do you sort child collections by name, but the server sorts them by id? No problem! Just add collectionSort: {default: 'id'} to your config. You can also set per-collection sorting by setting collectionName: 'property'. This directive also accepts comparator functions that conform to the JavaScript Array.prototype.sort API.
  • customCompare: does your server send you some really wacky data for one of your child models or props that needs a lot of massaging to determine what is different? This directive maps prop, child model, and child collection names to a function that accepts the original and current data and returns true to indicate that they are equivalent, false if not, or an array of operations to make original into current: function (original, current) {}
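For example, a comparator suitable for collectionSort could look like this (a sketch using the pants data from the examples above; the byId name is mine):

```javascript
// A comparator in the Array.prototype.sort style: negative when a
// sorts before b, positive when b sorts before a, zero when equal.
// Here we mirror a server that orders child records by id.
function byId(a, b) {
    return a.id - b.id;
}

var pants = [
    {id: 3, style: "Cotton Lounge"},
    {id: 1, style: "501"},
    {id: 2, style: "Washed Chino"}
];

pants.sort(byId);
console.log(pants.map(function (p) { return p.id; }));
// => [ 1, 2, 3 ]
```

In the config, that would presumably be wired up as collectionSort: {pants: byId}.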

The final tool that AMOU gives developers is the reverseUnsaved method. Taking the payload of either sync:conflict or sync:conflict-autoResolved, it rolls back all unsaved local changes in place. This lets you roll back unsaved changes automatically, if that’s what your business rules dictate, or in response to user action. To do auto-rollback, simply subscribe it to the events in your initialize method: this.listenTo(this, 'sync:conflict sync:conflict-autoResolved', this.reverseUnsaved).

So there you have it: three tools to help you make client-side JavaScript apps that use Ampersand or Backbone more consistent, efficient, AND user-friendly. I hope you have enjoyed reading this series as much as I have enjoyed writing it. Please feel free to give me a shout on Twitter, if you have any questions or just want to tell me how much fun writing JavaScript is. Ciao!

Want to learn even more stuff like this? How about some general goings-on to boot? Then sign up for our email list below!

● posted by Mike "Bear" Taylor

A core tenet of any Operations Team is that you must enable developers to change their code with confidence. For the developer this means they have the flexibility to try new things or to change old, broken code. Unfortunately, however, with every code change comes the risk of breaking production systems, which is something Operations has to manage. A great way to balance these needs is to continuously test new code as close to the point of change as possible by reacting to code commits as they happen.

At &yet the majority of the code that is deployed to production servers is written in NodeJS, so that’s the example I’ll use. NodeJS uses npm as its package manager, and one aspect of npm is its ability to define scripts that are to be run at various stages of the package’s lifetime. To make full use of this feature we need a way to run the defined scripts at the point that a developer is committing code, as that is the best time to validate and test the newly changed code.

Fortunately an npm package exists that will do just that: precommit-hook. It installs the required pre-commit hook into your project’s .git metadata so that, just before git actually performs the commit, it runs your defined set of scripts (by default: lint, validate, and test). We can use this to run any check we need, but for now I will describe how to run a script that scans the project’s dependencies for known security vulnerabilities using retire.js.

First we need to add retire.js to the project’s package.json and add a reference to it so the pre-commit hook will run it:

{
    "name": "example-app",
    "description": "an example",
    "version": "1.0.0",
    "devDependencies": {
      "retire": "~0.3.2",
      "precommit-hook": "~1.0.7"
    },
    "scripts": {
      "validate": "retire -n -j"
    }
}

The precommit-hook will install itself into git and trigger the running of retire -n -j during the commit process, which will then scan the project for any known vulnerabilities. Another variation on this theme would be to run the validate script during the build/test portion of a Continuous Integration process, but that would take a much longer blog post to describe. For now, precommit-hook is a simple way to see both it and retire.js in action.
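Since precommit-hook runs lint, validate, and test by default, a fuller scripts section might look like this (the lint and test commands here are placeholders for whatever tools your project actually uses):

```json
{
    "name": "example-app",
    "description": "an example",
    "version": "1.0.0",
    "devDependencies": {
      "retire": "~0.3.2",
      "precommit-hook": "~1.0.7",
      "jshint": "~2.5.0"
    },
    "scripts": {
      "lint": "jshint .",
      "validate": "retire -n -j",
      "test": "node test/index.js"
    }
}
```

With all three defined, a single git commit runs the linter, the vulnerability scan, and your test suite before any code lands.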

(By the way, for a more in-depth look at what retire.js can do, head on over to the ^Lift Security article on retire.js.)

It is always a good thing when you can enable developers to do their work while also ensuring that the Operations Team can continue on their path. Tools like precommit-hook and retire.js enable both teams to be confident that they are heading in the right direction.


Enjoying the Tao of Ops as much as we are? Then why not sign up for our mailing list for other good stuff?

● posted by Philipp Hancke

Microsoft recently announced they will support Object RTC and now everyone is talking about ORTC and how they will support it.

What is this all about and what is ORTC anyway?

In essence, ORTC is an alternative API for WebRTC. It is object-oriented and protects developers from all that ugly Session Description Protocol (SDP) madness. Some people call it WebRTC 1.1, or maybe WebRTC 2.0.

So… will &yet (and Otalk) (and Talky) support ORTC? Of course!

Mostly because we have a use case where ORTC is better than the proprietary ways of solving certain problems with the WebRTC Peerconnection API. Instead of telling you how much we like ORTC, let me tell you about the problems we’ve experienced with WebRTC as it stands today.

SDP

ORTC gets rid of the SDP API surface used in the WebRTC PeerConnection API. Being XMPP people, we prefer Jingle over SDP. However, we rarely touch SDP at all, due to the magnificent sdp-jingle-json module that Lance Stout wrote.

This module transforms SDP into a JSON object and back. The object structure makes it somewhat easier to manipulate the description and change things. Still, you need to know about the semantics of the things you are manipulating. Removing SDP is not something we strongly care about. We’ve hidden it well by burying it under layers of abstraction, and we are not using it on the wire.

Capability and Parameter Negotiation

One of the most important aspects I learned about recently is that ORTC distinguishes between capabilities and negotiation, exposing the former through the RTPSender’s static getCapabilities method.

Capabilities allow us to query what an implementation supports, e.g. what video codecs it offers, whether it can multiplex everything on a single UDP port pair, etc. Because the method is static, we can query those capabilities without creating an RTPSender. And we can also figure out beforehand whether two clients would be compatible with each other.

Negotiation on the other hand means that two entities (who supposedly have a common capability) decide to use it for a particular session. That is what the Offer-Answer-Model for SDP was all about. Your offer tells me what you support and want to use and I answer with the subset I want to use.

Both capabilities and negotiation are useful and necessary. Capabilities are harder to determine in the PeerConnection API, even though it’s not impossible. ORTC just makes the distinction more clear and lets us think about how that distinction influences the protocols we design. However, as we saw with Jingle, cool features like trickle ICE can be backported to SDP semantics.

Talky

We’ve previously written about the upcoming new version of Talky. It’s a multiuser conferencing application that, like Jitsi Meet, uses the Jitsi Videobridge and XMPP. Currently it only works in Chrome (no worries, we’re talking with Mozilla).

There are two different problems here:

  • multiparty conferencing and
  • upgrading a 1-1 call to a conference

Multiparty Conferencing

In terms of API usage it is more complex than anything out there (until Hangouts came about), doing all kinds of renegotiation and adding and removing remote streams for participants. Chrome enables this through an SDP variant known as Plan B, which was not accepted by the IETF last year, although that did not stop Chrome from implementing it.

Basically, to add or remove a local audio/video stream you need to do a setRemoteDescription call followed by a setLocalDescription call, which will trigger onaddstream and onremovestream callbacks depending on whether any streams were added or removed. If you want to know all the gory details, please refer to the webrtchacks article I wrote on how Hangouts uses this API. Not very surprisingly, the features needed were already in Chrome because they were required for Hangouts.

Hangouts also uses some advanced features like simulcast (i.e., sending different resolutions of the same video) which are activated by adding some special lines to the SDP. That’s currently completely undocumented and basically black magic. What is also lacking is a way to prioritize streams when several are competing for bandwidth.

Implementing Talky with the current API is possible. However, one should note that “the current API” here includes a number of non-standard proprietary features. And using it felt like jumping through hoops.

How will ORTC change this?

Well, first, we don’t need to do the setRemoteDescription-setLocalDescription dance. Instead of getting streams (typically consisting of an audio and a video track) pushed at us, we can use the RTPReceiver API to pull audio or video tracks from the peer connection after setting them up with certain parameters (we want to associate those tracks with the participants in the chatroom). There is also a mode for detecting unhandled RTP streams, which potentially allows us to get rid of signaling for individual participants.

The RTPSender objects allow for better control and prioritization of streams. Note that these RTPSender objects are now part of the “1.0” WebRTC API as well and Mozilla has already implemented them. So you can solve that problem there, too.

Upgrading a 1-1 Call to a Multiparty Conference

Quite a sizable portion of the current Talky usage is for 1-1 sessions which are peer-to-peer (with a small percentage being relayed through TURN servers). We do not want to route those sessions via the Jitsi Videobridge for a number of reasons. First, it costs us more money. Second, it decrypts the call and we don’t really want to have access to your private conversations. Third, it increases the latency, which affects the quality of the user experience.

So what we actually want to do is to have your 1-1 sessions in peer-to-peer mode and upgrade to a call relayed by the bridge as necessary. In theory, the current PeerConnection API allows this by doing something called an “ICE restart”. You open a new media path to the bridge and switch over once it’s connected. Turns out that this is currently not implemented by Chrome.

How will ORTC change this?

Well, in ORTC this scenario is easier to describe thanks to the better vocabulary. To do a switch like this, you set up another RTCIceTransport and RTCDtlsTransport object, wait for the connection to become active (by waiting for an RTCDtlsTransportStateChanged event on both sides), and then attach your RTPSender to that new transport.
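As a pseudocode sketch against the draft ORTC object names (constructor signatures and event shapes here are approximations of a spec that is still in flux, not working code):

```
// Sketch: migrate an existing RTPSender from the peer-to-peer
// path to a new transport pointed at the bridge.
var iceTransport = new RTCIceTransport(iceGatherer);
var dtlsTransport = new RTCDtlsTransport(iceTransport, certificates);

dtlsTransport.ondtlsstatechange = function (event) {
    if (event.state === 'connected') {
        // New path is up on this side; once the peer reports the
        // same, re-attach the sender to the new transport.
        sender.setTransport(dtlsTransport);
    }
};

iceTransport.start(iceGatherer, remoteIceParameters, 'controlling');
dtlsTransport.start(remoteDtlsParameters);
```

The old peer-to-peer transports can simply be stopped once the sender is attached to the bridge-facing pair.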

Just having the right vocabulary to talk about this makes ORTC worthwhile.

Bugs

When you implement applications on top of the PeerConnection API in Chrome or Firefox, you will notice some bugs (if not, you’re probably only doing boring stuff). I ended up reporting more than 50 Chrome bugs (and a few Firefox ones) in the last two years.

How will ORTC change this?

Well, with the Microsoft announcement I look forward to filing bugs against a third browser. 50% more fun!

When I tried the plugin Microsoft Open Tech released earlier this year it took half an hour to find two bugs.

Once Google adds ORTC as an API surface there will be more bugs there, and they will have two API surfaces to support. That is rather likely to slow them down. And according to the roadmap, we should already have several ORTC elements in the current Chrome 38 that are not even in Canary yet.

TL;DR

ORTC will make some applications easier. Although it’s not a magic bullet that will somehow fix all the problems that exist with the PeerConnection API, it looks good on paper and we’re excited about playing with running code once it ships in the browsers (especially IE).

Want more cool stuff like this? Then sign up for our newsletter below. It’s full of Vitamin I…for interest!

● posted by Stephanie Maier

The most important job of a leader is to listen and listen well. What sets a great leader apart is her willingness to give of her time and energy. And although listening requires a large amount of both, it makes people feel valued and needed, a goal to which all leaders should aspire.

Leaders need to have, or learn to develop, the humility it takes to truly listen. Not just to hear what people are saying directly, but to be an investigative listener: to pay attention and discover things in the organization that may be unseen, whether positive or negative. And if you do uncover a dark, hard problem, fear not! Chances are good that the solution is shrouded in wisdom which will serve you well in the future.

What other good can you do by listening? Well, the list is probably way longer than this, but here are a few gems I’ve discovered along the way. I’m sure you’ll find your own treasure, too.

  • You create and build trust
  • You learn to trust the team
  • You build confidence
  • You discover and support people’s passion for their own personal growth
  • You inspire collaboration
  • You empower the PEOPLE

We communicate something powerful and sustaining by simply opening our ears instead of our mouths. But it’s not just about that, either; it’s about being patient and learning how to better ask questions so people feel encouraged to share their honest thoughts. In doing so, you have incredible power to improve the well-being and health of not only your teammates, but your organization as a whole.


Want more nuggets of wisdom from cool people like Stephanie? Then sign up for our email list! (P.S. Yes Stephanie, you are cool and full of wisdom. DEAL WITH IT.)

● posted by Aaron "Amac" McCall

Today’s entry: Building the Mixins!

This post is the second in a three-part series that I started with a little bit of background last week.

Building the optimistic concurrency mixin

Following the Human way, I made the optimistic concurrency mixin a CommonJS module and published it with npm. It’s called ampersand-optimistic-sync, but we’ll call it AOS here. AOS replaces the sync method on your Backbone or Ampersand models. Since sync is the core Ajax method, overriding it lets AOS read and write the versioning headers it needs.

AOS supports both the ETag/If-Match and Last-Modified/If-Unmodified-Since approaches for the version information with ETag being the default.

What it does

Regardless of HTTP verb (GET, POST, PUT, DELETE, PATCH), AOS interrogates the server’s response for the configured version header (ETag or Last-Modified), stores the header’s value on the model in a _version property, and triggers a sync:version event with the model and the version data as payload. Then, on any updating request (PUT or PATCH), AOS adds the appropriate request header with the _version data as its value.

When the server responds with a 412 Precondition Failed error status due to an invalid version, AOS triggers a sync:invalid-version event with the model, the new version, and any JSON response data. This event allows the developer to handle invalid-version scenarios as needed.
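Concretely, a round trip with the default ETag configuration might look like this (the header values are illustrative; the initial GET stores the tag, the PUT sends it back, and a stale tag earns the 412):

```http
GET /people/1 HTTP/1.1

HTTP/1.1 200 OK
ETag: "v42"

PUT /people/1 HTTP/1.1
If-Match: "v42"

HTTP/1.1 412 Precondition Failed
```

That final 412 is what surfaces in your model as the sync:invalid-version event.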

How to use it

Adding AOS’s functionality to Ampersand models is as easy as:

var BaseModel = require('ampersand-model');
var AOS = require('ampersand-optimistic-sync');

module.exports = BaseModel.extend(AOS(BaseModel));

That’s pretty easy, but maybe it doesn’t give you exactly what you want. No worries though; you can do a bit of configuring:

var BaseModel = require('backbone').Model;
var AOS = require('ampersand-optimistic-sync');

module.exports = BaseModel.extend(AOS(BaseModel, {
    // [default], 'last-modified' is also supported
    type: 'etag',
    // pre-define a handler for the sync:invalid-version event 
    invalidHandler: function (model, version, response) {
        // make the most of a bad situation
    },
    // pre-define default sync options
    options: {
        all: {
            // any options you'd like to set for all requests
        },
        // you can also set options for particular methods
        create: {
            success: function (data) {
                // do stuff with data
            }
        },
        read: {},
        update: {
            patch: true
        },
        delete: {},
    }
}));

Now that I had a way to track versions and set a handler for invalid version events, it was time to work on the JSON Patch implementation.

Building the JSON Patch mixin

Once again, I created a CommonJS module and published it with npm as ampersand-model-patch-mixin. Rather than keep saying that mouthful, I’ll refer to it as AMP for the rest of the post.

What it does

  1. It keeps track of the last known server state for the model, so it can calculate differences.
  2. It generates JSON Patch operations as model data changes.
  3. It generates op-count events to let devs know how many operations it has tracked.
  4. It enables setting an autoSave test that will trigger a save based on the op-count event.
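The diffing in step 2 can be pictured with a toy version (AMP’s real implementation handles child models and collections as well; this sketch of mine only compares flat props):

```javascript
// Toy diff: emit a JSON Patch "replace" op for each top-level
// property whose value changed between two snapshots.
function diffProps(original, current) {
    return Object.keys(current).reduce(function (ops, key) {
        if (original[key] !== current[key]) {
            ops.push({op: "replace", path: "/" + key, value: current[key]});
        }
        return ops;
    }, []);
}

var ops = diffProps(
    {name: "Franco Witherspoon", age: 32},
    {name: "Frank Withers", age: 32}
);
console.log(ops);
// => [ { op: 'replace', path: '/name', value: 'Frank Withers' } ]
```

Only the changed property produces an op, which is what keeps PATCH payloads small.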

How to use it

The basics are simple. Given a pretty standard Ampersand model with a child model and a child collection, we would do the following:

var BaseModel = require('backbone').Model;
var Car = require('./car');
var Pants = require('../collections/pants');
var AMP = require('ampersand-model-patch-mixin');

// A simple person model
module.exports = BaseModel.extend(AMP(BaseModel, {
    props: {id: 'number', name: 'string', age: 'number'},
    children: {car: Car},
    collections: {pants: Pants}
}));

Now let’s assume that we fetch the record for Person 1 and make some changes:

var Person = require('../models/person');
var person = new Person({id: 1});

person.fetch();
// Server's response
{
    "id": 1,
    "name": "Franco Witherspoon",
    "age": 32,
    "lastModified": "Mon, 10 Nov 2014 14:32:08 GMT",
    "createdBy": 1,
    "car": {"id": 1, "make": "Honda", "model": "CRX", "modelYear": "2006"},
    "pants": [{
        "id": 1, "manufacturer": "Levis", "style": "501",
        "size": "32", "color": "Indigo"
    },
    {
        "id": 2, "manufacturer": "Bonobos", "style": "Washed Chino",
        "size": "32", "color": "Jet Blue"
    },
    {
        "id": 3, "manufacturer": "IZOD", "style": "Cotton Lounge",
        "size": "32", "color": "Navy"   
    }]
}
// AMP stores the response data at the property specified by its originalProperty config directive (default: _original).

// Identity is so fluid these days!
person.name = "Frank Withers";

// Let's be specific.
person.car.model += " SiR";

// Frank loved those IZODs, but their time had come.
person.pants.remove(person.pants.at(2));

// Gotta have PJ pants.
person.pants.add({
    manufacturer: "Joe Boxer", style: "Fleece Pajama",
    size: "32", color: "Blue Plaid"
});

// Wait! Not blue, red.
person.pants.at(2).color = "Red Plaid";

console.log(person._ops.length);
// => 4

person.save();

// Sends the following to the server with Content-Type: application/json-patch+json
[
    {op: "replace", path: "/name", value: "Frank Withers"},
    {op: "replace", path: "/car/model", value: "CRX SiR"},
    {op: "remove", path: "/pants/2"},
    {op: "add", path: "/pants/-", value: {
        manufacturer: "Joe Boxer", style: "Fleece Pajama",
        size: "32", color: "Red Plaid"
    }}
]

Note that there are only four operation objects despite our having made five changes. AMP collapses changes to new models—ones that already have an add operation—into the add operation because their path is unknown. In order to do this, AMP stores the model’s cid in its internal _ops array. This also allows child collections to have a different sort order than the server’s.
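The core of that collapse can be sketched like this (a hypothetical simplification of AMP's bookkeeping; the real mixin hooks into change events and strips the cid before saving):

```javascript
// Record a change. If the changed model only exists in a pending "add"
// op (matched by cid), fold the change into that op's value instead of
// emitting a new operation with an unknowable path.
function recordChange(ops, change) {
    // change: {cid, path, key, value}
    var pendingAdd = ops.filter(function (op) {
        return op.op === 'add' && op.cid === change.cid;
    })[0];
    if (pendingAdd) {
        // The model isn't on the server yet: update the add op in place.
        pendingAdd.value[change.key] = change.value;
    } else {
        ops.push({op: 'replace', path: change.path + '/' + change.key,
                  value: change.value});
    }
    return ops;
}

// The pajama-pants example from above: a pending add, then a change to it.
var ops = [{op: 'add', path: '/pants/-', cid: 'c4',
            value: {style: 'Fleece Pajama', color: 'Blue Plaid'}}];
recordChange(ops, {cid: 'c4', key: 'color', value: 'Red Plaid'});
console.log(ops.length);         // => 1
console.log(ops[0].value.color); // => Red Plaid
```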

“What about this auto-save business you mentioned earlier?” you ask. Let’s talk about that. You can set up auto-saving when AMP reaches five operations like this:

module.exports = BaseModel.extend(AMP(BaseModel, {
    _patcherConfig: {
        autoSave: 5
    },
    props: {id: 'number', name: 'string', age: 'number'},
    children: {car: Car},
    collections: {pants: Pants}
}));

If you need more control, you can also set autoSave to a function that returns a truthy value when you want to save:

module.exports = BaseModel.extend(AMP(BaseModel, {
    _patcherConfig: {
        autoSave: function (model, opCount) {
            // Save once we have five or more ops total
            if (opCount >= 5) return true;
            var roots = [];
            // ...or once the ops touch three or more different root paths
            this._ops.forEach(function (op) {
                var root = op.path.slice(1).split('/').shift();
                if (roots.indexOf(root) === -1) roots.push(root);
            });
            return roots.length >= 3;
        }
    },
    props: {id: 'number', name: 'string', age: 'number'},
    children: {car: Car},
    collections: {pants: Pants}
}));

The End or a Hint of Things to Come?

As I said before, in our app we need to combine both of these forces for a one-two punch of efficiency and consistency. But as I worked through the integration and some other requirements, I realized that a third module that combined the previous two and added some sane defaults for conflict resolution would be really helpful, so I built one. It’s called ampersand-model-optimistic-update-mixin, and it’s a powerhouse, but to hear its story you’ll have to wait for the next post: “Part Three: Let’s End This Conflict!”

Want to learn even more stuff like this? How about some general goings-on to boot? Then sign up for our email list below!

● posted by Mike "Bear" Taylor

“If you don’t monitor it, you can’t manage it.”

In the last installment of our Tao of Ops series I pointed out the above maxim as being a variation on the business management saying, “You can’t manage what you can’t measure” (often attributed to Peter Drucker). This has become one of the core principles I try to keep in mind while walking the Operations path.

Keeping this in mind, today I want to tackle testing the TLS certificates found everywhere in any shop doing web-related production work - something that needs to be done, and that can be rather involved to do properly.

According to Wikipedia:

Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over the Internet. They use X.509 certificates and hence asymmetric cryptography to authenticate the counterparty with whom they are communicating, and to exchange a symmetric key.

When it comes to anything that involves security, verifying is never going to be simple - and if it looks simple, it’s time to take a step back and ask yourself what you’re missing. Crypto is hard and the code tends to be rather long, so below I will be showing snippets taken from kenkou, a site-checking tool I’ve written that uses the Python pyOpenSSL library.

With the above definition and warnings fresh in our minds, let’s take a look at what’s required to make sure that your web site’s certificate is valid. For the purposes of today’s post we are going to limit the scope to:

  • Are all of the certificates in the returned chain valid?
  • Is the peer certificate itself unexpired?
  • Does the domain name match the hostname(s) within the certificate?

From checkCertificate() we see the code needed to open a socket to the remote site and prepare the context required to establish a secure connection.

import socket

from OpenSSL import SSL

# domain = 'example.com'
# config['cafile'] = '/etc/ssl/certs/ca-certificates.crt'
socket.getaddrinfo(domain, 443)[0][4][0]  # raises if the domain does not resolve
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((domain, 443))
ctx = SSL.Context(SSL.TLSv1_METHOD)
# prevent fallback to the insecure SSLv2 and SSLv3 protocols
ctx.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
ctx.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
               pyopenssl_check_callback)
ctx.load_verify_locations(config['cafile'])
ssl_sock = SSL.Connection(ctx, sock)
ssl_sock.set_connect_state()
ssl_sock.set_tlsext_host_name(domain)
ssl_sock.do_handshake()

Note that we are explicitly preventing the use of SSLv2 and SSLv3, and we are asking pyOpenSSL to ensure we have a peer certificate. The pyopenssl_check_callback is used to ensure that no certificate present in the chain has expired:

def pyopenssl_check_callback(connection, x509, errnum, errdepth, ok):
    '''Verification callback for the pyOpenSSL certificate check.'''
    log.debug('callback: %d %s' % (errdepth, x509.get_issuer().commonName))
    if x509.has_expired():
        raise CertificateError('Certificate %s has expired!' %
                               x509.get_issuer().commonName)
    # ok reflects OpenSSL's own verification result for this certificate
    return ok

Now that we have an SSL connection we can take a deeper look at the X.509 peer certificate. The code below calls match_hostname() (found in kenkou.py) to perform a rigorous check, per RFC 6125, that the requested domain matches the hostname(s) present in the certificate:

import datetime

x509 = ssl_sock.get_peer_certificate()
try:
    match_hostname(x509, domain)
except CertificateError:
    print('Hostname does not match')

expire_date = datetime.datetime.strptime(x509.get_notAfter(), "%Y%m%d%H%M%SZ")
expire_td   = expire_date - datetime.datetime.now()
if expire_td.days < 15:
    print('Expires in %s days' % expire_td.days)
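The expiry arithmetic is easy to sanity-check on its own - notAfter values are ASN.1 timestamps of the form YYYYMMDDHHMMSSZ. Here is a small standalone sketch (the helper name is my own, not part of kenkou):

```python
import datetime


def days_until_expiry(not_after, now=None):
    """Days until an X.509 notAfter timestamp (ASN.1 format YYYYMMDDHHMMSSZ)."""
    expire_date = datetime.datetime.strptime(not_after, "%Y%m%d%H%M%SZ")
    now = now or datetime.datetime.utcnow()
    return (expire_date - now).days


# e.g. a certificate expiring at the end of 2014, checked on November 10th
print(days_until_expiry("20141231235959Z", datetime.datetime(2014, 11, 10)))
# => 51
```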

We can now feel confident that our certificate is valid, and we can be warned ahead of time when it is about to expire. The ability to verify allows you to be proactive instead of reactive - a much better way to walk the Operations path than constantly responding to issues.

Want more cool stuff like the Tao of Ops? Then sign up for our email list and have more good stuff delivered direct to your inbox.

● posted by Adam Brault

Focus is hard, painful work.

It’s especially difficult to let go of things you really care about in order to focus on the things you care about more.

But to keep our idealism, we need to grow up sometimes.

We’ve decided to close And Bang so that we can put all our efforts into Talky, Otalk, and our services (realtime consulting and training).

And Bang was one of the first products we ever built and represents several years of effort by our team. I vividly remember finishing the And Bang 1.0 signup UI and landing page in a hotel room with Henrik the night before the first Keeping it Realtime Conference in 2011.

As a company financially driven mostly by service revenue, it is very hard to invest heavily enough in both open source and products—and extremely hard when they go in different directions, technology-wise. Last May, we decided that we would not ship a paid version of And Bang, and would transition it to a new version of the product built on top of Otalk before doing so.

We aimed to keep And Bang alive, but we stopped development on it entirely, putting that energy into components of Otalk. But even keeping it in that state meant a certain amount of complexity, and some guilt over leaving it languishing.

After some hard conversations this Fall, we decided to end the product entirely, close the servers in two weeks (November 21, 2014), and redirect our available energy into fewer, clearer channels of effort.

We believe in the direction we’re going with Talky and Otalk. They really matter to us, and we believe they can help others, and fit into our vision.

Want to learn more about what we have coming up next? Then sign up for our email list!

● posted by Nathan LaFreniere

Maintaining code quality is hard. That’s why a little over two years ago, I created precommit-hook to help automate things like linting and running tests.

Over the years, precommit-hook has evolved, but it’s always had the same basic functionality. Run a simple npm i --save-dev precommit-hook and the module takes care of the rest. It’s worked great for a long time, and been adopted by quite a few people. So then what’s the problem?

Customization. If you want to change the behavior of the hook, you have to either fork it and make the changes yourself and publish a new module, or you have to make manual changes to your project’s package.json. For a module with the goal of making things as simple as possible, that’s kind of a bummer.

Enter git-validate. The idea behind git-validate isn’t to automatically do all the things for you, but rather to provide a very simple framework for creating your own modules that do as much or as little as you want them to.

Using git-validate, you essentially gain the ability to create a template for your projects. For an example of this, I have ported precommit-hook to leverage git-validate to give you an idea of how it will work. The install file is where all the magic happens.

No more assumptions are made about what linter you use, what scripts you want to run, or what files you want included in your project when the module is installed. You have absolute control. The pre-commit hook isn’t even created unless you create it. Have a script you want to run on pre-push? Go ahead! Add a Validate.installHooks('pre-push'); to your module’s install script, add the "pre-push" key to your .validate.json and you’re done!
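For reference, a .validate.json wired up for pre-push might look something along these lines (the keys and script names here are illustrative guesses - check the git-validate README for the actual format):

```json
{
    "scripts": {
        "lint": "jshint .",
        "test": "npm test"
    },
    "pre-commit": ["lint", "test"],
    "pre-push": ["test"]
}
```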

With git-validate my hope is to see a ton of inventive new ways that people maintain their code quality. Create your own module and send me a link (I’m @quitlahok on Twitter); I’d love to see what you’re doing with it.

(NOTE: as of the writing of this blog post, git-validate is at version 0.1.0. That means it’s not completely finished. Linux and OS X support is functionally complete, but Windows support needs some work. If you use Windows, I would recommend waiting for the 1.0.0 release, which should be coming in the next few days.)

Hey there! Want more cool stuff from the &yet team? Then why not sign up for our mailing list? It’s chock-full of vitamin G (for goodies).

● posted by Jenna & Speegle

What is Otalk?

Jenna and Speegle sit down with Fritzy as he explains the secret sauce behind Otalk.

Otalk from &yet on Vimeo.

But wait, there’s more!

Want to know more about Otalk? What about the next release of Talky? Wanna get a bi-weekly dispatch of stuff we’re learning and doing at &yet? Join our community! We can’t be &yet without &you.