Maintaining code quality is hard. That's why a little over two years ago, I created precommit-hook to help automate things like linting and running tests.
Over the years, precommit-hook has evolved, but it's always had the same basic functionality: run a simple npm i --save-dev precommit-hook and the module takes care of the rest. It's worked great for a long time and has been adopted by quite a few people. So what's the problem?
Customization. If you want to change the hook's behavior, you either have to fork the module, make your changes, and publish it under a new name, or make manual edits to your project's package.json. For a module whose goal is to make things as simple as possible, that's kind of a bummer.
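For reference, the manual route means maintaining the scripts and (if memory serves) the pre-commit array of script names that precommit-hook wires into your package.json. The script bodies below are just placeholders:

```json
{
  "scripts": {
    "lint": "jshint .",
    "test": "node test/index.js"
  },
  "pre-commit": ["lint", "test"]
}
```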
Enter git-validate. The idea behind git-validate isn't to automatically do all the things for you, but rather to provide a very simple framework for creating your own modules that do as much or as little as you want them to.
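As a sketch of what that looks like: you publish your own small module that depends on git-validate and, at install time, wires up whatever scripts and hooks you want. The module name here is hypothetical, and the installScript/configureHook names are recalled from git-validate's README, so double-check the current docs before relying on them:

```js
// index.js of your own hook module ("validate-myteam" is a made-up name).
var Validate = require('git-validate');

// Add default scripts to the consuming project's package.json
Validate.installScript('lint', 'jshint .');
Validate.installScript('test', 'node test/index.js');

// Run those scripts from the pre-commit hook
Validate.configureHook('pre-commit', ['lint', 'test']);
```

Your team installs that one module, gets your conventions, and can still override them per project without forking anything.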
While working on a line-of-business application for a client recently, I was asked to research and implement two approaches to making data updates more efficient and consistent.
The first is JSON Patch. The idea here is to reduce data transfer by sending only the operations needed to make the remote resource identical to the local one. Even though both resources are represented as JSON objects, applying patches means we don't have to replace the entire entity on every update, which also reduces the risk of accidentally changing data that should stay the same.
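For a sense of what that looks like on the wire, a JSON Patch document (RFC 6902) is just an array of operations; the field names here are invented for the example:

```json
[
  { "op": "replace", "path": "/email", "value": "new@example.com" },
  { "op": "remove",  "path": "/legacyFlag" }
]
```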
The second is optimistic concurrency control. This approach allows multiple users to open a data record for editing at the same time, and determines whether there are any conflicts at save time.
Our working hypothesis was that combining these two approaches would enable us to build a more bandwidth-efficient, data-consistent application while also providing a more pleasant user experience.
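Here's a minimal sketch of how the two fit together over HTTP, assuming the server understands JSON Patch and uses ETags for version checks; the endpoint and fields are hypothetical:

```js
// Send only the changed operations, guarded by the version we loaded.
async function saveChanges(etagFromLoad) {
  const response = await fetch('/api/invoices/42', {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json-patch+json',
      'If-Match': etagFromLoad // the version the user opened for editing
    },
    body: JSON.stringify([
      { op: 'replace', path: '/status', value: 'approved' }
    ])
  });

  if (response.status === 412) {
    // Precondition failed: someone else saved a newer version first.
    // Reload the record and let the user resolve the conflict.
  }
  return response;
}
```

The 412 response is the optimistic part: nobody holds a lock while editing, and conflicts only surface at save time, when they actually exist.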
That's a variation on the business management saying "you can't manage what you can't measure" (often attributed to Peter Drucker). The saying might not always apply to business, but it definitely applies to Operations.
There are a lot of tools you can bring into your organization to help with monitoring your infrastructure, but they usually look at things only from the "inside perspective" of your own systems. To truly know if the path your Operations team is walking is sane, you need to also check on things from the user's point of view. Otherwise you are missing the best chance to fix something before it becomes a problem that leads your customers to take their business elsewhere.
Active testing of your systems from the outside is crucial and something that is easy enough to set up. For each internal system you are monitoring, ask yourself how you would create a query or request from the outside using that internal system.
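For example, if you monitor your API servers internally, the outside-in version can be as small as a script running on a box outside your network that hits a public endpoint and complains when it's slow or broken. The URL and thresholds below are placeholders:

```js
// Runs from outside your network; assumes Node.js 18+ for global fetch.
const URL_TO_CHECK = 'https://example.com/api/health'; // placeholder endpoint

async function check() {
  const started = Date.now();
  try {
    const res = await fetch(URL_TO_CHECK, { signal: AbortSignal.timeout(5000) });
    const elapsed = Date.now() - started;
    if (!res.ok || elapsed > 2000) {
      console.error('ALERT: status', res.status, 'latency', elapsed + 'ms');
      process.exitCode = 1;
    }
  } catch (err) {
    console.error('ALERT: request failed entirely:', err.message);
    process.exitCode = 1;
  }
}

check();
```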
When you think about a software project, and specifically the people that are involved with it, you probably think about developers. After all, the code itself is what makes up the project. I submit to you that we have a perception problem in the software world. In fact I think we have it backwards. The software is the least important thing in your software project.
Currently, code commits get all the attention and metrics. They are typically what a project uses to measure progress, complexity, and really anything considered meaningful to the work as a whole. The fact is, though, they're the last thing anyone who uses your software actually sees. It doesn't matter whether you're writing client-side code or a backend helper library: the first thing anyone will likely see, and the thing they will interact with the most, is the documentation.
In today's software ecosystem, code is cheap. Problems are relatively easily solved. What language you choose and what approach you take can often be a matter of personal preference and style. There are of course exceptions, but they are far from the vast majority of situations. What really matters is how quickly the code you write can be useful to anyone besides yourself. Chances are you are not writing code in a vacuum (if you are, hello! You have a weird setup and should probably join us in the 21st century; it's nice here). Think about the last time you used any software at all. Did you just intuitively know how to run it? No, you had to read the documentation. It's strange, then, that the first thing people see has somehow ended up so low on our priority list.
I first set foot through the door as an official yeti almost exactly two months ago. I’ve changed jobs before, but somehow this time it felt a bit different. Sort of a cross between moving to a country where you don’t know the language, and walking into the cafeteria on your first day of 7th grade. While at the store purchasing a handful of requisite office items, I felt compelled to toss a little green notebook in the basket. I’m not sure why, but it just seemed necessary.
Socially Acceptable Security Blanket
My first few weeks on the job, I had a ton of conversations with a ton of other people. Yetis, by nature, tend to constantly burble ideas, and I didn’t want to miss any of it. Having made the transition from designer to front-end developer, and now to back-end developer, I was tasked with sponging new languages, terminologies, ways of thinking, processes, programs, and people. As a way of coping, I just cracked open that green notebook and started scribbling. I talked to people and scribbled, I worked on projects and scribbled, I read articles and scribbled…I scribbled myself off cliffs of anxiety, and I scribbled my way out of mental blocks. There were even times when I just clung to it and fiddled with the ribbon bookmark and elastic closure strap just to give my fidgety hands something to do while I made sense of what I was feeling. You could say that it was akin to a socially acceptable security blanket.
For those of you playing along at home, you may have heard me mention the novel we here at &yet worked on this year, Something Greater than Artifice, like a jillion times. For those of you who haven't: Hello! Welcome to the Internet. Please enjoy the heady melange of cultural experiences but for God's sake don't read the bottom half of anything.
Anyway. Something Greater than Artifice (or SGtA for you TL;DR folks). If you don't know the story behind it I'm sure it's floating around somewhere (subtle hint: that link takes you to the RealtimeConf site, which is both cool and awesome). Main thing is that I wrote a pretty good book and a bunch of cool people helped me turn it into a pretty very good book. Because without Amy illustrating and Jenn editing and Adam occasionally saying "that part with the thing doesn't make sense" this thing would be not as pretty very good as it is.
Okay, fine. More than pretty very good. Doubleplus pretty very good. Because–if you'll abide a moment of hubris–the book was actually selected as the Kirkus Reviews Indie Book of the Month Selection (caps theirs). Which got us to thinking that maybe, just maybe, we could take this thing which started as a conversation between Adam, Amy, and me and turn it into something greater.
Way back in 2008, my friend Jack Moffitt wrote a blog post entitled XMPP Is Better With BOSH. In those ancient days of long polling, BOSH was the state of the art for sending XMPP traffic over an HTTP transport because we needed something like the Comet model for bidirectional communication over HTTP. Even at the time, we knew it was an ugly and temporary hack to send multiple HTTP request-response pairs via long polling, but we didn't have anything better.
Since then, bidirectional communication between web browser and web service has come a long way, thanks to WebSocket. Nowadays, you start with an HTTP connection but use the HTTP Upgrade mechanism to bootstrap directly into a long-lived bidirectional session (for this reason, WebSocket has been likened to "TCP for the web"). WebSocket has its warts too, but compared to BOSH it significantly reduces the overhead of maintaining HTTP-based connections for XMPP. Even better, it has become a truly standard foundation for building real-time web apps, with support in all the modern languages and frameworks for web development.
The benefits of communicating XMPP over WebSocket encompass and extend the ones that Jack enumerated years ago for BOSH:
Greater resilience in the face of unreliable networks — here WebSocket does pretty much what BOSH and other "Comet" approaches did 10 years ago, but in a more network-friendly way by removing the need for long polling.
The ability to recover from data loss — the BOSH model of recovering from network outages and communication glitches was generalized in the XMPP stream management extension, which can be used with XMPP over WebSocket, too.
Compression for free — well, it turns out that the free compression we got by sending XMPP over HTTP wasn't so free after all (cf. the CRIME and BREACH attacks), but there's a native compression scheme for WebSocket which so far appears to avoid the security problems that emerged with application-layer compression in HTTP.
Firewall friendliness — in this case WebSocket isn't quite as network-agnostic as BOSH, since it's known that some mobile networks especially prevent WebSocket from working well (usually because they don't handle the HTTP Upgrade mechanism very well). Hopefully that will improve over time, but in the meantime we can always fall back to BOSH if needed.
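To make that concrete, here's a rough sketch of opening an XMPP stream over WebSocket from the browser, following RFC 7395; the endpoint URL is a placeholder:

```js
// The 'xmpp' subprotocol is defined by RFC 7395; the URL is a placeholder.
var ws = new WebSocket('wss://example.com/xmpp-websocket', 'xmpp');

ws.onopen = function () {
  // Each WebSocket message carries exactly one XMPP frame; the stream is
  // opened with an <open/> element instead of the old <stream:stream> header.
  ws.send('<open xmlns="urn:ietf:params:xml:ns:xmpp-framing" ' +
          'to="example.com" version="1.0"/>');
};

ws.onmessage = function (event) {
  // The server answers with its own <open/>, then stream features, and from
  // there it's ordinary XMPP: SASL auth, resource binding, stanzas.
  console.log('received:', event.data);
};
```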
There’s a bit of a kerfuffle right now in Angular.js land because, lo and behold, the 2.0 release contains drastic differences and there isn’t really an upgrade path.
If you want to upgrade you'll likely need to completely re-write your app!
The structural updates they're proposing all sound like good improvements, but if you built a large app on 1.x and want to upgrade, it doesn't really seem like you'll be able to reuse much of your code.