Announcing bindinator.js

Having recently set up bindinator.com, I am “officially” announcing my side-project bindinator.js (née Bind-O-Matic.js). It’s a small (currently 7kB gzipped and minified) Javascript library designed to make developing in vanilla Javascript better in every way than using one or more frameworks. It embodies my current ideas about Javascript, web and UI development, and programming — for whatever that’s worth.

Also, I’m having a ton of fun hacking on it.

By way of “dogfooding”, I’m simultaneously building a skunkworks version of my main work project (which is an Electron-based desktop application) with it, adapting any code I can over to it, building b8r’s own demo environment, and slowly porting various other components and code snippets to it.

Above is my old galaxy generator, updated with a bunch of SVG goodness, and implemented using b8r (it was originally cobbled together using jQuery).

Why another framework?

I’ve worked with quite a few frameworks over the years, and in the end I like working in “vanilla” js (especially now that modern browsers have made jQuery pretty much unnecessary). Bindinator is intended to provide a minimal set of tools for making vanilla js development more:

  • productive
  • reusable
  • debuggable
  • maintainable
  • scalable

Without ruining the things that make vanilla js development as pleasant as it already is:

  • Leverage debugging tools
  • Leverage browser behavior (e.g. accessibility, semantic HTML)
  • Leverage browser strengths (e.g. let it parse and render HTML)
  • Be mindful of emerging ideas (e.g. semantic DOM, import)
  • Super fast debug cycle (no transpiling etc.) — see “leverage debugging tools”
  • Don’t require the developer to deal with different, conflicting models

The last point is actually key: pretty much every framework tries to abstract away the behavior of the browser (which, these days, is actually pretty reasonable) with some idealized behavior that the designer(s) of the framework come up with. The downside is that, like it or not, the browser is still there, so you (a) end up having to unlearn your existing, generally useful knowledge of how the browser works, (b) learn a new — probably worse — model, and then (c) reconcile the two when the abstraction inevitably leaks.

Being Productive

Bindinator is designed to make programmers and designers more (and separately) productive, decouple their activities, and be very easy to pick up.

To make something appear in a browser you need to create markup or maybe SVG. The easiest way to create markup or SVG that looks exactly like what you want is — surprise — to create what you want directly, not write a whole bunch of code that — if it works properly — will create what you want.

Guess what? Writing Javascript to create styled DOM nodes is slower, more error-prone, less exact, probably involves writing in pseudo-languages, adds compilation/transpilation steps, doesn’t leverage something the browser is really good at (parsing and rendering markup), and probably involves adding a mountain of dependencies to your code.

Bindinator lets you take HTML and bind it or turn it into reusable components without translating it into Javascript, some pseudo-language, a templating language, or transpilation. It also follows that a designer can style your markup.

Here’s a button:

<button class="awesome">Click Me</button>

Now it’s bound — asynchronously and by name.

<button class="awesome" data-event="click:reactor.selfDestruct">
  Click Me
</button>

When someone clicks on it, an object registered as “reactor” will have its “selfDestruct” property (presumably a function) called. If the controller object hasn’t been loaded, b8r’s event handler will store the event and replay it when the controller is registered.
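
For concreteness, here’s roughly what registering that controller might look like. This is a minimal sketch that assumes b8r exposes a register(name, object) call of the kind the component example below relies on; the reactor object and its selfDestruct method are, of course, made up:

const b8r = require('b8r');

// assumption: register(name, object) makes the object available by name;
// any clicks queued before this point are replayed against it
b8r.register('reactor', {
  selfDestruct () {
    console.log('finding a new line of work…');
  }
});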

Here’s an input:

<input type="range">

And now its value is bound to the fuel_rod_position of an object registered as “reactor”:

<input type="range" data-bind="value=reactor.fuel_rod_position">

And maybe we want to allow the user to edit the setting manually as well, so something like this:

<input type="range" data-bind="value=reactor.fuel_rod_position">
<input type="number" data-bind="value=reactor.fuel_rod_position">

…just works.
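
Under the hood, both inputs are simply kept in sync with the same registered path. Here’s a minimal sketch of what that looks like from the Javascript side, assuming get/set-by-path helpers along the lines described under “Core Concepts” below (the exact names and signatures are assumptions):

const b8r = require('b8r');

b8r.register('reactor', {fuel_rod_position: 3});

// assumption: set(path, value) updates the registered object and re-renders
// anything bound to that path, so both the range and the number input change
b8r.set('reactor.fuel_rod_position', 7);

// edits the user makes in either input land back in the registered object,
// so reading the path back reflects them
console.log(b8r.get('reactor.fuel_rod_position'));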

Suppose later we find ourselves wanting lots of sliders like this, so we want to turn it into a reusable component. We take that markup, and modify it slightly and add some setup to make it behave nicely:

<input type="range" data-bind="value=_component_.value">
<input type="number" data-bind="value=_component_.value">
<script>
 // "component" is the element the component was loaded into (see below)
 const slider = findOne('[type="range"]');
 // copy min/max attributes from the component's tag onto the range input
 slider.setAttribute('min', component.getAttribute('min') || 0);
 slider.setAttribute('max', component.getAttribute('max') || 10);
 // register the component's data, falling back to a default value
 register(data || {value: 0});
</script>

This is probably the least self-explanatory step. The script tag of a component executes in a private context where there are some useful local variables:

  • component is the element into which the component is loaded;
  • find and findOne are syntax sugar for component.querySelectorAll (converted to a proper array) and component.querySelector, respectively;
  • register is syntax sugar for registering the specified object under the component’s unique id.

We save this as “slider-numeric.component.html” and can invoke it thus:

<span 
  data-component="slider-numeric"
  data-bind="component(value)=reactor.fuel_rod_position"
></span>

And load it asynchronously thus:

const {component} = require('b8r');
component('slider-numeric');

To understand exactly what goes on under the hood, we can look at the resulting markup in (for example) the Chrome debugger:

Chrome debugger view of a simple b8r component

Some things to note: data-component-id is human-readable and tells you what kind of component it is. The binding mechanism (change and input event handlers) is explicit and self-documented in the DOM, and the binding has become concrete (_component_ has been replaced with the id of that component’s instance). No special debugging tools required.

Code Reuse

Bindinator makes it easy to separate presentation (and presentation logic) from business logic, making each individually reusable with little effort. Components are easily constructed from pieces of markup, making “componentization” much like ordinary refactoring.

A bindinator component looks like this:

<style>
  /* style rules go here */
</style>
<div>
  <!-- markup goes here -->
</div>
<script>
  /* component logic goes here */
</script>

All the parts are optional; a component need not, for example, contain any actual markup at all.

When a component is loaded, the HTML is rendered into DOM nodes, the script is converted into the body of a function, and the style sheet is inserted into the document head. When a component is instanced, the DOM elements are cloned and the factory function is executed in a private context.
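
To make that concrete, here’s a rough sketch of what a loader along those lines might do. This is emphatically not b8r’s actual implementation, just the idea described above expressed as code (the helper names mirror the slider example):

// conceptual sketch only: load once, instance many times
const loadComponent = ({style, html, script}) => {
  if (style) {
    // the style sheet goes into the document head at load time
    const styleTag = document.createElement('style');
    styleTag.textContent = style;
    document.head.appendChild(styleTag);
  }

  // the markup is parsed into DOM nodes once, at load time
  const template = document.createElement('div');
  template.innerHTML = html || '';

  // the script becomes the body of a factory function whose parameters are
  // the "useful local variables" the slider example relied on
  const factory = new Function('component', 'find', 'findOne', 'register', 'data', script || '');

  // instancing clones the template's nodes and runs the factory in that context
  return (component, data) => {
    component.append(...template.cloneNode(true).childNodes);
    const find = sel => [...component.querySelectorAll(sel)];
    const findOne = sel => component.querySelector(sel);
    const register = obj => { /* bind obj under the instance's unique id */ };
    factory(component, find, findOne, register, data);
  };
};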

Debugging

Bindinator is designed to have an incredibly short debug cycle, to add as little cognitive overhead as possible, and work well with debugging tools.

To put it another way, it’s designed not to slow down the debug cycle you’d have if you weren’t using it. Bindinator requires no transpilation, templating languages, or parallel DOM implementations; it’s designed to leverage your existing knowledge of the browser’s behavior rather than subvert and complicate it; and if you inspect or debug code written with bindinator you’ll discover the markup and code you wrote where you expect to find them. You’ll be able to see what’s going on by looking at the DOM.

Maintenance

If you’re productive, write reusable (and hence DRY) code, and your code is easier to debug, your codebase is likely to be maintainable.

Scale

Bindinator is designed to make code scalable:

  • Code reuse is easy because views are cleanly separated from business logic.
  • Code is smaller because bindinator is small, bindinator code is small, and code reuse leads to less code being written, served, and executed.
  • Bindinator is designed for asynchrony, making optimization processes (like finessing when things are served, when they are loaded, and so forth) easy to employ without worrying about breaking stuff.

Core Concepts

Bindinator’s core concepts are event and data binding (built on the observation that data-binding is really just event-binding, assuming that changes to bound objects generate events) and object registration (named objects with properties accessed by path).

Bindinator provides a bunch of convenient toTargets — DOM properties to which you might want to write a value, in particular value, text, attr, style, class, and so forth. In most cases bindings are self-explanatory, e.g.

data-bind="style(fontFamily)=userPrefs.uiFont"

There are fewer fromTargets (value, text, and checked) which update bound properties based on user changes — for more complex cases you can always bind to methods by name and path.
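
Here are a few more bindings following the same target(argument)=path pattern as the style example above. The paths and names are made up for illustration, and I’m assuming class and attr take their argument the same way style(fontFamily) does:

<!-- toTargets: values flow from the registered object into the DOM -->
<h2 data-bind="text=reactor.name"></h2>
<div data-bind="class(alarm)=reactor.overheating"></div>
<img data-bind="attr(title)=reactor.name">

<!-- fromTargets: user edits flow back to the bound path -->
<input type="checkbox" data-bind="checked=reactor.scram_enabled">
<input type="text" data-bind="value=reactor.operator_notes">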

Components are simply snippets of web content that get inserted the way you’d want and expect them to, with some syntax sugar for allowing each snippet to be bound to a uniquely named instance object.

And, finally, b8r provides a small number of convenience methods (which it needs to do what it does) to make it easier to work with ajax (json, jsonp), the DOM, and events.
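
For example, pulling in data over ajax and handing it to the registry might look something like this. It’s a sketch: the json helper’s name and promise-based signature are assumptions based on the description above:

const b8r = require('b8r');

// assumption: a promise-returning json convenience, per the ajax helpers above
b8r.json('/api/reactor-status').then(status => {
  // once registered, any views bound to "reactor" paths update themselves
  b8r.register('reactor', status);
});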

The Future

I’m still working on implementing literate programming (allowing the programmer to mix documentation, examples, and tests into source code), providing b8r-specific lint tools, building out the standard control library (although mostly vanilla HTML elements work just fine), and adding more tests in general. I’m tracking progress publicly using Trello.

The New Macbook Pros

As someone who was forced to pick a new laptop about two weeks ago, i.e. just before the new Macbook Pros were announced, I have to confess that I’m a little pleased that the new machines don’t blow my current machine away. But, for all the pissing and moaning on the interwebs about Apple’s underwhelming new laptops, people seem to forget that PC performance has pretty much stagnated for the last eight years.

My 2012 Mac Pro (which was, effectively, a 2009 Mac Pro) still seems perfectly decent compared to the latest and greatest, and if I gave a damn it could be upgraded to give the 2013 Mac Pros a run for the money (in CPU benchmarks at any rate, the 2013 Mac Pros have stunning throughput).

The problem here is that CPU speed has hit a wall, pixel counts have gotten ludicrous (so that people are complaining about game performance on 4K displays), the benefits of GPUs for everyday computing haven’t really materialized, and 8GB of RAM is probably still plenty for most people’s daily use (16GB is ample).

Still, what happened last time Apple was forced to release an underwhelming upgrade after a long pause? The Intel transition. So, I’m willing to bet that the “let’s switch Mac OS — oops, macOS — over to our ARM architecture” faction within Apple is now winning a lot of arguments it was losing two years ago. (I suspect that the Macbook is the form factor of the first ARM-based macOS device.)

We’ll see — my predictions are often correct but wildly premature.

Affinity Photo — Redeemed!

Affinity Photos "Assistant Manager" dialog
Affinity Photos “Assistant Manager” dialog

An observant reader has pointed out that Affinity Photo (now?) offers the option of using Apple’s RAW converter instead of its own, which mitigates the single biggest problem with this otherwise excellent and inexpensive tool.

I think this should be (and should always have been) the default or only option until and unless Affinity’s RAW converter is improved to the point of being useful, but in the meantime this allows photographers to use this program for their entire workflow.

To switch to Apple’s RAW converter:

  1. Open a RAW (or DNG) file in the Develop persona.
  2. Use the menu item View > Assistant Manager.
  3. Set the RAW engine to Apple (Core Image RAW).
  4. Cancel the develop (the change does not take effect immediately).
  5. From now on, Affinity Photo will use the far superior Apple RAW converter.

The workaround is documented here.

As a final aside — this process exposes several user interface warts.

First, why the heck is this buried in an obscure, vaguely named dialog you can only reach in one mode?

If you try to quit when you’re developing an image you get cock-blocked

Second, if you try to quit Affinity Photo when you’re in the middle of a RAW conversion it will simply stop you. (This is why I put in step 4.) You can’t opt to “discard changes to all open files” and get on with your life. This is very un-mac-like. If I were in a hurry I’d probably have been forced to Force Quit.

When you find the Cancel button (at the top left, kind of) you then get this terrible dialog.

Third, when you Cancel a RAW conversion (which is what you have to do if you get into the situation above) the dialog box offers “Yes” and “No” options instead of useful verbs, such as “Abandon” and “Continue”.

And, finally, could software companies please pull their heads out of their asses and give their applications usefully distinct names? Half the time when I try to launch Affinity Photo via Spotlight I accidentally launch Affinity Designer. Don’t bury the lede.

Tech Free Saturdays with the Kids

Romilly with her iPad

Like many parents, Rosanna and I are concerned about our kids’ obsession with “technology”, so we tried “tech free Saturdays”, and it worked for about an hour (I — more than slightly ironically — spent that hour with the girls playing with an Elenco electronics set — something far better than the “150-in-one” electronics kit I dreamed of when I was a kid), and then gave up. Short of getting exercise outdoors — which while almost certainly a Good Thing To Do is hardly something of which I am an exemplar — what was there to do without “technology”?

I don’t pretend to know what stuff is going to be important to the success of my kids. A lot of the stuff I learned in school has turned out to be useful, or at least makes for interesting conversation (apparently, most people forget almost everything they learned in school, and, having had no interest in it when they were 14, find it intriguing as adults). But the most useful stuff I learned as a kid is the stuff society — i.e. teachers and parents — made me feel guilty about spending time on. And I don’t think this is rare. I think it’s the people who were obsessed with computer games, or Science Fiction, or Dungeons & Dragons, back in 1982, who are creating the world we live in today.

We won (or, at least, we’re ahead — when we all die young from heart disease and diabetes because we never get any exercise, the jocks from high school whose knees still work can gloat).

And having won by willfully ignoring society’s ideas of what a “healthy” obsession was when we were kids, who are we to impose our ideas of what a “healthy” obsession is on our kids? Well, we’re parents, of course, and “a foolish consistency is the hobgoblin of little minds”. Perhaps we’re just that much smarter than our parents and teachers.

Another possibility that occurs to me is that a passion for anything — programming, role-playing games, the collected works of Jack Vance — only turns into something powerful and character-building if it involves pushing against social pressure. In other words, it’s OK for us to try to stop our kids from doing what they want to do, but it’s even better if they defy us and do it anyway.

In the end, I don’t mind if my kids are obsessed with Minecraft, or even Youtube. What worries me is that it’s too easy to feed those obsessions, and I don’t think technology is the problem. But, having said that, my father narrowly avoided the Holocaust and my mother lived through famine and the Vietnam War, whereas I had to cope with the poor selection of science fiction in local libraries and the fact that our school only had one Apple II computer.

Returning to the Adobe fold… sort of

I remain very frustrated with my photography workflow. No-one seems to get this right and it drives me nuts. (I’m unwilling to pay Apple lots of money for a ridiculous amount of iCloud storage, which might work quite well, but it still wouldn’t have simple features like automagically prioritizing images that I’ve rated or looked at closely over others, or allowing me to rate JPEGs and have the rating carried over to the RAW later.)

Anyway, Aperture is sufficiently out-of-date that I’ve actually uninstalled it and Photoshop still has some features (e.g. stitching) that its competition cannot match. So, $120 for a year of Photoshop + Lightroom… let’s see how it goes.

Lightroom

I was expecting Lightroom to be awesome what with all the prominent folks who swear by it. So far I find it unfamiliar (I did actually use LR2, and of course I am a Photoshop ninja) to the point of frustration, un-Mac-like, and ugly, ugly, ugly.

Some of my Lightroom gripes

A large part of the problem is terrible use of screen real estate. It’s easy to hide the menubar (once you find where Adobe has hidden its non-standard full screen controls), but it’s hard (impossible) to hide the idiotic mode menu “Identity Plate”. (I found the “Identity Plate Editor” (WTF?) by right-clicking hopefully all over the place, which allowed me to shrink the stupidly large lettering, but it just left the empty space behind.) How can an application that was created brand new (and initially Mac-only) have managed to look like a dog’s breakfast so quickly?

But there are many little things that just suck.

  • All the menus are horrible — cluttered and full of nutty junk. Looks like design by committee.
  • The dialog box that appears when you “Copy…” the current adjustments is a crime against humanity (it has a weird set of defaults which I overrode by clicking “check none” when I only wanted to copy some very specific settings and now I can’t figure out how to restore the defaults).
  • The green button doesn’t activate full screen mode. There are multiple full screen modes and none of them are what I want.
  • Zooming with the trackpad is weird. And the “Loupe” (nothing like or as nice as Aperture’s) changes its behavior for reasons I cannot discern. (I finally figured out that the zoom in shortcut actually goes to 1:1 by default, which is useful, although it’s such a common feature I’d have assigned a “naked” keystroke to it, such as Z, which instead toggles between display modes.)
  • The main image view seizes up after an indeterminate amount of use and shortly afterwards Lightroom crashes. (This is on a maxed-out Macbook Pro 15″.)
  • I can’t hide the stupid top bar (with your name in it). I can’t even make it smaller by reducing the font size of the crap in it.
  • Hiding the “toolbar” hides a random thing that doesn’t seem to me to be a toolbar.
  • By default the left side of the main window is wasted space. Oh, and the stupid presets are displayed as a list of words — you need to mouse over them to get a low-fidelity preview.
A crime against humanity.

I found Lightroom’s UI sufficiently annoying that I reinstalled Aperture for comparison. Sadly, Lightroom crushes Aperture in ways that really matter. E.g. its Shadow and Highlight tools simply work better than Aperture’s (I essentially need to go into Curves to do anything slightly difficult in Aperture), and it has recent features (such as Dehaze — I’m pretty sure inspired by a similar feature DxO was very proud of a while back). After processing a few carefully underexposed RAW images* in both programs, Lightroom gets me results that Aperture simply can’t match (it also makes it very tempting to make the kind of over-processed images you see everywhere these days with amped up colors, quasi-HDR effects, and exaggerated micro-contrast).

(* Quite a few years ago someone I respect suggested that it’s a good idea to “underexpose” most outdoor shots by one stop to keep more highlight detail. This is especially important if the sky is visible. These days, everyone seems to be on the “ISO Invariance” bandwagon, which essentially means applying no gain to the signal off the sensor (no boosted effective ISO) when capturing RAW; in essence, “expose to the left” automatically — the exact opposite of the “expose to the right” bandwagon these clowns were all on two years ago — here’s a discussion of doing both at the same time. Hilarious. On the bright side, ISO Invariance pretty much saves ETTR nuts from constantly blowing their highlights.)

The Photos App is far more competitive with Lightroom than Aperture. And its UI is simply out of Lightroom’s league (see those filters on the right? Lightroom simply has a list of names).

The funny thing, though, is that the new Photos app gives Lightroom a much better run for its money (um, it’s free), has Aperture’s best UI feature (organic customization), and everything runs much faster than in Lightroom. The problem with Photos is that it is missing key features of Lightroom, e.g. Dehaze, Clarity, and (most curiously) Vibrance. You just can’t get as far with Photos as you can with Lightroom. (If you have Affinity Photo you can use its Dehaze more-or-less transparently from Photos. It’s a modal, but then Lightroom is utterly modal.)

On the UI level, though, Photos simply spanks Lightroom’s Develop mode. Lightroom’s organization tools, clearly with many features requested by users, are completely out of Photos’ league.

I also tried Darktable (the open source Lightroom replacement) for comparison. I think its user interface is in many ways nicer than Lightroom’s — it looks and feels better — although much of its lack of clutter is owed to a corresponding lack of features. But the sad news is that Darktable’s image-processing capabilities don’t even compete with Aperture, let alone Photos. (One thing I really like about Darktable is that it applies “orientation” (automatic horizon leveling), “sharpen”, and “base curve” automagically by default. Right now this isn’t customizable — there’s a placeholder dialog — but if it were it would be an awesome feature.)

The lack of fit and finish in Lightroom is unprofessional and embarrassing. If it’s not obvious to you, the red line shows the four different baselines used for the UI elements.
This is hilarious. Lightroom’s “About box” uses utterly non-standard buttons that behave like tab selectors. This is actually a regression for Adobe, which used to really take pride in its About boxes.

At bottom, Lightroom doesn’t look or feel like an application developed by or for professionals. It’s very capable, but its design is — ironically — horrible.

Photoshop

Photoshop’s capabilities are, by and large, unmatched, but its UI wasn’t good when it first came out and many of its worst features have pretty much made it through unscathed by taste, practicality, or a sense of a job well done. Take a look at this gem:

Adobe Photoshop's horrible Radial Blur dialog
Adobe Photoshop’s horrible Radial Blur dialog

This was an understandably frustrating dialog back in 1991 — in fact the attempt to provide visual cues with the lines was probably as much as you could expect — but it hasn’t changed since: every other application I use provides a GPU-accelerated live preview (in Acorn it’s non-destructive too). What’s even worse is that it looks like the dialog’s layout has been slightly tweaked to allow for too-large-and-non-standard buttons (with badly centered captions that would look worse if there were a glyph with a descender in it). At least Photoshop doesn’t waste a buttload of space on a mode menu: instead there’s a small popup that lets you pick which (customizable) “workspace” you want to use, and the rest of the bar is actually useful (it shows common settings for the currently selected tool).

In the end, Photoshop at least looks reasonably nice, and its UI foibles are things I’ve grown accustomed to over twenty-five years.

I can’t wait until I get to experience Adobe’s Updater…