Undo

If there’s one feature that almost every non-trivial application should have, but which is overlooked, left as an exercise, or put aside as “too hard” by web frameworks, it’s undo.

I do not have much (any) love for Redux, but whenever I think about it, I wonder “why isn’t undo support built in out of the box”?

To its credit, the Redux folks do actually address Undo in their documentation. “It’s a breeze” the document says at the top, and it’s a mere 14 pages later that it concludes “This is it!”.

I’ve pretty much never seen a React/Redux application that implements undo. Since it’s such a breeze, I wonder why. My guesses:

  1. The words “service” and “back-end” do not appear in the document.
  2. It doesn’t look like a breeze to me. A 14-page document describing the implementation of a core concept in a toy application sounds more like a tornado.
  3. Finally, every non-trivial Redux application I’ve ever seen is an incomprehensible mess. (Once multiple Reduxes get composed together, you’re left with multidimensional spaghetti code of the ‘come-from’ variety.) Adding undo into this mix as an afterthought strikes me as a Kafkaesque nightmare.

It’s since been pointed out to me that there are libraries around, such as redux-undo, that wrap the implementation described in the Redux documentation (or variants thereof). The example application is a counter; I don’t know of, nor have I seen, any non-trivial examples.

“Come-from”? Come again?

Programmers by now fall into two groups: folks who have either read, or know by reputation, the article Goto considered harmful — possibly the most famous rant in the history of computer science — and folks who have grown up in a world where the entire concept of goto has been hushed up like a dark family secret.

come-from means code that is triggered by a foreign entity — you don’t know when. You can think of it as an uncharitable description of the Observer Pattern. It’s the dual of goto. There are two common examples employed widely in user interfaces: event-handling and pub-sub.

Event-handling is necessitated by the way human-computer interaction works — you’re talking to the analog world, it’s not going to be pretty. The user will do something, it’s generally unrelated to what, if anything, your code was doing. You need to deal with it.

Pub-sub can be self-inflicted (i.e. used when there’s no reason for it), but is often necessitated by the realities of asynchronous and unreliable communication between different entities. Again, that’s not going to be pretty. You make a request, you make other requests. Time passes. Things change. Suddenly you get a response…

Redux is a state management system, so it is natural that it implements the Observer Pattern, but it adds the additional garnish that the mapping between action names and what parts of your application’s state they modify can be — and often is — completely arbitrary.

implementing undo in b8r should be a breeze!?

If you stand back a bit, b8r‘s registry is similar to redux, except that instead of your state being a quasi-immutable object which is swapped out by “actions” (each action being a name linked to a function that creates a new object from the old object), with b8r your state is the registry (or some subset thereof) and your actions are paths.

In my opinion, b8r wins here because you generally don’t need to implement your actions; when you do, they still act on the registry in the expected way; and instead of an arbitrary map of action names to underlying functions, the path tells you exactly which bit of state you’re messing with.
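As a trivial, concrete illustration (the registry names here are made up):

import b8r from '../source/b8r.js'

b8r.register('app', { user: { name: 'Ada' } })  // put some state in the registry
b8r.set('app.user.name', 'Grace')               // the "action" is just a path assignment
b8r.get('app.user.name')                        // 'Grace' — and anything bound to that path updates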

For purposes of implementing Undo, however, Redux seems to have an advantage out of the gate in that it is (partially) cloning state every time something changes. So all you need to do is maintain a list of previous states, and you’re golden, right?
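(Something like the following shape — the past/present/future naming here echoes redux-undo, but it’s purely illustrative:)

const history = {
  past: [],     // previous states, oldest first
  present: {},  // the current state
  future: []    // populated as the user undoes, so redo is possible
}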

What should it require to implement Undo (in b8r)?

To my mind, it should be something like this:

import {UndoManager} from 'perfectUndoLib'
new UndoManager('path.to.foo', {
    path: 'foo-history',
    onUndo: ...,
    onRedo: ...,
    onUndoRedo: ...,
    onHandleChange: ...
});
b8r.get('foo-history.undoQueue.length') // number of undoables
b8r.get('foo-history.redoQueue.length') // number of redoables
b8r.call('foo-history.undo') // undoes the last thing
b8r.call('foo-history.redo') // redoes the last thing
b8r.call('foo-history.reset') // clears queues (e.g. after saving a file)

The first parameter could allow a list of paths to include in the undo manager’s purview, while also allowing multiple independent undo chains using multiple undo managers (e.g. some advanced graphics applications separate out changes in viewport from changes in the underlying scene).
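For instance (a sketch of the proposed API, with made-up paths):

// one undo chain for the underlying scene, another for the viewport
const sceneHistory = new UndoManager(['scene.objects', 'scene.materials'])
const viewportHistory = new UndoManager('viewport.camera')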

The path option would probably default to lastThing-history, so explicitly passing 'foo-history' above would be superfluous.

Ideally, the callback options would allow async functions to do things like perform a service call to keep the store consistent with the interface, curate the undo and redo queues, handle errors, and so on. (E.g. TextMate used to do single-keystroke undo, which was infuriating when going back through a large edit history; addressing this appears to have paralyzed development and sidelined what had been the darling text editor of the hipster screencast brogrammer crowd.)
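For example, an onUndo callback might need to tell a back-end service what happened before the undo is allowed to proceed — a sketch with a hypothetical endpoint (as in the implementation below, a falsy return value cancels the operation):

new UndoManager('path.to.foo', {
  onUndo: async () => {
    try {
      await fetch('/api/undo', { method: 'POST' })  // hypothetical service call
      return true                                   // truthy: the undo proceeds
    } catch (e) {
      return false                                  // falsy: the undo is cancelled
    }
  }
})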

Implementation Details

At this point in our story, I hadn’t written a single line of code.

I have, however, been thinking about this for several years, which is more important. I’ve implemented several different non-trivial applications (RiddleMeThis, two word-processors, and a non-destructive image editor) with undo support. I think it’s worthwhile when saying something like “I wrote this code in 2h, am I not a true 100x coder!?” to acknowledge, at least to oneself, that it crystallized thoughts collected over a longer period (in this case several years), or whatever.

E.g. I wrote the core of b8r in less than two weeks, but it was a rewrite from scratch of a library I had written over a period of several months while at Facebook which was itself a rewrite of a rewrite of code I had written at USPTO.

b8r — or rather its registry — already has a robust observe method that informs you when anything happens to a particular path (or its subpaths), so something like:

import b8r from '../source/b8r.js'
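…followed by a registration and an observe call along these lines (a sketch with made-up names, mirroring how the UndoManager below wires up its own observer) is most of the plumbing we need:

b8r.register('foo-history', {
  observer: pathChanged => console.log(pathChanged, 'changed')  // called with the path that changed
})
b8r.observe('path.to.foo', 'foo-history.observer')              // watch the path (and its subpaths)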

Of course we’re going to need to deep clone state because — unlike with Redux — we can’t just grab the current state object.

On the plus side, we can just keep a copy of the subtree that changed. Keeping a clone of the entire state in memory every time something changes can get hairy.

Aside: two of the three serious implementations of undo I’ve been responsible for have been (a) a long-form word-processor that handled both annotations and multi-user edit history and (b) a non-destructive image-editor. And, in fact, two things I’ve been thinking about putting together as side projects are a long-form book-publishing system (which aims to publish on web and ePub vs. print/pdf/etc.) and a photography-workflow application, neither of which is going to play nicely with a naive “every undo step is the state of your app” approach.

Here is the code actually committed to the repository in the course of writing this post. Or you can see it in the inline documentation system.

After a quick visit to StackOverflow and Google, it looks at first approximation like the most robust option is “the dumb thing”:

// the dumb thing: fine for JSON-serializable state (functions, undefined values,
// and Dates will not survive the round-trip intact)
const deepClone = (thing) => JSON.parse(JSON.stringify(thing));

OK so now for the meat!

export class UndoManager {
  constructor (watchPaths, options={}) {
    watchPaths = [watchPaths].flat()
    const {
      historyPath,
      onUndo,
      onRedo,
      onUndoRedo,
      onHandleChange,
    } = {
      historyPath: '_' + watchPaths[0].split('.').pop() + '_history',
      ...options
    }

We expose the buffers and methods in the registry, which makes binding controls to undo and redo super simple. The observer is how we get notified when state we care about changes.

    b8r.register(historyPath, {
      undoBuffer: [],
      redoBuffer: [],
      undo: this.undo,
      redo: this.redo,
      observer: this.handleChange.bind(this)
    });

    Object.assign(this, {
      watchPaths,
      historyPath,
      onUndo: onUndo || onUndoRedo,
      onRedo: onRedo || onUndoRedo,
      onHandleChange
    })

    this.reset()
    this.observe()
  }

  reset () {
    this.undoBuffer.splice(0)

    this.undoBuffer.push(this.watchPaths.reduce(
      (state, watchPath) => {
        state[watchPath] = deepClone(b8r.get(watchPath))
        return state;
      },
      {}
    ));

    b8r.touch(this.historyPath)
  }

b8r.observe informs you of state-changes after they happen. (For perf reasons, changes to the registry are queued and handled asynchronously, so if one or more registry entries are changed rapidly, each relevant observer will only be fired once.) So if we wait to collect state until the first user action, we won’t be able to restore our original state — which is why reset() pushes the initial state onto the undo buffer.

  observe () {
    this.watchPaths.forEach(path => b8r.observe(path, `${this.historyPath}.observer`))
  }

  unobserve () {
    this.watchPaths.forEach(path => b8r.unobserve(path, `${this.historyPath}.observer`))
  }

  get undoBuffer () {
    return b8r.get(`${this.historyPath}.undoBuffer`)
  }

  get redoBuffer () {
    return b8r.get(`${this.historyPath}.redoBuffer`)
  }

restore doesn’t care where state is coming from; its job is just to restore it without triggering handleChange, and to notify anything bound to the instance that its state has changed.

  async restore () {
    this.unobserve()
    // const state = b8r.last(this.undoBuffer)
    // b8r.forEachKey(state, (value, key) => b8r.set(key, value))
    this.undoBuffer.forEach(state => {
      b8r.forEachKey(state, (value, key) => b8r.replace(key, deepClone(value)))
    })
    this.observe()
    b8r.touch(this.historyPath)
  }

It turns out there were a couple of subtle bugs in restore, both of which become obvious if the UndoManager (now called HistoryManager) is handling changes on more than one path. So restore() now replays all changes in the undo queue.

I’ve commented out the bad lines of code — the loop below fixes both bugs. There are obvious optimizations here which I have not yet implemented.

When the state of the world changes owing to user action, we need to put the new state on the undoBuffer and clear the redoBuffer (which is now obsolete).

  async handleChange (pathChanged) {
    if (this.onHandleChange && ! await this.onHandleChange()) return
    this.undoBuffer.push({
      [pathChanged]: deepClone(b8r.get(pathChanged))
    })
    this.redoBuffer.splice(0)
    b8r.touch(this.historyPath);
  }

  undo = async () => {
    if (this.undoBuffer.length === 1) return
    if (this.onUndo && ! await this.onUndo()) return false
    this.redoBuffer.push(this.undoBuffer.pop())
    await this.restore()
    return true
  }

  redo = async () => {
    if (this.redoBuffer.length === 0) return
    if (this.onRedo && ! await this.onRedo()) return false
    this.undoBuffer.push(this.redoBuffer.pop())
    await this.restore()
    return true
  }

  destroy() {
    this.unobserve()
    b8r.remove(this.historyPath)
  }
}

And undo/redo are now super simple to implement. E.g. this example is now in the documentation of undo.js:

<button 
  data-event="click:_simple-undo-example_history.undo"
  data-bind="enabled_if=_simple-undo-example_history.undoBuffer.length"
>undo</button>
<button 
  data-event="click:_simple-undo-example_history.redo"
  data-bind="enabled_if=_simple-undo-example_history.redoBuffer.length"
>redo</button><br>
<textarea data-bind="value=simple-undo-example.text"></textarea>
<script>
  const {UndoManager} = await import('../lib/undo.js');
  b8r.register('simple-undo-example', {
    text: 'hello undo'
  })
  const undoManager = new UndoManager('simple-undo-example')
  set('destroy', () => {
    b8r.remove('simple-undo-example');
    undoManager.destroy()
  })
</script>

And that’s it!

I’ve walked through the general-purpose implementation of undo in b8r and it looks to me like you can use this to implement undo in a typical b8r application in a single line of code (plus whatever extra lines it takes to put in and add the widgets to your view and bind them). I’d say that qualifies as a breeze — at least until you get to dealing with services.

b8r goes pure(ish) Javascript

b8r now allows components to be implemented entirely in Javascript, with no eval

In an effort to make b8r work in the universe of “everything must be javascript” web development tools, I’ve finally added provision for pure javascript components in b8r.

Now, there are four really good reasons for doing this.

First of all, the way b8r’s HTML-based components work intrinsically involves using eval (it actually uses the AsyncFunction constructor, but same difference). Whether this represents a real security threat or not depends a lot on how those components are in fact loaded, but it’s definitely an argument.

Second — and this has proven a small but persistent pain point with b8r development — it makes linters happy. To begin with, the <script> tag inside a component was actually the body of an async function with a ton of parameters. Using these parameters without annoying the lint gods required adding /* global … */ at the top of the script, while the fact that — for convenience — the code was async meant that linters would scream about the use of import() and async/await.

Third — with HTML components, defining a single global controller for a family of components involved some jumping through hoops, as did, for example, using computed bindings or lists where the method was registered in the component body (this works fine, but generates error messages if you don’t programmatically add the bindings after the necessary methods are registered). Now you can just do the necessary work outside the component definition (but in the same file!)

Fourth — while b8r has moved over to the world of ES6 Modules, it is distributed in cjs form for those still living with require, webpack, etc. The problem is that HTML components don’t play nicely with javascript bundlers, whereas javascript components are just javascript so if you really want to force a giant slab of javascript down your users’ throats you can. Yay.

So, these issues are fixed if you use pure javascript components.

You are still writing CSS as CSS and HTML as HTML, but now it’s likely going to be inside back-ticks.
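The boilerplate ends up looking roughly like this — a sketch from memory, so treat the exact property names and the load signature as assumptions and check the b8r documentation rather than copying it verbatim:

export default {
  css: `
    ._component_ > button { color: red; }
  `,
  html: `
    <button data-event="click:_component_.clicked">click me</button>
  `,
  load: async ({ get, set, on }) => {
    // register a handler for the binding in the html above
    set('clicked', () => console.log('clicked'))
  }
}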

In addition to the four big arguments for building components this way, there are some further implications (mostly good).

First of all, if and when you need to use import() inside a component, you can use relative paths without wondering “relative to what” (it used to be the location of b8r.js, because that’s where that code ends up executing).

Next, you can now define multiple components in one source file — previously there was no convenient way to do this. It’s a mixed blessing: making it easy to define lots of small components, in source files whose names bear no relationship to the component names, presents a real risk of name-clashes. It may become necessary to introduce lint rules that prevent you from creating a component whose name does not match its source file in some way.

Finally, a really nice side-effect of this is that the boilerplate for a component is a lot more self-explanatory than it used to be. Yay!

Dark Mode with CSS Variables

All the cool kids are doing it, what’s the easiest way to support Dark Mode?

As little as four or so years ago, it was widely understood among front-end developers that CSS was a horrible problem, managing it was a nightmare, and Something Ought to be Done About It.

Back then, there were two popular “solutions”, LESS and SCSS, both of which let you write your styles in a superset of CSS — with things like named constants and nested rules — and transpile them down to plain CSS. There was also the drum beat of “all web development should be Javascript development”, which advocated replacing CSS with Javascript (that created CSS). Incidentally, these were the same people who thought “all Javascript development should be some weird superset of Javascript that needs to be transpiled into Javascript”.

Anyway, the big problem LESS, SCSS, and the CSS-should-be-Javascript folks tried to solve was managing consistency across large sets of rules (no-one, of course, questioning the idea of having such large sets of rules). The downside was that this enabled even bigger sets of rules. (The best reason for using LESS or SCSS was named constants; with those now part of CSS — as CSS Variables — it seemed clear to me that their reason for existence was largely gone.)

If you reduce the cost of something, folks will use more of it. LESS, SCSS, and styling with Javascript reduced the apparent cost of adding CSS rules to web apps.

E.g. on a project I was working on at the time, about 10kB of LESS became 280kB of CSS — and this was on a team that was (a) small, and (b) ruthlessly minimized CSS whenever it could.

At this point, I can safely say “a pox on all your houses”. With bindinator’s approach of using a solid set of global styles and component-specific styles that are easy to scope to specific components (without the use of a “shadow DOM”) we never ran into any of the thorny CSS issues that plague other projects I’ve worked on, and CSS Variables are just icing on the cake.

@media (prefers-color-scheme: dark) {
  :root {
    --content-bg-color: #222;
    /* ...and so on for the rest of the theme's variables */
  }
}

In fact, I was able to support “Dark Mode” in bindinator’s code base in a few hours over a couple of evenings by simply leaning hard into CSS Variables (essentially, cleaning up the ones I was already using and swapping out the instances of hard-coded colors I had missed).

I find it particularly hilarious because the reason I did this was after seeing that one of our organization’s Storybooks (for our React-based UI library) had been updated to support Dark Mode (I assume, painfully) and it does not work very well (it looks like the problem is that it renders server-side and only starts respecting “Dark Mode” after the script “hydrates”).

Not only does implementing dark mode with CSS variables work flawlessly (in Chrome, Safari, and Firefox at least), but it involves literally zero lines of Javascript, and the UI transitions beautifully if you toggle dark mode in your UI settings.

I was so pleased with how easily all this worked that I quickly implemented another idea I had in the back of my mind for years, which is a CSS Variable powered theme editor (the link is to the github-hosted site because I haven’t deployed it to the bindinator home page yet).

It’s worth noting that the path of least resistance for implementing Dark Mode with LESS, or SCSS, or React Styletron, or whatever, is still going to be CSS Variables. It also makes theming controls (including web-components, even within their shadow DOMs) a breeze.

snake_case, camelCase, and other Misadventures in Coding Standards

some_programmers_like_snake_case while othersPreferCamelCase (and dont-talk-to-me-about-css-or-html) but life really becomes entertaining when different programmers — sorry “software engineers” — enforce their preferences locally.

Consider a service, implemented in python (where snake_case_is_popular), that passes data to a web application, i.e. javascript (whereCamelCaseIsPopular). You might end up with a data structure like:

{
    my_id: 17,
    my_name: 'seventeen'
}

In modern Javascript, you could write something like:

const { my_id, my_name } = await getSomeData()

And get on with your life.

But it won’t lint, because someone has enforced camelCase in your project lint rules, so now you write:

const data = await getSomeData()
const myId = data.my_id
const myName = data.my_name

And now your code lints.

No problems!

One day, you discover a problem with the data in myName and it stymies you for a bit until you find the assignment statement that renamed the value and you’re reminded that it gets populated from a service with the property my_name and you figure it out.

No problems, right? Chalk it up to experience.

After a lot of this, you kind of wish the problem would just disappear, so you do something like write snakeToCamel which converts an object with snake_case properties into an object with camelCase properties, and now you can write:

const { myId, myName } = snakeToCamel(await getSomeData());

And your code is nearly as clean as if you, say, had turned off the camelCase lint rule in the first place. But hey, snake_case is nauseating.
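(For the record, such a snakeToCamel helper is typically a naive, shallow key rename along these lines — hypothetical code, and exactly the sort of dodgy property rewriting discussed at the end of this post:)

const snakeToCamel = obj =>
  Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [
      key.replace(/_([a-z])/g, (_, c) => c.toUpperCase()),  // my_name -> myName
      value
    ])
  )

snakeToCamel({ my_id: 17, my_name: 'seventeen' })  // { myId: 17, myName: 'seventeen' }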

Later, you modify the helper library that builds async data methods (after all, snakeToCamel called on a conforming object is a no-op, isn’t it?!) so that snakeToCamel always gets called, and now you can finally write:

const { myId, myName } = await getSomeData();

You can now write a new lint rule that flags redundant inline calls to snakeToCamel so that every time a file gets touched, the redundant calls are flagged, or flex your awesomeness and write a code-transform to remove them all in one glorious diff.

Now, your code is beautiful, it’s all in camelCase, and it’s sleek and elegant and oh-so-2017…

Except that there’s no clue in the code for someone hunting down a reference to myName that it might be called my_name deeper in the pipe (e.g. in the Network tab), and now you have a bunch of folks bikeshedding about how things like my_uuid or inner_html should be treated. Because innerHTML does not work well in the other direction, and the python folks have their own lint rules…

Also, deep in your service layer, some dodgy regex is rewriting every property name you ever see, and it’s probably completely clueless about unicode (quick quiz — which unicode characters are OK in javascript variable names?! I looked up the answer and you don’t really want to know.) So, that’s definitely never coming back to bite you.

But hey, at least you don’t have to look at a mixture of snake_case and camelCase in your source code, because the cost of that is… zero?

Blender 2.8 is coming

Unbiased rendering? Check. Realtime rendering? Check. Unified shader model? Check. Class-leading user interface? Check. Free, open source, small? Check. Blender 2.8 offers everything you want in a 3d package and nothing you don’t (dongles, copy protection, ridiculous prices, massive hardware requirements).

There aren’t many pieces of open source software that have been under continuous active development for twenty years without going through a single “major version change”. When I started using Blender in the early 2000s, it was version 2.3-something. In the last year it’s been progressing from 2.79 to 2.8 (I think technically the current “release” version is 2.79b — b as in the third 2.79 release, not beta).

What brought me to blender was a programming contract for an updated application which, in my opinion, needed an icon. I modeled a forklift for the icon in Silo 3D (which introduced me to “box-modeling”) but needed a renderer, and none of my very expensive 3d software (I owned licenses for 3ds max, ElectricImage, and Strata StudioPro, among other things) ran on my then-current hardware. Blender’s renderer even supported motion blur (kind of).

The blender I started using had a capable renderer that was comparatively slow and hard to configure, deep but incomprehensible functionality, and a user interface that was so bad I ended up ranting about it on the blender forums and got so much hatred in response that I gave up being part of the community. I’ve also blogged pretty extensively about my issues with blender’s user interface over the years. Below is a sampling…

Blender now features not one, not two, but three renderers. (And it supports the addition of more renderers via a plugin architecture.) The old internal scanline/ray-tracing engine is gone; in its place are a fast OpenGL viewport renderer (Workbench), a real-time game-engine-style shader-based renderer (Eevee), and a GPU-accelerated unbiased (physically-based) renderer (Cycles). All three are fully integrated into the editor view, meaning you can see the effects of lighting and procedural material changes interactively.

The PBR revolution has slowly brought us to a reasonably uniform conceptualization of what a 3d “shader” should look like. Blender manages to encapsulate all of this into one, extremely versatile shader (although it may not be the most efficient option, especially for realtime applications).

Eevee and Cycles also share the same shader architecture (Workbench does not) meaning that you can use the exact same shaders for both realtime purposes (such as games) and “hero renders”.

Blender 2.8 takes blender from — as of say Blender 2.4 — having one of the worst user interfaces of any general-purpose 3D suite, to having arguably the best.

The most obvious changes in Blender 2.8 are in the user-interface. The simplification, reorganization, and decluttering that has been underway for the last five or so years has culminated in a user interface that is bordering on elegant — e.g. providing a collection of reasonably simple views that are task-focused yet not modal — while still having the ability to instantly find any tool by searching (now command-F for find instead of space by default; I kind of miss space). Left-click to select is now the default and is a first-class citizen in the user interface (complaining about Blender’s right-click-to-select, left-click-to-move-the-“cursor”-and-screw-yourself behavior is literally what got me chased off Blender’s forums in 2005).

Blender still uses custom file-requesters that are simply worse in every possible way than the ones the host OS provides. Similarly, but less annoyingly, Blender uses a custom-in-window-menubar that means it’s simply wasting a lot of screen real estate when not used in full screen mode.

OK so the “globe” means “world” and the other “globe” means “shader”…

Blender relies a lot on icons to reduce the space required for the — still — enormous numbers of tabs and options, and it’s pretty hard to figure out what is supposed to mean what (e.g. the “globe with a couple of dots” icon refers to scene settings while the nearly identical “globe” icon refers to materials — um, what?). The instant search tool is great but doesn’t have any support for obvious synonyms, so you need to know that it’s a “sphere” and not a “ball” and a “cube” and not a “box” but while you “snap” the cursor you “align” objects and cameras.

Finally, Blender can still be cluttered and confusing. Some parts of the UI are visually unstable (i.e. things disappear or appear based on settings picked elsewhere, and it may not be obvious why). Some of the tools have funky workflows (e.g. several common tools only spawn a helpful floating dialog AFTER you’ve done something with the mouse that you probably didn’t want to do) and a lot of keyboard shortcuts seem to be designed for Linux users (ctrl used where command would make more sense).

The blender 2.8 documentation is pretty good but also incomplete. E.g. I couldn’t find any documentation of particle systems in the new 2.8 documentation. There’s plenty of websites with documentation or tutorials on blender’s particle systems but which variant of the user interface they’ll pertain to is pretty much luck-of-the-draw (and blender’s UI is in constant evolution).

Expecting a 3D program with 20 years of development history and a ludicrously wide-and-deep set of functionality to be learnable by clicking around is pretty unreasonable. That said, blender 2.8 comes close, generally having excellent tooltips everywhere. “Find” will quickly find you the tool you want — most of the time — and tell you its keyboard shortcut — if any — but won’t tell you where to find it in the UI. I am pretty unreasonable, but even compared to Cheetah 3D, Silo, or 3ds max (the most usable 3D programs I have previously used) I now think Blender more than holds its own in terms of learnability and ease-of-use relative to functionality.

Performance-wise, Cycles produces pretty snappy previews despite, at least for the moment, not being able to utilize the Nvidia GPU on my MBP. If you use Cycles in previews expect your laptop to run pretty damn hot. (I can’t remember which if any versions of Blender did, and I haven’t tried it out on either the 2013 Mac Pro/D500 or the 2012 Mac Pro/1070 we have lying around the house because that would involve sitting at a desk…)

Cranked up, Eevee is able to render well beyond the requirements for broadcast animated TV shows. This frame was rendered on my laptop at 1080p in about 15s. Literally no effort has been made to make the scene efficient (there’s a big box of volumetric fog containing the whole scene, with a spotlight illuminating a bunch of high-polygon models with subsurface scattering and screenspace reflections).

Perhaps the most delightful feature of blender 2.8 though is Eevee, the new OpenGL-based renderer, which spans the gamut from nearly-fast-enough-for-games to definitely-good-enough-for-Netflix TV show rendering, all in either real time or near realtime. Not only does it use the same shader model as Cycles (the PBR renderer) but, to my eye, for most purposes it produces nicer results and it does so much, much faster than Cycles does.

Blender 2.8, now in late beta, is a masterpiece. If you have any interest in 3d software, even or especially if you’ve tried blender in the past and hated it, you owe it to yourself to give it another chance. Blender has somehow gone from having a user interface that only someone with Stockholm Syndrome could love to an arguably class-leading user interface. The fact that it’s an open source project, largely built by volunteers, and competing in a field of competitors with, generally, poor or at best quirky user interfaces, makes this something of a software miracle.