No Man’s Sky Revisited

No Man’s Sky was originally released in 2016. I’d been waiting for it for nearly two years after seeing some early demos. This looked like a game I’d been day-dreaming about for decades.

It was one of the most disappointing games I’ve ever played.

I recently saw No Man’s Sky Beyond on sale in Best Buy (while shopping for microphones for our upcoming podcast) and immediately picked it up. Speaking of disappointing game experiences, the PS4 VR has been a gigantic disappointment ever since I finished playing Skyrim (which was awesome). Why there haven’t been more VR updates of great games from previous generations (e.g. GTA IV) escapes me, because second-rate half-assed new VR games do not impress me.

Anyway, I did not realize that (a) No Man’s Sky Beyond is merely the current patched version of No Man’s Sky, and (b) that the VR mode is absolutely terrible. But the current, fully patched version of No Man’s Sky is a huge improvement over the game I was horribly disappointed by back in 2016. It’s still not actually great, but it’s decent, and I can see myself coming back to it now and then when I want a fairly laid-back SF fix.

High points:

  • There’s an arc quest that introduces core gameplay elements in a reasonably approachable way (although the start of the game is still kind of brutal)
  • There are dynamically generated missions
  • The space stations now actually kind of make sense
  • Base construction is pretty nice
  • There’s a kind of dumb “learn the alien languages” subgame
  • Planets have more interesting stuff on them

Low points:

  • Space is monotonous (star systems comprise a bunch of planets, usually at least one with rings, in a cloud of asteroids, all next to each other). Space stations seem to look like D&D dice with a hole in one side (minor spoiler: there’s also the “Anomaly” which is a ball with a door).
  • Planets are monotonous — in essence you get a color scheme, a hazard type (radiation, cold, heat — or no hazard occasionally), one or two vegetation themes, one or two mobility themes for wildlife, and that’s about it. (If there are oceans, you get extra themes underwater.) By the time you’ve visited five planets, you’re seldom seeing anything new.
  • Ecosystems are really monotonous (why does the same puffer plant seem to be able to survive literally anywhere?)
  • The aliens are just not very interesting (great-looking though)
  • On the PS4 the planet atmospheres look like shit
  • The spaceship designs are pretty horrible aesthetically — phone booth bolted to an erector set ugly.
  • Very, very bad science (one of my daughters was pissed off that “Salt” which was labeled as NaCl could not be refined into Sodium which — mysteriously — powers thermal and radiation protection gear). Minerals and elements are just used as random herbal ingredients for a potion crafting system that feels like it was pulled out of someone’s ass.
  • Way, way too much busywork, e.g. it’s convenient that “Silica Powder” can fuel your Terrain modifier tool (which generates Silica Powder as a byproduct of use) but why put in the mechanic at all? Why do I need to assemble three things over and over again to fuel up my hyperdrive? Why do I keep on needing to pause construction to burn down trees to stock up on carbon?

The audacity of building a game with a huge, fractally detailed universe is not what it once was. It’s an approach many developers took out of necessity in an era when memory and storage were simply too limited to store handmade content, and budgets were too small to create it — Elite, Akalabeth, Arena, Pax Imperia, and so on — but it’s disappointing to see a game built this way with far more capable technology, more resources, and greater ambition keep failing to deliver in so many (to my mind) easily addressable ways. When No Man’s Sky was first demoed, my reaction was “wow, that’s gorgeous and impressive, I wonder where the gameplay is”. When it actually came out, two years later, my reaction was “hmm, not as gorgeous as the demo, and there’s basically no gameplay”. As of No Man’s Sky Beyond — well, the gameplay is now significantly better than the original Elite (or, in my opinion, Elite Dangerous) — which is not nothing.

As a final aside, one day I might write a companion article about Elite Dangerous, a game in many ways parallel to No Man’s Sky. The main reason I haven’t done so already is that I found Elite Dangerous so repellant that I quit before forming a complete impression. Ironically, I think Elite Dangerous is in some ways a better No Man’s Sky and No Man’s Sky is a better Elite.

Data-Table Part 2: literate programming, testing, optimization

I’ve been trying to build the ideas of literate programming into b8r (and its predecessor) for some time now. Of course, I haven’t actually read Knuth’s book or anything. I did play with Oberon for a couple of hours once, and I tried to use Light Table, and I have had people tell me what they think Knuth meant, but what I’ve tried to implement is the thing that goes off in my head in response to the term.

One of the things I’ve grown to hate is being forced to use a specific IDE. Rather than implement my own IDE (which would (a) be a lot of work and (b) at best produce Yet Another IDE people would hate being forced to use) I’ve tried to implement stuff that complements whatever editor, or whatever, you use.

Of course, I have implemented it for bindinator, and of course that’s pretty specific and idiosyncratic itself, but it’s a lot more real-world and less specific or idiosyncratic than — say — forcing folks to use a programming language no-one uses on an operating system no-one uses.

Anyway, this weekend’s exploit consisted of writing tests for my debounce, throttle, and — new — throttleAndDebounce utility functions. The first two functions have been around for a long time and are used extensively. I was pretty sure they worked — but the problem is that the failure modes of such functions are pretty subtle.

Since all that they really do is make sure that if you call some function a lot of times it will not get called ALL of those times, and — in the case of debounce — that the function will be called at least once after the last time you call it, I thought it would be nice to make sure that they actually worked exactly as expected.

Spoiler alert: the functions did work, but writing solid tests around them was surprisingly gnarly.
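
For reference, textbook versions of the two look something like this (just a sketch to pin down the semantics; b8r’s actual implementations differ in their details):

// debounce: fn fires once, interval ms after the *last* call
const debounce = (fn, interval = 100) => {
  let timeout = null
  return (...args) => {
    clearTimeout(timeout)
    timeout = setTimeout(() => fn(...args), interval)
  }
}

// throttle: fn fires at most once per interval; calls in between are dropped
const throttle = (fn, interval = 100) => {
  let last = 0
  return (...args) => {
    const now = Date.now()
    if (now - last >= interval) {
      last = now
      fn(...args)
    }
  }
}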

The reason I suddenly decided to do this was that I had been trying to optimize my data-table (which is now virtual), and also show off how awesome it is, by adding a benchmark for it to my performance tools. This is a small set of benchmarks I use to check for performance regressions. In a nutshell it compares b8r’s ability to handle largish tables with vanilla js (the latter also cheating, e.g. by knowing exactly which DOM element to manipulate), the goal being for b8r to come as close to matching vanilla js in performance as possible.

I should note that these timings include the time spent building the data source, which turns out to be a mistake for the larger row counts.

So, on my laptop:

  • render table with 10k rows using vanilla js — ~900ms
  • render table with 10k rows using b8r — ~1300ms
  • render table with 10k rows using b8r’s data-table — ~40ms

Note: this is the latest code. On Friday it was a fair bit worse — maybe ~100ms.

Anyway, this is pretty damn cool. (What’s the magic? b8r is only rendering the visible rows of the table. It’s how Mac and iOS applications render tables. The lovely thing is that it “just works” — it even handles momentum scrolling — like iOS.)
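
The core of the trick is small enough to sketch (this is an illustration, not biggrid’s actual code):

// given the scroll position, work out which rows fall inside the viewport
// and only hand that slice to the DOM; everything else is never rendered
const visibleSlice = (rows, scrollTop, rowHeight, viewportHeight) => {
  const first = Math.floor(scrollTop / rowHeight)
  const count = Math.ceil(viewportHeight / rowHeight) + 1
  return rows.slice(first, first + count)
}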

Virtual javascript tables are nothing new. Indeed, I’ve had two implementations in b8r’s component examples for three years. But data-table is simple, powerful, and flexible in use.

So I added a 100k row button. Still pretty fast.

A little profiling revealed that it was wasting quite a bit of time on recomputing the slice and type-checking (yes, the b8r benchmarks include dynamically type-checking all of the rows in the table against a heterogeneous array type where the matching type is the second type checked). So, I optimized the list computations and had the dynamic type-checking only check a subsample of arrays > 110 rows.

  • render table with 100k rows using b8r’s data-table — ~100ms

So I added a 1M row button. Not so fast. (Worse, scrolling was pretty bad.)

That was Thursday. And so in my spare time I’ve been hunting down the speed problems because (after all) displaying 40 rows of a 1M row table shouldn’t be that slow, should it?

Turns out that slicing a really big array can be a little slow (>1/60 of a second), and the scroll event that triggers it gets triggered every screen refresh. Ouchy. It seemed to me that throttling the array slices would solve this problem, but throttle was originally written to prevent a function from being called again within some interval of the last call starting (versus finishing). So I modified throttle to delay the next call based on when the function finishes executing.

Also, you don’t want to use throttle, because then the last call to the throttled function (the array slice) won’t happen, which could leave you with a missing array row on the screen. You also don’t want to use debounce, because then the list won’t update until it stops scrolling, which is worse. What you want is a function that is throttled AND debounced. (And composing the two does not work — either way you end up debounced.)
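
In other words, the desired behavior is roughly this (a sketch of the semantics, not the code that ended up in b8r):

// throttleAndDebounce: fire immediately, drop calls during the interval,
// but always schedule one trailing call with the most recent arguments
const throttleAndDebounce = (fn, interval = 100) => {
  let timeout = null
  let last = 0
  return (...args) => {
    clearTimeout(timeout)
    const now = Date.now()
    if (now - last >= interval) {
      last = now
      fn(...args)
    } else {
      // the trailing call guarantees the final update is never lost
      timeout = setTimeout(() => {
        last = Date.now()
        fn(...args)
      }, interval - (now - last))
    }
  }
}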

So I needed to write throttleAndDebounce and convince myself it worked. I also wanted to convince myself that the new, subtly different throttle worked as expected. And since the former was going to be used in data-table, which I think is going to be a very important component, I really wanted to be sure it’s solid.

  • render table with 1M rows using b8r’s data-table — ~1100ms

Incidentally, ~600ms of that is rendering the list, the rest is just building the array.

At this point, scrolling is smooth but the row refreshes are not (i.e. the list scrolls nicely but the visible row rendering can fall behind); live filtering the array is acceptably fast. Not shabby, but perhaps meriting further work.

Profiling now shows the last remaining bottleneck I have any control over is buildIdPathValueMap — this is a function that optimizes id-path lookup deep in the bowels of b8r. (b8r implements the concept of a data-path, which lets you uniquely identify any part of a javascript object much as you would in javascript, e.g. foo.bar.baz[17].lurman, except that inside the square brackets you can put an id-path such as app.messages[uuid=0c44293e-a768-4463-9218-15d486c46291].body and b8r will look up that exact element in the array for you.)
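
To make that concrete (the data below is made up):

b8r.register('app', {
  messages: [
    {uuid: 'aaa-111', body: 'hello'},
    {uuid: 'bbb-222', body: 'world'},
  ],
})
b8r.get('app.messages[1].body')            // ordinary path: index into the array
b8r.get('app.messages[uuid=bbb-222].body') // id-path: find the matching element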

The thing is, buildIdPathValueMap is itself an optimization specifically targeting big lists that was added when I first added the benchmark. b8r was choking on big lists and it was the id-path lookups that were the culprit. So, when you use an id-path on a list, b8r builds a hash based on the id-path once, so you don’t need to scan the list each time. The first time the hash fails, it gets regenerated (for a big, static list the hash only needs to be generated once). For my 1M row array, building that hash takes ~300ms. Thing is, it keeps regenerating that hash.
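
Conceptually, the map it builds is just a value-to-index lookup, something along these lines (a rough sketch, not b8r’s actual code; the real function uses getByPath so the id-path can be nested):

// map each item's id-path value to its index so id-path lookups
// don't have to scan the whole list every time
const buildValueToIndexMap = (list, idPath) => {
  const map = {}
  list.forEach((item, index) => {
    map[item[idPath]] = index // sketch assumes a simple key like 'uuid'
  })
  return map
}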

So, eventually I discovered a subtle flaw in biggrid (the thing that powers the virtual slicing of rows for data-table). b8r uses id-paths to optimize list-bindings so that when a specific item in an array changes, only the exact elements bound to values in that list item get updated. And, to save developers the trouble of creating unique list keys (something ReactJS developers will know something about), b8r allows you to use _auto_ as your id-path and guarantees it will efficiently generate unique keys for bound lists.

b8r generates _auto_ keys in bindList, but bindList only generates keys for the list it sees. biggrid slices the list, exposing only visible items, which means _auto_ is missing for hidden elements. When the user scrolls, new elements are exposed which b8r then detects and treats as evidence that the lookup hash for the id-path (_auto_) is invalid. A 300ms thunk later, we have a new hash.

So, the fix (which impacts all filtered lists where the _auto_ id-path is used and the list is not initially displayed unfiltered, not just biggrid) is to force _auto_ keys to be generated for the entire list when the hash is created. As an added bonus, since this only applies to one specific property (albeit a common case), in that case we can avoid using getByPath to evaluate the value of each list item at the id-path and just write item._auto_ instead.

  • render table with 1M rows using b8r’s data-table — ~800ms

And now scrolling does not trigger Chrome’s warning about scroll event handlers taking more than a few ms to complete. Scrolling is now wicked fast.

Must all data-tables kind of suck?

b8r's data-table in action
b8r now has a data-table component. It’s implemented in a little over 300 loc and supports custom header and content cells, fixed headers, a correctly-positioned scrollbar, sorting, resizing, and showing/hiding of columns.

Bindinator, up until a few weeks ago, did not have a data-table component. It has some table examples, but they’ve not seen much use in production, because b8r is a rewrite of a rewrite of a library named bindomatic that I wrote for the USPTO; bindomatic was originally built to do all the data-binding outside of the data-tables in a large, complex project, and then it ended up replacing the data-tables too.

It turns out that almost any complex table ends up being horrifically special-cased, and bindomatic, by being simpler, made it easier to build complex things bespoke. (I should note that I also wrote the original data-table component that bindomatic replaced.)

Ever since I first wrote a simple file-browser using b8r-native I’ve been wishing to make the list work as well as Finder’s windows. It would also benefit my RAW file manager if that ever turns into a real product.

The result is b8r’s data-table component, which is intended to scratch all my itches concerning data-tables (and tables in general) — e.g. fixed headers, resizable columns, showing/hiding columns, sorting, affording customization, etc. — while not being complex or bloated.

At a little over 300 loc I think I’ve avoided complexity and bloat, at any rate.

As it is, it can probably meet 80% of needs with trivial configuration (which columns do you want to display?), another 15% with complex customization. The remaining 5% — write your own!

Implementation Details

The cutest trick in the implementation is using a precisely scoped css-variable to control a css-grid to conform all the column contents to desired size. In order to change the entire table column layout, exactly one property of one DOM element gets changed. When you resize a column, it rewrites the css variable (if the number of rows is below a configurable threshold, it live updates the whole table, otherwise it only updates the header row until you’re done).
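
The gist looks something like this (hypothetical class and variable names, not data-table’s actual markup or code):

// assumed CSS, with made-up names:
//   .data-table .t-row { display: grid; grid-template-columns: var(--columns); }
// every header and content row shares the same column template, so resizing a
// column means rewriting exactly one property on one DOM element
const setColumnWidths = (tableElement, widths) => {
  tableElement.style.setProperty('--columns', widths.join(' '))
}

// e.g. setColumnWidths(document.querySelector('.data-table'), ['40px', '2fr', '1fr', '120px'])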

Another cute trick is to display the column borders (and the resizing affordance) using a pseudo-element that’s 100vh tall and clipped to the table component. (A similar trick could be used to hilite columns on mouse-over.)

Handling the actual drag-resizing of columns would be tricky, but I wrote b8r’s track-drag library some time ago to manage dragging once-and-for-all (it also deals with touch interfaces).

Next Steps…

There’s a to-do list in the documentation. None of it’s really a priority, although allowing columns to be reordered should be super easy and implementing virtualization should also be a breeze (I scratched that itch some time back).

The selection column shown in the animation is actually implemented using a custom headerCell and custom contentCell (there’s no selection support built in to data-table). I’ll probably improve the example and provide it as a helper library that provides a “custom column” for data-table and also serves as an example of adding custom functionality to the table.

I’d like to try out the new component with some hard-core use-cases, e.g. the example RAW file manager (which can easily generate lists of over 10,000 items), and the file-browser (I have an idea for a Spotlight-centric file browser), and — especially — my galaxy generator (since I have ideas for a web-based MMOG that will leverage the galaxy generator and use Google Firebase as its backend).

Apple Music… Woe Woe Woe

Meryn Cadell’s “Sweater” — I particularly recommend “Flight Attendant” and “Job Application” but they don’t have video clips that I can find.

We’ve been an Apple Music family pretty much from day one. I used to spend a lot of time and money in stores shopping for albums. Now I have access to pretty much everything there is (including comedy albums) for the price of a CD per month.

I love the fact that my kids can just play any music they like and aren’t forced to filter down their tastes to whatever is in our CD collection, or their friends like, or what’s on commercial radio (not that we listen to commercial radio).

I also love the fact that when I get a new Apple device it just effortlessly has everything in my library on the device the next day.

But…

As I said, I used to spend quite a bit of time buying music. So I have some unusual stuff. Also, I lived in Australia for a long time, so I have a lot of stuff that isn’t available in the US or is available in subtly different form in the US. Similarly, my wife is a huge David Bowie fan and has some hard-to-get Bowie albums, e.g. Japanese and British imports.

We’ve both been ripping CDs for a long time, and in 2012 we ripped everything we hadn’t already ripped as part of packing for a move.

So now we have music that isn’t quite recognized by iTunes. To some extent it gets synced across our devices, probably via a process that went like this:

  1. (Before Apple Music existed) explicitly send music to iPhone
  2. When we get new phone, restore phone from backup on Mac.
  3. (Later) Restore phone from cloud backup.
  4. (Apple Music arrives) Hey, as a service we’ll look through your music library and match tracks to the thing we think it is in iTunes and rather than waste backup space, we’ll simply give you copies of our (superior!?) version. Oh, yeah, it’s DRMed because we need to disable the tracks if you stop paying a subscription fee.

Now this mostly works swimmingly. But sometimes we encounter one of three failure modes:

  • You have something Apple Music doesn’t recognize
  • You have something Apple Music misrecognizes (this also happened in the old days, when the hacky way iTunes (using the CDDB et al.) identified tracks would sometimes misidentify an album and misname it and all its tracks, but you could fix it and rename them manually)
  • You have something Apple Music recognizes correctly as something it doesn’t sell in your current region, and disables your ability to play it!

The first failure means that Apple may be able to restore the track from a backup of a device that had it, but it won’t restore it otherwise. So you have to find a machine with the original (non-DRMed) file and fix it.

The second failure means that Apple’s Music (iTunes on a Mac) application will start playing random crap. If you’re lucky you can find the correct thing in Apple Music and just play that instead, but now you’re stuck with Apple’s DRM and may even end up losing track of or deleting your (non-DRMed) version.

The third mode is particularly pernicious. I have a fantastic album by Canadian performance artist Meryn Cadell (Angel Food for Thought) that I bought after hearing a couple of the tracks played on Phillip Adams’ “Late Night Live” radio program many years ago. I freaking love that album. For years, Apple would sync the album from device to device because it had no freaking clue what it was…

But sometime recently, Apple added Meryn Cadell’s stuff to Apple Music. As far as I can tell, it has added a second album (that I didn’t know existed) to the Apple Music US region, but not Angel Food for Thought. So it knows that Angel Food for Thought exists but it won’t let me play it.

Now, I happen to know where my backups are. I fired up iTunes on my old Mac Pro and there’s Angel Food for Thought. It plays just fine. Then I turned on “sync to cloud” and all the tracks get disabled. It’s magical, but not in a good way.

This is ongoing… I will report further if anything changes.

Update

After escalation, I’ve been told iTunes is working as intended. So, basically, it will play music it can’t identify or that is DRM-free and already installed, but what it won’t do is download and play music from iTunes that (it thinks) matches music that (it thinks) you have if it doesn’t have the rights to that music in your jurisdiction.

So, I have the album “Angel Food for Thought” which iTunes used not to know about, so it just worked. But, now iTunes knows that it exists BUT it doesn’t have US distribution rights, so it won’t propagate copies of “Angel Food for Thought” to my new devices (but it won’t stop me from manually installing them). Super annoying, but not actively harmful.

It does seem to mean that there’s a market for something that lets you stick all your own music in iCloud and stream it for you.

Undo

If there’s one feature almost every non-trivial application should have which is overlooked, left as an exercise, or put aside as “too hard” by web frameworks, it’s undo.

I do not have much (any) love for Redux, but whenever I think about it, I wonder “why isn’t undo support built in out of the box”?

To its credit, the Redux folks do actually address Undo in their documentation. “It’s a breeze” the document says at the top, and it’s a mere 14 pages later that it concludes “This is it!”.

I’ve pretty much never seen a React/Redux application that implements undo. Since it’s such a breeze, I wonder why. My guesses:

  1. The words “service” and “back-end” do not appear in the document.
  2. It doesn’t look like a breeze to me. A 14-page document describing the implementation of a core concept in a toy application sounds more like a tornado.
  3. Finally, every non-trivial Redux application I’ve ever seen is an incomprehensible mess. (Once multiple Reduxes get composed together, you’re left with multidimensional spaghetti code of the ‘come-from’ variety.) Adding undo into this mix as an afterthought strikes me as a Kafkaesque nightmare.

It’s since been pointed out to me that there are libraries around, such as redux-undo, that wrap the implementation described in the Redux documentation (or variants thereof). The example application is a counter; I don’t know of, nor have I seen, any non-trivial examples.

“Come-from”? Come again?

Programmers by now fall into two groups: folks who have either read, or know by reputation, the article Goto considered harmful — possibly the most famous rant in the history of computer science — and folks who have grown up in a world where the entire concept of goto has been hushed up as a dark family secret.

come-from means code that is triggered by a foreign entity; you don’t know when. You can think of it as an uncharitable description of the Observer Pattern. It’s the dual of goto. There are two common examples that are employed widely in user-interfaces — event-handling and pub-sub.

Event-handling is necessitated by the way human-computer interaction works — you’re talking to the analog world, it’s not going to be pretty. The user will do something, it’s generally unrelated to what, if anything, your code was doing. You need to deal with it.

Pub-sub can be self-inflicted (i.e. used when there’s no reason for it), but is often necessitated by the realities of asynchronous and unreliable communication between different entities. Again, that’s not going to be pretty. You make a request, you make other requests. Time passes. Things change. Suddenly you get a response…

Redux is a state management system, so it is natural that it implements the Observer Pattern, but it adds the additional garnish that the mapping between action names and what parts of your application’s state they modify can be — and often is — completely arbitrary.

implementing undo in b8r should be a breeze!?

If you stand back a bit, b8r’s registry is similar to Redux, except that instead of your state being a quasi-immutable object which is swapped out by “actions” (each action being a name linked to a function that creates a new object from the old object), with b8r your state is the registry (or some subset thereof) and your actions are paths.

In my opinion, b8r wins here because you generally don’t need to implement your actions; when you do, they still act on the registry in the expected way; and instead of an arbitrary map of action names to underlying functions, the path tells you exactly which bit of state you’re messing with.
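
For example (a trivial illustration, not code from a real app):

// the "action" is just a path-based update; the path itself tells you
// exactly which bit of state is changing
b8r.register('todo', {items: [{text: 'write undo', done: false}]})
b8r.set('todo.items[0].done', true)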

For purposes of implementing Undo, however, Redux seems to have an advantage out of the gate in that it is (partially) cloning state every time something changes. So all you need to do is maintain a list of previous states, and you’re golden, right?

What should it require to implement Undo (in b8r)?

To my mind, it should be something like this:

import {UndoManager} from 'perfectUndoLib'
new UndoManager('path.to.foo', {
    path: 'foo-history',
    onUndo: ...,
    onRedo: ...,
    onUndoRedo: ...,
    onHandleChange: ...,
});
b8r.get('foo-history.undoQueue.length') // number of undoables
b8r.get('foo-history.redoQueue.length') // number of redoables
b8r.call('foo-history.undo') // undoes the last thing
b8r.call('foo-history.redo') // redoes the last thing
b8r.call('foo-history.reset') // clears queues (e.g. after saving a file)

The first parameter could allow a list of paths to include in the undo manager’s purview, while also allowing multiple independent undo chains using multiple undo managers (e.g. some advanced graphics applications separate out changes in viewport from changes in the underlying scene).

The path option would probably default to lastThing-history, so the explicit foo-history in the example above would be superfluous.

Ideally, the callback options would allow async functions to do things like perform a service call to keep the store consistent with the interface, curate the undo and redo queues (e.g. TextMate used to do single-keystroke undo, which was infuriating when going back through a large edit history; addressing this appears to have paralyzed development and sidelined what had been the darling text editor of the hipster screencast brogrammer crowd), handle errors, and so on.

Implementation Details

At this point in our story, I hadn’t written a single line of code.

I have, however, been thinking about this for several years, which is more important. I’ve implemented several different non-trivial applications (RiddleMeThis, two word-processors, and a non-destructive image editor) with undo support. I think it’s worthwhile when saying something like “I wrote this code in 2h, am I not a true 100x coder!?” to acknowledge, at least to oneself, that it crystallized thoughts collected over a longer period (in this case several years), or whatever.

E.g. I wrote the core of b8r in less than two weeks, but it was a rewrite from scratch of a library I had written over a period of several months while at Facebook which was itself a rewrite of a rewrite of code I had written at USPTO.

b8r — or rather its registry — already has a robust observe method that informs you when anything happens to a particular path (or its subpaths).
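
As a rough sketch of the call (using the hypothetical names from the API example above; the real usage appears in the observe() method further down):

// ask the registry to call foo-history.observer whenever path.to.foo
// (or anything inside it) changes
b8r.observe('path.to.foo', 'foo-history.observer')

With that available, the undo module starts with the usual import: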

import b8r from '../source/b8r.js'

Of course we’re going to need to deep clone state because — unlike with Redux — we can’t just grab the current state object.

On the plus side, we can just keep a copy of the subtree that changed. Keeping a clone of the entire state in memory every time something changes can get hairy.

Aside: two of the three serious implementations of undo I’ve been responsible for have been (a) a long-form word-processor that handled both annotations and multi-user edit history and (b) a non-destructive image-editor. And, in fact, two things I’ve been thinking about putting together as side projects are a long-form book-publishing system (which aims to publish on web and ePub vs. print/pdf/etc.) and a photography-workflow application, neither of which are going to play nicely with naive “every undo step is the state of your app” approaches.

Here is the code actually committed to the repository in the course of writing this post. Or you can see it in the inline documentation system.

After a quick visit to StackOverflow and Google, it looks at first approximation like the most robust option is “the dumb thing”:

const deepClone = (thing) => JSON.parse(JSON.stringify(thing));

OK so now for the meat!

export class UndoManager {
  constructor (watchPaths, options={}) {
    watchPaths = [watchPaths].flat()
    const {
      historyPath,
      onUndo,
      onRedo,
      onUndoRedo,
      onHandleChange,
    } = {
      historyPath: '_' + watchPaths[0].split('.').pop() + '_history',
      ...options
    }

We expose the buffers and methods in the registry, which makes binding controls to undo and redo super simple. The observer is how we get notified when state we care about changes.

    b8r.register(historyPath, {
      undoBuffer: [],
      redoBuffer: [],
      undo: this.undo,
      redo: this.redo,
      observer: this.handleChange.bind(this)
    });

    Object.assign(this, {
      watchPaths,
      historyPath,
      onUndo: onUndo || onUndoRedo,
      onRedo: onRedo || onUndoRedo,
      onHandleChange
    })

    this.reset()
    this.observe()
  }

  reset () {
    this.undoBuffer.splice(0)

    this.undoBuffer.push(this.watchPaths.reduce(
      (state, watchPath) => {
        state[watchPath] = deepClone(b8r.get(watchPath))
        return state;
      },
      {}
    ));

    b8r.touch(this.historyPath)
  }

b8r.observe informs you of state-changes after they happen. (For perf reasons, changes to the registry are queued and handled asynchronously, so if one or more registry entries are changed rapidly, each relevant observer will only be fired once.) So if we wait until the first user action to collect state, we won’t be able to restore our original state.

  observe () {
    this.watchPaths.forEach(path => b8r.observe(path, `${this.historyPath}.observer`))
  }

  unobserve () {
    this.watchPaths.forEach(path => b8r.unobserve(path, `${this.historyPath}.observer`))
  }

  get undoBuffer () {
    return b8r.get(`${this.historyPath}.undoBuffer`)
  }

  get redoBuffer () {
    return b8r.get(`${this.historyPath}.redoBuffer`)
  }

restore doesn’t care where state is coming from; its job is just to restore it without triggering handleChange, and to notify anything bound to the instance that its state has changed.

  async restore () {
    this.unobserve()
    // const state = b8r.last(this.undoBuffer)
    // b8r.forEachKey(state, (value, key) => b8r.set(key, value))
    this.undoBuffer.forEach(state => {
      b8r.forEachKey(state, (value, key) => b8r.replace(key, deepClone(value)))
    })
    this.observe()
    b8r.touch(this.historyPath)
  }

It turns out there were a couple of subtle bugs in restore, both of which become obvious if the UndoManager (now called HistoryManager) is handling changes on more than one path. So restore() now replays all changes in the undo queue.

I’ve commented out the bad lines of code — the loop below fixes both bugs. There are obvious optimizations here which I have not yet implemented.

When the state of the world changes owing to user action, we need to put the new state on the undoBuffer and clear the redoBuffer (which is now obsolete).

  async handleChange (pathChanged) {
    if (this.onHandleChange && ! await this.onHandleChange()) return
    this.undoBuffer.push({
      [pathChanged]: deepClone(b8r.get(pathChanged))
    })
    this.redoBuffer.splice(0)
    b8r.touch(this.historyPath);
  }

  undo = async () => {
    if (this.undoBuffer.length === 1) return
    if (this.onUndo && ! await this.onUndo()) return false
    this.redoBuffer.push(this.undoBuffer.pop())
    await this.restore()
    return true
  }

  redo = async () => {
    if (this.onRedo && ! await this.onRedo()) return false
    this.undoBuffer.push(this.redoBuffer.pop())
    await this.restore()
    return true
  }

  destroy() {
    this.unobserve()
    b8r.remove(this.historyPath)
  }
}

And undo/redo are now super simple to implement. E.g. this example is now in the documentation of undo.js:

<button 
  data-event="click:_simple-undo-example_history.undo"
  data-bind="enabled_if=_simple-undo-example_history.undoBuffer.length"
>undo</button>
<button 
  data-event="click:_simple-undo-example_history.redo"
  data-bind="enabled_if=_simple-undo-example_history.redoBuffer.length"
>redo</button><br>
<textarea data-bind="value=simple-undo-example.text"></textarea>
<script>
  const {UndoManager} = await import('../lib/undo.js');
  b8r.register('simple-undo-example', {
    text: 'hello undo'
  })
  const undoManager = new UndoManager('simple-undo-example')
  set('destroy', () => {
    b8r.remove('simple-undo-example');
    undoManager.destroy()
  })
</script>

And that’s it!

I’ve walked through the general-purpose implementation of undo in b8r, and it looks to me like you can use this to implement undo in a typical b8r application in a single line of code (plus whatever extra lines it takes to add the widgets to your view and bind them). I’d say that qualifies as a breeze — at least until you get to dealing with services.