Undo

If there’s one feature almost every non-trivial application should have, but which is overlooked, left as an exercise, or put aside as “too hard” by web frameworks, it’s undo.

I do not have much (any) love for Redux, but whenever I think about it, I wonder: “why isn’t undo support built in out of the box?”

To its credit, the Redux folks do actually address Undo in their documentation. “It’s a breeze” the document says at the top, and it’s a mere 14 pages later that it concludes “This is it!”.

I’ve pretty much never seen a React/Redux application that implements undo. Since it’s such a breeze, I wonder why. My guesses:

  1. The words “service” and “back-end” do not appear in the document.
  2. It doesn’t look like a breeze to me. A 14-page document describing the implementation of a core concept in a toy application sounds more like a tornado.
  3. Finally, every non-trivial Redux application I’ve ever seen is an incomprehensible mess. (Once multiple Reduxes get composed together, you’re left with multidimensional spaghetti code of the ‘come-from’ variety.) Adding undo into this mix as an afterthought strikes me as a Kafkaesque nightmare.

It’s since been pointed out to me that there are libraries around, such as redux-undo, that wrap the implementation described in the Redux documentation (or variants thereof). The example application is a counter; I don’t know of, nor have I seen, any non-trivial examples.

“Come-from”? Come again?

Programmers by now fall into two groups: folks who have read, or know by reputation, the article Goto considered harmful — possibly the most famous rant in the history of computer science — and folks who have grown up in a world where the entire concept of goto has been hushed up like a dark family secret.

come-from means code that is triggered by a foreign entity, at a time you don’t control. You can think of it as an uncharitable description of the Observer Pattern; it’s the dual of goto. There are two common examples employed widely in user interfaces: event-handling and pub-sub.

Event-handling is necessitated by the way human-computer interaction works — you’re talking to the analog world, it’s not going to be pretty. The user will do something, it’s generally unrelated to what, if anything, your code was doing. You need to deal with it.

Pub-sub can be self-inflicted (i.e. used when there’s no reason for it), but is often necessitated by the realities of asynchronous and unreliable communication between different entities. Again, that’s not going to be pretty. You make a request, you make other requests. Time passes. Things change. Suddenly you get a response…
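
In code, the pub-sub flavor of come-from looks something like this (bus and render here are stand-ins, not real APIs):

// this handler runs whenever anyone, anywhere, publishes 'user-changed';
// nothing at the call site tells you who fired it, or when
bus.subscribe('user-changed', user => render(user))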

Redux is a state management system, so it is natural that it implements the Observer Pattern, but it adds the additional garnish that the mapping between action names and what parts of your application’s state they modify can be — and often is — completely arbitrary.

implementing undo in b8r should be a breeze!?

If you stand back a bit, b8r’s registry is similar to Redux, except that instead of your state being a quasi-immutable object that gets swapped out by “actions” (each action being a name linked to a function that creates a new object from the old object), with b8r your state is the registry (or some subset thereof) and your actions are paths.

In my opinion, b8r wins here because you generally don’t need to implement your actions at all; when you do, they still act on the registry in the expected way; and instead of an arbitrary map of action names to underlying functions, the path tells you exactly which bit of state you’re messing with.
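
For anyone who hasn’t seen b8r code, that looks roughly like this (a minimal sketch; 'app' is just an illustrative name):

b8r.register('app', {user: {name: 'Ann'}}) // 'app' becomes a registry root
b8r.set('app.user.name', 'Bob')            // the path is the "action"
b8r.get('app.user.name')                   // 'Bob'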

For purposes of implementing Undo, however, Redux seems to have an advantage out of the gate in that it is (partially) cloning state every time something changes. So all you need to do is maintain a list of previous states, and you’re golden, right?

What should it require to implement Undo (in b8r)?

To my mind, it should be something like this:

import {UndoManager} from 'perfectUndoLib'
new UndoManager('path.to.foo', {
    path: 'foo-history',
    onUndo: ...,
    onRedo: ...,
    onUndoRedo: ...,
    onHandleChange: ...
});
b8r.get('foo-history.undoQueue.length') // number of undoables
b8r.get('foo-history.redoQueue.length') // number of redoables
b8r.call('foo-history.undo') // undoes the last thing
b8r.call('foo-history.redo') // redoes the last thing
b8r.call('foo-history.reset') // clears queues (e.g. after saving a file)

The first parameter could allow a list of paths to include in the undo manager’s purview, while also allowing multiple independent undo chains using multiple undo managers (e.g. some advanced graphics applications separate out changes in viewport from changes in the underlying scene).
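
E.g. something like this, using the proposed API above (still a sketch, not shipped code, and the paths are made up):

// scene edits and viewport changes get independent undo chains
new UndoManager(['scene.objects', 'scene.lights'], {path: 'scene-history'})
new UndoManager('viewport.camera', {path: 'viewport-history'})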

The path option would probably default to lastThing-history, so passing foo-history explicitly above would be superfluous.

Ideally, the callback options would allow async functions to do things like perform a service call to keep the store consistent with the interface, curate the undo and redo queues (e.g. TextMate used to do single-keystroke undo, which was infuriating when going back through a large edit history; addressing this appears to have paralyzed development and sidelined what had been the darling text editor of the hipster screencast brogrammer crowd), handle errors, and so on.

Implementation Details

At this point in our story, I hadn’t written a single line of code.

I have, however, been thinking about this for several years, which is more important. I’ve implemented several non-trivial applications (RiddleMeThis, two word-processors, and a non-destructive image editor) with undo support. I think it’s worthwhile, when saying something like “I wrote this code in 2h, am I not a true 100x coder!?”, to acknowledge, at least to oneself, that it crystallized thoughts collected over a longer period (in this case, several years).

E.g. I wrote the core of b8r in less than two weeks, but it was a rewrite from scratch of a library I had written over a period of several months while at Facebook which was itself a rewrite of a rewrite of code I had written at USPTO.

b8r — or rather its registry — already has a robust observe method that informs you when anything happens to a particular path (or its subpaths). So we start with something like:

import b8r from '../source/b8r.js'
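
And a sketch of how observing works; the observer is referenced by registry path, matching how the real code below wires things up ('example-watcher' is an illustrative name):

b8r.register('example-watcher', {
  observer: (pathChanged) => console.log(pathChanged, 'changed')
})
b8r.observe('path.to.foo', 'example-watcher.observer')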

Of course we’re going to need to deep clone state because — unlike with Redux — we can’t just grab the current state object.

On the plus side, we can just keep a copy of the subtree that changed. Keeping a clone of the entire state in memory every time something changes can get hairy.

Aside: two of the three serious implementations of undo I’ve been responsible for were (a) a long-form word-processor that handled both annotations and multi-user edit-history and (b) a non-destructive image-editor. And, in fact, two things I’ve been thinking about putting together as side projects are a long-form book-publishing system (which aims to publish on web and ePub vs. print/pdf/etc.) and a photography-workflow application, neither of which is going to play nicely with naive “every undo step is the state of your app” approaches.

Here is the code actually committed to the repository in the course of writing this post. Or you can see it in the inline documentation system.

After a quick visit to StackOverflow and Google, it looks at first approximation like the most robust option is “the dumb thing”:

// note: JSON round-tripping drops functions, Dates, undefined, etc., which is fine for plain serializable state
const deepClone = (thing) => JSON.parse(JSON.stringify(thing));

OK so now for the meat!

export class UndoManager {
  constructor (watchPaths, options={}) {
    watchPaths = [watchPaths].flat()
    const {
      historyPath,
      onUndo,
      onRedo,
      onUndoRedo,
      onHandleChange,
    } = {
      historyPath: '_' + watchPaths[0].split('.').pop() + '_history',
      ...options
    }

We expose the buffers and methods in the registry, which makes binding controls to undo and redo super simple. The observer is how we get notified when state we care about changes.

    b8r.register(historyPath, {
      undoBuffer: [],
      redoBuffer: [],
      undo: this.undo,
      redo: this.redo,
      observer: this.handleChange.bind(this)
    });

    Object.assign(this, {
      watchPaths,
      historyPath,
      onUndo: onUndo || onUndoRedo,
      onRedo: onRedo || onUndoRedo,
      onHandleChange
    })

    this.reset()
    this.observe()
  }

  reset () {
    this.undoBuffer.splice(0)
    this.redoBuffer.splice(0) // reset clears both queues (per the sketch above)

    this.undoBuffer.push(this.watchPaths.reduce(
      (state, watchPath) => {
        state[watchPath] = deepClone(b8r.get(watchPath))
        return state;
      },
      {}
    ));

    b8r.touch(this.historyPath)
  }

b8r.observe informs you of state-changes after they happen. (For perf reasons, changes to the registry are queued and handled asynchronously, so if one or more registry entries change rapidly, each relevant observer only fires once.) So if we wait until the first user action to collect state, we won’t be able to restore our original state; hence reset() capturing an initial snapshot up front.

  observe () {
    this.watchPaths.forEach(path => b8r.observe(path, `${this.historyPath}.observer`))
  }

  unobserve () {
    this.watchPaths.forEach(path => b8r.unobserve(path, `${this.historyPath}.observer`))
  }

  get undoBuffer () {
    return b8r.get(`${this.historyPath}.undoBuffer`)
  }

  get redoBuffer () {
    return b8r.get(`${this.historyPath}.redoBuffer`)
  }

restore doesn’t care where state is coming from; its job is just to restore it without triggering handleChange, and to notify anything bound to the instance that its state has changed.

  async restore () {
    this.unobserve()
    // const state = b8r.last(this.undoBuffer)
    // b8r.forEachKey(state, (value, key) => b8r.set(key, value))
    this.undoBuffer.forEach(state => {
      b8r.forEachKey(state, (value, key) => b8r.replace(key, deepClone(value)))
    })
    this.observe()
    b8r.touch(this.historyPath)
  }

It turns out there were a couple of subtle bugs in restore, both of which become obvious if the UndoManager (now called HistoryManager) is handling changes on more than one path. So restore() now replays all changes in the undo queue.

I’ve commented out the bad lines of code; the loop above fixes both bugs. There are obvious optimizations here which I have not yet implemented.

When the state of the world changes owing to user action, we need to put the new state on the undoBuffer and clear the redoBuffer (which is now obsolete).

  async handleChange (pathChanged) {
    if (this.onHandleChange && ! await this.onHandleChange()) return
    this.undoBuffer.push({
      [pathChanged]: deepClone(b8r.get(pathChanged))
    })
    this.redoBuffer.splice(0)
    b8r.touch(this.historyPath);
  }

  undo = async () => {
    if (this.undoBuffer.length === 1) return
    if (this.onUndo && ! await this.onUndo()) return false
    this.redoBuffer.push(this.undoBuffer.pop())
    await this.restore()
    return true
  }

  redo = async () => {
    if (!this.redoBuffer.length) return // nothing to redo (mirrors the guard in undo)
    if (this.onRedo && ! await this.onRedo()) return false
    this.undoBuffer.push(this.redoBuffer.pop())
    await this.restore()
    return true
  }

  destroy() {
    this.unobserve()
    b8r.remove(this.historyPath)
  }
}

And undo/redo are now super simple to implement. E.g. this example is now in the documentation of undo.js:

<button 
  data-event="click:_simple-undo-example_history.undo"
  data-bind="enabled_if=_simple-undo-example_history.undoBuffer.length"
>undo</button>
<button 
  data-event="click:_simple-undo-example_history.redo"
  data-bind="enabled_if=_simple-undo-example_history.redoBuffer.length"
>redo</button><br>
<textarea data-bind="value=simple-undo-example.text"></textarea>
<script>
  const {UndoManager} = await import('../lib/undo.js');
  b8r.register('simple-undo-example', {
    text: 'hello undo'
  })
  const undoManager = new UndoManager('simple-undo-example')
  set('destroy', () => {
    b8r.remove('simple-undo-example');
    undoManager.destroy()
  })
</script>

And that’s it!

I’ve walked through the general-purpose implementation of undo in b8r, and it looks to me like you can use this to implement undo in a typical b8r application with a single line of code (plus whatever extra lines it takes to add the widgets to your view and bind them). I’d say that qualifies as a breeze — at least until you get to dealing with services.

b8r goes pure(ish) Javascript

b8r now allows components to be implemented entirely in Javascript, with no eval

In an effort to make b8r work in the universe of “everything must be javascript” web development tools, I’ve finally added provision for pure javascript components in b8r.

Now, there are four really good reasons for doing this.

First of all, the way b8r’s HTML-based components work intrinsically involves using eval (it actually uses the AsyncFunction constructor, but same difference). Whether this represents a real security threat or not depends a lot on how those components are in fact loaded, but it’s definitely an argument.

Second — and this has proven a small but persistent pain point in b8r development — it makes linters happy. To begin with, the <script> tag inside a component was actually the body of an async function with a ton of parameters. Using these parameters without annoying the lint gods meant adding /* global … */ at the top of each script, while the fact that — for convenience — the code was async meant that linters would scream about the use of import() and async/await.

Third — with HTML components, defining a single global controller for a family of components involved jumping through hoops, as did, for example, using computed bindings or lists where the method was registered in the component body (this works fine, but generates error messages if you don’t programmatically add the bindings after the necessary methods are registered). Now you can just do the necessary work outside the component definition (but in the same file!).

Fourth — while b8r has moved over to the world of ES6 Modules, it is distributed in cjs form for those still living with require, webpack, etc. The problem is that HTML components don’t play nicely with javascript bundlers, whereas javascript components are just javascript so if you really want to force a giant slab of javascript down your users’ throats you can. Yay.

So, these issues are fixed if you use pure javascript components.

You are still writing CSS as CSS and HTML as HTML, but now it’s likely going to be inside back-ticks.
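
To give a flavor of it, here’s a sketch of the shape such a component might take (the default-export object with css, html, and load shown here is my illustration of the idea, not necessarily b8r’s exact API):

export default {
  css: `
    ._component_ > button { font-weight: bold; }
  `,
  html: `
    <button data-event="click:_component_.count">
      clicks: <span data-bind="text=_component_.clicks"></span>
    </button>
  `,
  async load ({get, set}) {
    // component-private state and methods, bound via _component_ above
    set('clicks', 0)
    set('count', () => set('clicks', get('clicks') + 1))
  }
}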

In addition to the four big arguments for building components this way, there are some further implications (mostly good).

First of all, if and when you need to use import() inside a component, you can use relative paths without wondering “relative to what” (it used to be the location of b8r.js, because that’s where that code ends up executing).

Next, with HTML components there was no convenient way to create multiple components in one source file; now there is. This is a mixed blessing — making it easy to define lots of small components in source files with no relationship between file name and component name presents the real risk of name-clashes. It may become necessary to introduce lint rules that prevent you from creating a component whose name does not match its source file in some way.

Finally, a really nice side-effect of this is that the boilerplate for a component is a lot more self-explanatory than it used to be. Yay!

Dark Mode with CSS Variables

All the cool kids are doing it, what’s the easiest way to support Dark Mode?

As little as four or so years ago, it was widely understood among front-end developers that CSS was a horrible problem, managing it was a nightmare, and Something Ought to be Done About It.

Back then, there were two popular “solutions”, LESS and SCSS, both of which let you generate your CSS from a superset of CSS that supported things like named constants and nested rules. There was also the drum beat of the “all web development should be Javascript development” crowd, who advocated replacing CSS with Javascript (that created CSS). Incidentally, these were the same people who thought “all Javascript development should be some weird superset of Javascript that needs to be transpiled into Javascript”.

Anyway, the big problem LESS, SCSS, and the CSS-should-be-Javascript folks tried to solve was managing consistency across large sets of rules (no-one, of course, questioning the idea of having such large sets of rules). The downside was that this enabled even bigger sets of rules. (The best reason for using LESS or SCSS was their support for named constants. With those now part of CSS itself, as CSS variables, it seemed clear to me that their reason for existence was largely gone.)

If you reduce the cost of something, folks will use more of it. LESS, SCSS, and styling with Javascript reduced the apparent cost of adding CSS rules to web apps.

E.g. on a project I was working on at the time, about 10kB of LESS became 280kB of CSS — and this was on a team that was (a) small, and (b) ruthlessly minimized CSS whenever it could.

At this point, I can safely say “a pox on all your houses”. With bindinator’s approach of using a solid set of global styles and component-specific styles that are easy to scope to specific components (without the use of a “shadow DOM”) we never ran into any of the thorny CSS issues that plague other projects I’ve worked on, and CSS Variables are just icing on the cake.

@media (prefers-color-scheme: dark) {
  :root {
    --content-bg-color: #222;
  }
}

In fact, I was able to support “Dark Mode” in bindinator’s code base in a few hours over a couple of evenings by simply leaning hard into CSS Variables (essentially, cleaning the ones I was using up a bit and swapping out instances of hard-coded colors where I had missed them).
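
The other half of the pattern is that every other rule just references the variables, so nothing else needs to know which scheme is active (the .content selector here is illustrative):

.content {
  background: var(--content-bg-color);
}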

I find it particularly hilarious because the reason I did this was after seeing that one of our organization’s Storybooks (for our React-based UI library) had been updated to support Dark Mode (I assume, painfully) and it does not work very well (it looks like the problem is that it renders server-side and only starts respecting “Dark Mode” after the script “hydrates”).

Not only does implementing dark mode with CSS variables work flawlessly (in Chrome, Safari, and Firefox at least), but it involves literally zero lines of Javascript, and the UI transitions beautifully if you toggle dark mode in your UI settings.

I was so pleased with how easily all this worked that I quickly implemented another idea I had in the back of my mind for years, which is a CSS Variable powered theme editor (the link is to the github-hosted site because I haven’t deployed it to the bindinator home page yet).

It’s worth noting that the path of least resistance for implementing Dark Mode with LESS, or SCSS, or React Styletron, or whatever, is still going to be CSS Variables. It also makes theming controls (including web-components, even within their shadow DOMs) a breeze.
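
Custom properties inherit across shadow boundaries, which is why this works even inside a web-component’s shadow DOM (a tiny sketch, reusing the variable from the snippet above):

/* inside a web-component's shadow stylesheet */
:host {
  background: var(--content-bg-color, white); /* falls back to white if the page doesn't define it */
}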

Migrating b8r code to the New World Order

I’ve got three small projects built with b8r lying around (one is something I did for a job interview a few months ago, so it’s not in a public repo). Anyway, all three simply load the version of b8r on github master directly, which means they broke hard when I pushed my latest changes.

Two of the projects are my galaxy generator (which is heavy on libraries but has no components) and my dnd spell list.

Anyway, updating the galaxy generator took about five minutes. Updating the dnd spell list took about thirty seconds. If I recall correctly, this is the second change to b8r in roughly two years that has required fixing the galaxy generator. (Obviously this should not matter, since only a crazy person would load dependencies into production code this way, but I don’t take breaking changes lightly.)

I’ve updated bindinator.com and attempted to document as much as possible about the migration process. Finally, I’ve replaced require.js with a tiny module that shows an alert() and redirects the user to the improved documentation on migrating to import.

Farewell require, you’re no longer needed…

This is a really technical article about front-end javascript programming. You have been warned.

Background

For most of the last two years, I have written my front-end code using my own implementation of require. It was inspired by the mostly delightful implementation of require used at Facebook (in whose bowels I spent over a month of my life). Initially, I just wanted to use something off-the-shelf, but every implementation I found required a build-phase during development (the Facebook version certainly did) and simultaneously tried to abstract away paths (because during the build phase, something would find all the libraries and map names to paths).

I wanted a require that did not depend on a build phase, did not abstract out paths (so if you knew a problem was in a required library, you also knew where that library was), was easy to use, and supported third-party libraries written to different variations of the commonjs “standard”.

What is require?

const foo = require('path/to/foo.js');

Require allows you to pull code from other files into a javascript file. Conceptually, the first time a module is required, it goes something like this:

  1. get text from ‘path/to/foo.js’
  2. insert it in the body of a function(module){ /* the code from foo.js */ }
  3. create an object, pass it to the function (as module)
  4. after the function executes, put whatever is in module.exports in a map of library files to output.

It follows that foo.js looks something like this:

module.exports = function() { console.log("I am foo"); }

The next time someone wants that file, pass them the same stuff you gave the previous caller.
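
Here’s that conceptual version as a runnable sketch (synchronous XHR and the Function constructor stand in for whatever a real implementation does; no relative-path resolution or error handling; don’t ship this):

const modules = {}
const naiveRequire = (path) => {
  if (!modules[path]) {
    const request = new XMLHttpRequest()
    request.open('GET', path, false) // synchronous: blocks until the file loads
    request.send()
    const module = {exports: {}}
    // steps 2 and 3: wrap the code in a function and pass it module
    new Function('module', 'require', request.responseText)(module, naiveRequire)
    modules[path] = module // step 4, which also covers the "next time" case
  }
  return modules[path].exports
}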

Now, if that were all it needed to do, require would be pretty simple. Even so, there’s one nasty issue in the preceding once you allow for relative paths: if you require the same module from two different places, you don’t want to get back two different instances of the same thing — both for efficiency and correctness.

But of course there are lots and lots of wrinkles:

  • Every javascript file can, of course, require other javascript files.
  • Circular references are easy to introduce accidentally, and can become unavoidable.
  • Each of these things is a synchronous call, so loading a page could take a very long time (if you put this naive implementation into production).
  • Javascript is a dynamic language. It’s already hard to statically analyze, require makes it harder. It’s particularly nasty because require is “just a function” which means it can be called at any time, and conditionally.
  • And, just to be annoying, the different implementations of commonjs make weird assumptions about the behavior of the wrapper function and the way it’s called (some expect module to be {}, some expect it to be {exports:{}}, some expect it to be something else, and because require is just javascript, modules can choose to behave differently based on what they see).

Under the hood, require ends up needing to support things like dependency-mapping, circular-reference detection, asynchronous require, inlining pre-built dependency trees, and pretending to be different versions of require for different modules. It’s quite a nasty mess, and it’s vital to the correct functioning of a web app and its performance.

So, good riddance!

ES6 Modules

So, all the major browsers (and nodejs — sort of) finally support ES6 modules and — perhaps more importantly — because this is a change to the language (rather than something implemented in multiple different ways using the language), various build tools support it and can transpile it for less-compatible target platforms.

So now, you can write:

import foo from 'path/to/foo.js';

And the module looks like:

export default function(){ console.log('I am foo'); }

A couple of months ago, I took a few hours and deleted require.js from the b8r source tree, and then kept working until the b8r source files passed b8r’s (rather meagre) unit tests. I then ran into a hard wall because I couldn’t get any components to load if they used require.

The underlying problem is that components are loaded asynchronously, and import is synchronous. It turns out there’s an asynchronous version of import, dynamic import(), which I didn’t know about at the time.

const foo = await import ('path/to/foo.js'); // does not work! (foo is the module namespace object, not the default export)

To obtain default exports via import() you need to obtain its default property. So:

const foo = (await import('path/to/foo.js')).default;

In general, I’ve started avoiding export default for small libraries. In effect, default is just another named export as far as dynamic import() is concerned.
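
So a small library written with named exports destructures naturally:

// in foo.js
export const foo = () => console.log('I am foo')

// at the call site
const {foo} = await import('path/to/foo.js')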

import — again because it’s a language feature and not just some function someone wrote — is doing clever stuff to allow it to be non-blocking, e.g.

import {foo} from 'path/to/foo.js';

Will not work if foo is not explicitly exported from the module. E.g. exporting a default with a property named foo will not work. This isn’t a destructuring assignment. Under the hood this presumably means that a module can be completely “compiled” with placeholders for the imports, allowing its dependencies to be found and imported (etc.). Whereas require blocks compilation, import does not.

This also means that import affords static analysis (unlike require, import statements must be at the top level of a module). Also, even though import() (dynamic import) looks just like a regular function call, it turns out it’s also a new language feature: it’s not actually a function, so you can’t alias it, pass it around, or invoke it via call/apply.

An Interesting Wrinkle

One of the thorniest issues I had to deal with while maintaining require was that of third-party libraries which adhered to mystifyingly different commonjs variations. The funny thing is that all of them turn out to support the “bad old way” of handling javascript dependencies, which is <script> tags.

As a result of this, my require provided a solution for the worst-case scenario in the shape of a function — viaTag — that would insert a (memoized) script tag in the header and return a promise that resolved when the tag loaded. Yes, it’s icky (but heck, require is eval) but it works. I’ve essentially retained a version of this function to deal with third-party libraries that aren’t provided as ES modules.
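
For the record, the pattern is simple enough (this is a sketch from memory, not the actual b8r code):

const scripts = {}
const viaTag = (src) => {
  if (!scripts[src]) {
    scripts[src] = new Promise((resolve, reject) => {
      const script = document.createElement('script')
      script.src = src
      script.onload = resolve
      script.onerror = reject
      document.head.append(script)
    })
  }
  return scripts[src]
}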

b8r without require

Last night, I revisited the task of replacing require with import in b8r armed with new knowledge of dynamic import(). I did not go back to the branch which had a partial port (that passed tests but couldn’t handle components) because I have done a lot of work on b8r’s support for web-components in my spare time and didn’t want to handle a complicated merge that touched on some of the subtlest code in b8r.

In a nutshell, I got b8r passing tests in about three hours, and then worked through every page of the b8r demo site and had everything loading by midnight with zero console spam. (Some of the loading is less seamless than it used to be because b8r components are even more deeply asynchronous than they used to be, and I haven’t smoothed all of that out yet.)

This morning I went back through my work, and found an error I’d missed (an inline test was still using require) and just now I realized I hadn’t updated benchmark.html so I fixed that.

An Aside on TDD

My position on TDD is mostly negative (I think it’s tedious and boring) but I’m a big fan of API-driven design and “dogfooding”. Dogfooding is, in essence, continuous integration testing, and the entire b8r project is pure dogfooding. It’s gotten to the point where I prefer developing new libraries I’m writing for other purposes within b8r because it’s such a nice environment to work with. I’ve been thinking about this a lot lately as my team develops front-end testing best practices.

Here’s the point — swapping out require for import across a pretty decent (and technically complex) project is beyond a “refactor”. This change took me two attempts, and I probably did a lot of thinking about it in the period between the attempts — so I’d call it a solid week of work.

At least when you’re waiting for a compile / render / dependency install you can do something fun. Writing tests — at least for UI components or layouts — isn’t fun to do and it’s not much fun to run the tests either.

“5 Story Points”

I’ve done a bunch of refactors of b8r-based code (in my last month at my previous job two of us completely reorganized most of the user interface in about a week, refactoring and repurposing a dozen significant components along the way). Refactoring b8r code is about as pleasant as I’ve ever found refactoring (despite the complete lack of custom tooling). I’d say it’s easier than fairly simple maintenance on typical “enterprisey” projects (e.g. usually React/Redux, these days) festooned with dedicated unit tests.

Anyway, even advocates of TDD agree that TDD works best for pure functions — things that have well-defined pre- and post- conditions. The problem with front-end programming is that if it touches the UI, it’s not a pure function. If you set the disabled attribute of a <button>, the result of that operation is (a) entirely composed of side-effects and (b) quite likely asynchronous. In TDD world you’d end up writing a test expecting the button with its disabled attribute set to be a button with a disabled attribute set. Yay, assignment statements work! We have test coverage. Woohoo!

Anyway, b8r has a small set of “unit tests” (many of which are in fact pretty integrated) and a whole bunch of inline tests (part of the inline documentation) and dogfooding (the b8r project is built out of b8r). I also have a low tolerance for console spam (it can creep in…) but b8r in general says something if it sees something (bad) and shuts up otherwise.

Anyway, I think this is a pretty pragmatic and effective approach to testing. It works for me!

What’s left to do?

Well, I have to scan through the internal documentation and purge references to require in hundreds of places.

Also, there’s probably a bunch of code that still uses require that for some reason isn’t being exercised. This includes code that expects to run in Electron or nwjs. (One of the nice things about removing my own implementation of require is that it had to deal with environments that create their own global require.) This is an opportunity to deal with some files that need to be surfaced or removed.

At that point there should be no major obstacle to using rollup.js or similar to “build” b8r (versus using the bespoke toolchain I built around require, and can now also throw away). From there it should be straightforward to convert b8r into “just another package”.

Is it a win …yet?

My main goal for doing all this is to make b8r live nicely in the npm (and yarn) package ecosystem and, presumably, to benefit from all the tooling that you get from being there. If we set that — very significant — benefit aside:

  • Does it load faster? No. The old require loads faster. But that’s to be expected — require did a lot of cute stuff to optimize loading and being able to use even cleverer tooling (that someone else has written) is one of the anticipated benefits of moving to import.
  • Does it run faster? No. The code runs at the same speed.
  • Is it easier to work with? export { .... } is no easier than module.exports = { .... } and destructuring required stuff is actually easier. It will, however, be much easier to work with b8r in combination with random other stuff. I am looking forward to not having to write /* global module, require */ and 'use strict' all over the place to keep my linters happy.
  • Does it make code more reliable? I’m going to say yes! Even in the simplest case import {foo} from 'path/to/foo.js' is more rigorous than const {foo} = require('path/to/foo.js') because the latter code only fails if foo is called (assuming it’s expected to be a function) or using it causes a failure. With import, as soon as foo.js loads an error is thrown if foo isn’t an actual export. (Another reason not to use export default by the way.)