D&D5e, Web-components, & Performance

tl;dr — web-components (edit: with shadowRoots) are kind of slow.

The graph shows the average of five trials (post warmup) in Chrome on my MacBook Pro (a maxed-out 2015 model). I saw nothing to suggest that a larger number of trials would significantly change the results.

Over the long weekend I was hoping to play some D&D with my kids (it never ended up happening). One of the things I find pretty frustrating with the new (well, 2014) Player’s Handbook is that the spells are all mixed together in strict alphabetical order. (It’s not like earlier editions were better organized, and it’s not clear how they could really fix this, since any arrangement will either require repeating spell descriptions or some kind of tedium.) So to figure out what spells are available to, say, a first-level Druid, and what they do, you need to do a lot of page-flipping and cross-referencing.

As an aside, in the late-lamented DragonQuest (SPI, 1980), magic is divided up into “colleges”, with a given character having access to the spells of only one college. Each college, along with its spells, is described in one place, the only page-flipping required being for the relatively small number of spells common to multiple colleges. A typical college has access to 20-30 different spells which are thematically unified (e.g. Adepts of the College of Fire have fire-related spells).

This kind of organization is not possible for D&D because the overlap between classes is so great. Sorcerers essentially get a subset of Wizard spells, as do Bards. And so on.

So it struck me that some kind of easily filterable list of spells, one that lets you decide how you want them sorted and what you want shown, would be handy, and this seemed like a nice test for b8r’s web-component system.

Now, b8r is the third rewrite of a library I originally developed with former colleagues for USPTO. Initially we had a highly convoluted “data-table” (not an original name, of course) that essentially let you point a descriptive data-structure at an array and get a table with all kinds of useful bells and whistles. This quickly became unmanageable, and was particularly horrible when people started using it to create quite simple tables (e.g. essentially short lists of key/value pairs) and then add fancy behaviors (such as drag-reordering) to them.

So, I developed a much simpler binding system (“bind-o-matic”) that allowed you to do fine-grained bindings but didn’t try to do bells and whistles, and it quickly proved itself better even for the fancy tables. b8r‘s data-binding is the direct descendant of bind-o-matic.

Anyway, all-purpose tables are hard. So I looked around for something off-the-shelf that would just work. (I’m trying to purge myself of “not invented here” syndrome, and one of the joys of web-components is that they’re supposed to be completely interoperable, right?) In any event, the closest thing I could find (and damned if I could find a compelling online demo of any of them) had been abandoned, while the next closest thing was “inspired by” the abandoned thing. Not encouraging.

So I decided to write a minimal conceptual set of custom elements — in essence, placeholder replacements for <table>, <tr>, <td>, and <th> — which would have zero functionality (they’d just be divs or spans with no behavior) and to which I could later add CSS and other functionality in order to get the desired results (e.g. the ability to switch a view from a virtual grid of “cards” to a virtual table with a non-scrolling header and resizable columns — i.e. “the dream” of data-table implementations, right?).

Now, I have implemented a function named makeWebComponent that will make you a custom element with one line of code, e.g. makeWebComponent('foo-bar', {}). So I made a bunch of components exactly that way and was horrified to see that my test page, which had been loading in ~1000ms, was suddenly taking a lot longer. (And, at the time, it was loading b8r from GitHub, loading the spell list from dnd5eapi.co, and then making an extra call for each of the 300-odd spells in the list.)
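For concreteness, “exactly that way” looked something like the following sketch. The tag names are hypothetical stand-ins, and the import assumes web-components.js exports makeWebComponent by name:

import {makeWebComponent} from './web-components.js';

// zero-functionality placeholders for <table>, <tr>, <td>, and <th>;
// CSS and behavior were to be layered on later
['grid-table', 'grid-row', 'grid-cell', 'grid-header'].forEach(
  tag => makeWebComponent(tag, {})
);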

So, I ended up implementing my interactive D&D spell list using old school b8r code and got on with my life.

Hunting down the problem

I’m pretty enthusiastic about web-components, so this performance issue was a bit of a shock, and I went to the bindinator benchmark page, which lets me compare the performance of b8r to special-purpose vanilla javascript code doing the same thing with foreknowledge of the task.

In other words, how fast is b8r compared with javascript hand-coded to do the exact same thing in a completely specific way? (An example of how unfair the comparison can be: b8r swaps two items in a large array and then updates the DOM, where the vanilla code simply moves the two DOM nodes directly.)

So the graph at the top of the page compares creating a table of 10,000 rows (with a bunch of cells in each row) using vanilla js vs. using b8r, and then the other three columns show b8r with two simple <a> tags replaced with <simple-span>: <simple-span> defined using makeWebComponent, vs. hand-coded (but still with a style node), vs. hand-coded (with nothing except a slot to contain its children).

class SimpleSpan extends HTMLElement {
  constructor() {
    super();

    // const style = document.createElement('style');
    // style.textContent = ':host { font-style: italic; }';
    const shadow = this.attachShadow({mode: 'open'});
    const slot = document.createElement('slot');
    // shadow.appendChild(style);
    shadow.appendChild(slot);
  }
}
window.customElements.define('simple-span', SimpleSpan);

Above is the hand-coded <simple-span> so you can see exactly what I’m doing and second-guess me. The commented-out lines are the difference between the fourth and fifth columns.

I should add that I tried rewriting the above code to use cloneNode(true) instead of creating the new nodes in the constructor, but performance was no different in this case. If the nodes being created were more complex there would likely be an advantage. (I make extensive use of cloneNode(true) in b8r since past experimentation showed a benefit, and makeWebComponent makes use of it.)
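For completeness, the cloneNode variant looked something like this (a sketch, not the literal benchmark code):

// the shadow content is built once, then cloned per instance
const simpleSpanTemplate = document.createElement('slot');

class ClonedSpan extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({mode: 'open'});
    shadow.appendChild(simpleSpanTemplate.cloneNode(true));
  }
}
window.customElements.define('cloned-span', ClonedSpan);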

Note that even the simplest component made with makeWebComponent has a style node, because Google’s best practices suggest that all custom elements should support the hidden attribute, and doing this with styles is by far the simplest (and, I would hope, the most performant) way to do it.
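Concretely, that baseline style node amounts to something like this (a sketch of what makeWebComponent injects, not its literal output):

// every makeWebComponent element gets a style node supporting hidden
const hiddenSupport = document.createElement('style');
hiddenSupport.textContent = ':host([hidden]) { display: none; }';
// ...appended to the shadowRoot alongside the slot in the constructor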

It also occurred to me that having the <style> contain :host { font-style: italic; } might be more expensive than :host([hidden]) { display: none; } but, again, that was a wash. Similarly, it occurred to me that initializing the shadowRoot with {mode: 'closed'} might help. It didn’t.

So, this appears to show that the overhead for just two trivial custom elements in each of 10,000 rows is comparable to the overhead for b8r in its entirety for the entire table row. Now, b8r creates a given table row by cloning, ad hoc, the hierarchy it’s using as a template, then querying that hierarchy to find bound elements and inserting values as appropriate.

When you consider the kinds of uses to which browsers put custom elements internally (e.g. <video> and <input type="date">), these are generally not something you’re going to have 20,000 of in the DOM at once. It’s easy, however, to imagine wanting every item in a pretty huge list to include a custom checkbox or popup menu. It’s worth knowing that even simple custom elements have a non-trivial overhead (and we’re talking 1ms for 20 of them on a fast, high-end laptop).

Addendum

I didn’t realize that you could create a web-component with no shadowRoot. (This isn’t exactly something anyone makes obvious — I’ve found literally zero examples of this in tutorials, etc.)

If you do this, the performance issue mostly goes away.
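For reference, a no-shadowRoot component is about as minimal as a component can get; a sketch:

// no attachShadow() call: children are ordinary light-DOM children, and
// the element can be styled by tagName, e.g. plain-span { font-style: italic; }
class PlainSpan extends HTMLElement {}
window.customElements.define('plain-span', PlainSpan);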

Now, you do lose all the “benefits” of the shadow DOM, but you gain performance and can still attach style rules to custom (unique!) tagNames, which makes managing the resulting CSS easier. This is particularly nice given my desire to replace b8r components with custom elements, since there needn’t be any performance overhead (you can “seal” the styling in a container with just one shadowRoot rather than ten thousand instances).

b8r v2

b8r v2 is going to adopt web-components over components, and import over require.

Having developed b8r in a period of about two weeks after leaving Facebook (it was essentially a distillation / brain-dump of all the opinions I had formed about writing front-end code while working there), I’ve just spent about two years building a pretty solid social media application using it.

But, times change. I’m done with social media, and the question is, whither b8r?

In the last few weeks of my previous job, one of my colleagues paid me a huge compliment: he said I’d “really hit it out of the park” with b8r. We’d just rebuilt half the user interface of our application, replacing some views completely and significantly redesigning others, in two weeks, with one of the three front-end coders on leave.

I designed b8r to afford writing complex applications quickly, to turn hacky code into solid code without major disruption, to make maintenance and debugging as easy as possible, and to allow new programmers to quickly grok existing code. As far as I can tell, it seems to do the trick. But it’s not perfect.

Two significant things have emerged in the front-end world in the last few years: import and web-components (custom DOM elements in particular).

import

When I wrote b8r, I found the various implementations of require to be so annoying (and mutually incompatible) that I wrote my own, and it has absorbed quite an investment of effort since then (e.g. I ended up writing an entire packaging system because of it).

Switching to import seems like a no-brainer, even if it won’t be painless (for various reasons, import is pretty inimical to CommonJS and not many third-party libraries are import-friendly, and it appears to be impossible for a given javascript file to be compatible with both import and require).
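To illustrate (a sketch, with a hypothetical module path):

// this file is a perfectly good ES module...
import {foo} from './foo.js';
export const bar = () => foo();
// ...but it can't also be a CommonJS module: require() will choke on the
// import statement, because import/export are static syntax (not function
// calls) and are only legal inside a module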

I experimentally converted all of the b8r core over to using import in an afternoon — enough that it could pass all the unit tests, although it couldn’t load the documentation system because any component that uses require or require.lazy won’t work.

Which got me to thinking about…

web-components

I’ve been struggling with improving b8r’s component architecture. The most important thing I wanted was for components to naturally provide a controller (in effect, for components to be instances of a class), and to pave some good paths and wall up some bad ones. But, after several abortive attempts, and then thinking about the switch from require to import, I’ve decided to double-down on web-components. The great thing about web-components is that they have all the virtues I want from v2 components and absolutely no dependency on b8r.

I’ve already added a convenience library called web-components.js. You can check it out along with a few sample components. The library makes it relatively easy to implement custom DOM elements, and provides an economical javascript idiom for creating DOM elements that doesn’t involve JSX or other atrocities.

Using this library you can write code like this (this code generates the internals of one of the example components):

fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
)

I think it’s competitive with the JSX equivalent (below) while not having any of JSX’s dependencies (such as requiring a transpile cycle, for starters).

<div className={'selection'}></div>
<div className={'indicator'}>▾</div>
<div className={'menu'}>
  {...props.children}
</div>

To see just how lean a component implemented using this library can be, you can compare the new switch component to the old CSS-based version.

Aside — An Interesting Advantage of Web Components

One of the interesting properties of web-components is that internally the only part of the DOM they need to care about is whether they have a child <slot>. Web-components don’t need to use the DOM at all except for purposes of managing hierarchical relationships. (Annoyingly, web-components cannot be self-closing tags. You can’t even explicitly self-close them.)

E.g. imagine a web-component that creates a WebGL context, and child components that render into the container’s scene description.

In several cases while writing b8r examples I really wanted to be able to have abstract components (e.g. the asteroids in the asteroids example or the character model in the threejs example). This is something that can easily be done with web-components but is impossible with b8r’s HTML-centric components. It would be perfectly viable to build a component library that renders itself as WebGL or SVG.
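To make that concrete, here’s a purely hypothetical sketch (none of these elements exist, and a real version would need actual WebGL plumbing):

// <gl-scene> owns a canvas and a scene description; <gl-sphere> adds
// itself to that description without rendering any DOM of its own
class GLScene extends HTMLElement {
  constructor() {
    super();
    this.objects = new Set();
  }
  connectedCallback() {
    if (!this.gl) {
      const canvas = document.createElement('canvas');
      this.appendChild(canvas);
      this.gl = canvas.getContext('webgl');
    }
  }
  render() {
    // walk this.objects and issue WebGL draw calls here
  }
}

class GLSphere extends HTMLElement {
  connectedCallback() {
    this.scene = this.closest('gl-scene');
    if (this.scene) this.scene.objects.add(this);
  }
  disconnectedCallback() {
    if (this.scene) this.scene.objects.delete(this);
  }
}

window.customElements.define('gl-scene', GLScene);
window.customElements.define('gl-sphere', GLSphere);

Markup like <gl-scene><gl-sphere></gl-sphere></gl-scene> would then describe a 3d scene without any of the children touching the DOM.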

Styling Custom Elements for Fun and Profit

One of the open questions about custom DOM elements is how to allow them to be styled. So far, I’ve seen one article suggesting subclassing, which seems to me like a Bad Idea.

I’m currently leaning towards one or both of (a) making widgets as generic and granular as possible (e.g. implement a custom <select> and a custom <option> and let them be styled from “outside”), and (b) when necessary, driving styles via CSS variables (e.g. you might have a widget named foo that has a border, and give it a generic name (--widget-border-color), a specific name (--foo-border-color), and a default to fall back to).
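Concretely, (b) might look something like this; the foo-widget tag and the default color are hypothetical, and the rule is injected from javascript purely for illustration:

const widgetStyles = document.createElement('style');
widgetStyles.textContent = `
  foo-widget {
    border-color: var(--foo-border-color, var(--widget-border-color, #ccc));
  }
`;
document.head.appendChild(widgetStyles);

A themer can then set --widget-border-color once for everything, or --foo-border-color to target just that one widget.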

So, in essence, b8r v2 will be smaller and simpler — because it’s going to be b8r minus require and components. You won’t need components, because you’ll have web-components. You won’t need require because you’ll have import. I also plan one more significant change in b8r v2 — it will be a proper node package, so you can manage it with npm and yarn and so forth.

<b8r-bindery>

One idea that I’m toying with is to make b8r itself “just a component”. Basically, you’d get a b8r component that you simply stick anywhere in the DOM and you’d get b8r’s core functionality.

In essence the bindery’s value — presumably an object — becomes accessible (via paths) to all descendants, and the component handles all events the usual way.

I’m also toying with the idea of supporting Redux (rather than b8r’s finer-grained two-way bindings). There’s probably not much to do here — just get Redux to populate a bindery, and then, instead of the tedious passing of objects from parent to child to grandchild that characterizes React-Redux coding, you can simply bind to paths and get on with your life.
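Purely speculative, but the gist might look like this (no <b8r-bindery> element exists yet; data-bind is b8r’s existing binding syntax):

// markup: <b8r-bindery><input data-bind="value=user.name"></b8r-bindery>
import {createStore} from 'redux';

const reducer = (state = {user: {name: 'Anonymous'}}, action) => state;
const store = createStore(reducer);
const bindery = document.querySelector('b8r-bindery');

// push each new Redux state into the bindery; descendants bind to paths
// instead of having props threaded down through the component tree
store.subscribe(() => {
  bindery.value = store.getState();
});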

Summing Up

After two years, I’m still pretty happy with b8r. Over the next year or so I hope to make it more interoperable (“just another package”), and to migrate it from (its) require and HTML components to import and web-components. Presumably, we’ll have import working in node (and hence Electron and nwjs) by then.

Happy new year!

As the Wwworm Turns

Microsoft’s recent announcement that it is, in effect, abandoning the unloved and unlamented Edge browser stack in favor of Chromium is, well, both hilarious and dripping in irony.

Consider at first blush the history of the web in the barest terms:

  • 1991 — http, html, etc. invented using NeXT computers
  • 1992 — Early browsers (Mosaic, NetScape, etc.) implement and extend the standard, notably NetScape adds Javascript and tries to make frames and layers a thing. Also, the <blink> tag.
  • 1995 — Microsoft “embraces and extends” standards with Internet Explorer and eventually achieves a 95% stranglehold on the browser market.
  • 1997 — As NetScape self-destructs and Apple’s own OpenDoc-based browser “Cyberdog” fails to gain any users (mostly due to being OpenDoc-based), Apple begs Microsoft for a slightly-less-crummy version of IE5 to remain even vaguely relevant/useful in an era where most web stuff is only developed for whatever version of IE (for Windows) the web developer is using.
  • 2002 — FireFox rises from the ashes of NetScape. (It is essentially a cross-platform browser based on Camino, a similar Mac-only browser that was put together by developers frustrated by the lack of a decent Mac browser.)
  • 2003 — Stuck with an increasingly buggy and incompatible IE port, Apple develops its own browser based on KHTML after rejecting Netscape’s Gecko engine. The new browser is called “Safari”, and Apple’s customized version of KHTML is open-sourced as Webkit.
  • As a scrappy underdog, Apple starts a bunch of small PR wars to show that its browser is more standards-compliant and runs javascript faster than its peers.
  • Owing to bugginess, neglect, and all-round arrogance, Microsoft gradually loses a significant portion of market share to FireFox (and, on the Mac, Safari — which is at least as IE-compatible as the aging version of IE that runs on Macs). Google quietly funds FireFox via ad-revenue-sharing, since it is in Google’s interest to break Microsoft’s stranglehold on the web.
  • 2007 — Safari, having slowly become more relevant to consumers as the best browser on the Mac (at least competitive with Firefox functionally, and much faster and more power-efficient than any competitor), is suddenly the only browser on the iPhone. Suddenly, making your stuff run on Safari matters.
  • 2008 — Google starts experimenting with making its own web browser. It looks around for the best open source web engine, rejects Gecko, and picks Webkit!
  • Flooded with ad revenue from Google, and divorced from any sense of user accountability, FireFox slowly becomes bloated and arrogant, developing an email client and new languages and mobile platforms rather than fixing or adding features to the only product it produces that anyone cares about. As Firefox grows bloated and Webkit improves, Google Chrome benefits as, essentially, Safari for Windows. (Especially since Apple’s official Safari for Windows is burdened with a faux-macOS “metal” UI, and users are tricked into installing it with QuickTime.) When Google decides to turn Android from a Sidekick clone into an iPhone clone, it uses its Safari clone as the standard browser. When Android becomes a success, suddenly Webkit compatibility matters a whole lot more.
  • 2013 — Google is frustrated by Apple’s focus on end-users (versus developers). E.g. is the increase in size and power consumption justified by some kind of end-user benefit? If “no” then Apple simply won’t implement it. Since Google is trying to become the new Microsoft (“developers, developers, developers”) it forks Webkit so it can stop caring about users and just add features developers think they want at an insane pace. It also decides to completely undermine the decades-old conventions of software numbering and make new major releases at an insane pace.
  • Developers LOOOOVE Chrome (for the same reason they loved IE). It lets them reach lots of devices, it has lots of nooks and crannies, it provides functionality that lets developers outsource wasteful tasks to clients, and if they whine about some bleeding-edge feature Google will add it, whether or not it makes sense for anyone. Also it randomly changes APIs and adds bugs fast enough that you can earn a living by memorizing trivia (like the good old days of AUTOEXEC.BAT), allowing a whole new generation of mediocrities to find gainful employment. Chrome also overtakes Firefox as having the best debug tools (in large part because Firefox engages in a two-year masturbatory rewrite of all its debugging tools which succeeds mainly in confusing developers and driving them away).
  • 2018 — Microsoft, having seen itself slide from utter domination (IE6) to laughingstock (IE11/Edge), does the thing-that-has-been-obvious-for-five-years and decides to embrace and extend Google’s Webkit fork (aptly named “Blink”).

RAW Power 2.0

RAW Power 2.0 in action — the new version features nice tools for generating black-and-white images.

My favorite tool for quickly making adjustments to RAW photos just turned 2.0. It’s a free upgrade but the price for new users has increased to (I believe) $25. While the original program was great for quickly adjusting a single image, the new program allows you to browse directories full of images quickly and easily, to some extent replacing browsing apps like FastRAWViewer.

The major new features — and there are a lot of them — are batch processing, copying and pasting adjustments between images, multiple window and tab support, hot/cold pixel overlays (very nicely done), depth effect (the ability to manipulate depth data from dual-camera iPhones), perspective correction, and chromatic aberration support.

The browsing functionality is pretty minimal. It’s useful enough for selecting images for batch-processing, but it doesn’t offer filtering ability (beyond the ability to show only RAW images) or the ability to quickly modify image metadata (e.g. rate images), so FastRAWViewer is still the app to beat for managing large directories full of images.

While the hot/cold pixel feature is lovely, the ability to show in-focus areas (another great FastRAWViewer feature) is also missing.

As before, RAW Power functions both as a standalone application and as a plugin for Apple’s Photos app (providing enhanced adjustment functionality).

Highly recommended!

Epic Fail

Atul Gawande describes the Epic software system being rolled out in America’s hospitals.

It reads like a potpourri of everything bad about enterprise IT: standardizing on endpoints instead of interoperability, Big Bang changes instead of incremental improvements, and failure to adhere to the simplest principles of usability.

The sad thing is that the litany of horrors in this article consists entirely of solved problems. Unfortunately, it seems that in the process of “professionalizing” usability, the discipline has lost its way.

Reading through the article, you can just tally up the violations of my proposed Usability Heuristics, and there are very few issues described in the article that would not be eliminated by applying one of them.

The others would fall to simple principles like using battle-tested standards (ISO timestamps anyone?) and picking the right level of database normalization (it should be difficult or impossible to enter different variations of the same problem in “problem lists”, and easier to elaborate on existing problems).

There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.”

I’m sure the tabs LOOKED nice, though. (Hint: maximize generality, minimize steps, progressive disclosure, viability.)

“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!”

Sensible defaults can be helpful?! Who knew? (Hint: sensible defaults, minimize steps.)

This is probably my favorite (even though it’s not usability-related):

Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 a.m. to 1:59 a.m. repeated in the night. This was, for the system, a surprise event.

Face, meet palm.

Date-and-time is a fundamental issue with all software, and the layers of stupidity that must have signed off on a system that couldn’t cope with Daylight Saving boggle my mind.
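It’s a solved problem: ISO 8601 timestamps with UTC offsets distinguish the two 1:30s just fine (the date below is only an example; the offsets are US Eastern).

const firstOneThirty = new Date('2018-11-04T01:30:00-04:00');   // EDT
const secondOneThirty = new Date('2018-11-04T01:30:00-05:00');  // EST, after the clocks fall back
console.log(firstOneThirty < secondOneThirty);  // true: they're an hour apart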

A former colleague of mine linked to the US Web Design System as if this were some kind of intrinsically Good Thing. Hilariously, the site itself does not appear to have been designed for accessibility or even decent semantic markup, and it blocks robots.

Even if the site itself were perfect, the bigger problems are that (a) there are plenty of similar open source projects they could simply have blessed; (b) it’s a cosmetic standard; and (c) there’s pretty much no emphasis on the conceptual side of usability. So, at best, it helps make government websites look nice and consistent.

(To be continued…)