Eon and π

One important spoiler! (Again, it’s a great book. Go read it.)

Another of my favorite SF books from the 80s and 90s is Greg Bear’s Eon. At one point it seemed to me that Greg Bear was looking through the SF pantheon for books with great ideas that never really went anywhere and started doing rewrites where he took some amazing concept off the shelf and wrote a story around it. Eon seemed to me to be taking the wonder and promise of Rendezvous with Rama and going beyond it in all respects.

One of the interesting things about Eon is that it was a book with a lot of Russian — or more accurately Soviet — characters written in the pre-glasnost era. For those of you not familiar with the Cold War, our relationship with the USSR went through a rapid evolution: from the early 70s, when we signed a bunch of arms control agreements, fell in love with Eastern bloc gymnasts, and things seemed to be improving; through the Soviet invasion of Afghanistan and the so-called “Star Wars” missile defense program, when things got markedly worse; then the death of Leonid Brezhnev, followed by a quick succession of leaders who were already dying when they came to power; and finally the appearance of Mikhail Gorbachev, who — with the help of Reagan and Thatcher — reduced tensions and eventually made the peaceful end of the Cold War possible in, shall we say, 1989.

Eon was published in 1985, which means it was probably written a year or two earlier — at the height of US-Soviet tensions — and it portrays both the Americans and their Russian counterparts as more doctrinaire, paranoid, and xenophobic than the reader is likely to be. The story is set around 2005, on an Earth significantly more advanced technologically than ours actually turned out to be by then, and a lot of the timeline is ridiculously optimistic (there are throwaway comments such as the US orbital missile defense platforms having been far more effective than anyone expected in the limited nuclear exchange of the 90s).

When I first read Eon, I can remember being on the cusp of giving up on it as the politics seemed so right-wing. The Russians were bad guys and the Americans were kind of idiotically blinkered. I made allowances for the fact that the setting included a historical nuclear exchange between Russia and the US which certainly would justify bad feelings, but which itself was predicated on the US and Russians being a lot more bone-headed than the real US and Russians seemed to be.

I should note that I read Eon shortly after it was published, and Gorky Park had been published five years earlier and adapted as a movie in 1983. So it’s not like there weren’t far more nuanced portrayals of Soviet citizens in mainstream popular culture, despite increasing Cold War tensions and the invasion of Afghanistan.

The Wikipedia entry for Eon is pretty sparse, but claims that:

the concept of parallel universes, alternate timelines and the manipulation of space-time itself are major themes in the latter half of the novel.

Note: I don’t really think books have “themes”. I think it’s a post-hoc construction by literary critics that some writers (and film makers) have been fooled by.

It’s surprising to me then that something Greg Bear makes explicit and obvious not long into the novel is that the entire story takes place in an alternate universe. He actually compromises the science in what is a pretty hard SF novel to make the point clear: Patricia Vasquez (the main protagonist) is a mathematician tasked with understanding the physics of the seventh chamber. To do this she has technicians make her a “multimeter” that measures various mathematical and physical constants, making it possible to detect distortions in space-time — boundaries between universes. Upon receiving it, she immediately checks that it is working by looking at the value of π, which turns out to be wrong.

Anyone with a solid background in Physics or Math will tell you that changing the value of π is simply not possible without breaking Math. (The prominence of π in Carl Sagan’s Contact is slightly less annoying, since there it is taken to be a message from The Creator, the implication being that The Creator exists outside Math, which is more mind-boggling than, say, living outside Time.) It’s far more conceivable to mess with measured and — seemingly — arbitrary constants such as the permittivity of free space, the Planck constant, the gravitational constant, the charge on the electron, and so forth, and some of these are mentioned. But most people don’t know them to eight or ten places by heart and lack a ubiquitous device that will display them, so (I assume) Bear chose π.

My point is, once it’s clear and explicit that Eon is set in an alternate universe, the question switches from “is Bear some kind of right-wing nut-job like so many otherwise excellent SF writers?” (which he doesn’t seem to be, based on his other novels) to “does the universe he is describing make internal sense?” It also, I suspect, makes it harder to turn this novel into a commercially successful TV series or movie. Which is a damn shame.

Use of Weapons, Revisited

I just finished listening to Use of Weapons. I first read it shortly after it was published, and it remains my favorite book — well, maybe second to Excession — by Iain M. Banks (who is sorely missed), and one of my favorite SF novels ever.

Spoilers. Please, if you haven’t, go read the book.

First of all, after it finished and I had relistened to the last couple of chapters just to get them straight in my head, I immediately went looking for any essays about the end, and found this very nice one. What follows was intended to be a comment on this post, but WordPress.com wouldn’t cooperate so I’m posting it here.

I’d like to add my own thoughts, which run a little counter to that post’s wholly negative take on Elethiomel. First, he never tries to blurt out a justification for his actions to Livueta, despite many opportunities. Even in his own internal monologues he never tries to justify them. Similarly, if anything Livueta remembers him more fondly than he remembers himself (at least before the chair). If he’s a psychopath, he’s remarkably wracked by conscience.

In an earlier flashback he wonders what it is he wants from her, and considers and (if I recall correctly) rejects forgiveness.

If we read between the lines, we might conclude that the regime to which the Zakalwes are loyal is actually pretty horrible. The strong implication is that Elethiomel’s family narrowly escapes annihilation only because they are sheltered by the Zakalwes. It has the feel of Tsarist Russia about it.

Elethiomel, for all his negative qualities, seems naturally attracted to the nicer side in every scrap he ends up in. When he freelances in the early flashbacks, he’s not doing anything public; he’s quietly and secretly using his wealth and power to (crudely) attempt to do the kinds of things the Culture does.

The book is full of symmetries. If you’d like one more, the Zakalwes are “nice” people loyal to a terrible regime, whereas Elethiomel is a ruthless bastard who works for good, or at least less terrible, regimes.

So it’s perfectly possible that the rebellion he led was in fact very much a heroic and well-intentioned thing, but at the end, when it was doomed, he fell victim to his two great weaknesses — the unwillingness to back down from an untenable position (if he looks like he’s losing, he simply keeps on fighting to the bitter end) and his willingness to use ANYTHING as a weapon no matter how terrible. I think it’s perfectly possible that he did not kill Darkense, but was willing to use her corpse as a weapon because it gave him one more roll of the dice. What did he want to say to Livueta, after all?

I further submit that his final unwillingness to perform the decapitation attack in his last mission shows that he has actually learned something at long last. And this is the thing in him that has changed and caused him to start screwing up (from Special Circumstances’ point of view) in missions since he was, himself, literally decapitated.

Migrating b8r code to the New World Order

I’ve got three small projects built with b8r lying around (one is something I did for a job interview a few months ago, so it’s not in a public repo). Anyway, all three load b8r directly from the github master branch, which means they broke hard when I pushed my latest changes.

Two of the projects are my galaxy generator (which is heavy on libraries but has no components) and my dnd spell list.

Anyway, updating the galaxy generator took about five minutes. Updating the dnd spell list took about thirty seconds. If I recall correctly, this is the second change to b8r in roughly two years that has required fixing the galaxy generator. (Obviously this should not matter, since only a crazy person would load dependencies into production code this way, but I don’t take breaking changes lightly.)

I’ve updated bindinator.com and attempted to document as much as possible about the migration process. Finally, I’ve replaced require.js with a tiny module that shows an alert() and redirects the user to the improved documentation on migrating to import.
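For what it’s worth, the replacement is conceptually just a couple of lines, something along these lines (a sketch, not the actual file, and the documentation URL is a placeholder):

// sketch of a require.js stand-in that warns and redirects (URL is a placeholder)
alert('require() has been removed from b8r; see the migration notes on bindinator.com');
window.location.href = 'https://bindinator.com/'; // placeholder for the actual migration docs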

Farewell require, you’re no longer needed…

This is a really technical article about front-end javascript programming. You have been warned.

Background

For most of the last two years, I have written my front-end code using my own implementation of require. It was inspired by the mostly delightful implementation of require used at Facebook (in whose bowels I spent over a month of my life). Initially, I just wanted to use something off-the-shelf, but every implementation I found required a build phase during development (the Facebook version certainly did) and simultaneously tried to abstract away paths (because during the build phase, something would find all the libraries and map names to paths).

I wanted a require that did not depend on a build phase, did not abstract out paths (so if you knew a problem was in a required library, you also knew where that library was), was easy to use, and supported third-party libraries written to different variations of the commonjs “standard”.

What is require?

const foo = require('path/to/foo.js');

Require allows you to pull code from other files into a javascript file. Conceptually, the first time a file is required, it works something like this:

  1. get text from ‘path/to/foo.js’
  2. insert it in the body of a function(module){ /* the code from foo.js */ }
  3. create an object, pass it to the function (as module)
  4. after the function executes, put whatever ended up in module.exports into a map from library paths to exported values.

It follows that foo.js looks something like this:

module.exports = function() { console.log("I am foo"); }

The next time someone wants that file, pass them the same stuff you gave the previous caller.

Now, if that were all it needed to do, require would be pretty simple. Even so, there’s one nasty issue in the preceding once you allow relative paths: if the same module is required from two different places, you don’t want to hand back two different instances of it — both for efficiency and for correctness.
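To make this concrete, here’s a minimal sketch of such a naive, synchronous require with a cache (purely illustrative; it uses a blocking XMLHttpRequest and ignores relative-path resolution, which a real implementation cannot):

// purely illustrative: a naive, synchronous require with a module cache
const moduleCache = {};

function naiveRequire(path) {
  if (moduleCache[path]) {
    return moduleCache[path].exports; // same instance for every caller
  }
  const xhr = new XMLHttpRequest();
  xhr.open('GET', path, false); // synchronous, so it blocks the main thread
  xhr.send();
  const module = { exports: {} };
  // wrap the fetched source in a function and hand it the module object
  new Function('module', 'exports', 'require', xhr.responseText)(
    module, module.exports, naiveRequire
  );
  moduleCache[path] = module;
  return module.exports;
}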

But of course there are lots and lots of wrinkles:

  • Every javascript file can, of course, require other javascript files.
  • Circular references are easy to introduce accidentally, and can become unavoidable.
  • Each of these things is a synchronous call, so loading a page could take a very long time (if you put this naive implementation into production).
  • Javascript is a dynamic language. It’s already hard to statically analyze, and require makes it harder. It’s particularly nasty because require is “just a function”, which means it can be called at any time, and conditionally.
  • And, just to be annoying, the different implementations of commonjs make weird assumptions about the behavior of the wrapper function and the way it’s called (some expect module to be {}, some expect it to be {exports:{}}, some expect it to be something else), and because require is just javascript, modules can choose to behave differently based on what they see. (A sketch of two such flavors follows this list.)
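Here are two perfectly ordinary commonjs-ish modules that make different assumptions about what the wrapper gives them (illustrative, not taken from any particular library):

// flavor A: replaces module.exports wholesale
module.exports = function foo() { return 'foo'; };

// flavor B: assumes an exports object already exists and just attaches to it
exports.bar = function bar() { return 'bar'; };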

Under the hood, require ends up needing to support things like dependency-mapping, circular-reference detection, asynchronous require, inlining pre-built dependency trees, and pretending to be different versions of require for different modules. It’s quite a nasty mess, and it’s vital to the correct functioning of a web app and its performance.

So, good riddance!

ES6 Modules

So, all the major browsers (and nodejs — sort of) finally support ES6 modules and — perhaps more importantly — because this is a change to the language (rather than something implemented in multiple different ways using the language) various build tools support it and can transpile it for less-compatible target platforms.

So now, you can write:

import foo from 'path/to/foo.js';

And the module looks like:

export default function(){ console.log('I am foo'); }

A couple of months ago, I took a few hours and deleted require.js from the b8r source tree, and then kept working until the b8r source files passed b8r’s (rather meagre) unit tests. I then ran into a hard wall because I couldn’t get any components to load if they used require.

The underlying problem is that components are loaded asynchronously at runtime, while the import statement is static. There is, however, an asynchronous version of import that I didn’t know about at the time.

const foo = await import('path/to/foo.js'); // does not work! foo is the module object, not its default export

To obtain a module’s default export via import(), you need to access the default property of the object the promise resolves to. So:

const foo = (await import('path/to/foo.js')).default;

In general, I’ve started avoiding export default for small libraries. In effect, default is just another named export as far as dynamic import() is concerned.
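For example, with a named export the dynamic form reads naturally (a small sketch; foo.js stands in for your own module, and the await assumes an async context):

// foo.js
export function foo() { console.log('I am foo'); }

// consumer
const { foo } = await import('path/to/foo.js');
foo();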

import — again because it’s a language feature and not just some function someone wrote — is doing clever stuff to allow it to be non-blocking, e.g.

import {foo} from 'path/to/foo.js';

Will not work if foo is not explicitly exported from the module. E.g. exporting a default with a property named foo will not work. This isn’t a destructuring assignment. Under the hood this presumably means that a module can be completely “compiled” with placeholders for the imports, allowing its dependencies to be found and imported (etc.). Whereas require blocks compilation, import does not.
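For instance (a sketch; bar.js is hypothetical):

// bar.js: only a default export, which happens to have a property named foo
export default { foo: () => 'I am foo' };

// main.js: fails at load time because bar.js does not provide an export named 'foo'
import { foo } from './bar.js';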

This also means that import affords static analysis (unlike require, import must be at top-level scope in a module). Also, even though import() (dynamic import) looks just like a regular function, it turns out it’s also a new language feature: it’s not a function value, so you can’t assign it to a variable, pass it around, or call it indirectly.

An Interesting Wrinkle

One of the thorniest issues I had to deal with while maintaining require was that of third-party libraries which adhered to mystifyingly different commonjs variations. The funny thing is that all of them turn out to support the “bad old way” of handling javascript dependencies, which is <script> tags.

As a result of this, my require provided a solution for the worst-case scenario in the shape of a function — viaTag — that would insert a (memoized) script tag in the document head and return a promise that resolved when the tag loaded. Yes, it’s icky (but heck, require is eval) but it works. I’ve essentially retained a version of this function to deal with third-party libraries that aren’t provided as ES modules.
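A minimal sketch of how such a viaTag-style helper can work (not the actual b8r implementation):

// memoized script-tag loader: one tag per src, one shared promise
const scriptPromises = {};

function viaTag(src) {
  if (!scriptPromises[src]) {
    scriptPromises[src] = new Promise((resolve, reject) => {
      const script = document.createElement('script');
      script.src = src;
      script.onload = () => resolve(src);
      script.onerror = reject;
      document.head.appendChild(script);
    });
  }
  return scriptPromises[src];
}

// usage: await viaTag('https://example.com/some-legacy-library.js');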

b8r without require

Last night, I revisited the task of replacing require with import in b8r armed with new knowledge of dynamic import(). I did not go back to the branch which had a partial port (that passed tests but couldn’t handle components) because I have done a lot of work on b8r’s support for web-components in my spare time and didn’t want to handle a complicated merge that touched on some of the subtlest code in b8r.

In a nutshell, I got b8r passing tests in about three hours, and then worked through every page of the b8r demo site and had everything loading by midnight with zero console spam. (Some of the loading is less seamless than it used to be because b8r components are even more deeply asynchronous than they used to be, and I haven’t smoothed all of that out yet.)

This morning I went back through my work, and found an error I’d missed (an inline test was still using require) and just now I realized I hadn’t updated benchmark.html so I fixed that.

An Aside on TDD

My position on TDD is mostly negative (I think it’s tedious and boring) but I’m a big fan of API-driven design and “dogfooding”. Dogfooding is, in essence, continuous integration testing, and the entire b8r project is pure dogfooding. It’s gotten to the point where I prefer developing new libraries I’m writing for other purposes within b8r because it’s such a nice environment to work with. I’ve been thinking about this a lot lately as my team develops front-end testing best practices.

Here’s the point — swapping out require for import across a pretty decent (and technically complex) project is beyond a “refactor”. This change took me two attempts, and I probably did a lot of thinking about it in the period between the attempts — so I’d call it a solid week of work.

At least when you’re waiting for a compile / render / dependency install you can do something fun. Writing tests — at least for UI components or layouts — isn’t fun to do and it’s not much fun to run the tests either.

“5 Story Points”

I’ve done a bunch of refactors of b8r-based code (in my last month at my previous job two of us completely reorganized most of the user interface in about a week, refactoring and repurposing a dozen significant components along the way). Refactoring b8r code is about as pleasant as I’ve ever found refactoring (despite the complete lack of custom tooling). I’d say it’s easier than fairly simple maintenance on typical “enterprisey” projects (e.g. usually React/Redux, these days) festooned with dedicated unit tests.

Anyway, even advocates of TDD agree that TDD works best for pure functions — things that have well-defined pre- and post-conditions. The problem with front-end programming is that if it touches the UI, it’s not a pure function. If you set the disabled attribute of a <button>, the result of that operation is (a) entirely composed of side-effects and (b) quite likely asynchronous. In TDD world you’d end up writing a test expecting the button with its disabled attribute set to be a button with a disabled attribute set. Yay, assignment statements work! We have test coverage. Woohoo!
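Something like this, in other words (a deliberately silly, hypothetical jest-style example, not from any real test suite):

// hypothetical test of the "assignment statements work" variety
test('disabling the button disables the button', () => {
  const button = document.createElement('button');
  button.disabled = true;
  expect(button.disabled).toBe(true); // congratulations, setters work
});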

Anyway, b8r has a small set of “unit tests” (many of which are in fact pretty integrated) and a whole bunch of inline tests (part of the inline documentation) and dogfooding (the b8r project is built out of b8r). I also have a low tolerance for console spam (it can creep in…) but b8r in general says something if it sees something (bad) and shuts up otherwise.

Anyway, I think this is a pretty pragmatic and effective approach to testing. It works for me!

What’s left to do?

Well, I have to scan through the internal documentation and purge references to require in hundreds of places.

Also, there’s probably a bunch of code that still uses require that for some reason isn’t being exercised. This includes code that expects to run in Electron or nwjs. (One of the nice things about removing my own implementation of require is that it had to deal with environments that create their own global require.) This is an opportunity to deal with some files that need to be surfaced or removed.

At that point there should be no major obstacle to using rollup.js or similar to “build” b8r (versus using the bespoke toolchain I built around require, and can now also throw away). From there it should be straightforward to convert b8r into “just another package”.

Is it a win …yet?

My main goal for doing all this is to make b8r live nicely in the npm (and yarn) package ecosystem and, presumably, benefit from all the tooling that you get from being there. If we set that — very significant — benefit aside:

  • Does it load faster? No. The old require loads faster. But that’s to be expected — require did a lot of cute stuff to optimize loading and being able to use even cleverer tooling (that someone else has written) is one of the anticipated benefits of moving to import.
  • Does it run faster? No. The code runs at the same speed.
  • Is it easier to work with? export { .... } is no easier than module.exports = { .... } and destructuring required values was actually easier. It will, however, be much easier to use b8r in combination with random other stuff. I am looking forward to not having to write /* global module, require */ and 'use strict' all over the place to keep my linters happy.
  • Does it make code more reliable? I’m going to say yes! Even in the simplest case import {foo} from 'path/to/foo.js' is more rigorous than const {foo} = require('path/to/foo.js'), because the latter only fails when foo is actually called (assuming it’s expected to be a function) or when using it causes a failure. With import, an error is thrown as soon as foo.js loads if foo isn’t an actual export. (Another reason not to use export default, by the way.)

More on web-components and perf

tl;dr web-components are only slow if they have a shadowRoot.

Here’s an updated version of the previous graph, showing two new data points — one where I replace two spans with a simple b8r component, and another where I modified makeWebComponent to allow the creation of custom-elements without a shadowRoot (and hence without styling).

My intention with the upcoming version of b8r is to replace b8r’s components with web-components, and my previous test showed that web-components were kind of expensive. But it occurred to me I hadn’t compared them to b8r components, so I did.

In a nut, a b8r component is about as expensive as a web-component with a shadowRoot (the b8r component in question was styled; removing the style didn’t improve performance, which isn’t surprising since b8r deals with component styles in a very efficient way), and a web-component without a shadowRoot is just plain fast. This is great news since it means that switching from b8r components (which do not have a shadow DOM) to web-components is a perf win.
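For reference, here is the shadowRoot distinction in plain platform terms: two minimal vanilla custom elements, one that attaches a shadow root and one that renders straight into the light DOM (this is standard custom-element boilerplate, not makeWebComponent):

// a custom element with a shadowRoot (and therefore scoped styling)
class WithShadow extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).innerHTML = '<span>shadow</span>';
  }
}

// a custom element without a shadowRoot: plain DOM, no style scoping, and faster to create
class NoShadow extends HTMLElement {
  connectedCallback() {
    this.textContent = 'no shadow';
  }
}

customElements.define('with-shadow', WithShadow);
customElements.define('no-shadow', NoShadow);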