Eon and π

One important spoiler! (Again, it’s a great book. Go read it.)

Another of my favorite SF books from the 80s and 90s is Greg Bear’s Eon. At one point it seemed to me that Greg Bear was looking through the SF pantheon for books with great ideas that never really went anywhere and started doing rewrites where he took some amazing concept off the shelf and wrote a story around it. Eon seemed to me to be taking the wonder and promise of Rendezvous with Rama and going beyond it in all respects.

One of the interesting things about Eon is that it was a book with a lot of Russian — or more accurately Soviet — characters written in the pre-Glasnost era. For those of you not familiar with the Cold War, the US relationship with the USSR went through a rapid evolution: from the early 70s, during which we signed a bunch of arms control agreements, fell in love with Eastern bloc gymnasts, and things seemed to be improving; through the Soviet invasion of Afghanistan and the so-called “Star Wars” missile defense program, when things got markedly worse; the death of Leonid Brezhnev, which was followed by a quick succession of leaders who were dying when they got to power; and then the appearance of Mikhail Gorbachev, who — with the help of Reagan and Thatcher — reduced tensions and eventually made the peaceful end of the Cold War possible in, shall we say, 1989.

Eon was published in 1985, which means it was probably written a year or two earlier — at the height of US-Soviet tensions — and it portrays both the Americans and their Russian counterparts as more doctrinaire, paranoid, and xenophobic than the reader is likely to be. The story is set around 2005, on an Earth significantly more advanced technologically than we now know it was at the time, and a lot of the timeline is ridiculously optimistic (there are throwaway comments such as the US orbital missile defense platforms having been far more effective than anyone expected in the limited nuclear exchange of the 90s).

When I first read Eon, I can remember being on the cusp of giving up on it as the politics seemed so right-wing. The Russians were bad guys and the Americans were kind of idiotically blinkered. I made allowances for the fact that the setting included a historical nuclear exchange between Russia and the US which certainly would justify bad feelings, but which itself was predicated on the US and Russians being a lot more bone-headed than the real US and Russians seemed to be.

I should note that I read Eon shortly after it was published, and Gorky Park had been published five years earlier and adapted as a movie in 1983. So it’s not like there weren’t far more nuanced portrayals of Soviet citizens in mainstream popular culture despite increasing Cold War tensions and the invasion of Afghanistan.

The Wikipedia entry for Eon is pretty sparse, but claims that:

the concept of parallel universes, alternate timelines and the manipulation of space-time itself are major themes in the latter half of the novel.

Note: I don’t really think books have “themes”. I think it’s a post-hoc construction by literary critics that some writers (and film makers) have been fooled by.

It’s surprising to me, then, that something Greg Bear makes explicit and obvious not long into the novel is that the entire story takes place in an alternate universe. He actually compromises the science in what is a pretty hard SF novel to make the point clear: Patricia Vasquez (the main protagonist) is a mathematician tasked with understanding the physics of the seventh chamber. To do this she has technicians make her a “multimeter” that measures various mathematical and physical constants, making it possible to detect distortions in space-time — boundaries between universes. Upon receiving it, she immediately checks that it is working and looks at the value of π, which is shown to be wrong.

Anyone with a solid background in Physics or Math will tell you that changing the value of π is just not possible without breaking Math. (The prominence of π in Carl Sagan’s Contact is slightly less annoying, since it is taken to be a message from The Creator. The implication being that The Creator exists outside Math, which is more mind-boggling than living outside Time, say.) It’s far more conceivable to mess with measured and — seemingly — arbitrary constants such as the permeability of free space, the Planck constant, gravity, the charge on the electron, and so forth, and some of these are mentioned. But most people don’t know them to eight or ten places by heart and lack a ubiquitous device that will display them, so (I assume) Bear chose π.

My point is, once it’s clear and explicit that Eon is set in an alternate universe, the question switches from “is Bear some kind of right-wing nut-job like so many otherwise excellent SF writers?” (which he doesn’t seem to be, based on his other novels) to “does the universe he is describing make internal sense?” It also, I suspect, makes it harder to turn this novel into a commercially successful TV series or movie. Which is a damn shame.

Use of Weapons, Revisited

I just finished listening to Use of Weapons. I first read it shortly after it was published, and it remains my favorite book — well, maybe second to Excession — by Iain M. Banks (who is sorely missed), and one of my favorite SF novels ever.

Spoilers. Please, if you haven’t, go read the book.

First of all, after it finished and I had relistened to the last couple of chapters just to get them straight in my head, I immediately went looking for any essays about the end, and found this very nice one. What follows was intended to be a comment on this post, but WordPress.com wouldn’t cooperate so I’m posting it here.

I’d like to add my own thoughts, which run a little counter to the referenced post’s wholly negative take on Elethiomel. First, he never tries to blurt out a justification for his actions to Livueta, despite many opportunities. Even in his own internal monologues he never tries to justify them. Similarly, if anything Livueta remembers him more fondly than he remembers himself (at least before the chair). If he’s a psychopath, he’s remarkably wracked by conscience.

In an earlier flashback he wonders what it is he wants from her, and considers and (if I recall correctly) rejects forgiveness.

If we read between the lines, we might conclude that the regime to which the Zakalwes are loyal is actually pretty horrible. The strong implication is that Elethiomel’s family narrowly escapes annihilation only because it is sheltered by the Zakalwes. It has the feel of Tsarist Russia about it.

Elethiomel, for all his negative qualities, seems naturally attracted to the nicer side in every scrap he ends up in. When he freelances in the early flashbacks, he’s not doing anything public; he’s quietly and secretly using his wealth and power to (crudely) attempt to do the kinds of things the Culture does.

The book is full of symmetries. If you’d like one more, the Zakalwes are “nice” people loyal to a terrible regime, whereas Elethiomel is a ruthless bastard who works for good, or at least less terrible, regimes.

So it’s perfectly possible that the rebellion he led was in fact very much a heroic and well-intentioned thing, but at the end, when it was doomed, he fell victim to his two great weaknesses — the unwillingness to back down from an untenable position (if he looks like he’s losing, he simply keeps on fighting to the bitter end) and his willingness to use ANYTHING as a weapon no matter how terrible. I think it’s perfectly possible that he did not kill Darkense, but was willing to use her corpse as a weapon because it gave him one more roll of the dice. What did he want to say to Livueta, after all?

I further submit that his final unwillingness to perform the decapitation attack in his last mission shows that he has actually learned something at long last. And this is the thing in him that has changed and caused him to start screwing up (from Special Circumstances’ point of view) in missions since he was, himself, literally decapitated.

b8r v2

b8r v2 is going to adopt web-components over components and import over require

Having developed b8r in a period of about two weeks after leaving Facebook (it was essentially a distillation / brain-dump of all the opinions I had formed about writing front-end code while working there), I’ve just spent about two years building a pretty solid social media application using it.

But, times change. I’m done with social media, and the question is, whither b8r?

In the last few weeks of my previous job, one of my colleagues paid me a huge compliment: he said I’d “really hit it out of the park” with b8r. We’d just rebuilt half the user interface of our application, replacing some views completely and significantly redesigning others, in two weeks, with one of the three front-end coders on leave.

I designed b8r to afford writing complex applications quickly, turn hacky code into solid code without major disruption, make maintenance and debugging as easy as possible, and to allow new programmers to quickly grok existing code. As far as I can tell, it seems to do the trick. But it’s not perfect.

Two significant things have emerged in the front-end world in the last few years: import and web-components (custom DOM elements in particular).

import

When I wrote b8r, I found the various implementations of require so annoying (and mutually incompatible) that I wrote my own, and I’ve invested quite a lot of effort in it since then (e.g. I ended up writing an entire packaging system because of it).

Switching to import seems like a no-brainer, even if it won’t be painless (for various reasons, import is pretty inimical to CommonJS and not many third-party libraries are import-friendly, and it appears to be impossible for a given javascript file to be compatible with both import and require).

I experimentally converted all of the b8r core over to using import in an afternoon — enough that it could pass all the unit tests, although it couldn’t load the documentation system because any component that uses require or require.lazy won’t work.

Which got me to thinking about…

web-components

I’ve been struggling with improving b8r’s component architecture. The most important thing I wanted was for components to naturally provide a controller (in effect, for components to be instances of a class), and to pave some good paths and wall up some bad ones. But, after several abortive attempts and then thinking about the switch from require to import, I’ve decided to double-down on web-components. The great thing about web-components is that they have all the virtues I want from v2 components and absolutely no dependency on b8r.

I’ve already added a convenience library called web-components.js. You can check it out along with a few sample components. The library makes it relatively easy to implement custom DOM elements, and provides an economical javascript idiom for creating DOM elements that doesn’t involve JSX or other atrocities.

Using this library you can write code like this (this code generates the internals of one of the example components):

fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
)

I think it’s competitive with JSX while having none of JSX’s dependencies (a transpile cycle, for starters). Here’s the equivalent JSX:

<div className={'selection'}></div>
<div className={'indicator'}>▾</div>
<div className={'menu'}>
  {...props.children}
</div>
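For a sense of how little machinery the idiom needs, here’s a deliberately simplified sketch of helpers with the same shape. These are hypothetical stand-ins, not the web-components.js implementations — the real library creates actual DOM nodes, whereas these build HTML strings so the idea can be shown without a browser:

```javascript
// Hypothetical, simplified element builders: each takes an options object
// and returns an HTML string (the real versions return DOM elements).
const el = (tag) => ({classes = [], content = ''} = {}) => {
  const classAttr = classes.length ? ` class="${classes.join(' ')}"` : '';
  const inner = Array.isArray(content) ? content.join('') : content;
  return `<${tag}${classAttr}>${inner}</${tag}>`;
};
const div = el('div');
const slot = () => '<slot></slot>';
const fragment = (...children) => children.join('');

// The example from above, using the sketched helpers:
const html = fragment(
  div({classes: ['selection']}),
  div({content: '▾', classes: ['indicator']}),
  div({classes: ['menu'], content: slot()}),
);
// html is now:
// <div class="selection"></div><div class="indicator">▾</div><div class="menu"><slot></slot></div>
```

The point of the idiom is that it’s plain javascript — functions and object literals — so there’s nothing to transpile and nothing to learn beyond the library’s option names.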

To see just how lean a component implemented using this library can be, you can compare the new switch component to the old CSS-based version.

Aside — An Interesting Advantage of Web Components

One of the interesting properties of web-components is that internally the only part of the DOM they need to care about is whether they have a child <slot>. Web-components don’t need to use the DOM at all except for purposes of managing hierarchical relationships. (Annoyingly, web-components cannot be self-closing tags. You can’t even explicitly self-close them.)

E.g. imagine a web-component that creates a WebGL context, with child components that render into the container’s scene description.

In several cases while writing b8r examples I really wanted to be able to have abstract components (e.g. the asteroids in the asteroids example or the character model in the threejs example). This is something that can easily be done with web-components but is impossible with b8r’s HTML-centric components. It would be perfectly viable to build a component library that renders itself as WebGL or SVG.

Styling Custom Elements for Fun and Profit

One of the open questions about custom DOM elements is how to allow them to be styled. So far, I’ve seen one article suggesting subclassing, which seems to me like a Bad Idea.

I’m currently leaning towards one or both of (a) making widgets as generic and granular as possible (e.g. implement a custom <select> and a custom <option> and let them be styled from “outside”) and (b) where necessary, driving styles via CSS variables (e.g. you might have a widget named foo that has a border, and give it a generic name (--widget-border-color), a specific name (--foo-border-color), and a default to fall back to).
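The fallback chain maps directly onto nested var() defaults. As a sketch — the foo widget and both variable names are the hypothetical ones from above — the widget’s own stylesheet consults the specific name first, then the generic name, then a hard-coded default:

```javascript
// Hypothetical shadow stylesheet for a <foo-widget> element. A page can
// set --foo-border-color to style this widget specifically, or
// --widget-border-color to style all such widgets generically; if neither
// is set, the nested var() fallbacks bottom out at #ccc.
const fooWidgetStyles = `
  :host {
    border: 1px solid var(--foo-border-color, var(--widget-border-color, #ccc));
  }
`;
```

Because CSS variables inherit down the DOM (and cross shadow boundaries), consumers can style widgets from outside without subclassing anything.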

So, in essence, b8r v2 will be smaller and simpler — because it’s going to be b8r minus require and components. You won’t need components, because you’ll have web-components. You won’t need require because you’ll have import. I also plan one more significant change in b8r v2 — it will be a proper node package, so you can manage it with npm and yarn and so forth.

<b8r-bindery>

One idea that I’m toying with is to make b8r itself “just a component”. Basically, you’d get a b8r component that you simply stick anywhere in the DOM and you’d get b8r’s core functionality.

In essence the bindery’s value — presumably an object — becomes accessible (via paths) to all descendants, and the component handles all events the usual way.

I’m also toying with the idea of supporting Redux (rather than b8r’s finer-grained two-way bindings). There’s probably not much to do here — just get Redux to populate a bindery and then instead of the tedious passing of objects from parent-to-child-to-grandchild that characterizes React-Redux coding you can simply bind to paths and get on with your life.
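As a sketch of the idea — every name below is hypothetical, and the store is a hand-rolled stand-in for Redux so the example is self-contained — the bridge really is just a subscription that mirrors store state into a path-addressable registry:

```javascript
// A Redux-like store reduced to essentials (getState/dispatch/subscribe).
const createStore = (reducer, state) => {
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((fn) => fn());
    },
    subscribe: (fn) => listeners.push(fn),
  };
};

// Resolve a b8r-style path: getByPath(registry, 'app.user.name')
const getByPath = (obj, path) =>
  path.split('.').reduce((o, key) => (o == null ? o : o[key]), obj);

const registry = {}; // stand-in for the bindery's value
const reducer = (state, action) =>
  action.type === 'rename' ? {...state, user: {name: action.name}} : state;

const store = createStore(reducer, {user: {name: 'anonymous'}});

// Instead of threading props parent-to-child-to-grandchild, mirror the
// store into the registry whenever it changes; bindings then resolve
// paths like 'app.user.name' directly.
store.subscribe(() => { registry.app = store.getState(); });
store.dispatch({type: 'rename', name: 'pat'});

getByPath(registry, 'app.user.name'); // 'pat'
```

The one-way flow Redux wants is preserved — dispatches still go through the reducer — but views bind to paths rather than to a prop chain.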

Summing Up

After two years, I’m still pretty happy with b8r. Over the next year or so I hope to make it more interoperable (“just another package”), and to migrate it from (its) require and HTML components to import and web-components. Presumably, we’ll have import working in node (and hence Electron and nwjs) by then.

Happy new year!

Epic Fail

Atul Gawande describes the Epic software system being rolled out in America’s hospitals.

It reads like a potpourri of everything bad about enterprise IT: standardizing on endpoints instead of interoperability, Big Bang changes instead of incremental improvements, and failure to adhere to the simplest principles of usability.

The sad thing is that the horrors catalogued in this article are all solved problems. Unfortunately, it seems that in the process of “professionalizing” usability, the discipline has lost its way.

Reading through the article, you can just tally up the violations of my proposed Usability Heuristics, and there are very few issues described in the article that would not be eliminated by applying one of them.

The others would fall to simple principles like using battle-tested standards (ISO timestamps anyone?) and picking the right level of database normalization (it should be difficult or impossible to enter different variations of the same problem in “problem lists”, and easier to elaborate on existing problems).

There was a column of thirteen tabs on the left side of my screen, crowded with nearly identical terms: “chart review,” “results review,” “review flowsheet.”

I’m sure the tabs LOOKED nice, though. (Hint: maximize generality, minimize steps, progressive disclosure, viability.)

“Ordering a mammogram used to be one click,” she said. “Now I spend three extra clicks to put in a diagnosis. When I do a Pap smear, I have eleven clicks. It’s ‘Oh, who did it?’ Why not, by default, think that I did it?” She was almost shouting now. “I’m the one putting the order in. Why is it asking me what date, if the patient is in the office today? When do you think this actually happened? It is incredible!”

Sensible defaults can be helpful?! Who knew? (Hint: sensible defaults, minimize steps.)

This is probably my favorite (even though it’s not usability-related):

Last fall, the night before daylight-saving time ended, an all-user e-mail alert went out. The system did not have a way to record information when the hour from 1 a.m. to 1:59 a.m. repeated in the night. This was, for the system, a surprise event.

Face, meet palm.

Date-and-time is a fundamental issue with all software, and the layers of stupidity that must have signed off on a system that couldn’t cope with Daylight Saving Time boggle my mind.
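The underlying bug is easy to state: local wall-clock time is ambiguous during the fall-back hour, and only a timestamp that carries its UTC offset (as ISO 8601 allows) can tell the two occurrences of 1:30 a.m. apart. A minimal illustration, using the 2018 US transition date:

```javascript
// When US daylight saving time ended on 2018-11-04, the hour from
// 1:00 to 1:59 a.m. happened twice. Two distinct instants, one wall-clock
// time — distinguishable only because the ISO strings carry offsets:
const firstOneThirty = new Date('2018-11-04T01:30:00-04:00');  // EDT
const secondOneThirty = new Date('2018-11-04T01:30:00-05:00'); // EST
secondOneThirty - firstOneThirty; // 3600000 ms — a full hour apart

// Store only the local "2018-11-04 01:30:00" and that distinction is
// unrecoverable — which appears to be the trap the system fell into.
```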

A former colleague of mine linked to US Web Design System as if this were some kind of intrinsically Good Thing. Hilariously, the site itself does not appear to have been designed for accessibility or even decent semantic markup, and it blocks robots.

Even if the site itself were perfect, the bigger problems are that (a) there are plenty of similar open source projects — they could have just blessed one; (b) it’s a cosmetic standard; and (c) there’s pretty much no emphasis on the conceptual side of usability. So, at best, it helps make government websites look nice and consistent.

(To be continued…)

Lost Productivity

A couple of months ago I listened to a Planet Money podcast discussing the mysterious slowdown in US productivity growth (the link is to one of several podcasts on this topic). Like most NPR content, the story got recycled through a number of different programs, such as Morning Edition.

The upshot was that productivity — which is essentially GDP/work — has stalled since the — I dunno — 90s, and it doesn’t make sense given the apparent revolutions in technology — faster computers, better networks, etc.

Anyway, the conclusion — and I’m basing this on memory because I can’t find the exact transcript — is that there’s a mysterious hole in productivity growth which, if it were filled, would add up to several trillion dollars worth of lost value added.

Well, I think it’s there to be found, because Free Open Source Software on its own adds up to several trillion dollars worth of stuff that hasn’t been measured by GDP.

Consider the dominant tech platforms of our time — Android and iOS. Both are fundamentally built on Open Source. If it weren’t for Open Source, iOS at minimum would have been significantly altered (let’s assume NeXTStep would have had a fully proprietary, but still fundamentally POSIX base) and Android could not have existed at all. Whatever was in their place would have had to pay trillions in licenses.

On a micro level: I’ve worked through a series of tech booms from 1990 to the present. In the 90s, to do my job, my employer or I had to spend about $2000-5000 on software licenses every year just to keep up-to-date with Photoshop, Director, Illustrator, Acrobat, Strata 3d, 3dsmax, Form*Z, and so on and so forth. By the mid-aughts it was maybe $1000 per year and the software was So. Much. Better. Today, it’s probably down to less than $500.

And, in this same period, the kind of work I do has come to be done by far more people.

That’s just software. This phenomenon is also affecting hardware.

The big problem with this “lost productivity” is that the benefits are chiefly being reaped by billionaires.