A Quick Look at Panic’s Nova

I recently became aware that Nova (Panic’s next gen replacement for Coda) has been out since September, and decided to give it a quick spin.

Now, I’m currently working on an internal IDE that runs in Chrome, so this is of special interest to me. In particular, I was wondering if Panic had come up with new magic sauce to advance the “state of the art” in lightweight platform-native IDEs, the way they did when they first shipped Coda.

I remember my first complaint about Coda was that its CSS editing tools were not up to par with CSSEdit, which Cabel (who has been known to reply to me directly) said was something they’d hoped to license. (Turns out CSSEdit’s developer was working on Espresso, an ambitious and interesting but perpetually not-quite-there-yet Coda rival.) Anyway, Coda’s CSS tools eventually got decent, but never caught up with CSSEdit.

Luckily, I became so ridiculously fluent in CSS that I’ve not really needed anything beyond a basic text editor in years. (At the time, I regarded CSS with disdain and hoped to learn as little about it as possible.)

Anyway, some quick observations:

  1. It’s beautiful. Yay.
  2. It seems very much like a native vscode. Yay.
  3. It’s not interoperable with vscode extensions (and that’s likely very hard to do) and I haven’t looked into how hard it would be to “port” an extension from vscode. Boo.
  4. It doesn’t appear to leverage existing quasi-standards, the way Sublime leveraged TextMate stuff, and vscode leverages… stuff. Boo.
  5. It doesn’t handle terminals as gracefully as vscode. vscode’s terminal by default appears as needed, at the bottom of the main window, and tucks itself into a tiny space with a click. Boo.
  6. Running node package scripts is clunky (it requires a third-party extension that isn’t nearly as nice as what vscode has gotten me used to). I don’t want Panic to over-focus on node users, because Coda’s over-focus on PHP users is one reason I stopped using it. Boo.
  7. Nova supports themes. Not a surprise. I couldn’t quickly find documentation on how to create a new theme, but I imagine it won’t be terribly difficult and it will be feasible to convert themes from other popular things to Nova. Meh.
  8. Nova lets you theme its overall UI, code editors, and terminals separately! This is awesome, since it allows you to easily differentiate panes at a glance, rather than everything looking pretty much identical. Yay.
  9. I was disappointed, when trying to open a Remote Project, that it didn’t automatically (or with a click) import server settings from Transmit (of which I have been a user since 1.0). I had just made a quick fix to one of my demos using Transmit, which let me open the server, open the file (on the server) directly with BBEdit, make the fix, hit save the way I would for a desktop file, and reload the site to check the fix worked — that’s the workflow it needs to at least match. Boo.

Nova renders color swatches on the line-count column as opposed to the way vscode does it (inline in the text, somewhat disconcertingly). In both cases you can click to get a color-picker, but Nova’s is much nicer (it has spaces for you to keep common colors, for example, but seriously — use css variables!).

This is nice, but I don’t see any assistance for things like setting fonts, or borders. I have a license for Espresso somewhere or other, but I’m not going to reinstall it just to take a screenshot; here’s a capture from their website (yes, it lives!):

The user’s cursor is in a rule, and in addition to the text, the values are exposed as the UI shown above. This is basically a tiny part of what CSSEdit was doing, over ten years ago — my receipt for CSSEdit 2.5 is dated 2007. CSSEdit also allowed you to open a web page and then edit its underlying styles and then save those changes directly to the sources, kind of like an automatic round-trip version of the workflow many of us use in Chrome/Safari/Firefox.

The first extension I tried to use was from standardjs, which I love. I didn’t notice this until later:

This is not good. It reminds me of Lotus Notes telling me it needs me to do five specific things to do what I obviously want to do. Automatically loading dependencies is kind of a solved problem these days… Boo.

It’s hard to compete with a free tool that’s backed by Microsoft. vscode has become a lot of people’s preferred editor, especially web developers’, and it’s easy to see why. Because any web developer pretty much has to know all the tech that underlies vscode, it has a particularly rich extension library.

JetBrains’ suite of Java-based tools has also earned a huge following. (Sublime Text—once seemingly everyone’s favorite editor—hasn’t received an update since 2019. The team seems to have switched their focus to Sublime Merge.)

Overall, my initial reaction to Nova is slightly more negative than positive. I’m sorry to say this but I’m unconvinced.

b8r 0.5.0

b8r 0.5.0 is available from npm and github. See the github-pages demo here. It’s the single most radical change to b8r since it was first released, although unlike the switch from require() to ES6 Modules it’s non-breaking, so I’ve updated the documentation accordingly.

How it started:

b8r.register('myApp', {
  foo: 17,
  bar: {
    baz: 'lurman'
  },
  list: []
})

const { baz } = b8r.get('myApp.bar')
b8r.set('myApp.bar.baz', 'lurman')
b8r.pushByPath('myApp.list', { id: 17, name: 'Romeo + Juliet' })
const { id } = b8r.get('myApp.list[id=17]') // one of b8r's coolest features

How it’s going:

b8r.reg.myApp = {
  foo: 17,
  bar: {
    baz: 'lurman'
  },
  list: []
}

const {baz} = b8r.reg.myApp.bar
// OMG I've been misspelling his name in all my examples!
b8r.reg.myApp.bar.baz = 'luhrmann'
b8r.reg.myApp.list.push({id: 17, name: 'Romeo + Juliet'})
const {id} = b8r.reg.myApp.list['id=17'] // this still works!

The changes are, basically, syntax sugar for the most common operations with b8r, which means that there’s a huge potential to make code that’s already about as compact and simple as front-end code gets, and make it more compact and more simple. How simple? Here’s the code implementing the simple todo component from the React vs. Bindinator document:

data.list = []
data.text = ''
data.nextItem = 1
data.addItem = () => {
  const {text} = data
  data.list.push({id: data.nextItem, text}) // add the new item to the bound list
  data.nextItem += 1
  data.text = ''
}

The only piece of information you need to know about this is that data is the component’s state, which has been retrieved from the registry; if you change properties inside it, the changes will be detected and the bound views updated.

The same example used to look like this:

  list: [],
  text: '',
  nextItem: 1,
  addItem() {
    const {list, nextItem, text} = get()
    list.push({id: nextItem, text})
    set({
      list,
      nextItem: nextItem + 1,
      text: ''
    })
  }

You need to know that set() is automatically going to set properties inside the component’s state (so that b8r will know they’ve been changed), and get() will retrieve the object. You need to know that if you pull an object out of the registry and mess with it, b8r will not know unless you tell it. So, at some point you’ll need to know that get() and set() also take paths, and how paths work.

How did this happen?

So, yesterday, a friend and former colleague who uses bindinator (b8r) reached out to me and said, in essence, it would be a lot easier to convert existing code to b8r if you could put b8r expressions on the left-hand-side of assignment statements.

My reaction was, in essence, “sure, that’s the holy grail, but I think it’s impossible”.

He replied that he thought that I might be able to use ES6 Proxy—which I immediately looked up…

So, as of today, the main branch of b8r now supports a new, cleaner, more intuitive syntax (the old syntax is still supported, don’t worry!), it’s documented, and there’s decent test coverage.


The basic idea of b8r is that it binds data to “paths” which are analogous to file paths on a logical storage device, except that the paths look like standard javascript (mostly).

E.g. you might bind the value in an input field to app.user.name. This looks like <input data-bind="value=app.user.name">.

This isn’t binding to a variable or a value, but to the path.
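To make “paths” concrete, here’s a minimal sketch of resolving a dotted path against a plain object. This is an illustration only, not b8r’s actual implementation, and getByPath is a name I’ve made up:

```javascript
// resolve a dotted path like 'app.user.name' against a plain object
// (illustration only; not b8r's actual implementation)
function getByPath(root, path) {
  return path.split('.').reduce((obj, key) => obj && obj[key], root)
}

const state = { app: { user: { name: 'tonio' } } }
console.log(getByPath(state, 'app.user.name')) // 'tonio'
```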

Similarly, you can bind an object to a name, e.g.

const obj = {
  user: {
    name: "tonio",
    website: "https://loewald.com"
  }
}
b8r.register('app', obj) // binds obj to the name "app"

If you did both of these things then the input field would now contain “tonio”. But if the user edits the input field, then the value in the bound object changes.

Now, what happens if I write:

obj.user.name = "fredo"?

Well, the value in the object has changed, but the input doesn’t get updated because it was done “behind b8r’s back”. To notify b8r of the update so it can keep bound user interface elements consistent with the application state, you need to do something like:

// simple
b8r.set('app.user.name', 'fredo')

// manual
obj.user.name = 'fredo'
b8r.touch('app.user.name')

This works very well, and is both pretty elegant and doesn’t involve any “spooky magic at a distance”. E.g. because you know you can change things behind b8r’s back, you can use this to make updates more efficient (e.g. you might be streaming data into a large array, and not want to force large updates frequently and at random intervals, so you could update the array, but only tell b8r when you want to redraw the view).

But what you can’t do is something like this:

b8r.get('app.user.name') = 'fredo'

A javascript function cannot return a variable. It can return an object with a computed property, so it would be possible to enable syntax like:

b8r.get('app.user.name').value = 'fredo'

Which is (a) clumsy, and (b) makes the common case less elegant. Or maybe we could enable:

b8r.set('app.user.name').value = 'fredo'

Which returns the object if no new value is passed (or whether or not a new value is passed), but this doesn’t work for:

b8r.set('app.user').name.value = 'fredo'


And:

b8r.set('app.user').name = 'fredo'

…performs the change, but behind b8r’s back.
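For what it’s worth, the computed-property idea above can be sketched with a hypothetical ref() helper (this is not a b8r API, just an illustration of the limitation):

```javascript
// hypothetical ref() helper (not a b8r API): a function can't return a
// variable, but it can return an object whose 'value' property is computed
function ref(obj, key) {
  return {
    get value() { return obj[key] },
    set value(v) {
      obj[key] = v
      // a real implementation would notify bound views here
    }
  }
}

const state = { user: { name: 'tonio' } }
ref(state.user, 'name').value = 'fredo'
console.log(state.user.name) // 'fredo'
```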

ES6 Proxy to the Rescue

So, in a nutshell, ES6 Proxy gives the Javascript programmer direct access to the kind of mechanism that makes object prototypes (i.e. “class instances”) work, by creating a “proxy” for another object that intercepts property lookups. This is similar to—but far less hacky than—the mechanism in PHP that lets you write a special function for a class that gets called whenever an unrecognized property of an instance is referenced.

In short, you use Proxy to create a proxy for an object (e.g. foo) that can intercept any property reference, e.g. proxy.bar, and decide what to do, knowing that the user is trying to get or set a property named ‘bar’ on the original object.
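Here’s a minimal, self-contained demo of those traps (plain JS, nothing b8r-specific):

```javascript
// minimal demo of Proxy get/set traps
const foo = { bar: 17 }
const accesses = []

const proxy = new Proxy(foo, {
  get(target, prop) {
    accesses.push(`get ${String(prop)}`)
    return target[prop]
  },
  set(target, prop, value) {
    accesses.push(`set ${String(prop)}`)
    target[prop] = value
    return true // a set trap must return true to signal success
  }
})

proxy.bar      // recorded as 'get bar'
proxy.bar = 42 // recorded as 'set bar'; foo.bar is now 42
console.log(accesses) // ['get bar', 'set bar']
```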

So, now, b8r.reg is a proxy for the b8r registry, which is the object containing your application’s state.

Our original example can now look like this:

b8r.reg.app = {
  user: {
    name: "tonio",
    website: "https://loewald.com"
  }
}

And we can change a property via:

b8r.reg.app.user.name = 'fredo'

And common convenience constructs like this work:

const {user} = b8r.reg.app
user.name = 'fredo'
user.website = 'https://deadmobster.org'

And b8r is notified when the changes are made. (You can still make changes behind b8r’s back the old way, if you want to avoid updates!)

Not bad for a Saturday afternoon!


Wouldn’t it be nice if arrays worked exactly as you expected? So instead of this pattern in b8r code:

const {list} = b8r.get('path.to.list')
// do stuff to list like add items, remove them, filter them
b8r.touch('path.to.list') // hey, b8r, we messed with the list!

We could do stuff like this?

b8r.reg.path.to.list.splice(5, 10) // chop some items out

I.e. wouldn’t it be nice if you could just perform standard operations you can do with any array directly from the b8r registry, and b8r knew it happened? Why yes, yes it would.

The new reg proxy exposes array methods that tend to mutate the array with a simple wrapper that touches the path of the array. E.g. pop, splice, and forEach.
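That wrapper might look something like the following sketch. This is plain JS, not b8r’s actual code; observedArray and the touch callback are hypothetical names:

```javascript
// hedged sketch: wrap mutating array methods so each call also
// "touches" the array's path (not b8r's actual implementation)
const MUTATORS = ['push', 'pop', 'shift', 'unshift', 'splice', 'sort', 'reverse']

function observedArray(arr, path, touch) {
  return new Proxy(arr, {
    get(target, prop) {
      const val = target[prop]
      if (typeof val === 'function' && MUTATORS.includes(prop)) {
        // return a wrapped mutator that notifies after mutating
        return (...args) => {
          const result = val.apply(target, args)
          touch(path) // hey, b8r, we messed with the list!
          return result
        }
      }
      return val
    }
  })
}

const touched = []
const base = [1, 2, 3]
const list = observedArray(base, 'path.to.list', path => touched.push(path))
list.push(4)
list.splice(0, 1) // chop an item out
console.log(base)    // [2, 3, 4]
console.log(touched) // ['path.to.list', 'path.to.list']
```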

And, what about the coolest feature of b8r data-paths, id-paths in array references?

b8r.set('path.to.list[id=123].name', 'foobar')


…becomes:

b8r.reg.path.to.list['id=123'].name = 'foobar'
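Under the hood, a proxy’s get trap can interpret a property key like 'id=123' as an array lookup. A hedged sketch (not b8r’s actual code):

```javascript
// sketch of id-path lookup: a get trap treating 'id=123' as a find()
const list = [
  { id: 122, name: 'foo' },
  { id: 123, name: 'bar' }
]

const listProxy = new Proxy(list, {
  get(target, prop) {
    if (typeof prop === 'string' && prop.includes('=')) {
      const [key, value] = prop.split('=')
      // property keys arrive as strings, so compare stringified values
      return target.find(item => `${item[key]}` === value)
    }
    return target[prop]
  }
})

listProxy['id=123'].name = 'foobar'
console.log(list[1].name) // 'foobar'
```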

Spooky Action at a Distance?

This all might smell a bit like “spooky magic at a distance”. How is the user expected to maintain a mental model of what’s going on behind the scenes for when things go wrong?

Well, b8r.reg returns a proxy of the registry, which exposes each of its object properties as proxies, and so on. Each proxy knows its own path within the registry. Each object registered is just a property of the registry.

If you set the property of a proxied object, it calls b8r.set() on the path.

That’s it. That’s all the magic.
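A toy version of that mechanism, in plain JS (the set() and proxyFor() names here are mine, not b8r’s; b8r’s real code handles many more cases):

```javascript
// toy sketch of a path-tracking registry proxy
const registry = {}
const updates = [] // stand-in for b8r triggering bound-view updates

function set(path, value) {
  updates.push(path)
  const keys = path.split('.')
  const last = keys.pop()
  let obj = registry
  for (const key of keys) obj = obj[key]
  obj[last] = value
}

function proxyFor(obj, path) {
  return new Proxy(obj, {
    get(target, prop) {
      const val = target[prop]
      const childPath = path ? `${path}.${String(prop)}` : String(prop)
      // nested objects come back as proxies that know their own paths
      return val && typeof val === 'object' ? proxyFor(val, childPath) : val
    },
    set(target, prop, value) {
      // every write is routed through set(), so no update is missed
      set(path ? `${path}.${String(prop)}` : String(prop), value)
      return true
    }
  })
}

const reg = proxyFor(registry, '')
reg.app = { user: { name: 'tonio' } }
reg.app.user.name = 'fredo'
console.log(updates) // ['app', 'app.user.name']
console.log(registry.app.user.name) // 'fredo'
```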

Now, compare this to these two articles discussing how Angular’s ngZone and onPush change-detection strategy work, and how React hooks work.

More Adventures in 3D Printing

The D&D boardgames come with plastic miniatures. They’re not especially nice, but neither is anything else out there.

So, I started playing the D&D boardgames with my kids. The games are pretty good (I initially bought them as a way of getting a buttload of materials for playing regular D&D, but that went nowhere).

But, my game designer brain won’t shut down… I had already toyed with the idea of a card-based RPG system, but couldn’t quite get it working. My (minor) frustrations with the D&D games gave me an idea. I’ll get to that in another post.

Anyway, in a nutshell, D&D boardgames with miniatures led me to think about custom miniatures, which led me (back) to HeroForge, which led me to think about 3D printing, and thus I discovered that SLA (stereolithography) printers have suddenly gotten a lot cheaper and by all signs are actually good.

So, I bought a Voxelab (or maybe Voxellab? — they can’t make up their minds) Polaris resin printer for about $160 plus tax, and a couple of 500mL bottles of resin for about $15 apiece.

This all arrived on New Year’s Eve. Woohoo! Champagne and 3D printing!

The printer didn’t come with any resin, and the instructions make reference to items not included: e.g. a container to dip prints in denatured alcohol, denatured alcohol, and a curing box. I made do with old jars, ordinary 70% alcohol (more on this later; a costly mistake I think), and paper napkins, and the initial results were amazing.

Setup is amazingly easy. You basically:

  1. Place it somewhere level(ish)
  2. Plug it in
  3. Pull out resin tray
  4. Place a sheet of paper over the spot the resin tray occupied
  5. Loosen two screws on the print head, allowing it to move
  6. Run the print head down until the paper is flattened
  7. Set this as ZERO
  8. Raise the print head
  9. Replace the tray
  10. Pour in some resin (word of warning: it’s really hard to tell how much transparent resin you’ve poured in).

This would have been quicker if the instructions weren’t in broken English and were decently organized and/or had an index. Even so, not bad at all.

The software is provided on a USB stick, which is big enough to serve as a file transfer device (this printer doesn’t do WiFi), so you save models (in *.fdb format) using the included Chitubox (free) software.

Chitubox is a semi-nice program. The buttons are all non-standard, tiny, nearly indistinguishable, and have no tooltips. So that’s nice.

My single biggest frustration was figuring out how to import the profile (on the USB stick) into Chitubox (once installed). Turns out there’s a video that more-or-less explains the process on the stick.

To print a 3D model (or several):

  1. Export models as OBJ files. (Ideally a model should be “water-tight” and clean, but it’s very forgiving.)
  2. Import one or more obj files into Chitubox.
  3. Scale, rotate, position the models to taste.
  4. Go to the supports tab and click Add All.
  5. Go back to the first tab and click Slice. This may take a while.
  6. Drag the slice slider (not to be confused with the view slider) up-and-down to look at the cross sections and check they seem correct. (The view slider shows you a quickly calculated cross-section of your model, the slice slider shows you the actual rendered cross-section that will be used to print a layer.)
  7. Save the model (as FDG) to a memory stick.
  8. Put the stick in the printer.
  9. On the front panel pick your model (a correctly saved file should display your model; ignore weird Mac hidden files and such) and press Play.

It’s definitely worth scanning through the layers after slicing to sanity check before spending hours printing something that makes no sense. One annoying mistake I’ve made is making hollows with no place for the fluid to drain (or rather making them in the wrong place).


Here’s the printer in action. The curved thing is the print platform—a solid hunk of aluminum—and it is bathing in transparent resin (you can see bubbles in the resin towards the top-left). After each layer is exposed, the head is pulled up 5mm (this is adjustable) which draws more resin under the print, and then lowered 0.5mm above its previous exposure position, and so on.

Printing is slow (but it’s dictated by the height of the print, not the volume, so you can print a bunch of miniatures in parallel just as quickly as you can print one). It’s printing your model 0.5mm (?) at a time (with antialiased voxels!—or maybe just pixels) and, for each layer, the model is lowered to 0.5mm above the glass plate (at the bottom of the resin chamber), exposed, then raised to allow fluid to seep in. Lather, rinse, repeat.

Each layer is being printed by shining a UV lamp through an LCD displaying the bitmap cross-section for that layer (as seen in step 6, above). That’s it! (The printer displays the layer being printed on its color touch screen.) Mechanically, this is much simpler than an FDM printer, and the quality of the output is dictated by the resolution of the LCD screen and the quality of the resin.

The printer is quiet compared to FDM (still, a pretty loud fan) and it doesn’t cause noticeable vibration (it’s only moving the head up and down with a screw drive).

When it’s done, it beeps and raises the print clear of the resin basin.

  1. Rinse it in alcohol
  2. Pull it out of the alcohol and let the alcohol drain
  3. Leave it to cure somewhere well-ventilated

So, how good are the results?

Fresh out of the printer, transparent prints look amazing. This is all covered in liquid resin—goop Gwyneth Paltrow might be proud of—probably not. Below the toroid you can see the support platform (from which the model hangs during printing).

It’s hard for me not to be hyperbolic and create unreasonable expectations, given my previous frustrations with FDM (fusion deposit modeling, or “robot glue guns” as I call them), but here goes.

This is the first inexpensive 3d printer that I have found not to be disappointing.

This is the first 3d printer I have used that produced a good model the first time I used it. (And, no, it wasn’t the demo model that came with the USB stick.)

This is my first print, 36h after it emerged from the resin. It is perfect. (The support platform looks nicer than anything I’ve seen out of an FDM printer.) If you pixel peep you can see “sedimentary” layers in the print surface, but this is not reflected by the surface feeling rough. The only blemishes are tiny nubs where the supports were broken off.

I compare this to the first time I saw printed output from a laser printer, or the first time I used a good color flatbed scanner, or the first time I captured video from a broadcast quality camera with a broadcast quality capture card. In terms of output quality, this shit has arrived.

In a nutshell, the output is smooth (wet, transparent output almost looks like glass, but it becomes cloudy as it cures) and the prints feel rigid but a little flexible. The way I describe it is “somewhere between plastic army men and lego”. Prints feel about as durable as a nice plastic manufactured item of the same form (perhaps slightly nicer than the plastic D&D miniatures that sent me off on this tangent).

I should add that the printer estimates the cost of each print based on the volume of resin to be used and the cost of the resin. So far, my models have “cost” less than $0.25. But they’ve all been quite small. (But wait to find out about other consumable costs…)

Flies in the ointment

New printing method, new failure modes. This print of my omnidude model mid-run looks great at first glance, but turns out to have a defect—partially visible here (most of the right leg is missing, as is the back of the base). I believe the cause is deposits on the FEP film blocking uniform exposure, or possibly a portion of the print base failing to adhere to the head.

Let me begin by saying that most of this is a result of user error. I couldn’t get pure isopropyl alcohol on New Year’s Eve at 10pm and I wanted to play with my toy!

Let’s start with nasty chemicals—the resin smells bad, and is viscous, sticky, slimy, hard to clean off, and probably bad for you. One bottle of resin arrived having leaked a bit inside its container. The prints—perhaps because I’m not using pure alcohol and/or don’t have a proper “curing box”—are slightly tacky for hours after printing. Nevertheless, my first print has lost almost all its tackiness 36h or so since printing.

Recovering unused resin using the provided paper filter is horrible, and cleaning the tray is painful. But then there are some really gnarly issues…

First, my second print had some odd, minor flaws in it, which I ascribed to my modeling, but turned out to be much nastier.

Next, we had a power failure while a print was in progress and the printer lost track of where it was. I tried to recalibrate by just moving the print head down and fiddling (rather than draining all the resin, cleaning everything, and recalibrating properly), but it didn’t work.

My weasel character was one of the two figures in progress when the power failed. (Shown here atop a quarter and an old two pound coin for scale—this thing is tiny.) It shed a few layers from the back of its tail, and there’s that small blemish on the front of his abdomen (which I don’t see with my eyes). I added some very fine detail to the eyes (an indentation on his right (cyborg) eye, and a pupil on his left eye). The latter isn’t visible.

This profile shows the flawed tail, several layers of which sloughed off at a touch.

My first print after the power outage and my dodgy recalibration was a disaster. The model fell from its supports and became a nasty blob, lightly stuck to the FEP film at the bottom of the tray.

After draining and filtering as much resin as I could (and spilling a fair bit and cleaning that up—ugh!) I cleaned the tray and found little bits of solid resin stuck to the film all over. It looked to me like one stuck deposit had been the cause of missing volume in two of my prints.

One of the stubborn spots left over after my initial attempts at cleaning the resin tray screen.

“Fixing” this involved alcohol and careful scraping with the—included—plastic scraping tool, and then wiping away the alcohol and detritus and allowing it to air dry. The results looked pretty good, but on inspection the FEP film is quite scratched up now.

So, with this in mind, I decided to track down a decent amount of pure isopropyl alcohol, replacement FEP film, and more and—I hope—better filter cones, and then I will try it again.

(These links are not an endorsement. I provide them because I found identifying suitable items on Amazon to be quite tricky—and getting them from China was going to be slow and quite expensive.)

To be continued…

If commas were washers…

create-react-app is an abstraction layer over creating a “frontend build pipeline”. Is it leaky? Yes.

Today, if you’re working at a big tech company, the chances are that working on a web application is something like this:

  1. You’re building an app that comprises a list view and a detail view with some editing, and for bonus points the list view has to support reordering, filtering, and sorting. This is considered to be quite an undertaking.
  2. There is currently an app that does 90% or more of what your app is intended to do, but it’s not quite right, and because it was written in Angular/RxJs or React/Redux it is now considered to be unmaintainable or broken beyond repair.
  3. The services for the existing app deliver 100% of the back-end data and functionality necessary for your new app to work.
  4. (There’s also probably a mobile version of the app that likely consumes its own service layer, and somehow has been maintained for five years despite lacking the benefit of a productivity-enhancing framework.)
  5. Because the service layer assumes a specific client-side state-management model (i.e. the reason the existing app is unmaintainable), and because your security model is based on uniquely tying unique services to unique front-ends and marshaling other services (that each have their own security model) on the front-end server, there is absolutely no way your new app can consume the old app’s services. Also, they’re completely undocumented. (Or, “the code is the documentation”.)
  6. Your new app will comprise a front-end-server that dynamically builds a web-page (server-side) on demand and then sends it to the end-user. It will then consume services provided by the front-end-server OR a parallel server.
  7. When you start work each day you merge with master and then fire off a very complicated command (probably by using up-arrow or history because it’s impossible to remember) to build your server from soup-to-nuts so that, god-willing and assuming nothing in master has broken your build, in five to fifteen minutes you can reload the localhost page in your browser and see your app working.
  8. You then make some change to your code and, if you’re lucky you can see it running 15s later. If you’ve made some trivial error the linters didn’t catch then you get 500 lines of crap telling you that everything is broken. If you made non-trivial dependency changes, you need to rebuild your server.

Imagine if we designed cars this way.

Leaky Abstractions

Wikipedia, which is usually good on CS topics, is utterly wrong on the topic of “leaky abstractions”. It wasn’t popularized in 2002 by Joel Spolsky (first, it’s never been “popularized”, and second, it has been in use among CS types since at least the 80s, so I’m going to hazard that it comes from Dijkstra). And the problem it highlights isn’t…

the reliance of the software developer on an abstraction’s infallibility

In Joel’s article, his example is TCP, which he says is supposed to reliably deliver packets of information from one place to another. But sometimes it fails, and that’s the leak in the abstraction. And software that assumes it will always work is a problem.

I don’t particularly disagree with any of this; I have a problem with the thing that’s defined as the “leaky abstraction”. “TCP is a reliable messaging system” is the leaky abstraction, not the programmer assuming it’s true. But it’s also a terrible abstraction that no-one actually uses.

I’d suggest that a leaky abstraction is actually one that requires users to poke through it into the underlying implementation to actually use it effectively. E.g. if you were to write a TCP library that didn’t expose failures, you’d need to bypass it and get at the actual errors to handle TCP in actual use.

In other words, an abstraction is “leaky” when you must violate it to use it effectively.

An example of this is “the user interface is a pure function of its data model” which turns out to be impractical, necessitating the addition of local “state” which in turn is recognized as a violation of the original abstraction and replaced with “pure components” which in turn are just as impractical as before and thus violated by “use hooks to hide state anywhere”.

Assuming abstractions are perfect can be a problem, but usually it’s actually the opposite of the real problem. It’s like the old chestnut “when you assume you make an ass out of u and me”. It’s utterly wrong — if you don’t assume you’ll never get anything done, although sometimes you’ll assume incorrectly.

The common and pernicious problem is software relying on or assuming flaws in the abstraction layer. A simple (and common) example of this is software developers discovering a private, undocumented, or deprecated call in an API and using it because it’s easy, or because they assume that, say, because it’s been around for five years it will always be there, or that because they work in the same company as the people who created the API they can rely on inside knowledge or contacts to mitigate any issues that arise.

The definition in the Wikipedia article (I’ve not read Joel Spolsky’s article, but I expect he actually knows what he’s talking about; he usually does) is something like “assuming arithmetic works”, “assuming the compiler works”, “assuming the API call works as documented”. These are, literally, what you do as a software engineer. Sometimes you might double-check something worked as expected, usually because it’s known to fail in certain cases.

  • If an API call is not documented as failing in certain cases but it does, that’s a bug.
  • Checking for unexpected failures and recovering from them is a workaround for a bug, not a leaky abstraction.
  • Trying to identify causes of the bug and preventing the call from ever firing if the inputs seem likely to cause the failure is a leaky abstraction. In this case this code will fail even if the underlying bug is fixed. This is code that relies on the flaw it is trying to avoid to make sense.

According to Wikipedia, Joel Spolsky argues that all non-trivial abstractions are leaky. If you consider the simplest kind of abstraction, e.g. addition, even it is leaky if you expect addition in your code to match addition in the “real world”. The “real world” doesn’t have limited precision, overflows, or underflows. And indeed, addition is defined for things like imaginary numbers and allows you to add integers to floats.

The idea that all non-trivial abstractions are leaky is something I’d file under “true but not useful”; it’s like “all non-trivial programs contain at least one bug”. What are we supposed to do? Avoid abstractions? Never write non-trivial programs? I suggest that abstractions are only useful if they’re non-trivial, and since they will probably be leaky, the goal is to make them as leak-free as possible, identify and document the leaks you know to exist, and mitigate any you stumble across.

I don’t know if some warped version of Spolsky’s idea infected the software world and led it to a situation in which, to think in terms of designing a car in the real world, to change the color of the headrest and see if it looks nice, we need to rebuild the car’s engine from ore and raw rubber using the latest design iteration of the assembly plant and the latest formulation of rubber, but it seems like it has.

In the case of a car, in the real world you build the thing largely out of interchangeable modules and there are expectations of how they fit together. E.g. the engine may be assumed to fit in a certain space, have certain standard hookups, and have standardized anchor points that will always be in the same spot. So, you can change the headrest confident that any new, improved engine that adheres to these rules won’t make your headrest color choices suddenly wrong.

If, on the other hand, the current engine you were working with happens to have some extra anchor point and your team has chosen to rely on that anchor point, now you have a leaky abstraction! (It still doesn’t affect the headrest.)

Abstractions have a cost

In software, abstractions “wrap” functionality in an “interface”. E.g. you can write an add() function that adds two numbers, i.e. wraps “+”. This abstracts “+”. You could check to see if the inputs can be added before trying to add them and um do something if you thought they couldn’t. Or you could check the result was likely to be correct by, say, subtracting the two values from zero and changing the sign to see if it generated the same result.
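Taken literally, the over-engineered add() described above might look like this deliberately silly sketch:

```javascript
// the over-engineered add() described above, sketched literally
function add(a, b) {
  if (typeof a !== 'number' || typeof b !== 'number') {
    throw new TypeError('add() expects two numbers')
  }
  const result = a + b
  // the sanity check described above: subtract both values from zero
  // and flip the sign, then compare against the result
  if (-(0 - a - b) !== result) {
    throw new Error('addition produced an inconsistent result')
  }
  return result
}

console.log(add(2, 3)) // 5
```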

For the vast majority of cases, doing anything like this and, say, requiring everyone in your company to use add() instead of “+” would be incredibly stupid. The cost is significant:

  • your add may have new bugs, and given how much less testing it has gotten than “+” the odds are pretty high
  • your add consumes more memory and executes more code and will slow down every piece of code it runs in
  • code using your add may not benefit from improvements to the underlying “+”
  • time will be spent in code reviews telling new engineers to use add() instead of “+” and having back and forth chats and so on.
  • some of those new engineers will start compiling a list of reasons to leave your company based on the above.
  • all your programmers know how to use “+” and are comfortable with it
  • knowing how “+” works is a valuable and useful thing anywhere you work in future; knowing how add() works is a hilarious war story you’ll tell future colleagues over drinks

I’ve used create-react-app and now I’m ready to write “hello, world”. I now have 1018 dependencies, but in exchange my code runs slower and is more complex and has to be written in a different language…

So, an abstraction needs to justify itself in terms of what it simplifies or it shouldn’t exist. Ways in which an abstraction becomes a net benefit include:

  • employing non-obvious best practices in executing a desired result. Some programmers may not know about a common failure mode or inefficiency in “+”. E.g. add(a, a) might return a bit-shifted to the left (i.e. double a) rather than perform the addition, and this might turn out to be a net performance win. Even if that were true today, the compiler may later improve its implementation of “+” to use the same trick, and your code will be left behind. (A real-world example is transpilers converting inefficiently implemented novel iterators, such as javascript’s for (a of b) {}, into what is currently faster code using older iterators. Chances are the people who work on the javascript runtime will fix the performance issue, and the “faster” transpiled code will end up bigger and slower.)
  • insulating your code from capricious behavior in dependencies. Maybe the people who write your compiler are known to badly break “+” from time to time, so you implement it by subtracting from zero and flipping the sign, for safety. Usually this is a ridiculous argument; e.g. one of the funnier defenses I’ve heard of React was “well, what if browsers went away?” Sure, but what are the odds React outlives browsers?
  • insulating user code from breaking changes. E.g. if you’re planning on implementing betterAdd() but it can break in places where add() is used, you can get betterAdd() working, then deprecate add() and eventually switch to betterAdd() or, when all the incompatibilities are ironed out, replace add() with betterAdd().
  • saving the user a lot of common busywork that you want to keep DRY (e.g. to avoid common pitfalls, reduce errors, improve readability, provide a single point to make fixes or improvements, etc.). Almost any function anyone writes is an example of this.
  • eliminating the need for special domain knowledge, so that non-specialists can do something that would otherwise require deep expertise, e.g. the way threejs lets people with no graphics background display 3D scenes, or the way statistics libraries let programmers who never learned stats calculate a standard deviation (and save those who did the trouble of looking up the formula and implementing it from scratch).
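The standard-deviation case wraps both busywork and domain knowledge in one function. A sketch (population standard deviation, assuming a non-empty numeric array):

```javascript
// population standard deviation: sqrt of the mean of squared deviations
function standardDeviation(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance);
}
```

Callers never need to know the formula; if a numerically better algorithm comes along, it gets fixed in one place.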

If your abstraction’s benefits don’t outweigh its costs, then it shouldn’t exist. Furthermore, if your abstraction’s benefit later disappears (e.g. the underlying system improves its behavior to the point where it’s just as easy to use it as your abstraction) then it should be easy to trivialize, deprecate, and dispose of the abstraction.

jQuery is an example of a library which provided huge advantages which slowly diminished as the ideas in jQuery were absorbed by the DOM API, and jQuery became, over time, a thinner and thinner wrapper over built-in functionality.

E.g. lots of front-end libraries provide abstraction layers over XMLHttpRequest. It’s gnarly and there are lots of edge cases. Even if you don’t account for all the edge cases, just building an abstraction around it affords the opportunity to fix it all later in one place (another possible benefit).

Since 2017 we have had fetch() in all modern browsers. Fetch is simple. It’s based on promises. New code should use fetch (or maybe a library wrapped around fetch). Old abstractions over XMLHttpRequest should either be deprecated and/or rewritten to take advantage of fetch where possible.
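A minimal sketch of such a wrapper (getJSON is a made-up name; the injectable fetchImpl parameter is just there to make it testable):

```javascript
// a thin, promise-based wrapper over fetch
async function getJSON(url, fetchImpl = fetch) {
  const resp = await fetchImpl(url);
  if (!resp.ok) throw new Error(`HTTP ${resp.status} for ${url}`);
  return resp.json();
}
```

If the wrapper ever needs retries, auth headers, or timeouts, there’s one place to add them.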

Abstractions, Used Correctly, Are Good

The problem with abstractions in front-end development isn’t that they’re bad, it’s that they’re in the wrong places, as demonstrated by the fact that making a pretty minor front-end change requires rebuilding the web server.

A simple example is React vs. web-components. React allows you to make reusable chunks of user interface. So do web-components. React lets you insert these components into the DOM. So do web-components. React has a lot of overhead and often requires you to write code in a higher-level form that is compiled into code that actually runs on browsers. React changes all the time, breaking existing code. React components do not play nicely with other things (e.g. you can’t write code that treats a React component like an <input> tag, and if you insert web-components inside a React component it may lose its shit).

Why are we still using React?

  • Does it, under the hood, use non-obvious best practices? At best arguable.
  • Does it insulate you from capricious APIs? No, it’s much more capricious than the APIs it relies on.
  • Does it insulate you from breaking changes? Somewhat. But if you didn’t use it you wouldn’t care.
  • Does it save you busywork? No, in fact building web-components and using web-components is just as easy, if not easier, than using React. And learning how to do it is universal and portable.
  • Does it eliminate the need for domain knowledge? Only to the extent that React coders learn React and not the underlying browser behavior OR need to relearn how to do things they already know in React.
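For comparison, here’s what a from-scratch web-component looks like (HelloBadge is a made-up example; the base-class fallback is only there so the snippet also runs outside a browser):

```javascript
// a minimal reusable component with no framework; in the browser this
// extends HTMLElement and is registered via customElements.define.
const Base = globalThis.HTMLElement ?? class { getAttribute() { return null; } };

class HelloBadge extends Base {
  // browsers call connectedCallback when the element enters the DOM
  connectedCallback() {
    this.textContent = `Hello, ${this.getAttribute("name") || "world"}!`;
  }
}

if (globalThis.customElements) {
  customElements.define("hello-badge", HelloBadge);
}
```

Once defined, it works like any tag: <hello-badge name="Nova"></hello-badge>.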

Angular, of course, is far, far worse.

How could it be better?

If you’re working at a small shop you probably don’t do all this. E.g. I built a digital library system from “soup-to-nuts” on some random Ubuntu server running some random version of PHP and some random version of MySQL. I was pretty confident that if PHP or MySQL did anything likely to break my code (violate the abstraction) this would, at minimum, be a new version of the software and, most likely, a 1.0 or 0.1 increment in version. Even more, I was pretty confident that I would survive most major revisions unscathed, and if not, I could roll back and fix the issues.

It never burned me.

My debug cycle (PHP, MySQL, HTML, XML, XSL, Javascript) was ~1s on localhost, maybe 10s on the (shared, crappy) server. This system was open-sourced and maintained, and ran long after I left the university. It’s in v3 now and each piece I wrote has been replaced one way or another.

Why on Earth don’t we build our software out of reliable modular components that offer reliable abstraction layers? Is it because Joel Spolsky allegedly argued, in effect, that there’s no such thing as a reliable abstraction layer? Or is it because people took an argument that you should define your abstraction layers cautiously (aware of the benefit/cost issue), as leak-free as possible, and put them in the right places, and instead decided that was too hard: no abstraction layer, no leaks? Or is it the quantum improvement in dependency-management systems that allows (for example) a typical npm/yarn project to have thousands of transitive dependencies that update almost constantly without breaking all that often?

The big problem in software development is scale. Scaling capacity. Scaling users. Scaling use cases. Scaling engineers. Sure, my little digital library system worked aces, but that was one engineer, one server, maybe a hundred daily active users. How does it scale to 10k engineers, 100k servers, 1B customers?

Scaling servers is a solved problem: we can use an off-the-shelf solution. In fact, the last two (servers and customers) are the easy part.

10k engineers is harder. I would argue first off that having 10k engineers is an artifact of not having good abstraction layers. Improve your basic architecture by enforcing sensible abstractions and you only need 1k engineers, or fewer. At the R&D unit of a big tech company I worked at, every front-end server and every front-end was a unique snowflake with its own service layer. There was perhaps a 90% overlap in functionality between any given app and at least two other apps. Nothing survived more than 18 months. Software was often deprecated before it was finished.

Imagine if you have a global service directory and you only add new services when you need new functionality. All of a sudden you only need to build, debug, and maintain a given service once. Also, you have a unified security model. Seems only sensible, right?

Imagine if your front-end is purely static client-side code that can run entirely from cache (or even be mobilized as an offline web-app via manifest). Now it can be served on a commodity server and cached on edge-servers.

Conceptually, you’ve just gone from (for n apps) n different front-end servers you need to care for and feed to 0 front-end servers (it’s off-the-shelf and basically just works) and a single service server.

Holy Grail? We’ve already got one. It’s very nice.

(To paraphrase Monty Python.)

If you’ve already got a mobile client, it’s probably living on (conceptually, if not actually) a single service layer. New versions of the client use the same service layer (modulo specific changes to the contract). Multiple versions of the client coexist and can use the same service layer!

In fact, conceptually, we develop a mobile application out of HTML, CSS, and Javascript and we’re done. Now, there are still abstraction layers worth having (e.g. you can wrap web-component best-practices in a focused library that evolves as the web-components API evolves, and you definitely want abstraction layers around contenteditable or HTML5 drag-and-drop), but not unnecessary and leaky abstraction layers that require you to rebuild the engine when you change the color of the headrest.

Computed Properties are Awesome…

…and should be used as little as possible

One of the things I’ve seen happen multiple times in my career is a language that didn’t have computed properties gaining them, while mediocre coders continue to cargo-cult old behaviors that the new feature has made counter-productive and obsolete.

The Problem

Things change. In particular, data-structures change. So one day you might decide that a Person looks like:

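Something like this, say (the fields besides name are made up for illustration):

```javascript
// a Person with exactly three fields and no code
const person = {
  name: "Jane Doe",
  age: 42,
  height: 168,
};
```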

and another day you decide it really should be:

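Something like (again, the extra fields are illustrative):

```javascript
// name has been split into two fields
const person = {
  firstName: "Jane",
  lastName: "Doe",
  age: 42,
  height: 168,
};
```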

In fact, you’re likely to move from the first model to the second model in increments. Meanwhile you’re writing other code that assumes the current model, whatever it happens to be. There’ll always be “perfectly good” stuff that works with a given representation at a given time, and breaking stuff that’s “perfectly good” because other stuff is changing is bad.

The Solution

The solution is to use “getters” and “setters”. You tell people never to expose the underlying representation of the data, and instead to write methods, graven in stone, that don’t change. So your initial representation is:

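A sketch of that shape in javascript (_name, getName, and setName follow the text; the other fields are illustrative):

```javascript
// three hidden fields plus six methods: nine members where there were three
class Person {
  constructor(name, age, height) {
    this._name = name;
    this._age = age;
    this._height = height;
  }
  getName() { return this._name; }
  setName(x) { this._name = x; }
  getAge() { return this._age; }
  setAge(x) { this._age = x; }
  getHeight() { return this._height; }
  setHeight(x) { this._height = x; }
}
```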

and now you can split _name into _firstName and _lastName, rewrite getName, figure out how to implement setName and/or find all occurrences of setName and fix them, and so on.

Problem solved!

Problems with the Solution

Before, when you started out with only three fields, your Person object had exactly three members and no code. The solution immediately changed that to nine members (3 times as many) and six of those are functions. Now, every time you access any property, it’s a function call. If you’re using a framework that transparently wraps function calls for various reasons you’re maybe calling 3 functions to do a simple property lookup.

But aside from that, it works, and the fact that it prevents unnecessary breakage is a win, even though it means more code, less readable code, slower experimentation, and costs in performance and memory.

A better solution

A better solution is immediately obvious to anyone who has used a language with computed properties: start with your original object and, when things change, replace the changed properties with getters and setters, so that for legacy code (and everything is legacy code eventually) nothing changes.

E.g. when name becomes firstName and lastName, you implement name as a computed property (implemented in the obvious way) and code that expects a name property to exist just keeps on working (with a slight performance cost, the same cost you would have paid from day one under the previous solution).
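A sketch of that, assuming a naive first-name/last-name split:

```javascript
// storage has moved to firstName/lastName; legacy `name` keeps working
const person = {
  firstName: "Jane",
  lastName: "Doe",
  get name() { return `${this.firstName} ${this.lastName}`; },
  set name(value) {
    // naive: assumes the value is exactly "first last"
    [this.firstName, this.lastName] = value.split(" ");
  },
};
```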

The Antipattern

This exact thing happened when Objective-C added computed properties. All the folks who told you to write getters and setters for your Objective-C properties told you to ignore computed properties and keep doing things the old way or, perhaps worse, use computed properties to write getters and setters, so now you start with:

  get name() { return this._name },
  set name(x) { this._name = x },
  /* and so on */

Is there a good reason for this?

There can be arguments at the margins, in some cases, that computed properties will be less performant than getters and setters (although in Obj-C that’s a huge stretch, since method dispatch is, by default, not lightning fast in Obj-C; indeed, it’s a known performance issue with a standard workaround: method calls in Obj-C involve looking up methods by name, and the optimization is to essentially grab a function pointer and hang on to it).

There’s absolutely no way that writing computed property wrappers around concrete properties just for the sake of it has any benefit whatsoever.

The short answer is “no”.

Aside: there’s one more marginal argument for getters and setters: if you’re deprecating a property (e.g. name) and you’d rather people didn’t call a hacky name setter that tries to compute firstName and lastName from a name string, it’s easier to codemod or grep calls to setName than calls to, say, .name =. I’ll mention this argument here even though I find it about as solid as arguments about how to write Javascript based on how well it plays with obfuscation.

I’m pretty sure cargo cult coders are still writing getters and setters in Swift, which has had computed properties since first release. (By the way, lazy computed properties are even more awesome.)
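Javascript can get much the same effect with a getter that caches itself on first access (a sketch, not a language feature):

```javascript
// a lazily computed property: compute once, then cache
const stats = {
  samples: [1, 2, 3, 4],
  get mean() {
    const value = this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
    // replace the getter with a plain cached value on first access
    Object.defineProperty(this, "mean", { value });
    return value;
  },
};
```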