Computed Properties are Awesome…

…and should be used as little as possible

One of the things I’ve seen happen multiple times in my career is a language that didn’t have computed properties gaining them, and then watching as mediocre coders continue to cargo-cult old behaviors that are now counter-productive and obsolete.

The Problem

Things change. In particular, data structures change. So one day you might decide that a Person looks like:

{
  name,
  address,
  email
}

and another day you decide it really should be:

{
  firstName,
  lastName,
  streetAddress,
  city,
  state,
  postcode,
  country,
  contacts[]
}

In fact, you’re likely to move from the first model to the second model in increments. Meanwhile you’re writing other code that assumes the current model, whatever it happens to be. There’ll always be “perfectly good” stuff that works with a given representation at a given time, and breaking stuff that’s “perfectly good” because other stuff is changing is bad.

The Solution

The solution is to use “getters” and “setters”. You tell people never to expose underlying representations of data and instead to write methods that can remain graven in stone. So your initial representation is:

{
  getName,
  setName,
  …,
  _name,
  _address,
  …,
} 

and now you can split _name into _firstName and _lastName, rewrite getName, figure out how to implement setName and/or find all occurrences of setName and fix them, and so on.
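In JavaScript terms, the getter/setter approach looks something like this minimal sketch (the class and field names here are illustrative, not from any particular codebase):

```javascript
// Sketch of the classic getter/setter pattern: the underlying fields are
// "private" by convention (underscore prefix) and all access goes
// through methods, so the representation can change without breaking callers.
class Person {
  constructor(name, address, email) {
    this._name = name
    this._address = address
    this._email = email
  }
  getName() { return this._name }
  setName(name) { this._name = name }
  getAddress() { return this._address }
  setAddress(address) { this._address = address }
  getEmail() { return this._email }
  setEmail(email) { this._email = email }
}

const p = new Person('Jane Doe', '123 Main St', 'jane@example.com')
p.setName('Janet Doe')
console.log(p.getName()) // 'Janet Doe'
```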

Problem solved!

Problems with the Solution

Before, when you started out with only three fields, your Person object had exactly three members and no code. The solution immediately changed that to nine members (3 times as many) and six of those are functions. Now, every time you access any property, it’s a function call. If you’re using a framework that transparently wraps function calls for various reasons you’re maybe calling 3 functions to do a simple property lookup.

But aside from that, it works, and the fact that it prevents unnecessary breakage is a win, even if it means more code, less readable code, harder experimentation, and costs in performance and memory.

A better solution

A better solution is immediately obvious to anyone who has used a language with computed properties. In this case you go back to your original object to start with, and when things change, you replace the changed properties with getters and setters so that for legacy code (and everything is legacy code eventually) nothing changes.

E.g. when name becomes firstName and lastName, you implement name as a computed property (implemented in the obvious way) and code that expects a name property to exist just keeps on working (with a slight performance cost, which is the same cost you would have paid from the start with the previous solution).
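In JavaScript, “the obvious way” might look like this sketch (the naive name-splitting is illustrative; real code would need to handle middle names and so on):

```javascript
// Sketch: name becomes a computed property over firstName/lastName,
// so legacy code that reads or writes .name keeps working unchanged.
const person = {
  firstName: 'Jane',
  lastName: 'Doe',
  get name() {
    return `${this.firstName} ${this.lastName}`
  },
  set name(value) {
    // Naive split on the first space; a stand-in for the real logic.
    const [firstName, ...rest] = value.split(' ')
    this.firstName = firstName
    this.lastName = rest.join(' ')
  },
}

person.name = 'John Smith'
console.log(person.firstName) // 'John'
console.log(person.name)      // 'John Smith'
```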

The Antipattern

This exact thing happened when Objective-C added computed properties. All the folks who told you to write getters and setters for your Objective-C properties told you to ignore computed properties and keep doing things the old way or, perhaps worse, use computed properties to write getters and setters, so now you start with:

{
  get name() { return this._name },
  set name(x) { this._name = x },
  _name: undefined,
  /* and so on */
}

Is there a good reason for this?

There can be arguments at the margins in some cases that computed properties will be less performant than getters and setters (although in Obj-C that’s a huge stretch, since method dispatch is, by default, not lightning fast in Obj-C — indeed it’s a known performance issue with a standard workaround: method calls in Obj-C involve looking up methods by name, and the optimization is to grab a function pointer once and hang on to it).

There’s absolutely no way that writing computed property wrappers around concrete properties just for the sake of it has any benefit whatsoever.

The short answer is “no”.

Aside: there’s one more marginal argument for getters and setters, that if you’re deprecating a property (e.g. name) and you’d rather people didn’t call a hacky name setter that tries to compute firstName and lastName from a name string, it’s easier to codemod or grep calls to setName than calls to, say, .name =. I’ll reference this argument here even though I find it about as solid as arguments about how to write Javascript based on how well it plays with obfuscation.

I’m pretty sure cargo cult coders are still writing getters and setters in Swift, which has had computed properties since first release. (By the way, lazy computed properties are even more awesome.)
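Swift’s lazy is built into the language; in JavaScript you can emulate the same idea with a getter that computes its value once and then replaces itself. A sketch (the property names and the “expensive computation” are made up for illustration):

```javascript
// Sketch: emulating a lazy computed property in JavaScript with a
// getter that computes once, then replaces itself with a plain value.
let computeCount = 0
const config = {
  get expensiveValue() {
    computeCount += 1 // track how many times we actually compute
    const value = 6 * 7 // stand-in for an expensive computation
    // Replace the (configurable) getter with the computed value.
    Object.defineProperty(this, 'expensiveValue', { value })
    return value
  },
}

console.log(config.expensiveValue) // 42 (computed on first access)
console.log(config.expensiveValue) // 42 (cached thereafter)
console.log(computeCount)          // 1
```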

The Dark Side Beckons

With the advent of COVID, my wife and I decided to return to a world we had sworn off — MMORPGs. Those of you who know us know that we literally met while playing EverQuest, and we had, at various times in our lives, been pulled deep into that world, e.g. being officers (and in my wife’s case, leader) of raiding guilds — a more-than-fulltime job.

Our last flirtation with MMOs had been on the PS4 with Elder Scrolls Online — think Skyrim but much bigger and full of other players running around, riding impossible-looking mounts, asking for help with world bosses, and randomly casting spells to show off.

Using one PS4 per player is a pain, worse if your kids are involved. So we decided we’d try something that (a) ran on a Mac (we have enough Macs for everyone), (b) would be free to play, and (c) wasn’t WoW.

Now, I omitted our more recent flirtation with GuildWars 2, which meets all these criteria but just feels completely flavorless. (In GW2 you can sign up for and complete a quest without actually ever meeting the person you’re supposed to be doing it for — they’ve streamlined the story out of the game).

Anyway, the game we had been pining for these past years was ESO. We’d stopped playing because of a lack of time and the PS4 inconvenience, and it runs on Macs.

Our big problem came when we tried to install it on my daughter’s 2012 Mac Pro. Turns out the installed version of macOS is so out-of-date (3y or so?) that it requires multiple sequential OS updates to run it, complicated by needing firmware flashes that come with the OS updates, and there’s no one-step “clean install”. Anyway, when I tried to do these it just stalled and wouldn’t go past the second version. Sheesh. (And, by the way, the Mac Pro works just fine for everything else.)

Also, my wife’s Macbook really struggled with it.

Enter the Acer Predator Helios 300

So, I searched for a well-reviewed gaming laptop and got two Acer Helios 300s, not maxed out but close. Each for roughly the price of a Macbook Air, or what I would normally pay for a headless gaming desktop back in the day (i.e. ~$1200). These suckers run ESO with near maxed-out graphics settings at ~90fps (dropping to 40-50fps in massive fights, and 30fps or slower when on battery power).

Are these well-made laptops? No, they are not. The way the power brick connects literally makes me cringe. The USB and headphone ports are in super annoying locations because they need the obvious spots for fan vents.

Do they run hot? OMG yes, yes they do. Our cats like to bask in their exhaust.

Are they quiet? No, no they are not.

Do they have good speakers? In a word, hell no.

Do the trackpads work well? No. They. Do. Not.

Does Windows 10 make me livid? Yes, yes it does.

My daughters, who openly envy us our hotrod gaming laptops (and use them as much and as often as they can), have commented on how badly put together they are in every respect except for the important criterion of kicking ass.

To be continued…

WordPress WTF?!

Recently, WordPress has been so badly behaved as to boggle my mind. If I’m lucky, I only get something like this in the console when I create a new post…

[Screenshot: embarrassing shit in the console when I load WordPress…]

If I’m unlucky, I get a screenful of errors and nothing appears. It was so bad a couple of minor versions ago, I started trying to figure out how to rebuild my site without WordPress (getting the post text is easy enough, but WordPress uses an uploads directory which makes things non-trivial).

Anyway, I’m going to whine about this to my three readers in the hope that maybe something gets done about this shit. (It’s things like this that give jQuery a bad name, and I strongly doubt it’s jQuery’s fault.)

No Man’s Sky Revisited

No Man’s Sky was originally released in 2016. I’d been waiting for it for nearly two years after seeing some early demos. This looked like a game I’d been day-dreaming about for decades.

It was one of the most disappointing games I’ve ever played.

I recently saw No Man’s Sky Beyond on sale in Best Buy (while shopping for microphones for our upcoming podcast) and immediately picked it up. Speaking of disappointing game experiences, the PS4 VR has been a gigantic disappointment ever since I finished playing Skyrim (which was awesome). Why there haven’t been more VR updates of great games from previous generations (e.g. GTA IV) escapes me, because second-rate half-assed new VR games do not impress me.

Anyway, I did not realize that (a) No Man’s Sky Beyond is merely the current patched version of No Man’s Sky, and (b) that the VR mode is absolutely terrible. But the current, fully patched version of No Man’s Sky is a huge improvement over the game I was horribly disappointed by back in 2016. It’s still not actually great, but it’s decent, and I can see myself coming back to it now and then when I want a fairly laid-back SF fix.

High points:

  • There’s an arc quest that introduces core gameplay elements in a reasonably approachable way (although the start of the game is still kind of brutal)
  • There are dynamically generated missions
  • The space stations now actually kind of make sense
  • Base construction is pretty nice
  • There’s a kind of dumb “learn the alien languages” subgame
  • Planets have more interesting stuff on them

Low points:

  • Space is monotonous (star systems comprise a bunch of planets, usually at least one with rings, in a cloud of asteroids, all next to each other). Space stations seem to look like D&D dice with a hole in one side (minor spoiler: there’s also the “Anomaly” which is a ball with a door).
  • Planets are monotonous — in essence you get a color scheme, a hazard type (radiation, cold, heat — or, occasionally, no hazard), one or two vegetation themes, one or two mobility themes for wildlife, and that’s about it. (If there are oceans, you get extra themes underwater.) By the time you’ve visited five planets, you’re seldom seeing anything new.
  • Ecosystems are really monotonous (why does the same puffer plant seem to be able to survive literally anywhere?)
  • The aliens are just not very interesting (great-looking though)
  • On the PS4 the planet atmospheres look like shit
  • The spaceship designs are pretty horrible aesthetically — phone booth bolted to an erector set ugly.
  • Very, very bad science (one of my daughters was pissed off that “Salt” which was labeled as NaCl could not be refined into Sodium which — mysteriously — powers thermal and radiation protection gear). Minerals and elements are just used as random herbal ingredients for a potion crafting system that feels like it was pulled out of someone’s ass.
  • Way, way too much busywork, e.g. it’s convenient that “Silica Powder” can fuel your Terrain Modifier tool (which generates Silica Powder as a byproduct of use) but why put in the mechanic at all? Why do I need to assemble three things over and over again to fuel up my hyperdrive? Why do I keep needing to pause construction to burn down trees to stock up on carbon?

The audacity of building a game with a huge, fractally detailed universe is not what it once was. It’s an approach many developers took out of necessity in an era when memory and storage were simply too limited to store handmade content, and budgets were too small to create it — Elite, Akalabeth, Arena, Pax Imperia, and so on — but it’s disappointing to see a game built this way with far more capable technology, more resources, and greater ambition keep failing to deliver in so many (to my mind) easily addressable ways. When No Man’s Sky was first demoed, my reaction was “wow, that’s gorgeous and impressive, I wonder where the gameplay is”. When it actually came out, two years later, my reaction was “hmm, not as gorgeous as the demo, and there’s basically no gameplay”. As of No Man’s Sky Beyond — well, the gameplay is now significantly better than the original Elite (or, in my opinion, Elite Dangerous) — which is not nothing.

As a final aside, one day I might write a companion article about Elite Dangerous, a game in many ways parallel to No Man’s Sky. The main reason I haven’t done so already is that I found Elite Dangerous so repellent that I quit before forming a complete impression. Ironically, I think Elite Dangerous is in some ways a better No Man’s Sky and No Man’s Sky is a better Elite.

Data-Table Part 2: literate programming, testing, optimization

I’ve been trying to build the ideas of literate programming into b8r (and its predecessor) for some time now. Of course, I haven’t actually read Knuth’s book or anything. I did play with Oberon for a couple of hours once, and I tried to use Light Table, and I have had people tell me what they think Knuth meant, but what I’ve tried to implement is the thing that goes off in my head in response to the term.

One of the things I’ve grown to hate is being forced to use a specific IDE. Rather than implement my own IDE (which would (a) be a lot of work and (b) at best produce Yet Another IDE people would hate being forced to use) I’ve tried to implement stuff that complements whatever editor, or whatever, you use.

Of course, I have implemented it for bindinator, and of course that’s pretty specific and idiosyncratic itself, but it’s a lot more real-world and less specific or idiosyncratic than — say — forcing folks to use a programming language no-one uses on an operating system no-one uses.

Anyway this weekend’s exploit consisted of writing tests for my debounce, throttle, and — new — throttleAndDebounce utility functions. The first two functions have been around for a long time and used extensively. I’m pretty sure they worked — but the problem is the failure modes of such functions are pretty subtle.

Since all that they really do is make sure that if you call some function a lot of times it will not get called ALL of those times, and — in the case of debounce — that the function will be called at least once after the last time you call it, I thought it would be nice to make sure that they actually worked exactly as expected.

Spoiler alert: the functions did work, but writing solid tests around them was surprisingly gnarly.
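For reference, canonical implementations of the two look something like this sketch (not b8r’s actual code, which handles more edge cases):

```javascript
// debounce: fn fires once, after calls have been quiet for `interval` ms.
const debounce = (fn, interval = 100) => {
  let timeout = null
  return (...args) => {
    clearTimeout(timeout)
    timeout = setTimeout(() => fn(...args), interval)
  }
}

// throttle: fn fires at most once per `interval` ms; calls in between
// are simply dropped (note that the *last* call can be dropped too,
// which is exactly the problem described later in this post).
const throttle = (fn, interval = 100) => {
  let last = 0
  return (...args) => {
    const now = Date.now()
    if (now - last >= interval) {
      last = now
      fn(...args)
    }
  }
}
```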

The reason I suddenly decided to do this was that I had been trying to optimize my data-table (which is now virtual), and also show off how awesome it is, by adding a benchmark for it to my performance tools. This is a small set of benchmarks I use to check for performance regressions. In a nutshell it compares b8r’s ability to handle largish tables with vanilla js (the latter also cheating, e.g. by knowing exactly which DOM element to manipulate), the goal being for b8r to come as close to matching vanilla js in performance as possible.

I should note that these timings include the time spent building the data source, which turns out to be a mistake for the larger row counts.

So, on my laptop:

  • render table with 10k rows using vanilla js — ~900ms
  • render table with 10k rows using b8r — ~1300ms
  • render table with 10k rows using b8r’s data-table — ~40ms

Note: this is the latest code. On Friday it was a fair bit worse — maybe ~100ms.

Anyway, this is pretty damn cool. (What’s the magic? b8r is only rendering the visible rows of the table. It’s how Mac and iOS applications render tables. The lovely thing is that it “just works” — it even handles momentum scrolling — like iOS.)
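The core of the visible-rows trick can be sketched in a few lines (this is an illustration of the general virtual-list technique, not b8r’s or biggrid’s actual code; all names are made up):

```javascript
// Given scrollTop, render only the rows in view (plus a little padding),
// and position them inside a spacer sized for the whole list.
const visibleSlice = (list, scrollTop, rowHeight, viewportHeight, padding = 2) => {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - padding)
  const count = Math.ceil(viewportHeight / rowHeight) + padding * 2
  return {
    first,
    rows: list.slice(first, first + count),
    offsetTop: first * rowHeight,          // translate the rendered rows here
    totalHeight: list.length * rowHeight,  // height of the scroll spacer
  }
}

const list = Array.from({ length: 1000000 }, (_, i) => i)
const { rows, offsetTop } = visibleSlice(list, 4000, 40, 600)
console.log(rows.length) // 19 (15 visible rows + 2 rows of padding each side)
console.log(offsetTop)   // 3920
```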

Virtual javascript tables are nothing new. Indeed, I’ve had two implementations in b8r’s component examples for three years. But data-table is simple, powerful, and flexible in use.

So I added a 100k row button. Still pretty fast.

A little profiling revealed that it was wasting quite a bit of time on recomputing the slice and on type-checking (yes, the b8r benchmarks include dynamically type-checking all of the rows in the table against a heterogeneous array type where the matching type is the second type checked). So I optimized the list computations and had the dynamic type-checking only check a subsample of arrays with more than 110 rows.

  • render table with 100k rows using b8r’s data-table — ~100ms

So I added a 1M row button. Not so fast. (Worse, scrolling was pretty bad.)

That was Thursday. And so in my spare time I’ve been hunting down the speed problems because (after all) displaying 40 rows of a 1M row table shouldn’t be that slow, should it?

Turns out that slicing a really big array can be a little slow (>1/60 of a second), and the scroll event that triggers it gets triggered every screen refresh. Ouchy. It seemed to me that throttling the array slices would solve this problem, but throttle was originally written to prevent a function from being called again within some interval of the last call starting (versus finishing). So I modified throttle to delay the next call based on when the function finishes executing.

Also, you don’t want to use throttle alone, because then the last call to the throttled function (the array slice) might never happen, which could leave you with a missing array row on the screen. You also don’t want to use debounce, because then the list won’t update until it stops scrolling, which is worse. What you want is a function that is throttled AND debounced. (And composing the two does not work — either way you end up debounced.)

So I needed to write throttleAndDebounce and convince myself it worked. I also wanted to convince myself that the new, subtly different throttle worked as expected. And since the former was going to be used in data-table, which I think is going to be a very important component, I really wanted to be sure it was solid.
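One way to sketch throttleAndDebounce (again, not b8r’s actual implementation): rate-limit leading calls like a throttle, but always schedule a trailing call like a debounce, so the final invocation is never lost:

```javascript
// Sketch: throttled AND debounced. Leading calls fire at most once per
// `interval`; a trailing call is scheduled for anything rate-limited,
// so the last call always lands eventually.
const throttleAndDebounce = (fn, interval = 100) => {
  let last = 0
  let timeout = null
  return (...args) => {
    clearTimeout(timeout)
    const now = Date.now()
    if (now - last >= interval) {
      last = now
      fn(...args) // throttle part: fire immediately
    } else {
      // debounce part: guarantee the last call eventually fires
      timeout = setTimeout(() => {
        last = Date.now()
        fn(...args)
      }, interval)
    }
  }
}
```

Used on the scroll handler, this keeps the visible rows updating during the scroll (throttle) while guaranteeing a final render once scrolling stops (debounce).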

  • render table with 1M rows using b8r’s data-table — ~1100ms

Incidentally, ~600ms of that is rendering the list, the rest is just building the array.

At this point, scrolling is smooth but the row refreshes are not (i.e. the list scrolls nicely but the visible row rendering can fall behind, and live filtering the array is acceptably fast). Not shabby but perhaps meriting further work.

Profiling now shows the last remaining bottleneck I have any control over is buildIdPathValueMap — this is itself a function that optimizes id-path lookup deep in the bowels of b8r. (b8r implements the concept of a data-path which lets you uniquely identify any part of a javascript object much as you would in javascript, e.g. foo.bar.baz[17].lurman, except that inside the square brackets you can put an id-path such as app.messages[uuid=0c44293e-a768-4463-9218-15d486c46291].body and b8r will look up that exact element in the array for you.)

The thing is, buildIdPathValueMap is itself an optimization specifically targeting big lists that was added when I first added the benchmark. b8r was choking on big lists and it was the id-path lookups that were the culprit. So, when you use an id-path on a list, b8r builds a hash based on the id-path once so you don’t need to scan the list each time. The first time the hash fails, it gets regenerated (for a big, static list the hash only needs to be generated once). For my 1M row array, building that hash is ~300ms. Thing is, it was regenerating that hash.
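The idea behind that hash is roughly this (a sketch of the general technique, not b8r’s actual buildIdPathValueMap; the helper name and simple dotted-path lookup are my own):

```javascript
// Sketch of the id-path lookup optimization: scan the list once and
// build a map from id value → array index, so subsequent id-path
// lookups are O(1) instead of O(n) scans.
const buildIdValueMap = (list, idPath) => {
  const map = new Map()
  list.forEach((item, index) => {
    // Assumes a simple dotted path, e.g. 'uuid' or 'meta.id'.
    const value = idPath.split('.').reduce((obj, key) => obj && obj[key], item)
    map.set(String(value), index)
  })
  return map
}

const messages = [
  { uuid: 'a1', body: 'hello' },
  { uuid: 'b2', body: 'world' },
]
const byUuid = buildIdValueMap(messages, 'uuid')
console.log(messages[byUuid.get('b2')].body) // 'world'
```

The catch described below is that if the id values themselves appear to change (as happens when _auto_ keys are generated lazily), the map looks invalid and gets rebuilt — which for a 1M row list costs ~300ms each time.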

So, eventually I discovered a subtle flaw in biggrid (the thing that powers the virtual slicing of rows for data-table). b8r uses id-paths to optimize list-bindings so that when a specific item in an array changes, only the exact elements bound to values in that list item get updated. And, to save developers the trouble of creating unique list keys (something ReactJS developers will know something about) b8r allows you to use _auto_ as your id-path and guarantees it will efficiently generate unique keys for bound lists.

b8r generates _auto_ keys in bindList, but bindList only generates keys for the list it sees. biggrid slices the list, exposing only visible items, which means _auto_ is missing for hidden elements. When the user scrolls, new elements are exposed which b8r then detects and treats as evidence that the lookup hash for the id-path (_auto_) is invalid. A 300ms thunk later, we have a new hash.

So, the fix (which impacts all filtered lists where the _auto_ id-path is used and the list is not initially displayed unfiltered, not just biggrid) is to force _auto_ keys to be generated for the entire list during the creation of the hash. As an added bonus, since this only applies to one specific property (albeit a common case), in the common case we can avoid using getByPath to evaluate the value of each list item at the id-path and just write item._auto_ instead.

  • render table with 1M rows using b8r’s data-table — ~800ms

And now scrolling does not trigger Chrome’s warning about scroll event handlers taking more than a few ms to complete. Scrolling is now wicked fast.