WordPress WTF?!

Recently, WordPress has been so badly behaved as to boggle my mind. If I’m lucky, I only get something like this in the console when I create a new post…

Embarrassing shit in the console when I load WordPress…

If I’m unlucky, I get a screenful of errors and nothing appears. It was so bad a couple of minor versions ago that I started trying to figure out how to rebuild my site without WordPress (getting the post text is easy enough, but WordPress uses an uploads directory, which makes things non-trivial).

Anyway, I’m going to whine about this to my three readers in the hope that maybe something gets done about this shit. (It’s things like this that give jQuery a bad name, and I strongly doubt it’s jQuery’s fault.)

No Man’s Sky Revisited

No Man’s Sky was originally released in 2016. I’d been waiting for it for nearly two years after seeing some early demos. This looked like a game I’d been day-dreaming about for decades.

It was one of the most disappointing games I’ve ever played.

I recently saw No Man’s Sky Beyond on sale in Best Buy (while shopping for microphones for our upcoming podcast) and immediately picked it up. Speaking of disappointing game experiences, the PS4 VR has been a gigantic disappointment ever since I finished playing Skyrim (which was awesome). Why there haven’t been more VR updates of great games from previous generations (e.g. GTA IV) escapes me, because second-rate half-assed new VR games do not impress me.

Anyway, I did not realize that (a) No Man’s Sky Beyond is merely the current patched version of No Man’s Sky, and (b) that the VR mode is absolutely terrible. But the current, fully patched version of No Man’s Sky is a huge improvement over the game I was horribly disappointed by back in 2016. It’s still not actually great, but it’s decent, and I can see myself coming back to it now and then when I want a fairly laid-back SF fix.

High points:

  • There’s an arc quest that introduces core gameplay elements in a reasonably approachable way (although the start of the game is still kind of brutal)
  • There are dynamically generated missions
  • The space stations now actually kind of make sense
  • Base construction is pretty nice
  • There’s a kind of dumb “learn the alien languages” subgame
  • Planets have more interesting stuff on them

Low points:

  • Space is monotonous (star systems comprise a bunch of planets, usually at least one with rings, in a cloud of asteroids, all next to each other). Space stations seem to look like D&D dice with a hole in one side (minor spoiler: there’s also the “Anomaly” which is a ball with a door).
  • Planets are monotonous — in essence you get a color scheme, a hazard type (radiation, cold, heat — or, occasionally, no hazard), one or two vegetation themes, one or two mobility themes for wildlife, and that’s about it. (If there are oceans, you get extra themes underwater.) By the time you’ve visited five planets, you’re seldom seeing anything new.
  • Ecosystems are really monotonous (why does the same puffer plant seem to be able to survive literally anywhere?)
  • The aliens are just not very interesting (great-looking though)
  • On the PS4 the planet atmospheres look like shit
  • The spaceship designs are pretty horrible aesthetically — phone booth bolted to an erector set ugly.
  • Very, very bad science (one of my daughters was pissed off that “Salt” which was labeled as NaCl could not be refined into Sodium which — mysteriously — powers thermal and radiation protection gear). Minerals and elements are just used as random herbal ingredients for a potion crafting system that feels like it was pulled out of someone’s ass.
  • Way, way too much busywork, e.g. it’s convenient that “Silica Powder” can fuel your Terrain modifier tool (which generates Silica Powder as a byproduct of use), but why put in the mechanic at all? Why do I need to assemble three things over and over again to fuel up my hyperdrive? Why do I keep needing to pause construction to burn down trees to stock up on carbon?

The audacity of building a game with a huge, fractally detailed universe is not what it once was. It’s an approach many developers took out of necessity in an era when memory and storage were simply too limited to store handmade content, and budgets were too small to create it — Elite, Akalabeth, Arena, Pax Imperia, and so on — but it’s disappointing to see a game built this way with far more capable technology, more resources, and greater ambition keep failing to deliver in so many (to my mind) easily addressable ways. When No Man’s Sky was first demoed, my reaction was “wow, that’s gorgeous and impressive, I wonder where the gameplay is”. When it actually came out, two years later, my reaction was “hmm, not as gorgeous as the demo, and there’s basically no gameplay”. As of No Man’s Sky Beyond — well, the gameplay is now significantly better than the original Elite (or, in my opinion, Elite Dangerous) — which is not nothing.

As a final aside, one day I might write a companion article about Elite Dangerous, a game in many ways parallel to No Man’s Sky. The main reason I haven’t done so already is that I found Elite Dangerous so repellent that I quit before forming a complete impression. Ironically, I think Elite Dangerous is in some ways a better No Man’s Sky and No Man’s Sky is a better Elite.

Data-Table Part 2: literate programming, testing, optimization

I’ve been trying to build the ideas of literate programming into b8r (and its predecessor) for some time now. Of course, I haven’t actually read Knuth’s book or anything. I did play with Oberon for a couple of hours once, and I tried to use Light Table, and I have had people tell me what they think Knuth meant, but what I’ve tried to implement is the thing that goes off in my head in response to the term.

One of the things I’ve grown to hate is being forced to use a specific IDE. Rather than implement my own IDE (which would (a) be a lot of work and (b) at best produce Yet Another IDE people would hate being forced to use) I’ve tried to implement stuff that complements whatever editor, or whatever, you use.

Of course, I have implemented it for bindinator, and of course that’s pretty specific and idiosyncratic itself, but it’s a lot more real-world and less specific or idiosyncratic than — say — forcing folks to use a programming language no-one uses on an operating system no-one uses.

Anyway, this weekend’s exploit consisted of writing tests for my debounce, throttle, and — new — throttleAndDebounce utility functions. The first two functions have been around for a long time and are used extensively. I’m pretty sure they worked — but the problem is that the failure modes of such functions are pretty subtle.

Since all that they really do is make sure that if you call some function a lot of times it will not get called ALL of those times, and — in the case of debounce — that the function will be called at least once after the last time you call it, I thought it would be nice to make sure that they actually worked exactly as expected.
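For reference, here are minimal sketches of the two patterns (these are not b8r’s actual implementations, just the standard shapes whose edge cases the tests need to pin down):

    // minimal sketches of the standard patterns (not b8r's actual implementations)
    const debounce = (fn, interval = 100) => {
      let timeout = null;
      return (...args) => {
        // reset the timer on every call; fn fires once, interval ms after the last call
        clearTimeout(timeout);
        timeout = setTimeout(() => fn(...args), interval);
      };
    };

    const throttle = (fn, interval = 100) => {
      let last = 0;
      return (...args) => {
        // ignore calls that arrive within interval ms of the last accepted call
        const now = Date.now();
        if (now - last >= interval) {
          last = now;
          fn(...args);
        }
      };
    };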

Spoiler alert: the functions did work, but writing solid tests around them was surprisingly gnarly.

The reason I suddenly decided to do this was that I had been trying to optimize my data-table (which is now virtual), and also show off how awesome it is, by adding a benchmark for it to my performance tools. This is a small set of benchmarks I use to check for performance regressions. In a nutshell it compares b8r’s ability to handle largish tables with vanilla js (the latter also cheating, e.g. by knowing exactly which DOM element to manipulate), the goal being for b8r to come as close to matching vanilla js in performance as possible.

I should note that these timings include the time spent building the data source, which turns out to be a mistake for the larger row counts.
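For the curious, the timing itself is nothing fancy; the sketch below shows the rough shape of a run (the row builder and renderer are trivial stand-ins I made up, not the actual benchmark code):

    // rough shape of a benchmark run; buildRows and renderTable are made-up stand-ins
    const buildRows = n =>
      Array.from({ length: n }, (_, i) => ({ id: i, name: `row ${i}`, value: Math.random() }));

    const renderTable = rows => {
      // stand-in renderer; the real benchmark renders via b8r bindings or hand-rolled DOM code
      const tbody = document.querySelector('tbody');
      tbody.textContent = '';
      rows.forEach(row => {
        const tr = document.createElement('tr');
        tr.textContent = `${row.id} ${row.name} ${row.value.toFixed(3)}`;
        tbody.appendChild(tr);
      });
    };

    const benchmark = (label, rowCount) => {
      const start = performance.now();
      const rows = buildRows(rowCount); // note: data-source construction is inside the timed span
      renderTable(rows);
      console.log(`${label}: ${Math.round(performance.now() - start)}ms`);
    };

    benchmark('render table with 10k rows', 10000);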

So, on my laptop:

  • render table with 10k rows using vanilla js — ~900ms
  • render table with 10k rows using b8r — ~1300ms
  • render table with 10k rows using b8r’s data-table — ~40ms

Note: this is the latest code. On Friday it was a fair bit worse — maybe ~100ms.

Anyway, this is pretty damn cool. (What’s the magic? b8r is only rendering the visible rows of the table. It’s how Mac and iOS applications render tables. The lovely thing is that it “just works” — it even handles momentum scrolling — like iOS.)
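The arithmetic behind that is simple enough to sketch (this is just an illustration of the idea, not biggrid’s actual code):

    // illustrative only (not biggrid's actual code): the core arithmetic of virtual scrolling
    const visibleSlice = (rows, scrollTop, viewportHeight, rowHeight) => {
      const first = Math.floor(scrollTop / rowHeight);
      const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for the partially visible row
      return {
        offset: first * rowHeight, // used to pad the list so the scrollbar stays honest
        rows: rows.slice(first, first + count),
      };
    };

Bind only the sliced rows, pad the container by the offset, and recompute on scroll; that’s essentially all there is to it.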

Virtual javascript tables are nothing new. Indeed, I’ve had two implementations in b8r’s component examples for three years. But data-table is simple, powerful, and flexible in use.

So I added a 100k row button. Still pretty fast.

A little profiling revealed that it was wasting quite a bit of time on recomputing the slice and type-checking (yes, the b8r benchmarks include dynamically type-checking all of the rows in the table against a heterogeneous array type where the matching type is the second type checked). So, I optimized the list computations and had the dynamic type-checking only check a subsample of arrays > 110 rows.

  • render table with 100k rows using b8r’s data-table — ~100ms

So I added a 1M row button. Not so fast. (Worse, scrolling was pretty bad.)

That was Thursday. And so in my spare time I’ve been hunting down the speed problems because (after all) displaying 40 rows of a 1M row table shouldn’t be that slow, should it?

Turns out that slicing a really big array can be a little slow (>1/60 of a second), and the scroll event that triggers it gets triggered every screen refresh. Ouchy. It seemed to me that throttling the array slices would solve this problem, but throttle was originally written to prevent a function from being called again within some interval of the last call starting (versus finishing). So I modified throttle to delay the next call based on when the function finishes executing.

Also, you don’t want to use throttle, because then the last call to the throttled function (the array slice) won’t happen, which could leave you with a missing array row on the screen. You also don’t want to use debounce, because then the list won’t update until it stops scrolling, which is worse. What you want is a function that is throttled AND debounced. (And composing the two does not work — either way you end up debounced.)

So I needed to write throttleAndDebounce and convince myself it worked. I also wanted to convince myself that the new, subtly different throttle worked as expected. And since the former was going to be used in data-table, which I think is going to be a very important component, I really wanted to be sure it was solid.
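Here is roughly the behavior I was after (a sketch of the semantics, not the exact code that ended up in b8r): leading calls are throttled, the interval is measured from when the wrapped function finishes, and a trailing call is guaranteed so the final update is never dropped.

    // sketch of throttleAndDebounce semantics (not the exact code that landed in b8r)
    const throttleAndDebounce = (fn, interval = 100) => {
      let busyUntil = 0;
      let trailing = null;
      return (...args) => {
        clearTimeout(trailing);
        const now = Date.now();
        if (now >= busyUntil) {
          fn(...args);
          busyUntil = Date.now() + interval; // measured after fn returns, not when it was called
        } else {
          // the debounce half: re-schedule the latest call for when the throttle window reopens
          trailing = setTimeout(() => {
            fn(...args);
            busyUntil = Date.now() + interval;
          }, busyUntil - now);
        }
      };
    };

Testing something like this largely boils down to hammering the wrapper with rapid calls and checking exactly how many times, and when, the wrapped function fired.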

  • render table with 1M rows using b8r’s data-table — ~1100ms

Incidentally, ~600ms of that is rendering the list, the rest is just building the array.

At this point, scrolling is smooth but the row refreshes are not (i.e. the list scrolls nicely but the visible row rendering can fall behind; live filtering the array is acceptably fast). Not shabby, but perhaps meriting further work.

Profiling now shows the last remaining bottleneck I have any control over is buildIdPathValueMap — this is itself a function that optimizes id-path lookup deep in the bowels of b8r. (b8r implements the concept of a data-path, which lets you uniquely identify any part of a javascript object much as you would in javascript, e.g. foo.bar.baz[17].lurman, except that inside the square brackets you can put an id-path such as app.messages[uuid=0c44293e-a768-4463-9218-15d486c46291].body and b8r will look up that exact element in the array for you.)
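To make that concrete, here is what the two path styles mean in plain JavaScript terms (illustrative only; this is not b8r’s lookup code, and the second message is dummy data):

    // illustrative only: what the two path styles mean in plain JavaScript terms
    const app = {
      messages: [
        { uuid: '0c44293e-a768-4463-9218-15d486c46291', body: 'hello' },
        { uuid: 'some-other-uuid', body: 'world' }, // dummy data
      ],
    };

    // foo.bar.baz[17].lurman is ordinary property/index access

    // app.messages[uuid=0c44293e-a768-4463-9218-15d486c46291].body means
    // "find the array element whose uuid matches, then take its body":
    const body = app.messages
      .find(message => message.uuid === '0c44293e-a768-4463-9218-15d486c46291')
      .body;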

The thing is, buildIdPathValueMap is itself an optimization specifically targeting big lists, added when I first added the benchmark. b8r was choking on big lists, and it was the id-path lookups that were the culprit. So, when you use an id-path on a list, b8r builds a hash based on the id-path once so you don’t need to scan the list each time. The first time the hash fails, it gets regenerated (for a big, static list the hash only needs to be generated once). For my 1M row array, building that hash takes ~300ms. Thing is, it kept regenerating that hash.
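Conceptually the optimization is the obvious one: scan the list once, build a map from id-path value to array index, and subsequent lookups become constant-time (this is the idea, not the actual buildIdPathValueMap code):

    // the idea behind the id-path value map (not the actual buildIdPathValueMap code)
    const buildValueMap = (list, key) =>
      list.reduce((map, item, index) => {
        map[item[key]] = index; // e.g. map a uuid (or _auto_ key) to its array index
        return map;
      }, {});

    // a find() that scanned a million rows becomes a single lookup:
    // const message = app.messages[valueMap['0c44293e-a768-4463-9218-15d486c46291']];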

So, eventually I discovered a subtle flaw in biggrid (the thing that powers the virtual slicing of rows for data-table). b8r uses id-paths to optimize list-bindings so that when a specific item in an array changes, only the exact elements bound to values in that list item get updated. And, to save developers the trouble of creating unique list keys (something ReactJS developers will know something about), b8r allows you to use _auto_ as your id-path and guarantees it will efficiently generate unique keys for bound lists.

b8r generates _auto_ keys in bindList, but bindList only generates keys for the list it sees. biggrid slices the list, exposing only visible items, which means _auto_ is missing for hidden elements. When the user scrolls, new elements are exposed which b8r then detects and treats as evidence that the lookup hash for the id-path (_auto_) is invalid. A 300ms thunk later, we have a new hash.

So, the fix (which impacts all filtered lists where the _auto_ id-path is used and the list is not initially displayed without filtering, not just biggrid) is to force _auto_ keys to be generated for the entire list during the creation of the hash. As an added bonus, since this only applies to one specific property but a common case, in the common case we can avoid using getByPath to evaluate the value of each list item at the id-path and just write item._auto_ instead.
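In outline, the fix amounts to something like this (a sketch of the idea, not the actual patch): assign _auto_ keys to every item up front, while the hash is being built, and read item._auto_ directly instead of going through getByPath.

    // sketch of the idea, not the actual patch: give every item an _auto_ key *before*
    // the lookup hash is built, so hidden rows can never invalidate it later
    let autoCount = 0;
    const ensureAutoKeys = list => {
      list.forEach(item => {
        if (!item._auto_) item._auto_ = `auto_${++autoCount}`;
      });
      return list;
    };

    const buildAutoValueMap = list =>
      ensureAutoKeys(list).reduce((map, item, index) => {
        map[item._auto_] = index; // special case: item._auto_ instead of getByPath(item, idPath)
        return map;
      }, {});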

  • render table with 1M rows using b8r’s data-table — ~800ms

And now scrolling does not trigger Chrome’s warning about scroll event handlers taking more than a few ms to complete. Scrolling is now wicked fast.

Must all data-tables kind of suck?

b8r's data-table in action
b8r now has a data-table component. It’s implemented in a little over 300 loc and supports custom header and content cells, fixed headers, a correctly-positioned scrollbar, sorting, resizing, and showing/hiding of columns.

Bindinator, up until a few weeks ago, did not have a data-table component. It has some table examples, but they’ve not seen much use in production, because b8r is a rewrite of a rewrite of a library named bindomatic that I wrote for the USPTO. Bindomatic was implemented to do all the data-binding outside of data-tables in a large, complex project, and then ended up replacing all the data-tables too.

It turns out that almost any complex table ends up being horrifically special-cased, and bindomatic, by being simpler, made it easier to build complex things bespoke. (I should note that I also wrote the original data-table component that bindomatic replaced.)

Ever since I first wrote a simple file-browser using b8r-native I’ve been wishing to make the list work as well as Finder’s windows. It would also benefit my RAW file manager if that ever turns into a real product.

The result is b8r’s data-table component, which is intended to scratch all my itches concerning data-tables (and tables in general) — e.g. fixed headers, resizable columns, showing/hiding columns, sorting, affording customization, etc. — while not being complex or bloated.

At a little over 300 loc I think I’ve avoided complexity and bloat, at any rate.

As it is, it can probably meet 80% of needs with trivial configuration (which columns do you want to display?), another 15% with complex customization. The remaining 5% — write your own!

Implementation Details

The cutest trick in the implementation is using a precisely scoped css-variable to control a css-grid, conforming all the column contents to the desired sizes. To change the entire table’s column layout, exactly one property of one DOM element gets changed. When you resize a column, it rewrites the css variable (if the number of rows is below a configurable threshold, it live-updates the whole table; otherwise it only updates the header row until you’re done).
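As a sketch of the trick (the --column-widths name and selector here are mine, not necessarily the component’s): each row is a css-grid whose grid-template-columns comes from a single custom property on the table element, so a resize is one setProperty call.

    // sketch of the css-variable trick; the --column-widths name and .t-row selector are
    // illustrative, not necessarily what the component actually uses.
    // In the stylesheet, each row would be styled something like:
    //   .data-table > .t-row { display: grid; grid-template-columns: var(--column-widths); }
    const setColumnWidths = (tableElement, widths) => {
      // one property on one element, and every row's grid reflows to match
      tableElement.style.setProperty('--column-widths', widths.map(w => `${w}px`).join(' '));
    };

    // e.g. during a drag-resize:
    // setColumnWidths(document.querySelector('.data-table'), [40, 240, 120, 80]);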

Another cute trick is to display the column borders (and the resizing affordance) using a pseudo-element that’s 100vh tall and clipped to the table component. (A similar trick could be used to hilite columns on mouse-over.)

Handling the actual drag-resizing of columns would be tricky, but I wrote b8r’s track-drag library some time ago to manage dragging once-and-for-all (it also deals with touch interfaces).

Next Steps…

There’s a to-do list in the documentation. None of it’s really a priority, although allowing columns to be reordered should be super easy and implementing virtualization should also be a breeze (I scratched that itch some time back).

The selection column shown in the animation is actually implemented using a custom headerCell and custom contentCell (there’s no selection support built in to data-table). I’ll probably improve the example and ship it as a helper library that provides a “custom column” for data-table and also serves as an example of adding custom functionality to the table.

I’d like to try out the new component with some hard-core use-cases, e.g. the example RAW file manager (which can easily generate lists of over 10,000 items), and the file-browser (I have an idea for a Spotlight-centric file browser), and — especially — my galaxy generator (since I have ideas for a web-based MMOG that will leverage the galaxy generator and use Google Firebase as its backend).

Apple Music… Woe Woe Woe

Meryn Cadell’s “Sweater” — I particularly recommend “Flight Attendant” and “Job Application” but they don’t have video clips that I can find.

We’ve been an Apple Music family pretty much from day one. I used to spend a lot of time and money in stores shopping for albums. Now I have access to pretty much everything there is (including comedy albums) for the price of a CD per month.

I love the fact that my kids can just play any music they like and aren’t forced to filter down their tastes to whatever is in our CD collection, or their friends like, or what’s on commercial radio (not that we listen to commercial radio).

I also love the fact that when I get a new Apple device it just effortlessly has everything in my library on the device the next day.

But…

As I said, I used to spend quite a bit of time buying music. So I have some unusual stuff. Also, I lived in Australia for a long time, so I have a lot of stuff that isn’t available in the US or is available in subtly different form in the US. Similarly, my wife is a huge David Bowie fan and has some hard-to-get Bowie albums, e.g. Japanese and British imports.

We’ve both been ripping CDs for a long time, and in 2012 we ripped everything we hadn’t already ripped as part of packing for a move.

So now we have music that isn’t quite recognized by iTunes. To some extent it gets synced across our devices, probably via a process that went like this:

  1. (Before Apple Music existed) explicitly send music to iPhone
  2. When we get new phone, restore phone from backup on Mac.
  3. (Later) Restore phone from cloud backup.
  4. (Apple Music arrives) Hey, as a service we’ll look through your music library, match each track to the thing we think it is in iTunes, and, rather than waste backup space, simply give you copies of our (superior!?) version. Oh, yeah, it’s DRMed, because we need to disable the tracks if you stop paying the subscription fee.

Now this mostly works swimmingly. But sometimes we encounter one of three failure modes:

  • You have something Apple Music doesn’t recognize
  • You have something Apple Music misrecognizes (this happened in the old days, when the hacky way iTunes identified tracks (using the CDDB et al.) would identify an album incorrectly and misname it and all of its tracks, but you could fix it and rename them manually)
  • You have something Apple Music recognizes correctly as something it doesn’t sell in your current region, and disables your ability to play it!

The first failure means that Apple may be able to restore the track from a backup of a device that had it, but it won’t restore it otherwise. So you have to find a machine with the original (non-DRMed) file and fix it.

The second failure means that Apple’s Music (iTunes on a Mac) application will start playing random crap. If you’re lucky you can find the correct thing in Apple Music and just play that instead, but now you’re stuck with Apple’s DRM and may even end up losing track of or deleting your (non-DRMed) version.

The third mode is particularly pernicious. I have a fantastic album by Canadian performance artist Meryn Cadell (Angel Food for Thought) that I bought after hearing a couple of the tracks played on Phillip Adams’ “Late Night Live” radio program many years ago. I freaking love that album. For years, Apple would sync the album from device to device because it had no freaking clue what it was…

But sometime recently, Apple added Meryn Cadell’s stuff to Apple Music. As far as I can tell, it added a second album (that I didn’t know existed) to the Apple Music US region, but not Angel Food for Thought. So it knows that Angel Food for Thought exists, but it won’t let me play it.

Now, I happen to know where my backups are. I fired up iTunes on my old Mac Pro and there’s Angel Food for Thought. It plays just fine. Then I turned on “sync to cloud” and all the tracks got disabled. It’s magical, but not in a good way.

This is ongoing… I will report further if anything changes.

Update

After escalation, I’ve been told iTunes is working as intended. So, basically, it will play music it can’t identify, or music that is DRM-free and already installed. What it won’t do is download and play music from iTunes that (it thinks) matches music that (it thinks) you have, if it doesn’t have the rights to that music in your jurisdiction.

So, I have the album “Angel Food for Thought”, which iTunes used not to know about, so it just worked. But now iTunes knows that it exists and that it doesn’t have US distribution rights, so it won’t propagate copies of “Angel Food for Thought” to my new devices (though it won’t stop me from manually installing them). Super annoying, but not actively harmful.

It does seem to mean that there’s a market for something that lets you stick all your own music in iCloud and stream it back to you.