b8r 0.5.0

b8r 0.5.0 is available from npm and github. See the github-pages demo here. It’s the single most radical change to b8r since it was first released, although unlike the switch from require() to ES6 Modules it’s non-breaking, so I’ve updated the documentation accordingly.

How it started:

b8r.register('myApp', {
  foo: 17,
  bar: {
    baz: 'lurman'
  },
  list: []
})

const { baz } = b8r.get('myApp.bar')
b8r.set('myApp.bar.baz', 'lurman')
b8r.pushByPath('myApp.list', { id: 17, name: 'Romeo + Juliet' })
const { id } = b8r.get('myApp.list[id=17]') // one of b8r's coolest features

How it’s going:

b8r.reg.myApp = {
  foo: 17,
  bar: {
    baz: 'lurman'
  },
  list: []
}

const {baz} = b8r.reg.myApp.bar
// OMG I've been misspelling his name in all my examples!
b8r.reg.myApp.bar.baz = 'luhrmann' 
b8r.reg.myApp.list.push({id: 17, name: 'Romeo + Juliet'})
const {id} = b8r.reg.myApp.list['id=17'] // this still works!

The changes are, basically, syntax sugar for the most common operations with b8r, which means that there’s a huge potential to make code that’s already about as compact and simple as front-end code gets, and make it more compact and more simple. How simple? Here’s the code implementing the simple todo component from the React vs. Bindinator document:

data.list = []
data.text = ''
data.nextItem = 1
data.addItem = () => {
  const {text, nextItem} = data
  data.list.push({id: nextItem, text})
  data.nextItem += 1
  data.text = ''
}

The only piece of information you need to know about this is that data is the component’s state, which has been retrieved from the registry, and if you change properties inside it, the changes will be detected and the bound views updated.

The same example used to look like this:

set({
  list: [],
  text: '',
  nextItem: 1,
  addItem() {
    const {list, nextItem, text} = get()
    list.push({id: nextItem, text})
    set({
      list,
      nextItem: nextItem + 1,
      text: ''
    })
  }
})

You need to know that set() is automatically going to set properties inside the component’s state (so that b8r will know they’ve been changed), and get() will retrieve the object. You need to know that if you pull an object out of the registry and mess with it, b8r will not know unless you tell it. So, at some point you’ll need to know that get() and set() also take paths, and how paths work.

How did this happen?

So, yesterday, a friend and former colleague who uses bindinator (b8r) reached out to me and said, in essence, it would be a lot easier to convert existing code to b8r if you could put b8r expressions on the left-hand-side of assignment statements.

My reaction was, in essence, “sure, that’s the holy grail, but I think it’s impossible”.

He replied that he thought that I might be able to use ES6 Proxy—which I immediately looked up…

So, as of today, the main branch of b8r now supports a new, cleaner, more intuitive syntax (the old syntax is still supported, don’t worry!), it’s documented, and there’s decent test coverage.


The basic idea of b8r is that it binds data to “paths” which are analogous to file paths on a logical storage device, except that the paths look like standard javascript (mostly).

E.g. you might bind the value in an input field to app.user.name. This looks like <input data-bind="value=app.user.name">.

This isn’t binding to a variable or a value, but to the path.

Similarly, you can bind an object to a name, e.g.

const obj = {
  user: {
    name: "tonio",
    website: "https://loewald.com"
  }
}

b8r.register('app', obj) // binds obj to the name "app"

If you did both of these things then the input field would now contain “tonio”. But if the user edits the input field, then the value in the bound object changes.

Now, what happens if I write:

obj.user.name = "fredo"?

Well, the value in the object has changed, but the input doesn’t get updated because it was done “behind b8r’s back”. To notify b8r of the update so it can keep bound user interface elements consistent with the application state, you need to do something like:

// simple
b8r.set('app.user.name', 'fredo')

// manual
obj.user.name = 'fredo'
b8r.touch('app.user.name') // tell b8r the value at this path changed

This works very well, and is both pretty elegant and doesn’t involve any “spooky magic at a distance”. E.g. because you know you can change things behind b8r’s back, you can use this to make updates more efficient (e.g. you might be streaming data into a large array, and not want to force large updates frequently and at random intervals, so you could update the array, but only tell b8r when you want to redraw the view).
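A self-contained sketch of that batching idea (here, render is just a stand-in counter for b8r redrawing its bound views; it is not b8r API):

```javascript
// Deliberately update "behind the framework's back": stream many
// items into bound state, then notify once when a redraw is wanted.
const state = { items: [] }
let renders = 0
const render = () => { renders += 1 } // stand-in for a b8r redraw

// stream 1000 items in without triggering a redraw per item
for (let i = 0; i < 1000; i++) {
  state.items.push(i) // behind b8r's back, on purpose
}
render() // one explicit notification, like b8r.touch('app.items')
```

One redraw instead of a thousand; the trade-off is that you are now responsible for remembering to notify.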

But what you can’t do is something like this:

b8r.get('app.user.name') = 'fredo'

A javascript function cannot return a variable. It can return an object with a computed property, so it would be possible to enable syntax like:

b8r.get('app.user.name').value = 'fredo'

Which is (a) clumsy, and (b) makes the common case less elegant. Or maybe we could enable:

b8r.set('app.user.name').value = 'fredo'

Which returns the object if no new value is passed (or whether or not a new value is passed) but this doesn’t work for:

b8r.set('app.user').name.value = 'fredo'

…while:

b8r.set('app.user').name = 'fredo'

…performs the change, but behind b8r’s back.

ES6 Proxy to the Rescue

So, in a nutshell, ES6 Proxy gives the Javascript programmer direct access to the mechanism that allows object prototypes (i.e. “class instances”) to work, by creating a “proxy” for another object that intercepts property lookups. This is similar to—but far less hacky than—the mechanism in PHP that lets you write a special function for a class that gets called whenever an unrecognized property of an instance is referenced.

In short, you use Proxy to create a proxy for an object (e.g. foo) that can intercept any property reference, e.g. proxy.bar, and decide what to do, knowing that the user is trying to get or set a property named ‘bar’ on the original object.
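If Proxy is new to you, here’s a minimal, self-contained sketch of intercepting property reads and writes. This only illustrates the mechanism; it’s not b8r’s code:

```javascript
const target = { name: 'tonio' }
const log = []

const proxy = new Proxy(target, {
  get (obj, prop) {
    log.push(`get ${String(prop)}`) // we know a read was attempted…
    return obj[prop]                // …and can decide what to return
  },
  set (obj, prop, value) {
    log.push(`set ${String(prop)}`) // we know a write was attempted
    obj[prop] = value               // forward it to the real object
    return true                     // signal success (strict mode throws otherwise)
  }
})

proxy.name           // intercepted read
proxy.name = 'fredo' // intercepted write; target.name is now 'fredo'
```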

So, now, b8r.reg is a proxy for the b8r registry, which is the object containing your application’s state.

Our original example can now look like this:

b8r.reg.app = {
  user: {
    name: "tonio",
    website: "https://loewald.com"
  }
}

And we can change a property via:

b8r.reg.app.user.name = 'fredo'

And common convenience constructs like this work:

const {user} = b8r.reg.app
user.name = 'fredo'
user.website = 'https://deadmobster.org'

And b8r is notified when the changes are made. (You can still make changes behind b8r’s back the old way, if you want to avoid updates!)

Not bad for a Saturday afternoon!


Wouldn’t it be nice if arrays worked exactly as you expected? So instead of this pattern in b8r code:

const list = b8r.get('path.to.list')
// do stuff to list like add items, remove them, filter them
b8r.touch('path.to.list') // hey, b8r, we messed with the list!

We could do stuff like this?

b8r.reg.path.to.list.splice(5, 10) // chop some items out

I.e. wouldn’t it be nice if you could just perform standard operations you can do with any array directly from the b8r registry, and b8r knew it happened? Why yes, yes it would.

The new reg proxy wraps the array methods that tend to mutate the array (e.g. pop, splice, and forEach) in a simple wrapper that touches the path of the array after the call.
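A simplified sketch of what such a wrapper might look like (the method list and the touch function here are illustrative, not b8r’s actual implementation):

```javascript
const touched = []
const touch = (path) => touched.push(path) // stand-in for b8r.touch()

// methods that mutate (or whose callbacks commonly mutate) the array
const MUTATORS = ['push', 'pop', 'shift', 'unshift', 'splice', 'sort', 'reverse']

const arrayProxy = (arr, path) => new Proxy(arr, {
  get (target, prop) {
    if (MUTATORS.includes(prop)) {
      return (...args) => {
        const result = target[prop](...args)
        touch(path) // hey, b8r, we messed with the list!
        return result
      }
    }
    return target[prop]
  }
})

const list = arrayProxy([1, 2, 3], 'path.to.list')
list.push(4)      // mutates the underlying array, then touches the path
list.splice(0, 2) // ditto; touched now has two entries
```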

And what about the coolest feature of b8r data-paths, id-paths in array references?

b8r.set('path.to.list[id=123].name', 'foobar')

…becomes:

b8r.reg.path.to.list['id=123'].name = 'foobar'

Spooky Action at a Distance?

This all might smell a bit like “spooky magic at a distance”. How is the user expected to maintain a mental model of what’s going on behind the scenes for when things go wrong?

Well, b8r.reg returns a proxy of the registry, which exposes each of its object properties as proxies, and so on. Each proxy knows its own path within the registry. Each object registered is just a property of the registry.

If you set the property of a proxied object, it calls b8r.set() on the path.

That’s it. That’s all the magic.
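In fact, the whole trick can be sketched in a few lines. This is a simplification, not b8r’s actual code; proxyFor and notify are made-up names:

```javascript
const registry = {}
const changes = [] // stand-in for b8r reacting to b8r.set()
const notify = (path) => changes.push(path)

// each proxy remembers its own path; nested objects come back
// wrapped in proxies, so paths compose as you drill down
const proxyFor = (obj, path) => new Proxy(obj, {
  get (target, prop) {
    const value = target[prop]
    return value && typeof value === 'object'
      ? proxyFor(value, `${path}.${String(prop)}`)
      : value
  },
  set (target, prop, value) {
    target[prop] = value
    notify(`${path}.${String(prop)}`) // the b8r.set() equivalent
    return true
  }
})

const reg = proxyFor(registry, 'reg')
registry.app = { user: { name: 'tonio' } } // "register" an object
reg.app.user.name = 'fredo'
// registry.app.user.name is now 'fredo', and
// changes contains 'reg.app.user.name'
```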

Now, here are two articles discussing how ngZone and the onPush change-detection strategy work, and how React hooks work.

If commas were washers…

create-react-app is an abstraction layer over creating a “frontend build pipeline”. Is it leaky? Yes.

Today, if you’re working at a big tech company, the chances are that working on a web application is something like this:

  1. You’re building an app that comprises a list view and a detail view with some editing, and for bonus points the list view has to support reordering, filtering, and sorting. This is considered to be quite an undertaking.
  2. There is currently an app that does 90% or more of what your app is intended to do, but it’s not quite right, and because it was written in Angular/RxJs or React/Redux it is now considered to be unmaintainable or broken beyond repair.
  3. The services for the existing app deliver 100% of the back-end data and functionality necessary for your new app to work.
  4. (There’s also probably a mobile version of the app that likely consumes its own service layer, and somehow has been maintained for five years despite lacking the benefit of a productivity-enhancing framework.)
  5. Because the service layer assumes a specific client-side state-management model (i.e. the reason the existing app is unmaintainable), and because your security model is based on uniquely tying unique services to unique front-ends and marshaling other services (that each have their own security model) on the front-end server, there is absolutely no way your new app can consume the old app’s services. Also, they’re completely undocumented. (Or, “the code is the documentation”.)
  6. Your new app will comprise a front-end-server that dynamically builds a web-page (server-side) on demand and then sends it to the end-user. It will then consume services provided by the front-end-server OR a parallel server.
  7. When you start work each day you merge with master and then fire off a very complicated command (probably by using up-arrow or history because it’s impossible to remember) to build your server from soup-to-nuts so that, god-willing and assuming nothing in master has broken your build, in five to fifteen minutes you can reload the localhost page in your browser and see your app working.
  8. You then make some change to your code and, if you’re lucky you can see it running 15s later. If you’ve made some trivial error the linters didn’t catch then you get 500 lines of crap telling you that everything is broken. If you made non-trivial dependency changes, you need to rebuild your server.

Imagine if we designed cars this way.

Leaky Abstractions

Wikipedia, which is usually good on CS topics, is utterly wrong on the topic of “leaky abstractions”. It wasn’t popularized in 2002 by Joel Spolsky (first, it’s never been “popularized”, and second, it has been in use among CS types since at least the 80s, so I’m going to hazard that it comes from Dijkstra). And the problem it highlights isn’t…

the reliance of the software developer on an abstraction’s infallibility

In Joel’s article, his example is TCP, which he says is supposed to reliably deliver packets of information from one place to another. But sometimes it fails, and that’s the leak in the abstraction. And software that assumes it will always work is a problem.

I don’t particularly disagree with any of this; rather, I have a problem with the thing that’s defined as the “leaky abstraction”. “TCP is a reliable messaging system” is the leaky abstraction, not the programmer assuming it’s true. But it’s also a terrible abstraction that no-one actually uses.

I’d suggest that a leaky abstraction is actually one that requires users to poke through it into the underlying implementation in order to use it effectively. E.g. if you were to write a TCP library that didn’t expose failures, you’d need to bypass it and get at the actual errors to handle TCP in actual use.

In other words, an abstraction is “leaky” when you must violate it to use it effectively.

An example of this is “the user interface is a pure function of its data model” which turns out to be impractical, necessitating the addition of local “state” which in turn is recognized as a violation of the original abstraction and replaced with “pure components” which in turn are just as impractical as before and thus violated by “use hooks to hide state anywhere”.

Assuming abstractions are perfect can be a problem, but usually it’s actually the opposite of the real problem. It’s like the old chestnut “when you assume you make an ass out of u and me”. It’s utterly wrong — if you don’t assume you’ll never get anything done, although sometimes you’ll assume incorrectly.

The common and pernicious problem is software relying on or assuming flaws in the abstraction layer. A simple (and common) example of this is software developers discovering a private, undocumented, or deprecated call in an API and using it because it’s easy, or because they assume that, say, since it’s been around for five years it will always be there, or that because they work in the same company as the people who created the API they can rely on inside knowledge or contacts to mitigate any issues that arise.

The definition in the Wikipedia article (I’ve not read Joel Spolsky’s article, but I expect he actually knows what he’s talking about; he usually does) is something like “assuming arithmetic works”, “assuming the compiler works”, “assuming the API call works as documented”. These are, literally, what you do as a software engineer. Sometimes you might double-check something worked as expected, usually because it’s known to fail in certain cases.

  • If an API call is not documented as failing in certain cases but it does, that’s a bug.
  • Checking for unexpected failures and recovering from them is a workaround for a bug, not a leaky abstraction.
  • Trying to identify causes of the bug and preventing the call from ever firing if the inputs seem likely to cause the failure is a leaky abstraction. In this case this code will fail even if the underlying bug is fixed. This is code that relies on the flaw it is trying to avoid to make sense.

According to Wikipedia, Joel Spolsky argues that all non-trivial abstractions are leaky. If you consider the simplest kinds of abstraction, e.g. addition, then even this is leaky from the point of view of thinking of addition in your code matching addition in the “real world”. The “real world” doesn’t have limited precision, overflows, or underflows. And indeed, addition is defined for things like imaginary numbers and allows you to add integers to floats.

The idea that all non-trivial abstractions are leaky is something I’d file under “true but not useful”, it’s like “all non-trivial programs contain at least one bug”. What are we supposed to do? Avoid abstractions? Never write non-trivial programs? I suggest that abstractions are only useful if they’re non-trivial, and since they will probably be leaky, the goal is to make them as leak-free as possible, and identify and document the leaks you know to exist, and mitigate any you stumble across.

I don’t know if some warped version of Spolsky’s idea infected the software world and led it to a situation in which, to think in terms of designing a car in the real world, to change the color of the headrest and see if it looks nice, we need to rebuild the car’s engine from ore and raw rubber using the latest design iteration of the assembly plant and the latest formulation of rubber, but it seems like it has.

In the case of a car, in the real world you build the thing largely out of interchangeable modules and there are expectations of how they fit together. E.g. the engine may be assumed to fit in a certain space, have certain standard hookups, and have standardized anchor points that will always be in the same spot. So, you can change the headrest confident that any new, improved engine that adheres to these rules won’t make your headrest color choices suddenly wrong.

If, on the other hand, the current engine you were working with happens to have some extra anchor point and your team has chosen to rely on that anchor point, now you have a leaky abstraction! (It still doesn’t affect the headrest.)

Abstractions have a cost

In software, abstractions “wrap” functionality in an “interface”. E.g. you can write an add() function that adds two numbers, i.e. wraps “+”. This abstracts “+”. You could check to see if the inputs can be added before trying to add them and um do something if you thought they couldn’t. Or you could check the result was likely to be correct by, say, subtracting the two values from zero and changing the sign to see if it generated the same result.
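Taken literally, the paranoid add() described above might look like this (a deliberately silly sketch of the text’s hypothetical, not a recommendation):

```javascript
// wraps "+" and double-checks the result, as described above
const add = (a, b) => {
  if (typeof a !== 'number' || typeof b !== 'number') {
    throw new TypeError('add() expects numbers') // the "um, do something"
  }
  const result = a + b
  // paranoid check: subtract both values from zero and flip the
  // sign; it should reproduce the sum
  if (-(0 - a - b) !== result) {
    throw new Error('"+" appears to be broken')
  }
  return result
}

add(2, 2) // 4, at several times the cost of 2 + 2
```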

For the vast majority of cases, doing anything like this and, say, requiring everyone in your company to use add() instead of “+” would be incredibly stupid. The cost is significant:

  • your add may have new bugs, and given how much less testing it has gotten than “+” the odds are pretty high
  • your add consumes more memory and executes more code and will slow down every piece of code it runs in
  • code using your add may not benefit from improvements to the underlying “+”
  • time will be spent in code reviews telling new engineers to use add() instead of “+” and having back and forth chats and so on.
  • some of those new engineers will start compiling a list of reasons to leave your company based on the above.
  • all your programmers know how to use “+” and are comfortable with it
  • knowing how “+” works is a valuable and useful thing anywhere you work in future; knowing how add() works is a hilarious war story you’ll tell future colleagues over drinks

I’ve used create-react-app and now I’m ready to write “hello, world”. I now have 1018 dependencies, but in exchange my code runs slower and is more complex and has to be written in a different language…

So, an abstraction needs to justify itself in terms of what it simplifies or it shouldn’t exist. Ways in which an abstraction becomes a net benefit include:

  • employing non-obvious best practices in executing a desired result (some programmers may not know about a common failure mode or inefficiency in “+”: e.g. add(a, a) might return a bit-shifted left (i.e. doubled) rather than perform the addition, which might turn out to be a net performance win). Even if this were true, the compiler may improve its implementation of “+” to use this trick and your code will be left behind. (A real world example of this is using transpilers to convert inefficiently implemented novel iterators, such as javascript’s for(a of b){}, into what is currently faster code using older iterators. Chances are, the people who work on the javascript runtime will fix this performance issue and the “faster” transpiled code will end up bigger and slower.)
  • insulating your code from capricious behavior in dependencies (maybe the people who write your compiler are known to badly break “+” from time to time, so you want to implement it by subtracting from zero and flipping the sign for safety). Usually this is a ridiculous argument. E.g. one of the funnier defenses I’ve heard of React was “well, what if browsers went away?” Sure, but what are the odds React outlives browsers?
  • insulating user code from breaking changes. E.g. if you’re planning on implementing betterAdd() but it can break in places where add() is used, you can get betterAdd() working, then deprecate add() and eventually switch to betterAdd() or, when all the incompatibilities are ironed out, replace add() with betterAdd().
  • saving the user a lot of common busywork that you want to keep DRY (e.g. to avoid common pitfalls, reduce errors, improve readability, provide a single point to make fixes or improvements, etc.). Almost any function anyone writes is an example of this.
  • eliminating the need for special domain knowledge so that non-specialists can do something that would otherwise require deep knowledge, e.g. the way threejs allows people with no graphics background to display 3d scenes, or the way statistics libraries allow programmers who never learned stats to calculate a standard deviation (and save those who did the trouble of looking up the formula and implementing it from scratch).

If your abstraction’s benefits don’t outweigh its costs, then it shouldn’t exist. Furthermore, if your abstraction’s benefit later disappears (e.g. the underlying system improves its behavior to the point where it’s just as easy to use it as your abstraction) then it should be easy to trivialize, deprecate, and dispose of the abstraction.

jQuery is an example of a library which provided huge advantages which slowly diminished as the ideas in jQuery were absorbed by the DOM API, and jQuery became, over time, a thinner and thinner wrapper over built-in functionality.

E.g. lots of front-end libraries provide abstraction layers over XMLHttpRequest. It’s gnarly and there are lots of edge cases. Even if you don’t account for all the edge cases, just building an abstraction around it affords the opportunity to fix it all later in one place (another possible benefit).

Since 2017 we have had fetch() in all modern browsers. Fetch is simple. It’s based on promises. New code should use fetch (or maybe a library wrapped around fetch). Old abstractions over XMLHttpRequest should either be deprecated and/or rewritten to take advantage of fetch where possible.

Abstractions, Used Correctly, Are Good

The problem with abstractions in front-end development isn’t that they’re bad, it’s that they’re in the wrong places, as demonstrated by the fact that making a pretty minor front-end change requires rebuilding the web server.

A simple example is React vs. web-components. React allows you to make reusable chunks of user interface. So do web-components. React lets you insert these components into the DOM. So do web-components. React has a lot of overhead and often requires you to write code in a higher-level form that is compiled into code that actually runs on browsers. React changes all the time, breaking existing code. React components do not play nicely with other things (e.g. you can’t write code that treats a React Component like an <input> tag). If you insert web-components inside a React component it may lose its shit.

Why are we still using React?

  • Does it, under the hood, use non-obvious best practices? At best arguable.
  • Does it insulate you from capricious APIs? No, it’s much more capricious than the APIs it relies on.
  • Does it insulate you from breaking changes? Somewhat. But if you didn’t use it you wouldn’t care.
  • Does it save you busywork? No, in fact building web-components and using web-components is just as easy, if not easier, than using React. And learning how to do it is universal and portable.
  • Does it eliminate the need for domain knowledge? Only to the extent that React coders learn React and not the underlying browser behavior OR need to relearn how to do things they already know in React.

Angular, of course, is far, far worse.

How could it be better?

If you’re working at a small shop you probably don’t do all this. E.g. I built a digital library system from “soup-to-nuts” on some random Ubuntu server running some random version of PHP and some random version of MySQL. I was pretty confident that if PHP or MySQL did anything likely to break my code (violate the abstraction) this would, at minimum, be a new version of the software and, most likely, a 1.0 or 0.1 increment in version. Even more, I was pretty confident that I would survive most major revisions unscathed, and if not, I could roll back and fix the issues.

It never burned me.

My debug cycle (PHP, MySQL, HTML, XML, XSL, Javascript) was ~1s on localhost, maybe 10s on the (shared, crappy) server. This system was open-sourced and maintained, and ran long after I left the university. It’s in v3 now and each piece I wrote has been replaced one way or another.

Why on Earth don’t we build our software out of reliable modular components that offer reliable abstraction layers? Is it because Joel Spolsky allegedly argued, in effect, that there’s no such thing as a reliable abstraction layer? Or is it because people took an argument that you should define your abstraction layers cautiously (aware of the benefit/cost issue), as leak-free as possible, and put them in the right places, and instead decided that was too hard: no abstraction layer, no leaks? Or is it the quantum improvement in dependency-management systems that allows (for example) a typical npm/yarn project to have thousands of transitive dependencies that update almost constantly without breaking all that often?

The big problem in software development is scale. Scaling capacity. Scaling users. Scaling use cases. Scaling engineers. Sure, my little digital library system worked aces, but that was one engineer, one server, maybe a hundred daily/active users. How does it scale to 10k engineers, 100k servers, 1B customers?

Scaling servers is a solved problem. We can use an off-the-shelf solution. Actually the last two things are trivial.

10k engineers is harder. I would argue first off that having 10k engineers is an artifact of not having good abstraction layers. Improve your basic architecture by enforcing sensible abstractions and you only need 1k engineers, or fewer. At the R&D unit of a big tech company I worked at, every front-end server and every front-end was a unique snowflake with its own service layer. There was perhaps a 90% overlap in functionality between any given app and at least two other apps. Nothing survived more than 18 months. Software was often deprecated before it was finished.

Imagine if you have a global service directory and you only add new services when you need new functionality. All of a sudden you only need to build, debug, and maintain a given service once. Also, you have a unified security model. Seems only sensible, right?

Imagine if your front-end is purely static client-side code that can run entirely from cache (or even be mobilized as an offline web-app via manifest). Now it can be served on a commodity server and cached on edge-servers.

Conceptually, you’ve just gone from (for n apps) n different front-end servers you need to care for and feed to 0 front-end servers (it’s off-the-shelf and basically just works) and a single service server.

Holy Grail? We’ve already got one. It’s very nice.

(To paraphrase Monty Python.)

If you’ve already got a mobile client, it’s probably living on (conceptually, if not actually) a single service layer. New versions of the client use the same service layer (modulo specific changes to the contract). Multiple versions of the client coexist and can use the same service layer!

In fact, conceptually, we develop a mobile application out of HTML, CSS, and Javascript and we’re done. Now, there are still abstraction layers worth having (e.g. you can wrap web-component best-practices in a focused library that evolves as the web-components API evolves, and you definitely want abstraction layers around contenteditable or HTML5 drag-and-drop), but not unnecessary and leaky abstraction layers that require you to rebuild the engine when you change the color of the headrest.

The Dark Side Beckons

With the advent of COVID, my wife and I decided to return to a world we had sworn off — MMORPGs. Those of you who know us know that we literally met while playing EverQuest, and we had, at various times in our lives, been pulled deep into that world, e.g. being officers (and in my wife’s case, leader) of raiding guilds — a more-than-fulltime job.

Our last flirtation with MMOs had been on the PS4 with Elder Scrolls Online — think Skyrim but much bigger and full of other players running around, riding impossible-looking mounts, asking for help with world bosses, and randomly casting spells to show off.

Using one PS4 per player is a pain, worse if your kids are involved. So we decided we’d try something that (a) ran on a Mac (we have enough Macs for everyone), (b) would be free to play, and (c) wasn’t WoW.

Now, I omitted our more recent flirtation with GuildWars 2, which meets all these criteria but just feels completely flavorless. (In GW2 you can sign up for and complete a quest without actually ever meeting the person you’re supposed to be doing it for — they’ve streamlined the story out of the game).

Anyway, the game we had been pining for these past years was ESO. We’d stopped playing because of a lack of time and the PS4 inconvenience, and it runs on Macs.

Our big problem came when we tried to install it on my daughter’s 2012 Mac Pro. Turns out the installed version of macOS is so out-of-date (3y or so?) that it requires multiple sequential OS updates to run it, complicated by needing firmware flashes that come with the OS updates, and there’s no one-step “clean install”. Anyway, when I tried to do these it just stalled and wouldn’t go past the second version. Sheesh. (And, by the way, the Mac Pro works just fine for everything else.)

Also, my wife’s Macbook really struggled with it.

Enter the Acer Predator Helios 300

So, I searched for a well-reviewed gaming laptop and got two Acer Helios 300s, not maxed out but close. Each for roughly the price of a Macbook Air, or what I would normally pay for a headless gaming desktop back in the day (i.e. ~$1200). These suckers run ESO with near maxed out-graphics settings at ~90fps (dropping to 40-50fps in massive fights, and 30fps or slower when on battery power).

Are these well-made laptops? No, they are not. The way the power brick connects literally makes me cringe. The USB and headphone ports are in super annoying locations because they need the obvious spots for fan vents.

Do they run hot? OMG yes, yes they do. Our cats like to bask in their exhaust.

Are they quiet? No, no they are not.

Do they have good speakers? In a word, hell no.

Do the trackpads work well? No. They. Do. Not.

Does Windows 10 make me livid? Yes, yes it does.

My daughters, who openly envy us our hotrod gaming laptops (and use them as much and as often as they can), have commented on how badly put together they are in every respect except for the important criterion of kicking ass.

To be continued…

WordPress WTF?!

Recently, wordpress has been so badly behaved as to boggle my mind. If I’m lucky, I only get something like this in the console when I create a new post…

Embarrassing shit in the console when I load wordpress…

If I’m unlucky, I get a screenful of errors and nothing appears. It was so bad a couple of minor versions ago, I started trying to figure out how to rebuild my site without wordpress (getting the post text is easy enough, but wordpress uses an uploads directory which makes things non-trivial).

Anyway, I’m going to whine about this to my three readers in the hope that maybe something gets done about this shit. (It’s things like this that give jQuery a bad name, and I strongly doubt it’s jQuery’s fault.)

No Man’s Sky Revisited

No Man’s Sky was originally released in 2016. I’d been waiting for it for nearly two years after seeing some early demos. This looked like a game I’d been day-dreaming about for decades.

It was one of the most disappointing games I’ve ever played.

I recently saw No Man’s Sky Beyond on sale in Best Buy (while shopping for microphones for our upcoming podcast) and immediately picked it up. Speaking of disappointing game experiences, the PS4 VR has been a gigantic disappointment ever since I finished playing Skyrim (which was awesome). Why there haven’t been more VR updates of great games from previous generations (e.g. GTA IV) escapes me, because second-rate half-assed new VR games do not impress me.

Anyway, I did not realize that (a) No Man’s Sky Beyond is merely the current patched version of No Man’s Sky, and (b) that the VR mode is absolutely terrible. But the current, fully patched version of No Man’s Sky is a huge improvement over the game I was horribly disappointed by back in 2016. It’s still not actually great, but it’s decent, and I can see myself coming back to it now and then when I want a fairly laid back SF fix.

High points:

  • There’s an arc quest that introduces core gameplay elements in a reasonably approachable way (although the start of the game is still kind of brutal)
  • There are dynamically generated missions
  • The space stations now actually kind of make sense
  • Base construction is pretty nice
  • There’s a kind of dumb “learn the alien languages” subgame
  • Planets have more interesting stuff on them

Low points:

  • Space is monotonous (star systems comprise a bunch of planets, usually at least one with rings, in a cloud of asteroids, all next to each other). Space stations seem to look like D&D dice with a hole in one side (minor spoiler: there’s also the “Anomaly” which is a ball with a door).
  • Planets are monotonous — in essence you get a color scheme, hazard type (radiation, cold, heat — or no hazard occasionally), one or two vegetation themes, one or two mobility themes for wildlife, and that’s about it. (If there are oceans, you get extra themes underwater.) By the time you’ve visited five planets, you’re seldom seeing anything new.
  • Ecosystems are really monotonous (why does the same puffer plant seem to be able to survive literally anywhere?)
  • The aliens are just not very interesting (great-looking though)
  • On the PS4 the planet atmospheres look like shit
  • The spaceship designs are pretty horrible aesthetically — phone booth bolted to an erector set ugly.
  • Very, very bad science (one of my daughters was pissed off that “Salt” which was labeled as NaCl could not be refined into Sodium which — mysteriously — powers thermal and radiation protection gear). Minerals and elements are just used as random herbal ingredients for a potion crafting system that feels like it was pulled out of someone’s ass.
  • Way, way too much busywork, e.g. it’s convenient that “Silica Powder” can fuel your Terrain modifier tool (which generates Silica Powder as a byproduct of use) but why put in the mechanic at all? Why do I need to assemble three things over and over again to fuel up my hyperdrive? Why do I keep on needing to pause construction to burn down trees to stock up on carbon?

The audacity of building a game with a huge, fractally detailed universe is not what it once was. It’s an approach many developers took out of necessity in an era when memory and storage were simply too limited to store handmade content, and budgets were too small to create it — Elite, Akalabeth, Arena, Pax Imperia, and so on — but it’s disappointing to see a game built this way with far more capable technology, more resources, and greater ambition keep failing to deliver in so many (to my mind) easily addressable ways. When No Man’s Sky was first demoed, my reaction was “wow, that’s gorgeous and impressive, I wonder where the gameplay is”. When it actually came out, two years later, my reaction was “hmm, not as gorgeous as the demo, and there’s basically no gameplay”. As of No Man’s Sky Beyond — well, the gameplay is now significantly better than the original Elite (or, in my opinion, Elite Dangerous) — which is not nothing.

As a final aside, one day I might write a companion article about Elite Dangerous, a game in many ways parallel to No Man’s Sky. The main reason I haven’t done so already is that I found Elite Dangerous so repellant that I quit before forming a complete impression. Ironically, I think Elite Dangerous is in some ways a better No Man’s Sky and No Man’s Sky is a better Elite.