If commas were washers…

create-react-app is an abstraction layer over creating a “frontend build pipeline”. Is it leaky? Yes.

Today, if you’re working at a big tech company, the chances are that working on a web application is something like this:

  1. You’re building an app that comprises a list view and a detail view with some editing, and for bonus points the list view has to support reordering, filtering, and sorting. This is considered to be quite an undertaking. (This is called “dry humor”.)
  2. There is currently an app that does 90% of what your app is intended to do, but it’s not quite right, and because it was written in Angular/RxJs or React/Redux it is now considered to be unmaintainable or broken beyond repair.
  3. The services for the existing app deliver 100% of the back-end data and functionality necessary for your new app to work.
  4. (There’s also probably a mobile version of the app that consumes its own service layer, and somehow has been maintained for five years despite lacking the benefit of a productivity-enhancing framework.)
  5. Because the service layer assumes a specific client-side state-management model (i.e. the reason the existing app is unmaintainable), and because your security model is based on tying unique services to unique front-ends and marshaling other services (each with its own security model) on the front-end server, there is absolutely no way your new app can consume the old app’s services. Also, they’re completely undocumented. (Or, “the code is the documentation”.)
  6. Your new app will comprise a front-end-server that dynamically builds a web-page (server-side) on demand and then sends it to the end-user. It will then consume services provided by the front-end-server OR a parallel server.
  7. When you start work each day you merge with master and then fire off a very complicated command (probably by using up-arrow or history because it’s impossible to remember) to build your server from soup-to-nuts so that, god-willing and assuming nothing in master has broken your build, in five to fifteen minutes you can reload the localhost page in your browser and see your app working.
  8. You then make some change to your code and, if you’re lucky, you can see it running 15s later. If you’ve made some trivial error the linters didn’t catch, you get 500 lines of crap telling you that everything is broken. If you made non-trivial dependency changes, you need to rebuild your server.

Imagine if we designed cars this way.

Leaky Abstractions

Wikipedia, which is usually good on CS topics, is utterly wrong on the topic of “leaky abstractions”. It wasn’t popularized in 2002 by Joel Spolsky (first, it’s never been popularized, and second, it has been in common use among CS types since at least the 80s, so I’m going to hazard a guess it comes from Dijkstra). And the problem it highlights isn’t

the reliance of the software developer on an abstraction’s infallibility

In Joel’s article, his example is TCP, which he says is supposed to reliably deliver packets of information from one place to another. But sometimes it fails, and that’s the leak in the abstraction. And software that assumes it will always work is a problem.

I don’t disagree with any of this particularly, I just don’t think that’s a leaky abstraction. The solution in this case is to refine the abstraction to something like “TCP will deliver data or time out or return an error.” Which is actually what TCP does.

A leaky abstraction is more like “the user interface is a pure function of its data model” and “you can use hooks to hide state anywhere in the user interface”. (A similar example is “java programs have no global state” and “here are ten different ways to effectively hide global state in a java program”.) The first abstraction now has leaks everywhere, and any code that relied on it being true is potentially in trouble. E.g. all the tests I’ve written to verify that, given such-and-such input data, a given component will render and behave correctly could be worse than useless (e.g. give me a false sense of security, cause me to look in the wrong places for defects).
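To make that leak concrete, here’s a minimal sketch, with no framework involved: a closure stands in for a hook, and a “view” that claims to be a pure function of its model turns out not to be.

```javascript
// A “pure view” abstraction and its leak, sketched without any framework.
// makeCounterView() returns a render function that claims to be a pure
// function of its model…
function makeCounterView() {
  let hidden = 0; // …but a closure hides state, much as a hook does
  return function render(model) {
    hidden += 1; // state nobody passed in
    return `${model.label} (rendered ${hidden} times)`;
  };
}

const render = makeCounterView();
// Same input, different output: the “pure function of its data model”
// abstraction has sprung a leak, and tests relying on it can mislead you.
console.log(render({ label: "score" })); // "score (rendered 1 times)"
console.log(render({ label: "score" })); // "score (rendered 2 times)"
```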

Assuming abstractions are perfect can be a problem, but usually it’s actually the opposite. (Assuming abstractions are effectively perfect is so often correct that the exceptions tend to floor you. What? Addition is broken for this case?!) The more common and pernicious problem is software relying on flaws in the abstraction layer. A simple (and common) example of this is software developers discovering a private, undocumented, or deprecated call in an API and using it because it’s easy or they assume that, say, because it’s been around for five years it will always be there, or that because they work in the same company as the people who created the API they can rely on inside knowledge or contacts to mitigate any issues that arise.

The definition in the Wikipedia article (I’ve not read Joel Spolsky’s article, but I expect he actually knows what he’s talking about; he usually does) is something like “assuming arithmetic works”, “assuming the compiler works”, “assuming the API call works as documented”. These are, literally, what you do as a software engineer. Sometimes you might double-check something worked as expected, usually because it’s known to fail in certain cases.

  • If an API call is not documented as failing in certain cases but it does, that’s a bug.
  • Checking for unexpected failures and recovering from them is a workaround for a bug, not a leaky abstraction.
  • Trying to identify causes of the bug and preventing the call from ever firing if the inputs seem likely to cause the failure is a leaky abstraction. In this case this code will fail even if the underlying bug is fixed. This is code that relies on the flaw it is trying to avoid to make sense.
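The distinction between the last two bullets can be sketched with a hypothetical buggy API (buggySum and both callers are invented for illustration):

```javascript
// A hypothetical API with a bug: it throws on an empty array even though
// nothing in its (imagined) documentation says it should.
function buggySum(xs) {
  if (xs.length === 0) throw new Error("oops"); // the undocumented failure
  return xs.reduce((a, b) => a + b, 0);
}

// Workaround for a bug: recover from the unexpected failure after the fact.
function sumSafely(xs) {
  try { return buggySum(xs); } catch (e) { return 0; }
}

// Leaky abstraction: the caller encodes the flaw itself. If buggySum() is
// ever fixed, this guard still routes around the fixed path; the code only
// makes sense in terms of the bug it is trying to avoid.
function sumLeakily(xs) {
  if (xs.length === 0) return 0; // relies on knowledge of the bug
  return buggySum(xs);
}
```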

According to Wikipedia, Joel Spolsky argues that all non-trivial abstractions are leaky. If you consider the simplest kinds of abstraction, e.g. addition, then even this is leaky from the point of view of thinking of addition in your code matching addition in the “real world”. The “real world” doesn’t have limited precision, overflows, or underflows. And indeed, addition is defined for things like imaginary numbers and allows you to add integers to floats.

The idea that all non-trivial abstractions are leaky is something I’d file under “true but not useful”; it’s like “all non-trivial programs contain at least one bug”. What are we supposed to do? Avoid abstractions? Never write non-trivial programs? I suggest that abstractions are only useful if they’re non-trivial, and since they will probably be leaky, the goal is to make them as leak-free as possible, to identify and document the leaks you know to exist, and to mitigate any you stumble across.

I don’t know if some warped version of Spolsky’s idea infected the software world and led it to a situation in which, to think in terms of designing a car in the real world, to change the color of the headrest and see if it looks nice, we need to rebuild the car’s engine from ore and raw rubber using the latest design iteration of the assembly plant and the latest formulation of rubber, but it seems like it has.

In the case of a car, in the real world you build the thing largely out of interchangeable modules and there are expectations of how they fit together. E.g. the engine may be assumed to fit in a certain space, have certain standard hookups, and have standardized anchor points that will always be in the same spot. So, you can change the headrest confident that any new, improved engine that adheres to these rules won’t make your headrest color choices suddenly wrong.

If, on the other hand, the current engine you were working with happens to have some extra anchor point and your team has chosen to rely on that anchor point, now you have a leaky abstraction! (It still doesn’t affect the headrest.)

Abstractions have a cost

In software, abstractions “wrap” functionality in an “interface”. E.g. you can write an add() function that adds two numbers, i.e. wraps “+”. This abstracts “+”. You could check to see if the inputs can be added before trying to add them and, um, do something if you thought they couldn’t. Or you could check that the result was likely to be correct by, say, subtracting the two values from zero and changing the sign to see if it generated the same result.
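Purely for illustration, that add() might look like this (nobody should ship it):

```javascript
// add() wraps “+”, pre-checks its inputs, and “verifies” the result by
// subtracting both values from zero and flipping the sign, as described.
function add(a, b) {
  if (typeof a !== "number" || typeof b !== "number") {
    throw new TypeError("add() expects numbers");
  }
  const result = a + b;
  // compute the sum a second, sillier way and compare
  if (-(0 - a - b) !== result) throw new Error("addition is broken?!");
  return result;
}

console.log(add(2, 3)); // 5
```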

For the vast majority of cases, doing anything like this and, say, requiring everyone in your company to use add() instead of “+” would be incredibly stupid. The cost is significant:

  • your add may have new bugs, and given how much less testing it has gotten than “+” the odds are pretty high
  • your add consumes more memory and executes more code and will slow down every piece of code it runs in
  • code using your add may not benefit from improvements to the underlying “+”
  • time will be spent in code reviews telling new engineers to use add() instead of “+” and having back and forth chats and so on.
  • some of those new engineers will start compiling a list of reasons to leave your company based on the above.
  • all your programmers know how to use “+” and are comfortable with it
  • knowing how “+” works is a valuable and useful thing anywhere you work in future; knowing how add() works is a hilarious war story you’ll tell future colleagues over drinks

I’ve used create-react-app and now I’m ready to write “hello, world”. I now have 1018 dependencies, but in exchange my code runs slower and is more complex and has to be written in a different language…

So, an abstraction needs to justify itself in terms of what it simplifies or it shouldn’t exist. Ways in which an abstraction becomes a net benefit include:

  • employing non-obvious best practices in executing a desired result (some programmers may not know about a common failure mode or inefficiency in “+”. E.g. add(a, a) might return a shifted one bit to the left (i.e. doubled) rather than perform the addition, which might turn out to be a net performance win. Even if this were true, the compiler may improve its implementation of “+” to use this trick, and your code will be left behind. A real-world example of this is using transpilers to convert inefficiently implemented novel iterators, such as javascript’s for(a of b){}, into what is currently faster code using older iterators. Chances are the people who work on the javascript runtime will fix this performance issue, and the “faster” transpiled code will then be bigger and slower.)
  • insulating your code from capricious behavior in dependencies (maybe the people who write your compiler are known to badly break “+” from time to time, so you want to implement it by subtracting from zero and flipping the sign for safety). Usually this is a ridiculous argument; e.g. one of the funnier defenses I’ve heard of React was “well, what if browsers went away?” Sure, but what are the odds React outlives browsers?
  • insulating user code from breaking changes. E.g. if you’re planning on implementing betterAdd() but it can break in places where add() is used, you can get betterAdd() working, then deprecate add() and eventually switch to betterAdd() or, when all the incompatibilities are ironed out, replace add() with betterAdd().
  • saving the user a lot of common busywork that you want to keep DRY (e.g. to avoid common pitfalls, reduce errors, improve readability, provide a single point to make fixes or improvements, etc.). Almost any function anyone writes is an example of this.
  • eliminating the need for special domain knowledge, so that non-specialists can do something that would otherwise require deep knowledge, e.g. the way threejs allows people with no graphics background to display 3d scenes, or the way statistics libraries allow programmers who never learned stats to calculate a standard deviation (and save those who did the trouble of looking up the formula and implementing it from scratch).
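The add()-to-betterAdd() migration described above might be sketched like this (both functions are hypothetical):

```javascript
// betterAdd() is the improved implementation; add() survives as a thin,
// deprecated shim so existing callers keep working while they migrate.
function betterAdd(a, b) {
  return a + b; // stand-in for the improved behavior
}

function add(a, b) {
  console.warn("add() is deprecated; use betterAdd()");
  return betterAdd(a, b);
}

// Old call sites still work, new code calls betterAdd() directly, and once
// the stragglers are gone, add() can be deleted outright.
```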

If your abstraction’s benefits don’t outweigh its costs, then it shouldn’t exist. Furthermore, if your abstraction’s benefit later disappears (e.g. the underlying system improves its behavior to the point where it’s just as easy to use it as your abstraction) then it should be easy to trivialize, deprecate, and dispose of the abstraction.

jQuery is an example of a library which provided huge advantages which slowly diminished as the ideas in jQuery were absorbed by the DOM API, and jQuery became, over time, a thinner and thinner wrapper over built-in functionality.

E.g. lots of front-end libraries provide abstraction layers over XMLHttpRequest. It’s gnarly and there are lots of edge cases. Even if you don’t account for all the edge cases, just building an abstraction around it affords the opportunity to fix it all later in one place (another possible benefit).

Since 2017 we have had fetch() in all modern browsers. Fetch is simple. It’s based on promises. New code should use fetch (or maybe a library wrapped around fetch). Old abstractions over XMLHttpRequest should either be deprecated and/or rewritten to take advantage of fetch where possible.
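A minimal sketch of what such new code looks like; the URL is a placeholder, and throwing on non-2xx responses is one reasonable policy, not the only one:

```javascript
// A thin wrapper over fetch(): await the response, reject on HTTP errors,
// and parse the body as JSON.
async function getJSON(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}

// usage (hypothetical endpoint):
// getJSON("/api/items").then(items => console.log(items));
```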

Abstractions, Used Correctly, Are Good

The problem with abstractions in front-end development isn’t that they’re bad, it’s that they’re in the wrong places, as demonstrated by the fact that making a pretty minor front-end change requires rebuilding the web server.

A simple example is React vs. web-components. React allows you to make reusable chunks of user interface. So do web-components. React lets you insert these components into the DOM. So do web-components. React has a lot of overhead and often requires you to write code in a higher-level form that is compiled into code that actually runs on browsers. React changes all the time, breaking existing code. React components do not play nicely with other things (e.g. you can’t write code that treats a React component like an <input> tag, and if you insert web-components inside a React component it may lose its shit).

Why are we still using React?

  • Does it, under the hood, use non-obvious best practices? At best arguable.
  • Does it insulate you from capricious APIs? No, it’s much more capricious than the APIs it relies on.
  • Does it insulate you from breaking changes? Somewhat. But if you didn’t use it you wouldn’t care.
  • Does it save you busywork? No, in fact building web-components and using web-components is just as easy, if not easier, than using React. And learning how to do it is universal and portable.
  • Does it eliminate the need for domain knowledge? Only to the extent that React coders learn React and not the underlying browser behavior OR need to relearn how to do things they already know in React.
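For comparison, here is a reusable chunk of UI as a bare web component, no build step or framework required. (hello-badge is an invented example; the HTMLElement fallback exists only so this sketch loads outside a browser, where you would simply write extends HTMLElement.)

```javascript
// A framework-free reusable component. In a browser, drop the Base shim.
const Base = globalThis.HTMLElement ?? class {};

class HelloBadge extends Base {
  connectedCallback() {
    // called by the browser when the element is inserted into the DOM
    this.textContent = `hello, ${this.getAttribute("name") ?? "world"}`;
  }
}

if (globalThis.customElements) {
  customElements.define("hello-badge", HelloBadge);
}

// In a browser, <hello-badge name="reader"></hello-badge> then just works,
// inside or alongside any other markup.
```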

Angular, of course, is far, far worse.

How could it be better?

If you’re working at a small shop you probably don’t do all this. E.g. I built a digital library system from “soup-to-nuts” on some random Ubuntu server running some random version of PHP and some random version of MySQL. I was pretty confident that if PHP or MySQL did anything likely to break my code (violate the abstraction) this would, at minimum, be a new version of the software and, most likely, a 1.0 or 0.1 increment in version. Even more, I was pretty confident that I would survive most major revisions unscathed, and if not, I could roll back and fix the issues.

It never burned me.

My debug cycle (PHP, MySQL, HTML, XML, XSL, Javascript) was ~1s on localhost, maybe 10s on the (shared, crappy) server. This system was open-sourced and maintained, and ran long after I left the university. It’s in v3 now and each piece I wrote has been replaced one way or another.

Why on Earth don’t we build our software out of reliable modular components that offer reliable abstraction layers? Is it because Joel Spolsky allegedly argued, in effect, that there’s no such thing as a reliable abstraction layer? Or is it because people took an argument that you should define your abstraction layers cautiously (aware of the benefit/cost issue), as leak-free as possible, and put them in the right places, and instead decided that was too hard: no abstraction layer, no leaks? Or is it the quantum improvement in dependency-management systems that allows (for example) a typical npm/yarn project to have thousands of transitive dependencies that update almost constantly without breaking all that often?

The big problem in software development is scale. Scaling capacity. Scaling users. Scaling use cases. Scaling engineers. Sure, my little digital library system worked aces, but that was one engineer, one server, maybe a hundred daily active users. How does it scale to 10k engineers, 100k servers, 1B customers?

Scaling servers is a solved problem; we can use an off-the-shelf solution. Actually, the last two things (servers and customers) are trivial.

10k engineers is harder. I would argue first off that having 10k engineers is an artifact of not having good abstraction layers. Improve your basic architecture by enforcing sensible abstractions and you only need 1k engineers, or fewer. At the R&D unit of a big tech company I worked at, every front-end server and every front-end was a unique snowflake with its own service layer. There was perhaps a 90% overlap in functionality between any given app and at least two other apps. Nothing survived more than 18 months. Software was often deprecated before it was finished.

Imagine if you have a global service directory and you only add new services when you need new functionality. All of a sudden you only need to build, debug, and maintain a given service once. Also, you have a unified security model. Seems only sensible, right?

Imagine if your front-end is purely static client-side code that can run entirely from cache (or even be mobilized as an offline web-app via manifest). Now it can be served on a commodity server and cached on edge-servers.

Conceptually, you’ve just gone from (for n apps) n different front-end servers you need to care for and feed to 0 front-end servers (it’s off-the-shelf and basically just works) and a single service server.

Holy Grail? We’ve already got one. It’s very nice.

(To paraphrase Monty Python.)

If you’ve already got a mobile client, it’s probably living on (conceptually, if not actually) a single service layer. New versions of the client use the same service layer (modulo specific changes to the contract). Multiple versions of the client coexist and can use the same service layer!

In fact, conceptually, we develop a mobile application out of HTML, CSS, and Javascript and we’re done. Now, there are still abstraction layers worth having (e.g. you can wrap web-component best-practices in a focused library that evolves as the web-components API evolves, and you definitely want abstraction layers around contenteditable or HTML5 drag-and-drop), but not unnecessary and leaky abstraction layers that require you to rebuild the engine when you change the color of the headrest.

Computed Properties are Awesome…

…and should be used as little as possible

One of the things I’ve seen happen multiple times in my career is a language which didn’t have computed properties get them, and then watch as mediocre coders continue to cargo-cult old behaviors that are now counter-productive and obsolete.

The Problem

Things change. In particular, data-structures change. So one day you might decide that a Person looks like:

{
  name,
  address,
  email
}

and another day you decide it really should be:

{
  firstName,
  lastName,
  streetAddress,
  city,
  state,
  postcode,
  country,
  contacts[]
}

In fact, you’re likely to move from the first model to the second model in increments. Meanwhile you’re writing other code that assumes the current model, whatever it happens to be. There’ll always be “perfectly good” stuff that works with a given representation at a given time, and breaking stuff that’s “perfectly good” because other stuff is changing is bad.

The Solution

The solution is to use “getters” and “setters”. So you tell people never to expose underlying representations of data and instead to write methods, graven in stone, that never change. So your initial representation is:

{
  getName,
  setName,
  …,
  _name,
  _address,
  …,
} 

and now you can split _name into _firstName and _lastName, rewrite getName, figure out how to implement setName and/or find all occurrences of setName and fix them, and so on.
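Sketched in javascript (the names mirror the example above; the space-splitting setName is deliberately naive):

```javascript
// The getter/setter Person after _name has split in two; getName() keeps
// its graven-in-stone signature even though the representation changed.
function makePerson(firstName, lastName) {
  return {
    _firstName: firstName,
    _lastName: lastName,
    getName() { return `${this._firstName} ${this._lastName}`; },
    setName(name) {
      // one (naive) way to keep old callers working
      [this._firstName, this._lastName] = name.split(" ");
    },
  };
}

const p = makePerson("Ada", "Lovelace");
console.log(p.getName()); // "Ada Lovelace"
```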

Problem solved!

Problems with the Solution

Before, when you started out with only three fields, your Person object had exactly three members and no code. The solution immediately changed that to nine members (3 times as many) and six of those are functions. Now, every time you access any property, it’s a function call. If you’re using a framework that transparently wraps function calls for various reasons you’re maybe calling 3 functions to do a simple property lookup.

But aside from that, it works, and the fact that it prevents unnecessary breakage is a win, even if it means more (and less readable) code, discourages experimentation, and costs performance and memory.

A better solution

A better solution is immediately obvious to anyone who has used a language with computed properties. In this case you go back to your original object to start with, and when things change, you replace the changed properties with getters and setters so that for legacy code (and everything is legacy code eventually) nothing changes.

E.g. when name becomes firstName and lastName, you implement name as a computed property (implemented in the obvious way) and code that expects a name property to exist just keeps on working (with a slight performance cost, the same cost you would have had starting out with the previous solution).
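In javascript, that computed name looks like this (the space-splitting setter is deliberately naive):

```javascript
// name becomes a computed property over firstName and lastName; legacy code
// that reads or writes .name never notices the representation changed.
const person = {
  firstName: "Ada",
  lastName: "Lovelace",
  get name() { return `${this.firstName} ${this.lastName}`; },
  set name(value) {
    [this.firstName, this.lastName] = value.split(" ");
  },
};

console.log(person.name); // "Ada Lovelace"
person.name = "Alan Turing"; // legacy writes still work
console.log(person.firstName); // "Alan"
```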

The Antipattern

This exact thing happened when Objective-C added computed properties. All the folks who told you to write getters and setters for your Objective-C properties told you to ignore computed properties and keep doing things the old way or, perhaps worse, use computed properties to write getters and setters, so now you start with:

{
  get name() { return this._name },
  set name(x) { this._name = x },
  _name,
  /* and so on */
}

Is there a good reason for this?

There can be arguments at the margins that in some cases computed properties will be less performant than getters and setters (although in Obj-C that’s a huge stretch, since method dispatch is, by default, not lightning fast in Obj-C; indeed, it’s a known performance issue with a standard workaround: method calls in Obj-C involve looking up methods by name, and the optimization is that you essentially grab a function pointer and hang on to it).

There’s absolutely no way that writing computed property wrappers around concrete properties just for the sake of it has any benefit whatsoever.

The short answer is “no”.

Aside: there’s one more marginal argument for getters and setters: if you’re deprecating a property (e.g. name) and you’d rather people didn’t call a hacky name setter that tries to compute firstName and lastName from a name string, it’s easier to codemod or grep calls to setName than calls to, say, .name =. I’ll reference this argument here even though I find it about as solid as arguments about how to write Javascript based on how well it plays with obfuscation.

I’m pretty sure cargo cult coders are still writing getters and setters in Swift, which has had computed properties since first release. (By the way, lazy computed properties are even more awesome.)

The Dark Side Beckons

With the advent of COVID, my wife and I decided to return to a world we had sworn off — MMORPGs. Those of you who know us know that we literally met while playing EverQuest, and we had, at various times in our lives, been pulled deep into that world, e.g. being officers (and in my wife’s case, leader) of raiding guilds — a more-than-fulltime job.

Our last flirtation with MMOs had been on the PS4 with Elder Scrolls Online — think Skyrim but much bigger and full of other players running around, riding impossible-looking mounts, asking for help with world bosses, and randomly casting spells to show off.

Using one PS4 per player is a pain, worse if your kids are involved. So we decided we’d try something that (a) ran on a Mac (we have enough Macs for everyone), (b) would be free to play, and (c) wasn’t WoW.

Now, I omitted our more recent flirtation with GuildWars 2, which meets all these criteria but just feels completely flavorless. (In GW2 you can sign up for and complete a quest without actually ever meeting the person you’re supposed to be doing it for — they’ve streamlined the story out of the game).

Anyway, the game we had been pining for these past years was ESO. We’d stopped playing because of a lack of time and the PS4 inconvenience, and it runs on Macs.

Our big problem came when we tried to install it on my daughter’s 2012 Mac Pro. Turns out the installed version of macOS is so out-of-date (3y or so?) that it requires multiple sequential OS updates to run it, complicated by needing firmware flashes that come with the OS updates, and there’s no one-step “clean install”. Anyway, when I tried to do these it just stalled and wouldn’t go past the second version. Sheesh. (And, by the way, the Mac Pro works just fine for everything else.)

Also, my wife’s Macbook really struggled with it.

Enter the Acer Predator Helios 300

So, I searched for a well-reviewed gaming laptop and got two Acer Helios 300s, not maxed out but close. Each for roughly the price of a Macbook Air, or what I would normally pay for a headless gaming desktop back in the day (i.e. ~$1200). These suckers run ESO with near maxed out-graphics settings at ~90fps (dropping to 40-50fps in massive fights, and 30fps or slower when on battery power).

Are these well-made laptops? No, they are not. The way the power brick connects literally makes me cringe. The USB and headphone ports are in super annoying locations because they need the obvious spots for fan vents.

Do they run hot? OMG yes, yes they do. Our cats like to bask in their exhaust.

Are they quiet? No, no they are not.

Do they have good speakers? In a word, hell no.

Do the trackpads work well? No. They. Do. Not.

Does Windows 10 make me livid? Yes, yes it does.

My daughters, who openly envy us our hotrod gaming laptops (and use them as much and as often as they can), have commented on how badly put together they are in every respect except for the important criterion of kicking ass.

To be continued…

WordPress WTF?!

Recently, WordPress has been so badly behaved as to boggle my mind. If I’m lucky, I only get something like this in the console when I create a new post…

Embarrassing shit in the console when I load WordPress…

If I’m unlucky, I get a screenful of errors and nothing appears. It was so bad a couple of minor versions ago that I started trying to figure out how to rebuild my site without WordPress (getting the post text is easy enough, but WordPress uses an uploads directory, which makes things non-trivial).

Anyway, I’m going to whine about this to my three readers in the hope that maybe something gets done about this shit. (It’s things like this that give jQuery a bad name, and I strongly doubt it’s jQuery’s fault.)

No Man’s Sky Revisited

No Man’s Sky was originally released in 2016. I’d been waiting for it for nearly two years after seeing some early demos. This looked like a game I’d been day-dreaming about for decades.

It was one of the most disappointing games I’ve ever played.

I recently saw No Man’s Sky Beyond on sale in Best Buy (while shopping for microphones for our upcoming podcast) and immediately picked it up. Speaking of disappointing game experiences, the PS4 VR has been a gigantic disappointment ever since I finished playing Skyrim (which was awesome). Why there haven’t been more VR updates of great games from previous generations (e.g. GTA IV) escapes me, because second-rate half-assed new VR games do not impress me.

Anyway, I did not realize that (a) No Man’s Sky Beyond is merely the current patched version of No Man’s Sky, and (b) the VR mode is absolutely terrible. But the current fully patched version of No Man’s Sky is a huge improvement over the game I was horribly disappointed by back in 2016. It’s still not actually great, but it’s decent, and I can see myself coming back to it now and then when I want a fairly laid-back SF fix.

High points:

  • There’s an arc quest that introduces core gameplay elements in a reasonably approachable way (although the start of the game is still kind of brutal)
  • There are dynamically generated missions
  • The space stations now actually kind of make sense
  • Base construction is pretty nice
  • There’s a kind of dumb “learn the alien languages” subgame
  • Planets have more interesting stuff on them

Low points:

  • Space is monotonous (star systems comprise a bunch of planets, usually at least one with rings, in a cloud of asteroids, all next to each other). Space stations seem to look like D&D dice with a hole in one side (minor spoiler: there’s also the “Anomaly” which is a ball with a door).
  • Planets are monotonous: in essence you get a color scheme, a hazard type (radiation, cold, heat, or occasionally no hazard), one or two vegetation themes, one or two mobility themes for wildlife, and that’s about it. (If there are oceans, you get extra themes underwater.) By the time you’ve visited five planets, you’re seldom seeing anything new.
  • Ecosystems are really monotonous (why does the same puffer plant seem to be able to survive literally anywhere?)
  • The aliens are just not very interesting (great-looking though)
  • On the PS4 the planet atmospheres look like shit
  • The spaceship designs are pretty horrible aesthetically — phone booth bolted to an erector set ugly.
  • Very, very bad science (one of my daughters was pissed off that “Salt” which was labeled as NaCl could not be refined into Sodium which — mysteriously — powers thermal and radiation protection gear). Minerals and elements are just used as random herbal ingredients for a potion crafting system that feels like it was pulled out of someone’s ass.
  • Way, way too much busywork. E.g. it’s convenient that “Silica Powder” can fuel your terrain-modifier tool (which generates Silica Powder as a byproduct of use), but why put in the mechanic at all? Why do I need to assemble three things over and over again to fuel up my hyperdrive? Why do I keep needing to pause construction to burn down trees to stock up on carbon?

The audacity of building a game with a huge, fractally detailed universe is not what it once was. It’s an approach many developers took out of necessity in an era when memory and storage were simply too limited to store handmade content, and budgets were too small to create it — Elite, Akalabeth, Arena, Pax Imperia, and so on — but it’s disappointing to see a game built this way with far more capable technology, more resources, and greater ambition keep failing to deliver in so many (to my mind) easily addressable ways. When No Man’s Sky was first demoed, my reaction was “wow, that’s gorgeous and impressive, I wonder where the gameplay is”. When it actually came out, two years later, my reaction was “hmm, not as gorgeous as the demo, and there’s basically no gameplay”. As of No Man’s Sky Beyond — well, the gameplay is now significantly better than the original Elite (or, in my opinion, Elite Dangerous) — which is not nothing.

As a final aside, one day I might write a companion article about Elite Dangerous, a game in many ways parallel to No Man’s Sky. The main reason I haven’t done so already is that I found Elite Dangerous so repellant that I quit before forming a complete impression. Ironically, I think Elite Dangerous is in some ways a better No Man’s Sky and No Man’s Sky is a better Elite.