The AR Tsunami

SciTechDaily’s artistic rendering of a volcano which probably looked nothing like Mt. Tambora.

In a recent episode of his excellent podcast Cautionary Tales, entitled “Frankenstein vs. the Volcano”, Tim Harford looks at the impact of the explosion of Mt. Tambora (which was far more powerful than Krakatoa) on the world in both the immediate and long term. You could sum up his conclusion as “those whom disasters narrowly avoid killing are made stronger”. Western Europe suffered its last major famine as a result of the “nuclear winter” caused by the eruption, millions died around the world, and we got (among many other things) Frankenstein, the bicycle, and nutritional science.

The obvious parallel, and one that Harford makes himself, is with COVID. The impact of COVID has been enormous: aside from causing millions of deaths and even more long-term suffering, it has also catalyzed many positive changes that are likely to be long-lasting. The paperless office, which has been technically feasible for the last thirty years, has all but become reality. People’s labor is now much more likely to be valued for its outputs (what did you accomplish?) than its inputs (how many hours did you sit at your desk?).

AR will give each of us a user experience far better than the one envisaged in Minority Report, without needing to stand up, dedicate a room to it, wear special gloves, or bother the people around us (much). And it’s coming much sooner than you think.

All of which is to say that COVID has paved the way for a technological tsunami, and that tsunami is Augmented Reality. You can experience this revolution, in primitive form, by using the Oculus Quest 2. It’s a self-contained VR headset (not AR), but it has limited AR support in that it uses outward-facing cameras both for head-tracking and to show the world around you when you approach the boundary of your “safe area” or tap the headset to show your surroundings so you can see what you’re doing.

Aside: what does COVID have to do with AR? COVID has made almost everything we do doable online with a smartphone or laptop, it has turned remote work into the new normal, and it has made people used to collaborating online. AR makes all of this easier to do with fewer devices and lets us collaborate in new ways that will help make up for the decline of the office. If I can see my colleagues working around me, what difference does it make if they’re really there?

Google Glass was also a primitive attempt at AR, but it’s more like a HUD than what I’m talking about. While potentially useful to some (consider a surgeon using a laser scalpel, who has to wear protective eyewear to begin with and might want critical patient information available at a glance), it’s again not the real deal.

Magic Leap’s AR demos were a non-solution to a non-problem.

Magic Leap sold investors on a vision of AR that’s both unrealistic and hilariously stupid. Sure, watching a whale breach in a school gymnasium would be cool, but is it useful? Does it blend?

But that’s just a hint of what’s coming.

Right now, being an early-adopting software engineer/doofus, I carry around a fairly ridiculous array of tech: an iPhone, a laptop, an iPad, and a watch, among other things. Each of these devices has a battery, a display, networking antennae, and a CPU. A lot of the data they work with is shared and shuttled back and forth between the devices and the cloud. E.g. if I take a photo, it is uploaded to iCloud and then downloaded to my other devices. If someone sends me a text, it appears on my phone, which relays it to the watch and laptop.

Now consider this scenario: I carry a phone and a headset. The phone is beefier than my current phone: imagine it were thick enough that no camera bulge was needed to accommodate the cameras. That’s right, it’s the same size as before (maybe a little smaller in other dimensions) but thicker. That’s because it’s going to do the job of my tablet, my phone, my watch, and my laptop (except it won’t be sending data to itself), and it will use the headset as a display.

Projected keyboards are already a niche product, but in AR/VR a virtual keyboard can appear under your hands (or show through them) and be infinitely customizable (just as iOS’s glass keyboard lets you access special keys in context, only more so). In the long run, keyboards could be replaced or augmented by chorded hand gestures (maybe we’ll all learn to sign!).

What about the keyboard? Well, some folks might use a larger device that has a keyboard (but no display), while others might carry a small keyboard. Personally, I’ll use the phone’s on-screen keyboard when using the phone’s display, but when I use the headset, it will project a virtual keyboard onto a flat surface near me, and when I type on it, it will figure out what I’m typing by tracking my fingers.
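The core of that idea is just geometry: take a fingertip position from whatever hand-tracking the headset exposes, project it onto the plane the keyboard is drawn on, and map the result to a key. Here’s a minimal Swift sketch of that mapping; everything in it (the `VirtualKeyboard` type, the layout, the touch tolerance) is invented for illustration, and a real implementation would get fingertip positions from the headset’s hand-tracking API rather than hard-coded values.

```swift
import simd

// Hypothetical sketch: map a tracked fingertip onto a virtual keyboard
// that has been "projected" onto a flat surface.
struct VirtualKeyboard {
    let origin: SIMD3<Float>      // top-left corner of the keyboard on the surface
    let right: SIMD3<Float>       // unit vector along a key row
    let down: SIMD3<Float>        // unit vector along a key column
    let keySize: Float            // side length of one (square) key, in meters
    let rows: [[Character]]       // key layout, row by row

    /// Returns the key under the fingertip, or nil if the finger isn't
    /// touching the surface or lies outside the keyboard's bounds.
    func key(under fingertip: SIMD3<Float>, touchTolerance: Float = 0.01) -> Character? {
        let offset = fingertip - origin
        // Only count a press when the fingertip is essentially on the surface.
        let normal = normalize(cross(right, down))
        guard abs(dot(offset, normal)) <= touchTolerance else { return nil }
        // 2D coordinates within the keyboard plane.
        let x = dot(offset, right)
        let y = dot(offset, down)
        let col = Int((x / keySize).rounded(.down))
        let row = Int((y / keySize).rounded(.down))
        guard row >= 0, row < rows.count, col >= 0, col < rows[row].count else { return nil }
        return rows[row][col]
    }
}

// Example: a tiny three-key "keyboard" lying flat on a desk.
let keyboard = VirtualKeyboard(
    origin: SIMD3<Float>(0, 0, 0),
    right: SIMD3<Float>(1, 0, 0),
    down: SIMD3<Float>(0, 0, 1),
    keySize: 0.02,
    rows: [["a", "b", "c"]]
)
print(keyboard.key(under: SIMD3<Float>(0.03, 0.002, 0.01)) ?? "no key")  // prints "b"
```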

When I want to work, I can put on the headset and either be immersed in VR or simply have as many displays, as large as I like, overlaid on my view. The resolution won’t quite match today’s Retina displays right away, but it will be close enough that most people won’t care, and it will get better fast. Being able to have a universe-sized display that fits in your pocket and doesn’t bother other people will make up for slight deficiencies in resolution.

In return, you now only need to carry, recharge, and pay for a phone and a headset. If you’re a power user, you might carry two headsets so you can charge one while you use the other, and you might have a “headless laptop” if you need more computing horsepower.

How will the user interface work? Well, the baseline scenario is already demonstrated by the Oculus: it’s essentially the existing GUI paradigm rendered on floating rectangles in the air, which you manipulate with virtual laser pointers. (I could go on a rant about how clumsy some of this is, but this post is long enough.)
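That “laser pointer at a floating rectangle” interaction also reduces to a small piece of geometry: intersect the pointer’s ray with the panel and convert the hit into 2D panel coordinates, so existing GUI code can treat it like a mouse position. Below is a rough Swift sketch of that idea; the `FloatingPanel` type and its dimensions are assumptions for illustration, not any headset vendor’s actual API.

```swift
import simd

// Hypothetical sketch: where does a controller's pointer ray hit a floating panel?
struct FloatingPanel {
    let center: SIMD3<Float>
    let right: SIMD3<Float>   // unit vector along the panel's width
    let up: SIMD3<Float>      // unit vector along the panel's height
    let size: SIMD2<Float>    // width and height, in meters

    /// Intersects a pointer ray with the panel and returns normalized
    /// (0...1, 0...1) panel coordinates, or nil if the ray misses.
    func hit(rayOrigin: SIMD3<Float>, rayDirection: SIMD3<Float>) -> SIMD2<Float>? {
        let normal = normalize(cross(right, up))
        let denom = dot(rayDirection, normal)
        guard abs(denom) > 1e-6 else { return nil }   // ray is parallel to the panel
        let t = dot(center - rayOrigin, normal) / denom
        guard t > 0 else { return nil }               // panel is behind the pointer
        let point = rayOrigin + t * rayDirection
        let local = point - center
        let u = dot(local, right) / size.x + 0.5
        let v = dot(local, up) / size.y + 0.5
        guard u >= 0, u <= 1, v >= 0, v <= 1 else { return nil }
        return SIMD2<Float>(u, v)
    }
}

// Example: a 1 m x 0.6 m panel floating two meters in front of the user.
let panel = FloatingPanel(center: SIMD3<Float>(0, 1.5, -2),
                          right: SIMD3<Float>(1, 0, 0),
                          up: SIMD3<Float>(0, 1, 0),
                          size: SIMD2<Float>(1.0, 0.6))
let hit = panel.hit(rayOrigin: SIMD3<Float>(0, 1.2, 0),
                    rayDirection: normalize(SIMD3<Float>(0.1, 0.2, -1)))
print(hit ?? "missed")  // roughly SIMD2<Float>(0.7, 0.67)
```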

The worst thing about the Oculus Quest 2 is the need for the two controllers. Allowing users to use (and see!) their hands to point and to type on a virtual keyboard would on its own make the Oculus an incredible portable computer (albeit one with a barely adequate CPU, terrible battery life, insufficient storage, and a dependence on the network). Adding better outward-facing cameras would turn it into a true AR device. Apple’s CPUs, developer toolchain, and iOS close the loop.

Apple can and will do much, much better, and everything they’ve been doing with Swift, Apple Silicon, and AR has been laying the foundations for a huge and sustainable advantage in this new world. To begin with, performance per watt is going to be crucial, and Apple is far ahead of everyone here. Next, having a power-efficient modern language (i.e. not JavaScript or a JVM language) is crucial, and Apple is ahead there too.

I’ve caught some tech waves and missed others (e.g. I was late to Web 2.0 and JavaScript). I’m hoping to surf this one.