Text Entry 😒😕😱🤷‍♂️…
There are at least five ways to enter text on the Quest 3 (also the Quest 2).
- “Shooting” keys on a virtual keyboard using your pointers.
- “Pointing and clicking” on keys on a virtual keyboard using hand-tracking.
- “Pressing” keys on a virtual keyboard by poking your virtual fingers through the keys using hand tracking.
- Voice recognition. I think. I haven’t managed to activate it or get it to do anything. I guess I’ll need to watch another sociopath on YouTube (see below) to find out how this is supposed to work.
- Pairing a bluetooth keyboard.
In rough terms these options are bearable, awful, awful but in a whole new way I had never previously experienced, don’t know, and planning to find out.
To give you a tiny taste of why I’d probably scream at Meta’s Quest UX team were I to have an opportunity…
Imagine you’re building a virtual keyboard for virtual reality that has by its nature all the space in the universe and can change shape, shrink, expand, whatever, when you’re pointing at it or looking at it but you don’t make the number keys and the alphabet keys and common punctuation available at the same time.
Resizing Windows 😒…😱…👍
My first major pain point with the Quest 3 was not being able to resize windows. This is super easy to do on the Quest 2 and I was stumped. I tried pinching (dual pointers), dragging different edges, everything.
Turns out that by default you start in one display mode where all windows are the same size and appear at a particular distance (too close), and a whole other option lets you create bigger, resizable windows further away.
To find this out I had to endure a horrible video by some asshole talking about his new user name meaning “god of everything” in a language he’d invented or something. OMG who are these people? How many people has this guy killed and eaten? This video was so bad I actually started scrolling through the comments (yes, I started reading YouTube comments voluntarily without a gun in my mouth) and the top comment was “go to 2:55 and thank me later” or somesuch. Thank you kind person whoever you are. I would have voted you up but I couldn’t be bothered to log in to YouTube to do so since my passwords are not easy to type on a real keyboard.
The answer is “settings, screen distance”. I’ll post screenshots when I figure out how to post screenshots.
Motion Tracking 😒… Automatic Switching to/from Motion Tracking 😱
It turns out I did have motion tracking on, and what’s more “auto-switching” was by default set to maximum sensitivity. On the Quest 2 you could transition to motion tracking by tapping your controllers together and putting them down. I tried this. On the Quest 3, just putting the controllers down is supposed to work. For me it did not. And, surprisingly, this was not owing to poor lighting: I have excellent lighting in the places I tried it.
So, auto-switching is broken but manual switching is not. Once in motion tracking mode, I discovered that hand tracking is definitely far better on the Quest 3 than on the Quest 2 (as in, not laughably terrible), but even when it works it sucks. This is because the “clicking” gesture (pinching your fingers) always, without fail, also moves your perceived pointer position. I think that if using this system were my only option I could possibly, eventually, learn to pinch in such a way that my pointing finger did not move.
This reminds me of the way that Windows, for literally decades, did not smooth mouse motion as well as macOS leading to a jittery mouse.
Apple’s original opto-mechanical mouse tracked a ball with perpendicularly mounted wheels, each spinning a slotted encoder in front of an LED/sensor pair that literally counted pulses, which were turned into position deltas. That’s noisy and bad at diagonal lines, yet even the original Mac had brilliantly engineered code that smoothed out the noisy data and allowed pretty clean pointing and drawing behavior. I remember demonstrating the difference to a computer artist who had grown up using PCs and swore they were just plain better than Macs in the late 1990s.
He had literally trained his brain to deal with the shitty Windows mouse input. Bravo human evolution, but I preferred the device which had let me draw, freehand, with my non-dominant hand, within an hour of picking up the device.
Aside: Windows Start Menu, and Windows Usability in General 😱
Writing this post by some random chance on my Windows PC, I got another reminder of Windows’ total, ongoing, unmitigated failures in usability. First, I had to google how to type an ellipsis in Windows because while I had somehow learned alt-0151 for em-dash some time ago, I could not remember the magic alt-0133 for ellipsis (that’s: hold down alt, type 0 then 1 then 3 then 3 on the numeric keypad, then release alt; and it has to be the numeric keypad, not the regular number keys, which makes it extra fun on a laptop). It’s option-shift-hyphen and option-semicolon on the Mac, and has been since 1984. And, yes, an em-dash is conceptually related to a hyphen and an ellipsis is conceptually related to a semicolon. These are infrequently used combinations so ease of recollection trumps ease of execution. As opposed to, e.g. copy and paste.
Now go look up the original Windows shortcuts for copy and paste. On the Mac the common shortcuts were one-handed left-hand operations intended for the majority of users who would have the mouse in their right hands. This shit just goes on and on. But go ahead, stick your menu bars 30px below the tops of document windows and ignore Fitts’ Law.
Then, when I wanted to add emojis I tried to find out how to open the Windows 10 Emoji Keyboard, and of course the primary purpose of the Start / Windows menu these days is not to actually launch apps or help you do things but to trick you into using the Edge browser and Bing search. So while typing Emoji Keyboard in Start might be expected to, say, expose the app, it simply shows a snippet of an article found on Bing telling you to type Windows-Period, which weirdly does not work on my Logitech MX-Keys keyboard. Anyway, some time later I tried right-clicking and lo and behold it’s the top option, something neither Bing nor Google even suggested.
The Quest Browser 😱
Another example of the fact that the Quest is not ready for prime time, despite having, allegedly, been a product for people to buy and use for several years, is the utter uselessness of the browser (despite being built on Chromium).
First off, I thought I’d try watching Netflix in my Quest 3. The Netflix website loads just fine and I was able to log in (literally looking up the credentials on my phone without removing the headset). But when I tried to watch a video I ended up on a page I hadn’t seen in ten years, the “Silverlight Not Supported” page. OMFG. For those of you who don’t know, at one point Netflix decided to use Silverlight (Microsoft’s attempt to fight Flash with something worse) as its video playback engine. If I recall correctly, Silverlight supported more DRM options than Flash at the time, which suited Netflix because there was some paranoia among content owners that folks would steal streams and torrent them. Or something. But anyway there was this brief shining moment when Netflix supported streaming via Silverlight on some browsers and H.264 on others. OMFG.
Anyway, either the Quest’s browser doesn’t support the video streaming machinery Netflix needs, OR it does but Netflix is using crappy feature detection (or relying on the browser’s claims about itself) and doesn’t know it, OR Netflix wants people to use its app (yes, there’s a Netflix app) and is deliberately making the browser experience bad (in which case you’d think they’d detect the Oculus browser and redirect users to the app store, so that would involve several layers of incompetence on Netflix’s part).
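For what it’s worth, doing this kind of feature detection properly is nearly a one-liner. A sketch (the codec string is just an illustrative H.264 profile, not necessarily what Netflix actually serves):

```javascript
// Ask the browser directly whether it can handle a given stream type,
// instead of sniffing the user-agent string and guessing.
// Returns false if Media Source Extensions aren't available at all.
function canPlayStream(mimeType) {
  return typeof MediaSource !== "undefined" &&
    MediaSource.isTypeSupported(mimeType);
}

// e.g. canPlayStream('video/mp4; codecs="avc1.42E01E"')
```

If a site keys its playback path off this instead of off the user-agent string, a capable-but-obscure browser gets the working experience by default.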
Anyway, the Netflix app on Quest 3 is horrible. It doesn’t allow mixed reality usage at all and instead embeds you in some kind of virtual bachelor pad which means not enough Quest 3 users have complained and Netflix itself doesn’t care and, presumably, neither does Meta.
Next, I’m trying to gradually turn the xinjs-ui demo site into the basis for a web-based front-end development tool that I’ll be able to use anywhere there’s a browser, which is to say anywhere, so I loaded it up to see how well everything works, and everything works great except the most prosaic thing, the form demo page. Who needs forms, right?
Most user input on the web (e.g. login pages) is done using forms, even though HTML forms are not fit for use and have needed to be prevented from doing their default thing to work properly since “Web 2.0”. (Basically, if you hit enter or push a button inside a form, its default behavior is to reload the page and lose everything. This is weirdly not what anyone wants, but it made a kind of sense before web pages started communicating with servers directly. This is sad, but one of the reasons the web has been so insanely successful is that it is very forgiving and doesn’t gladly break old stuff.)
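The standard workaround looks something like the sketch below; `sendToServer` is a hypothetical stand-in for whatever fetch/XHR call your app actually makes:

```javascript
// Minimal sketch of the "Web 2.0" form fix: intercept submit,
// serialize the fields yourself, and stop the default page reload.
function wireForm(form, sendToServer) {
  form.addEventListener("submit", (event) => {
    event.preventDefault(); // stop the reload-and-lose-everything default
    const data = Object.fromEntries(new FormData(form));
    sendToServer(data);
  });
}
```

This is the kind of plumbing a form wrapper like the one described below exists to hide.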
xinjs-ui provides a simple reusable form wrapper that does all the “usual things” people want forms to do and stops bad things from happening while trying to leave everything as much alone as possible. So it lets you use
<input type="date"> elements to display and modify date values in a robust and standard way. Guess what flat out doesn’t work at all on the Quest’s built-in browser?
The built-in interactive demos on the site let me quickly test a bare
<input type="date"> alongside the “wrapped” version that was failing, verifying that it’s not my code that’s the problem. You simply can’t enter dates via a date input. So, good luck scheduling calendar appointments or booking airfares on any site that uses standard widgets. (Contrast this with mobile Safari, which not only supports such things but goes out of its way to provide native experiences around things like auto-complete.)
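If a browser simply doesn’t implement date inputs, that’s at least detectable (unimplemented input types silently fall back to plain text), so you can swap in a fallback picker. A sketch; note that the Quest browser, being Chromium underneath, may well pass this test and still fail to ever show you a usable picker, which is the worse failure mode:

```javascript
// Browsers that don't implement <input type="date"> silently fall
// back to type="text", which is detectable: the assigned type
// doesn't "stick".
function supportsDateInput(doc) {
  const input = doc.createElement("input");
  input.setAttribute("type", "date");
  return input.type === "date";
}

// if (!supportsDateInput(document)) { /* mount a custom date picker */ }
```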
I should note that the Quest browser does a great job with
<select> elements. This isn’t a failure of engineering, this is a failure of emphasis. Clearly no-one cares if you can get work done using this thing. There’s no-one coming into the office in the morning and trying to work using their Quest headset for as long as possible until they reach a blocker and then raging out, writing a bug report, and telling the manager of the team responsible to fix their shit.
Working on a Computer 🤷‍♂️…
Interestingly, the Quest 3 offers beta support for desktop sharing out of the box. I actually paid for a third-party solution for this for my Quest 2, which I was planning to try out on the Quest 3 once I sort out the Quest 3 being attached to the wrong account. Anyway, this looks promising.
Capturing Video 🤷♂️…
Capturing Video is pretty easy (meta-right-trigger to start and stop video capture), except that by default it won’t capture your mic, and I’d rather narrate my experience than capture silent video and then overdub it. After all, don’t you want to know what my user name means in the language I made up?
You can capture mic input by using the “camera” app to trigger video capture and manually switching the mic on for that capture, but by default it is always off. (I hoped turning it on for one video might change some underlying setting; it does not. Or that at least the next time I used that dialog it would default to my previous choice; no, it doesn’t do that either.) AFAICT there’s no way to turn the mic on mid-capture.
Ironically, streaming your experience is also possible via the camera app and here the default is to include mic audio. Just in case you thought Meta suddenly cared about your privacy.
Anyway, I haven’t figured out a way to conveniently capture video with mic audio nor have I got stuff syncing to my computer yet.
Put yourself in the shoes of a usability tester at Meta and consider just how little of a damn they must give about you to leave all this stuff so messed up. Personally, were I on the team building this stuff, I’d be frustrated by my own inability to capture quick examples of bugs or other issues, share them, and fix them, just for my own convenience.
The depth of indifference to usability I read into all of this is mind-blowing. But, never ascribe to malice…
One last random aside…
At least one of the emoji used in my previous blog post (and likely this one too) does not render on the Quest 3. Apparently Meta isn’t even keeping up with Emoji (and it’s not like I’m using super modern obscure ones).
I’m a bit depressed at this point…
As an Apple shareholder I suppose I should be thrilled that the company in the second-best position to make inroads into the VR / XR / AR space is so clueless, but I really wanted to love the Quest 3. As I said to my girlfriend, when Apple made the iPhone they had Eric Schmidt doing industrial espionage for Google on their board. He went back to Google and told the Android team to stop working on their new Sidekick and instead steal Apple’s ideas. Despite this, Apple has maintained a durable technical and usability advantage in the smart phone space for fifteen years. How dominant might they be in the VR / XR / AR space when their competition is this clueless?
Back during the mass layoffs in Silicon Valley in 2022, Zuckerberg was supposedly furious that there were a ton of people working on the Oculus project who weren’t using, or were only grudgingly using, the product. Dogfooding is crucial for any consumer product, and your goal needs to be a product you use all the time in preference to alternatives, and probably in preference to things that aren’t even seen as alternatives.
I’m sure the Apple Watch team has people who use their Watch instead of their phone as much as they can. They probably have “leave your phone at home” days. I’m sure the iPad team has people who use iPads for things other folks use their Macs and iPhones for. I’m sure there are Vision Pro team members who don’t have any monitors, who code on their Vision Pros when they can, who attend meetings with them, and when they run into problems they fix them.
As soon as you internalize the idea that the product you’re building is for “other people” that you are imagining, you are fucked.
The fact that most Facebook employees avoid Facebook outside of work and won’t let their kids use it says a lot about it.
And yes, I worked for Facebook and no I didn’t like it and it didn’t like me. And yes, I bought Oculus products post FB-buyout and held my nose despite all of this.
More to come once I pair a keyboard and install Opera and/or Chrome.