I’ve been seeing a few ads for HP’s Touchsmart PCs (and I see the PCs themselves in Costco all the time). I’m embedding an old ad, which is actually quite pretty (and pretty pointless) but you’ll get the point, I think.
Naturally, both this and the current ads are in marked contrast to Apple’s iPhone and iPad apps. The most obvious difference, for me — and the same applies to the notorious video editing scene in Minority Report — is that the computer user is standing up, whereas the iPad user is sitting down (and we don’t know what the iPhone/iPod Touch user is doing since they’re a disembodied hand). While I might like to fantasize about being Arturo Toscanini, the fact is when I’m doing serious work I prefer to sit. Even were I a user of standing desks (my boss is such a person), I would not want to have to perform wild gestures simply to do a little scrolling.
Another obvious difference is that Apple’s ads, however fast-paced, almost always make sense. You can tell what the user is doing and how they’re doing it. HP’s ads, like most Windows ads, simply feature lots of fast cuts with no clue as to what the heck is being accomplished, why, or how. Still, perhaps the message is:
A confused customer is a Windows customer.
The bad guy’s desk from (the original) Tron is actually a far more convincing interface than pretty much any of these “concepts”, precisely because it’s based on analogs of real-world designs that actually work. Yes, it’s stupid to build a calculator GUI that just looks like a calculator, but it’s even stupider to build a rocket science interface that works much worse than an ordinary calculator. (This is where I tried and failed to find a picture of the VR library UI from Disclosure to insert — along with Jurassic Park’s “I know UNIX” scene, perhaps the biggest UI howler in Hollywood history.)
One question I’ve been asking myself lately is whether the mouse/trackpad is going to disappear, or how it might be replaced. Even modestly quick typists do not look at the keyboard as they type, but at the words they are typing. Over a hundred years of typewriter evolution never led to the idea that superimposing the user interface on the output was a Good Idea. Indeed, even for the most direct interaction with media (e.g. drawing and painting), physicality is often as much an inconvenience as a benefit. When painting on large canvases, artists will often hold a long piece of wood in their off hand as a wrist-rest, to avoid inadvertently smearing paint with their wrists or elbows; similarly, when drawing with pencils, smearing lead with one’s hand is a constant risk, and keeping one’s hand entirely off the paper is very tiring.
Jeff Han’s by now famous multitouch demos show enormous potential for the glass touchscreen to operate as some kind of musical or artistic instrument, but that’s not the same thing as being a general-purpose pointing device. A mouse is a lousy tool for playing a piano, but it’s unsurpassed for manipulating text selections and Bézier handles.
It seems to me that overuse of direct-manipulation interfaces (move the window itself, not a mouse that controls a cursor), and of touch-based interfaces in particular (which have no perfect mechanism for distinguishing intentional touches, “your brush”, from unintentional ones, “your elbow”), gives us a golden opportunity to reap almost all the negative aspects of physicality while capturing few of the benefits: your tablet stylus or finger will never be as excellent an instrument as a graphite pencil, charcoal stick, or sable brush. You will, of course, probably have Undo.
The UI in Minority Report seems at first glance to be ridiculous. And at second glance too. I have no doubt that a far better touch-based UI could be created for such complex tasks than has thus far been envisioned in movies or HP Touchsmart commercials, but I suspect part of it must involve leveraging small gestures — e.g. treating a small part of a giant display as a virtual trackpad — and providing users with virtual points (mouse cursors, in other words).
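To make the “virtual trackpad” idea concrete, here’s a minimal sketch of how a small region of a giant display could drive a cursor across the whole screen, the way a physical trackpad drives a mouse pointer. All names, dimensions, and the gain value are invented for illustration; this is not any shipping API.

```python
# Hypothetical sketch: a small "virtual trackpad" region on a large
# touch display drives a cursor across the entire screen via *relative*
# motion, so small gestures can span a wall-sized display.

DISPLAY_W, DISPLAY_H = 7680, 2160   # assumed wall-display resolution
GAIN = 8.0                          # small finger motion -> large cursor motion

class VirtualTrackpad:
    def __init__(self):
        # Start the cursor in the middle of the display.
        self.cursor = [DISPLAY_W / 2, DISPLAY_H / 2]
        self.last_touch = None

    def touch_down(self, x, y):
        # Absolute touch position inside the trackpad region; only
        # relative motion from here on moves the cursor.
        self.last_touch = (x, y)

    def touch_move(self, x, y):
        if self.last_touch is None:
            return
        dx, dy = x - self.last_touch[0], y - self.last_touch[1]
        self.last_touch = (x, y)
        # Scale the small gesture up, then clamp to the display bounds.
        self.cursor[0] = min(max(self.cursor[0] + dx * GAIN, 0), DISPLAY_W)
        self.cursor[1] = min(max(self.cursor[1] + dy * GAIN, 0), DISPLAY_H)

    def touch_up(self):
        self.last_touch = None
```

The key point is the indirection: because only deltas matter, the user’s arm can stay at rest near their body instead of sweeping across the whole screen, Minority Report-style.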
As I type this post (on my desktop PC), it’s quick, easy, and precise to select text and make edits and corrections. A tiny movement of my hand will move a cursor from one side of my (large) display to the other, and I can indicate something to software merely by moving a cursor over a UI element (versus clicking on it or touching it). The iPad (and iPhone/iPod Touch) has a very well-designed interface for performing text selection and caret positioning with a touch, and while it works OK it’s orders of magnitude worse than using a mouse. Yes, I have over twenty-five years of mouse experience, but I was drawing pictures on screen with a mouse a few days after first using one (and with my right hand — I normally write and draw left-handed), and I’ve been touching things all my life. I should also add that despite many years of trackpad experience, I am far less comfortable doing graphical work with a trackpad than with a mouse.
You might think that the big problem here is “text selection”. Perhaps in the future “a contract will only be as good as the tape it’s recorded on” (not sure where that’s from, sorry), or text will be automatically transcribed, and the importance of text manipulation will disappear. Even if that’s true, and I strongly suspect it is not, the problem with touch interfaces is far more pervasive and subtle. Even surgeons have trouble keeping a finger rock steady, but most of us have no such problem with a mouse. In part, this is because a mouse’s default state is “rest”: lift your fingers away from a mouse or trackpad and the cursor doesn’t move. Keeping the cursor stable is crucial when performing delicate manipulations, such as rotating objects in 3D, selecting the edit point in a video, or adjusting the tangent of a curve with a Bézier handle. This is an extremely common situation for any computer user, and mice excel at it while fingers and hands utterly suck at it.
It’s possible that with very clever software interpolation we can iron out a lot of these problems, e.g. intelligently set thresholds with drifting “rest points” that allow an unsteady hand to be perceived as steady, but even if we were all given the hands of surgeons, holding an arm out is work; keeping a mouse steady is not.
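The drifting-rest-point idea can be sketched in a few lines. This is purely illustrative (the dead-zone radius and drift rate are invented): small jitters around a slowly updating rest point are reported as stillness, while any clearly intentional motion re-anchors the filter.

```python
# Hypothetical sketch of a "drifting rest point" filter: treat tiny
# jitters around a slowly-updating rest point as stillness, and only
# report motion once the finger clearly leaves that neighbourhood.

DEAD_ZONE = 5.0    # jitter radius (pixels) treated as "at rest"
DRIFT = 0.1        # how quickly the rest point follows slow motion

class SteadyFilter:
    def __init__(self, x, y):
        self.rest = [x, y]

    def update(self, x, y):
        dx, dy = x - self.rest[0], y - self.rest[1]
        if (dx * dx + dy * dy) ** 0.5 < DEAD_ZONE:
            # Within the jitter radius: report the rest point, letting
            # it drift slightly so slow intentional motion isn't lost.
            self.rest[0] += dx * DRIFT
            self.rest[1] += dy * DRIFT
            return (self.rest[0], self.rest[1])
        # Clearly intentional motion: follow the finger and re-anchor.
        self.rest = [x, y]
        return (x, y)
```

Tuning the two constants is the whole game: too large a dead zone and the cursor feels laggy; too small and the trembling comes back.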
When looking at things, bigger is better. When moving stuff around, smaller is better. This is why modern bulk carriers (you know: gigantic ocean-going cargo ships) have big windows and monitors on their bridges, but are controlled with small pointing devices.
So it seems to me that the future of computer interfaces is — drum roll — the mouse. Direct manipulation gets trying, and your fingers get in the way. Trackpads obviate some problems but remain cruder instruments than mice. I suspect that the stylus will make a huge comeback once touchscreens that also work with pressure-sensitive styluses arrive because, let’s face it, great artists don’t finger paint. It’s also possible that the stylus combined with major refinements in direct-manipulation interfaces (including virtual trackpads, pointers, etc.) might win out over mice, but we’re a few years away from that.
Last night I was messing around with Animation HD, a cute animation app on the iPad (which happened to be on sale for $0.99). There’s a slider control to set line thickness, which I had used to increase the thickness to make some crude strokes. I wanted to set the thickness back to (exactly) 16 to match my earlier strokes, and it took several attempts and a lot of concentration to get it right. Obviously, this is in large part a software issue, but as someone who frequently makes very accurate adjustments with a mouse using equally poorly designed desktop software, it’s a nice example of the problems that need OS-level fixes.
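One OS-level fix for exactly this slider problem is variable scrubbing speed: reduce the slider’s drag sensitivity as the finger moves away from the track, a pattern Apple later used for its media scrubbers. Here’s a minimal sketch; the function name, track length, and speed curve are all invented for illustration.

```python
# Hypothetical sketch of a precision slider: the farther the finger
# drags away from the track, the more finely the value scrubs, so
# hitting an exact value (like "16") stops being a game of chance.

SLIDER_LEN = 300.0   # assumed track length in points
VALUE_RANGE = 100.0  # slider maps to 0..100

def drag_to_value(value, dx, finger_distance_from_track):
    """Apply a horizontal drag of dx points to the current value,
    scrubbing more finely the farther the finger is from the track."""
    # 0 pts away -> full speed; 200+ pts away -> 1/8 speed.
    speed = max(1.0 - finger_distance_from_track / 200.0, 0.125)
    value += dx * (VALUE_RANGE / SLIDER_LEN) * speed
    return min(max(value, 0.0), VALUE_RANGE)
```

The same 30-point drag moves the value ten units with the finger on the track, but only 1.25 units with the finger 200 points away — enough precision to land on an exact integer.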
I’ll add some iPad-originated animations to this post once they show up on my YouTube account.