The Magic Trackpad (if I must call it that) has generated some controversy on the TC network. MG thinks it signals the end of the mouse era. I think it’s a great tool but is being lauded by a group of people unfamiliar with decent mice (read: Mac users). I happen to love both Apple’s trackpads and great mice at the same time, but it seems to me that we’re overlooking the real conflict here. And as it turns out, mice and trackpads (magic or otherwise) are on the same side.
The next generation of input is already here; chances are you have it in your pocket. Yet, advanced as it is, there are fundamental shortcomings that will prevent it from completely supplanting the interfaces we’ve grown up with.
The technology I’m referring to is, of course, a touchscreen. One might be tempted to say “but a multi-touch touchscreen is in the same category as a multi-touch trackpad.” An understandable idea — you use similar gestures, pinch and rotate, all that kind of thing. But consider this. When you use a trackpad — or a mouse, critically — you are looking elsewhere and moving a representation of your hand or finger. It’s indirect manipulation. When you use a touchscreen, you are looking at and touching the same thing: direct manipulation.
In fact, the new conflict in input is not about which peripheral you use, but whether you use a peripheral at all: the conflict is direct versus indirect manipulation.
I wrote at length about this in an article for another site (Towards a better tablet OS; part 2) if you want to get further into it, but I'll try to break it down here in fewer than the 3,000-odd words I used there.
UIs have been designed for decades now with the mouse in mind: principles such as accessible corners, the various consequences of mouse-downs and mouse-ups, hover actions, cursor feedback, and so on. It's just a fact: most modern OSes are built for mice — even when they're not. Windows 7 is a good example: while it's multi-touch-friendly in some ways (the Surface project has contributed a lot here), it's also fundamentally organized around the mouse in other, more important ways. Even iOS bears the mark, if only in that it often fails to take advantage of finger input in anything other than superficial ways. It's only post-current-OS projects like 10/GUI that are built entirely around the idea of direct manipulation.
The benefits of direct manipulation are plain; we've all seen by now how a well-designed touchscreen interface can make many tasks easier and more natural than a mouse does. Direct interaction with on-screen glyphs and text is fun and in some cases powerful. Navigating maps, for instance, is both easier and more satisfying, and many games benefit as well.
But you run into weaknesses as quickly as you run into strengths. The direct connection of the user with the UI means there's tension between "real" gestures like touching and dragging and "artificial" gestures like tapping with three fingers to bring up a context menu on the item the other hand is touching. These things can be minimized with clever UI tricks, but they're accommodations for shortcomings in the interaction method — just like keyboard modifiers accommodate the impracticality of pressing more than one mouse button at a time. Yet those extra buttons put a range of actions at your fingertips that even a multi-touch interface doesn't approach.
And of course accuracy is enormously reduced. A psychologist might describe it by saying that the amount of cognitive space assigned to a desktop-mouse-keyboard combo is far, far greater than that assigned to a touchscreen. Again, this is not a pure gain or loss on either side. The reduced cognitive load means a touchscreen is easy enough for a toddler to use. But with the mouse, the huge landscape over which you move your precision tools (arm, wrist, finger) means you're working at an incredible resolution, limited only by the mouse's sensor and the resolution of the screen. This is what enables pixel-perfect art and pinpoint aiming: a large, configurable indirect-manipulation space.
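The resolution gap is easy to put rough numbers on. The figures below are illustrative assumptions (a common sensor spec and a typical desktop display), not measurements, but the orders of magnitude are what matter:

```python
# Back-of-envelope sketch of the precision gap between a mouse and a finger.
# All numbers are assumed typical values, not measurements.
mouse_dpi = 1600      # counts per inch of hand travel (common optical sensor)
screen_ppi = 110      # pixels per inch of a typical desktop display
cd_gain = 1.0         # 1:1 control-display mapping, no acceleration

# Smallest on-screen step a single mouse count produces, in pixels:
mouse_step_px = screen_ppi / mouse_dpi * cd_gain   # sub-pixel

# A fingertip contact patch is very roughly 9 mm across; the touchscreen
# must pick one point somewhere inside that patch.
finger_patch_mm = 9.0
mm_per_inch = 25.4
finger_patch_px = finger_patch_mm / mm_per_inch * screen_ppi

print(f"mouse step: ~{mouse_step_px:.2f} px")
print(f"finger ambiguity: ~{finger_patch_px:.0f} px")
```

With these assumptions the mouse addresses the screen in sub-pixel steps, while a bare finger leaves the system dozens of pixels of ambiguity to resolve — a difference of two to three orders of magnitude before any software cleverness.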
To those who object that you can make precise movements on a trackpad: yes, exactly, and that precision comes from feedback only possible with indirect interaction. Note that you don't know (and usually it doesn't matter) exactly where you put your finger down to initiate an action. Everything is based on a feedback loop between the cursor, the target, and your finger's movements. Not so with a touchscreen.
If this sounds like gibberish or UI hair-splitting, just think about how much interpretation goes into determining where you put that fat thumb of yours on an iPhone. There's nothing precise about it (unless you use a stylus); it's all prediction and accommodation. I know there are people who have gotten pretty good at, say, sketching on the iPad, but let's be honest: those are parlor tricks, and anyone concerned with the precision and accuracy of their strokes isn't going to leave it up to Apple's blob-interpolation software.
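That "blob interpolation" is, at its core, the problem of collapsing a smear of sensor readings into a single point. A toy sketch of the idea, using a weighted centroid over a grid of capacitance values (a common textbook approach, not Apple's actual algorithm):

```python
def blob_centroid(grid):
    """Estimate a touch point from a 2D grid of capacitance readings
    by taking the reading-weighted centroid. Rows index y, columns x."""
    total = sum(v for row in grid for v in row)
    if total == 0:
        return None  # no touch detected
    x = sum(cx * v for row in grid for cx, v in enumerate(row)) / total
    y = sum(cy * v for cy, row in enumerate(grid) for v in row) / total
    return (x, y)

# A small symmetric blob straddling two sensor columns:
readings = [
    [0, 1, 1, 0],
    [1, 4, 4, 1],
    [0, 1, 1, 0],
]
print(blob_centroid(readings))  # -> (1.5, 1.0)
```

Even in this toy version, the reported point is a statistical guess pulled from a patch of readings — which is exactly why the finger's "position" is an interpretation rather than a measurement.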
I don't know what advances are in store for us. It may be that the benefits of indirect interaction will someday be erased by some advance in direct input, or by the cleverness of a generation of UI designers working in a mouse-optional world. Yet I can't see how they'll overcome the clear advantages indirect interaction brings to tasks like gaming and art. One thing is for sure, though: you can pry my mouse from my cold, dead hands.