Written by Duncan Wilcox
on June 9, 2010
I’m very interested in the iPad used for content creation, what you might call “casual content creation”. With iWork for iPad, Apple sent a clear message: the iPad is more than a content consumption device.
I believe content creation apps are going to be key to the iPad as a platform.
The competitive strategy angle is that HTML5 is narrowing the gap, so much so that a content consumption web app can come pretty close to a native app. However, for key content creation aspects like interactivity, integration with the system, media richness, speed, disconnected use and polish, native apps still have an edge. This means that native content creation apps differentiate the iPad from (upcoming) competing tablets more than a content consumption app can.
The platform strategy angle is that content creation apps complete the iOS platform and, in the context of the laid back/feet on the table target, make it general purpose.
Content creation on a touch screen is a shocker to people who believe you’re only creating content when you’re typing text—how could you type all the text on a soft keyboard?
Once you overcome the concept of keyboard and mouse as the primary input devices, you realize they might in fact have been a blessing and a curse, stifling creativity by forcing tools and users to adapt, instead of the other way around. What would music sound like today if we had never evolved from harpsichord and lute, would Mark Knopfler play Sultans of Swing on a harp?
The basics of computer literacy today involve learning to coordinate eye and hand, to internalize the separation between hand movement and on-screen activity. Change your mouse acceleration settings and realize just how delicate the abstraction is.
To a novice user, aiming at something on screen with a mouse is like trying to ring a doorbell using a broomstick. The tool that’s between you and the target object is the cause for the lack of directness. You will get used to it out of necessity, but that doesn’t make it better than direct interaction.
To sum it up, a first level of indirection is removed by touching objects on screen: you directly touch and manipulate information you want to act upon.
This is a tricky one. Twenty-six years of Windows, Icons, Menus and Pointer (WIMP) leave us with conventions, behaviors and encrustations that build on input device limitations in ways that make user interfaces hard to learn.
Object selection is one of the relics of WIMP interfaces, the selected state is a form of intrinsic UI modality. Objects and actions are pleasingly orthogonal to the mathematically inclined, but performing actions on objects that are in a selected state is another form of indirection, akin to using a marker to highlight the Lego brick that you’ll then pick up with your “hand tool” to perform an assembly action.
Selection is the premise to the Great Inspector Hunt, whereby you click on an object to manipulate it and then go to an entirely different place to hunt down the property you’re looking for.
Despite the trend to simplify user interfaces and remove features, there still are too many features to expose only through gestures, particularly considering the lack of a shared gesture vocabulary among different apps.
Multiple selection is a common UI feature that doesn’t map well to touch screens, and in fact it doesn’t really map well to some kinds of objects even on the desktop, like discontinuous text selection. Multiple selection also prevents contextual UI placement, which I believe to be a problem of multiple selection, not of contextual inspection.
On a touch screen the two “obvious” ways of implementing multiple selection are the iWork way, where you tap and hold the first object and use the other hand to tap further objects, or the desktop-inspired drawing of a rectangular “rubber band” that selects all objects it touches. Multitouch with two hands is a pretty demanding technique: let go of the first object and you lose the selection, the first hand partially obscures the screen and gets in the way, and you really have to put the iPad down. Dan Messing’s otherwise excellent FreeForm drawing app implements rubber band object selection; the fundamental downside is that, because there’s no hovering mouse cursor and because the entire canvas is a target that initiates a rubber band, it is exceedingly easy to activate accidentally, deselecting the object you’re working with.
The solution is to think about what multiple selection is really needed for and try to do that differently. Text style can be applied to multiple objects by copying and pasting style information; Keynote on iPad uses a suboptimal multi-hand gesture to match object sizes that is essentially style cloning. Mail on iPad groups “objects” by entering a mailbox “edit” mode, where multiple messages can be selected to be moved or deleted. There are likely other workable solutions that can help kill the need for multiple selection.
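To make the style-cloning idea concrete, here is a minimal sketch of how copy/paste of style information sidesteps multiple selection: capture an object’s stylistic attributes once, then apply them to targets one tap at a time. The function names and attribute keys are my own illustration, not any real app’s API.

```python
# Hypothetical sketch: style cloning as a stand-in for multiple selection.
# Objects are plain dicts; "font", "size" and "color" are assumed style keys.

def copy_style(source):
    """Capture only the stylistic keys, leaving the object's content alone."""
    return {k: source[k] for k in ("font", "size", "color") if k in source}

def paste_style(target, clipboard):
    """Apply the captured style to one more tapped object."""
    target.update(clipboard)
    return target
```

Each tap on a new object becomes one more `paste_style` call, so there is never a moment where several objects must be held in a “selected” state at once.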
So while from a software development point of view you can still think of an object as being selected, from an interaction design point of view it probably helps to think that selection doesn’t exist, and that by tapping an object the user is asking for the object manipulation UI to be exposed. Near the object, of course.
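A sketch of what this tap routing might look like, under my own assumptions (the names `CanvasObject` and `route_tap` are illustrative, not from any real framework): a tap on an object exposes its manipulation UI, while a tap on empty canvas only dismisses controls and never starts a rubber-band drag, avoiding the accidental deselection described above.

```python
# Hypothetical sketch: routing a tap without a persistent "selected" state.
from dataclasses import dataclass

@dataclass
class CanvasObject:
    id: int
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def route_tap(px, py, objects):
    """Topmost object under the tap gets its manipulation UI exposed;
    an empty tap merely dismisses any open controls, so a stray touch
    can't throw away the object you're working with."""
    for obj in reversed(objects):  # last drawn = topmost
        if obj.contains(px, py):
            return ("show_controls", obj.id)
    return ("dismiss_controls", None)
```

The key design choice is that the empty-canvas case is deliberately inert: it closes UI but initiates nothing, which is the opposite of the rubber-band behavior that makes accidental deselection so easy.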
When you approach a building you don’t type in the door angle in degrees to open it. Sometimes the door handle shape doesn’t match how you actually use the door, but you definitely manipulate the door directly.
Yet apps use generic number or string editing controls all the time, instead of meaningful visual metaphors. User interface kits of standard controls help developers build apps faster, at the cost of seldom representing object properties in a way that lets the user manipulate the property directly.
The lack of directness, a result of using a surrogate representation, might have been a reasonable compromise between immediacy and implementation simplicity in the past. Today it just forces the user to think in terms of the app’s representation of a property, rather than the property itself. So, once again, it’s a form of indirection that should be removed.
Fixing this is very dependent on the kind of property, and there’s a tradeoff between frequency of use of a property and clutter when all the property controls come on screen.
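As one concrete case, consider rotation: instead of typing an angle in degrees into an inspector field, the object’s angle could simply track a handle the user drags around its center. A minimal sketch of the underlying math, with hypothetical names of my own choosing:

```python
import math

# Hypothetical sketch: a rotation handle manipulated directly, instead of
# a numeric "degrees" field. The object's angle tracks the finger as it
# drags around the object's center point.

def handle_angle(cx, cy, tx, ty):
    """Angle in degrees from the object's center (cx, cy)
    to the current touch point (tx, ty)."""
    return math.degrees(math.atan2(ty - cy, tx - cx))
```

Dragging the touch point from directly right of the center to directly above it rotates the object from 0° to 90°, with no numeric surrogate in between.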
A landscape touch screen makes for a surprisingly functional keyboard, considering it gives no tactile feedback. Clearly it is no match for a real keyboard; if your job is to crank out a few pages of text a day you’ll obviously want a real keyboard and perhaps a full desktop OS.
But if you are a gifted writer and can dump your brain and turn thoughts into actual textual content in an unretouched stream of words, the glass keyboard might be reasonably usable. The issues become apparent when you need to edit and move text around, to manipulate text rather than produce it. Moving the cursor with arrow keys or using keyboard accelerators to manipulate text is, like the hand-eye coordination of using a mouse, second nature to anybody who spends any amount of time typing.
Beginners will instead just backspace through perfectly good text to get to a typo, until they learn the magic of the left arrow key: it’s like a backspace that doesn’t delete!
This is not to reiterate the “iPad is for young/old/dumb people” cliché, rather to point out that keyboard-based text manipulation is not a natural interface, that we should consider the idea that there might be alternate interfaces for text editing, that function and meta keys that have popped up over the decades are barnacles on the “content keys” rock.
A new interface for something so fundamental isn’t something you dream up without testing and I haven’t given it a huge amount of thought, but I do believe that a keyboard-less multitouch text navigation and manipulation UI might be a workable solution, better than trying to replicate keyboard and mouse based manipulation on a multitouch screen.
The iPad is sexy and makes you want to use it even for content creation. In a future article I plan on discussing how the above applies to the iWork UI, though it’s clear that iWork is just a first good shot, definitely not the final word on touch content creation UIs.
I believe content creation apps will define the iPad and make it many times more useful than it currently is, and I believe only UIs that remove indirection and bring content closer to the user will succeed in disappearing; as Adam Engst puts it, “the iPad becomes the app you’re using”.
You shouldn’t write off the iPad as a content creation tool just because iWork isn’t quite there yet. Remember the state of Mac software on January 24th, 1984? Yeah, MacWrite or MacPaint, one at a time. And the Mac was born for content creation, as most dead-tree-age, printer-front-end solutions were at the time.
I’m at WWDC for a few more days, so if you’re in San Francisco I’d be happy to chat about this, just contact me @duncanwilcox.
About the Author
Hi, I’m Duncan Wilcox, I’m a software developer and chocolate addict, living in Florence, Italy. I’m passionate about the Mac, photography and user interaction, among other things. Contact me at email@example.com or follow me on Twitter. These days I work on Sparkle.