Apple famously keeps info about upcoming products locked down on a need-to-know basis, but that doesn’t mean it waits until hardware is fully formed before creating software for it. Nor does it create either hardware or software in a vacuum, building one independently of the other. So how does it come up with and prototype product ideas with limited info or access to physical devices? Here’s a look at the process in broad strokes.
Essentially, Apple thinks about the early stages of prototyping as a process best accomplished with as little investment of time and resources as possible, and as something you can do with very limited access to things like physical or software prototypes. Instead, the approach involves faking as many aspects of the device or app being created as possible, in order to help you learn as much as possible about how best to build either before you start committing actual resources to the project.
So, for example, if you’re building an app for an Apple Watch that doesn’t even exist yet, you would start with a series of static images and fake the interactivity using basic animations in an app like Keynote, with no programming involved at all. Then you pair that with a rough approximation of your target hardware, which, in the case of a Watch that doesn’t yet exist, could just be a simulated watch-sized rectangle running on an iPhone, for instance.
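The faked interactivity described above boils down to a lookup table: static screens wired together by tap targets, with no real application logic behind any of them. Here’s a minimal sketch of that idea in Python; the screen names and tap targets are hypothetical, purely for illustration.

```python
# Each entry maps (current screen, tapped element) -> next screen to show.
# The "screens" would just be static mockup images; nothing here is real
# app logic, which is the point of this prototyping stage.
TRANSITIONS = {
    ("watch_face", "notification"): "message_detail",
    ("message_detail", "reply"): "reply_options",
    ("reply_options", "send"): "watch_face",
}


def tap(screen: str, element: str) -> str:
    """Return the screen the prototype should display next.

    Unrecognized taps leave the prototype on the current screen,
    exactly as a static mockup would behave.
    """
    return TRANSITIONS.get((screen, element), screen)
```

Swapping in a new flow is just editing the table, which is what makes this kind of fake so much cheaper to iterate on than working code.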
The key to this early phase is that while you fake much of the mechanics and programming behind the app and the hardware it’s going to run on, you make the experience of using it as real as possible – which means using it in the situations where the device will actually be used, and seeking the help of the people who’ll actually be using it. The idea behind getting as close as possible to real-world context is that you’ll learn things about how both hardware and software need to change that will influence product design, before you even produce any kind of physical prototype or write a single line of code.
This is where reflexivity enters the process: already, what you’re doing is influencing the design of both software and hardware, and changes to one are prompting changes to the other – and neither thing actually exists yet. Approaching it this way keeps the process cost-efficient and, more importantly, extremely flexible: the project is easier to axe or dramatically change at any stage. You’re more nimble working with elaborate but expertly faked interactivity than with real programming hours or manufacturing time.
Both internally, and externally with third-party developers, it’s crucial that Apple be ready with high-quality software experiences when its new devices launch, but not everyone building for these platforms has the luxury of working with final products, whether devices or apps.
The cycle involved has three parts that loop back around, prompting new iterations: building your fake app or product, testing it with real users and gathering feedback, and then using that feedback to inform the next version. But the key ingredient is not doing any of the heavy lifting – coding, building functional hardware or networking devices – until it’s actually required to advance the process; hardware can be simulated in software, and creative workarounds, including presenting manual processes as automated ones, can take the place of building a proper programming layer.
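That last workaround – presenting a manual process as an automated one – is sometimes called “Wizard of Oz” prototyping. A sketch of the idea, with every name here purely illustrative: the prototype exposes what looks like a smart-reply engine, but behind the interface it simply serves answers a teammate prepared in advance.

```python
import random

# Replies a teammate wrote ahead of time, standing in for a real backend.
# Nothing here analyzes the incoming message; the "intelligence" is faked.
CANNED_REPLIES = [
    "On my way!",
    "Running five minutes late.",
    "See you soon.",
]


def suggest_reply(incoming_message: str) -> str:
    """Pretend to be an automated smart-reply service.

    No model, no network call: a canned reply is chosen at random so user
    testing can start before any real engineering effort is committed.
    """
    return random.choice(CANNED_REPLIES)
```

To the person testing the prototype, the feature feels automated; to the team, it’s a few strings in a list that can be rewritten between sessions as feedback comes in.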
Apple’s strength, and what helps it stand apart from its competitors, is its ability to deeply pair software and hardware experiences; what’s especially impressive is how it’s able to do so with some degree of mutual blindness in the process, and those are lessons that are especially relevant to developers feeding the growing ecosystem of apps and gadgets that work with and complement iOS, Mac and now watchOS, too.