Apple has a couple of new patents granted by the USPTO today (via AppleInsider), one of which deals with adapting its mobile software to the particular ways we interact with mobile devices. It describes a way in which a mobile OS could alter how its interface behaves and responds to user input based on whether or not it detects that it's in motion, which could be great for on-the-go iPhone interaction.
The system invented by Apple would use sensors built into the phone, including the accelerometer and gyroscope, to detect when a user is in motion. It could distinguish between walking and running, for instance, and even detect the angle at which the screen is being held. Based on all this information, the system would intelligently modify the graphical user interface of the phone's software to make it easier to use.
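The patent doesn't publish an actual algorithm, but a minimal sketch of the detection step might look like the following, using the variance of accelerometer magnitude readings as a crude walking/running signal. The thresholds and function names here are invented for illustration:

```python
import statistics

# Hypothetical thresholds -- the patent does not disclose real values.
WALK_VARIANCE = 0.05   # variance of acceleration magnitude (in g^2) above which we guess "walking"
RUN_VARIANCE = 0.50    # variance above which we guess "running"

def classify_motion(accel_magnitudes):
    """Guess the user's motion state from a window of accelerometer
    magnitude samples (in g): 'stationary', 'walking', or 'running'."""
    variance = statistics.pvariance(accel_magnitudes)
    if variance < WALK_VARIANCE:
        return "stationary"
    if variance < RUN_VARIANCE:
        return "walking"
    return "running"
```

A steady reading near 1 g (the device at rest) classifies as stationary, while larger oscillations in the magnitude push the guess toward walking and then running. A real implementation would presumably fuse gyroscope and orientation data as well, as the patent describes.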
Individual UI elements might be enlarged, along with their touch targets, to make them easier to hit, for instance. Fisheye and other effects that emphasize certain portions of the screen might be applied, too. Whole rows could be dynamically shifted to compensate for a bobbing motion, in some cases, making the display seem stable even while the device is moving around.
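The simplest of those adaptations, enlarging touch targets, could be sketched as a per-state scale factor applied to a control's hit area. The multipliers below are invented for illustration; the patent describes the idea, not specific values:

```python
# Hypothetical scale factors per motion state -- not from the patent.
TOUCH_SCALE = {"stationary": 1.0, "walking": 1.3, "running": 1.6}

def adapted_touch_size(base_points, motion_state):
    """Return an enlarged touch-target size (in points) for a control,
    growing the hit area as the user's motion gets more vigorous."""
    return base_points * TOUCH_SCALE[motion_state]
```

So a standard 44-point touch target would stay 44 points while stationary but grow to roughly 70 points while running, trading screen density for hit accuracy.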
It makes sense to try to adapt devices for easier use while out and about, since it's fairly common for people to pull their phones out while they're on the move. This patent has been around for a long time, however (it was first applied for in 2007), and it seems like it would be immensely challenging to get the various shifts right in order for this to be useful. If it could work perfectly every time, it would be a great boon, but I wouldn't hold my breath about seeing this make it to shipping products.
Apple's second new patent is much more likely to become real, and probably fairly soon. It describes a means for editing 3D video in software like Final Cut Pro. FCP X actually doesn't include 3D video editing, though most of the competitors it faces in the market, including Adobe Premiere, do. Editing 3D video created using stereoscopic imaging means treating two frames captured simultaneously by two different cameras as separate streams, then stitching them back together.
The patent describes how you'd be able to link the aspects that matter across both frames, like time-based cuts and trims, while keeping other elements separate, such as color correction or visual tweaks. This makes sense, as the two cameras might capture images with slightly different white balance, hue or other qualities, but timings should be consistent across both.
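That linked/unlinked split could be modeled as a clip whose timing edits apply to both eyes at once while color settings stay per-eye. This is a sketch under assumed names, not Apple's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class EyeSettings:
    # Per-eye visual adjustments remain independent between the two cameras.
    white_balance_k: int = 6500
    saturation: float = 1.0

@dataclass
class StereoClip:
    """A stereoscopic clip: timing is linked across both eyes,
    color correction is not. All names here are invented."""
    in_frame: int = 0
    out_frame: int = 0
    left: EyeSettings = field(default_factory=EyeSettings)
    right: EyeSettings = field(default_factory=EyeSettings)

    def trim(self, in_frame, out_frame):
        # A cut or trim is a linked edit: it applies to both eyes at once.
        self.in_frame, self.out_frame = in_frame, out_frame

    def correct_color(self, eye, **settings):
        # Color correction is an unlinked edit: it targets a single eye.
        target = self.left if eye == "left" else self.right
        for name, value in settings.items():
            setattr(target, name, value)
```

Trimming the clip moves both eyes' in and out points together, while warming the white balance of the left eye leaves the right eye untouched, mirroring the patent's distinction between linked timing and independent visual tweaks.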
It's actually a fairly basic patent with antecedents in competing software, and it makes sense as an update to FCP down the road. A lot of that may depend on the future of the Hollywood blockbuster, however, as studios keep trying to drum up demand for 3D but don't seem to be generating any great desire for the tech in the end.