This Natural Language Interface Aims To Let Anyone Make Animations Jump

[Video: https://www.youtube.com/watch?v=vbUJ8jyqUOc]

UK-based post-production animation software developer IKinema has been demoing a natural language interface for controlling animations (shown in the video above), which CEO Alexandre Pechev reckons could prove its worth in a future of virtual and blended reality, assuming wearables like Magic Leap and Microsoft's HoloLens make it big.

"I see an opportunity for areas around virtual reality and mixed reality where people will have to start generating content in a very easy way in order to use this. So if Microsoft's HoloLens and Magic Leap succeed we have to have tools to drive virtual characters around us," Pechev tells TechCrunch, discussing potential commercial applications for the project, currently code-named Intimate.

"And not only drive them… we have to also customize it, so if the animation bank is for characters running on a flat surface but if the surface changes or if you want the character to look at you we have to adapt this without animation clips, and that's what we do at the moment as a part of our work with games studios, but then we're porting all of this to Intimate."

The idea of a "simplistic" natural language interface that makes it easy to stitch existing animations into new content was inspired in part by the realization that demand for blended reality content is likely to grow, and with it demand for more accessible creation tools.

“Most of our work in the past has been around games and motion capture… On the post-production side we always see a need for someone to stitch animations and to make a new clip — so we started thinking about ideas around that,” he says.

"If you take live motion capture, at the moment it's really capturing humans predominantly and when it comes to combining humans with something else in the virtual world where the director can see not the real actor but actually a representation in the virtual world, it's very hard to actually add any other characters that interact with this actor. And this prompted an idea to provide a simplistic interface for someone like the director to drive these animations in the virtual world which interacts with the real actor."

“Users that want to use animation in their packages are not necessarily professional animators so a professional animator’s job is to generate clips in the best possible way, but then they go to a bank and this bank then is digested so that someone who doesn’t understand anything can demand action of their characters, without knowing what is happening behind,” he adds.

IKinema's technique does not compute everything in real time. Instead it draws on existing animation libraries, analyzing the content to identify the various poses so that the animation can be controlled through its interface with simple natural language commands, such as 'jump', 'turn left' or 'run'. The system is also designed to fill in the gaps, smoothing the transitions between different actions.
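At its simplest, a command interface over an animation bank of this kind amounts to a lookup from phrases to clips. The sketch below is purely illustrative: the bank contents, file paths and function name are invented, not IKinema's API.

```python
# Hypothetical mapping from natural language commands to clips in an
# animation bank. The commands and paths are invented for illustration.
ANIMATION_BANK = {
    "jump": "clips/jump.anim",
    "turn left": "clips/turn_left.anim",
    "run": "clips/run.anim",
}

def command_to_clip(command):
    """Resolve a natural-language command to an entry in the animation bank."""
    key = command.lower().strip()
    if key not in ANIMATION_BANK:
        raise KeyError(f"no animation for command: {command!r}")
    return ANIMATION_BANK[key]
```

In practice the interesting work happens after the lookup, when the chosen clip has to be joined smoothly onto whatever the character is already doing.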

Pechev says it works by analyzing the cloud of points that defines each pose in order to identify it, then connecting the dots to stitch transitions between different poses.

“What we do is we analyze [the animations] and sort of digest it so that it’s converted to the linguistic input interface… [by] identifying those transition points, and fixing automatically the sliding when you switch from one animation to another,” he explains. “It’s about finding the best way to generate a continuous clip from existing animations.”
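The process Pechev describes (find the most similar poses in two clips, cut there, and smooth the switch) can be sketched roughly as follows. This is a toy illustration under invented assumptions: poses as flat lists of joint coordinates, a brute-force nearest-pose search, and linear blending. It is not IKinema's actual algorithm.

```python
from itertools import product
import math

def pose_distance(a, b):
    """Euclidean distance between two poses (flat lists of joint coordinates)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_transition(clip_a, clip_b):
    """Find the (frame in A, frame in B) pair with the most similar poses."""
    return min(product(range(len(clip_a)), range(len(clip_b))),
               key=lambda ij: pose_distance(clip_a[ij[0]], clip_b[ij[1]]))

def blend(pose_a, pose_b, steps):
    """Linearly interpolate between two poses to smooth the switch."""
    return [[pa + (pb - pa) * t / (steps + 1) for pa, pb in zip(pose_a, pose_b)]
            for t in range(1, steps + 1)]

def stitch(clip_a, clip_b, steps=3):
    """Cut clip A at its best exit frame, blend, then continue with clip B."""
    i, j = best_transition(clip_a, clip_b)
    return clip_a[:i + 1] + blend(clip_a[i], clip_b[j], steps) + clip_b[j:]
```

A production system would work on skeletal joint rotations rather than raw point clouds, constrain foot contacts to fix sliding, and restrict the search to plausible transition windows, but the shape of the problem is the same.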

While it's being designed to stitch animations together smoothly, IKinema is not aiming to replace the role of the animator. On the contrary: the focus is on providing an interface that lets non-animators more easily drive animators' creations.

"Animators want to maintain their look and feel in the animation. They want to see this as the final result. Our job really is to keep this animation but provide a simple interface to use," he adds. "That's our focus."

The system is not limited to human or humanoid animations, but can be applied to "anything", according to Pechev: a hand, a flower, even "possibly" a face. The focus, however, is on single animations rather than more complex scenarios involving multiple animated objects.

The tech is still in development, with the team aiming to release a commercial product sometime next year, including a run-time SDK middleware and an offline UI that will be integrated into "industry standard" animation packages such as Maya, according to Pechev. The project is part-funded by the UK government under the Innovate UK program.

Another future scenario he sketches is the possibility of using the software to generate animated content automatically, directly from a movie script, by parsing and interpreting the written directions (with complementary tech, such as OCR, used to read printed scripts).

“We are going to spend more time… on the project to make the linguistic description so sophisticated that someone can actually scan or OCR and read a normal script from a director and convert that to action,” he says, adding: “I think that would be an interesting challenge.”
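As a thought experiment, the script-to-action idea could begin with something as crude as scanning stage directions for a known action vocabulary and emitting commands in order. The sketch below is entirely hypothetical; the vocabulary, function name and approach are invented, and real script understanding would need far richer language processing.

```python
import re

# Invented action vocabulary for illustration; a real system would need
# full natural language understanding, not keyword spotting.
KNOWN_ACTIONS = {"jump", "run", "turn left", "turn right", "walk"}

def directions_to_commands(script_text):
    """Pull recognizable actions, in script order, out of stage directions."""
    lowered = script_text.lower()
    hits = []
    for action in KNOWN_ACTIONS:
        # record every whole-word occurrence of this action with its position
        for m in re.finditer(r"\b" + re.escape(action) + r"\b", lowered):
            hits.append((m.start(), action))
    # sort by position so commands come out in the order the script gives them
    return [action for _, action in sorted(hits)]
```

Each extracted command could then be fed to the same interface a human operator would use, turning a page of directions into a sequence of animation clips.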

IKinema sees three main targets for the tech at this stage: offline post-production; games and training/simulation, using the tech to drive in-game actions; and virtual production, again for games and movie production but also for the blended reality applications discussed above.