Adobe’s Project Fast Fill is generative fill for video

As part of its MAX conference, Adobe traditionally shows off some of its more forward-looking tech, which may or may not end up in its Creative Cloud apps at some point in the future. The idea is to show what its engineers are working on, and right now, as you can imagine, that’s a lot of generative AI. With Firefly now part of Photoshop and Illustrator, the next frontier is video, and unsurprisingly, that’s where Adobe’s most interesting “sneak” of this year comes in. Project Fast Fill is, at its core, the generative fill the company introduced in Photoshop, but for video.

Project Fast Fill lets editors remove objects from a video or change its background as if they were working with a still image, all with a simple text prompt. They only have to make the edit once; it then propagates to the rest of the scene. Adobe says this works even in very complex scenes with changing lighting conditions.
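Adobe hasn’t said how Fast Fill works under the hood, but the hard part of the problem becomes obvious when you compare it to the naive baseline of running a text-guided inpainting model on every frame independently: each frame gets filled with no regard for its neighbors, so the result flickers. A minimal sketch of that baseline, using the open-source diffusers library rather than anything from Adobe (the checkpoint, file names and single static mask here are illustrative assumptions), might look like this:

```python
# Naive baseline: text-guided inpainting applied frame by frame.
# This is NOT Adobe's method -- it only illustrates why propagating a single
# edit consistently across a whole scene is the hard part: each frame is
# filled independently, so the output flickers between frames.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # example open checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "empty road, no car"        # what should replace the masked object
mask = Image.open("car_mask.png")    # white where the object should be removed

edited_frames = []
for i in range(120):                 # assumes frames extracted as frame_0000.png, ...
    frame = Image.open(f"frame_{i:04d}.png").convert("RGB").resize((512, 512))
    out = pipe(prompt=prompt, image=frame, mask_image=mask.resize((512, 512)))
    edited_frames.append(out.images[0])  # no temporal consistency -> visible flicker
```

The whole pitch of Fast Fill is that the user supplies that prompt and mask once, and the system handles the cross-frame consistency that this per-frame loop lacks.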

Over the course of the last few months, we’ve seen an increase in AI-powered tools across video editors, including Adobe Premiere Pro competitors like DaVinci Resolve. These typically start with voice recognition for captions and object recognition for masking, but generative fill may be just the kind of feature where Adobe has a major advantage, thanks to its work on building its own Firefly models.

Another AI-centric project Adobe is showing off today is Project Draw & Delight. Here, the user can doodle a rough sketch and add a text prompt, which Adobe’s AI then turns into a polished vector drawing. Yesterday, Adobe launched its generative AI feature for Illustrator, and in many ways, this feels like an extension of that work.

Project Poseable also relies on AI. The idea here is to make it easier to create prototypes and storyboards by using AI to speed up the process of posing a 3D model. “Instead of having to spend time editing every tiny detail of a scene — the background, different angles and poses of individual characters, or the way the character interacts with surrounding objects in the scene — users can tap into AI-based character posing models and use image generation models to easily render 3D character scenes,” Adobe explains.

The last “sneak” is Project Stardust, Adobe’s next-gen AI-based image editing engine, which it already presented yesterday. You can read more about that here.

As with all of these previews, it’s hard to say how well they’ll work outside of a demo environment, or whether they’ll ever find their way into Adobe’s products. For something like Project Fast Fill, though, it feels like it may be only a matter of months before we see it come to Adobe’s video tools.