The company whose tech powered the sensational MyHeritage app that turned classic family photos into lifelike moving portraits is back with a new implementation of its technology: transforming still photographs into ultra-realistic videos of subjects capable of saying whatever you want.
D-ID’s Speaking Portraits may look like the notorious “deepfakes” that have made headlines over the past couple of years, but the underlying tech is actually quite different, and there’s no training required for basic functionality.
D-ID, which debuted at TechCrunch Battlefield in 2018 with a very different focus (scrambling facial recognition tech), launched its new Speaking Portraits product live at TechCrunch Disrupt 2021. The company showed off a number of use cases, including a multilingual TV anchor capable of expressing various emotions; virtual chatbot personas for customer support interactions; training courses for professional development; and interactive conversational video ad kiosks.
Both this new product and D-ID’s partnership with MyHeritage, which saw the latter company’s app briefly take over the top of Apple’s App Store charts, are obviously major departures from the company’s initial focus. As recently as May of last year, D-ID was still raising funding based on its earlier approach, but its partnership with MyHeritage debuted in February, followed by a similar deal with GoodTrust and a splashy tie-up with Warner Bros. on the Hugh Jackman film “Reminiscence” that allowed fans to insert themselves into its trailer.
D-ID’s pivot might seem more dramatic than most, but from a technical perspective its new focus on bringing photos to life is not so far off from its de-identification software. D-ID CEO and co-founder Gil Perry told me that the company chose the new direction because it was apparent that there’s a very large addressable market for this kind of application.
Big-name clients like Warner Bros., as well as an App Store-dominating app from a relatively unknown brand, would seem to support that assessment. Speaking Portraits, however, is aimed at clients both big and small, and allows anyone to generate a full HD video from a source image, plus either recorded speech or typed text. D-ID is launching the product with support for English, Spanish and Japanese, but plans to add other languages in the future, too, as customers request support for those.
D-ID offers two basic categories of Speaking Portrait. The first is a “Single Portrait,” which can be made using just one still image and features an animated head while the rest of the image stays static. This option also works only with the photo’s existing background.
For a touch more uncanny realism, there’s a “Trained Character” option that requires submitting a 10-minute training video of the requested character, following guidelines supplied by the company. This option has the advantage of working against a custom, swappable background, and features some preset animation options for the character’s body and hands.
Check out an example of a Speaking Portrait newscaster generated using the trained character method below to get a sense of how realistic it can be:
The demo that Perry showed us live at Disrupt today was created from a still photo of himself as a child. The photo was mapped to facial expressions performed by a sort of human puppeteer who also voiced the script for what the Speaking Portrait version of Gil ended up saying during the interaction between his current and younger self. You can see a video of how the speaker’s expressions were mirrored by the animated photo below:
Obviously, the ability to create photo-realistic videos from just a single photo that can convincingly deliver any lines you want is a bit of a hair-raising prospect. We’ve already seen far-ranging debates about the ethics of deepfakes, as well as industry efforts to fingerprint and identify realistic but artificial AI-generated content.
Perry said at Disrupt that D-ID is “keen to make sure it’s used for good, not bad,” and that to achieve that, the company will issue a pledge at the end of October, alongside partners, outlining its commitments to “transparency and consent” when it comes to using tech like Speaking Portraits. The purpose of the commitment is to ensure that “users aren’t confused about what they’re seeing and that people involved give their consent.”