A solution remains elusive. But Microsoft’s making an attempt with new media provenance features debuting at its annual Build conference.
The new media provenance capabilities launch for Bing Image Creator and Designer, Microsoft’s Canva-like web app for generating designs for presentations, posters and more to share on social media and other channels. Microsoft says the capabilities will let consumers verify whether an image or video was generated by AI. Using cryptographic methods, they are scheduled to roll out in the coming months and will mark and sign AI-generated content with metadata about the origin of the image or video.
It’s not as straightforward as a visible watermark. To read the signature, sites will need to adopt the Coalition for Content Provenance and Authenticity (C2PA) interoperable specification, a spec created with input from Adobe, Arm, Intel, Microsoft and visual media platform Truepic. Only then can a site alert consumers when content from Designer or Image Creator was generated or modified by AI.
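The sign-then-verify flow described above can be sketched in miniature. This is a simplified illustration, not the actual C2PA format: the real spec defines a binary manifest store and uses X.509 certificate-based signatures, whereas this sketch stands in a shared HMAC key and a JSON manifest to show the core idea — the generator signs a hash of the media plus origin metadata, and a consuming site checks both the signature and the hash.

```python
import hashlib
import hmac
import json

# Stand-in shared key for illustration only; C2PA itself uses
# certificate-based public-key signatures, not HMAC.
SECRET_KEY = b"demo-signing-key"


def sign_manifest(media_bytes: bytes, origin: str) -> dict:
    """Generator side: attach a provenance manifest to the media."""
    manifest = {
        "origin": origin,  # e.g. which tool produced the image
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Consuming site: check the signature, then check the media hash."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(manifest.get("signature", ""), expected)
        and unsigned.get("content_hash") == hashlib.sha256(media_bytes).hexdigest()
    )


image = b"\x89PNG fake image bytes"
manifest = sign_manifest(image, "Bing Image Creator")
print(verify_manifest(image, manifest))        # untouched media verifies
print(verify_manifest(image + b"!", manifest)) # any edit breaks the hash check
```

The point of the hash-inside-a-signature design is that tampering with either the media or its claimed origin invalidates verification, which is why a site must implement the spec before it can surface a provenance label to users.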
So, the question is, will Microsoft’s efforts make much of a difference when so many image-generating tools haven’t embraced similar media provenance standards? C2PA does have the backing of Adobe, which recently launched its own range of generative AI tools, including an integration with Google’s Bard chatbot. But one of the more prominent players in the generative AI space, Stability AI, only very recently signaled a willingness to embrace a spec like the one Microsoft is proposing.
Standards aside, Microsoft’s move to adopt a media provenance-tracking mechanism is in line with broader industry trends as generative AI takes hold. In May, Google said that it would use embedded metadata to signal visual media created by generative AI models. Separately, Shutterstock and generative AI startup Midjourney adopted guidelines to embed a marker indicating that content was created by a generative AI tool.