China’s generative AI rules set boundaries and punishments for misuse

As text-to-image generators and intelligent chatbots keep blowing people’s minds, China has swiftly moved to lay out what people can do with the tools built on powerful AI models. The country’s regulators clearly err on the side of caution when it comes to the consequences of generative AI. That’s a contrast to the U.S., which has so far largely let the private sector make its own rules, raising ethical and legal questions.

The Cyberspace Administration of China, the country’s top internet watchdog, recently passed a regulation on “deep synthesis” technology, which it defines as “technology that uses deep learning, virtual reality, and other synthesis algorithms to generate text, images, audio, video, and virtual scenes.” The regulation applies to service providers that operate in China and will take effect on January 10.

Nothing in the set of rules comes as a surprise, as the restrictions are mostly in line with those governing other forms of consumer internet services in China, such as games, social media and short videos. For instance, users are prohibited from using generative AI to engage in activities that endanger national security, damage the public interest or are illegal.

Such restrictions are made possible by China’s real-name verification apparatus. Anonymity doesn’t really exist on the Chinese internet as users are generally asked to link their online accounts to their phone numbers, which are registered with their government IDs. Providers of generative AI are similarly required to verify users using mobile phone numbers, IDs or other forms of documentation.

China also unsurprisingly wants to censor what algorithms can generate. Service providers must audit AI-generated content and user prompts manually or through technical means. Baidu, one of the first to launch a Chinese text-to-image model, already filters politically sensitive content. Censorship is a standard practice across all forms of media in China. The question is whether content moderation will be able to keep up with the sheer volume of text, audio, images and videos that get churned out of AI models.
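The “technical means” of auditing mentioned in the regulation are not specified, but the simplest form providers use in practice is automated keyword screening of prompts before they ever reach the model. Below is a minimal, purely illustrative sketch of that idea; the blocklist terms and function name are hypothetical, not drawn from any real provider’s moderation system.

```python
# Hypothetical sketch of keyword-based prompt screening, one possible
# "technical means" of auditing user prompts before generation.
# BLOCKLIST contents and the flag_prompt name are illustrative only.

BLOCKLIST = {"forbidden_topic", "banned_phrase"}

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt contains any blocklisted term."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKLIST)
```

Real-world systems layer far more on top of this (classifiers, image hashing, human review queues), precisely because simple keyword matching struggles with the volume and variety of generated content the article describes.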

The Chinese government should perhaps get some credit for stepping in to prevent the misuse of AI. For one, the rules ban people from using deep synthesis tech to generate and disseminate fake news. When the data used for AI training contains personal information, technology providers must follow the country’s personal information protection law. Platforms should also remind users to seek approval before altering others’ faces and voices with deep synthesis technology. Lastly, one rule should alleviate some concerns around copyright infringement and academic cheating: if AI-generated output could confuse or mislead the public, the service provider must place a watermark in a prominent position informing the public that the work was machine-generated.

Users in violation of these regulations will face punishments. Service operators are asked to keep records of illegal behavior and report them to the relevant authorities. On top of that, platforms should also issue warnings, restrict usage, suspend service, or even shut down the accounts of those who break the rules.