China targets deepfakes in proposed regulation governing deep learning AI technologies

  • Providers of services that manipulate images and videos will be required to verify user identities and respect social morality
  • Beijing continues to strengthen efforts to rein in companies providing consumer technologies

China has proposed a new regulation to control the provision of deepfake services. Photo: Shutterstock

China plans to require companies providing deepfake and similar artificial intelligence (AI) services to verify the identities of their users and promote Chinese socialist values, according to a draft regulation released by the country’s top cybersecurity watchdog, as Beijing continues to tighten the screws on potentially disruptive technologies.

The “Internet Information Service Deep Synthesis Management Regulations”, unveiled on Friday by the Cyberspace Administration of China, cover technologies that generate or manipulate text, images, audio or video using deep learning, such as face swapping and image enhancement. The rules are open for public consultation through February 28, and the final version is subject to change.

Under the draft regulation, providers of deepfake services must verify the identities of their users before granting them access to the relevant products. Companies are also expected to “respect social morality and ethics” and “follow the correct political direction”.

The rules are the latest in a long line of regulations that Beijing has drawn up to address the hazards of emerging consumer technologies. Earlier this month, China issued a regulation to control algorithms designed to recommend articles, videos, games and merchandise to app users.

In 2019, China released rules banning online video and audio providers from using deep learning to produce fake news.

Last March, Chinese regulators summoned 11 Big Tech firms, including ByteDance, Alibaba Group Holding and Tencent Holdings, for a meeting and directed them to conduct security reviews of the use of deepfake technologies on their platforms. The companies were required to submit the results of their reviews.