In Short:
Chinese regulators have drawn inspiration from the EU AI Act but are implementing distinctive measures, such as requiring social platforms to screen user-uploaded content for AI-generated material, an approach that would not work in the US. A new AI content labeling regulation is open for public feedback and may soon become mandatory, raising concerns about privacy and free expression. Meanwhile, the Chinese AI industry is pushing for more room to innovate even as the government seeks to maintain control over content.
According to Jeffrey Ding, an assistant professor of political science at George Washington University, Chinese regulators appear to have drawn insights from the EU AI Act. He notes that Chinese policymakers and scholars have acknowledged the influence of the EU’s regulatory frameworks on their own legislative initiatives.
However, the measures introduced by Chinese regulators may not be easily replicable in other countries. For instance, the Chinese government mandates that social platforms screen user-uploaded content for AI-generated materials. Ding emphasizes the distinctiveness of this approach, stating, “This would never exist in the US context, because the US is famous for saying that the platform is not responsible for content.”
Concerns Over Freedom of Expression Online
The draft regulation on AI content labeling is open for public feedback until October 14, and amending and passing it could take several more months. In the meantime, Chinese companies are advised to prepare for its implementation.
Sima Huapeng, founder and CEO of Silicon Intelligence, a Chinese AIGC company that uses deepfake technology to create AI agents and influencers, says his product currently lets users choose whether to label generated content as AI-made. If the law is enacted, he may be required to make that label mandatory.
“If a feature is optional, then most likely companies won’t add it to their products. But if it becomes compulsory by law, then everyone has to implement it,” Sima says. Incorporating markings or metadata labels is not technically challenging, but doing so may increase operating costs for companies that comply.
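To illustrate what such an implicit label might look like in practice, here is a minimal sketch of embedding a machine-readable “AI-generated” tag in a PNG’s metadata with Python’s Pillow library. The key names (`AIGC`, `Label`) and the labeling scheme are illustrative assumptions, not terms mandated by the draft regulation.

```python
# Minimal sketch: attach a machine-readable "AI-generated" label to a PNG's
# metadata. The key names ("AIGC", "Label") are hypothetical, chosen for
# illustration; the draft regulation does not specify them.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_ai_label(src_path: str, dst_path: str) -> None:
    """Copy a PNG, embedding text chunks that mark it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC", "true")                   # hypothetical flag key
    meta.add_text("Label", "AI-generated content")  # human-readable note
    img.save(dst_path, pnginfo=meta)

def read_ai_label(path: str) -> dict:
    """Return the PNG's text metadata, e.g. to check for the AIGC flag."""
    return Image.open(path).text  # populated from the file's tEXt/iTXt chunks

if __name__ == "__main__":
    add_ai_label("generated.png", "generated_labeled.png")
    print(read_ai_label("generated_labeled.png"))
```

Metadata of this kind is easily stripped, since re-encoding an image discards it, which is one reason discussions of implicit labeling also extend to more robust watermarking; this sketch shows only the lightest-weight form of the technique.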
While such policies can mitigate the risks of AI misuse in scams or privacy violations, Sima warns, they could also give rise to a black market for AI services, as companies try to evade compliance to cut costs.
Gregory points to the tension between holding AI content producers accountable and the risk that enhanced monitoring will encroach on individual expression. “The big underlying human rights challenge is to ensure that these approaches don’t further compromise privacy or free expression,” he says. Tools such as implicit labels and watermarks, while useful for tracing misinformation, can also enable platforms and governments to impose stricter oversight on user-generated content.
These regulatory measures emerged partly in response to concerns about the unpredictable behavior of AI technologies, which has pushed China to pursue proactive legislation in this arena. At the same time, the Chinese AI sector is advocating for greater freedom to innovate as it strives to catch up with its Western counterparts. Notably, an earlier generative AI law in China was significantly softened before passage, reducing penalties for noncompliance and eliminating identity verification mandates.
“What we’ve seen is the Chinese government really trying to walk this fine tightrope between maintaining content control and allowing AI labs the strategic space to innovate,” notes Ding. “This is another attempt to achieve that balance.”