In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is sourced to how users interact with generative AI services such as chatbots.
Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data.
These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers enjoy. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.
Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them.
The regulations also build on the CAC’s previous rules on deepfakes that came into effect in January, which make the misuse of “deep synthesis” technology illegal. The new guidelines say providers must “establish mechanisms to handle user complaints” if they discover that the generated content infringes upon a person’s “reputation rights” or “personal privacy.” This is particularly important at a moment when non-consensual sharing of deepfake intimate images is soaring around the world and governments are looking for ways to respond. In the U.S., new proposed legislation seeks to criminalize deepfake images released without the consent of all parties.
The regulations on deepfake content are already being reflected in Chinese law enforcement. Earlier this week, a man in Gansu province was arrested for using ChatGPT to generate fake news about a deadly train crash — the first confirmed case anywhere in the world of a suspect being detained for improper use of a chatbot.
Despite the potential protections they offer to the public, the regulations’ strict data and transparency requirements also help enshrine the political priorities of the Chinese Communist Party (CCP), strengthening the state’s capacity for censorship and control.
Article 4 of the regulations, for example, states that AI-generated content must “reflect the core socialist values” — which could serve as a catchall term for anything authorities decide they oppose — and “may not contain: subversion of state power, harm to national unity, false information, or content that may upset economic or social order.” Through this clause, regulators could find ways to restrict generative AI’s ability to incite political protests, such as last year’s “white paper” movement, and ensure its content toes the Party line.
Censored chatbots — like Baidu’s Ernie — might go further than China’s search engines in controlling public narratives to align with the CCP’s agenda. In order to be viable products, chatbots have to do their job: provide text responses to inquiries, which means evading hard questions might not be an option. If users asked chatbots questions like “What were China’s white paper protests?” or “Was China’s dynamic zero-Covid policy a success?” a chatbot isn’t likely to repeatedly reply “no results found.” Instead, it’s more likely to spread misinformation and propaganda in service of reflecting the official narrative.
Assuming chatbots draw on the same pool of information as Chinese search engines, they will not always have access to true answers: Running “Xinjiang” through Baidu, China’s version of Google, elicits only geographical information about the northwestern region, and no mention of the oppression of Uyghurs taking place there.
Under the new regulations, authorities ultimately get to decide if and how new AI services are rolled out. While specifics on enforcement have been left vague, generative AI providers are required to submit their products to the CAC before they can be released to the public.
China’s new AI rules offer data protections that other countries should be adopting and strengthening. But their authors and enforcers ultimately answer to China’s leaders, not its people.