China Launches Months-Long Campaign Against AI Misuse Targeting Deepfakes, Fraud, and Disinformation
April 30, 2026 - 11:23 am
The Cyberspace Administration’s annual ‘Qinglang’ campaign arrives in a materially different regulatory environment from last year’s edition, in the same week the White House accused China of running ‘industrial-scale’ AI theft operations.
China has launched a months-long enforcement campaign targeting the misuse of artificial intelligence, according to Reuters. The campaign, initiated by the Cyberspace Administration of China (CAC) and coordinated with the Ministry of Public Security and other agencies, targets AI-enabled fraud, deepfakes, disinformation, and illegal applications that violate privacy and intellectual property rights.
This is the 2026 edition of what has become an annual enforcement mechanism, the ‘Qinglang’ (Clear and Bright) special campaign series. Its immediate predecessor, launched on 30 April 2025 and titled ‘Rectification of AI Technology Misuse’, ran for three months across two phases.
By the time its first phase concluded in June 2025, authorities had taken down more than 3,500 AI-related products, scrubbed over 960,000 pieces of illegal or harmful content, and shut down or penalised more than 3,700 accounts.
This year’s campaign arrives in a substantially more developed regulatory environment and against a geopolitically charged backdrop, making its scope and targets distinctly more complex than those of its predecessor.
What the Campaign Targets
China’s AI abuse enforcement campaigns are structured around a taxonomy of misuse that has expanded with each iteration as both the capabilities and the criminal applications of AI have advanced. Based on the established Qinglang enforcement framework and the new regulatory measures enacted in 2025 and early 2026, this year’s campaign is expected to target several categories simultaneously.
The first and most commercially significant is AI-enabled fraud and impersonation. China has seen a dramatic increase in the use of voice-cloning and face-swapping deepfake technology to impersonate celebrities, business executives, and government officials in scams targeting ordinary consumers.
The CAC’s 2025 campaign specifically targeted the use of AI to ‘impersonate relatives and friends and engage in illegal activities such as online fraud’ and the ‘improper use of AI to resurrect the dead’, a reference to the use of AI-generated likenesses of deceased people without consent.
The CAC published draft rules for digital virtual human services on 3 April 2026, covering consent requirements for likeness use and banning the bypass of biometric authentication systems, with a public comment window that closes on 6 May.
The second major category is AI-generated disinformation and ‘online water army’ activity, the industrial-scale use of AI to create fake social media accounts, generate and distribute coordinated content, manipulate engagement metrics, and create artificial trending topics.