Google Identifies First AI-Developed Zero-Day Exploit and Thwarts Planned Mass Exploitation Event
May 12, 2026 – 6:56 pm
Summary
Google has discovered the first zero-day exploit believed to have been developed using artificial intelligence. The threat actor intended to deploy it in a mass exploitation event but was stopped by Google’s Threat Intelligence Group (GTIG). The report highlights the increasing use of AI for hacking, including by state-sponsored actors from China, North Korea, and Russia.
Key Findings
- First AI-Generated Zero-Day: Google found a Python script that bypassed two-factor authentication in an open-source tool; the script contained telltale signs of AI generation.
- State-Sponsored Actors: The report documents how China, North Korea, and Russia are using AI for vulnerability research and supply chain attacks.
- AI Malware: An Android malware named PROMPTSPY uses Google’s Gemini API to autonomously gather biometric data from infected devices.
- Industrial-Scale Application of Generative Models: The use of AI in hacking is transitioning from experimental to industrial scale, as demonstrated by this exploit and the other cases documented in the report.
The Exploit
The zero-day exploit targeted a semantic logic flaw, not a typical memory corruption bug or input sanitization error. It exploited a high-level design mistake where a trust assumption was hardcoded into the two-factor authentication logic. Traditional vulnerability scanners missed this flaw because they are optimized for detecting crashes and data flow sinks, while large language models can perform contextual reasoning and identify such logical errors.
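To make the idea of a "semantic logic flaw" concrete, here is a minimal sketch of what a hardcoded trust assumption in two-factor authentication logic can look like. This is purely illustrative: the report does not disclose the affected tool's code, and every name here (`verify_login`, `verify_totp`, the `"internal"` source label) is hypothetical.

```python
# Hypothetical sketch of a semantic logic flaw in 2FA verification.
# All names are illustrative; none come from the affected tool.

def verify_totp(user_secret: str, submitted_code: str) -> bool:
    """Stand-in for a real TOTP check (placeholder, not real TOTP math)."""
    return submitted_code == "000000"

def verify_login(password_ok: bool, totp_code: str,
                 request_source: str) -> bool:
    if not password_ok:
        return False
    # FLAW: a hardcoded trust assumption -- requests labeled "internal"
    # skip the second factor entirely. There is no memory corruption and
    # no input-sanitization bug: the code runs exactly as designed, so
    # crash-oriented scanners have nothing to flag.
    if request_source == "internal":
        return True
    return verify_totp("user-secret", totp_code)

# An attacker who can control request_source (e.g. via a spoofable
# header) bypasses 2FA without ever touching the TOTP check:
assert verify_login(True, "wrong-code", "internal") is True
assert verify_login(True, "wrong-code", "external") is False
```

Spotting this class of bug requires reasoning about what the branch *means* (who should be trusted, and when), which is why the article argues language models can find flaws that pattern-based scanners miss.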
Response
GTIG worked with the affected vendor to patch the vulnerability before it could be exploited. The group does not believe Gemini was directly used in this case but highlights the threat actor’s history of high-profile incidents.