At a Glance:
- First AI-powered zero-day exploit discovered: Google Threat Intelligence Group found what it believes is the first real-world case of criminals using AI to create an attack against a previously unknown software vulnerability — a so-called “zero-day exploit.”
- What was attacked: The exploit targeted a popular open-source web-based system administration tool. It could bypass two-factor authentication (2FA) if attackers already had a user’s login credentials.
- How it was stopped: Google identified the AI-generated attack code, warned the software maker, and helped release a fix before the criminals could launch a large-scale attack.
- Signs it was made by AI: The malicious code contained unusually detailed explanations (docstrings), a made-up security score, and textbook-style programming — common traits of AI-generated content.
- Broader threat: State-backed hackers from China and North Korea, along with cybercrime groups, are increasingly using AI to identify vulnerabilities more quickly and develop more advanced malware.
- What it means for everyday users and companies: Organizations and individuals that rely on open-source admin tools or similar software need to apply security updates promptly. AI is lowering the skill barrier for attackers, making cyber threats more common and potentially more damaging.
The Details:
Google Threat Intelligence Group (GTIG) has identified what it describes as the first known instance of a threat actor using an AI-developed zero-day exploit in the wild. The exploit, found in a Python script, targeted a two-factor authentication (2FA) bypass in a popular open-source, web-based system administration tool.
A zero-day exploit is a cyberattack that exploits a previously unknown security flaw in software, hardware, or firmware, per Google Cloud. The term “zero-day” means the software maker has had zero days to develop and release a fix, leaving users exposed until a patch becomes available.
In the recent case identified by Google Threat Intelligence Group, attackers used an AI-assisted tool to discover and weaponize such a flaw in an open-source administration system before the vendor was aware of it.
GTIG stated that the criminal threat actors planned a mass exploitation event, but proactive discovery by the group, followed by responsible disclosure to the vendor, may have prevented widespread use. The incident is detailed in GTIG’s May 11, 2026, report on AI-powered threats.
According to the report, analysis of the exploit code revealed hallmarks of large language model (LLM) generation, including numerous educational docstrings, a hallucinated CVSS score, and a structured, textbook-style Pythonic format. GTIG stated it has “high confidence” that an AI model supported the discovery and weaponization of the vulnerability, though it does not believe Google’s Gemini was used.
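GTIG did not publish the exploit code itself, but the traits it cited are recognizable. A harmless, purely illustrative Python sketch of what those hallmarks look like — an over-explained, tutorial-style docstring and a fabricated CVSS score attached to a made-up CVE identifier:

```python
def check_target_reachable(host: str) -> bool:
    """
    Check whether the target host string appears usable.

    This function illustrates the verbose, educational docstring style
    that GTIG flagged as a hallmark of LLM-generated code: every step
    is narrated as if for a textbook reader.

    Reference: CVE-XXXX-XXXX (CVSS 9.8, Critical)  # a hallucinated
    # score like this, attached to no real advisory, was another
    # tell-tale sign cited in the report.

    Args:
        host: The hostname or IP address to validate.

    Returns:
        True if the host string is non-empty, False otherwise.
    """
    # Textbook-style, overly explicit logic for a trivial check.
    if host is None or len(host) == 0:
        return False
    return True
```

None of the identifiers above come from the report; they only demonstrate the stylistic fingerprints (docstrings, invented scores, textbook structure) that analysts used to attribute the code to an LLM.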
The vulnerability stemmed from a high-level semantic logic flaw — a hard-coded trust assumption in the software — rather than a lower-level bug class such as memory corruption. The exploit required valid user credentials to function, but once attackers had those, it allowed them to bypass 2FA entirely.
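The actual vulnerable code was not disclosed, but a hard-coded trust assumption of this general kind can be sketched. In the hypothetical Python example below (all names are invented for illustration), a login routine still demands a valid password, yet silently skips the one-time-password check for any request carrying an "internal" marker it blindly trusts:

```python
def verify_login(username, password, otp, headers, users):
    """Hypothetical login check with a flawed trust assumption."""
    user = users.get(username)
    # Valid credentials are still required, as in the reported exploit.
    if user is None or user["password"] != password:
        return False
    # Flawed hard-coded trust assumption: requests marked "internal"
    # are assumed to have completed 2FA elsewhere, so the OTP check
    # is skipped. An attacker who can set this header bypasses 2FA.
    if headers.get("X-Internal-Auth") == "trusted":
        return True
    # Normal path: the one-time password must match.
    return otp == user["expected_otp"]
```

A semantic flaw like this is invisible to memory-safety tooling; it only surfaces when the code's implicit trust boundaries are reasoned about, which is precisely the kind of analysis LLMs can help automate.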
GTIG worked directly with the impacted vendor to develop a fix, which disrupted the planned campaign. The specific tool name was not disclosed in the public report.
This marks an escalation in how adversaries leverage generative AI. The report also notes state-sponsored actors linked to the People’s Republic of China (PRC) and Democratic People’s Republic of Korea (DPRK) showing interest in AI for vulnerability discovery, including persona-driven prompting and integration of specialized vulnerability datasets.
Cybercrime groups have used AI for other purposes, such as accelerating malware development with obfuscation, polymorphic code, and decoy logic. Examples include malware families like PROMPTFLUX, HONESTCUE, CANFAIL, and LONGSTREAM.
Growing Cyber Risks in the U.S.
The discovery comes as reports of AI-assisted cyber activity targeting U.S. organizations are increasing, according to Security Affairs. Cyber threats continue to affect businesses, government entities, and individuals through data breaches, ransomware, and supply chain attacks.
For companies, particularly those relying on open-source software or web administration tools, the incident underscores the need for rapid vulnerability management and monitoring for indicators of AI-generated code, per Infosecurity Magazine. Individuals using such tools for personal or small-business administration may face heightened risk if patches are not applied promptly.
GTIG emphasized that while AI lowers barriers for attackers, defenders are also deploying AI tools, such as Google’s Big Sleep for vulnerability identification and CodeMender for automated fixes.