Google Reports First-Ever AI-Built Zero-Day Attack Disrupted in the Wild

Main Takeaway
Google says it stopped hackers who used AI to build a zero-day exploit, a milestone that confirms long-held fears about the weaponization of AI in cyberattacks.
What happened and why it matters now
Google's Threat Intelligence Group has disrupted what it describes as the first known zero-day attack built with AI. According to Bloomberg, security researchers at Alphabet believe a cybercrime group used artificial intelligence to create an exploit capable of bypassing defenses in a widely used system administration tool. The Verge reported that Google spotted and stopped the exploit before it could be deployed, marking a significant milestone in AI-driven cybersecurity threats.
The attack represents the crossing of a threshold that security experts have anticipated for years. Fortune characterized the event as the realization of a dire warning, with one source quoted as saying "the world might actually be more dangerous." The incident moves AI-assisted hacking from theoretical concern to documented reality.
Google's disclosure comes amid broader findings about AI misuse by threat actors. The company has separately documented state-sponsored hackers from Iran, North Korea, China, and Russia using AI models including Google's own Gemini for reconnaissance, malware development, and phishing campaigns. The convergence of criminal and state-level AI adoption suggests a rapid acceleration of the threat landscape that defenders must now confront.
How Google's Big Sleep AI caught the vulnerability
Google's own AI systems played a central role in detecting the threat. The company's Big Sleep AI framework, an LLM-based vulnerability research tool, discovered a critical security flaw, designated CVE-2025-6965, that hackers had kept secret and were preparing to exploit. The Record reported that Google said the vulnerability was "only known to threat actors and was at risk of being exploited."
Big Sleep evolved from earlier Google research into AI-assisted vulnerability detection. The Hacker News noted that the framework had previously uncovered a zero-day vulnerability in the SQLite database engine, demonstrating the tool's capability to find flaws in widely deployed software. This latest detection represents a shift from finding unpatched vulnerabilities in open source projects to identifying actively weaponized zero-days in the wild.
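Google has not published Big Sleep's internals, so the following is only a minimal sketch of how an LLM-based vulnerability research loop might be structured; the `llm_complete` callable, the prompt, and the severity scheme are illustrative assumptions, not Big Sleep's actual design.
```python
# Hypothetical sketch of an LLM-assisted vulnerability triage loop.
# This is NOT Big Sleep's implementation (Google has not released it);
# `llm_complete` stands in for any chat-completion API call.
from dataclasses import dataclass

@dataclass
class Finding:
    function: str   # name of the reviewed function
    severity: str   # "low" | "medium" | "high"
    concern: str    # e.g. "possible off-by-one in bounds check"

PROMPT = """You are a security auditor. Review the C function below.
Report each memory-safety or logic flaw on its own line as
`severity: description`, or reply NONE.

{code}"""

def triage(functions: dict[str, str], llm_complete) -> list[Finding]:
    """Ask the model about each candidate function and collect concerns."""
    findings = []
    for name, source in functions.items():
        reply = llm_complete(PROMPT.format(code=source))
        for line in reply.splitlines():
            line = line.strip()
            if not line or line.upper().startswith("NONE") or ":" not in line:
                continue
            severity, concern = line.split(":", 1)
            findings.append(Finding(name, severity.strip().lower(), concern.strip()))
    # Surface high-severity items first for human (or fuzzer) follow-up.
    return sorted(findings, key=lambda f: f.severity != "high")
```
In practice, published accounts of such tools pair the language model with fuzzing or sandboxed execution to confirm candidate flaws, since raw model output produces many false positives.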
The discovery creates an ironic symmetry: AI detecting an AI-built attack. Google's defensive AI spotted the vulnerability before attackers could deploy the exploit their AI had helped construct, suggesting an arms-race dynamic in which detection and attack capabilities evolve in parallel. Whether this parity can be maintained as attack tools improve remains an open question.
The broader pattern of AI weaponization
This single incident fits into a larger pattern that Google and other security researchers have been tracking. Cyberscoop reported that state-sponsored hackers are now using AI at "all stages" of the attack cycle, from initial reconnaissance through malware refinement. The research found evidence of advanced persistent threat groups leveraging AI tools to automate tasks that previously required significant human expertise.
SiliconANGLE detailed how state-based hackers use AI for research and content generation, lowering barriers to entry for sophisticated operations. The Artificial Intelligence News report from February 2026 documented Iranian, North Korean, Chinese, and Russian actors specifically using models like Gemini to craft phishing campaigns and develop malware.
TechRadar noted that Google tracked 90 zero-day exploits in 2025, with enterprise systems increasingly targeted over browsers. The publication warned that AI is expected to accelerate both attack and defense cycles, compressing the window between vulnerability discovery and exploitation. Google's own predictions from late 2023, reported by CSHub, anticipated generative AI would enhance social engineering and lead to more zero-day vulnerabilities employed by nation-state and criminal groups.
What this means for enterprise security
Organizations face a transformed threat environment where AI lowers the skill threshold for sophisticated attacks while simultaneously offering defensive capabilities. The Forbes coverage of Google's Chrome zero-day alerts affecting 3.5 billion users illustrates how widely deployed software remains vulnerable, and how AI-assisted attackers can target massive user bases with refined exploits.
Enterprises must now assume that threat actors have access to AI tools comparable to their own. The asymmetry that previously favored defenders with greater resources and expertise is narrowing. Google's Big Sleep detection offers a model for how AI-assisted defense might keep pace, but deployment of such tools at scale remains limited to organizations with Google's technical resources.
The incident also raises questions about supply chain security. The targeted system administration tool was widely used, meaning a successful exploit would have cascaded through numerous organizations. As AI makes vulnerability discovery faster, the pressure on vendors to patch promptly intensifies. The traditional model of responsible disclosure may struggle to keep pace with AI-accelerated exploitation timelines.
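One concrete way to shrink that window on the defensive side is to automate CVE monitoring for the software in an organization's inventory. The sketch below queries NIST's public NVD 2.0 API for CVE records matching product keywords; the two keywords are placeholders for a real asset list.
```python
# Minimal sketch: query NIST's NVD 2.0 API for CVEs affecting tracked
# software. Requires the `requests` package; unauthenticated use of the
# API is rate-limited, so production code should add an API key and
# backoff handling.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def matching_cves(keyword: str, limit: int = 5) -> list[dict]:
    """Return CVE records whose descriptions match a product keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

if __name__ == "__main__":
    # Placeholder inventory; a real deployment reads this from asset management.
    for product in ("sqlite", "openssh"):
        for item in matching_cves(product):
            cve = item["cve"]
            summary = cve["descriptions"][0]["value"]
            print(f'{cve["id"]}: {summary[:90]}')
```
A real pipeline would diff results against the previous run and route new entries into ticketing, so patch decisions happen on disclosure day rather than at the next scheduled review.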
Why the attribution remains contested
Despite Google's confident public statements, important questions about the AI-built zero-day remain unresolved. The company has not publicly released technical details that would allow independent verification of the AI's role in constructing the exploit. Security researchers will need to assess whether the AI contribution was decisive or ancillary to human attacker capabilities.
The distinction matters for threat assessment. An AI that merely assisted a skilled human attacker suggests different defensive priorities than an AI capable of autonomous vulnerability discovery and weaponization. Google's framing emphasizes the AI dimension, which serves the company's narrative about both the severity of emerging threats and the necessity of its own AI-powered defensive tools.
Bloomberg's careful language, "researchers say they believe," indicates some residual uncertainty in the attribution. The Verge and Fortune adopted more definitive framing. This gap between measured technical assessment and headline-ready certainty is worth noting as the story develops.
What happens next in AI cybersecurity
The Google disclosure likely marks the beginning of a new phase rather than an isolated incident. Security vendors will accelerate development of AI-assisted detection tools. Microsoft's security division, along with SentinelOne, CrowdStrike, and other major players, runs comparable research programs that will now face pressure to demonstrate equivalent capabilities.
Regulatory attention to AI cybersecurity will intensify. The European Union's AI Act and emerging US frameworks will need to address weaponization risks without constraining defensive applications. The dual-use nature of AI vulnerability research, identical tools serving both attack and defense, complicates straightforward policy responses.
Google's strategic positioning deserves attention. By publicizing both the AI-built attack and its AI-powered detection, the company advances its case for AI investment while deflecting criticism about Gemini's misuse by hostile actors. Whether this balance can be maintained as AI attack tools proliferate remains to be seen. The race between AI-powered offense and defense has begun in earnest.
Key Points
Google's Threat Intelligence Group disrupted what it calls the first AI-built zero-day exploit, targeting a widely used system administration tool
The company's Big Sleep AI framework detected the vulnerability (CVE-2025-6965) before deployment, creating a defensive AI versus offensive AI dynamic
State-sponsored hackers from Iran, North Korea, China, and Russia have been documented using AI including Google's Gemini across all attack stages
Google tracked 90 zero-day exploits in 2025, with enterprise systems increasingly targeted and AI expected to accelerate both attack and defense cycles
Independent verification of AI's decisive role in building the exploit remains limited, with Google's public statements ranging from measured to definitive
Questions Answered
What exactly did Google disrupt?
According to Google's Threat Intelligence Group, the company disrupted a zero-day exploit (an exploit for a previously unknown vulnerability) that had been built using AI. The attack targeted a widely used system administration tool. Google's Big Sleep AI framework detected the vulnerability, designated CVE-2025-6965, before the hackers could deploy it.
How certain is the claim that AI built the exploit?
The certainty varies by source. Bloomberg used careful language, stating researchers "believe" AI was used. The Verge and Fortune adopted more definitive framing. Google has not released full technical details for independent verification, so security researchers cannot yet independently assess whether AI was central to the exploit's construction or merely assisted human attackers.
What is Big Sleep?
Big Sleep is Google's AI framework for vulnerability research, built on large language models. It evolved from earlier Google research into AI-assisted security analysis. In this incident, Big Sleep discovered the zero-day vulnerability that hackers had kept secret. The tool had previously found a zero-day in the SQLite database engine, but this marked its first detection of an actively weaponized threat.
Are state-sponsored hackers also using AI?
Yes, and this is well documented. Google has reported that state-sponsored hackers from Iran, North Korea, China, and Russia use AI models including Gemini for reconnaissance, malware development, and phishing. Cyberscoop reported hackers use AI at "all stages" of the attack cycle. This broader pattern predates the specific zero-day incident by months.
What should organizations do in response?
Organizations should assume AI-assisted attackers have capabilities comparable to advanced security tools, compressing the window between vulnerability disclosure and exploitation. Priorities include accelerating patch management, investing in AI-assisted detection, reviewing supply chain security for widely deployed tools, and recognizing that social engineering and phishing will become more sophisticated as AI generation improves.
Does AI make cybersecurity better or worse overall?
The picture is mixed. AI clearly lowers barriers for attackers, but it also powers defensive tools like Big Sleep. Fortune quoted a source saying "the world might actually be more dangerous," reflecting the offensive application. However, in this specific case, AI-based defense detected AI-based offense. Whether this parity holds as attack tools improve is the critical unknown.