On Nov. 13, Anthropic announced it had disrupted the “first AI-orchestrated cyber espionage campaign,” conducted by Chinese cyber actors using its agentic Claude Code model. Discussed in depth at a congressional hearing on Dec. 17, the operation represents a major escalation from previous malicious uses of AI to generate malware or improve phishing emails, ushering in an era of high-speed and high-volume hacking.
For years, experts have warned that agentic AI would allow even unsophisticated nation-states and criminals to launch autonomous cyber operations at a speed and scale previously unseen. With that future now in reach, policymakers and industry leaders must follow a two-pronged strategy: ensuring that organizations have access to fit-for-purpose cyber defenses and managing the proliferation of AI capabilities that will allow even more powerful cyber operations in the future. Both steps are important not only to safeguard U.S. networks, but also to solidify U.S. technical leadership over competitors such as China.
How the Cyber Campaign Worked
In a detailed report, Anthropic assessed with high confidence that a Chinese state-sponsored group designated GTG-1002 used its Claude Code model to coordinate multi-stage cyber operations against approximately 30 high-value targets, including technology companies, financial institutions, and government agencies. The campaign produced “a handful of successful intrusions.” The hackers circumvented the model’s safety features by breaking the workflow into discrete tasks and tricking Claude into believing it was helping to fix cybersecurity vulnerabilities in the targeted systems.
Humans provided supervision and built a framework that allowed Claude to use open-source hacking tools to conduct the operations. But Claude “executed approximately 80 to 90 percent of all tactical work independently,” from initial reconnaissance and vulnerability identification to gaining access to targeted systems, removing data, and assessing its value. Automation allowed GTG-1002 actors to achieve an operational tempo impossible for human operators; the campaign’s “peak activity included thousands of requests, representing sustained request rates of multiple operations per second.”
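To make that division of labor concrete, here is a minimal, purely illustrative sketch of how an attacker-built orchestrator might decompose an intrusion into small, innocuous-looking tasks for an AI agent while keeping humans at a few approval gates. It is not drawn from Anthropic’s report: the Task structure, the task descriptions, and the query_model placeholder are all hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    description: str              # framed as a routine "security testing" chore
    needs_human_approval: bool = False
    result: str | None = None

def query_model(prompt: str) -> str:
    # Hypothetical placeholder for a call to an agentic coding model.
    return f"[agent output for: {prompt}]"

def run_campaign(target: str, tasks: list[Task]) -> list[Task]:
    for task in tasks:
        if task.needs_human_approval:
            # Per Anthropic's description, humans intervened at only a few
            # decision points; the agent handled most of the tactical work.
            print(f"[human checkpoint] review before: {task.description}")
        # Each request looks like a discrete, benign task, so no single
        # prompt reveals the full intrusion workflow to the model's safeguards.
        task.result = query_model(f"For {target}: {task.description}")
    return tasks

tasks = [
    Task("enumerate exposed services and likely vulnerabilities"),
    Task("write a test harness for the most promising flaw"),
    Task("map internal systems using harvested credentials", needs_human_approval=True),
    Task("summarize which collected data looks most valuable"),
]
for t in run_campaign("example-target.internal", tasks):
    print(t.result)

The design point is that each individual request looks plausibly benign; it is the overall pattern of activity, visible to the model provider, that gives the campaign away.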
Some outside researchers have questioned the effectiveness of this campaign, pointing out that Claude hallucinated data and credentials it claimed to have taken. Some also noted the low quality of AI-generated malware. But this is only the beginning. As AI models become more powerful and ubiquitous, the techniques this campaign demonstrated will only grow more sophisticated and accessible. The question is who adopts them next and how quickly.
AI Is Empowering U.S. Adversaries
Anthropic’s attribution of this campaign to Chinese state-sponsored actors grabbed headlines at a time of rising geopolitical tensions and high-profile Chinese cyber operations targeting U.S. telecommunications networks and critical infrastructure.
China has a large ecosystem of state-affiliated hacker groups that operate at scale. These groups function essentially as businesses, broadly targeting organizations in the United States and other countries and then selling stolen information to government and commercial customers. GTG-1002’s approach — targeting 30 organizations, gaining access and exfiltrating data where possible — fits this model perfectly. For a high-scale hacking enterprise, using AI automation to increase efficiency is a natural evolution. It is what every business is trying to do right now.
At the same time, the campaign relied on open-source, relatively unsophisticated hacking tools. Any resourceful adversary, whether Russian cybercriminals, North Korean cryptocurrency thieves, or Iranian hackers, could conduct similar campaigns using advanced AI models, and many of them probably are doing so right now. What was novel was the operational tempo: Claude Code executed reconnaissance, exploitation, and data analysis at a pace no human team could match.
The key takeaway is that adversaries everywhere now have the ability to conduct high-speed, high-volume hacks. Unfortunately, cyber defenders are not prepared to meet this challenge.
AI and the Cyber Offense-Defense Balance
Cybersecurity has long been a competition between offense and defense, with the offense having the edge thanks to the large attack surfaces produced by modern networks. While defenders must work to patch all vulnerabilities to keep the hackers out, the offense just needs to locate one entry point to compromise the defenders’ systems. Cybersecurity experts are concerned that AI-enabled automated operations, like the one uncovered by Anthropic, will further tip the balance by increasing the speed, scale, and persistence of hacks.
At the same time, AI holds the potential to address many long-standing cybersecurity challenges. AI-enabled testing can help software developers and infrastructure owners remediate vulnerabilities before they are exploited. Managed detection and response companies have touted their use of AI to reduce incident investigation time from hours to minutes, allowing them to disrupt ongoing operations and free up human analysts for more complex tasks. When layered and done right, these solutions can give defenders a fighting chance at keeping up with the new speed and scale of offense — but only if they are widely adopted.
For years, criminals have targeted “cyber-poor” small businesses, local hospitals, and schools because they are less able to purchase state-of-the-art defenses to keep hackers out and less able to resist ransom demands when criminals get in. To avoid being overwhelmed by the new pace of AI-driven hacking, these organizations will need to adopt newer, high-speed defensive tools. Increased automation will make these tools cheaper and more accessible to those with limited cyber defenses. But it is hard to imagine how this will happen domestically without more funding and targeted efforts to raise cybersecurity standards in key critical infrastructure sectors, at a time when the Trump administration is cutting back on U.S. cyber investments.
The same resource divide exists internationally, where middle- and lower-income countries are at risk of crippling cyber incidents because they lack resources for basic defenses. It will take concerted international engagement and capacity building to ensure countries can keep pace with new threats, but it is in the United States’ interest to help them do so. As the United States and China compete to promote global adoption of their technology ecosystems, developing countries in particular are looking for solutions across the full technology stack. AI-enabled cyber defenses, offered individually or baked into other services, can strengthen the United States’ appeal as a technology partner.
When AI Competition Meets Proliferation Risks
In addition to strengthening cyber defenses, it is also important for policymakers and industry leaders to reduce the risk that AI systems will be exploited to orchestrate cyber operations in the first place. GTG-1002’s activities were discovered and stopped only because the hackers used a proprietary model; Anthropic had visibility into the group’s activities and could cut off access once it detected them.
The good news is that companies like Anthropic, OpenAI, and Google can learn from malicious use of their models and build in stronger capabilities to detect and block future incidents. Anthropic’s transparency in the GTG-1002 case helps build muscle memory so that companies can work together to prevent similar incidents in the future (though some experts argue Anthropic could have gone further in explaining how the operation worked and sharing actionable details, like sample prompts). The bad news is that as open-source models like China’s DeepSeek improve, malign actors will not need to rely on proprietary models. They will turn to open-source models that operate with limited or no oversight.
This is a place where tensions between U.S.-China AI competition and cybersecurity meet. Both countries are competing across multiple dimensions to become the world’s AI leader. U.S. companies, including Google, Microsoft, OpenAI, and Anthropic, have the edge when it comes to the raw capability of their proprietary models. Chinese AI companies (and some U.S. ones, too) have pressed ahead with the development of lower-cost, open-source models that are more easily accessible to users, particularly in developing countries.
The economic, political, and national security stakes for this competition are enormous. To ensure the United States maintains a competitive advantage, the Trump administration has sought to reduce AI safety requirements. But if this campaign is a sign of what is to come, both the United States and China should have an interest in preventing the models their companies create from being exploited by criminals, terrorists, and other rogue actors to cause harm within their territories.
The Trump administration’s AI Action Plan calls for more evaluation of national security risks in frontier models, including cyber risks. The question is what additional safeguards need to be put in place to reduce this risk, which incentives are needed, and how to build consensus on such standards internationally.
What Must Be Done Now
It is impossible to stop every AI-driven campaign. But policymakers and industry leaders can still strengthen cyber defenses to mitigate risk. This requires incentivizing the development of AI applications that enable secure software development, improved penetration testing, faster threat detection, and more efficient incident response and recovery. Funding and concerted engagement by government and private cybersecurity experts will be needed to support adoption among cyber-poor providers of critical services, like hospitals and schools.
It also requires strengthening safeguards to make it harder for bad actors to weaponize easily accessible AI models. Ideally, the United States would do this in parallel with China imposing stronger safeguards on its own models. (Otherwise, the administration’s recent decision to sell more powerful chips to China will allow China to produce more unsafe models, and faster.)
Regardless, the United States must continue efforts within its own AI safety community to identify and mitigate misuse of U.S. models. Transparency about incidents like this one is a good place to start. But to stay ahead of the threat, companies and researchers should be further encouraged to share information about risks, improve testing standards, and develop mitigations when bad actors circumvent safeguards.
FEATURED IMAGE: Visualization of floating programming code windows on a glowing cyber grid. (Via Getty Images)