AI Exploits Smart Contracts

While cybersecurity experts have long warned about AI's potential for mischief, nobody expected the digital reckoning to arrive this quickly. Three frontier AI models (Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5) have successfully identified and exploited vulnerabilities in smart contracts without human help. These weren't merely theoretical attacks, either: in a simulated environment, the agents made off with $4.6 million in stolen funds.

What's terrifying isn't just the exploitation but the economics. GPT-5 pulled off its attacks for just $3,476 in total API costs. Do the math: that's barely over a dollar per exploit. Traditional hackers are probably updating their resumes right now.


These AI agents didn’t just rehash old tricks. They discovered two completely novel zero-day vulnerabilities in recently launched contracts. And they did it all autonomously, no human hand-holding required. The machines are officially better at breaking things than we are at securing them. Great.

The real-world implications are already visible. Remember the Balancer exploit last November? $120 million gone because of a rounding-direction flaw. Or the WebKeyDAO contract compromise in March? Both were vulnerable to the same classes of technique these AI agents demonstrated.
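To see why a rounding direction matters at all, consider a toy pool that converts shares to assets with integer division. This is a deliberately simplified sketch, not Balancer's actual code: the pool state, function names, and numbers are invented for illustration.

```python
def assets_out(shares, total_shares, total_assets, round_up):
    """Assets paid out when redeeming `shares` from a toy pool."""
    num = shares * total_assets
    if round_up:
        # Ceiling division: rounds in the *redeemer's* favor -- the pool
        # leaks a fraction of a unit on every call.
        return (num + total_shares - 1) // total_shares
    # Floor division: rounds in the pool's favor -- the safe direction.
    return num // total_shares

# Pool state chosen so the share/asset ratio is not a whole number.
total_shares, total_assets = 1000, 1001

floor_out = assets_out(1, total_shares, total_assets, round_up=False)  # 1 asset
ceil_out = assets_out(1, total_shares, total_assets, round_up=True)    # 2 assets

# The fair value of 1 share is 1.001 assets. Rounding up overpays the
# redeemer; repeated tiny redemptions let an attacker drain the surplus.
print(floor_out, ceil_out)
```

The individual leak is one indivisible unit per call, which sounds harmless until an attacker automates thousands of dust-sized operations against a pool holding nine figures.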

The growth curve should make anyone holding digital assets nervous. Funds stolen through AI-driven attacks have doubled roughly every 1.3 months over the past year. Defensive measures can't keep up. Not even close. Robust internal controls would blunt many of these AI-powered exploits; the problem is deploying them faster than attackers iterate.
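It's worth spelling out what a 1.3-month doubling time implies over a full year. The 1.3-month figure comes from the article above; the rest is plain arithmetic.

```python
# Annualize a 1.3-month doubling period for AI-driven attack losses.
DOUBLING_PERIOD_MONTHS = 1.3

doublings_per_year = 12 / DOUBLING_PERIOD_MONTHS  # ~9.2 doublings
annual_multiplier = 2 ** doublings_per_year       # ~600x growth per year

print(f"{doublings_per_year:.1f} doublings/year -> "
      f"~{annual_multiplier:.0f}x annual growth")
```

Roughly a 600-fold increase per year if the trend holds, which is the kind of curve no manual audit pipeline can chase.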

To show the results weren't cherry-picked, researchers first benchmarked the models on 405 smart contracts with vulnerabilities disclosed between 2020 and 2025. Then they went further, unleashing the AI agents on 2,849 newly deployed contracts with no known issues. The result: those two zero-days, worth $3,694 in simulated revenue.

One vulnerability allowed attackers to manipulate a public “calculator” function to inflate token balances beyond their legitimate values.
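The shape of that bug is easy to sketch. The class, function names, and numbers below are hypothetical, invented purely to illustrate the pattern of an unguarded public recalculation function; they are not the vulnerable contract's actual code.

```python
# Toy illustration of an exposed "calculator" function that trusts
# caller-supplied input. All names and values here are invented.

class ToyToken:
    RATE = 100  # tokens minted per unit of collateral

    def __init__(self):
        self.balances = {}

    def deposit(self, user, collateral):
        """Legitimate path: balance grows with actual collateral."""
        self.balances[user] = self.balances.get(user, 0) + collateral * self.RATE

    def recalculate_balance(self, user, claimed_collateral):
        # BUG: public and unauthenticated -- it overwrites the stored
        # balance from a figure the caller supplies, so anyone can
        # "recalculate" themselves an arbitrarily large balance.
        self.balances[user] = claimed_collateral * self.RATE

token = ToyToken()
token.deposit("alice", 10)                    # alice legitimately holds 1,000
token.recalculate_balance("alice", 10**6)     # attacker inflates to 100,000,000
print(token.balances["alice"])
```

The fix is equally simple in sketch form: derive the balance from state the contract itself tracks, and restrict any recalculation hook to trusted callers.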

Anthropic predicts more than half of real-world blockchain attacks this year could leverage AI capabilities. The barrier to entry for profitable exploitation has effectively disappeared.

The future of digital security just got a lot more complicated. And expensive. And probably inevitable. The one consolation: the same AI systems that find these vulnerabilities can be turned around defensively, identifying and patching weaknesses before malicious actors exploit them. Sleep tight, crypto holders.
