
AI Smart Contract Auditing Faces Reality Check as New Tests Show Zero Exploit Success


  • AI auditing claims face setback as real-world exploit success hits zero
  • New research shows AI struggles with complex smart contract attacks
  • Security experts highlight gap between AI detection and real exploit execution

Confidence in fully automated smart contract auditing has taken a notable turn after new findings challenged earlier claims about artificial intelligence capabilities. Fresh analysis from security researchers suggests that while AI tools can assist in identifying known vulnerabilities, they still struggle to execute real-world attacks without human input.


Earlier reports had indicated that AI systems could exploit up to 72 percent of vulnerabilities and detect nearly half of them. However, a re-evaluation conducted by BlockSec presents a sharply different outcome. According to BlockSec co-founder Yajin Zhou, the team expanded the testing conditions and applied them to real-world incidents; as a result, exploit success dropped to zero across all tested scenarios.


Moreover, the updated research introduced broader testing configurations, combining multiple AI models with different operational frameworks. This approach aimed to eliminate bias tied to specific system setups. Consequently, the findings raised concerns about whether earlier results reflected true model performance or simply favorable testing conditions.


Additionally, the study addressed potential data contamination, since previous benchmarks relied on known vulnerabilities that may have appeared in training datasets. To counter this, BlockSec tested AI systems against 22 recent security incidents that occurred after February 2026, ensuring they fell outside existing training data.


AI Detection Strong on Patterns but Weak on Real Attacks

Despite the limitations in exploitation, AI still demonstrated consistent performance in detecting certain vulnerabilities. Well-known issues, such as overflow errors and manipulation patterns, were identified with high accuracy. However, performance varied significantly on more complex cases: several vulnerabilities went completely undetected, while others were identified by only a single system. This uneven distribution highlights the current limitations of AI in handling unfamiliar or nuanced threats.


Furthermore, the findings emphasize that AI tools perform far better when given human context; without guidance, their ability to reason through complex attack paths remains limited. The latest results indicate that expectations around AI-driven auditing may have been overstated. While detection capabilities remain useful, real-world exploitation still requires human involvement. Consequently, the path forward appears to rely on combining AI efficiency with human expertise rather than replacing that expertise entirely.

