AI-Powered Cybercrime Surges as Hackers Embrace Generative and Autonomous AI, According to New Malwarebytes Report
Thursday, May 15th, 2025
Malwarebytes, a global leader in real-time cyber protection, today released its latest ThreatDown report, Cybercrime in the Age of AI, which reveals how threat actors leverage generative artificial intelligence (AI) to create entirely new forms of cyberattacks. The report predicts that AI agents will soon usher in a world of far more frequent, sophisticated, and difficult-to-detect cyberattacks. From AI-generated phishing campaigns to deepfake scams and malware, the report outlines the growing arsenal of tools at cybercriminals' disposal and how businesses can best defend themselves from the onslaught of attacks.
"Cybercrime is undergoing a transformation," said Marcin Kleczynski, Founder and CEO at Malwarebytes. "We're not just seeing a rise in the quantity of attacks, we're seeing entirely new forms of deception and automation that would have been unimaginable just a few years ago. As AI technology matures, Malwarebytes will continue to deliver robust solutions to detect, respond to, and protect against the evolution of cybercrime."
AI Makes Cybercrime More Accessible and Convincing
Since ChatGPT's release in late 2022, criminals have rushed to exploit generative AI. Threat actors today are weaponizing these tools to write malware, craft convincing phishing emails, and launch realistic social engineering attacks.
In one case from January 2024, a finance worker was manipulated into transferring $25 million during a video call populated entirely by AI-generated deepfakes of company executives. Criminals have also found creative ways to bypass built-in AI safeguards, using techniques like prompt chaining, prompt injection, and jailbreaking to produce their own malicious outputs. In 2023, Malwarebytes' own researchers used prompt chaining to demonstrate that ChatGPT could be duped into writing ransomware, despite safeguards to prevent it.
Autonomous AI Attackers Are on the Horizon
While generative AI has already lowered the barrier to entry for cybercrime, the report warns that agentic AI is poised to escalate these kinds of attacks. Agentic AI can replace human attackers, automating, accelerating, and scaling labor-intensive attacks like ransomware. Many research teams have successfully created AI agents for offensive cybersecurity, including:
- ReaperAI, a fully autonomous cybersecurity agent, can execute proof-of-concept offensive operations with minimal human oversight.
- AutoAttacker, another proof-of-concept system, mimics ransomware gang tactics, showing how AI could turn today's occasional attacks into routine, high-speed operations.
- Google's Big Sleep agent became the first AI to independently discover a real-world zero-day vulnerability in a widely used software application.
These examples mark a new chapter in cybersecurity, where AI is no longer just a tool for attackers but the attacker itself, operating at scale, 24/7, and at speeds human defenders may struggle to match. As cybercriminals grow more skilled at developing and deploying AI agents, these tools will inevitably be used to increase the volume and speed of labor-intensive attacks, especially the most dangerous kind: big game ransomware.
Defending Against AI-Powered Attacks
To counter the growing threat of AI-powered cybercrime, organizations must reduce their attack surface, monitor systems continuously, and respond to alerts immediately. That includes deploying endpoint protection, such as ThreatDown Managed Detection and Response (MDR), capable of catching the increased quantity of AI-generated threats and using 24/7 expert analysts to spot evolving tactics.
To read the full report, visit www.threatdown.com/dl-cybercrime-in-ai. Plus, to learn about the latest threats and cyber protection strategies for businesses, visit threatdown.com or follow ThreatDown on LinkedIn and X.