The State of Malware 2025

AI as you know it is dead, and cybersecurity will never be the same

The arrival of autonomous “agentic” AIs could finally deliver the profound cybersecurity disruption many expected from ChatGPT in 2022.

2025 could be the year that AI finally changes the face of cybersecurity forever.

The AIs we know are changing rapidly from passive assistants that know useful things into autonomous agents that can operate computers and navigate the web. That shift could have profound implications for the way we think about cyberthreats.

When ChatGPT hit the public consciousness in November 2022, the cybersecurity community braced for a tsunami of new AI-emboldened criminal hackers and AI-enhanced malware and… nothing happened.

OK, it wasn’t quite nothing, but the AI security apocalypse that was widely and breathlessly predicted failed to materialize. Instead, it seems that criminals were as baffled about how best to use these wildly powerful new generative AIs as the rest of us. And when they did get to grips with them, they found them to be useful assistants, just like the rest of us.

This isn’t because generative AIs can’t be used for cybercrime, or because criminals aren’t using them. They can, and they are; it just doesn’t make much difference.

In its October report, Influence and cyber operations: an update, OpenAI detailed attempts by three threat actors—STORM-0817, SweetSpecter, and CyberAv3ngers—to use ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command and control infrastructure. In each case, OpenAI concluded that its models offered the threat actors “limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

The generative AI technology we’re used to, like ChatGPT and Google Gemini, is good at making sense of data: It can search it, summarize it, and rearrange it into new documents, code, and images. That helps people—including criminals—do their work more efficiently, but it does not address a critical cybersecurity bottleneck facing either attackers or defenders.

However, AI is about to change significantly. Technologies like Anthropic’s Computer use, which allows an AI to control computer programs, and OpenAI’s Operator, which can navigate the web, are transforming AI from a technology that knows things into a technology that does things.

And unlike generative AI, that directly impacts a significant bottleneck for both attackers and defenders.

Defenders are affected by a well-documented skills gap: According to the World Economic Forum, 67% of organizations report a moderate-to-critical skills gap in cybersecurity. At the same time, ransomware, by far the most significant cyberthreat organizations face, has a skills gap of its own. The number of monthly ransomware attacks is relatively low compared to other forms of cyberattack, and it’s widely believed that this is because these attacks require much more human labor.

We could soon live in a world where well-funded ransomware gangs use AI agents to pump up their workforce and break the scalability barrier, making rare ‘big game’ attacks an everyday norm, overwhelming security teams.

And that will change the threat landscape completely.

To learn more about how agentic AI could upset the decades-old status quo in cybersecurity, and what you can do to defend against it, read the 2025 ThreatDown State of Malware report.