
Teams of AI agents can exploit zero-day vulnerabilities

New research shows that teams of AI agents working together are much better at vulnerability research and exploitation than individual LLMs.

Researchers at the University of Illinois Urbana-Champaign have released a paper detailing how teams of large language models (LLMs)—the type of Artificial Intelligence (AI) used by ChatGPT and Google Gemini—can be used to “exploit real-world, zero-day vulnerabilities”.

Previous research had detailed how LLMs can be used to exploit known vulnerabilities when given a description of the vulnerability, but they performed poorly when it came to finding and exploiting zero-days.

Zero-day vulnerabilities are a valuable commodity in the world of cybercriminals. Of course, the value of different vulnerabilities can vary significantly, depending on ease of use, reach, impact, and accessibility. But to ransomware groups, data stealers, state-sponsored actors, and other criminals, any vulnerability they can exploit while no fix exists can rake in a lot of valuable information.

The researchers overcame the limitations of individual LLMs against zero-days by creating a task force of AI agents working as a supervised team, in a system they called Hierarchical Planning and Task-Specific Agents (HPTSA). The paper describes HPTSA as a system with three major components: a hierarchical planner, a set of task-specific expert agents, and a team manager for those agents.

The planning agent explores the system and determines which subagents to call, resolving the long-term planning issues that arise when trying different vulnerabilities.
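The paper does not ship an implementation, but the architecture is straightforward to picture. Below is a minimal Python sketch of the idea; the expert specialties, the query_llm helper, and the dispatch logic are hypothetical illustrations of the hierarchy, not the researchers' code:

```python
from dataclasses import dataclass, field

def query_llm(role: str, prompt: str) -> str:
    # Placeholder for a real LLM API call (the paper used GPT-4).
    # Returns canned responses here so the sketch runs end to end.
    if role == "planner":
        return "sqli: probe the login form for injection"
    return "exploit succeeded (simulated)"

@dataclass
class ExpertAgent:
    # A task-specific agent, e.g. one specialized in SQL injection (SQLi),
    # cross-site scripting (XSS), or cross-site request forgery (CSRF).
    specialty: str

    def attempt_exploit(self, target: str, plan: str) -> str:
        return query_llm(
            role=f"{self.specialty} expert",
            prompt=f"Target: {target}\nPlan: {plan}\nAttempt the exploit.",
        )

@dataclass
class TeamManager:
    # Routes each subtask from the planner to the matching expert agent.
    experts: dict

    def dispatch(self, specialty: str, target: str, plan: str) -> str:
        return self.experts[specialty].attempt_exploit(target, plan)

@dataclass
class HierarchicalPlanner:
    # Explores the target and decides which experts to call, keeping
    # long-term state across attempts at different vulnerability classes.
    manager: TeamManager
    history: list = field(default_factory=list)

    def run(self, target: str, max_steps: int = 10) -> None:
        for _ in range(max_steps):
            plan = query_llm(
                role="planner",
                prompt=f"Tried so far: {self.history}\n"
                       f"Pick a vulnerability class to try on {target}.",
            )
            specialty = plan.split(":")[0]  # e.g. "sqli"
            result = self.manager.dispatch(specialty, target, plan)
            self.history.append(f"{specialty}: {result}")
            if "succeeded" in result:
                break

manager = TeamManager(experts={
    "sqli": ExpertAgent("SQL injection"),
    "xss": ExpertAgent("cross-site scripting"),
    "csrf": ExpertAgent("cross-site request forgery"),
})
HierarchicalPlanner(manager).run("http://testsite.example")
```

In this structure, only the planner carries long-term state across attempts, while each expert handles one narrow, well-scoped task.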

HPTSA hierarchy (image credit: University of Illinois Urbana-Champaign)

For the experiment, it was imperative to ensure that none of the vulnerabilities were included in the training dataset, so the researchers only used vulnerabilities discovered after the knowledge cut-off date of the GPT-4 base model they used.
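In practice, that selection step boils down to a date filter. Here is a minimal sketch of the idea, assuming a cut-off date and placeholder CVE records purely for illustration; neither is taken from the paper:

```python
from datetime import date

# Keep only vulnerabilities disclosed after the model's knowledge cut-off,
# so the model cannot have "seen" them in training. The cut-off date and
# CVE records below are illustrative placeholders, not the paper's data.
KNOWLEDGE_CUTOFF = date(2023, 11, 1)  # assumed value for illustration

candidate_cves = [
    {"id": "CVE-2024-XXXX1", "disclosed": date(2024, 1, 15)},
    {"id": "CVE-2023-XXXX2", "disclosed": date(2023, 6, 2)},
]

zero_day_candidates = [
    cve for cve in candidate_cves if cve["disclosed"] > KNOWLEDGE_CUTOFF
]
print(zero_day_candidates)  # only the post-cut-off CVE survives
```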

As you may have gathered from the agents in the diagram, the researchers focused on web vulnerabilities, mainly because non-web vulnerabilities either require complex environments to set up or have vague conditions for success.

Against 15 real-world vulnerabilities of this type, the HPTSA model was 550% more effective than a single LLM, exploiting eight of them despite having no prior knowledge of their existence.

This opens up a world of more focused and effective vulnerability research. Whether that will be a good thing or not remains to be seen. As the researchers concluded:

Now, black-hat actors can use AI agents to hack websites. On the other hand, penetration testers can use AI agents to aid in more frequent penetration testing. It is unclear whether AI agents will aid cybersecurity offense or defense more and we hope that future work addresses this question.

