
How NIST is going to secure AI
AI is becoming as commonplace as spreadsheets in business, but security is lagging behind. NIST aims to fix that.
Artificial intelligence (AI) is quickly finding its way into everyday business, and chatbots like ChatGPT are becoming as ordinary and widely used as spreadsheets. But as the National Institute of Standards and Technology (NIST) points out in its new concept paper, Control Overlays for Securing AI Systems, AI’s exciting new capabilities come with plenty of new cybersecurity risks too.
AI tools create novel risks for users, such as hallucinations and unintended data leaks, and novel risks for operators, such as data poisoning and prompt injection attacks. As a consequence, how we think about and apply cybersecurity is having to evolve very rapidly.
AI security is a large and fast-moving subject area, so rather than adding to the overwhelm, NIST is taking the sensible step of adapting something people already know instead of inventing something new. The idea is simple: most businesses already have some security policies in place, so that's where it will start.
NIST’s approach is to take its own widely used SP 800-53 cybersecurity controls, a long-standing catalogue of best practices for securing IT systems, and create AI-specific overlays that tailor them to AI systems.
Right now, the organization is looking for feedback on a set of high-level use cases. Once the use cases are agreed, it will produce a library of overlays that organizations can use together or individually to manage risks in AI use and development. The overlays will assume that certain controls are already in place, such as access controls and organization-wide policies.
To make this practical, NIST has sketched out five common scenarios where organizations might need AI-specific security guidance:
- Adapting and using generative AI
Covers systems that generate content—like text, images, or code—that are hosted in-house or by a third party. Risks here include sensitive data leakage, prompt injection, and misuse of proprietary information.
- Using and fine-tuning predictive AI
Covers systems that use historical data to make predictions, such as credit scoring or resume screening. NIST highlights risks across the three parts of the AI lifecycle—training, deployment, and maintenance—such as poisoned training data or untrustworthy third-party models.
- Using single-agent AI systems
Covers autonomous systems that automate business tasks with limited human oversight—like coding assistants or systems for managing common email tasks. Risks here include misalignment and hallucinations.
- Using multi-agent AI systems
Covers teams of AI agents working together. This is an emerging area, but teams of agents have shown startling results in medical diagnosis, science, and software vulnerability research. Multi-agent systems are likely to have highly complex attack surfaces.
- Security controls for AI developers
For companies building AI rather than just using it. NIST is linking secure software development practices with AI-specific risks, so developers don't leave critical assets like model weights or training data exposed.
NIST is inviting comments on the proposed use cases and plans to release draft overlays for review in early 2026. While this is a welcome first step towards a comprehensive road map for securing AI, the fact is that AI is already here and needs securing today.
To understand more about the novel security risks of AI, download our guide to Using ChatGPT and GenAI safely, and our comprehensive report on Cybercrime in the Age of AI.