Governments across the U.S., UK, Australia, Canada, and New Zealand—the Five Eyes—are warning organizations that agentic AI introduces serious cybersecurity risks if left unchecked.
Their guidance makes clear that the answer to these AI-driven threats is a Zero Trust posture, not an entirely new security model built specifically for AI.
The recently released Careful Adoption of Agentic AI Services guidance recommends organizations align AI security with existing cybersecurity frameworks, including least privilege access, segmentation, identity controls, continuous monitoring, and Zero Trust architecture.
Why agentic AI changes the risk landscape
Unlike traditional generative AI, agentic AI systems can autonomously make decisions, interact with tools, execute scripts, access sensitive systems, and take actions without continuous human oversight.
The Five Eyes guidance warns that compromised or overprivileged AI agents could:
- Execute malicious scripts
- Modify sensitive records
- Access financial systems
- Move laterally across environments
- Disable security controls
- Exfiltrate data
That risk grows significantly when AI agents are given unrestricted access to applications, endpoints, networks, or cloud resources.
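To make that failure mode concrete, here is a minimal Python sketch of an agent loop with no guardrails: whatever action the model proposes is executed directly, with the full privileges of the agent's host process. The tool names and plan format are hypothetical illustrations, not any specific framework's API.

```python
# Minimal sketch of an ungoverned agent loop. A poisoned prompt or a
# compromised model can make the agent run scripts, read files, or reach
# the network with whatever access the host process has.
import subprocess

TOOLS = {
    # Hypothetical tools; each runs with the agent process's full privileges.
    "run_shell": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True
    ).stdout,
    "read_file": lambda path: open(path).read(),
}

def act(plan: list[dict]) -> list[str]:
    """Execute every step the model proposed; no allowlist, no review."""
    results = []
    for step in plan:
        tool = TOOLS[step["tool"]]           # no check that this tool is permitted
        results.append(tool(step["input"]))  # no check on the argument either
    return results

# A benign plan and a hijacked one look identical to this loop:
print(act([{"tool": "run_shell", "input": "echo quarterly report summarized"}]))
```

The controls the guidance recommends exist precisely to break this direct line from model output to privileged action.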
Least privilege is critical when securing AI capabilities
One of the strongest recommendations throughout the guidance is strict enforcement of least privilege access. The agencies repeatedly warn against granting AI broad or unrestricted permissions.
This becomes especially important for AI-powered applications. If an AI tool does not need access to PowerShell, command-line tools, credential stores, or sensitive applications, those capabilities should be blocked by default.
By limiting what applications and users can access, organizations dramatically reduce the attack surface available to both attackers and compromised AI agents.
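As a rough illustration, a least privilege policy for an agent can be expressed as a deny-by-default capability grant: anything not explicitly allowed is refused. The agent name, capability names, and policy format below are illustrative assumptions, not any particular product's schema.

```python
# Minimal sketch of deny-by-default least privilege for AI agents:
# an agent may use only the capabilities explicitly granted to it.
ALLOWED_CAPABILITIES = {
    "invoice-summarizer-agent": {"read_file"},  # everything else is denied
}

class CapabilityDenied(PermissionError):
    pass

def authorize(agent_id: str, capability: str) -> None:
    """Raise unless this capability was explicitly granted to this agent."""
    if capability not in ALLOWED_CAPABILITIES.get(agent_id, set()):
        raise CapabilityDenied(f"{agent_id} is not permitted to use {capability}")

authorize("invoice-summarizer-agent", "read_file")      # explicitly granted
try:
    authorize("invoice-summarizer-agent", "run_shell")  # blocked by default
except CapabilityDenied as denial:
    print(denial)
```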
Containment matters more than detection
The Five Eyes guidance warns that agentic AI systems may chain tools together in unexpected ways, invoke unauthorized actions, or exploit loosely governed integrations. Traditional detection-based security may not stop these actions quickly enough, or at all. That is why application containment is becoming increasingly important.
If a compromised AI agent attempts to leverage PowerShell, curl, remote shells, or unauthorized network communications, application containment policies can prevent those actions from occurring in the first place.
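In code form, containment is a gate that sits in front of execution rather than an alert that fires after it. The sketch below refuses contained executables before a process ever starts; the binary names are illustrative, and a real containment product enforces this at the operating system level rather than in application code.

```python
# Minimal sketch of application containment: interpreter and network
# binaries are refused outright, no matter what the agent requests.
import shlex

CONTAINED_BINARIES = {"powershell", "pwsh", "curl", "wget", "nc", "ssh"}

def contain(command: str) -> str:
    """Refuse any command whose executable is on the containment list."""
    executable = shlex.split(command)[0].lower()
    if executable in CONTAINED_BINARIES:
        raise PermissionError(f"containment policy blocks {executable}")
    return command

print(contain("grep ERROR app.log"))                    # passes through
try:
    contain("curl https://example.invalid/payload.sh")  # never starts
except PermissionError as blocked:
    print(blocked)
```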
Zero Trust must extend to networks and cloud access
The guidance also emphasizes segmentation, identity verification, and continuous authentication as critical controls for agentic AI systems.
As AI systems increasingly interact with SaaS platforms, cloud infrastructure, APIs, and remote environments, organizations need visibility and control over where those systems can communicate. Zero Trust limits a compromised AI agent's ability to move laterally or reach unauthorized systems.
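A simplified way to picture that control is a per-workload egress allowlist: each agent may reach only the destinations it was explicitly granted, and everything else is denied. The agent name and hostnames below are placeholders.

```python
# Minimal sketch of Zero Trust egress control for agent workloads:
# connections are denied unless the destination is explicitly allowed.
from urllib.parse import urlparse

EGRESS_POLICY = {
    "invoice-summarizer-agent": {"api.internal.example.com"},
}

def check_egress(agent_id: str, url: str) -> None:
    """Raise unless this agent was explicitly allowed to reach this host."""
    host = urlparse(url).hostname
    if host not in EGRESS_POLICY.get(agent_id, set()):
        raise PermissionError(f"{agent_id} may not connect to {host}")

check_egress("invoice-summarizer-agent",
             "https://api.internal.example.com/v1/summaries")  # allowed
try:
    check_egress("invoice-summarizer-agent",
                 "https://exfil.example.net/upload")           # denied
except PermissionError as denial:
    print(denial)
```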
How ThreatLocker stops the threat of agentic AI
The core principles of Zero Trust are to enforce least privilege access, verify explicitly, and assume breach. ThreatLocker helps organizations implement these controls through:
- Application Allowlisting: Blocks unauthorized software and scripts from executing.
- Ringfencing™: ThreatLocker's proprietary application containment product. It restricts how applications interact with the operating system and with one another: which files and memory areas they can access, which network destinations they can reach, and which scripts or executables they can launch.
- Privileged Access Management: Enforces granular administrative access permissions and prevents unnecessary elevation.
- Zero Trust Network Access (ZTNA): Restricts users and applications to only the systems they explicitly need to access.
- Zero Trust Cloud Access: Governs access to cloud applications and services.
AI security still comes back to Zero Trust
The Five Eyes guidance ultimately reinforces a simple reality: AI security is cybersecurity.
Organizations do not need to reinvent their defenses for agentic AI. They need to strengthen the controls they already know work:
- Least privilege access
- Deny-by-default security
- Application containment
- Segmentation
- Continuous verification
- Defense in depth
As AI systems become more autonomous, the organizations best prepared to defend against emerging threats will be the ones that tightly control what users, applications, and AI agents are allowed to do from the start.
That is the foundation of Zero Trust.