They say AI is rewriting the rules of cybersecurity, but some things never change. External threats are moving faster and insider risks are running deeper, yet Zero Trust is still the answer.
In one of the most storied cyberattacks of recent years, the threat group Scattered Spider used artificial intelligence to clone an employee’s voice, making the impersonation sound authentic enough to fool the help desk into resetting credentials. It was September 2023, and MGM Resorts International fell victim to this sophisticated attack, which began with a single AI-generated vishing call. From that one foothold, the attackers quickly escalated privileges and spread ransomware across critical systems.
SEC filings showed that the hack ultimately cost MGM roughly $110 million, not a small price for a single phone call.
There’s nothing new about phishing schemes, but now that AI is supercharging external attacks, from deepfake impersonation to malware creation and automated vulnerability discovery, some say we’ve entered a new era in cybersecurity. At the same time, AI has introduced new internal risks, as employees copy source code into AI tools, adopt shadow AI applications, or pull in unvetted dependencies.
This is the reality security leaders face today. AI is creating new vulnerabilities and amplifying old ones, both outside the organization and within it. The sections that follow break down the most pressing risks and offer practical steps leaders can take to contain them.
1. Model poisoning and data integrity
Attackers sometimes tamper with the data used to train AI models, slipping in malicious or biased information. The impact can be devastating. A poisoned model might start labeling dangerous malware as safe or hide secret instructions that act like a backdoor.
But data poisoning is not always the attacker’s tool. Artists have begun turning the tactic around to defend their work. Tools like Nightshade subtly alter images so that AI scrapers misread the content, even though the changes are almost invisible to people. In the same way, a human reader can spot a piece of satire instantly, while an AI system may treat it as fact and feed it back into training. In both cases, the outcome is the same: poisoned data that bends how AI learns and behaves.
Mitigation approach: Enforce strict validation of training and fine-tuning data. Do not assume third-party datasets are trustworthy. Adopt deny-by-default principles for data ingestion, verifying provenance and integrity before data enters your pipeline. Regular testing and red teaming of models help expose hidden manipulations before they cause harm.
When using AI tools, remember that output may be based on incorrect or poisoned input and must be verified. Factor that verification effort in when determining whether an AI tool will actually increase productivity.
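To make the deny-by-default ingestion idea concrete, here is a minimal Python sketch that refuses any dataset whose hash does not match a pre-approved manifest. The manifest file and dataset names are hypothetical; a real pipeline would also verify where the data came from and who signed it.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest of approved datasets: {"filename": "expected sha256 hex digest"}
APPROVED_MANIFEST = Path("approved_datasets.json")

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_before_ingest(dataset_path: Path) -> bool:
    """Deny by default: only datasets whose hash matches the approved manifest are ingested."""
    approved = json.loads(APPROVED_MANIFEST.read_text())
    expected = approved.get(dataset_path.name)
    if expected is None:
        print(f"BLOCKED: {dataset_path.name} is not on the approved manifest")
        return False
    if sha256_of(dataset_path) != expected:
        print(f"BLOCKED: {dataset_path.name} failed integrity check (possible tampering)")
        return False
    return True

if __name__ == "__main__":
    # Hypothetical third-party dataset awaiting ingestion
    if verify_before_ingest(Path("third_party_corpus.parquet")):
        print("Integrity verified; dataset may enter the training pipeline")
```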
2. Prompt injection and jailbreaks
Prompt injection is similar to SQL injection, but for AI. Malicious inputs can override instructions and trick models into leaking sensitive data or executing unintended actions. Jailbreaks are especially dangerous when AI is integrated into customer-facing applications.
Attackers have exploited security vulnerabilities in popular AI coding assistants such as GitHub Copilot. One vulnerability, CVE-2025-53773, allowed attackers to execute code and elevate privileges in GitHub Copilot through prompt injection. Additionally, researchers at Brave found that the AI agent in Perplexity’s Comet browser is vulnerable to prompt injection through hidden text, such as instructions embedded invisibly in a social media post.
Prevention approach: Filter and sanitize all AI inputs. Apply the same Zero Trust principle to AI queries that you apply to network traffic or SQL commands: never assume input is benign. For sensitive applications, layer controls that isolate AI components so a compromised model cannot divulge information from, or spill over into, other systems. Business-critical assets should be accessible only to the users and tools that require access.
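As one illustration of that layering, the sketch below screens untrusted input for obvious injection phrasing and keeps it out of the system prompt. The patterns are purely illustrative, and pattern matching alone will not stop prompt injection; treat it as a single layer alongside isolation and least-privilege access.

```python
import re

# Illustrative patterns only; real prompt-injection attempts vary widely.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|prior) (instructions|rules)",
    r"disregard (the )?(system|above) prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
    r"you are now (in )?developer mode",
]

def screen_user_input(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns). Deny when known injection phrasing appears."""
    matches = [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return (len(matches) == 0, matches)

def build_messages(system_prompt: str, untrusted_text: str) -> list[dict]:
    """Keep untrusted content in its own message; never concatenate it into the system prompt."""
    allowed, matches = screen_user_input(untrusted_text)
    if not allowed:
        raise ValueError(f"Input rejected, matched injection patterns: {matches}")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": untrusted_text},
    ]
```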
3. Shadow AI and ungoverned adoption
Shadow IT has a new face: Shadow AI. Developers and employees are spinning up AI tools outside of official oversight, pasting proprietary code into external models or installing plugins without review. In 2023, when AI tools were just starting to take off, Samsung found that employees had uploaded sensitive code to ChatGPT and responded by temporarily banning generative AI tools. In 2024, 38% of employees shared sensitive information with AI tools without their employer's knowledge or approval. This bypasses governance and creates unmonitored pathways for data leakage.
Prevention approach: This is likely the largest insider threat AI poses. Companies may have little choice about adopting AI, but any new software or tool should be planned for and vetted by the IT department. Companies must enforce clear policies on AI use and back them up with monitoring and controls. If a tool has not been approved, it should not be running inside the organization. Training staff and reinforcing good engineering practices are essential.
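Monitoring can start simply. The sketch below, with hypothetical domain lists and a hypothetical proxy log export, flags employees sending traffic to AI services that have not been approved.

```python
import csv

# Hypothetical lists; tailor them to the tools you have approved and the AI services you actually see.
APPROVED_AI_DOMAINS = {"copilot.company-approved.example"}
KNOWN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

def flag_shadow_ai(proxy_log_csv: str) -> list[dict]:
    """Scan a proxy log export (assumed columns: user, domain) for unapproved AI services."""
    findings = []
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                findings.append({"user": row["user"], "domain": domain})
    return findings

if __name__ == "__main__":
    for hit in flag_shadow_ai("proxy_export.csv"):
        print(f"Shadow AI use: {hit['user']} -> {hit['domain']}")
```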
4. Vibe coding
Vibe coding is a new trend in software engineering in which code is written partly or entirely by AI. It happens in two main scenarios. The first is when a non-programmer uses vibe coding to produce code they could not have written themselves. The danger is that such a user relies solely on the AI assistant and lacks the experience to make the code secure; if you don’t understand what the output does, you have no way to judge whether it is safe. The second is when a professional developer uses vibe coding to accelerate their work. This can be even more dangerous because the developer may have elevated access. In extreme cases, insecure AI-generated code could grant attackers access that leads to catastrophic outcomes, such as the deletion of critical databases.
While vibe coding may initially seem more efficient, it can introduce vulnerabilities and make troubleshooting a nightmare. Today’s large language models (LLMs) are trained on huge data sets of human-written code and predict what should come next. That can be impressive for a language-based approach, but the LLM may not be able to test the code it produces.
Containment approach: Follow proper software engineering practices and understand the limitations and risks of AI. AI should not be trusted to write the final version of code, though it can be helpful for early drafts. Companies need to enact proper AI management policies so that users know when and how to use AI appropriately. All code should also be reviewed, more than once, by real, experienced developers.
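One practical habit is to treat AI-generated code like any other untrusted contribution: write tests that encode the behavior you expect before merging it. The helper below (sanitize_filename) is a hypothetical example of AI output, and the tests are what a human reviewer would add.

```python
import re

def sanitize_filename(name: str) -> str:
    """Hypothetical AI-generated helper: keep only safe characters in a user-supplied filename."""
    return re.sub(r"[^A-Za-z0-9._-]", "_", name)[:255]

def test_strips_path_traversal():
    # The reviewer's expectation: no path separators survive sanitization.
    assert "/" not in sanitize_filename("../../etc/passwd")

def test_handles_empty_and_long_input():
    # Edge cases the AI assistant was never asked about.
    assert sanitize_filename("") == ""
    assert len(sanitize_filename("a" * 1000)) <= 255
```

Run the tests with pytest; if they fail, the AI output goes back for rework like any other rejected change.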
5. AI-powered attacks
AI is accelerating attackers. AI-written phishing emails are increasingly indistinguishable from legitimate ones. Voice and video deepfakes can trick employees into handing over access. AI-generated malware mutates so quickly that traditional antivirus cannot keep up. ThreatLocker CEO and Co-founder Danny Jenkins, a former ethical hacker, has asserted that “all antiviruses suck” when adversaries are generating malware with AI, because every sample is new and always one step ahead of traditional defenses.
Containment approach: Accept that EDR alone is not enough. A Zero Trust defense means layering controls. Allow only approved applications to run, restrict what those approved apps can access, block unnecessary outbound connections, and monitor for anomalous behavior. The goal is not to detect every new AI-generated threat. It is to prevent threats from running in the first place.
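The deny-by-default idea can be illustrated in a few lines. The sketch below is a conceptual simplification, not how any particular product implements allowlisting: execution is permitted only when a binary’s hash appears on an approved list.

```python
import hashlib
from pathlib import Path

# Conceptual allowlist: {"sha256 of an approved binary": "friendly name"}.
# In practice this list is centrally managed and kept current as software updates.
APPROVED_HASHES = {}

def is_execution_allowed(binary_path: Path) -> bool:
    """Deny by default: unknown binaries, including brand-new AI-generated malware, never run."""
    digest = hashlib.sha256(binary_path.read_bytes()).hexdigest()
    allowed = digest in APPROVED_HASHES
    print(f"{'ALLOW' if allowed else 'DENY'}: {binary_path} ({digest[:12]}...)")
    return allowed
```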
The way forward: Zero Trust for AI-era threats
AI risks blur the line between an insider and an outsider. A developer pasting source code into an AI chatbot can be just as dangerous as an external attacker generating a new malware strain. The only viable defense is to assume nothing is safe by default.
Zero Trust flips the script. Approve only what is needed and block everything else. Apply this principle consistently to data, code, applications, and AI models themselves. Combine it with governance, staff training, and disciplined engineering practices, and you create a posture where AI can be used safely without opening doors to attackers.
The developers of generative AI models assume that their users are responsible for the output that is generated, including any vulnerabilities or false results. It is up to company leadership to determine whether the risk is worth the possible benefit of increased productivity. ThreatLocker gives leadership the tools they need to enforce their vision. Without a Zero Trust approach, a company could fall victim to the next deepfake vishing call or the next code vulnerability introduced without its developers’ knowledge.
CISOs who adopt this stance will not only survive the AI era. They will lead in it.
How ThreatLocker can block AI-fueled internal and external attacks
Application Allowlisting & Ringfencing™
ThreatLocker ensures only approved applications can run, preventing AI-generated malware or unauthorized tools from executing. Ringfencing™ further restricts applications, blocking them from exfiltrating data or making unauthorized connections.
Web Control
ThreatLocker Web Control can block unapproved AI web agents or browser plugins, mitigating risks from adversarial examples and shadow AI adoption.
Storage Control
Protects sensitive data by blocking unauthorized AI tools or compromised apps from reading, encrypting, or exfiltrating files.
Network Control
Stops unauthorized outbound traffic, cutting off AI-powered malware and C2 channels before damage is done.
ThreatLocker Detect & MDR
Monitors for suspicious behaviors like prompt injection attempts, privilege escalation, or abnormal API usage. MDR delivers expert intervention in real time.
To learn how ThreatLocker® helps security leaders enforce Zero Trust and protect against AI risks inside and out, book a demo.