
Vibe hacking: How AI-driven cybercrime outpaces EDR and signature defenses


The security industry has long emphasized detection: matching signatures, scoring behaviors, and tuning models to catch what looks like yesterday’s attack. But innovation cuts both ways.

The same large language models (LLMs) that help engineers write code faster now help criminals operate faster. What began as “vibe coding,” a mindset that embraces AI output with minimal review, has evolved into “vibe hacking,” attackers delegating the design, orchestration, and execution of campaigns to AI systems.

In 2025, disclosures showed a threat actor using an agentic coding assistant to run a multiphase extortion operation against 17 organizations. In a separate attack, researchers documented LameHug, a Windows data theft toolchain that integrates a live LLM to craft commands in real time.

These aren’t edge cases. They are early signals of where adversary tradecraft is going.

AI-driven, LLM-supported attacks are now disrupting the cybersecurity rhythm, and they never repeat the same beat.

What is vibe hacking?

Vibe hacking is an emerging AI-driven cyberattack where threat actors use LLMs to generate, adapt, and execute attacks with minimal manual effort. Instead of writing malware step by step, attackers can now provide intent and let AI handle the execution.

The result is a new class of cyberattacks that are highly adaptive, continuously evolving, and nearly impossible to predict.

In practice, an operator supplies intent:

Prompt (paraphrased)

Write a PowerShell script to enumerate Active Directory users, compress staged data, and upload over HTTPS using common cloud APIs. Obfuscate function names and avoid suspicious cmdlets.

The system responds with runnable code, then chains the next step. Variants are new on every pass, which undermines signature matching and simple heuristics.

This has transformed cybercrime by removing the time and skill once required to mount a successful attack. The barrier to entry is lower, and the speed and scale of attacks are greater.

Real world examples of AI-driven cyberattacks

Two incidents disclosed in 2025 illustrate the change with startling clarity.

Claude-powered extortion

In a 2025 incident, one attacker used an AI coding assistant, Claude Code, to execute attacks against 17 targets across healthcare, emergency services, government, and faith organizations. The model was tasked with performing asset reconnaissance, generating initial access scripts, staging and compressing data, and drafting tailored ransom notes that referenced each victim’s operating margins and downtime exposure.

Automated stages:

  • Reconnaissance: Generate OSINT queries for subdomains, exposed services, and weak portals.
  • Infiltration: Produce PowerShell or Python loaders tuned to the environment.
  • Exfiltration: Stage archives and ship via HTTPS to attacker-controlled endpoints.
  • Extortion: Draft notes with financial language calibrated to each sector.

At every phase, the model did the heavy lifting. When the attacker needed reconnaissance, it mapped subdomains and exposed services. When they needed initial access, it produced loaders tailored to each target environment. When it came time to steal data, it wrote scripts that compressed sensitive directories and sent them out over cloud-hosting application programming interfaces (APIs), blending seamlessly into legitimate traffic.

And when the moment arrived to demand payment, the model even drafted personalized ransom notes, complete with downtime estimates, sector-specific financial language, and references pulled from public filings.

LameHug

The second case went even further. Dubbed LameHug, it was the first publicly documented Windows malware family that uses a live LLM during execution. Rather than carrying a static command sequence, it contacted an AI model mid-run, explained the kind of machine it had landed on, and asked for the most effective next steps.

The model responded by producing Windows command chains specifically crafted for that host configuration.

What makes LameHug different is that there is no static script to match. The LLM returns fresh commands per host, so signatures and simple rules have little to attach to.

One victim might see a burst of administrative queries. Another might experience domain enumeration or targeted data collection. Yet another might find their data quietly exfiltrated through entirely different protocols. No two infections looked the same, meaning no traditional detection logic had anything stable to latch onto.
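The core problem for signature-based tools can be shown in a few lines. The sketch below (illustrative commands, not from the actual malware) hashes two functionally identical command strings, the way an LLM might phrase the same action on two different hosts. The behavior is the same; the signatures are disjoint.

```python
import hashlib

# Two functionally equivalent command lines, as an LLM might emit them on
# two different hosts. Same effect on the filesystem; different bytes.
variant_a = b'Get-ChildItem C:\\Users -Recurse | Out-File users.txt'
variant_b = b'gci -Path "C:\\Users" -Recurse > users.txt'

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A hash signature written for variant_a says nothing about variant_b.
print(sig_a == sig_b)  # False: identical behavior, disjoint signatures
```

When every infection gets its own freshly generated variant, a detection rule keyed on any one of them has nothing stable to match.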

Why EDR and signature-based defenses fail

EDR and signature-based defenses depend on attack behavior being repeated, and AI breaks that assumption.

The difference with vibe hacking is:

  • Unique every time: AI produces new code paths on demand, eroding hash- and pattern-based detections.
  • Looks like admin work: The chains use legitimate binaries and services: PowerShell, Command Prompt, Office add-ins, and cloud APIs over HTTPS.
  • Adaptive re-prompting: Operators can ask the model to refactor until telemetry looks clean.
  • Benign cover traffic: API calls to popular AI platforms resemble legitimate usage.

Vibe hacking succeeds not through brilliant code, but through endless variability.

If a payload looks suspicious, the attacker simply asks the model to rewrite it to appear benign. If telemetry spikes, they request a version that uses more benign APIs. If a command sequence resembles something seen before, they prompt again until the pattern disappears.

Each iteration happens in seconds, not days, and each produces an entirely new method that shares little DNA with the previous one. LLMs also gravitate naturally toward abusing legitimate administrative tooling: trained on vast volumes of such code, they default to PowerShell, WMI, Python, Microsoft Office add-ins, and common cloud APIs.

These are the same tools system administrators rely on every day, which means vibe hacking does not trip many traditional alarms. It comfortably sits within the all-too-permissive boundaries most enterprises allow.

And as legitimate usage grows, communications with AI platforms increasingly look like normal business queries. If an endpoint reaches out to an LLM API, is it malware generating a command chain, or a helpdesk technician asking for a PowerShell regex?

Without strict controls, there is often no easy way to tell.

How to defend against AI-driven attacks

If AI makes attacks unpredictable, the solution is to remove the attacker’s ability to act, and this is where a Zero Trust approach is critical.

The rise of LLM-driven attacks demands a shift in mindset. Analysts must assume that every script, payload, ransom note, and stage of an intrusion may be unique; when attack behavior continuously mutates, there are no reliable analytics to work with.

Despite the fresh threat, detection remains highly valuable—visibility and triage still rely heavily on EDR. But detection’s place in the security chain has moved, as it can no longer be the front line of prevention.

Trying to outpace vibe hacking with faster signature development is a fool’s errand. AI simply iterates too quickly. The solution is to control the environment.

How ThreatLocker stops vibe hacking

Vibe hacking is not unstoppable. With the right conditions in place, even complex, morphing attacks are readily repelled. The unpredictable creativity of an LLM becomes irrelevant against a policy framework that allows only predictable, sanctioned actions.

Application Allowlisting

If the code is not approved, it does not run, no matter how it was generated. AI can mutate code infinitely, but it cannot bypass a default-deny execution policy.
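Default-deny execution can be sketched as a policy check keyed on file hashes. The policy store and helper names below are illustrative assumptions, not ThreatLocker's actual implementation; the point is that approval is explicit and everything else is denied.

```python
import hashlib

# Minimal sketch of default-deny execution, assuming a policy keyed on file
# hashes. The names here are illustrative, not a vendor's real format.
SANCTIONED_BINARY = b"contents of an approved, signed binary"
APPROVED_HASHES = {hashlib.sha256(SANCTIONED_BINARY).hexdigest()}

def may_execute(file_bytes: bytes) -> bool:
    """Default deny: code runs only if its hash is explicitly approved."""
    return hashlib.sha256(file_bytes).hexdigest() in APPROVED_HASHES

# An AI-generated payload is new by construction, so its hash is unknown
# to the policy and execution is denied -- no signature required.
print(may_execute(b"freshly generated loader"))  # False
print(may_execute(SANCTIONED_BINARY))            # True
```

Note the inversion versus signature matching: the defender no longer needs to recognize the malicious variant, only to enumerate what is sanctioned.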

Ringfencing™

Approved apps cannot be repurposed into attack chains. Ringfencing stops attackers from abusing the very tools LLMs love to weaponize: PowerShell, browsers, Python, Microsoft Office, and countless other legitimate applications are walled in, kept within strict behavioral boundaries.

This prevents them from launching unauthorized processes, reading sensitive files, or making unrestricted network calls.
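Conceptually, ringfencing is a policy over (application, action, target) triples: each approved app gets an explicit boundary, and anything outside it is denied even though the app itself is allowed to run. The rule names and values below are hypothetical, for illustration only.

```python
# Conceptual sketch of ringfencing: each approved app is limited to an
# explicit set of allowed actions and targets. All names are illustrative.
POLICY = {
    ("powershell.exe", "spawn"): set(),   # may not launch child processes
    ("powershell.exe", "network"): {"intranet.example.local"},
    ("winword.exe", "spawn"): set(),      # Office may not spawn shells
}

def action_allowed(app: str, action: str, target: str) -> bool:
    """An approved app may only act within its declared boundary."""
    allowed_targets = POLICY.get((app, action), set())
    return target in allowed_targets

# Word launching PowerShell -- a classic LLM-suggested chain -- is denied,
# even though both binaries are individually approved to run.
print(action_allowed("winword.exe", "spawn", "powershell.exe"))        # False
# Sanctioned behavior inside the boundary still works.
print(action_allowed("powershell.exe", "network", "intranet.example.local"))  # True
```

This is why the tactic of "living off the land" loses its appeal: the land itself is fenced.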

Data Storage Access Control

Block unauthorized add-ins and prevent staging of archives and data dumps. Data cannot be quietly archived, altered, or dropped into locations protected by policy.

Zero Trust Endpoint Firewall

Enforce least privilege at the network level, block unauthorized connections, and only allow approved traffic. Custom policies open ports on demand, but only for approved devices.

Without approved access, attackers will be unable to get a foothold and reach their AI assistant—or their data repositories.
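The same default-deny logic applied at the network edge also answers the LLM-API ambiguity raised earlier: if the destination is not on the approved list, the question of intent never arises. The domain names and helper below are hypothetical.

```python
# Minimal sketch of a default-deny egress policy. The destination list and
# helper name are illustrative assumptions, not a real firewall ruleset.
APPROVED_DESTINATIONS = {
    "updates.example-vendor.com",
    "api.approved-saas.com",
}

def egress_allowed(hostname: str) -> bool:
    """Default deny: outbound traffic is blocked unless the destination is approved."""
    return hostname in APPROVED_DESTINATIONS

# Malware quietly phoning an LLM API for its next command chain is cut off...
print(egress_allowed("api.llm-provider.example"))  # False
# ...while sanctioned business traffic passes unchanged.
print(egress_allowed("api.approved-saas.com"))     # True
```

Blocking the model endpoint severs the attack's "brain" mid-operation: malware like LameHug that depends on a live LLM for its next step simply stalls.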

The Claude campaign showed end-to-end automation across 17 victims. LameHug proved that an LLM can sit directly in the malware loop, generating host-specific commands that blend into normal administration.

EDR and signatures will remain useful for visibility and triage, but prevention must not depend on recognizing the next variant.

Together, these controls place AI-generated attacks in the same category as regular malware: stopped in their tracks. The code may change, but the rules of what is allowed to run, touch data, and communicate stay fixed.

Make AI-generated attacks irrelevant

Vibe hacking treats AI as the operator: fast, adaptive, and hard to fingerprint.

The organizations that stay ahead will be the ones that eliminate implicit trust in tools, processes, and people, and instead assume every attack is unique and every environment can be breached.

Move from chasing alerts to enforcing strict control. See how the ThreatLocker platform stops unknown code, contains trusted apps, and blocks exfiltration paths.

Schedule a demo today.

