In the hyper-competitive world of algorithmic trading, milliseconds matter. So do the models and codebases that power them. For quantitative trading firms, proprietary trading engines and predictive signals are not just tools; they are the business.
That’s why what allegedly happened inside Chicago-based Headlands Technologies should concern every firm that handles high-value intellectual property, and why it serves as a sharp reminder that Zero Trust controls are no longer optional.
Headlands, a global quantitative trading firm, developed a proprietary high-frequency trading platform known internally as “Atoms,” along with a suite of predictive models called “Alphas.” According to publicly available records, a senior developer who helped build that system, Cheuk Fung Richard Ho, allegedly gathered those assets while still employed at Headlands and later converted that intellectual property (IP) into the blueprint for a new business.
The allegations don’t stop at simple misappropriation. Records suggest that:
- Internal chat systems were reconfigured to auto-delete messages.
- WhatsApp communications were erased.
- Version history for source code was wiped out.
- Sensitive materials may have been moved to personal cloud accounts.
Ho was arrested earlier this year, and his criminal case is progressing through pretrial motions and hearings in the Southern District of New York. But regardless of what the courts eventually decide, the conduct outlined in discovery describes what appears to be a methodical, insider-led exfiltration of proprietary trading IP, carried out from inside the firm by someone with legitimate access and deep institutional knowledge.
The limits of trust
The developer didn’t break in. He didn’t bypass authentication. He was already inside, doing the job he was hired to do. That’s what makes insider threats uniquely dangerous: They exploit trust.
In many environments, access is treated as binary: Either you’re in or you’re out. Once inside, few guardrails remain. That’s where the opportunity for misuse begins.
Zero Trust architecture, by contrast, assumes no implicit trust: every user, application, and process is continuously verified, and access is narrowly defined and strictly enforced. That’s exactly what ThreatLocker enables.
Missing controls and what Zero Trust would have prevented
Storage Control
The alleged IP theft relied on the ability to move files freely within, and potentially out of, internal systems. ThreatLocker Storage Control would have allowed Headlands to:
- Block source code from being copied to external drives or personal folders
- Restrict access to code repositories by time of day, user role, or specific machines
- Apply audit-only or read-only permissions for developers not actively working on the code
With those controls in place, sensitive files would have stayed where they belonged.
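To make the idea concrete, here is a minimal, stand-alone sketch of that kind of storage guardrail in Python. It is not ThreatLocker policy syntax; the watched folders and file extensions are assumptions, and a real deployment would enforce the block at the policy or driver layer rather than by scanning after the fact.

```python
#!/usr/bin/env python3
"""Illustrative stand-in for a storage-control policy (not ThreatLocker syntax):
periodically scan removable-media and personal cloud-sync folders for source
files that should never leave approved repositories."""
import time
from pathlib import Path

# Hypothetical locations a developer might copy code into.
WATCHED_DIRS = [Path("/media"), Path.home() / "Dropbox", Path.home() / "Google Drive"]
SOURCE_EXTENSIONS = {".py", ".cpp", ".java", ".ipynb"}

def find_violations() -> list:
    """Return source files found in locations where code is not allowed."""
    violations = []
    for root in WATCHED_DIRS:
        if not root.exists():
            continue
        for path in root.rglob("*"):
            if path.is_file() and path.suffix.lower() in SOURCE_EXTENSIONS:
                violations.append(path)
    return violations

if __name__ == "__main__":
    while True:
        for hit in find_violations():
            print(f"ALERT: source file outside approved storage: {hit}")
        time.sleep(300)  # re-scan every five minutes
```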
Ringfencing™
Even if development tools like Git, Python, or an IDE were allowed to run, ThreatLocker Ringfencing™ could have prevented them from accessing network shares, launching system processes, or opening outbound web connections. The developer could still have done his job, but he could not have moved code elsewhere or reached external scripts.
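As a rough analog of that containment idea, the sketch below launches a development tool inside an empty Linux network namespace so it cannot open outbound connections. It assumes a Linux host with util-linux’s `unshare` and sufficient privileges, and it is not how ThreatLocker implements Ringfencing.

```python
#!/usr/bin/env python3
"""Conceptual analog of ringfencing on Linux (not ThreatLocker's mechanism):
run a build or test tool inside a fresh, empty network namespace so any
outbound connection attempt fails immediately. Requires root/CAP_SYS_ADMIN."""
import subprocess
import sys

def run_without_network(command):
    """Launch `command` with its own empty network namespace via unshare."""
    return subprocess.call(["unshare", "--net", "--"] + list(command))

if __name__ == "__main__":
    # Example: the test suite still runs, but it cannot reach the network.
    sys.exit(run_without_network(["python3", "-m", "pytest", "-q"]))
```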
Network Control
Transferring proprietary data to cloud platforms, such as personal AWS buckets or third-party file-sharing sites, is a common insider tactic. ThreatLocker Network Control could have blocked:
- All outbound traffic by default
- Specific connections to unapproved cloud services
- VPN tunnels or CLI-based upload tool connections
Only pre-approved destinations (such as internal CI/CD pipelines) would have been reachable.
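The heart of that control is a default-deny decision. The sketch below shows the shape of such an allowlist check, as an egress proxy or gateway might apply it; the hostnames are hypothetical internal services, and this is not ThreatLocker configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch of default-deny egress (not ThreatLocker configuration):
an allowlist check such as an egress proxy might apply."""

# Only pre-approved destinations are reachable; everything else is denied.
ALLOWED_DESTINATIONS = {
    ("ci.internal.example.com", 443),   # internal CI/CD pipeline
    ("git.internal.example.com", 22),   # internal source control
}

def is_egress_allowed(host: str, port: int) -> bool:
    """Default deny: permit only explicitly approved host/port pairs."""
    return (host.lower(), port) in ALLOWED_DESTINATIONS

if __name__ == "__main__":
    for dest in [("ci.internal.example.com", 443), ("s3.amazonaws.com", 443)]:
        verdict = "ALLOW" if is_egress_allowed(*dest) else "DENY"
        print(f"{verdict} {dest[0]}:{dest[1]}")
```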
Detect (endpoint detection and response)
Deleting version history and messaging logs isn’t normal user behavior. ThreatLocker Detect EDR would have:
- Flagged unusual file system activity
- Detected messaging apps running outside of corporate tools
- Alerted on bulk file access or anomalous privilege elevation
Security teams could have been alerted in real time—potentially before the damage was done.
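One of those checks, flagging bulk file access, can be illustrated with a simple sliding-window counter. The sketch below is not ThreatLocker Detect’s implementation; the event format, window size, and threshold are assumptions made for illustration.

```python
#!/usr/bin/env python3
"""Sketch of one anomaly check an EDR might apply (not ThreatLocker Detect's
internals): flag a user who opens an unusually large number of repository
files inside a short window."""
from collections import defaultdict, deque

WINDOW_SECONDS = 300          # five-minute sliding window
BULK_ACCESS_THRESHOLD = 200   # opens per user per window considered anomalous

class BulkAccessDetector:
    def __init__(self):
        self._events = defaultdict(deque)  # user -> deque of timestamps

    def record_open(self, user: str, timestamp: float) -> bool:
        """Record a file-open event; return True if the user looks anomalous."""
        window = self._events[user]
        window.append(timestamp)
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > BULK_ACCESS_THRESHOLD

if __name__ == "__main__":
    detector = BulkAccessDetector()
    # Simulated burst: one user touching 500 files in a few seconds.
    for i in range(500):
        if detector.record_open("dev_a", timestamp=float(i % 10)):
            print(f"ALERT: bulk file access by 'dev_a' (event {i})")
            break
```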
Insider threat readiness checklist
IT operations
- Enforce least-privilege access to source code repositories, build servers, and deployment pipelines. Review repositories for accounts that haven’t contributed in the last 90 days and disable those accounts (a minimal git-based check is sketched after this list).
- Audit developer endpoints for personal cloud storage clients (Dropbox, Google Drive, etc.) and block their access.
- Require separation of duties in CI/CD pipelines so no single developer can code, approve, and deploy unilaterally. (Automatable in most version control and DevOps platforms)
- Rotate and audit service account credentials and API keys, ensuring they are not hard-coded or persistently tied to individual developers.
- Confirm backups of critical source code are intact and immutable, and perform test restores routinely.
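For the first item on this list, a minimal git-based sketch of the stale-contributor review might look like the following. It assumes a local clone of each repository and treats commit activity as a proxy for access that should be re-validated.

```python
#!/usr/bin/env python3
"""Minimal sketch of the stale-contributor review, assuming a local clone:
list authors whose most recent commit is older than 90 days so their
repository access can be reviewed and, if appropriate, disabled."""
import subprocess
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

def last_commit_per_author(repo_path: str) -> dict:
    """Map each commit author (by email) to their most recent commit time."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%ae|%aI"],
        capture_output=True, text=True, check=True,
    ).stdout
    latest = {}
    for line in out.splitlines():
        email, iso_date = line.split("|", 1)
        when = datetime.fromisoformat(iso_date)
        if email not in latest or when > latest[email]:
            latest[email] = when
    return latest

if __name__ == "__main__":
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    for author, when in sorted(last_commit_per_author(".").items()):
        if when < cutoff:
            print(f"REVIEW: {author} has not committed since {when:%Y-%m-%d}")
```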
GRC and compliance staff
- Establish a formal code access review schedule (e.g., quarterly), requiring managers to validate that access rights remain aligned with job responsibilities.
- Verify that your exit checklist includes revoking access to development tools and code repositories.
- Incorporate insider threat awareness into compliance training, emphasizing scenarios relevant to privileged technical staff.
Security architects
- Implement real-time monitoring for anomalous developer activity, such as bulk cloning of repositories or unusual data exfiltration patterns (a log-based sketch follows this list).
- Deploy tamper-proof logging on critical code and infrastructure changes, with alerts for privilege escalations or unusual commit activity.
- Apply data loss prevention (DLP) controls to detect and block unauthorized transfers of source code or intellectual property.
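A log-based sketch of the bulk-clone monitoring in the first item could look like the following. The “timestamp user repo” log format and the threshold are assumptions; the real feed would come from your source-control server’s audit log.

```python
#!/usr/bin/env python3
"""Sketch of bulk-clone monitoring, assuming clone events can be exported
from the source-control server as "timestamp user repo" lines (hypothetical
format). Flags any user who clones an unusual number of repos in one hour."""
from collections import Counter
from datetime import datetime

CLONES_PER_HOUR_THRESHOLD = 20  # assumed threshold; tune to your environment

def flag_bulk_cloners(log_lines):
    """Return (user, hour, count) tuples that exceed the clone threshold."""
    per_user_hour = Counter()
    for line in log_lines:
        timestamp, user, _repo = line.split(maxsplit=2)
        hour = datetime.fromisoformat(timestamp).strftime("%Y-%m-%dT%H:00")
        per_user_hour[(user, hour)] += 1
    return [(u, h, n) for (u, h), n in per_user_hour.items()
            if n > CLONES_PER_HOUR_THRESHOLD]

if __name__ == "__main__":
    # Synthetic example: one user cloning 30 repositories in a single hour.
    sample = [f"2025-06-01T09:{i:02d}:00 dev_a repo{i}" for i in range(30)]
    for user, hour, count in flag_bulk_cloners(sample):
        print(f"ALERT: {user} cloned {count} repositories during {hour}")
```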
CISOs and security leaders
- Mandate executive-level reporting on developer insider threat indicators (e.g., privileged access anomalies, code review compliance rates).
- Promote a culture of shared accountability and transparency, ensuring developers see oversight as protecting the business, not as distrust.
Next step: Keep intellectual property under lock and key
For trading firms, algorithms and code are the crown jewels. ThreatLocker Storage Control ensures that sensitive code and models cannot be copied to unauthorized locations, devices, or accounts. Pair it with Ringfencing to confine development tools so they can’t reach beyond their intended scope. Together, they create the guardrails that insider threats can’t slip through.
Learn more about Storage Control | Learn more about Ringfencing





