The introduction of Generative Artificial Intelligence (AI) tools has transformed the way individuals streamline their day-to-day tasks. AI has proven to be a groundbreaking development in human efficiency and in the way people create, structure, and build upon their lives in and out of work. To put the scale of this phenomenon into perspective, TIME reported that ChatGPT gained 100 million active users in just two months, astronomical compared to the two and a half years it took Instagram to accumulate the same user count. However, these same tools now also lie in the hands of users with malicious intentions, making it much more difficult for IT professionals to stay ahead of cybercriminals. In this blog, we will cover three ways cybercriminals could use AI tools to harm your organization.
The user element of your cybersecurity strategy is essential. You rely on users' ability to identify and neutralize phishing attempts at any moment, which is why phishing awareness training has become a requirement in most, if not all, compliance regulations. Unfortunately, AI tools can produce phishing lures that give current identification strategies, like the S.L.A.M. method, a run for their money. These tools can search the internet for phishing identification techniques and construct convincing copy designed to evade them. The grammar can be flawless, the pretexts more believable, and the recipients more precisely targeted, based on statistics and demographics the AI can research in a matter of seconds. These changes can escalate generic phishing attempts into targeted spear phishing attacks, produced in greater quantity, faster.
Phishing attempts come in countless forms. You may most frequently see one asking an employee to purchase gift cards and share the codes, but you may also see one carrying a PDF, Word document, or other attachment. When opened on your endpoints, these attachments can run malicious scripts meant to corrupt your organization's infrastructure and exfiltrate or encrypt data. Not content merely to disrupt operations, the bad actors can then demand a ransom for the decryption key or a promise to halt the attack. In most cases, however, paying accomplishes nothing: you lose your cash and your data.
How to Mitigate Phishing Attacks
Phishing attacks are unpredictable; they can happen right under your nose and can bring your organization crumbling down in less than a day. One of the best ways to prevent an org-destroying cyberattack that begins with phishing is to implement controls in your environment that stop the attack before it can unfold. Implementing Multi-Factor Authentication (MFA) protocols is one of the most effective ways to mitigate a phishing attack. However, if constraints prevent the implementation of MFA, investing in a robust application allowlisting and/or containment tool could be how you protect your organization and its valuable intellectual property from phishing-driven theft.
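To make the MFA point concrete, here is a minimal sketch of how a time-based one-time password (TOTP) check works, following RFC 6238. This is illustrative only, not any particular vendor's implementation; the secret shown is the RFC's published test value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (TOTP)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# The server and the user's authenticator app derive the same short-lived code
# from a shared secret, so a password phished on its own is not enough to log in.
# RFC 6238 test secret ("12345678901234567890" in Base32) at timestamp 59:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because the code rotates every 30 seconds and never leaves the authenticator except at login, a phished password alone gives the attacker nothing to replay.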
How to Mitigate Phishing Attacks with ThreatLocker
In Learning Mode, ThreatLocker catalogs the applications your users run within your environment. Once the list is created, you can manually remove the non-business-essential (or untrusted) software. From then on, ThreatLocker Allowlisting can block all software and scripts not on your Allowlist from operating in your environment, mitigating the threat of malicious scripts delivered by phishing attempts.
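The default-deny idea behind allowlisting can be sketched in a few lines. This is a hypothetical illustration, not ThreatLocker's actual cataloging or enforcement, which is far more sophisticated; the hash shown is simply that of an empty file, standing in for an approved binary.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist keyed by SHA-256 hash of approved executables.
ALLOWLIST = {
    # Stand-in entry: the SHA-256 of an empty file.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def execution_allowed(path):
    """Default deny: permit a file to run only if its hash is on the allowlist."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in ALLOWLIST
```

An unfamiliar script dropped by a phishing email hashes to a value that is not on the list, so the default-deny check blocks it even before any antivirus signature exists for it.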
Malicious Scripts Are Easier to Write than Ever with AI
Countless IT professionals and hobbyists use AI tools to construct scripts and code when they have difficulty putting the pieces together. Threat actors use these same tools for the same purposes, except with the aim of harming your organization. To make matters worse, there are individuals who lack the skills to create complex malware but are perfectly capable of prompting generative AI tools to write scripts from scratch. Without any experience whatsoever, they can enter a prompt and, in a matter of seconds, be presented with a company-killing script, increasing the number of threat actors in the world.
Fortunately, organizations like OpenAI are working to restrict what their products can produce. For example, ChatGPT is showing signs of recognizing when a request is malicious, as shown by Rob Allen and Jimmy Hatzell in a recent webinar between ThreatLocker and CyberQP. However, it still has a way to go. As Rob and Jimmy demonstrated, it is easy for anyone to work loopholes into their requests and generate malware for data exfiltration. In the full video, you can watch the duo create a script that weaponizes the trusted applications on your endpoint to export your data to another location. The demonstration shows that although threat actors are using AI tools like ChatGPT to design malware, OpenAI understands how its tool can be used for good or ill and is actively trying to stop cybercriminals from exploiting ChatGPT's capabilities.
Mitigate the Weaponization of Trusted Applications
Just because an application is on your Allowlist doesn't mean you are 100% safe from threat actors. Living off the land (LOTL) attacks weaponize the trusted, allowed applications within your environment to deploy malware such as ransomware and data exfiltration scripts. You can mitigate the threat of your trusted applications being weaponized by implementing a Zero Trust control tool that limits how each trusted application can interact with other software in your environment. These containment tools serve as an excellent second line of defense when paired with the allowlisting tools mentioned above.
Mitigate the Weaponization of Trusted Applications with ThreatLocker
ThreatLocker Ringfencing is an application containment tool that grants you the ability to set boundaries for the applications on your Allowlist. These boundaries may include removing file, network, and registry access permissions. For example, you may need both Microsoft Word and PowerShell on your Allowlist, but don’t necessarily need them to communicate with each other. You can set up a granular Ringfencing policy that prevents these applications from calling out to each other, and even the internet, eliminating the risk of a cyberattack that exploits their ability to send instructions to one another.
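The Word-and-PowerShell example can be sketched as a default-deny interaction policy. This is loosely modeled on the Ringfencing idea described above, not ThreatLocker's actual policy engine; the process names and policy shape are illustrative assumptions.

```python
# Illustrative default-deny interaction policy (hypothetical, not a real engine).
POLICY = {
    # parent process -> child processes it is explicitly allowed to start
    "explorer.exe": {"winword.exe", "powershell.exe"},
    "winword.exe": set(),  # Word may run, but may not launch anything else
}

def interaction_allowed(parent, child):
    """Default deny: a parent may invoke a child only if explicitly permitted."""
    return child in POLICY.get(parent, set())

# A macro in a Word document trying to spawn PowerShell is blocked,
# even though both applications are individually on the Allowlist.
print(interaction_allowed("winword.exe", "powershell.exe"))  # → False
print(interaction_allowed("explorer.exe", "winword.exe"))    # → True
```

The key design choice is that absence of a rule means denial: both applications stay individually allowed, but the attack path between them is closed by default.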
Enhancing Existing Malware
One convenient feature of generative AI tools is their ability to deconstruct malware scripts. Any user can paste in a script and prompt the tool to explain what it does. This capability can alert IT professionals that a "trusted" script shared with them is not as friendly as they believed. However, threat actors can use the same capability to their advantage, asking the same question and then prompting the tool to examine and alter the code. Through this kind of reverse engineering, generative AI tools can turn existing scripts into smarter, more destructive code that identifies and bypasses modern security measures, such as firewalls and detection and response tools, by exploiting their unknown vulnerabilities. You won't know the malware is in your organization until it is too late!
Mitigate the Threat of Enhanced Malware
Modern detection tools are great additions to your cybersecurity strategy, but as the name suggests, they detect what is already in your system. As mentioned before, malware is becoming extremely advanced and can bypass the rules that would normally detect it. It is best to pair a detection and response tool with Zero Trust control tools: the control tools prevent the cyberattack, and the detection and response tool notifies you of the attempt.
Mitigate the Threat of Enhanced Malware with ThreatLocker
The ThreatLocker endpoint protection platform consists of multiple Zero Trust controls that protect your organization from malware attacks. As the prior two examples show, Allowlisting and Ringfencing work together to enforce granular controls: only the applications you need are allowed to run, everything else is blocked, and what the allowed software can do is restricted, whether that is sending instructions to other applications, communicating with the registry, or sending information to the internet through your browser. These granular Zero Trust controls extend to the other tools in the ThreatLocker platform, like Network Control, Storage Control, and Elevation Control.
When paired with ThreatLocker Ops, a policy-based detection and automated response tool, the ThreatLocker endpoint protection suite can both prevent a cyberattack from occurring within your organization and alert you to it. Craig Stevenson, Director of ThreatLocker Ops, recently showed how this works in the ThreatLocker webinar "Mitigate the Mayhem," where ThreatLocker Network Control blocked a brute force attack and ThreatLocker Ops responded, alerting Craig within the ThreatLocker Health Center. Implementing these tools in your cybersecurity strategy grants you both more control over and more oversight of your organization's environment.
Generative AI tools are still very new, and there is immense potential for them to change the way you live your life, conduct business, and generate competitive copy and strategies. What has been discovered thus far is only the tip of the iceberg, which is why IT professionals like Roy Richardson, VP and CSO of Aurora InfoTech, are predicting that AI will be one of the biggest challenges to overcome in the near future. If you are interested in learning more about how ThreatLocker can protect your organization from malware, and more specifically, AI-generated malware, schedule a call with a ThreatLocker Cyber Hero today.