Forget the fearmongering. To fight AI-generated malware, focus on cybersecurity fundamentals. 
17 September 2024
Author: Matthew Kosinski, Enterprise Technology Writer

Last summer, cybersecurity researchers at HYAS released the proof-of-concept for EyeSpy, a fully autonomous, AI-powered strain of malware that, they said, can reason, strategize and execute cyberattacks all on its own.1 This experiment, they warned, was a glimpse of the new era of devastating, undetectable cyberthreats that artificial intelligence would soon unleash.

Or maybe not.

“There is so much hype around AI, in cybersecurity and elsewhere,” says Ruben Boonen, CNE Capability Development Lead with IBM X-Force® Adversary Services. “But my take is that, currently, we should not be too worried about AI-powered malware. I have not seen any demonstrations where the use of AI is enabling something that was not possible without it.”

The threat landscape is always evolving, and there might come a time when AI-based malware poses a serious danger. But for now, at least, many security experts feel discussions of AI malware are a mix of pure speculation and more than a little marketing. 

The threat actors who do use AI today are largely using it to refine the same basic scripts and social engineering attacks that cybersecurity teams are already familiar with. That means organizations can protect themselves by continuing to focus on the fundamentals, such as patching assets, training employees and investing in the right threat detection solutions.

AI-generated malware has some on edge, but many security experts are unfazed

Both general-purpose large language models (LLMs) such as Meta’s Llama and targeted applications such as IBM’s watsonx Code Assistant can help programmers accelerate development by writing, debugging and translating code.

The concern is that these benefits aren’t limited to benevolent programmers. By jailbreaking legitimate AI systems or creating their own, threat actors can hypothetically use these AI tools to streamline the malware development process.

Some worry that AI might lower the barrier to entry in the malware market, enabling more cybercriminals to write malicious programs regardless of skill level. Or, worse, AI technologies might help threat actors develop brand-new malware that can bypass common defenses and wreak untold havoc.

Some researchers have tried to illustrate the dangers that AI-generated cyberthreats might pose by experimenting with different ways to incorporate AI into malware: 

  • BlackMamba, developed by security firm HYAS, is a polymorphic keylogger that uses ChatGPT to synthesize malicious code at runtime.2

  • EyeSpy, also from HYAS, uses AI to evaluate its target system, identify the apps most likely to contain sensitive data and select a method of attack.1

  • Morris II is a worm that uses malicious prompts to trick AI apps into divulging sensitive information and spreading the worm to other users.

These experiments seem alarming at first glance, but many security experts see them as little more than curiosities.

“Things like [BlackMamba and EyeSpy] aren’t frightening to me at all,” says Boonen, who conducts red teaming exercises to help organizations strengthen their defenses against real cyberattacks. 

“When I look at the technical details around how these programs are implemented, I don’t think they would have any success if we used them in our client engagements,” he explains.

There are a couple of reasons why Boonen and others are skeptical of today’s AI-generated malware discourse.

First, these “new threats” are not really doing anything that security teams haven’t seen before, which means existing defense strategies are still effective against them.

“The concepts presented with BlackMamba and EyeSpy are not new,” says Kevin Henson, Lead Malware Reverse Engineer with IBM X-Force Threat Intelligence. “Defenders have encountered malware with these capabilities—hiding in memory, polymorphic code—before.” 

Henson points to malware authors who use techniques such as metaprogramming to obfuscate important data and uniquely generate certain elements, such as code patterns, with each compilation.  
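
To see concretely why defenders cannot rely on byte-level signatures against this kind of per-build obfuscation, consider the minimal, benign Python sketch below. It encodes the same placeholder string with a fresh random XOR key on each simulated "build," so a signature written against one sample's bytes never matches the next; the string and key length are invented for illustration.

```python
# Benign illustration: per-build obfuscation defeats static byte signatures.
import os

def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"https://example.invalid/config"  # placeholder string, not a real indicator

for build in range(3):
    key = os.urandom(8)                        # a different key for every "compilation"
    encoded = xor_encode(plaintext, key)
    print(f"build {build}: {encoded.hex()}")   # different bytes each time

# A byte signature written against any one of these outputs matches none of the
# others, which is why defenders also lean on behavioral and anomaly detection.
```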

Second, while LLMs do have impressive coding skills, it’s unlikely they will be able to generate any unprecedented malware variants anytime soon. 

“I think that using ChatGPT [and other AI tools] to generate malware has limitations because the code is generated by models that have been trained on a set of data,” Henson says. “As a result, the generated code will not be as complex as code developed by a human.”

While much has been said about how AI and machine learning algorithms might usher in a cybercrime renaissance by deskilling malware production, current models aren’t there yet. Users still need to know a thing or two about code to make sure that anything an LLM generates does what they want it to. 

“AI is an enabler for productivity, and to an extent there is a reduction in the level of knowledge required to write code when using them,” Boonen says. “But it’s not a massive reduction.”

In fact, if threat actors were to start widely implementing AI-based malware today, chances are it would produce a glut of low-quality code that defenders would easily detect and defuse. 

“I’m not saying that there isn’t a technical possibility in the future that a really good piece of malware is created that leverages AI,” Boonen says. “If the models keep improving at the rate they have been, I think there will come a point where they will be able to do substantial things. Then, we’ll need to take it more seriously. But I don’t think we’re at this stage yet.”

“This problem closely mirrors what happens in software development because malware is just malicious software,” says Golo Mühr, Malware Reverse Engineer with IBM X-Force Threat Intelligence. 

“Right now, we don’t see a lot of apps that have AI seamlessly integrated into their code,” Mühr explains. “When we see AI becoming predominant in software in general, we can expect it to become common in malware, too.”

This pattern has played out in the past, as the X-Force Threat Intelligence Index reports. Ransomware and cryptojacking did not become pervasive threats until the legitimate technologies enabling these attacks—Microsoft Active Directory for ransomware, cryptocurrency and infrastructure as a service for cryptojacking—were fully adopted as well.  

Mühr notes that any new technology must provide a decent return on investment before developers adopt it—and the same goes for malware developers.

What AI-driven cyberattacks actually look like today

Cybersecurity researchers, including IBM’s X-Force, have yet to find evidence of threat actors using artificial intelligence to generate new malware in the wild. But cybercriminals are using AI tools for more mundane malicious activities, such as writing simple scripts and phishing emails.

“In legitimate software development, we see generative AI being used to supplement the development process, providing guidance and creating basic code snippets,” says Mühr. “That kind of AI technology is already used by threat actors for malicious purposes today, but that’s not something we would notice as an extremely sophisticated threat.”

For example, Microsoft and OpenAI have caught and stopped several nation-state actors attempting to use their LLMs as coding assistants. The Russian-linked “Forest Blizzard” group used the LLMs to research vulnerabilities in target systems, while the Iranian group “Crimson Sandstorm” used them to write web-scraping scripts.3

However, LLM-assisted phishing attacks are the most concerning malicious use of AI for many security experts. 

“I believe at this point that the biggest threat is the use of generative AI for impersonation and phishing,” says Mühr. “That is a use case where AI can already have a huge impact by creating human-like text, video and audio. And we've already seen indicators of this being weaponized for phishing.”   

For example, hackers can use LLMs to write phishing emails that closely mimic the voices of trusted brands. These LLM-generated emails also lack common red flags, such as grammatical errors and awkward phrasing, that potential victims often use to identify scams. 

Malicious actors can also leverage AI to generate deepfakes that make their scams even more convincing. For example, scammers in the Hong Kong S.A.R. of the PRC used an AI-generated video conference to trick a victim into transferring USD 25 million to fraudulent bank accounts.4

These AI-powered scams can trick both human targets and the enterprise security systems meant to stop them. For example, the cybercriminal group that X-Force calls “Hive0137” likely uses AI to generate variations on phishing emails so they can slip right by filters that look for known malicious messages. 
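
To make this concrete, here is a minimal Python sketch, using only the standard library, of why a filter that matches known messages exactly misses a paraphrased lure while a fuzzy similarity check still flags it. The email text and the 0.6 threshold are invented for illustration.

```python
import hashlib
from difflib import SequenceMatcher

# A previously observed phishing lure (invented example)
known_lure = "Your invoice is overdue. Click the link below to avoid a late fee."

# An AI-assisted rewording of the same lure (invented example)
paraphrase = "The attached invoice is past due. Use the link here to avoid late fees."

known_hashes = {hashlib.sha256(known_lure.encode()).hexdigest()}

def exact_match(message: str) -> bool:
    """Signature-style check: does the message hash match a known-bad hash?"""
    return hashlib.sha256(message.encode()).hexdigest() in known_hashes

def similar_to_known(message: str, threshold: float = 0.6) -> bool:
    """Fuzzy check: is the message close enough to a known lure?"""
    return SequenceMatcher(None, known_lure.lower(), message.lower()).ratio() >= threshold

print(exact_match(paraphrase))       # False: the exact-match filter misses the variant
print(similar_to_known(paraphrase))  # True: the similarity check still flags it
```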

Detecting and preventing AI-powered attacks

AI has not fundamentally changed the cybersecurity battleground. Instead, it has helped attackers streamline things they were already doing. That means the best line of defense against AI-powered attacks is for organizations to stick with the fundamentals.

“If we are talking about AI being used to conduct attacks, the risk and response does not change for defenders,” says Ben Shipley, Strategic Threat Analyst with IBM X-Force Threat Intelligence. “Malware written by AI or by a human is still going to behave like malware. Ransomware written by AI does not have any more significant of an impact on a victim than ransomware written by a human.”

Standard security measures can help close the vulnerabilities that malware—AI-assisted or otherwise—must exploit to break into a system. For example, formal patch management programs can fix software bugs before malicious actors find them. Strong identity and access controls such as multifactor authentication can combat account hijacking, one of the most common vectors for cyberattacks today.
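
As one concrete example of those identity controls, the sketch below shows time-based one-time password (TOTP) verification with the pyotp library, a common building block for multifactor authentication. The account name and issuer are placeholders, and a real deployment also needs secure secret storage, rate limiting and recovery flows.

```python
import pyotp

# Generated once per user at enrollment and stored server-side
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Shown to the user as a QR code so their authenticator app can enroll
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# At login, the six-digit code is checked alongside the password
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code, valid_window=1):  # tolerate one time step of clock drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```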

Other measures can help fight AI attacks as well: 

  • Threat intelligence programs and platforms help security teams stay on top of emerging threats such as AI-generated malware.
     
  • A mix of threat detection tools casts a wider net by combining signature-based tools (such as firewalls) with anomaly-based tools that use AI algorithms to identify suspicious activity in network traffic. Standard, hand-coded malware is already good at evading some kinds of security solutions, so using multiple detection methods makes sense regardless of whether hackers are using AI; a minimal anomaly-detection sketch follows this list.

  • Security training can help employees spot and properly respond to AI-powered social engineering and disinformation campaigns.
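
As a minimal illustration of the anomaly-based approach mentioned in the list above, the following Python sketch trains scikit-learn's IsolationForest on synthetic per-connection traffic features. The features and values are invented for illustration; a real deployment would use curated telemetry and careful tuning.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-connection features: [bytes_sent, bytes_received, duration_seconds]
normal_traffic = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(500, 3))

# Fit the model on traffic assumed to be benign
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A connection that sends far more data than usual, as in an exfiltration attempt
suspicious = np.array([[900_000, 1_000, 5]])

print(model.predict(suspicious))          # -1: flagged as anomalous
print(model.predict(normal_traffic[:3]))  # mostly 1: consistent with the baseline
```
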
Using AI to fight AI

While malicious actors can avail themselves of AI tools to streamline their processes, defenders can—and should—do the same. 

According to IBM’s Cost of a Data Breach Report, organizations that use AI and automation for cybersecurity can significantly reduce breach costs. 

AI can make prevention efforts more effective and speed up timelines for threat detection and remediation, shaving USD 1.88 million from the cost of an average data breach. (For organizations that extensively invest in security AI and automation, the average breach costs USD 3.84 million. For organizations with no security AI and automation, the average breach costs USD 5.72 million.)

Traditional, rule-based AI is already present in many common cybersecurity tools, such as endpoint detection and response (EDR) tools and user and entity behavior analytics (UEBA) tools. But new generative AI models are also poised to help defenders. 

“I think the generative AI models will have a big impact on things like incident response,” Boonen says. “For example, they might be able to understand or summarize incidents faster because the models can look at a lot more data in a shorter period of time than a human could.” 

That speeds up the process for analysts, who can use those insights to stop threats faster and more effectively.
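
As a rough sketch of what LLM-assisted incident summarization might look like, the snippet below sends a few invented alert lines to a chat-completions endpoint using the OpenAI Python client. The model name is a placeholder, and a real workflow would pull alerts from a SIEM and keep an analyst in the loop.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Invented alerts; in practice these would come from a SIEM or EDR console
alerts = [
    "10:02 UTC - EDR flagged rundll32.exe spawning powershell.exe on HOST-114",
    "10:03 UTC - Outbound connection from HOST-114 to a domain first seen today",
    "10:07 UTC - The same PowerShell command line observed on HOST-209",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a SOC assistant. Summarize the alerts into a short "
                       "incident brief with the suspected technique and next steps.",
        },
        {"role": "user", "content": "\n".join(alerts)},
    ],
)

print(response.choices[0].message.content)
```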

And while AI tools can produce the deepfakes that malicious actors use to trick people, they can also play a vital role in combating those very schemes.

“Some images already look almost indistinguishable from real images, and as we go forward, I suspect that most people will not be able to differentiate,” Boonen explains. “So I think we will have to train AI models to say, ‘This video is fake’ or ‘This image is fake’ where humans can’t do it.”  
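
One way such a detector could be built is by fine-tuning a pretrained image model on labeled real and fake images. The Python sketch below uses PyTorch and torchvision; the data/ folder layout is a hypothetical placeholder, and production deepfake detection involves far more data, evaluation and adversarial testing than this.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects data/real/... and data/fake/... (hypothetical layout)
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the head with a two-class output
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```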

As with the future of AI-powered threats, AI’s impact on cybersecurity practitioners is likely to be more of a gradual change than an explosive upheaval. Rather than getting swept up in the hype or carried away by the doomsayers, security teams are better off doing what they’ve always done: keeping an eye on the future with both feet planted firmly in the present.

Footnotes

All links reside outside ibm.com.

1. EyeSpy proof-of-concept, HYAS, 01 August 2023.

2. BlackMamba: using AI to generate polymorphic malware, HYAS, 31 July 2023.

3. Staying ahead of threat actors in the age of AI, Microsoft, 14 February 2024.

4. Finance worker pays out USD 25 million after video call with deepfake 'chief financial officer,' CNN, 04 February 2024.
