As enterprises race to deploy autonomous AI agents and assistants throughout their business, some experts caution to proceed with care and purpose in one area in particular: cybersecurity.
Experimenting with off-the-shelf agents and assistants trained on data from the internet in security applications is risky, says John Velisaris, an Associate Partner of Security Services at IBM Consulting.
“It’s equivalent to the same behavior we see in business operations: running with scissors,” he says. “You’re going to get hurt.”
Customizing security-specific assistants like the IBM Consulting Cybersecurity Assistant, however, can be very powerful, says Velisaris. Trained on a company’s own historical incident data, IBM’s new digital assistant is designed to supercharge cybersecurity teams by helping them detect and resolve threats more quickly.
Most cybersecurity professionals these days are overwhelmed—and for good reason. In a recent survey of more than 1,800 cybersecurity professionals, 66% said their job was more stressful than it was five years ago. The threat landscape is becoming more challenging with the increasing frequency and sophistication of social engineering attacks such as phishing. At the same time, cybersecurity budgeting and staffing are not keeping up.
“As cyber incidents evolve from immediate crises to multi-dimensional and months-long events, security teams are facing the enduring challenge of too many attacks and not enough time or people to defend against them,” said Mark Hughes, Global Managing Partner of Cybersecurity Services at IBM Consulting.
Automated assistants can help fill the growing gap, says Velisaris. They are good at speeding up the diagnosis of threats, suggesting remediation strategies based on historical analysis of similar threats and streamlining repetitive tasks. They can also help get new cybersecurity employees up to speed by serving as training assistants.
A 2024 study found that automated chatbots such as OpenAI’s ChatGPT and Google’s Bard can pass ethical hacking certification tests designed for human cybersecurity professionals, suggesting that such assistants could be useful tools for cybersecurity teams. Companies employ ethical hacking teams to probe and strengthen their own defenses, in contrast to malicious hacking, in which individuals gain unauthorized access and exploit system vulnerabilities for harmful purposes.
The Certified Ethical Hacker (CEH) examination is a comprehensive test that gauges an individual's competence in ethical hacking and typically requires a 70% accuracy rate to pass. The study revealed that Google’s Bard answered questions correctly 82.6% of the time, while OpenAI’s ChatGPT achieved an overall accuracy rate of 80.8%.
The study authors don’t suggest that automated chat agents could replace human cybersecurity professionals. Instead, they function best as tools to make cybersecurity professionals more productive, says co-author Prasad Calyam, a professor in the Department of Electrical Engineering and Computer Science at the University of Missouri.
“These agents can be used like an eager intern,” says Calyam. “You can’t exactly trust what they are doing 100 percent of the time. But if supervised, they can alleviate a lot of work.”
More broadly, AI is transforming enterprise security in a way that resembles how aviation was automated decades ago, says Velisaris. Commercial pilots, he reminds us, once flew by hand, working the control stick and throttle directly. These days, they largely operate and supervise automated flight systems.
In a similar manner, experts believe, cybersecurity professionals can operate security systems augmented by AI. For example, IBM Threat Detection and Response Services uses AI to automatically escalate or close up to 85% of threat alerts. Then, with the help of IBM Consulting Cybersecurity Assistant, IBM's global security analysts can investigate the remaining alerts requiring action more quickly. With one client, the cybersecurity assistant helped reduce alert investigation times by 48%.
IBM Consulting Cybersecurity Assistant also includes a generative AI conversational engine that provides real-time insights and support to both clients and IBM security analysts. The assistant can perform a wide range of tasks, including responding to requests, opening or summarizing tickets, and automatically triggering relevant actions, such as running queries or pulling logs.
As companies throughout the world experiment with AI tools in their cybersecurity divisions, the security of the tools themselves can sometimes be an afterthought. A recent study of C-suite executives by the IBM Institute for Business Value (IBV) found that only 24% said their current gen AI projects were being secured, even though 82% of respondents said secure and trustworthy AI was essential to the success of their business.
For this reason, IBM watsonx includes capabilities designed to accelerate the impact of AI with trusted data. watsonx.governance helps organizations direct, monitor and manage the use of AI with confidence. This is becoming increasingly critical at a time when generative AI adoption—and the risk of "shadow AI," or unsanctioned AI models—is surging.
“With the proper safeguards in place, companies can manage the application of autonomous AI capabilities across discrete use cases including cybersecurity,” says Velisaris. Whereas some clients will want to inject human interaction into critical processes, others “will need full-fledged autonomous agents,” he says. “AI autonomy needs to be a dial that you can carefully and purposefully dial up or down.”