How agentic AI enables an autonomous SOC with minimal human involvement


Author

John Velisaris

Associate Partner

IBM Cyber Threat Management Services

Security operations centers (SOCs) have faced persistent challenges in threat detection and response for years. These challenges include distinguishing genuine security signals from background noise, inadequate context for alert investigation, a lack of end-to-end automation, workflow bottlenecks and alert fatigue, to name a few.

For years I’ve said that security operations, or cyberthreat management in any form, needs to undergo a major change like the one commercial aviation underwent in the mid-20th century: machines fly commercial airplanes, and pilots intervene only in limited situations. Similarly, the new SOC would run autonomous operations with minimal human involvement.

The SOC analysts would then become SOC pilots, choosing where and when to get involved, while the machine handles standard operations.


Using human SOC pilots to address uncertainties

Cybersecurity stands alone in grappling with the enigmatic zero-day phenomenon: newly discovered vulnerabilities in software or hardware that the security community has not seen before. The concept captures the unpredictability surrounding the emergence of the next threat, including its source, timing and methodology.

When uncertainties materialize, SOC pilots (human analysts) assume command, using their expertise to counter and neutralize these novel threats.

So why don’t we already have SOCs that can function with minimal human intervention? For years, security software vendors have been driving automation into their products. SOC teams have pushed the boundaries of automation, sometimes developing sophisticated, home-grown solutions to accelerate and increase the efficacy of threat detection and response. But SOCs need more than automation. They need digital autonomy.


Human insight meets AI: Moving from automation to autonomy

Artificial intelligence (AI) can replicate human decision-making processes. This technology can facilitate a transformative shift in cybersecurity operations, particularly for routine tasks.

Threat detection already uses AI capabilities such as machine learning (ML). Various SOC technologies use ML for tasks ranging from identifying threats to categorizing alerts, thanks to integration by major software vendors. However, automating security operations is subject to certain constraints.

Most security operations teams have rules of engagement that require a degree of certainty before execution. That requirement explains why automation is common in closed systems such as endpoint detection and response (EDR) systems: both the endpoint software and the console are familiar with all relevant variables and can automate responses effectively.

A security specialist at a major hyperscaler provides a practical example. The company requires minimal SOC involvement because it deeply understands every technology and asset in its stack. Its setup essentially functions as a closed system, allowing for extensive automation.

For organizations without such closed systems, particularly those enterprises dealing with security information and event management (SIEM) systems, the scenario is different. Here, a security orchestration, automation and response (SOAR) application playbook manages automation.

For instance, an auto-response playbook can be programmed to quarantine a host if it isn't a server and is running recognized malicious activity. However, this automation cannot activate unless the identity of the asset is known, such as whether it's a critical server or a workstation.
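A minimal sketch of that playbook logic, assuming hypothetical alert fields and action names rather than any specific SOAR product's API:

```python
# Illustrative sketch of an auto-quarantine playbook condition.
# Field names and action labels are hypothetical, not a real SOAR API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    hostname: str
    malicious_activity_confirmed: bool
    asset_type: Optional[str] = None  # "server", "workstation", or None if unknown

def auto_quarantine_decision(alert: Alert) -> str:
    """Return the action an auto-response playbook would take for this alert."""
    # Automation cannot proceed without asset identity: fall back to a human analyst.
    if alert.asset_type is None:
        return "escalate_to_analyst"
    # Quarantine only when the host is not a server and the activity is confirmed malicious.
    if alert.asset_type != "server" and alert.malicious_activity_confirmed:
        return "quarantine_host"
    return "continue_monitoring"

print(auto_quarantine_decision(Alert("wks-042", True, "workstation")))  # quarantine_host
print(auto_quarantine_decision(Alert("db-prod-01", True, None)))        # escalate_to_analyst
```

The point of the sketch is the guard clause: without asset identity, the playbook cannot act, which is exactly the gap that context gathering has to close.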

Context is paramount in automating security functions, and this is where human SOC analysts shine. Through manual, "swivel chair" data collection, judgment and analysis, they provide the necessary context for automation to operate effectively in open systems. Swivel-chair operations need to make way for the new paradigm of multi-agentic autonomous operations.

Agentic AI drives true autonomy

Enter the autonomous, multi-agentic framework. IBM cybersecurity services use AI to recognize when context is needed, gather that context, make a decision and then either let automation complete the response or handle it entirely, even bypassing the SOAR.

Our digital labor orchestrator, the autonomous threat operations machine (ATOM), develops a task list for the investigation of an alert. If ATOM determines the asset context is inadequate, it uses other AI agents to gather missing information.

To extend the swivel-chair analogy: when ATOM detects missing asset context, it acts. It proactively interacts with agents associated with vulnerability management, exposure management, configuration management databases (CMDBs), and EDR or extended detection and response (XDR) systems to gather that context.
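As a rough illustration of that fan-out, the sketch below shows an orchestrator merging partial context from hypothetical CMDB, EDR and vulnerability-management agents. The class names, methods and returned fields are invented for illustration, not ATOM's actual interfaces:

```python
# Hypothetical multi-agent context gathering: each specialist agent returns a
# slice of asset context, and the orchestrator merges them into one record.
from typing import Protocol

class ContextAgent(Protocol):
    name: str
    def lookup(self, hostname: str) -> dict: ...

class CMDBAgent:
    name = "cmdb"
    def lookup(self, hostname: str) -> dict:
        # In practice this would query a configuration management database.
        return {"owner": "finance", "location": "branch-office"}

class EDRAgent:
    name = "edr"
    def lookup(self, hostname: str) -> dict:
        # In practice this would call an EDR/XDR console API.
        return {"os": "Windows 11", "agent_installed": True}

class VulnMgmtAgent:
    name = "vuln_mgmt"
    def lookup(self, hostname: str) -> dict:
        # In practice this would query the vulnerability management platform.
        return {"critical_vulns": 0}

def gather_asset_context(hostname: str, agents: list[ContextAgent]) -> dict:
    """Merge the partial context returned by each specialist agent."""
    context: dict = {"hostname": hostname}
    for agent in agents:
        context[agent.name] = agent.lookup(hostname)
    return context

print(gather_asset_context("wks-042", [CMDBAgent(), EDRAgent(), VulnMgmtAgent()]))
```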

ATOM then determines that a specific asset, based on its hostname and network location, aligns with typical workstation patterns, and concludes that it is indeed a workstation. This reasoning is the same type of logic a human analyst would apply.
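That kind of rule-of-thumb classification could look something like the following sketch, where the hostname prefixes and user subnet are assumed conventions for illustration, not anything prescribed by ATOM:

```python
# Heuristic asset classification from hostname and network location.
# Prefixes and subnet below are invented conventions for this example.
import ipaddress

WORKSTATION_PREFIXES = ("wks-", "lt-", "desk-")             # assumed naming convention
WORKSTATION_SUBNET = ipaddress.ip_network("10.20.0.0/16")   # assumed user VLAN

def looks_like_workstation(hostname: str, ip: str) -> bool:
    """Heuristic: workstation-style name AND an address in the user subnet."""
    name_match = hostname.lower().startswith(WORKSTATION_PREFIXES)
    net_match = ipaddress.ip_address(ip) in WORKSTATION_SUBNET
    return name_match and net_match

print(looks_like_workstation("wks-042", "10.20.14.7"))    # True
print(looks_like_workstation("db-prod-01", "10.50.1.2"))  # False
```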

After ATOM makes the contextual decision, it formulates a unique response to that specific alert. For example, it can determine whether an application programming interface (API) call to an EDR console is the best course of action or whether a workflow should return to the SOAR system.
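A simplified sketch of that routing decision, with hypothetical action labels standing in for real EDR API calls and SOAR workflows:

```python
# Illustrative routing: once context is known, choose between a direct EDR
# action and handing the workflow back to the SOAR. Labels are hypothetical.
def route_response(asset_type: str, malicious_confirmed: bool) -> str:
    """Pick a response channel for a contextualized alert."""
    if asset_type == "workstation" and malicious_confirmed:
        # Direct API call to the EDR console, bypassing the SOAR playbook.
        return "edr_api:isolate_host"
    if asset_type == "server":
        # Servers carry more blast radius, so return the workflow to the SOAR,
        # where change-control and human approval steps live.
        return "soar:open_approval_workflow"
    return "soar:standard_triage_playbook"

print(route_response("workstation", True))  # edr_api:isolate_host
print(route_response("server", True))       # soar:open_approval_workflow
```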

Whether AI will allow SOC personnel to move into the SOC pilot chair is still unknown. However, orchestrated multi-agentic digital labor capabilities are closer to what is needed for autonomous SOC operations than any technology we’ve worked with at IBM before. While fully autonomous SOCs are yet to be realized, the advent of agentic AI has significantly advanced the journey toward this efficient, minimal-human-intervention SOC model.

This major change promises to revolutionize threat management by enabling security teams to prioritize strategic initiatives rather than being burdened by repetitive tasks. As AI continues to evolve, we look forward to a future where our SOCs are not just automated but truly autonomous, ready to take flight and leave the mundane to the machines.

Sign up for the webinar to learn how digital labor can be a strategic asset against cyberthreats
