#242
September 8, 2025

EP242 The AI SOC: Is This The Automation We've Been Waiting For?

Guest: Augusto Barros, Principal Product Manager at Prophet Security
Topics: Artificial Intelligence, SIEM and SOC

Duration: 29:29


Topics covered:

  • What is your definition of “AI SOC”?
  • What will AI change in a SOC? What will the post-AI SOC look like? 
  • What are the primary mechanisms by which AI SOC tools reduce attacker dwell time, and what challenges do they face in maintaining signal fidelity?
  • Why would this wave of SOC automation (namely, AI SOC) work now, when the previous wave (SOAR) did not fully succeed?
  • How do we measure progress towards AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement?
  • What common misconceptions or challenges have organizations encountered during the initial stages of AI SOC adoption, and how can they be overcome?
  • Do you have a timeline for SOC AI adoption? Everybody wants AI alert triage, but what's next? And what comes after that?

Do you have something cool to share? Some questions? Let us know.


The notes cover the key points from a conversation with Augusto Barros, Principal Product Manager at Prophet Security. The discussion centered on the definition, capabilities, and future trajectory of AI-powered Security Operations Centers (SOCs).

The central theme of the discussion was the transformative potential of AI SOCs to redefine the traditional security workflow. Augusto Barros began by defining an AI SOC as a set of tools that leverage AI to perform automated triage and investigation on behalf of a human analyst. The core value proposition is not to replace humans but to significantly expand the SOC's throughput and capacity by automating the most tedious and script-based tasks.

A major point of contrast was made between AI SOCs and traditional SOAR (Security Orchestration, Automation, and Response) solutions. While the outcomes may appear similar on the surface, the fundamental difference lies in the implementation and maintenance effort. SOAR requires extensive, costly, and manual playbook creation, which is often a significant barrier to adoption and value realization. AI SOCs, by contrast, offer a more flexible, out-of-the-box approach, with vendors providing pre-built investigation steps that can be fully leveraged from the outset. This "next-gen SOAR" capability eliminates the burden of playbook design and maintenance, allowing organizations with even immature processes to gain immediate benefits.

The conversation highlighted a key philosophical shift. The traditional SOC operates under a "funnel mentality," where every detection is funneled to a human, creating a bottleneck. This constraint forces detection engineers to be overly cautious, often suppressing noisy but potentially valuable detections to avoid overwhelming the team. An AI SOC, however, changes the dynamic by absorbing this noise, enabling detection engineers to be more comprehensive and "noisy" in their rule creation without fear of creating an unmanageable workload.

The hosts and guest explored the critical question of how to measure the success of an AI SOC. While obvious metrics like mean time to investigate (MTTI) and mean time to respond (MTTR) will naturally improve with automation, the more meaningful metrics are related to broader organizational impact. These include:

  • Increased alert throughput: the ability of the same team to handle a significantly higher volume of alerts.
  • Reduced ignored alerts: the ability to investigate a larger percentage of alerts that would previously have been dismissed due to high volume or low priority.
  • Expanded detection coverage: the ability to look for more types of threats, across more data sources, because investigation capacity has increased.
  • Accuracy: a crucial counter-metric that must be continuously tracked. An AI SOC's value is tied to the accuracy of its determinations, and vendors bear the responsibility for ongoing quality control and improvement.
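To make these metrics concrete, here is a minimal sketch of how a team might compute MTTI, MTTR, and the ignored-alert rate from closed-alert records. The `Alert` fields and units are illustrative assumptions, not anything prescribed in the episode:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean
from typing import Optional

@dataclass
class Alert:
    created: datetime
    investigated: Optional[datetime]  # None if the alert was never looked at
    resolved: Optional[datetime]      # None if no response was ever taken
    verdict: Optional[str]            # e.g. "true_positive" / "false_positive"

def soc_metrics(alerts):
    investigated = [a for a in alerts if a.investigated]
    resolved = [a for a in alerts if a.resolved]
    return {
        # mean time to investigate, in minutes
        "mtti_min": mean((a.investigated - a.created).total_seconds() / 60
                         for a in investigated),
        # mean time to respond, in minutes
        "mttr_min": mean((a.resolved - a.created).total_seconds() / 60
                         for a in resolved),
        # share of alerts that were never investigated at all
        "ignored_rate": 1 - len(investigated) / len(alerts),
    }

t0 = datetime(2025, 9, 8, 9, 0)
alerts = [
    Alert(t0, t0 + timedelta(minutes=5), t0 + timedelta(minutes=30), "true_positive"),
    Alert(t0, t0 + timedelta(minutes=15), t0 + timedelta(minutes=45), "false_positive"),
    Alert(t0, None, None, None),  # dismissed due to volume
]
m = soc_metrics(alerts)  # mtti_min=10.0, mttr_min=37.5, ignored_rate≈0.33
```

The point of the guest's argument is visible in the last field: an AI SOC should drive `ignored_rate` toward zero even as alert volume grows, which MTTI/MTTR alone would not capture.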

The debrief concluded with a look toward the future of AI in the SOC. Beyond the immediate value of triage automation, the panelists identified several next-generation capabilities:

  • Improving detection engineering: AI SOCs generate a wealth of data on alert outcomes (true/false positives, time-to-close, etc.). This data can be fed back into the system to refine detection rules and perform automated gap analysis, revealing where coverage is lacking. This represents a significant leap from manual, checklist-based auditing.
  • Automated remediation (with caution): while vendors may promise full remediation, the current reality is that humans must remain in the loop for final decision-making. The next step is streamlining the handoff between determination and human-led response.
  • Ethics and transparency: the conversation stressed the importance of vendor transparency in how the AI makes its determinations. Trust in the system's "reasoning" is a prerequisite for adoption; a black-box approach that hides the "magic" will ultimately hinder a security team's ability to use the tool confidently.
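The detection-engineering feedback loop described above can be sketched simply: group closed alerts by the rule that fired them, compute per-rule false-positive rates, and flag rules that need tuning. The field names and thresholds below are illustrative assumptions, not details from the episode:

```python
from collections import Counter, defaultdict

def rule_feedback(closed_alerts, fp_threshold=0.9, min_volume=10):
    """Flag detection rules whose closed alerts are overwhelmingly false positives.

    closed_alerts: iterable of (rule_id, verdict) pairs, where verdict is
    "true_positive" or "false_positive".
    """
    verdicts = defaultdict(Counter)
    for rule_id, verdict in closed_alerts:
        verdicts[rule_id][verdict] += 1
    noisy = []
    for rule_id, counts in verdicts.items():
        total = sum(counts.values())
        fp_rate = counts["false_positive"] / total
        # Only flag rules with enough volume to make the rate meaningful
        if total >= min_volume and fp_rate >= fp_threshold:
            noisy.append((rule_id, round(fp_rate, 2)))
    return sorted(noisy)

history = ([("brute_force", "false_positive")] * 19
           + [("brute_force", "true_positive")]
           + [("dns_tunnel", "true_positive")] * 5)
flagged = rule_feedback(history)  # [("brute_force", 0.95)]
```

This is the inverse of the "funnel mentality" fix discussed earlier: instead of suppressing a noisy rule up front, the outcome data tells the detection engineer after the fact which rules to refine.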

Timeline of Key Discussion Topics

  • Introduction to AI SOCs: Defining an AI SOC as an AI-driven tool for automated triage and investigation, offloading routine work from human analysts.
  • The "what if it goes right?" scenario: Discussing the ideal outcome of AI SOC adoption, which is a massive increase in a SOC's capacity and throughput.
  • SOAR vs. AI SOC: Contrasting the two approaches, highlighting how AI SOCs overcome the primary SOAR challenges of heavy playbook implementation and maintenance, effectively acting as "next-gen SOAR."
  • Role of AI in the SOC: The AI's primary function is described as that of a tireless Tier 1 analyst, handling routine, scriptable tasks and freeing up human experts for more complex work.
  • Debating SOC metrics: Moving beyond obvious metrics like mean time to investigate and exploring more nuanced indicators of success, such as increased detection coverage and reduced ignored alerts.
  • Common misconceptions: Identifying mental stumbling blocks for AI adoption, particularly the fear of human elimination and the misconception of a single, monolithic AI model solving all problems.
  • New problems and future capabilities: Discussing new challenges created by AI SOCs and the next-generation use cases beyond triage, including automated detection gap analysis and streamlined post-determination workflows.
  • Tips for AI SOC adoption: Emphasizing the critical role of transparency in building trust and driving adoption, and providing recommended reading on human judgment and system resilience.
