
Showing 35 episodes for Artificial Intelligence

#271
April 9, 2026

EP271 Can AI-Native MDR Actually Fix Your Broken SOC Workflows or Just Automate the Mess?

Guest:

29:29

Topics covered:

  • “10X SOC” sounds great. But for an organization stuck in "SIEM 1.0" with poor data quality and manual workflows, is “AI-native MDR” a "leapfrog" opportunity or a recipe for disaster?
  • We’ve seen the rise of "Decoupled SIEM" and security data lakes. Does a "Modern SIEM" even need to exist if an MDR platform has an agentic layer doing the heavy lifting? 
  • You’ve argued for AI-native over AI-bolted-on. For an end user, what are the tangible differences of using "AI inside a legacy SIEM" versus using an "AI-native separate product"?
  • What is the one task you thought AI would handle by now that still requires a senior human analyst to step in?
  • If a CISO is using an AI MDR, "Mean Time to Detect" (MTTD) starts to look like a vanity metric because the machine is instant. What is the new golden metric for an AI-powered SOC? Is it "Time to Context," "Reduction in Human Toil," or something else?
  • How do you help a skeptical SOC Manager—who has been burned by false positives for a decade—trust an autonomous agent to perform a "containment" action at 3:00 AM? 
#269
March 30, 2026

EP269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI

Guest:

  • No guests! Just Tim and Anton
29:29

Topics covered:

  • Hard to believe we've been doing these since 2022, is that right?
  • What did we see this year at RSA, apart from AI? And more AI? And more AI?
  • What framework can we use to understand the approaches vendors take to AI and security? Just saying “AI washing” is not enough!
  • How to tell “AI washer” from “AI tourist”? 
  • I sense that “securing AI” (and agents) is finally growing as fast as "using AI for security.” Do you agree?
  • Is the AI vulnerability apocalypse coming? Soon?
  • Have we seen any signs of AI backlash?
#267
March 16, 2026

EP267 AI SOC or AI in a SOC? Cutting Through Hype, Pricing Models, and SIEM Detection Efficacy with Raffy Marty

Guest:

29:29

Topics covered:

  • You argue that declaring existing SIEMs obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point and the actual gap in traditional SIEMs, as opposed to the more sensational claims?
  • You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one?
  • You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a “swappable” component, and what should SIEM vendors have done differently years ago to prevent this market from existing?
  • This "AI SOC" thing, is this even real? Is AI in a SOC a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR?
  • If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges?
  • You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?
#265
March 2, 2026

EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!

Guest:

29:29

Topics covered:

  • Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
  • AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
  • If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
  • Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
  • The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications)  but you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
  • Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
  • Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?
#264
February 23, 2026

EP264 Measuring Your (Agentic) SOC: Two Security Leaders Walk into a Podcast

Guest:

29:29

Topics covered:

  • We’ve spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead?
  • You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle—is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit?
  • Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt?
  • Toil reduction is a laudable goal for the team members, but what are the metrics we track and report up to document the overall improvement in D&R for Google’s board?
  • When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster?
  • We often talk about AI as an "assistant," but we’re moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. Human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
#263
February 16, 2026

EP263 SOC Refurbishing: Why New Tools Won’t Fix Broken Processes (Even With AI)

Guest:

29:29

Topics covered:

  • What is the right way for people to bridge the gap and translate executive dreams and board goals into the reality of life on the ground?
  • How do we talk to people who think they have "transformed" their SOC simply by buying a better, shinier product (like a modern SIEM) while leaving their old processes intact?
  • What are the specific challenges and advantages you’ve seen with a federated SOC versus a centralized one? What does a "federated" or "sub-SOC" model actually mean in practice?
  • Why is the message that "EDR doesn't cover everything" so hard for some people to hear? Is this obsession with EDR a business decision or technology debt?
  • How do you expect AI to change the calculus around data centralization versus data federation?
  • What is your favorite example of telemetry that is useful, but usually excluded from a SIEM?
  • What are the Detection and Response organizational metrics that you think are most valuable?
  • Is the continued use of Excel an issue of tooling, laziness, or just because it is a fundamentally good way to interact with a small database?
#261
February 2, 2026

EP261 No More Aspiration: Scaling a Modern SOC with Real AI Agents

Guest:

29:29

Topics covered:

  • We ended our season talking about the AI apocalypse. In your opinion, are we living in the world that the guests describe in their apocalypse paper?
  • Do you think AI-powered attacks are really here, and if so, what is your plan to respond? Is it faster patching? Better D&R? Something else altogether? 
  • Your team has a hybrid agent workflow: could you tell us what that means? Also, define “AI agent” please.
  • What are your production use cases for AI and AI agents in your SOC?
  • What are your overall SOC metrics and how does the agentic AI part play into that?
  • It's one thing to ask a team "hey what did y'all do last week" and get a good report - how are you measuring the agentic parts of your SOC?
  • How are you thinking about what comes next once AI is automatically writing good (!) rules for your team out of research blog posts and TI papers? 
#260
January 26, 2026

EP260 The Agentic IAM Trainwreck: Why Your Bots Need Better Permissions Than Your Admins

Guest:

29:29

Topics covered:

  • Why is agent security so different from “just” LLM security?
  • Why now? Agents are coming, sure, but they are - to put it mildly - not in wide use. Why create a top 10 list now and not wait for people to make the mistakes?
  • It sounds like “agents + IAM” is a disaster waiting to happen. What should be our approach for solving this? Do we have one?
  • Which one agentic AI risk keeps you up at night? 
  • Is there an interesting AI shared responsibility angle here? Agent developer, operator, downstream system operator?
  • We are seeing a lot of experimentation with agents, but sometimes little value. What are the biggest challenges of secure agentic AI and AI agent adoption in enterprises?
#259
January 19, 2026

EP259 Why Google Built a Security LLM and How It Beats the Generalists

Guest:

29:29

Topics covered:

  • What is Sec-Gemini, and why are we building it?
  • How do we decide when to create something like Sec-Gemini?
  • What motivates a decision to focus on something like this vs anything else we might build as a dedicated set of regular Gemini capabilities?
  • What is Sec-Gemini good at? How do we know it's good at those things?
  • Where and how is it better than a general LLM?
  • Are we using Sec-Gemini internally?
#258
January 12, 2026

EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen

Guest:

  • Royal Hansen, VP of Engineering at Google, former CISO of Alphabet
29:29

Topics covered:

  • The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security—where we pre-calculate every risk like building a bridge—and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today’s cloud and AI environments?
  • Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter?
  • Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice?
  • Primitives for Agents: You mentioned the need for new "biological primitives" for these agents—things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI?
  • The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"?
  •  "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?
#256
December 15, 2025

EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance

29:29

Topics covered:

  • Do you believe that AI is going to end up being a net improvement for defenders or attackers? Is the short-term answer different from the long-term one?
  • We’re excited about the new book you have coming out with your co-author Nathan Sanders, “Rewiring Democracy.” We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism?
  • If exploitation is always cheaper than patching (and attackers don’t follow as many rules and procedures), do we have a chance here? 
  • If this requires pervasive and fast “humanless” automatic patching (kinda like what Chrome has done for years), will this ever work for most organizations?
  • Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently?
  • Does this make defense in depth more important?
  • How do you see AI as changing how society develops and maintains trust? 
#255
December 8, 2025

EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking

Guest:

29:29

Topics covered:

  • The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today? 
  • Can autonomous AI agents really achieve that “exploit at machine velocity” without human intervention for the zero-day discovery phase?
  • On the other hand, why may it actually not happen?
  • When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous? 
  • Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse?
  • If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all?
  • Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being created, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford?
  • Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem? 
  • If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?
#252
November 17, 2025

EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success

Guest:

29:29

Topics covered:

  • Moving from traditional SIEM to an agentic SOC model, especially in a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like? 
  • Agentic AI introduces a new layer of risk - that of unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents?
  • Where did you draw the line between fully automated action and the mandatory "human-in-the-loop" for investigation or response?
  • Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring the data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions?
  • We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise?
  • As you built this out, were there any surprises along the way that left you shaking your head or laughing at the unexpected AI behaviors?
  • We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success? What are the key metrics you are using right now?
#251
November 10, 2025

EP251 Beyond Fancy Scripts: Can AI Red Teaming Find Truly Novel Attacks?

Guest:

29:29

Topics covered:

  • The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You’re calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it?
  • Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
  • You're asking customers to unleash a 'hacker AI' in their production environment. That’s terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally rm -rf a critical server or cause a denial of service while it's 'exploring'?
  • You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
  • Is this AI meant to replace a human red teamer, or make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away?
  • So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous 'purple team engine' that’s constantly training our defenses?
  • Also, what about fixing? What makes your findings more fixable?
  • What will happen to red team testing in 2-3 years if this technology gets better?
#245
September 29, 2025

EP245 From Consumer Chatbots to Enterprise Guardrails: Securing Real AI Adoption

Guest:

29:29

Topics covered:

  • In what ways is the current wave of enterprise AI adoption different from previous technology shifts? If we say “but it is different this time”, then why?
  • What is your take on “consumer grade AI for business” vs enterprise AI?
  • A lot of this sounds a bit like the CASB era circa 2014. How is this different with AI? 
  • The concept of "routing prompts for risk and cost management" is intriguing. Can you elaborate on the architecture and specific AI engines Witness AI uses to achieve this, especially for large global corporations? 
  • What are you seeing in the identity space for AI access? Can you give us a rundown of the different tradeoffs teams are making when it comes to managing identities for agents? 
#244
September 22, 2025

EP244 The Future of SOAPA: Jon Oltsik on Platform Consolidation vs. Best-of-Breed in the Age of Agentic AI

Guest:

29:29

Topics covered:

  • You invented the concept of SOAPA – Security Operations & Analytics Platform Architecture. As we look towards SOAPA 2025, how do you see the ongoing debate between consolidating security around a single platform versus a more disaggregated, best-of-breed approach playing out? 
  • What are the key drivers for either strategy in today's complex environments? How can we have both “decoupling” and platformization going at the same time?
  • With all the buzz around Generative AI and Agentic AI, how do you envision these technologies changing the future of the Security Operations Center (and SOAPA of course)? 
  • Where do you see AI really work today in the SOC and what is the proof of that actually happening? What does a realistic "AI SOC" look like in the next few years, and what are the practical implications for security teams?
  • “Integration” is always a hot topic in security - and it has been for decades. Within the context of SOAPA and the adoption of advanced analytics, where do you see the most critical integration challenges today – whether it's vendor-centric ecosystems, strategic partnerships, or the push for open standards?
#242
September 8, 2025

EP242 The AI SOC: Is This The Automation We've Been Waiting For?

Guest:

29:29

Topics covered:

  • What is your definition of “AI SOC”?
  • What will AI change in a SOC? What will the post-AI SOC look like? 
  • What are the primary mechanisms by which AI SOC tools reduce attacker dwell time, and what challenges do they face in maintaining signal fidelity?
  • Why would this wave of SOC automation (namely, AI SOC) work now, if it did not fully succeed before (SOAR)?
  • How do we measure progress towards AI SOC? What gets better at what time? How would we know? What SOC metrics will show improvement?
  • What common misconceptions or challenges have organizations encountered during the initial stages of AI SOC adoption, and how can they be overcome?
  • Do you have a timeline for SOC AI adoption? Sure, everybody wants AI alert triage. What’s next? What's after that?
#238
August 11, 2025

EP238 Google Lessons for Using AI Agents for Securing Our Enterprise

Guest:

29:29

Topics covered:

  • When introducing AI agents to security teams at Google, what was your initial strategy to build trust and overcome the natural skepticism? Can you walk us through the very first conversations and the key concerns that were raised?
  • With a vast array of applications, how did you identify and prioritize the initial use cases for AI agents within Google's enterprise security? 
  • What specific criteria made a use case a good candidate for early evaluation? Were there any surprising 'no-go' areas you discovered?
  • Beyond simple efficiency gains, what were the key metrics and qualitative feedback mechanisms you used to evaluate the success of the initial AI agent deployments? 
  • What were the most significant hurdles you faced in transitioning from successful pilots to broader adoption of AI agents?
  • How do you manage the inherent risks of autonomous agents, such as potential for errors or adversarial manipulation, within a live and critical environment like Google's?
  • How has the introduction of AI agents changed the day-to-day responsibilities and skill requirements for Google's security engineers? 
  • From your unique vantage point of deploying defensive AI agents, what are your biggest concerns about how threat actors will inevitably leverage similar technologies?
#235
July 21, 2025

EP235 The Autonomous Frontier: Governing AI Agents from Code to Courtroom

Guest:

29:29

Topics covered:

  • Agentic AI and AI agents, with their promise of autonomous decision-making and learning capabilities, present a unique set of risks across various domains. What are some of the key areas of concern for you?
  • What frameworks are most relevant to the deployment of agentic AI, and where are the potential gaps?
  • What are you seeing in terms of how regulatory frameworks may need to be adapted to address the unique challenges posed by agentic AI?
  • How about legal aspects - does traditional tort law or product liability apply?
  • How does the autonomous nature of agentic AI challenge established legal concepts of liability and responsibility?
  • The other related topic is knowing what agents “think” on the inside. So what are the key legal considerations for managing transparency and explainability in agentic AI decision-making?
#230
June 16, 2025

EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google

Guest:

29:29

Topics covered:

  • Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process?
  • What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems?
  • Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it?
  • What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle?
  • What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field?
#227
May 26, 2025

EP227 AI-Native MDR: Betting on the Future of Security Operations?

Guest:

29:29

Topics covered:

  • Why is your AI-powered MDR special? Why start an MDR from scratch using AI?
  • So why should users bet on an “AI-native” MDR instead of an MDR that has already got its act together and is now applying AI to an existing set of practices?
  • What’s the current breakdown in labor between your human SOC analysts vs your AI SOC agents? How do you expect this to evolve and how will that change your unit economics?
  • What tasks are humans uniquely good at in today’s SOC? How do you expect that to change in the next 5 years?
  • We hear concerns about SOC AI missing things – but we know humans miss things all the time too. So how do you manage buyer concerns about the AI agents missing things?
  • Let’s talk about how you’re helping customers measure your efficacy overall. What metrics should organizations prioritize when evaluating MDR?
#226
May 19, 2025

EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams

Guest:

29:29

Topics covered:

  • Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain? 
  • I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain? 
  • We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
  • We’ve talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development? 
  • We are all hearing about agentic security – so can we just ask the AI to secure itself? 
  • Top 3 things to do to secure the AI software supply chain for a typical org?
#224
May 12, 2025

EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps

Guest:

29:29

Topics covered:

  • Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
  • What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
  • How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
  • In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
  • How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
  • What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks? 
  • Top differences between LLM/chatbot AI security vs AI agent security?
#223
May 5, 2025

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025

Guest:

  • no guests, just us in the studio
29:29

Topics covered:

  • At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
  • Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does "AI SOC" work, according to the RSA floor?
  • How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
  • Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of an "organizational change budget" play in this inertia?
  • Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how yours is better!
#217
March 31, 2025

EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?

Guest:

29:29

Topics covered:

  • Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
  • Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? 
  • What trips up most clients: classic security mistakes in AI systems or AI-specific mistakes?
  • Are there truly new mistakes in AI systems or are they old mistakes in new clothing?
  • I know it is not your job to fix it, but much of this is unfixable, right?
  • Is it a good idea to use AI to secure AI?
#213
March 3, 2025

EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security

Guest:

29:29

Topics covered:

  • Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
  • I know you use LLMs for anomaly detection. Can you explain how that “trick” works? What is it good for? How effective do you think it will be?
  • Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale? 
  • SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
  • We hear from folks who developed an automated approach to handle a review queue previously handled by people. Inevitably, even if precision and recall can be shown to be superior, executive or customer backlash comes hard with a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
  • What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
  • So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders? 
#198
November 11, 2024

EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons

Guest:

29:29

Topics covered:

  • What are some of the unique challenges in securing GenAI applications compared to traditional apps?
  • What current attack surfaces are most concerning for GenAI apps, and how do you see these evolving in the future?
  • Do you have your very own list of top 5 GenAI threats? Everybody seems to!
  • What are the most common security mistakes you see clients make with GenAI?
  • Can you explain the main goals when trying to add automation to pentesting for next-gen GenAI apps? 
  • What are your AI testing lessons from clients so far?
#196
October 28, 2024

EP196 AI+TI: What Happens When Two Intelligences Meet?

Guest:

  • Vijay Ganti, Director of Product Management, Google Cloud Security
29:29

Topics covered:

  • What have been the biggest pain points for organizations trying to use threat intelligence (TI)?
  • Why has it been so difficult to convert threat knowledge into effective security measures in the past?
  • In the realm of AI, there's often hype (and people who assume “it’s all hype”). What's genuinely different about AI now, particularly in the context of threat intelligence?
  • Can you explain the concept of "AI-driven operationalization" in Google TI? How does it work in practice?
  • What's the balance between human expertise and AI in the TI process? Are there specific areas where you see the balance between human and AI involvement shifting in a few years?
  • Google Threat Intelligence aims to be different. Why are we better from the client’s PoV?
#185
August 12, 2024

EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You

Guest:

29:29

Topics covered:

  • The universe of AI risks is broad and deep. We’ve made a lot of headway with our SAIF framework: can you a) give us a 90-second tour of SAIF, b) share how it’s gotten so much traction, and c) talk about where we go next with it?
  • The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
  • Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
  • How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
  • AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
  • What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group?
  • We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
  • An off-the-cuff question: how do you do AI governance well?
#173
May 17, 2024

EP173 SAIF in Focus: 5 AI Security Risks and SAIF Mitigations

Guest:

27:23

Topics covered:

  • What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems?
  • Your talk covers 5 AI risks: why did you pick these five? What are the five, and are these the worst?
  • Some of the mitigations seem the same for all risks. What are the popular SAIF mitigations that cover more of the risks?
  • Can we move quickly and securely with AI? How?
  • What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them?
  • Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security?
#171
May 6, 2024

EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side

Guest:

29:29

Topics covered:

  • Given your experience, how afraid or nervous are you about the use of GenAI by criminals (PoisonGPT, WormGPT, and such)?
  • What can a top-tier state-sponsored threat actor do better with LLMs? Are there “extra scary” examples, real or hypothetical?
  • Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really?
  • Why do you think that AI favors the defenders? Is this a long term or a short term view?
  • What about vulnerability discovery? Some people are freaking out that LLMs will discover new zero-days; is this a real risk?
#168
April 15, 2024

EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It

Guest:

  • Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security
  • Scott Coull, Head of Data Science Research, Google Cloud Security
27:23

Topics covered:

  • What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM?
  • What can a “security-trained LLM” do better vs a regular LLM?
  • Does making it better at security make it worse at other things that we care about?
  • What can a security team do with it today? What are the “starter use cases” for SecLM?
  • What has been the feedback so far in terms of impact - both from practitioners but also from team leaders?
  • Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” realization finally dawning?
#163
March 11, 2024

EP163 Cloud Security Megatrends: Myths, Realities, Contentious Debates and Of Course AI

Guest:

  • Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud
29:29

Topics covered:

  • You had this epic 8 megatrends idea in 2021, where are we now with them?
  • We now have 9 of them, what made you add this particular one (AI)?
  • A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance? 
  • What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking?
  • Which one of the megatrends is the most contentious based on your presenting them worldwide?
  • Is cloud really making the world of IT simpler (megatrend #6)?
  • Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it?
  • Which megatrend is manifesting the most strongly in your experience?
#155
January 15, 2024

EP155 Cyber, Geopolitics, AI, Cloud - All in One Book?

Guest:

  • Derek Reveron, Professor and Chair of National Security at the US Naval War College
  • John Savage, An Wang Professor Emeritus of Computer Science of Brown University
29:59

Topics covered:

  • You wrote a book on cyber and war; how did this come about, and what did you most enjoy learning from each other during the writing process?
  • Is generative AI going to be a game changer in international relations and war, or is it just another tool?
  • You also touch briefly on lethal autonomous weapons systems and ethics–that feels like the genie is right at the neck of the bottle right now. Is it too late?
  • Aside from this book, and the awesome course you offered at Brown that sparked Tim’s interest in this field, how can we democratize this space better? 
  • How does the emergence and shift to Cloud impact security in the cyber age?
  • What are your thoughts on the intersection of Cloud as a set of technologies and operating model and state security (like sovereignty)? Does Cloud make espionage harder or easier? 
#150
November 27, 2023

EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw

29:29

Topics covered:

  • Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems? 
  • If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
  • How would you threat model a system with ML in it or a new ML system you are building? 
  • What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
  • What are the key differences between securing the AI you built and AI you buy or subscribe to?
  • Which security tools and frameworks will solve all of these problems for us?