Back

Showing 14 episodes for Artificial Intelligence

#226
May 19, 2025

EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams

Guest:

29:29

Topics covered:

  • Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain? 
  • I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain? 
  • We like to say that history might not repeat itself but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
  • We’ve talked a lot about technology and process–what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development? 
  • We are all hearing about agentic security – so can we just ask the AI to secure itself? 
  • Top 3 things to do to secure AI software supply chain for a typical org? 
#224
May 12, 2025

EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps

Guest:

29:29

Topics covered:

  • Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
  • What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better  when you do it?
  • How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
  • In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
  • How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
  • What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks? 
  • Top differences between LLM/chatbot AI security vs AI agent security?
#223
May 5, 2025

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025

Guest:

  • no guests, just us in the studio
29:29

Topics covered:

  • At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
  • Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does "AI SOC" work according to RSA floor?
  • How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
  • Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of a "organizational change budget" play in this inertia?
  • Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how is yours better!
#217
March 31, 2025

EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?

Guest:

29:29

Topics covered:

  • Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
  • Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now? 
  • What trips most clients,  classic security mistakes in AI systems or AI-specific mistakes?
  • Are there truly new mistakes in AI systems or are they old mistakes in new clothing?
  • I know it is not your job to fix it, but much of this is unfixable, right?
  • Is it a good idea to use AI to secure AI?
#213
March 3, 2025

EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security

Guest:

29:29

Topics covered:

  • Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
  • I know you use LLMs for anomaly detection. Explain how that “trick” works? What is it good for? How effective do you think it will be? 
  • Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale? 
  • SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
  • We hear from folks who developed an automated approach to handle a reviews queue previously handled by people. Inevitably even if precision and recall can be shown to be superior, executive or customer backlash comes hard with a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
  • What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
  • So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders? 
#198
November 11, 2024

EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons

Guest:

29:29

Topics covered:

  • What are some of the unique challenges in securing GenAI applications compared to traditional apps?
  • What current attack surfaces are most concerning for GenAI apps, and how do you see these evolving in the future?
  • Do you have your very own list of top 5 GenAI threats? Everybody seem to!
  • What are the most common security mistakes you see clients make with GenAI?
  • Can you explain the main goals when trying to add automation to pentesting for next-gen GenAI apps? 
  • What are your AI testing lessons from clients so far?
#196
October 28, 2024

EP196 AI+TI: What Happens When Two Intelligences Meet?

Guest:

  • Vijay Ganti, Director of Product Management, Google Cloud Security
29:29

Topics covered:

  • What have been the biggest pain points for organizations trying to use threat intelligence (TI)?
  • Why has it been so difficult to convert threat knowledge into effective security measures in the past?
  • In the realm of AI, there's often hype (and people who assume “it’s all hype”). What's genuinely different about AI now, particularly in the context of threat intelligence?
  • Can you explain the concept of "AI-driven operationalization" in Google TI? How does it work in practice?
  • What's the balance between human expertise and AI in the TI process? Are there specific areas where you see the balance between human and AI involvement shifting in a few years?
  • Google Threat Intelligence aims to be different. Why are we better from client PoV?
#185
August 12, 2024

EP185 SAIF-powered Collaboration to Secure AI: CoSAI and Why It Matters to You

Guest:

29:29

Topics covered:

  • The universe of AI risks is broad and deep. We’ve made a lot of headway with our SAIF framework: can you give us a) a 90 second tour of SAIF and b) share how it’s gotten so much traction and c) talk about where we go next with it?
  • The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
  • Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
  • How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
  • AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
  • What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group?
  • We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them?  In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
  • An off-the-cuff question, how to do AI governance well?
#173
May 17, 2024

EP173 SAIF in Focus: 5 AI Security Risks and SAIF Mitigations

Guest:

27:23

Topics covered:

  • What are the unique challenges when securing AI for cloud environments, compared to traditional IT systems?
  • Your talk covers 5 AI risks, why did you pick these five? What are the five, and are these the worst?
  • Some of the mitigation seem the same for all risks. What are the popular SAIF mitigations that cover more of the risks?
  • Can we move quickly and securely with AI? How?
  • What future trends and developments do you foresee in the field of securing AI for cloud environments, and how can organizations prepare for them?
  • Do you think in 2-3 years AI security will be a separate domain or a part of … application security? Data security? Cloud security?
#171
May 6, 2024

EP171 GenAI in the Wrong Hands: Unmasking the Threat of Malicious AI and Defending Against the Dark Side

Guest:

29:29

Topics covered:

  • Given your experience, how afraid or nervous are you about the use of GenAI by the criminals (PoisonGPT, WormGPT and such)?
  • What can a top-tier state-sponsored threat actor do better with LLM? Are there “extra scary” examples, real or hypothetical?
  • Do we really have to care about this “dangerous capabilities” stuff (CBRN)? Really really?
  • Why do you think that AI favors the defenders? Is this a long term or a short term view?
  • What about vulnerability discovery? Some people are freaking out that LLM will discover new zero days, is this a real risk?
#168
April 15, 2024

EP168 Beyond Regular LLMs: How SecLM Enhances Security and What Teams Can Do With It

Guest:

  • Umesh Shankar, Distinguished Engineer, Chief Technologist for Google Cloud Security
  • Scott Coull, Head of Data Science Research, Google Cloud Security
27:23

Topics covered:

  • What does it mean to “teach AI security”? How did we make SecLM? And also: why did we make SecLM?
  • What can “security trained LLM” do better vs regular LLM?
  • Does making it better at security make it worse at other things that we care about?
  • What can a security team do with it today?  What are the “starter use cases” for SecLM?
  • What has been the feedback so far in terms of impact - both from practitioners but also from team leaders?
  • Are we seeing the limits of LLMs for our use cases? Is the “LLM is not magic” finally dawning?
#163
March 11, 2024

EP163 Cloud Security Megatrends: Myths, Realities, Contentious Debates and Of Course AI

Guest:

  • Phil Venables, Vice President, Chief Information Security Officer (CISO) @ Google Cloud
29:29

Topics covered:

  • You had this epic 8 megatrends idea in 2021, where are we now with them?
  • We now have 9 of them, what made you add this particular one (AI)?
  • A lot of CISOs fear runaway AI. Hence good governance is key! What is your secret of success for AI governance? 
  • What questions are CISOs asking you about AI? What questions about AI should they be asking that they are not asking?
  • Which one of the megatrends is the most contentious based on your presenting them worldwide?
  • Is cloud really making the world of IT simpler (megatrend #6)?
  • Do most enterprise cloud users appreciate the software-defined nature of cloud (megatrend #5) or do they continue to fight it?
  • Which megatrend is manifesting the most strongly in your experience?
#155
January 15, 2024

EP155 Cyber, Geopolitics, AI, Cloud - All in One Book?

Guest:

  • Derek Reveron, Professor and Chair of National Security at the US Naval War College
  • John Savage, An Wang Professor Emeritus of Computer Science of Brown University
29:59

Topics covered:

  • You wrote a book on cyber and war, how did this come about and what did you most enjoy learning from the other during the writing process?
  • Is generative AI going to be a game changer in international relations and war, or is it just another tool?
  • You also touch briefly on lethal autonomous weapons systems and ethics–that feels like the genie is right in the very neck of the bottle right now, is it too late?
  • Aside from this book, and the awesome course you offered at Brown that sparked Tim’s interest in this field, how can we democratize this space better? 
  • How does the emergence and shift to Cloud impact security in the cyber age?
  • What are your thoughts on the intersection of Cloud as a set of technologies and operating model and state security (like sovereignty)? Does Cloud make espionage harder or easier? 
#150
November 27, 2023

EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw

29:29

Topics covered:

  • Gary, you’ve been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems? 
  • If not SBOM for data or “DBOM”, then what? Can data supply chain tools or just better data governance practices help?
  • How would you threat model a system with ML in it or a new ML system you are building? 
  • What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
  • What are the key differences between securing the AI you built and AI you buy or subscribe to?
  • Which security tools and frameworks will solve all of these problems for us?