Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
We've seen a shift in how boards engage with cybersecurity. From your perspective, what's the most significant misconception boards still hold about cyber risk, particularly in the Asia Pacific region, and how has that impacted their decision-making?
Cybersecurity is rife with jargon. If you could eliminate or redefine one overused term, which would it be and why? How does this overloaded language specifically hinder effective communication and action in the region?
The Mandiant Attack Lifecycle is a well-known model. How has your experience in the East Asia region challenged or refined this model? Are there unique attack patterns or actor behaviors that necessitate adjustments?
Two years post-acquisition, what's been the most surprising or unexpected benefit of the Google-Mandiant combination?
M-Trends data provides valuable insights, particularly regarding dwell time. Considering the Asia Pacific region, what are the most significant factors reducing dwell time, and how do these trends differ from global averages?
Given your expertise in Asia Pacific, can you share an observation about a threat actor's behavior that is often overlooked in broader cybersecurity discussions?
Looking ahead, what's the single biggest cybersecurity challenge you foresee for organizations in the Asia Pacific region over the next five years, and what proactive steps should they be taking now to prepare?
How have you seen IAM evolve over the years, especially with the shift to the cloud, and now AI? What are some of the biggest challenges and opportunities these two shifts present?
ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management) are emerging areas in IAM. How do you see these fitting into the overall IAM landscape? Are they truly distinct categories or just extensions of existing IAM practices?
Shouldn’t ITDR just be part of your Cloud DR, or maybe even your SecOps tool of choice? It seems goofy to try to stand ITDR up on its own when the impact of an identity compromise is entirely a function of what that identity can access or do, no?
Regarding workload vs. human identity, could you elaborate on the unique security considerations for each? How does the rise of machine identities and APIs impact IAM approaches?
We had a whole episode on machine identity that involved turtles. What have you seen in the machine identity space, and how have you seen users mess it up?
The cybersecurity world is full of acronyms. Any tips on how to create a memorable and impactful acronym?
Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
What trips up most clients: classic security mistakes in AI systems or AI-specific mistakes?
Are there truly new mistakes in AI systems, or are they old mistakes in new clothing?
I know it is not your job to fix it, but much of this is unfixable, right?
Can you walk us through Google's typical threat modeling process? What are the key steps involved?
Threat modeling can be applied to various areas. Where does Google utilize it the most? How do we apply this to huge and complex systems?
How does Google keep its threat models updated? What triggers a reassessment?
How does Google operationalize threat modeling information to prioritize security work and resource allocation? How does it influence your security posture?
What are the biggest challenges Google faces in scaling and improving its threat modeling practices? Any stories where we got this wrong?
How can LLMs like Gemini improve Google's threat modeling activities? Can you share examples of basic and more sophisticated techniques?
What advice would you give to organizations just starting with threat modeling?
You are responsible for building systems that need to comply with laws that are often mutually contradictory. It seems technically impossible to do, so how do you do it?
Google is not alone in being a global company with local customers and local requirements. How are we building systems that provide local compliance with global consistency in their use for customers who are similar in scale to us?
Originally, Google had global systems synchronized around the entire planet, planet-scale supercompute with atomic clocks. How did we get from there to a regionalized approach?
Engineering takes a long time. How do we bring enough agility to product definition and engineering design to give our users robust foundations in our systems while also keeping up with changing and diverging regulatory goals?
What are some of the biggest challenges you face working in the trusted cloud space?
Is there something you would like to share about being a woman leader in technology? How did you overcome the related challenges?
Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
I know you use LLMs for anomaly detection. How does that “trick” work? What is it good for? How effective do you think it will be?
Can you compare this to other anomaly detection methods? Also, won’t this be costly? How do you manage to keep inference costs under control at scale?
SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
We hear from folks who developed an automated approach to handle a review queue previously handled by people. Inevitably, even if precision and recall can be shown to be superior, executive or customer backlash hits hard after a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
So, from your perspective, LLMs are going to tip the scales in whose favor: cybercriminals or defenders?
Google's Threat Intelligence Group (GTIG) has a unique position, accessing both underground forum data and incident response information. How does this dual perspective enhance your ability to identify and attribute cybercriminal campaigns?
Attributing cyberattacks with high confidence is important. Can you walk us through the process GTIG uses to connect an incident to specific threat actors, given the complexities of the threat landscape and the challenges of linking tools and actors?
Correlating publicly known tool names with the aliases threat actors use in underground forums is difficult. How does GTIG overcome this challenge to track the evolution and usage of malware and other tools? Can you give a specific example of how this "decoding" process works?
How does GTIG collaborate with other teams within Google, such as incident response or product security, to share threat intelligence and improve Google's overall security posture? How does this work make Google more secure?
What does Google (and specifically GTIG) do differently than other organizations focused on collecting and analyzing threat intelligence? Is there AI involved?
Can you tell us about one particular cloud consulting engagement that really sticks out in your memory? Maybe a time when you lifted the hood, so to speak, and were absolutely floored by what you found – good or bad!
In your experience, what's that one thing – that common mistake – that just keeps popping up? That thing that makes you say 'Oh no, not this again!'
The 'tools over process' mistake is one of the 'oldies.' What do you think still drives people to it, and how do we fix it?
If you could give just one piece of cloud security advice to every company out there, regardless of their size or industry, what would it be?