Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
EP151 Cyber Insurance in the Cloud Era: Balancing Protection, Data and Risks
Guest:
Monica Shokrai, Head of Business Risk and Insurance for Google Cloud
29:29
Topics covered:
Could you give us the 30-second rundown of what cyber insurance is and isn't?
Can you tie that to clouds? How does the cloud change it? Is it the case that now I don't need insurance for some of the "old school" cyber risks?
What challenges are insurers facing with assessing cloud risks? On this show I struggle to find CISOs who "get" cloud; are there insurers and underwriters who get it?
We recently heard about an insurer reducing coverage for incidents caused by old CVEs! What's your take on this? Is it an effective incentive structure to push orgs toward patching and operational excellence, or someone finding yet another way not to pay out? Is insurance the magic tool for improving security?
Doesn't cyber insurance have a difficult reputation with clients? “Will they even pay?” “Will it be enough?” “Is this a cyberwar exception?” type stuff?
How do we balance our motives between selling more cloud and providing effective risk underwriting data to insurers?
How soon do you think we will have actuarial data from many clients re: real risks in the cloud? What about the fact that risks change all the time, unlike, say, many “non-cyber” risks?
Guest:
Kelli Vanderlee, Senior Manager, Threat Analysis, Mandiant at Google Cloud
25:25
Topics covered:
Can you really forecast threats? Won’t the threat actors ultimately do whatever they want?
How can clients use the forecast? Or, as Tim would say, what gets better once you read it?
What is the threat forecast for cloud environments? “Cyber attacks targeting hybrid and multi-cloud environments will mature and become more impactful” - what does it mean?
Of course AI makes an appearance as well: “LLMs and other gen AI tools will likely be developed and offered as a service to assist attackers with target compromises.” Do we really expect attacker-run LLM SaaS? What model will they use? Will it be good?
There are a number of significant elections scheduled for 2024; are there implications for cloud security?
Based on the threat information, tell me about something that is going well, what will get better in 2024?
We have a view at Google that AI for security and security for AI are largely separable disciplines. Do you feel the same way? Is this distinction a useful one for you?
What are some of the security problems you're hearing from AI companies that are worth solving?
AI is obviously hot, and as always security is chasing the hotness. Where are we seeing the focus of market attention for AI security?
Does this feel like an area that's going to have real, full products, or just a series of features developed by early-stage companies that get acquired and rolled up into other orgs?
What lessons can we draw on from previous platform shifts, e.g. cloud security, to inform how this market will evolve?
EP144 LLMs: A Double-Edged Sword for Cloud Security? Weighing the Benefits and Risks of Large Language Models
Guest:
Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security
25:27
Topics covered:
Could you give our audience the quick version of what an LLM is and what it can and can't do? Is this “baby AGI” or is this a glorified “autocomplete”?
Let’s talk about the different ways to tune the models, and when we think about tuning what are the ways that attackers might influence or steal our data?
Can you help the security leaders in our audience have the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one?
How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they’re a) talking nonsense or b) actually keeping their output safe?
Are hallucinations inherent to LLMs and can they ever be fixed?
So there are risks to data, new opportunities for attacks, and hallucinations. How do we identify good opportunities in this area, given the risks?
It seems that in many cases the challenge with cloud configuration weaknesses is not their detection but their remediation. Is that true?
As far as remediation scope goes, do we need to cover traditional vulnerabilities (in stock and custom code), configuration weaknesses, and other issues too?
One of us used to cover vulnerability management at Gartner, and in many cases the remediation failures [on premises] were due to process breakdowns, not technology breakdowns. Is this the same in the cloud? If still true, how can any vendor technology help resolve it?
Why is cloud security remediation such a headache for so many organizations?
Is the friction real between security and engineering teams? Do they have any hope of ever becoming BFFs?
Doesn’t every CSPM (and now ASPM too?) vendor say they do automated remediation today? How should security pros evaluate solutions for prioritizing, triaging, and fixing issues?