March 21, 2022

EP57 Stop Zero Days, Save the World: Project Zero's Maddie Stone Speaks



Subscribe at Google Podcasts.

Subscribe at Apple Podcasts.

Subscribe at Spotify.

Topics covered:

  • How do we judge the real risk of being attacked using an exploit for a zero day vulnerability? Does the zero day risk vary by company, industry, etc? 
  • What does pricing for zero days tell us, if anything? Are prices more driven by supply or demand these days?
  • What security controls or defenses are useful against zero days including against chained zero days?
  • Where are the cloud zero days? We get lots of attention on iOS and Android, what about the cloud platforms? 
  • So, how do we solve the paradox of zero days, are they more scary than risky or more risky than scary? Or both?

Do you have something cool to share? Some questions? Let us know:


Timothy: Hi, there. Welcome to the Cloud Security Podcast by Google. Thanks for joining us today. Your hosts here are myself, Timothy Peacock, the Product Manager for Threat Detection here at Google Cloud, and Anton Chuvakin, a reformed, tamed, much nicer now former analyst and an esteemed member of the Cloud Security team here at Google. You can find and subscribe to this podcast wherever your podcasts are sold, as well as on our website. If you like our content and want it delivered to you piping hot every Monday Pacific Time, please do hit that subscribe button. You can follow the show and argue with your hosts on Twitter as well. Anton, this is a tremendously fun episode, because we have somebody who is contributing to the mission of making the world's information universally accessible and useful in a way that I've found surprising and yet completely true at the end of the day.
Anton: Yes, I definitely agree. I had a few profound realizations during the podcast, because while sometimes people make, like, loud proclamations about how they're really making the world more secure, making society more secure, our guest really does do that, really. And that, to me, is kind of fascinating.
Timothy: Yeah, it's an interesting episode, because we get to cover a lot of ground on the technical side, the economic side, and the human side of an issue that people often only look at through one of those lenses at a time.
Anton: Yes, correct. And, of course, we didn't mention the actual topic. Those who clicked on the episode on the website already know it's about zero-days.
Timothy: Yes.
Anton: So it has additional magical flair of ominousness on top, right?
Timothy: What could be more ominous than zero-days?
Anton: Zero trust.
Timothy: Oh, no, not zero trust. Let's stick today's episode to zero-days, not zero trust, and with that, welcome our guest. I am delighted to introduce today's guest, one of the biggest celebrities we've had on the show so far, Maddie Stone, a security researcher here at Google. Maddie, we're delighted to have you today. One of the things we wanna start out talking about is there's a lot of industry debate around the risk of zero-days to organizations, you know? Some people are like, "Oh, zero-days are rare. You don't need to worry about that." Some people say, "You know, zero-days are unstoppable, and they're monstrous, and they're horrifying." How do we think about the real risk of somebody attacking my org with a zero-day? Does it depend on who the org is? How do we think about this?
Maddie: So I think there's two sorts of questions and ways to break it up there. And I think the reason why my team, Google Project Zero, and I care about zero-days is the fact that while they are, in fact, targeted exploits, not things that are generally, you know, sprayed against anyone and everyone, even though it's that limited target, they tend to have a very outsized impact, a societal impact. Even though you may not be the person being targeted with the exact zero-day, your company may not be the one targeted with the zero-day exploits, a lot of times these actually have impacts on the rest of us. So, for example, you know, if someone is trying to hack an election in a country, and they're targeting certain people that are critical to that election with these zero-day exploits, then it tends to affect a lot of us around the world or in that country as well. Or, you know, if certain minoritized populations are targeted, then that tends to have this, like, effect across the whole population in the world we live in. So I think that's one important point in why, when we're thinking about zero-day exploits, it's really not just "Are they gonna hack me with it? Are they gonna hack my company?" So then we get to that second part of the question of "Okay, how concerned do I need to be about being hacked, you know, individually as a company or an end user?" And in general, yes, they are very targeted. They tend to be a capability that many attackers who use them don't really want burned. It took a lot of resources, whether it's money or time or specialized people, to build it, so they would like to get a good amount of use out of it, and therefore, you know, they're gonna be a little more deliberate in who they target. In general, probably most companies across the board and most end users do not need to worry about that targeting. 
However, if your company, say, doesn't have the best patching practices, if a lot of your users can be targeted with phishing, then those same attackers could just do that with this less technically sophisticated option. I think that the, you know, other way we need to think about these zero-day exploits is "Are you going to be hacked?" is sort of the question we wanna get to. If you are more technically sophisticated, a special target, then they may use zero-days. But if you're not, you can still be hacked and have that same effect on you. It doesn't cost them as much to try and do it if they don't use zero-days. Does that sort of answer the question?
Timothy: Yeah, and there's a really interesting dynamic in there that I think might be interesting for you to speak to for our listeners around, you know, the wielder of the zero-day wants to get a good return on their investment with that zero-day, but they also worry about getting burned, as you said. Just tell our listeners what you mean by that and what that dynamic looks like in practice.
Maddie: My team's mission is "make zero-day hard." And hard is a really hard metric to actually define and measure. What we sort of interpret that as is, basically, it costs more for the capability and it has a shorter useful life. So that return on investment keeps getting smaller and smaller and smaller when it comes to zero-days. In terms of getting burned, it's no longer a zero-day exploit when the defenders know about it, when there are patches out there, when there are antivirus signatures, when all these detection and intrusion teams know, "This is what I'm looking for. The exploit looks like X. It's going after this vulnerability." The attackers no longer have that sort of ability to sneak in with something no one knows to look out for. So that's what we mean by burned: we want to get the information about it out there fast, get the information to whatever vendor it is, so it's no longer that super special capability that attackers can use.
Anton: So in light of this, I wanted to turn back to the framework for judging the risk. I don't know why I'm so obsessed about it, but maybe it's my analyst past, when analyst firms made loud proclamations that most normal companies shouldn't really worry about it, because if you only patch Windows twice a year, then you have many, many, many bigger issues than zero-days, and, of course, you mentioned phishing as well. So can you build us a little bit of a framework for how to think about it? If I don't make aircraft parts, but I make, I don't know, shovels, am I at risk? Of course, I may think "no," but you may say, "What if you're a supplier for somebody who makes spacecraft, and suddenly, you are at risk?" How would I think about zero-day risk for me if I am a company?
Maddie: Basically, you start with making sure you're safe and protected against the least technologically sophisticated attacks. First off, can your systems be compromised with just run-of-the-mill malware that has been documented, people know about, signatures exist for? Can, then, your company or networks be compromised through phishing? Can they be compromised through n-day exploits, you know, ones that have patches available and we know about? And if you are then protected against these less technologically sophisticated attack vectors, that's when you can really start focusing on the zero-days. 'Cause otherwise, sure, you may be blocking against these super sophisticated attack vectors, but you're allowing the attackers to come at you with things that cost them a lot less money, take a lot less resources and a lot less time, because they're known ways to attack. They don't have to put in the effort to find a brand-new vulnerability and build a whole new exploit chain and find out how to deploy it. While more people may be targeted, we gotta start with the basics and make sure we're robust and protected against those first before we start looking at the zero-days.
Anton: And I think that answers it nicely, because a vast majority of companies never depart the phase called "I need to fix the basics." They were in "fix the basics" in 2007. Come 2027, "Hey, happy New Year," they'll still be in "Oh, we need to fix the basics." And that sounds like they shouldn't be concerned with zero-days. They should be concerned with the basics. Okay, fine. Maybe I'm more cynical today. What a surprise. Tim, yours is next.
Timothy: I don't think you're more cynical today. I think this is about a standard level of cynicism…
Anton: Okay, good.
Timothy: On the show. I wanna shift gears a little bit, Maddie, and talk about Cloud, because, you know, Cloud is all I care about. That's my day in, day out. What about zero-days in Cloud? You know, we hear a lot about iOS zero-days and Windows zero-days and Android. What about AWS, Azure, GCP? Are we seeing zero-days there?
Anton: In the Cloud. Let us not name names here. It'd be safer for all of us. Big public Cloud providers. I've heard there are three of them.
Timothy: Yeah, three unnamed Cloud providers.
Maddie: I actually posed this question when I just finished doing this year-in-review of all of the zero-day exploits known to be exploited in the wild in 2021. There are no Cloud-targeting zero-days, and there actually haven't been any since mid-2014, when we started tracking all of this data. So there has never been a zero-day for Cloud that was known and publicly disclosed as exploited in the wild. This sort of brings up two questions for me. One, is the detection work not happening? Are people not going through and trying to find these exploits? Because finding zero-day exploits, finding exploits that are targeting something you don't know about, is really hard, and it requires a really concentrated effort and creativity to try and come up with ways to find them. So if we're not detecting them, we're not gonna have the list even if they are being used. And then second, are they being publicly disclosed as exploited in the wild? Because that is the second piece of the puzzle. If certain vendors are hearing about this happening, like, "Hey, it was reported to them," and whoever reported it says, "Hey, this is actively exploited. You may wanna get it fixed faster," and either that reporter or the vendor doesn't come out and say it, then we're not gonna know about it either. So there's this sort of three-part question behind the fact that we don't have anything. One, are attackers actually using these? 'Cause you can't detect or know about things that aren't happening. And I would say pretty confidently, there's at least been some. You know, it's just too ripe of an attack surface for folks not to be looking at it. So then it really breaks down to two questions. Is it, one, a lack of detection, we don't have the capabilities to find them, or the resources haven't even been put in to build those capabilities? Or two, is it happening, but those finding them or those fixing them aren't disclosing publicly that, "Hey, this was actually in the wild. This is not just another vulnerability"?
Timothy: That's really interesting. This kinda reminds me of a conversation I was having recently about Cloud ransomware. We haven't seen a lot of public cases about Cloud ransomware, and I wonder if that's because the attackers are still finding it so profitable to operate elsewhere. And I wonder if in Cloud it's still so profitable to attack the things running on top of Cloud that adversaries aren't having to move down to the Cloud infrastructure itself.
Maddie: Maybe. I would find it hard to believe, given the number of folks I would think would be interested in gaining access to that infrastructure, that there are none. But it's hard to find such a small thing, especially when you think of, like, who could actually be doing this detection work. You know, when it's iOS and Android, well, of course, those vendors are gonna have more visibility into what's happening across the devices, through maybe telemetry or something like that. End users and individual security researchers can still do some work to try and hunt these down or find them, you know, especially with the browsers. But with the Cloud infrastructure, it would sort of be, like, who can emulate that type of situation as an individual researcher, to try and hunt them down and do that detection?
Anton: And also, we can point out that the detection teams at the Cloud providers, like, well, us, are probably the best of the best of the best of the best security engineers on the planet. So if we are not seeing them, there's a decent chance we're not being hit by them. Okay, I'm sorry, I'm an optimist in this little fragment of the discussion, but I feel like if we're not seeing them, they're not happening to us, because, again, of the quality of the detection. Yeah, okay, fine. I see you're smiling, which won't show up on audio. So we're gonna drop the subject. That's great.
Timothy: Listeners, you can take Maddie's silence as she doesn't believe Anton. It's great.
Anton: Okay. Now, the other fascinating subject. Of course, I spent a couple of nights reading Kim Zetter's book on zero-days and all that stuff, and what struck me in that book is, like, the whole obsession with pricing. I know that pricing matters, and, like Maddie, for example, mentioned, zero-days are costly to develop. They're big numbers, you know, whether you look at what the black market pays or what the white market pays. Big numbers. So what does pricing for zero-days tell us? Is your team succeeding because the price is growing? Like, "make zero-day hard" means "make zero-days expensive," right? Or am I thinking about it all wrong?
Maddie: It does, in a way, make things more expensive, require more investment for a capability that gives you less return. But I don't think that pricing always actually equals expense in those terms, because it also can be driven up by things like demand as well. And so I use it as a data point, especially when we're talking changes in the magnitude of increases in prices. I think that can be just an interesting thing to keep in mind. But in general, the pricing for zero-days going up, I don't think it's something that we can use, even though it may make us feel better about our jobs, like, "Hey, I wanna put on my performance evaluation: price of the zero-day went up. Clearly, it's all me," or whatever. But no, I don't think we can take that at face value, because also, if you're selling a zero-day to certain entities versus others, like, generally, we've heard from black markets and stuff like that that selling to Five Eyes countries tends to go for lower prices than, say, other geographic regions or regions that might be [inaudible] and things like that. To me, it's, you know, numbers in a game that I don't put a lot of--not confidence, but I don't put a lot of…
Timothy: Stock in that.
Maddie: Stock. That's the word.
Timothy: That's really interesting, that there's different pricing dynamics, depending on to whom you're selling these things. That's fascinating. I hadn't considered that.
Anton: And I love how it's, like, so nice and corrupting, like, the bad guys pay more than the good guys. This is almost like a moral test. Okay, that's not the subject of our podcast, but it's still kind of fascinating how the good guys pay less and the bad guys pay more. Like, this is satanic.
Timothy: Why don't we shift gears before we learn a lot about Anton's inner political leanings? Maddie, could you tell us about a day in the life of a zero-day explorer here at Google? What does that actually look like? Do you sit in a dark room with thumping music? Like, what does it look like?
Maddie: It's usually country. In my…
Timothy: Country. Okay.
Maddie: Case, that is. I'm all about the country music.
Timothy: And is that the good country music or the bad country music?
Maddie: Both.
Timothy: Both?
Maddie: You gotta support the bad country music, yeah. So it looks different on a day-to-day basis. I work closely with other teams within Google as well, which I think is a really cool aspect, like Google's Threat Analysis Group, Google Chrome, Google Android, across the board, to come up with: what are areas where we can make progress to make systems more robust and enable better detection? How do we all partner together? And so that's really exciting, you know, to not just be working on your own all the time. A lot of it is just staring at disassembly and bytes, because I'm looking at exploit samples to figure out what vulnerability they were targeting and how they were exploiting it. 'Cause exploits, really, are two parts. You can't just have a vulnerability; you also have to come up with an exploit method to make it useful. So it's trying to figure out what those two chunks really are, and then sort of taking stock of what we can learn from it. Because each time a zero-day exploit is discovered in the wild, it truly is a failure case for the attackers, sort of, as we were talking about, of trying to burn zero-days or burn vulnerabilities. We want to milk as much information as we possibly can from that finding, and so we also then perform things like variant analysis on the vulnerability. A lot of vuln researchers find more than one vulnerability at a time, because that bug pattern lives in many different places around the code base, or you're looking at a new attack surface for the first time, which hasn't been thoroughly audited, so you find a lot. So then, just like the vuln researchers who want to get as many vulns patched as possible, we try to apply that same sort of process: "Oh, the attacker found and used this vulnerability. What other ones might they have found that look similar to this?" And that's sort of trying to make it that much harder, so they can't just plug and play a new vuln in once this one has been patched. 
Also, that sometimes spurs new areas of research. I'm really lucky that I work with, like, a lot of really awesome vulnerability researchers across many different platforms. And so while I'm sort of a generalist with all the zero-day exploits, looking at all the different platforms from, you know, even Internet Explorer to WebKit, Safari, Android, Windows, etc., each of my teammates is really an expert who deep-dives into one of those, so I partner with them to be like, "Okay, I've gotten to this level. What else do you think this means? What should we take from that?" And lastly, partnering with lots of vendors and communicating with the general public about, like, how do we actually make this more difficult in the long haul? Because each of these zero-day exploits oftentimes really does have this human impact. Someone is being harmed through the use of it, and so remembering that this isn't just this cool technical problem; it has real-world impacts. So how do we make sure that we can try and limit the amount of harm that comes in the future and make sure we're doing everything we can in that respect?
Timothy: There's a really real motivation, at the end of the day, behind the work you're doing?
Maddie: Absolutely. This past year, I mean, there have been over and over and over more headlines of people who are actually having their devices targeted, especially by the commercial surveillance vendors. And this has really big impacts on not just, you know, digital privacy, having, you know, their lives opened up, but physical safety as well. A lot of these people are trying to really stand up and be human rights defenders across the world, or they're journalists, so trying to make sure that we can help them, that they can be safe and continue doing this, like, really important work, is something that definitely drives me, as well as, like, just how cool would it be if we lived in a world where everyone had safe and secure access to the Internet?
Timothy: Yeah.
Maddie: Regardless of price, your device is secure. Like, that just seems like an amazing world I wanna live in, where everyone can access information, talk to people, but doesn't depend on who you are or what you have to access.
Timothy: Yeah, you really are a key part of the, like--as cheesy as this is gonna sound, and you're all gonna make fun of me, you're actually a core part of the mission of Google of making the world's information universally accessible and useful. It's not accessible and useful if you're worried about your security. That's fascinating. I hadn't thought about that.
Anton: Me too. And I think that human cost is something that Maddie mentioned multiple times, and it's kind of stuck in my head at this point, because I was a little bit leaning towards, like, the technical challenges, how to solve it. I was not focused that much on the human cost. So it definitely switched something in my brain. Thank you, Maddie, for this. So I wanna switch gears a little to the defense side. And, of course, lots of vendors try to sell this and that, whatever technologies, but let's think about it in the most strategic manner. Certain defenses, certain security controls clearly rely on knowledge of the attack, and so a zero-day would not be stopped by them. But certain other types of defenses, I'm thinking allowlisting, I'm thinking, like, turning off certain functionality, are clearly, I guess you may hate me for the term, zero-day proof. So how do you think about the defenses against zero-days? Broad question.
Maddie: First of all, it's thinking about who you are and who might be targeting you with zero-days. Because, again, this is, as we've talked about, generally a relatively small target population, especially when you consider, like, do you think you are gonna be targeted by, say, one of the commercial surveillance vendors or a nation-state, or whatever it may be? If you are one of those people and zero-day is a large risk for you, then it's thinking about: what are sort of those TTPs? That's a threat intel word that I'm stealing. What are the techniques they're using? So, like, for example, a lot of the commercial surveillance we've seen over the past year has been one-click links. They send a link to someone, maybe through email or SMS, try to make it seem relevant, and when they click it, it actually goes to that attacker's exploit server, which then delivers the exploit chains. So if that's you, then maybe don't click on those types of links. But in terms of actually being zero-day proof, I think it would be something along the lines of, like, you can make yourself protected against JavaScript exploits if you turn off all JavaScript in your browser. That's not a great Internet usage option for a lot of people, but that's really, I think, the extreme you have to go to to really have that confidence of being zero-day proof. And in that same way, though, then you're really only protected against those browser exploits that are targeting JavaScript. You'd still be vulnerable to DOM exploits. And so if you don't wanna be targeted by browser exploits, then it's really, at that point, don't use a browser. Trying to hit that zero-day proof metric is probably unrealistic for any of us. I mean, that's why the zero-days are such a sophisticated capability. What your company or corporation can do instead is try to track the behaviors of what might happen if attackers got access. Are they trying to install an implant that sends information back? 
Then maybe that's what you detect. So you got exploited, but if you detect that really quickly, then maybe the impact is not actually that great. So yeah, there's really not a great answer to that zero-day proof question, overall.
Timothy: Maddie, this has been a wonderful conversation, and I hate to have to wrap it up. But I'm gonna ask our last two questions. Do you have, one, recommended reading for our listeners so they can go learn more about this? And two, this is kind of an odd one, do you have a recommendation for people to be, I guess, better at finding zero-days or not getting popped by them? Like, what's your actionable tip for our listeners?
Maddie: For further reading, if you check out the Project Zero blog, we tend to get really into the nitty-gritty details of the bits and bytes of these exploits, and I publish year-in-reviews there. The 2021 one will be coming out soon, but 2019 and 2020 are actually up. The Google TAG blog also publishes more of the threat intel, the context behind these zero-days. So I highly suggest those. And one action item I would give, I don't think it fits exactly what you're asking, but one area where I think we'll really see progress is if end users and companies, the customers of the big vendors, push the vendors whose products they're using, asking them: "Will you publicly disclose any time you've heard that a vulnerability is exploited in the wild? Will you annotate your release notes? What work are you doing to make your software and devices more robust? Do you have teams working on zero-day detection? Are you doing things like variant analysis and exploit mitigations each time one of these comes out?" The first one, I would say, the disclosure and the transparency around these, is what will make the biggest progress step first.
Timothy: I really like that answer. Maddie, thank you so much for joining us today. This has been a terrifically enjoyable conversation.
Maddie: Thank y'all so much for having me. I'm honored you invited me here.
Anton: Thank you. And now we are out of time. Thank you very much for listening and, of course, for subscribing. You can find this podcast at Google Podcasts, Apple Podcasts, Spotify, or wherever else you get your podcasts. Also, you can find us at our website. Please subscribe so that you don't miss episodes. You can follow us on Twitter. Your hosts are also on Twitter @anton_chuvakin and _TimPeacock. Tweet at us, email us, argue with us, and if you like or hate what you hear, we can invite you to the next episode. See you on the next Cloud Security Podcast episode.
