#71
June 21, 2022

EP71 Attacking Google to Defend Google: How Google Does Red Team

Guest:

Stefan Friedli, Senior Security Engineer @ Google


Transcript

TIMOTHY: Hi there. Welcome to the "Cloud Security Podcast" by Google. Thanks for joining us again today. Your hosts here are myself, Timothy Peacock, the Product Manager for Threat Detection here at Google Cloud, and Anton Chuvakin, a reformed analyst and esteemed member of the Cloud Security Team here at Google.

You can find and subscribe to this podcast wherever you get your podcasts as well as at our website, cloud.withgoogle.com/cloudsecurity/podcast. If you like our content and want it delivered to you piping hot every Monday Pacific, please do hit that Subscribe button. You can follow the show and argue with your hosts on Twitter as well-- twitter.com/cloudsecpodcast. Anton, we are talking about the red team today.

ANTON: Yes, that one's really fun because we discovered that, historically, episodes focused on something awesome that Google does in security are very popular.

TIMOTHY: They tend to be. And so this time we're talking about plasma globes? Is that true?

ANTON: No, no, no. Don't give out the spoilers, please. We're talking about red teaming.

TIMOTHY: Just red teaming. There are no plasma globes, listeners. Just red teams.

ANTON: No plasma globes are involved until the spoiler time, until the punchline.

TIMOTHY: No, but this is a really fun episode because we talk about some practical ways that Google's red team has worked to make lives better for themselves, ways they make lives harder for themselves, and ways they work together with the rest of Google security so that red teaming isn't just a "look at how clever we are" exercise, but really drives better security for Google and, of course, for our users as well.

ANTON: Yes, indeed. And I think one good lesson I've learned from this one is that even if somebody is at really high maturity, like, well, Google, there are lessons that can be cut into little chunks, and the first chunks, the first stages, can be used by other people. So it's not like, oh, Google does things in such an amazing manner that nobody can copy it. That may be true, but there are stages we went through that can be useful as lessons to others.

TIMOTHY: Yes.

ANTON: And I think with red teaming, there are definitely signals of that sort in the episode today.

TIMOTHY: Absolutely. And with that, I am delighted to introduce today's guest, Stefan Friedli, a Senior Security Engineer, also known as a heist orchestrator, here at Google. Stefan, what is our red team philosophy and approach here? Because there are lots of ways you could run a red team. How do we do it?

STEFAN: Hi, Tim. Thanks for the intro, and thanks for having me. It's a great question to dive into right away.

TIMOTHY: I figured I'd start off where it matters.

STEFAN: [LAUGHS] So our mission, in a nutshell, is that we help to make Google itself more secure by trying to hack it. And what that does is it gives our defensive teams, our blue teams, an opportunity to sort of have a sparring partner and to see how well our signals are already working and how we can improve them further, so [INAUDIBLE] sort of the simulation that hopefully prepares us for the real world.

ANTON: Yeah, that's a bit of a vague answer. I guess we asked for a philosophy, so you did give us a philosophy. What about something more about the approach and methodology?

TIMOTHY: Yeah, tell us. Like, when you're about to run a red team exercise, do you tell the group, hey, by the way, watch out, Kubernetes, because we're coming for you? Or do you silently slip in and say, hey, Kubernetes, we got you?

STEFAN: It's a little bit of both. Maybe it's important to say we target all of Google and, by extension, Alphabet. We're not restricted to Cloud. We also explicitly don't target Cloud customers. So that might be relevant for the people listening to this as well. We're not hacking your stuff.

But yeah, our philosophy is we try to share fairly openly because we have a good level of trust with the people we work with on the defensive side. So yes, we will try to stay undetected, and we will not give a big, loud heads-up. But we also make it as easy as possible to identify our activity and not waste resources on response when we get called for something that doesn't need a response. So we do give a heads-up, but we still try to fly under the radar as much as possible.

TIMOTHY: So there's a mechanism in place where if you get caught, for example, on a Friday afternoon, we don't ruin a bunch of people's weekends.

STEFAN: Pretty much, yeah. So what we will do is share our activity log, where we track all of our actions, which is something we do consistently, also for sheer accountability in case there are follow-up questions. We share that with blue teams. But our blue teams have too much other stuff to look into to be constantly watching those docs. So if something weird pops up on a Friday afternoon, they will be able to go back, check that out, identify us, and then follow up as necessary.

It's sort of an interesting detail, but I think it's a good anecdote as well, that we didn't do exercises on Fridays for a very long time because we didn't want people to spot them late Friday afternoon and then work their weekend. But at this point, response processes have been streamlined so much and are so effective that this is no longer a concern. So we were able to lift that restriction fairly recently.
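
For listeners who want to copy the deconfliction mechanism Stefan describes, it boils down to an append-only record of every red team action, detailed enough that a responder can match a suspicious event against it after the fact. Here is a minimal sketch of what one entry could contain; the field names are hypothetical illustrations, not Google's actual schema:

```python
# Hypothetical deconfliction log entry. Field names are invented for
# illustration; the point is that a responder who sees something weird
# can join on time, target, and source to rule the red team in or out
# without paging anyone on a Friday afternoon.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class RedTeamAction:
    timestamp: datetime   # when the action was taken (UTC)
    operator: str         # which red teamer did it
    exercise: str         # which exercise/premise it belongs to
    target: str           # host, service, or account touched
    source: str           # IP or hostname the action came from
    technique: str        # e.g., an ATT&CK technique ID
    notes: str = ""       # anything a responder would need to follow up

log: list[RedTeamAction] = []
log.append(RedTeamAction(
    timestamp=datetime.now(timezone.utc),
    operator="operator-3",
    exercise="hacktivist-sim-2022",
    target="corp-workstation-1234",
    source="10.0.42.7",
    technique="T1566.001",   # spearphishing attachment
    notes="Payload delivered; no execution observed yet.",
))
```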

TIMOTHY: That's a pretty cool claim to be able to make. Like, Google's response capabilities are so automated and powerful, even our red team can attack us on Fridays. That's awesome.

ANTON: That actually is. And I wanted to point out this whole approach where, if you see something weird that you can't crack, you can flip the book to the last page where the answers are. The whole log of activities that you can check, but that isn't shoved in your face up front, makes sense as a way to power what I would call a purple teaming aspect of this, right?

So if I am the defender and I'm noticing something weird, I do have a chance to check the red team log and see: is it them? If it's not them, I go ruin my weekend. If it's them, I go talk to them. I go celebrate, drink champagne or whatever, that we caught them. I guess that does make sense. And did this start on day one, or have we evolved to it? Maybe give us a 30-second-- maybe not 30-second-- history of how red teaming evolved at Google, if that makes sense, because you've been around for a while.

STEFAN: Yeah, I mean, it has definitely been an evolution. Like, we didn't start with the numbers we have now. We didn't start with a lot of exercises in parallel. So it has been sort of an organic growth.

We started doing those exercises as a 20% thing because red teaming is a concept that is well known in the larger industry but wasn't as prevalent within Google's security teams. And the results were just really strong. One of the benefits you have with red teaming versus other kinds of security analysis is that you usually get a full narrative, which gives a bit more context on specific bugs.

And yeah, we essentially started in collaboration with blue teams to run these and have figured out over time what is necessary to have this kind of collaboration, what is necessary to make sure that we don't ruin their weekends but that we give them enough information so they are able to figure out what we're doing on short notice. And that was one of the things we did fairly early on and then sort of tweaked as we went along.

TIMOTHY: So tactically, what does that look like? How do you, in practice, collaborate with them? There's a doc. What else is going on?

STEFAN: We have a lot of stakeholder management. It's not the most exciting part about red teaming, I guess.

TIMOTHY: No, but it's important.

STEFAN: Yeah, in terms of maintaining good relationships. So prior to an exercise, we will inform stakeholders about what we're going to do on a high level, so what we're targeting, what the premise is. We simulate fairly specific threats. We're not just saying, hey, we're going to do whatever comes to mind. We try to stick to threats we actually do encounter and that our Threat Analysis Group actually documents. We try to get people in the loop very early on.

And then once the exercise wraps up, there is a lot more of that as we go into remediation and as we try to make sure that things do actually get fixed. Yeah, there's a lot happening at the start and at the end. In the middle is the phase where we hunker down a bit more and are more focused on our operations and what the team is doing itself. So that's a bit of the calm phase, I would say.

TIMOTHY: So when you say starting from a particular premise, what's an example of that that you'd be able to share?

STEFAN: I think looking at what TAG puts out probably gives a good indication without spoiling too much or giving away information that is, at this stage, sensitive. But if we're seeing a lot of politically motivated attacks, then that's something that we might pick up in terms of how we're going to simulate a threat actor in the future. If we see, for example, a lot of hacktivism-based threats popping up, that's also something that we might take into account to inform what we're going to target and how we're going to do it. I think that's it in a nutshell.

We are not really bound so much by that. We try to also have high coverage across the different areas within Google. And we try not to simulate the same type of attacker all the time just because that's most prevalent right now. But it helps us to prioritize it.

TIMOTHY: Oh, so that's nice. Your job doesn't get boring. You get to follow what's going on in the world and take on different identities as you go through this.

STEFAN: You know, in fact, it does not get boring. We're probably one of the only security teams, in Google and beyond, that constantly works to make its own life harder.

TIMOTHY: Hmm. Say more about what you mean there.

STEFAN: Well, because we have a fairly mature remediation program, most of the things we find within exercises do actually get acted on fairly soon after we wrap up and share our findings and reports with affected teams. So the likelihood of us finding something and being able to leverage the same vulnerability, the same weakness in a process, six months or a year later is very low. That usually doesn't happen. So--

TIMOTHY: Ah, that makes sense. So you can't pull the same trick twice. It's really a case of, fool me once, shame on me; fool me twice-- fool me-- can't be fooled again.

STEFAN: Pretty much, yeah. Pretty much.

ANTON: But it really means that the chain or the path from testing to remediation to action is really working, which, of course, is probably the trickiest part of red teaming and pen testing elsewhere in the world. So that's amazing. Like, you managed to make the toughest part of red teaming actually work.

STEFAN: Yeah, but it was a lot of work to get there, to be clear, right? And I agree with you, Anton. I've worked in contract red teaming and pen testing before, where you would often do more compliance-driven testing. And it's sort of frustrating, from a purely technical perspective, to test the same environment one year, list all the findings, and come back the next year to find the situation virtually unchanged.

We don't really have that. We really need to start from scratch. And one of the reasons why we don't have it is because we do have dedicated folks that track remediation. So it's not necessarily the folks who run the attacks within the red team exercise who follow up, but dedicated folks who have the expertise to advise on these remediations and make sure that they happen within a reasonable amount of time.

ANTON: So can you run us maybe through an example of testing, a made-up, possibly made-up, testing outcome that powers the remediation and then what happens? Like, you read about something in TAG's materials. You decided to simulate that threat actor activity. Something happened. And then what happens? Like, can you run us through this path from test outcome to problem is solved?

STEFAN: Sure. Let's take something that is both fictional and also sort of high level, I guess. Let's assume we see a lot of targeted phishing going around geared towards executives and their assistants, right? That's something that we might pick up and do some research into: OK, how feasible is this? How much of it relies on deceiving the target by social engineering them, and how much can we also do via technical means? Are there ways we can manipulate the communication channels being used to be more deceptive? And not just in the sense of, oh, this looks like something I shouldn't click on, but rather, this looks very legit when it shouldn't.

So in that case, once we run this very simple example, we will have some findings, right? If the attack succeeded, it means it really fooled people, and there is a behavioral element that we can't really fix. We don't blame users for this sort of deception being successful.

But much more likely, there are things on the technical layer of things that we can say, hey, it should be easier for someone to spot deceptive content, and we have figured out that we can do A, B, and C in order to make sure that it's not easy to spot. Can we fix this? If so, how and which teams are responsible for this, which teams can advise on that, make it happen? And that's where we do the handover to our remediation teams and give them our technical input. And they'll triage it and run with it forward and make sure that these conversations happen and these changes are implemented if it's feasible.

ANTON: So, to pick one item out of this narrative: a very likely outcome of this exercise is not that somebody gets beaten over the head or, quote unquote, "re-educated." It's more that the technical elements are extracted, studied, and then technical solutions are implemented to make this less likely to succeed. Roughly, that's what's going on, right?

STEFAN: Yeah, 100%. That's a good call-out. I personally don't think that it makes sense to place the burden of security on the user behavior because we literally expect people to interact with email, to interact with the tools we're giving them. And telling people it's not OK to scan a QR code, telling people it's not OK to open an email or to be helpful to somebody is not doing us many favors. So we really try to focus on making it easy for people to spot deception or to report deception rather than reprimanding them for trying to do a good job.

TIMOTHY: I love this particular anecdote right now because it gives me the opportunity to do the thing I always make fun of our guests for doing, which is connect this practice back to our SRE practices. This blamelessness of red team outcomes is very similar to our blameless postmortem culture. And there's a really nice overlap there that ordinarily, guests would be like, oh, you should read the SRE book. I'm like, oh, always the SRE book. But here, very similar to the SRE book. That's great.

STEFAN: Yeah, there are definitely some similarities. And it has definitely helped us build those relationships with teams across Google. That is really helpful because, again, while we are simulating attackers that potentially want to harm Google, outside of that simulation context we are still Googlers. We are still trying to make Google more secure and a safer place. So I think this is a good example of how these two roles diverge a little bit from each other.

TIMOTHY: That makes a ton of sense. And while we're on the topic of other parts of Google, what's unique about red teaming at Google? What else is very Googley about how we do this?

STEFAN: That's a really good question.

TIMOTHY: Aw, shucks. Thank you.

STEFAN: [LAUGHS] I don't really want to claim that we're doing things in an especially weird or exotic--

ANTON: What? We do most things in an especially weird manner at Google. What are you talking about?

TIMOTHY: This is Google. Everything here is rocket science.

ANTON: Everything is special and weird.

STEFAN: If you insist, we can run with that.

TIMOTHY: [LAUGHS]

ANTON: Yeah, we do.

STEFAN: I do think compared to previous experiences-- like as I said, the remediation rate is super high, which makes things more interesting but also ups the pressure quite a bit. But in general, we just have a lot of stuff to look at. I think I mentioned it. We have a coverage model where we try to focus on different areas of the company across a certain period of time. And once we actually get back to the start of the cycle, a lot of things will have changed.

The velocity at which Google moves makes for a lot of surprises in a very frequent manner. So I think that's one of the reasons why it keeps being interesting and it doesn't get really stale. And we have one of the larger teams. We have a really nice diverse team that brings a lot of different backgrounds to the table. And I think that's also something that qualifies us and that makes it a cool place to work at.

ANTON: I mean, high remediation rates are another bit that falls into this big bucket I have next to my desk that says, amazing things that Google does that we just cannot teach others to do, [LAUGHS] because it's just a question of us being, well, really smart, really well resourced, and really caring about this. With some other companies, you can't teach them these tricks. You can't tell them, hey, build a SOC model similar to Google's.

And so in this case, we now have advice that says, hey, build a red team similar to Google's. And suddenly people trip over all the things you have to do before that. It's almost like zero trust. Zero trust is easy once you spend eight years and a significant amount of resources on it, right?

TIMOTHY: Going to space is easy once you've built a spaceship.

ANTON: Right, exactly, and you're halfway to orbit. OK, so let's go to something less controversial, maybe. Some fun examples-- obviously, fictitious examples-- from your testing experience, anything that, well, was inspired by a real story, maybe. Just anything that you can share externally that is fun [INAUDIBLE].

TIMOTHY: And change the names to protect the innocent.

ANTON: Change the names, yeah. The innocent because they're not guilty here, right?

TIMOTHY: That's right, blameless postmortems.

ANTON: Yep. Thank you.

STEFAN: Sure. I'll decline any responsibility. No. I do actually have one example--

ANTON: Ah, of course.

STEFAN: --I can share that is based on a real exercise a while ago that we have shared publicly before, so I think that's fine. So one fun attack vector we used a couple of years ago was we were trying to get a foothold on a Googler's machine. And we were running out of ideas on how to do this using, let's say, virtual means. So we were looking for ways to do it differently. And keep in mind that was probably a decade ago, so--

ANTON: Which means it's very futuristic for some companies. What you're saying happened a decade ago here means in about 10 years for them-- you get the joke, right?

TIMOTHY: Be nice, Anton.

ANTON: [LAUGHS]

STEFAN: So we were looking into more practical approaches and more physical approaches. So the team at the time decided that everybody loves swag. Everybody loves to get free stuff. And Google, like many other companies, does have its fair share of swag. So we decided, what if we send people something cool that they can connect to their computer? And we found those really cool USB-powered plasma globes.

TIMOTHY: No way.

STEFAN: So "futuristic" maybe does still hit it, Anton. I do agree. And we applied a Google logo to them and built in a little Teensy-style chip that emulated a keyboard device--

TIMOTHY: No, you didn't.

STEFAN: --and used that--

ANTON: Ah, they so did.

STEFAN: And used that to successfully gain access to some workstations after that. However, since we were talking about remediation, I should mention--

TIMOTHY: Well, what's the remediation on that?

ANTON: Yeah, exactly.

STEFAN: It did actually spark work on a piece of software that detects rogue human interface devices on USB and lets you allowlist and blocklist them, which I think was even open-sourced a while back. So you might be able to find that if you look for it.
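
For the curious, the core idea behind that kind of detection fits in a short sketch: keystrokes arriving faster than any human could type are a strong signal of an injection device. Below is a minimal illustration, assuming a Linux host with the third-party python-evdev package; the thresholds, allow-list format, and overall structure are invented here, and the open-sourced tool Stefan alludes to is certainly more robust:

```python
# Minimal sketch of timing-based keystroke-injection detection on Linux.
# Requires the python-evdev package and read access to /dev/input
# (typically root or membership in the "input" group). Thresholds and
# the allow list are illustrative, not taken from any real tool.
from evdev import InputDevice, ecodes, list_devices

ALLOW_LIST = {"046d:c31c"}  # hypothetical trusted USB vendor:product IDs
WINDOW = 5                  # consecutive key-downs to judge as a burst
HUMAN_FLOOR_S = 0.030       # inter-key gaps below this look machine-typed

def monitor(dev: InputDevice) -> None:
    vid_pid = f"{dev.info.vendor:04x}:{dev.info.product:04x}"
    if vid_pid in ALLOW_LIST:
        return  # explicitly trusted device
    stamps: list[float] = []
    for event in dev.read_loop():
        if event.type == ecodes.EV_KEY and event.value == 1:  # key-down only
            stamps = (stamps + [event.timestamp()])[-WINDOW:]
            gaps = [b - a for a, b in zip(stamps, stamps[1:])]
            if len(gaps) == WINDOW - 1 and all(g < HUMAN_FLOOR_S for g in gaps):
                print(f"Possible injection from {dev.name} ({vid_pid})")

if __name__ == "__main__":
    # Watch the first keyboard-like device; a real tool would handle
    # hotplug and monitor every HID keyboard concurrently.
    for path in list_devices():
        dev = InputDevice(path)
        if ecodes.EV_KEY in dev.capabilities():
            monitor(dev)
            break
```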

TIMOTHY: So it wasn't instructions to the physical security people to smash all the plasma globes they see.

STEFAN: I'm not sure. I feel like that would probably have been controversial, because I think people did like the plasma globes in general, maybe just not the implications of connecting them.

TIMOTHY: That's a phenomenal example. I love this. OK, so we're nearing the end of time. I want to ask the traditional closing questions. What's one weird tip for helping companies red team better? And do you have recommended further reading?

STEFAN: Sure. One weird tip-- it's not really weird to me, but I get from conversations that it can be for other folks. Red teaming doesn't mean you're playing against the blue team, right? And that's one of those misconceptions that people starting off and trying to build these programs sometimes have.

But honestly, the more collaboration you can have with the blue team, with detection and response or whatever you call it within your org, the more possibilities open up for red teaming. Because if you can manage the risk so that, if we break something or we get detected, it's not this big thing that needs investigation or requires escalation, then it frees up a lot of time. It frees up a lot of opportunities to do things that maybe are a bit more risky, but you can do them in a safe way because you have this channel and this trust relationship with the defensive teams. So that will probably be the one there. Further reading-- damn. Not a ton of--

ANTON: SRE book, fine. You can always cheat and say SRE book or TAG blog or something. Come on.

STEFAN: If it's reliable, it's reliable. That's great. Let's go with that.

TIMOTHY: All right. Stefan, thank you for joining us today.

STEFAN: Yeah, thanks for having me.

ANTON: And now we are at time. Thank you very much for listening and, of course, for subscribing. You can find this podcast at Google Podcasts, Apple Podcasts, Spotify, or wherever else you get your podcasts. Also, you can find us at our website-- cloud.withgoogle.com/cloudsecurity/podcast. Please subscribe so that you don't miss episodes.

You can follow us on Twitter-- twitter.com/cloudsecpodcast. Your hosts are also on Twitter-- @Anton_Chuvakin and @_TimPeacock. Tweet at us. Email us. Argue with us. And if you like or hate what you hear, we can invite you to the next episode. See you on the next "Cloud Security Podcast" episode.
