July 5, 2022

EP73 Your SOC Is Dead? Evolve to Output-driven Detect and Respond!

Guest: Erik Bloch, Senior Director of Detection and Response at Sprinklr

Topic: SIEM and SOC

Duration: 29:29


Topics covered:

  • You recently coined a concept of “output-driven Detection and Response” and even perhaps broader “output-driven security.” What is it and how does it work?
  • Detection and response is alive (obviously), but sometimes you say SOC is dead, what do you mean by that?
  • You refer to a federated approach to Detection and Response (“route the outcomes to the teams that need them or can address them”), but is it workable for any organization?
  • What about the separation of duty concerns that some raise in response to this? What about the organizations that don’t have any security talent in those teams?
  • Is the approach you advocate "cloud native"? Does it only work in the cloud? Can a traditional, on-premise focused organization use it?
  • The model of “security team as a decision-maker, not an implementer” has a bit of a painful history, as this is what led to “GRC-only teams” who lack any technical knowledge. Why will this approach work this time?


Transcript

TIM: Hi there. Welcome to the "Cloud Security Podcast" by Google. Thanks for joining us today. Your hosts here are myself, Tim Peacock, the product manager for threat detection here at Google Cloud, and Anton Chuvakin, a reformed analyst and esteemed member of our cloud security team here at Google.

You can find and subscribe to this podcast wherever you get your podcasts as well as at our website, cloud.withgoogle.com/cloudsecurity/podcast. If you like our content and want it delivered to you piping hot every Monday afternoon Pacific time, please hit that subscribe button. You can follow the show and argue with your hosts on Twitter as well-- twitter.com/cloudsecpodcast.

Anton, I feel like today's episode is a little bit of that old Mark Twain quote about reports of demise being greatly exaggerated. We're talking about SOC and what SOC means in the future. It doesn't mean that SOC is over. It just means that it's going to evolve, right?

ANTON: Yes, but at the same time, our guest has in the past occasionally written and spoken on things related to SOC. And I think in at least one of the posts, he did say that, in his opinion maybe-- drum roll-- SOC is dead.

TIM: Long live the SOC.

ANTON: Long live the SOC. Well, [LAUGHS] there is that, yes.

TIM: So I think there's really interesting stuff in here, and I think what's kind of great is that the parts of SOC that he describes going away are in many ways the parts that, especially for cloud natives, we would like to see go away.

ANTON: Correct, and some of his examples are taken from those very cloud-native companies, where the SOC did in fact go away, at least in the original form circa 2002 SOC-- a big room with a bunch of people in expensive chairs.

TIM: Well, perhaps with just that archaic visual in mind, let's welcome today's guest.

ANTON: And our guest today is Erik Bloch, senior director of detection and response at Sprinklr. Welcome to the podcast, Erik.

ERIK: Hey, thanks for having me. Happy to be here.

ANTON: The topic today is going to be focused on SOC, but not just on SOC-- possibly on SOC being dead. Let's not jump ahead, though. So, Erik, you recently coined the term or concept of output-driven D&R, and even perhaps output-driven security. So how does it work and why is it better?

TIM: I think that might be leading the witness-- why is it better. But I am curious about what it is.

ANTON: OK, fine. Just stick to what it is, and how does it work also?

ERIK: Well, what I've been noticing is, as companies move from on-prem data centers and having kind of their own in-house tech stacks to the public cloud, how we do detection and response changed. It went from being, you had to have a SIEM and roll your own detections because everyone's data center was kind of a unique snowflake to them. When you move to the cloud, everyone's infrastructure is common. So the outputs coming from your CSPs, your cloud providers, are known and they're common.

And we're starting to see vendors today start to take these outputs that are known and have kind of known outputs from their product-- I guess outcomes. And I think that for most cloud providers today, because they're fully aware of what the output is, they can actually help drive a lot of these outcomes. And so rather than having to roll your own detections, roll your own GRC, roll your own vulnerability management, since the outputs are known, we should be able to derive these outcomes from the cloud providers.

So the cloud providers are actually delivering, similar to how you guys do with your event-based threat detection. You guys are just giving an output. There's no inputs required. There's not a lot of rolling your own. The same thing for how some of the vendors are producing your TVM data or GRC data from your cloud. They know what the outcome is that you want and they're delivering the outcome without you having to go in and control your own like you would in an on-prem data center.

ANTON: So, by the way, here's somewhat of an inside joke-- mostly for me and Tim, I guess. Erik just voted on the side of Tim in our long-running debate about custom detections. And I am really starting to get scared because--

TIM: I wouldn't say he voted for me. I actually wanted to pick at that. So, Erik, you mentioned the event detection that we do in Google Cloud. Are you a user of that?

ERIK: I'm not actually. If we were just in GCP, I probably would be, but since we use all three of the major cloud providers, that's where I had to go with a more agnostic tool that could speak kind of all three languages. But I am a fan of it.

TIM: So this is very interesting to me. So you chose a common output across your cloud surface rather than cloud-specific tooling that, then, your analysts would have to translate on a per cloud basis.

ERIK: Exactly.

TIM: OK. Now if you were a single cloud user and you had gone with that event product, would you feel uncomfortable not being able to see the exact logic that the rule uses, or would you accept that as part of the upside of a managed security approach?

ERIK: As long as I could customize some-- add my own things, because, I mean, everybody-- you're going to have your little one-offs. The rest of it, I'd probably be OK with. I draw the analogy to the endpoint protection software on your laptop. I couldn't tell you how whatever the product is works, what rules they have, or what machine learning engines they're using on the back side. But it seems to work and stop all the bad things. So after a while, you just have to put your hand over your eyes and trust that they're doing the right thing.

TIM: This isn't surprising to me. And Anton and I are making faces at each other because he and I argue about this outside of the podcast all the time. I think the analogy here is a little bit like the shift from, say, writing assembly to writing Python. At some point, we accepted, I don't really know what the compiler is going to do with my address layout of my memory management, but that's OK because my life is better this way anyway.

ERIK: Well, yeah, totally. I mean, it's a similar analogy to the AV software. I mean, you don't always know what's happening under the hood, but we have to establish trust with the vendors that we're working with. I mean, trust is kind of the foundation for how the internet works in general. We trust our SaaS providers. We use Google today. We trust you guys are doing the right thing, and vice versa.

TIM: Yeah, absolutely. So I want to shift gears a little bit and talk about something I've heard you say. I've heard you say that SOC is dead. And now, I've met a lot of living SOCs, so could you clarify what you mean by that?

ERIK: Again, it's drawing from my experience moving from my last job at Salesforce, where we were a large-- we had our own cloud and our own data centers. As we move from there to the public cloud, again, we started to realize that the way you do detections, the things you care about, and the things you watch are vastly different than on prem, again, where everything's unique to you. When you move to the cloud, things are much more common.

And we started seeing, again, a lot of these third party agnostic vendors, they could translate the Googles and the Amazons and the Microsofts of the world into a common language and give us a common output that we could actually make actionable and use.

And so as I started to dive more and more into this and started to realize that the cloud providers are providing us, again, with a shared common infrastructure-- the same buttons and knobs and dials I have to push in GCP are the same ones you do. Therefore, we should be able to map the outcomes we want coming out of these clouds and have those outcomes delivered without needing a SOC.

So if something comes out-- a horrible example is just, like, an open object someplace in the Google object store or an S3 bucket or whatever it is that's wide open to the whole world. That gets dropped in your SOC, traditionally. The SOC guys have no idea. What are they going to do with it? They're going to go ask the team responsible for it, hey, should this be wide open or not? They don't know. And then they let the other team make the decision, and they take the action.

And what I'm saying is, hey, we know what that output is. We know that's an open object store. We know who needs to handle this. It's probably your tech ops or networking or NOC team. How about we land this with them instead of having everything be kind of default dumped into your SOC, because that team is a team that actually has the knowledge and know-how to address the issue.

And instead of having your SOC decide where this goes and deciding who to escalate it to, let's land the issue with a team that can actually fix it and then train them how to escalate back to your CERT team or your SOC function, whatever you want to call it, if something is out of whack. And you could do this across everything coming out of your cloud. For GRC, you have metrics and reporting-- go to them. For TVM information, this could go to those guys.

For a lot of the new behaviors and activities you see in your cloud environments-- for me, the console or the endpoints-- most of the time, again, your security team aren't going to be the people who can actually answer the question, is this right or wrong. You're going to be asking somebody else. So let's land the problems with those guys that can give us the thumbs up or thumbs down, and then train them how to escalate back to your security team if something's out of whack. And so that's kind of the basic premise for it.
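(To make the routing Erik describes concrete, here is a minimal sketch in Python. The finding types and team names are hypothetical-- they come from this illustration, not from the episode or any specific product. Known finding types land directly with the teams that can act on them; only novel behavior defaults to security.)

# Minimal sketch of output-driven routing; all names are illustrative.
FINDING_ROUTES = {
    "PUBLIC_STORAGE_BUCKET": "tech-ops",   # open GCS/S3 object store
    "VULNERABILITY_REPORT": "vuln-mgmt",   # TVM data goes to those guys
    "COMPLIANCE_DRIFT": "grc",             # GRC metrics and reporting
    "NEW_PRIVILEGED_USER": "noc",          # change-control vetting
}

DEFAULT_ROUTE = "security"  # only novel behaviors land with security


def route_finding(finding: dict) -> str:
    """Return the team queue that should own this finding."""
    return FINDING_ROUTES.get(finding["type"], DEFAULT_ROUTE)


if __name__ == "__main__":
    finding = {"type": "PUBLIC_STORAGE_BUCKET", "resource": "bucket-x"}
    print(route_finding(finding))  # -> tech-ops, not the SOC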

ANTON: But so this is a federated/decentralized approach. So basically, you're not killing the S in SOC. You're not killing the O. You're killing the C. You're killing the center, essentially.

TIM: I don't think it's dead in his model. I think it just has a more narrowly-tailored set of things to deal with.

ERIK: Yes.

TIM: The false positives get farmed out, absorbed at the lower layers. This is perfect. I love this model.

ERIK: That gives me a warm fuzzy.

ANTON: Yes, but so the question is, is it workable for everybody? To me, when you federate-- you practice this as kind of a federated approach to D&R, where the outcomes are routed to the teams that need them or can address them-- but what if I am very centralized? What if my organization is very top-down? So is this workable for every organization that uses cloud, or for everyone, period? Tell me more about where this federated approach would apply.

ERIK: Well, I've seen it applied at other businesses as well. I mean, I am cobbling together this information from other companies. I've talked with Alex over at Netflix about his SOC-less approach and what those guys did over there, and I've talked to some people at Facebook, with my old bosses over there, about how they decentralize things too. And it's not to the same level, but different bits and pieces of this seem to be working.

So when I came to Sprinklr, I mean, I knew I wasn't going to have a 100-person security team, so I needed a way to do things smarter and more efficiently. The other teams that we already work with today, they already have tier one functions. Your SRE team exists. You have a NOC that monitors your network stack, whatever. You have an IT help desk. These guys already know how to do tier one. They already want the work.

So for public cloud companies, I can see this being an easier transition from a centralized SOC model to distributing the load and having your security team focus on the actual issues that they have the skills and time to address versus them just becoming a dumping ground.

For some conventional legacy companies that still have data centers, that's a lot more difficult because you don't have that common infrastructure. You have your unique-- it's unique to you. So in that case, I'm not entirely sure if it would work, I mean, unless you had literally all your outcomes known and derived-- which, again, coming from a unique infrastructure is a lot harder than coming from a public cloud provider.

ANTON: But for a second, you sounded like this: let's pick three companies at random-- Facebook, Netflix, Salesforce. Based on this experience, clearly this approach is workable for everybody. I am giving you back a cynical version of what you said, but I think the point is that there would be some kind of adjustments needed to make this federated, distributed approach work-- also, who writes the routing logic? It would still be, in my mind, a SOC with, as Tim said, a narrower mission.

TIM: No, no, no.

ANTON: No?

TIM: No, no, no. You're not thinking cloud. In cloud, it's so easy to write the routing logic. It's whoever owns the project. It's whoever is responsible for this part of your org hierarchy. You got to think cloud with this one.
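(Tim's point lends itself to a tiny sketch: in cloud, the routing key can come straight from resource metadata. The owner mapping below is an assumption for illustration-- a real setup might use project labels, org-hierarchy folders, or a CMDB.)

# Hypothetical illustration: route a finding to whoever owns the
# project it fired in. The ownership data is assumed for the example.
PROJECT_OWNERS = {
    "payments-prod": "payments-sre",
    "data-lake": "data-platform",
}


def route_by_owner(project_id: str, default: str = "security") -> str:
    """Route a finding to the team that owns the project."""
    return PROJECT_OWNERS.get(project_id, default)


print(route_by_owner("payments-prod"))  # -> payments-sre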

ERIK: Yeah, so similar to what we're doing today-- we're working with our partner teams and saying, hey, these are all the outputs that are coming out. What are the outcomes you need? How can we take a given output-- say, some vulnerability data that comes out of your cloud provider-- and, using our automation platform, enrich this information to produce the outcome that the vulnerability team wants? Like, OK, I want to know the delta between vulnerabilities yesterday and today. OK, cool. I can grab the data out, my automation team can make that happen, and I can deliver them the outcome they need without it ever having to pass through my SOC or touch any of my security guys.
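(The vulnerability-delta outcome Erik mentions is easy to sketch. This is an illustration under an assumed data shape-- two sets of vulnerability IDs-- not his actual automation.)

# Sketch: turn raw scanner output into the outcome the vuln team
# asked for-- the delta between yesterday's and today's findings.
def vuln_delta(yesterday: set, today: set) -> dict:
    """Report what appeared and what was fixed between two scans."""
    return {
        "new": sorted(today - yesterday),
        "fixed": sorted(yesterday - today),
    }


print(vuln_delta({"CVE-2022-1111", "CVE-2022-2222"},
                 {"CVE-2022-2222", "CVE-2022-3333"}))
# {'new': ['CVE-2022-3333'], 'fixed': ['CVE-2022-1111']}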

TIM: That makes a ton of sense.

ERIK: And you can apply the same model to GRC. You can apply the same model to misconfigurations, configuration management, new users being added to your environments, new environments being stood up. All these things are basically changes. And somewhere, someone's in charge of that change. It's a change control ticket that went through, your tech ops guys are doing it, your NOC knows about it, and it applies all across the board.

The only things that land on my security team are new behaviors. Here's a guy who's logged in as root, moved laterally, installed a compiler, and is compiling code as root. OK, that should probably go to the security team, because we're going to have the context and know how to handle that, versus, hey, a new user got added and he has access to all your environments. Is this legit or not? I don't know. Let's ask the NOC, the tech ops guys who made it. Is there a change control ticket?

Well, my automation guys are going to pull the information for them, deliver them this outcome so they can make the decision. If it's a bad decision or if they say, look, this shouldn't have happened, they can escalate back to us, and then we'll take appropriate action, whack the guy and talk to his manager or whatever. But we're looking for those guys that have the skills and the knowledge to tell us what's right and wrong. And then we can either address it or they can address it.

ANTON: OK, what about the concern that I sometimes hear when I try to present a similar model? People bring up some kind of separation of duty, vaguely defined. Admittedly, they vaguely define it, and they say, wait a second, I can't have the people who build and run the tech handle security issues in it. I have my own counterpoint to this, but I'd love to hear what yours is, because, ultimately, to me, this argument kind of, on paper, ruins the model. That means you can't give it to individual teams, because they're stakeholders. They may know it, but how can they be trusted? I think that's my retelling of it. Tim is making a face at me, but yeah.

ERIK: No, we've had those questions as well. As we've done this, we've realized that we had other processes that don't work correctly. Take the change control process-- if that works correctly, there shouldn't be any conflict of interest. There should be a separation of duties. So if I'm going to make a new user that has access to all my cloud environments, well, there should be a change control ticket for this. It should be opened by somebody. Somebody in authority should vet and approve this thing, and another person should actually be the person implementing this new user.

When we pass this back to, say, the NOC or tech ops team, what's the first thing they're going to do? They're going to look for this change control ticket. Has this been approved by somebody? Has this been vetted? If not, this is probably a bad change. The tier one functions-- the NOC or the tech ops or IT or whatever-- they don't have any context around, oh, this is my buddy Bob who made this account because he's doing something. They're just looking at the black and white. Here's a ticket. This is what it says. I'm going to vet and verify this and give you the answer.

And that's what this pointed out to us. We had some broken controls. Our change control process wasn't working correctly because they couldn't give us a yes or no answer. So that pointed out additional flaws we need to go back and fix. And as we fix these things, we're noticing the process improve. But, again, your process has to be in place. There has to be the whole cultural aspect of your partner teams wanting to do this with you. There has to be buy-off from your executive leadership. So there are some hurdles to this, but I don't see separation of duties as necessarily being one of them.
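(A toy version of the tier-one check Erik walks through: look for an approved change ticket where the requester, approver, and implementer are distinct people. The ticket fields are assumptions for this sketch; real change systems differ.)

# Hypothetical change-ticket vetting, as a NOC tier-one runbook step.
def vet_change(ticket) -> str:
    """Vet a new privileged user against its change control ticket."""
    if ticket is None:
        return "escalate: no change ticket found"
    if not ticket.get("approved"):
        return "escalate: ticket never approved"
    roles = {ticket.get("requester"), ticket.get("approver"),
             ticket.get("implementer")}
    if len(roles) < 3:  # someone held two of the three roles
        return "escalate: separation of duties violated"
    return "ok: vetted change"


print(vet_change({"requester": "alice", "approver": "bob",
                  "implementer": "carol", "approved": True}))  # ok
print(vet_change(None))  # escalate: no change ticket found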

TIM: I think that's a compelling answer. What do you think, Anton?

ANTON: I'll buy that. I mean, I don't know if a typical European banking auditor would buy that, but I would buy that.

TIM: Those guys don't buy anything.

ANTON: Well, that's my point. They think cloud is some kind of a scary thing that is coming from the future. I don't know. I don't want to make too many snide remarks about them. But the point is that, yeah, there are some difficult arguments to be made.

Recently, something happened to me when I was trying to explain that if you have a SIEM, you want the people who respond to alerts to contribute to writing the rules. And somebody from a bank just stood up and said, that's absolutely impossible, because our auditors told us we cannot combine the people who write the rules and the people who respond to alerts-- that's a separation of duty violation. He didn't really say violation--

TIM: You should absolutely have those people doing the same thing. That's crazy. So I want to roll back to something I said that also happens to be one of the questions we wanted to ask here, which is, this approach sounds extremely cloud-native. I immediately went to using your org hierarchy to route tickets. That's a very cloud-native concept, not an on-prem concept. So does this delegated SOC model or partner SOC model-- does it only work in the cloud, or can people on-prem use it?

ERIK: Initially, I had assumed it would be far easier to roll out to a cloud environment. And I kind of assumed that would be the only place people would want to roll it out. Since then, I've actually had conversations with people that are hybrid-- that either have their own data centers or are moving to the cloud-- or a particular bank who just has all their own data centers. And it comes down to a matter of having the correct tooling in place, the cultural issues-- having a culture that would change and embrace this is important-- and kind of having everyone rallying behind this model.

So I have talked to a few companies that are, you can say, legacy-- they have their own data centers-- that want to do this. To me, it seems like it's a really heavy lift, again, because you have so many custom things in your environment. I'm not going to say it's impossible, but I think it would be far, far easier to do in a public cloud environment.

ANTON: And so you're referring to not just that cloud is kind of a monoculture, in that sense, even though it's a bad word, but that you can have the universality of outcomes in the cloud. But it's very hard to make it in a very snowflake-y data center where everything is pets, not cattle.

ERIK: Exactly, yeah.

ANTON: Basically, maybe that's what it is. Maybe it's the cattle environment. Maybe if you run your on-prem like that, it's going to be fine.

ERIK: If you knew all the outputs and outcomes that your on-prem data centers and tools could provide, then yeah, you could map it. It'd be a one-off version of this, but you could totally do it. Versus going out for cloud-- again, you can go out and buy-- there are dozens of third-party vendors that can take all your data from all three major cloud providers and translate Google, Microsoft, and Amazon into a common format for you, which makes it a far easier, much lighter lift, where you're basically paying somebody some money to do this for you, versus in your own data center environment, where you're having to do it yourself. And I see that as a heavy lift. So I'm not saying it's impossible, but in public cloud-- I think the cattle model would make it far easier.

ANTON: That does make sense. And I think that it's probably not for cloud only, but it's kind of the-- if you practice modern IT models that cloud is basically pioneering, then you can practice this.

ERIK: That's how I originally came at this, yeah. Because it's common across everybody, because they are public cloud providers, it'd be far easier. And it wasn't till later on that I started looking back, and I'm like, well, maybe this is possible in conventional data centers and conventional networks. But how you do it-- I mean, that starts to get into a mess-- a lot of heavy lifting, a lot of one-offs.

ANTON: By the way, before we go further with the questions we have, I kind of want to go through a few more examples of-- I fear that the audience may not always have a clear visual for the outputs or outcomes. So, for example, cloud audit logs would be an input. Like, a log type or EDR trail, that's an input. But an output would be what? Detecting permission changes without authorization? Give me a few things that you would want to use in your output- or outcome-driven thinking.

ERIK: Sure. What we're doing here with the tooling we have today for security events, the things we're not farming to other people, we'll take the security event-- say it's a new connection to an unknown host or something like that. We know what the output of the tool is going to give us. We know what the outcome is you want to see. The outcome is going to be, OK, we want to have this alert. We want to look at the connection, the place they connected to. We want to see the application that actually reached out to connect to it. We want to see who did it, what user it was. We want all the context built around the output of this new connection.

Based on that, we can merge all that data together to actually produce the outcome we want to see, which is a fully contextualized alert that we can kind of pre-vet by, again, doing reputation checks on IPs or domain names and seeing where it's connecting to and how it's connecting. So when it does land with one of my security guys, they can quickly look at it and make a determination of, OK, I can look into this deeper or not. So it's a matter of taking the outputs from your tools-- enriching them, futzing with them, tweaking them-- to produce the outcome that every team needs.
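(Here is what that enrichment step might look like as code-- a sketch only, with a stub lookup standing in for whatever directory, EDR, and reputation sources a real pipeline would call. The event fields are assumed for the example.)

# Sketch: enrich a raw "new connection" output into a contextualized
# alert. check_reputation is a stub, not a real threat-intel API.
def check_reputation(host: str) -> str:
    return "unknown"  # a real pipeline would query a reputation service


def enrich_connection_alert(event: dict) -> dict:
    """Attach user, process, and destination context to a raw event."""
    return {
        "summary": "new outbound connection to " + event["dest"],
        "user": event.get("user", "unresolved"),
        "process": event.get("process", "unresolved"),
        "dest_reputation": check_reputation(event["dest"]),
        "raw": event,
    }


print(enrich_connection_alert(
    {"dest": "203.0.113.7", "user": "build-svc", "process": "curl"}))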

TIM: "Futzing with"-- is that on the marketing page for a lot of SOR tools? I think it probably should be.

ERIK: Totally is. Futzing, tweaking, it's all in there, yeah. I'm pretty sure. It's probably on the last page.

TIM: And so the responsibility for building the futzing-- does that also roll back into the teams that are doing the response?

ERIK: Right now, we haven't quite gotten that far yet. Right now, it's still my security team doing it. We're partnering with our partner teams-- tech ops and whatnot-- to figure out what they need to work with their models. Because, again, our NOC is already tier one. They already have their own run books. They already have their own procedures. And it's not a matter of us saying, hey, look, we're throwing this over the fence. We're working with them and saying, hey, how can we deliver an outcome that you guys can make actionable immediately? Just insert it into your workflow. We'll give you a new run book, and you guys can handle it.

So a lot of it was learning how our partner teams did their work, because the IT team does something different than the tech ops team. Tech ops works differently from the NOC. Your NOC works differently from the engineering team. So it was a lot of us listening, a lot of conversations, to figure out how they work and how we can integrate with that work.

I'm hoping to get to a point, eventually, where those teams can do that. When there's a new output that comes from one of our tools, they can build their own workflows to produce the outcomes they want so they can deliver it to themselves. But we're not quite there yet.

ANTON: But it means a lot of mutual learning is kind of built in and is required in this system. If security doesn't care about what the networking guy does, and if the networking person doesn't care what the application developer does, this probably cannot work. But if there's mutual learning going on, then it can work or maybe should work. Is it fair?

ERIK: Yeah, I mean, well, if none of the teams care to begin with, I mean, you probably have other problems. If your wheels are falling off the car-- but this has actually been really good because my team has gotten to work with all the other teams, all the other partners and stakeholders in the team. And it's brought us closer. These guys want to work with us.

Take your NOC guys, your tier one NOC guys, who are just responding to networking events. They're excited. They're going to get to look at security events now. It's going to kind of broaden their horizon. It's going to give these guys a different path out of the NOC. Maybe once they get experience handling security issues, maybe they want to go be a security analyst someplace rather than being a tech ops guy.

They're excited to see it. They're excited that they get to handle the outcomes rather than having kind of the things thrown over the fence at them all the time. They're getting to work closer with us in security. The same with all of our teams-- it's building those relationships. At first, it was rocky, but now everyone's on board with this. They're excited to do it.

TIM: OK, so what I love about this-- so often, I see vendors pitching something when what they ought to be pitching is, like, therapy time for security teams and engineering.

ANTON: Culture change-- therapy time. Yeah, this is back to the same exact lesson that came in many, many successful episodes we've done is that it's sort of-- a lot of these things, especially when cloud migration transition is involved, it's like culture change, therapy time, culture change, therapy time, culture change, therapy--

TIM: And we've seen what happens when you don't do that during that migration as well. You end up patching containers in production and looking like a real jabroni.

ANTON: And 400 firewall appliances in the cloud and patching containers with broad--

TIM: Yeah, so this is great.

ANTON: Yeah, so I want to maybe take a look at this from left field. In the past, there were some kinds of attempts-- let's put it this way-- where a security team was positioned as a broker and decision-maker, while all the work was done in the individual teams. Like, oh, security just writes policy, and then somebody somewhere does the work. To me, it has ended very badly in the past, and there are even books written about how bad it can get. And I see sort of glimpses of the same model here, but modernized, without the negatives. At least that's how it is in my mind.

So a practical question to you, Erik. How can this model work with a security team who wants to do the work, who kind of like-- no, we're going to go do it. We wouldn't trust the networking team. We wouldn't trust the team. We have to go hack it ourselves. How do you change that view?

ERIK: That was my first step here-- getting people to embrace this new model. And what I found-- I mean, to your guys' point, it's a cultural change. I have to be able to clearly articulate the vision, clearly articulate where we're going to be a year or two years from now when this is working.

Now, I have to get the team-- not only my team, but my partner team has to actually buy into that vision. And that's probably the most difficult part, which, again, goes back to this therapy time, which you guys were saying. And it kind of illustrates that almost all problems in security are actually people problems. They're not actually technology problems. I mean, what widget am I going to deploy into this environment?

But it's not a matter of what widget it is. It's a matter of getting people to agree on it. And this is the same thing. I mean, I had to get people to understand the vision. I had to be able to articulate it. I had to get them to buy into it. Once they did, that made it a lot easier.

TIM: What was hard in that process, and how did you get over those objections?

ERIK: The hardest part was me clearly articulating this vision, because it was fairly new, and I was stumbling over myself. But once I had kind of my elevator pitch down and people could understand it, then people started falling in line, like, yeah, this totally makes sense; I don't see why we're doing it any other way. Again, it's just like pitching a startup. Once you find that 30-second pitch that people can understand and buy into, you're like, aha, I got it. It was the same thing.

TIM: That makes a ton of sense.

ERIK: I pitched my pitch, on Slack, to probably 100 people until people were like, oh, I totally get that.

TIM: That sounds about right. So we're just about at time, and I want to ask you our famous closing questions.

ERIK: Uh-oh.

TIM: First, do you have one weird tip for people that are trying to go down this path? And then, two, do you have recommended readings, so that we don't leave people empty-handed?

ERIK: Tips for people going down this path-- first, if it's something you're interested in doing, make sure that your company is in a position to do it, first off. I mean, again, if you're at a 500,000-person bank and you're still running mainframes, you're probably in the wrong place. If you're in a smaller SaaS company with a couple hundred or a couple thousand people and you're completely public cloud, maybe it makes sense for you to explore this.

To get people to buy into this, you need to be able to paint your vision and articulate it clearly and simply. So that would be my tip if you're interested in doing this.

As far as books go--

ANTON: For the readings, we're going to link to your articles for sure, because they're fascinating-- we'll have two of those. But anything else?

ERIK: Besides the articles, I mean, a lot of what I have kind of gleaned from this has been from Alex Maestretti's old Netflix blog around his SOC-less approach a couple of years ago, when he was kind of pioneering this. I've read some articles on how SRE engineers do things, based off some of the links that you put out there in your learnings from SREs.

And, I mean, a lot of the books I've actually read are more around people skills-- how to paint that vision, how to get people to buy into it. And those were actually far more valuable to me than the actual technology side of it, because, again, whatever vision you're trying to paint, unless you can articulate it fairly easily and simply so people can buy into it, you're not going to get anywhere.

TIM: That makes sense.

ERIK: I'm not sure if that helps or not.

TIM: Oh, that absolutely helps. The trick, I think, often for security people is how do we get them to be more persuasive with people rather than with getting code execution?

ERIK: Yeah, Simon Sinek has some good books on leadership and influencing people. Jay Shetty's "Think Like a Monk" book is pretty good-- how to kind of straighten your own mind out so you can get your thoughts out. So there are all kinds of really good people books out there.

TIM: Well, Erik, thank you so much for joining us today. This was, I think, a useful conversation for me and Anton in our long-running arguments, but I also hope a good conversation for our listeners. So thank you.

ERIK: Awesome. Thanks for having me guys.

ANTON: And now we are at time. Thank you very much for listening and, of course, for subscribing. You can find this podcast at Google Podcasts, Apple Podcasts, Spotify, or wherever else you get your podcasts. Also, you can find us at our website, cloud.withgoogle.com/cloudsecurity/podcast. Please subscribe so that you don't miss episodes.

You can follow us on Twitter at twitter.com/cloudsecpodcast. Your hosts are also on Twitter @anton_chuvakin and @_TimPeacock. Tweet at us, email us, argue with us, and if you like or hate what you hear, we can invite you to the next episode. See you on the next "Cloud Security Podcast" episode.
