Timothy: Hi there, welcome to the Cloud Security Podcast by Google. Thanks for joining us today. Your hosts here are myself, Timothy Peacock, the product manager for Threat Detection here at Google Cloud, and Anton Chuvakin, a reformed analyst and esteemed member of our Cloud Security Team here at Google. You can find and subscribe to this podcast wherever you get your podcasts, as well as at our website, cloud.withgoogle.com/cloudsecurity/podcast. If you like our content and want it delivered to you piping hot every Monday afternoon Pacific time, please do hit that subscribe button on your podcasting app of choice. Finally, you can follow the show and argue with your hosts on Twitter as well, twitter.com/CloudSecPodcast. Anton, we're back to zero trust today.
Anton: Yes, very much so, and it's a very fun topic, and these are very popular episodes. This time we're gonna mix zero trust with, drumroll, big government.
Timothy: That's not a popular topic.
Anton: Uh, we haven't touched it yet. So at this point, it's going to be about zero trust and about how the U.S. government can, wait for it--wait for it, quickly adopt zero-trust thinking and architecture for its IT environment. How exciting is that?
Timothy: Well, this is exciting for lots of reasons. A, I think it's exciting that we made it a whole year without talking about Uncle Sam. Two, I think it's really exciting because a blog post by my former colleague was all about how they're never gonna do it, and I thought that was interesting, but the conversation we have today is pretty wide-ranging. We touch on a lot of zero trust topics, and we also get into an interesting discussion around MFA, and we have a guest today who disagrees with you. Quite forcefully at one point, which is great. I love it when people disagree with you.
Anton: That's correct, and I am almost always gentlemanly about it. Am I?
Timothy: Oh, for sure.
Anton: Am I?
Timothy: A model of a modern major analyst.
Anton: Former analyst.
Timothy: Former analyst, yes. Well, with that, I'm delighted to introduce today's guest, Sharon Goldberg, CEO and co-founder of BastionZero and a professor at Boston University. So, Sharon, I wanna start off today's podcast--we're talking about zero trust, and zero trust is, like many things in security at this point, now a buzzword. First, does the term have meaning to you, and if so, what does it mean?
Sharon: Yeah, it definitely has meaning for me. It actually took me a really long time to figure out what it means. It's really interesting, actually. I was looking at my records and I realized I actually was a moderator of a panel in 2015 about zero trust, and I ended up changing the name of the panel 'cause I had no idea what zero trust means at the time. But now I understand--right? I think it has to do with authentication. I think it's a question of, like, how do you authenticate users and how do they prove that they really are who they say they are? So that's a really narrow definition. For me, it has to do with eliminating long-lived credentials, so not giving someone access to something that they can hold onto forever and potentially, like, walk away with it. And so, in other words, it's no long-lived credentials, no long-lived SSH keys, no IAM roles that they're holding for years at a time and they could potentially take with them when they leave, no VPN keys that are sitting in some kind of certificate they can walk off with when they leave the organization or move to a new computer. That's what zero trust means, eliminating those long-lived credentials. The other thing I think zero trust means is just--the idea that just because you can access one part of the system does not mean you should access the whole system or other parts of the system. So it's this idea that you're constantly reauthenticating yourself to a system. Every time you want to access a new resource, you have to prove that you are who you say you are and that you really should have access to that resource. I think the term--I mean, we can keep talking about this in this conversation, but I think the term has expanded a lot, but to me it's really around, like, authentication and making sure people are properly authenticated and authorized for accessing whatever they're supposed to be accessing or not supposed to be accessing.
Anton: So it sounds like you're anchoring zero trust to authentication primarily, right? Like multifactor and other things.
Sharon: Yes, I would anchor it that way for sure.
Anton: Yeah. So what would the definition then--like, how would you--how would you frame it as, like, "Zero trust is," and you have to answer?
Sharon: Zero trust is the absence of long-lived credentials and the requirement that users authenticate themselves each time they want to access a resource. That is how I would define it, but I also want to say that it's a really interesting word, the word zero trust, because when you hear zero trust, you think, we should trust in zero things basically, like, there's nothing that you need to trust for security to work. That's what it sounds like, and that's why, you know, in 2015, like, I didn't understand what zero trust meant and I ended up changing the name of my panel, 'cause I just didn't know how to moderate a panel on that topic. And actually, what's really interesting is, like, with what we're working on right now at BastionZero, it's the idea, like, we want to get closer to this idea of really trusting zero things, not trusting some specific thing because--for the way I define zero trust to you, you could have an SSO system with multi-factor authentication, and that would gate access to whatever you're looking to gate access to. That would be zero trust according to the definition that I just gave you. But that's not really zero trust in the sense that you're trusting in zero things, because the thing you're trusting is the SSO mechanism, right? Like, the actual SSO provider, you're trusting that provider to, like, check people's passwords properly, and if you have device context, you know, check that the device context is correct, and check that they did multi-factor authentication properly. Like, all of that is being done by the identity provider, the SSO provider. And that's not zero--like, you're not trusting in zero things in that case, you're really trusting the SSO provider.
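Sharon's definition above--no long-lived credentials, and reauthentication every time a resource is accessed--can be sketched in a few lines of code. This is a toy illustration with made-up names, not any real product's API:

```python
import time

# Toy sketch of "no long-lived credentials": every credential carries a
# short expiry, and every resource access re-verifies it. Names are
# illustrative only.

CREDENTIAL_TTL = 300  # seconds; short-lived by design

def issue_credential(user, now=None):
    """Issue a credential that is only valid for a few minutes."""
    now = time.time() if now is None else now
    return {"user": user, "expires_at": now + CREDENTIAL_TTL}

def access_resource(credential, resource, authorized, now=None):
    """Re-check the credential on *every* resource access."""
    now = time.time() if now is None else now
    if now >= credential["expires_at"]:
        # The credential cannot be held onto forever and walked off with.
        return False
    # Access to one resource never implies access to another.
    return resource in authorized.get(credential["user"], set())

authorized = {"alice": {"grafana", "prod-db"}}
cred = issue_credential("alice", now=1000.0)
print(access_resource(cred, "grafana", authorized, now=1100.0))  # True: fresh and authorized
print(access_resource(cred, "billing", authorized, now=1100.0))  # False: not authorized for this target
print(access_resource(cred, "grafana", authorized, now=2000.0))  # False: credential expired
```

The point of the sketch is that both checks happen on every access, so a stolen credential is useful only briefly and only for the targets it was scoped to.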
Anton: Well, John Kindervag, who is the--created the term, argued pretty violently that you really do want to strive for trusting in zero things. So we had almost the same argument except the opposite sides. I was trying to convince him that zero trust means trust in the identity provider, trust in the device data, and his position was like, "No, actually, you do need to aim at the situation where you verify, you confirm, you validate, but you don't blindly trust, even the identity provider." So to me this resolves--this doesn't resolve. To me, this is an interesting discussion that people are having in the industry.
Sharon: So I would disagree with you, and I would agree with him, actually, because the idea that all you have to trust is the identity provider--that's a good thing to strive for, but it's still creating, like, a single point of compromise where the identity provider has this extreme sort of power over your infrastructure. And I don't know if you guys saw this, 'cause it was part of the SolarWinds incident that was maybe less discussed, but one of the parts of the SolarWinds incident was that the adversary was able to compromise the SSO provider and issue tokens for itself to whatever resources it wanted. In a sense, the identity provider was compromised, and at that point, sort of, access to everything was possible for the adversary 'cause they owned the identity provider. So that is an example of, like, this definition of zero trust not being sufficient, because you were really actually not trusting in zero things, you were trusting the identity provider. So I guess where I'm going with this, and, like, my position on this, like, for me and for what we're building at BastionZero, is the ideal is that you don't have full trust in one thing, you have some way of recovering from a compromise of the authentication system. And that's what we have with BastionZero, we have this notion of multi-root zero trust, where instead of one root of trust that controls authentication and authorization, you have multiple, and so if one of them gets compromised, you still have the other ones there and you don't have, like, a full compromise of your system the way you had with SolarWinds. So I would be, again, disagreeing with you and agreeing with him, although I note that, like, what we're doing is not the standard in the industry, it goes one step beyond what most people are doing or trying to do at this point.
It does come close to, if you've ever seen a system where someone will put a VPN, and then they'll also have, like, maybe a multi-factor or hardware key on, like, an individual Linux machine. What they're doing there is they've actually put sort of multiple points, multiple authentication systems in place before someone can get into the Linux machine. 'Cause they've gotta first get into the VPN, and then they've gotta get into the Linux machine with the multi-factor authentication. So you don't have this risk where, like, one thing gets compromised and everything falls down. So I think that, you know, we should be striving to get to trust in zero things, but that is not what zero trust really means at this point in the industry, right? Maybe it will in a few years, and that's what we're trying to do at BastionZero, but it's not the case right now.
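The multiple-roots idea Sharon describes--a VPN plus per-machine MFA, or any set of independent authentication systems--can be sketched as an all-roots-must-approve check. This is a toy model of the concept, with illustrative root names, not BastionZero's actual protocol:

```python
# Toy sketch of multiple independent roots of trust: access requires
# approval from every root, so compromising any single one (e.g. the
# SSO provider, as in SolarWinds) is not enough on its own.

def grant_access(approvals, required_roots=("sso", "hardware_mfa")):
    """Grant access only if every independent root approved."""
    return all(approvals.get(root, False) for root in required_roots)

# Attacker forges an SSO assertion but holds no hardware key:
print(grant_access({"sso": True, "hardware_mfa": False}))  # False: one root is not enough
# Legitimate user satisfies both independent roots:
print(grant_access({"sso": True, "hardware_mfa": True}))   # True
```

The design choice mirrors the VPN-plus-host-MFA setup Sharon mentions: the roots fail independently, so one compromise does not bring the whole system down.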
Timothy: So before we get any deeper on what zero means, because I suspect that will lead to us understanding what zero listeners means, I want to switch gears a little bit and talk about this federal memo. One of the ways this conversation actually got kicked off, Sharon, was we were really impressed with the blog post you put out, like, immediately, analyzing the White House memo. What caught your eye about it, and why'd you decide to write about it?
Sharon: I was so excited when I saw it. I was, like, sitting on--
Timothy: We could tell.
Sharon: Yeah, I was so excited. You can look at my--at my Twitter, and, like, there's me tweeting, like, "Oh my god, I can't believe this," and, like, "I have to write about this," and then, like, four hours later, "I'm done writing, dropping this tomorrow." So I was so excited because the first line of the memo was, "The federal government"--I'm not quoting this correctly, but it was, like, "The federal government can no longer depend on perimeter-based defenses for security." And then it goes on to just spend the whole memo talking about how VPNs are not good enough. And I was like, "Whoa," like, that is--that was--maybe it doesn't sound surprising to anyone, like, this month, after having read that memo and everyone talking about it, but, like, in the beginning and middle of January, that was kind of shocking to me, to see someone saying, like, "No, no more VPNs," like, "VPNs are just not good enough for security and we can't rely on them." And then I kept reading it and it kept getting more and more intense. So then it starts to say things like, applications should be available on the public internet. Right? So it's, like, don't even put a VPN there, like, it's gotta be secure enough so that you can access it without a VPN. So, like, do the work to make your [inaudible] applications so secure that they're not behind the VPN. Right? And then I was like, "What?" And then I keep reading, and then it starts talking about DNS security and how DNS has to be encrypted, and then again, I started jumping out of my skin because I'm a big DNS geek, and there's been a lot of debate about whether we should encrypt the DNS or not, and this is, like, a sort of very contentious topic in the network security world, and the memo's just like, "No, encrypt your DNS." You know, AWS doesn't even support that right now, so if I wanted to encrypt my DNS traffic as a native AWS customer, I can't do it.
So it's, like, not even a possible thing for most public cloud providers, for companies that are building the public cloud, they can't even do this right now. So I was reading this memo and I was like, "Wow," like, this is so many steps ahead of what's happening. And I guess the last thing that caught my eye was it actually talks about the federal government putting bug bounties in place, which to me is really interesting because--so I teach a class at BU on information security, and I have 20, 21-year-olds doing--learning infosec and learning hacking for the first time…
Sharon: …and I have to go kind of, like, scare them and say, you know, there's the CFAA, if you do something wrong, you can get prosecuted. And then I tell them the story about the MIT students who presented at Black Hat, when they hacked the subway in Boston, and then, like, the FBI, like, came to their talk and, like, pulled them off the stage. And so I tell them that story so that they don't go do something stupid, and part of the reason that this is still something I have to do, and I've been doing this for 12 years, is because we often see hackers prosecuted for just doing security research and, like, here's the--
Timothy: Like in Missouri recently.
Sharon: Yeah. And it happened, like, a few weeks before the memo even came out, by the government in Missouri. So I was reading this and I was like, "Finally, there's something that we can point to where the government said that bug bounties are a good thing and, like, you know, security researchers examining public infrastructure and providing feedback is a good thing," and, like, we've been saying that in this community for 20 years, but to, like, actually see it come from the government was like, "Yes!" You know, like, finally hopefully, like, next time there's a lawsuit or something, people can point to that and say, "Look, like, the federal government itself is saying that this is something that researchers should be doing." So I thought that there were, like, a lot of things in there that were, like, pretty significant across the board in different areas. By the way, like, those latter two things, bug bounties and DNS security, I don't really think those are zero trust things, but they're really cool [inaudible] security things.
Timothy: So I'm surprised to hear bug bounties are a hot new thing for the federal government. I remember in college, my buddy would get bug bounties all the time and then buy rounds of beer with the bug bounties for us at the bar. That was a long time ago. Is the federal government just talking about that now?
Sharon: So I'm not fully up on what's going on. I have friends that are lawyers that are, like, cyber lawyers, and there are still cases that go to court about, you know, someone doing security research and violating the CFAA, or violating the DMCA, the Digital Millennium Copyright Act, which is basically copyright on digital goods. So that still happens, and, you know, there's been a lot of effort from different groups to sort of normalize the idea of bug bounties, but I hadn't really seen the government talking about it before. And there's still this kind of--I think there's still this fear that, like, you know, like, the Missouri--I think in Missouri it was the government that actually went after the reporter who did an HTML view source and found something. So to see that coming from the government I think is great. So we're not necessarily gonna see, you know, a change to the CFAA, but this is, like, an agency actually saying something, so I believe that should have some weight. So it's nice to see, you know, after all these years.
Anton: One other thing that's kind of catching my attention is that they are of course talking about deprecating VPNs. And when I had my first encounters with the zero trust concept many years ago, I naively assumed back then that zero trust is basically replacing VPN with, well, not having a VPN. So it sounds like the government discovered the fact that what they call zero trust network access, what Gartner calls ZTNA, is essentially VPN replacement. So what's your take on that? Like, are they saying you should turn off VPNs now, or are they saying be on the path to it? Because, you know, it's a long journey. It was a long journey even for Google, so just curious what's your take on this whole deprecate VPN, authenticate to apps?
Sharon: I mean I can tell you what they are saying. There is a requirement in that memo that says that each agency needs to identify one internal application behind the VPN and take it out of the VPN.
Anton: Oh, one. Just one for now.
Sharon: Yeah, just one. But you know what they're doing there is they're trying to insist that people actually build the application securely enough to bring it out from behind the VPN, and then that way they will have had the experience of putting something, you know, on the internet and not on a VPN, so they've got that architecture in place and then they can start migrating things off of that. So I do think that what's being advocated for in this memo is this BeyondCorp approach, or this idea of, like, you need to authenticate into the specific applications or into specific targets, into specific roles, and not just into a network or a private network, and then once you're in that private network you can go anywhere you want. I think that they're saying just stop doing that, and it sort of makes sense, because if you read the broader news around this announcement, it talks about this being a reaction to the Colonial Pipeline ransomware, which I believe was actually the result of an old VPN password. So the way that Colonial Pipeline was infiltrated was there was some long-lived VPN password, and the attacker got that, and it was some password that nobody knew was around or had forgotten about, and they got in that way. And so if you can get access to a VPN and from there, kind of travel through the infrastructure and get into all different kinds of places where you shouldn't be, that's sort of the opposite of this approach where you have authentication into individual applications, right? So imagine if there was an individual application for which the password was compromised, it should not be the case that if you stole that password, you can kind of own the whole infrastructure, right? So I do think that they're saying that this notion of, you know, network-based access control is no longer viable, and it has to be application-based access control or target-based access control. So are you logging into a Linux server? You know, what account are you using?
Are you logging into a Kubernetes cluster? What role are you using? Are you logging into some sort of website? What username and password did you use? Right? Not that it's, like, are you in the right IP space anymore. That's--I think they're very strongly coming out against that.
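The contrast Sharon draws--network-level access gated by a single long-lived VPN password versus per-target credentials--can be made concrete with a toy model. All the names here are illustrative:

```python
# Toy comparison of the two access-control models the memo contrasts.
# Old model: one VPN password gates an entire private network.
# Memo's model: each credential is scoped to a single target.

VPN_NETWORK = {"payroll", "scada", "billing", "email"}

def vpn_reachable(has_vpn_password):
    """Network-based model: a single leaked password exposes everything inside."""
    return set(VPN_NETWORK) if has_vpn_password else set()

def target_reachable(leaked_credentials, grants):
    """Target-based model: a leak exposes only the targets those credentials map to."""
    return {grants[c] for c in leaked_credentials if c in grants}

grants = {"tok-payroll": "payroll", "tok-email": "email"}
# A single forgotten VPN password (the Colonial Pipeline scenario) opens the whole network:
print(sorted(vpn_reachable(True)))
# A leaked target-scoped credential opens exactly one application:
print(sorted(target_reachable({"tok-email"}, grants)))
```

The blast radius is the difference: in the first model a leak is a full compromise, in the second it is bounded to whatever the stolen credential was authorized for.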
Timothy: Yeah, that's fascinating. It's, like, so funny to me to appreciate this as such a sea change, when in my mind, that's just how applications work. So it's fascinating. As we look forward to this, what kind of challenges do you think the government is gonna run into in actually doing this? Like, what's gonna be hard for them in particular?
Sharon: Oh, it's gonna be very hard. I mean--
Anton: I think you win the understatement of the year prize for this. You just said it's very hard, and you're easily getting the understatement of the year prize for that.
Sharon: I don't really work with the government, so I don't know too much about this. I will say, you know, it's an interesting situation because, like, when you wanna work with the government, you have to have something like a FedRAMP certification, and all these sort of certifications and this navigation and understanding of, like, how to work with the government. It's a hard thing to do for a lot of companies and a lot of products, and they can't necessarily buy things off the shelf because they may not have the right certifications. So if you think about it, it's not only that they're a very large organization with a lot of bureaucracy and a lot of people and a lot of legacy systems, but even the set of products they can use is limited, and the set of places that they can deploy and do things is limited. So I think it's gonna be really challenging. I think one of the things that, if you look at the memo, you see there are these, like, very small, little milestones that agencies are asked to achieve. Like, for instance, taking one application off of the VPN. So the memo is sort of recognizing how hard this will be, but I also think that the memo is not necessarily only for the government, I think it is actually meant to drive change in the whole industry, and let me give you a very concrete example. I mentioned this before, you want to encrypt your DNS. Do you want to--can you go to AWS now and encrypt your DNS? No, you can't, right? So if the government wants to go use these cloud providers and they're going to insist on encrypted DNS traffic, then the cloud providers therefore have to provide this feature, and so it's going to require them to build that and then make it available, and then the rest of us can sort of enjoy the existence of this technology, right? So a little bit, like, something that I studied when I was a professor was the deployment of security technologies, and how do you kick off, you know, advanced security technology deployment. 
A lot of times, that has been done by the government mandating the deployment of certain security technologies and therefore forcing vendors to actually implement it so that the government can buy it, and then the rest of us can sort of use it. So I think that's a sort of an additional function of the memo, it's not just about what the government can do, but what vendors will need to provide and what the industry will start doing.
Timothy: That's really interesting. Could you say more about that, like, government making people be more secure that then drives more security? What does that evolutionary journey look like?
Anton: You know, if I may offer a quick comment, and this was actually amazing because I see the logic behind it. If a certain standard is truly adopted and it becomes--it sort of spreads, like, I'm thinking some of the early MITRE stuff that started in the government and it became, like, a community standard. That mechanism I definitely see, but maybe Sharon has more possible channels for everybody becoming more secure.
Sharon: Well, so there have been a couple of technologies that have had this type of journey. I'm not 100% sure about this, I should have prepared more, but I believe you can use the example of IPv6. I think you can use [inaudible] as an additional example. Mandating, you know--like, requiring the government to operate with these technologies, or support these technologies, forces them to exist in certain environments and gets vendors to support them. And I'm struggling to remember examples of this, but I remember when I was doing my PhD, we talked about deployment of security technologies and how the U.S. government has actually driven the deployment of a lot of security technologies over time by mandating things like this.
Timothy: Mm-hmm. Yeah.
Sharon: I think this is another case of that, at least with email. Encrypted email, to me, that's the most obvious one because it's actually not that easy to find. Encrypted email and encrypted DNS right now are very hard to find. They talk about encrypted email in there, by the way, too, which is really interesting 'cause nobody says what kind of encrypted email, so I don't know what they're [inaudible] or, like, what protocol this is exactly. But those things are not easy to buy and implement for the average organization and so this will become easier for them. But I also think it's going to change the way that security architects think about architecture, right? When you're sort of…
Sharon: … you're given the answer in front of you, and the debate is kind of over. But, like, what's the right way to do these things?
Timothy: The other good example that comes to my mind of the government pushing a security thing that then trickles to the community is the push on Software Bill of Materials, or SBOMs, now. That's the other thing that was big in the recent timeframe, and I think that's an awesome example of where transparency and the simple act of documenting what you use drives people to better outcomes.
Anton: That makes sense. And of course, this also happens to be one area where I feel like without the government, it won't work. It's not just--it's not just that government does something well and everybody copies, which happens infrequently, but it does happen. In the case of SBOM, I feel like if they don't do it, nobody does it, sort of like--fine. So this is a separate fun topic which we will explore in a future podcast, I'm sure. But let me try another fun question to ask Sharon. So my impression is that the lessons that were learned when the doc was created, when the memo was created, are also useful to very much non-government, normal companies who are implementing a zero-trust strategy. Can you give us maybe three or four lessons that others can derive from the doc? Like, what are useful things you can implement if you're pursuing a zero-trust strategy?
Sharon: Yeah, okay, so let's start with the first. So the very first lesson in there is log in to applications, not into networks. So that means that, you know, if you're building some sort of internal--maybe you have a Grafana server or something, you know, you need to have people authenticate into the Grafana server and not just stick it behind a VPN and have it otherwise open and let people come in however they want. Similar example, right, you have a Linux server, you have an SSH key to your Linux server, you do not want anyone to be able to get into that Linux server. You wanna control access to that and not just trust that as long as it's behind the VPN, everything's gonna be okay. So for instance, don't have a server holding the SSH key to another server just sitting in a network behind a VPN. That is not acceptable according to this philosophy of zero trust, right? So log into applications or targets, not into networks. That's the first thing. The second thing is the thing that everyone knows, which is multi-factor authentication. Not very controversial. What is [inaudible] interesting and controversial in there is that the memo is just super down on the TOTP protocol, which is, like, the Google Authenticator. You know, the six digits you get from your phone app? It really hates that as a form of authentication, and it's pushing for hardware keys.
Anton: Wait, wait, wait. So that's, like, kind of like a, what I call a 1%-er/rich people's problem, right? Well, 90% of the world has no two-factor, and a big part is SMS…
Anton: …and they're gonna step on the people using apps?
Anton: [inaudible] hardware? I'm sorry, this is stupid!
Sharon: No, I mean that was--that was aggressive. I, like, for instance--okay, so SMS-based two-factor authentication must die, I think we can all agree. And it's not dying, it's still there. I mean banks still use it for--like, it drives me crazy that you MFA into your bank with your phone. That drives me crazy that it's just still that way. I think because they don't think regular people can install an app on their phone or something? I don't know. So that must die, but they're also negative on the TOTP. I don't think that's gonna have much of an impact--
Timothy: That's bizarre.
Sharon: Well, it makes sense. I mean there's so many attacks on that that have come out. They're a little bit more exotic, but they exist.
Sharon: In any case, we can debate MFA. I don't think we should debate whether we should have MFA, and I think we all agree that SMS-based MFA should die. So, you know, use something other than that. So that would be my second lesson. My third lesson is--this is a very interesting requirement, and I would say again, like you said, Anton, like a rich person's problem. The memo talks about device context, so validating that users are actually using the right device. That is a really, I think a really hard thing for organizations to do, because they now have to manage the devices and make sure that they know, like, they have a full inventory of their users' devices, they can make sure that, you know, Sharon is on this specific laptop that was issued to her. And, like, what happened to me yesterday, my laptop died and I switched to my previous laptop, and, like, maybe I wouldn't be able to log in anymore with this type of system. So there is--there was some thinking about device context. I don't know to what extent that's going to affect people. And then I guess the last thing is just basic network security hygiene. So the memo talks about TLS into your applications. Great, like everyone does that. It also talks about TLS between systems that are on the same internal networks, which is interesting. So that's something to think about. And then, of course, the last point is around, like, DNS. You should be doing encrypted DNS in your network. So that's another interesting thing to think about. I don't think that's an immediate action for a lot of organizations right now, but at least the, sort of, the MFA and VPN piece is something that you can start to think about right away.
Timothy: Well, this is fascinating. The memo clearly has a lot of good meat in it. Listeners, we're going to include the link to the blog post in the show notes, and in the Twitter release, and in the LinkedIn post we do about this. Sharon, we're about at time, and I want to ask our traditional closing questions. First, do you have one tip for people to improve their security posture, and two, do you have recommended reading? Which, I think as a professor, maybe you're best suited to answer that question of all the guests we've had so far.
Sharon: Okay, I'm gonna go with my one tip being MFA. I love MFA, so yeah, if you can put it in, put it in. If you haven't turned it on on whatever it is, turn it on. That would be my one tip. The thing to read, for this podcast, you know, I wrote this blog post and 30,000 people read it within, like, 24 hours, which was crazy, so you can go read that.
Timothy: Amazing. Sharon, thank you so much for joining us today, it was a real pleasure.
Sharon: Thank you.
Anton: And now we are at time. Thank you very much for listening, and of course for subscribing. You can find this podcast at Google Podcasts, Apple Podcasts, Spotify, or wherever else you get your podcasts. Also, you can find us at our website, cloud.withgoogle.com/cloudsecurity/podcast. Please subscribe so that you don't miss episodes. You can follow us on Twitter, twitter.com/CloudSecPodcast. Your hosts are also on Twitter at anton_chuvakin and _timpeacock. Tweet at us, email us, argue with us, and if you like or hate what you hear, we can invite you to the next episode. See you on the next Cloud Security Podcast episode.