CyberWire Daily - What is data centric security and why should anyone care? [CyberWire-X]

Episode Date: May 17, 2023

In today’s world, conventional cyber thinking remains largely focused on perimeter-centric security controls designed to govern how identities and endpoints utilize networks to access applications and data that organizations possess internally. Against this backdrop, a group of innovators and security thought leaders are exploring a new frontier and asking the question: shouldn’t there be a standard way to protect sensitive data regardless of where it resides or who it’s been shared with? It’s called “data-centric” security and it’s fundamentally different from “perimeter-centric” security models. Practicing it at scale requires a standard way to extend the value of “upstream” data governance (discovery, classification, tagging) into “downstream” collaborative workflows like email, file sharing, and SaaS apps. In this episode of CyberWire-X, the CyberWire’s Rick Howard and Dave Bittner explore modern approaches for applying and enforcing policy and access controls to sensitive data which inevitably leaves your possession but still deserves just as much security as the data that you possess internally. Rick and Dave are joined by guests Bill Newhouse, Cybersecurity Engineer at National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE), and Dana Morris, Senior Vice President for Product and Engineering of our episode sponsor Virtru. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the Cyber Wire Network, powered by N2K. Hey, everyone. Welcome to Cyber Wire X, a series of specials where we highlight important security topics affecting security professionals worldwide. I'm Rick Howard, N2K's Chief Security Officer and the Cyber Wire's Chief Analyst and Senior Fellow. And today, Dave Bittner, the Senior Producer and Host of many of the Cyber Wire's podcasts, will be joining me at the CyberWire's hash table to discuss data-centric security. After the break, you'll first hear my conversation with Bill Newhouse, an engineer at the National Cybersecurity Center of Excellence,
Starting point is 00:00:53 and then Dave will talk with Dana Morris, Senior Vice President of Product and Engineering at Virtru. Come right back. If you're like most CISOs, you've invested huge amounts of time and money implementing security controls to defend against external threats and protect sensitive data stored inside your organization. But how do you protect sensitive data that is constantly being shared with others outside your organization? The answer is data-centric security from Virtru.
Starting point is 00:01:26 It installs in hours and protects information shared externally via email, files, and SaaS workflows, which means in a matter of hours, your business can confidently share sensitive data without sacrificing security, compliance, or privacy. For a quick and easy way to foster compliance with CMMC, ITAR, CJIS, HIPAA, and other data security and privacy regulations, get Virtru. It's affordable, easy to implement, and trusted by more than 8,000 customers worldwide. Visit virtru.com slash cyberwire to try it for free. That's V-I-R-T-R-U dot com slash cyberwire.
Starting point is 00:02:16 The idea of zero trust has been around since the early 2000s. John Kindervag formalized the idea when he published the original Forrester white paper in 2010 called No More Chewy Centers: Introducing the Zero Trust Model of Information Security. Since then, security professionals and security vendors have been trying to get their hands around the idea. But as the InfoSec community has evolved the philosophy, the actual practical how-to tactics have been a moving target. The original idea was to limit access to resources on a need-to-know basis. In the early days, we concentrated on limiting access to people based on their role in the organization. Then we realized that we needed to think about devices too, like phones, tablets, servers, and by extension, cloud workloads. Then we realized that we needed to limit access to our software applications
Starting point is 00:03:08 that we buy and install commercially and the apps that we build ourselves, not to mention the APIs that come with all of that. All of those potential zero-trust controls are tactics that we might deploy to our internal digital infrastructure, those data islands where we store our essential information and workloads. But the next component that has emerged in recent years, though, is how do you apply the same zero-trust philosophy to data that exists outside of your digital infrastructure,
Starting point is 00:03:36 like email you send outside to partners and contractors, or files that you store and share in public repositories like Dropbox or Amazon S3 buckets. The U.S. National Cybersecurity Center of Excellence, NCCoE, has started calling this data-centric defense, and they have a new research project to figure out what that means. So, I reached out to the NCCoE to help us understand this new idea. Yeah, I'm Bill Newhouse. My title is cybersecurity engineer, and I work at the National Institute of Standards and Technology.
Starting point is 00:04:11 In particular, I work at our Applied Cybersecurity Center, which has the fancy name National Cybersecurity Center of Excellence. I started out by asking Bill to describe what data-centric security is. Anything centric means you're focused on it as an important thing to worry about. And when I was invited to help co-lead this project, it made sense to me to talk about data. DHS, when they describe zero trust, and I haven't totally read the DOD paper I carried around a bit. Data, if you think you have a chewy center and you're moving data around for your business processes, you're relying on data, you are a data company, data supports everything you do, it starts to sound like the thing to really
Starting point is 00:04:57 worry about and protect. So this project, data classification that I've walked into, references that zero trust has a data element. It's either called a pillar or you have tenets of good things you should do in knowing about your data because those decisions about who to authenticate to have access to it and where they go and how the data moves around within a realizable zero trust architecture, you do all this because you have data you need to process. And that's kind of Captain Obvious stuff. And I think if we circle around it, I'll probably say it in better ways.
Starting point is 00:05:33 But it is the thing. And zero trust, I frame as being everything we've always wanted to do, trying to be sold to you in one happy package and recognizing that that's kind of really tricky because we don't get to throw away all the old stuff we're using and it doesn't necessarily immediately walk in and play nice with zero trust. So if you're lucky enough to be a new startup and you're creating stuff, you can probably achieve zero trust faster. In either model, knowing where your data is and what data is important to you and how you need
Starting point is 00:06:05 to protect it and there's been different pushes on why you need to protect data offered to us in the last decade those are all very important so data centric to me is really just trying to get get your hands on on the what's important for your business and figuring out you know what to protect and then classification is an early early step in that process. Well, the way I look at it is we first started thinking about Zero Trust over a decade ago. The first thing we thought about was identifying and authorizing individuals, people, about what they have access to. And then as it became more and more acceptable for users to use their own personal devices
Starting point is 00:06:43 to do work, like their iPhones and laptops and things. And then also we knew we had a collection of servers all around the place before we went to the cloud. So we had people and devices. And as we moved into 2020s, now we're looking at software and be able to say, what can the software actually touch that we're running? That's stuff we write ourselves and stuff that we buy.
Starting point is 00:07:07 And the obvious example of that is the SolarWinds attack. Those kinds of things become more and more important. And now we're going to throw APIs onto the pile because we're all moving to APIs to control everything that's going on in our networks. And then that all sounds hard enough, but then there's this last use case that I think this data-centric model really addresses is that when we want to share data,
Starting point is 00:07:31 just like a file or a set of files or a bunch of data records outside the organization that's not protected by all those other zero-trust rules. Let's say it's sitting in Dropbox somewhere. We want to be able to put some sort of zero trust rule set on this data glob and still feel like we've got a robust zero trust deployment. And you're describing the use case that our data classification project aims to hit. Islands of zero trust are wonderful. And if your own organization develops zero trust
Starting point is 00:08:09 and you've started to realize the benefits in all these different ways that we've touched on and some that we haven't, great, your own house is nice. And so if people come to visit, your closets are organized. They know where to find the silverware. They know what's in the fridge.
Starting point is 00:08:26 And everything's really nicely organized. And it just looks like, oh, Martha Stewart lives here. And that's good. Sorry, I'm doing that metaphor. Now what do you think? Let's throw one more metaphor on. But real business has frenemies and people you need to work with and transactions that all need to occur. And so you could try to work out a system by which, yes, my data, as I hand it to you, is absorbed and I've labeled it and done all the mechanisms to do something that allows you to promise me at some level that you will care for this baby the way I care.
Starting point is 00:09:06 Now another one. You will care for this data the way I cared for it. It matters to me that I prove that I'm protecting the privacy of my customers and you should too. That's still going to be difficult, but we aim to show at least, I say at least because we're going to do this with real stuff. We're going to run some stuff and put data past it to show that my markings and my schema and what I did to do data classification, as I give it to you, offers you the advantage
Starting point is 00:09:32 to take it and absorb it and use it quickly and put it into the same protections if your regulators or your personal preferences or whatever your values, you need to meet them. We have to negotiate that, but this is also an opportunity to put some technology on this so that that is potentially more realizable. I'm trying to find some notes from a conversation I had, and I'm not going to find it. I'm close in my book, but my notes are never as good as I want. But it's sort of
Starting point is 00:10:01 information release versus information safeguard. If you guys are successful, I can extend my zero trust policy outside my organization to partners and collaborators and people that just need to see the data. I still have to manage the profiles of who gets access and who can do what with it. Before that wasn't even possible unless they were inside the network. Now I can extend it out. That's what we aim to show. And some of it is for people who've never even thought, I need to organize this stuff to get ready for that. And then once you're closer to being ready for that, then you dive a little farther and you can start to – I'm spitballing here because I think I told you I've been with the government ever since I was 19 years old as a co-op student. So solving these kind of problems matter to parts of government, but for industry, for contracts and other negotiations and entitlements and things, magic words that
Starting point is 00:10:55 I'm learning as we invite folks to join us and the technology providers we invite, they have customers and clients. So they won't tell us individually about those relationships, but they bring that experience to the build. And we try to come up with use cases that would let us illuminate all the things you and I have just talked about in what we hope to be, well, functional. So if somebody says, well, they did it and they documented how they did it, they being us at the NCCOE, we've accomplished it and we told you what we wanted to do and we proved to you that we measured that it happened. We don't necessarily try to pen test the systems that we build, the functional, I call them reference designs. I don't want to use the words reference architecture
Starting point is 00:11:36 because that's a loaded term of this is the only way. But reference designs to say we rationalize with our collaborators that this is something we could create. And look, it does the good things we want it to do. And we leave a little bit of scoping to say we rationalize with our collaborators that this is something we could create. And look, it does the good things we want it to do. And we leave a little bit of scoping to say we're not going to solve all those other problems and Zero Trust leaves a lot of room for more work. We do have colleagues here at the center
Starting point is 00:11:55 focused on Zero Trust. And as you described, they are focusing on authentication. They are focusing on network segmentation early on in that project. Eventually, we're going to talk about the data and how the relationships between what they've already accomplished and what we want to accomplish with data. The policies that will be involved in data handling and storage will grow into the conversation with the Zero Trust team. So a lot of possibilities.
Starting point is 00:12:22 And if we can do it and we plan to, we believe that it'll help with adoption. And our goal is to see people adopt better and useful and hopefully even somewhat measurable. That's trickier, but proof that cybersecurity can be advanced through some of these new things we're talking about that have standards. NIST has a special pub on Zero Trust. DHS has its architecture structure and strategy, and so does DoD.
Starting point is 00:12:49 Those are all models of moving towards better practices that will hopefully keep us from having the bigger impacts that a SolarWinds has on an organization, or any ransomware attack. You start to have a better understanding of your data. You can prepare yourself in good ways for things that are happening now. And hopefully it's things that people will aim to do in the future against us. So this year you guys published a draft executive summary of what you're trying to do. The paper is called Implementing Data Classification Practices. What's the goal there? What are you trying to do with that paper?
Starting point is 00:13:25 Yeah. Each one of the practice guides that come out of the center, they're NIST Special Pub 1800 series. So this one's 1800-39A, which implies that we have 38 others of these in our library. There's an executive summary, volume A. Eventually, we will publish a volume B, which does a little bit more; by way of analogy, it's the recipe and the ingredients and what we want to accomplish, described for the values of what security and privacy risk you can reduce. It often has mappings to things like the cybersecurity framework or security controls. So it gives lots of different angles for, I care about this recipe they're about to cook. And then we do a volume C,
Starting point is 00:14:05 which would be all the ones and zeros and yeses and nos in the setup of the hardwares and softwares and any APIs that we're using and any glue code we need to write to make those work. That's the details. So, so far, what you talked about is a volume A. It's preliminary draft, and it's our newish tactic of getting in front of people
Starting point is 00:14:23 with one more: hey, look what we're doing, please pay attention to us, stay tuned for more, look who we're working with. The good and the bad of that is more people say, I want to be a collaborator. And our... I was going to say, you're not looking for volunteers, right? This is just an announcement that you're doing the work. We asked for the volunteers starting about 18 months ago. And that process, I never want to say it's complete because we can find ways
Starting point is 00:14:51 to add collaborators if we feel that we need more technology or some type of expertise that we don't already have. And we can be convinced of that. So people are coming to us with, I want to play. I want to be a collaborator.
Starting point is 00:15:03 And that process we described 18 months ago with the Federal Register Notice stuff on our own website. If you go to nccoe.nist.gov, there's a button you can click to see which projects are in which phase. And that bringing collaborators into the project is an earlier phase that we already accomplished. So yeah. And the purpose of the document is really just to ask you to pay attention. Check to see if you
Starting point is 00:15:32 like the language selections we've made. One thing we do believe in this data classification space is that there's a lot of language that people use and we're trying to figure out which are the ones that stick as being good, solid. I know what that means. And I'm already guilty of it during this podcast, labeling, tagging. I haven't said the word metadata. But what are the – and there's a lot of other words that people will throw into this space. Well, zero trust is just one. All right.
Starting point is 00:15:57 So we'll just throw that in there. You did a great job of describing it. And I offered my belief that it's everything we always wanted to do. And eventually we'll get to it if the world continues in a positive way; it's realizable, but the proof is in the pudding. And so here we're asking people, you know, look what we're doing. We don't explicitly say, hey, you know, do you like it? Except you can offer comments to say you don't. We do tell you that you can join us if you find this pub and you're not already part of our community of interest, which is a bigger layer of people who already know that we're doing this project and want to keep track of what we're doing. So they get notices if you're in the community of interest that, you know, we're having a meeting with the public and we haven't really established one lately.
Starting point is 00:16:40 But we've done our own internal education webinar on this. We've had conversations to encourage people who wanted to be collaborators to ask us questions back in that earlier phase. So the guide offers places for people to join the community of interest and just stay tuned, as I said a couple of times. It's put a little tag on us to know that we're doing this and let us know that you're interested. And we already have heard a lot. We published this quietly during the week of RSA and then made an announcement to our community of interest just recently. And this podcast will certainly go out to a large community. You asked earlier about the center and its foundation.
Starting point is 00:17:17 I think our GovDelivery email listserv has over 40,000 people in it now, which is astounding to me. But that's good. It means, you know, you and I chatted a little bit about workforce, and there's often statistics that say there aren't enough people doing cybersecurity. Well, we've at least found 40,000 hopefully real people who care somewhat about some aspect of what we're doing, and that feels good. That's at least one measurement that, you know, comes out as a pretty solid one. So, yeah. So, Bill, we're at the end of this. What's the headline here that you'd like to tell everybody about this project? We need everybody from the CyberWire audience to know.
Starting point is 00:17:55 You told me to think about that question. It is that one should organize one's data so that you can have it work for you. You can protect it. You can share it as you wish to share it and aim to have control of that process so that you're able to meet whatever regulations your industry requires and that you can make promises to your customers or yourself that the stewardship of data is important to you, and data classification is a vital step. Next up is Dave Bittner's conversation with Dana Morris, Senior Vice President of Product and Engineering at Virtru, our show's sponsor.
Starting point is 00:18:44 If you think about just the history of Virtru, it sort of started in 2000, like 2008 probably, where our co-founder Will Ackerley invented something called the Trusted Data Format, which was an attempt to standardize on a specification for how to classify and tag data and use those tags to enforce assertions and obligations against data wherever it goes. That is a specification that is maintained by the Office of the Director of National Intelligence, and it's the format upon which Virtru is built. And what we've seen recently, especially with the growth of Zero Trust, is this idea of really focusing on what is that core asset you really want to trust when you think about or protect when you think about security, and that's the data. It's not the perimeter, it's not the network. All of the solutions you're employing in the context of security are really about protecting data.
Starting point is 00:19:51 And so the concept here is about how to start doing more to protect the data object itself in addition to all the other ways that you're protecting data already. And I think that's been an interesting trend in the last couple of years. And we've seen industry and government momentum towards starting to think about that problem space and starting to standardize and agree on a way to approach the problem. You know, before we dig into some of the specifics, is it fair to say that certainly as we've been through COVID, that the notion of protecting the perimeter, you know, having that kind of virtual moat around your castle,
Starting point is 00:20:22 that's fallen out of favor by necessity. Yeah, I think if you think about cloud, I mean, what is the perimeter, right? There really isn't a perimeter like there used to be 10, 15 years ago. Everybody had firewalls and VPNs and they were basically, you were connecting into a data center. I used to work at IBM. We had centralized servers and we'd be connecting into those remotely. And so there was a pretty well-defined perimeter
Starting point is 00:20:48 that could be used to control what people could and couldn't access. But as we move to the cloud, and data has been increasingly moved up into SaaS applications and across different cloud solution providers, the perimeter has definitely changed dramatically. I think one thing I would say is,
Starting point is 00:21:07 I don't know that there's a move away from the perimeter as much as saying we need to do more than just think about a perimeter. Because that perimeter's changed, it's not that you would throw out any concept of trying to enforce things at the app or the network boundary, but it's about adding on to those locks
Starting point is 00:21:24 and essentially figuring out ways that you can put additional locks on the actual data itself. Well, can we dig into some of the specifics here about data-centric security? I mean, from a user point of view, how does it work? Yeah, so it really comes down to
Starting point is 00:21:40 starting with data classification. And in some ways, it's actually a really nice user experience because we're not really asking the user to think about what obligations to put on data. What does different classification actually mean in terms of protections? We're just asking them to make a decision about what is the classification of data. In some cases, we can even automate that with machine learning. So those classifications become tags or attributes. And then we can use those attributes to enforce sort of organization-wide policies based on that classification. So for example, if you have PHI, PHI has certain obligations associated with it, or PCI, or PII.
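To make that idea concrete, here is a minimal, hypothetical sketch of how classification tags might drive an organization-wide policy decision downstream. The tag names, entitlements, and policy table are invented for illustration; this is not Virtru's implementation or a standard schema, just one way the pattern could look in code.

```python
# Minimal sketch of classification-driven ("data-centric") access control.
# Everything here is illustrative: the tag names, the policy table, and the
# entitlement strings are assumptions, not Virtru's API or a real schema.

from dataclasses import dataclass, field

# Organization-wide policy: the entitlement a requester must hold before
# touching data that carries a given classification tag.
POLICY = {
    "PHI": "phi-reader",
    "PCI": "pci-reader",
    "PII": "pii-reader",
}

@dataclass
class DataObject:
    name: str
    tags: set = field(default_factory=set)        # classification attributes, e.g. {"PHI"}

@dataclass
class Requester:
    user_id: str
    entitlements: set = field(default_factory=set)

def is_access_allowed(obj: DataObject, who: Requester) -> bool:
    """Allow access only if the requester holds the entitlement required
    by every classification tag attached to the object (fail closed)."""
    for tag in obj.tags:
        required = POLICY.get(tag)
        if required is None or required not in who.entitlements:
            return False
    return True

# The tag could come from a person picking a label or from an automated
# classifier; downstream systems only need to read it to make the decision.
record = DataObject("patient_visit.json", tags={"PHI"})
print(is_access_allowed(record, Requester("dana", {"phi-reader"})))   # True
print(is_access_allowed(record, Requester("dave", {"pci-reader"})))   # False
```

The point, as the conversation continues below, is that once the tag rides with the object, any system that understands the tag can make the same decision and log it, which is also where the visibility comes from.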
Starting point is 00:22:20 These are pretty well-established classifications of data. And if you have systems or people or both that are classifying that data and adding these tags to the data objects themselves, we can use those downstream to then enforce those policies and have visibility into how that data is being used. Is the tagging, is this like a metadata situation where we're tagging the data and then is the metadata available? Yeah, think of it as an envelope. So if you think about an envelope, the letter inside is the data itself. And the envelope is really providing metadata, deciding where it gets routed and how to handle it. And in this case, it's very similar. So like Trusted Data Format, for example, is a specification for how to put a structured wrapper
Starting point is 00:23:11 around the data object and optionally encrypt that data object and encrypt the policy if you wanted to and sort of attach all of that as a wrapper around the data object so that when that data object is transmitted, when it crosses boundaries, when it's sent to different people,
Starting point is 00:23:28 that wrapper still holds. And then we can use the attributes in that wrapper to decide who can access it, what they can actually do with that data, who they can share it with. We can have a lot of additional control and visibility into how they're working with that data as long as it's been wrapped with that tag.
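As a rough illustration of the envelope idea, here is a toy wrapper in the spirit of what Dana describes: a payload plus a cleartext manifest whose attributes (classification, releasability, expiration) travel with the object and are re-checked wherever it lands. The field names and policy model are assumptions made for this sketch; it is not the ODNI Trusted Data Format schema, and base64 is only a stand-in for real encryption and key management.

```python
# Toy sketch of a TDF-style wrapper: payload plus a manifest that travels
# with the data. Field names are invented for illustration; this is NOT the
# Trusted Data Format specification, and base64 below merely stands in for
# real encryption brokered by a policy/key-access service.

import base64
from datetime import datetime, timezone

def wrap(payload: bytes, classification: str, releasable_to: list,
         expires: str) -> dict:
    """Bundle the data with a cleartext manifest describing its policy."""
    return {
        "manifest": {
            "classification": classification,   # e.g. "CONFIDENTIAL", "PHI"
            "releasable_to": releasable_to,      # organizations allowed to open it
            "expires": expires,                  # ISO 8601 timestamp
        },
        "payload": base64.b64encode(payload).decode("ascii"),
    }

def unwrap(envelope: dict, requester_org: str) -> bytes:
    """Release the payload only if the manifest's assertions still hold."""
    manifest = envelope["manifest"]
    if requester_org not in manifest["releasable_to"]:
        raise PermissionError("requester is not on the releasability list")
    if datetime.now(timezone.utc) >= datetime.fromisoformat(manifest["expires"]):
        raise PermissionError("this envelope has expired")
    return base64.b64decode(envelope["payload"])

# Wherever the envelope travels, the same checks can be re-run before opening it.
env = wrap(b"quarterly figures", "CONFIDENTIAL",
           releasable_to=["partner-bank"], expires="2030-01-01T00:00:00+00:00")
print(unwrap(env, "partner-bank"))      # b'quarterly figures'
# unwrap(env, "someone-else") would raise PermissionError
```

Because the manifest stays attached as the object crosses boundaries, whoever receives the file can re-run the same checks, which is what lets the policy hold outside the sender's perimeter.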
Starting point is 00:23:46 And how do we ensure that this sort of thing doesn't introduce undue friction for our users? That is definitely the key, and I think it's something we've spent a lot of time and energy on at Virtru is really the UX. I think one thing is that sometimes the decisions you want to make around data don't require actual encryption. So you really just want to use those tags to decide maybe whether
Starting point is 00:24:10 something could or could not be sent. And then as you're making that decision, you might want to additionally encrypt. On the other end, you have to be thinking about the applications that are consuming that format. We're spending a lot of effort in working with the community and with application providers in how do you understand that format and then make sure that it can be seamlessly accessed even though it's wrapping that data. Can the tags have attributes like expiration dates,
Starting point is 00:24:38 things like that? They can. In fact, I would think of those more as sort of additional obligations. So if you wanted to go with more of a, let's take a simple government scenario, like a classification of secret or confidential or top secret, there might be a releasability, as in which countries or which organizations is this data releasable to.
Starting point is 00:24:59 Those would be sort of your attributes. Releasability is an attribute name, and its values are, you know, the countries or the organizations it's releasable to. And then additionally, you could have obligations that you place on the data. Maybe you want it watermarked to the user or the organization. Maybe you want to prevent it from being
Starting point is 00:25:17 forwarded. Perhaps you want to set an expiration date so that, you know, it's sort of the mission impossible scenario. This will self-destruct in five minutes or in an hour or two hours. All of those things are possible and they're just done in a standard wrapper. You know, we're certainly hearing a lot of talk these days about zero trust, you know, both within the government and in the private sector. How does this data-centric security model fit into that idea?
Starting point is 00:25:43 Yeah, I think when I think of zero trust, I think John Kindervag went into great detail about zero trust as a concept and a framework. But personally, when I think about it, I like to focus less on the trust part and more about constant verification. And I like to tie it to a physical example. So if you've ever worked in the government in a secure location, you're constantly verifying, whether it's scanning your badge or people asking if you have your badge on or do you have the right clearance to enter this office. These are all verifications that
Starting point is 00:26:15 are happening continuously. And they're part of a strong story around security, right? Ask questions, make sure you verify who it is, what they should be doing, should they be there, etc. And when I think about zero trust in the digital world, I think it's very similar. How do I always verify that you are who you say you are, and that you have a right to be here, and you have a right to do whatever it is you're trying to do? And in the context of data-centric security, it fits very nicely into the zero-trust picture as obligations and ways of enforcing and thinking about verification of you when you get to the actual object itself. So zero-trust or any security architecture is all about layering, right? Multiple layers is going to be best. I'm going to ask, I'm going to basically verify that Dana is who he says he is and has access to this network. And then I'm going to do the same check at the application. And then
Starting point is 00:27:09 in this case, I'd be doing some of the similar checks at the data object itself. So it really fits in very nicely, almost like a Russian doll approach there with the data being the inside doll. What about onboarding for organizations that decide they're going to take this on and have a lot of data that they now have to, as you say, tag and put in these virtual envelopes? What's that process like? Yeah, I think usually the hardest part is actually the process of defining the classifications or the attributes that you want to use for your organization. And this is moving beyond the government, right? I work with a lot of banks as well, and the banking industry, financial services in general,
Starting point is 00:27:52 thinks a lot about classification. Now, they're thinking about it both from a security perspective and just from a risk perspective. Certain classifications of data have not just greater security requirements, but also have higher risk profiles. And so how do I track that? So the onboarding is kind of in two phases, right? One is sort of day forward.
Starting point is 00:28:15 So putting in place facilities so that data can be classified as it's being created and shared. That can be manual as a user doing it. Google Workspace, for example, has added a concept called labels that allow users to do this in Google Drive with docs, sheets, and slides. Or you can use automatic classification tools, whether that's machine learning or even just simple statistical modeling. Categorization and classification is actually not a very hard problem
Starting point is 00:28:41 to solve in modern day. And so another way you could do it is to start to use those kinds of solutions to auto-tag and classify data. For the folks who've gone through this and have had success here, what does that look like on the other side in terms of their experience? The other side of the sharing equation, you mean? Well, in other words, they're up and running and it's a part of their everyday operations. Beyond the security benefit, I think that risk piece is the interesting part, which is greater visibility into how is your most sensitive data being used and by whom. It's really hard to do that without those classifications because you're then looking at data that Dana owns or Dave owns, and that's one or two people out of maybe tens of thousands. But if I have a consistent classification of PHI,
Starting point is 00:29:31 for example, then I can use that classification to then actually do analytics and see where is my PHI data being used, and then where is it being shared? So that visibility part actually is one of the best benefits that most organizations see is that piece. The security comes along and it's certainly really important. But ultimately, it comes down to that risk analysis and then just agility. The idea that you could quickly make a decision to entitle Dave to PHI, for example, and he just instantly gets access to large amounts of data that he couldn't access previously, that agility piece is a really interesting benefit.
Starting point is 00:30:12 And it's one of the ones that the government and the defense intelligence base is certainly betting on. But also we see that with banks and financial services as early adopters. So that's been a big success for them is the ability to grant access to data that couldn't be seen previously in a really quick way. I would imagine that that visibility also has benefits on the regulatory side to have the data of who gets to see what, to be able to
Starting point is 00:30:38 demonstrate it. Absolutely. I think demonstrating who can see it and then thinking about things like retention and e-discovery and all the obligations you have relative to the data you own. Knowing that you have a really good classification where you can pretty precisely say, this is all of the PHI data that I have, structured and unstructured. And here's all the people that have access to it and the systems that have access to it and, you know, the places it's being shared. I think that visibility is critical if you really want to keep up with, you know, sort of modern compliance regimes. What are your recommendations for organizations to get started here? What's the best path? I think the first thing is before you even get to data-centric security or even the security pieces of it and the enforcement, it's really starting on that journey of agreeing on a classification system and then thinking about where you can apply it and starting to actually classify data. I think a lot of database vendors, a lot of organizations certainly do a good job of classifying structured data.
Starting point is 00:31:49 But the unstructured world is definitely a little less mature. And I think that's probably the biggest opportunity. So again, I go back to the Google Workspace example. I love what the Google team is doing there with labels. Make it easy for a user to apply a sensitivity label to a document. Microsoft is doing similar in their office suite. And I think that's probably the first place to start. Agree on, you know, have your working group agree on what your classification system and scheme you're tagging is going to be, and then start figuring out ways to roll that into
Starting point is 00:32:15 the applications. And then as you've got that in place, I think you can take the next step into, great, now how am I going to use that to make decisions about access and enforce security? We'd like to thank Bill Newhouse from the U.S. National Cybersecurity Center of Excellence, NCCoE, and Dana Morris, Senior Vice President of Product and Engineering at Virtru for helping us get our arms around this data-centric model. And we'd like to thank Virtru for sponsoring the show. This has been a production of the Cyber Wire and N2K, and we feel privileged that podcasts like Cyber Wire X are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies.
Starting point is 00:33:09 N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Iben. Our sound engineer is Trey Hester. And on behalf of my colleague, Dave Bittner, this is Rick Howard signing off. Thanks for listening.
