CyberWire Daily - A midseason takeaway. [CISO Perspectives]

Episode Date: November 25, 2025

In this mid-season episode, Kim takes a step back to reflect on the conversations he has had so far. During the episode, Kim sits down with N2K's own Ethan Cook to connect the dots across episodes, diving into how new technologies are impacting longstanding challenges, both from a security standpoint and from an attacker's view. Whether you're catching up or tuning in weekly, this episode offers a thoughtful recap and fresh perspective on where we've been—and what's still to come. Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the Cyberwire Network, powered by N2K. This exclusive N2K Pro subscriber-only episode of CISO Perspectives has been unlocked for all Cyberwire listeners through the generous support of Meter, building full-stack zero-trust networks from the ground up. Trusted by security and network leaders everywhere, Meter delivers fast, secure-by-design, and scalable connectivity without the frustration, friction, complexity, and cost of managing an endless proliferation of vendors and tools. Meter gives your enterprise a complete networking stack, secure wired, wireless, and cellular in one integrated solution built for performance, resilience, and scale. Go to meter.com slash CISOP today to learn more and book your demo. That's M-E-T-E-R.com slash C-I-S-O-P. Welcome back to CISO Perspectives.
Starting point is 00:01:14 I'm Ethan Cook, lead analyst at N2K, and editor of the CISO Perspectives podcast. Throughout this series, Kim's been venturing into uncharted territories and tackling some of the most complex emerging issues facing our industry from every angle. Over the past few episodes, we've had some incredible guests, leaders who've built, broken, and rebuilt security programs, grappled with emerging technologies, and reshaped what it means to be a strategic leader in cybersecurity.
Starting point is 00:01:41 Now we're pulling back the current a bit to give you a deeper look into what we've been building here at SISO perspectives. In this episode, we're hitting pause to reflect on the insights we've uncovered so far and set the stage for what's next. Today, the mic turns to Kim as he becomes the guest. I'll be asking him to look back on the past several episodes and share what stood out and talk about where the conversation goes from here.
Starting point is 00:02:03 Let's get into it. Ethan, welcome back. Good to be back. Yeah, I loved when we did this not only last season, but I also loved when we did this earlier. It's a great opportunity for us to just have the discussions regarding some of the material that's out there and get a little more probative on some of these pieces and parts. So I think last time, because the focus was more policy and law, which was heavy into where your background is, I think I was the one being more probative of you. So what I'm going to ask
Starting point is 00:02:51 you to do this time, since the topics seem to be more, I want to say, of a technical I'm going to ask you to let's put the script and have you be a little more probative with me. So Flores here, so talk to me. So as a quick, you know, recap, we've had two major themes going on. We've been talking about privacy and how that's been evolving and the ways we can look at that, as well as fraud and identity and the impacts that new technologies are having on that and the way threat actors are evolving attacks on identity and committing fraud. As a quick recap for you all.
Starting point is 00:03:23 So I think for me, you know, I know a good amount. about privacy, not the technical sides of a bit, but from my policy standpoint. So I think let's start with the identity side because that's something that's a little less grounded for me. And maybe you can pick your brain a little bit. So one of the conversations that we had was about the impacts of AI and how AI is changing the face of identity and how it's going to change the face of identity. And we talked about different recommendations and ways to manage AI and how AI is kind of just proliferating. through every business and how it's becoming really difficult to kind of limit its access.
Starting point is 00:04:03 How are you seeing that and what is really driving you and what do you think are the best ways for practitioners to evolve that? Well, great question. And I'll start with the last part, which is, yeah, what are the best ways? Yeah, that's a good question. I think we're all trying to figure that out right now. But an interesting concept that I've been having lots of conversations about regarding identity in AI. So I like to go broad when we talk about topics like this. So if we understand the basic
Starting point is 00:04:35 concept of identity, which is for lack of a better term, and I think we talk about it more, Richard talks about it phenomenally well, about basically how do I determine that you are who you say you are and then associate permissions to do things with that in the environment. Now, if we think about this conceptually, every time an application, let's get down to applications, does something on your behalf, as in, you know, autoload your password even on a web-based application or goes forth and fetches data from a system on your application. That application is acting in limited fashion as you. It is interfacing with another system on your behalf to do service.
Starting point is 00:05:26 things within the environment. Now let's apply this, and it's a, it's not a perfect analogy. Let's apply this to AI. AI can be boundless in terms of what it does for you. So if I establish an AI agent, and my terminology is flawed, I understand, within my browser, etc., to do certain things, and I give that AI agent permission to, to act as me, that AI agent can do almost boundless things on my behalf. Now, let's take it to the next step. If that AI agent is hacked, then that AI agent on my behalf can do malicious
Starting point is 00:06:12 things within the environment. So are we getting to a point where not just AI is making it easier to commit fraud because of speed or the reality. or the seeming reality that ejects into scams, et cetera, within the environment. But are we getting to the point where the AI agent needs to be addressed as a separate persona, if you will? So there's Kim Jones, you know,
Starting point is 00:06:44 the physical entity that is authorizing apps. And then there's the, you know, the Kim Jones persona or technical or digital clone that also has potentially boundless ability to do things within certain environments. And that I need to start thinking about this in terms of accountability, in terms of assignment of permissions. Do I need to track that AI entity or the Kim Jones AI entity as a separate different entity from Kim Jones, the individual authorizing applications? And I'm hearing a lot of discussions regarding that as people see the potential boundless capability of AI. What is that going to do to that function of identity?
Starting point is 00:07:33 Not just for fraud, not just for scam, but this thing now beginning to, for lack of a better term, anthropomorphize into a separate entity, which is another step of evolution in terms of how we address our, yeah, how we address technology. our concept of, I hate to get metaphysical here, but our concept of big air quotes humanity and what it takes to do that within the environment. And then on the scary side, are we a step closer to sky net? Yeah.
Starting point is 00:08:05 I don't know, but these are the type of conversations as AI has accelerated within the environment. So where's the accountability? Is it mean? Do you track that as me? Or do you track that separately? How do you investigate? What do you do?
Starting point is 00:08:20 These are interesting questions. And I would think, you know, this, just you're saying that, I think there's another angle to that, which I know you guys talked a little bit about, but the similar conversations that emerged with cloud and how, you know, when we, when cloud first came out, I think, and to this day, we still kind of see this, which is a huge problem with cloud environments is misconfiguration.
Starting point is 00:08:40 And I'm sure that's a similar thing that's popping up with the AI right now. Everyone's so ready to get into it that people aren't really taking a step back and making sure that its scope is limited, that it is properly secured. not just from a, oh, someone's going to hack into this, so it doesn't creep out on its own and start doing things without permission. Yeah, and I tend to downplay the misconfiguration concern a little bit.
Starting point is 00:09:06 Not that it isn't a concern, Ethan. Yeah. But even the best configured systems can go wrong. Yeah. Because these best configured systems at the fundamental end are made by human beings, and human beings are flawed. For us to say that a system is perfectly configured, therefore it will not be compromised and something won't go wrong, is the fallacy of the concept or perception of perfect security.
Starting point is 00:09:34 And when I teach at Berkeley and other places, I tell people perfect securities and oxymoron. It doesn't exist. You want perfect security, close up shop, wipe your systems, dunk your computers and Lusite and drop them in the Mariners Trench. Fantastic, perfect security. Can't get anything done, the perfect security. So, you know, I agree from the configuration standpoint, it is important. I agree that we're not understanding the potential impacts. It's the classic ready fire aim associated with, you know, new tech that's out there.
Starting point is 00:10:05 But even if we take the time to slow down and do it right, stuff is still going to go wrong. Yeah. So you mentioned this earlier, and I think it's, you know, we had this conversation. We really dove into the concepts of scams and frauds as a. attack method with Mel and we talked about how the kind of the major themes that had been persisting one of which and this is a little different from that AI side but you know the the challenges we're seeing on the identity side and I think we talked about a variety of scams from cryptocurrency scams kind of being the the common one that is emerged over recent years but we
Starting point is 00:10:42 also talked about employment scams and we talked about you know the the the nomenclature is pick-butchering, but it sounds very harsh. I think Mel referred to it as like friendship scams or something along those lines. You know, out of these three kind of major scams that she highlighted and that what they're seeing at the BBB, one of them that stood out to me was the employment side and how, you know, companies are being tricked into hiring people who are not who they say they are and giving them access into things. How do you manage that?
Starting point is 00:11:18 How can we, you know, especially as we get more digital and people, remote workers, more prevalent, you know, how does that, how can we get a handle of those things? Well, I want to first talk about the other end of that. You talked about companies hiring people who are not who they say they are. My first portion of that is there are, there are mechanisms that exist in place right now in terms of background chats, et cetera, depending upon how much money you wish to spend and how, if you'll excuse the language, proctalotological. you want to get regarding your hiring practices. And heavily regulated organizations do a lot of that. I want to talk about the scam from the other end. Yeah.
Starting point is 00:11:57 And it's so funny because, yeah, because I regularly am on somebody's list right now, like sends me a text and says, hey, you know, so and so is hiring, my old company, Into it, I got a test. Into it is hiring. It's like, yeah, no. So, you know, in terms of fake jobs. And in an economy where people, in particularly when people are struggling, or they're looking for opportunities,
Starting point is 00:12:27 they are providing information regarding themselves. They want to do background checks. So they ask you to put in a social security number. Congratulations. You've just given them the case to your kingdom. And those are the ones that bothered me the most of that job scan area. And this gets back into, again, concepts of identity. Right now, right this second, if I need, you know, from my identity concept standpoint,
Starting point is 00:12:56 identity is one directional or unidirectional. I have to prove that I am who I say I am to gain access to the systems. Where do these systems have to prove that they are who they say they are to me? and the identity paradigm is set up to be unidirectional and has been for a while. So the answer to your question from my perspective, Ethan, gets down to we have to get to a point where we need to start rethinking the identity paradigm. And until we break that identity paradigm, it's going to be very hard to get in front of this and still allow this data-driven economy to move.
Starting point is 00:13:40 So I think, fundamentally, we got to break the identity paradigm. We got to change it. And I'm not hearing as much talk other than folks like from Richard and a few of the people I know who are working on saying, how do we do this effectively without overburdening the user or demanding more information from them than they're willing to give us. So it's hard. Yeah, and I think, you know, I still always come back to this thing when it comes down with identity and people abusing identity. and the exploitation of the human factor and how for years we've always had this conversation of it and to your point earlier which is you can have the best system
Starting point is 00:14:22 the best security etc do all these things but it's still managed or there's still some human aspect to this and when you introduce that into it fallibility happens mistakes happen people are taking advantage of and this human factor where we target people through social engineering makes it really difficult, especially when some of these people are, you know, social engineering has only seemingly gotten better
Starting point is 00:14:46 and more prolific year over year. How do we manage that side of identity and getting people to not get taken advantage of in a way that we haven't already attempted? So you asked the very broad question at the front in terms of how do we manage the human side? The first answer is you don't. Let me give you a physical example.
Starting point is 00:15:07 Physical crime. Law enforcement has been around for at least centuries, realistically, millennium. Yeah. Okay. Murder's still a thing. Deft is still a thing. Kidnapping is still a thing. And the list goes on.
Starting point is 00:15:25 I believe that there is an expectation of perfection within the techno ecosystem that is not realistic. Yeah. I believe that we should be in a situation to say, look, if we can't stop crime in the physical world, how the hell do you expect to make it go away in the technological world? When the scale and the speed. Yeah, that has an unrealistic expectation. And business for in order to drive the data-driven economy and cyber practitioners in order to drive our profession, in my opinion. in my opinion have set that false expectation to the rest of the world. Now, so one, in terms of how do you manage it, you don't.
Starting point is 00:16:18 That said, it doesn't mean it can't get better. If you look at crime statistics in the U.S. from the peak, I was looking at this my Berkeley class, and the numbers are going to be off, but the trend line isn't because I don't remember the numbers top of head. For a while, I was looking at, murder rates climb in up to the 80s where it peaked and then took a drastic, you know, a drastic downturn
Starting point is 00:16:47 from about 1980 down to about 2010 within the environment. So we in, I believe the FBI measures number per 100,000, et cetera, within the environment. And it wasn't just a change in how they measured. But we were beginning to see different programs come into place, different incentives, different, incarceration guidelines, different things happening,
Starting point is 00:17:09 which were forcing the rates down. They did peak a little bit, I believe, during the COVID year, but are beginning to come back down within that environment as well. So looking at that trend line and trying to make the comparison to say the fact that we can't control it
Starting point is 00:17:25 doesn't mean we can't make it better. We can't reduce it to some extent. But we have to start with, look, let's be real. I can do everything in my power and say that I'm tough on crime, I can protect the bejesies out of the environment, something's going to happen. I need to understand that.
Starting point is 00:17:47 But what I can do is I can reduce the probability and I can reduce the impact or likelihood, impact, risk. I can reduce the risk within the environment. I genuinely and sincerely believe that we need to focus on, are we reducing the risk reasonably within our environments? Now let's take another half step back. The broader question is, what aren't we already doing? I just finished a great book by Ezra Klein, which was looking at why certain things within this great nation may not necessarily be working the way that we want them to.
Starting point is 00:18:24 And frankly, isn't picking, and Ezra Klein is very, very liberal and is actually castigating liberals within the environment. So, you know, for those of you who are like, oh, he's criticizing, no, he's actually poking at liberals to say, you're the ones who say you want these sorts of things, we're the ones getting in the way of these sorts of things. So it's a good read. But he talks about innovation and what we have tended to do with innovation and looking at grants for truly new and innovative. technologies. And he uses the example of RNA. I think it's MRNA. Basically, the underlying technology that created the COVID vaccine. He talks about the original ideas and concepts regarding use of RNA to do things from a vaccination standpoint are 20 to 25 years old. And he gives the history of the woman who came up with this and for the better part of decades could not get
Starting point is 00:19:40 grants funding any traction at all within the environment. And he even goes further to talk about studies that have shown that as we look at government grants and support for innovation and things of that nature, we have tended to support more things that are focused. focused around. I use the old Bloom County Tint Control, you know, modify existing things versus truly innovative, groundbreaking ideas. Modify the status quo by half a percent versus go here. And in a later episode where we talk about investment, et cetera, I give a very pointed example of that regarding identity. Stay tuned, listeners, it's coming up. Yes, I...
Starting point is 00:20:31 You know what I'm talking about. And I think there is a, you hit a point. Not to dive down on that, but I think there's obviously an incentive to keeping the status quo the same. And that is the conversation that people should tune into. And if what we want to do in terms of what we aren't we doing now that we could be doing, that we ought to be looking at doing better, is to burn the books. It is to throw away the status quo. it's to truly look at the problem differently
Starting point is 00:21:03 and to embrace that level of true innovation out there. I think within the paradigms that we have set, we're doing just about everything we can and ought to and should do within the environment. But it's time to break the paradigm and figure out, is there a better way? And I do think regarding identity, going back to where we started this conversation,
Starting point is 00:21:28 There are ways to do these things, but they're so different. It scares people. And because it scares people, we walk away and just reinvent the same wheel with a new label and, you know, shinier stuff. Have you ever Have you ever imagined how you'd redesign and secure your network infrastructure if you could start from scratch? What if you could build the hardware, firmware, and software
Starting point is 00:22:33 with a vision of frictionless integration, resilience, and scalability? What if you could turn complexity into simplicity? Forget about constant patching, streamline the number of vendors you use, reduce those ever-expanding costs, and instead spend your time
Starting point is 00:22:50 focusing on helping your business and customers thrive. Meet Meter, the company building full-stack to zero-trust networks from the ground up with security at the core, at the edge, and everywhere in between. Meter designs,
Starting point is 00:23:06 deploys, and manages everything in enterprise needs for fast, reliable, and secure connectivity. They eliminate the hidden costs and maintenance burdens, patching risks, and reduce the inefficiencies of traditional infrastructure. From wired,
Starting point is 00:23:22 wireless, and cellular to routing, switching, firewalls, DNS security, and VPN, every layer is integrated, segmented, and continuously protected through a single unified platform. And because Meeter provides networking as a service, enterprises avoid heavy capital expenses and unpredictable upgrade cycles. Meter even buys back your old infrastructure to make switching that much easier. Go to meter.com slash CISOP today to learn more about the future of secure networking and book your demo. That's M-E-T-E-R-com
Starting point is 00:24:00 slash C-I-S-O-P. So let's, you referenced it a little earlier, and I think this is a good segue into the conversation regarding some of the challenges that we're seeing year over year and kind of instead of changing them, we're just kind of approaching it from a new way, but it's really the same way with just a new coat of paint on it. And that's how we handle it.
Starting point is 00:24:27 privacy. And I think that's been evolving year over year and I think, you know, from a policy standpoint, I have a ton of passion behind this and there's a lot of concerns not just regarding how algorithms are going after and, you know,
Starting point is 00:24:43 getting data, but how AI is going to transform these things and the scale at which it can transform how we process data about people. So the first conversation was with Christie. And we talked about the impacts on privacy from a small business perspective and the expectations on small businesses,
Starting point is 00:25:04 especially ones that go across state lines or go across country lines. We talked about the impacts of AI. We talked about how we can manage contracts and how we can manage how privacy we give out, et cetera. I think one of the things that stood out to me in this conversation was, And I just referenced it, but the expectations on small businesses who don't have the scale to be able to effectively manage privacy across, let's say, 50 different state privacy laws, because the U.S. has a central privacy law, but it's not really impactful in the same way that, let's say, the GDPR is. When we look at that, how do you interpret that conversation? How do you say to a small business or someone who doesn't have the scale, but they are an internet platform who anyone from, let's say they're based in Oregon, someone from Florida can click in and buy their products and have it shipped across state lines, but they've got to give up credit card information, address information,
Starting point is 00:26:06 you know, personal name, et cetera, email, all that. How does that small business that does not have the scale to hire 50 different lawyers that are experts manage those things? And how can we begin approaching that in a more effective way? Great question. I'm going to answer your question because I think it's absolutely relevant. And then I'm going to go broad again as we begin to talk about the concept of privacy. Christy does a great job, I think, near the end of that particular episode
Starting point is 00:26:34 because we ask that question very directly. And the short version is, the short, flippant yet accurate version is the best you can. The less short, less flippant, yet equally accurate version is you're going to have to go to third-party resources out there, invest a part of your revenue in making sure you're staying ahead of that.
Starting point is 00:26:59 Many companies, for example, let's go to PCI from regulations. They outsource their card management system and their car folder data environment to a third party who then assumes all of that risk and regulatory overhead, et cetera. You're going to have to do that sooner or later. Now, do you have to do that large, depending on. upon your business when you have two customers in Oregon and everyone else is in your backyard, or you're going to have to do that large, you know, because you're working very small and you have no intention of expanding. The biggest thing for small business to do is to not ignore
Starting point is 00:27:36 it, ask questions. Most and many of your chambers of commerce out there, the Better Business Bureau as well, may have resources that can point you in the right direction for that. So for a small business, the short, short answer is just don't ignore it. that's that's that's that's the answer there but let's take a little bit at the larger question and there's some interesting paradigms and paradigm shifts happening with privacy and let's just talk within the u.s we'll leave the international piece alone because that would take us another hour so we'll just stay here within the u.s we all understand that the paradigm for data privacy within the U.S. is markedly different than, say, Europe and a lot of other, a lot of other
Starting point is 00:28:25 countries in terms of data ownership and what organizations can do. Way back in the day, I give you my data, you can do whatever the hell you want with it. That paradigm is shifting here in the U.S., but it still hasn't shifted fully because of the potential impacts on this data-driven economy that we have created here within the environment. But I will also state that given the state of a different economy, we have also seen shifts in expectations of privacy that are decidedly generational within the environment. Now, my son, shout out to my son, Scott. My son went to the honors college here at the university, and the honors college has an undergraduate, thesis that they have to give and defend in order to graduate.
Starting point is 00:29:23 And he did his on the shifting paradigms of privacy. And when I read his dissertation and went to his defense, I'll never forget the comment that was made by one of his students, or one of his classmates, rather, who talked about their willingness to surrender data of any sort. And I'm quoting, you know, I will give up any of my data to get a 5% discount at my local Starbucks. Yeah. And, you know, that's a 22-year-old 10 years ago. That's a markedly different attitude regarding privacy within the environment.
Starting point is 00:30:06 That feeling that the stuff that I'm being asked to give up is not as valuable as the service or the return that I will get for it. And we do that with, you know, again, not picking on Google, we do that with Google all the time. Google freely admits that it spider crawls, anything you put within its systems, within its Gmail, within its documentation to figure out how to better market and advertise to you. Yet we use Google regularly because it's free.
Starting point is 00:30:39 You know, what we give up for that, we believe is not as valuable as... the services that we get. The dangers of AI combined with cloud computing, etc., is that relation, and again, I don't know if I say it here, but I know I've said it in the past regarding data versus information versus intelligence. So we take the raw data, we put it into context to create information, and then using processing speed as well as machine learning,
Starting point is 00:31:09 we can extract intelligence about you as an individual, in many cases beyond what you really intended to give up. This goes back to the first target breach in terms of someone just looking at surfing patterns for a 15, 16-year-old online determined that the young lady was pregnant before she had told her parents that she was pregnant. Now what we have is with AI and machine learning
Starting point is 00:31:38 and the speed of processing, the ability for us to take that announcement, innocuous data, contextualize it to create information, and then extract meaningful intelligence out of it, is absolutely scary. And that's the big change that I think has got people worried from a privacy standpoint. That's the scary part that's going to have a massive impact upon our perception of privacy and our actual privacy, in my opinion. So there's my doom and glooms.
Starting point is 00:32:09 So I want to actually pivot that a little bit, because you may. mentioned how the data that we give up, right? I saw a great report a couple weeks ago, not about the data that you and I give up, right? But the data that company employees are entering into AI systems to help speed up their workflow, data that which is confidential, sensitive, et cetera, and they're viewing it as, oh, I'm speeding up my job, right?
Starting point is 00:32:34 This is data that I already had access to that I was working with. So, like, from a legal perspective, they individually are fine. And I guess this gets back to that previous conversation regarding different personas and what we can manage. But because AI is, I guess, so proliferative, people haven't really gotten their handles on it. Companies certainly haven't. People are entering data into databases, Excel sheets, et cetera, into AI models. It's not the proliferation problem, Ethan. I know where you're going.
Starting point is 00:33:01 It's not the proliferation problem. AI is getting as close to, you know, I'm not sure by geek credits, as close to Madgell Barrett's voice on Star Trek. next generation and being that computer. And to that point of we assume that this thing is not just somebody else's compute power in somebody else's data center. We assume that this is just this thing that I can use and that, you know, we don't even think about what's in the back end. We're all having visions of Star Trek.
Starting point is 00:33:40 That's what's happening. You know, the technology and our expectations of the technology are interfering with the reality. A war story for you that, you know, I'm going to go and show my old that are probably a predates you because you're just a whippersnapper. Yeah. So, okay. Do you remember when Siri first came out and IBM restricting the use of Siri? Yes. Do you remember that?
Starting point is 00:34:04 Yes. That's, this is a similar case. And for those who don't remember that, you know, there were folks at IBM. when Siri came out that were asking Siri questions to help speed up their workflow and solve problems, not recognizing that that translates to what's happening is that question is going into service with Apple and so that they can process and answer it. So since we were competing with Apple, you're giving away competitive data. So when IBM finally realized that, they said no Siri used in the office.
Starting point is 00:34:39 this is just the advanced placement version of that problem. Yeah, and I think, you know, there's this, I guess the logical progression of that problem, right? Where before that was an isolated business case, that's a competitor, we don't want to give them, etc. But now it's not even competitive, right? You have companies that do not work in AI at all, have no involvement in AI. And so it's not a competitive thing, but to the point, the employee is sitting there saying, I have to get through all this data, or I have to pull meaningful insights from this data and I got to do it by the end of the day, plus 10 other things. But then I think what people
Starting point is 00:35:14 don't realize, and I think from a company perspective, like, oh, my God, this person's so great, they're getting their work done fast. There's this gap that has formed. And maybe it's because of the lack of understanding about what's happening on the back end, lack of understanding how these algorithms work, or they're just willing to accept, hey, it's not my responsibility. How do we manage that side of it? Because I think that's something that I think, at least from what I understand is only getting more prevalent. It gets down to several things in that regard. And now I'm well leaning over my skis because now we get into hiring practices,
Starting point is 00:35:48 expectations of employees, employee leadership, in addition to the tech. And that's collectively across the board. So leaning over my skis, it gets down to, if I'm going to provide somebody a tool and telling them not to use it, I think, is a, is a, is a, gesture and stupidity. You know, the tool is there. You is a, we should want to be able to use the tool. How do I use first the tool responsibly within the environment?
Starting point is 00:36:16 And I think, big underline and big caveats here, we're beginning to grapple with that, because people are saying: if AI can do a lot of this entry-level, basic analytics piece, then my expectation of you should be beyond just doing that entry-level analytics piece. Because that which I would pay you to struggle through, you can now have something do with 80% accuracy within ten minutes. So what else should I expect of you beyond that, in order to advance? So I think, to hit on the last conversation
Starting point is 00:37:08 that we've had during this section, and that was our conversation with Mary about use cases and kind of that unseen world of privacy. We all know that, you know, you agree to the terms of service when you enter a website, or I agree to tracking cookies. You know, these are common things
Starting point is 00:37:22 that we kind of just accept now, or, you know, I'm ordering something and entering my personal information. But there's a whole other side of privacy that I don't think gets the attention. And, you know, I think Mary brings up a great use case regarding cars, but let's expand it to the broader conversation
Starting point is 00:37:36 of IoT devices in general and how they are these finely tuned sensors that can pick up significant amounts of contextual data, correlate that data with other pieces of data that they've already collected and get insights
Starting point is 00:37:53 at a significantly higher level than we would ever expect. Mary talked about cars, and we talked about how they can get access to your phone, contact information, locations that you've been driving to, times that you were driving to them, et cetera. That is one use case, but I think as an IOT,
Starting point is 00:38:13 taking a step back and looking at IoT as a general concept, how do we, as a security professional, as a security business leader, get a handle on devices that seem to be just every year, there's more of them, and they collect more? Mm-hmm. inside my environment you're not going to end up hardening iot in my i'm talking corporate you're not going to be able to harden iot the way that you want to harden iot collectively within the environment because you're going to drive cost points up the wazoo and it's not going to happen
Starting point is 00:38:47 it's going to defeat the purpose and it's going to you know i i'm not necessarily sure the devices we're talking about could handle that what i can control is the network that can communicate on. Yeah. And I can monitor and manage and understand that IOTs transmit and receive, and I can control the transmission and the reception and where things are going, et cetera. So my first response is within a particular network, understand that IOT is not your average technology device within the environment
Starting point is 00:39:19 and what you can do within that environment, you know, to IOT and with IOT is something that's slightly different than maybe what you've seen before, look at it as a separate entity, talk about the use cases within the environment, do the threat modeling associated with IoT, and then solve the problem, big air quotes won't solve, solve the problem at the network layer.
Starting point is 00:39:42 So I think that's a great, one great solution. I think the gap where I see is on personal devices. So in privacy for cars, they put out a great paper, highly recommend reading it, but they bring a good case study where they got access to a car that used to be owned by a military contractor. And the military contractor, they were able to harvest, and that was their personal vehicle, was able to harvest significant amount of data about that person.
Starting point is 00:40:07 Nothing confidential, but certainly stuff that a military contractor would probably not want out in the public, such as email addresses, personal home addresses, military location addresses that they have driven to, et cetera. But that's not necessarily always under the purview of a company. that's something that, hey, that's their personal vehicle. That's their right to manage. But there's this weird gray zone where it's, yeah, it's their personal vehicle, but it has sensitive information on it that the company would probably not want out. How do you manage that?
Starting point is 00:40:39 I go back to what we said in terms of crime. You don't. But there's an education piece here. So let's back up for a second. First, there's an education piece. I'm old enough to remember when people realize that corporate printers had hard drives. Yeah. And that as part of their technology disposal, they have to get rid of the hard drives within their corporate printers.
Starting point is 00:41:00 That wasn't always a thing. And it was, oh, my God, think about all of the information that leaks out, because all I did was throw the printer on the scrap heat. Yeah. So, you know, we copy paste that information where possible to different devices that we have. You know, I don't know because I've been out of the military for over 20 years now. I don't know if as part of, you know, the military educating his people on security says, just remember to why, you know, if you sell your car, do what you can to wipe the devices accordingly within the environment or eliminate your contacts, eliminate tracking data, et cetera.
Starting point is 00:41:42 I don't know if the military in individuals will end up pushing these vendors to understand this within the environment. I don't know if as an individual, I have the ability. if I sell my smart car to say, I want you to go remove the hard drive in that smart card, give it to me so I can drill through it or put a nail gun in it
Starting point is 00:42:03 and then replace it at your cost. But it starts with education. And, you know, by the way, I echo your comments regarding Mary's paper remembering to read the paper, it's absolutely phenomenal. But it's first the
Starting point is 00:42:18 understanding that things that we do not think about are actually potential vectors for harm and are actually part of the iot and i got to bring up this point because so for any of you who are still struggling with the concept of your car as a member of the internet of things i urge to go you know research that it's going it started monday it's happening now within the environment so it just it was relevant and i said yeah i got to no i think it's a perfect example of the kind of that hole
Starting point is 00:42:55 that exists in how we think about privacy. You know, it's, oh yeah, YouTube's harvesting my data, social media is harvesting my data, what I look at, right? Well, your car is harvesting your data too. It's more than just the social media algorithm; it's way more widespread.
Starting point is 00:43:06 But it's got to start with the education piece. Yeah. There's only, I mean, the military probably has a better purview in terms of controlling both its borders and what its members can do. But you're going to be hard press to tell a military member that all you can buy is a purely manual, no software car to
Starting point is 00:43:29 drive on an installation, assuming you can still find one out there. Even the cheapest cars have computers in them these days. So that's going to be very difficult to do. So all you can do is educate within the environment. Or should we, collectively, be able to influence carmakers the way we influenced the people who make corporate printers? To say: okay, I understand the hard drive is here, and it needs to be here. Can I influence you enough, as a cyber industry, or as privacy concerns kick up, to place the hard drive in the car such that it can be easily swapped out as an option, should I wish to maintain my privacy? Yeah, I love all the bells and whistles, but give me the hard drive so I can do what I do
Starting point is 00:44:16 with my computers that I scrap and put a nail gun to it before I hope. throw it away. And that's going to take, in my mind, probably another four to five years, assuming there's enough human cry out there. Yeah. I think that's a great point to leave this off on Kim. And I think as we sit back and we think about the reflection and we think about the past several episodes, I think this illustrates a great point about the, not just the evolving nature of these two subjects, privacy and identity slash fraud, but the, the gaps that we are still contending with and the gaps that we, as a collective, need to get a better handle on. So I thank you for sharing your insights. I thank you for taking the time
Starting point is 00:44:59 to have this conversation with me, and I look forward to our next reflection. And that's a wrap for today's episode. Thanks so much for tuning in and for your support. and 2K Pro subscribers. Your continued support enables us to keep making shows like this one, and we couldn't do it without you. If you enjoyed today's conversation and are interested in learning more, please visit the CISO Perspectives page to read our accompanying blog post, which provides you with additional resources and analysis on today's topic.
Starting point is 00:45:42 There's a link in the show notes. This episode was edited by Ethan Cook, with content strategy provided by Myon Plot, Produced by Liz Stokes, executive produced by Jennifer Ivan, and mixing sound design and original music by Elliot Peltzman. I'm Kim Jones. See you next episode. Securing and managing enterprise networks shouldn't mean juggling vendors, patching hardware, or managing endless complexity. Meter builds full-stack, zero-trust networks from the ground up, secure by design, and automatically kept up to date. Every layer, from wired and wireless to firewalls, DNS security, and VPN is integrated, segmented, and continuously protected through one unified platform.
Starting point is 00:46:48 With Meter, security is built in, not bolted on. Learn more and book your demo at meter.com slash CISOP. That's M-E-T-E-R dot com slash C-I-S-O-P. And we thank Meter for their support in unlocking this N2K Pro episode for all Cyberwire listeners.
