CyberWire Daily - Election Propaganda: Part 3: Efforts to reduce the impact of propaganda in future elections.

Episode Date: October 16, 2024

Thinking past the US 2024 Presidential Election, in part three of the series, Rick Howard, N2K CyberWire's Chief Analyst and Senior Fellow, discusses reducing the impact of propaganda in future elections with Perry Carpenter, Chief Human Risk Management Strategist at KnowBe4 and host of the 8th Layer Insights podcast; Nina Jankowicz, Co-Founder and CEO of the American Sunlight Project; and Scott Small, Director of Cyber Threat Intelligence at Tidal Cyber.

Check out Parts 1 and 2!

Part 1: Election Propaganda Part 1: How Does Election Propaganda Work? In this episode, Rick Howard, N2K CyberWire's Chief Analyst and Senior Fellow, discusses personal defensive measures that every citizen can take, regardless of political philosophy, to resist the influence of propaganda. This foundational episode is essential for understanding how to navigate the complex landscape of election messaging.

Part 2: Election Propaganda: Part 2: Modern propaganda efforts. In preparation for the US 2024 Presidential Election, Rick Howard, N2K CyberWire's Chief Analyst and Senior Fellow, discusses recent international propaganda efforts in the form of nation-state interference and influence operations, as well as domestic campaigns designed to split the target country into opposing camps. Guests include Nina Jankowicz, Co-Founder and CEO of the American Sunlight Project, and Scott Small, Director of Cyber Threat Intelligence at Tidal Cyber.

References:
Rick Howard, 2024. Election Propaganda Part 1: How does election propaganda work? [3-Part Podcast Series]. The CyberWire.
Rick Howard, 2024. Election Propaganda: Part 2: Modern propaganda efforts. [3-Part Podcast Series]. The CyberWire.
Christopher Chabris, Daniel Simons, 2010. The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us [Book]. Goodreads.
Chris Palmer, 2010. TFL Viral - Awareness Test (Moonwalking Bear) [Explainer]. YouTube.
David Ehl, 2024. Why Meta is now banning Russian propaganda [News]. Deutsche Welle.
Eli Pariser, 2011. The Filter Bubble: What the Internet Is Hiding From You [Book]. Goodreads.
Kara Swisher, Julia Davis, Alex Stamos, Brandy Zadrozny, 2024. Useful Idiots? How Right-Wing Influencers Got $ to Spread Russian Propaganda [Podcast]. On with Kara Swisher.
Nate Silver, 2024. What's behind Trump's surge in prediction markets? [Analysis]. Silver Bulletin.
Niha Masih, 2024. Meta bans Russian state media outlet RT for acts of 'foreign interference' [News]. The Washington Post.
Nilay Patel, 2024. The AI election deepfakes have arrived [Podcast]. Decoder.
Nina Jankowicz, 2020. How to Lose the Information War: Russia, Fake News and the Future of Conflict [Book]. Goodreads.
Perry Carpenter, 2024. FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions [Book]. Goodreads.
Perry Carpenter, 2021. Meatloaf Recipes Cookbook: Easy Recipes For Preparing Tasty Meals For Weight Loss And Healthy Lifestyle All Year Round [Book]. Goodreads.
Perry Carpenter, n.d. 8th Layer Insights [Podcast]. N2K CyberWire.
Renee DiResta, 2024. Invisible Rulers: The People Who Turn Lies into Reality [Book]. Goodreads.
Robin Stern, Marc Brackett, 2024. 5 Ways to Recognize and Avoid Political Gaslighting [Explainer]. The Washington Post.
Sarah Ellison, Amy Gardner, Clara Ence Morse, 2024. Elon Musk's misleading election claims reach millions and alarm election officials [News]. The Washington Post.
Scott Small, 2024. Election Cyber Interference Threats & Defenses: A Data-Driven Study [White Paper]. Tidal Cyber.
Staff, n.d. Overview: Coalition for Content Provenance and Authenticity [Website]. C2PA.
Staff, 2021. Foreign Threats to the 2020 US Federal Elections [Intelligence Community Assessment]. DNI.
Staff, n.d. Project Origin [Website]. OriginProject. URL https://www.originproject.info/
Stuart A. Thompson, Tiffany Hsu, 2024. Left-Wing Misinformation Is Having a Moment [Analysis]. The New York Times.

Transcript
Starting point is 00:00:00 You're listening to the Cyber Wire Network, powered by N2K.

This is the finale of our three-part election propaganda series, where our goal is to help the average American citizen navigate the 2024 presidential election information storm by providing a toolkit that helps distinguish
Starting point is 00:02:19 between the deceptive narratives and legitimate content in the ever-evolving world of election security. In part one, we covered how propaganda spreads on social media platforms with five interlocking and reinforcing agents of the social media machine that I call the Pentad. The platform, the algorithm, the influencers, the crowd, and the media. In the second part, we looked at recent and impactful propaganda campaigns from the past decade that used the pin tad from both nation-state information operators and domestic world-class influencers. For this last part in the series, part three, the final part, we want to cover what happens after the election. Because whatever we do before the election as individual citizens, users of the social media platforms, or platform owners setting their own content moderation policies,
Starting point is 00:03:19 or government lawmakers trying to provide the right incentives to reduce the impact of election propaganda will not be the final story. At this point, a couple of weeks before the 2024 presidential election, platform owners have no real interest in reducing the rage machine. Wrapping up the machine is how they make money. And government lawmakers seem befuddled about how complicated the pen tag is or scared to poke the social media bear for fear of getting tarred and feathered by the social media machine themselves. Or they actively use the pen tag to drive a wedge between their voting base and the base of their political opponents. Some of them have become world-class influencers themselves. Here at N2K, after the election, we're interested in what happens next. Will
Starting point is 00:04:10 there be any efforts before the next presidential election in 2028 that will reduce the impact of propaganda campaigns? Whenever I want to think about the future so that I can plan accordingly about the possible scenarios, one person that I can rely on to bring his deep reasoning and thoughtful intelligence to the problem is Perry Carpenter. He's currently the Chief Human Risk Management Strategist at KnowBefore, the host of one of N2K's excellent podcasts called Eighth Layer Insights about the eighth layer of security, the humans, and he's the author of several books that focus on the human part of the security equation. And it just so happens that his most recent book, Fake, spelled F-A-I-K, a practical guide to living in a world of deep fakes, disinformation, and AI-generated deceptions, fits perfectly into today's discussion.
Starting point is 00:05:19 I started out by asking Perry why he published the book now. Why now is because everybody and their AI-generated dog is talking about deepfakes and AI right now. So if you talk about market and timing, that answers the question on like why put something in the world that people might buy is questions abound and pocketbooks open,
Starting point is 00:05:44 I think is what it comes down to. There is a market for the information. The more real and less fluffy answer is, it's fascinating. I've been going down the AI rabbit hole for a few years now, even prior to the chat GPT moment. And I continue to be fascinated with what's possible. The fact that you can call things into existence, creative things into existence with just a string of words and a prompt is amazing. And then where it comes into the book, Deepfakes, Disinformation, and AI-Generated Deceptions, is that this is the intersection point
Starting point is 00:06:33 between a lot of interests that I have and a lot of passions that I have. I've been fascinated with social engineering and deception for basically my entire life. And now the fact that we have creative versions of artificial intelligence that somebody with the right mindset and the right creativity can use to deceive entire populations, that's both fascinating, interesting, and terrifying.
Starting point is 00:07:02 I said both, but that's three things. Is there a significance to how you spelled it f-a-i-k it's really just a play on deep fake as far as the word but there's ai in the middle of it so ai is the thing that's driving the fake but the funny thing is since then there have been people that just didn't get it and they've come up and they're like, what does fake mean for all I know? Turns out that is an acronym that is out there. And I actually think works as another way of interpreting. Is that real or is that fake? I don't know if it's real for all I know, if it's fake for all I know.
Starting point is 00:07:42 So, you wrote this book and it's about deep fakes and artificial intelligence. I want to get your sense of kind of where we are today going into the election with that kind of technology. The way I frame it in the book is that I say, and I'm going to use a word that's way overused, but I say that weapons-grade deception is now democratized. And so apologies for using the word democratized, but I really think that it captures the moment that we're in. So I think about technology, and especially sophisticated technology, in four different levels. First one is accessible only to nation states. So it takes tons of money, deep pockets, lots of expertise, people in lab coats to run this stuff. Then it kind of moves down to corporate grade. And corporate grade still needs some fairly deep pockets, people that have given their lives to the study and the furtherance of a discipline or technology. and the furtherance of a discipline or technology.
Starting point is 00:08:46 Then it moves down to consumer grade, people like you and me that may have a couple hundred dollars and might invest a little bit of time in a user manual in order to learn something. And then the final form of that, the full democratization of a piece of technology, is what I would call folk grade. And that means that everybody that has access to a piece of technology like a smartphone or a mobile device
Starting point is 00:09:10 or a laptop can use that thing. And it's usually zero to $20 a month. It's the equivalent of MS Paint being on every desktop. And there's probably a free version somewhere. Yeah, there are free versions. And what we've seen is that with even the free versions, you can create a weapons-grade deception right now. And the quote-unquote tells that many people believe that they see can be eliminated just by rolling the die of the generation a few more times.
Starting point is 00:09:41 Or simple improvement over a couple months of the technology. So I think we're at the point where, and it's been, it's confirmed by research, what they saw last year, 2023, was that even if you told people that within the next five videos, they would see a deepfake video, only about 21.3% of the time, if I remember right, could they accurately tell which one it was. So they would accurately identify it just over 20% of the time. And then also they would misidentify it about 20 to 25% of the time, which means we don't know what's real anymore. That reminds me of that. It has nothing to do with what we're talking about. It reminds me of that video was popping around the internet, I don't know, five, six years ago. The idea was there was a bunch of guys and gals playing basketball underneath a bridge,
Starting point is 00:10:34 and they wanted you, the task was to count the number of passes between the players. And so, the video goes on for a couple of minutes, and then at the end, they said, how many passes? And, you know, we all make our guesses. And then they said, did you happen to see the juggling bear on a unicycle go across the screen? And, you know, nobody did because they didn't notice it was going on. That seems very apropos to what you were talking about here is we don't notice the deep fakes as they're happening before our very eyes. We don't notice the deep fakes as they're happening before our very eyes. We don't. The experiment you mentioned is based on a really famous one by a guy named Daniel Simons called The Invisible Gorilla. And there's a book based on that as well that goes through
Starting point is 00:11:15 all of those little cognitive leaps that we make where we don't see what's right in front of us and framing effects and cognitive bias and everything else that goes along with that. So it's fascinating studies in that area of what we just miss. You know, the thing that I'm telling people more and more is that because we're living in a world or stepping into a world where we can't tell what is AI generated and what's not, because even quote unquote reality now has the fingerprints of AI on it. I mean, your Instagram feed that has everybody with whatever filters they use
Starting point is 00:11:52 to make themselves look better means the fingerprints of AI are on everything that we have right now. And especially as you have AI enhanced in every tool that's out there, from LinkedIn to Microsoft Word to Google Docs and so on. Every bit of text is going to have the fingerprints of AI on it. Every video is going to have fingerprints of AI. So the question that I care less about right now is, is the thing that I'm looking at in front of me real or fake?
Starting point is 00:12:23 Because chances are we won't be able to tell. Because it's blurred so much now is what you're saying. Yeah, there's so much blurring of what reality is. So I care less about that question. I care more about the question of what is the story that this is trying to tell me? What's the message? question of what is the story that this is trying to tell me? What's the message? Yeah, what's the message? Why did it land in front of me? Who stands to gain? Who stands to lose? What emotions am I feeling when I'm looking at it? What emotions might somebody be wanting to stir up in me? And then what do they want me to do with it based on that? Or what do they want me to believe
Starting point is 00:13:01 based on that? So let me restate what you just said, Parrish. I want to make sure I have this right. We pretty much can't trust whether a video or a picture or text or quotes or any kind of communications medium was generated by a person. It's all being touched in some way by some artificial intelligence engine. And your point is, it doesn't really matter anyway. Just assume all that's faked up anyway. Consumers of that content just need to pay attention to the message. I think it's two ways. So, there's the, let me look out at the piece of media and start to try to disambiguate what that may be trying to make
Starting point is 00:13:45 me do or believe. And then let me introspect as well and ask myself, is confirmation bias at play? Am I being emotionally riled up in some way? Is there maybe some little bit of ignorance or confusion or something else that may be being played on about the way that I understand technology or the world to work? And then is there some kind of wedge that's being driven? Like you mentioned the us versus them type of mindset. Is somebody trying to play on that in some way? And I talk about, just reflecting on that, what I call the four horsemen of online vulnerability in the book. And they are the confirmation crusader, which is confirmation bias, the emotional tempest, which is that inflamed emotion,
Starting point is 00:14:40 fear, urgency, anger, disgust, you know, all those kind of things. The digital naive, which is just digital illiteracy, so confusion or not knowing what's possible. And the sower of discord, so polarization, us versus them, pulling people apart. When you can see a combination of one or more of those things, it's very likely that somebody is trying to sell you a story. So that's this election. And I know we're all worried about this election,
Starting point is 00:15:12 but you know, this is just the beginning. You know, like you said in your book, we're here with AI today. That's very similar to where the Wright brothers accomplished the first flight, right? Which I love, you mentioned that in your book. It's all been pretty amazing, but it's just starting, right? There's going to be elections here in the States in 2026
Starting point is 00:15:33 and the next presidential elections in 2028. I'm going to ask you to look into your crystal ball here, Perry, and say, where do you think we're going to be, saying, in 2028 for the next election? So I think, again, we're going to continue to be at that spot where we can't tell what's real and what's not. We can't reliably tell. There's a lot of work in regulation and also things like being able to trace where things have been created and distributed and all that. I think that all that's good work.
Starting point is 00:16:07 All that, though, can be worked around by anybody that knows that it exists. So you can game all of that to your advantage if you're the bad actor. Or you can download open source tools that don't have that Providence markers and watermarks and everything, or metadata or anything else like that in it.
Starting point is 00:16:25 So assume that even if there's good detection mechanisms that they can be, they'll only be right about half the time, if that. So take all of that. We still won't know what's real and what's not. It's still going to come down to narrative. It's still going to come down to either money or minds as the goals of the attackers.
Starting point is 00:16:47 And if there's going to come down to either money or minds as the goals of the attackers. And if there's going to be one phrase that I think is going to come up over and over and over again in the next few years, it's going to be the liar's dividend. Wow. I love that phrase. The fact that because we can't tell what's real or not anymore, and everybody's going to be more and more aware of that. The only people that stand to benefit are the people who are deceiving. Because as soon now as you capture political candidate X on
Starting point is 00:17:14 tape doing something or saying something that's polarizing, all they have to go and say is, well, that's a deepfake. I never said that. And enough doubt will come over their supporters, people that have already bought in, and journalists with integrity will have to stop and do some investigation. And so, again, reality is up for grabs. The people that stand to benefit the most
Starting point is 00:17:38 are the ones that are already doing the bad stuff. are already doing the bad stuff. Admittedly, I'm pretty naive about this, but I've had this crazy idea for a long time now that we already have the technology to protect us from deepfakes. If the originators of content, be it audio, video, or even just plain text, cryptographically signed everything,
Starting point is 00:18:05 we would know immediately if that video of President Howard saying something stupid was from him or from one of his opponents. That technology has been around since the late 1970s, and it essentially allows us to drive Internet commerce. Surely we could find a way to use it to help mitigate deepfake content generated by AI systems. So there's several different ways to cryptographically sign or watermark, you know, add some kind of special fingerprint to something. Almost all of those can be bypassed
Starting point is 00:18:36 or obliterated as soon as you know that it exists. How do you bypass a signed file? How do you just... Well, you do a screen grab of it instead, and you distribute that. Because the thing that matters to people is not necessarily whether the file's legit. It's the story behind it. I totally get that. All right. My point is that if we somehow magically train our population to look for the digital signature, somehow magically trained our population to look for the digital signature.
Starting point is 00:19:10 And if they don't find it, then they need to be more suspect of where that content come from. I agree that's a long ways from where we are today. Yeah. I don't know that we get there, though. And I'm not trying to be disagreeable on this. I'm just trying to think through the way that most people work, because when you live in a society that is essentially meme-driven, right? Everybody's taking screen grabs of stuff. Everybody's finding different ways to rip videos and share them their own way. You can easily just obliterate any watermark.
Starting point is 00:19:40 You can obliterate any cryptographic signage because people just don't care about it enough. Only the journalist with integrity that's going to try to go to the source to say, where did this really originate, cares. And everybody else is just out blabbing about the story because it makes them feel whatever. Either they feel righteous agreement with something or righteous outrage at something. And that's going to fuel, I think, everybody way more than trying to get down to the bottom of the legitimacy of whatever file is there. Because for them, seeing is, I'm not even not even gonna say believing seeing is whatever they want it to be at the time well assuming that my my simplistic solution is the solution we should pursue what you're just saying is people don't want that they want to they want to throw the
Starting point is 00:20:38 you know the crazy video into the ether and let people react to it yeah do we get there by some yeah go ahead. I was going to say, I don't think it's that they don't want it. I think they want it when they agree with the outcome of it. Yeah, that's exactly right. So when they can use that to prove their point or disprove somebody else's point, then they're going to lob that out on the table. Yeah.
Starting point is 00:21:02 When it matches their filter bubble, right? When it matches their bespoke reality. Sure. Let's assume that my solution is the solution, which it clearly is not. But how do you get us there? Is that some sort of government compliance? Can the government from around the world say, use social media platforms? You have to verify, sign files or whatever?
Starting point is 00:21:23 Is that even remotely possible to have that kind of thing done? So there's a couple consortiums that are trying to do that right now that the major players have signed on to. So OpenAI and I believe Anthropic and Facebook and others. I forget the acronym for it,
Starting point is 00:21:43 but it's all around provenance Facebook and others. I forget the acronym for it, but it's all around, you know, provenance monitoring and tracking. As an aside, I found two organizations working on this research, the Coalition of Content Provenance and Authenticity with members like OpenAI, Google, and Meta. And Project Origin with members like Microsoft, The New York Times, and the BBC. And so I think that that's, you know, the work is being done not only by the people that are running these large models, but by government agencies as well, trying to figure out how to regulate it.
Starting point is 00:22:26 The biggest problem is, of course, the open source community can make all that null. And that you can just download your own model of whatever to your own machine. If there's some of that in there, you could strip it out of the code. Or you can just find uncensored, unmoderated models of all those things and still create whatever you want. So it kind of solves the problem for the people that want to play the game because they're trying to build a product and work within a capitalist society on that. They're not just taking the anarchist route and trying to do it that way. So I think that what it will help solve is, to take it back into the regular
Starting point is 00:23:15 cybercrime world, is it solves crimes of convenience. The opportunistic people that don't necessarily know that those things are there and they're trying to create their bit of disinformation or whatever and spin that out. And then you can easily disprove that. For somebody that's motivated or funded, then I think it still doesn't really help. So that's why I hear you saying by the next election, the 2028 presidential election, we shouldn't be expecting help from volunteer work by the platforms, nor should we expect help with compliance regulation to make them do anything. That's not going to be there, right? So that brings it back to the normal citizen consuming content. And I asked you this before.
Starting point is 00:24:04 Is there anything that Americans can do to protect themselves, not only in this election, but going forward? And I was looking for something along the lines, you know, five easy ways to spot deep fakes. But you say that in your book, the idea of that working,
Starting point is 00:24:18 and I love this quote, Perry. It's such a great thing. Doing something like the top five list is as effective as trying to staple water to a ghost. That is fantastic. I had a couple people mention that to me. Yeah. And I was looking for the right metaphor.
Starting point is 00:24:34 I think you found it. And, you know, it just continues to escape you because you can't say like, well, if it's a deep fake because you can look at the fingers or you can look at somebody's hair or you can look at the tech. None of that is reliable even when people were mentioning that because you can just roll a few more generations until it's good enough. But the tools are improving so fast that if I were to say a couple things that would help society, help all of us, it would be to develop a healthy sense of skepticism without falling into cynicism. And I think there's an interesting line.
Starting point is 00:25:21 That's a huge ask. I know. I know because you don't want to be debilitated by the realization that there's not much about the fact that AI is advancing and the world
Starting point is 00:25:35 is going to be this mishmash of both real and fake stuff and you can't really tell the difference. But you can learn to look at everything through a little bit of a lens of skepticism. We'll be back with more of our election security dismiss special after this. During this three-part series, we've enlisted expertise from a number of very smart people who have been thinking about the problem of election propaganda for a very long time. Before we go, I wanted to hear from them about their advice on how to mitigate this problem going forward.
Starting point is 00:26:37 Since we've been talking to Perry on this episode, let's start with him. The era of the liar's dividend is upon us. Let's start with him. The era of the liar's dividend is upon us. Our minds work and the human race works rallying around points of story and narrative. We do want there to be heroes and we want there to be villains. We want there to be people who are all good and people who are all evil. And in reality, most of us just kind of sit in this gray zone in between.
Starting point is 00:27:18 And as soon as we realize that none of our politicians and nobody around us is all good or all bad, then I think we can start to have way more productive conversations. And I would look towards the communities and the politicians who are willing to have more nuanced conversations. Here's Scott Small, the intelligence director at Tidal Cyber. One other thing that I would raise a flag, and it's so hard to do this because it's tempting. I think it's a natural instinct to want to do this. But if you see some bit of information, if you have a pretty strong opinion, and if you do, that's great. But if it just completely validates your opinion, I think a lot of information, a lot of things that happen in the world these days, it's just not the case. And so I myself have kind of an immediate reaction. It's almost like too good to be true and maybe dig a little bit further into especially the source of that information,
Starting point is 00:28:02 because I feel like more often than not, you're going to find that that maybe wasn't exactly the case. It was spun in a certain way. But again, natural instinct to want to just, you know, re-promote that because it fits your worldview. So I don't know where and at what point, you know, in education, we kind of try to build that in as part of what we do. But that's kind of going pretty far upstream, but that's what I would personally, I guess, like to advocate for or at least see happen. And here is Nina Jankowicz, the co-founder and CEO of the American Sunlight Project. She recommended that if you're feeling rage about something, maybe not hit the like button or share it with your Facebook group until you can calm down a bit and can understand
Starting point is 00:28:45 the issues a little more. Yes, as the Gen Z kids say, they say, go touch grass, you know, go outside, go for a walk, try to, you know, if you're still thinking about it in five minutes, by all means, engage, see who else is sharing stuff like that, see what, you know, what motivates them to share content if it's all similarly salacious. But try to take a pause before you do, as I say, pound that share button. Remembering the humanity of the person behind the screen or at the Thanksgiving dinner table, as it were, you know, we all are part of this country. We love this country. That's hopefully why the emotions are running so high right now. And I think rather than fact check crazy Aunt Sally or crazy Uncle Bob over your Thanksgiving stuffing or what have you, the best thing to do is ask questions, right? When we look
Starting point is 00:29:36 at the literature, the psychological literature behind bringing people back from extremism or, you know, fact checking disinformation, it's much better to say, it's really interesting, Uncle Bob, why do you believe that? How'd you get that information? And saying, to use the QAnon conspiracy theory as an example, a lot of people who are into QAnon were actually legitimately concerned about child trafficking. They made the leap and then eventually ended up in this, you know, crazy blood drinking pizza gate situation. But, you know, they were legitimately concerned about it. And so understanding what motivates them to seek that information out and then having, again, a really sympathetic conversation about, okay, like, it's really
[00:30:21] admirable that you care about that. Have you looked at other sources? Like, not necessarily saying you're wrong and certainly not leaving a comment on Facebook or Twitter or anything else that's like, let me fact check that for you. That's not going to get you anywhere. Nobody wants to be proven wrong in public. So have that conversation in private, one-to-one, or via a DM or on the phone.
[00:30:42] And again, remember that we're all hopefully working toward something that is a better, more democratic future for America. We're all Americans. Yes, exactly. And if I can be so bold as to offer my own advice, if you're the average American, you should really consider voting. I'm probably speaking to the choir here. If you're listening to this third part of a three-part series on election propaganda, I think the chances are high that you're going to
[00:31:09] vote. But let me make an impassioned appeal to the folks who are not. And I know, I know, nobody wants to hear another podcaster, especially a cybersecurity podcaster, talking about the importance of voting. What the hell do I know about anything? And at this point in your life, you're either an "I vote in every election because that's what I do" kind of person, or an "I'm so disillusioned with the political system that I can't be bothered" kind of person. There isn't a lot of wiggle room between the two. But for me, voting is about being an appreciative citizen and not taking for granted the privileges won by the spilled blood of our ancestors. It's about giving back to the community in some small measure
[00:31:51] in order to preserve these rights that men and women thought were so important in our country's history that they were willing to lay down their lives for them. I vote because the idea of one person, one vote is perhaps the cornerstone of our participative democratic republic, a thing we can all point to in our aspiration to the American exceptionalism ideal, and I don't want to take it for granted. I vote because it took the country over 200 years to establish the one person, one vote idea
[00:32:19] through one awful war, the Civil War, five constitutional amendments, numerous national laws, and continuous attacks to limit the franchise. I vote because, of all the contentious issues that lie before us as a nation, the act of voting is the one thing that we do together to address those issues. Voting is precious to me, and I never want to lose the privilege. I vote because I refuse to abdicate my only direct way to influence the process. I vote because, for me, it's about the example I set for my own children as a man standing up for my country. On 5 November 2024, the American people will vote for the next
[00:32:59] President of the United States. The choice is between Vice President Kamala Harris and former President Donald Trump. Just a month before the election, Nate Silver, the renowned politics and sports forecaster, famous for his accurate predictions in the 2008 presidential election, forecast that the 2024 election is essentially a statistical tie and will not likely change at all before the American people go to the polls. What that means is that this is now a game of inches. The two sides are entrenched and equally matched. Nobody is going to change their minds between now and 5 November. The winning candidate then will be the politician who convinces more people from their side to actually vote than the candidate that loses. In a country of over 239 million
[00:33:47] eligible voters, the winning candidate will likely only have 5 million more popular votes than the loser, just 2%. Let me say that again. The election is going to be decided by just 2% of the population. Whatever side of the political spectrum you lean towards, don't you want the bulk of that 2% to be from your side? But politics aside, I'm appealing to those "I'm so disillusioned with the political system that I can't be bothered" people. I hear you. With all the political shenanigans that we have witnessed in our lifetime on both sides of the political aisle, nobody can blame you for wanting to wash your hands of the entire process. But I'm reminded of a quote from Keith Ellison, the current Minnesota Attorney General. He says that not voting is not a protest, it's a surrender. Let me just say this. Of course,
[00:34:37] our political system is messy and unsatisfying. It's run by people, and people are messy and unsatisfying. But these same people, these Americans, are also my people. I love them all, despite the fact that half of them don't agree with me about how to do things. Maybe especially because of that. In the 2020 presidential election, voter turnout reached an all-time high, almost 66%. For me, personally, though, when this election is over and half the country is angry about the outcome, I don't want to be home thinking to myself, what else could I have done? As Thomas Jefferson said, we do not have government by the majority. We have government by the majority who participate. The one easy way to participate, to oblige our civic duty, is to vote,
[00:35:24] to push down our disillusionment and disappointment with the system, and cast a ballot with one of our people, regardless of how messy and unsatisfied I am with them. Call it an act of faith that the system can get better. Call it an act of hope that the system can and will make the lives of individual Americans better, or at least less hard. I choose to believe that, and I hope that you do, too. Before we close this out, I wanted to bring in N2K CyberWire's executive editor, Brandon Karpf, who, by the way, also was one of the editors of our first principles book, to talk a little bit about why we chose to do this series
[00:36:31] when it really, strictly speaking, isn't about cybersecurity at all. Here's my short conversation with Brandon. So Brandon, we started talking about doing this series, I don't know, in early summer, right? And I'm wondering if you want to talk about why we thought we needed to do that. Yeah, well, obviously moving into the election season in the United States, but also broadly speaking, this year was called the year of elections globally. It was something, I don't have the exact numbers, but something like 60% of the world's population
[00:37:07] were electing new leaders this year, which to me meant the information environment was going to be lousy with misinformation and disinformation, bad information, unusable information. And when I think about our role, really our responsibility here at CyberWire, what we do, what we try to do is make it easy to stay at the cutting edge and stay in the know. But more importantly, make it easy to make good decisions. That's at the end of the day why we do what we do here. And so when you and I were talking about interesting content series that we could work on this year, this kept coming back up in my mind as something that would be very important for us to do as trusted information stewards in our community.
[00:37:55] Well, we talked about, too, in the first episode, we were mentioning that our official motto is, you know, finding the signal out of the noise, right? And I realized this topic is, well, you're smiling at me. Is that not correct? No, that is correct. I mean, it's funny because I hear so many organizations use that phrase and it's almost lost its meaning. Yeah.
[00:38:25] Which is unfortunate because it's deep, right? It's more signal, less noise. It's finding the signal in the noise, which is what we try to do every day. And as executive editor, that's my goal: everything we put out there in the information sphere is hopefully more signal than noise. But yeah, that is exactly our motto.
[00:38:46] Well, and also this topic, election propaganda, is not really cybersecurity, and we're all about cybersecurity. But we said in the first episode, it's sort of cyber-adjacent. We're talking about the integrity of our election process. And then couple that with the, you know, find the signal from the noise. It seemed like an obvious thing to do. Yeah, I mean, in my mind, the information domain is part of the cyber domain. Yeah.
[00:39:14] And so, yes, it's adjacent to cybersecurity from a technical perspective. But in terms of an operational perspective, anyone working in cybersecurity or working on all these digital systems needs to take into account the human element and the information element of the cybersecurity domain in order to do their job well. So to me, providing the industry a toolkit, which you did in episode one, and then started applying that toolkit in episode two, and now this episode three,
[00:39:47] is important just broadly speaking for anyone who's going to work in modern society. Well, and we also said, I think what pulled us over the edge, right, is that we didn't want to get to the end of the election season and all of us wander on the backside of the presidential election and say to ourselves, man, we didn't do enough to highlight, you know, important issues like this one.
[00:40:11] So I think that's what pulled me over the edge. What about you? Yeah, it was that. It was, you know, early this year when the Stanford Internet Observatory, yeah, that whole story came out that they were shutting down. And that organization was shutting down partly because it was targeted by misinformation and disinformation. Oh, totally. We talked about it in episode two pretty hard, right? Yeah. They were totally shut down by that whole mechanism that we described in episode one. Yeah, horrible and sad.
[00:40:44] Right, it is sad, right? Because this world is getting more complex, right? The amount of information in the world doubles every 18 months, which means that no one can keep up with that. And so if we're losing our sources of trusted information, our trusted information providers, if we can no longer agree on what is true, what is fact,
[00:41:08] that's really scary. And so, yeah, to me, being in this environment, being a trusted source of information, I thought it was our responsibility to put together a little toolkit and a little information on just how to operate, how to survive, how to... What's the survival kit for a layperson in this information environment? And I think that you did a particularly
[00:41:33] good job of keeping it focused on that mission of more signal, less noise. We're all just trying to do good work here. We're all trying to survive and do right by our friends and our families and our coworkers. And so how do we live in this digital world and not get confused and manipulated and turned around and used by those with malicious intent? Well, I think that's a good way to end it.
[00:42:03] Thanks, Brandon. We appreciate you coming on and telling us how to do this. Yeah, of course. And thank you, Rick, for everything you've done, you know, pulling this together and for CyberWire. I think that this series especially is something that can live on far beyond just this election season, something we can keep revisiting: how do we at CyberWire provide you, the listener, with the best information possible so that you can make good, trusted decisions that you're confident in, that you're making a good decision for yourself, for your team,
[00:42:38] for your organization, for your family. And at the end of the day, that's what we're trying to accomplish. Excellent. Thank you, sir. Thank you. And that's a wrap, not only for this episode, but for the entire limited three-part series on election propaganda. It was brought to you by N2K CyberWire, where you can find us at thecyberwire.com. On the show notes page, I've added some reference links to help you do more of a deep dive
[00:43:12] if that strikes your fancy. And like I said last episode, believe me, the well is deep here. We've just barely scratched the surface. I've also included the timeline of the five constitutional amendments and numerous U.S. national laws concerning the act of voting. If you're a history buff, give that a look.
[00:43:30] Also, don't forget to check out our book, Cybersecurity First Principles: A Reboot of Strategy and Tactics, which we published in 2023. And by the way, we'd love to know what you think of our show and especially this series. Please share a rating and review in your podcast app. But if that's too hard, you can fill out the survey in the show notes or send an email to csop at n2k.com. We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams while making
[00:44:17] your team smarter. Learn how at n2k.com. I want to give a special shout out to two people especially that helped me put this thing together. The extremely talented Elliott Peltzman, who's in charge of all music and sound quality. Really nice job, Elliott. And the fabulous Liz Stokes, who produced the entire series. Way to go, Liz. But as you know, we have a wonderful team here at N2K of really talented people doing insanely great things to make me and this show sound good. I think it's only appropriate that you know who they are. I'm Liz Stokes, N2K CyberWire's Associate Producer. I'm Tré Hester, Audio Editor and Sound Engineer. I'm Elliott Peltzman, Executive Director of Sound and Vision.
[00:45:03] I'm Jennifer Eiben, Executive Producer. I'm Brandon Karpf, Executive Editor. I'm Simone Petrella, the President of N2K. I'm Peter Kilpe, the CEO and Publisher at N2K. And I'm Rick Howard. Thanks for your support, everybody. Thanks for listening.
