CyberWire Daily - Ukraine evaluates Russia’s cyber ops. Smartphones go to war. Lilith ransomware. ChromeLoader evolves. Rolling-PWN looks real after all. Schulte guilty in Vault 7 case.

Episode Date: July 14, 2022

An overview of the cyber phase of Russia's hybrid war. Smartphones as sources of targeting information. Lilith enters the ransomware game. ChromeLoader makes a fresh appearance. Honda acknowledges that Rolling-PWN is real (but says it's not as serious as some think). Part two of Carole Theriault's conversation with Jen Caltrider from Mozilla's Privacy Not Included initiative. Our guest is Josh Yavor of Tessian to discuss accidental data loss over email. A guilty verdict in the Vault 7 case.

For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/11/134

Selected reading.
Ukraine's Cyber Agency Reports Q2 Cyber-Attack Surge (Infosecurity Magazine)
2022 Q2 (SSSCIP)
The weaponizing of smartphone location data on the battlefield (Help Net Security)
New Lilith ransomware emerges with extortion site, lists first victim (BleepingComputer) "A new ransomware operation has been launched under the name 'Lilith,' and it has already posted its first victim on a data leak site created to support double-extortion attacks."
New Ransomware Groups on the Rise (Cyble) "Cyble analyzes new ransomware families spotted in the wild led by notable examples such as LILITH, RedAlert, and 0Mega."
Researchers Uncover New Variants of the ChromeLoader Browser Hijacking Malware (The Hacker News)
ChromeLoader: New Stubborn Malware Campaign (Unit 42)
Honda Admits Hackers Could Unlock Car Doors, Start Engines (SecurityWeek)
Honda redesigning latest vehicles to address key fob vulnerabilities (The Record by Recorded Future)
Statement Of U.S. Attorney Damian Williams On The Espionage Conviction Of Ex-CIA Programmer Joshua Adam Schulte (US Department of Justice)
Ex-C.I.A. Engineer Convicted in Biggest Theft Ever of Agency Secrets (New York Times)
Former CIA Staffer Convicted For Massive Data Breach To WikiLeaks (Forbes)

Learn more about your ad choices. Visit megaphone.fm/adchoices

Transcript
Starting point is 00:00:00 You're listening to the Cyber Wire Network, powered by N2K. Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions. This coffee is so good. How do they make it so rich and tasty? Those paintings we saw today weren't prints. They were the actual paintings. I have never seen tomatoes like this. How are they so red? With flight deals starting at just $589, it's time for you to see what Europe has to offer.
Starting point is 00:00:31 Don't worry. You can handle it. Visit airtransat.com for details. Conditions apply. AirTransat. Travel moves us. Hey, everybody. Dave here.
Starting point is 00:00:44 Have you ever wondered where your personal information is lurking online? Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me. I have to say, Delete.me is a game changer. Within days of signing up, they started removing my personal information from hundreds of data brokers. I finally have peace of mind knowing my data privacy is protected. Delete.me's team does all the work for you with detailed reports so you know exactly what's been done. Take control of your data and keep your private life private by signing up for Delete.me.
Starting point is 00:01:22 Now at a special discount for our listeners, today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code n2k at checkout. That's joindeleteme.com slash n2k, code n2k, at checkout. An overview of the cyber phase of Russia's hybrid war. Smartphones as sources of targeting information. Lilith enters the ransomware game. ChromeLoader makes a fresh appearance.
Starting point is 00:02:11 Honda acknowledges that Rolling-PWN is real. Part two of Carole Theriault's conversation with Jen Caltrider from Mozilla's Privacy Not Included initiative. Our guest is Josh Yavor of Tessian to discuss accidental data loss over email. And a guilty verdict in the Vault 7 case. From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Thursday, July 14th, 2022. The State Service for Special Communications and Information Protection of Ukraine has issued a report on the current state of the cyber phases of Russia's war
Starting point is 00:03:08 as they unfolded during the second quarter of the year. It sees Russia as concentrating on, first, espionage, second, network disruption, third, data wiping, and fourth, disinformation. Of these four, the network disruption and disinformation have come to represent a relatively greater fraction of Russian cyber operations. The report says, comparing to the first quarter of 2022, the number of critical IS events originating from Russian IP addresses decreased by 8.5 times. This is primarily due to the fact that providers of electronic communications networks and/or services that provide access to the Internet blocked IP addresses used by the Russian Federation.
Starting point is 00:03:55 The SSSCIP, like many observers, considers the nominal hacktivists who've been recently active as simple front groups for the Russian intelligence services. Traditionally, targeting cells distinguish between targets, which is something sufficiently identified and located to be hit with an attack, and target indicators, which is information that provides a lead that may, with further investigation, be developed into a target. Sometimes the process is quick, as it might be with data from weapons-locating radar, and other times protracted, as it might be with spot reports from the field.
Starting point is 00:04:40 So, if Private So-and-So says he thinks he heard something on the other side of the ridge, that's a target indicator. Someone needs to go over and look. If the radar says that enemy shell came from grid such-and-such, that's a target. But now, the commodification and ubiquity of smartphones on the battlefield and elsewhere have not only given armies a new operational security challenge, but they've also provided targeting cells with a wealth of battlefield information that can provide targets with a hitherto unprecedented immediacy. An analysis by Mike Fong, CEO of Privoro, published in Help Net Security,
Starting point is 00:05:15 explains the risks and opportunities the smartphone presents on the battlefield. Mr. Fong says, Of all the signals given off by smartphones in the normal course of operation, location data is perhaps the most valuable during battle. Unlike captured conversations or call metadata, location data is actionable immediately. Location data can reveal troop movements, supply routes, and even daily routines. A cluster of troops in a given location may signal an important location. Aggregated location data can also reveal associations between different groups.
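To make concrete why aggregated location pings are so revealing, here is a rough sketch, not anything drawn from Fong's analysis, that simply bins smartphone location reports into coarse grid cells and flags concentrations. The coordinates, cell size, and threshold are invented for illustration.

```python
# Hypothetical sketch: bin smartphone location pings into coarse grid cells
# and flag concentrations, illustrating why aggregated location data is so
# revealing. Coordinates and thresholds are made up for this example.
from collections import Counter

def grid_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to a coarse grid cell (roughly 1 km of latitude)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def dense_cells(pings: list[tuple[float, float]], min_count: int = 25) -> list[tuple]:
    """Return grid cells that contain at least min_count pings."""
    counts = Counter(grid_cell(lat, lon) for lat, lon in pings)
    return [cell for cell, n in counts.items() if n >= min_count]

# Example: a burst of pings in one small area stands out immediately.
pings = [(48.5301 + i * 1e-4, 35.8702) for i in range(30)] + [(48.9, 36.2)]
print(dense_cells(pings))   # one dense cell; the single stray ping is ignored
```

Even this crude binning surfaces the kind of clustering an analyst would look into further; a real targeting workflow would presumably add time windows and correlation across devices.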
Starting point is 00:05:52 The obvious risk to soldiers is that their location data can be used by the enemy to direct targeted attacks against them. Notably, it has been reported that a Russian general and his staff were killed in an airstrike in the early weeks of the invasion after his phone call was intercepted and geolocated by the Ukrainians. So, monitoring phones can yield not only intelligence, but targets. Fong's conclusion is that how an army handles smartphones and the data they throw off can be a difference maker on the battlefield.
Starting point is 00:06:24 He says, smartphones are so ubiquitous that their presence on the battlefield is inevitable, even when they've been prohibited or otherwise discouraged from use due to lethal consequences. But each location ping gives the enemy another signal that may ultimately culminate in a targeted missile strike or an improved defensive posture. The side that can best fight this information battle very likely has the upper hand in winning the war. Researchers at Cyble describe a new ransomware operation, Lilith, and Bleeping Computer reports that the group not only operates a new strain of malware, but that it's already posted the first victim to its double extortion dump site.
Starting point is 00:07:09 Cyble notes, Throughout 2021 and 2022, we have observed record levels of ransomware activity. While notable examples of this are rebrands of existing groups, newer groups like Lilith, RedAlert, and 0Mega are also proving to be potent threats. As far as the origins of the name Lilith are concerned, she's either a demon from Mesopotamian mythology or the spouse of Dr. Crane on Cheers. Which source you like depends upon where you take your cultural references. Palo Alto Networks' Unit 42 describes new variants of the
Starting point is 00:07:47 ChromeLoader malware now making their appearance in the wild. The researchers write, this malware is used for hijacking victims' browser searches and presenting advertisements, two actions that do not cause serious damage or leak highly sensitive data. However, based on the wide distribution the attackers gained in such a short time, they were able to inflict heavier damage than the damage inflicted by the two primary functions of the Chrome extension. The extension serves as adware and as an information stealer, pulling in the victim's browser searches. The gang using ChromeLoader seems to have clear ideas about what it's up to, Unit 42 writes.
Starting point is 00:08:28 Additionally, the authors were quite organized, labeling their different malware versions and using similar techniques throughout their attack routines. But ease of criminal use has its downsides too, at least from the criminal's point of view, Unit 42 writes, This probably made their lives easier while developing their attack framework and maintaining their attack chains. But unintentionally, this also made the investigation process significantly easier. In fact, it improved the research ability so much that we were able to detect two new versions of this malware, the first one and the latest, which have never been linked to this malware family before.
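Since ChromeLoader ultimately lives in the browser as an extension, one way to picture where something like it would show up, offered purely as an illustrative sketch and not as a detection method taken from Unit 42's research, is to enumerate the extensions installed in a Chrome profile and the permissions each one requests. The profile path below is an assumption (the default location on Linux), and localized extensions may report placeholder names.

```python
# Rough sketch: list installed Chrome extensions and the permissions they
# request, so an unexpected search-hijacking add-on stands out. The profile
# path is an assumption (default Linux location); adjust for your system.
import json
from pathlib import Path

EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

def list_extensions(ext_dir: Path = EXT_DIR) -> None:
    # Layout is <extension id>/<version>/manifest.json under the profile.
    for manifest in ext_dir.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        name = data.get("name", "unknown")        # may be a __MSG_...__ placeholder
        perms = data.get("permissions", [])
        ext_id = manifest.parent.parent.name
        print(f"{ext_id}: {name} -> {perms}")

if __name__ == "__main__":
    list_extensions()
```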
Starting point is 00:09:10 SecurityWeek reports that Honda has acknowledged that the Rolling-PWN proof-of-concept does indeed work against the carmaker's remote keyless system. It is in fact possible for someone to unlock the car and even start it. But Honda says they couldn't just drive the car away, since that requires the key fob to be present. The Record quotes Honda's statement, However, while it is technically possible, we want to reassure our customers that this particular kind of attack,
Starting point is 00:09:40 which requires continuous close-proximity signal capture of multiple sequential RF transmissions, cannot be used to drive the vehicle away. The 2022 and 2023 models are said to be protected against Rolling-PWN, and Honda is making other security upgrades to mitigate the vulnerability. And finally, the second trial of Joshua Schulte has ended in a guilty verdict. Mr. Schulte, a former CIA employee, was arrested after WikiLeaks' 2017 disclosure of the Vault 7 classified documents that outlined Langley's methods of penetrating networks operated by its intelligence targets.
Starting point is 00:10:21 The New York Times reports that his first trial had resulted in convictions for contempt of court and lying to federal investigators, but the jury had been unable to reach a verdict on the more serious charges of which he's now been convicted. The U.S. attorney for the Southern District of New York, who prosecuted Mr. Schulte, offered a brief statement on the outcome of the trial and some thoughts on Mr. Schulte's motivation, which in the U.S. attorney's view comes down to resentment, which would be the E in the familiar counterintelligence acronym MICE, summarizing the motives that bring people to espionage as money, ideology, compromise, and ego.
Starting point is 00:11:02 The statement is short enough to quote in full, Joshua Adam Schulte was a CIA programmer with access to some of the country's most valuable intelligence-gathering cyber tools used to battle terrorist organizations and other malign influences around the globe. When Schulte began to harbor resentment toward the CIA, he covertly collected those tools and provided them to WikiLeaks, making some of our most critical intelligence tools known to the public, and therefore our adversaries.
Starting point is 00:11:33 Moreover, Schulte was aware that the collateral damage of his retribution could pose an extraordinary threat to this nation if made public, rendering them essentially useless, having a devastating effect on our intelligence community by providing critical intelligence to those who wish to do us harm. Today, Schulte has been convicted for one of the most brazen and damaging acts of espionage in American history. Do you know the status of your compliance controls right now? Like, right now. We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks.
Starting point is 00:12:22 But get this. More than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta. Here's the gist. Vanta brings automation to evidence collection across 30 frameworks like SOC 2 and ISO 27001. They also centralize key workflows like policies,
Starting point is 00:12:44 access reviews, and reporting, and helps you get security questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off. And now, a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home. Black Cloak's award-winning digital executive protection platform
Starting point is 00:13:29 secures their personal devices, home networks, and connected lives. Because when executives are compromised at home, your company is at risk. In fact, over one-third of new members discover they've already been breached. Protect your executives and their families 24-7, 365 with Black Cloak. Learn more at blackcloak.io.
Starting point is 00:14:00 Security firm Tessian recently released survey data tracking accidental data loss over email. Josh Yavor is chief information security officer at Tessian. We see that around three in five organizations have experienced accidental data loss via email in the last year. I just mentioned email. The reason I mention that is because the majority of the reported data loss events, they have a human cause as the root cause. And in addition to humans being involved, a significant number of those events are not actually malicious or intentional. They're accidental. And so I think that's a key takeaway is that, you know, today in 2022, this is still a very large problem and a largely unsolved problem in terms of sufficiently addressing the reality of human behavior and being able to reduce the frequency of accidental data loss. Yeah, one of the things that caught my eye in the report was that employee negligence was a leading cause of data loss incidents and folks just generally not following policies that are in place.
Starting point is 00:15:10 Yeah, or not knowing about them. And so I think that's another layer to consider as we think about negligence, right? Negligence isn't always intentional, right? Somebody who is negligent because they didn't know there was a policy, or didn't understand the policy because it was poorly communicated, or because it did not clearly fit the use case that they, as the employee, were trying to achieve in terms of data sharing with a customer or a partner or something like that. I think that's a really great call out because as we think about what negligence actually means, we have to account for the fact that most often, I believe, and I think we're getting more and more data that backs this up, negligence is not necessarily a reflection of employees making intentionally incorrect decisions. More often
Starting point is 00:15:57 than not, they're trying to do the right thing, but they're either unaware of the requirements or the requirements don't actually clearly fit their purpose. And that's the responsibility of security teams to address. Yeah, another thing that caught my eye was that it's not necessarily equal among the various teams within an organization. That some parts of an organization, I suppose largely because of the type of work that they're expected to do, they may be more at risk. That's right. And so I like to think about them as external facing teams in particular. So if you look at like marketing, public relations, and so on, they tend to have higher rates of occurrences for accidental data loss in particular. But if you expand on that thinking and think about who else is included,
Starting point is 00:16:46 that also includes your recruiting team. It includes if you're in a product business and you have customer success or customer support roles, those roles as well. And if you think about it, these are roles that fundamentally require sharing of files, clicking of links, opening of files bidirectionally. And these are generally just overall some of the riskiest roles to need to support. And we see that manifest here in data loss as well, because I think if I step back and look at how we support these roles, our traditional tooling and approaches don't sufficiently empower people in those roles to avoid making these mistakes. Well, if we come at it from a different direction and ask the
Starting point is 00:17:31 question, you know, for the organizations that are doing it right, who are finding the most success here, are there any common threads in the things they're doing? Yes. And this admittedly goes a little bit beyond the foundational data that we have in this report, and it's an extrapolation that I would bring to the table. The theme that I would highlight is that organizations who are conscious of what we just talked about, the reality that employee negligence is usually not intentional, and it's reflective of a need to better empower and educate users in a way that makes sense to them, organizations that have that realization and make that investment tend to do better. The other thing that we see is that organizations that specifically select tooling
Starting point is 00:18:16 and processes that engage the employees rather than subject them to blocking events and things that really disrupt the business and really just approaches that engage employees and let them make empowered and informed choices securely, they tend to do better as well. So basically meeting them in that moment where they're sending an email, sharing a file or so on,
Starting point is 00:18:40 and providing them with that coaching and clear education, that leads to much better outcomes generally. So there really is, in addition to all the technology that we throw at this problem, seems to me there's a corporate culture element here as well. Absolutely. If you're in a corporate culture that's punitive, right, and you make a mistake and you're subjected to a nasty email to you and your manager, or you're forced to take an hour or more of remedial security awareness training. What that means is that you're less likely to actually report things that need to be reported or ask for help when you need it because you're afraid to get in
Starting point is 00:19:16 trouble. And in the worst case scenarios, people who are in unhealthy corporate cultures around security and data loss will often then seek to, or at least be tempted to seek alternate data sharing flows where it's outside of the view of the security team so that they run lower risk of making a mistake that will end up costing them time and energy. But at the same time, that's introducing shadow IT and decreasing the overall security of the organization. And so there's definitely a balancing act between positive engagements and good security culture, and again, having punitive and consequential
Starting point is 00:20:00 culture components in place. What do you hope people take away from the report? What are some of the lessons here that you hope people learn? Yeah, I think a key lesson here is that it's not enough to focus on intentional insider threat as your core area of focus for human-led data loss risk, and that we need to recognize that accidental and unintentional data loss events are really predominant in what we're seeing reported across the majority of organizations. And that the key takeaway should be that if our assumption is that people are generally trying to do the right thing and they just need help, our focus in terms of supporting them from tooling to internal security team practices and so on needs to be focused on that emphasis.
Starting point is 00:20:49 How do we empower people to make the best choices in the moment and have our entire tooling and process stack reflect that? That's Josh Yavor from Tessian. On yesterday's CyberWire Daily Briefing, we shared the first half of Carole Theriault's conversation with Jen Caltrider from Mozilla's Privacy Not Included initiative. They wrap up their discussion today. Here's Carole. So today we have part two of my interview with Jen Caltrider. She is the lead at Privacy Not Included. That's a Mozilla Foundation project. And it helps people assess the privacy levels of different devices, apps, and everything technology, basically. So welcome back
Starting point is 00:22:34 to the show, Jen. Now, maybe you can tell us about your latest research. What were you looking into and what did you find? We just got done reviewing mental health apps, and they were a particularly creepy space for us. They were probably the creepiest things I've ever reviewed, these mental health apps. In part because they collect such a huge amount of personal information on you. What's your mood? How often are you seeing a therapist? What your OCD triggers are, your eating disorder triggers, or what symptoms you're having, what meds you're taking, and all this really personal data that they collect. Also, with the mental health crisis that's exploded over the past year, there's hundreds
Starting point is 00:23:13 of millions of dollars kind of flowing into this space. And so the companies are growing really quickly. And they're caring about making money right now more than they are about protecting privacy, even though they say they care about privacy. And so in our experience, just reviewing these mental health apps over these past couple of months, we had some things where we read the privacy policies
Starting point is 00:23:35 and of the 31 companies that we emailed asking our questions at the email address listed in their privacy policy, after a month and a half, only three companies had actually responded to us. Wow. Are you kidding me? Yeah. They just didn't respond to our questions. And then one of the companies we then followed up with, they weren't really happy because we called them out for some questionable privacy
Starting point is 00:23:59 practices and they weren't really happy. And they're like, well, why didn't you reach out to us before you launched? And we're like, we did three times at the email address in your privacy policy. And their response was, oh, well, the person that monitors that left in March and we haven't replaced them. And I'm like, well, that's not showing that you care about privacy. Another company was unhappy with us and they wrote up a post defending themselves. And in the post, they said, oh, Mozilla got everything wrong because they tried to infer our business practices from our privacy policy. And I had to laugh because I'm like, well, how else is a person supposed to infer your business practices around privacy other
Starting point is 00:24:38 than your stated business practice around privacy? It feels to me that it's almost a game for these companies to write these privacy policies that too often are vaguely worded and have wiggle room. There's like five privacy policies. There's a privacy policy and a privacy notice and an addendum for the EU and an addendum for California. And they don't make it easy for consumers to actually stop and understand. And one of the things that really got me when I was reading all this was going back to that, we'll never share or sell your data without consent. Well, what does consent look like, right? Like downloading and registering to use the app, I think too often might count as consent, which most people are like, no,
Starting point is 00:25:21 that's not consent. I just want to use your app. You better ask me first before you give my personal data to Facebook. And no, based on my reads of privacy policies, a lot of time consent is as simple as you downloaded the app, you registered, you've given us your consent to do this. And then you read, well, how do I withdraw consent? And a lot of times it's like, in order to withdraw consent, you must delete. Yeah. You're like, that's not great. Even if consumers do go in and read privacy policies, which please do. I'm a nerd and I know they suck and it's hard to read them, but it's also a good exercise. But there's just so many questions that you kind of walk away with where things aren't clear that, you know, maybe even if you read it, it sounds good at the top, but then you actually get down into the depth of it
Starting point is 00:26:10 and you're like, hold on. It said you wouldn't, I saw privacy policy said that we'll never share your data with for advertising purposes. And then down below it said, here's the advertisers we share your data with. And I'm like, wait a minute, you know? So it's, it's tricky. It's really tricky. Do you think the value that we're putting on so-called big data is what's causing this problem? Oh yeah, absolutely. I mean, you know, back in 2017 that we started, you know, okay. So there was your fitness tracker tracked your steps and maybe your heart rate and your smart speaker would listen for, you know, a couple of commands. But now, you know, we have apps that are collecting our conversations with therapists. We have apps, you know, we have fitness trackers that can tell your
Starting point is 00:26:56 emotional state and whether you're drunk or not, you know, so the amount of data that can be collected now, like, well, this is the data economy. If we're going to make money, this is how it's done. But there's just so much personal information that's out there now, so much more that people are going to, I think, rather quickly go from, well, nothing bad's happening, I'm not feeling any repercussions from all this data sharing, to, holy crap, how do they know this about me? And now what do I do? How do I take that back? And so I feel like in any kind of justice movement, there's a tipping point. And we're not quite at that tipping point with privacy, but I feel like it's getting awfully close when your cars can know everywhere you've been and your mental health apps can know your
Starting point is 00:27:42 emotional state and Facebook is developing algorithms to know as much personal information about you as it can so that people all over the world could target you with ideologies. It's getting really scary. And now's the time to kind of make a difference, I feel like. Sometimes we have to end on a more sobering note, but an important one. We've just been talking with Jen Caltrider. She is the lead at Privacy Not Included, a Mozilla Foundation project. That's Carole Theriault speaking with Jen Caltrider from Mozilla's Privacy Not Included initiative. And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Starting point is 00:28:39 The Cyber Wire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Liz Irvin, Elliott Peltzman, Tré Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabie, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Our in-studio executive producer today was Dexter the Dog. Thanks for listening. Thank you. not only ambitious, but also practical and adaptable. That's where Domo's AI and data products platform comes in. With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Starting point is 00:29:53 Secure AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role. Data is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.
