CyberWire Daily - When location data becomes a weapon.
Episode Date: November 20, 2024A WIRED investigation uncovers the ease of tracking U.S. military personnel. Apple releases emergency security updates to address actively exploited vulnerabilities. Latino teenagers and LGBTQ individ...uals are receiving disturbing text messages spreading false threats. Crowdstrike says Liminal Panda is responsible for telecom intrusions. Oracle patches a high-severity zero-day vulnerability. Trend Micro has disclosed a critical vulnerability in its Deep Security 20 Agent software. A rural hospital in Oklahoma suffers a ransomware attack. A leading fintech firm is investigating a security breach in its file transfer platform. Researchers deploy Mantis against malicious LLMs. Ben Yelin from the University of Maryland Center for Health and Homeland Security discusses AI’s bias in the resume screening process. Tracking down a lost Lambo. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Today, we have Ben Yelin, Program Director, Public Policy & External Affairs at the University of Maryland Center for Health and Homeland Security and our Caveat podcast co-host, discussing AI’s racial and gender bias in the resume screening process. You can read about it here. Selected Reading Anyone Can Buy Data Tracking US Soldiers and Spies to Nuclear Vaults and Brothels in Germany (WIRED) GAO recommends new agency to streamline how US government protects citizens’ data (The Record) Apple Issues Emergency Security Update for Actively Exploited Flaws (Infosecurity Magazine) Texts threatening deportation and 're-education' for gays stoke both fear and defiance (NBC News) Chinese APT Group Targets Telecom Firms Linked to BRI (Infosecurity Magazine) Oracle Patches Exploited Agile PLM Zero-Day (SecurityWeek) Trend Micro Deep Security Vulnerability Let Attackers Execute Remote Code (Cyber Security News) Oklahoma Hospital Says Ransomware Hack Hits 133,000 People (GovInfo Security) Fintech Giant Finastra Investigating Data Breach (Krebs on Security) AI About-Face: 'Mantis' Turns LLM Attackers Into Prey (Dark Reading) Hackers Steal MLB Star Kris Bryant’s $200K Lamborghini By Rerouting Delivery (Carscoops) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
DeleteMe's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for DeleteMe.
Now at a special discount for our listeners.
private by signing up for Delete Me. Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash N2K and use promo code N2K at checkout. The only way to get 20% off is to go to joindeleteme.com slash N2K and enter code
N2K at checkout. That's joindeleteme.com slash N2K, code N2K.
A wired investigation uncovers the ease of tracking U.S. military personnel.
Apple releases emergency security updates to address actively exploited vulnerabilities.
Latino teenagers and LGBTQ individuals are receiving disturbing text messages spreading false threats.
CrowdStrike says liminal panda is responsible for telecom intrusions.
Oracle patches a high severity zero day.
Trend Micro has disclosed a critical vulnerability in its deep security 20 agent software.
A rural hospital in Oklahoma suffers a ransomware attack.
A leading fintech firm is investigating a security breach in its file transfer program.
Researchers deploy Mantis against malicious LLMs. Ben Yellen from the
University of Maryland Center for Health and Homeland Security discusses AI's bias in the
screening process and tracking down a lost Lambo.
It's Wednesday, November 20th, 2024.
I'm Dave Bittner, and this is your CyberWire Intel Briefing.
Thanks for joining us here today.
It is great, as always, to have you with us.
A contractor commuting from a home near Wiesbaden, Germany,
to U.S. military installations has inadvertently highlighted a serious national security risk posed by unregulated mobile location data sales.
by unregulated mobile location data sales.
Investigative reporting by Wired, Berischer Rundfunk, and Netzpolitik.org revealed how data brokers legally sell granular location information
that can track U.S. service members and contractors at sensitive sites.
The revelation stemmed from a dataset obtained from Florida-based DataStream Group,
containing billions of location signals tied to mobile advertising IDs.
For two months in 2023, the dataset tracked devices at critical installations,
including Lucius D. Clay Cascern, the U.S. Army's European headquarters,
and Buchel Air Base, home to U.S. nuclear weapons. Detailed
movement patterns were observed, such as daily commutes, weekend activities, and even stops at
local brothels. The risks are profound. Foreign adversaries or terrorists could exploit such data
to identify personnel with sensitive access, uncover base vulnerabilities, or plan
attacks. Patterns could reveal guard schedules or entry points, while personal habits might
expose individuals to blackmail or coercion. Efforts to regulate the data broker industry
in the U.S. have faltered. The Fourth Amendment is Not for Sale Act, which would ban federal agencies from buying such data without a warrant,
remained stalled in Congress.
Meanwhile, the Federal Trade Commission plans to file lawsuits recognizing U.S. military installations as protected sites,
but broader protections remain absent.
The Department of Defense acknowledges the risks of geolocation data,
but has largely deferred responsibility to service members through operational security protocols.
Critics argue this approach is insufficient given the pervasive integration of mobile technology into daily life. Researchers emphasize that the systemic sale of mobile location data undermines
privacy and creates substantial
vulnerabilities for national security, with experts like Ron Wyden, U.S. Senator from Oregon,
calling the industry's practices outrageous. The investigation underscores the urgency of
regulating data brokers, tightening operational security, and safeguarding the privacy of military and intelligence personnel.
Without action, adversaries could exploit this data to threaten U.S. personnel and operations,
escalating the risk to national and international security.
Meanwhile, the U.S. Government Accountability Office has urged Congress to establish a federal office to ensure consistent safeguards for civil
rights and liberties in government use of personal data. A GAO report highlights uneven data
protection practices across 24 federal agencies, with many lacking policies to address civil
liberties. Emerging technologies like facial recognition and AI amplify privacy risks, including bias and misidentification.
The GAO warns that without unified oversight, agencies risk violating citizens' rights and recommends Congress develop comprehensive technology-agnostic regulations.
regulations. Apple has released emergency security updates to address two actively exploited vulnerabilities affecting devices like iPhones, iPads, and Macs. These updates fix
JavaScript core and WebKit flaws, enabling code execution and cross-site scripting.
Apple advises immediate patching to prevent malicious exploitation, which was
discovered by Google's Threat Analysis Group. Older Macs with Intel processors are specifically
called out in the update. Latino teenagers in Georgia and LGBTQ individuals nationwide
are receiving disturbing anonymous text messages spreading false threats and targeting
their identities. Messages sent to Latino students claim they are set to be deported by Immigration
and Customs Enforcement, ICE, while others tell LGBTQ individuals to report to re-education camps.
ICE has denied involvement, stating these messages do not align with its
operations. Santiago Marquez of the Latin American Association reported multiple concerned calls from
parents whose children received such texts. One screenshot detailed ICE enforcement via a brown
van. LGBTQ individuals, including a lesbian business owner, receive text messages referencing
discriminatory re-education under a fabricated presidential directive. The FBI is investigating
these incidents, which resemble earlier racist messages targeting Black Americans. Advocacy
groups emphasize the harm caused, especially to vulnerable teens and
marginalized communities. CrowdStrike has identified a new Chinese cyber espionage group,
Liminal Panda, responsible for telecom intrusions previously attributed to Light Basin. Active
since 2020, Liminal Panda targets telecom providers and countries linked to China's
Belt and Road Initiative, gathering network telemetry and subscriber data for intelligence,
not financial gain. Using advanced tools and exploiting telecom interconnectivity,
the group breached networks in Asia and Africa. While linked to Chinese state-sponsored tactics, definitive
attribution remains inconclusive. CrowdStrike recommends enhanced network access controls,
password policies, and monitoring to mitigate risks.
Oracle has released patches for a high-severity zero-day vulnerability in Agile product lifecycle management,
which has been exploited in the wild.
The flaw, with a CVSS score of 7.5,
allows unauthenticated attackers to remotely access files
under the application's privileges via HTTP.
Oracle credited CrowdStrike researchers Joel Snape and Lutz Wolf for identifying the issue.
Oracle urges customers to apply updates immediately to mitigate the risk of critical data exposure or full system access.
Trend Micro has disclosed a critical vulnerability in its Deep Security 20 agent software. The flaw, rated 8.0,
allows attackers with low privileged access
to inject remote commands and execute arbitrary code.
Trend Micro has released patches to address the issue
and urges immediate updates.
Organizations should also review access policies
to prevent exploitation.
Great Plains Regional Medical Center in Oklahoma
suffered a ransomware attack in September, compromising the personal data of just over
133,000 individuals. The attack, which impacted the hospital systems between September 5th and 8th,
led to partial restoration but left some patient data unrecoverable.
Exposed data included names, health details, and social security numbers.
Rural hospitals, like Great Plains, face heightened risks due to limited cybersecurity resources,
making them targets for attackers.
Experts urge increased federal support and public-private partnerships to
bolster defenses against such threats. Finastra, a leading fintech firm serving top global banks,
is investigating a security breach in its file transfer platform, potentially exposing sensitive
client data, Krebs on Security reports. Hackers operating under the alias Abyss
Zero claimed to have stolen over 400 gigabytes of data and listed it for sale on cybercrime forums
targeting Finastra's banking clients. Detected on November 7th, the breach involved credential
compromise but no malware deployment. Finastra assured customers its operations remain
unaffected, launching an alternative secure file-sharing platform. Investigations continue
to determine the scope of the theft, and impacted customers will be notified directly.
The cybercriminal who initially listed data for $20,000 abruptly vanished, with their online accounts deactivated.
Finastra had previously been hit with ransomware back in 2020.
Researchers at George Mason University have developed a novel defense system they're calling
Mantis to counter cyberattacks conducted by large language models. Mantis uses deceptive techniques to lure malicious LLMs into engaging with decoy services,
such as fake FTP servers.
The system embeds prompt injection attacks and responses to manipulate and disrupt the attacker's strategy.
By exploiting the iterative process used by attacking LLMs,
By exploiting the iterative process used by attacking LLMs, Mantis can redirect their actions, waste resources, and even create reverse shells to compromise the attacking system.
Mantis employs both passive and active defenses, achieving a success rate above 95%. Passive strategies raise the cost of attacks, while active defenses target the attacking AI directly.
The vulnerability-exploited prompt injection is a fundamental weakness in LLMs,
difficult to patch without diminishing their utility.
This innovation highlights the potential for using AI's own methods against it,
marking a significant step in AI-driven cybersecurity defenses.
Coming up after the break, my conversation with Ben Yellen about AI's bias in the
resume screening process and tracking down a lost Lambo. Stay with us.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks. But get this.
More than 8,000 companies like Atlassian and Quora
have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings
automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off. And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
And joining me once again is Ben Yellen. He is from the University of Maryland Center for Health
and Homeland Security and also my co-host on the Caveat podcast. Hey there, Ben.
Good to be with you again, Dave.
Interesting story came by from the folks over at GeekWire.
This is about some research at the University of Washington
where they were examining racial and gender bias
in AI-driven resume screening.
What's going on here, Ben?
So this is a really striking story.
So these researchers tried to examine
whether there was bias in AI-driven resume
screening. And spoiler alert, there is. When you train a system on a lot of data and a lot of human
decisions went into that training data, and humans, as we know, can be extremely biased on a
number of different grounds, then this is the result that you're going to get. It's very reflective of the training data.
So some of the findings are pretty striking.
They tested three open source large language models
and resumes with white associated names
were preferred 85% of the time.
Female associated names were favored only 11% of the time,
even for jobs that one would think would be traditionally female jobs.
Black men were least preferred.
Models chose candidates nearly 100% of the time.
Again, the only thing that was changed on these sample resumes was the name.
And those are extremely striking findings.
So what are the implications of this?
For one, I think policymakers have a role to play here.
We've seen jurisdictions across the country try to address this problem.
New York City passed a policy requiring transparency to understand exactly what is going into these artificial intelligence hiring tools to make sure that they are not reflective of biases. California has a new law in the book protecting intersectional
characteristics. There's further academic research to be done. I think to make this kind of a double
blind experiment, the researchers are looking at human trust in AI-driven hiring next and whether technology inadvertently reinforces the biases that we currently have.
So I think this should give organizations major pause and concern
about using some of these tools.
I understand why these tools are helpful.
Just like any AI system, it cuts down on the work that we as humans have to do
in sorting through a giant pile of resumes.
But when you see statistics like this, I think there are flashing red lights everywhere.
So it's a very illuminating study.
Yeah, one of the things that stuck out to me was that they found that even if you remove the name, that in a lot of cases, these LLMs can infer someone's identity from other
information on their resume. Yeah. So you could look at somebody's work experience, for example,
and if somebody worked at, for example, at the NAACP, something from that work experience will
trigger and that will end up introducing bias of the data. The system suspects that that candidate is likely
black or African-American, and those biases will manifest themselves in whatever the output is.
Again, this is after you've stopped using names. And so it's not unique to just stereotypically
white or non-white names. That's not what's driving the bias here.
It's more than that.
It's the names, yes,
but it's also what is contained inside of the resumes.
So what's kind of depressing here
is that the systems are advanced enough
to be uncommonly good at perpetuating racial biases here.
Like it's extremely efficient
in a number of different ways at
reinforcing some of the biases that all of us have as human beings. Yeah, as I've said before,
it seems like these systems in a lot of ways, they reflect who we actually are rather than who
we aspire to be. Yeah, it's sort of depressing that we thought that machines could solve a lot of our problems.
And it turns out when you try to develop machines and try and get them to think like human beings,
they're going to reflect the good and the bad of human beings.
There are a lot of good aspects about thinking like human beings,
let's say relative to standard computing where it's all very linear,
you're looking for decision points.
Thinking like a human being is great,
making these neural connections from a bunch of different stimuli,
but it comes with the negative, and this is the negative being manifest here.
It reminds me of a story I heard about folks auditioning for symphony orchestras
or even auditioning for colleges, you know, elite music colleges,
that they found that there was a lot of gender bias in the hiring or the acceptance of musicians
who are auditioning, you know, by playing their instrument in front of a panel of judges.
So the first thing they did was they put up a black curtain on the stage so that the person playing would walk out behind the black curtain, sit down, play their instrument, and be judged.
They should use some of the chairs they use on The Voice that are reversible so that they can't see the person.
Well, so my understanding of the story is that the curtain helped a little bit.
my understanding of the story is that the curtain helped a little bit,
but what they ultimately learned was that they also had to put carpet on the floor because the judges could hear the way the people walked
and women's shoes sound different from men's shoes.
And so the click, click, click of, for example, someone in heels
is different from a man in dress shoes.
And they were implying things just based on that. So it's tricky.
It's very tricky. It's so hard to completely root out these biases.
I think some people just kind of want to throw up their hands and say,
well, humans are more biased than these machines, even after understanding what these studies are showing.
So despite the deficiencies of these systems, we should still use them.
And I think we have to work beyond that and say we now know how policymakers are reacting to biased AI systems.
We have tools potentially to help root out some of the discrimination that we're seeing here.
And I think it's incumbent upon us to at least consider the use of those tools through things
like regulation, requiring transparency, et cetera. Yeah. All right. Well, again, it's a
study from the University of Washington and written up by the folks over at GeekWire. We
will have a link in the show notes. Ben Yellen, thanks for joining us. Thank you.
Cyber threats are evolving every second,
and staying ahead is more than just a challenge.
It's a necessity. That's why we're thrilled to partner with ThreatL are evolving every second, and staying ahead is more than just a challenge. It's a necessity.
That's why we're thrilled to partner with ThreatLocker,
a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data,
and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com
today to see how a default deny approach can keep your company safe and compliant.
And finally, our exotic motoring desk tells us that Chris Bryant, the Colorado Rockies' third
baseman, had a rough off-season when his flashy 2023 Lamborghini Huracan went AWOL en route to
his Las Vegas home. The saga began on October 2nd when the supercar mysteriously vanished,
saga began on October 2nd when the supercar mysteriously vanished, sparking a multi-agency investigation. Turns out the transport company fell victim to business email compromise,
a scam that rerouted Bryant's Lambo to an unauthorized Las Vegas destination.
Thanks to license plate recognition cameras, police tracked the car's journey,
Thanks to license plate recognition cameras, police tracked the car's journey, recovering it on October 7th and nabbing multiple suspects.
The bust revealed a jackpot of criminal goodies, fake VINs, key fobs, fraudulent docks, and other stolen vehicles.
One bonus car even turned up in California. Though police didn't name-drop Bryant, the Denver Post did, with Detective Justin Smith quipping,
we'd treat it the same if it were a Ford 150. But hey, a Lamborghini does make for a cool case. And that's The Cyber Wire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights
that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app.
Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com.
We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector,
from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies to optimize your biggest investment, your people.
We make you smarter about your teams while making your team smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes.
Our mixer is Trey Hester
with original music
and sound design
by Elliot Peltzman.
Our executive producer
is Jennifer Iben.
Our executive editor
is Brandon Karp.
Simone Petrella
is our president.
Peter Kilpie is our publisher.
And I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.