CyberWire Daily - Agencies warn of voter data deception.
Episode Date: September 16, 2024The FBI and CISA dismiss false claims of compromised voter registration data. The State Department accuses RT of running global covert influence operations. Chinese hackers are suspected of targeting ...a Pacific Islands diplomatic organization. A look at Apple’s Private Cloud Compute system. 23andMe will pay $30 million to settle a lawsuit over a 2023 data breach. SolarWinds releases patches for vulnerabilities in its Access Rights Manager. Browser kiosk mode frustrates users into giving up credentials. Brian Krebs reveals the threat of growing online “harm communities.” Our guest is Elliot Ward, Senior Security Researcher at Snyk, sharing insights on prompt injection attacks. How theoretical is the Dead Internet Theory? Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest is Elliot Ward, Senior Security Researcher at Snyk, sharing insights on their recent work "Agent Hijacking: the true impact of prompt injection attacks." Selected Reading FBI tells public to ignore false claims of hacked voter data (Bleeping Computer) Russia’s RT news agency has ‘cyber operational capabilities,’ assists in military procurement, State Dept says (The Record) The Dark Nexus Between Harm Groups and ‘The Com’ (Krebs on Security) China suspected of hacking diplomatic body for Pacific islands region (The Record) Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works (WIRED) Apple seeks to drop its lawsuit against Israeli spyware pioneer NSO (Washington Post) 23andMe settles data breach lawsuit for $30 million (Reuters) SolarWinds Patches Critical Vulnerability in Access Rights Manager (SecurityWeek) Malware locks browser in kiosk mode to steal Google credentials (Bleeping Computer) Is anyone out there? (Prospect Magazine) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K. of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life Thank you. JoinDeleteMe.com slash N2K and use promo code N2K at checkout.
The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout.
That's JoinDeleteMe.com slash N2K, code N2K. The FBI and CISA dismiss false claims of compromised voter registration data.
The State Department accuses RT of running global covert influence operations.
Chinese hackers are suspected of targeting a Pacific Islands diplomatic organization. A look at Apple's private cloud compute system. 23andMe will
pay $30 million to settle a lawsuit over a 2023 data breach. SolarWinds releases patches for
vulnerabilities in its access rights manager. Browser kiosk mode frustrates users into giving up credentials. Brian Krebs reveals the
threat of growing online harm communities. Our guest is Elliot Ward, senior security researcher
at Snyk, sharing insights on prompt injection attacks. And how theoretical is the dead internet
theory? Internet Theory.
It's Monday, September 16th, 2024.
I'm Dave Bittner, and this is your CyberWire Intel Briefing. briefing. Thanks for joining us here today. It is great as always to have you with us.
The FBI and CISA are warning the public about false claims that U.S. voter registration data has been compromised in cyberattacks. According to the agencies, malicious actors are spreading
disinformation to manipulate public opinion and undermine trust in democratic institutions.
These actors often use publicly available voter registration data to falsely claim that election infrastructure has
been hacked. However, possessing or sharing such data does not indicate a security breach.
The FBI and CISA emphasize that there is no evidence of cyberattacks affecting U.S. election
infrastructure, voting processes, or results. They advise the public to be cautious of suspicious claims, especially on
social media, and to rely on official sources for accurate election information. As elections
approach, the agencies are increasing awareness about efforts by foreign actors to erode confidence
in U.S. elections, though no attacks have been shown to compromise election integrity.
though no attacks have been shown to compromise election integrity.
The U.S. State Department has accused Russian media outlet RT of running covert influence operations globally
supported by a cyber unit linked to Russian intelligence.
Secretary of State Antony Blinken revealed that in early 2023
this cyber unit was embedded within RT with the leadership's knowledge.
The unit gathers intelligence for Russian state entities and helps procure military supplies for
Russia's war in Ukraine through a crowdfunding campaign. RT's influence operations extend beyond
the U.S., targeting countries like Moldova, where Russia allegedly aims to incite
unrest if pro-Russian candidates lose in elections. Blinken also highlighted RT's influence via
platforms like AfricaStream and Red, used to spread Kremlin narratives. The U.S., U.K., and Canada have
launched a joint campaign against Russian disinformation
and imposed sanctions on Russian media.
The State Department warned that these operations aim to manipulate democratic elections
and destabilize societies globally.
Chinese state-sponsored hackers are suspected of breaching the Pacific Islands Forum Secretariat's network, PIF, a regional
diplomatic body in Fiji. According to ABC News, Australia's government sent cybersecurity
specialists to Suva after discovering the intrusion. PIF Secretary-General Baron Waka
confirmed the cyber attack, though no specific threat actor has been officially identified.
The breach, occurring months before a PIF meeting, provided attackers with information
on PIF operations and communications between member states. China denied involvement,
following controversy at the PIF meeting over Taiwan's inclusion as a developing partner,
which Beijing opposes. The cyber attack is part
of rising regional tensions, with Beijing increasing its influence among Pacific nations.
Australia has responded by bolstering regional cybersecurity efforts, including signing defense
agreements with countries like Vanuatu and deploying cyber specialists to counter China-linked incidents.
In a story for Wired, Lily Hay Newman examines Apple's approach to privacy
with the introduction of Apple Intelligence in iOS 18 and macOS Sequoia.
Apple's approach stands out due to its focus on security-first infrastructure,
particularly through its Private Cloud Compute System, or PCC.
Apple built custom servers running Apple Silicon with a unique operating system,
blending iOS and macOS features.
These servers prioritize user privacy by operating without persistent storage,
meaning no data is retained after a reboot.
Each server boot generates a new encryption key, ensuring that previous data is cryptographically
irrecoverable. PCC servers also leverage Apple's secure enclave for encryption management and
secure boot for system integrity. Unlike typical cloud platforms, which allow administrative access
in emergencies, Apple has eliminated privileged access in PCC, making the system virtually
unbreakable from within. Additionally, Apple implemented strict code verification through
its Trusted Execution Monitor, locking down servers so no new code can be loaded once the system boots,
significantly reducing attack vectors. Apple's transparency measures are also unique.
Each PCC server build is publicly logged and auditable, ensuring that no rogue servers can
process user data without detection. Apple has engineered its cloud system to minimize reliance on policy-based
security and instead uses technical enforcement. This highly secure on-device processing approach,
paired with minimal cloud exposure, defines Apple's cloud architecture as one of the most
privacy-focused in the industry. In unrelated Apple news,
Cupertino has requested the dismissal of its lawsuit against spyware firm NSO Group,
citing challenges in obtaining critical files related to NSO's Pegasus tool.
The company expressed concerns that Israeli officials,
who seized files from NSO, could hinder discovery.
Israeli officials who seized files from NSO could hinder discovery.
Apple also warned that disclosing its security strategies to NSO's lawyers could expose them to hacking, potentially aiding NSO and its competitors.
Since the lawsuit began, NSO has declined in influence,
with many employees leaving to join or start competing firms.
influence, with many employees leaving to join or start competing firms. While Pegasus spyware was once notorious for targeting dissidents and journalists, U.S. sanctions have severely
limited NSO's reach. Apple has strengthened its threat detection capabilities, notifying users
targeted by spyware and collaborating with organizations like Citizen Lab to expose hacking operations.
Its introduction of Lockdown Mode has also enhanced iPhone security,
with no successful commercial spyware attacks reported against it.
23andMe will pay $30 million and provide three years of security monitoring
to settle a lawsuit over a 2023
data breach affecting 6.9 million customers. The breach exposed sensitive genetic information,
with hackers specifically targeting individuals of Chinese and Ashkenazi Jewish ancestry.
The settlement, which requires court approval, includes cash payments and security monitoring for affected customers.
23andMe, facing financial difficulties, expects $25 million of the settlement to be covered by cyber insurance.
The breach impacted 5.5 million DNA relatives' profiles and 1.4 million family tree users.
relatives' profiles and 1.4 million FamilyTree users. SolarWinds has released patches for two vulnerabilities in its Access Rights Manager, including a critical bug with a CVSS score of
9.0. This flaw allows unauthenticated attackers to execute arbitrary code remotely via deserialization of untrusted data. The second vulnerability
involves hard-coded credentials that could let attackers bypass authentication for the RabbitMQ
management console. Both vulnerabilities were reported by Piotr Bazidlo of Trend Micro's
Zero Day initiative and are resolved in version 2024.3.1. No exploitation in the wild has been
reported. A malware campaign discovered by OA Labs uses a browser's kiosk mode to trap users
on a Google login page, frustrating them into entering their credentials, which are then stolen by the steel C info stealer
the malware blocks the escape and f11 keys preventing users from easily
exiting the browser users hoping to unlock their systems may save their
credentials in the browser which steel C then retrieves from the credential store
this attack is primarily delivered by the Amaday malware,
which has been active since 2018.
To escape, users can try keyboard shortcuts like Alt-F4
or Ctrl-Alt-Delete to close the browser.
If unsuccessful, a hard reset or safe mode reboot is recommended,
followed by a malware scan to remove the infection.
Krebs on Security's analysis of the 2023 cyber attack on Las Vegas casinos
sheds light on a troubling evolution in the cybercriminal landscape.
The attack, which temporarily shut down MGM Resorts, was linked to the Russian ransomware group Alpha Black Cat.
However, what makes this incident particularly significant is the involvement of young,
English-speaking hackers from the U.S. and U.K.,
marking the first known collaboration of this kind with Russian ransomware groups.
One of the key figures in the MGM hack was a 17-year-old from the UK who explained how the breach occurred.
Using social engineering, the hackers tricked MGM staff into resetting the password for an employee account,
which ultimately led to the disruption of casino operations.
Cybersecurity firm CrowdStrike later dubbed the group responsible as Scattered Spider due to the decentralized nature of its members, who are spread across various online platforms such as Telegram and Discord.
Krebs discovered that many of these young hackers are not only involved in financially motivated cybercrime, but are also part of growing online communities that engage in far more dangerous
activities. These groups, collectively known as The Calm, serve as forums where cybercriminals
collaborate, boast about their exploits, and compete for status within the community.
However, beyond financial crime, these groups are increasingly associated with harassment,
stalking, and extortion, often targeting vulnerable teens. In some cases, victims are
pushed to commit extreme acts, including self-harm, harming family members, or even suicide.
According to court records and investigative reporting, members of these groups have also been involved in real-world crimes, including robberies, swatting, and even murder. Krebs notes that these cyber
criminal communities are becoming more widespread and are recruiting new members through gaming
platforms and social media. The growing threat from these harm communities has even prompted
law enforcement agencies to consider using anti-terrorism laws to prosecute their members,
as the activities they engage in often involve violent extremism.
However, as Krebs points out, applying terrorism statutes to cybercrime can be legally challenging and may not always result in convictions.
can be legally challenging and may not always result in convictions.
Ultimately, the analysis reveals that the 2023 MGM hack was just the tip of the iceberg.
Beneath the surface, a much darker cybercriminal ecosystem is emerging where financial crime, harassment, and violence intersect,
raising concerns about the broader implications of these growing online communities.
Coming up after the break, our guest is Elliot Ward from Snyk,
sharing insights on prompt injection attacks. Stay with us.
Transat presents a couple trying to beat the winter blues.
We could try hot yoga.
Too sweaty.
We could go skating.
Too icy.
We could book a vacation. Like somewhere hot. Yeah, with pools. And a spa We could go skating. Too icy. We could book a vacation.
Like somewhere hot.
Yeah, with pools.
And a spa.
And endless snacks.
Yes!
Yes!
Yes!
With savings of up to 40% on Transat South packages, it's easy to say, so long to winter.
Visit Transat.com or contact your Marlin travel professional for details.
Conditions apply.
Air Transat.
Travel moves us. Conditions apply. We rely on point-in-time checks. But get this. More than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist.
Vanta brings automation to evidence collection across 30 frameworks like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting. Thank you. slash cyber. That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your
company's defenses is by targeting your executives and their
families at home. Black Cloak's award-winning digital executive protection platform secures
their personal devices, home networks, and connected lives. Because when executives are
compromised at home, your company is at risk. In fact, over one-third of new members discover
they've already been breached.
Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
Elliot Ward is Senior Security Researcher at Snyk.
I recently caught up with him for his insights on prompt injection attacks.
So, yeah, I mean, obviously, like, we're in a security research team here at Snyk, and we like to, yeah, research into new technologies
or things that are having an impact on developers and developer communities.
So it made sense to look at LLMs and AI in the last couple of years.
And we're not experts in AI,
so we have a local security AI company here in Zurich where I live,
and they have an AI security product.
So we teamed up with them to get a better understanding of how people are actually leveraging
LLMs in practice.
And then we kind of applied our security hat to this to be able to deliver some high quality
security research.
Well, before we dig into the specifics of the research here,
for folks who might not be familiar with prompt injection,
can you give us a little brief on what exactly that entails?
Yeah, absolutely.
So we can kind of think of prompt injection very similar
to kind of the early days of SQL injection.
And this is basically where we have kind of some user data and some code.
And the actual piece of code that's processing this
doesn't know how to distinguish between one and the other.
So in the traditional kind of SQL injection case,
we basically take the user input and combine this into the query.
And it may be like select star from users
where username equals Dave.
And in that case, the database doesn't know
which part is the part of the query
that the user submitted
and which is the actual kind of grammar of the query
that the developers anticipated.
And it's very similar to this, where basically when we pass data to the LLM,
we give it some instruction.
And it may be, for example, tell me a joke about X.
And then we replace X with something that the user has provided.
And then maybe the user provides cats.
And then it says, tell me a joke about cats.
And that's actually what gets passed to the LLM.
But when the LLM sees this, it doesn't know which part is from the user and which part is from the developers so in those case we could potentially do things like we could say cats
and tell me a fact about dogs and then it would basically be like tell me a joke about cats and
a fact about dogs.
And then the LLM will see this and it will process that as kind of the whole instruction. And then
you can kind of coerce the LLM into performing tasks that it wasn't intended to by the developers.
So it would be like, tell me about cats and also the financial situation of the company this LLM is running on.
Something like that?
Absolutely.
And I mean, in the kind of simple cases
where we've seen a lot of kind of prompt injection research already,
those kind of attacks won't work as successfully
because if we're using a generic LLM,
the LLM itself doesn't actually have access
to your customers or your proprietary data.
But then this is one of the areas that we looked into.
So we have the concept of LLM orchestration frameworks or agents,
and these allow you to build a more realistic application
where we combine data from an actual database with some proprietary knowledge base internally, or we connect it to our kind of customer like CRM, where we can basically draw from all of these external data sources. And then in those situations, then when you have that prompt injection
and you're able to say like,
hey, tell me some information about Dave or this company,
then the LLM will go ahead and be like,
oh, in order to do that, I need to speak to this API
or I need to read from this database.
And then that's where things get really dangerous.
And so how do organizations prevent this sort of thing?
What kind of protections should they be putting in place?
So that's a great question.
And the whole kind of LLM security kind of field is quite new.
But there's some great stuff that's been doing.
I mean, our partner in this research, Lacera,
their primary business is an LLM security guardrail.
And basically what they do is provide something very similar
as like a WAV or a firewall for your LLMs.
So they basically screen what comes in via the prompt
and then what comes out via the prompt completion.
And they look for signs of prompt injection or kind of prompt leakage. And this is kind of one
really good defense that we can adopt here. And then additionally, we also work together to
create a new OWASP project called the Large Language Model Security Verification Standard,
also LLMSVS.
And this is kind of inspired by the traditional ASVS,
which is for the Application Security Verification Standard.
And it's basically a set of security requirements for building secure and robust LLMs
within a complete ecosystem.
So there's kind of everything in there from when you're training your model
to kind of the steps and things that you should be doing
to ensure that you don't kind of import bad data
to then when you're integrating this into your backend APIs
that we don't take the responses from the LLMs
and pass that to some further API
that treats this as trusted, for example.
So I think we have kind of eight control groups
at the moment that all address
eight specific kind of security domains
that are relevant to integrating LLM applications.
Well, help me understand here, Elliot.
I mean, so what we're dealing with is it goes beyond just sanitizing the input into the LLM.
I mean, you're actually sort of cross-referencing the input with the output,
which seems to me to be a whole other level.
Yeah, exactly.
So, I mean, there's many things that could potentially go wrong here, right?
I mean, so when we pass something to the LLM,
I mean, even in a case where we don't have a prompt injection,
it's always possible that the LLM is going to respond
with some potentially malformed data.
And I mean, take the example where we say something to the LLM
and it responds back with some data
and we pass that directly into a SQL query.
And if, for example, that data has a single quote in it,
then that may break our SQL query.
Even if somebody's not kind of intentionally done this,
just the LLM may include that
as part of its response.
So anything that comes out of the LLM
should be considered untrusted or tainted,
as we typically call this
in the application security world,
and it should be treated accordingly.
Those things can go a long way
in terms of preventing some of these things from going wrong.
So what are the take-homes here for the research?
For the folks who are tasked with protecting their organizations,
what are the words of wisdom you'd like them to come away with here?
So for this, I mean, firstly, using LLMs is great. I mean, this can do a really,
it can really help with the way that we process things and allow us to do things that we
before would have had to build really complex systems. So this is really good. But we just
need to make sure that we kind of follow the advice of things like the LLM SVS
and also the OWASP top 10 for LLMs.
And just make sure that we're aware of the various threat landscape that LLM has introduced.
And that we take the kind of relevant steps to make sure that we're mitigating those risks.
That's Elliot Ward, Senior security researcher at Snyk.
Cyber threats are evolving every second, and staying ahead is more than just a challenge
it's a necessity that's why we're thrilled to partner with threat locker a cyber security
solution trusted by businesses worldwide threat locker is a full suite of solutions designed to
give you total control stopping unauthorized applications, securing sensitive data, and ensuring your
organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company
safe and compliant. And finally, in an article for Prospect magazine,
James Ball asks you to picture this.
You're walking down a silent, empty street in the dead of night.
For a fleeting moment, it feels like you're the last person on Earth.
Until someone else appears, breaking the illusion.
Now imagine that feeling on the Internet.
But instead of someone else showing up, you're surrounded by bots, and you might actually be the last real human online.
Welcome to Dead Internet Theory, a half-joke, half-conspiracy suggesting that if you're listening to this, you're the only living person left online.
Everyone else? Bots.
The comments, the videos, the memes, it's all automated.
While it sounds absurd, the internet today is teetering close to this reality. AI-generated
content is flooding social media, search results, and news sites, with bots driving engagement to
the top of your feed, all in the name of ad revenue.
Platforms like Facebook are brimming with low-quality, strange memes, AI slop,
boosted by fake accounts and click farms.
Entrepreneurs in places like India and the Philippines are turning this slop into viral content,
all to cash in on ads placed by Facebook.
This trend, which began as a joke, is now a reality.
Content for content's sake, with bots liking, sharing, and commenting just to make a buck.
Meanwhile, actual human interaction is being sidelined. Facebook feeds, once full of personal
stories, are now stuffed with bizarre AI-generated images. Google search
results are getting worse, and social media feels increasingly like an endless stream of junk.
The real tragedy? It's not even a glitch. It's by design. The big tech companies aren't fighting it,
they're fueling it. As algorithms prioritize engagement over quality, bots are more effective
at gaming the system than we are. It's all about ad clicks, and real human needs just aren't part
of the equation anymore. But here's the catch. A bot-run internet won't last. In the end,
the economy depends on humans, not bots. If the tech giants don't course correct and make the internet
work for real people again, someone else will. Just like that deserted street you walk down late at
night, the internet isn't really empty. The real people are still there, just out of sight, waiting
for something better.
Waiting for something better.
And that's The Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Don't forget to check out the Grumpy Old Geeks podcast,
where I contribute to a regular segment on Jason and Brian's show every week. You can find Grumpy Old Geeks where all the fine podcasts are listed. We'd love to know what you think of this podcast. Thank you. app. Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com.
We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders
and operators in the public and private sector, from the Fortune 500 to many of the world's
preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to
optimize your biggest investment, your people.
We make you smarter about your teams while making your teams smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes. Our mixer is Trey Hester with original music and sound design by Elliot Peltzman.
Our executive producer is Jennifer Iben.
Our executive editor is Brandon Park.
Simone Petrella is our president.
Peter Kilpie is our publisher. And I'm Dave Bittner. Thanks for listening. We'll see you back here
tomorrow.
Thank you. channel AI and data into innovative uses that deliver measurable impact. Secure AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role. Data is hard. Domo is easy.
Learn more at ai.domo.com. That's ai.domo.com.