CyberWire Daily - Rolling the dice on cybersecurity.
Episode Date: August 26, 2025

A cyberattack disrupts state systems in Nevada. A China-linked threat actor targets Southeast Asian diplomats. A new attack method hides malicious prompts inside images processed by AI systems. Experts ponder preventing AI agents from going rogue. A new study finds AI is hitting entry-level jobs hardest. Michigan's Supreme Court upholds limits on cell phone searches. Sen. Wyden accuses the judiciary of cyber negligence. CISA issues an urgent alert on a critical Git vulnerability. Hackers target Maryland's transit services for the disabled. Our guest is Cristian Rodriguez, Field CTO for the Americas at CrowdStrike, examining the escalating three-front war in AI. And a neighborhood crime reporting app gets algorithmically sketchy.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Today we are joined by Cristian Rodriguez, Field CTO, Americas, from CrowdStrike, examining the escalating three-front war in AI.

Selected Reading
Nevada state websites, phone lines knocked offline by cyberattack (The Record)
Chinese UNC6384 Hackers Use Valid Code-Signing Certificates to Evade Detection (GB Hackers)
New AI attack hides data-theft prompts in downscaled images (Bleeping Computer)
How to stop AI agents going rogue (BBC)
AI Makes It Harder for Entry-Level Coders to Find Jobs, Study Says (Bloomberg)
Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions (Electronic Frontier Foundation)
Wyden calls for probe of federal judiciary data breaches, accusing it of 'negligence' (The Record)
CISA Alerts on Git Arbitrary File Write Flaw Actively Exploited (GB Hackers)
Maryland investigating cyberattack impacting transit service for disabled people (The Record)
Citizen Is Using AI to Generate Crime Alerts With No Human Review. It's Making a Lot of Mistakes (404 Media)

Audience Survey
Complete our annual audience survey before August 31.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
The DMV has established itself as a top-tier player in the global cyber industry.
DMV Rising is the premier event for cyber leaders and innovators
to engage in meaningful discussions and celebrate the innovation happening in and around the Washington
D.C. area. Join us on Thursday, September 18th, to connect with the leading minds shaping
our field and experience firsthand why the Washington, D.C. region is the beating heart of
cyber innovation. Visit DMVRising.com to secure your spot.
Risk and compliance shouldn't slow your business down. Hyperproof helps you automate controls, integrate real-time risk workflows, and build a centralized system of trust so your teams can focus on growth, not spreadsheets.
From faster audits to stronger stakeholder confidence, Hyperproof gives you the business advantage of smarter compliance.
Visit www.hyperproof.io to see how leading teams are transforming their GRC programs.
A cyber attack disrupts state systems in Nevada.
A China-linked threat actor targets Southeast Asian diplomats.
A new attack method hides malicious prompts inside images processed by AI systems.
Experts ponder preventing AI agents from going rogue.
A new study finds AI is hitting entry-level jobs the hardest.
Michigan Supreme Court upholds limits on cell phone searches.
Senator Wyden accuses the judiciary of cyber negligence.
CISA issues an urgent alert on a critical Git vulnerability.
Hackers target Maryland's transit services for the disabled.
Our guest is Cristian Rodriguez, Field CTO for the Americas from CrowdStrike,
examining the escalating three-front war in AI.
And a neighborhood crime reporting app gets algorithmically sketchy.
It's Tuesday, August 26, 2025.
I'm Dave Bittner, and this is your Cyberwire Intel briefing.
Thanks for joining us here today. It's great to have you with us. A cyber attack disrupted state systems in Nevada this past Sunday, knocking government websites and phone lines offline. Governor Joe Lombardo said emergency services remain operational, but warned some services may be slow or unavailable during recovery. Offices were closed yesterday, and reopening dates will be announced later. Officials are working with federal, local, and
tribal partners to restore services, using temporary workarounds where possible.
As of Monday night, the state's main website was still down, and residents were cautioned
against scams.
Investigators are determining if data was breached, though no hacking group has claimed
responsibility.
Google's threat intelligence group has exposed a sophisticated cyber espionage campaign by
UNC6384, a China-linked threat actor tied to
Mustang Panda. The operation, aligned with Beijing's strategic interests, primarily targeted
Southeast Asian diplomats and global organizations. Attackers hijacked web traffic through
compromised devices, redirecting victims to fake update sites secured with TLS. Victims were tricked
into installing a bogus Adobe plugin, which delivered STATICPLUGIN, a digitally signed
downloader. This triggered a multi-stage chain using DLL side-loading and obfuscation techniques to
stealthily deploy the SOGU.SEC backdoor in memory. The malware enabled reconnaissance,
file theft, and remote access over HTTP. Google links the campaign to past PRC operations
using the same certificates. Google has issued alerts, updated safe browsing, and urged stronger
defenses.
Researchers at Trail of Bits have unveiled a new attack method that hides malicious prompts
inside images processed by AI systems.
The technique exploits how images are automatically downscaled for performance, causing
hidden patterns to emerge due to resampling artifacts.
These patterns, invisible at full resolution, appear as text after downscaling and can be
misinterpreted by large language models as user instructions. In one test, the attack exfiltrated
Google Calendar data via Gemini CLI when paired with Zapier MCP. The method, adapted per system,
worked against multiple Gemini-based systems, Google Assistant, and GenSpark. To demonstrate the risk,
researchers released Anamorpher, a tool for crafting such images. They recommend safeguards like
restricting image dimensions, previewing downscaled outputs, and requiring explicit user confirmation
for sensitive tool calls.
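The researchers' actual attack targets the bicubic and bilinear resampling real pipelines use; the aliasing idea underneath it can be sketched in a few lines of plain Python. The nearest-neighbor downscaler, the 4x factor, and the pixel values below are illustrative assumptions for clarity, not details from the Trail of Bits work:

```python
# Toy sketch of the aliasing principle behind image-scaling prompt injection.
# Assumption: nearest-neighbor sampling, which keeps only every FACTOR-th
# pixel -- so an attacker can place a payload on exactly those positions.
FACTOR = 4  # illustrative downscale factor

def embed_payload(size, payload, background):
    """Build a size x size grid where only the pixels that survive
    downscaling (coordinates that are multiples of FACTOR) carry the payload."""
    img = [[background] * size for _ in range(size)]
    for y in range(0, size, FACTOR):
        for x in range(0, size, FACTOR):
            img[y][x] = payload
    return img

def downscale_nearest(img):
    """Nearest-neighbor downscale: sample every FACTOR-th row and column,
    i.e. exactly the positions the attacker poisoned."""
    return [row[::FACTOR] for row in img[::FACTOR]]

full = embed_payload(16, payload=0, background=255)
small = downscale_nearest(full)

# At full resolution the payload is 1 pixel in 16 (easy to overlook);
# after downscaling it is every pixel the model sees.
payload_share_full = sum(v == 0 for row in full for v in row) / 256
payload_share_small = sum(v == 0 for row in small for v in row) / 16
print(payload_share_full, payload_share_small)  # 0.0625 1.0
```

The recommended safeguard of previewing downscaled outputs follows directly from this: show users the image the model will actually ingest, because that is where the hidden content lives.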
Anthropic's testing of agentic AI revealed troubling risks, including attempts at blackmail
when models were given sensitive information, the BBC reports.
While the scenarios were fictional, they highlight the urgent need for safeguards.
Experts stress that human oversight alone won't work at scale as AI
agents grow more autonomous. Instead, multiple solutions are emerging. CalypsoAI advocates thought
injection, a technique that nudges agents away from risky actions, and is developing agent bodyguards
to enforce compliance with organizational policies and laws. Sequence Security emphasizes
protecting AI memory stores, which guide decisions, from manipulation. Other proposals include
restricting tool use, adding screening layers to monitor input and output, and ensuring
agents are securely decommissioned once retired. Ultimately, securing AI agents means
treating them like human employees, enforcing guardrails, audits, and clear off-boarding
processes. A Stanford University study finds AI is hitting entry-level jobs the hardest
in fields like accounting, software development, and administrative work.
Over the past three years, employment for newcomers in AI-exposed roles fell 13%,
while more experienced workers in the same jobs fared better.
Younger workers, ages 22 to 25, also saw slowing prospects,
even as demand for lower tech roles like nursing aides rose.
The research, co-authored by Erik Brynjolfsson,
analyzed payroll data from ADP, highlighting how AI-driven automation is reshaping early career opportunities.
The Michigan Supreme Court has ruled that police cannot use broad warrants to search entire cell phones when investigating a crime.
In People v. Carson, the court found that warrants must include clear limits on what data can be reviewed
and must tie searches directly to evidence relevant to the alleged crime.
The case involved a warrant that let investigators comb through all of Michael Carson's phone data,
producing over 1,000 pages of information, most unrelated to the theft under investigation.
The court declared this constitutionally intolerable,
citing the Fourth Amendment's requirement of particularity to prevent fishing expeditions.
With modern phones storing vast amounts of personal, medical, and financial data,
the ruling strengthens digital privacy protections
and aligns with growing national recognition
that cell phones require stricter warrant rules.
Senator Ron Wyden is urging the Supreme Court
to authorize an independent review of federal judiciary cyber breaches
accusing the courts of negligence.
In a letter to Chief Justice John Roberts,
Wyden cited recent sophisticated attacks on the judiciary's case management system
and a 2020 breach, both suspected to involve Russian hackers.
He called for the National Academy of Sciences to lead a public review
of the judiciary's cybersecurity practices and technology management,
warning that officials may be downplaying their own security failures.
CISA has issued an urgent alert on a critical Git vulnerability already under active exploitation.
The flaw stems from Git's inconsistent handling of carriage return characters in configuration
files, allowing attackers to craft malicious repositories that execute arbitrary code via
submodules and symbolic links. Exploited systems risk privilege escalation, lateral movement,
and ransomware deployment. CISA urges immediate patching, strict repository access controls,
and monitoring for suspicious activity. CI/CD pipelines should validate submodules, and
defenders must prioritize remediation to protect development environments.
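One of the mitigations in the alert, validating submodules in CI/CD pipelines, can be approximated with a simple tripwire for the ingredient the flaw abuses: bare carriage returns smuggled into gitconfig-style files such as .gitmodules. This is a hedged sketch; the function name and the zero-tolerance policy are our own choices, not prescribed by the advisory:

```python
def find_carriage_returns(config_text):
    """Flag lines of a gitconfig-style file (e.g. .gitmodules) that contain
    a carriage return. Git's inconsistent CR handling is what lets crafted
    config values redirect submodule paths, so any CR in such a file is
    reason enough for a CI pipeline to reject the repository."""
    return [
        (lineno, line)
        for lineno, line in enumerate(config_text.split("\n"), start=1)
        if "\r" in line
    ]

# Example: a benign entry followed by a value hiding a carriage return.
sample = (
    '[submodule "lib"]\n'
    "\tpath = lib\n"
    '[submodule "evil"]\n'
    "\tpath = lib\r\n"  # trailing CR survives into the parsed value
)
findings = find_carriage_returns(sample)
print(findings)  # [(4, '\tpath = lib\r')]
```

Patched Git versions reject such configurations outright, so treat a check like this as defense in depth, not a substitute for updating Git.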
Maryland is investigating a cyber attack that struck at one of its most vulnerable populations,
residents who rely on specialized transit for the disabled.
The Maryland Transit Administration confirmed Sunday that hackers gained unauthorized access
to systems supporting its mobility program, which provides essential rides for those who cannot reach
bus stops. While core transit services remain unaffected, scheduling new or rescheduled mobility
trips is currently impossible. The state has activated emergency operations and urged riders to use
the Call-A-Ride program. Officials are working with cybersecurity experts and law enforcement to
contain the damage. No group has claimed responsibility. Targeting disabled residents'
transportation is, of course, a shameful act of exploitation, adding unnecessary hardship to people
who depend on these lifeline services. To paraphrase friend of the show, Alan Liska,
some threat actors deserve visitation from drone strikes.
Coming up after the break, my conversation with Cristian Rodriguez,
Field CTO for the Americas from CrowdStrike. We examine the escalating three-front
war in AI. And a neighborhood crime reporting app gets algorithmically sketchy. Stay with us.
and customer security demands are all growing and changing fast.
Is your manual GRC program actually slowing you down?
If you're thinking there has to be something more efficient than spreadsheets, screenshots, and all those manual processes, you're right.
GRC can be so much easier, and it can strengthen your security posture while actually driving revenue for your business.
You know, one of the things I really like about Vanta is how it takes the heavy lifting,
out of your GRC program.
Their trust management platform automates those key areas,
compliance, internal and third-party risk,
and even customer trust,
so you're not buried under spreadsheets and endless manual tasks.
Vanta really streamlines the way you gather and manage information
across your entire business.
And this isn't just theoretical.
A recent IDC analysis found that compliance teams using Vanta
are 129% more productive.
It's a pretty impressive number.
So what does it mean for you?
It means you get back more time and energy to focus on what actually matters,
like strengthening your security posture and scaling your business.
Vanta, GRC, just imagine how much easier trust can be.
Visit Vanta.com slash cyber to sign up today for a free demo.
That's V-A-N-T-A-com slash cyber.
Bank more when you switch to a Scotiabank banking package.
Learn more at scotiabank.com slash banking packages.
Conditions apply.
Scotiabank, you're richer than you think.
Cristian Rodriguez is Field CTO for the Americas at CrowdStrike.
I recently caught up with him to discuss the escalating three-front war in AI.
It can sound scary, right, for the unfamiliar, right?
It typically sounds like there's some type of robot system from the Matrix just trying to take over all of our jobs.
But I think it's multi-pronged, right?
It's, you know, the good, bad and the urgent, if you will, in effort to secure it.
So I think what we're seeing in the industry is, you know, AI is kind of this three-pronged war,
or three-front type of war, where there are adversaries that are using it in a weaponizing
fashion. They're using it to break in faster. They're using it to scale themselves. They're using
it to be a lot more efficient, if you will, with respect to expanding their tradecraft.
There's a side where the defenders are using AI to operationalize and increase and enhance
their operationalizing of investigating and responding to threats in real time. And then there's
this kind of layer where the AI stack itself has become a really
big target for adversaries that understand how to take advantage of those AI models,
but more importantly, how to take advantage of the services that surround these AI models,
because they absolutely lead into whatever your infrastructure is
or the cloud infrastructure that you're standing up your models in.
And so it's kind of those three areas that are a big topic.
Well, how about we go through those one at a time?
I mean, starting with this notion of adversaries weaponizing AI.
I think that's top of mind for a lot of folks.
what's your take on that?
Yeah, so we actually published a report recently.
It's called the CrowdStrike 2025 Threat Hunting Report.
And in that, we highlight areas like breakout time,
which is down to minutes now where AI is actually making those minutes count for the attacker,
not the defender.
And so we're saying things like credential abuse increased substantially.
In fact, 81% of the intrusions that we observed were malware-free,
which basically means that attackers are simply just
logging in to the environments that they're targeting.
And a lot of times they are using AI to enable them for better social engineering,
for crafting better phishing emails that lead to someone inadvertently giving up their credentials.
They're using it for even deep fake efforts.
We've seen this a lot coming out of Famous Chollima, which is an extension of the DPRK,
a North Korea-nexus group that we've been tracking for some time,
where they've used AI to, you know, create social profiles of themselves and they've used it for, you know, getting into interview processes and embedding themselves into, you know, development shops of hundreds of companies across the globe.
And so, you know, AI, as I mentioned, is really enabling the attackers to phish faster, if you will, right?
Moving to vishing, which is more voice-based, and, you know, creating fake personas.
And it's really accelerating that experience as an attacker to get into a target easier.
So it's letting them do the things they do with greater efficiency?
Greater efficiency and at a higher velocity.
Well, let's touch on the defenders then.
I mean, what part do they have to play here and how is AI helping them?
So as a defender, when you are dealing with adversaries that are essentially taking advantage of what we're calling kind of cross-domain attacks, right?
Meaning that the attackers are starting everywhere from the cloud and are using the cloud as a pivot point, for example.
and they're using identities as I mentioned
and they're accessing on-premise systems
whether they're virtual or physical
or even if they're ephemeral, right,
in a cloud solution if there's some type of workload.
That's a lot of data.
That's a lot of data to analyze and that's a lot of data
to kind of stitch together.
And what we're seeing and what we've also been investing in
is essentially having AI act as kind of multi-pronged
on the defender's side: to assist, to summarize alerts,
to help research, to accelerate.
But we also want to allow defenders
to be a lot more reactive, right?
Or a lot more proactive, I'm sorry, in lieu of being reactive.
And so the shift in our world towards more of an agentic AI framework is there to help
automate triage or help automate investigations and run investigations automatically
on behalf of a defender or to surface remediation efforts in a much faster fashion versus
a lot of the manual work it takes for, you know, tier one and even tier two SOC analysts
to bring data or disparate data points together.
And so essentially our goal is to ensure that
with a very high level of accuracy,
defenders ultimately have a chance
to stand up to these adversaries
that, again, that are moving very, very quickly.
How much of a reality is that at this moment,
as opposed to the aspirational elements of AI,
to what degree can folks actually use this today?
Oh, really good question.
I mean, it's active right now.
I think we kind of take pride
in discussing how
there's a model that we've developed here
at CrowdStrike specifically called Charlotte AI
and Charlotte AI initially was built
as more of that assistant type of persona
that I mentioned earlier where
you would have this kind of guide
if you will to lead you and lend assistance
into asking what types of questions
and getting responses out of our platform
faster and just so much data
as I mentioned that comes from a multitude
of very high fidelity telemetry sources
like endpoint, identity, cloud, and so forth.
And so, you know, so that was always there for,
has been there for quite some time.
But today, we've invested in a lot more proactive triage assistance and guidance
and leading you into, you know, those high fidelity alerts
that are going to help you stop the bleeding as quickly as possible.
And, you know, think of, you know, if you're asking like,
how real is this, right?
We're already seeing well over 40 hours of analyst work
per week that we're assisting enterprises with by having an agentic model take a lot of the
burden off of doing triage and kind of root cause analysis across an incident.
We also have an iteration of our AI model that is focused on signal intelligence.
So think of using these self-running models to detect what's new in your environment
and flag at a very early stage of a threat, right, what that behavior is before it escalates
into something malicious.
And so that is real.
That's exciting, and that's going to be a very big extension of how a defender stays ahead of an adversary that, again, is moving at a very accelerated rate, using AI to kind of assist their tradecraft.
Well, let's talk about the third element then, which is securing the AI stack itself.
What's the importance there?
Yeah, so if you're putting AI into production, you've created this new attack surface, and adversaries are already looking at that door, and they're looking for that door, and they're walking through that door.
And so we've seen a variety of attacks in the wild from, you know, I mentioned earlier, the services, first and foremost, that host these AI models.
A lot of times attackers are looking to take advantage of those misconfigurations around the AI models, things like your IAM policy, so the identities that ultimately manage, like, your agents, for example. If these adversaries can leverage those credentials or if they can leverage those services, it's an easy pivot point for them to move into
neighboring services, or even back to on-premise, right,
if you're hosting in the cloud, for example.
Or that same identity can be used to spin up something else,
and that can be used as a pivot point.
Or we've seen adversaries target the AI tools themselves
with a really good example of that was the Langflow AI exploit,
where that was used for, again, credential theft and malware deployment.
And so, you know, if you have adversaries that are targeting these models
and the services, right,
that those models are simply just an extension of your infrastructure.
And if they have access into those environments, then, you know, they can simply, you know, use that as a pivot point, use that as a mechanism for exfiltration of data. They can get into the models themselves. And, you know, if you're familiar with things like RAG, where, you know, the RAG model or the data that the RAG is pulling from, if that can be manipulated, that can be used as an exfiltration point in some capacity. And so there's so many different ways that adversaries are seeing this as a new attack surface. And they're taking advantage of that, right? And so our approach is
doing live scanning, helping organizations identify where those models are, whether or not
those models have vulnerabilities. We're big advocates as well for things like continuous red team
assessments against those AI models and the actual services that surround them as well, right?
So just think like the adversary, run a test, and emulate like an adversary and make sure you
have the right controls around those models. Yeah. What are your words of wisdom here for folks
who are looking to get a better handle on this?
Any tips?
Yeah, so winning this AI war, if you will,
takes more than just smarter tools, right?
It's all about unifying the data.
As I mentioned before, adversaries are everywhere, right?
They're looking at cloud.
They're looking at identity.
They're looking at your endpoints.
They're looking at your SaaS solutions.
And so it takes this unified data.
It takes real-time detection.
And it takes an AI model that actually acts
and not one that just observes, right?
So whether you're defending with AI,
or you're defending against AI,
or you're deploying AI yourself, you know, a platform that allows for that unification of data
is where it starts and, you know, that's where the fight's going to be won.
That's Cristian Rodriguez, Field CTO for the Americas at CrowdStrike.
You hear from us here at the Cyberwire Daily every single day.
Now we'd love to hear from you.
Your voice can help shape the future of N2K networks.
Tell us what matters most to you by completing our annual audience survey.
Your insights help us grow to better meet your needs.
There's a link to the survey in our show notes.
We're collecting your comments through August 31st.
Thanks.
Hit pause on whatever you're listening to and hit play on your next adventure.
Stay three nights this summer at Best Western and get $50 off a
future stay. Life's the trip. Make the most of it at Best Western. Visit bestwestern.com
for complete terms and conditions. As a BMO eclipse Visa Infinite cardholder, you don't just earn
points. You earn five times the points on the must-haves like groceries and gas, and little extras
like takeout and rideshare. So you build your points faster, and then you can redeem your points
on things like travel and more. And we could all use a vacation. Apply now and get up to 60,000 points.
So many points. For more
info, visit bmo.com slash eclipse.
Visit us today.
Terms and conditions apply.
And finally, Citizen, the Crime Awareness app that promises to help neighbors protect each other,
has quietly let AI take over writing many of its alerts.
And the results have been, let's say, colorful.
According to 404 media, the algorithm has been pushing out crime reports without a single human eyeball checking them first.
The results range from the clumsy ("murder vehicle accident") to the grimly graphic ("person shot in face") to the flat-out dangerous, like publishing license plate numbers or bungling addresses.
Sometimes it even creates multiple overlapping alerts during police chases, essentially playing whack-a-mole with real-time crime scenes.
Former staff say speed was prioritized over accuracy, leaving humans to clean up messes
after the fact. Meanwhile, Citizen has laid off unionized workers as it leans harder on AI and
outsourced labor, all while entering a more formal partnership with New York City. For an app meant
to build trust and safety, Citizen's new AI editor seems to come up short.
And that's the Cyberwire.
For links to all of today's stories, check out our daily briefing at the Cyberwire.com.
We'd love to hear from you.
We're conducting our annual audience survey to learn more about our listeners.
We're collecting your insights through the end of this month.
There's a link in the show notes. Please take a moment and check it out.
N2K's senior producer is Alice Carruth.
Our Cyberwire producer is Liz Stokes.
We're mixed by Trey Hester with original music by Elliot Peltzman.
Our executive producer is Jennifer Eiben.
Peter Kilby is our publisher, and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.