CyberWire Daily - BEAR-ly washed and dangerous.
Episode Date: May 27, 2025“Laundry Bear” airs dirty cyber linen in the Netherlands. AI coding agents are tricked by malicious prompts in a Github MCP vulnerability.Tenable patches critical flaws in Network Monitor on Windo...ws. MathWorks confirms ransomware behind MATLAB outage. Feds audit NVD over vulnerability backlog. FBI warns law firms of evolving Silent Ransom Group tactics. Chinese hackers exploit Cityworks flaw to breach US municipal networks. Everest Ransomware Group leaks Coca-Cola employee data. Nova Scotia Power hit by ransomware. On today’s Threat Vector, David Moulton speaks with his Palo Alto Networks colleagues Tanya Shastri and Navneet Singh about a strategy for secure AI by design. CIA’s secret spy site was… a Star Wars fan page? Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector In this segment of Threat Vector, host David Moulton speaks with Tanya Shastri, SVP of Product Management, and Navneet Singh, VP of Marketing - Network Security, at Palo Alto Networks. They explore what it means to adopt a secure AI by design strategy, giving employees the freedom to innovate with generative AI while maintaining control and reducing risk. You can hear their full discussion on Threat Vector here and catch new episodes every Thursday on your favorite podcast app. Selected Reading Dutch intelligence unmasks previously unknown Russian hacking group 'Laundry Bear' (The Record) GitHub MCP Server Vulnerability Let Attackers Access Private Repositories (Cybersecurity News) Tenable Network Monitor Vulnerabilities Let Attackers Escalate Privileges (Cybersecurity News) Ransomware attack on MATLAB dev MathWorks – licensing center still locked down (The Register) US Government Launches Audit of NIST’s National Vulnerability Database (Infosecurity Magazine) Law Firms Warned of Silent Ransom Group Attacks (SecurityWeek) Chinese Hackers Exploit Cityworks Flaw to Target US Local Governments (Infosecurity Magazine) Everest Ransomware Leaks Coca-Cola Employee Data Online (Hackread) Nova Scotia Power Suffers Ransomware Attack; 280,000 Customers' Data Compromised (GB Hackers) The CIA Secretly Ran a Star Wars Fan Site (404 Media) Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the CyberWire Network, powered by N2K.
Hey everybody, Dave here.
I've talked about DeleteMe before, and I'm still using it because it still works.
It's been a few months now, and I'm just as impressed today as I was when I signed
up.
DeleteMe keeps finding and removing my personal information from data broker sites and they
keep me updated with detailed reports so I know exactly what's been taken down.
I'm genuinely relieved knowing my privacy isn't something I have to worry about every
day.
The DeleteMe team handles everything.
It's the set it and forget it
peace of mind.
And it's not just for individuals. Delete Me also offers solutions for businesses, helping
companies protect their employees' personal information and reduce exposure to social
engineering and phishing threats.
And right now, our listeners get a special deal, 20% off your Delete Me plan.
Just go to joindeleteeme.com slash n2k and use promo code n2k at checkout.
That's joindeleteeme.com slash n2k, code n2k. Laundry Bear airs dirty cyber linen in the Netherlands.
AI coding agents are tricked by malicious prompts in a GitHub MCP vulnerability.
Tenable patches critical flaws in network monitor on Windows, MathWorks confirms ransomwares behind a
MATLAB outage, the feds audit NVD over vulnerability backlogs, the FBI warns law
firms of evolving silent ransom group tactics, Chinese hackers exploit a
CityWorks flaw to breach US municipal networks, Everest Ransomware Group leaks Coca-Cola employee data,
Nova Scotia Power's been hit by ransomware,
on today's Threat Vector, David Moulton speaks with his Palo Alto Network's colleagues
Tanya Shastri and Navneet Singh about a strategy for secure AI by design.
And the CIA's secret spy site was...
a Star Wars fan page?
It's Tuesday, May 27, 2025. CyberWire Intel Briefing.
Thanks for joining us.
It is great to have you with us, and we hope everybody had a great long holiday weekend
here in the US anyway. holiday weekend. Dutch intelligence just introduced the world to Laundry Bear, a fresh Russian threat actor
with a knack for speed, stealth, and stealing inboxes.
The group, also tracked by Microsoft as Void Blizzard, has been linked to cyberespionage
across NATO with a suspicious focus on defense contractors Aviation and Ukraine.
Laundry Bear first popped up after a hack on the Dutch police in 2024.
Using session hijacking and credentials from the cyber criminal flea market,
the bear broke in, swiped contacts, and likely hit other targets too.
Despite overlapping tactics with Fancy Bear, APT-28,
and the usual GRU suspects,
Laundry Bear is being treated as a distinct creature
in the growing Russian menagerie.
Think of it as the laundry-doing cousin
of sandworm, Cozy, and the rest.
The Bear's tools are simple, automated, and stealthy, just enough to
make defenders lose sleep without ever deploying custom malware.
Researchers at Invariant Labs uncovered a critical vulnerability in GitHub's
model context protocol server, exposing AI coding agents to prompt injection
attacks. The flaw lets attackers plant hidden commands
in public GitHub issues.
When users direct their AI agents to review these issues,
the agents can be tricked into leaking sensitive data
from private repositories.
This exploit doesn't compromise the MCP tool itself,
but manipulates the AI's trust in external content.
One proof of concept prompted an agent to pull sensitive data, like salaries and private
repo info, and publish it publicly, all under the guise of user feedback.
The vulnerability is model agnostic and impacts the broader AI dev tool ecosystem. As AI agents become central to software development,
this incident shows traditional security may not be enough.
Tenable has patched two high severity flaws
in its network monitor tool for Windows, discovered by
researcher Will Dorman. The bugs affect versions before
651 and allow local privilege escalation
and arbitrary code execution. The first flaw arises from insecure directory permissions
in non-default installations, enabling attackers with local access to elevate privileges. The
second flaw is more severe, allowing low-privileged users to plant malicious files and execute
them with system rights, no admin clicks required.
Tenable's latest update also upgrades several key libraries, addressing broader vulnerabilities.
Organizations using Tenable Network Monitor on Windows are urged to update immediately
and review directory permissions.
These flaws, while requiring local access, pose a serious threat in shared or multi-user
environments where the platform's privileged network monitoring role makes it a high-value
target.
MathWorks has confirmed a ransomware attack is responsible for the week-long outage that
crippled MATLAB, affecting
millions of users.
The incident began on May 18 and disrupted both internal systems and key online services,
including licensing and MATLAB Online, widely used in academia.
Users, including frustrated students and engineers, were left in limbo with vague status updates
and no clear cause until MathWorks broke its silence.
Some users even resorted to pirating the software just to meet deadlines.
The attack especially impacted students during exam season, with licensing servers down and
access to MATLAB Greater stalled.
Although many services are now restored,
full recovery is ongoing.
Commercial customers with local license servers
largely avoided disruption,
while educational users who rely on cloud-based access
bore the brunt.
MathWorks has involved federal law enforcement
and is working with cybersecurity experts
to finish cleanup and restore remaining services.
The U.S. Department of Commerce has launched an audit of the National Vulnerability Database
to address a growing backlog of unprocessed security flaws.
The backlog emerged after a key contract was terminated in early 2024, leaving vulnerabilities unexamined.
The audit, led by the Office of Inspector General,
aims to evaluate NIST's oversight
and improve future processing.
NVD leaders recently pledged to use automation and AI tools
to catch up and prevent future delays
in vulnerability analysis.
The FBI has issued a warning that law firms and prevent future delays in vulnerability analysis.
The FBI has issued a warning that law firms are being targeted by the Silent Ransom Group,
also known as Chatty Spider, Lunamoth, and UNC 3753.
Active since 2022, Silent Ransom Group previously used phishing emails
posing as fake subscription alerts
to lure victims into phone-based scams.
As of March of this year, they've pivoted to calling employees directly while posing
as internal IT staff.
Victims are tricked into joining remote access sessions, enabling attackers to install tools
like WinSCP or R-Clone to exfiltrate sensitive data.
Silent Ransom Group then demands ransom, threatening to leak data and even calling employees to
pressure payment.
Their use of legitimate tools makes detection difficult.
While law firms are prime targets, medical and insurance organizations have also been
hit.
The FBI urges strong phishing awareness training,
MFA, data backups, and reporting of any SRG-related incidents.
Cisco Talos reports that a Chinese-speaking threat group, UAT 6382, has been exploiting
a critical vulnerability in CityWorks to breach US local government networks since January
of this year. CityWorks is an enterprise asset management and public asset
management platform designed primarily for local governments and public works
agencies. The flaw rated CVSS 8.6 allows remote code execution. After gaining
access, the attackers deploy web shells,
custom malware, and tools like Cobalt Strike and V-Shell
to establish long-term control.
The group showed a specific interest in utility management systems.
Evidence such as Chinese language code and tools like TetraLoader,
built using the Chinese malware builder MaLoader,
supports Cisco's assessment of the group's origin and motives.
The FBI urges affected organizations to update CityWorks immediately
and review Cisco's technical indicators to detect possible compromise.
The campaign underscores the risk of software vulnerabilities in municipal infrastructure
and the growing trend of financially motivated state-linked cyber operations.
The Everest Ransomware Group has leaked 502 megabytes of data containing sensitive information
on 959 Coca-Cola employees across the Middle East, including the UAE, Oman, and Bahrain.
Posted on both their dark web leak site and the XSS Cybercrime Forum, the files include
personal data like names, addresses, passports, visas, banking details, and salary records.
Also leaked are internal documents mapping COCA-COLCola's system admin accounts, HR roles,
and organizational hierarchies.
Critical intel for spearfishing, social engineering, and further intrusions.
While no passwords were exposed, the data significantly raises Coca-Cola's cyber risk.
Everest is known for leaking data when ransom demands are ignored. Coca-Cola hasn't commented on whether negotiations occurred.
Nova Scotia Power confirmed it suffered a ransomware attack traced back to March 19th
of this year, although it was only detected on April 25th.
The breach disrupted key IT systems like billing, payments, and customer portals, but not electricity
supply. About 280,000 customers had sensitive data stolen and leaked online after the utility
refused to pay ransom, citing sanctions compliance and law enforcement advice. Stolen data includes
names, contact info, addresses, social insurance and driver's license numbers,
and bank details for auto-pay users.
The company is offering free credit monitoring and has brought in cybersecurity experts to
restore systems and strengthen defenses. Coming up after the break on today's Threat Vector, David Moulton speaks with his Palo
Alto Network's colleagues Tanya Shastri and Navneet Singh about a strategy for secure
AI by design.
And the CIA's secret spy site was a Star Wars fan page.
Stay with us.
And now a word from our sponsor, Spy Cloud.
Identity is the new battlegroundground and attackers are exploiting stolen identities
to infiltrate your organization. Traditional defenses can't keep up.
Spy Cloud's holistic identity threat protection helps security teams uncover and automatically
remediate hidden exposures across your users from breaches, malware and phishing to neutralize
identity-based threats like account takeover, fraud and ransomware. Don't let invisible threats compromise your
business. Get your free corporate darknet exposure report at
spycloud.com slash cyberwire and see what attackers already know.
That's spycloud.com slash cyberwire. compliance regulations, third-party risk, and customer security demands are all
growing and changing fast. Is your manual GRC program actually slowing you down? If
you've ever found yourself drowning in spreadsheets, chasing down screenshots, or
wrangling manual processes just to keep your GRC program on track, you're not alone.
But let's be clear, there is a better way.
Banta's Trust Management Platform takes the headache out of governance, risk, and
compliance.
It automates the essentials, from internal and third-party
risk to consumer trust, making your security posture stronger, yes even
helping to drive revenue. And this isn't just nice to have. According to a recent
analysis from IDC, teams using Vanta saw a 129% boost in productivity. That's not
a typo, that's real impact.
So if you're ready to trade in chaos for clarity, check out Vanta and bring some serious
efficiency to your GRC game.
Vanta.
GRC.
How much easier trust can be.
Get started at Vanta.com slash cyber.
On today's Threat Vector segment, David Moulton speaks with his Palo Alto Network's colleagues,
Tanya Shastri and Navneet Singh about a strategy
for secure AI by design.
Hi, I'm David Moulton, host of the Threat Vector podcast, where we discuss pressing cybersecurity threats and resilience
and uncover insights into the latest industry trends.
In our latest episode, I sat down with two of my colleagues,
Tanya Shastri, SVP of Product Management,
and Nav Singh, VP of Marketing,
to explore a topic that's quietly redefining
enterprise security.
How to secure AI before it secures you a front page breach.
Tanya and Nav pull back the curtain on what you're not seeing.
Shadow AI, invisible threats inside browsers,
and the hidden vulnerabilities in your AI dev pipeline.
They break down how attackers are already exploiting gaps
in AI security and how the most forward-leaning
organizations are staying two steps ahead.
This episode will challenge the way you think about AI
and security and what you haven't done yet.
Check it out wherever you listen to podcast.
How are employees using GenAI tools like ChatGPT,
Co-Pilots, and Gemini inside the enterprise today?
And what risk does that create?
Employees are using Gen.AI applications
in a variety of different ways, especially, you know,
if you look at a marketing department,
I lead marketing for network security here
at Palo Alto Networks. we use it in many different ways.
One example of this, we just came out of
the cybersecurity's biggest conference,
which is RSA and biggest week, RSA week.
At that time, during that time,
we launched many new products,
and we had a campaign come out of it.
One of the things that we did in order to launch that was
actually do a competition,
internal competition where we had to use
AI tools to come up with taglines, concepts,
creative concepts, videos, and so on.
So we got 56 submissions in two days.
One of those submissions was actually chosen.
And that's what we went with.
We had Deploy Bravely for Prisma Airs.
That actually came out of this competition.
So this is just one of the ways in which we are using it
in my team.
And when we talk to customers,
they're using it in a variety of ways in sales,
marketing, finance, and so on.
Tanya, talk to me about the critical components
of a security strategy that allows employees AI
or GenAI use without putting data at risk.
Yes, so as Nav just mentioned,
there's tremendous adoption of AI,
but there is lack of visibility
into what users are actually using.
Essentially, we call it shadow AI.
So first and foremost,
one of the very important components is having
visibility into what the employees are using,
what apps they're using,
and then having a visibility into what the app actually does.
What does it do, the various attributes of that application,
so you can assess the risk of that application.
That is one area or one component that's important.
And as you think about it, really, these apps are being
generated so quickly.
And there are more and more new apps, so staying up to speed
and being able to recognize and understand which these new
apps are continues to be important.
Then another area, another component or another piece
that's very important is once you have visibility,
you have to be able to control the usage of the app.
And that control could be a blunt tool
where you say it's too risky and I just don't allow access
to the application.
But more importantly, you also have
to be able to have a finer approach in that you allow
access to
an application, but then you are able to have a finer-grained
ability to decide what you do with that application.
So for example, having access to a chat LLM, chat GPT,
for general use makes a lot of sense.
There's a lot of value to leveraging it, but you may not
want it to be used for situations where employees are
sharing code with it and asking ChatGPD to improve their code.
So being able to have that kind of fine-grained control over
how an app is used and what data is shared with the app,
ensuring that no private sensitive data is shared
either inadvertently or otherwise with the application.
That's also another important piece,
essentially control of that application.
Then if you choose to decide to allow that application to be used,
it's very important to continuously monitor
the traffic that's actually going between the application back and
forth to ensure that there are no threats, no malware,
no other command control, other such things in the
communication between the application and back.
Nav, let me take it over to you.
How does the secure AI by design approach help
organizations bake security into the AI development lifecycle?
So let me talk about a customer example.
So I was talking to a customer
which is a professional services firm
and they're building their application.
They in fact have tested it internally.
It helps their consultants prepare for their meetings
to X faster because it gives them so much information so quickly and so easily. helps their consultants prepare for their meetings 2x faster,
because it gives them so much information so quickly and so easily,
and which basically for a professional services firm means that
they could potentially even double their revenues with the same headcount.
So, it can be a game changer.
So, when you look at this,
going back to what you were saying about CISOs, they are
going to feel the pressure from their CEOs and the board to
really allow AI. So, that's why we believe that the best
approach is secure AI by design, which means you use AI in
your development lifecycle, as Tanya was mentioning.
So we offer capabilities to secure AI or safely enable AI in both use cases that we just mentioned.
Either employees using third-party GNI applications like ChatGBT, so employees can safely use,
but prevent sensitive data from leaking.
And secondly, enterprises developing their own AI
applications.
So all those risks that Tanya had mentioned,
so model scanning, red teaming so that we
can find vulnerabilities, looking at the posture,
do you have overly permissive AI applications or agents,
the runtime security, prompt injection attack,
preventing multiple different types of prompt injection attack, preventing multiple different types
of prompt injection attacks, right?
All of that is something that we offer
as part of our portfolio of AI.
And that's what we mean by being able
to secure AI applications by design
and securely being able to embrace AI.
Tanya, what does it mean to secure the entire AI pipeline
from development to deployment?
So I just shared how the entire stack for AI is new
and how there is a lot of complexity
in terms of the technologies that are being brought together
to deliver an AI application,
and then essentially the threats that it opens brought together to deliver an AI application, and then essentially
the threats that it opens up.
And when you think about it, essentially all the threats
that open up during development, deployment, and runtime
are essentially what we need to take care of.
So starting with development, being able to scan these ML
models, being able to have the confidence that the ML models that are
being used are secure,
do not have any malware or vulnerabilities in them,
starting right there with scanners and so on,
ensuring that there are no secrets shared inadvertently,
no data being included in code that should not be included,
those kinds of things at the code development time.
From a deployment standpoint, you really need to be able to
first assess what all exists in the infrastructure that is all
related to the AI application.
So essentially discovery, to be able to discover all the pieces
that are being brought together to develop the application.
Ensure that all those pieces are deployed correctly,
do not have any misconfigurations,
being able to ensure that that is done right.
That's another big piece from a deployment standpoint.
So essentially all the things I talked about,
whether it's new agents, plugins, LLMs, data sources,
all those need to be deployed and configured appropriately.
And then from a runtime perspective,
being able to continuously monitor.
So essentially when these applications are put in production,
they now access the outside world,
they're communicating with other applications,
with external applications, with external entities.
And being able to continuously monitor that connection and being able to ensure
that all the traffic that's going back and forth doesn't have any malware in it,
doesn't have any threats in it, there isn't any data being exfiltrated,
being able to make sure that there's no data loss, all those things are also important.
And I do also want to highlight, and I mentioned before, right,
with AI, there is no AI without data for all practical purposes.
So ensuring that the data is secure, not just the access to
the data, but important, secure data,, private data is locked down as appropriate.
If you've liked what you heard,
catch the full episode now
in your Threat Vector podcast feed.
It's called Securing AI in the Enterprise,
released May 22nd. Don't get left behind.
Be sure to check out the complete Threat Vector podcast wherever you get your favorite podcasts. And finally, imagine logging into a crusty old Star Wars fan site, StarWarsWeb.net, only to learn years later
that it wasn't just peddling Battlefront 2 nostalgia and LEGO sets.
It was a covert CIA channel for communicating with human intelligence sources around the
world.
Like these games, you will, read the site's Yoda quote, which, honestly, this podcast
host probably clicked on twice without ever realizing it was part of an international
spy network.
According to security researcher Ciro Santilli, this now-defunct relic was one of many CIA-operated
sites disguised as innocuous hobbies.
Extreme sports, Brazilian music, even comedy fan sites.
The idea?
Hide spy communications in plain sight.
The method?
A secret login triggered by typing a password into the site's search bar.
The results?
Well, I've got a very bad feeling about this.
Iranian authorities caught wind of the setup over a decade ago, eventually
unraveling a web that reportedly led to the deaths of over two dozen CIA sources
in China between 2011 and 2012. Santilli's interest in the case started
with some personal curiosity, his mother-in-law is part of the Falun Gong
movement, but quickly turned into a deep-dive
hobby involving Torbots, HTML sleuthing, and hours of crawling through the Wayback Machine.
His breakthrough was discovering that the CIA hadn't bothered to mask IP address patterns
or remove file names from publicly posted screenshots.
From there, he tracked down hundreds of related domains.
Zach Edwards, an independent cybersecurity researcher, says the findings align with what
the InfoSec community suspected for years.
He said, yes, the CIA absolutely had a Star Wars fan website with a secretly embedded
communication system, noting that even in Spycraft, developer
errors like leaving digital breadcrumbs can bring an operation down.
Santilli unearthed the sites using a mix of open-source tools, sheer patience, and presumably
zero Jedi mind tricks.
And that's the CyberWire.
For links to all of today's stories, check out our daily briefing at the cyberwire.com.
N2K's senior producer is Alice Carruth. For links to all of today's stories, check out our daily briefing at the cyberwire.com.
N2K's senior producer is Alice Carruth.
Our Cyberwire producer is Liz Stokes.
We're mixed by Trey Hester with original music and sound design by Elliot Peltsman.
Our executive producer is Jennifer Iben.
Peter Kilpey is our publisher.
And I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow. And now, a word from our sponsor, ThreatLocker.
Keeping your system secure shouldn't mean constantly reacting to threats.
Threat Locker helps you take a different approach by giving you full control over what software
can run in your environment. If it's not approved, it doesn't run. Simple as that.
It's a way to stop ransomware and other attacks before they start without adding extra complexity
to your day. See how Threat Locker can help you lock down your environment at
www.threatlocker.com