CyberWire Daily - Jamming in a ban on state AI regulation.
Episode Date: May 13, 2025House Republicans look to limit state regulation of AI. Spain investigates potential cybersecurity weak links in the April 28 power grid collapse. A major security flaw has been found in ASUS mainboar...ds’ automatic update system. A new macOS info-stealing malware uses PyInstaller to evade detection. The U.S. charges 14 North Korean nationals in a remote IT job scheme. Europe’s cybersecurity agency launches the European Vulnerability Database. CISA pares back website security alerts. Moldovan authorities arrest a suspect in DoppelPaymer ransomware attacks. On today’s Threat Vector, host David Moulton speaks with Noelle Russell, CEO of the AI Leadership Institute, about how to scale responsible AI in the enterprise. Dave & Buster’s invites vanish into the void. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector Recorded Live at the Canopy Hotel during the RSAC Conference in San Francisco, David Moulton speaks with Noelle Russell, CEO of the AI Leadership Institute and a leading voice in responsible AI on this Threat Vector segment. Drawing from her new book Scaling Responsible AI, Noelle explains why early-stage AI projects must move beyond hype to operational maturity—addressing accuracy, fairness, and security as foundational pillars. Together, they explore how generative AI models introduce new risks, how red teaming helps organizations prepare, and how to embed responsible practices into AI systems. You can hear David and Noelle’s full discussion on Threat Vector here and catch new episodes every Thursday on your favorite podcast app. Selected Reading Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill (404 Media) Spain investigates cyber weaknesses in blackout probe (The Financial Times) Critical Security flaw in ASUS mainboard update system (Beyond Machines) Hackers Exploiting PyInstaller to Deploy Undetectable macOS Infostealer (Cybersecurity News) Researchers Uncover Remote IT Job Fraud Scheme Involving North Korean Nationals (GB Hackers) European Vulnerability Database Launches Amid US CVE Chaos (Infosecurity Magazine) Apple Security Update: Multiple Vulnerabilities in macOS & iOS Patched (Cybersecurity News) CISA changes vulnerabilities updates, shifts to X and emails (The Register) Suspected DoppelPaymer Ransomware Group Member Arrested (Security Week) Cracking The Dave & Buster’s Anomaly (Rambo.Codes) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the CyberWire Network powered by N2K.
Hey everybody, Dave here.
Join me and my guests, Outpost 24's Laura Enriquez and Michelo Steppa on Tuesday, May
13th at noon Eastern time for a live discussion on the biggest threats hitting web applications
today and what you can do about them. We're going to talk about why attackers still
love web apps in 2025, the latest threat trends shaping the security landscape, how
to spot and prioritize critical vulnerabilities fast, along with scalable
practical steps to strengthen your defenses. Again, the webinar is Tuesday,
May 13th for our live conversation on the state
of modern web application security. You can register now by visiting events.thescyberwire.com.
That's events.thescyberwire.com. We'll see you there.
Hey, everybody. Dave here. I've talked about DeleteMe before, and I'm still using it because it still works.
It's been a few months now, and I'm just as impressed today as I was when I signed
up.
DeleteMe keeps finding and removing my personal information from data broker sites, and they
keep me updated with detailed reports, so I know exactly what's been taken down.
I'm genuinely relieved knowing my privacy isn't something I have to worry about every
day.
The DeleteMe team handles everything.
It's the set it and forget it piece of mind.
And it's not just for individuals.
DeleteMe also offers solutions for businesses, helping companies protect their employees'
personal information,
and reduce exposure to social engineering and phishing threats.
And right now, our listeners get a special deal, 20% off your DeleteMe plan.
Just go to joindeleteeme.com slash n2k and use promo code N2K at checkout.
That's joindeleteeme.com slash n2k, code N2K at checkout. That's joindeleteme.com slash N2K code N2K.
House Republicans look to limit state regulation of AI.
Spain investigates potential cybersecurity weak links in the April 28th power grid collapse.
A major security flaw has been found in ASIS motherboard's automatic update system.
A new macOS info-stealing malware uses Pi installer to evade detection.
The U.S. charges 14 North Korean nationals
in a remote IT job scheme.
Europe's cybersecurity agency
launches the European Vulnerability Database.
CISA pairs back website security alerts.
Moldovan authorities arrest a suspect
in doppel-paymer ransomware attacks.
On today's threat vector, David Moulton
speaks with Noelle Russell from the AI Leadership Institute about AI operational maturity.
And Dave and Buster's invites vanish into the void.
It's Tuesday, May 13, 2025. I'm Dave Bittner and this is your CyberWire Intel Briefing.
Thanks for joining us here.
Once again, it's always great to have you with us.
House Republicans have added controversial language to the new budget reconciliation bill
that could severely limit state regulation of artificial intelligence.
The bill, introduced by Representative Brett Guthrie, includes a clause barring states
from enforcing any AI-related laws for 10 years.
The sweeping language could nullify existing laws in states like California and New York
that require transparency and bias audits for AI tools in health care and hiring.
Critics argue this is a major gift to the AI industry, which has close ties to Trump-era
officials and has resisted oversight.
If passed, the bill would block states from protecting citizens from unchecked AI use,
marking a dramatic shift in tech policy.
Spain is investigating whether small renewable energy generators were a cybersecurity weak
link in the April 28th power grid collapse
that cut 60 percent of the country's electricity, the Financial Times reports.
The National Cybersecurity Institute is questioning solar and wind operators about their cyber
defenses, remote access, and system anomalies.
While no cyber attack has been confirmed, authorities haven't ruled one out,
and a judge is now probing that possibility. Spain's shift from centralized fossil fuel
plants to thousands of smaller renewable sites has increased potential cyber attack targets.
Devices managing energy flow and communication links may offer entry points. Red Electrica, the grid operator, said no attack hit its systems, but flagged risks
tied to data gaps from small producers.
Despite skepticism from energy experts about the likelihood of a coordinated cyber attack,
officials stress that all scenarios remain under review. Spain is investing 1.1 billion euros to boost national cybersecurity across sectors.
A major security flaw has been found in ASIS main board's automatic update system affecting
Armory Crate and Driver Hub tools on AMD and Intel platforms.
Two vulnerabilities allow remote attackers to alter system behavior or access features
via crafted HTTP requests.
The root issue lies in software auto-installed from the UEFI BIOS using Windows Platform
Binary Table.
ASUS has released updates to fix these issues.
Users should update immediately and scan BIOS files for threats using VirusTotal.
A new info-stealing malware targeting macOS systems has been uncovered using PyInstaller
to evade detection.
First spotted in January and analyzed by Jamf Threat Labs, the malware is bundled in mock-o binaries
and remains undetected by most antivirus tools.
PyInstaller allows the malware to run without a native Python installation, especially effective
since macOS 12.3 removed built-in Python.
The malware harvests user credentials via fake AppleScript dialogues, extracts data
from the keychain, and targets crypto wallets.
It uses multiple obfuscation layers, including Base85 encoding, XOR encryption, and Zlib
compression.
The malware's behavior is stealthy, leaving little trace on disk, and operates across
Mac architectures.
Researchers warn users to be wary of unsigned executables and unexpected password prompts.
They recommend monitoring for Pi installer activity and suspicious environment variables
as this method grows more popular among attackers.
Meanwhile, Apple has issued a critical security update for macOS Sequoia to patch eight major
vulnerabilities that could allow malicious apps to access sensitive user data.
The flaws affect key components like Apple Intelligence Reports, Core Bluetooth, Finder,
and the TCC privacy framework.
Notable issues include permission bypasses and improper state
management that could expose personal data. Though no active exploitation has been reported,
security experts warn these flaws underscore growing challenges in maintaining privacy
across complex operating systems.
The U.S. has charged 14 North Korean nationals in a scheme that used stolen identities to
secure remote IT jobs at U.S. companies, sending at least $88 million to the DPRK over six
years. Flashpoint's investigation, based on a DOJ indictment, revealed that the group
used fake companies, malware, and remote access tools
to infiltrate corporate networks. Domains linked to fake firms like Baby Box Info and
Cubics Tech US were used to build fake resumes and references. Infected devices in places
like Pakistan, Nigeria, and Dubai were found with saved credentials, job board activity, and evidence of coordination
with North Korean handlers.
Signs included Korean language settings, VPNs masking DPRK connections, and tactics to avoid
detection like faking voice calls and smuggling laptops.
The findings point to a global operation aimed at stealing money, data, and access, reinforcing
the need for stronger cybersecurity and hiring verification across industries.
Europe's cybersecurity agency, INISA, has officially launched the European Vulnerability
Database, a centralized platform for tracking cybersecurity flaws.
Developed under the NIS 2 directive, the EUVD mirrors the U.S. national vulnerability database
and aims to enhance risk management and transparency across the EU.
It gathers data from sources like CSERTS, vendors, and databases such as MITRE's CVE
and CISA's KEV catalog.
Users can access three dashboards highlighting critical, exploited, and EU-coordinated vulnerabilities.
Each entry includes details like affected products, severity, and mitigation steps.
Concerns over the future of the US-based CVE program have increased interest in the EUVD as a stable,
independent resource.
Inissa says the tool is vital for public users, companies, and authorities to better manage
threats and respond effectively to known vulnerabilities.
CISA announced a major change in how it shares cybersecurity updates.
Only urgent alerts about emerging
threats or major cyber activity will now appear on its website.
Routine guidance, vulnerability notices, and product warnings will be distributed via
email, RSS, and ex-Twitter.
This shift, possibly tied to budget cuts and staff reductions under a Trump-aligned cost-cutting initiative has raised concerns among experts. Critics, including former CISA
Director Jen Easterly, warned that reducing visibility for routine security
updates undermines national cybersecurity. The policy reflects a
broader trend of federal agencies moving communications to ex-Twitter, despite its limitations.
Agencies like the NTSB and Social Security Administration have also begun phasing out
traditional press releases and email updates.
Observers worry this change favors Elon Musk's platform and limits accessibility to critical
public information. CISA urges users to subscribe to its email notifications
to stay informed.
Moldovan authorities have arrested
a 45-year-old foreign national suspected of involvement
in doppel-paymer ransomware attacks,
including a 2021 attack on the Dutch Research Council
that caused 4.5 million euros in damages.
The suspect, whose identity remains undisclosed, is accused of ransomware deployment, extortion,
and money laundering.
Seized items include laptops, phones, and 84,800 euros in cash.
The arrest follows international efforts to dismantle Doppelpamer, a ransomware strain
linked to the TA505 Group, which has targeted critical infrastructure and multiple sectors
since 2019.
Coming up after the break, David Moulton speaks with Noelle Russell, CEO of the AI Leadership
Institute about AI operational maturity, and Dave and Buster's invites vanish into the
void. And now, a word from our sponsor, ThreatLocker.
Keeping your system secure shouldn't mean constantly reacting to threats.
ThreatLocker helps you take a different approach by giving you full control over what software
can run in your environment.
If it's not approved, it doesn't run. Simple as that. It's a way to stop ransomware and other attacks
before they start without adding extra complexity to your day. See how ThreatLocker can help
you lock down your environment at www.threatlocker.com. Let's be real, navigating security compliance can feel like assembling IKEA furniture without
the instructions.
You know you need it, but it takes forever and you're never quite sure if you've done
it right.
That's where Vanta comes in.
Vanta is a trust management platform that automates up to 90% of the work for frameworks
like SOC 2, ISO 27001, and HIPAA, getting you audit ready in weeks, not months.
Whether you're a founder, an engineer, or managing IT and security for the first time,
Vanta helps you prove your security posture without taking over your life.
More than 10,000 companies, including names like Atlassian and Quora,
trust Vanta to monitor compliance, streamline risk, and speed up security reviews by up to five times.
And the ROI? A recent IDC report found Vanta saves businesses over half a million dollars a year and pays
for itself in just three months.
For a limited time, you can get $1,000 off Vanta at vanta.com slash cyber.
That's vanta.com slash cyber. On today's Threat Vector segment, host David Moulton speaks with Noelle Russell, CEO of
the AI Leadership Institute, about how to scale responsible AI in the enterprise. Hi, I'm David Moulton, host of the Threat Vector Podcast, where we discuss pressing
cybersecurity threats and resilience and uncover insights into the latest industry trends.
In my latest episode, I sat down with Noel Russell, founder and chief AI officer at AI
Leadership Institute, to talk about how to scale responsible AI in the
enterprise. Noelle's advice, be a doer not a talker. In a world racing to adopt AI,
it's a reminder that hands-on experience matters more than hype and that early
decisions about accuracy, fairness, and security can have long lasting
consequences. This episode will help you ask better questions, close blind spots, and move forward with confidence.
Check out the episode wherever you listen to podcasts.
Noel is a multi-award winning futurist and an executive AI strategist whose career spans roles at
Amazon, Alexa, AWS, Microsoft, IBM, Accenture, and NPR.
And now she's the author of a powerful new book, Scaling Responsible AI, From Enthusiasm
to Execution, where she outlines the framework and principles that organizations can use
to scale AI ethically, securely, and
effectively.
I downloaded the PDF copy of the book and got into it as far as I could.
Before I said, you know what, I need to have a conversation with you about it.
And today we're going to talk about AI leadership going from prototyping into production, and then how organizations can rapidly adopt
what they're doing in generative AI,
and what is the tipping point,
that balancing innovation with risk, speed,
and responsibility.
So Noel, your book, Scaling Responsible AI,
From Enthusiasm to Execution,
I think it's already making waves,
and I especially liked your baby tiger
metaphor.
And I see you've got your baby tiger with you today.
Bruiser.
Bruiser.
I love the framing.
It's both cute, but baby tigers are dangerous if mishandled.
Can you tell us where that analogy came from and what you want business leaders to take
away from that analogy? Absolutely. It actually came from and what you want business leaders to take away from that analogy.
Absolutely. It actually came from my journey, as you mentioned.
Yes, I've worked at a lot of companies. The interesting thing about my career is that I always end up at these companies
before they've done a thing, before they've gone into the world of Amazon Alexa or before they've launched
cognitive services at Microsoft. And so I was at Microsoft, I was hired to help the research organization productize AI.
So they had 17 research models that
weren't going to be in my purview.
I immediately thought of them like,
I would use the term herding cats.
So herding cats transformed into this concept of a tiger,
because cats aren't that fierce and I'm a cat owner,
but you don't want a bunch of cats around,
but they're more a nuisance than a danger. So I realized I needed to I'm a cat owner, but you don't want a bunch of cats around, but they're more a nuisance than a danger.
So I realized I needed to change that a little bit,
and so we ended up with a tiger.
That metaphor though has now become even more interesting over time.
Because now we're looking at,
I always will tell people when you start an AI project,
you start with this adorable,
cute little model
that you think, you know, it does novel things,
trite things, it's exciting, everyone loves it,
people want to be on the team.
And then at some point, you're hoping someone will go,
wow, Baby Tiger, like how big are you going to be?
Or what are you going to eat?
Or you have razor sharp teeth,
like how much do you have to eat?
Where are you going to live?
What happens when I don't want you anymore?
Like no one asks that in baby tiger mode.
And so that's how this book was created,
was literally I was like, what happens when,
like, it's still a baby tiger,
but like, nobody's asking these questions.
So...
What happens when it grows up?
Yes. How do we, you know, avoid...
How do you deal with that?
Yeah, baby tigers become big tigers,
and big tigers eat people, right?
Like, so...
Yep.
Let's be careful.
Well, let's talk about the human element of responsible AI.
You emphasize that people,
not just the technology,
are the key to responsible AI.
What's the role of a security culture
in helping AI succeed at scale?
So in this case, we look at that weaving, I like that you said the DNA. I haven't used that So in this case, we kind of look at that weaving.
I like that you said the DNA.
I haven't used that analogy in a while,
but it has to be part of the DNA.
It has to be woven into the fabric of these projects.
So now all of a sudden, which is why most of the time,
the technology part is probably 25% of what I do
when I go to an organization, help them build a solution
or deploy a solution.
The tech is usually not the hard part.
The hard part is, how do you get a team of people
that are going to care about all the things
that we've shared, that are going to care about accuracy
and fairness and security, and how do you get them
into that project early enough to ensure
that you've built it into the model's behavior,
not just bundled it on.
That's why governance is required, but it's not enough
because you can just change your governance policies, or worse,
get acquired by a company that completely dismantles your governance process.
Then what are you going to do? So it needs to be built in, and that's the beauty of having LLMs
as part of your infrastructure. So I'll encourage, you know,
if we expand our mind and think about how do we use an LLM to actually
be the security auditor in
these systems and embed it into the deployed feature.
So now when you get that feature and LLM is built in to say,
oh, no, these are the rules by which I'm abided.
Yeah.
There's a framework called the AI safety system,
and Microsoft and Amazon both use it.
I think Microsoft is the only one that's called it out,
that this is what they do intentionally.
But that safety system is like four layers,
and it starts with the human AI experience,
which is like that's when you involve security,
legal, compliance, everyone's in the room,
plus the line of business owners,
plus the engineers, and you're like,
what are we trying to do?
This is when you define delegation.
What's the AI going to do?
What are the humans going to do?
This is like the Skynet moment,
when you decide if you want to, can you give everything to the AI going to do? What are the humans going to do? This is like the Skynet moment, right? When you decide, if you want to,
can you give everything to the AI?
You could, it'll hurt you, baby tiger, right?
But most organizations are like,
no, there's stuff I want to keep.
Usually, security is one of those things.
Accuracy is one, fairness is one.
So there are certain things,
but once that human AI experience is defined,
that's not a technical problem, that's like a designer problem.
So you have these user experience designers designing how AI will be
integrated into a workflow or a process.
The next thing is the system prompt,
is realizing with every machine you deploy,
you have the ability to control the way it operates.
Most people, when they think prompt engineering,
they're thinking the prompts they use to ask their questions.
But this is the prompt that's used to tell
the bot how to answer the questions.
Okay.
That's completely controlled and most context windows for that,
it's like 375,000 characters.
That's a lot of space for you to,
and that's the first thing I do in
an executive briefing when they're like,
yeah, we're using AI.
I'm like, great, let's take a look at one.
I go into the configuration of the system prompts and it's like, you are a bot that does blah.
That's it.
And I'm pretty sure it's a default setting.
Yeah.
Like, I mean, it's not uncommon to many of these security
things you walk in, you're like, you know,
we wrote a book on this.
There's a document on this.
Like, it's well-documented, but people just won't do it.
Many reasons, time, resources.
But now you can build an LLM that will infuse it into
the life of your systems and feature releases.
There's no excuses now.
Then just quickly, the last two are less controlled.
One, model selection.
So we talked about Helm,
picking the right model for the right task.
Then the last one is infrastructure,
which again, we're getting deeper and deeper.
So if you're not building a model,
you won't get to choose the infrastructure it runs on,
but you should know, are you running on Amazon?
Are you running on Microsoft? Are you running on Google?
Are you running on hardware in your basement?
Are you good at that? Have you ever built a NIC card?
Nobody asks these questions.
How far down the stack do you want to go?
Yeah.
But you should know.
But you should know, or at least they should be transparent about it.
Even if they have what are called system cards.
So I was just speaking with CISO at Anthropic and Meta
at the event here, and they both were like,
we have system cards and they monitor how many people
read them, and it's like less than 1% of people
who use their systems go to that page
and download their system cards.
Not because they didn't publish it,
not because they didn't say we're responsible. Here you go, explainability. People aren't
even asking the question.
If you like what you've heard, catch the full episode now on your threat vector podcast feed. It's called How to Scale Responsible AI in the Enterprise,
released May 6th.
And be sure to check out the complete threat vector podcast
right here on the N2K Cyberwire Network,
or wherever you get your favorite podcasts.
What's the common denominator in security incidents?
Escalations and lateral movement.
When a privileged account is compromised,
attackers can seize control of critical assets.
With bad directory hygiene and years of technical debt,
identity attack paths are easy targets
for threat actors to exploit,
but hard for defenders to detect.
This poses risk in active directory,
entra ID and hybrid configurations.
Identity leaders are reducing such risks with Attack Path Management. You can learn how Attack
Path Management is connecting identity and security teams while reducing risk with Bloodhound
Enterprise, powered by SpectorOps. Head to spectorops.io today to learn more. SpectorOps, see your
attack paths the way adversaries do.
And finally, a recent episode of the Search Engine podcast tackled an absurd but very
real iOS bug.
Say the phrase, Dave and Busters in an audio message, and poof, the message vanishes into
the void.
It never reaches the recipient, leaving only a ghostly dot dot dot typing animation behind.
It's all thanks to iOS's hyper-vigilant blastdoor service.
Turns out the transcription engine hears Dave and Buster's, transcribes it with an ampersand,
and forgets to properly escape it in XHTML.
The poor parser sees the rogue ampersand, panics, and nopes out, crashing the message.
Basically, Apple's message security is so strict it breaks over the mention of a popular
sports bar. The bug isn't dangerous, it's actually a sign that Blastdoor is doing its
job. But still, maybe don't invite anyone to Dave and Buster's via voice message
unless you want your plans to mysteriously disappear. And that's the CyberWire.
For links to all of today's stories, check out our daily briefing at the cyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly
changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey
and the show notes or send an email to cyberwire at n2k.com.
N2K's senior producer is Alice Carruth. Our cyberwire producer is Liz Stokes. We're
mixed by Trey Hester with original music and sound design by Elliot Keltsman. Our executive
producer is Jennifer Ibane. Peter Kilpey is our publisher, and I'm Dave Bittner. Thanks for listening,
we'll see you back here, tomorrow. And now a word from our sponsor, Spy Cloud.
Identity is the new battleground, and attackers are exploiting stolen identities to infiltrate
your organization.
Traditional defenses can't keep up.
Spy Cloud's holistic identity threat protection helps security teams uncover and automatically
remediate hidden exposures across your users from breaches,
malware and phishing to neutralize identity-based threats like account takeover, fraud and ransomware.
Don't let invisible threats compromise your business.
Get your free corporate darknet exposure report at spycloud.com slash cyberwire and see what
attackers already know.
That's spycloud.com slash cyberwire.