CyberWire Daily - Closing cracks before hackers do.
Episode Date: November 12, 2025Patch Tuesday. Google sues a “phishing-as-a-service” network linked to global SMS scams, and launches “private ai compute.” Hyundai notifies vehicle owners of a data breach. Amazon launches ...a bug bounty program for its AI models. The Rhadamanthys infostealer operation has been disrupted. An initial access broker is set to plead guilty in U.S. federal court. Our guest is Bob Maley, CSO from Black Kite, discussing a new AI assessment framework. “Bitcoin Queen’s” $7.3 billion crypto laundering empire collapses. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest On our Industry Voices segment, we are joined by Bob Maley, CSO from Black Kite, discussing a new AI assessment framework. You can hear Bob’s full conversation here. Selected Reading Microsoft Fixes Windows Kernel Zero Day in November Patch Tuesday (Infosecurity Magazine) Chipmaker Patch Tuesday: Over 60 Vulnerabilities Patched by Intel (SecurityWeek) ICS Patch Tuesday: Vulnerabilities Addressed by Siemens, Rockwell, Aveva, Schneider (SecurityWeek) Adobe Patches 29 Vulnerabilities (SecurityWeek) High-Severity Vulnerabilities Patched by Ivanti and Zoom (SecurityWeek) Google launches a lawsuit targeting text message scammers (NPR) Private AI Compute: our next step in building private and helpful AI (Google) Hyundai confirms security breach after hackers access sensitive data (CBT News) Amazon rolls out AI bug bounty program (CyberScoop) Rhadamanthys infostealer disrupted as cybercriminals lose server access (Bleeping Computer) Russian hacker admits helping Yanluowang ransomware infect companies (Bitdefender) $7.3B crypto laundering: ‘Bitcoin Queen’ sentenced to 11 Years in UK (Security Affairs) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry’s most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyberwire Network, powered by N2K.
Ever wished you could rebuild your network from scratch to make it more secure, scalable, and simple?
Meet Meter, the company reimagining enterprise networking from the ground up.
Meter builds full-stack, zero-trust networks, including hardware, firmware, and software,
designed to work seamlessly together. The result? Fast, reliable, and secure connectivity without the
constant patching, vendor juggling, or hidden costs. From wired and wireless to routing, switching,
firewalls, DNS security, and VPN, every layer is integrated and continuously protected in one
unified platform. And since it's delivered as one predictable monthly service, you skip the heavy
capital costs and endless upgrade cycles.
Meter even buys back your old infrastructure to make switching effortless.
Transform complexity into simplicity and give your team time to focus on what really
matters, helping your business and customers thrive.
Learn more and book your demo at meter.com slash cyberwire.
That's M-E-T-E-R dot com slash cyberwire.
We've got a patch Tuesday roundup.
Google sues a Fishing as a Service network linked to global SMS scams and launches a private AI compute.
Hyundai notifies vehicle owners of a data breach.
Amazon launches a bug bounty program for its AI models.
The Radamathis Info-Steeler operation has been disrupted.
An initial access broker is set to plead guilty in U.S. federal court.
Our guest is Bob Maley, CSO from Black Kite, discussing a new AI assessment framework.
And the Bitcoin Queen's $7.3 billion crypto laundering empire collapses.
It's Wednesday, November 12th.
2025. I'm Dave Bittner, and this is your Cyberwire Intel Briefing.
Microsoft's November patch Tuesday addressed more than 60 security flaws, including one actively exploited in the wild.
Among them is a race condition and double-free bug,
which allows low-privileged attackers to corrupt kernel memory
and escalate to full-system privileges.
While exploitation requires precise timing and local access,
chaining it with other flaws could enable full-system compromise,
credential theft, and ransomware deployment.
Researchers also warned about a critical remote code execution bug
in the Windows GDI Plus graphics library,
with a CVSS score of 9.8.
The flaw can be triggered by uploading a crafted image file,
making it a top patching priority for any internet-facing systems.
This update cycle also marks the first after Windows 10's end-of-life,
with Microsoft issuing an out-of-band fix for enrollment issues
in its extended security updates program.
In the industrial control system sphere,
major vendors including Siemens, Rockwell Automation, Aviva, and Schneider Electric issued advisories
for a batch of vulnerabilities affecting their ICS and OT products. This includes an Aveva flaw that
also impacts Schneider Electric solutions, underscoring vendor interdependencies. Although
exploitation evidence is not detailed in the reporting, the risks revolve around unauthorized
access and potential disruption of industrial processes.
Meanwhile, Adobe released updates addressing 29 vulnerabilities across products such as
in-design, in-copy, Photoshop, Illustrator, Substance 3D Stager, and format plugins.
Several of the flaws permit arbitrary code execution, and one involves a security bypass issue
in Adobe Pass.
Adobe assigned all bugs a priority rating of three, which indicates that exploitation is not
expected, and noted no current evidence of these vulnerabilities being used in the wild.
In the hardware and firmware space, Intel Corporation published around 30 new advisories,
covering more than 60 vulnerabilities in areas including zion processors, slim bootloader,
graphics, quick assist technology, and firmware and driver modules. The issues include high
severity flaws that could enable privilege escalation, denial of service,
and information disclosure.
Ivanti and Zoom
released patches this week for multiple
vulnerabilities, including several
rated high severity.
Avanti fixed three flaws in its
endpoint manager platform that could enable
remote code execution or
privilege escalation, affecting
all versions before 2024
SU4. The company
says there's no evidence of exploitation
so far.
Zoom also issued
nine advisories addressing three high
severity and six medium severity bugs across its desktop and mobile apps.
The most serious issues could allow privilege escalation, though none are known to be exploited.
Google has filed a lawsuit in U.S. federal court against a China-based criminal network it calls
Lighthouse, accused of running a large-scale fishing-as-a-service operation. The group allegedly
sells software kits and fake website templates that mimic major U.S. organizations, including
Google itself, the power widespread smithing scams sent via text message. According to the suit,
Lighthouse has operated more than 32,000 fraudulent sites impersonating the U.S. Postal Service
and may have compromised millions of credit cards across 120 countries. The defendant's real
identities are unknown, identified only by online aliases on telegram. Google's goal isn't
prosecution, but deterrence, seeking a court declaration that Lighthouse's infrastructure is
illegal to help other platforms shut it down and protect users from future fishing campaigns.
Elsewhere, Google has introduced a new platform called Private AI Compute, designed to bring its
Gemini AI models to the cloud while keeping user data private. The system processes information
in a sealed hardware-secured environment
using encryption and remote attestation
to prevent access, even by Google itself.
The company says the approach delivers the speed
and capability of cloud AI
with the privacy of on-device processing.
It's part of Google's broader push
to prove that powerful AI can also be privacy preserving.
Hyundai Auto Ever America,
the digital arm of Hyundai Motor Group,
is notifying vehicle owners about a data breach that expose names,
social security numbers, and driver's license details.
Hackers accessed company systems for nine days between February and March before detection.
While the company serves over 2.7 million users, only about 2,000 were affected.
Hyundai says it's investigating with outside experts and offering two years of credit monitoring.
The breach underscores growing industry concern over how automakers.
protect driver data.
Amazon has announced a new bug bounty program inviting select researchers to probe its
Nova large language models for security flaws.
The program will reward discoveries involving prompt injection, jailbreaking, and other
vulnerabilities with real-world exploitation potential.
Participants, chosen through an invite-only process, will also test whether Nova could be
manipulated to aid in developing weapons of mass destruction.
Amazon says the effort aims to strengthen AI safety across its ecosystem, which powers services like Alexa and AWS Bedrock.
The Radamanthus Info-Stealer operation has been disrupted, leaving many of its criminal customers unable to access their servers.
Researchers say users are reporting lost SSH access and new certificate-based logins, signs which suggest law and law enforcement.
enforcement intervention. Radamanthus, a subscription-based malware that steals credentials and cookies,
is typically spread through fake software and ads. Investigators believe German police or
Operation Endgame, a multinational campaign targeting cybercriminal infrastructure, may be behind
the takedown. The malware's tour sites are offline, but not officially seized.
Russian national Alexei Olegovich Volkov, age 25, is set to plead guilty in U.S. federal court for helping
ransomware gangs gain access to victim networks. Prosecutors say Volkov acted as an initial
access broker, selling stolen credentials to the Yang-Lowang Ransomware Group in exchange for a share
of ransom payments, earning over $250,000. Arrested in Rome in 2020,000.
and extradited to the U.S., Volkov has agreed to pay more than $9 million in restitution.
His case highlights the growing specialization within ransomware operations.
Coming up after the break, Bob Maley, CSO from Black Kite, discusses a new AI assessment framework
and a Bitcoin Queen's $7.3 billion dollar crypto laundering empire collapses.
Stay with us.
We've all been there.
You realize your business needs to hire someone yesterday.
How can you find amazing candidates fast?
Well, it's easy.
Just use Indeed.
When it comes to hiring, Indeed is all you need.
Stop struggling to get your job post noticed.
Indeed's sponsored jobs helps you stand out and hire fast.
Your post jumps to the top of search results, so the right candidates see it first.
And it works.
Sponsored jobs on Indeed get 45% more applications than non-sponsored ones.
One of the things I love about Indeed is how fast it makes.
makes hiring. And yes, we do actually use Indeed for hiring here at N2K Cyberwire. Many of my colleagues
here came to us through Indeed. Plus, with sponsored jobs, there are no subscriptions, no long-term
contracts. You only pay for results. How fast is Indeed? Oh, in the minute or so that I've
been talking to you, 23 hires were made on Indeed, according to Indeed data worldwide.
There's no need to wait any longer. Speed up your hiring.
now with Indeed. And listeners to this show will get a $75-sponsored job credit to get your jobs more
visibility at indeed.com slash cyberwire. Just go to indeed.com slash cyberwire right now and
support our show by saying you heard about Indeed on this podcast. Indeed.com slash cyberwire.
Terms and conditions apply. Hiring. Indeed is all you need.
What's your 2 a.m. security worry?
Is it, do I have the right controls in place?
Maybe are my vendors secure?
Or the one that really keeps you up at night,
how do I get out from under these old tools and manual processes?
That's where Vanta comes in.
Vanta automates the manual work,
so you can stop sweating over spreadsheets,
chasing audit evidence, and filling out endless questionnaires.
Their trust management platform continuously monitors your systems,
centralizes your data, and simplifies your security at scale.
And it fits right into your workflows,
using AI to streamline evidence collection, flag risks,
and keep your program audit ready all the time.
With Vanta, you get everything you need to move faster, scale confidently,
and finally, get back to sleep.
Get started at Vanta.com slash cyber.
That's VAN.
dot com slash cyber
Bob Mayley is
chief security officer at Black Kite
and in today's sponsored industry voices segment
we're discussing a new AI assessment framework.
Can we start with the big picture here?
From your point of view,
what kind of pressure our third party risk management teams
feeling right now when it comes to AI?
Extreme. In one word. That's all you need to think about. It's such a rapidly changing
industry that it's hard to keep up with the technology first. And then to be able to assess that,
I think people are totally clueless and just floundering in trying to figure out what's the
best way to do that, not only in our own organizations, but with the third parties.
And that's kind of what sparked the idea of this and started me doing some research about a year and a half ago.
Well, why have traditional risk frameworks struggled to keep up with this rapid adoption of AI?
Well, because AI changes so rapidly.
Fair enough.
And it's interesting.
So a lot of people think that AI is a completely separate entity that you have to do completely new assessments on AI.
And that is a totally misconception.
A lot of the underpinning infrastructure that AI runs on, you're already assessing.
But there are obviously new components about it.
And that's been the challenge.
And you know, you watch the industry every week, every two weeks, a new framework,
some new organization, new government comes out with.
Here's what we say is the best way to assess AI.
When I think of the current landscape of AI risk assessments, it strikes me that it's very fragmented.
And is this a matter of there being just so many frameworks or even so much regulation?
Or is it a lack of consistency?
What's going on here?
Yes, all the above.
And here, let me make an analogy.
So, AI is a technology.
that has been advancing faster than anything we've ever experienced before.
You know, if you go back far enough and we remember about in the early days of personal
computers, every two to three years, they would double in capabilities.
There were something called Moore's Law.
And that held true for a long time.
But now it seems like the AI competition every week a particular LLM has come
out with something new and better than the competition.
They've enhanced it.
The cost of computers down.
And, you know, so as a risk assessment methodology, yeah, that's a very sprawling
landscape that we have to take into.
And imagine a city where every city block they've developed their own slightly different
building code.
And that's because it might be the industry they're in.
The financial industry looks at things one way.
It might be geography.
If you look at in Europe, their initial frameworks that they were bringing out
weren't really focused so much on technical.
They were focused on other things.
But, you know, look at that.
The city keeps growing.
There's a new framework.
There's somebody new bringing it in.
And ultimately, what you've got is you have a city that is in,
in total chaos about what the best way to do.
You know, we looked at, and this again, you know, we give the number 50 plus, but I can
predict and I usually don't do predictions, that in the next month that's going to increase,
there'll be new ones.
But essentially, you know, going through that and synthesizing all that variety and seeing,
okay, underneath, what's the basic non-negotiable security standards in a building,
building code, you know, the analogy, it's like, you know, fire exits, load bearing walls,
plumbing, things like that. Why couldn't we do something like that for AI? That's kind of what
really was the genesis of all this. Well, let's dig into the BKGA3 framework. I mean, what
problem were you and your colleagues trying to solve when you began developing it? Well, I started
on my own because as a vendor, we undergo assessments. And I look at the history of how third-party
risk assessments have been done. And they're very static, very non-agile. And, you know, it's always
been challenging to be able to look at that and understand, you know, where the risk is or how I like
to say it is, you know, the reason why I look at a third party and I'm assessing them for risk is I want
understand and reduce the amount of surprise in that relationship.
Surprise is the unknown, uncertainty.
And, you know, I don't want to be surprised.
So that's what we're trying to do.
And so when AI, you know, AI's been around a long time.
It's not like it just came out.
But two and a half years ago, chat ATP really hit the market.
And it just captured people's attention.
It captured investors.
it's been expanding ever since.
So I started playing around with looking at using AI to analyze the frameworks in about a year
and a half, almost two years now, it was it was about 15.
And to see what commonalities there were.
And that was the genesis.
And as we saw it get worse and worse because new frameworks came out, new things happened.
well, the folks that run our research department, they thought that that initial look
that I looked at, that they found it to be very interesting and they expanded it.
And that's kind of, you know, I'm not the genius behind it.
I was a curious one that was asking questions and smart people then took it and run with
it and developed this.
Well, you've described this as an open standard for assessing AI risk.
Why is that openness important to you?
Well, it's openness is in the DNA of our company.
So one of our co-founders, John Lund Bullockbush, has always been focused on giving back to the community.
And from day one, you know, with Black Kite, there were tools.
There were things that he would make available that people could use to help, as I say it, reduce the surprise.
As he would say it, you know, assess the risk.
but that's always been important to the company.
And, you know, one of those challenges is if you look at in the AI world,
some of the frameworks that are maybe de facto standards, they're not free.
You have to pay.
You have to subscribe.
There's a tool you have to buy.
And that really goes against that, you know,
that whole concept of, you know, making things better for the global community.
So we looked at that.
We looked at that there really wouldn't be any value to us developing something that was proprietary.
Everything that Black Kite does is based on open standards.
And since there was no common open standard for AI, why not create one and put it out there and make it open to whoever wants to use it?
So that's what brought that to the table.
Can you take us behind the scenes?
I mean, you mentioned you evaluated hundreds of.
requirements from over 50 frameworks. What was that undertaking like? Well, for me, it was
using the power of AI to read them, to assess them, to categorize them, to use semantic
similarities to see which ones had commonalities, which ones could be boiled down to maybe
eight or nine frameworks that are looking at the same control. The language
is a little different, but ultimately they're trying to get to the same point to understand
about that particular thing. So why not make it simpler and be able to, you know, be that Rosetta
so to speak of all those different frameworks. So that's how I got started. And then the research
folks, they put a lot more intelligence into it. And I mean, they're some very smart people
And using tools like that, the company's been built on nine years ago,
using AI to be able to look at documents like SOC2s and information security policies
and assess those and analyze what controls they speak to.
So the technology was there, so they were able to use that as well.
While you all were on this journey, were there any surprises and anything that popped up
along the way that was unexpected?
The only thing unexpected is it's growing faster than I even imagined, the capabilities of the
various LLMs.
I was doing some research.
I was in a meeting and somebody mentioned that a particular version, an LLM was really biased,
and you should never use it.
It's horrible.
And statements like that are always interesting.
thing to me because I want to find out, you know, is that based on fact? So I used one version of
AI to do a deep dive and research that very question. And essentially the AI came back
a little snarky and said, yeah, that particular AI, it's only good for writing fiction.
And it said, I could do it better. And I go, oh, well, this sounds interesting. So I said,
Okay, well, here's a concept.
Go ahead, write something for me.
And what developed from that was,
it's a thing I publish on LinkedIn,
on LinkedIn articles.
It's called the Clause of Compliance,
the Rise of the Raccoon Red Team.
A hundred percent written by AI,
really crazy.
But then after the first one gave me a couple episodes,
it started to, I noticed that it was getting repetitive.
So I thought, oh, well, let's give its evil competitor an opportunity and let it write it.
And so I've been doing that.
My plan is to have four or five different LLMs complete the entire series and then do a vote and see which one was better.
And then that'll be my best A-I.
So that's how I'm looking at it.
Wow.
You know, you mentioned how quickly all of this is evolving.
How do you plan on keeping BKGA3 up to date, keeping it current?
Well, obviously, use of AI, use of automation,
constantly looking at changes in the compliance landscape, you know,
foremost, are there changes to any of the existing frameworks,
are there new frameworks?
That's one component of it.
The other component is looking at the risks that are being identified with AI,
MIT's done some really great research on that.
They've identified a large number of risks.
So that's part of, again, the DNA of our company looking at potential risks in third parties.
And this just leads right into that very well.
And it will grow.
That's one of the A's adaptability to be able to adapt to that ever-changing world, whether it's from the compliance or whether it's from bad actors.
and updating it on a much more frequent basis than most frameworks get done.
Most frameworks get changed at most once a year,
very, very low agile capability,
and it doesn't really work today that.
The bad actors are very agile.
They're using AI.
So being able to produce something that help an assessment process become more agile and keep up.
You know, there's this phrase that folks,
are using these days responsible AI risk management. I'm curious what that means to you and
how you feel like the industry is adapting to that. Well, that's a challenge when you have to identify
what does that mean? And your question is, what does it mean to me? Because it means something
different to everybody. That example I used about a particular LLM being biased,
Well, responsible AI is to reduce the biases as much as possible.
I don't think that we'll ever be able to completely remove all bias because it's a fundamental human frailty is that we tend to have biases.
And whether we know it or not, they get built into things that we do.
But responsibility is looking and trying to ensure that the –
through the development that you've done the best possible work to keep that that bias minimal.
For folks who are curious about this and want to dig deeper or even check out BKGA3,
what's the best way for them to find out more?
I'd have to check in with our folks who are going live with it.
There's an entire announcement.
There's resources that will be available that they'll publish.
It isn't live today.
It will be very shortly.
Check the black kite.com website for the announcements there and the links to the free resources will become available there.
You know, Black Kite has a history of releasing tools and research openly to the community.
It seems to me like this really aligns with that fundamental part of your mission.
Absolutely.
One of the things we bring out every week, it's called Focus.
Friday. It's the research that our team does to enhance our platform. And it's done. It's used for
that. But the research has a far greater value if you put it out there for everybody else so they can
take advantage of it as well. So it's that concept. So it's something that's not new. It's something
we've been doing for a long time. That's Bob Maley, Chief Security Officer at Black Kite.
At TALIS, they know cybersecurity can be tough and you can't protect everything.
But with TALIS, you can secure what matters most.
With TALIS's industry-leading platforms, you can protect critical applications, data and identities,
anywhere and at scale with the highest ROI.
That's why the most trusted brands and largest banks,
retailers, and health care companies in the world
rely on TALIS to protect what matters most.
Applications, data, and identity.
That's TALIS.
T-H-A-L-E-S.
Learn more at talusgroup.com slash cyber.
And now a word from our sponsor, Threat Locker, the powerful zero-trust enterprise solution that stops ransomware in its tracks.
Allow listing is a deny-by-default software that makes application control simple and fast.
Ring fencing is an application containment strategy, ensuring apps can only access the files, registry keys, network resources, and other applications they truly need to function.
shut out cybercriminals with world-class endpoint protection from threat locker.
And finally, London's Southwark Crown Court has officially dethroned the Bitcoin Queen.
Jemin Kwan, also known as Yadi Zhang, was sentenced to 11 years and eight months in prison
after laundering a staggering $7.3 billion from a Chinese crypto scam
that fleeced more than 128,000 victims.
Kwan, who fled China under a false identity,
tried to reinvent herself in London's luxury property market,
apparently forgetting that the blockchain remembers everything.
Police eventually seized 61,000 Bitcoin worth 5.5 billion pounds,
the largest cryptocurrency hall ever recorded.
Her accomplices didn't fare much better,
one serving nearly seven years, another five.
British officials called it a landmark case
in tracking digital crime,
proving that while money may talk,
in crypto, it also leaves a paper trail,
just with fewer trees.
And that's The CyberWire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights
that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like our show,
please share a rating and review in your favorite podcast app.
Please also fill out the survey in the show notes or send an email to Cyberwire at N2K.com.
N2K's senior producer is Alice Carruth.
Our Cyberwire producer is Liz Stokes.
We're mixed by Trey Hester with original music by Elliot Peltzman.
Our executive producer is Jennifer Ibin.
Peter Kilpie is our publisher, and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.
Thank you.
Thank you.
