CyberWire Daily - Europe clamps down on global hackers.
Episode Date: March 17, 2026. The EU imposes sanctions after cyberattacks. DHS boosts surveillance spending. AI firms recruit weapons-risk experts. Stryker disruption, no patient impact. LeakNet leans on ClickFix. Sears chatbot data spills. A Chinese security firm leaks a private key. Tech giants team up on scams. Teens sue xAI over alleged AI-generated abuse. On today’s Threat Vector segment, David Moulton and guest Erica L. Shoemate, founder of The EN Strategy Group, explore how AI is fundamentally reshaping the security landscape. Cyber crooks cause a complimentary curbside convenience. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector What if the choices we make about AI security today determine who holds power tomorrow? On this Threat Vector segment, David Moulton and guest Erica L. Shoemate, founder of The EN Strategy Group, explore how AI is fundamentally reshaping the security landscape, from compressed decision-making timelines and asymmetric threat capabilities to the erosion of trust that creates strategic vulnerabilities. You can listen to David and Erica's full conversation here and catch new episodes of Threat Vector from Palo Alto Networks each Thursday on your favorite podcast app. 
Selected Reading EU Sanctions Iranian and Chinese Firms for Cyberattacks Against European Networks (TechNadu) DHS-built surveillance apparatus to surge in year ahead, documents show (FedScoop) AI firm Anthropic seeks weapons expert to stop users from 'misuse' (BBC) Stryker attack wiped tens of thousands of devices, no malware needed (Bleeping Computer) LeakNet ransomware uses ClickFix and Deno runtime for stealthy attacks (Bleeping Computer) Sears Exposed AI Chatbot Phone Calls and Text Chats to Anyone on the Web (WIRED) China's biggest cybersecurity firm accidentally leaked an SSL key in a public installer (Neowin) Google has signed the Industry Accord Against Online Scams and Fraud. (Google) Teenage girls sue Musk’s xAI, accusing Grok tool of creating child sexual abuse material (The Guardian) Free parking in Russia after Distributed Denial-of-Service attack knocks city's parking system offline (Bitdefender) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? N2K CyberWire helps you reach the industry’s most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
AI is changing how enterprises operate and how they stay protected.
It's time to eliminate risk and protect innovation.
From March 23rd through the 26th, join Trend Micro for actionable AI security insights.
Catch impactful sessions at RSAC, then unwind and grab a bite at their lounge.
Experience industry-leading AI security in person, engage with the experts, and get your chance to win $500,000.
Secure AI fearlessly in San Francisco.
Learn more at trendmicro.com slash RSA.
The EU imposes sanctions after cyber attacks.
DHS boosts surveillance spending.
AI firms recruit weapons risk experts.
Stryker says their disruption led to no patient impact.
LeakNet leans on ClickFix. Sears chatbot spills data. A Chinese security firm leaks a private key. Tech giants team up on scams.
Teens sue xAI over alleged AI-generated abuse. On today's Threat Vector segment, David Moulton
and guest Erica L. Shoemate, founder of The EN Strategy Group, explore how AI is fundamentally
reshaping the security landscape. And cyber crooks cause a complimentary curbside convenience.
It's Tuesday, March 17th, 2026. I'm Dave Bittner, and this is your CyberWire Intel briefing.
Thanks for joining us here today. It's great to have you with us.
The European Union has imposed targeted sanctions on three foreign companies and two individuals
linked to cyberattacks against its member states. The measures affect China-based Integrity
Technology Group and Anxun Information Technology,
along with Iran-based Emennet Pasargad.
EU officials say Integrity facilitated the compromise of more than 65,000 devices
across six countries between 2022 and 2023.
Anxun allegedly provided hacking services targeting critical infrastructure,
while its co-founders were also designated.
Emennet Pasargad is accused of breaching a French database,
selling the data on the dark web and conducting disinformation operations during the 2024 Paris Olympics.
The sanctions prohibit EU entities from providing financial resources and impose travel bans on individuals.
The EU's cyber sanctions regime now covers 19 individuals and seven entities,
reflecting a broader response to escalating global cyber threats.
The Department of Homeland Security is preparing a major expansion of surveillance technology spending in 2026,
with contract forecasts outlining hundreds of millions of dollars for enhanced detection and tracking systems.
This includes a $1 billion agreement with Palantir and additional investments in AI-driven platforms,
mobile surveillance tools, and data extraction technologies. Officials and advocacy
groups say increased funding, including a $191 billion package passed in 2025, has significantly
accelerated these efforts. Critics argue oversight has not kept pace. Lawmakers and watchdogs have
raised concerns about civil liberties risks tied to tools capable of facial recognition,
phone data extraction, and large-scale monitoring. Questions have also emerged about transparency
as privacy impact assessments declined sharply and none have been filed this year.
Internal tensions are also surfacing.
The DHS Inspector General alleges the agency has obstructed oversight efforts,
while lawmakers continue to push for investigations and limits on surveillance authorities.
Anthropic is seeking a chemical weapons and explosives expert
to help prevent what it calls catastrophic misuse of
its AI tools, amid concerns they could reveal how to build dangerous weapons. The role
requires experience in weapons defense and knowledge of radiological devices. OpenAI has posted
a similar position, reflecting a broader industry trend. While companies frame these hires as
safety measures, some experts warn they may introduce new risks by exposing AI systems to sensitive
weapons knowledge. Critics also highlight
the lack of international regulation governing AI and weapons-related information,
raising concerns about oversight as the technology continues to advance.
Stryker says a recent cyber attack was contained to its internal Microsoft environment
and triggered a mass device wipe, disrupting operations but not products or patient safety.
The company reports that tens of thousands of employee devices were remotely erased after
attackers gained administrator access and used Microsoft Intune to issue wipe commands.
Investigators found no evidence of malware deployment or data exfiltration,
despite claims by the handler group that it destroyed over 200,000 systems and stole data.
Electronic ordering remains offline, forcing manual processing while restoration efforts continue.
The incident shows how compromised identity and cloud management tools
can cause large-scale disruption without ransomware.
According to Stryker and investigators,
medical devices were unaffected and recovery is underway.
LeakNet ransomware is using a ClickFix social engineering lure
and the legitimate Deno runtime to gain initial access
and execute malware in memory, reducing detection.
Researchers at ReliaQuest report
that victims are tricked into running malicious scripts,
which deploy Deno, a signed JavaScript runtime, to execute payloads directly in memory.
This Bring Your Own runtime approach helps bypass security controls and leaves minimal forensic evidence.
Once active, the malware fingerprints the system, connects to command-and-control infrastructure,
and enables follow-on actions like credential theft, lateral movement, and data exfiltration via Amazon S3.
Attackers are increasingly abusing trusted tools to evade defenses,
according to ReliaQuest. Consistent behaviors like unusual Deno use
or abnormal PsExec activity may help defenders detect these attacks.
Millions of customer interactions with Sears Home Services AI chatbot Samantha
were exposed in publicly accessible databases,
according to security researcher Jeremiah Fowler.
The data included 3.7 million chat logs, 1.4 million audio files,
and transcripts containing sensitive customer details like names,
addresses, phone numbers, and appliance information.
Some recordings captured hours of ambient audio after calls ended,
potentially exposing private conversations.
The databases, owned by Transformco, were
secured after disclosure, but it remains unclear how long they were exposed or if others accessed
them. Exposed service data can enable targeted phishing and fraud. Researchers warn that rapid AI adoption
without strong data protections increases privacy and reputational risk for companies handling large
volumes of customer interactions. Chinese security firm Qihoo 360 reportedly exposed a sensitive wildcard
SSL private key inside the public installer for its 360 Security Claw AI assistant,
creating serious security risks. Researcher Lucas Alejnick found the key embedded in an uncompressed
archive, allowing anyone to extract it and potentially authenticate as the company's servers.
The certificate, valid until 2027, covers all subdomains, meaning attackers could impersonate
services, intercept traffic, or launch convincing phishing campaigns. The issue is notable,
given Qihoo 360's role as a major cybersecurity provider with hundreds of millions of users.
Leaked private keys undermine core internet trust mechanisms. According to available reports,
the company has not yet revoked the certificate or issued a public response, leaving potential
exposure unresolved.
Google and major tech companies have signed the industry accord against online scams and fraud
at the UN Global Fraud Summit, aiming to coordinate defenses against increasingly sophisticated global
scam networks. The agreement brings together firms like Amazon, Microsoft, and Meta to share
threat intelligence and align efforts. Google also plans to expand its $15 million investment
with AI-driven detection tools,
increased collaboration with law enforcement,
and initiatives like the Global Signal Exchange.
Scams are becoming more organized and cross-border,
requiring unified industry and government responses
to reduce financial and emotional harm.
Three teenage girls have filed a lawsuit
against Elon Musk's xAI,
alleging its Grok image generator was used
to create and distribute AI-generated child sexual abuse material using their photos.
The complaint says altered nude images of the minors were shared on platforms like Discord and
Telegram without consent, with one case leading to a suspect's arrest after CSAM was found
on his device. Plaintiffs allege the content was generated through a third-party app using
Grok's technology, arguing xAI still bears responsibility because it licenses and powers the
system. The case highlights growing risks of AI-generated exploitation and questions platform
accountability. According to the lawsuit, xAI failed to prevent misuse despite known risks,
contributing to reputational and psychological harm. The company has not publicly responded.
Coming up after the break, on today's threat vector segment, David Moulton speaks with
Erica L. Shoemate about how AI is fundamentally reshaping the security landscape,
and cyber crooks cause a complimentary curbside convenience. Stick around. No, it's not your imagination.
Risk and regulation really are ramping up, and these days customers expect proof of security
before they'll even do business. That's where Vanta comes in. Vanta automates your compliance process
and brings compliance, risk, and customer trust together on one AI-powered platform.
So whether you're getting ready for a SOC 2 or managing an enterprise governance, risk, and compliance program,
Vanta helps keep you secure and keeps your deals moving.
Companies like Ramp and Writer spend 82% less time on audits with Vanta.
That means less time chasing paperwork and more time focused on growth.
For me, it comes down to this.
Over 10,000 companies from startups to large enterprises trust Vanta to help prove their security.
Get started at Vanta.com slash cyber.
Most security conferences talk about zero trust.
Zero Trust World puts you inside.
This is a hands-on cybersecurity event designed for practitioners who want real skills, not just theory.
You'll take part in live hacking labs, where you'll attack real environments, see how modern threats
actually work and learn how to stop them before they turn into incidents. But Zero Trust World is
more than labs. You'll also experience expert-led sessions, practical case studies, and technical
deep dives focused on real-world implementation. Whether you're blue team, red team, or responsible
for securing an entire organization, the content is built to be immediately useful. You'll earn
CPE credits, connect with peers across the industry, and leave with strategies you can put into action
right away. Join us March 4th through the 6th in Orlando, Florida. Register now at ZTW.com and take your
zero-trust strategy from theory to execution. On today's segment from the Threat Vector podcast, host
David Moulton sits down with Erica L. Shoemate, founder of the EN Strategy Group. They're exploring
how AI is fundamentally reshaping the security landscape.
Hi, I'm David Moulton, host of the Threat Vector podcast, where we break down cybersecurity threats, resilience, and the industry trends that matter the most.
What you're about to hear is a snapshot from my conversation with Erica L. Shoemate, a public policy strategist and former FBI intelligence analyst who spent years working at the intersection of national security, AI, and technology ethics.
Erica brings something rare to this conversation.
She's lived on both sides.
She's worked counterterrorism, counterintelligence, and crimes against children as a federal analyst.
Then she moved into big tech.
She's seen how policy gets treated as an afterthought when speed is the priority,
and she has a clear point of view on what it costs us.
We talked about who actually holds power when AI compresses decision time,
why siloing engineering from ethics is a liability,
and what the next generation of security leaders needs to think
about beyond technical skill.
Erica, welcome to Threat Vector.
I'm really excited to have you here and have been looking forward to this conversation
since we started planning it.
Same.
I am very, very excited to be here today and be able to just have a conversation
and hope that your audience finds it actually very valuable.
You know, when I was looking at your background,
I was impressed by your time in the intelligence community,
and then how you shifted that service into the private sector,
helping out a number of different companies
think about AI and cybersecurity, and that intersection where things come together,
even going into national security.
Could you talk to me a little bit about that journey,
you know, two sides, but kind of the same mission?
For me, my North Star is always thinking about the human first
and what human-centered design is.
My whole mission is working at the intersection of where people and technology collide.
And when I take a look back at like the work that I've done and walking through that path,
for me it's been having grown up in the FBI in the U.S. intelligence community,
that work started very early on where I was focused on very much so, you know, national security
and criminal matters from counterterrorism, counterintelligence, transnational organized crime,
and also national kidnappings, crimes against children.
Like, I literally worked a gamut of different programs.
And what was very unique for me is that when I came into the FBI, I was in a very small satellite field office.
And there, I had the opportunity to work all the things that I'm telling you about.
like, pick any hour of the day, any day of the week, and I could be working very different
matters, just because of the nature of how the office was set up and also the location of where I was.
And that really set this, you know, very young, naive, professional up really to be able to, what I would say,
kind of tip my toe into a bit of everything and actually understand it and do it very well.
because I understood no matter whatever I was working,
the analytic trade craft in itself is the same,
even though the threat in all of the emerging trends might be different.
That piece to me was like all of it was the same.
And so that's kind of what I think about taking myself back to the beginning
of my career in national security.
So it sounds like what you developed was a framework for dealing with
threats or, you know, assessing risk.
And then you could apply that to different domains or specific instances.
Am I understanding it right?
Yeah.
No, you're 100% right.
Well, let's shift away from that sort of like environment that you grew up in and some
of the national security risks of that post-9-11 era to another big thing that's hit,
you know, with a ferociousness is AI.
And I'm curious how you react to it,
and what you think stands out most about how AI is being integrated into national defense
and into cybersecurity.
What stands out for you right now?
Great question.
What stands out for me most is that AI is really being operationalized in national defense and cybersecurity,
quite frankly, before we've even fully internalized how it changes the threat dynamic.
and we're not just automating tasks.
We're automating judgment under this, this real pressure, right?
You have the additional points that you're thinking about that you have to layer in.
AI compresses time.
Think about detection.
Think about decision making and response.
They all move faster because of AI, which can be a good thing.
But then on the flip side, you have to think about how your adversaries benefit from
this too, especially our non-state actors. And legacy cyber frameworks that assume human-pace
escalation? AI really breaks that entire assumption of what is possible and what is not possible.
So in real world security operations, I'm curious, how do you ensure that the ethical principles
survive the pressures of mission urgency or that hot threat
response that's going on?
Great, great question. Ethics. Love it. Ethics don't survive because people are good.
Like, that's what people want to believe, but that's just not how it works. They survive because
systems enforce them so holding the accountability piece, right? Ethics must be embedded into our
workflows. Accountability must also be predefined. What is that criteria? What is accountability?
What happens if I do this or the system does this, then what is the consequence of that?
What am I being held to if this thing fails as the person who is leading the thing?
Pressure tested before real world deployment also is part of that that we need to always keep top of mind.
And when we think about the tools and processes, we want to think about, again, the human.
Human in the loop for high-impact decisions is a must.
It is a non-negotiable, and we have to really think about that.
Kill switches and escalation protocols are also necessary.
Again, we're dealing with what we talked about earlier, fast technology.
We have to have a way to be like, we got to kill it now, even if you're like, oh, my gosh, it's going to cost so much.
We got to do the right thing and think about that part later, because there are real
people in front of this technology.
Post-incident reviews that focus on learning, not blame, are where we keep the ethics
at center and not the finger-pointing, right?
It's so easy to try to find someone to fall on the sword when we want
to just think about the lessons learned, particularly when, again, we talk about not if it's
going to happen, but when.
if we're working from that standpoint from the beginning,
we can always continue to have our after-action post-mortems
where our people still believe that this company is doing the right thing.
And we've followed all the steps.
And if we did, what was the mishap and why?
And being able to lean into that is what people care about.
I believe the most, too.
This one is worth your full attention, especially if you're making decisions about AI deployment
or trying to close the gap between your security posture and your governance structure.
The episode is called Who Holds Power When AI Compresses Decision Time?
And it's live now in your Threat Vector feed.
Thanks for listening. Stay secure.
Goodbye for now.
Be sure to check out the complete episode of Threat Vector wherever you get your favorite podcasts
or on our website, thecyberwire.com.
Ever wished you could rebuild your network from scratch to make it more secure, scalable, and simple?
Meet Meter, the company reimagining enterprise networking from the ground up.
Meter builds full-stack zero-trust networks, including hardware, firmware, and software,
all designed to work seamlessly together.
The result? Fast, reliable, and secure connectivity without the constant patching, vendor-juggling, or hidden costs.
From wired and wireless to routing, switching, firewalls, DNS security, and VPN, every layer is integrated and continuously protected in one unified platform.
And since it's delivered as one predictable monthly service, you skip the heavy capital costs and endless upgrade cycles.
Meter even buys back your old infrastructure to make switching effortless.
Transform complexity into simplicity and give your team time to focus on what really matters.
helping your business and customers thrive.
Learn more and book your demo at meter.com slash cyberwire.
That's M-E-T-E-R dot com slash cyberwire.
When cyber threats strike, minutes matter.
Booz Allen brings the same battle-tested expertise
trusted to protect national security
to defend today's leading global organizations.
They safeguard their data,
strengthen enterprise resilience,
and mobilize in minutes across energy,
health care, financial services, and manufacturing.
Their teams don't just respond.
They anticipate, outthink, and stay ahead of evolving threats.
This is powerful protection for commercial leaders only from Booz Allen.
See how your organization can prepare today at boozallen.com slash commercial.
And finally, drivers in Perm, Russia, got an unexpected perk this week: free parking,
courtesy of a cyber attack rather than civic generosity.
A large-scale DDoS attack overwhelmed the city's parking payment systems,
knocking the perm parking portal offline and making it impossible to pay.
Officials responded pragmatically,
suspending enforcement and effectively turning paid zones into a temporary free-for-all,
with hopes of restoring the service soon.
The incident is a reminder that when attackers flood systems with
traffic, even routine services can grind to a halt. Disruptions like this can ripple into daily
life, sometimes with oddly welcome side effects. According to local authorities, although the outage
was caused by a massive DDoS attack, local drivers may remember it more fondly than most cybersecurity
incidents. And that's the Cyberwire. For links to all of today's stories, check out our daily
briefing at thecyberwire.com.
What do you think of this podcast? Your feedback ensures we deliver the insights that keep you a step
ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating
and review in your favorite podcast app. Please also fill out the survey in the show notes or send
an email to Cyberwire at n2k.com. N2K's lead producer is Liz Stokes. We're mixed by Trey Hester
with original music and sound design by Elliot Peltzman. Our contributing host is Maria Varmazis.
Our executive producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.
If you only attend one cybersecurity conference this year, make it RSAC 2026.
It's happening March 23rd through the 26th in San Francisco,
bringing together the global security community for four days of expert insights,
hands-on learning, and real innovation. I'll say this plainly: I never miss this
conference. The ideas and conversations stay with me all year. Join thousands of practitioners and
leaders tackling today's toughest challenges and shaping what comes next. Register today at rsaconference.com
slash cyberwire 26. I'll see you in San Francisco. When it comes to mobile application security,
good enough is a risk. A recent survey shows that 72% of organizations reported at least one mobile
application security incident last year, and 92% of respondents reported threat levels have
increased in the past two years.
GuardSquare delivers the highest level of security for your mobile apps without compromising
performance, time to market, or user experience.
Discover how GuardSquare provides industry-leading security for your Android and iOS apps
at www.guardsquare.com.
