CyberWire Daily - AI chips flow east.
Episode Date: September 16, 2025A controversial Trump administration deal gives the U.A.E. access to cutting-edge U.S. AI chips. FlowiseAI warns of a critical account takeover vulnerability. A new social engineering campaign imperso...nates Meta account suspension notices. A macOS Spotlight 0-day flaw bypasses Apple’s Transparency, Consent, and Control (TCC) protections. Are cost saving from outsourced IT services worth the risk? Poland boosts its cybersecurity budget after a surge in Russian-backed attacks. NTT Group joins the Comm-ISAC. Jaguar Land Rover’s global shutdown continues. A data breach affects millions of customers of top luxury brands. On today's Threat Vector segment, David Moulton speaks with Palo Alto Networks’ Spencer Thellmann about the dual challenges of securing employee use of generative AI tools and defending internally built AI models and agents. AI chatbots hustle seniors for science. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. Threat Vector Segment On today's segment of Threat Vector, host David Moulton, Director of Thought Leadership for Unit 42, speaks with Spencer Thellmann, Principal Product Manager at Palo Alto Networks. David and Spencer explore the dual challenges of securing employee use of generative AI tools and defending internally built AI models and agents. You can listen to the full conversation here, and catch new episodes of Threat Vector each Thursday in your podcast app of choice. Selected Reading In Giant Deals, U.A.E. Got Chips, and Trump Team Got Crypto Riches (The New York Times) Critical FlowiseAI password reset flaw exposes accounts to complete takeover (Beyond Machines) New FileFix attack uses steganography to drop StealC malware (Bleeping Computer) From Spotlight to Apple Intelligence (Objective- See) The Elephant in The Biz: outsourcing of critical IT and cybersecurity functions risks UK economic security | by Kevin Beaumont | Sep, 2025 (DoublePulsar) Russian hackers target Polish hospitals and city water supply (The Financial Times) NTT Group Joins the U.S. Communications-ISAC (Topics) Jaguar Land Rover says cyberattack shutdown to last 'at least' another week (The Record) Bags of info stolen from multiple top luxury brands - double check your data now (TechRadar) We wanted to craft a perfect phishing scam. AI bots were happy to help (Reuters) Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyberwire Network, powered by N2K.
The DMV has established itself as a top-tier player in the global cyber industry.
DMV rising is the premier event for cyber leaders and innovators
to engage in meaningful discussions and celebrate the innovation happening in and around the Washington
D.C. area. Join us on Thursday, September 18th, to connect with the leading minds shaping
our field and experience firsthand why the Washington, D.C. region is the beating heart of cyber
innovation. Visit DMV Rising.com to secure your spot.
certificates lifespans will be cut in half, meaning double today's renewals.
And in 2029, certificates will expire every 47 days, demanding between 8 and 12 times the renewal
volume.
That's exponential complexity, operational workload, and risk, unless you modernize your strategy.
CyberArk, proven in identity security, is your partner in certificate security.
CyberArc simplifies life cycle management with visibility, automation, and control at scale.
Master the 47-day shift with CyberArk.
Scan for vulnerabilities, streamline operations, scale security.
Visit cyberark.com slash 47-day.
That's cyberark.com slash the numbers 47-D-A-Y.
A controversial Trump administration deal gives the UAE access to cutting-edge U.S. AI chips.
Flow-wise AI warns of a critical account takeover vulnerability.
A new social engineering campaign impersonates meta-account suspension notices.
A macOS spotlight zero-day flaw bypasses Apple's transparency, consent, and control protections.
Are cost savings from outsourced IT services worth the risk?
Poland boosts its cybersecurity budget after a surge in Russian-backed attacks.
NTT Group joins the Com ISAC.
Jaguar Land Rover's global shutdown continues.
A data breach affects millions of customers of top luxury brands.
On today's threat vector segment, David Moulton speaks with Palo Alto Network's Spencer Thelman
about the dual challenges of securing employee use of generative AI tools.
and defending internally built AI models and agents,
and AI chatbots hustle seniors for science.
It's Tuesday, September 16, 2025.
I'm Dave Bittner, and this is your Cyberwire Intel briefing.
Thanks for joining us here today.
It's great to have you with us.
According to reporting by the New York Times,
the Trump administration is advancing a deal
that would give the UAE access to hundreds of thousands
of cutting-edge U.S. AI chips,
despite warnings from national security officials.
Many chips are slated for GF.
42, a tech firm controlled by Sheikh Tanun bin Zayyad, who has long-standing ties to Chinese companies.
Experts fear the chips or the models built on them could ultimately flow to Beijing,
undermining U.S. export controls and AI safeguards.
The Times also uncovered a parallel $2 billion investment into World Liberty Financial,
a crypto company tied to the Trump and Whitkoff families,
Critics say the overlap blurs government duties with private enrichment,
raising conflict of interest and insider risk concerns.
From a cybersecurity perspective, the risks are clear,
potential loss of AI supremacy,
third-party data exposure in Emirati infrastructure,
and compliance vulnerabilities tied to crypto and Binance's AML history.
Safeguards exist, but enforcement remains shaky.
Low-Wise AI has issued an urgent warning about a serious flaw that lets attackers easily
take over user accounts.
The problem affects both its cloud service and self-hosted setups, exposing personal details
and allowing outsiders to reset passwords without permission.
Security experts say the issue is extremely severe, urging all users to update immediately.
Those who cannot upgrade should block public access to the password reset feature,
until a fix is applied. Failure to act leaves accounts fully exposed. A new social engineering
campaign called FileFix is impersonating meta-account suspension notices to spread the SteelC Info-Stealer,
according to Akronis. FileFix is an evolution of the ClickFix attack method, which tricks users
into pasting malicious commands into system dialogue boxes. This variant abuses the Windows File Explorer
address bar. Victims are directed to a fishing page that claims their meta account will be
disabled, then urge to paste what appears to be a file path. Instead, a hidden power shell command
installs malware. The campaign uses steganography to hide additional payloads inside images
hosted on Bitbucket, eventually unleashing SteelC. The malware steals browser credentials, cookies,
cloud keys, crypto wallets, messaging app logins,
and can capture screenshots.
Researchers warn that file-fix tactics are rapidly evolving,
making user education critical to defense.
Akronis observed multiple variants in just two weeks,
signaling ongoing refinement by attackers.
A new blog from Objective C reveals a zero-day flaw in MacOS Spotlight plugins
that bypasses Apple's transparency, consent, and control protections.
Spotlight plugins index user files, including sensitive system databases,
but researchers showed they can be exploited to leak private data
fueling Apple intelligence AI features.
Despite sandboxing, the bug, which is rooted in a decade-old flaw,
lets malicious plugins transmit protected file content to outside processes.
Since Spotlight plugins can be installed without notarization,
attackers or malware could abuse them for persistence, data theft,
or AI model ex-filtration. Apple has patched related issues before, but this zero-day shows
macOS sandboxing gaps remain exploitable. Researcher Kevin Beaumont examined several major UK companies,
including the co-op group, Marks & Spencer, and Jaguar Land Rover, who have outsourced critical
IT and cybersecurity functions to Tata Consultancy Services, TCS, and concludes this has led to redundancies
and growing risk exposure. These functions include security operations, governance, and identity
management, core defenses against breaches, while outsourcing cuts costs, attackers like Lapsis
have exploited weaknesses in shared help desks and standard operating procedures. Critics argue that
TCS's denials focused narrowly on whether its own systems were breached, sidestepping the
real question of how its customers were compromised. The broader issue is structural. Cost-cutting
and over-reliance on managed service providers concentrate risk across many organizations.
With ransomware incidents escalating, experts say UK firms remain hyper-focused on data protection
laws but lack cyber resilience planning. The risk isn't just stolen data, its service disruption
severe enough to threaten economic stability. Poland is boosting its cybersecurity budget to a record
1 billion euros after a surge in Russian-backed attacks on critical infrastructure, according to the
financial times. Officials say Poland faces between 20 and 50 sabotage attempts daily, mostly thwarted,
but some breaches have disrupted hospitals and exposed medical data.
A recent attack infiltrated a major city's water system but was stopped before supplies were cut.
The government is allocating 80 million euros to secure water systems
and expand protections across 2400 local administrations.
Warsaw says it is the most frequent Russian cyber target in the EU,
with GPS jamming from Russia's Kalingrad increasingly disrupting flights.
The move comes amid rising hybrid threats,
including drone incursions and NATO's first direct interceptions of Russian assets
since the 2022 invasion of Ukraine.
Cross-party consensus has emerged in Poland to urgently strengthen cyber resilience.
Japanese telecom giant NTT Group has become the first global technology services company
invited to join the U.S. Communications Information Sharing and Analysis Center
the Com ISAC, marking a milestone in international collaboration on critical infrastructure security.
NTT says the move underscores their commitment to cyber resilience, situational awareness,
and collective defense of global communications networks.
By partnering with Com ISAC members and sector sponsors,
NTT says they'll help strengthen defenses against cyber threats
while advancing innovation and sustainability.
The company stressed that,
trust, partnerships, and information sharing are essential to securing the digital backbone of
modern society.
Jaguar Land Rover has extended its global shutdown until September 24th as it investigates the
major cyber attack that forced thousands of employees and supply chain workers into temporary
layoffs. The disruption, costing an estimated $98 million per day, highlights risks not only to
JLR, but to the wider U.K. economy, where the company represents 4% of exports.
Investigators confirmed attackers accessed internal data, raising potential fines under privacy
law. Experts warn the incident underscores policy gaps. Regulation prioritizes personal data
protection, while service continuity and economic security remain under-addressed.
French luxury giant Kering has confirmed.
a data breach affecting millions of Balenciaga, Gucci, and Alexander McQueen customers.
The hacker group Shiny Hunters, also linked to breaches at Google and Adidas, claimed responsibility,
saying it stole 7.4 million email addresses, along with names, phone numbers, home addresses,
and spending amounts, in some cases exceeding $80,000.
While caring stress, no payment data was taken, experts warn high spenders may be targeted,
in follow-on scams.
Authorities have been notified.
Caring denies negotiating with the attackers.
Coming up after the break on today's threat vector segment,
David Moulton speaks with Palo Alto Network Spencer Thelman
about the dual challenges of securing employee use of generative AI tools
and defending internally built AI models and AI models
and agents, and AI chatbots hustle seniors for science.
Stay with us.
And now a word from our sponsor.
The Johns Hopkins University Information Security Institute is seeking qualified applicants
for its innovative Master of Science in Securities,
Informatics degree program.
Study alongside world-class
interdisciplinary experts and gain
unparalleled educational research
and professional experience
in information security and assurance.
Interested U.S. citizens
should consider the Department of Defense's
Cyber Service Academy program,
which covers tuition,
textbooks, and a laptop,
as well as providing a $34,000
additional annual stipend.
Apply for the fall 2020,
sixth semester and for this scholarship by February 28th. Learn more at c.j.j.u.edu slash MSSI.
We've all been there. You realize your business needs to hire someone yesterday.
How can you find amazing candidates fast?
Well, it's easy.
Just use Indeed.
When it comes to hiring, Indeed is all you need.
Stop struggling to get your job post noticed.
Indeed's sponsored jobs helps you stand out and hire fast.
Your post jumps to the top of search results, so the right candidates see it first.
And it works.
Sponsored jobs on Indeed get 45% more applications than non-sponsored ones.
One of the things I love about Indeed is how fast it.
makes hiring. And yes, we do actually use Indeed for hiring here at N2K Cyberwire. Many of my colleagues
here came to us through Indeed. Plus, with sponsored jobs, there are no subscriptions, no long-term
contracts. You only pay for results. How fast is Indeed? Oh, in the minute or so that I've been
talking to you, 23 hires were made on Indeed, according to Indeed data worldwide. There's no need
to wait any longer. Speed up your hiring right now with Indeed. And listeners to this show will get a $75
sponsored job credit to get your jobs more visibility at indeed.com slash cyberwire. Just go to
indeed.com slash cyberwire right now and support our show by saying you heard about Indeed on this
podcast. Indeed.com slash cyberwire. Terms and conditions apply. Hiring. Indeed is all you need.
On today's threat vector segment, David Moulton, director of thought leadership for Unit 42 at Palo Alto Networks, speaks with Spencer Thelman, principal product manager at Palo Alto Networks.
They're exploring the dual challenges of securing employee use of generative AI tools and defending internally built AI models and agents.
Hi, I'm David Moulton, host of the Threatvector podcast, where we break down cyber security threats, resilience, and the industry trends that matter the most.
What you're about to hear is a snapshot of my conversation with Spencer Thielman, principal product manager of Palo Alto Networks where he leads AI runtime security.
Spencer's team tracks AI applications across the enterprise landscape.
What his team discovered reveals the scope of this challenge.
Last December, they cataloged 800 AI applications.
By May, that number hit 2,800.
That's 250% growth in just five months.
Meanwhile, over half of enterprise employees now use generative AI apps daily,
and up to 30% of what they send contains sensitive data.
If you're still thinking AI security is a future problem, you're already behind.
Spencer, welcome to ThreatVector.
I've been excited to have you here.
I've been dying to have this conversation with you for weeks.
So happy to be here.
Looking forward to it.
How should enterprises think about their AI security strategy?
And maybe what are the most impactful mental models that you use?
Certainly.
So before we get into this, I think it's always important to start with why we do what we do.
And in the context of AI, like our why, is that we believe that the benefits of AI are profound, but so are the risks.
And we therefore have a kind of like moral obligation to help our customers capture the power of AI, but do so safely and securely, right?
So that's where we're always coming from when we have these kind of conversations.
And the way that we think about this is that you can break enterprise AI security down into basically two pillars.
The first is I need to think about how to secure my employee use of generative AI SaaS apps like chat, GPT, complexity, and grammarly.
That's the first part.
And the second piece is, how do I go about securing the AI apps, models, and agents that I'm running in my own cloud environment?
That could be AWS, Google Cloud, Azure, on-prem, or some other variation of those.
So those are the two things that matter.
What are my employees doing?
How can I control that and have deep visibility into it?
The other piece is, how do I secure the AI apps models and agents that I run in my own cloud environment?
That's how we kind of split up the problem, so to speak.
All right, let's shift gears a little bit and talk about holistic AI.
AI security, how do you break down the pillars of AI security?
I know we've got model scanning, AI red teaming, posture management, LLM security, agent security.
Am I missing another big area that we should talk about today?
So we break AI security down into five pillars.
And again, I want to kind of re-center this to the mental model that's guiding the whole conversation.
Whenever we speak about securing AI, it's about thinking about how employees are using generative AI SaaS apps,
We just covered that in last 10 minutes or so.
And then the second piece is, how do I go about securing the AI apps, the models, and the agents that I'm running in my own environment or that I've built, right?
And for that second problem to secure like enterprise AI apps, models, and agents, we've constructed kind of five pillars that define this.
The first is model scanning.
So I want to scan my model files to make sure that my models don't do things like contain malware or are vulnerable to do serialization attacks.
And I want to do it as part of my ops process so that bad models don't ever even end up in production.
We scan them before they go to prod.
That's the first piece.
And the second part is looking at AI apps, models, and agents at the posture level.
Great example of this with agents is like looking at their permissions.
Are they excessive?
If yes, let's scope those down.
That's the second piece.
The third part is red teaming.
Here we want to attack AI apps, models, and agents to see which threats go through and which don't,
which then informs the runtime security part of AI security.
So once you've made sure that the model file is free of threats,
that it's secure at the posture level,
you've re-teamed it to understand which threats go through,
then it's time to secure, like, let's say, that AI app at runtime.
By looking at inputs and outputs to it,
prompts and model responses, for example,
and checking for threats like prompt injections,
sensitive data, malicious URLs, and the like.
And then the final piece of all of this is AI agent security,
which kind of spans across the preceding four columns,
but agent security is primarily broken down into runtime, security, and posture.
And a great way to think about agent security is that it's kind of a superset of large language model security.
Every threat that applies to large language models applies to agents,
but because of what agents are, and we can talk about that,
there's kind of a broader threat surface here.
Well, let's just hop right into it.
When you're talking about an AI agent, how do you define that, you know, what are the bounds,
what's not an agent maybe?
certainly so last year was all about chatbots right and if you think about what is a chatbot
it's an inherently passive interface right those i i ask a question the chatbot runs inference
something comes back to me and then the interaction is over until i ask another question but agents
differ in the way that they take action on behalf of users and you know organizations a good working
definition for an agent is that it's a it's an application that's autonomous has the ability to reason
and to take action in pursuit of a goal.
I'll give you an example for my personal life
to maybe make this a little bit more real.
So a few weeks ago, I went to Las Vegas
to see one of my favorite bands at the Sphere,
dead in company.
And as an experiment, I had a chatbot
determined the entire trip,
where I stayed, which restaurants I saw, et cetera,
because I wanted to experience the city
that I'd been to many times
kind of through a new lens.
So the chat bot told me what to do,
where to stay, where to go.
But I couldn't book any of that.
I then had to spend about an hour
on Expedia, Uber, OpenTable, et cetera,
to kind of construct that trip from beginning to end.
An agent could do that for me.
I could tell my agent, hey, here's my budget,
here's what I like, here's what I don't like.
Go construct this for me.
And the agent would interact with APIs,
again, for Expedio, Uber, OpenTable, etc.,
to just kind of put that together for me.
And it's that autonomy that make agents profoundly powerful.
I work with some enterprise customers, for example,
that kind of leapfrog chatbots.
Chappbots weren't really interesting to them,
but agents are because of the productivity and efficiency gains that they can leverage.
Because now you have, again, almost like a synthetic virtual employee that's interacting on your behalf.
That's a really big moment for the notion of work.
But it carries these risks, because in order to do what an agent does, it needs to be autonomous.
It needs to have memory, and it needs to interact with your tools.
All three of those carries some novel risks that we actually outlined in a paper called the OASP AI Agent Threat Report,
things like tool misuse,
tool misuse, memory manipulation
and cascading hallucinations.
I'll give you just one example, right?
So let's say that one of your employees
has gone and built an agent in Microsoft
Copilot Studio, and it's designed
to kind of ingest leads and
send them to Salesforce, right?
That's a pretty common workflow.
But what if its permissions are excessive?
What if it could delete records in Salesforce?
It probably shouldn't be able to do that.
An agent shouldn't be able to go drop
tables in Salesforce, right? Because the impact of that could be destructive. What we need
to do is look at, here's all the things that an agent could do, and then restrict its freedoms
down to just the things it needs to do to accomplish its goal. Spencer calls this the biggest
challenge in cybersecurity today. When half your workforce is using tools that leak sensitive
data by design, the window for getting ahead of this threat is closing fast. If this got your
attention, don't wait. Listen to the full episode now in your Threat Vector podcast feed.
It's called Inside AI Runtime Defense, and it's live now.
This one's a reality check you can't afford to miss.
Be sure to check out the full conversation on this week's Threat Vector podcast.
You can find that wherever you get your favorite podcasts.
At TALIS, they know cybersecurity can be tough and you can't protect everything,
but with TALIS you can secure what matters most.
With TALIS's industry-leading platforms, you can protect critical applications,
data and identities, anywhere and at scale with the highest ROI.
That's why the most trusted brands and largest banks reach,
tailors and health care companies in the world rely on Talis to protect what matters most.
Applications, data, and identity.
That's Talis.
T-H-A-L-E-S.
Learn more at talusgroup.com slash cyber.
With Amex Platinum, access to exclusive Amex pre-sale tickets can score you a spot trackside.
So being a fan for life turns into the truth.
of a lifetime. That's the powerful backing of Amex. Pre-sale tickets for future events subject to availability
and varied by race. Terms and conditions apply. Learn more at mx.ca.com slash Yannex.
And finally, Reuters teamed with a Harvard researcher to see what happens when top chat bots are
asked to cook up a fishing scam aimed at seniors. The journalists used the bots to write emails
suggest timing and shape the pitch. Then they tested nine of these AI-crafted messages on 108
volunteers. About 11% clicked. Some bots slammed the brakes at first. Others complied after a little
coaching. It's for research, or it's for a novel, did the trick. Grock wrote a convincing
charity plea. Gemini even suggested the best time of day to send it. Google retrained Gemini
after being told about this.
The result is blunt.
AI can turbocharge scams.
The FBI has warned about this.
Companies say they're tightening safeguards.
Meanwhile, seniors remain vulnerable.
The takeaways?
Be suspicious of urgent asks.
Verify senders.
Don't click unexplained links
and keep your loved ones alert.
This genie isn't going back into the bottle.
And that's the Cyberwire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like our show,
please share a rating and review in your favorite podcast app.
Please also fill out the survey and the show notes
or send an email to Cyberwire at N2K.com.
N2K's senior producer is Alice Carruth.
Our Cyberwire producer is Liz Stokes.
We're mixed by Trey Hester with original music by Elliot Heltsman.
Our executive producer is Jennifer Iben.
Peter Kilpe is our publisher, and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.
Attention
Attention security startups.
There's less than a week left to apply for the 2025 data tribe challenge.
This unique program accelerates early stage
cyber companies, refine your messaging with startup veterans, then pitch to top venture firms
shaping the future of cyber. The live pitch competition takes center stage at Cyber Innovation
Day, November 4th in Washington, D.C. Applying is easy. Go to challenge.com, share your
company info, and upload your pitch. Submissions closed September 19th. Submit your entries today.
And now, a word from our sponsor, Threat Locker,
the powerful zero-trust enterprise solution that stops ransomware in its tracks.
Allow listing is a deny-by-default software that makes application control simple and fast.
Ring fencing is an application containment strategy,
ensuring apps can only access the files, registry keys, network resources,
and other applications they truly need to function.
Shut out cybercriminals with world-class endpoint protection from threat locker.
