CyberWire Daily - When hackers go BIG in cyber espionage.
Episode Date: October 16, 2025

F5 discloses long-term breach tied to nation-state actors. PowerSchool hacker receives a four-year prison sentence. Senator scrutinizes Cisco critical firewall vulnerabilities. Phishing campaign impersonates LastPass and Bitwarden. Credential phishing with Google Careers. Reduce effort, reuse past breaches, recycle into new breach. Qilin announces new victims. Manoj Nair, from Snyk, joins us to explore the future of AI security and the emerging risks shaping this rapidly evolving landscape. And AI faces the facts. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest

Manoj Nair, Chief Innovation Officer at Snyk, joins us to explore the future of AI security and the emerging risks shaping this rapidly evolving landscape. In light of the recent high-severity vulnerability in Cursor, Manoj discusses how threats like tool poisoning, toxic flows, and MCP vulnerabilities are redefining what secure AI-driven development means, and why organizations must move faster to keep up.
Selected Reading

F5 discloses breach tied to nation-state threat actor (CyberScoop)
CISA Directs Federal Agencies to Mitigate Vulnerabilities in F5 Devices (CISA)
ED 26-01: Mitigate Vulnerabilities in F5 Devices (CISA)
PowerSchool hacker sentenced to 4 years in prison (The Record)
Cisco faces Senate scrutiny over firewall flaws (The Register)
Fake LastPass, Bitwarden breach alerts lead to PC hijacks (Bleeping Computer)
Google Careers impersonation credential phishing scam with endless variation (Sublime Security)
Elasticsearch Leak Exposes 6 Billion Records from Scraping, Old and New Breaches (HackRead)
Qilin Ransomware announced new victims (Security Affairs)
When Face Recognition Doesn't Know Your Face Is a Face (WIRED)
Semperis Announces Midnight in the War Room: A Groundbreaking Cyberwar Documentary Featuring the World's Leading Defenders and Reformed Hackers (PR Newswire)

Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.

Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
We've all been there.
You realize your business needs to hire someone yesterday.
How can you find amazing candidates fast?
Well, it's easy.
Just use Indeed.
When it comes to hiring, Indeed is all you need.
Stop struggling to get your job post
noticed. Indeed's Sponsored Jobs help you stand out and hire fast. Your post jumps to the top
of search results, so the right candidates see it first. And it works. Sponsored jobs on Indeed
get 45% more applications than non-sponsored ones. One of the things I love about Indeed is how
fast it makes hiring. And yes, we do actually use Indeed for hiring here at N2K Cyberwire. Many
of my colleagues here came to us through Indeed.
Plus, with sponsored jobs, there are no subscriptions, no long-term contracts.
You only pay for results.
How fast is Indeed?
Oh, in the minute or so that I've been talking to you, 23 hires were made on Indeed,
according to Indeed data worldwide.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners to this show will get a $75 sponsored job credit to get your job
more visibility at indeed.com slash cyberwire.
Just go to indeed.com slash cyberwire right now
and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash cyberwire.
Terms and conditions apply.
Hiring?
Indeed is all you need.
F5 discloses long-term breach tied to nation-state actors.
PowerSchool hacker receives a four-year prison sentence.
Senator scrutinizes Cisco critical firewall vulnerabilities.
Phishing campaign impersonates LastPass and Bitwarden.
Credential phishing with Google Careers.
Reduce effort, reuse past breaches, recycle into new breach.
Qilin announces new victims.
Manoj Nair, from Snyk, joins us to explore the future of AI security and the emerging risks shaping this rapidly
evolving landscape. And AI faces the facts.
Today is October 16, 2025. I'm Maria Varmazas, host of T-Minus Space Daily, in for Dave
Bittner. And this is your CyberWire
Intel Briefing.
Happy Thursday, everyone.
Thank you for joining me today.
Let's get started.
Seattle-based cybersecurity firm F5 disclosed yesterday that state-sponsored hackers
had long-term persistent access to its networks, leading to the theft of source code
and customer information.
The company says that hackers had access to the
development environment for its BIG-IP product suite and its engineering knowledge management
platform. In an SEC filing, the company said, through this access, certain files were
exfiltrated, some of which contained certain portions of the company's BIG-IP source code and
information about undisclosed vulnerabilities that it was working on in BIG-IP. We are not aware
of any undisclosed critical or remote code vulnerabilities, and we are not aware of active
exploitation of any undisclosed F5 vulnerabilities. We have no evidence of modification to our software
supply chain, including our source code and our build and release pipelines. Bloomberg cites people
familiar with the matter as saying that the hack is believed to be linked to China and that the hackers
were inside F5 networks for at least 12 months. Ars Technica notes that F5's BIG-IP line is used
across the U.S. government and by most of the largest companies in the world.
The U.S. Cybersecurity and Infrastructure Security Agency, or CISA, issued an emergency
directive ordering federal civilian agencies to immediately inventory F5 devices and apply the latest
updates by October 22nd. The agency stated, the threat actors' access to F5's proprietary
source code could provide that threat actor with a technical advantage to exploit F5 devices and
software. The threat actor's access could enable the ability to conduct static and dynamic
analysis for identification of logical flaws and zero-day vulnerabilities, as well as the
ability to develop targeted exploits.
19-year-old Matthew Lane of Massachusetts has been sentenced to four years in prison after pleading
guilty to hacking education software provider PowerSchool. It was, in the local vernacular,
a wicked bad idea. Lane stole information belonging to more than 70 million
individuals and demanded a ransom of $2.9 million in exchange for not publishing the data.
In addition to his prison sentence, Lane has been ordered to pay $14 million in restitution
and a $25,000 fine.
U.S. Senator Bill Cassidy has formally pressed Cisco for answers over two critical firewall
vulnerabilities that allegedly allowed hackers to breach at least one federal agency.
The senator's letter demands clarity on Cisco's timeline, knowledge of exploitation,
customer guidance, and internal communication protocols.
The request follows a CISA directive instructing agencies to patch, audit logs,
and retire unsupported devices within 24 hours, citing unacceptable risk from Cisco's
ASA and FTD platforms.
Cisco has admitted the flaws were exploited as early as May and linked to the ArcaneDoor
espionage campaign.
Bleeping Computer reports that a phishing campaign is impersonating LastPass and
Bitwarden with phony breach notifications.
The emails claim that the companies have been hacked and instruct users to install a more secure
version of the password managers. That file downloads the Syncro remote monitoring
and management tool, which the attackers then use to install ScreenConnect software.
Now, ScreenConnect is a legitimate remote management tool, but is frequently abused by
attackers to take control of victims' computers.
LastPass issued a statement on the phishing campaign, noting, quote, to be clear,
LastPass has not been hacked, and this is an attempt on the part of a malicious actor
to draw attention and generate urgency in the mind of the recipient, a common tactic for social
engineering and phishing emails. Sublime Security shares a new wave of credential phishing
scams impersonating Google Careers pages to target job seekers, employing near-limitless variations
to bypass defenses. Legitimate-sounding domain names like google-carrears.site host fake
login forms that harvest credentials.
Attackers then tweak page design, copy, and URLs constantly,
meaning each campaign looks slightly different and evades static detection rules.
Very clever.
The scammers also exploit password reset flows, job alerts, and recruitment messages to lure victims.
Sublime Security warns that these campaigns are effectively infinite in variation,
making them harder to hunt and block using traditional signatures or rules.
The post recommends defenses such as domain monitoring, anomaly detection, user awareness, and strong multi-factor authentication.
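(Editor's aside: the lookalike-domain pattern described above can be sketched in a few lines of Python. The brand watchlist, the `normalize` helper, and the 0.85 similarity threshold below are illustrative assumptions for the example, not Sublime Security's actual detection logic.)

```python
# Hedged sketch: flag lookalike domains such as "google-carrears.site"
# by measuring string similarity against a small brand watchlist.
from difflib import SequenceMatcher

BRANDS = ["googlecareers", "lastpass", "bitwarden"]  # assumed watchlist

def normalize(domain: str) -> str:
    """Strip the TLD and punctuation: 'google-carrears.site' -> 'googlecarrears'."""
    label = domain.rsplit(".", 1)[0]
    return "".join(ch for ch in label.lower() if ch.isalnum())

def lookalike_score(domain: str) -> float:
    """Best similarity ratio (0..1) against any watched brand."""
    name = normalize(domain)
    return max(SequenceMatcher(None, name, b).ratio() for b in BRANDS)

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Close-but-not-exact matches are the typosquat signal."""
    name = normalize(domain)
    if name in BRANDS:  # an exact brand match is not a typosquat
        return False
    return lookalike_score(domain) >= threshold

print(is_suspicious("google-carrears.site"))  # -> True
print(is_suspicious("example.com"))           # -> False
```

As the story notes, attackers vary the string endlessly, which is why a similarity score generalizes better here than a static blocklist of exact domains.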
An Elasticsearch cluster exposed nearly 6 billion records, apparently accumulated from multiple
past breaches and data scraping operations. The repository contains sensitive user data,
like emails, names, phone numbers, and IPs, spanning more than 40 million unique individuals.
The leak is believed to aggregate information from many known incidents
rather than originate in a single new breach. The database was publicly accessible for weeks,
enabling anyone to query it until it was taken offline. Even though the data itself isn't
newly stolen, its centralization magnifies risk, making it a rich target for opportunistic cybercrime.
The ransomware group Qilin has publicly listed new victims after recent attacks, expanding its
victim list in the ransomware underworld. Reported targets include organizations in France
and the United States across sectors like health care, finance, and manufacturing.
Now, Qilin is known for double extortion, encrypting data and threatening to release
sensitive information unless it is paid. In most recent cases, the group claimed to have stolen
proprietary documents, employee records, and customer data, and demanded multi-million dollar
ransoms. Analysts warned that Qilin's pressure tactics are intensifying, with shorter deadlines
and more aggressive leak strategies.
Organizations are urged to verify their backups,
strengthen segmentation, and monitor for signs of reconnaissance.
Coming up after the break, Manoj Nair,
chief innovation officer at Snyk,
joins us to explore the future of AI security
and the emerging risks shaping this rapidly evolving landscape.
And AI faces the facts.
Stick around.
What's your 2am security worry?
Is it, do I have the right controls in place?
Maybe are my vendors secure?
Or the one that really keeps you up at night,
how do I get out from under these old tools and manual processes?
That's where Vanta comes in.
Vanta automates the manual work
so you can stop sweating over spreadsheets
chasing audit evidence
and filling out endless questionnaires.
Their trust management platform
continuously monitors your systems,
centralizes your data,
and simplifies your security at scale.
And it fits right into your workflows,
using AI to streamline evidence collection,
flag risks,
and keep your program audit ready
all the time.
With Vanta, you
get everything you need to move faster, scale confidently, and finally, get back to sleep.
Get started at vanta.com slash cyber. That's v-a-n-t-a dot com slash cyber.
And now a word from our sponsor, ThreatLocker, the powerful zero-trust enterprise solution
that stops ransomware in its tracks. Allowlisting is a deny-by-default
approach that makes application control simple and fast.
Ringfencing is an application containment strategy,
ensuring apps can only access the files, registry keys,
network resources and other applications they truly need to function.
Shut out cybercriminals with world-class endpoint protection from ThreatLocker.
Dave Bittner recently sat down with Manoj Nair, who is the chief innovation officer at Snyk,
to explore the future of AI security and the emerging risks that are shaping this rapidly evolving landscape.
Here's their conversation.
So today we're talking about AI security and your outlook on that.
I would love to start with some high-level stuff here if we could.
Can you give us your perspective on sort of the state of things when it comes to AI and security?
Where do you suppose we find ourselves at this moment?
I think we're at the very early innings of really understanding the security risks for AI.
I think like every wave in technology, we are about probably the first few innings of adopting the technology.
But security usually follows the actual technology innovation.
And so people are starting to understand the risks.
but kind of early in understanding, you know, what do you do about the risks?
And the risks are also, you know, just emerging as we speak.
Well, we've seen transitions over the years.
You think about folks moving to the cloud and things like that.
Does this one strike you as being different?
I think both the speed of the technology transformation and the understanding of security are different in that they are
moving at what I call AI speed.
I think the good news is there is a bigger understanding
when I talk to a lot of very large companies
and CISOs and enterprises.
I think, for example, just contrasting with the cloud era,
that took a few years, several years after cloud became mainstream,
I think, for the security professionals
to truly understand that the risks are different
and they need to own it and do something different.
I don't see that with AI.
I see a leaning in of the security
teams, wanting to know what to do different, wanting to be close to the business, enable the
business, understand the risks, understand that they cannot be just saying no. And so there's a lot
like that, that for me is a pretty marked differentiation between these two technology waves.
Well, let's dig into some of the risks. I mean, what are some of the things that are top
of mind for you in terms of the things that have your attention and your concern?
Let me break it down maybe into a couple of the key use cases that we see from where I sit
and where the company sits.
One of the, you know, in our personal lives, AI adoption is chat, video, voice, all of these
use cases.
All of us are using it every day.
Our kids are using it.
So it's all kinds of fun that we can talk about.
on the work front, code has become like the chat, like use case, right?
So there is, you know, it used to be just Copilot three years ago.
There are tons of companies here, some that are breaking every record, companies like
Cursor and Windsurf, you know. Anthropic and OpenAI themselves have introduced coding
assistants and agentic AI IDEs and agentic orchestrators.
So there's three generations of code-related innovation that has already emerged in the AI space.
And so ground zero of the risks is, you know, the magic of LLMs that we all like is, you know, they're, oh, look at this.
They thought of this unique thing.
And it's just them using all the training data that they have and manifesting results in different ways that we think is magic.
On the code side, we call it hallucinations,
and hallucinations are really bad for security.
And so one of the biggest, you know, things three years ago was education,
and today I don't find a CISO who is not aware. That is improving.
But that security risk is actually profound in that it's much higher than human-produced code.
Will it get better?
It will.
But there's also a human psychology element there,
where especially the junior developer tends to think that anything
the machine produces is accurate.
So that is a huge set of risks emerging from that,
whether it's SQL injections getting into the code
and no one is catching it until late
or packaged risks, malicious packages.
We saw some recent attacks over the last few
weeks, even, where people are creating malware
in open source packages,
and these LLMs are hallucinating packages that don't exist in a predictable way.
So they go post malware. It's called typosquatting.
So all these terms are emerging.
These risks are emerging, whether it's code or the packages or the supply chain.
So there's a new set of, I would say, coding and supply chain risks emerging with just the first use case, right?
And I'll pause there because then there's that next set of things that people are doing with LLMs.
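(Editor's aside: the package-hallucination risk described above can be sketched as a pre-install guardrail. The approved-package list and the `vet_package` helper below are hypothetical examples, not part of any real pipeline or Snyk product.)

```python
# Hedged sketch of a pre-install check against hallucinated or
# typosquatted dependency names: exact matches pass, near-misses of
# approved names are flagged, and everything else needs review.
from difflib import get_close_matches

APPROVED = {"requests", "numpy", "pandas", "cryptography"}  # assumed allowlist

def vet_package(name: str) -> str:
    """Classify a requested dependency before it is ever installed."""
    if name in APPROVED:
        return "ok"
    near = get_close_matches(name, APPROVED, n=1, cutoff=0.8)
    if near:
        # A near-miss of an approved name is the classic typosquat shape.
        return f"suspect: did you mean {near[0]}?"
    return "unknown: require manual review"

print(vet_package("requests"))    # -> ok
print(vet_package("requessts"))   # flagged as a near-miss
print(vet_package("left-pad-ai")) # unknown, held for review
```

The point of the sketch is the failure mode Nair describes: if an LLM hallucinates a package name in a predictable way, attackers can register it first, so install-time vetting has to happen before the package manager ever resolves the name.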
Well, yeah, so before we get to the next step, let me ask you this.
I mean, have we hit the point yet where it's worth it,
where it's not just aspirational to hope that these LLMs are going to actually return on their investment from a coder's point of view?
I think the productivity is there.
I do see some occasional research that says, you know, because
of so many other things that they now have to do.
If you think about a typical engineer these days,
there's research that we have done, others have done,
that says you don't spend more than,
let's say, somewhere between 10 and 30% of your time coding.
So if you truly look at the full software development lifecycle,
you've got to understand that AI has got to impact all of that.
And we're seeing innovative companies
who are trying to impact that entire software development lifecycle
all the way from design to code to test.
And until that happens,
and that happens with proper guardrails,
it is hard to find that full productivity impact that's promised.
But today there's that euphoric moment
where some of the busy work can go away,
or if you're not a regular developer,
you're now able to code again,
or this term citizen developer emerges.
But, I talked about my two roles. I love that my marketing team, some of whom have never looked at code before, are
able to use some of these vibe coding apps and create technology. That doesn't mean I'm going
to just deploy that in production without a lot of checking and security guardrails.
So the pain moves somewhere else and to truly get the full productivity benefit, which I'm
a believer that we will get, you do need to have the proper expansion of both
the technology guardrails, like AI that can secure AI might be a simple way of thinking about
it, but also AI that can test AI, AI that can do PR checks for AI.
So new innovation needs to figure out what is the new sets of pain points it's creating
and then find solutions for that too.
That's happening.
It's happening in real time.
Well, given these realities, what's your advice then for the folks who are charged with
protecting their organizations, what are your words of wisdom?
The first question I ask any CISO is, do you know what the devs are doing in terms of building
Gen AI apps and LLM apps and MCP servers into their code?
And the answer is, well, you know the answer.
And so start with visibility like everything else.
You know, where's the shadow AI happening in your organization?
I've been on calls where CTO and the CSO are both on the call, large organization, and you
ask the question, how many models do you have
in production? One goes, we don't, we don't deploy AI right now.
You can imagine, this is the security professional.
And then, you know, the other, the build side team, goes, thousands.
And so this is the dichotomy that's there.
So start with visibility.
And once you start with visibility, then just follow, you know, traditional security
principles.
Like, you're not going to say no, because the pressure is from the C-levels and the board.
Can you work with your, you know,
technology counterparts to figure out what's the proper governance model.
Do you need to use every one of the 2 million models in Hugging Face?
Does the dev team really need that access, or could you find a few secure ones?
Then, of course, as the security professional, you're going to find what tools allow me to know
which, and how do you approve and disapprove, some of these models.
And so this back and forth, that collaboration is key, visibility is key.
Finding tooling that can move at the pace of AI is key.
So find, you know, who are the providers
that are being very innovative,
because you're not going to see it
from most of your,
if you're a security professional
thinking that my traditional endpoint
or traditional network company
is adding some AI features
and that's going to help me here.
The problem is AI is really code
and it's being built
and it's being downloaded.
We call these terms inferencing,
using GPUs to run these models.
All that is very dynamic.
It's going between
training and inferencing and using data fairly quickly.
So what are the dynamic set of capabilities I need as a security professional to allow
the team to be very innovative while sensing and reacting and putting governance in place
and putting visibility in place, but also figuring out how do I really know how the model
is behaving after deployment, and bringing all of that data back to continuously
update the policy. So something like that. One is just get educated yourself. We are holding the
first industry event in San Francisco, October 22nd and 23rd. It's called TheAISecuritySummit.com.
It's free. It's us partnering with an organization called AI Engineer. They're the ones who
held the largest AI engineering conference in June, 3,000-plus AI engineers, and all the leading AI
companies were there.
So this is an industry event founded by Snyk and AI Engineer.
You've got CEOs of 10, 15 companies there, and there's a practitioner track, and there's a leader track.
I mean, finding events like this to really educate yourself on what is the state of the art of agentic gen AI development,
and then what is the best practice way to start educating and training your team to have these AI security engineers who can
be paired with AI engineers to really be able to go and drive the security of gen AI apps.
That would be my recommendation.
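(Editor's aside: the "start with visibility" step Nair recommends can be sketched as a simple source-tree scan. The SDK name list and the `inventory_ai_usage` helper below are illustrative assumptions, not a Snyk tool or an exhaustive catalog of AI libraries.)

```python
# Hedged sketch: walk a repository and inventory which files import
# common LLM SDKs, as a first-pass shadow-AI census.
import re
from pathlib import Path

# Assumed, non-exhaustive list of SDK module names to look for.
AI_IMPORTS = re.compile(r"^\s*(?:import|from)\s+(openai|anthropic|langchain|transformers)\b")

def inventory_ai_usage(root: str) -> dict[str, list[str]]:
    """Map each detected SDK name to the files that import it."""
    found: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        for line in path.read_text(errors="ignore").splitlines():
            m = AI_IMPORTS.match(line)
            if m:
                found.setdefault(m.group(1), []).append(str(path))
    return found
```

Running `inventory_ai_usage(".")` over a codebase gives the kind of baseline the interview describes: a concrete answer to "how many models and LLM integrations do we actually have?" before any governance conversation starts.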
That was Manoj Nair sitting down with Dave Bittner to explore the future of AI security
and the emerging risks shaping this rapidly evolving landscape.
At Thales, they know cybersecurity can be tough and you can't protect everything.
But with Thales, you can secure what matters most.
With Thales's industry-leading platforms, you can protect critical applications,
data and identities, anywhere and at scale with the highest ROI.
That's why the most trusted brands and largest banks, retailers, and healthcare companies in the world
rely on Thales to protect what matters most.
Applications, data, and identity.
That's Thales.
T-H-A-L-E-S.
Learn more at thalesgroup.com slash cyber.
With Amex Platinum, access to exclusive Amex pre-sale tickets can score you a spot trackside.
So being a fan for life turns into the trip of a lifetime.
That's the powerful backing of Amex.
Pre-sale tickets for future events subject to availability and varied by race.
Terms and conditions apply.
at amex.ca slash yNX.
Facial recognition is becoming part of everyday life,
from unlocking our phones to verifying our identities online.
But for millions of people who are living with facial differences,
that technology can be more of a barrier than a convenience.
There's new reporting from Wired that reveals that some individuals
are being locked out of their essential services,
like renewing driver's licenses,
accessing financial accounts, or even just verifying their identity, simply because the systems
can't recognize their faces. Experts say that the issue stems from algorithms that weren't trained
with enough diversity, leaving people with craniofacial conditions or other differences
literally unseen by the technology. And advocates warn that this isn't just some technological
glitch. It is a solid reminder that when AI systems fail to include everyone, they can deepen
long-standing inequities and isolation.
They're calling for more inclusive design and human support
when automated systems fall short.
It's proof that even advanced AI
can sometimes miss what's right in front of it.
And that's The CyberWire Daily, brought to you by N2K CyberWire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Hey, CyberWire listeners, as we near the end of the year, it's the perfect time to reflect on your company's achievements and set new goals to boost your brand across the industry next year.
And we would love to help you achieve those goals.
We've got some unique end-of-year opportunities, complete with special incentives to launch 2026.
So tell your marketing team to reach on out.
Send us a message to sales at thecyberwire.com
or visit our website so we can connect about building a program to meet your goals.
We'd love to know what you think of our podcasts.
Your feedback ensures we deliver the insights that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like the show, please leave a rating and review in your podcast app.
Please also fill out the survey in the show notes or send an email to Cyberwire at
N2K.com.
N2K's senior producer is Alice Carruth.
Our producer is Liz Stokes.
We're mixed by Elliot Peltzman and Tré Hester,
with original music by Elliot Peltzman.
Our executive producer is Jennifer Eiben.
Peter Kilpe is our publisher,
and I'm your host, Maria Varmazas,
in this week for Dave Bittner.
Thank you for listening.
We'll see you tomorrow.
Cyber Innovation Day is the premier event for cyber startups,
researchers and top VC firms building trust into tomorrow's digital world.
Kick off the day with unfiltered insights and panels on securing tomorrow's technology.
In the afternoon, the eighth annual Data Tribe Challenge takes center stage.
as elite startups pitch for exposure, acceleration, and funding.
The Innovation Expo runs all day, connecting founders, investors, and researchers around
breakthroughs in cybersecurity. It all happens November 4th in Washington, D.C.
Discover the startups building the future of cyber.
Learn more at cid.datatribe.com.
