CyberWire Daily - One rule to rule them all.
Episode Date: December 12, 2025

A new executive order targets states' AI regulations, while the White House shifts course on an NSA deputy director pick. The UK fines LastPass over inadequate security measures. Researchers warn of active attacks against Gladinet CentreStack instances. OpenAI outlines future cybersecurity plans. MITRE ranks the top 25 vulnerabilities of 2025. CISA orders U.S. federal agencies to urgently patch a critical GeoServer vulnerability. An anti-piracy coalition shuts down one of India's most popular illegal streaming services. Our guest, Mark Lance, Vice President, DFIR & Threat Intelligence, GuidePoint Security, unpacks purple team tabletop exercises to prepare for AI-generated attacks. Hackers set their sights on DNA.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Mark Lance, Vice President, DFIR & Threat Intelligence, GuidePoint Security, is discussing purple team tabletop exercises to prepare for AI-generated attacks.

Selected Reading
Trump Signs Executive Order to Block State AI Regulations (SecurityWeek)
Announced pick for No. 2 at NSA won't get the job as another candidate surfaces (The Record)
LastPass Data Breach — Insufficient Security Exposed 1.6 Million Users (Forbes)
Gladinet CentreStack Flaw Exploited to Hack Organizations (SecurityWeek)
OpenAI lays out its plan for major advances in AI cybersecurity features (SC Media)
MITRE Releases 2025 List of Top 25 Most Dangerous Software Vulnerabilities (SecurityWeek)
CISA orders feds to patch actively exploited Geoserver flaw (Bleeping Computer)
MKVCinemas streaming piracy service with 142M visits shuts down (Bleeping Computer)
The Unseen Threat: DNA as Malware (BankInfoSecurity)

Share your feedback. What do you think about CyberWire Daily?
Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.

Want to hear your company in the show? N2K CyberWire helps you reach the industry's most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
We've all been there.
You realize your business needs to hire someone yesterday.
How can you find amazing candidates fast?
Well, it's easy.
Just use Indeed.
When it comes to hiring, Indeed is all you need.
Stop struggling to get your job post noticed. Indeed's Sponsored Jobs helps you stand out and hire fast. Your post jumps to the top
of search results, so the right candidates see it first. And it works. Sponsored jobs on Indeed
get 45% more applications than non-sponsored ones. One of the things I love about Indeed is how
fast it makes hiring. And yes, we do actually use Indeed for hiring here at N2K Cyberwire. Many
of my colleagues here came to us through Indeed.
Plus, with sponsored jobs, there are no subscriptions, no long-term contracts.
You only pay for results.
How fast is Indeed?
Oh, in the minute or so that I've been talking to you, 23 hires were made on Indeed,
according to Indeed data worldwide.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners to this show will get a $75 sponsored job credit to get your jobs
more visibility at Indeed.com slash cyberwire.
Just go to indeed.com slash cyberwire right now
and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash cyberwire.
Terms and conditions apply.
Hiring?
Indeed is all you need.
A new executive order targets states' AI regulations.
While the White House shifts course on an NSA deputy director pick, the U.K. fines LastPass over inadequate security measures.
Researchers warn of active attacks against Gladinet CentreStack instances.
OpenAI outlines future cybersecurity plans.
MITRE ranks the top 25 vulnerabilities of 2025.
CISA orders U.S. federal agencies to urgently patch a critical GeoServer vulnerability.
An anti-piracy coalition shuts down one of India's most popular illegal streaming services.
Our guest is Mark Lance, vice president for DFIR and threat intelligence at GuidePoint Security,
unpacking purple team tabletop exercises to prepare for AI-generated attacks.
And hackers set their sights on DNA.
It's Friday, December 12, 2025.
I'm Dave Bittner, and this is your Cyberwire Intel briefing.
Thanks for joining us here today. Happy Friday. It is great as always to have you with us.
President Donald Trump signed an executive order aimed at preventing U.S. states from creating
their own artificial intelligence regulations, arguing that a fragmented regulatory landscape
could hinder innovation and weaken America's ability to compete with China.
Trump said requiring companies to navigate approvals in all 50 states would discourage
investment and slow development. The order directs the Attorney General to form a task force to
challenge state AI laws and instructs the Commerce Department to identify regulations deemed problematic.
It also threatens to withhold certain federal funds, including broadband grants, from states that enact
AI rules. The move comes amid bipartisan calls in Congress and pressure from civil liberties and
consumer groups for stronger AI oversight. Several states, including California, Colorado,
Utah, and Texas have already passed AI laws focused on data limits, transparency, and
discrimination risks. Supporters say such measures address real harms, while the administration
argues only the most burdensome regulations should be targeted, leaving room for protections
like child safety. Elsewhere, the Trump administration
has reversed its decision on who will serve as deputy director of the National Security Agency,
withdrawing its earlier pick amid internal opposition and pressure from far-right conservatives.
Joe Francescon, announced in August for the number two role,
was recently informed he would no longer be appointed, according to multiple sources.
Francescon, a former NSA analyst and National Security Council official,
never began the job and faced criticism from conservative activists,
as well as resistance within the administration.
He has since declined alternative NSA roles and moved to the private sector.
The White House now plans to name Tim Kosiba,
a former senior NSA and FBI official, to the position.
Kosiba reportedly has backing from Trump allies
and recently completed a polygraph at NSA headquarters.
The change adds to ongoing leadership instability at the NSA, which remains without a Senate-confirmed director
and faces additional senior departures in the coming weeks.
The U.K. Information Commissioner's Office has fined LastPass about $1.6 million over a 2022 data breach
that affected roughly 1.6 million U.K. users. Regulators concluded that LastPass
failed to implement sufficiently robust technical and security measures,
allowing a hacker to gain unauthorized access to a backup database tied to a third-party cloud
storage service. While there is no evidence that customer passwords were decrypted, the ICO said
the company nonetheless failed users who trusted it to protect sensitive information.
LastPass, which serves more than 20 million customers and 100,000 businesses globally,
remains a recommended security tool despite the incident. Industry experts described
the fine as a watershed moment, highlighting that modern breaches often stem from identity
compromise, governance failures, and supplier risk, rather than weak passwords alone.
Huntress is warning of active attacks against Gladinet CentreStack instances, where attackers
exploit a newly identified cryptography flaw to steal machine keys and gain remote code
execution. The issue stems from CentreStack reusing static cryptographic strings, allowing
attackers to access the web.config file, forge trusted requests, and abuse ASPX ViewState
deserialization. Huntress has observed nine impacted organizations across multiple sectors. No CVE has
been assigned. Gladinet has fixed the issues, and organizations are urged to update immediately
and review indicators of compromise.
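For defenders, one concrete takeaway from this class of flaw is to audit for hardcoded ASP.NET machineKey material, since static keys are what let an attacker forge trusted ViewState payloads. The sketch below is purely illustrative (not Huntress's or Gladinet's tooling); the regex and file layout are assumptions about typical web.config formatting.

```python
import re
from pathlib import Path

# Static machineKey material hardcoded in web.config lets an attacker sign and
# encrypt a malicious ViewState that the server will deserialize and trust.
# Illustrative audit sketch only; the attribute pattern is an assumption.
MACHINE_KEY_RE = re.compile(
    r'<machineKey[^>]*(?:validationKey|decryptionKey)\s*=\s*"([0-9A-Fa-f]{16,})"'
)

def find_static_machine_keys(root: str) -> list[tuple[Path, str]]:
    """Return (path, key) pairs for web.config files containing hardcoded keys."""
    hits: list[tuple[Path, str]] = []
    for cfg in Path(root).rglob("web.config"):
        text = cfg.read_text(errors="ignore")
        for match in MACHINE_KEY_RE.finditer(text):
            hits.append((cfg, match.group(1)))
    return hits
```

Any hit is a candidate for rotating the key material and moving it out of source-controlled configuration.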
OpenAI has outlined plans to treat all future AI models
as having potentially high cybersecurity capabilities,
acknowledging they could both aid defenders and be misused by attackers.
Under its preparedness framework,
such models might automate vulnerability discovery or cyber operations,
prompting a defense-in-depth approach.
Rather than limiting access or knowledge,
OpenAI plans to rely on targeted training, red-teaming, and system-wide monitoring to curb abuse.
Models are designed to refuse or safely respond to malicious requests,
with suspicious activity blocked, downgraded, or escalated for enforcement.
OpenAI also plans a trusted access program,
offering enhanced capabilities to qualified cybersecurity defenders
and a frontier risk council of experts.
While OpenAI cites improving model performance as evidence of advancing capabilities, outside analysts caution against overstating current AI-driven threats.
MITRE has published its 2025 CWE Top 25, ranking the past year's most dangerous software weaknesses.
Cross-site scripting topped the list, followed by SQL injection and cross-site request forgery. Missing authorization
climbed to fourth, while out-of-bounds write placed fifth. The list adds six new entries,
including multiple buffer overflow flaws and access control weaknesses, while several issues
dropped off due to methodology changes. CISA says the updated list is designed to help reduce
vulnerabilities and costs. The agency urges developers and security teams to use it to guide
secure-by-design practices, testing, and vendor evaluations.
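As a reminder of what the top entries look like in practice, here is a minimal illustration of CWE-89, SQL injection, and the parameterized-query pattern that closes it. This is a generic sketch against an in-memory database, not an example drawn from the CWE list itself.

```python
import sqlite3

# SQL injection in one picture: string-built queries execute attacker-supplied
# SQL, while parameterized queries bind the input strictly as data.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: name is spliced directly into the SQL text.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized: the driver binds name as a value, never as SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

The classic payload `x' OR '1'='1` returns every row through the unsafe path and nothing through the safe one, which is the whole point of secure-by-design query APIs.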
CISA has ordered U.S. federal agencies to urgently patch a critical GeoServer vulnerability
that's being actively exploited in the wild.
The flaw is an unauthenticated XML external entity or XXE vulnerability,
affecting multiple GeoServer versions.
By abusing weak XML input handling in a specific GetMap endpoint,
attackers can retrieve arbitrary files, trigger denial-of-service conditions,
access sensitive data, or enable server-side request forgery.
CISA has added the flaw to its known exploited vulnerabilities catalog
and directed federal civilian executive branch agencies to remediate by January 1st.
While the mandate applies only to federal agencies, CISA strongly urges all organizations
running GeoServer to patch immediately, noting widespread exposure and active exploitation.
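The defense against this class of bug generalizes beyond GeoServer: untrusted XML should never be allowed to carry DTD or entity declarations, because those are the mechanism XXE abuses for file reads and SSRF. A hedged sketch of such a gate (the rejection policy here is an illustrative assumption, not GeoServer's actual fix):

```python
import io
import xml.etree.ElementTree as ET

def parse_untrusted_xml(data: bytes) -> ET.Element:
    """Parse XML from an untrusted client, refusing DTD/entity declarations.

    XXE payloads smuggle <!DOCTYPE>/<!ENTITY> declarations that resolve to
    local files or internal URLs; rejecting them up front closes off that
    class of file-read, denial-of-service, and SSRF issues.
    """
    if b"<!DOCTYPE" in data or b"<!ENTITY" in data:
        raise ValueError("DTD or entity declarations are not allowed")
    return ET.parse(io.BytesIO(data)).getroot()
```

In production, hardened parsers such as defusedxml (or parser features that disable external entity resolution) do the same job more robustly than a byte scan.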
An anti-piracy coalition has shut down MKVCinemas,
one of India's most popular illegal streaming services,
cutting off access to free movies and TV shows used by millions.
The operation was led by the Alliance for Creativity and Entertainment, or Ace,
backed by more than 50 major studios and networks, including Disney, Netflix, and Warner Brothers.
Ace identified the operator in Bihar, India, who agreed to cease operations
and transfer 25 related domains,
now redirected to a legal streaming portal.
The coalition also dismantled a file cloning tool
widely used in India and Indonesia
to distribute pirated content via cloud storage.
Ace says the takedown underscores
its continued collaboration with global law enforcement
to disrupt large-scale piracy networks.
Coming up after the break, my conversation with Mark Lance from GuidePoint Security.
We're discussing Purple Team Tabletop exercises to prepare for AI-generated attacks.
And hackers set their sights on DNA.
Stay with us.
Most environments trust far more than they should, and attackers know it.
ThreatLocker solves that by enforcing default deny at the point of execution.
With ThreatLocker Allowlisting, you stop unknown executables cold.
With Ringfencing, you control how trusted applications behave,
and with ThreatLocker DAC, Defense Against Configurations,
you get real assurance that your environment is free of misconfigurations
and clear visibility into whether you meet compliance standards.
ThreatLocker is the simplest way to enforce zero-trust principles without the operational pain.
It's powerful protection that gives CISOs real visibility, real control, and real peace of mind.
ThreatLocker makes zero trust attainable, even for small security teams.
See why thousands of organizations choose ThreatLocker to minimize alert fatigue,
stop ransomware at the source, and regain control over their environments.
Schedule your demo at ThreatLocker.com slash N2K today.
AI is transforming every industry, but it's also creating new risks that traditional frameworks can't keep up with.
Assessments today are fragmented, overlapping, and often specific to industries, geographies, or regulations.
That's why Black Kite created the BKGA-3 AI Assessment Framework to give cybersecurity and risk teams a unified,
evolving standard for measuring AI risk across their own organizations and their vendors' AI use.
It's global, research-driven, built to evolve with the threat landscape and free to use.
Because Black Kite is committed to strengthening the entire cybersecurity community.
Learn more at Blackkite.com.
Mark Lance is vice president for DFIR and threat intelligence with GuidePoint Security.
I recently sat down with him to discuss Purple Team Tabletop exercises to help prepare for AI-generated attacks.
Yeah, we've seen this to be a huge growth area over the last couple years,
where more and more people are seeing the benefits and starting to perform tabletop exercises.
We think a lot of that is driven internally from, you know, management teams,
even technical teams trying to drive awareness and visibility upwards,
but also board directives.
Could be insurance and compliance requirements are driving people to do these tabletops.
In general, we recommend that a tabletop is performed at least once a year
across your senior leadership team,
and then additionally a separate one across your technical
or maybe management teams.
Well, for folks who have never been part of one,
can you describe it for us?
What typically goes into a tabletop exercise?
A tabletop is going to be a hypothetical incident
where you are taking and leveraging this incident
to educate the participants in the audience
on the different types of things
that could transpire during an incident.
And while it's used as an opportunity for education, it's also about enablement,
making sure that people know how to leverage their existing incident response plans,
knowing their roles and responsibilities during those incidents.
And what are those key decision points that could occur?
But all in all, the idea is to take this hypothetical incident that's applicable to you
and your organization and get people to understand how they would react by executing that
with an audience and having them make the decisions on what steps they would take
based upon different injects and things that would happen during that incident process.
Well, of course, the hot topic these days is AI,
and how has that changed the way people are framing their tabletops these days?
AI, similar to any other threat, just needs to be accounted for
when putting together these hypothetical incidents or these tabletop exercises.
you know, different organizations have different types of threats that they should be aware of and
conscious of that are the most relevant to them in their organization and in their business.
And, you know, AI is one of the things that obviously we're very conscientious about because we know that,
you know, cybercriminals and threat actors are leveraging AI in certain technologies in different ways.
And so accounting for those during these tabletops, whether that is more efficient, you know,
phishing attempts, whether it is custom code created, whether it's accessing
AI infrastructure that you're leveraging and using within your own environment.
Those are all things that should be or could be accounted for when you're creating
and developing these unique scenarios to test somebody's ability to respond to them.
What's your advice for organizations figuring out what the cadence should be for them?
How often they should do this?
I think the cadence for these tabletop exercises and crisis simulations is contingent upon an organization's maturity.
A lot of businesses, this could be the very first time they're doing it, and maybe they don't have an existing incident response plan.
And so they're leveraging this as an opportunity to vet out who could be doing what during an incident and who's going to be responsible,
so they can then go develop that incident response plan
because they don't have that level of sophistication yet.
Now, there are other organizations that might have existing incident response plans,
playbooks, specific runbooks, and it's more of an opportunity to test those out,
and you can perform them more frequently because you have the established plans
and you're just seeing how people would respond to certain types of incidents.
And so I think the level of maturity an organization has will determine what the
purpose and intent behind it is, but I also think what's very important is making sure that you're
targeting specific audiences based on the type of conversation that's occurring and the talk
track for the tabletop itself. Well, related to that, how do you go about deciding who should be
included in the exercises? It really varies based upon the audience and the intent behind the tabletop
and the exercise. For a technical tabletop, as an example,
it could be more around how are things identified? How did you identify a certain type of
incident or an incident has occurred in your environment? How are you then tracking that
incident? How are you escalating it? How are you sharing information internally about that?
Who's determining a severity in escalations and when that should go up to senior leadership?
Now, the audience for more of a senior leadership or, you know, executive tabletop is going to be less about the intricacies involved with the incident itself and going to be more about decisions and business-related decisions that need to be made.
You know, you are going to get high-level details about the incident, not necessarily having to know the intricacy and technical details involved in how it's tracked, but instead, you know, who's going to be communicating with your cyber insurance carrier?
Are we going to engage external counsel?
Is there a necessity to shut down certain areas of the business?
Who's drafting internal and external notifications and letting people know that we are being
potentially impacted by an incident?
And when should those occur?
And so really the audience can drive the intent, but the intent can also drive the audience.
But realistically, we do break apart those sessions into different audiences,
and keeping the audience specific to their role is very important.
You know, Mark, I know as part of your role there at guidepoint, you help organizations run
their tabletop exercises. Can you give us a kind of a peek into that world? I'm specifically
curious, like, do you find you have to bring some people along? I guess I'm asking, are some
folks skeptical when they walk into that room that they're not sure what they're getting into,
or is this the best use of my time?
Yeah, absolutely.
We have a lot of clients who might not necessarily know what they're getting into,
which, again, dependent upon the intent of the exercise,
in most circumstances, we actually recommend that clients aren't prepared
with knowing the full incident scenario.
Because in a real incident situation,
you're not going to know everything about the incident up front.
you're going to be dealing with it reactively versus, you know, having all of the details and
being able to know that, well, eventually this is going to happen. Instead, information gets trickled in
during a real incident response effort. And so we try to simulate what a real incident would
be like. And a lot of that is, you know, having proper preparation with policies and processes
and then being able to navigate those based on the variables
and the details that are shared with you as part of the incident.
So realistically, when we're going to develop these things,
it's one, who's the audience?
Two, what is the intent behind the tabletop?
Is it to test plans?
Is it for educational purposes to teach people about different types of threats?
Is it enablement for the team to further understand their roles and responsibilities?
and then developing a scenario that's going to test different areas of the business
and help them establish some muscle memory.
And then also to educate them on, okay, well, here's where there were deficiencies in your process
and you need to potentially make improvements to be more effective in the future if this was
a real scenario.
And what's it like after the fact?
Is this a revelation for some of the people to have this real worldview of
of the possibilities?
It is. You know, a lot of our clients walk away from these things saying,
holy crap, are these the kinds of things that could really happen to us
and that we need to be thinking about? And the answer is generally yes.
You know, the intent isn't to necessarily scare people,
but it is to bring awareness to them so that they know the details
about the true impacts of potential incidents to their business and their organization.
What are your recommendations for folks who want to go down this path, who maybe haven't done this
before? What's a good place to get started? I think understanding your level of maturity,
one, is the first piece. Have you done these types of things in the past? If you haven't,
I think in general, everybody should be performing tabletop exercises. They
should be, you know, actively, you know, having these simulations for different target audiences,
you know, periodically. Two, you know, do we have the capability to potentially try to do this
or want to try to do this internally ourselves? Or do we want to bring in outside help with the
expertise and experience of knowing what some of the pitfalls or speed bumps might be? We do see
where a lot of people will, you know, attempt to perform these. And that's great. At least they're
trying, but then they do realize sometimes there is some outside experience that can be brought
to the table. And so looking for others and consulting them on the opportunity for that experience
in those services. And then I think that, you know, three is just making sure that you are
leveraging best practices like targeting specific audiences, building custom scenarios
that are going to be specific to your environment, but also making sure that they're relevant
and things that could really happen
because a lot of times you'll lose the audience
and people will say,
oh, that couldn't really happen to us.
So making sure that they are viable
and real things that could potentially impact
your specific environment and infrastructure.
And then the last piece is making sure
that you're taking the lessons learned from that
and applying them.
These are learning opportunities.
They're not necessarily a test,
but they're an opportunity to say,
hey, we were extremely efficient here and here are things we did very well, but maybe we're missing
some of these policies or processes here. Here's where there was some confusion and people didn't
know who should be handling certain actions or activities. We also weren't sure who our third parties are
or we weren't tracking that information. So making sure that you're leveraging the lessons learned from
that so that you can grow and be more efficient and effective in the following exercises.
That's Mark Lance from GuidePoint Security.
And finally, cybersecurity has officially crossed the Rubicon, and it did so carrying a pipette.
Researchers have shown that malware no longer needs phishing emails or poisoned downloads.
It can hitch a ride inside synthetic DNA.
In a University of Washington demonstration, carefully crafted DNA sequences were shown to trigger
exploits when processed by sequencing software, turning lab workflows into attack paths.
Once sequenced, biological data moves through cloud platforms and custom code, where hidden
instructions could corrupt data or enable remote access.
For sectors like genomics, biotech, health care, and agriculture, this raises uncomfortable
questions about data integrity, intellectual property, and national biosecurity.
Traditional controls barely notice the threat because DNA looks like biology, not malware.
The takeaway is simple and unsettling.
Genomic pipelines are now part of the attack surface.
The genome is no longer just life's blueprint, it is executable input. And yes, that means your
lab bench just joined the threat model. Now, if you'll excuse me, I'm going to go watch
the latest episode of Pluribus.
And that's the CyberWire.
Be sure to check out our daily briefing at thecyberwire.com.
Be sure to check out this weekend's Research Saturday.
In my conversation with Daniel Schwalbe,
DomainTools' Head of Investigations and CISO,
we're sharing their work inside the Great Firewall.
That's Research Saturday.
Check it out.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights
that keep you a step ahead
in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app.
Please also fill out the survey in the show notes or send an email to cyberwire@n2k.com.
N2K's senior producer is Alice Carruth.
Our Cyberwire producer is Liz Stokes.
We're mixed by Trey Hester with original music by Elliot Peltzman.
Our executive producer is Jennifer Eiben.
Peter Kilpe is our publisher, and I'm Dave Bittner.
Thanks for listening.
We'll see you back here next week.
