CyberWire Daily - Where encryption meets executive muscle.
Episode Date: December 19, 2025

Trump signs the National Defense Authorization Act for 2026. Danish intelligence officials accuse Russia of orchestrating cyberattacks against critical infrastructure. LongNosedGoblin targets government institutions across Southeast Asia and Japan. A new Android botnet infects nearly two million devices. WatchGuard patches its Firebox firewalls. Amazon blocks more than 1,800 North Korean operatives from joining its workforce. CISA releases nine new Industrial Control Systems advisories. The U.S. Sentencing Commission seeks public input on deepfakes. Prosecutors indict 54 in a large-scale ATM jackpotting conspiracy. Our guest is Nitay Milner, CEO of Orion Security, discussing the issue of data leaking into AI tools and how CISOs must prioritize DLP. Riot Games finds cheaters hiding in the BIOS.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Nitay Milner, CEO of Orion Security, discusses the issue of data leaking into AI tools and how CISOs must prioritize DLP.

Selected Reading
Trump signs defense bill allocating millions for Cyber Command, mandating Pentagon phone security (The Record)
Denmark blames Russia for destructive cyberattack on water utility (Bleeping Computer)
New China-linked hacker group spies on governments in Southeast Asia, Japan (The Record)
'Kimwolf' Android Botnet Ensnares 1.8 Million Devices (SecurityWeek)
New critical WatchGuard Firebox firewall flaw exploited in attacks (Bleeping Computer)
Amazon blocked 1,800 suspected DPRK job applicants (The Register)
CISA Releases Nine Industrial Control Systems Advisories (CISA.gov)
U.S. Sentencing Commission seeks input on criminal penalties for deepfakes (CyberScoop)
US Charges 54 in Massive ATM Jackpotting Conspiracy (Infosecurity Magazine)
Riot Games found a motherboard security flaw that helps PC cheaters (The Verge)

Share your feedback. What do you think about CyberWire Daily? Please take a few minutes to share your thoughts with us by completing our brief listener survey. Thank you for helping us continue to improve our show.

Want to hear your company in the show? N2K CyberWire helps you reach the industry’s most influential leaders and operators, while building visibility, authority, and connectivity across the cybersecurity community. Learn more at sponsor.thecyberwire.com.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
Ever wished you could rebuild your network from scratch to make it more secure, scalable, and simple?
Meet Meter, the company reimagining enterprise networking from the ground up.
Meter builds full-stack, zero-trust networks, including hardware, firmware, and software,
all designed to work seamlessly together.
The result: fast, reliable, and secure connectivity
without the constant patching, vendor juggling, or hidden costs.
From wired and wireless to routing, switching, firewalls, DNS security, and VPN,
every layer is integrated and continuously protected in one unified platform.
And since it's delivered as one predictable monthly service,
you skip the heavy capital costs and endless upgrade cycles.
Meter even buys back your old infrastructure to make switching effortless.
Transform complexity into simplicity and give your team time to focus on what really matters,
helping your business and customers thrive.
Learn more and book your demo at meter.com slash cyberwire.
That's M-E-T-E-R dot com slash cyberwire.
Trump signs the National Defense Authorization Act for 2026.
Danish intelligence officials accuse Russia of orchestrating cyber attacks against critical infrastructure.
LongNosedGoblin targets government institutions across Southeast Asia and Japan.
A new Android botnet infects nearly 2 million devices.
WatchGuard patches its Firebox firewalls. Amazon blocks more than 1,800 North Korean operatives from joining its workforce.
CISA releases nine new industrial control systems advisories. The U.S. Sentencing Commission
seeks public input on deepfakes. Prosecutors indict 54 in a large-scale ATM jackpotting
conspiracy. Our guest is Nitay Milner, CEO of Orion Security, discussing the issue of data leaking
into AI tools and how CISOs must prioritize DLP.
And Riot Games finds cheaters hiding in the BIOS.
It's Friday, December 19, 2025.
I'm Dave Bittner, and this is your Cyberwire Intel briefing.
Thanks for joining us here today.
It is great to have you with us.
President Donald Trump signed a $901 billion National Defense Authorization Act for
2026 that includes major cybersecurity provisions, and it passed with bipartisan support.
The bill authorizes record defense spending and preserves the long-debated dual-hat leadership of U.S. Cyber Command and the
National Security Agency by barring Pentagon funds from weakening the Cyber Command commander's
authority. That provision reinforces a structure Trump previously considered splitting, but ultimately
abandoned. Trump also nominated Army Lieutenant General Joshua Rudd to lead both organizations.
The NDAA allocates roughly $417 million to Cyber Command for digital operations, other activities, and headquarters maintenance.
It mandates secure encrypted mobile devices for senior Defense Department leaders following Inspector General criticism of insecure communications.
The bill also requires reviews of foreign-sourced infrastructure components and orders the Pentagon to streamline its cybersecurity requirements.
Danish intelligence officials have accused Russia of orchestrating cyber attacks against Denmark's critical infrastructure as part of a broader hybrid campaign against Western countries.
The Danish Defense Intelligence Service said two pro-Russia groups, Z-Pentest and NoName057(16), carried out attacks on water utilities and launched DDoS attacks ahead of local elections, aiming to create insecurity
and punish Denmark for supporting Ukraine.
Officials said the cyberactivity is part of a wider influence effort
to undermine Western backing of Kyiv, with elections used to attract public attention.
Denmark's defense minister called the attacks unacceptable
and said Russia's ambassador would be summoned.
The warning aligns with broader European concerns,
echoed by incidents in Norway and a recent joint advisory from U.S. and European
agencies about pro-Russian hacktivist threats to global critical infrastructure.
Researchers have identified a previously unknown China-aligned hacking group
targeting government institutions across Southeast Asia and Japan.
The group, dubbed LongNosedGoblin by ESET, has been active since at least September
2023 and was uncovered during an investigation of a Southeast Asian government network.
The hackers abused Windows Group Policy, a legitimate administrative tool to deploy malware and move laterally.
Their tools include NoseyHistorian, which harvests browser data to identify high-value victims, and NoseyDoor, a selective backdoor suggesting carefully chosen targets.
Researchers warn that a newly identified Android botnet dubbed Kimwolf has infected more than
1.8 million devices and can launch massive DDoS attacks.
Chinese firm XLab says the botnet mainly targets Android TV set-top boxes and focuses on traffic
proxying, but issued over 1.7 billion attack commands in late November. Kimwolf is linked to the
Turbo Mirai-class AISURU botnet and may have powered recent attacks approaching 30 terabits per second. The malware
uses encrypted DNS to evade detection and operates on globally distributed infrastructure.
WatchGuard has issued an urgent warning for customers to patch a critical, actively exploited
remote code execution vulnerability affecting its Firebox firewalls. The flaw impacts
devices running multiple versions of Fireware OS and allows unauthenticated attackers to execute
malicious code remotely through low-complexity attacks.
WatchGuard cautions that devices may remain vulnerable even after certain VPN settings are removed.
The company said it has observed active exploitation in the wild and released indicators of compromise,
urging affected users to rotate credentials if compromise is suspected.
Temporary mitigations are available for organizations unable to patch immediately.
The advisory follows a pattern of similar WatchGuard firewall vulnerabilities that were
widely exploited and later flagged by CISA.
Amazon says it has blocked more than 1,800 suspected North Korean operatives from joining
its workforce since April 24, underscoring how widespread the so-called fake IT worker scam
has become. Chief Security Officer Steve Schmidt said applications linked to North Korea
rose 27% quarter over quarter this year. The scheme involves real
developers using stolen or fabricated identities, AI-generated resumes, and even deepfakes
to secure remote jobs, then funneling wages back to the regime.
Some attackers also steal sensitive data or extort employers.
Amazon uses AI screening and human verification to detect the fraud, but Schmidt warned
tactics are evolving, including hijacked LinkedIn accounts and U.S.-based laptop farms that
disguise overseas workers as domestic employees.
CISA has released nine new industrial control systems advisories covering security vulnerabilities
across a wide range of widely used operational technology products.
The advisories address systems from major vendors including Inductive Automation,
Schneider Electric, National Instruments, Mitsubishi Electric, Siemens, Advantech, Rockwell
Automation, and Axis Communications.
Affected products range from SCADA platforms and distributed control systems
to industrial networking stacks and camera management software.
CISA urged asset owners, operators, and administrators to review the advisories
for detailed technical information and recommended mitigations to reduce risk in industrial
and critical infrastructure environments.
The U.S. Sentencing Commission is proposing preliminary sentencing guidelines under
the Take It Down Act, a bipartisan law passed earlier this year to combat non-consensual deepfake
pornography. The law makes it a federal crime to distribute real or AI-generated intimate imagery
without consent and requires platforms to remove reported content within 48 hours, with
enforcement authority given to the Federal Trade Commission. It outlines prison sentences of up to
two years for offenses involving adults and up to three years for offenses involving minors, with the Commission now refining
penalties by offense type. Proposed updates clarify definitions tied to online services and
intent, including abuse or sexual exploitation. The Commission is seeking public comment on
the guidelines through February 16, 2026, as concern grows over increasingly realistic AI-generated
media.
U.S. prosecutors have indicted 54 individuals for their alleged roles in a large-scale
ATM jackpotting conspiracy involving malware and coordinated cash theft.
A federal grand jury in Nebraska returned two indictments,
one in October charging 32 people, and another in December charging 22 more.
Authorities allege the scheme used Ploutus malware to force ATMs
to dispense cash, resulting in losses of about $40.7 million as of August 2025.
The indictment links the activity to Tren de Aragua, a Venezuelan criminal syndicate
designated as a foreign terrorist organization, accusing it of laundering proceeds to fund
broader criminal operations.
Investigators say the group conducted surveillance, physically accessed ATMs to install malware,
and used techniques designed to evade detection and obscure evidence.
If convicted, defendants face sentences ranging from decades to life in prison.
Coming up after the break, my conversation with Nitay Milner, CEO of Orion Security.
We're discussing issues with data leaking into AI tools.
And Riot Games finds cheaters hiding in the BIOS.
Stick around.
What's your 2 a.m. security worry?
Is it, do I have the right controls in place?
Maybe are my vendors secure?
Or the one that really keeps you up
at night: how do I get out from under these old tools and manual processes? That's where Vanta comes
in. Vanta automates the manual work so you can stop sweating over spreadsheets, chasing audit
evidence, and filling out endless questionnaires. Their trust management platform continuously
monitors your systems, centralizes your data, and simplifies your security at scale. And it
fits right into your workflows, using AI to streamline evidence collection, flag risks, and keep your
program audit ready all the time. With Vanta, you get everything you need to move faster,
scale confidently, and finally get back to sleep. Get started at Vanta.com slash cyber. That's V-A-N-T-A.com
slash cyber.
Nitay Milner is CEO of Orion Security.
We recently discussed the issues with data leaking into AI tools
and how CISOs must prioritize DLP.
So traditional DLP tools, legacy tools like Forcepoint and Symantec,
were all about creating policies to make sure you protect your data.
So the mission was to protect sensitive data.
For banks, it can be credit cards; for healthcare providers,
it can be PHI data.
and what you had to do with the traditional tools
is you had to define a policy
for every use case of data that you want to protect.
For example, you don't want credit cards
to get outside to external recipients over emails.
So you had to sit down and create a policy,
a rule basically, for every one of these use cases.
And then what usually would happen
is you had to tweak it over and over again
because you'd get a lot of false positives
catching all sorts of data that looks like credit card data
or data that looks like PHI data.
And then you had thousands of alerts, of false positives.
And these tools were known for being, to say the least,
not very effective for enterprise companies.
And so what's changed over time
in terms of the state of the art
when it comes to DLP tools today?
So over time, during the past years,
technologies like UEBA
tried to basically reinvent DLP with anomaly detection.
But that didn't really work well.
In other industries like EDR, antiviruses,
anomaly detection makes a lot of sense
because you can predict how, for example,
processes should behave.
But it's really, really hard to understand
how people are going to behave.
So anomaly detection for detecting data loss for people is really hard.
So you get a lot of false positives
because people change how they handle some sort of data every day.
So it's hard to just use anomaly detection, and that had failed for the past 10 years.
What has changed recently is LLMs and generative AI applications.
So basically, what is trying to be done right now in the market is to use LLMs to add, basically, human cognition,
the missing piece of DLP, and basically think like a security analyst
for every data exfiltration in the company,
looking at a lot of indicators, for example, the person
doing the action, the type of data that is being sent outside of
the organization, the source of the data,
and then getting to a decision, a verdict:
is this risky for the company or just
a normal business data flow
in the organization.
So is the idea here that you're making use of the LLM to basically sort through your logs
to check through a user's behavior and see how that matches up to a set of potential rules
that you may have set up or red flags that you've set up?
Yeah, so it's a different approach than rules.
So rules, what we have up until now were deterministic.
Rules, policies are basically one or a zero.
But when you add LLMs to it, when you add cognition to it,
it can have more possibility than just right or wrong.
It can look at other contexts, at other criteria and basically get to a decision,
like a DLP security analyst would: looking at the person doing the action,
how long they have been in the company, how they usually act around a certain sort of data,
and then deciding if this is a false positive or not.
So this is what we're adding right now to the table, basically an ad hoc verdict for every data exfiltration attempt in the company.
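To make the contrast concrete, here is a minimal, purely illustrative sketch. All names (ExfilEvent, rule_verdict, contextual_verdict) and thresholds are hypothetical, and the LLM analyst described above is stood in for by a simple weighted score; this is not Orion Security's actual implementation.

```python
# Hypothetical sketch: deterministic rule vs. context-aware verdict.
# The weighted score below is a stand-in for an LLM analyst's judgment.
from dataclasses import dataclass
import re

@dataclass
class ExfilEvent:
    actor_tenure_months: int        # how long the person has been at the company
    data_label: str                 # e.g. "invoice", "credit_card"
    destination: str                # where the data is going
    approved_destinations: tuple    # destinations vetted for this data

def rule_verdict(event: ExfilEvent, payload: str) -> bool:
    """Legacy-style DLP: one deterministic rule, no context.
    Flags anything that merely *looks* like a card number,
    which is exactly what floods analysts with false positives."""
    return bool(re.search(r"\b\d{13,16}\b", payload))

def contextual_verdict(event: ExfilEvent, payload: str) -> bool:
    """Context-aware verdict: weighs who is acting, what the data is,
    and where it is going, instead of a bare one-or-zero on the payload."""
    risk = 0
    if re.search(r"\b\d{13,16}\b", payload):
        risk += 2                               # payload looks sensitive
    if event.destination not in event.approved_destinations:
        risk += 2                               # unvetted recipient
    if event.actor_tenure_months < 3:
        risk += 1                               # new employee, less history
    return risk >= 4                            # flag only when context agrees

# A long-tenured employee sending an invoice to an approved vendor:
event = ExfilEvent(36, "invoice", "vendor.example.com", ("vendor.example.com",))
payload = "Invoice ref 4111111111111111"
# rule_verdict(event, payload)       -> True  (legacy false positive)
# contextual_verdict(event, payload) -> False (normal business flow)
```

In a real system the scoring step would be a prompt to an LLM carrying these same indicators; the point is that the verdict is a function of context, not of the payload pattern alone.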
So then does the LLM, for example, when it makes a decision, it alerts a security professional, a human, as to whether or not maybe this needs a little more attention?
Yeah, definitely.
So we're definitely keeping the human in the loop right now, but instead of needing five DLP analysts today
who go through, and I'm not exaggerating, thousands of alerts every month,
most of them, like 90% of them, false positives,
they would get down to around 5 or 10% false positives,
and they will be added into the loop only when necessary.
So if you need like five people up until now to run your DLP program,
with the new technologies in the market, you can do it with 20% of an FTE time.
And this is a real game changer in this market.
And are these systems growing in intelligence as they operate?
Does the feedback go back into the process itself?
Just like a security analyst, you can teach it.
So basically, when you get the false positives, you can mark a thumbs down
and you can explain exactly why this is a false positive.
For example, this is an approved destination to send our sensitive data to.
Maybe it's a third-party vendor that we're working with,
or this data is not really that sensitive for the company
and the LLM model adds it to its context
for this specific company,
and the next time it happens,
it will use this context to reduce the false positives.
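The feedback loop described here can be sketched in a few lines. This is a hypothetical illustration, not vendor code: the learned "context" is modeled as a plain set of approved destinations, and the VerdictEngine and thumbs_down names are invented for the example.

```python
# Hypothetical sketch of the analyst-feedback loop: a thumbs-down with an
# explanation becomes per-company context that suppresses the same class
# of false positive on the next occurrence. No real LLM is called here.

class VerdictEngine:
    def __init__(self):
        self.approved_destinations: set = set()    # context learned per company

    def verdict(self, data_label: str, destination: str) -> str:
        if destination in self.approved_destinations:
            return "allow"                         # known business flow
        return "flag"                              # escalate to a human

    def thumbs_down(self, destination: str, reason: str) -> None:
        """Analyst marks a flagged event as a false positive, e.g.
        'approved third-party vendor we work with'."""
        self.approved_destinations.add(destination)

engine = VerdictEngine()
first = engine.verdict("board_deck", "vendor.example.com")   # flagged at first
engine.thumbs_down("vendor.example.com", "approved third-party vendor")
second = engine.verdict("board_deck", "vendor.example.com")  # allowed after feedback
```

In the LLM version, the stored reason would be injected into the model's context rather than a lookup set, but the shape of the loop, flag, correct, remember, is the same.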
It really sounds to me like
what you're introducing here
is the ability to have nuance in these decisions.
Exactly. You can think about it like
it's more similar to like EDR for data.
If you remember the old days
of Norton and McAfee,
where you had antiviruses.
So EDR came along and basically
gave a new approach to detecting
malware, more of a detection
and response approach. It's
the same thing we're doing, but up until now
it wasn't really possible in DLP because
human behaviors are much more complicated
than machine behaviors.
But now with LLM, we can actually
build something that looks
like EDR for data.
It's easy to maintain. It's a plug-and-play
solution, and it can protect your data
without needing to hire an army of DLP analysts.
What about for people who are hesitant to feed any of their information into an LLM?
Are there protections for them?
Yeah, definitely.
It doesn't train the model across separate customers,
and you can run your LLM model in your own environment.
So basically, you can keep the content on your side.
Where do you suppose this is headed?
What's the future look like for DLP?
That's a great question and what I've been waiting for.
So basically, what we've talked about up until now is how AI can help DLP and help data security.
But we didn't talk about how AI is a threat in data security
and how it's going to change the landscape around data exfiltration.
There are three main ways AI can be looked upon as a threat in data security.
One is sensitive data exfiltration to AI.
For example, ChatGPT or Claude: an employee can take a board presentation or a financial document
with very sensitive data and just feed it to a third-party chat GPT.
It happens all the time and it's at top risk for a lot of companies today.
The second one is AI agents exfiltrating
data outside of the company. So AI agents will replace a lot of the human work. For example,
an email assistant AI agent that will write emails for you and will share data for you.
It can exfiltrate a lot of sensitive data maliciously or just by mistake. And the third one is
AI making corporate data very accessible. If you're familiar with companies like Glean, data
sharing and data searching with AI make data very accessible, sometimes data that the company doesn't
want this person to access.
So, for example, you can just type down the salary of the CEO,
and if by mistake, you have access to it, you can see it right away.
So data security is about to have a lot of new problems in the upcoming year.
And AI can be looked upon as a threat, but also as a huge enabler for creating 100x
better solutions with one-tenth of the operational costs associated with the traditional ones.
So AI giveth and AI taketh away, right?
Exactly.
100%.
All right.
Well, I think I have everything I need for our story here.
Is there anything I missed?
Anything I haven't asked you that you think is important to share?
I think that data security is going through a generational shift.
And everybody can feel that right now, that we can build much better solutions at much
lower cost for companies.
And we're going to see a lot of new threats with AI.
And like our mission, or my mission even personally,
is to make sure that people can access these benefits as fast as possible.
And I think that we're going to have a very interesting few years in front of us
when it comes to data security and data protection.
That's Nitay Milner, CEO of Orion Security.
And finally, Riot Games has discovered that some recent motherboards were quietly letting cheaters slip past the velvet rope.
A flaw in BIOS firmware from vendors including ASRock, ASUS, Gigabyte, and MSI
meant certain DMA-based cheats could operate invisibly,
bypassing protections meant to keep games fair.
Riot says the issue undermined IOMMU defenses that looked awake
but were not fully on the job, like a nightclub bouncer dozing off mid-shift.
The fix is less glamorous than a ban wave, but more effective.
Motherboard makers have released BIOS updates,
and Riot's Vanguard anti-cheat may now insist players install them before launching Valorant.
Riot called it a necessary escalation in the hardware cheat arms race,
one that shuts down a whole category of previously untouchable tricks
and makes cheating a lot more expensive.
And that's The CyberWire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Be sure to check out this weekend's Research Saturday and my conversation with Darren Meyer,
security research advocate at Checkmarx.
The research we're discussing is titled Bypassing AI Agent Defenses with Lies in the Loop.
That's Research Saturday. Check it out.
We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to Cyberwire at N2K.com.
N2K's senior producer is Alice Carruth. Our Cyberwire producer is Liz Stokes. We're mixed by Trey Hester with original music by Elliot Peltzman.
Our executive producer is Jennifer Eiben.
Peter Kilpe is our publisher, and I'm Dave Bittner.
Thanks for listening.
We'll see you back here next week.
