CyberWire Daily - CISA shrinks while threats grow.
Episode Date: April 11, 2025

CISA braces for widespread staffing cuts. Russian hackers target a Western military mission in Ukraine. China acknowledges Volt Typhoon. The U.S. signs on to global spyware restrictions. A lab supporting Planned Parenthood confirms a data breach. Threat actors steal metadata from unsecured Amazon EC2 instances. A critical WordPress plugin vulnerability is under active exploitation. A new analysis details a critical unauthenticated remote code execution flaw affecting Ivanti products. Joining us today is Johannes Ullrich, Dean of Research at SANS Technology Institute, with his take on "Vibe Security." Does AI understand, and does that ultimately matter?

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Joining us today is Johannes Ullrich, Dean of Research at SANS Technology Institute, discussing "Vibe Security," similar to "Vibe Coding," where security teams overly rely on AI to do their job.

Selected Reading
Trump administration planning major workforce cuts at CISA (The Record)
Cybersecurity industry falls silent as Trump turns ire on SentinelOne (Reuters)
Russian hackers attack Western military mission using malicious drive (Bleeping Computer)
China Admitted to US That It Conducted Volt Typhoon Attacks: Report (SecurityWeek)
US to sign Pall Mall pact aimed at countering spyware abuses (The Record)
US lab testing provider exposed health data of 1.6 million people (Bleeping Computer)
Amazon EC2 instance metadata targeted in SSRF attacks (SC Media)
Vulnerability in OttoKit WordPress Plugin Exploited in the Wild (SecurityWeek)
Ivanti 0-day RCE Vulnerability Exploitation Details Disclosed (Cyber Security News)
Experts Debate: Do AI Chatbots Truly Understand? (IEEE Spectrum)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network powered by N2K.
Cyber threats are evolving every second and staying ahead is more than just a challenge,
it's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted
by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping
unauthorized applications, securing sensitive data, and ensuring your organization runs
smoothly and securely.
Visit ThreatLocker.com today to see how a default deny approach can keep your company
safe and compliant.
CISA braces for widespread staffing cuts. Russian hackers target a Western military mission in Ukraine.
China acknowledges Volt Typhoon. The US signs on to global spyware restrictions.
A lab supporting Planned Parenthood confirms a data breach.
Threat actors steal metadata from unsecured Amazon EC2 instances.
A critical WordPress plugin vulnerability is under active exploitation.
A new analysis details a critical unauthenticated remote code execution flaw affecting Ivanti
products.
Joining us today is Johannes Ullrich, Dean of Research at the SANS Technology Institute
with his take on Vibe Security.
And does AI really understand?
And does that ultimately matter? It's Friday, April 11th, 2025.
I'm Dave Bittner and this is your CyberWire Intel Briefing.
Thanks for joining us here today.
Happy Friday.
It is great to have you with us.
The Trump administration is preparing to cut about 1300 positions at the Cybersecurity and Infrastructure Security Agency,
slashing roughly half its full-time staff and 40% of its contractors.
These planned cuts follow White House frustration over CISA's perceived role in moderating conservative content. Major reductions are expected at the National Risk
Management Center and the Stakeholder Engagement Division. CISA's threat hunting team will
also be downsized. Some responsibilities may shift to the Cybersecurity Division.
Officials say the exact scope and timeline remain undecided and could change. Meanwhile, the administration is pushing early retirements and buyouts, offering up to $25,000.
Political appointments for regional directors are also under consideration.
CISA Director nominee Sean Plankey's confirmation is being blocked by Senator Ron Wyden over
transparency issues.
The cybersecurity industry has largely stayed silent after President Trump revoked security
clearances for SentinelOne staff, Reuters reports.
The move appears tied to the company hiring Chris Krebs, the former CISA chief fired by Trump
in 2020 for rejecting election fraud claims.
Despite Krebs' respect in cyber circles,
most major cybersecurity firms declined to comment, fearing retaliation. Only the Cyber
Threat Alliance spoke out, criticizing the action as political targeting. SentinelOne
said it expects no major impact, though its stock dropped 7% following the news.
Russian state-backed hacking group Gamaredon, also known as Shuckworm, has been targeting
a Western military mission in Ukraine using removable drives to deploy attacks.
Between February and March of this year, they used an upgraded version of their GammaSteel
malware to steal sensitive data.
The group likely gained access via malicious shortcut files on external drives.
Recent tactics show a shift to PowerShell-based tools, increased obfuscation, and use of legitimate
services for stealth.
Once infected, the malware collects screenshots, system info, and documents,
storing payloads in the Windows registry and using PowerShell or Curl over Tor for exfiltration.
It also spreads to other drives and establishes persistence via registry keys.
Symantec notes Gamaredon's tactics are evolving,
making the group a growing threat despite its relatively unsophisticated
methods.
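For defenders who want something concrete to hunt with, here is a minimal, hypothetical sketch, not Symantec's tooling, of a sweep over the standard Windows Run keys that registry-based persistence like this abuses. The key paths are real Windows locations; the keyword list is purely an illustrative assumption to tune for your own environment.

```python
import winreg  # Windows-only standard library module

# Common autorun locations abused for registry-based persistence.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

# Illustrative markers only -- adjust to your environment.
SUSPICIOUS = ("powershell", "-enc", "-encodedcommand", "hidden", "mshta")

def flag_suspicious_run_entries():
    findings = []
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key missing or not readable
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            if any(marker in str(value).lower() for marker in SUSPICIOUS):
                findings.append((path, name, value))
            index += 1
    return findings

if __name__ == "__main__":
    for path, name, value in flag_suspicious_run_entries():
        print(f"[!] {path}\\{name}: {value}")
```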
In a secret December 2024 meeting in Geneva, Chinese officials indirectly admitted to cyber-attacks
on U.S. infrastructure tied to the Volt Typhoon campaign, according to the Wall Street Journal.
The U.S. delegation, part of the outgoing Biden administration, interpreted the admission as a warning over American support for Taiwan.
Volt Typhoon, attributed to China in 2023, targeted critical U.S. sectors using zero-day exploits
and stayed undetected in parts of the electric grid for 300 days.
The attacks spanned communications, energy, transportation, and more, raising concerns
about espionage and potential disruption.
The meeting also touched on the Salt Typhoon campaign, which compromised telecom data from
senior officials.
While the U.S. views Volt Typhoon as a serious provocation, Salt Typhoon is seen as typical cyber espionage.
Both nations continue to escalate mutual cyber attack accusations.
The US will join an international agreement under the Pall Mall Process, an initiative
launched in February by the United Kingdom and France to address the misuse of commercial
spyware.
This follows a voluntary code of practice signed by 21 countries aiming to regulate
commercial cyber intrusion capabilities and curb abuses targeting civil society.
Sparked by scandals in Poland, Mexico, and Greece, the agreement seeks to separate responsible
vendors from
those linked to human rights violations. Human rights advocates praised the move as a bipartisan
step toward responsible spyware governance. Laboratory Services Cooperative, LSC,
a non-profit supporting reproductive health labs, confirmed a data breach affecting 1.6 million people.
Hackers accessed its network in October 2024,
stealing sensitive data, including personal IDs,
medical records, and insurance details.
Most affected individuals had lab work done
through select Planned Parenthood centers.
LSC is offering credit and identity protection services and says
no stolen data has appeared on the dark web so far. An investigation is ongoing with federal
law enforcement and cybersecurity experts involved.
In March, a threat actor used server-side request forgery attacks to steal metadata from unsecured Amazon EC2 instances, according to F5 Labs.
The attacker targeted EC2-hosted websites that left instance metadata exposed, potentially
leaking sensitive IAM credentials. The campaign ran from March 13 through the 25th and involved
tens of thousands of GET requests from IPs tied to French firm
FBW Networks SAS.
F5 advises migrating from IMDSv1 to IMDSv2 or blocking requests to the metadata IP to
mitigate future risks.
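To see why that advice helps, here is a minimal sketch, an illustration rather than F5's analysis, of the difference between the two metadata versions, using Python's requests library against the standard metadata endpoint. A URL-only SSRF can reproduce the bare IMDSv1 GET, but not the PUT-plus-custom-header token exchange that IMDSv2 requires.

```python
import requests

IMDS = "http://169.254.169.254"  # link-local instance metadata service address

# IMDSv1: a single, header-less GET is enough -- exactly the kind of request a
# URL-only SSRF can be tricked into making on the attacker's behalf.
creds_v1 = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/", timeout=2
)

# IMDSv2: a session token must first be fetched with a PUT and a custom header,
# then presented on every metadata GET. A URL-only SSRF can do neither, which is
# why enforcing IMDSv2 blunts this style of credential theft.
token = requests.put(
    f"{IMDS}/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text
creds_v2 = requests.get(
    f"{IMDS}/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
)
```

On the instance side, administrators can also enforce this by requiring tokens, for example with the AWS CLI's modify-instance-metadata-options command and --http-tokens required, so plain IMDSv1 calls are refused outright.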
A critical vulnerability in the OttoKit WordPress plugin is being actively exploited, according
to security firm Defiant.
The plugin, with over 100,000 installs, allows attackers to bypass authentication and create
admin accounts on unconfigured sites by exploiting a missing value check in API key validation.
This gives full site control,
including uploading malicious files or injecting spam.
While only unconfigured installations are at risk,
users are urged to update to the latest version
to patch the flaw.
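As a purely hypothetical illustration of the bug class being described, and not OttoKit's actual code, an authentication routine that compares a client-supplied key against a stored key without first rejecting empty values will happily "match" on a site where no key was ever configured:

```python
import hmac

# Hypothetical illustration of the bug class -- NOT OttoKit's actual code.
# On a site that never finished setup the stored key is empty, so comparing it
# to an attacker-supplied empty string "succeeds" and authentication is bypassed.
def authenticate_vulnerable(stored_key: str, supplied_key: str) -> bool:
    return stored_key == supplied_key          # "" == "" -> True on unconfigured sites

def authenticate_fixed(stored_key: str, supplied_key: str) -> bool:
    if not stored_key or not supplied_key:     # reject missing/unconfigured keys outright
        return False
    # Constant-time comparison once both values are known to be present.
    return hmac.compare_digest(stored_key.encode(), supplied_key.encode())

assert authenticate_vulnerable("", "") is True   # the bypass
assert authenticate_fixed("", "") is False       # the fix
```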
A newly published analysis
details a critical unauthenticated remote code execution flaw
affecting Ivanti products, including Connect Secure, Policy Secure, Pulse Connect Secure, and ZTA
gateways.
Exploited by a suspected China-linked actor, the bug stems from a stack-based buffer overflow
in the web server binary via the X-Forwarded-For header.
Exploitation is complex due to payload restrictions.
Only digits and periods are allowed, forcing attackers to use heap spray and ROP techniques
to gain code execution.
The attack bypasses ASLR through brute force.
Ivanti patched Connect Secure in February, with other product updates due in April.
Pulse Connect Secure is no longer supported.
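For teams that cannot patch immediately, one generic stopgap, sketched below as an assumption rather than Ivanti's guidance, is to have a fronting proxy or WAF drop X-Forwarded-For values that are not a short, comma-separated list of genuine IP addresses, since the payloads described here are long digit-and-period strings no legitimate client would send.

```python
import ipaddress

MAX_HEADER_LEN = 256   # assumed bound; generous for a handful of proxy hops
MAX_HOPS = 8           # assumed bound on the number of forwarding entries

def is_sane_x_forwarded_for(value: str) -> bool:
    """Accept only a short, comma-separated list of valid IP addresses."""
    if len(value) > MAX_HEADER_LEN:
        return False
    hops = [hop.strip() for hop in value.split(",")]
    if not hops or len(hops) > MAX_HOPS:
        return False
    for hop in hops:
        try:
            ipaddress.ip_address(hop)   # raises ValueError for non-addresses
        except ValueError:
            return False
    return True

# A normal forwarding chain passes; a long digit-and-period blob does not.
assert is_sane_x_forwarded_for("203.0.113.7, 198.51.100.2")
assert not is_sane_x_forwarded_for("1.1.1." * 300)
```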
Given the public proof of concept and active exploitation, urgent patching or mitigation
is critical.
Coming up after the break, my conversation with Johannes Ullrich, Dean of Research at
the SANS Technology Institute, with his take on Vibe Security.
And does AI understand?
And does that ultimately even matter?
Stay with us.
Bad actors don't break in, they log in. Attackers use stolen credentials in nearly nine out
of ten data breaches, and once inside, they're after one thing: your data.
Varonis's AI-powered data security platform
secures your data at scale
across IaaS, SaaS, and hybrid cloud environments.
Join thousands of organizations who trust Varonis
to keep their data safe.
Get a free data risk assessment at varonis.com.
And I'm pleased to be joined once again by Johannes Ullrich.
He is the Dean of Research at the SANS Technology Institute and also the host of the SANS ISC Stormcast podcast.
Johannes, welcome back.
Yeah, thanks for having me back.
So I want to talk about vibes today, Johannes.
I want to talk about vibes.
It seems like the word vibe has found its way into InfoSec.
People are vibe coding.
Yeah, vibe coding is a big thing.
Why learn how to code? With AI, we just describe the problem and AI will magically solve it
for us with some really interesting code
that we don't really need to understand.
We just use it and I guess then complain, cry later.
You sound like you might have a particular opinion
about this approach, Johannes,
that perhaps Vibe coding isn't the best path.
Yeah, and the same methodology, of course,
enters all kinds of realms, including security.
With coding, if you are still on X,
there's some great little memes that
went around there with developers applying that methodology. I think most of them are
made up, but funny enough, it doesn't really matter that they're made up.
Where vibe security comes in, and I thought a little bit about this, you know, working for the SANS college, is,
you know, colleges have this problem with vibe paper writing, you know,
where people write papers using AI, and then faculty, not our faculty, but other faculty, may use
AI to actually grade the paper.
So you have like, and the similar thing happens
with security where you have developers use AI
to create code and then you have security teams using AI
to check that code for security flaws.
And of course, now you're basically losing any kind
of diversity in your methodologies here.
You're hitting a point where things just become too
complex to actually double check what the AI is doing.
That's the really important part here.
If you don't know how to code, if you don't know what proper security looks like, how
do you know if that firewall rule set that AI came up with, these 200 lines of
firewall rules or whatever, is actually correct?
Well, let's say you are the person who's supervising a team
of coders and they're saying to you, hey, we can have some
real efficiency upgrades here by partnering with these AI
systems. How do you manage it to make sure that it stays within the realm of checkability?
Yeah, and I think you mentioned an important part of partnering.
It's a partnership.
It's not where I just hand it over.
And I think it's very similar to outsourcing code generation and has very similar problems.
A lot of companies outsource coding and outsource security.
That's a valid thing to do, but the problem that usually comes back to bite you is,
did you write the specifications completely?
If you can't really write specifications that allow a human developer to create a system for you,
how is AI supposed to have a chance at it?
Back to the firewall rules, how is AI going to write correct firewall rules if it doesn't
know what your network looks like?
So you still have to do the inventory, which of course a lot of people are having issues
with.
And if that's not right, it's sort of the good old garbage in, garbage out.
You won't get any good results if you don't tell it really what these results are supposed
to accomplish.
You know, I was chatting with someone earlier today about interacting with AI, and one of
the things that she warned against was boxing AI into a corner, you know, because
AI tends to want to please you.
So if you tell it, you must give me these results.
It will.
Even if that involves lying to you or, you know, making things up.
It strikes me that with coding where things are black and white, you know, we're talking, they work or they don't,
that you have to be careful about not backing your AI into a corner to the point where you don't understand what it's making for you.
Yeah, and back to the specifications, I think that hits us with security all the time.
We have software that actually works just fine,
until you start to try to bypass authentication, and then it will still work fine.
It will do everything you told it to do.
It will just not verify who you are.
And if you never told it how to do it, well, it won't do it.
So that's where it comes from.
It's an old computer science problem
where developers are usually graded
on passing functionality tests, not security tests.
And if you give the same reward system to AI,
it'll basically end up with the same bad code.
So are we talking about having some audit methodologies
in place here?
I mean, how do you get the best of both worlds?
I think you get the best of both worlds
by having developers like a partner with the AI,
where the developer is still ultimately in charge
and reviewing the results and
able to understand what the AI produced, whether that's code, whether that's a CloudFormation
configuration, whether it's firewall rules, or whatever. At our
college we actually implemented a policy around this, I think already, like, two years ago or last year, I forgot, time flies.
But they said, hey, you're free to use AI,
but you're responsible for the result.
It's not like you can say, hey, AI ate my homework.
So there still has to be a developer,
someone who's actually double checking the work
and also double checking
the specifications and prompts that are being used.
And I think in order to write good prompts, just as in order to write good specifications
for a human developer, you need to understand the system.
Yeah.
It seems to me reasonable as well that whoever your developer is, even if they're partnering
with the AI to do their work,
they're responsible for being able to walk you through what the code does and actually
not hand waving away that, oh, here's where a miracle occurs.
Correct. So they have to understand what's happening there. And I think one of the most dangerous
things about all of this, which I don't really see people talk about too much, is AI is really
good. And that's very dangerous, because if you have like that partner that's usually
right and you wasted a lot of time in the past trying to prove it wrong, then you start
hand waving and the results are usually really close to the right result if they're wrong.
I'm not sure if you've tried to have AI summarize security news.
And I tried, and it's actually pretty good at that.
You give it an article and tell it, hey, you know, what's the takeaway from this, or what's the action?
What I found is, it gives you the right result,
but it may not give you the result that you're really looking for.
And the real difference here is, like, you have an article about some problem with no authentication,
and it'll tell you, hey, you know, use strong passwords.
Well, but this was an authentication bypass where passwords weren't at all involved.
So the results sounded reasonable,
and it's sort of one of those results
that maybe someone who has just taken
their first security class would have given you,
reading that article.
But they didn't sort of read deeper,
they didn't sort of understand the context
of that particular vulnerability.
No, it's a great insight.
And I've often said that, to me,
a useful way to look at AI for that sort of thing
is that it's a tireless intern, right?
Like, it will go off and do all the work that you ask it to do,
but at the same time, you would not bet the company on an intern. You just wouldn't do it.
Oh, yeah, you blame interns for any vulnerabilities there. So now you just blame the AI.
That analogy, yeah, actually I think it's really good there. All right, Johannes Ullrich, thanks so much for
joining us.
Yeah, thank you.
Do you know the status of your compliance controls right now?
Like right now.
We know that real-time visibility is critical for security, but when it comes to our GRC
programs, we rely on point in time checks. Look at this, more than 8,000 companies like Atlassian and Quora have continuous visibility
into their controls with Vanta.
Here's the gist, Vanta brings automation to evidence collection across 30 frameworks
like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI. Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
And finally, large language models are acing benchmarks faster than researchers can invent
them, but does that mean they understand?
To tackle this big question, IEEE Spectrum and the Computer History Museum hosted a lively
March 25th debate.
On the no side was Emily Bender, a vocal LLM critic and co-author of Stochastic Parrots.
On the yes side stood Sébastien Bubeck of OpenAI, co-author of Sparks of AGI.
The fiery but respectful showdown explored whether these AI systems truly comprehend
or just cleverly imitate.
The debate kicks off with Emily Bender on Team Nope and Sébastien Bubeck from Team Kinda,
yeah, diving into linguistics, AI benchmarks, and whether machines can grasp meaning like
we do.
Bender leans hard into the they're just parrots metaphor, warning us about the illusions of
understanding and the dangers of relying on LLMs in health care, law, and more. Meanwhile, Bubeck cheerfully reminds us these models
are pulling off math feats that make your high school teacher weep, and whether
or not they understand, they're undeniably useful. The debate was spirited,
nuanced, and philosophical. Oxford Union meets Silicon Valley. One of
the takeaways is understanding might be overrated if your chatbot can still beat you at logic
puzzles or build you an app overnight. These are still early days and time will tell how
much we come to trust and rely on these technologies in our day-to-day lives. For now, I think it's fair to say that love them or hate them,
they are here to stay.
And that's the CyberWire. For links to all of today's stories, check out our daily briefing at the cyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly
changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes
or send an email to cyberwire at n2k.com.
N2K's senior producer is Alice Carruth. Our Cyberwire producer is Liz Stokes. We're
mixed by Trey Hester with original music and sound design by Elliot Peltzman. Our executive
producer is Jennifer Eiben. Peter Kilpe is our publisher, and I'm Dave
Bittner. Thanks for listening, we'll see you back here next week.
Looking for a career where innovation meets impact?
Vanguard's technology team is shaping the future of financial services
by solving complex challenges with cutting-edge solutions.
Whether you're passionate about AI, cybersecurity, or cloud computing,
Vanguard offers a dynamic and collaborative environment
where your ideas drive change.
With career growth opportunities and a focus on work-life balance, you'll have the flexibility
to thrive both professionally and personally.
Explore open cybersecurity and technology roles today at Vanguardjobs.com.