CyberWire Daily - Sleeper malware denied at Sellafield nuclear site.
Episode Date: December 5, 2023

The UK Government's denial of a cyber incident at Sellafield. There's been a surge in Iranian cyberattacks on US infrastructure. Misuse of Apple's Lockdown Mode, the mysterious AeroBlade's activities in aerospace, and a clever "Disney+" scam. Plus the latest application security trends, a new cybersecurity futures study, and insights on resilience from the UK's Deputy PM. In our Industry Voices segment, we welcome Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis, explaining the intersection of AI, cloud, and insider threats.

CyberWire Guest
On today's Industry Voices segment, we welcome Matt Radolec. Matt is Vice President of Incident Response and Cloud Operations at Varonis. He talks about the intersection of AI, cloud, and insider threats.

For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/230

Selected Reading
Sellafield nuclear site hacked by groups linked to Russia and China (The Guardian)
Response to a news report on cyber security at Sellafield (GOV.UK)
Guardian news article (Office for Nuclear Regulation)
Ministers pressed by Labour over cyber-attack at Sellafield by foreign groups (The Guardian)
US warns Iranian terrorist crew broke into 'multiple' US water facilities (The Register)
Florida water agency latest to confirm cyber incident as feds warn of nation-state attacks (The Record)
AeroBlade on the Hunt Targeting the U.S. Aerospace Industry (BlackBerry)
Fake Lockdown Mode: A post-exploitation tampering technique (Jamf)
Disney+ Impersonated in Elaborate Multi-Stage Email Attack with Personalized Attachments (Abnormal Security)
Building Security in Maturity Model (BSIMM) report (Synopsys)
Deputy Prime Minister annual Resilience Statement (GOV.UK)

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
The UK government's denial of a cyber incident at Sellafield.
There's been a surge in Iranian cyber attacks on US infrastructure.
Misuse of Apple's lockdown mode.
The mysterious AeroBlade's activities in aerospace.
A clever Disney Plus scam.
Plus the latest application security trends.
In our Industry Voices segment, we welcome Matt Radolec,
Vice President of Incident Response and Cloud
Operations at Varonis, explaining the intersection of AI, cloud, and insider threats, and insights
on resilience from the UK's Deputy PM.
It's December 5th, 2023. I'm Dave Bittner, and this is your CyberWire Intel Briefing.
The Guardian reported a cyber attack yesterday on the British nuclear facility at Sellafield,
allegedly perpetrated by foreign threat actors linked to China and Russia.
The attack reportedly involved sleeper malware, potentially dating back to 2015,
and was disclosed in a report by the Office for Nuclear Regulation, the ONR,
which noted security shortfalls at the facility, primarily engaged in nuclear waste storage and processing.
Sellafield Limited, the facility's operator, and HM Government have strongly denied these claims.
Sellafield stated that there is no record or evidence of such an attack
and has challenged the Guardian to provide evidence for their allegations.
The ONR supported this denial, confirming the absence of evidence for the reported hack.
Nonetheless, the ONR did acknowledge ongoing security investigations at Sellafield and noted that the facility is not meeting certain required cybersecurity standards,
resulting in increased regulatory attention. In response to these reports, the Labour opposition
has sought clarification from the government's ministers regarding the Guardian's claims.
This development has sparked concerns and prompted political inquiry into the matter,
highlighting the critical nature of cybersecurity in sensitive national infrastructure.
Since the CyberAv3ngers,
linked to Iran's Islamic Revolutionary Guard Corps,
claimed attacks on a water utility and a brewery in western Pennsylvania,
citing their use of Israeli-made Unitronics PLCs,
three other Iranian-affiliated groups have followed
suit. Haghjoyan, Cyber Toufan Group, and YareGomnam Team have also claimed similar attacks against
users of Israeli equipment, as reported by the Register. In a separate incident, the record notes
that Florida's St. Johns River Water Management District experienced an unspecified cyberattack,
potentially ransomware, by an unknown or undisclosed threat actor.
The district has managed to implement successful containment measures after detecting suspicious activity in its IT environment.
BlackBerry researchers have discovered a new threat actor named AeroBlade
targeting the U.S. aerospace sector through a spearphishing campaign.
AeroBlade, which emerged late last year, focuses on commercial and competitive cyber espionage.
The group's activities extend beyond mere information collection.
BlackBerry's report suggests that AeroBlade's primary objective might be to assess
the internal resources and vulnerabilities of its targets, potentially setting the stage for
future ransom demands. This indicates a strategic approach to cyber espionage,
where initial data gathering could lead to more aggressive financial extortion tactics.
Jamf has identified a post-exploitation technique where attackers can deceive users by making an already compromised iOS device
appear to be in lockdown mode, creating a false sense of security.
The researchers emphasize that while lockdown mode reduces the attack surface on iOS devices,
it does not function as antivirus
software. It cannot detect existing infections nor prevent malware from operating on a compromised
device. So its effectiveness is limited to preventing attacks before they occur by reducing
potential entry points for attackers. This research highlights the importance of understanding lockdown mode's
capabilities and limitations, underlining that it cannot mitigate threats on already compromised
devices. Abnormal Security has reported a phishing campaign using a Disney Plus theme.
The campaign sends emails with PDFs resembling invoices using the recipient's real name and falsely claiming
they will be charged $49 for the next month's subscription, significantly higher than the actual
cost. The PDFs include a phone number for canceling the subscription. Upon calling, victims may face
two risks. They could be asked for sensitive information like banking details or login
credentials,
which attackers can use for fraudulent transactions or account compromises.
Or they might be instructed to download software purportedly to stop the charge,
but the software actually infects their computer with malware.
This campaign highlights the need for vigilance against phishing attempts
that use familiar brands and seemingly legitimate documentation to exploit users.
Synopsys, in its latest Building Security In Maturity Model report, highlights a significant
trend in software security, the increased focus on automation within the software development
lifecycle.
Modern toolchains are enabling organizations to integrate security
testing and touchpoints throughout the SDLC, not just shifting left to the initial stages,
but rather adopting a shift-everywhere approach. This trend is characterized by the automation of
security tasks, making them more accessible and efficient. For instance, security testing in the QA stage
can now be automated, similar to static application security testing scans conducted earlier in the
development process. This allows for scripted actions in response to the outcomes of automated
security tests, enhancing the efficiency and effectiveness of security measures.
Furthermore, firms are increasingly utilizing automation
to gather and leverage intelligence from sensors across the SDLC.
This proactive approach helps in preventing vulnerabilities
before they pose significant challenges to developers,
thereby strengthening software security.
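As a rough, hypothetical illustration of what a scripted response to an automated security test can look like, here's a minimal Python sketch of a QA-stage security gate. The scanner command ("sast-scan") and its JSON report format are placeholder assumptions, not any particular vendor's tool or anything taken from the BSIMM report itself.

import json
import subprocess
import sys

def run_scan(target_dir: str) -> dict:
    """Invoke a hypothetical SAST scanner CLI and parse its JSON report."""
    result = subprocess.run(
        ["sast-scan", "--format", "json", target_dir],  # placeholder tool name
        capture_output=True,
        text=True,
        check=False,
    )
    return json.loads(result.stdout or "{}")

def main() -> None:
    report = run_scan("./src")
    findings = report.get("findings", [])
    high = [f for f in findings if f.get("severity") == "high"]
    # The "scripted action" in response to the automated test outcome:
    # fail the QA gate (a real pipeline might also open tickets or notify owners).
    if high:
        print(f"{len(high)} high-severity findings; failing the QA security gate.")
        sys.exit(1)
    print(f"{len(findings)} findings, none high severity; QA security gate passed.")

if __name__ == "__main__":
    main()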
Coming up after the break, in our Industry Voices segment, we welcome Matt Radolec, Vice President of Incident Response and Cloud Operations at Varonis. He's explaining the
intersection of AI, cloud, and insider threats. Stay with us.
Do you know the status of your compliance controls right now? Like, right now? We know that real-time visibility is critical for security, but when it comes to our
GRC programs, we rely on point-in-time checks. But get this, more than 8,000 companies like
Atlassian and Quora have continuous visibility into their controls with Vanta. Here's the gist,
Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and help you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses
is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home,
your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families
24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
In today's sponsored Industry Voices segment,
my conversation with Matt Radolec,
Vice President of Incident Response and Cloud Operations at Varonis.
Our conversation centers on the intersection of AI, cloud, and insider threats.
To start thinking about the insider threat,
you've got different categories, right?
You have on one end of the spectrum,
your malicious insider.
This is your Edward Snowden and the like.
They have a clear motivation for doing whatever things they're going to do.
And they're going to carry them out with high impact.
Then you have your lesser severity insiders,
like your person who's knowingly violating a policy or the person who's not knowingly violating a policy,
that end user that makes a mistake.
And I think when we look at AI,
at least generative AI, it's enabling our
workforce to access and amass information at a higher productivity rate, as in faster and
potentially of greater efficacy than they did before. So it's going to take these three different
scenarios, and you're going to give a tool to people to make them that much worse.
So if we go through those three different insiders again, you've got your malicious insider.
Now, instead of needing, and we'll use the Snowden example, instead of needing administrative privileges and the ability to walk in and out of classified rooms with storage drives,
you can just chat up your friendly AI bot and ask it to search and amass
all this really interesting data for you.
So it makes their job easier
in terms of getting the data
and potentially getting it out.
And then I think where accidents happen,
and I'm not sure, Dave, if you use Microsoft 365,
but I'm sure many of our listeners do,
you know, Teams and the like.
It's built to be so incredibly easy to collaborate, to share. You know, you can't really realize the
value of data unless you can share it with someone. And so Copilot, which is Microsoft's
generative AI, is going to leverage all the data that a person has access to when it returns results. So that means that
you'll be able to query and search a large data set, create a new file, and share that new file
with someone at a speed that I don't think people could keep up with. And so it's going to make all
these little mistakes a lot more apparent. We talk a lot at Varonis about what we call the
blast radius, or kind of how bad is it. It's going to make the average thing worse because people are
going to get access to and be able to generate data off of a larger data set than they realized
they had before. So where's the balance here between the utility of these tools, which I think a lot of people think is legit,
versus limiting what they have access to? For a lot of organizations, they either think that
they've already done that or they don't realize just how much someone has access to. So I think
the reality is that you have to figure that out. You have to determine for your own organization,
do we have an open access issue?
Do we need to try to work on that a little bit before we go to full speed with AI?
Or is our data fairly locked down?
Either way, we'd benefit from trying to figure that out.
And are we in a pretty good place
in order to enable our workforce to use AI?
Because I don't think going against it
or not using it is the answer. It's such an innovation. It's just a huge innovation for
mankind that even me, as a person who sits at the front of organizations that are having a crisis,
right? They're having an incident, they're having a breach, and they need help. I can't look at AI
and say, don't do this
because of the productivity gains and the innovation that it's going to afford us.
So instead, I want to look to organizations, use all my knowledge, skills, and experience,
and encourage them to do this. But to think about that critical question, are we giving people
this ability to access and exfiltrate data at a much higher velocity than we did before?
Is our blast radius too big?
Should we get a handle on that before we go full speed with generative AI on these large data sets?
That's really the point I'd want to challenge organizations on.
Is it fair to say that there's a good bit of crossover between insider threat and shadow IT, where a lot of this
could be a cultural issue where people are just trying to get their work done and they feel as
though there's some friction being put in place by the team who's trying to manage security?
Absolutely. I think that's really well stated. I always think about it in terms of balancing productivity or usability
with security. That is the nature of any security team and the ebb and flow that happens over the
life cycle of a security practitioner's career. To an extent, we got to have security and security
needs to be there, but we can't have things be so secure that they're not usable. And so this tension,
I think it only gets multiplied when we think of AI because an organization that has these policies
in place but never enforced them might not have thought that it was going to be that easy for
employees to violate those policies. Well, now it's definitely gotten easier. Objectively,
it's going to be easier for people that don't have policies to do whatever they want and people that do have policies to likely be able to violate those policies if the organization hasn't mapped their security controls to be just as smart and just as powerful as their AI-enabled workforce now is.
I really like that point you made, which is, does it change the insider threat?
Well, I personally predict it makes those two non-malicious insider threats, it makes them a lot more likely.
Because malicious insider, I mean, it doesn't matter what technology stack you put in front of them, they're going to try to carry out their mission.
They have motivation that surpasses means, right?
They're going to try to find the means to carry out the mission that they have. But these accidental insiders, these people that, you know, don't know that they're creating
a new spreadsheet with lots and lots of personal data in it, don't realize when they go to share it
with their group of co-workers that they also added their personal Gmail account on it, and
don't realize that that personal Gmail account is tied to another app that scrapes the emails and ingests them for searching.
And now there's a copy of that data that exists in a way that isn't as protected as it's supposed
to be.
And I think these are the real challenges that are going to come from this AI-enabled
workforce that we're all at the forefront of, where people are going to be creating
and storing really sensitive content
in places that aren't as protected as they should be to house that data.
And I think we'll see more of these, I call them more routine breaches.
These are the accidental breaches, the mishandling of information.
I predict we'll see more of that, not less of that,
with an AI-enabled workforce.
And so what are your recommendations then? I mean,
how should organizations come at this? Yeah, this concept, and I've said it a couple times,
and I'll probably say it again, Dave, is this concept of a blast radius, right? If you pick up
a person from their computer, from their chair, I always like to say, and you try to figure out
how much do they have access to? How much data? How many systems?
How much of your crown jewels can this person get to from their day one of employment?
What we find is that somewhere between 25% and 50% of that data is too much.
So if you're one of those organizations where, when you go and you do that exercise, you realize that the access is too vast,
you need to go through an exercise that just compounds some of the basics of security.
You need to do some of that least privilege.
You need to use a lot of automation in order to get there.
You need to limit what people can have access to.
And there are a lot of ways that you can do that
to like a high degree of automation
and a high amount of effectiveness
without taking your environment from,
you know, what you think it is today
to like a, you know, government grade or Fort Knox grade security
where everything's locked down
and you need multiple layers of access to get through it.
Sometimes just getting access control right
is a really strong security control.
So just the basics,
like kind of getting rid of data that's open to every employee
or data that's shared with everyone in the company,
kind of limiting those pockets
can have a lot of effect and a lot of success
in trying to protect that data
when you're trying to scale out a program
where people end up ultimately getting access
to more data with something like generative AI.
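As a rough illustration of the blast radius exercise described above, here's a minimal Python sketch built on a toy permissions model. The data structures and names are illustrative assumptions, not Varonis's product or API.

from typing import Dict, Set

def blast_radius(user: str,
                 acls: Dict[str, Set[str]],
                 groups: Dict[str, Set[str]]) -> Set[str]:
    """Resources a user can reach directly, via group membership, or via open access."""
    memberships = {g for g, members in groups.items() if user in members}
    return {
        resource
        for resource, principals in acls.items()
        if user in principals or principals & memberships or "everyone" in principals
    }

# Toy inventory: one restricted file, one file shared with the whole company.
acls = {
    "finance/payroll.xlsx": {"hr-group"},
    "shared/all-hands-notes.docx": {"everyone"},
}
groups = {"hr-group": {"alice"}}

for person in ("alice", "bob"):
    reachable = blast_radius(person, acls, groups)
    open_to_all = {r for r in reachable if "everyone" in acls[r]}
    print(f"{person}: {len(reachable)} resources reachable, {len(open_to_all)} open to everyone")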
Our thanks to Matt Radolec,
Vice President of Incident Response
and Cloud Operations at Varonis,
for joining us.
Cyber threats are evolving every second,
and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control, stopping
unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default-deny approach
can keep your company safe and compliant.
And finally, in the UK's Deputy Prime Minister's annual Resilience Statement to Parliament,
a significant emphasis was placed on the importance of resilience,
especially in the context of cybersecurity.
The statement highlighted the necessity for people to be prepared to revert to
analog technologies in the case of a cyber attack that
disrupts critical infrastructure like the power grid and communication systems. As reported by
the Telegraph, the Deputy Prime Minister advised citizens to consider the essentials stored under
their stairs, suggesting that items such as a battery-operated radio, candles, and a torch, or flashlight, are fundamental.
For the Atlantic's Western cousins, this list might extend to a bug-out bag, canned goods,
bottled water, gold coins, and perhaps a feisty dog for added security. So, it seems in the digital
age, it's still wise to keep one foot in the analog world,
just in case you need to tune in, light up, and bug out.
And that's the Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
You can email us at cyberwire at n2k.com.
We're privileged that N2K and podcasts like The Cyber Wire are part of the daily intelligence routine
of many of the most influential leaders and operators in the public and private sector,
as well as the critical security teams supporting the Fortune 500
and many of the world's preeminent intelligence and law enforcement agencies.
N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people.
We make you smarter about your team while making your team smarter.
Learn more at n2k.com.
This episode was produced by Liz Ervin.
Our mixer is Trey Hester with original music by
Elliot Peltzman. Our executive
producers are Jennifer Eiben and Brandon
Karpf. Our executive editor is
Peter Kilpe, and I'm Dave Bittner.
Thanks for listening. We'll see you back
here tomorrow.
Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role.