CyberWire Daily - Notes from the cyber phases of two hybrid wars. Alerts on Cisco, Atlassian vulnerability exploitation. Updated guidance on security by design.
Episode Date: October 17, 2023
A bogus RedAlert app delivered spyware as well as panic. BloodAlchemy backdoors ASEAN targets. A serious Cisco zero-day is being exploited. Valve implements additional security measures for Steam. A warning on Atlassian vulnerability exploitation. Allies update their security-by-design guide. Ukrainian telecommunications providers hit by cyberattack. Ben Yelin explains attempts to tamp down pornographic deepfakes. Our guest is Ashley Rose from Living Security with a look at measuring human risk. And, as always, criminals see misery as opportunity. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/198
Selected reading.
Malicious “RedAlert - Rocket Alerts” Application Targets Israeli Phone Calls, SMS, and User Information (The Cloudflare Blog)
Disclosing the BLOODALCHEMY backdoor (Elastic Security Labs)
BLOODALCHEMY provides backdoor to ASEAN secrets (Register)
Active exploitation of Cisco IOS XE Software Web Management User Interface vulnerability (Cisco Talos Blog)
Actively exploited Cisco 0-day with maximum 10 severity gives full network control (Ars Technica)
Cisco warns of actively exploited zero-day in IOS XE software (Computing)
Widespread Cisco IOS XE Implants in the Wild (VulnCheck)
Steam enforces SMS verification to curb malware-ridden updates (BleepingComputer)
Threat Actors Exploit Atlassian Confluence CVE-2023-22515 for Initial Access to Networks | CISA (Cybersecurity and Infrastructure Security Agency CISA)
CISA, U.S. and International Partners Announce Updated Secure by Design Principles Joint Guide (Cybersecurity and Infrastructure Security Agency)
CERT-UA Reports: 11 Ukrainian Telecom Providers Hit by Cyberattacks (The Hacker News)
CVE-2023-38831 Exploited by Pro-Russia Hacking Groups in RU-UA Conflict Zone for Credential Harvesting Operations (Cluster25)
Pro-Russian Hackers Exploiting Recent WinRAR Vulnerability in New Campaign (The Hacker News)
Cyberattack targets Belgian public service websites for second time in a week (Brussels Times)
Spam trends of the week: Spammers piggyback on the Israel-Gaza war to plunder donations (Hot for Security)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
A bogus red alert app delivered spyware as well as panic.
Blood alchemy backdoors Southeast Asian targets.
A serious Cisco zero day is being exploited.
Valve implements additional security measures for Steam.
A warning on Atlassian vulnerability exploitation.
Allies update their security by design guide.
Ukrainian telecommunications providers are hit by a cyber attack.
Ben Yelin
explains attempts to tamp down pornographic deepfakes, our guest is Ashley Rose from Living
Security with a look at measuring human risk, and as always, criminals see misery as opportunity.
I'm Dave Bittner with your CyberWire Intel briefing for Tuesday, October 17th, 2023. Cloudflare looked into the compromised Red Alert app
that served false alarms of rocket attacks against Israeli users.
They traced it to a knockoff of the legitimate Red Alert app,
and they found that it had spyware functionality
as well as the obvious panic-inducing
disinformation. Cloudflare wrote, the malicious Red Alert version imitates the legitimate Rocket
Alert application but simultaneously collects sensitive user data. Additional permissions
requested by the malicious app include access to contacts, call logs, SMS, account information, as well as an overview of all installed apps.
The researchers also found that the bogus app was flacked using domain impersonation.
The bogus website, redalerts.me, differed by the single letter S from the legitimate
Red Alert site, redalert.me.
The site directed Apple users to the real Red Alert source,
but Android users were sent to a site that served a malicious version of the app.
Roger Grimes, data-driven defense evangelist at KnowBe4,
urged users of any apps to use only official app stores.
While not perfect, they're far less risky than going off-brand.
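That single-letter gap between redalerts.me and the legitimate redalert.me is the kind of thing a simple lookalike check can surface. Here's a minimal sketch, assuming a short allowlist of trusted domains and an arbitrary similarity threshold; both are illustrative assumptions, not a vetted detection rule.

```python
# Illustrative sketch: flag candidate domains that closely resemble, but do
# not exactly match, a domain you trust (e.g. redalerts.me vs. redalert.me).
# The allowlist and threshold below are assumptions for demonstration only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"redalert.me"}  # known-good domains you actually rely on

def looks_like_impersonation(candidate: str, threshold: float = 0.85) -> bool:
    """Return True if the candidate nearly matches a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, candidate.lower(), trusted).ratio()
        if candidate.lower() != trusted and similarity >= threshold:
            return True
    return False

if __name__ == "__main__":
    for domain in ("redalert.me", "redalerts.me", "example.com"):
        verdict = "suspicious" if looks_like_impersonation(domain) else "ok"
        print(domain, "->", verdict)
```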
Researchers at Elastic Security Labs are tracking a new backdoor they're calling
Blood Alchemy that's being used to conduct cyber espionage against governments and
organizations in the Association of Southeast Asian Nations. Blood Alchemy is part of the REF5961 intrusion set described by Elastic earlier this month.
The researchers believe the activity is state-sponsored and espionage-motivated,
launched by a threat actor aligned with the Chinese government.
The researchers note that Blood Alchemy is backdoor shellcode containing only original code,
no statically linked libraries. The code
appears to be crafted by experienced malware developers. The backdoor contains modular
capabilities based on its configuration. These capabilities include multiple persistence,
C2, and execution mechanisms. While unconfirmed, the presence of so few effective commands
indicates that the malware may be a sub-feature of a larger intrusion set or malware package still in development, or an extremely focused piece of malware for a specific tactical usage.
Cisco has disclosed an actively exploited zero-day vulnerability in the web user interface feature of Cisco IOS XE software when exposed
to the internet or untrusted networks. Cisco states, successful exploitation of this vulnerability
allows an attacker to create an account on the affected device with privilege level 15 access,
effectively granting them full control of the compromised device and allowing possible subsequent unauthorized activity.
Cisco says a threat actor has been exploiting the vulnerability since at least September 18th, with broader activity observed in October.
Cisco says,
We assess that these clusters of activity were likely carried out by the same actor.
Both clusters appeared close together,
with the October activity appearing to build off the September activity.
The first cluster was possibly the actor's initial attempt at testing their code,
while the October activity seems to show the actor expanding their operations to include establishing persistent access via deployment of the implant.
Cisco strongly recommends that organizations that may be affected by this activity
immediately implement the guidance outlined in Cisco's
Product Security Incident Response Team Advisory.
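For defenders who want a first-pass triage of internet-facing IOS XE devices they own, Talos's write-up describes a check against the web UI that reportedly returns a short hexadecimal string when the implant is present. Here is a rough sketch of that check; the endpoint and response pattern should be treated as assumptions to verify against Cisco's advisory, and this is no substitute for Cisco's own guidance.

```python
# Rough triage sketch of the implant check described in public reporting on
# the IOS XE web UI zero-day. Run it only against devices you own. The
# endpoint and the hex-string interpretation are assumptions to confirm
# against Cisco Talos's advisory before relying on the result.
import re
import sys
import requests
import urllib3

urllib3.disable_warnings()  # devices typically present self-signed certificates

def check_for_implant(host: str) -> bool:
    url = f"https://{host}/webui/logoutconfirm.html?logon_hash=1"
    resp = requests.post(url, verify=False, timeout=10)
    # A short hexadecimal string in the response body was reported as an
    # indicator that the implant is installed.
    return bool(re.fullmatch(r"[0-9a-f]{18,40}", resp.text.strip()))

if __name__ == "__main__":
    host = sys.argv[1]  # management address of an IOS XE device you administer
    print("possible implant" if check_for_implant(host) else "no implant indicator")
```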
Valve will require additional security measures for game developers on Steam
in an attempt to prevent compromised developer accounts
from being used to
push malicious updates, Bleeping Computer reports. On October 24th, Valve will begin enforcing SMS-based
security prompts for new updates to games' default release branches. Bleeping Computer notes that the
move follows a spike in the use of compromised Steamworks accounts to distribute malware over the past
few months. Yesterday, CISA, the FBI, and the MS-ISAC issued a joint cybersecurity advisory
on the active exploitation of CVE-2023-22515, a vulnerability in Atlassian Confluence data
center and server, a widely used collaboration platform.
Exploitation enables a malicious actor to create unauthorized Confluence administrator accounts
with the attendant possibility of data exfiltration.
The advisory recommends immediately upgrading to a patched version of the vulnerable product.
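As a minimal triage sketch for that recommendation, the following assumes the fixed releases Atlassian listed at the time (8.3.3, 8.4.3, and 8.5.2; verify against the advisory itself). It reasons only about version numbers and will not detect an instance that has already been compromised.

```python
# Minimal sketch: decide whether a Confluence Data Center/Server version
# string falls in the range reported as affected by CVE-2023-22515.
# The fixed-version list is an assumption to confirm against Atlassian's
# advisory; an already-compromised instance needs more than an upgrade.

FIXED = {(8, 3): (8, 3, 3), (8, 4): (8, 4, 3), (8, 5): (8, 5, 2)}

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    v = parse(version)
    if v < (8, 0, 0):       # advisory scoped the bug to 8.0.0 and later
        return False
    fixed = FIXED.get(v[:2])
    if fixed is None:       # branches without an in-branch fix, or newer releases
        return v < (8, 5, 2)
    return v < fixed

if __name__ == "__main__":
    for ver in ("7.19.8", "8.4.0", "8.5.2"):
        print(ver, "vulnerable" if is_vulnerable(ver) else "patched or out of scope")
```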
The advisory doesn't offer attribution of the
ongoing exploitation, but various security firm researchers credibly point to China's
Ministry of State Security as the probable responsible threat actor. The Five Eyes,
plus Germany and the Netherlands, previously produced the original guide to security by design
titled Shifting the Balance of Cybersecurity Risk,
Principles and Approaches for Secure by Design Software.
They've been joined by their counterparts in the Czech Republic, Israel, Japan,
the Republic of Korea, Norway, the Organization of American States,
and Singapore in updating the guidelines.
CISA described the goal of the updated version made available
yesterday, stating, this guidance is intended to further catalyze progress toward investments and
cultural shifts necessary for measurable improvements in customer safety, expanded
international conversation about key priorities, investments, and decisions, and a future where technology is safe, secure, and resilient by design.
There's some minor skirmishing in cyberspace surrounding Russia's hybrid war against Ukraine.
CERT-UA reported Sunday that 11 telecommunications providers in Ukraine had experienced interference
by an organized group of criminals tracked by the identifier UAC-0165.
The goal of the attacks seems to be disruption as opposed to theft or extortion.
The Hacker News says that a successful breach is followed by attempts to disable network and server equipment, specifically Mikrotik equipment, as well as data storage systems.
Researchers at Cluster25 are tracking attacks by what they characterize as a Russia-nexus nation-state threat actor. The campaign aims at harvesting credentials, and it involves phishing with a baited PDF that carries an exploit for CVE-2023-38831, a vulnerability in WinRAR compression software versions prior to 6.23.
The phishbait is a PDF that purports to share indicators of compromise associated with malware strains that include SmokeLoader, NanoCore RAT, Crimson RAT, and Agent Tesla. Cluster25 offers no more specific attribution than Russia-nexus, but The Hacker News speculates that the activity may be run by the SVR, Russia's Foreign Intelligence Service.
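Public analyses describe the exploit as hinging on an archive that pairs a decoy document with a same-named folder, often with a trailing space, holding the script that actually runs. The sketch below looks for that naming quirk in ZIP archives; it's a heuristic illustration of the pattern, not a detection rule, and the details should be checked against the published analyses.

```python
# Heuristic sketch for the CVE-2023-38831 lure layout: a decoy document plus
# a directory with a matching name (often with a trailing space) containing
# the payload. Only handles ZIP archives and only this one naming quirk;
# treat the pattern details as assumptions drawn from public write-ups.
import sys
import zipfile

def suspicious_names(path: str) -> list:
    with zipfile.ZipFile(path) as archive:
        names = archive.namelist()
    files = {n for n in names if not n.endswith("/")}
    dirs = {n.rstrip("/") for n in names if n.endswith("/")}
    # A directory whose name mirrors a file name (modulo trailing spaces)
    # matches the lure layout described for this vulnerability.
    return sorted(f for f in files
                  if f in dirs or f + " " in dirs or f.rstrip() in dirs)

if __name__ == "__main__":
    hits = suspicious_names(sys.argv[1])
    print("suspicious entries:", hits if hits else "none found")
```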
In what hacktivists have declared to be retaliation for Belgian support of Ukraine, the Brussels Times reports, websites belonging to the Belgian Senate, the Federal Public Service Finance, the Prime Minister's Chancellery, and the Monarchy were affected last Sunday.
Service had returned to normal on all but the Senate's site by early Monday morning.
The hacktivists posted a message to the Senate's site
complaining of Belgium's commitment last week to supply Ukraine with F-16 fighters by 2025.
Finally, returning to the other major ongoing hybrid war, the one between Hamas and Israel,
there's a surge in scams seeking to steal from people moved to donate to humanitarian relief in the Middle Eastern conflict zone.
Financially motivated criminals are using
opportunities for charitable donations as phishbait. Last week, Bitdefender's anti-spam lab saw an increase in these sorts of fraudulent appeals. Some of them are cast as appeals on behalf of humanitarian organizations, with the look, more or less, of a relief agency site; others are cast as personal appeals, with the diction and false intimacy usually associated with people claiming to be the widow of a Nigerian prince.
In any case, be wary and donate only to organizations you know and whose activity you can at least to some extent verify.
A big flashing light of warning, Bitdefender points out, is asking
for money in certain specific forms. Donation requests in crypto, wire transfers, and gift cards
are a big red flag to be avoided at all costs. Resist, too, the temptation to let the scammer
know that you're on to them and what you think of them. That just confirms that your email is being read
and that there's someone with strong feelings behind your keyboard.
They'll be back with more chum and other phishbait.
And of course, do donate safely and securely
where you think your charity is most needed.
Coming up after the break, Ben Yelin explains attempts to tamp down pornographic deepfakes. Our guest is Ashley Rose from Living Security with a look at measuring human risk.
Stay with us. Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this, more than 8,000 companies like Atlassian and Quora have continuous visibility
into their controls with Vanta. Here's the gist. Vanta brings automation to evidence collection
across 30 frameworks, like SOC 2 and ISO 27001. They also centralize key workflows like policies,
access reviews, and reporting, and help you get security questionnaires done five times faster
with AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your company's
defenses is by targeting your executives and their families at home? Black Cloak's award-winning
digital executive protection platform secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact, over one-third of new members discover
they've already been breached. Protect your executives and their families 24-7, 365,
with Black Cloak. Learn more at blackcloak.io.
Ashley Rose is CEO of Living Security, a firm that specializes in the quantification of human risk. I spoke with her on how to measure human risk as a component of overall cyber risk.
Security leaders and organizations are spending over $170 billion on IT security, but we're seeing breaches continuing to rise at an unprecedented rate.
The Verizon DBIR talks about human risk and the percentage of breaches that humans are responsible for; as much as 74% of these breaches are caused by some sort of human behavior or human risk.
Yet we're only spending $2.7 billion on the training or the human problem.
So one of the most important things to note is that cybersecurity is a human problem.
And we've been trying to solve it through basically improper investment in other technology.
And here we still are 10 years later.
You know, I hear folks talk a lot about insider risk. Is there a nuanced difference between that and human risk? So we look at human risk as an expansion on insider risk because I think
oftentimes when we think about insider risk, people align it more closely to, you know, insider threat.
And when you think about insider threat, oftentimes there's the notion that it's,
you know, from a malicious intent perspective. And obviously that's, you know, not an accurate and complete description, but what we really want to understand with human risk and specifically
human risk management is how do we take a more proactive view at the behaviors that
are causing risk to the organization? So it's really this notion of being able to shift left
from a prediction and prevent perspective versus a detection and response, which is where I see
most of the sort of insider threat or insider risk tools, you know, fitting into the security
tech stack today.
Well, can you take us through some of the primary elements here that encompass human
risk?
What are some of the things that you all track?
For living security specifically, when we think about our human risk index or human
risk score, we're actually looking at three different components that make up risk.
So the easiest one, the one that people
are most familiar with, would just be the behaviors themselves. So some examples of behaviors would be
a user observed using a stale browser, repeat phishing offenders, phishing followed by an
incident or malware, password management adoption, MFA adoption, sharing sensitive data against
policy.
So there's a number of different behaviors that could cause risk to the organization.
But when we think about true holistic risk management, the risk to the company exists
beyond just the vulnerability. We also need to think about the threat. When we think about our
risk model, we have to combine those behaviors with also the events that could be causing risk
to that individual or to the organization. So if someone is highly targeted by a lot of malicious
or spam-based email, they're going to be at a higher probability of falling susceptible to
phishing, for instance. And then the third component, so we have our threat, we have our
vulnerabilities or our behaviors, and then we also have to think about the impact.
So who is that person? What is their job title? What is their role? What kind of data or sensitive data do they have access to?
What's the impact if that person is compromised or if there is a breach?
And so when we think about human risk, we're actually looking at all three components and then combining it to create
this sort of view or quantification of human cyber risk for companies.
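As a toy illustration of how those three components might combine, and emphatically not Living Security's actual Human Risk Index, here's a minimal sketch with made-up weights and scales.

```python
# Toy illustration of combining the three components described above --
# observed behaviors (vulnerability), targeting (threat), and blast radius
# (impact) -- into one per-person score. The 0-1 scales and the
# multiplicative form are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PersonRisk:
    behavior: float   # 0-1: e.g. repeat phish clicks, no MFA, risky sharing
    threat: float     # 0-1: how heavily the person is targeted
    impact: float     # 0-1: role, access, sensitivity of data they can reach

    def score(self) -> float:
        """Scale a simple vulnerability x threat x impact product to 0-100."""
        return round(100 * self.behavior * self.threat * self.impact, 1)

if __name__ == "__main__":
    finance_admin = PersonRisk(behavior=0.7, threat=0.8, impact=0.9)
    new_hire = PersonRisk(behavior=0.4, threat=0.2, impact=0.3)
    print("finance admin:", finance_admin.score())  # high on every axis
    print("new hire:", new_hire.score())
```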
To what degree do things like security awareness training come into play here?
Yeah, so security awareness and training is really where we got our start. And most of the
companies that we come into, the way that they are measuring
and monitoring the human side of risk is through traditional security awareness and training
metrics. And so those are things like phishing click rates, phishing report rates, quiz scores
on training. Those are the traditional compliance or training metrics that we see the earliest companies start with. Our goal is
actually to expand beyond just the simulated phishing metrics for companies and to take more
of a holistic view of risk. And so you asked earlier, if I think from a categorical perspective,
what matters? Training and compliance is something that we do track and monitor.
But we're also looking at things like account compromise. We're looking at data loss. We're looking at malware. We're looking at phishing and email. And so you can think about human risk management as an expansion of the security awareness and training program, where those metrics are important, but they're only one piece of the overall pie. You know, I think cybersecurity
is so focused on a lot of the technical aspects here. I'm wondering, do you find that there are
areas that people mistakenly assign a technical side when it really is a human element?
Absolutely. And, you know, I think oftentimes, you know, as we've seen even most recently, you know, with some of the social engineering attacks and the smishing and vishing attacks that are hitting the hospitality industry, the human is traditionally like the first point in, right?
The first part of the attack.
And then there's this kind of kill chain, this, you know, the technical controls then start failing right beyond the human component.
And so then I think what we're seeing is CISOs becoming disillusioned by the opportunity
of being able to affect that initial point of entry because security awareness and training
and phishing for so long have failed to mitigate that and change behavior.
And so there is an overemphasis on what happens next.
And I think the fact of the matter is,
and as we've seen over the last 10 to 15 years,
no matter how many controls that we have in place,
what different types of technologies we have in place,
the human is still a major vulnerability
and point of attack.
And we need to be effectively addressing it
and thinking about a different way to do that. That's Ashley Rose from Living Security.
And joining me once again is Ben Yelin.
He's from the University of Maryland Center for Health and Homeland Security and also my co-host on the Caveat podcast.
Hello, Ben.
Good to be with you, Dave.
So, interesting article from Wired.
This is written by Matt Burgess.
And the title is Deepfake Porn is Out of Control.
And it really highlights some of the issues that folks are facing here.
It's certainly a policy issue.
We've talked about deepfakes over on Caveat quite a bit.
And as the tools become more readily available,
this trend of people using deepfake technology to generate pornography,
and this article specifically is talking about
non-consensual imagery and videos and things like that that are vastly disproportionately used in abuse and harassment of women, and the issues there. Before we dig into some
of the details here,
is that a decent description of what they're talking about here, Ben?
Yeah, I mean, I think there have been a couple of factors.
One is the improvement in AI technology makes it easier not only to create deepfakes,
but to make them more realistic.
And then there's the proliferation of websites either exclusively devoted to deepfakes
or partially devoted to deepfakes, to the extent that you can use search engines,
Microsoft or Google,
to find specific websites dedicated to hosting these images.
So there's some responsibility, obviously,
for the website makers themselves,
but also potentially for these search engines,
which are directing people to these websites.
I think deepfakes have
been a problem for about a half decade, but the problem is growing exponentially because of these
factors. And so what are some of the potential policy solutions to something like this?
So it is really hard to target policies against deepfakes. I know here in the state of Maryland,
we've had long conversations about how just on a
practical level, we can start to regulate it. What California has done is to provide a cause of
action in limited circumstances for people who feel that they've been the victim of deepfake
porn videos. That is a solution that's going to be opposed by the industry. They don't want to be
held liable, especially some of these search engines.
It's certainly hard at the federal level when trying to hold the search engines accountable.
They are protected by Section 230 of the Communications Decency Act.
Public pressure on the search engines is certainly something that's achievable.
I think both Google and Microsoft expressed in this article that it is not their intention to facilitate the distribution of deepfakes. For Microsoft, they said that
this violates their policies on what can be displayed in a search engine query,
and that any result containing deepfakes should be reported. And I think Google said something similar as well. So there's sort
of the rely on the private sector or try to regulate this at the federal or state level.
The problem is just jurisdictional. I mean, I think we've seen with a lot of state laws that
are targeting any type of internet activity, it's just very difficult to enforce. You only have
jurisdiction over your own state. And then there become a bunch of
jurisdictional questions. What counts as a deepfake being posted within this state?
Are you banning residents of your state from accessing deepfake videos? Or are you simply
banning people from posting deepfake videos, which you could only do if they are within the
jurisdiction of your state? So this is not a problem exclusive to deepfake videos.
We've seen this with states, for example,
trying to ban TikTok in app stores
that operate within the state of Montana, as one example.
And I think that same struggle
is manifesting itself on this issue.
You know, I was puzzling through this in my own mind
and wondering, could this go the way of CSAM, you know, child sexual abuse
materials? But I think because you could make a case where, for example, you know, this is
something that consenting adults could enjoy, you would have trouble with a universal ban of
something like this. Yeah, I mean, it's very difficult
because CSAM is very clearly unprotected First Amendment activity. There's really a carve-out
in the First Amendment for CSAM. It's more complicated here. I think when we're talking
about what this article is really referring to, which is the non-consensual use of these
deepfake images or videos, that to me is more of a clear-cut case where there is no First Amendment
public policy rationale for allowing that material. The risks certainly outweigh any of the benefits.
But when we're not talking about non-consensual images, I think however disgusted you are,
you have to recognize that the First Amendment comes into play. And there could be some artistic value or political value
or just kind of any value adding something to the public square of conversation
in some of these images that are going to trigger First Amendment protections.
I think any challenge to both consensual and non-consensual deepfake videos is going to run into those First Amendment challenges
because it is an inhibition
on First Amendment protected activity.
You know, we have decided that a bunch of restrictions that are technically not allowed under our First Amendment jurisprudence should nonetheless be allowed for public policy reasons.
We've done that in a number of circumstances,
including certain types of obscenity, false advertising.
So I think it's possible for us to make a societal choice that this type of non-consensual pornography with deep fakes is unacceptable and falls outside of First Amendment protected activity.
We have not made that decision yet as a society.
So I think it's going to be part of our national conversation. Is this another example
of the technology perhaps outstripping the policy's ability to deal with it? Yeah, it always
does. You know, we've now gone about a half decade with deepfakes being an issue. I think, you know,
there have been congressional hearings on deepfakes and the deleterious impact of them.
A lot of social psychology
experts from many of our best
universities have been writing about
the harmful mental health effects
of deepfakes on
mainly the women being depicted in them.
So it certainly entered into
the zeitgeist, but outside of California, we haven't seen a lot of concrete policy changes in this area. So
it is true that technology always outpaces the ability of our legal system to respond. And I
think that's definitely the case here. All right, we'll point you back to the article here, again,
written by Matt Burgess. Deepfake porn is out of control. That's over on Wired.
Ben Yelin, thanks so much for joining us. Thank you.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide, helping ensure your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default deny approach can keep your company safe and compliant.
And that's the Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
You can email us at cyberwire at n2k.com.
Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity.
We're privileged that N2K and podcasts like The Cyber Wire are part of the daily intelligence routine of many of the most influential leaders
and operators in the public and private sector,
as well as the critical security teams supporting the Fortune 500
and many of the world's preeminent intelligence and law enforcement agencies.
N2K Strategic Workforce Intelligence optimizes the
value of your biggest investment, your people. We make you smarter about your team while making
your team smarter. Learn more at n2k.com. This episode was produced by Liz Ervin and senior
producer Jennifer Iben. Our mixer is Trey Hester with original music by Elliot Peltzman. The show
was written by our editorial staff.
Our executive editor is Peter Kilpe, and I'm Dave Bittner.
Thanks for listening. We'll see you back here tomorrow.
With Domo's AI and data products platform, you can channel AI and data into
innovative uses that deliver measurable impact. Secure AI agents connect, prepare, and automate
your data workflows, helping you gain insights, receive alerts, and act with ease through guided
apps tailored to your role. Data is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.