CyberWire Daily - Artificial intelligence behaving badly? Or just tastelessly? Third-party risks. Signs that the advantage may be tilting toward the defender.
Episode Date: February 27, 2023
Social engineering with generative AI. Mylobot and BHProxies. PureCrypter is deployed against government organizations and staged through Discord. Dish Network reports disruption. Third-party app and software-as-a-service risk. Further assessments of the cyber phase of Russia's war so far, with warnings to stay alert. Are tough times coming in gangland? Comments on NIST's revisions to its Cybersecurity Framework are due this Friday. AJ Nash from ZeroFox on mis-, dis-, and malinformation. Rick Howard digs into Zero Trust. And get this—AI is writing science fiction! For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/38
Selected reading:
Social engineering with generative AI. (CyberWire)
Who’s Behind the Botnet-Based Service BHProxies? (KrebsOnSecurity)
Mylobot: Investigating a proxy botnet (Bitsight)
PureCrypter targets government entities through Discord (Menlo Security)
PureCrypter malware hits govt orgs with ransomware, info-stealers (BleepingComputer)
Uncovering the Risks & Realities of Third-Party Connected Apps: 2023 SaaS-to-SaaS Access Report (Adaptive Shield)
Ukraine war anniversary likely to bring ‘disruptive’ cyberattacks on West, agencies warn (Global News)
How the Ukraine War Has Changed Russia’s Cyberstrategy (Foreign Policy)
A year of wiper attacks in Ukraine (WeLiveSecurity)
Russia's yearlong cyber focus on Ukraine (Axios)
A year after Russia's invasion, cyberdefenses have improved around the world (Washington Post)
One year on, how is the war playing out in cyberspace? (WeLiveSecurity)
The Russia-Ukraine cyber war: one year later (IT World Canada)
Russia launched large-scale operations in cyberspace alongside war (euronews)
WSJ News Exclusive | Hackers Extort Less Money, Are Laid Off as New Tactics Thwart More Ransomware Attacks (Wall Street Journal)
AI-generated fiction is flooding literary magazines — but not fooling anyone (The Verge)
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
Social engineering with generative AI,
Mylobot and BHProxies,
PureCrypter is deployed against government organizations
and staged through Discord,
Dish Network reports disruption,
third-party app and software as a service risk,
further assessments of the cyber phase of Russia's war so far
with warnings to stay alert,
are tough times coming in gangland?
Comments on NIST's revisions
to its cybersecurity framework are due this Friday.
A.J. Nash from ZeroFox on mis-, dis-, and malinformation.
Rick Howard digs into Zero Trust.
And get this, AI is writing science fiction.
From the CyberWire studios at DataTribe,
I'm Dave Bittner with your CyberWire summary for Monday, February 27th, 2023. Researchers at SafeGuard Cyber have observed a social engineering campaign
on LinkedIn that used the DALL-E generative AI model to make images for phony ads,
designed to gather personal information.
The malicious ads purported to offer a link to a white paper that would empower sales teams with next-level insights and strategies.
A spicy come-on you've probably seen a time or two.
If a user clicks the ad, they'll be asked to enter their personal information,
including their email address and phone number, in order to receive the white paper.
SafeGuard Cyber's researchers comment that this information would be useful in preparing future targeted phishing attacks. Following a report earlier this month from Bitsight that described the BHProxies botnet, a residential proxy service, and the actor behind it,
KrebsOnSecurity wrote Friday that the goal seems to be a move in the criminal-to-criminal market.
BHProxies seems to be linked to a six-year-old botnet named Mylobot, and its goal seems to be the transformation of infected systems into proxies.
The BHProxies service lets customers rent residential IP addresses
to use as relays for their internet communications,
providing anonymity and the advantage of being perceived as residential users surfing the web.
It's said to deliver access to over 150,000 devices. The Mylobot
threat actor, whose first activity was detected in an October 2017 sample by Deep Instinct,
has used sophisticated methods of camouflage, lying dormant for a couple of weeks on an
infected system before making contact with command-and-control servers and running only in the
temporary memory of the infected computer. Bitsight researchers say that they cannot prove that BHProxies is linked to Mylobot, but they have a strong suspicion, since Mylobot and BHProxies
have used the exact same IP address within a 24-hour interval.
Menlo Security is tracking a campaign that's using the commodity downloader PureCrypter to target government entities.
The threat actor uses Discord to host the downloader
and employs a compromised domain belonging to a non-profit organization
as a command and control server.
The attackers are using PureCrypter to deliver a variety of malware strains,
including the RedLine stealer, Agent Tesla, Eternity, Blackmoon, and Philadelphia ransomware.
The researchers conclude that this threat actor doesn't appear to be a major player in the threat landscape,
but the targeting of government entities is surely a reason to watch
out for them. Adaptive Shield's annual SaaS-to-SaaS Access Report, which discusses this year's
organizational security risks posed by connected third-party apps, was released this morning.
The researchers report that companies with 10,000 SaaS users of Microsoft 365
have on average just over 2,000 applications connected to the productivity software.
That number jumps to about 6,700 connected applications for Google Workspace.
For companies with 10,000 to 20,000 users of Google Workspace,
the average number of connected apps increases to just shy of 14,000.
High-risk permissions, such as the ability to see, create, edit, and delete
Google Drive files and M365 data, have been found in 39% of apps connected to Microsoft 365
and in 11% of apps connected to Google Workspace. The apps most commonly connected to such software
have been email applications, followed by file and document management apps. Scheduling,
content management, and project management apps also earned a spot on the top 10 list.
Organizations are advised to look to their policies and look to their training.
While Russian offensive cyber action against Ukraine has been heavy and marked by the intelligence services' attempts at disruptive attacks,
using wipers, for example,
it has fallen far short of pre-war expectations.
Ukrainian resilience has blunted much of the Russian cyber offensive's effects.
ESET offers a history of wiper attacks over the course of the war.
CyberScoop draws attention to the success of Ukrainian defensive measures,
which have certainly minimized the effects of the wipers
and other attempts to influence the outcome of the war in cyberspace.
The Canadian Centre for Cyber Security issued a warning Friday calling
for a heightened state of vigilance, especially among those in the critical infrastructure sector,
urging them to bolster their awareness of and protection against malicious cyber threats.
There's another sign that the advantage may have tilted a bit toward the defenders.
State-directed and politically motivated threat actors
aren't the only ones finding their tasks harder.
The Wall Street Journal reports that cyber gangs' proceeds
from their crimes have fallen off,
and the individual criminals themselves
are facing the equivalent of layoffs.
Companies encouraged by more stringent requirements
for obtaining cyber insurance
have improved their defenses,
and more aggressive law enforcement activity has also taken a direct toll on the gangs.
Again, that's not a reason to get complacent,
but it does offer some reassurance that the defender's task isn't a futile one.
Proposed changes to U.S. National Institute of Standards and Technology
guidance, described in NIST's Cybersecurity Framework 2.0 concept paper on potential significant updates
to the Cybersecurity Framework, are open for public comment through this Friday, March 3,
2023. Among other goals, the changes are intended to expand the scope of the framework to organizations of all sizes in all sectors.
They also reflect an increased emphasis on international cooperation and a more extensive treatment of cybersecurity as an exercise in risk management.
Comments on Framework 2.0 can be emailed to NIST.
What fresh hell is this?
That's what poet Dorothy Parker used to say
when she walked into a party.
We'll say it again now.
The Verge says science fiction magazines
are getting a lot of AI-written submissions.
Apparently, the editors say they can tell,
so at least we got that going for us.
These submissions aren't fan fiction, but seriously, can AI-written fanfic be far behind?
We fear we won't be spared.
Somewhere, some algorithm is churning out saucy versions of Fifty Shades of Jean-Luc Picard,
or Chewbacca visits the Valley of the Dolls.
Are the artificially intelligent progeny of us,
allegedly naturally intelligent homo sapiens,
destined to make all of our mistakes only in a more robotic way?
We knew this would happen.
It comes from hanging out with bad data and bad algorithms,
the kind of algorithms you find hanging out on street corners,
throwing rocks at cars.
It's a shame, but there you have it.
So stay in school, friends.
By friends, we mean algorithms.
And choose your role models with care. Coming up after the break,
AJ Nash from ZeroFox on mis, dis, and malinformation,
and Rick Howard digs into Zero Trust.
Stick around.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks.
But get this: more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta. Here's the gist. Vanta brings
automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and help you get security questionnaires done
five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta
when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak. Did you know the easiest way for cyber criminals to bypass your
company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached. Protect
your executives and their families 24-7, 365, with Black Cloak. Learn more at blackcloak.io.
In a recent report outlining predictions for 2023,
A.J. Nash from ZeroFox outlined the prevalence of mis-, dis-, and mal-information.
He makes the case that they may indeed rank at the top of the list of threats
that governments and organizations face in the coming year.
On the horizon piece, what we're seeing growth in is mis-, dis-, and malinformation. We've seen a pretty good-sized growth in that area, I would say, over the last
six or seven years. In social media, we've seen it in regular media at this point. And this has
been a huge threat that I think is really a growing problem that people need to look at,
because this is no longer just newsworthy. This is really impacting the lives of everyone that we run into right now. I don't know anybody
right now, honestly, that isn't impacted by mis, dis, or malinformation right now.
Can we unpack that a little bit? Because misinformation, disinformation, I think people
probably are clear on, but malinformation, I think there's some nuance there. How do you and your colleagues there separate those three different flavors of information?
Yeah, that's a good question. People confuse these terms all the time. To be honest,
I've confused them regularly. I'm often having to go back and check my own references:
was it mis-, dis-, or malinformation? They all sort of blend together. And they're misused in media. They're misreferred to. So to be clear, misinformation is false information, but it's not intended to
cause harm. So a good example of that would be, my aunt tells me, hey, I heard a story.
Did you hear about this political figure said this or did this, right? Any random thing.
She doesn't mean harm. She read
it someplace. She believes it's true and she's passing it along to me. It's misinformation.
It's not accurate, but it's being passed along without intent to harm. It may still cause harm,
but that harm is unintended. Frankly, when things go viral, that's when we get into a lot
of misinformation. Disinformation is false information, but it's intended to manipulate
or cause damage. So the original
source, for instance, if in using the same scenario, my aunt who's quoting something,
perhaps where she got it from was intentional. Somebody actually intended to cause harm. They
said something that wasn't true about, let's say a politician they disliked or a sports figure
they disliked or, you know, a media figure, whatever it might be. Right. So they're pushing
disinformation, but it's now being pushed around
and it's become misinformation along the way.
People aren't intending to push it,
but they've made it go viral.
So there's that subtle difference there.
The other issue that goes in this
is sometimes misinformation was never intended
for harm to begin with.
It wasn't necessarily disinformation
that turned into misinformation.
It's just somebody made a mistake.
Somebody heard something wrong,
they misinterpreted something,
they said something,
and that just spread like wildfire. And we see that a lot. And then people, especially famous people have to come back and unwrap those, you know,
they were misquoted or somebody said, well, I saw this person fell down in the streets and they must
have a health issue. And it turns out they just tripped on a crack on the sidewalk and I've got
video to prove it. And you've got to go back and point that out. Right. Because it impacts reputation.
Now, malinformation, that's really the most dangerous one because malinformation starts with a grain of truth.
There's something in there that was true, and then it's exaggerated in a way that is misleading or causes harm.
So malinformation is really difficult, much like any form of lying. The most effective lie is a lie that's based on something
true, because somebody might see the truth in it and then not be as scrutinizing of the rest of the comment or the rest of the
commentary that follows, which is, in fact, dishonest. It's not true. So these are subtle
differences and they're hard to distinguish. And to be honest, in terms of their impact,
they may not need to be distinguished for you or I.
If we're getting information to us, it's either accurate or it's not.
Professionals will go back and try to figure out why it was out there, why it was inaccurate, why it caught fire, went viral, and who's responsible.
But for you and I, the key piece really is we've got to do the fact-checking.
We've got to do the research and determine if what
we're reading and what we're hearing is true. And then if it's not, if we have the time and energy,
we certainly can go back and try to figure out why we were given this false information.
But there's just so much out there right now. We're all drowning. I pointed out that the average user
spends 147 minutes a day on the internet right now. We're all living in a space of just massive inputs.
What are your recommendations on an organizational level for dealing with this?
Is this, you know, keeping an eye on the Slack channels to see if some of this stuff starts
to take off?
Or, you know, what are some practical ways folks can stay on top of this?
Yeah, that's a good question.
So, and it's a really tough piece because what a lot of this starts with is we have
to agree on what is considered a reliable source.
And I think this is where we're running into a lot of challenges societally: people
are discrediting sources that were always considered reliable in the past.
And so if you don't have a reliable source people agree on, then you end up with this
siloing of information and people choose their sources and the sources align with what they want and it just feeds their bias. So I think it's important to
have third parties that you trust, that you rely on, that you say, these are sources we trust and
believe in. These are unbiased people trying to do good work. And that's why a lot of
times this ends up being put out to third parties. And it takes a lot of vigilance. There has to be an incredible amount of monitoring and observing
to understand what is being said and to see the early stages of a misinformation campaign or a
disinformation campaign or a malinformation campaign and to get in front of those things.
You know, whether it's a social media campaign and looking for somebody that's maligning your brand
and being in a position to stop that. The importance of fact-checking.
It's really hard,
but we really need to do a better job
of helping people understand what is considered a valid source
and do some fact-checking.
And looking at several different sources.
Sources maybe you believe in,
and sources maybe you don't believe in,
so you can get a wider picture.
And if there's conflict between them,
then how are you going to decide?
But it's really important to analyze what we're taking in,
take a moment and say, where did this come from? You know, what's the likelihood it's true? If it
sounds outrageous, whether it sounds outrageous because you're offended by the thing you've just
read or heard or outrageous because it just sounds so amazingly incredible, it couldn't possibly be
true because how awesome is this thing? In either case, you should really take a look at the source
and start looking through and saying, what's the likelihood this is true? If the politician that I
really, really dislike, this horrible story just came out about them and it's going to be a massive
scandal, but I haven't read it anyplace else or heard it anywhere else, maybe I should do some
research. Maybe it's just not true. It's like any other scam. If it's too good to be true or too bad to be true, it's certainly worth taking time
to research and seeing what's there.
You know, it's a hard thing to do.
But if we don't take the time to do the fact checking, if we don't take the time to do
the research, if we just live in our own silos and our own bubbles, we're victims.
You know, we're allowing ourselves to be. We're actually willing victims of that. We're choosing to be the dupe,
we're choosing to be the fool, because we like what we're hearing. And that's really the
biggest danger to me in these mis-, dis-, and malinformation campaigns: there are people
who want to take advantage of those who are willing to be taken advantage of if they're
told the things they want to hear. That's A.J. Nash from ZeroFox. It is always my pleasure to welcome back to the show Rick Howard. He is the CyberWire's
Chief Security Officer and also our Chief Analyst. Rick, welcome back. Hey, Dave. So, in our CyberWire programming meeting earlier this week, we were going over all of the published
episodes of CSO Perspectives starting back in 2020, and one of the first ones you ever did
covered the topic of zero trust as a first principle strategy. It's been over three years,
Rick. What else you got?
Ouch.
That is so true.
Point well taken, sir.
Okay.
But I think it's fair to say that Zero Trust comes up a lot across the CyberWire's network of podcasts and newsletters, not to mention the training side that we get with our brothers
and sisters from the CyberVista merge we did last year.
Well, you know, most of the stuff I see regarding zero trust is kind of strategic or philosophical.
And my sense is we tend to come up a little short when it comes to practical implementations out there in the field.
I know you've got your ear to the ground.
You hear a lot about pilot projects and things like that, but not necessarily always a lot of success stories. Yeah, I think that's true. And I think one of the reasons you don't
hear a lot of success stories from the field is the fact that zero trust is a strategy. You know,
there are a million things you can do to improve your zero trust architecture. And more importantly,
it's kind of a journey with no obvious end point. Right. So it's not like you get to the end of your Zero Trust project and say, folks, we've solved it.
We can wrap up.
We can move on to something else.
Zero Trust is complete.
We'll check off that box.
Yeah, we're going to move on to curing cancer now.
No, we don't get to that point yet.
I mean, is there anybody you can think of out there who is having success?
Well, that was our question here at CSO Perspectives.
For the past year, the interns down in the underwater sanctum sanctorum have been scouring the landscape to find those stories, and they found a great one.
We're going to talk to John McLeod, the CISO at NOV, who has not only moved his organization far down the Zero Trust journey,
he did it during the pandemic, right?
So it's a remarkable story.
Well, thank goodness for the interns.
I guess you're going to have to double their meager rations this week as a reward.
Yeah, indeed.
We'll double their bread and double their water rations.
They're so happy down there right now.
I'll bet.
I'll bet.
So that's over on the subscription side.
What is on the public side of your CSO Perspectives podcast?
We are unvaulting a Rick the Toolman episode from June of 2022 on the current state of intelligence sharing.
And we're talking to some pioneers in the field, some of the original members of the FS-ISAC that got the ball rolling back in the early
2000s. We have Denise Anderson, we've got Errol Weiss, and Byron Collie, just to name three.
Well, before I let you go, what is the phrase of the week over on your WordNotes podcast?
This week, we're covering ZTNA. That's Zero Trust Network Access. It seems to be a theme for our little conversation here, right?
But these are the technologies that directly support the Zero Trust Strategy.
And we even hear from the father of the concept, John Kindervag.
All right.
Well, look forward to that.
Rick Howard, always a pleasure to speak with you.
Thanks so much for joining us.
Thank you, sir.
Cyber threats are evolving every second, and staying ahead is more than just a challenge.
It's a necessity. That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications,
securing sensitive data, and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default deny approach can keep your company safe and compliant.
And that's The Cyber Wire.
For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
The Cyber Wire podcast is a production of N2K Networks,
proudly produced in Maryland
out of the startup studios of Data Tribe,
where they're co-building the next generation
of cybersecurity teams and technologies.
This episode was produced by Liz Ervin
and senior producer Jennifer Iben.
Our mixer is Trey Hester
with original music by Elliot Peltzman.
The show was written by John Petrick.
Our executive editor is Peter Kilpe
and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.
Your business needs AI solutions that are not only ambitious, but also practical. Domo's AI agents connect, prepare, and automate your data workflows, helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role.
Data is hard. Domo is easy.
Learn more at ai.domo.com.
That's ai.domo.com.