CyberWire Daily - UK's NCA and NCSC release a study of the cybercriminal underworld. HijackLoader's growing share of the C2C market. Russia's hacker diaspora in Turkey. Cyber diplomacy, free and frank.
Episode Date: September 11, 2023

UK's NCA and NCSC release a study of the cybercriminal underworld. HijackLoader's growing share of the C2C market. Russia's hacker diaspora in Turkey. Author David Hunt discusses his new book, "Irreducibly Complex Systems: An Introduction to Continuous Security Testing." In our Industry Voices segment, Mike Anderson from Netskope outlines the challenges of managing Generative AI tools. And a senior Russian cyber diplomat warns against US escalation in cyberspace. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/173

Selected reading.
Ransomware, extortion and the cyber crime ecosystem (NCSC)
HijackLoader (Zscaler)
New HijackLoader malware is rapidly growing in popularity (Security Affairs)
New HijackLoader Modular Malware Loader Making Waves in the Cybercrime World (Hacker News)
Spyware Telegram mod distributed via Google Play (Secure List)
Millions Infected by Spyware Hidden in Fake Telegram Apps on Google Play (The Hacker News)
'Evil Telegram' Android apps on Google Play infected 60K with spyware (BleepingComputer)
Influx of Russian fraudsters gives Turkish cyber crime hub new lease of life (Financial Times)
Russia warns "all-out war" with US could erupt over worsening cyber clashes (Newsweek)
New strategy for global cybersecurity cooperation coming soon: State cyber ambassador (Breaking Defense)

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners, today get 20% off your Delete.me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
The UK's NCA and NCSC release a study of the cybercriminal underworld,
HijackLoader's growing share of the C2C market,
Russia's hacker diaspora in Turkey.
My interview with author David Hunt,
discussing his new book, Irreducibly Complex Systems.
In our Industry Voices segment,
Mike Anderson from Netskope outlines the challenges of managing generative
AI tools. And a senior Russian cyber diplomat warns against U.S. escalation in cyberspace.
I'm Dave Bittner with your CyberWire Intel briefing for Monday, September 11th, 2023.
The UK's National Cyber Security Centre and National Crime Agency this morning published a report looking at ransomware's place in the cybercrime ecosystem
and outlining the attack chain used by ransomware actors.
The agencies think that a broad view of the ransomware landscape is necessary to address the problem more effectively.
In some ways, the report argues, attribution is superficial. They state, while on the surface an attack can be attributed to a piece of ransomware such as LockBit, the reality is more nuanced,
with a number of cybercriminal actors involved throughout the process. Tackling individual
ransomware variants, something which the NCSC and NCA are
frequently challenged on, is akin to treating the symptoms of an illness and is of limited use
unless the underlying disease is addressed. Taking a more holistic view by understanding
the elements of the wider ecosystem allows us to better target the threat actors further upstream,
in addition to playing whack-a-mole with the ransomware groups. So, no whack-a-mole, says NCSC and NCA. Why is this?
It's because cybercriminals aren't stupid, or at least not in a way that would tend to make them
run afoul of the usual sanctions, indictments, and prosecutions. They rebrand, they modify code, and they distance
themselves from the details of the original attacks. These simplistic measures are sometimes
enough to keep them in business. The criminal-to-criminal markets facilitate this kind of
dodging. As the report notes, each function can be conducted by a different threat actor and sold
to each other as a service.
It's also possible for gangs to vary their tools to use, in the report's language, different functions,
and indeed some functions are merely optional, useful in some cases but not others.
The report recommends that organizations concentrate on the high-level attack paths and especially the methods by which the crooks gain initial access,
as opposed to the specific scoundrel at the keyboard. Leave that to the people with the badges.
Of course, what's for sale in the C2C markets remains interesting. Researchers at Zscaler,
for example, are warning about a new malware loader that's gained market share in the
underground market. Hijack Loader, as it's known, has spiked in popularity over the past few months.
The loader first emerged in July 2023 and is being used to deliver several malware families,
including Danabot, SystemBC, and Redline Stealer. Zscaler notes, even though Hijack
Loader does not contain advanced features,
it is capable of using a variety of modules for code injection and execution
since it uses a modular architecture, a feature that most loaders do not have.
The researchers add,
We expect code improvements and further usage from more threat actors,
especially to fill the void left by Emotet and Qakbot.
Kaspersky discovered several malicious Telegram clones in the Google Play Store that appear to
be designed to target Chinese-speaking users, particularly China's Uyghur population. The apps
purport to be faster versions of the legitimate Telegram app and are capable of stealing the victim's entire correspondence,
personal data, and contacts.
Bleeping Computer notes that the apps have been downloaded more than 60,000 times.
Google has since removed the apps from its Play Store.
The Financial Times reports that among the many thousands of young military-aged men
who skipped from Russia last fall to evade increased conscription were a large number of hackers, IT workers, and most significantly, cybercriminals.
Turkey received several thousand such emigrants, and many of them have either connected with local Turkish gangs or formed small criminal groups themselves.
Conditions for cybercriminals in Turkey are not as easy as they are in Russia, where cybergangs operate with the connivance of
the government. They enjoy no such official protection in Turkey, but hope to stay at large
by keeping their crimes petty, by avoiding hitting targets in Turkey, and by keeping their trade as unobtrusive and evasive as possible.
The expatriate criminal's preferred tool is RedLine, commodity malware that nonetheless seems to evade widely used defensive software.
It's most often downloaded inadvertently by people using illegal websites to play video games or pirated versions of popular software.
The criminal take is retail-level stuff, passwords and other login credentials, as well as credit card data.
It also includes stolen cookies, possession of which makes it easier to use the other data the
thieves hold. The information is traded in an underground market researchers call the
underground cloud of logs. The newly arrived
Russians are said to have taught the existing Turkish cybercriminals how to make better use
of their tools, and in particular, how to organize their stolen data in ways that render them more
attractive in the C2C markets. In an interview with Newsweek, Artur Lukmanov, director of the Russian Foreign Ministry's International Information Security Department and special representative to President Vladimir Putin on international cooperation on information security, reiterated familiar Russian non-denial denials of Moscow's offensive cyber operations.
U.S. allegations are accompanied by a lack of hard evidence, he says,
so it's not so much "we didn't do it" as "where's your evidence," and besides, you're the guilty ones here.
He described the U.S. national cybersecurity strategy as an inherently escalatory document
that deeply implicates the U.S. government and U.S. corporations in preparations for cognitive warfare.
He said, we want to halt further deterioration.
A mistake in the use of ICTs may lead to a direct conflict, an all-out war,
especially as the White House is aware that Russia has all the necessary capabilities to defend itself.
A devastating computer attack against our critical information
infrastructure will not be left without response. One of the principal lessons the U.S. has drawn
from Russia's war is that effective cyber defense depends on international cooperation,
and specifically upon cooperation among the public and private sectors of democracies.
Breaking Defense reports that Ambassador-at-Large Nate Fick
told the Billington Cybersecurity Summit last week
that a new strategy for promoting such cooperation was under preparation
and that it would be circulated this fall.
And finally, we'd be remiss if we didn't close
with a brief remembrance of the terrorism of 9-11,
now 22 years in the past.
Join us in sparing a thought for those who suffered and died
in the attacks and their aftermath.
And also, when you can,
reach out to those who mourn or care for them.
Sometimes the best thing you can do for grief
is simply listen. Coming up after the break, my interview with author David Hunt
discussing his new book, Irreducibly Complex Systems, an introduction to continuous security
testing. In our Industry Voices segment, Mike Anderson from Netskope outlines the challenges
of managing generative AI tools.
Stay with us.
Do you know the status of your compliance controls right now?
Like, right now? We know
that real-time visibility is critical for security, but when it comes to our GRC programs,
we rely on point-in-time checks. But get this, more than 8,000 companies like Atlassian and Quora
have continuous visibility into their controls with Vanta. Here's the gist. Vanta brings automation
to evidence collection across 30 frameworks, like SOC 2 and ISO 27001. They also centralize
key workflows like policies, access reviews, and reporting, and help you get security questionnaires done five times faster with AI. Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices,
home networks, and connected lives.
Because when executives are compromised at home,
your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families
24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
Mike Anderson is Chief Digital and Information Officer at Netskope,
with over 25 years of experience in the industry. In this sponsored Industry Voices segment,
I ask Mike Anderson about the
proliferation of generative AI tools and how organizations can balance the utility of these
tools against the potential security risks they present. There's a lot of conversation from the
boardroom down around how is GenAI going to impact how we operate as an organization,
what skills is it going to require,
what positions may be impacted,
which ones may not be impacted.
And so a lot of the conversation is,
we can't block it from people using it in our organization.
In fact, that's a very daunting task for most companies because every week we're seeing three to five new startups
that are coming out
building on top of the existing platforms like OpenAI's ChatGPT, Bard, and others.
And so you've got that aspect of it, but at the same time, there's lots of concern around
are people uploading sensitive information into public models? How do we make sure we
distinguish between a public model and a private model? And so there's a lot of questions and governance type
things that people are talking about today because they definitely want to say, how do we
safely enable generative AI in our organizations, but at the same time,
stay on top of the changes that are going on globally as well.
And how do you suppose an organization can come at striking that balance between the usefulness of these tools,
but those legitimate concerns as well?
Yeah, so what I'm seeing a lot of my peers doing in the industry is they're paying for opening up some of the new paid models
from providers, whether it's Google or Microsoft.
They're buying licenses now for their employees to give them a safe place to go innovate versus some of the
free models. If we think about ChatGPT, we have the free model where things get uploaded
into the public large language models. And then we've got our private ones where the data is contained within our
environment. The challenge is really that creates a good framework, but then there's so many of
these new applications popping up. And Grammarly is a great example. It's very difficult to
distinguish between a paid Grammarly subscription and a free subscription. And so because we can't
distinguish, we block that from our users using that platform until we get to the point where we
can distinguish. Because what we don't want is effectively a keylogger logging all the
interactions our users are having and having that information go into a public large language model.
And so a lot of it is give people a place to go and experiment in a safe way versus outright blocking. So where do organizations stand when it comes to
addressing things like data governance and consent management? That's a great question.
What I see people doing today is, one, is they're looking at the lineage of data. For example,
there was a case that came up recently in case law with a judge where an attorney had basically gone through
and searched for a brief or a precedent to basically support a claim they were making,
and they used generative AI. So from a data governance standpoint, one of the things we're
seeing is people trying to make sure there's a good lineage of where did data come from,
what's the source, the attribution of the data is key because we can't just rely
on things in a public large language model because it's sourcing data from the entire internet.
It's scanning everything. And so there was a good example recently in a courtroom where an attorney
basically used information from ChatGPT to support their claim they were trying to make,
but the data was actually from something
that was fictitious, not something that was real. And so that starts to bring concern where we're
actually seeing in law offices and courtrooms where people have to cite their evidence. They have to
attribute where that information came from, and they have to actually say, did they use generative
AI in any form in anything to do from a legal standpoint to make sure that it stands up.
And so when we think about data governance, it's that lineage.
Where did that data come from?
How is it attributed?
So when decisions are being made, especially even on private models, how can I make sure that I trust the information that's coming from it to make a business decision?
And so oftentimes, you know, to help temper expectations today around kind of where we're at, what you see is some of my peers are giving questions to their board members and their
C-suite to say, go to ChatGPT or some of these public models and ask the following
questions and look at the answers you get. And they're questions that all the boardroom members
and all the C-suite would know the answers to, to compare whether the answers are accurate or not.
And it's a good way to level set expectations from when you really think about the governance of data that's used to make these decisions.
And so I find that to be a very good place, but I feel like we're at the beginning.
But this is a truly transformational moment in technology. I correlate it to when we saw the iPhone introduced in 2007. We're at that
point now with generative AI where we're just at the beginning and everyone's really trying to put
the structures around it in real time. What about the communications channels themselves,
securing that pathway between the user and these large language models?
Yeah, so the ones where you're going directly to the tool, like ChatGPT,
those are the easier ones to address.
Where it becomes more complicated is this world of third-party plugins we see within, whether it's Microsoft or Google or Salesforce,
any of our key SaaS applications that we leverage today,
we have the ability to plug in various add-ons.
We see it in the browser world.
If we look at Google Chrome, I can download add-ons for my Google Chrome browser.
And so it's those type of plugins where I feel like we have more heartburn because they're
harder to detect.
And so it really comes into this whole conversation around third-party risk.
And that's another area where we're also using some of our own technology.
We just announced here recently the ability from a SaaS security posture management standpoint,
the ability to identify all the different plugins that people are trying to use and assess risk against those.
So we've cataloged over 70,000 applications, each with their own individual risk scores.
And so then we can apply that same
risk scoring to those third-party plugins that people are trying to use, whether it's a browser
or it's something that plugs directly into a Teams or a Slack or a Covalent-type tool
we're using today within our organizations. What are your recommendations for organizations who
are just getting started on this journey? They realize and recognize the power of these tools,
but perhaps they're feeling a little overwhelmed
at getting a handle on securing them.
Do you have a suggestion for where to begin
and what pathway to take?
Well, selfishly, we want everyone to take a look at Netskope
because we use our own technology
and feel pretty good about
how we're managing these things internally.
What I always recommend to people is give people a safe place, and realize that you're not going to block it.
I mean, I go back to the 90s before we had email that could work outside of our organizations.
You know, we saw the consumerization of IT. So email, so when the free email platforms came out
like Yahoo Mail back in the late 90s, what we saw is people would forward their work email
to their personal email so they could get access to it at home.
And that was a forcing function for organizations
and to open up email so people could access it
from outside the four walls of their organization.
And so we're seeing the same thing happen today
when we think about generative AI
and we think about other examples like that.
What we need to do is give people a safe place to go experiment.
Outright blocking is not a good strategy, so how do we give people that safe sandbox?
Educate them. I always say, give people a license to go fishing.
Make sure they're fishing in the right place with the right equipment
so when they get something on the line and they reel it in, we have a positive
outcome versus perhaps a negative outcome.
And so put the right guardrails and give people the license to experiment, but help them understand the right place to experiment.
And then use tools that are out there in the market today to basically police that third-party component we spoke about around those third-party plugins.
But then also to make sure we're protecting and guiding our users and giving them that GPS or that compass to make sure they know where to go, where not to go, what to do
and not to do in real time. And don't just rely on someone reading something or attending a webinar
internally, which we know people have to hear things 27 times before they remember it. So let's
make sure and remind them every time so it starts to become, you know, brainstem for all of our users.
That's Mike Anderson from Netskope.
David Hunt is co-founder and CTO at Prelude Security and author of the new book,
Irreducibly Complex Systems, an Introduction to Continuous Security Testing.
David Hunt has worked at organizations like MITRE, Mandiant, John Deere, and the U.S. government.
While at MITRE, he designed and built the Caldera Framework, an open-source tool for conducting semi-autonomous purple team assessments.
Our conversation begins with him describing his motivation for writing the book.
Yeah, I've been in the security space for, I guess, about 17 years now, and I've done a lot of writing on the topic.
And I've kind of bounced between public and private sector in terms of red teaming and offensive security.
And I've kind of seen a shift in the last, I don't know, 6, 12, 18 months in how security testing is happening across different organizations.
And watching that trend happen and then kind of like really feeling it through my daily work, I wanted to get that down on paper.
And so I think it's pushing against the grain in a lot of ways in terms of what has been done in security testing, the idea of continuously testing your security.
And I want to get that down on paper and kind of give an explanation of where I see that trend going and kind of some of the technical reasoning as to how we got there.
Well, can you help us with a definition here?
How do you describe continuous security testing? The way I like to describe it is repeatedly testing if your defenses are capable of defending against emerging threats. And so maybe a more
understandable way of saying that is, as we read the news and we see different attacks occurring,
we've talked a lot about the move at vulnerability over the last couple of months. The question always comes down
to, could this happen to me? Am I vulnerable to this actual attack? And the idea behind continuous
security testing is around the clock to be able to test each one of your security controls for
that particular vulnerability. So even if you don't
have it today, if it popped up tomorrow, you would understand how your defense reacted to it.
And what are the advantages of adopting this kind of system?
It's really information and intelligence early on. And so when we look at what we've done in
red teaming in the past, we are able to create intelligence, but it's point
in time. And so we might, taking the MOVEit example, we might see that we have a vulnerability that's MOVEit. We understand that at this point in time, we have that vulnerability,
but we lose sight of that next month, a month after that, and so forth.
When you're running tests continuously, what you start to realize is
you're able to regression test an entire production infrastructure. So it doesn't matter when the
vulnerability comes into your environment or if it goes away and comes back, you actually have a
heartbeat the entire time. Can you give us some examples here of how this actually works in
practice? So there's like a lot of security testing, it's two parts.
And so what you want in continuous security testing is you want one part that's what's
called a probe or an agent.
And when you deploy those out on your endpoints, so things like computers, servers, containers,
and so forth, those things create a persistent connection back to what you'd refer to as
your command and control center.
That command and control center is basically an automated scheduler. And so what the behavior
that you want in the real world is you want to set your command and control center up where it can
schedule out tests on a repeated basis to all of your endpoints. And as these endpoints retrieve
tests, they execute them and spit the results back to the command and control center where those results can be aggregated.
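The probe-and-scheduler architecture Hunt describes might be sketched as follows. This is a hypothetical illustration only, not Prelude's actual tooling: the class names are invented, and an in-process queue stands in for the persistent network connection a real agent would maintain.

```python
import queue
import platform

class Scheduler:
    """The "command and control center": queues tests for endpoints
    on a repeating basis and aggregates the results they report."""
    def __init__(self):
        self.pending = queue.Queue()
        self.results = []

    def schedule(self, test_id, command):
        self.pending.put((test_id, command))

    def report(self, test_id, exit_code):
        self.results.append((test_id, exit_code))

class Probe:
    """The endpoint agent: retrieve a test, execute it, and spit the
    result back to the scheduler."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def run_once(self):
        test_id, command = self.scheduler.pending.get()
        exit_code = command()          # execute the test body
        self.scheduler.report(test_id, exit_code)

# Example: a trivial "test" that merely checks the probe can identify
# its own operating system.
sched = Scheduler()
sched.schedule("T-001", lambda: 0 if platform.system() else 1)
Probe(sched).run_once()
print(sched.results)   # [('T-001', 0)]
```

In a deployed system the scheduler would push tests on a timer, around the clock, which is what turns a point-in-time assessment into the continuous heartbeat Hunt describes.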
And how do you ensure that in this process you're going to do no harm?
That's one of the biggest tenets that I go into in the book is continuous security testing needs to do no harm. That harm, I think, is most obviously represented in the tests themselves, making
sure that the test cannot actually create a negative effect on the host. Because continuous
security testing is designed to run in production and across all your devices, it introduces that
as a potential risk. So the way that I describe this in the book is each one of the tests should have
guardrails built in. So the tests themselves, for example, can be limited based on the amount of
runtime that you give them. I like 10 seconds. So you try to accomplish everything that you need to
accomplish in the test within 10 seconds. Another guardrail that's pretty popular is verification
of where the test comes from. So each one of these endpoint probes
that you can deploy inside of your environment should have the ability to verify the test is
coming from a location that you approve. That avoids any sort of man-in-the-middle attacks,
which would be one of the biggest threat vectors to a system like this.
Well, and then how do organizations take the information that they've gathered here and turn that into some sort of actionable strategy?
That is a great question, because this is also one of the biggest changes in continuous security testing that I go into in the book.
The way, and I like to describe it from kind of where we're coming from with security testing.
Where we're coming from is a world where we run security tests,
and then we have a security engineer or a red teamer contextualize what those results are in order to determine what to do remediation-wise. And so, for example, you would run a test
from the terminal, you would look at the terminal output, and you would say,
hey, these IP addresses
have specific ports open that have a vulnerability. Therefore, based on my knowledge and ability to
contextualize the terminal output, here's what I would do. Now, that doesn't scale really well
beyond a couple of people inside of a smaller environment. So continuous security testing takes
a much more production-ready type of approach.
And what continuous security testing emphasizes is a simple result code, an exit code, be returned for every test.
So when you run an actual test, the output, the terminal output, is disregarded, and a
particular exit code is sent off of the endpoint into your command and control
center. Now it's the aggregate amount of those exit codes that tell the picture and do the
contextualizing for you in a very automated way. So for example, one exit code might be 105. 105
might be quarantined test. That would indicate that a defensive control, say an EDR,
quarantined the security test while it was running. That'd be a good thing. You want the defense to
quarantine bad things. And so at scale, you're able to collect all of those codes for all of
these tests and build basically a giant heat map of what your environment looks like at any time.
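The exit-code aggregation Hunt describes might look like the sketch below. Only the code 105 (test quarantined by a defensive control) comes from the conversation; the other code meaning, the endpoint names, and the tallying scheme are illustrative assumptions.

```python
from collections import Counter

CODE_MEANINGS = {
    0:   "test completed, behavior not blocked",   # assumed meaning
    105: "quarantined by a defensive control",     # e.g. an EDR, per Hunt
}

# (endpoint, test_id, exit_code) tuples reported back to the
# command and control center; hypothetical sample data.
results = [
    ("laptop-01", "T-001", 105),
    ("laptop-02", "T-001", 105),
    ("server-01", "T-001", 0),     # this host did NOT block the test
    ("server-01", "T-002", 105),
]

def heat_map(results):
    """Tally exit codes per test across every reporting endpoint,
    letting the aggregate do the contextualizing automatically."""
    summary = {}
    for endpoint, test_id, code in results:
        summary.setdefault(test_id, Counter())[code] += 1
    return summary

for test_id, counts in sorted(heat_map(results).items()):
    blocked = counts.get(105, 0)
    total = sum(counts.values())
    print(f"{test_id}: quarantined on {blocked}/{total} endpoints")
# T-001: quarantined on 2/3 endpoints
# T-002: quarantined on 1/1 endpoints
```

Because each result is a bare code rather than terminal output needing a human to interpret it, this scales to an entire production fleet, which is the contrast with manual red-team contextualization Hunt draws above.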
That's David Hunt from Prelude Security.
The book is titled Irreducibly Complex Systems,
an introduction to continuous security testing.
Cyber threats are evolving every second. ThreatLocker offers solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com
today to see how a default deny approach can keep your company safe and compliant.
This episode is brought to you by RBC Student Banking.
Here's an RBC student offer that turns a feel-good moment into a feel-great moment.
Students, get $100 when you open a no-monthly-fee RBC Advantage Banking account
and we'll give another $100 to a charity of your choice.
This great perk and more, only at RBC.
Visit rbc.com slash get 100, give 100.
Conditions apply.
Ends January 31st, 2025.
Complete offer eligibility criteria by March 31st, 2025.
Choose one of five eligible charities.
Up to $500,000 in total contributions.
And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Don't forget to check out the Grumpy Old Geeks podcast,
where I join Jason and Brian on their show for a lively discussion of the latest news every week.
You can find Grumpy Old Geeks where all the fine podcasts are listed.
We'd love to know what you think of this podcast.
You can email us at cyberwire at
n2k.com. Your feedback helps us ensure we're delivering the information and insights that
help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K
and podcasts like the Cyber Wire are part of the daily intelligence routine of many of the most
influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and
many of the world's preeminent intelligence and law enforcement agencies. N2K Strategic
Workforce Intelligence optimizes the value of your biggest investment, your people. We make
you smarter about your team while making your team smarter. Learn more at n2k.com.
This episode was produced by Liz Ervin and senior producer Jennifer Iben.
Our mixer is Trey Hester with original music by Elliot Peltzman.
The show was written by our editorial staff.
Our executive editor is Peter Kilby and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.

With Domo, you can channel AI and data into innovative uses that deliver measurable impact. Secure AI agents connect, prepare,
and automate your data workflows, helping you gain insights, receive alerts, and act with ease
through guided apps tailored to your role. Data is hard. Domo is easy. Learn more at
ai.domo.com. That's ai.domo.com.