CyberWire Daily - FBI strikes against a cybercrime syndicate.
Episode Date: May 16, 2024The FBI seizes BreachForums. NCSC rolls out a 'Share and Defend' initiative. ESports gaming gets a level up in their security. The spammer becomes the scammer. Bitdefender is sounding the alarm. The c...ity of Wichita gets a wake-up call. In our Threat Vector segment, host David Moulton discusses the challenges and opportunities of AI adoption with guest Mike Spisak, the Managing Director of Proactive Security at Unit 42. And no one likes a cyber budgeting blunder. Our 2024 N2K CyberWire Audience Survey is underway, make your voice heard and get in the running for a $100 Amazon gift card. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest In our Threat Vector segment, David Moulton, Director of Thought Leadership at Unit 42, discusses the challenges and opportunities of AI adoption with guest Mike Spisak, Managing Director of Proactive Security at Unit 42. They emphasize the importance of early security involvement in the AI development lifecycle and the crucial role of inventorying AI usage to tailor protection measures. You can listen to the full episode here. Selected Reading FBI seize BreachForums hacking forum used to leak stolen data (Bleeping Computer) New UK system will see ISPs benefit from same protections as government networks (The Record) Riot Games, Cisco to Connect and Protect League of Legends Esports Through Expanded Global Partnership (Cisco) To the Moon and back(doors): Lunar landing in diplomatic missions (WeLiveSecurity) New Black Basta Social Engineering Scheme (ReliaQuest) IoT Cameras Exposed by Chainable Exploits, Millions Affected (HackRead) Kimsuky APT Using Newly Discovered Gomir Linux Backdoor (Decipher) Law enforcement data stolen in Wichita ransomware attack (The Record) Nigeria Halts Cybersecurity Tax After Public Outrage (Dark Reading) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K. of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life Thank you. JoinDeleteMe.com slash N2K and use promo code N2K at checkout.
The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout.
That's JoinDeleteMe.com slash N2K, code N2K. The FBI seizes breach forums.
NCSC rolls out a share and defend initiative.
The spammer becomes the scammer.
Bitdefender is sounding the alarm.
The city of Wichita gets a wake-up call.
Esports Gaming gets a level up in their security.
In our Threat Vector segment, host David Moulton discusses the challenges and opportunities of AI adoption with guest Mike Spisak, the Managing Director of Proactive Security at Unit 42.
And no one likes a cyber budgeting blunder.
Today is May 16th, 2024.
And no, you did not hit play on the wrong podcast.
I'm Maria Varmazes, host of the T-Minus Space Daily podcast,
sitting in for Dave Bittner today.
This is your Cyber Wire Intel briefing.
Leaping Computer reports that the U.S. Federal Bureau of Investigation seized the Breach
Forums website and Telegram channel.
Breach Forums is a notorious hacking forum used to leak and sell stolen data.
The website now displays everybody's favorite boilerplate seizure notice, stating,
this website has been taken down by the FBI and DOJ with assistance from international partners.
We are reviewing the site's back-end data. If you have information to report about cybercriminal
activity on Breach Forums, please contact us.
Now, Breach Forums was the successor of a string of hacking forums used by cybercriminals to buy, sell, and trade hacked data, tools, and services.
The first of these sites was known as Raid Forums, and that initially launched in 2015 and became the largest site for distributing stolen data by ransomware and extortion groups.
Bleeping Computer notes that the data stolen from a Europol investigation-sharing portal was leaked on breach forums just last week.
The UK's National Cyber Security Centre launched its Share and Defend system,
offering ISPs access to malicious domain block lists previously used for government networks.
This initiative, which was announced at the Cyber UK conference,
aims to enhance national cyber defenses by blocking access to harmful content,
such as phishing sites.
Participation is voluntary.
BT and JISC are already enrolled, and Vodafone and TalkTalk are expected to join.
The system seeks to raise cyber resilience without replacing individual
vigilance. ESET researchers published a report on two newly uncovered backdoors, Lunar Web and
Lunar Mail, used by the Russian-linked Turla APT group. They were used to compromise a European
Ministry of Foreign Affairs and its diplomatic missions. Lunar Web communicates via HTTPS,
while Lunar Mail uses email, with both employing steganography to hide commands.
Active since 2020, these tools use advanced techniques including Trojanized software and
LUA scripting. The attack methods suggest prior domain controller access, with spear phishing and
misconfigured software
abuse as the likely initial access points. The investigation highlights sophisticated cyber
espionage targeting diplomatic entities. So, sorry space fans, while I am the host of T-Minded Space
Daily and my eyes may have lit up when I saw the headline, Lunar Mail and Lunar Web, this actually
has no real lunar implications. ReliaQuest describes a major social
engineering campaign that's distributing the black Basta ransomware. The campaign uses mass email
spam and voice phishing, otherwise known as vishing. Attackers overwhelm users with spam emails and
then impersonate IT support to persuade victims to download remote access tools like
QuickAssist or AnyDesk, gaining initial access to systems. They then execute scripts to establish
a command and control connections, exfiltrate data, and move laterally within networks.
ReliaQuest recommends blocking newly registered domains and setting up application whitelisting.
Bitdefender researchers identified four critical vulnerabilities in ThruTech's Calais platform,
exposing over 100 million IoT devices globally to potential attacks.
These flaws allow attackers to gain root access, execute remote code, and obtain sensitive data.
Devices affected include the OwletCam, Wyze Cam, and Roku Indoor Camera.
Bitdefender reported the issues in October 2023, and ThruTech released fixes by April 2024.
Users are urged to update their devices to prevent exploitation.
ESET researchers discovered that the North Korean-linked KimSuki APT group is deploying a new Linux backdoor named Gomir.
This backdoor is structurally similar to the Windows-based GoBear malware and is used to target organizations in South Korea.
Gomir has various capabilities, such as checking TCB connections, reporting machine configurations, and exfiltrating files.
TCB connections, reporting machine configurations, and exfiltrating files.
This malware is part of Kim Suu Kyi's broader strategy,
which includes supply chain attacks using Trojanized software installers to infiltrate targets.
The city of Wichita is warning residents about a recent breach that we reported on that revealed the chilling truth that no organization is immune to cyber threats.
Hackers exploited a known vulnerability,
breaching city networks and plundering law enforcement data,
including sensitive personal information.
As city officials scramble to contain the damage,
services grind to a halt,
with police resorting to paper records
and offices reverting to cash transactions.
But Wichita is just one casualty in a nationwide onslaught.
St. Helena, Macon Bibb County, and countless others have fallen prey to similar attacks,
leaving governments scrambling to restore functionality and safeguard citizen data.
The notice from Wichita officials didn't specify the vulnerability or the number of affected people
and are still unsure when systems will be restored.
Cisco and Riot Games expanded their global partnership for League of Legends esports.
Cisco will now serve as the official security partner.
This collaboration will integrate Cisco's security and digital experience solutions
to enhance the gaming experience for players and fans.
The partnership, which is ongoing since 2020,
aims to improve cybersecurity, prevent outages,
and ensure seamless digital experiences.
No bets on if it will decrease angry in-game allegations of hacks
or if noobs just need to get good.
Coming up after the break, we'll share our threat vector segment with host David Moulton and guest Mike Spisak talking about AI.
We'll be right back. Transat presents a couple trying to beat the winter blues.
We could try hot yoga.
Too sweaty.
We could go skating.
Too icy.
We could book a vacation.
Like somewhere hot.
Yeah, with pools.
And a spa.
And endless snacks.
Yes!
Yes!
Yes!
With savings of up to 40% transat south packages it's easy
to say so long to winter visit transat.com or contact your marlin travel professional for
details conditions apply air transat travel moves us do you know the status of your compliance
controls right now like right now we know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks. have continuous visibility into their controls with Vanta. Here's the gist.
Vanta brings automation to evidence collection across 30 frameworks,
like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber.
That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365, with BlackCloak.
Learn more at blackcloak.io.
Our Threat Vector segment features host David Moulton, Director of Thought Leadership at Unit
42, discussing the challenges and opportunities of AI adoption with his guest, Mike Spisak.
and opportunities of AI adoption with his guest, Mike Spisak.
Welcome to Threat Vector, the Palo Alto Network's podcast where we discuss pressing cybersecurity threats, ways to stay resilient, and uncover the latest industry trends.
I'm your host, David Moulton, Director of Thought Leadership for Uniforty2. In today's episode, I'll share our conversation I had with Technical Managing Director Mike
Spisak, who's responsible for proactive security solutions at Unit 42.
Mike is spearheading Unit 42's effort to safeguard AI systems.
In our conversation, we'll get into how organizations can harness AI
to build cutting-edge tools and platforms without compromising security
and reflect on the lessons learned from the early days of cloud computing.
Let's jump right into the conversation. We were going to talk about the work that you're leading here at Palo Alto Networks Unit 42 on protecting AI systems.
You know, we both lived through the shift to cloud computing. And I look at that as a near-term proxy for the sort of the environment that we're in today.
Do you see some key similarities between the early days of cloud adoption and the current wave of AI integration?
That's a great question and observation.
What's interesting about AI and its sort of compare and contrast, an analogy, if you will, to the adoption
of cloud computing, both are extremely revolutionary and in their own right, and also equally mystifying,
right? Because of that, I think, you know, you could look at it from a practical perspective,
right? Cloud computing really revolutionized computing, access, data storage, making infrastructure scalable and accessible to many.
Generative AI, or just AI in general, is sort of reshaping our interactions, content creation, embedding intelligence into various services.
embedding intelligence into various services.
And the two are closely related because generative AI and AI in general
is taking advantage of cloud computing.
So yeah, so they are parallel tracks,
but they're leveraging each other in some ways,
in many ways.
Now, from a security perspective,
I was talking about the adoption of generative AI
and I was speaking with chief information security officers, small room, intimate setting, and we were discussing how many CISOs in the room were finding out about AI projects as they were headed out the door.
And one individual spoke up and gave us a quick story and basically said, yeah, we're pushing a generative AI app out the door.
Security, you're our last gate.
We just need you to approve this so we can push to prod.
And a couple of things I want to observe there. Number one, this was the first time the security organization was hearing about this app that had been built.
So that's a problem.
So that's a problem. Yeah.
So that's a problem.
Right.
Number two,
you're our last gate.
Right.
How often have you heard that?
So security is the last gate out before they can go out the door,
which not only are they hearing about it for the first time,
they're the last box to check.
I was recently talking to Noel Russell,
and she talked about this idea of building AI applications, voice applications at a variety of different companies.
One of the analogies Noelle came up with was this idea of adopting a baby tiger.
And at first, they seem cute.
They seem cuddly.
You're not too worried about the fact that they have big paws.
They have claws. They have fangs. You're not worried about what the fact that they have big paws they have claws they
have fangs you're not worried about what are you going to do with those things that are dangerous
but as those ai models grow up their danger becomes a lot more apparent and i think that
as we see this ai enthusiasm i'd wonder if you have thoughts on some of the potential pitfalls or dangers that organizations are overlooking as they rush to adopt AI technology.
So I love that analogy, too, by the way, and I may have to borrow it.
But you smile a little when you think about, you know, baby tigers.
Right. They're cute.
They're cute, right?
tigers. Right. They're cute. They're cute. Right. And, uh, and then my goodness, when they get older and you still see, you know, they, they could still be well-behaved, um, pets, but you
know, they, but, but what's interesting about that analogy is it's, it's the training over time,
right. Of an animal, right. As it grows up and very, it's laughable, but almost quite true for,
especially for generative AI models that will change and alter their behavior over time based on what it's been trained on and what it's been trained to do.
Some of these interesting, you asked a question around some of these threats.
Now, what's a little dangerous about this is, depending upon how you manifest an AI application, it could very much at the surface look like a web app or a mobile app or an API, just like anything else. And you may treat it just like, from a cybersecurity perspective,
you may treat it just like you would any other mobile app, web app, or API type of interaction.
And that wouldn't necessarily be a wrong thing to do, right? You would want to monitor it. You
would want to ensure least privilege. You would want to put a firewall in front of it. You would want to ensure least privilege. You would want to put a firewall in
front of it. You would want to make sure there's encryption at rest and so on. So these are all
great things. But leveraging AI does introduce, and in particular generative AI, which is where
all of us are headed to now, introduces just sort of this, what I'll call an expanded attack
surface beyond what classic cybersecurity controls will be able to handle for us.
For example, prompt injection or insecure output handling
or model theft or, you know, exposure
or sensitive data oversharing or overexposure.
How do you take that enthusiasm that companies have
about new AI tools, but then have them balance that
with an understanding of the inherent risks
and ethical considerations?
So I think that's a multi-part question.
So if I had to break it down into a couple of key things,
and some of them we covered already,
but I'll just sort of reiterate them.
The first one would be education.
And when I say education, it comes in multiple flavors. I think this idea of consumption, right? So there's education in the sense of
using generative AI, what types of information is safe to put in, what types of information is not
safe to put in from a consumption perspective. The other side of the education would be, you know, workshops from a technical perspective
to allow builders and engineers a very similar sense of, you know, what are we allowed to
use, what libraries are safe to use, how much data, right?
What are our objectives?
But also understanding what are the new nuances related to software engineering that I need
to be aware of that will allow me to
effectively process data that goes into or comes out of a generative AI system.
So education, training, I think are paramount and are almost always, after inventory and
discovery, almost always the next step or one of the earlier steps.
Guidelines and policies.
one of the earlier steps. Guidelines and policies, and I mentioned this a few times about just everything from an organizational-wide acceptable use policy down to, you know, we all had to be
trained on what's confidential information, what's top secret information, what's proprietary,
and so on. I think we need to have clear guidelines and policies around, you know,
the ins and outs of AI,
just what I'll call classic or narrow AI as well as generative AI.
I think extending past that, when we get into ethics,
there should be committees and leadership just promoting and advocating.
It's easy to say, hard to do.
So without champions and leaders in your organization and at the organization level and down even at the department level, right?
Champions and leaders to effectively and transparently communicate these policies and lead by example from an ethical perspective.
That, in my experience, goes a lot further than just the forced march of this is the policy you shall obey.
So I think that'll go a lot further, having a champion, having leadership backing at all levels.
And that does another thing as well, not just the lead by example,
but also shows that you have an organization that is committed to leaning in to the adoption
of this accelerant technology, but at the same time doing it in a way that's ethical, effective
for the business. And it will allow all of us to accelerate and flourish together.
Yep. Well put, Mike. I appreciate you coming on ThreatVector today to talk about all things AI, whether it's getting into the career in security and understanding where AI can help out or to protect those systems that you're building and to be thoughtful, responsible, ethical in the way that you're deploying and reducing risk as this sort of new thing goes out into the world.
It's always a pleasure to talk to you and to learn from you.
I appreciate that, David.
Thank you for having me.
Look forward to chatting again soon.
That's it for Threat Vector this week.
I want to thank the Threat Vector team.
Michael Heller is our executive producer.
Our content team includes Shira Ladrosky, Tanya Wilkins, and Danny Milrad. I edit the show,
and Elliot Peltzman mixes the audio. We'll be back in two weeks. Until then,
stay secure, stay vigilant. Goodbye for now.
You can find links to the latest episode of Threat Vector in our show notes.
Check out your favorite podcast app to follow Threat Vector every other Thursday
to get the latest in the world of cyber threats. Thank you. trusted by businesses worldwide. ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications,
securing sensitive data, and ensuring your organization runs smoothly and
securely. Visit ThreatLocker.com today to see how a default deny approach can
keep your company safe and compliant.
Nigeria's attempt to fund cybersecurity through a levy on electronic transactions was swiftly halted due to public outcry amid an economic crisis. Amidst this backdrop, the proposed cybersecurity tax aimed to fortify defenses against cyber threats,
a pressing concern given Nigeria's history as a cybercrime hotspot.
The rollback of the levy raises concerns about the potential surge in cyber threats.
Deloitte's cybersecurity outlook warns of heightened risks,
including insider-supported attacks driven by economic desperation.
The forecast of increased ransomware attacks underscores the urgency of robust cybersecurity measures, especially for vulnerable sectors like government assets.
What's the moral of this cyber tale?
Well, proactivity pays off.
As Nigeria grapples with vulnerabilities, it's a wake-up call for the cyber community on
beefing up defenses. Plus, let's not forget the importance of transparency and smart spending.
No one likes a cyber budgeting blunder. Say that three times fast.
And that's The Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast.
Your feedback ensures that we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity.
If you like the show, please share a rating and review in your podcast app.
If you like the show, please share a rating and review in your podcast app.
Also, please fill out the survey in the show notes or send an email to cyberwire at n2k.com.
We're privileged that N2K Cyber Wire is part of the daily routine of the most influential leaders and operators in the public and private sector,
from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies to optimize your biggest investment, your people.
We make you smarter about your teams while making your team smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes. Our mixer is Trey Hester, with original music and sound design by Elliot Peltzman.
Our executive producer is Jennifer Iben.
Our executive editor is Brandon Karp.
Simone Petrella is our president.
Peter Kilpie is our publisher.
And while I'm not Dave Bittner, I am Maria Varmazes.
Thanks for listening. Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts,
and act with ease through guided apps tailored to your role.
Data is hard. Domo is easy.
Learn more at ai.domo.com.
That's ai.domo.com.