CyberWire Daily - Python developers under attack.
Episode Date: March 25, 2024

A supply chain attack targets Python developers. Russia targets German political parties. Romanian and Spanish police dismantle a cyber-fraud gang. Pwn2Own prompts quick patches from Mozilla. President Biden nominates the first assistant secretary of defense for cyber policy at the Pentagon. An influential think tank calls for a dedicated cyber service in the US. Unit 42 tracks a StrelaStealer surge. GM reverses its data sharing practice. Our guest is Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig, who shares trends in cloud-native security. And a Fordham Law School professor suggests AI creators take a page from medical doctors.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Guest Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig, shares trends in cloud-native security. To learn more, you can check out Sysdig's 2024 Cloud-Native Security and Usage Report.

Selected Reading
Top Python Developers Hacked in Sophisticated Supply Chain Attack (SecurityWeek)
Russian hackers target German political parties with WineLoader malware (Bleeping Computer)
Police Bust Multimillion-Dollar Holiday Fraud Gang (Infosecurity Magazine)
Mozilla Patches Firefox Zero-Days Exploited at Pwn2Own (SecurityWeek)
Biden nominates first assistant defense secretary for cyber policy (Nextgov/FCW)
Pentagon, Congress have a 'limited window' to properly create a Cyber Force (The Record)
StrelaStealer targeted over 100 organizations across the EU and US (Security Affairs)
General Motors Quits Sharing Driving Behavior With Data Brokers (The New York Times)
AI's Hippocratic Oath by Chinmayi Sharma (SSRN)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © 2023 N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash n2k, code N2K.
A supply chain attack targets Python developers.
Russia targets German political parties.
Romanian and Spanish police dismantle a cyber fraud gang.
Pwn2Own prompts quick patches from Mozilla.
President Biden nominates the first Assistant Secretary of Defense for Cyber Policy at the Pentagon.
An influential think tank calls for a dedicated cyber service in the U.S.
Unit 42 tracks a StrelaStealer surge.
GM reverses its data sharing practice.
Our guest is Anna Belak, director of the Office of Cybersecurity Strategy at Sysdig,
who shares trends in cloud-native security.
And a Fordham Law School professor suggests AI creators take a page from medical doctors.
It's Monday, March 25th, 2024. I'm Dave Bittner, and it's great to have you here with us today. Hackers targeted
Python developers by creating a malicious clone of the popular Colorama tool, which aids in using ANSI escape character sequences on Windows.
They executed a supply chain attack by typosquatting a legitimate Python mirror domain
to distribute the clone.
Among the victims was a maintainer of Top.gg,
a platform with a large Discord community.
The attackers hijacked accounts, including editor-syntax,
to spread the malware through compromised commits
and increase the visibility of malicious repositories.
The attack method involved using stolen cookies for account access,
blending malicious files with legitimate ones,
and hiding malicious code within the Colorama clone.
Once executed, the malware stole data from browsers, Discord, cryptocurrency wallets,
and more, transmitting it via various methods.
Given the growing prevalence of supply chain attacks, our Don't-Come-To-Me-With-Problems-Without-Also-Bringing-Me-Solutions
desk came up with five tactics that you and your team can employ to fortify your supply chain defenses
and ensure your code remains secure.
First up, meticulously verify package sources.
The crux of many attacks lies in the deceptive simplicity of typosquatting,
where fraudulent URLs mimic those of legitimate sources.
By double-checking the authenticity of package sources
and being vigilant against misleading URLs,
developers can thwart attempts
to introduce malicious clones into their projects.
Next, let's talk about package signing and verification.
This robust mechanism serves as a gatekeeper,
confirming both the integrity and the origin of your packages.
By embracing this practice, you're building a formidable barrier,
ensuring that only verified packages make their way into your supply chain.
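In the Python ecosystem, one concrete way to combine these first two tactics is to pin the exact artifacts you expect and refuse anything whose hash doesn't match, which is also what pip's --require-hashes mode enforces for requirements files. Here is a minimal sketch of that idea; the wheel file name and the expected digest below are placeholders, not real values.

    import hashlib
    import sys
    from pathlib import Path

    # Placeholder values: substitute the artifact you actually downloaded and the
    # SHA-256 digest published by the project or recorded in your lockfile.
    EXPECTED = {
        "colorama-0.4.6-py2.py3-none-any.whl": "0123456789abcdef-placeholder-digest",
    }

    def sha256_of(path: Path) -> str:
        """Stream the file in chunks so large artifacts don't load into memory."""
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        artifact = Path(sys.argv[1])
        expected = EXPECTED.get(artifact.name)
        actual = sha256_of(artifact)
        if expected is None or actual != expected:
            print(f"REJECT {artifact.name}: hash {actual} not in allowlist")
            sys.exit(1)
        print(f"OK {artifact.name}")

Run against a downloaded wheel before installing it, the script exits nonzero for anything that isn't on the allowlist, which is exactly the gatekeeping behavior described above.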
Implementing multi-factor authentication across accounts and environments is our third tactic.
In an era where account credentials can be compromised, adding an additional layer of
security through MFA is invaluable. This simple yet effective measure can significantly decrease
the likelihood of unauthorized access, safeguarding your repositories against compromised accounts.
The fourth tactic involves employing automated security scanning tools.
These proactively scan for unusual patterns within your codebase and dependencies.
By identifying potential risks before they're woven into your project,
these tools act as an early warning system.
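As one illustration of what such a tool does under the hood, here is a small sketch that checks pinned dependencies against the public OSV.dev vulnerability database. It assumes a requirements.txt made of simple name==version lines; dedicated scanners such as pip-audit cover far more cases, so treat this only as a conceptual example.

    import json
    import sys
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV vulnerability API

    def known_vulnerabilities(name: str, version: str) -> list:
        """Ask OSV.dev whether this exact PyPI release has published advisories."""
        payload = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": "PyPI"},
        }).encode()
        request = urllib.request.Request(
            OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response).get("vulns", [])

    def pinned_requirements(path: str = "requirements.txt"):
        """Yield (name, version) pairs from simple 'name==version' lines."""
        with open(path) as fh:
            for line in fh:
                line = line.split("#")[0].strip()
                if "==" in line:
                    name, version = line.split("==", 1)
                    yield name.strip(), version.strip()

    if __name__ == "__main__":
        findings = 0
        for name, version in pinned_requirements():
            for vuln in known_vulnerabilities(name, version):
                findings += 1
                print(f"{name}=={version}: {vuln['id']}")
        sys.exit(1 if findings else 0)

Wired into CI, a nonzero exit blocks the build, turning the early warning system into an enforced gate.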
Lastly, implement rigorous code review processes. Peer reviews are more than just
quality checks. They're a collective effort to spot and rectify anomalies that might escape
automated detection. This human-centric approach leverages the expertise of your development team,
applying a critical eye to every change and ensuring that your codebase remains resilient against
vulnerabilities. By integrating these five tactics into your development practices,
you're contributing to a more secure ecosystem for the entire community.
Mandiant reports that APT29, a hacking group tied to Russia's SVR, has shifted focus to target German political parties, specifically
using phishing campaigns themed around the Christian Democratic Union. This marks a
departure from their usual targets like governments and diplomatic missions. The phishing attacks
deploy a backdoor malware named Wine Loader, granting remote access to compromised devices.
named Wine Loader, granting remote access to compromised devices.
This malware, first spotted in operations against various countries' diplomats,
is noted for its encrypted communication with its control server and modular customized design for espionage.
Mandiant's discovery of APT29's campaign against German political entities
highlights the group's evolving tactics and
continued efforts to infiltrate and gather intelligence, possibly aiming to influence
or monitor political processes. Romanian and Spanish police dismantled a cyber-fraud gang,
seizing over €182,000 in cash, gold, and numerous electronic devices after conducting 22 house searches in
Romania. The gang, involved in sophisticated scams including fake ads and business email compromise,
tricked victims, primarily in Spain, into paying for non-existent holiday rentals,
cars, and electronics. Losses ranged from 200 to 10,000 euros per victim. Organized
into three units for operations, phishing, and money laundering, the gang's activities highlight
the growing threat and complexity of cyber fraud, with BEC scams alone generating over 2.9 billion
dollars in 2023. Europol described the group's operations as unprecedented in sophistication,
necessitating a comprehensive investigation to dismantle it.
Mozilla swiftly patched two critical zero-day vulnerabilities in Firefox that were identified
by Manfred Paul at the Pwn2Own Vancouver 2024 event.
These flaws allowed attackers to escape the browser's sandbox and execute code on the system.
One of the bugs involves out-of-bounds access
that can bypass range analysis,
while another permits privileged JavaScript execution,
leading to a sandbox escape,
but is only exploitable in desktop versions of Firefox.
Manfred Paul earned $100,000 for these Firefox vulnerabilities
as part of the Pwn2Own contest that saw participants earning over $1.1 million.
President Joe Biden has nominated Michael Sulmeyer
as the inaugural Assistant Secretary of Defense for Cyber Policy at the Pentagon,
a position mandated by the 2023 Defense Policy Bill to enhance focus on civilian-led cyber policy.
Sulmeyer, currently the principal cyber advisor to the Secretary of the Army, brings extensive experience from roles in the Office of the Secretary of Defense, the National Security Council, Cyber Command, and academia.
His background includes directing the Cybersecurity Project at Harvard's Belfer Center
and affiliations with University of Texas School of Law and Georgetown Center for Security and
Emerging Technology. Congratulations to Mr. Sulmeyer and best wishes for success.
The Foundation for Defense of Democracies, a nonpartisan Washington think tank,
advocates for the U.S. to establish a dedicated cyber service to address recruitment challenges,
coordination issues, and the lack of a cohesive culture for digital warriors.
Highlighting cyberspace as a critical warfighting domain, the think tank emphasizes the urgency for
Congress to act to ensure sustainable cyberforce readiness. With the military struggling to develop
a talent pipeline and provide sufficient support, the report suggests starting a cyber
force with around 10,000 personnel and a $16.5 billion budget under the Army's auspices.
The recommendation comes amid growing concerns over cyber threats from rivals like Russia and
China and debates over enhancing the Department of Defense's cyber capabilities, including the discussed
Cyber Command 2.0 initiative. The think tank plans to lobby lawmakers for an independent study
into establishing this new military branch, aiming to enhance the U.S.'s cyber posture
amidst evolving global cyber challenges. Palo Alto Networks' Unit 42 has uncovered a surge in StrelaStealer malware attacks,
impacting over 100 organizations across the EU and US.
Initiated through spam emails with malicious attachments, the latest variant of StrelaStealer, an email credential stealer first identified in November of 2022, employs updated
obfuscation techniques and is delivered via a zipped JScript file. The campaigns, notably
aggressive in November 2023 and peaking again on January 29th of 2024, have targeted sectors such
as high-tech, finance, legal, and manufacturing. The malware's distribution method
involves spear phishing emails that lead to the download of a JScript file, which then drops and
executes a Base64-encoded DLL payload using sophisticated evasion tactics. This development
demonstrates the malware's continuous evolution and the threat actor's efforts to bypass security measures.
General Motors has ceased sharing driver data with LexisNexis Risk Solutions and Verisk,
data brokers for the insurance industry,
after a New York Times report revealed GM's practice of sharing detailed driving behavior
through its OnStar Smart Driver feature.
The feature, available in GM's internet-connected cars,
tracked and shared information on mileage, braking, acceleration, and speed,
affecting some drivers' insurance rates.
The decision comes as a Florida man files a class action complaint against GM and involved parties,
following a significant increase in his insurance rates linked to the collection of driving data.
Over 8 million vehicles were part of the Smart Driver program,
which generated revenue in the low millions for GM annually.
A tip of the hat to New York Times technology reporter Kashmir Hill for running down this story and in doing so demonstrating why good journalism helps keep big companies like GM honest.
Coming up after the break, my conversation with Anna Belak,
Director of the Office of Cybersecurity Strategy at Sysdig.
We're talking cloud-native security.
Stay with us.

Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this.
More than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings automation to evidence collection across 30 frameworks like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting, and help you get security
questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta
when you go to vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses
is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365
with Black Cloak. Learn more at blackcloak.io.
I recently had the pleasure of speaking with Anna Belak, Director of the Office of Cybersecurity
Strategy at Sysdig. She shared trends in cloud-native security.
So we started this report, I want to say, I'm going to get it wrong, in like 2018.
Anyway, whatever seven years ago is.
And the beauty of it is that unlike a lot of reports that you see, which are survey data,
where people express how they feel about their environment,
this is actual data of what people are actually doing in their environment because our tool provides visibility into that,
and we're able to aggregate and anonymously share what our actual customers are doing in the cloud.
Well, let's dig into some of the details here. What are some of the things that caught your eye?
You know, every year I see the data and I'm like, I'm surprised, I'm not surprised. I can't really decide if it's actually surprising. So maybe for me, the unsurprising one that's nonetheless a bit startling is the identity data.
So we found last year that a very high percentage of permissions granted to cloud accounts are unused. And this year, we actually saw that number go up. This year, it's 98 percent of permissions that are granted are not used.
Unpack that for me. What exactly does that mean?
Yeah, so this is basically your default password problem from the old days, right?
So what happens in the cloud is we have all these different roles and identities we create.
And it's actually quite complex, right? Because you need different types of identities to do different things. So you have
humans who are, you know, administering things, you have machine identities that perform certain
tasks only. And then you may have like interesting roles designed in specific ways for specific
environments or purposes, right? And so you would think, because you have all this flexibility, you could design a very explicit set of permissions. So any given human or machine can
only access the required assets or the required tasks that they can perform and so on. But what
actually happens in real life is nobody has the time to do that. And also, it's not clear whose
explicit responsibility it is to define those policies.
So people just do the thing
that's the fastest and most convenient,
which is grant most accounts
all possible privileges they would ever need
and then not revoke them.
So what you end up with
is a whole lot of identities in the cloud
that have way too much access
that they really don't need
and that creates a lot of opportunity for attackers.
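To make that concrete for readers, here is a rough sketch of one way to hunt for granted-but-unused access on AWS, using the IAM access advisor APIs through boto3. The role ARN is a placeholder, the two-second polling interval is arbitrary, and this is only an illustration of the idea Anna describes, not how Sysdig gathers its data.

    import time
    import boto3  # assumes AWS credentials are configured in the environment

    ROLE_ARN = "arn:aws:iam::123456789012:role/example-role"  # placeholder ARN

    iam = boto3.client("iam")

    # Kick off an access-advisor job for the role, then poll until it finishes.
    job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]
    while True:
        report = iam.get_service_last_accessed_details(JobId=job_id)
        if report["JobStatus"] != "IN_PROGRESS":
            break
        time.sleep(2)

    # Services the role is allowed to call but has never actually touched.
    for service in report["ServicesLastAccessed"]:
        if not service.get("LastAuthenticated"):
            print(f"granted but unused: {service['ServiceName']}")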
Yeah, that's interesting.
What other things caught your attention here?
So my favorite data point is actually a little bit complex,
so I'm just going to touch on it, but it's about drift control, right?
So drift control is this notion that if you are in the cloud and you define everything as code,
then you always know if the environment or the workload or the configuration
has changed from what it was supposed to be, because it's like literally codified in policy.
And we have a capability in our tool that lets you see when those workloads have changed.
So we saw that 25% of our customers have it turned on, and 4% of our customers have it
turned on in a blocking mode.
So that means they do not allow drift on workloads.
And at first I was like, well, those numbers are kind of small,
but actually
blocking mode is pretty intense,
right? So this means that you are so certain
that your workloads are truly immutable
that any drift in them
is unacceptable to the point where you don't even want
to see an alert, you're just going to kill it.
Right? And I think that's pretty, like, that's
very forward-looking. That means that people have actually learned how to design
architectures that are truly cloud-native
and how to build
security around those architectures
in a way that actually ends up saving
a lot of the noise, right? Because we hear about noise a lot
and blocking drift is like,
well, there's no noise because it's just gone, right?
So I thought that was pretty cool in terms of, like, optimism.
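As a toy illustration of the idea for readers: drift detection boils down to comparing a workload's current files against a baseline captured when it was built, and "blocking mode" simply means failing hard instead of just alerting. The sketch below is a conceptual stand-in, not Sysdig's implementation, and the paths are placeholders.

    import hashlib
    import json
    import sys
    from pathlib import Path

    def snapshot(root: Path) -> dict:
        """Map every file under root to the SHA-256 of its contents."""
        return {
            str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in sorted(root.rglob("*"))
            if path.is_file()
        }

    def check_drift(root: Path, baseline_path: Path, block: bool = False) -> bool:
        """Report files added, removed, or modified since the baseline was taken."""
        baseline = json.loads(baseline_path.read_text())
        current = snapshot(root)
        drifted = sorted(
            name for name in set(baseline) | set(current)
            if baseline.get(name) != current.get(name)
        )
        for name in drifted:
            print(f"drift detected: {name}")
        if drifted and block:
            sys.exit(1)  # blocking mode: kill the run rather than just alerting
        return bool(drifted)

    if __name__ == "__main__":
        # Placeholder paths: the workload's filesystem root and its recorded baseline.
        check_drift(Path("/srv/app"), Path("baseline.json"), block=False)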
That is cool.
What about the 25% who have it on,
but aren't being so intense about it?
Yeah, so I think that speaks to the fact
that most orgs are, in real life,
transitioning from some legacy state to some future state.
And they may forever operate a hybrid environment
because there are lots of applications
that have business value that are
just not worth refactoring, right?
So I suspect that in the 25% what's going on
is they're either applying
the drift policy,
like they're turning it up, like they started
with alerting so they can see what's going on, and they're
maybe changing their workloads
to be drift-blockable,
even though they're not prepared yet.
Or they maybe have just multiple types of workloads.
They have some workloads that are immutable and some that are mutable,
and maybe they're commingled somehow.
So in any case, most environments are kind of a mess.
And so I suspect that most of the folks who have it on in alerting mode
are in some sort of transition state where they are maturing the organization,
essentially, which is also encouraging, by the way.
Was there anything particularly surprising
or unexpected for you?
I would say, about Drift
or in general? Just in general.
I would say the AI
data point. Now, people
know that I'm a filthy AI
skeptic.
I just...
But you own it. I mean, you gotta lean in, right?
There's two reasons I'm an AI skeptic.
One is that I have a PhD in computational
physics, so I know what it looks like underneath.
And the other is that
when I was a Gartner, the machine learning
craze came through about
almost 10 years ago now, where
everything was ML, ML, ML.
And, like, 99% of that was just BS, right?
So now we're doing the same thing, except it's AI, AI, AI, because now it's a different
flavor of AI, and it's like 99% BS.
Anyway, the data point we found, we actually looked at who has deployed the packages, the
AI packages in their environment, right?
So you have to understand that most people interact with AI these days through an API.
So they're using, like, ChatGPT, they're typing stuff into a box, which means it's someone else's environment that's actually performing the calculation and you're just giving it some parameters, right?
But if you believe the hype that like AI is the future and like nobody can survive in business without adopting AI, you would expect a lot more organizations to be building their own stuff.
And I don't mean like making their own models necessarily, but I mean like building applications around models
or leveraging these packages.
So like deploying that capability
inside their own infrastructure
as opposed to just making an API call, right?
And what we actually did see was
that this number was very small.
So we only saw 31% of users integrate any kind of AI.
And most of that AI was not gen AI.
It was like statistical
models, machine learning models, like analytics
of various kinds. So of the
31% that we saw deploying anything,
only 15% were Gen AI.
Right. So that indicates
a much lower adoption than what you
would expect from what you hear in the media
and see on the internet.
So is that...
I will be cynical and say,
is that checkbox AI for the marketing department?
You know, it's...
Or am I overstating it?
My security optimist theory
is that people are actually cautious
because of the security and safety implications of AI.
Okay.
My true cynic theory is that, honestly, there are
not that many phenomenal use cases for this
in all lines of business, right?
And that's sort of my perspective in general
on the current wave of AI, is like, there's some phenomenal
use cases for large language
models and for generative AI in general.
My favorite is maybe, like, art.
Like, there's really cool digital
manipulation you could now do with AI
that's, that's amazing.
But for every company in the world to have to use Gen AI for whatever they're doing is kind of silly
because there are very few technologies that actually unilaterally affect everybody.
So my theory is just that the folks who really are going to benefit from this are going all in
and the rest of us are kind of making noises but not really going all in.
So based on the information that you all have gathered here,
what are your recommendations?
What are the actionable bits of advice you can share?
I think the biggest take-home is a little disappointing, I guess.
And it's just that we're still doing the same things we've always been doing.
We're still choosing convenience and speed over security and rigor and quality.
And it's very understandable why that is.
It's very clear that
A, we're all humans and we just want to get the job done.
And B, ultimately
your business exists because it makes
money and so anything that impedes the making
of money is kind of bad.
So I think we
still have a lot of work to do on the
security side of
proving to the business that
security is a value add and not a money sink. And then also, you know, on the technical side,
creating a lot more simple processes for the various teams who are not themselves security
teams to perform their work more securely, kind of inherently, right? So I think that's kind of
the challenge of the next decade. I mean, it's not easy. We've been trying to do it for a long
time. And now we're just trying to do it
in a whole new environment of cloud,
which has its own complexities
and then new problems and so on.
So I think that's the sad one.
The kind of happier one
is sort of what I pointed out about,
like, you know, Drift and even AI, right?
You see these small numbers like,
oh, 15% of 30% or like 4% blocking, whatever.
And that feels like not very much,
but I think you got to take a different perspective
and say like, listen, those guys are early adopters
and they're really pushing the frontiers
and they are clearly successful, right?
So I just think that's more of a signal
that we have a lot of opportunity
to become more secure in this novel environment
and we should sort of pattern our efforts after those folks that have, in fact, already done it.
That's Anna Belak, Director of the Office of Cybersecurity Strategy at Sysdig.
ThreatLocker is a full suite of solutions designed to give you total control, stopping unauthorized applications, securing sensitive data, and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.

Maybe, but definitely 100% closer to getting 1% cash back with TD Direct Investing.
Conditions apply. Offer ends January 31st, 2025. Visit td.com slash dioffer to learn more.
And finally, an academic paper from Chinmayi Sharma,
Associate Professor of Law at Fordham Law School,
makes the case that the core issue with AI's potential for harm,
ranging from bias to manipulation,
is not the technology itself but its creators,
who are often guided by profit-driven companies
rather than the pursuit of safe, socially beneficial products.
Sharma says government and litigation have been ineffective in curbing detrimental AI engineering practices.
The proposed solution is to professionalize AI engineering, requiring engineers to obtain licenses for building commercial AI products,
adhere to scientifically-backed technical standards, and self-regulate.
This approach aims to prevent AI harms by influencing engineering decisions at their source,
shifting control from companies to engineers,
and fostering the development of trustworthy AI by design.
This move towards professionalization
is likened to practices in other fields
where public welfare is paramount,
suggesting that AI engineers
should also prioritize doing no harm,
similar to medical professionals.
It's an interesting proposal, worth a read.
I can imagine a Hippocratic Oath for AI
at the top of the list, of course, would be
first, do no harm. Maybe number two should be do not create Skynet.
And that's the Cyber Wire. For links to all of today's stories, check out our daily briefing at thecyberwire.com.
We'd love to know what you think of this podcast. You can email us at cyberwire at n2k.com.
N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people.
We make you smarter about your team while making your team smarter.
Learn more at n2k.com.
This episode was produced by Liz Stokes.
Our mixer is Trey Hester with original music by Elliot Peltzman.
Our executive producers are Jennifer Iben and Brandon Karp.
Our executive editor is Peter Kilby and I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow.

With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role.
Data is hard. Domo is easy.
Learn more at ai.domo.com.
That's ai.domo.com.