CyberWire Daily - Cyberattack cripples major American chipmaker.
Episode Date: August 21, 2024A major American chipmaker discloses a cyberattack. Cybercriminals exploit Progressive Web Applications (PWAs) to bypass iOS and Android defenses. Mandiant uncovers a privilege escalation vulnerabilit...y in Microsoft Azure Kubernetes Services. ALBeast hits ALB. Microsoft’s latest security update has caused significant issues for dual-boot systems. The DOE’s new SolarSnitch program aims to sure up solar panel security. Researchers uncover LLM poisoning techniques. An Iranian-linked group uses a fake podcast to lure a target. Our guest is Parya Lotfi, CEO of DuckDuckGoose, discussing the increasing problem of deepfakes in the cybersecurity landscape. Return to sender - AirTag edition. Miss an episode? Sign-up for our daily intelligence roundup, Daily Briefing, and you’ll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn. CyberWire Guest Our guest Parya Lotfi, CEO of DuckDuckGoose, discusses the increasing relevance of deepfakes in the cybersecurity landscape. Selected Reading Microchip Technology discloses cyberattack impacting operations (Bleeping Computer) Android and iOS users targeted with novel banking app phishing campaign (Cybernews) Azure Kubernetes Services Vulnerability Exposed Sensitive Information (SecurityWeek) ALBeast: Misconfiguration Flaw Exposes 15,000 AWS Load Balancers to Risk (HACKREAD) Microsoft’s latest security update has ruined dual-boot Windows and Linux PCs (The Verge) DOE debuts SolarSnitch technology to boost cybersecurity in solar energy systems (Industrial Cyber) Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code (Dark Reading) Best Laid Plans: TA453 Targets Religious Figure with Fake Podcast Invite Delivering New BlackSmith Malware Toolset | Proofpoint US (Proofpoint) Serial mail thieves thwarted when victim sends herself an AirTag (Apple Insider) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Discussion (0)
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners.
private by signing up for Delete Me. Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
A major American chipmaker discloses a cyber attack.
Cyber criminals exploit progressive web applications to bypass iOS and Android defenses.
Mandiant uncovers a privileged escalation vulnerability in Microsoft Azure Kubernetes services.
AL Beast hits ALB.
Microsoft's latest security update has caused significant issues for dual boot systems. The DOE's new Solar Snitch program aims to shore up solar panel security.
Researchers uncover LLM poisoning techniques.
An Iranian-linked group uses a fake podcast to lure a target.
Our guest is Paria Latfi, CEO of DuckDuckGoose,
discussing the increasing problem of deepfakes in the cybersecurity landscape, and Return to Sender, AirTag Edition.
It's Wednesday, August 21st, 2024.
I'm Dave Bittner, and this is your CyberWire Intel Briefing. Thanks for joining us once again here today.
It is great to have you with us.
Microchip Technology Incorporated, a major American chipmaker,
has disclosed a cyber attack that disrupted operations across multiple manufacturing facilities.
The company, headquartered in Chandler, Arizona,
serves approximately 123,000 customers in various sectors, including industrial, automotive, and aerospace.
The attack, which was detected on August 17, forced Microchip to shut down and isolate affected systems, resulting in reduced manufacturing capacity and impacting its ability to fulfill orders.
In a recent SEC filing, Microchip revealed that an unauthorized party had disrupted its use of certain servers and business operations.
The company is currently assessing the damage with the help of external cybersecurity experts
while working to restore normal operations.
The full extent and impact of the attack are still unknown,
and while the filing hints at a possible ransomware incident,
no group has yet claimed responsibility.
Microchip is also evaluating whether the breach will materially affect its financial condition.
Cybercriminals are exploiting progressive web applications, PWAs, to bypass iOS and Android defenses, launching a malicious campaign targeting users in Eastern Europe.
These PWAs, which look like legitimate banking apps, are actually just malicious websites packaged as apps.
as apps. Users are tricked into installing them through phishing links delivered via SMS,
social media ads, and automated calls urging them to update their banking apps. Once installed,
these fake apps mimic real ones but lead to phishing sites where login credentials are stolen.
ESET researchers uncovered that at least two threat actors are behind this campaign,
using different command-and-control infrastructures.
The campaign has primarily affected users in the Czech Republic, Poland, Hungary, and Georgia.
ESET warns that this method could lead to more spyware PWAs, as browser APIs allow these fake apps to request access to sensitive device functions.
A privilege escalation vulnerability in Microsoft Azure Kubernetes services, AKS,
could have allowed attackers to access sensitive information,
such as service credentials used by the cluster, Mandiant reports.
The issue affected AKS clusters using Azure CNI for network configuration and Azure for network policy.
Attackers with command execution in a pod with the cluster could exploit this vulnerability to download cluster node configurations,
extract TLS bootstrap tokens, and access all secrets in the cluster. The flaw could be exploited even
without root privileges or host network enabled. Microsoft resolved the issue after being notified.
Mandiant highlights the risk of Kubernetes clusters lacking proper configurations,
as attackers could use this vulnerability to compromise the cluster, access resources, and even expose internal cloud services.
The flaw also allowed attackers to use the TLS bootstrap token to gain broader access to cluster secrets.
discovered by MIGO Research that allows attackers to bypass authentication and authorization in applications using AWS Application Load Balancer, ALB.
This misconfiguration in ALB's user authentication can lead to unauthorized access,
data breaches, and data exfiltration.
ALBeast affects applications relying on AWS ALB, especially those not following updated AWS documentation.
Attackers can exploit this vulnerability by creating a malicious ALB, forging a token, and manipulating ALB configurations to bypass defenses.
MIGO research identified over 15,000 potentially vulnerable ALBs. AWS addressed the
issue by updating authentication documentation and providing guidance to affected organizations.
To mitigate the risk, organizations should verify token signers, restrict traffic to trusted ALB
instances, and ensure security configurations are aligned with AWS
recommendations. Microsoft's latest security update has caused significant issues for dual-boot
systems running both Windows and Linux. Intended to fix a two-year-old vulnerability in the Grub
bootloader, the update inadvertently affected dual booting devices,
preventing Linux installations from booting properly. Users have reported security policy
violation and something has gone seriously wrong errors across various Linux distributions.
The patch was meant to enhance secure boot by blocking vulnerable Linux boot loaders,
but Microsoft claimed it wouldn't affect dual boot systems.
Despite this, many users are facing problems, and Microsoft has yet to comment.
The Department of Energy's Office of Cybersecurity, Energy Security, and Emergency Response
has introduced SolarSnitch, a cybersecurity technology developed by Sandia
National Laboratories. SolarSnitch is designed to protect communications within photovoltaic
systems at the grid's edge by analyzing cyber and physical data in PV smart inverters and using
machine learning to detect potential cyber threats.
Funded with $490,000 from DOE's CSER and the Solar Energy Technologies Office,
the project aims to mature Solar Snitch for commercialization over the next 24 months through real-world testing. The technology is part of a broader effort to secure distributed energy resources like rooftop solar systems, which are increasingly critical to grid reliability.
Solar Snitch is among 50 clean energy projects selected in the fiscal year 2024 Technology Commercialization Fund.
Developers are increasingly using AI programming assistants to write code,
but new research highlights the risks of blindly accepting AI-generated code.
A team of researchers from University of Tennessee, Knoxville,
Singapore Management University, and University of Connecticut uncovered a technique called CodeBreaker,
which can poison AI models, like large language models, to suggest
vulnerable code that appears benign. This method bypasses static analysis tools and hides malicious
code in ways that make it difficult to detect, potentially leading to serious security risks.
The research underscores the importance of developers carefully reviewing AI-generated code,
not just for functionality, but for security.
Developers are urged to maintain a critical approach and to learn prompt engineering techniques to generate secure code.
The study builds on previous work showing that AI models can be poisoned
by inserting malicious examples into their training data
as AI becomes more integrated into development processes,
ensuring the security of these tools and the code they produce is crucial.
In July of this year, the Iranian-linked threat group TA-453
impersonated the research director of the Institute for the Study of War
to target a prominent Jewish figure with a phishing campaign.
The attackers used a fake podcast invitation to lure the target,
eventually sending a malicious link through DocSend and Google Drive.
The final payload, delivered via a zip file,
included the Blacksmith toolset and the Anvil Echo PowerShell Trojan
designed for intelligence collection. TA453, which overlaps with groups like Microsoft's
Mint Sandstorm and Mandiant's APT42, uses sophisticated social engineering tactics to
build trust with targets before delivering malware. Their advanced toolset,
Anvil Echo, consolidates previous malware capabilities into a single script,
highlighting the group's ongoing efforts to refine their cyber espionage techniques
in support of Iranian government interests. A fake podcast is nothing sacred.
Coming up after the break,
Hariya Latvi, CEO of DuckDuckGoose,
discusses the increasing problem of deepfakes in cybersecurity.
Stay with us. We know that real-time visibility is critical for security, but when it comes to our GRC programs, we rely on point-in-time checks.
But get this, more than 8,000 companies like Atlassian and Quora have continuous visibility into their controls with Vanta.
Here's the gist. Vanta brings automation to evidence collection across 30 frameworks, like SOC 2 and ISO 27001.
They also centralize key workflows like policies, access reviews, and reporting,
and helps you get security questionnaires done five times faster with AI.
Now that's a new way to GRC. Get $1,000 off Vanta when you go to vanta.com slash cyber. That's vanta.com slash
cyber for $1,000 off. And now a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover
they've already been breached.
Protect your executives and their families
24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
Paria Latfi is CEO of DuckDuckGoose, and I recently caught up with her to discuss the increasing problem of deepfakes in cybersecurity.
So deepfakes, for those who might be new to the term,
are AI-generated or AI-man AI manipulated videos, images, or audio.
And those are almost indistinguishable from real content. We're actually talking about technology
that can make it look like anyone saying anything and even doing things that they never actually
have done before. And this is not just about fun anymore. We are seeing deepfakes being used for, for example,
manipulation of elections, spreading disinformation, and even target businesses, but also individuals.
In fact, the World Economic Forum recently ranked disinformation, including deepfakes,
as one of the top global risks for 2024. And that's how serious it has become.
for 2024. And that's how serious it has become. As we look into the use cases of DeepFix, it is really important to also emphasize the variety and the dangers of these applications. For example,
picture this. It is election season and suddenly a video pops up showing a political leader saying
something outrageous. It looks real, but it's completely fake.
And of course, having the social media and speed to spread content nowadays,
it is perfectly possible to even manipulate people's opinion
regarding that specific election case.
And this is not just hypothetical.
It's happening, in fact, nowadays.
And it is, as I said, a way to, for example, manipulate elections.
There is also the media use
case. So imagine you turn on your TV and the news and find out about a story that is based on a
video or an audio clip that might never actually have happened in reality. And this kind of fake
news, of course, can spread confusion and distrust regarding digital content.
And what, of course, we're also seeing and investigating at DuckDuckGoose is the use of deepfake for financial scams.
We have seen cases where, for example, companies, CEOs,
or executive-level people appear to instruct staff to transfer funds,
but also information, of course, sharing information which might be sensitive.
But again, it's all made up by an artificial intelligence network. What also is really alarming is that we have seen an increase up to 10% of deepfake selfies being pictures of
people that are being used for, for example, biometric verifications to open a bank account,
let's say, or if you have a bank
account to log into the bank account by using a deepfake of an existing people or a deepfake of a
synthetic identity, so a fabricated person that is completely, of course, non-existent, but has an
online life, let's say. So this means that someone's digital identity could be completely fabricated.
And in that case, banks, but also other financial institutions, are at a high risk of financial fraud with all the consequences that they might suffer from afterwards.
Is it fair to say that the availability of the technology is such that there's a low barrier of entry for people to be able to generate extremely convincing images and videos?
Yes, absolutely.
In the last couple of years, we have seen the accessibility,
but also the quality of deepfake generation tools increasing tremendously.
And that also has caused kind of a cat and mouse game between deep fake generation technologies out
there and the detection capabilities that are being provided by mainly companies and research
groups that are working on this issue. Nowadays, it is even possible that you use a short video of
someone presenting their face in that video to generate a completely reliable looking and even live deep fake of that person.
I was having a discussion with my good friend and also co-founder Mark recently, and he
posed this question to me, which really made me think deeply.
His question was, nowadays in this deep fake area or in this digitalfake area, in this digital deepfake area, if you present your face in interviews, in, for example, YouTube videos, or even in video calls that we are having nowadays, all of us, we tremendously rely on tools such as Zoom or Microsoft Teams to have conversations with, for example, our business partners.
for example, our business partners.
In that case, if you present your face anywhere,
does your face belong to you?
Since we know that everyone can access that information,
your biometrics in that case,
and use that to generate a deep fake of you.
And this really got me thinking because that's true.
I have a couple of videos online.
I am in Teams and Zoom meetings continuously, actually.
So if someone would make a recording of my face or even use a single picture, that's enough in some cases as well. They could be
able to generate a deep take of me and, for example, join a meeting instead of me. So it's
actually tremendously easy these days to generate a deep take of anyone that you would like to,
even with a single picture. So what are the technologies that are available?
How are folks coming at this problem
of being able to detect
whether or not something is a deepfake?
That's an excellent question.
So at DuckDuckGoose,
we really stand for raising awareness
regarding deepfakes.
That's also why we say that being skeptical
and being aware that not anything
or not everything that we see or hear digitally
nowadays is necessarily authentic. In order to be able to protect ourselves from, for example,
deep fake dangers, firstly, we need to verify the source of any content that we see as an individual.
So let's say that you see a video or an image or an audio clip, the first thing that you should do nowadays is to try to verify the source of it. So take a moment to trace back to its source and its
origin and see where it comes from. And if you cannot verify the source, it's better, of course,
to be cautious. The second thing that we can do regarding defect detection as an individual is,
of course, let's say you're on a video call with someone and you feel that something is not all right and something just feels off. You can ask the person to, for example,
perform a real-time action such as turning their face for more than 90 degrees to see if there are
any strange things happening in their face. And if that's the case, there's a chance that the person
is using a deep fake.
And one more thing that, for example, an individual could do is looking in the eyes.
Let's say that you see a video of a person that might be non-existing, so a fully generated person in a video or in a picture. One thing you could do to confirm if it's a deep fake or not is to
look into, for example, the reflections in the eyes, because those hold details that should be in someone's eyes
if the person is a real person.
And if you notice that the eyes of this person in the video
look a bit dull or lifeless, it is worth being suspicious.
And of course, I can imagine that these tips and tricks
might help in individual cases, for example,
for a person to be able to verify the content. But what happens cases, for example, for a person to be able to verify the content.
But what happens if, for example, companies such as banks, let's say,
are having millions and millions of images coming through their systems
for identification and authentication purposes, being, for example, people's facial biometrics?
In that case, it is really needed to also take a look at the technological landscape to see
what is out there that can be used for deepfake detection. And since deepfakes are becoming
better and better, the need to have generalizable tools, such as AI technology as well, but AI
technology that is able to look beyond its training dataset, meaning that if you have deepfake types A, B, and C
that you have trained your network on to recognize them,
but there's all of a sudden the next day
there is a deepfake type D coming out,
it is important to keep up with the developments
and make your technology in such a way
that it's also able to uncover those types of deepfakes.
And that is the biggest value
that a user of a deepfake detection
technology can get from their detection tools that are being used for their protection purposes.
How reliable are the deepfake detection tools these days? So where do we stand? What's the
state of the art? Great question. So as I was explaining, the quality of a detection technology, which is mainly nowadays based on AI technologies as well, mainly depends on the type of data that you have used to train your systems and make them familiarize with deepfake actually to make it able to uncover those defects later on.
later on. What we see is that if you have a proper detection technology, again, based on AI,
and you test that under lab circumstances, the performance might go up to 95 and even up to 99% in some cases. But that, again, doesn't say much about how good this technology is when you
use data from the wild as an input. That's something that at DuckDuckCoos also plays a big role.
So we have seen that it is really crucial
to basically know the attack landscape out there
for your users and the partners
that are making use of our technology,
relying on our technology.
And that way we are making our technology
the best suitable for those type of data.
And of course, as I was saying, the quality of your data sets that you use for training is crucial for your quality of detection later on.
But as I was also explaining, the deepfake generation landscape is developing tremendously fast.
So keeping up in this cat and mouse game when it comes to deepfake detection technologies
is becoming more and more challenging. That's why we see that the importance of more generalizable
technologies is increasing at the moment to make sure that your performance keeps up when there
are new types of deepfake being used for attacks. So generally speaking, to summarize my answers,
we hear a lot that detection technologies are capable of deepfake detection up to 99%.
But the point is to also make it happen when you get data as input that is unseen for your detection technology.
And that's what we are heavily investing in at DuckDuckGoose.
That's Pari Yalati from DuckDuckGoose.
Cyber threats are evolving every second, and staying ahead is more than just a challenge. It's a necessity.
Thank you. ensuring your organization runs smoothly and securely. Visit ThreatLocker.com today to see how a default-deny approach can keep your company safe and compliant.
And finally, our philately desk tells us the sad tale of a pair of postal mail thieves in Santa Maria Valley, California, who thought they were scoring more checks and credit cards,
but instead nabbed a package containing an Apple Air
Tag. A clever and fed-up resident, tired of her mail being stolen, decided to track down the
culprits herself by mailing the Air Tag to her own address. When the device inevitably disappeared,
she promptly alerted the Santa Barbara County Sheriff's Office, who followed the AirTag's trail right to the
unsuspecting crooks. The thieves were found with a treasure trove of stolen goods, leading to
charges of identity theft and fraud. The Sheriff's Office praised the victims' ingenuity, while the
thieves likely learned crime really doesn't pay, especially when your loot comes with a GPS tracker.
And that's the Cyber Wire.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and
review in your favorite podcast app. Please also fill out the survey in the show notes or send an
email to cyberwire at n2k.com. We're privileged that N2K Cyber Wire is part of the daily routine
of the most influential leaders and operators in the public and private sector, from the Fortune
500 to many of the world's preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies to optimize your biggest investment, your people.
We make you smarter about your teams while making your teams smarter.
Learn how at n2k.com.
This episode was produced by Liz Stokes.
Our mixer is Trey Hester, with original music and sound design by Elliot Peltzman.
Our executive producer is Jennifer Iben.
Our executive editor is Brandon Parr.
Simone Petrella is our president.
Peter Kilby is our publisher.
And I'm Dave Bittner.
Thanks for listening.
We'll see you back here tomorrow. Thank you. is hard. Domo is easy. Learn more at ai.domo.com. That's ai.domo.com.