CyberWire Daily - FCC around and find out.
Episode Date: February 6, 2025

Chaos and security concerns continue in Washington. Spanish authorities arrest a man suspected of hacking NATO, the UN, and the US Army. A major U.S. hiring platform exposes millions of resumes. Another British engineering firm suffers a cyberattack. Cisco patches multiple vulnerabilities. Cybercriminals exploit SVG files in phishing attacks. The SparkCat SDK targets cryptocurrency via Android and iOS apps. CISA directs federal agencies to patch a high-severity Linux kernel flaw. Thailand leaves scamming syndicates in the dark. Positive trends in the fight against ransomware. Our guest, Cliff Crosland, CEO and co-founder at Scanner.dev, discusses the evolution of security data lakes and the "bring your own" model for security tools. Don't eff with the FCC.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Today on our Industry Voices segment, guest Cliff Crosland, CEO and co-founder at Scanner.dev, discusses the evolution of security data lakes and the "bring your own" model for security tools. For some additional details, check out their blog on "Security Data Lakes: A New Tool for Threat Hunting, Detection & Response, and GenAI-Powered Analysis."

Selected Reading
Musk's DOGE agents access sensitive personnel data, alarming security officials (Washington Post)
Union groups sue Treasury over giving DOGE access to sensitive data (The Record)
Hacker Who Targeted NATO, US Army Arrested in Spain (SecurityWeek)
Hiring platform serves users raw with 5.4 million CVs exposed (Cybernews)
IMI becomes the latest British engineering firm to be hacked (TechCrunch)
Cisco Patches Critical Vulnerabilities in Enterprise Security Product (SecurityWeek)
Scalable Vector Graphics files pose a novel phishing threat (Sophos News)
Crypto-stealing apps found in Apple App Store for the first time (Bleeping Computer)
Ransomware payments dropped in 2024 as victims refused to pay hackers (TechCrunch)
CISA orders agencies to patch Linux kernel bug exploited in attacks (Bleeping Computer)
Thailand cuts power supply to Myanmar scam hubs (The Record)
Robocallers posing as FCC fraud prevention team call FCC staff (Bleeping Computer)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network, powered by N2K.
Hey everybody, Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try
DeleteMe. I have to say, DeleteMe is a game changer. Within days of signing up, they started
removing my personal information from hundreds of data brokers. I finally have peace of mind,
knowing my data privacy is protected. DeleteMe's team does all the work for you, with detailed
reports, so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for DeleteMe.
Now at a special discount for our listeners, today get 20% off your DeleteMe plan when you go to joindeleteme.com/n2k and use promo code n2k at checkout.
The only way to get 20% off is to go to joindeleteme.com/n2k and enter code n2k at checkout.
That's joindeleteme.com/n2k, code n2k.

Chaos and security concerns continue in Washington.
Spanish authorities arrest a man suspected of hacking NATO, the UN, and the US Army.
A major US hiring platform exposes millions of resumes. Another British engineering
firm suffers a cyber attack. Cisco patches multiple vulnerabilities. Criminals exploit
SVG files in phishing attacks. SparkCat SDK targets cryptocurrency via Android and iOS
apps. CISA directs federal agencies to patch a high severity Linux kernel flaw.
Thailand leaves scamming syndicates in the dark.
Positive trends in the fight against ransomware.
Our guest is Cliff Crosland, CEO and co-founder at Scanner.dev,
discussing the evolution of security data lakes and the bring-your-own model for security tools.
And don't F with the FCC.
It's Thursday, February 6th, 2025. I'm Dave Bittner and this is your CyberWire
Intel Briefing. Thanks for joining us here once again.
Always great to have you with us.
Elon Musk's Department of Government Efficiency, DOGE, has gained access to restricted U.S.
government records on millions of federal employees,
including Treasury and State Department officials in sensitive security roles, the Washington
Post reports.
According to anonymous sources, DOGE's involvement raises concerns about potential misuse of
personnel data amid threats of retaliation against federal workers by Trump administration
officials. The Office of Personnel Management holds sensitive employee data,
including addresses, salaries, and disciplinary records.
DOGE agents, some in their early 20s with ties to Musk's private companies,
were granted administrative access to OPM systems shortly after Trump's inauguration.
This access allows them to install software,
alter records, and potentially transfer data externally.
There is no evidence they have done so,
but officials are alarmed at the risk.
DOGE's arrival has disrupted OPM.
With mass staff reductions planned,
including the removal of key IT and financial executives,
tensions have risen between DOGE agents and career officials, contributing to low morale.
The halt of IT upgrades and DOGE's access to government networks increase security
vulnerabilities, reminiscent of past cyber breaches such as China's 2014 theft of U.S. security clearance records.
Security experts warn that foreign adversaries could exploit the chaos
as DOGE's access extends to Treasury's payment systems, which contain classified expenditure
details. The Senate Intelligence Committee has demanded transparency on DOGE's vetting process and system access.
Meanwhile, a lawsuit challenges OPM's privacy policies, arguing that unencrypted,
government-wide email deployments create security risks. Experts fear that foreign
intelligence services could infiltrate DOGE due to its rapid and opaque hiring process.
Spanish authorities have arrested an 18-year-old suspected hacker for cyberattacks on over 40 organizations, including NATO, the UN, and the US Army.
The suspect allegedly leaked stolen data and managed over 50 cryptocurrency accounts.
Investigators believe he used multiple online aliases, including
Natohub, who claimed breaches on BreachForums. Between June 2024 and January of this year,
Natohub posted 18 times about data breaches, sometimes selling or freely sharing stolen
information. Authorities seized electronic devices during the arrest.
Foh&Boh, a U.S. hiring platform used by major brands like KFC, Taco Bell, and Nordstrom,
exposed millions of job applicants' resumes due to an unsecured AWS bucket.
The leaked data included full names, contact details, birth information, employment history,
education, and social media links.
Cybersecurity researchers warn that the breach increases the risk of identity theft, allowing
criminals to create fraudulent accounts or launch targeted phishing scams.
Attackers could impersonate past employers to trick victims into revealing financial
details or installing malware.
Scammers might also exploit financially vulnerable individuals with deceptive job offers.
The exposed data set contained 5.4 million files, but after multiple warnings, the company secured the database.
British engineering firm IMI has disclosed a cybersecurity incident shortly after rival
Smiths Group reported a similar attack.
IMI, which designs industrial automation and transport products, confirmed unauthorized
access to its systems in a London stock exchange filing.
The company has engaged cybersecurity experts to investigate and
contain the breach. IMI declined to comment on potential data exfiltration.
Meanwhile, Smiths Group is also working to recover from an attack, with neither
company providing a recovery timeline. Cisco has released patches for multiple
vulnerabilities, including two critical flaws in its Identity
Services Engine.
These bugs could allow authenticated attackers to execute arbitrary commands and tamper with
device configurations.
Patches are available and Cisco says there are no workarounds. Additionally, Cisco warned of high-severity SNMP vulnerabilities in IOS,
IOS XE, and IOS XR, which could cause denial-of-service attacks. Patches are expected by
March. Medium severity flaws affecting various Cisco products were also addressed. No active
exploits have been reported. Researchers at Sophos say cybercriminals are exploiting scalable vector graphics files in phishing attacks
to bypass email security filters.
SVG files, unlike typical image formats, can contain embedded links and scripts that direct victims to phishing sites. Attackers disguise these files as legal documents, voicemails, or invoices using familiar brands
like DocuSign and Microsoft SharePoint.
Once opened, the file redirects users to fraudulent login pages that steal credentials.
Some attacks also deliver malware or leverage CAPTCHA gates to evade detection.
Researchers identified evolving tactics, including localized phishing pages and embedded keystroke
loggers.
Security experts recommend setting SVG files to open in Notepad instead of a browser and
carefully checking URLs for legitimacy.
Sophos suggests organizations should update email security solutions
to detect malicious SVG attachments and prevent credential theft.
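As a rough illustration of why SVGs slip past image-oriented filters, here is a minimal, hypothetical Python sketch, not Sophos's actual detection logic. The tag and attribute heuristics are invented for illustration; it flags SVG files that embed scripts, hyperlinks, or JavaScript event handlers, the features the phishing campaigns abuse.

```python
# Hypothetical sketch: flag SVG attachments that embed scripts, links, or
# event handlers. Heuristics are illustrative only, not a vendor's ruleset.
import xml.etree.ElementTree as ET

SUSPICIOUS_TAGS = {"script", "a", "foreignObject"}

def svg_is_suspicious(svg_text: str) -> bool:
    """Return True if the SVG contains script/link elements or on* handlers."""
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # drop the XML namespace prefix
        if tag in SUSPICIOUS_TAGS:
            return True
        # onload/onclick attributes can carry embedded JavaScript
        if any(attr.startswith("on") for attr in elem.attrib):
            return True
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'
phish = ('<svg xmlns="http://www.w3.org/2000/svg">'
         '<a href="https://example.com/fake-login"><text>Open voicemail</text></a></svg>')
print(svg_is_suspicious(benign))  # False
print(svg_is_suspicious(phish))   # True
```

A real email security gateway would do far more (rendering, URL reputation, brand detection), but the point stands: SVG is XML, so it must be parsed like a document, not treated like an image.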
A malicious software development kit called SparkCat has been discovered
in Android and iOS apps, stealing cryptocurrency wallet recovery phrases
using optical
character recognition. The malware hidden in SDKs named Spark, GZip, Google App SDK
and Stat, extracts sensitive text from images on devices, enabling attackers to
access crypto wallets. On Google Play alone, the infected apps were downloaded
over 242,000 times, with some still available on both
Google Play and the App Store.
Kaspersky identified 18 Android and 10 iOS infected apps,
with attackers using a Rust-based module for
communication with the command and control servers.
Users are advised to uninstall affected apps immediately, scan devices with antivirus software,
and avoid storing recovery phrases in screenshots.
Instead, use offline, encrypted storage for security.
Google and Apple have yet to respond.
CISA has ordered U.S. federal agencies to patch a high-severity Linux kernel flaw within
three weeks due to active exploitation.
The vulnerability found in the USB Video Class Driver enables privilege escalation on unpatched
devices.
Google patched it for Android users, warning of limited targeted attacks. Security experts believe forensic tools may be exploiting this flaw.
CISA also flagged critical vulnerabilities in Microsoft .NET and Apache OFBiz,
urging manufacturers to enhance network forensic visibility to aid cyber defense.
On Wednesday, Thailand took a decisive step against online scamming syndicates
by cutting off electricity, fuel, and internet to key scam hubs in Myanmar.
These enclaves, run by organized crime groups, have become centers for cyber fraud
targeting victims worldwide.
The move follows pressure from China's Assistant Minister of Public Security, Liu Zhongyi,
who urged Thailand to intensify its crackdown.
Liu revealed that 36 Chinese-run scam operations in Myanmar employ over 100,000 workers, many
trafficked and forced into fraud. The high-profile rescue of Chinese actor Wang Xing from one of these compounds heightened
scrutiny.
Thailand's Prime Minister, Paetongtarn Shinawatra, defended the action, citing the scams' $2
million daily impact on Thailand's economy.
The crackdown aligns with her visit to China, where both nations pledged stronger law enforcement
cooperation to combat cross-border cybercrime.
At the start of 2024, ransomware groups seemed as powerful as ever, pulling in hundreds of
millions of dollars in extortion payments.
But as the year progressed, something shifted.
Law enforcement agencies, cybersecurity firms, and victims themselves began pushing back harder than ever before.
By year's end, ransomware payments had dropped 35% from the previous year, marking the first significant decline in years.
According to research from Chainalysis, it wasn't just government action that slowed ransomware operators.
Victims became more resilient, with more organizations refusing to pay and instead relying on backups
to recover their data.
Ransomware gangs adapted, working faster than ever, sometimes beginning negotiations within
hours of an attack.
But even with these tactics, the market fractured. The collapse of LockBit
and BlackCat, two of the biggest ransomware groups, left a void that no single group was
able to fill. New players emerged. Groups like Akira and Fog stepped into the spotlight,
specializing in exploiting VPN vulnerabilities to infiltrate corporate networks. Meanwhile,
Iranian-linked ransomware strains rebranded and resurfaced, proving that attackers
were not giving up; they were just adapting.
Financially, ransomware groups faced another hurdle, moving their money.
In the past, they relied on cryptocurrency mixers to launder their earnings, but after
sanctions and takedowns of services like Tornado Cash,
they turned to cross-chain bridges and centralized exchanges instead.
However, even this became riskier as governments cracked down on crypto platforms with loose
know-your-customer policies.
Perhaps the most telling sign of ransomware's changing landscape was LockBit's desperate attempt to stay relevant. After being hit by Operation Cronos,
the once-dominant group resorted to reposting old victims, inflating its numbers in a bid to maintain its reputation.
Despite the decline in payments, ransomware is far from defeated.
The criminals behind these attacks are still out there learning, adapting, and searching for new ways to evade security measures. But for the
first time in years, defenders seem to have the upper hand. And that's worth
celebrating.
Coming up after the break, our guest, Cliff Crosland, CEO and co-founder at Scanner.dev,
discusses the evolution of security data lakes.
And don't F with the FCC.
Stick around. Cyber threats are evolving every second and staying ahead is more than just a challenge,
it's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted
by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total
control, stopping unauthorized applications, securing sensitive data, and ensuring your
organization runs smoothly and compliant.
Do you know the status of your compliance controls right now? Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks.
But get this, more than 8,000 companies like Atlassian and Quora have continuous visibility
into their controls with Vanta. Here's the gist, Vanta brings automation to evidence collection
across 30 frameworks, like SOC 2 and ISO 27001. They also centralize key workflows like policies,
access reviews, and reporting, and help you get security questionnaires done five times faster with
AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to vanta.com slash cyber. That's vanta.com slash cyber for a thousand dollars off.
Cliff Crosland is CEO and co-founder at Scanner.dev. In today's sponsored Industry Voices segment, we discuss the evolution of security data
lakes and the bring your own model for security tools.
So, a data lake is an evolution beyond a data warehouse, and they're all these funny terms
for big data storage areas. But a data lake is a strategy for taking data of many different formats.
A data warehouse is sort of the first step in this direction
where the idea was to have tons and tons of data that matched a really strict structure.
But data lakes, the idea there is just to have a storage repository
of lots and lots
of messy data of many different formats and many different structures.
And the idea is that you could just pour tons of data into this lake and then make sense
of it afterward and then analyze it afterward.
So it just makes it very easy to get lots of data in.
And then the challenge becomes trying to get value back out
again and query it and get a sense for what's going on.
And so data lakes are starting to become more and more
popular, especially in security, because there's just
so many different kinds of data to collect in security.
And it's much, much easier, more scalable, and just very cheap
compared to other tools to store data in a data lake.
And just to get really specific about data lakes,
people tend to just store lots and lots of data
in cloud storage, whether that's like Amazon S3 or Azure Blob
Storage or GCP storage.
There are a couple of different places where people store it,
but it tends to be just really big cloud storage
lakes of data.
And it's very helpful once your data reaches massive scale
to afford and to scale up with all of the massive amounts
of data volume that come in now.
So we think, anyways, I get into more details there,
but we think that because there's so much data now,
and if you're operating in cloud service,
there's so many different data sources that really,
data lakes are becoming a more and more important part
of a security team's detection and response strategy.
Just because it really allows you to get coverage on massive amounts of data volume
that becomes too expensive in the traditional way that logs and data are stored.
So that's just like a whirlwind tour of data lakes. But yeah, it's a rough idea.
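The "pour it in now, make sense of it later" idea can be sketched in a few lines. This is a hypothetical illustration only: the partition scheme, source names, and fields are invented, and a local temp directory stands in for cloud object storage like S3.

```python
# Hypothetical sketch: "pour" heterogeneous JSON log events into a data-lake
# style layout (Hive-style partitions), simulated on local disk. Bucket and
# field names are invented for illustration.
import json
import tempfile
from pathlib import Path

def write_event(lake_root: Path, source: str, event: dict) -> Path:
    """Append an event, raw and unmodeled, under source=<x>/dt=<date>/ partitions."""
    dt = event["timestamp"][:10]  # YYYY-MM-DD
    part = lake_root / f"source={source}" / f"dt={dt}"
    part.mkdir(parents=True, exist_ok=True)
    out = part / "events.ndjson"
    with out.open("a") as f:
        f.write(json.dumps(event) + "\n")
    return out

lake = Path(tempfile.mkdtemp()) / "security-lake"
# Two sources with completely different shapes land in the same lake:
write_event(lake, "waf", {"timestamp": "2025-02-06T10:00:00Z", "action": "block"})
write_event(lake, "vpcflow", {"timestamp": "2025-02-06T10:01:00Z", "bytes": 512})
print(sorted(p.relative_to(lake).as_posix() for p in lake.rglob("*.ndjson")))
# ['source=vpcflow/dt=2025-02-06/events.ndjson', 'source=waf/dt=2025-02-06/events.ndjson']
```

Nothing forces the two sources into a shared schema up front; the structure lives in the path layout, and query tools impose meaning at read time.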
So I hear folks talking about this bring your own model.
Can you describe that for us?
What does that entail?
Yeah, I think it's a really powerful way
that software is being deployed now.
And I really think that this is the future
of how more and more tools are going to work.
So back in the good old days,
the operational approach
was to send all of your data off to a vendor.
If you're using a SIEM tool,
a Security Information and Event Management tool,
oftentimes what that looks like is shipping lots of logs
over to a third party.
And it can be expensive to transfer, etc.
But now, the way that things are moving is that you will store all of your own data in
your own storage buckets, in your own cloud storage, and then you will plug in many different
vendor tools into that data, give them permission to analyze it in different ways and use tools
for what they're strong at.
It's really cool to bring your own storage, even bring your own cloud compute.
You can basically say to a vendor, please deploy your software into my environment.
There are lots of cool tools doing this, whether it's security or database related.
There are many different companies using this approach.
But you kind of get the power of SaaS products where they can get deployed frequently and
updated all the time and it's a really good user experience.
But you're letting the vendor run everything in your own cloud environment, which means you keep full data custody.
You can get perfect visibility into what's going on
and how much compute you're using,
how much storage you're using.
You can drive costs down.
It's a pretty powerful new approach for security teams
and data analysis in general. So, and I think like as, you know, AI applications
and use cases start to explode,
you're starting to see that happen too.
So it's pretty exciting.
There's just a lot of new tools
that are deploying into your cloud, into your storage.
And instead of getting locked into a vendor,
you get to maintain full custody of that data.
It's a cool new pattern.
Can we talk about the scalability here, the possibilities for
growing beyond your expectations if need be?
Yes. So I think some of the interesting things that,
the interesting trends with security log data
is as people are operating more and more services in the cloud,
that they're operating more and more SaaS tools,
the traditional log and traditional data management
tools get to be extremely expensive.
So the beauty of data lakes is that cloud storage is very cheap and can scale forever.
As long as you can apply tools and smart ways to organize the data and make it fast to access,
it can really drive down costs and make it possible for you to have a lot of visibility
into historical data. So yeah, a lot of tools that people tended to use
up until a couple years ago,
you really can only retain like a couple of weeks
or like maybe a couple of months of logs maximum.
And then you would just kind of dump the rest of your logs
into cloud storage just for compliance purposes.
You have no way to get value out of them.
But because there's this new model of being
able to store your logs at scale into massively scalable cloud
storage at low cost, and there are new really cool data lake
tools to analyze that data and get you answers quickly,
yeah, you can really get value out of this historical data. So instead of
spending millions of dollars on a SIEM tool, you might spend ten times
less now by taking a data lake approach. So yeah, it's really cool what
is like the scalability that data lakes can achieve, which I think is a big reason why, yeah, big companies.
So Snowflake, Databricks, et cetera, lots of companies,
whether in security or in data analysis,
are really excited about data lakes
and all of the applications they have.
I would imagine it cuts down on redundancy
quite a bit as well, right?
Because as you say, you can have different,
I'll just call them plugins,
looking at this big lake full of data,
you don't have to duplicate that data to be analyzed
from platform A or program B,
it's all there and you can send things to analyze it
as need be.
Yes, absolutely. And I think, so one thing we've seen too is,
in the past you'd have different teams at the company using multiple tools,
shipping the same data off to those multiple tools, duplicating the data, as you were saying.
But also, if you have different divisions, different departments across the company,
they would themselves also ship data off to many different tools, which was a huge problem.
It would just be duplicating the same massive data flows from all of these different log
sources to tons and tons of different tools.
So like you'd have the security team looking at one set of the data and one set of tools.
You'd have the application developers who are trying to debug things and get like health
metrics on the application.
They're using a totally different set of tools and shipping the same data off there.
But the really cool thing about the data lake is yes, there's a centralized place and then
you can plug in many different tools to go and analyze that data for different use cases.
And then it's fun to see teams become, once they start to build their own data lake,
they become a resource across the entire company. And then everyone starts to pile in and say,
this is really cool. It kind of breaks down the silos and wow, the security team has this kind of
data for like the web application firewall.
That's actually really helpful for debugging
this other problem for our infrastructure team.
And because there is the centralized place
in the data lake, everyone can jump in and analyze that.
And they're not replicating this cost
by shipping the data off to 10 you know, 10 different vendors.
There's just one location and they can just use different vendor tools to
analyze that same data. Yeah, so it's really cool to see. Oftentimes, what
I've seen is one team, like at the security team, or maybe like the
business intelligence team start to use a data lake and then lots of other teams get excited
and everyone starts to break down silos
and start to develop really cool use cases
for the same data sets and share them across the company.
So that's another really powerful thing about data lakes.
Yeah, interesting.
Well, help me understand how data lakes handle different types of data.
Am I understanding this has evolved over time?
Yes. So it's really interesting. The beauty of data lakes is that you can store lots of different
kinds of data in one place. The challenge becomes trying to get value out
of really messy data of different kinds,
and different tools have arisen to tackle
the messiness of data lake data.
So you might have web application firewall logs
coming in that the security team really cares about,
and network flow logs.
And they have very different formats.
And in trying to build data lake tools to make that useful,
there has been a lot of progress and a nice evolution.
So when the data lakes were originally introduced,
it was pretty strict.
And there was a lot of upfront work to transform.
You had to do a lot of work every time
you added a new data source to transform the data
to fit an appropriate schema.
And otherwise, the data would be really slow to access.
But over time, you start to see cool new innovations.
Apache Iceberg is a big one.
Amazon has introduced a new product there with S3 tables to really natively support
Apache Iceberg. And the cool thing there is that it's much more easy to evolve the schemas
and the data structures over time and edit them. So I think where things are heading
in our opinion is the tools will get smarter and smarter and better about just handling the messiness and the structure for you.
It's getting easier. We still feel like it's a little too challenging. I think there are cool things that we really care about to make things easier and more schema-less, et cetera.
But yeah, it's really fun to watch as new tools appear
on the scene to make data lakes easier to use.
We see the messiness of data getting handled
more and more intelligently, making it easier
and easier for people to adopt.
So yeah, that's definitely the way things are heading.
And we hope, like, eventually, I think
it's really cool just to see what people are doing with LLMs
as well with generative AI.
It can do a really good job at helping you figure out
what the schema should be and just kind of taking
on the annoying transformation work every time you
add a new data source.
Yeah, it's really neat to see
like how easy these data lake tools are getting.
And we think the future will be,
you'll just like point a tool at your messy data
and it will just totally make perfect sense of it all
for you.
Right.
That's where things will eventually get to.
But yeah, it's heading that way slowly.
Yeah.
You touched on this idea that the bring your own model
reduces vendor lock-in.
Can we dig into that a little bit?
What's the advantage here?
Yeah, so the beauty here is that with other logging platforms
in the past, in particular with SIEM tools,
the idea would be to ship your logs off to a tool
that you maybe were running internally,
or maybe you were shipping them off to a third party.
And then that data is just locked in
and into that specific tool and that specific format
that is very tightly coupled to that vendor.
And so that can be nice if there's
like a
strong vendor ecosystem. If basically all of the features that you want are
handled by that vendor, that's fine. But then what that also means is that the
vendor can increase their prices and you're stuck there. The beauty of
Data Lakes is that you bring the data, you can bring the compute as well.
And the idea is that the vendor will
supply tools you can use to analyze that,
and you're not locked into any of those vendors.
You could drop one, you could pick up another.
There are a lot of really cool open formats
that people are using for data lake files and the catalogs
that people use to track what data is in the data lake.
So we really think the direction that things should go in is that you should have a lot
of flexibility and be able to select from any different vendors that can all analyze
the same data set without getting stuck in one forever.
And you might love your vendor at first,
and then over the years, they kind of stop innovating,
but then it's very hard to move off.
You may have built a lot of dashboards and queries
and detection rules there,
but with the data lake approach,
there's just way more flexibility
and this really cool notion of having full data custody
that gives you freedom to pick and choose as you want.
So what are your recommendations then
for folks who wanna look into this,
who wanna explore the possibility for their own organization?
What's a good place to start?
Yeah, definitely.
So I think that there are a couple of tools
that you should definitely look at as you get started.
If you're in AWS, that would be Amazon's Athena.
In Google, that would be BigQuery.
And for Azure, there is the Azure Data Lake suite of tools.
That's probably the best place to start playing with things.
Then if you want to really get deep into security,
there are lots of cool security data lake related tools
to help you pull data into your data lake
or to structure it well or to search it well.
Just a whole suite of things that
are centered around the data lake technologies that
exist out there.
So I think I would really recommend that people take
a look at Apache Iceberg. This is probably one of the cool innovations that's happening now.
We still think Iceberg can be a little bit too difficult to use because it still is a little
too strict, but it's definitely a step in the right direction
of making the data lake really flexible.
The place to start would be with the different cloud providers'
data-lake-specific tools.
If you want to get started with getting security data
into those tools, there are plenty of different services and startups
and companies that will help you load logs
from different locations into your data lake.
But you could start with just a few log sources, maybe log
sources that are really easy to get into your own cloud
provider's data lake, like the cloud audit logs,
or maybe your identity
provider logs, and then just play with the different data lake tools that exist out there
to see what suits your use case as well.
Who has the best detections or whose structure works best for you, who's easiest to use.
There are lots of cool things out there.
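The kind of partition-aware lookup those data lake query tools perform can be sketched roughly as follows. This is an invented, minimal illustration, not any vendor's API: the paths, field names, and `query` helper are all hypothetical, and a temp directory again stands in for cloud storage.

```python
# Hypothetical sketch: a minimal "data lake query" over date-partitioned NDJSON
# logs -- prune to the partition you need, then filter events. All paths and
# field names are invented for illustration.
import json
import tempfile
from pathlib import Path

# Build a tiny identity-provider log lake with two daily partitions.
lake = Path(tempfile.mkdtemp()) / "lake" / "source=idp"
for day, user in [("2025-02-05", "alice"), ("2025-02-06", "mallory")]:
    part = lake / f"dt={day}"
    part.mkdir(parents=True)
    (part / "events.ndjson").write_text(
        json.dumps({"user": user, "event": "login_failed"}) + "\n")

def query(root: Path, dt: str, predicate) -> list[dict]:
    """Read only the dt=<date> partition (partition pruning), then filter."""
    hits = []
    for f in (root / f"dt={dt}").glob("*.ndjson"):
        for line in f.read_text().splitlines():
            event = json.loads(line)
            if predicate(event):
                hits.append(event)
    return hits

print(query(lake, "2025-02-06", lambda e: e["event"] == "login_failed"))
# [{'user': 'mallory', 'event': 'login_failed'}]
```

Engines like Athena or BigQuery do the same pruning against partition metadata, which is why a well-partitioned lake stays cheap to query even at multi-year retention.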
That's Cliff Crosland, CEO and co-founder at Scanner.dev.
If you'd like additional details, we have a link
to their blog on Security Data Lakes in our show notes. And now, a message from our sponsor Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue
to rise, with an 18% year-over-year increase in ransomware attacks and a record $75 million
payout in 2024.
These traditional security tools expand your attack surface with public-facing IPs that
are exploited by bad actors more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust plus AI stops attackers by hiding your attack
surface, making apps and IPs invisible, eliminating lateral movement, connecting users only to
specific apps, not the entire network, continuously verifying every request based on identity
and context, simplifying security management with AI-powered automation, and detecting threats
using AI to analyze over 500 billion daily transactions.
Hackers can't attack what they can't see.
Protect your organization with Zscaler Zero Trust and AI.
Learn more at zscaler.com slash security. And finally, our FCC around and find out desk tells us the FCC has proposed a $4.5 million
fine against voice-over-IP provider Telnyx for allegedly letting scammers impersonate a fictitious
FCC fraud prevention team, which, spoiler alert, does not exist.
The MarioCop robocallers, yes, that's what they called themselves, made just under 1,800
fake FCC calls in two days, even targeting FCC staff and their families.
Their calls threatened victims with jail time unless they coughed up $1,000 in Google gift
cards, because nothing says government fine like digital monopoly money.
The FCC blames Telnyx for lax customer verification, claiming they failed to do proper know-your-customer
checks. Telnyx, however, fired back, calling the FCC's accusations factually
mistaken and insisting they went above and beyond compliance rules. While the
fine looms, one thing is clear: scammers will scam, the FCC will fine, and nobody
should ever pay government fees in gift cards.
And that's the CyberWire. For links to all of today's stories, check out our daily briefing
at the cyberwire.com. We'd love to know what you think of this podcast. Your feedback ensures
we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the show notes or send an email to
cyberwire at n2k.com.
N2K's senior producer is Alice Carruth.
Our CyberWire producer is Liz Stokes.
We're mixed by Trey Hester with original music and sound design by Elliot Peltzman.
Our executive producer is Jennifer Iben.
Peter Kilpe is our publisher and I'm Dave Bittner.
Thanks for listening, we'll see you back here tomorrow.
