CyberWire Daily - Treasury's offensive in financial defense.
Episode Date: May 10, 2024

Project Fortress looks to protect the US financial system. News from San Francisco as RSA Conference winds down. Dell warns customers of compromised data. Google updates Chrome after a zero-day is exploited in the wild. Colleges in Quebec are disrupted by a cyberattack. CopyCop uses generative AI for misinformation. The FBI looks to snag members of Scattered Spider. Betsy Carmelite, Principal at Booz Allen, shares our final Woman on the Street today from the 2024 RSA Conference; N2K's Brandon Karpf caught up with Betsy to share insights. Guest Deepen Desai, Chief Security Officer at Zscaler, joins us to offer some highlights on their AI security report. A solar storm's a-comin'. Our 2024 N2K CyberWire Audience Survey is underway; make your voice heard and get in the running for a $100 Amazon gift card. Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.
Selected Reading Treasury launches ‘Project Fortress,’ an alliance with banks against hackers (CNN Business) Cyberthreat landscape permanently altered by Chinese operations, US officials say (The Record) White House to Push Cybersecurity Standards on Hospitals (Bloomberg) Dell warns of “incident” that may have leaked customers’ personal info (Ars Technica) Google fixes fifth Chrome zero-day exploited in attacks this year (Bleeping Computer) Cyberattack shuts down 4 Quebec CEGEPs, cancelling classes and exams (CBC News) AI-Powered Russian Network Pushes Fake Political News (Infosecurity Magazine) University System of Georgia: 800K exposed in 2023 MOVEit attack (Bleeping Computer) FBI working towards nabbing Scattered Spider hackers, official says (Reuters) Severe solar storm threatens power grids and navigation systems (Financial Post) Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show. Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here’s our media kit. Contact us at cyberwire@n2k.com to request more info. The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network, powered by N2K.
Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life Thank you. JoinDeleteMe.com slash N2K and use promo code N2K at checkout.
The only way to get 20% off is to go to JoinDeleteMe.com slash N2K and enter code N2K at checkout.
That's JoinDeleteMe.com slash N2K, code N2K.
Project Fortress looks to protect the US financial system. News from San Francisco as RSA Conference winds down. Dell warns
customers of compromised data. Google updates Chrome after a zero-day is exploited in the wild.
Colleges in Quebec are disrupted by a cyber attack.
CopyCop uses generative AI for misinformation.
The FBI looks to snag members of Scattered Spider.
Betsy Carmelite, principal at Booz Allen,
shares our final Woman on the Street today
from the 2024 RSA Conference.
Our guest is Deepen Desai, Chief Security Officer at Zscaler,
joining us to offer some highlights on their AI security report.
And a solar storm's a-comin'.
It's Friday, May 10th, 2024. I'm Dave Bittner, and this is your CyberWire Intel briefing.
Happy Friday, everyone. It is great to have you here with us. The U.S. federal government has teamed up with Wall Street to form Project Fortress, a cybersecurity alliance aimed at protecting the U.S. financial system from cyberattacks.
Announced in a letter to bank CEOs by Deputy Treasury Secretary Wally Adeyemo,
the initiative combines defensive strategies such as vulnerability scans and automatic threat feeds with offensive actions including the deployment of Treasury's sanctions team and law enforcement.
This collaboration underscores the heightened cyber threats to the economy and emphasizes consequences for attackers.
The alliance also features an information sharing program to improve threat detection.
Over 800 financial institutions have already joined the initiative,
which offers critical support to both large and smaller financial entities.
Speaking at the RSA Conference, Eric Goldstein of the Cybersecurity and Infrastructure
Security Agency detailed how the U.S. is grappling with an intensified cyber threat landscape,
particularly from a Chinese operation known as Volt Typhoon. This group has expanded beyond
traditional espionage to more disruptive aims against U.S. critical infrastructure,
signaling a permanent shift in cyber warfare tactics.
Although the U.S. has strengthened defenses and resilience,
the persistence and evolving threat from China,
highlighted by both ongoing attacks and potential future tactics, remains a major concern.
Despite some progress in combating these threats,
officials warn that the capabilities and intent of adversaries like China to cause disruption
will continue to pose significant challenges to national security.
Meanwhile, at a tech event sponsored by Bloomberg, Anne Neuberger, Deputy National Security Advisor
for Cyber and Emerging Technology,
announced that the Biden administration plans to set minimum cybersecurity standards for hospitals.
This follows the massive cyber attack on Change Healthcare, a unit of UnitedHealth Group,
which compromised the data of 100 million Americans and disrupted billions in payments.
The breach underlined the vulnerability
of the healthcare sector to cyber threats. Additionally, the administration will offer
free cybersecurity training to 1,400 small rural hospitals to help bolster defenses.
Coming up later in the show, Betsy Carmelite, principal at Booz Allen,
shares our final Woman on the Street from
the 2024 RSA Conference. N2K's Brandon Karpf catches up with Betsy to compare notes.
For years, Dell customers have faced scam calls from fraudsters posing as Dell support,
using personal details like names, addresses, and service tag numbers.
Recently, Dell notified customers of an incident involving a portal breach that compromised customer data. An online ad claimed to sell the information of 49 million Dell customers from 2017 to 2024,
including names, addresses, and hardware details.
Dell advises customers to ignore unsolicited calls and contact Dell support directly if needed.
Google has issued a security update for Chrome
to address the fifth zero-day vulnerability exploited this year.
This high-severity use-after-free vulnerability
affects the visuals component responsible for content rendering.
Discovered by an anonymous researcher, it is confirmed to be actively exploited.
The vulnerability could lead to data leakage, code execution, or crashes.
Updates have been released for various platforms.
A cyber attack has disrupted operations at four colleges in Quebec, affecting 7,000 students by suspending classes and cancelling exams.
The attack targeted the college network's servers, compromising access to Omnivox, the primary digital platform used for academic activities.
Obscene images appeared on the site during logins, leading to a suspension of classes through the end of the week to allow a cybersecurity firm to investigate and address
the breach. As of now, there is no evidence of data leakage, and management aims for classes
to resume by May 13th, with further updates pending. This incident is part of a broader trend of cyberattacks
on educational institutions in Quebec.
The University System of Georgia, USG,
is notifying 800,000 people about a data breach
resulting from the 2023 Clop MOVEit attacks,
which exploited a zero-day vulnerability in the MOVEit Transfer file transfer solution.
The breach exposed sensitive information such as social security numbers,
bank account details, dates of birth, and tax documents.
The affected group likely includes current and former students, staff, and contractors.
USG has partnered with Experian to offer a year of identity protection and fraud detection services,
with a deadline to enroll by July 31st of this year. The incident is part of a global extortion
campaign by the Clop ransomware gang, impacting thousands of organizations and millions of
individuals worldwide. Security researchers from Recorded Future have uncovered a significant Russian disinformation campaign named CopyCop,
which uses generative AI to manipulate and repurpose content from major news outlets to influence Western opinion.
This campaign plagiarizes stories from reputable sources like Al Jazeera and the BBC,
introduces biases, and distributes them through spoofed or fake news websites to promote narratives that benefit Russian interests.
These narratives often involve divisive issues such as the Israel-Hamas conflict and Ukraine,
aiming to sway public opinion and disrupt political unity in the West,
particularly ahead of key elections in the UK and the US.
The operation's sophisticated use of AI highlights the emerging challenges
and threats to democratic societies and media integrity.
The FBI is advancing efforts to charge members of the Scattered Spider criminal gang,
who are predominantly based in the U.S. and Western countries.
The group notably compromised systems of major casino operators like MGM Resorts and Caesars Entertainment,
demanding large ransoms.
Active for over two years, they've targeted a wide range of sectors, including health and financial services. The gang,
known for aggressive tactics and sometimes threatening physical violence, has been
involved in over 100 breaches. The FBI, aided by private security firms, is gathering evidence to
meet the legal standards for charging these individuals. Already, a 19-year-old from Florida has been charged
with more arrests anticipated,
potentially leveraging state and local laws.
Coming up after the break,
our own Brandon Karpf catches up with Betsy Carmelite
from Booz Allen to compare notes about RSA Conference.
Deepen Desai, Chief Security Officer at Zscaler, joins us to offer highlights from their AI security report.
Stay with us.
Transat presents a couple trying to beat the winter blues.
We could try hot yoga.
Too sweaty.
We could go skating.
Too icy.
We could book a vacation.
Like somewhere hot.
Yeah, with pools.
And a spa.
And endless snacks.
Yes!
Yes!
Yes!
With savings of up to 40% on Transat South packages,
it's easy to say, so long to winter.
Visit Transat.com or contact your Marlin travel professional for details.
Conditions apply.
Air Transat. Travel moves us.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs,
we rely on point-in-time checks.
But get this, more than 8,000 companies
like Atlassian and Quora have continuous visibility
into their controls with Vanta.
Here's the gist.
Vanta brings automation to evidence collection across 30
frameworks, like SOC 2 and ISO 27001. They also centralize key workflows like policies, access
reviews, and reporting, and help you get security questionnaires done five times faster with AI. Now that's a new way to GRC. Get $1,000 off Vanta when you go to
vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cybercriminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform secures their personal devices, home networks, and connected lives. Because when executives are compromised at home, your company is at risk. Learn more at blackcloak.io.
Betsy Carmelite is principal at Booz Allen.
And in today's final Woman on the Street report from the 2024 RSA Conference,
Betsy meets up with our own N2K executive editor, Brandon Karpf,
to compare notes on this year's conference.
I am here today on the floor of RSA 2024 with longtime friend of the show,
Betsy Carmelite, principal at Booz Allen Hamilton.
Betsy, thank you for joining us again.
Thanks, Brandon. It's great to be back here at RSA with all of you.
It's always good to see you in person and reconnect.
So let's talk about the show this year and what you're seeing, what you're observing.
What have been the highlights of your time at RSA this week?
So I've been to a few of the track sessions and listening to the conversations around specifically protecting our critical infrastructure.
And as I've mentioned several times before on other segments with all of you,
threat intelligence is my background, so I'm paying close attention to
how we are bridging the private-public gaps in intelligence sharing.
And perhaps a better way of phrasing it is just the improvements in that space and how we're really accelerating and escalating
over communication at all levels of the government
to be sharing that information.
And really emphasizing what I'm hearing,
aside from the tech enablement that's occurring to allow intelligence to be more timely, more impactful, understanding the common threat picture in a more accelerated way, maintaining the human element of that and the criticality of our analysts who have the experience to be able to
understand, is this information that should be in somebody else's hands too? And I'm an analyst and
I need to be curious and critically thinking about things, which funnily enough came up in the
Jason Sudeikis keynote as well, in some of the key lines from his show:
not to be judgmental, but to be curious.
And certainly that's what we always look for in intelligence analysts.
But does this information help the common threat picture?
Should I be sharing that information with somebody other than my immediate circle?
Because somebody probably doesn't know this information that I'm sitting on.
And having those trust relationships to be able to know who to go to to share that information.
I'm glad you brought up trust.
I was just about to ask you, historically, one of the biggest challenges we've faced in threat intelligence and information sharing is that trust. Have you had conversations, have you heard more about how we're going about building
these trusted relationships, trusted partnerships, and actually designing a system that we can have
implicit trust in? I think in terms of designing a system, it's nascent. I still see and I live and I hear of, oh, well, I know somebody over in XYZ
agencies. I'm going to go talk to them or I know somebody who's in that sector and I'm going to
connect that private company to somebody who might need that information, whether it's logs
or threat activity that they're a little bit curious about and not sure what to do with,
and knowing that there are really benefits to sharing that information and not repercussions.
You really want to be, again, over-communicating because one of the other things that I've noticed in some of the sessions
and as I'm talking to people, let's take,
for example, Volt Typhoon.
Topic du jour.
It is all over the place.
Yeah.
And not to belabor a topic du jour, but the part of it that's really struck me is the
awareness of that threat.
Like, we're in this community and we're in these circles.
We know about it.
It's not the espionage threat
that we typically have seen
from that threat actor.
It's causing disruption
to our nation.
Fundamentally preparing
the battle space.
Right, right.
The interesting aspect
I've heard,
especially a number of CISA
representatives here
talk about with Volt Typhoon
is the targeting
of smaller scale
utilities and organizations that don't necessarily have the resources or the knowledge
or the tie-in to these information feeds that they need.
And that's the point. They're unaware of the threat. We all know that the Russia threat has not ended by any means.
But we may all have lived through the last several years of knowing the Russia threat was out there
and affecting our elections,
or whether you grew up in the 80s or in the 90s
and being very clear of who the threat was.
The general public is, in my opinion,
probably vastly unaware of the threat
of long-term persistence in our networks
to disrupt and not just gather information
and what that trickle-down looks like
for the American citizen,
for the small business,
for the private sector companies who don't have the resources to be tracking those threats.
So what do we as professionals in this industry need to be doing better?
So I think, you know, in the world that I work in, it's certainly still expanding the
information sharing circles.
And I don't want to make that sound so kind of vague or basic. I think we talk about
automating our intelligence gathering and production. We still need to have the human analyst who is ready to dive into the criticality of the threat and follow the threads that they're seeing.
I think some analysts, and it's a tendency, I think, of just studying a topic, whether you've studied a research topic in college or whether you've followed a campaign for a long time as an analyst. I think a tendency can come to be like, oh, this is my area.
This is the only thing I'm focused on.
And I kind of don't stray away from the borders of my research topic.
Right. So what you use to inform, or even just your analytic processes
and how you go about performing the research
and collection and the analysis,
share that with your colleagues so that,
hey, I'm sitting in my little topic area.
I need to share that with other people
because it may be helping an analytic process
somewhere else in my team,
within the agency I'm working with, within the
company I'm working with. Just some of the process and operations sharing, I think, is really
critical. But yeah, just the critical thinking around, I'm doing this, I think I should share
it with somebody else, whether it's data, whether it's my intelligence analysis production, whether it's my process.
So really what I've heard you identify is that thing that we can do better is oriented around the human element of the operations and the activities.
Right, right.
And I see tech as a huge enabler.
Of course.
I am not conveying a message of, you know, we're doomed and, you know, tech is not here to assist us at all.
Like, that is absolutely counter to where we see our need to accelerate missions for clients and just the benefits of tech.
But keeping the tech and the outputs in check and having those humans validating the end product.
I think when you tie what you just said with what we were talking about earlier about trust,
bringing the human element allows us better modalities of trust and more opportunities
to build trusted relationships.
Yes, I would say 100%.
I mean, I think about, I mean, even why are we here?
This week at RSA, we are meeting people.
We are, like, there are thousands of people here.
We could do this virtually for sure.
Some attendees are doing this virtually.
But why are we here?
We're here to make networks and connections and, you know, kind of revitalize some connections in some cases. Like
I'm seeing colleagues here that I've worked with for 20 years and it's great. And we're still
comparing notes. We're still saying, hey, I have this problem in my space. What do you think? Oh,
we should, we have to keep in touch because I'm still seeing that too. And let's figure out some
of those problems together. So if you just think about the human element at RSA, we're living that every day here as well.
Yeah, right. We're experiencing it.
Yeah. Building trust with each other as we make new connections and maintain old ones.
So when you go home at the end of this week, what is going to be your biggest takeaway?
That I am definitely going to maintain my curiosity about everything that I'm seeing here. I see a lot of products. I was in the
RSA Sandbox watching the competition the other day. You know, those companies had three minutes to pitch something.
I really don't know that much about what they were pitching. So I'm curious. I'm going to go back and
look into my notes and research some of these things. And we will have Reality Defender on
the show in a few weeks. Yeah, really interesting to see the whole reveal at the end. But yeah, I give those companies a lot of credit.
And I know that they're rapidly trying to accelerate what they're trying to do.
So I'd like to remain curious about some of these products that we're seeing, but also really remain curious about some of the people I've met here where we can continue to work together to make intelligent sharing
and the process a lot better.
Well, Betsy Carmelite,
thank you so much for coming back.
And I am sure we will have you back again very soon.
Great. Thanks, Brandon.
Great to be here.
That's Betsy Carmelite,
principal at Booz Allen
with N2K's own Brandon Karpf.
Deepen Desai is the Chief Security Officer at Zscaler, and he and his colleagues recently released the Zscaler ThreatLabz 2024 AI Security Report.
Deepen, welcome to the show.
I would love to start out with some high-level stuff here.
What prompts the creation of this report?
Yes, thank you, Dave, for inviting me here.
So to give you background, what the team did is we looked at all the AI/ML transactions
that were observed in Zscaler Cloud
from April of 2023 up until January 2024.
And the goal over here is to look at three areas.
Number one is, what does that AI adoption look like in the enterprise?
We hear a lot of things out there,
a lot of news stories.
AI is definitely at the top in a lot of the news cycle, hype cycle.
But is it really translating into adoption?
So that was number one bucket. Number two is, are we seeing threats that are occurring because of the advancement in AI technology?
And number three is to power all the security professionals with data-driven insight to make decisions
on where and how to invest in AI-powered security defense as well.
So that was the intent of the report.
We looked at about 18 billion transactions.
That's what the activity translated to.
And this was a staggering nearly 600% growth in just nine months from April to January in enterprise AI/ML transactions.
So one more data point that I'll share is
we were seeing about 500 million transactions
every month in April of 2023.
And now it's 3.1 billion.
Well, as of the report, it was 3.1 billion in January
and it continues to grow even after
that. When you say transactions, what does that mean? Is that a phishing attempt? So when I say
transactions, these are connections going to AI applications on the internet. So think of these
as AI-as-a-service apps. It could be ChatGPT. It could be Drift, OpenAI, any of those destinations
which we have attributed as AI tools. So that's what I'm referring to when it comes to AI/ML
transactions. I see. So let's dig into the report itself here. I mean, what are some of the things
that caught your eye? Right. So not surprising, but it's still a very, very staggering number,
600% growth.
If I were to even call out the amount of data,
that translates to about 569 terabytes of data.
So there's a lot of it.
It clearly calls out that ai apps and a majority of this is fueled by
the advancements in generative ai it has become part of the enterprise fabric right and then we
have data to back that claim it is no longer a thing that enterprises are just experimenting or POCing.
There is definitely widespread adoption that we're noticing
based on the data that we're seeing. ChatGPT
undoubtedly was number one. That's where
yes, there was a lot of experiment, but there is also a lot of
usage in day-to-day operation.
Many of the organizations deployed a private version of ChatGPT, which will be hosted in Azure AI or other options as well.
This is where they will make sure none of their data hits the public ChatGPT version.
The third key trend that I'll call out is in terms of the region where we saw a lot of adoption. The US was definitely at the top, followed by India and then the UK.
That's where we saw significant spikes in the AI/ML-related activity.
And so for security professionals, what are the concerns here?
What are the things that the decisions that this data informs?
So it clearly calls out like the AI, whether it's the sanctioned AI,
or I like to call it shadow AI, where you're not aware that your employees are visiting these AI apps.
So shadow AI, we will have to deal with that.
But in terms of risk, I see it in two major buckets.
Number one is the risk around your intellectual property data, the data that is leaving your environment if it comes to AI-as-a-service apps, whether it's ChatGPT or any other apps out there, if your employees are submitting sensitive data. Let's take an example of an engineer submitting a code snippet, asking ChatGPT to
beautify it or add comments or find
issues or rewrite in another
language. That code snippet,
if it was sensitive to your
organization, it's now part of
a public model. So it's
basically getting leaked. So number one
risk is the
leakage, the accidental
leakage of your intellectual property: information, finance
data, code snippets, it could be any of that. The number two risk, and this is also an important one that
all of us CXOs have to worry about, is we will see more and more adversarial attacks targeting our AI/ML environment.
In fact, we've started seeing that with a couple nation state threat actors
that are specifically going after AI/ML development environments.
These are environments where probably the enterprise does not have
all the security controls
that they would have in production environment,
yet they're training those LLMs, those models, those agents,
those chatbots using sensitive data.
So the second bucket of risk, which I would put it in is,
how do I protect those private instances of my LLM
against adversarial attacks?
And that could lead to things like data poisoning, data leakage, hallucination, toxicity.
There's all kinds of new risks that comes up when you're building a chatbot, for instance,
and then you're going to post it on your public site to serve a business function. If that bot starts misbehaving, it can even lead to your brand reputation problem.
What would you recommend then? I mean, based on the information that you all have gathered here,
in terms of actionable best practices?
Yeah, so number one is you definitely need to start with what are the sanctioned versus unsanctioned AI applications for your employees.
And I'm talking about the external ones, the as-a-service ones. Once you have that policy defined, you need to have controls in place where you're outright blocking the unsanctioned ones or doing a best job at it.
And then for these sanctioned applications, you need to apply inline data loss prevention
engine with TLS inspection to make sure it is being used as intended and as defined by
your policy.
For instance, you may have a policy that, yes, it's okay to use this application, say ChatGPT,
but it's not okay for you to post.
I mean, it's obvious.
Don't post financial internal stuff over there
to glean whatever insight from the chatbot.
It's one thing to have it defined in the policy,
but you need to have those granular access control, DLP control, TLS inspection.
This is exactly how, say, Zscaler, Zero Trust Exchange, for instance, is helping our customers to securely adopt generative AI apps without risking their data.
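As a rough illustration of the control Deepen describes, allow-listing sanctioned AI apps and running an inline DLP check on outbound prompts, here is a toy sketch. The app list, patterns, and function names are hypothetical, not Zscaler's implementation:

```python
import re

# Hypothetical allow-list of sanctioned AI-as-a-service destinations.
SANCTIONED_AI_APPS = {"chat.openai.com", "internal-gpt.example.com"}

# Toy DLP patterns: flag things that look like credentials or US SSNs.
DLP_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-like strings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-shaped numbers
]

def evaluate_request(host: str, payload: str) -> str:
    """Return 'block', 'dlp-block', or 'allow' for an outbound AI-app request."""
    if host not in SANCTIONED_AI_APPS:
        return "block"        # unsanctioned / shadow AI destination
    if any(p.search(payload) for p in DLP_PATTERNS):
        return "dlp-block"    # sanctioned app, but sensitive data in the prompt
    return "allow"
```

In practice this sits inline behind TLS inspection, since the DLP engine can only match patterns on traffic it can actually read.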
Number two thing that I would recommend is for applications, and I'm talking about the second risk where the adversaries are already targeting the AI development environment,
you should treat them as a crown jewel application. All the controls that you have in place for your
crown jewel applications should get replicated over there as well, which translates to protecting the access.
In my opinion, it should be governed by zero trust principles.
So least privilege access, you should have full visibility, you should segment that off.
Only limited number of users should have access to it. And you should have full visibility into what happens,
what goes in and out of that environment.
So that second aspect
is going to be very, very important.
You start with something as basic as
make sure you don't have a VPN
leading to your AI/ML environment.
That's the highest risk in this day and age.
We've got all the stuff we're seeing
over the past six months
with several VPN vendors getting targeted.
So use zero trust principles to provide secure access.
This is where Zscaler has Zscaler private access,
which essentially uses user-to-app segmentation
to achieve it.
But the core point I'm trying to make
is use zero trust principles
for protecting
your private AI/ML apps.
Have a solution that allows you to implement RBAC for that LLM, that internal environment.
We are working in that space.
We have certain things we've been talking about. Net-net, at bare minimum,
restrict access to a group of folks that only need access to it, that way you're containing the risk
in that space. Anything that leaves that AI/ML environment going out to the internet again,
doing TLS inspection and applying DLP over there is equally important.
So the bucket number one where I said TLS inspection and DLP was where your users are
talking to public AI apps. Bucket number two is where you also need to apply the same TLS
inspection with DLP controls to data that is egressing your AI/ML environment, in case there is a successful adversarial
attack resulting in data exfiltration.
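The least-privilege, default-deny access Deepen recommends for a private AI/ML environment can be sketched as a minimal role check. The roles, permissions, and function below are hypothetical illustrations, not a real product API:

```python
# Hypothetical role-to-permission mapping for a private LLM environment.
ROLE_PERMISSIONS = {
    "ml-engineer": {"query", "fine-tune", "view-logs"},
    "analyst":     {"query"},
    "contractor":  set(),      # default-deny: no access unless explicitly granted
}

def authorize(role: str, action: str) -> bool:
    """Least-privilege check: deny unless the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice here is that an unknown role falls through to an empty permission set, so anything not explicitly granted is denied, which mirrors the zero trust principle of limiting access to only the users who need it.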
All right.
Well, Deepen Desai is Chief Security Officer at Zscaler.
Deepen, thank you so much for joining us.
ThreatLocker is a cybersecurity solution trusted by businesses worldwide, a full suite of
solutions designed to give you total control, stopping unauthorized applications, securing
sensitive data, and ensuring your organization runs smoothly and securely. Visit ThreatLocker.com
today to see how a default-deny approach can keep your company safe and compliant. What you're hearing is the sound of millions of tiny invisible particles called neutrinos.
Also can be the sound of the sun.
Make that tens of millions.
Billions.
This weekend, buckle up for a celestial showdown as a severe solar storm,
rating a spicy G4 on the Weather Wildness Scale,
prepares to ruffle Earth's electromagnetic feathers.
In what the Space Weather Prediction Center is calling a very rare event,
Earth is about to get a cosmic smackdown from not one, but five eruptions of solar material. These sunny spitballs
are expected to light up the skies with auroras, potentially turning the entire UK into an
impromptu Northern Lights festival. But it's not all Instagram-worthy sky art. This solar soiree
threatens to throw a wrench in the works for our beloved tech.
Unprepared power grids might take a nap, pipelines could get an unexpected jolt,
and satellites might find themselves on an unscheduled spacewalk.
Remember the G5 tantrum back in October 2003?
Sweden went dark, South Africa's transformers threw a fit, and we all reconsidered our dependence on
electricity. Flights over the poles might need to take the scenic route to dodge that extra zesty
solar seasoning, meaning some travelers will rack up a few more air miles than planned.
So, hopefully nothing more than a nighttime light show will occur. But just in case, grab your popcorn and settle in safely while we ride out a solar storm.
And that's the Cyber Wire.
For links to all of today's stories, check out our daily briefing at thecyberwire.com.
Be sure to check out this weekend's Research Saturday and my conversation with Dick O'Brien from Symantec's Threat Hunter team.
We're discussing Graph, the growing number of threats leveraging Microsoft APIs.
That's Research Saturday. Check it out.
We'd love to know what you think of this podcast.
Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity.
If you like our show, please share a rating and review in your podcast app.
Please also fill out the survey in the show notes or send an email to cyberwire at n2k.com.
We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector,
from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies.
N2K makes it easy for companies to optimize your biggest investment, your people.
We make you smarter about your teams while making your teams smarter.
Learn how at N2K.com. This episode was produced by Liz Stokes. Thanks for listening.
We'll see you back here next week.
Domo is an AI and data products platform. With Domo, you can channel AI and data
into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare,
and automate your data workflows,
helping you gain insights, receive alerts,
and act with ease through guided apps
tailored to your role.
Data is hard. Domo is easy.
Learn more at ai.domo.com. That's ai.domo.com.