CyberWire Daily - Data breach at the US Marshals Service. Blind Eagle phishes in the service of espionage. Dish investigates its outages. Qakbot delivered via OneNote files. Memory-safe coding.
Episode Date: February 28, 2023

The US Marshals Service sustains a data breach. Blind Eagle is a phish hawk. Dish continues to work toward recovery. OneNote attachments are used to distribute Qakbot. Ben Yelin has analysis on the Supreme Court's hearing on a Section 230 case. Mr Security Answer Person John Pescatore has thoughts on ChatGPT. And CISA Director Easterly urges vendors to make software secure-by-design. For links to all of today's stories check out our CyberWire daily news briefing: https://thecyberwire.com/newsletters/daily-briefing/12/39

Selected reading.
U.S. Marshals Service investigating ransomware attack, data theft (BleepingComputer)
US Marshals says prisoners' personal information taken in data breach (TechCrunch)
Blind Eagle Deploys Fake UUE Files and Fsociety to Target Colombia's Judiciary, Financial, Public, and Law Enforcement Entities (BlackBerry)
Dish hit by multiday outage after reported cyberattack (TechCrunch)
DISH says 'system issue' affecting internal servers, phone systems (The Record from Recorded Future News)
Take Note: Armorblox Stops OneNote Malware Campaign (Armorblox)
Ukraine & Intelligence: One Year on – with Shane Harris (SpyCast)
U.S. cyber official praises Apple security and suggests Microsoft, Twitter need to step it up (CNBC)
U.S. cyber chief warns tech companies to curb unsafe practices (CBS News)
Tech manufacturers are leaving the door open for Chinese hacking, Easterly warns (The Record from Recorded Future News)
CISA Director Calls Out Industry Using Consumers as Cyber 'Crash Test Dummies' (Nextgov.com)
The Designed-in Dangers of Technology and What We Can Do About It (Cybersecurity and Infrastructure Security Agency)

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyber Wire Network, powered by N2K.
Air Transat presents two friends traveling in Europe for the first time and feeling some pretty big emotions.
This coffee is so good. How do they make it so rich and tasty?
Those paintings we saw today weren't prints. They were the actual paintings.
I have never seen tomatoes like this.
How are they so red?
With flight deals starting at just $589,
it's time for you to see what Europe has to offer.
Don't worry.
You can handle it.
Visit airtransat.com for details.
Conditions apply.
AirTransat.
Travel moves us.
Hey, everybody.
Dave here.
Have you ever wondered where your personal information is lurking online?
Like many of you, I was concerned about my data being sold by data brokers.
So I decided to try Delete.me.
I have to say, Delete.me is a game changer.
Within days of signing up, they started removing my personal information from hundreds of data brokers.
I finally have peace of mind knowing my data privacy is protected.
Delete.me's team does all the work for you with detailed reports so you know exactly what's been done.
Take control of your data and keep your private life private by signing up for Delete.me.
Now at a special discount for our listeners,
today get 20% off your Delete Me plan when you go to joindeleteme.com slash n2k and use promo code n2k at checkout. The only way to get 20% off is to go to joindeleteme.com slash n2k and enter code
n2k at checkout. That's joindeleteme.com slash N2K, code N2K.
The U.S. Marshals Service sustains a data breach.
Blind Eagle is a phish hawk.
DISH continues to work toward recovery.
OneNote attachments are used to distribute Qakbot.
Ben Yelin has analysis on the Supreme Court's hearing on a Section 230 case.
Mr. Security Answer Person John Pescatore has thoughts on ChatGPT.
And CISA Director Easterly urges vendors to make software secure by design.
From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday,
February 28th, 2023.
A data breach has been reported at the U.S. Marshals Service.
NBC News correspondent Tom Winter broke the news in a
tweet thread yesterday evening. Drew Wade, a Marshals Service spokesperson, said,
The affected system contains law enforcement-sensitive information, including returns
from legal process, administrative information, and personally identifiable information pertaining to subjects of USMS
investigations, third parties, and certain USMS employees. The February 17th discovery of what
Wade calls a ransomware and data exfiltration event affecting a standalone USMS system led to
the disconnect of the affected system from the network. The USMS is actively
investigating the attack as a major incident, Bleeping Computer writes. Justice Department
officials were briefed last Wednesday. The breach is said to have left the Witness Security Program,
better known as the Witness Protection Program, untouched, USA Today reported in an update this morning.
BlackBerry has published a report on a threat actor, Blind Eagle, also known as APT-C-36.
It's a South American cyber espionage operation that's been operating against targets in Ecuador,
Chile, Spain, and Colombia since at least 2019. Its most recent activity has been
directed primarily at organizations in Colombia, including health, financial, law enforcement,
immigration, and an agency in charge of peace negotiation in the country.
The come-on in Blind Eagle's phishing emails depends upon fear and urgency.
Recipients of the email are told they have obligaciones pendientes, that is outstanding obligations,
with some of the communications telling the recipients that their tax payments are 45 days in arrears.
The email's phish hooks are usually malicious links.
The phishing is conceptually simple.
Blind Eagle has persisted with it simply because it works.
Dish continues to grapple with what it characterizes as an internal system error.
The Record notes that no specific information has so far come to light that would support early speculation that the incident arose
from a cyber attack. TechCrunch has been in touch with the company, who said that Dish TV,
Sling TV, and wireless service were all back up. Investigation and remediation are in progress,
a spokesman said. However, some of our corporate communication systems, customer care functions, and websites were affected.
Our teams are working hard to restore affected systems as quickly as possible and are making steady progress.
DISH's website this morning was still displaying the notice it's had up since the weekend.
We are experiencing a system issue that our teams are working hard to resolve.
Armorblox describes a phishing campaign that's using OneNote file attachments to distribute the Qakbot banking trojan.
The phishing emails purport to come from a trusted vendor
and ask the recipient to open a OneNote attachment that appears to be an invoice.
Armorblox says,
Upon opening the email, victims are presented with a simple-bodied
email designed to look like a follow-up to a previous discussion. As victims read this
language-based email, they are prompted to open the attachment to review the details of the order,
to which it seems has already been completed. The file will then execute VBScript code, which will result in the installation of Qakbot.
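For a concrete sense of what a defensive filter for this kind of lure could look like, here is a minimal, hypothetical sketch in Rust. It simply flags OneNote (.one) attachments whose bytes contain strings often associated with embedded script droppers; the ./attachments directory and the marker strings are assumptions made for this example, not indicators published by Armorblox.

```rust
use std::fs;
use std::path::Path;

// Return true when a .one attachment contains strings commonly seen in
// embedded script droppers. The markers are assumptions for this sketch,
// not a published Qakbot signature.
fn looks_suspicious(path: &Path) -> std::io::Result<bool> {
    if path.extension().and_then(|e| e.to_str()) != Some("one") {
        return Ok(false); // only inspect OneNote attachments
    }
    let bytes = fs::read(path)?;
    let markers: [&[u8]; 3] = [b"CreateObject", b"WScript.Shell", b"powershell"];
    Ok(markers
        .iter()
        .any(|&m| bytes.windows(m.len()).any(|w| w.eq_ignore_ascii_case(m))))
}

fn main() -> std::io::Result<()> {
    // "./attachments" is a placeholder directory for the example.
    for entry in fs::read_dir("./attachments")? {
        let path = entry?.path();
        if looks_suspicious(&path)? {
            println!("quarantine candidate: {}", path.display());
        }
    }
    Ok(())
}
```

A real gateway would parse the OneNote file format and detonate attachments in a sandbox; this sketch only illustrates the idea of treating .one attachments as worth extra scrutiny.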
The Spycast podcast has an interview with The Washington Post's Shane Harris,
who encapsulates how conventional wisdom about Russia's hybrid war went astray.
He said,
At the outset, I believe that what we were looking at was probably a pretty swift Russian victory.
They would come in, they would decapitate the central government in Kiev in the first 72 hours,
and it would be bloody, and it would be violent, but that Russia would prevail because they were
deemed to have the superior military in terms of technology, experience, and numbers. Turns out,
all those things were spectacularly wrong.
The same goes for cyberspace. Check out Spycast on the Cyber Wire network and hear more about
the conduct and prospects of Russia's war. CISA Director Jen Easterly spoke yesterday
at Carnegie Mellon University and outlined steps she urged vendors to take in order to introduce
more inherent security into their products. One of her conclusions was that the burden of security
shouldn't fall on the consumer. Since she was speaking at a university, she framed the issues
in ways that suggest how advanced students might shape their studies and research
to contribute.
In particular, she offered four questions that are worthy of more general consideration.
First, she asked, could you move university coursework to memory-safe languages?
As an industry, we need to start containing and eventually rolling back the prevalence of C and C++ in key systems and putting a real emphasis on safety.
Second, could you weave security through all computer software coursework?
Third, how can you help the open source community?
And finally, could you find a way to help all developers and all business leaders make the switch?
So, memory-safe coding is a technical, practical, and business issue.
It will take a push across all those areas to make software safer and more secure.
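As a small illustration of what memory safety means in practice, here is a minimal sketch in Rust, offered as an illustrative example rather than anything from Easterly's talk. The out-of-bounds and use-after-move mistakes that routinely become exploitable bugs in C and C++ are either handled explicitly or rejected by the compiler.

```rust
fn main() {
    let names = vec!["marshal", "eagle", "dish"];

    // Out-of-bounds access: .get() returns None instead of reading past
    // the end of the allocation, so the caller has to handle the miss.
    match names.get(10) {
        Some(name) => println!("found: {name}"),
        None => println!("index 10 is out of bounds, handled safely"),
    }

    // Use-after-move cannot be expressed: once `names` is moved into
    // `consume`, the compiler rejects any later use of it.
    consume(names);
    // println!("{:?}", names); // uncommenting this line is a compile error
}

fn consume(v: Vec<&str>) {
    println!("consumed {} entries", v.len());
}
```

The equivalent C code would compile and then fail, or silently appear to work, at runtime; that difference is the gist of the secure-by-design argument for memory-safe languages.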
Coming up after the break, Ben Yelin has analysis on the Supreme Court's hearing on a Section 230 case.
Mr. Security Answer Person John Pescatore has thoughts on ChatGPT.
Stay with us.
Do you know the status of your compliance controls right now?
Like, right now.
We know that real-time visibility is critical for security,
but when it comes to our GRC programs, we rely on point-in-time checks. But get this.
More than 8,000 companies like Atlassian and Quora have continuous visibility
into their controls with Vanta. Here's the gist. Vanta brings automation to evidence collection
across 30 frameworks like SOC 2 and ISO 27001. They also centralize key workflows like policies,
access reviews, and reporting, and help you
get security questionnaires done five times faster with AI. Now that's a new way to GRC.
Get $1,000 off Vanta when you go to vanta.com slash cyber. That's vanta.com slash cyber for $1,000 off.
And now, a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning
digital executive protection platform secures their personal devices, home networks, and connected
lives. Because when executives are compromised at home, your company is at risk. In fact, over one
third of new members discover they've already been breached. Protect your executives and their families 24-7, 365 with Black Cloak. Learn more at blackcloak.io. Mr. Security Answer Person
Hi, I'm John Pescatore, Mr. Security Answer Person.
Our question for today's episode,
you spent a lot of time as a Gartner analyst.
If you were doing a Gartner cybersecurity hype cycle today,
where would you put the OpenAI ChatGPT chatbot that is getting so much press? Well, that's a timely question.
I actually just used the ChatGPT chatbot via the New York Times
to write my wife a romantic Valentine's Day card in the style of a pirate.
She was not impressed.
Next year, I will go back to buying her roses.
Okay, let me do some explaining first.
Unless you've been totally off the grid, you've probably heard some level of hype about OpenAI and ChatGPT.
If not, Google it for detailed information, but it is essentially an example of what is called generative AI.
Here's the one-line explanation the consulting firm McKinsey published for corporate executives.
Generative AI describes the algorithms, such as ChatGPT, that can be used to create new content,
including audio, code, images, text, simulations, and videos.
One more short definition for those not familiar with Gartner Hype Cycles, which Gartner started in 1995,
and which were one of the more fun
Gartner research notes I did over my 14 years there. A Gartner Hype Cycle tracks and predicts
technology issues from inception or trigger point to peak of overinflated expectations,
into the trough of disillusionment, then up the slope of enlightenment, and for some, but not all,
to reach the plateau of productivity.
In August 2022, the Gartner Emerging Technologies hype cycle had generative AI at that initial trigger point.
Over the years, AI has mostly been trapped in the trough of disillusionment, but ChatGPT
actually passed the Turing test, fooling human readers into thinking they were chatting with
another human.
The public release of a website last November demonstrating the technology in various ways
has led to an explosion of hype.
From a cybersecurity perspective, there are two major things to think about.
One, how it will be used against us, but also, two, how can we use it against the bad guys?
First, a telling point to internalize.
The workflow of AI is always, one, human experts enter constraints and requirements.
Two, AI lines of code, mostly written by humans, creates a bunch of stuff.
And then three, humans evaluate and select the useful stuff.
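As a toy illustration of that three-step loop, here is a short Rust sketch; the generator is a stub standing in for a real model API call, and every name in it is invented for the example.

```rust
// Step 2 stand-in: a stub "generator" produces candidate outputs.
// Nothing here talks to an actual AI service.
fn generate_candidates(constraint: &str, n: usize) -> Vec<String> {
    (1..=n)
        .map(|i| format!("draft {i} for: {constraint}"))
        .collect()
}

fn main() {
    // Step 1: human experts supply the constraints and requirements.
    let constraint = "phishing-awareness reminder, under 50 words";

    let candidates = generate_candidates(constraint, 3);

    // Step 3: humans evaluate and keep only the useful output (a trivial
    // length filter stands in for human review in this sketch).
    let kept: Vec<&String> = candidates.iter().filter(|c| c.len() < 200).collect();

    println!("kept {} of {} candidates", kept.len(), candidates.len());
}
```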
Already, you can see how ChatGPT can be used to make it much easier to craft more real-sounding phishing messages and even simple malicious executables.
This is much the way cloud computing made it easier for bad guys to launch distributed denial-of-service attacks.
But cloud-based DDoS also made it easier to block DDoS.
And in that case, generative AI is going to follow that same trend, because in the hands of skilled cybersecurity folks it will be useful for faster generation of IOCs that are more than just glorified signatures, for more useful tools that recognize phishing text and malware created by generative AI, and for making sure code does not contain any of the OWASP top code or API vulnerabilities before allowing check-in of that software. That would be some real movement
up the slope of enlightenment. So to finally directly answer your question, today I put
generative AI used by bad guys at the peak of overinflated expectations, and its use by good guys just starting off from the trigger point. As in chess, the bad guys have the white pieces and usually get
to go first. But the first mover in chess does not always win. It is really only a slight advantage,
and the difference in skills between players is the more accurate determinant of who will most likely
win. The bottom line? Like all technology, generative AI can be a force multiplier
when skilled experts put it to use, or it can simply be a noise generator when unskilled users
are at the controls. Use the hype over OpenAI ChatGPT to make sure your management understands the need
for machine learning understanding and skills in your security staff. Also, update your security
awareness materials to users to emphasize that caution in clicking
should be based on the consequences of the action,
not just the believability of the email.
My prediction is that even as fast as things seem to be moving,
in February 2024, we will probably not be using generative AI
to send our significant others Valentine's Day messages,
or they will not be our significant others in 2025.
Thanks for listening. I'm John Pescatore, Mr. Security Answer Person.
Mr. Security Answer Person with John Pescatore
airs the last Tuesday of each month right here on the Cyber Wire.
Send your questions for Mr. Security Answer Person
to questions at thecyberwire.com. And joining me once again is Ben Yelin.
He is from the University of Maryland Center for Health and Homeland Security
and also my co-host over on the Caveat podcast.
Hello, Ben.
Hello, Dave.
I know you recently spent just a scintillating afternoon
listening to Supreme Court oral arguments in the Gonzalez versus Google case, which has to do with
Section 230 here. Before we jump into what the Supreme Court had to say, just a quick overview.
What is at stake here, Ben? So there are actually two cases here, Gonzalez and Taamneh v. Twitter. For legal purposes, the cases are identical.
It's victims or the families of victims of terrorist attacks suing online platforms for
aiding and abetting terrorism through their use of algorithms. The Twitter case turns more on the
specific definition of aiding and abetting,
which is not as relevant for our purposes. So that's why we're focusing on Gonzalez v. Google,
which is really about how far immunity under Section 230 extends to the activities of these
big tech platforms. So the allegation on behalf of Gonzalez's family, Gonzalez was a young lady who was killed in the 2015 terrorist attacks in Paris,
is that YouTube and its parent company, Google,
bear some responsibility for these acts of terrorism
because of their algorithm that recommends videos.
So when you search ISIS videos on YouTube
and you watch one of them,
YouTube will actively recommend, at least this
is the allegation, the next video based on what you've already watched. And in that respect,
they are aiding and abetting terrorists. Now, Section 230 provides immunity to these companies
for third-party content posted on the website. So both parties agree that you can't sue YouTube, or Google as its
parent company for the fact that ISIS videos exist on YouTube. Okay. But the argument here is,
can you sue them for this sort of recommendation scheme? And that turns on the question as to
whether in recommending these videos, YouTube is acting simply as a
publisher and is just organizing the videos in kind of a content neutral way. Or if this is an
act of creative content, this is something that YouTube itself has created. The counsel for
Gonzalez argued that the specific thumbnails that are created for these recommended videos
are a mixed creation. It's the third party that has created the video, but it is Google and YouTube
that have created the thumbnail and that they should be liable or they should not have immunity
under Section 230 because they created that thumbnail. And put it in front of the viewer.
And put it in front of the viewer through their algorithm. Okay. The justices were very skeptical of that argument,
I think for both legal and practical reasons. The practical reasons is that all of these tech
companies would then panic about any algorithmic decision that they'd make, including ones that
seemed completely innocuous. So one of the examples they gave is, what if in a search engine, Google simply organized the results not by any algorithm,
but alphabetically? If there were no immunity shield, people like me with the last name of
Yelin could sue for economic damages because my name always turned up last in the search.
And they think that that would be a bad result for these internet companies.
It would stifle creative content, etc. So they are very wary about cutting against Section 230
immunity for something that, at least to a layperson, doesn't seem like content that Google
itself created. The counsel for Google
made an argument
that was similarly poorly received
by many of the justices.
Their argument is that
not only should that content neutral algorithm
where you're simply creating an algorithm
based on the videos
that somebody has watched,
not only should that still confer
immunity on the company,
but even in an extreme example where
YouTube designed an algorithm specifically to promote terrorism, to promote ISIS videos,
even in that extreme circumstance, there should be a liability shield because it's still just
third-party content. Even if you are designing an algorithm that promotes ISIS videos, it's ISIS
itself that created the videos. And therefore, Google shouldn't face any sort of legal consequences
even in that extreme circumstance. And the justices were pretty skeptical of that argument
as well. I think they are trying, and sometimes were asking really probing questions, to determine where that line is.
Yeah. What did we hear from any of the individual justices here?
So Justices Gorsuch and Kavanaugh, I think, were particularly concerned about the practical effects of ending this immunity and what it would do for the industry.
And so if I had to guess, they are going to come down more on the side of
broad immunity for these big tech platforms. And that's really the status quo. Lower court cases
have held that immunity under Section 230 extends to a lot of the sort of organizing activities
that these platforms engage in when they're deciding which videos to put at the top of the
list, right? Justice Jackson, who is the newest justice and one of the more liberal justices,
I think is going to go in the other direction.
She was taking a very textualist approach and was looking at the original purpose of Section 230,
which concerned taking down third-party content or decisions about whether to take down third-party content.
And since this case,
when we're talking about algorithms and recommendations, doesn't relate to a direct
decision about removing third party content, I think she would not have Section 230 immunity
apply to these types of activities. So I think she would be one vote in favor of Gonzalez.
The remaining justices are kind of in the murky middle where, through really interesting,
I think intelligent, questions,
they were trying to engage in a line-drawing exercise
and they did it through a bunch of different hypotheticals.
So with the attorney for Gonzalez,
they were talking about a scenario
in which somebody goes into a bookstore
and asks
for a book related to sports. And they are directed, based on that question that they asked,
to a table full of books about sports. If this were an internet transaction, would that
confer immunity on the equivalent of the bookstore here? And this really goes back to some of the
original algorithms we saw in the 90s, like with Amazon. You bought this, will you like that?
And so I think justices were skeptical of not extending immunity to those
very basic publishing functions of here's something we think you want to see, not based on our own ideological desire
of what we want you to see,
but based on what you have previously searched for.
But there are kind of a parade of hypotheticals
on the other side too.
The main one, which I already discussed,
is what if Google created an algorithm
that specifically promoted terrorism? Justice Sotomayor came up with, I think,
a really good hypothetical that was very difficult for the attorneys to answer. What if there was a
dating site that created a discriminatory algorithm so they wouldn't match black users
with white users, for example? Would that dating service have immunity?
Because ultimately, it's the third parties,
the people who created the profiles,
who have submitted the content.
The dating service would just be engaging
in that kind of organizational publishing function.
So I think the justices in the middle
were having a really hard time of figuring out
that exact line between acting as a
publisher and acting as the creator of content. So it makes it really difficult to handicap
where they're going to come down in this case. Well, time will tell, and we certainly will
keep an eye on it. Ben Yelin, thanks for joining us. Thank you.
Cyber threats are evolving every second,
and staying ahead is more than just a challenge.
It's a necessity.
That's why we're thrilled to partner with ThreatLocker, a cybersecurity solution trusted by businesses worldwide.
ThreatLocker is a full suite of solutions designed to give you total control,
stopping unauthorized applications, securing sensitive data,
and ensuring your organization runs smoothly and securely.
Visit ThreatLocker.com today to see how a default-deny approach can keep your company
safe and compliant. And that's the Cyber Wire. For links to all of today's stories,
check out our daily briefing at thecyberwire.com.
The Cyber Wire podcast is a production of N2K Networks,
proudly produced in Maryland out of the startup studios of DataTribe,
where they're co-building the next generation
of cybersecurity teams and technologies.
This episode was produced by Liz Ervin
and senior producer Jennifer Eiben.
Our mixer is Trey Hester with original music by Elliot Peltzman. The show was written by John
Petrik. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening.
We'll see you back here tomorrow. Your business needs AI solutions that are not only ambitious, but also practical and adaptable.
That's where Domo's AI and data products platform comes in.
With Domo, you can channel AI and data into innovative uses that deliver measurable impact.
Secure AI agents connect, prepare, and automate your data workflows,
helping you gain insights, receive alerts, and act with ease through guided apps tailored to your role. Thank you.