CyberWire Daily - Weathering the internet storm. [Research Saturday]
Episode Date: February 3, 2024
Johannes Ullrich from SANS talks about the Internet Storm Center and how they do research. The Internet Storm Center was created as a mix of manual reports submitted by security analysts during Y2K and automated firewall log collection started by DShield. The research shares how SANS uses its "agile honeypots" to "zoom in" on events and more effectively collect data targeting specific vulnerabilities. The Internet Storm Center has reported on three separate attacks that were observed. The research can be found here:
Jenkins Brute Force Scans
Scans for Ivanti Connect "Secure" VPN Vulnerability (CVE-2023-46805, CVE-2024-21887)
Scans/Exploit Attempts for Atlassian Confluence RCE Vulnerability CVE-2023-22527
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the CyberWire Network, powered by N2K.
Like many of you, I was concerned about my data being sold by data brokers. So I decided to try Delete.me.
I have to say, Delete.me is a game changer. Within days of signing up, they started removing my
personal information from hundreds of data brokers. I finally have peace of mind knowing
my data privacy is protected. Delete.me's team does all the work for you with detailed reports
so you know exactly what's been done. Take control of your data and keep your private life private.
Hello, everyone, and welcome to the CyberWire's Research Saturday.
I'm Dave Bittner, and this is our weekly conversation with researchers and analysts
tracking down the threats and vulnerabilities, solving some of the hard problems,
and protecting ourselves in a rapidly evolving cyberspace.
Thanks for joining us.
It is my pleasure to welcome to the Research Saturday podcast,
Johannes Ullrich.
He is the Dean of Research at the SANS Technology Institute
and also the host of the daily SANS Internet
Storm Center podcast.
Let me get that right.
It is the ISC Storm Center podcast.
And Johannes, you and I joke about this because for whatever reason, I always want to say
ICS, which is wrong.
Lots of people do that.
Right, right.
It's a pet peeve of mine when people say that.
Yeah, I want to get it right for you.
So welcome. It's great to have you here.
Yeah, thanks. Thanks for having me.
So we're going to do something a little different than we typically do on Research Saturday today.
We're going to dig into some of the history of SANS as a cyber research organization,
and then also talk about some of the process
that you and your colleagues use there.
Where would you like to get started?
Well, in the beginning, I guess.
One reason I think it's nice to talk about this
is it actually started 25 years ago,
and not too many things in information security
are that old and have survived for that long.
But it actually originally started in 99.
And for the young kids listening, there was something called Y2K that actually sort of sparked it all.
Where SANS sort of said, hey, we probably should get better at exchanging what we are seeing in our environments.
And Y2K sort of gave the spark to it.
But then people found it really helpful to have a place
where you can report what you're seeing,
where you can talk about something,
your observation, your environment.
And that sort of then evolved into what's now Internet Storm Center.
And what was it like in those early days?
Are we talking about message boards?
Are we talking about blog posts?
What was it?
Well, actually, one reason we still call it a diary today, what we are writing sort of each day, is that the term blog didn't exist back then. And it was sort of a message board. It was emails coming in. We had our handler on duty, we sort of still use some of that language today, who would receive all these messages and then sort of compile a little digest that would then be posted in this diary format.
Was it bi-directional?
I mean, could people get feedback?
Yeah, and actually, that's something that still works quite well today sometimes.
When we do post something like, hey, you know, we received an email where someone reported something odd in the environment.
And then others are sort of chiming in
and reporting why they may be seeing that
or some of the background about that particular software.
And so that community aspect of it
really was developed very early on.
Yeah.
Well, let's walk through the evolution then.
I mean, how have things changed over the years?
Yeah, so, and actually that 99 was a little bit before I started
working with SANS and working in that Storm Center.
By myself, I sort of started setting up a little bit of a similar system,
but more automated, where I basically, with a couple of friends,
started collecting our firewall logs,
analyzing them, creating some graphic representations of those logs, which started in 2000,
so like a year after SANS started its system.
And it came in really handy, if you remember, like 2000, 2001, when these early worms came out, where we really had some great data to then reflect how these worms spread, how fast they spread, where they started. So these firewall logs back then were what we collected. And, well, people liked it.
We got a ton of people that were then willing to submit their logs to the system.
The nice thing was the original SANS system was a more manual process, as I described:
People writing in and people analyzing it and posting about it.
That's a slower process.
These automated systems allowed us to speed all of that up. And of course, the two then started feeding each other.
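To make that idea concrete, the automated side described here amounts to aggregating submitted firewall log lines by destination port and source. Below is a minimal sketch of that kind of aggregation, assuming an invented, simplified line format (timestamp, source IP, destination port) rather than DShield's actual submission format.

```python
# Minimal sketch of DShield-style firewall log aggregation.
# Assumes a simplified, hypothetical line format: "<ISO timestamp> <source IP> <dest port>".
# The real DShield submission format and pipeline are more involved.
from collections import Counter

def aggregate(lines):
    """Count hits per destination port and per source IP."""
    by_port, by_source = Counter(), Counter()
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _ts, src, dport = parts
        by_port[dport] += 1
        by_source[src] += 1
    return by_port, by_source

if __name__ == "__main__":
    sample = [
        "2001-07-19T10:00:00 203.0.113.5 80",   # port 80 scans (Code Red era)
        "2001-07-19T10:00:02 198.51.100.7 80",
        "2003-08-12T09:15:00 192.0.2.9 135",    # port 135 scans (Blaster era)
    ]
    ports, sources = aggregate(sample)
    print("Top ports:", ports.most_common(3))
    print("Top sources:", sources.most_common(3))
```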
That's when I then started working with SANS and also then we sort of officially then named it the
Internet Storm Center. You know, my brain short circuits a little bit when I think about 1999
being 25 years ago. I don't know how you feel about that, but can you give us an idea of what the community was like back then?
I mean, cybersecurity itself was different than it is today.
It was very different.
Like, for example, one parameter we're tracking is what we call the survival time. And that's how long it takes between unsolicited packets being received by your system,
by an average home system.
We'll call it an attack.
And back then, that time was about 15 minutes.
After the initial worms started in 2001-ish, that shrank down to about five minutes.
Later, in particular, once Mirai
and some of these really aggressive scanning bots started,
well, we are now well below one minute
sort of between unsolicited packets
hitting a random IP address on the internet.
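As a rough illustration of the survival time metric: given the arrival times of unsolicited packets at an exposed address, the figure is essentially the typical gap between them. A minimal sketch follows, assuming you already have a list of arrival timestamps; the ISC derives the real number from its sensor network, not from a single host like this.

```python
# Rough sketch of the "survival time" idea: the typical gap between
# unsolicited packets hitting an exposed address. Assumes you already
# have packet arrival times; the ISC computes this from its sensor data.
from datetime import datetime
from statistics import median

def survival_time_seconds(arrival_times):
    """Median gap, in seconds, between consecutive unsolicited packets."""
    times = sorted(arrival_times)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return median(gaps) if gaps else None

if __name__ == "__main__":
    arrivals = [
        datetime(2024, 1, 29, 12, 0, 0),
        datetime(2024, 1, 29, 12, 0, 40),
        datetime(2024, 1, 29, 12, 1, 10),
        datetime(2024, 1, 29, 12, 1, 55),
    ]
    print("Survival time (s):", survival_time_seconds(arrivals))
```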
Are there any specific, I guess, milestones along the way that stand out to you in terms of either the growth and evolution of the Storm Center or also the growth and evolution of the internet itself?
Yeah, the way the attacks have changed over the years is one of those things.
Like I mentioned, initially,
we started collecting firewall logs.
And that was really interesting back then because then we had bots like Nimda,
if anybody remembers that,
which sort of hit IIS on port 80.
We had the Blaster worm,
which hit port 135 back in the day.
Over the years, that changed.
These days, much of the attacks we're seeing
are web application attacks, which basically hit your standard web ports like 80, 443, 8000, and so on.
So as the initial firewall logs we collected became less telling as to what the actual threat is, we had to adapt.
And we adapted to sort of more complete honeypots to collect our data.
So where we now set up honeypots that are collecting data from SSH server, from Telnet server.
So that's like your Mirai style attacks.
We have honeypots that are emulating different web applications.
So this is all the different web attacks.
That really now, first of all, tells us more detail about these attacks,
what they're all about, what they're after.
But then again, we have to sort of keep up with the attacks.
Later, these days, many of the interesting attacks,
they first check if your system is actually vulnerable.
And so, about five to 10 years ago,
we started experimenting with what we call an agile honeypot,
where the honeypot is able to emulate different applications,
different devices.
So that's sort of as attacks against IoT devices started.
That sort of helped us then gain a little bit more insight
into those attacks that we're seeing these days,
where a lot of it, I mentioned already Mirai a couple of times,
are sometimes attacking very specific sort of routers or devices.
I always joke that, hey, we can turn our honeypots into toasters if that's what's being attacked today.
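As a toy illustration of the agile honeypot idea, and not the ISC's actual implementation, a web honeypot can change which application it pretends to be by swapping the canned responses it serves. Everything in this sketch, the paths, profiles, and responses, is invented for the example.

```python
# Toy "agile honeypot" sketch: one HTTP listener that can emulate
# different applications by serving different canned responses.
# Profiles, banners, and paths here are invented for illustration;
# real ISC honeypots are far more capable.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone
import json

# Swap this mapping to "re-aim" the honeypot at whatever is being attacked today.
PROFILES = {
    "/manage": ("ci-server-lookalike", 403, "Authentication required"),
    "/login":  ("vpn-appliance-lookalike", 200, "<html><title>Sign in</title></html>"),
}
DEFAULT = ("generic-web", 404, "Not Found")

class Honeypot(BaseHTTPRequestHandler):
    def do_GET(self):
        profile, status, body = PROFILES.get(self.path, DEFAULT)
        # Log every request; in a real deployment this would be shipped to a central database.
        print(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(),
            "src": self.client_address[0],
            "path": self.path,
            "profile": profile,
        }))
        self.send_response(status)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Honeypot).serve_forever()
```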
And now, a message from our sponsor, Zscaler, the leader in cloud security.
Enterprises have spent billions of dollars on firewalls and VPNs, yet breaches continue to rise. With an 18% year-over-year increase in ransomware attacks
and a $75 million record payout in 2024,
these traditional security tools expand your attack surface
with public-facing IPs that are exploited by bad actors
more easily than ever with AI tools.
It's time to rethink your security.
Zscaler Zero Trust Plus AI stops attackers
by hiding your attack surface, making apps and IPs invisible, eliminating lateral movement, connecting users only to specific apps, not the entire network, continuously verifying every request based on identity and context, simplifying security management with AI-powered automation, and detecting threats using AI to analyze over
500 billion daily transactions. Hackers can't attack what they can't see. Protect your organization
with Zscaler Zero Trust and AI. Learn more at zscaler.com/security.
Was there much thought given in those early days about scalability? Like, were people imagining that the internet would be so interleaved into our lives the way it is?
I think some people were
sort of imagining it. I certainly kind of
believed in that. That sort of got me stuck
with it. But I think overall, I would
say, you know, back in the early days,
the internet was a much nicer place,
kind of. People helped each other
a little bit more. And
that in some ways got me into security.
Like, you know, one of the
early instances that I sort of had to
deal with in my personal system
was setting up a Linux system, which back then, and again, we're talking like
late 90s, had an open mail relay by default.
And that's just how we rolled back then. You set up mail servers
just for everybody to send email with. And of course, that was then when
spammers started coming up and started abusing those
mail servers.
And I think the security community was also smaller.
And in that sense, I think there was more trust and there's now more collaboration.
I would say collaboration depends a lot on people collaborating with each other,
not organizations collaborating with each other.
That personal connection, I think, happened probably more back then than it does now.
Yeah.
Can you speak to that transformation where it has kind of become corporatized these days?
I mean, you have the big players.
There are still individuals who are known by name, and I would put you in that category,
but so much is, you know,
Mandiant says, or Microsoft says,
or, you know, the big names come out with their research.
Correct.
And I think at the Internet Storm Center,
we try to sort of still follow a bit
that old model,
like all of our honeypots are run by volunteers.
We have some individuals at corporations that
donate significant resources like IP address space and such to our honeypots and to the
effort overall. Also, a lot of the analysis we do is done by volunteers.
Well, let's fast forward to today. I mean, what does it look like nowadays?
What sort of processes do you all have in place?
So these days, we heavily rely on our web application logs,
in some sense, also on some of the telnet and SSH logs,
maybe not as much as we should these days.
But all of these logs are being reported by these honeypots,
which usually run
on Raspberry Pis. That's our preferred platform. We have virtual systems that people are using
to set up these honeypots, some in various cloud providers. They send all of these logs to our
database. We add them to the database. And then one of the unique things we offer is essentially real-time: these logs are being turned around via our website.
Everybody can look at them, can see what's new, what's interesting.
We do have actually now some interns that help us from our undergraduate program
that also run honeypots, help us develop the software and test it, and
also alert us of some new attacks
that may be seen.
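To sketch the reporting path described here, sensors shipping their honeypot logs to a central database, a minimal submission client might look like the following. The endpoint, payload fields, and API key are hypothetical; this is not the actual ISC/DShield submission API.

```python
# Minimal sketch of a sensor shipping honeypot logs to a central collector.
# The URL, payload fields, and auth header are hypothetical -- this is not
# the actual ISC/DShield submission API.
import json
import urllib.request

COLLECTOR_URL = "https://collector.example.org/api/logs"  # hypothetical endpoint
API_KEY = "replace-with-your-key"                         # hypothetical auth

def submit(records):
    """POST a batch of log records as JSON to the collector."""
    body = json.dumps({"sensor": "honeypot-01", "records": records}).encode()
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    example = [{"time": "2024-01-29T12:00:00Z", "src": "203.0.113.5",
                "path": "/login", "method": "GET"}]
    print("Collector responded:", submit(example))
```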
Can you give us some examples of some of the more
interesting items that you and your
colleagues there have been researching lately?
Yeah, just today
earlier I was working on
Atlassian. Atlassian Confluence,
they patched a vulnerability
last week.
On Monday, I saw in our, we have a report that you can also see on our website,
I call it the First Seen URL Report, where basically it lists,
hey, these are web application attacks that we saw today that we hadn't seen before.
And one of the URLs that sort of popped up there was related to the Atlassian attack. Then I was able to actually emulate that particular software
in a subset of our honeypots. That's sort of where the agile part comes in. And then sort of
collect more data about these attacks,
what people were trying to do with those servers.
And then again, sort of immediately turn it around,
publish something about it,
put up a quick summary about what we were seeing.
But again, the data was already there for everybody else to see.
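The first-seen report idea is simple to sketch: keep the set of request paths already observed, and flag anything in today's logs that is not in it. Here is a minimal version, assuming a plain in-memory set rather than the ISC's database, with an invented example probe path.

```python
# Minimal sketch of a "first seen URL" report: flag request paths in
# today's logs that have never been observed before. Assumes a plain
# set of known paths; the ISC's report is backed by its full database.
def first_seen(todays_paths, known_paths):
    """Return paths seen today that are not in the historical set."""
    new = sorted(set(todays_paths) - set(known_paths))
    known_paths.update(new)  # remember them for tomorrow's run
    return new

if __name__ == "__main__":
    known = {"/", "/robots.txt", "/wp-login.php"}
    today = ["/", "/some/new/exploit/probe", "/wp-login.php"]  # example probe path, invented
    print("First seen today:", first_seen(today, known))
```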
So our diaries, as we call them, these blog posts,
are really just summarizing the data
that we have. That's at least part of what we're doing there. What goes into being an effective
researcher here? The folks that you work with, yourself included, what are those personality
elements, the areas of curiosity that seem to work out? I think curiosity is really it, kind of.
And then being willing to experiment, being willing to be wrong sometimes.
And that's, of course, I think something where things may have changed a bit from the early
days, but the social media environment these days can be a little bit unforgiving in that
respect.
But being wrong in
the sense, hey, if you're wrong, someone else will tell you why you're wrong and what the real answer
is. Also being willing to listen to those people that tell you that you're wrong. I think that's
important. It sometimes helps to have a little bit of memory of what happened before, surely not
remembering all the different attacks that I've seen over the years.
But I see a lot of re-reporting of attacks too. That's a little bit annoying.
Oh, that's interesting. Yeah. And sometimes I would imagine, speaking to that memory component,
you probably just get a funny feeling like something's amiss here, but you can't quite
put your finger on it. That's correct. And also seeing like, no, what's different, what's new.
That's really sometimes the important thing and the difficult part to figure out.
Also, you know, being willing to just plain experiment.
Basically being wrong.
A boss once told me in a prior job that the important part is to make the right
number of mistakes.
If you don't make mistakes, you just aren't really brave enough to try something new, try something different.
I think that's important as a researcher to make those mistakes and learn from it.
What are your recommendations for somebody who's coming up in the industry, either a student or maybe somebody considering a career change, the types of things that they can do to prepare themselves if this sort of research is something they think they're going to be interested in?
I would say setting up a honeypot. We had real great success with our undergraduate students who did it and then realized, hey, these are actual attacks I'm seeing here. Because when you're reading about it,
even when you're studying about in a classroom environment, maybe you're running some exercise
around the attack, it's all sort of fairly sterile and artificial. If you actually see a simple worm kind of hitting your honeypot, exploiting some of these
vulnerabilities that you talked about in class, I think that makes it much more real and brings it
really home to people. And it's relatively easy from a technical point of view to sort of get
started with that. Of course, I'm biased here, but I thought that I saw really a lot of people's
eyes light up sort of the first time they really saw these attacks hitting their systems.
Do you find that folks can be kind of intimidated by that, you know, sort of playing with live fire, if you will?
Yeah, that certainly happens.
And maybe that's also important for them to realize how frequent these attacks are.
Also, how many of these attacks really don't matter.
We had recently this famous statement from some bank executives
about how they're being attacked like a billion times a day and such.
And some security people made sort of fun of that statement.
It's real. They are being attacked that way that many times.
But most of these attacks don't matter. They don't cause any damage. And that's in particular if you're sort of
starting out from the defensive side, from like a software developer or network administrator
point of view. Your goal is these five nines or this high reliability, everything has to work.
You sometimes have to switch mindsets when you're talking about attacks,
where you're just saying that, hey, for an attacker,
it's perfectly fine if 99.99% of their attacks
don't work. If the one attack works that breaks into
the Fortune 500's research department and gets
you all their secrets,
it was a good attack.
It kind of reminds me of our own immune systems where most of the time it's just running there,
fending things off,
and we don't think twice about it.
It just takes care of its business on its own
and we don't even notice.
But then every now and then something gets through
and you could get a cold
or you could get something more serious.
Yeah, and that's sort of the important task of the researcher to find those new and different things.
But you have to adjust your immune system where you actually have to build these new capabilities to defend against this new attack.
And the danger is, of course, from someone who is in the business like me for a while,
to get a little bit dull over time or stop caring, really, to some extent.
Right.
And balance that with the new person who is getting excited about every little attack that's coming in.
And I've seen both work. Really, that's why you need that diversity also in your security teams,
where you still have someone that's new to it that still gets excited about some attacks.
Because sometimes they find some interesting things because they do that research. They
do actually dig in and see, hey, what is this attack doing?
Yeah.
All right.
Well, Johannes Ullrich is the Dean of Research at the SANS Technology Institute,
and he is also the host of the ISC StormCast podcast.
Johannes, thank you so much for joining us today.
Yeah, thank you.
And now a message from Black Cloak.
Did you know the easiest way for cyber criminals to bypass your company's defenses is by targeting your executives and their families at home?
Black Cloak's award-winning digital executive protection platform
secures their personal devices, home networks, and connected lives.
Because when executives are compromised at home, your company is at risk.
In fact, over one-third of new members discover they've already been breached.
Protect your executives and their families 24-7, 365, with Black Cloak.
Learn more at blackcloak.io.
N2K Strategic Workforce Intelligence optimizes the value of your biggest investment,
your people. We make you smarter about your team while making your team smarter.
Learn more at n2k.com. This episode was produced by Liz Stokes. Our mixer is Elliot Peltzman.
Our executive producers are Jennifer Iben and Brandon Karf. Our executive editor is Peter
Kilby, and I'm Dave Bittner. Thanks for listening.
We'll see you back here
next time.