Risky Business - Risky Business #759 – Why Iran's hack and leak will amount to naught
Episode Date: August 14, 2024

On this week's show, Patrick Gray and Adam Boileau discuss the week's security news and recap the best research presented at Black Hat and DEF CON in Las Vegas last week. They cover:

Iran tries an election hack'n'leak like it's still 2016
CrowdStrike takes home the Pwnie for Epic Fail at DEF CON
UK healthcare SaaS faces six million pound fine for lack of MFA
US circuit courts disagree on geofence warrants
Our roundup of juicy Black Hat/DEF CON research
And much, much more.

This week's episode is sponsored by Trail of Bits. CEO Dan Guido is fresh back from the DARPA AI Cyber Challenge at DEF CON, where the Trail of Bits team moved through into the finals. Dan talks through the challenge of finding, reporting and fixing bugs with AI systems.

You can also watch this week's show on Youtube.

Show notes

Trump campaign points finger at Iranian hackers for documents leak
FBI says it's investigating efforts to hack Trump and Biden-Harris campaigns
Iranian hackers ramping up US election interference, Microsoft warns
State Dept puts $10 million bounty on IRGC-CEC hackers
CrowdStrike snafu was a 'dress rehearsal' for critical infrastructure disruptions, CISA director says | Cybersecurity Dive
Dominic White 👾 on X: "CrowdStrike accepting the @PwnieAwards for "most epic fail" at @defcon. Class act. https://t.co/e7IgYosHAE" / X
Russia's Kursk region suffers 'massive' DDoS attack amid Ukraine offensive
Elon Musk on X: "@markpinc Yeah" / X
Progress Software says SEC declines to pursue action related to MOVEit exploitation spree | Cybersecurity Dive
NHS software supplier Advanced faces £6m fine over ransomware attack failings
Security bugs in ransomware leak sites helped save six companies from paying hefty ransoms | TechCrunch
5th Circuit rules geofence warrants illegal in win for phone users' privacy | Ars Technica
Customs and Border Protection agents need a warrant to search your phone - The Verge
Hackers could spy on cell phone users by abusing 5G baseband flaws, researchers say | TechCrunch
'Sinkclose' Flaw in Hundreds of Millions of AMD Chips Allows Deep, Virtually Unfixable Infections | WIRED
Downgrade Attacks Using Windows Updates | SafeBreach
Listen to the whispers: web timing attacks that actually work | PortSwigger Research
Bucket Monopoly: Breaching AWS Accounts Through Shadow Resources
Confusion Attacks: Exploiting Hidden Semantic Ambiguity in Apache HTTP Server! | DEVCORE
Trail of Bits Advances to AIxCC Finals | Trail of Bits Blog
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray. We're going to get into the news section in just a moment with Adam Boileau and then we'll be hearing from this week's sponsor, Trail of Bits.
So Dan Guido from Trail of Bits will be joining us this week. They're a security engineering firm and they've been participating in the DARPA AI Cyber Challenge, right? And they just
had a round of this at DEF CON and he's going to join us to walk through, you know, what people
were able to show off during that round and, you know, just talk a little bit about using AI and
large language models to do vulnerability discovery and also patching. So that's a very
interesting interview and it's coming up a little bit later on. But first up, of course, we're joined by Adam Boileau.
G'day, mate.
Yeah, top of the morning to you, sir.
All right, so let's get into it now.
And is this a case of here we go again?
Because it looks like Iranian government hackers,
you know, Iranian crews,
have apparently obtained a whole bunch of stolen material from
the Trump campaign and are leaking it to the press, right? So everybody's like, oh my God, it's 2016 all over again, you know, election interference and whatever. I don't think this is going to move the needle a millimeter, and I guess we'll get into that in a moment. But what is your feeling on all of this?
Yeah, I mean, obviously it brings back some bad memories of the mess that happened in the previous round of elections, with, you know, "but her emails" and, you know, Hillary's email servers in Ukraine and all the other complete madness. But as you say, like, the US election has so many other things going on that I feel like, you know, even if the Iranians did want to influence the election, they're probably better just sitting out and watching it. Maybe it was Roger Stone, the former Trump kind of associate advisor, who got compromised, and then they used his email to email people at the Trump campaign, presumably phished them, you know, for either code exec or creds, and then went onwards to stealing documents and passing them off to the media.
That seems to be the story.
Former Trump associate and Batman villain Roger Stone was involved in this, so it looks like they got his email account, loaded it up with, yeah, phishing links and whatever, and started sending it out to people in the Trump campaign.
There's, you know, there's a few interesting elements to this one. I think one interesting part of this is that the press so far, like, hasn't really reported on this stuff. I think another interesting difference between now and 2016 is this doesn't have novelty. We're used to the idea that foreign states are going to do this. So while the Guccifer 2.0 persona held, at least in some people's minds, for a little bit, I think we've all sort of moved beyond that now.
Yeah.
And, you know, just the media generally being a little bit responsible about this.
I have a feeling, though, if it was a leak
from the Democratic side, you know, Newsmax
would be running stories about, you know,
how, you know, Kamala Harris is an alien or whatever,
you know, like, so I think there is, you know,
there's a few differences.
And I think really, sadly, one of the differences is,
you know, like, which side it's targeted.
But also, like, what's going to happen?
Are leaks going to embarrass Donald Trump?
Like, the man cannot be embarrassed, right?
So I just, I understand why people are getting tense over this,
but I really do feel like this is not, you know, this will be like,
I mean, someone shot at Donald Trump a couple of weeks ago,
and, you know, everyone seems to have already forgotten about it, right?
So I think this is going to just disappear out of the news cycle and not make a difference to anything.
Yeah, I think you're right.
It's just like, this whole campaign is so bonkers that foreign hack and leak is just not even... like, it's just pocket lint, you know?
Yeah, yeah. I mean, I did hear, you know, someone who tracks these sorts of groups suggested to me that this is less about trying to interfere in the election and achieve a certain outcome as it is about personally kind of going after Trump, right? Because the Iranians definitely have a bone to pick with him after he turned Qasem Soleimani into paste.
So they hate him.
So you do maybe get the impression that this is more about
the Iranians trying to do stuff to just annoy Trump
rather than aiming for a specific outcome in the election.
What a world, man. What a world.
It is. Cyber war is statecraft, right? So, yeah, exactly.
Yeah, I've linked through to Catalin Cimpanu's write-up on that. He's our colleague, and, yeah, he did a good job on that one. And, yeah, so the FBI is investigating that one, and, you know, Microsoft has warned that the Iranian hackers are ramping up election interference. But, you know, again, I just don't see this going anywhere.
But, yeah, we'll have to wait and see.
Interestingly enough, the State Department in the United States has put out a $10 million bounty on a different set of Iranian hackers.
So there's like lots of Iran news at the moment.
And I really feel like, by the time we're all talking about a threat actor having arrived at a certain level of status, they've already been there for a while. But it certainly looks like these days Iran is, shall we say, good at the cybers.
Yes, this was the group that's been hacking, like, control systems, embedded control systems, like screens, for example, to display messages about Israel. And I think it's part of the Revolutionary Guard, so, like, part of the government, but kind of poses as a hacktivist group, even though presumably they're just people who work day jobs at desks like everybody else in the military there.
So, you know, we've seen them doing this for a while,
but as you say, Iran is definitely getting
a bunch of attention because, you know,
it kind of dovetails well with other stories around the region and all the other stuff that's going on there as well.
I mean, it's interesting, isn't it, that election interference is, like, sort of passe, doesn't have novelty anymore, and government hackers posing as hacktivists these days is just workaday, right? Like, I remember when, you know, the first couple of big ones around that...
you were talking about the Sony Pictures hack
when you had Guardians of Peace, right?
Which was the North Koreans.
You had, and before that,
you had the huge attack against Saudi Aramco.
That was Iran, I think, wasn't it?
And that was, what did they call themselves?
Yeah, the Cutting Sword of Justice, which is...
Something like that.
I mean, yeah, the Russians versus the Winter Olympics
in South Korea.
Was that the one where they pretended to be activists?
Well, yeah, and then Guccifer 2.0, you know.
I'm a lone hacker who doesn't like Hillary or whatever.
So, yeah, crazy times.
But, yeah, certainly Iran, definitely. You know, of the big four, Russia, China, Iran and North Korea, Iran and North Korea used to be the sort of laggards.
And these days, you know, I don't know.
They're doing it.
They're giving it a go, Adam.
They certainly are.
Yes.
Yeah.
So we got some interesting comments here out of Jen Easterly, the director of CISA in the United States. I found this interesting because she is saying something that we said, which is that when the CrowdStrike event happened, you know, the first thing I thought of is, like, well, this is kind of the type of effect that the Chinese would be going for with something like Volt Typhoon. And indeed, she's come out and said that, you know, this was a great dress rehearsal to understand what those sorts of things might look like. One thing I do find interesting, though, and this comes back to that whole discussion that we regularly have about, you know, the concept of cyber war: ultimately it inconvenienced a bunch of travelers, but the average person on the street is not still thinking about the CrowdStrike outage at this point. Like, it is something that happened. It's done. We've moved on. It was a mess, you know. So you do sort of wonder, well, how much more effective could a campaign like Volt Typhoon, which is a multi-year, multi-stage, highly coordinated campaign, could it even get to the level of effectiveness of something like the CrowdStrike outage, right? And, you know, how does this change the way we think about these issues?
Yeah, I mean, I think, you know, when the CrowdStrike thing was happening, and it started to become clear that it was, you know, just a common or garden screw-up and not a cyber attack, and that it was in the kernel and that it caused things to, you know, blue screen during boot, there were quite a few conversations about, like, you know, what if this was actually a Volt Typhoon-style effort? What if it was really malicious? Because you had all the ingredients there. Like, if you did all the things CrowdStrike did, but then also bricked all the machines' hardware, like, bricked the firmware so they couldn't boot up anymore, that would not have been over so quickly. And that's kind of, like, if you were going to do this as a nation state: it has become clear that just making things not boot in software is not good enough, because we recovered from the CrowdStrike thing, you know, in the end, in a matter of a few days. But if you bricked every motherboard, like, in a way that required, you know, physical replacement, it would not take days. Like, that would be a much, much more severe thing. And I guess what this has shown us is that the bar for causing wide-scale disruption is pretty high, and if you're going to do it, you know, you need to do it for serious. You can't muck around with just, you know, bricking Windows. You've got to do it properly.
Yeah, I'm with you, but I still think, to a degree, fears of what an adversary might do, you know, full-scale cyber war, might be a little bit overblown.
Like, I'm sure they can make a mess.
But what you describe in terms of, like, perma-bricking... like, so imagine it's deploying some sort of crypto or some sort of destructive thing. I don't know, you know, could the Chinese send a plant to work at CrowdStrike for years to, you know, get into a trusted position where they can deploy, you know, a crypto into their kernel code or something? I don't know. It just feels like you can construct all of these hypotheticals, but we always find a way. I mean, there were Indian airlines through the CrowdStrike thing that were handwriting boarding passes, right? Like, we always find a way, I guess, is the point. But I think what Easterly says is right, which is, like, this is what it could look like, right? And we, you know, we now have a bit of a dress rehearsal.
Yeah, yeah, I think we do. And it's, you know, it's a good reminder why the US government
has been making so much noise around Volt Typhoon,
like why they've been specifically keeping that in the conversation
compared to everything else they could be talking about
because we need to learn some lessons
so that we are ready for the real deal.
Yeah.
We have to give a shout-out to CrowdStrike president Michael Sentonas, though.
He's actually an Australian guy who's the president there these days.
He actually turned up to DEF CON to accept the Pwnie Award
for Most Epic Fail.
Yeah.
And, like, that award was huge.
It was ginormous.
Yeah, they gave him an extra big trophy for that one, yeah.
Yeah. But, I mean, you know, good on them for being good sports
and getting up there and taking it on the chin and saying,
hey, yes, we screwed up.
We're going to do better, you know, and not, you know,
running away from that.
So, yeah, good for them.
We're going to spend a bit of time today, too, recapping some of the more interesting research that came out of Black Hat and DEF CON, which were held in Vegas, you know, over the last week. But before that, yeah, we do have a few more news items to get through. One is that there's been a massive DDoS attack, apparently
few more news items to get through. One is that there's been a massive DDoS attack, apparently
targeting the Kursk region of Russia. This is, of course, where Ukraine has essentially invaded
Russian territory, and appears, at least for now, to be holding it.
But I guess this is just one more example of what it looks like when contemporary militaries
use cyber as part of their toolkit. And yeah, I think that's one thing we've really seen this
over and over in Ukraine, that those DDoS attacks tend to roll in as part of a military campaign.
I'm not sure what they actually achieve, but we can certainly say that this is now something that militaries do.
Yeah, it seems to be, as you're saying, kind of table stakes for modern combat, to follow up combined arms with the cyber space.
Over on Between Two Nerds, the podcast that we run, there have been so many conversations about what even is cyber war.
Does it work?
Is it useful?
And what do attacks like this mean?
And I'm sure we will hear, you know,
the Grugq and Tom offer their opinions on this as well
because, yeah, like it didn't feel like it did much,
but then again, we're not there.
You know, we're not on the ground in Kursk,
so, you know, maybe it did cause more impact.
Maybe it didn't.
Who knows?
Yeah, yeah.
So that piece is by Daryna Antoniuk over at The Record.
Links to everything, as usual, are in the show notes.
Now let's talk about another DDoS attack.
Adam, politically motivated.
Now you and I were talking this morning.
You are not anywhere near Twitter or X or whatever you want to call it these days.
You just, like, had enough.
Gone, right?
Not hanging around, no.
I rage quit over to Mastodon.
Yeah.
I mean, I'm on Mastodon, but I remain on Twitter.
I'm going to be there till the bitter end because I get enjoyment out of the chaos, to be honest.
I mean, there's all sorts of revolting people on there.
I don't think anyone should advertise on the platform.
But you can see some crazy stuff on there sometimes. And yesterday was one of those days where Elon Musk had his big interview with Donald Trump. I think he wanted
to call it a conversation because I'm not an interviewer, you know, that's a loaded term.
So he wanted to do a conversation with Donald Trump and they had technical difficulties, of course.
Which, funnily enough, Elon Musk says, there appears to be a massive DDoS attack on X,
working on shutting it down.
Worst case, we will proceed with a smaller number of live listeners
and post the conversation later.
And then someone replied,
it's Dems fighting to, quote unquote,
save democracy from two massive disruptors.
Musk replies, yeah.
So actually claiming a DDoS attack that no one seems to have observed.
It looks like straight up technical problems.
The other funny technical one in this is that Donald Trump
appeared to be speaking with quite a pronounced lisp
at various points during the interview,
which his camp later came out and blamed on the Spaces, like, audio compression algorithm. And I gotta say, man, I've been doing this a long time. I've never compressed audio and wound up sounding like Daffy Duck.
I mean, you might mess with me on occasion to make me sound a little bit funny, but, yeah, no.
So I'm just saying, there's still some good times to be had on Twitter, right? I mean, I'm not willing to throw that baby out with the bath water.
I think that's fine. I'll just tell you what happened there. It's cool.
Yeah, you know, you can function as my Twitter secretary who observes the interesting stuff and tells me about it.
I'll be your Twitter proxy, right? Like, that's fine. Now, in some more serious news,
Progress Software, which of course is the software company that owns Moveit, which was subject to,
you know, data extortion attacks. We all know about all of that. You know, for a while it
looked like they were being investigated by the SEC. Progress itself has come out now and said that it's not expecting any sort of enforcement action from the SEC.
And, of course, this comes just after a federal court in the United States dismissed parts of the SEC's civil fraud case against SolarWinds.
And we've got a report here from David Jones at Cybersecurity Dive.
I'm not quite sure if Progress's confidence in this case
is because of the federal court finding that said
that the SEC doesn't really have the power
to regulate companies' cybersecurity controls
in the same way it can regulate their accounting.
But either way, it looks like as far as the SEC is concerned,
Progress is kind of off the hook.
That said, they're still facing a deluge of lawsuits and other regulatory actions from, like, the FTC and whatnot. I just found this an interesting story because, you know, I think the SEC is trying to get more aggressive, and they're starting to find their limits, right, of what they can do.
Yeah.
Cause I mean, their job as a market regulator,
you know, it interacts with how people report their cyber issues and their controls and what they say about their companies,
but it's still not 100% in their bailiwick to be able to, you know,
do some of the things they have tried to.
And, yeah, I guess they're finding their feet in terms of the cybers.
And as you say, Progress may get off the hook this time,
but they've got so many other problems going on.
Like, I think when we reported about this, when Catalin wrote it up for Risky Business News, he also noted that, in the same breath, they're out there patching bugs in one of their other products that's being actively exploited, and they are far from fixed.
Yeah, yeah. I mean, I just find the SEC angle on this pretty interesting. And, you know, I have found over the last few years, you know, like
US regulators are getting much more active in terms of trying to punish the wicked, shall we say.
Yeah. And good on them because there are certainly some people who are doing
this stuff pretty wrong.
Yeah. Now, meanwhile, speaking of punishing the wicked: Advanced, which is a company that provides IT services to healthcare providers in the United Kingdom. They were a software supplier to the NHS and, you know, a big part of that ransomware attack that hit the NHS. When was that, 2022?
Yeah, two years ago now. Wow. Feels like yesterday.
They've been fined six million pounds for failing to protect the personal information of tens of thousands of people. This is a story by Alexander Martin at The Record. And it looks like really what this boils down to is that they weren't applying multi-factor authentication to some pretty powerful accounts. And that's what earned them this fine. And I've
got to say, I think that's a positive thing, because I don't think there's any excuse for not applying MFA to these types of accounts, right? And I think getting whacked with a fine is the sort of thing that executives at similar companies are going to pay attention to.
Yeah, I think that it's making it clear that multi-factor is not an optional control. Like, it's a thing: if you're doing business over the internet, if you provide these services as a service, you have to have robust authentication. Because so much of what we have thought about these kinds of services in the past, they've been on premise, right? Hospital systems have been managed by hospital IT teams inside a hospital; there is some kind of on-network, you know, presence control. And now it's just on the internet, single-factored. You know, it's just not good enough, and I think it's important that people get that message, and a six million pound fine is a pretty clear message.
Yeah, I agree. It sends a clear message, and I think it's a positive thing, right?
Yeah.
Yeah.
So that was the ICO, the Information Commissioner's Office.
We've got a similar office here in Australia as well.
It's a little bit different, excuse me,
to how the United States tends to regulate things.
It's a little bit more straightforward, shall we say.
This story I love.
Zack Whittaker has written it up for TechCrunch. Someone out there has saved six ransomware victims from having to pay up by owning the, like, control panels for the ransomware actors. I'm going to tell you why I think this is interesting in a moment, but just walk us through what happened here.
So the security researcher Vangelis Stykas set out to map out a bunch of command and control systems for ransomware projects and the other, you know, server-side services behind ransomware crews. And he found some bugs, some pretty standard kind of stuff, you know: APIs with no authentication, default creds, those sorts of things. And then he started monitoring what those crews were attacking. And he found, I think, four organizations that were in the process of being ransomwared. A couple were small companies, a couple were, like, billion-dollar crypto firms. And then he was able to reach out to them, communicate, warn them what was going on, provide decryption keys in a couple of cases as well. So, you know, bullet dodged for them because of some, you know, internet do-gooder. But on the other hand, there are always some risks in, you know, messing with these systems.
Yeah, I mean, the thing that I'm curious about here is, it looks like some of these bugs that he used involve things like default passwords. Now, if I go out and log into a system with a default password, I mean, I'm committing a crime, right?
You're doing crimes, yes.
And, you know, in years gone by, there would have been a lot of hand-wringing about, oh no, you know, what he did was possibly illegal and interfering with law enforcement operations and yada, yada, yada, whereas there's none of that really in this story. It's just, here's what he did, you know? So that's what I find interesting about this: I feel like the, I don't know, like, the sort of cyber Overton window on these sorts of shenanigans is actually starting to move a bit.
Yeah, yeah.
I think people have been doing this for a long time,
busting into crime groups and helping themselves to their data.
But generally people have kind of kept it quiet for exactly those reasons
because, you know, it's always been a bit of an unclear, murky ground.
And the groups that do this at scale, you know,
who break into crime organisations and things
have really kept it quiet because it's not worth the grief
But I think you're right that opinions have changed. Seeing hospitals and cancer patients and children's hospitals ransomed, like, it does kind of change how you feel about, you know, the fine print of the law here.
Yeah, I mean, I just wonder,
would it be a net positive or a net negative?
Given that FBI operations targeting these sorts of groups
tend to result in them discovering that most of them are based in Russia,
you know, I sort of wonder if, you know, the US Department of Justice issuing a release saying, we're not really interested in prosecuting people who do this sort of thing, would lead to a flurry of the sort of activity that we love to see. You know, I mean, is it just time for us to allow... and I'm not talking about letters of marque and mercenary cyber operators going after ransomware crews. What I'm talking about is just the DOJ saying, you know, if you happen to pop shell on ransomware infrastructure, we're not going to charge it. I just wonder what sort of position that would put us in, you know, because often we hear the FBI complaining that that sort of thing interferes with and disrupts their investigations. And their disruptions are great, but their investigations tend to lead to absentee indictments of people based in Russia. And I just sort of think, well, maybe it's time to just, you know, let people have fun.
I mean, when we have seen people go vigilante on the internet, it tends to be a little bit unhinged.
Yeah.
Because people, you know, people's motivations vary.
Some people want publicity.
Some people want to effect change.
Some people have a particular issue that they care about, you know,
for example, disrupting child sex abuse,
which I too do not care for child sex abuse, so that makes sense.
But, you know, there's just a plurality of motivations,
and some of them are just kind of messy.
Well, I think there's differing levels of competence as well, right? So that's the problem. Like, you'd think if the DOJ said, oh, we won't charge people for this, then you're just going to wind up with all sorts of idiots coming out of the woodwork and doing unpredictable things and attacking the wrong targets and whatever. So I think there are still reasons to be careful here. But, yeah, at the same time, you love to see it, right?
You do, you do. And, yeah, I don't know that, like, there is a straightforward answer, because, you know, we do enjoy a good mess, you know?
Yeah, we do. We love it. We love a bit of chaos, come on.
We do, yeah, we do. But, I mean, the DOJ can't really love chaos. It's not really their thing.
But why, though? Anyway, anyway, let's move on. Now, look, we're going to stay with a bit of a
theme around law. There's just been a Fifth Circuit decision in the United States that says that geofence warrants are illegal under the Fourth Amendment. Basically, the reasoning, I think, seems
to be that because, you know,
Google or whoever has to search through many, many users' data
to actually discover, you know, who was in the geofence at a particular time,
that that is a Fourth Amendment violation of the people who are not actually the targets and whatnot.
I've seen a bit of online discourse about this.
People saying, well, I don't know, this is a pretty nuclear take from the Fifth Circuit, and they don't necessarily expect it to stand because it has implications beyond geofence warrants. But this is one of those situations where we've got conflicting opinions from US courts, so you have to think this is just going to get kicked up and kicked up and kicked up into higher and higher courts. I mean, some of these geofence warrants,
I think, I mean, you know, they're used to solve serious crimes. It's like those warrants as well,
where I can think of, it was like some murder a while ago where a cop had a brainwave and
requested that Google hand over information on users who'd searched for the victim's address
via Google Maps or whatever. And they got a hit from someone who'd searched for it, like, two weeks earlier. It turned out to be the person who did it. But I guess the point is, we're still working out, really, the rules for this sort of thing, particularly in the United States, where they have constitutional protections around this sort of stuff. And, yeah, you just wonder where all this is going to go.
Yeah, I mean, I think you're right. It's going to have to end up going... I guess the Supreme Court is next after this. Because there was the Fourth Circuit, which decided that this kind of search, like a geofence search, wasn't a Fourth Amendment violation, and now the Fifth Circuit has gone the other way. So I guess when you have a disagreement like that, it's going to have to end up getting escalated upwards. And again, right, it's one of these things where, you know, this is complicated.
There are equities to balance.
And, you know, America does have better enshrined protections in some ways than other countries, right?
I mean, we in Australia and New Zealand, you know, we don't have that kind of restriction.
We have a much more kind of nebulous Westminster, you know, common law kinds of things.
Whereas, you know, it's much more explicitly stated in the US, which is good. And at the same time, you know, it's just going to
have to keep going up until someone can make a call because it's important that we balance those
equities. I don't know where the next step is for this one. I don't know if this winds up in the
Supreme Court. But I think, yeah, the point that you raise, right, which is, here the government will just pass a law and say, yeah, we're doing this.
There's no Fourth Amendment protections to worry about.
And, you know, both systems work, right?
Both systems have their various advantages.
Look, there's another one here.
We're going to link through to a piece from The Verge.
And this one's a couple of weeks old.
We didn't talk about it at the time, but given we're talking about the geofence stuff, I
thought it would make sense to lump it in with this discussion. A federal judge in New York's Eastern District has ruled that warrantless searches of the devices of people who are entering the United States from international travel are unconstitutional. And to be honest, that's one where I've always kind of been surprised that they've been able to do that. You know, I always would have thought that that would be somewhat of a Fourth Amendment problem, and apparently this judge in New York's Eastern District agrees. From here, I think the practical effect of this ruling, at least according to this reporting, is that CBP will just have to stop doing those searches in New York. But again, this is another case where, you know, we're still trying to work out what should be allowed, what's constitutional in the case of the Americans, what's normal.
I mean, we have seen some pretty egregious abuses of these types of searches.
I can think of cases in Australia and New Zealand, historical cases that go a long way back. I can't
really remember the details, but you'll have someone who's a member of a criminal organization where police have been trying to get a warrant for their device. The judge says that
the conditions for the warrant haven't been satisfied. So they wait until they're going for
a holiday in Fiji or something and they get them at the border and they seize their device and it's a custom search.
So that is an abuse as far as I'm concerned.
That's like a backdoor, you know, that's a backdoor search.
And I've always thought that that is improper. In this case, the guy who was coming in, they had some intelligence that suggested he might have been carrying CSAM, and they used that to justify the search, but they didn't have a warrant. And, you know, this judge just said, yeah, you need warrants for this sort of thing in the future. But, you know, that said, you would think, given that they had that intelligence, they might actually be able to get a warrant, so maybe everybody wins. It does feel like if you're using these things specifically to circumvent the requirements of having to get a warrant otherwise, you know, that doesn't feel fair, that doesn't feel like the system working as intended. I was surprised, actually, because, I agree with you, I am surprised that this is a thing that was not problematic in the US. And I figured it was just because it didn't get used at enough scale for people to kind of get to the point where people were making a fuss about it.
This reporting said they have done more than, what, a quarter of a million searches since 2018.
Yeah.
So that's still quite a lot.
It's more than I was expecting
because I figured like they must be kind of keeping it
for special occasions
so that it didn't get taken away from them.
And I guess, you know, we are only talking about New York
and some big airports there, of course.
But, you know, the structure of the US does mean that having different laws all over the country is a little confusing,
especially for us outsiders that are used to, you know, a country having one kind of set of things. This is a federal court, though, so I think this will eventually have ramifications beyond just New York. But I'm not a lawyer, man. Yeah, exactly, I can't 100% say. But I mean, you and I are dirty foreigners, so we get the full freedom experience when we fly into the US regardless. Yeah, I mean, some of the laws around this, like, outside of the US as well, like in Australia, I think the law, or the regulation or whatever it is, that they rely on to inspect people's devices is that a device is a container, and customs can open any container to look for stuff that shouldn't be coming in. And sometimes this is a really good thing, right? Like, where you've got intelligence that someone is, you know, abusing children or whatever, who's just coming back from a trip where that sort of thing happens, they can search the devices. And sometimes these idiots have these, you know, crimes well catalogued on their camera roll, right? And bang, you get to put them in prison. So, you know, you've got to see it from both sides, right? Like, I don't want my devices searched at a border, but, you know, if giving up that freedom gets these people put in... But then by that logic, you know, that's sort of getting into "you've got nothing to worry about if you've got nothing to hide," and that's not really good either. So it's almost like these issues are hard and complicated. Yeah, and they also use it to, like, look, if someone's coming in on a tourist visa, for example,
they might access a device and look at the calendar
and see if there's a whole bunch of business meetings scheduled, right?
So, you know, to see if people are satisfying visa requirements,
if they're involved in crime, onwards and onwards.
But yeah, it's a fraught issue.
It's just interesting to see this come up now, I think.
Yeah, yeah, it is.
Because it's not like smartphones are new. Anyway, so that concludes the news segment of, you know, various events and happenings. But now we're going to talk through a research wrap. Of course, the Black Hat and DEF CON conferences were held last week in Las Vegas in the United States, and, you know, you've been through the coverage
and you've picked the research that you want to talk about, right?
So this is totally your jam.
And the first one we're going to link through to Lorenzo's write-up
over at TechCrunch.
The first one we're going to talk about is some really interesting research into 5G mobile networks, where you can essentially, like, I mean, if I'm reading it correctly, and you'll probably tell me that I'm wrong, you can essentially stingray 5G networks using some baseband flaws in 5G devices.
This looks really interesting.
Yeah.
I mean, that is a basic summary of one of the bugs that this research group came up
with.
And they're academics, actually. So this is a group of researchers from, I think, Pennsylvania State University, and their papers haven't yet been published, and I haven't seen the talk from Black Hat, so I'm going on the reporting on it, and they published a bunch of code on their GitHub that related to their work. But the guts of it is that they wrote some tooling that can inspect 5G mobile stacks and, from that, generate the state machines that the mobile stack goes through. And then, given some basebands from mobile phones and some cell site software, extract these states and then compare them between implementations, looking for where you can get them out of state relative to each other. And as a general approach, that seems really cool, because it's the kind of thing you can use on all sorts of complicated protocols. And they demonstrated this on 5G mobile networks and found, as you say, the ability to stingray 5G, which is pretty cool, along with a bunch of other interesting bugs. So solid research, by the look of it. I'm really interested to see the rest of the papers and stuff when it comes out. But, you know, it seems like a bit of a step backwards, given that we had such, you know, robust crypto in 4G, and now in 5G we've kind of messed it up again by trying to make
things faster and easier. I mean, you get less these days by doing this to a mobile device, right? Like, you know, you're going to get, I guess, SMS equivalent, whatever the hell they're using for that these days, and you might get voice calls. But you're not going to get anything over, like, RCS, and, you know, any over-the-top encryption is going to defeat that sort of interception, which does make it less valuable. Of course, there's still lots of useful things you can do, but it's not the skeleton key for everything that access to mobile used to be. Yeah, so you're not going to get email if they're using a modern protocol, you're not going to get web browsing. Like, there's just a lot you're not going to get. I'm not trying to poo-poo the research, the research is good. I'm just pointing out that, these days, we have encryption at the app layer now, and that's why, right? This is why.
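The differential state-machine approach Adam describes can be sketched in a toy form: extract each implementation's state machine, then look for transitions one baseband accepts that another rejects. The states and message names below are invented for illustration, not the real 5G NAS states.

```python
# Toy sketch of differential state-machine testing: diff two implementations'
# transition tables and flag the places they disagree. A divergence is a
# candidate spot to probe with a fake base station.

def find_divergent_transitions(machine_a, machine_b):
    """Return (state, message, next_a, next_b) tuples where the two
    implementations disagree on what a message does."""
    divergences = []
    states = set(machine_a) | set(machine_b)
    for state in states:
        msgs = set(machine_a.get(state, {})) | set(machine_b.get(state, {}))
        for msg in msgs:
            next_a = machine_a.get(state, {}).get(msg)
            next_b = machine_b.get(state, {}).get(msg)
            if next_a != next_b:
                divergences.append((state, msg, next_a, next_b))
    return divergences

# Hypothetical example: baseband B accepts an unauthenticated downgrade
# message in the REGISTERED state, where baseband A correctly ignores it.
baseband_a = {"IDLE": {"register": "REGISTERED"},
              "REGISTERED": {"auth_request": "AUTHENTICATING"}}
baseband_b = {"IDLE": {"register": "REGISTERED"},
              "REGISTERED": {"auth_request": "AUTHENTICATING",
                             "unauth_downgrade": "IDLE"}}

print(find_divergent_transitions(baseband_a, baseband_b))
```

The appeal of the technique is exactly what Adam notes: nothing here is 5G-specific, so the same diffing works for any complicated stateful protocol.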
Now, this one, we've got a couple of really interesting bits of hardware research here. So I think it's, was it Andy Greenberg who wrote this one up for WIRED? Yes, it was, yes. So tell us about this one. This is an AMD bug that looks extremely not great. Yeah, so Andy wrote up some research from IOActive, who have been looking at platform security for AMD CPUs, and in particular they started looking at how System Management Mode works. So in modern x86 processors, the kernel runs in what they call ring zero, and then if you have a hypervisor, that runs in a privilege level called ring minus one, and beyond that there is a ring minus two, which is for system management, and that's where, you know, hardware access and control is kind of managed from. And if you can get up into System Management Mode, you are, you know, better than the hypervisor, you're beyond all security software, you're in a position where you can do anything you want with the platform, and of course it also gives you access to hardware for things like overwriting firmware, etc. And there have been flaws that have let you go up into System Management Mode on the Intel side, and IOActive set out to look for the equivalent thing on AMD. And they found a bug where basically they could confuse the processor, when it was loading System Management Mode code, into reading attacker-supplied input rather than the stuff it was intended to read, and from there get code exec up in SMM. And the flaws that they're looking at basically go back 15, 20 years, like, since AMD introduced this kind of ring minus two level. And AMD can patch this in firmware on some CPUs; they're not necessarily going to go back and patch it on everything they've ever shipped, of course. So if you're in the position where you've compromised the machine and you want to
be there for a very, very long time, and the conversation we had earlier about, you know,
how would you do CrowdStrike if you really wanted to make it hurt, getting into a position where
you're underneath the hypervisors, you're underneath all the security software, and you
can persist by writing down into the firmware in a way that you really can't be thrown out
without junking the whole motherboard.
Like that's the sort of bug that we would be very, very scared of.
And the fact that it's existed for 20 years, not great.
Yeah, I mean, doing that at scale would be difficult though, right?
So I think that's one thing where the CrowdStrike thing is a little bit different
because that's more like a supply chain attack.
I get what you're saying. Like, I 100% get what you're saying.
But, yeah, these things are a little bit different in my head at least.
You know what I mean?
Yeah.
Like, if you – I guess the way I think about it is there is a bunch of bits
of research that if you joined together, like you glued this together,
you would be in a position where, you know,
if you could brick all AMD data center systems that ran CrowdStrike in one go, for example, like, join the CrowdStrike bug with a, you know, escalate-to-SMM, overwrite the firmware
so the machine will never boot ever again,
you know, bad times.
You've been thinking about this a little bit.
A little bit too much, I think, actually.
Remember Scorpio?
Scorpio from The Simpsons.
Gentlemen, I have the doomsday device.
This is you.
You need a long-haired cat to stroke when you're talking about that,
you know, to match the beard.
You can stroke my beard.
Now, look, this one is my favorite, actually, this week.
Sorry, the 5G one was the other sort of kind of hardware,
firmware-related one.
This one is not.
This one is a Windows downgrade attack,
which I just wouldn't have thought was possible, right?
So we've seen previous attacks against like updaters
and whatever where you can, you know,
you can throw previous versions.
I think there was one years ago affected a whole bunch
of Linux distros where you could throw like old updates
at them and get them to
essentially downgrade into a vulnerable state. Looks like someone's figured out how to do this with various Windows components, which is surprising. Walk me through this, I want to understand how practical this is, because this shouldn't be happening. So this is, sadly, practical. This guy, Alon Leviev, looked at how Windows Update worked
and went off looking for a downgrade bug.
And there's a lot of digital signatures
and kind of integrity checking and mechanisms in place
to try and make this more complicated.
But those mechanisms have all rather evolved over the years
and the whole system, you know, an eye on it is probably well overdue. The bug, you're going to groan when I tell you what the actual flaw was. So when you're Windows updating, the Windows Update kind of back-end system gets told by the user interface or the scheduler or whatever else, like, go get the updates. It gets the updates, then it stages them into a particular place, processes all the metadata, and then goes through and applies them. And some of the updates require a reboot, and so it says, like, next time you reboot, here is a list of things you should do. And then you reboot your Windows, and it comes up with, oh, you're updating, please don't turn off your system. And sometimes those updates take multiple reboots to apply correctly, and in between those reboot steps it saves state. And so everything is digitally signed and well authorized, except the little thing that saves state between reboots. Right, so if you've got system-level access, you could modify that? Yes, and get it to apply a validly signed older update. Yeah, so you can change which updates are being applied, and kind of what state they're in and which parts of it, and cause it to apply old updates. So you can bypass all of the digital signing requirements and basically get to the point where you can roll a machine back years, and then the update system thinks it has applied all of the modern updates, so it won't apply them again. So it's kind of permanent as well. So to roll it back, you basically have to reinstall at this point.
So that's terrible.
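The flaw class here can be modeled in a few lines: every package is signature-checked, but the little file that saves state between reboots is not, so tampering with it points the updater at a validly signed but older package. Everything below (package names, payloads) is invented to illustrate the shape of the bug, not the real Windows Update layout.

```python
# Toy model of an updater that verifies package signatures but trusts an
# UNSIGNED saved-state action list between reboots. Both kernels were
# legitimately signed at some point, so rolling back passes every check.

import hashlib

SIGNED_PACKAGES = {   # package -> trusted digest (stand-in for a signature)
    "kernel-2019": hashlib.sha256(b"old vulnerable kernel").hexdigest(),
    "kernel-2024": hashlib.sha256(b"current patched kernel").hexdigest(),
}
PAYLOADS = {
    "kernel-2019": b"old vulnerable kernel",
    "kernel-2024": b"current patched kernel",
}

def apply_pending_actions(pending_actions):
    """Apply each action after the reboot. The action list itself is never
    authenticated, only the packages it names."""
    installed = []
    for package in pending_actions:
        digest = hashlib.sha256(PAYLOADS[package]).hexdigest()
        if digest == SIGNED_PACKAGES[package]:  # integrity check succeeds
            installed.append(package)
    return installed

# Intended flow: the scheduler wrote ["kernel-2024"] before rebooting.
# An attacker with system access rewrites the unsigned list instead:
print(apply_pending_actions(["kernel-2019"]))  # rolled back, checks all green
```

The fix, conceptually, is to sign the saved state too, which is why Adam says the whole mechanism is overdue for a review.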
And then he goes one step further,
which is can he use the same set of tricks
to escalate up into the Windows hypervisor components?
So modern Windows, like Windows 11, Windows 10, has a mechanism where it handles credentials in a separate virtual machine, right? There is your main Windows OS, and then there is a separate kind of VM environment that handles privileged operations like credentials, and that's why stealing credentials out of LSASS on modern Windows is difficult. And so he uses the same tricks to apply updates up into these even more privileged contexts, and onwards up into the UEFI even. So he's able to update, or downgrade, I guess, his way to the point where you've bypassed Secure Boot, because there have been bugs in that, so you can downgrade that through the same trick. So essentially you're back to: we can bypass Secure Boot, we can bypass all the hypervisor-based controls, and we can roll back our system to ancient, vulnerable versions of everything.
But, I mean, this is post-exploitation activity, though, right? Like, it's not remote. The thing that made the Linux stuff, like, horrible was you could do it remotely, I think. Yeah, yeah, that was because some of the package repository metadata was usually handled over HTTP, because it was signed, the files themselves were signed. Yeah, exactly, right? So they'd done all the signing on the updates, so why bother with that fancy newfangled TLS stuff, right? Because it's expensive to do on high-volume archive servers and blah, blah, blah, yeah. So that turned out to be the wrong call. Yeah, yeah, yes.
So, I mean, this is, like, post-system-compromise, right? So that's something.
But, yeah, it's still not great because you would think that if you want to maintain persistence somewhere, you know, rolling that machine into a vulnerable state.
So if you do get evicted and they don't completely reimage the device, you can just come back.
Yep.
You know, that seems handy.
Yes, it certainly does.
And, like, it means you can bypass Secure Boot and some of the other hypervisor-based controls. So, like, these two bits of research we just talked about, the AMD Sinkclose and this one, in my mind, if you were the CrowdStrike person, someone who did CrowdStrike for evil, and you wanted to ensure that you maintained access wherever you bricked things, or you kind of got enough access to brick things, getting to the position where you can write to the UEFI through a bug, by being able to downgrade your way like this, like, I feel like it, you know, lines up pretty well. Like, these things pair nicely. It's like a food and wine combination, you know, it's the duck breast with the Pinot Noir. Exactly. Exactly. Yes.
So, you know, it's worth pointing out again
that the best attackers in the world
are already doing a lot in hardware.
Yes.
Right?
Like this doesn't make the news,
but that is something that they are doing.
And as we know, it's a matter of time.
It's just a matter of time
before other types of attackers start doing that as well.
We are hideously underprepared when it comes to tooling
to address these risks,
and that's going to be a lot of fun in five to ten years.
Yes, absolutely.
Moving on, and look, I love a bit of research from James Kettle,
of course, from Portswiger,
and we've got some here that he's presented in Vegas.
Yeah, I mean, I guess the idea that you should watch James Kettle's talk from Vegas is not exactly insightful because he always does great talks.
And this one, you know, it's exactly what you expect from him: really, really interesting bug classes made practical through PortSwigger's tooling, in a way that, you know, takes things that you might read a paper about but never actually use, and makes them things that everyone can use. And then he, as always, goes and does them against everyone who has a bug bounty program, because, once again, PortSwigger, that's how they roll. And it's just wonderful research. This looks at the utility of timing attacks to do all sorts of really, really interesting things, and how to make timing attacks practical in modern web applications through things like HTTP/2 pipelining to deliver requests at the same time, and interleaving requests so that you can cause two requests to be delivered very, very, very close together in time, taking out network latency, all those kinds of useful things. So it's pretty detailed. But if you're a web app pen tester, as always, you should read his work and make sure you internalize it and give your customers' applications hell.
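The core statistical idea Adam mentions, delivering paired requests so they share the same network jitter, can be shown with a small simulation. This is not PortSwigger's tooling, just an illustration with made-up latency numbers: when you time requests independently, jitter swamps a sub-millisecond difference; when a pair rides the same connection, the shared jitter cancels.

```python
# Simulate timing two endpoints where B does 0.5ms more work than A, with
# 0-50ms of network jitter. Independent timing buries the signal; paired
# delivery (same connection, same jitter) recovers it exactly.

import random
import statistics

random.seed(7)
TRUE_DIFF = 0.0005   # endpoint B is 0.5ms slower than endpoint A

def independent_samples(n):
    # each request sees its own network jitter
    a = [random.uniform(0, 0.05) for _ in range(n)]
    b = [TRUE_DIFF + random.uniform(0, 0.05) for _ in range(n)]
    return statistics.median(b) - statistics.median(a)

def paired_samples(n):
    # both requests in a pair share one connection, hence one jitter value
    diffs = []
    for _ in range(n):
        shared_jitter = random.uniform(0, 0.05)
        diffs.append((shared_jitter + TRUE_DIFF) - shared_jitter)
    return statistics.median(diffs)

print(f"independent estimate: {independent_samples(200):+.6f}s")
print(f"paired estimate:      {paired_samples(200):+.6f}s (true diff {TRUE_DIFF}s)")
```

The paired estimate lands on the true difference almost exactly, which is why single-packet-style delivery makes web timing attacks practical at all.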
Yeah, I mean, I think that's the thing about, you know, James Kettle's research,
which is that it always winds up working its way into every sort of red teamer and pen tester's, like, workbook eventually, right? So.
Yeah. Just super practical.
Like really follows that hacker ethos of making the theoretical practical.
Yeah. And we've also got some research here around, uh,
well the title here says breaching AWS accounts through shadow resources.
I did not read this one.
I'm sorry, but yeah, tell us about this one. So this is some research that looked at
creating S3 buckets in Amazon AWS that can influence the flow of other tools that store
their data in S3. So for example, if you use Amazon CloudFormation to roll out, you know,
to automate deployment of things into your cloud,
then it will store some like configuration
and stuff that it's going to use into an S3 bucket
and will make one for you if one doesn't exist.
And these account names or these bucket names
are kind of, they're not predictable,
but if you can find them out in advance
or make them in a different region
or something like that,
you can influence how these tools consume data.
It's kind of logically the same issue
as predictable file names in temp on a Unix system, right?
You're creating data in a way
that trusted processes think they created it
and then act upon that data.
And they've looked through a number of AWS services where, as an attacker, you can manipulate that, you know, leading upwards to full code exec or account takeover, just basically by guessing an S3 bucket name, or determining it in the cases where they are predictably named.
And it's the sort of research that,
there's so many interesting niches in cloud stuff,
like there's so many fiddly detail bits
that you kind of don't think about
how these things actually work.
And it's great to see people doing this research.
And if you're responsible for cloud systems, like, this is mandatory reading, in my opinion.
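The "shadow resource" idea above is easy to see once you know that several services, per the write-up, derive their helper-bucket name from the account ID and region, so an attacker who learns your account ID can pre-create that bucket in a region you haven't used yet. The name patterns below follow the ones described in the research, but treat this whole sketch as illustrative, not an exact inventory.

```python
# Sketch of why shadow-resource bucket names are claimable: the service
# bucket name is a pure function of account ID and region, and S3 bucket
# names are globally unique, so whoever creates the name first owns it.

def glue_assets_bucket(account_id: str, region: str) -> str:
    return f"aws-glue-assets-{account_id}-{region}"

def sagemaker_bucket(account_id: str, region: str) -> str:
    return f"sagemaker-{region}-{account_id}"

def claimable_names(account_id: str, victim_regions: set, all_regions: list):
    """Bucket names an attacker could register in regions the victim hasn't
    touched yet, so the service later trusts attacker-created content."""
    names = []
    for region in all_regions:
        if region not in victim_regions:   # victim hasn't created it there
            names.append(glue_assets_bucket(account_id, region))
            names.append(sagemaker_bucket(account_id, region))
    return names

print(claimable_names("123456789012", {"us-east-1"}, ["us-east-1", "eu-west-1"]))
```

That is exactly the predictable-temp-filename analogy from the discussion: a trusted process consumes data from a name it assumes it created.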
Yeah, I mean, I'm just blown away by the quality of research this year. There was an awkward time when Black Hat used to be about presenting, like, memory corruption bugs and research, right? And then all of that kind of went into the defense contracting scene and people stopped presenting on it, right? So there was this weird lull for a few years in Black Hat research. And of course, now there's all sorts of offensive research that does belong in the public domain, and, you know, getting to see it all presented like this is terrific. We've got one more to talk about before we move on to this week's sponsor interview, which looks like some research into Apache, right? Which is, I've got to say, that's brave, like, targeting one of the giants.
You know, going for the king. Yeah, well, this is Orange Tsai, the Taiwanese researcher who, you know, when he decides he's going to do it, it's going to get real. And yeah, he decided he was going to go swing at Apache, and he found some really good bugs, actually. Essentially what he did is he looked at how Apache is made up. So Apache is a kind of core bit, and then a number of modules you plug in
to do various things.
And the software communicates between those modules
using a number of shared kind of data structures.
And his rationale was,
all these modules are written by different people
over a very long period of time.
And there isn't a clear, necessarily a clear consensus
about how these data structures should be used.
One of the examples would be whether or not a particular path refers to a file path or a URL path when they're being processed.
And so he mapped out how all this communication happens, and then looked for places where things are used inconsistently, and he ended up coming up with a whole, like, half a dozen really solid bugs in Apache, in places that you really wouldn't expect them to be, that, you know, let you read arbitrary files off the file system just by sticking a, like, question mark in the URL in a particular way. So just super great research. I love how he wrote up the rationale, and, like, the way he thinks about it, because getting inside that guy's mind, if you're a bug hunter, that's gold right there. So once again, well worth a read if you're a bug hunter. If you're an Apache admin, you're probably going to be applying a few patches.
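The question-mark trick Adam alludes to comes from the file-path-versus-URL-path ambiguity: one processing step treats the shared filename field as a URL, where everything after "?" is a query string and gets dropped, while another treats it as a filesystem path. The toy rewrite rule and paths below are invented stand-ins, not real Apache code, but they show the shape of the primitive.

```python
# Toy model of a filename/URL confusion: a rewrite maps /html/X to
# /var/www/X.html, but a later step treats the *file path* as a URL and
# truncates at '?'. Appending an encoded '?' (%3F) strips the enforced
# .html suffix and enables arbitrary file reads.

import urllib.parse

def rewrite_to_filename(url_path: str) -> str:
    """Toy RewriteRule: '^/html/(.*)$' -> '/var/www/$1.html'."""
    captured = urllib.parse.unquote(url_path[len("/html/"):])  # %3F -> '?'
    target = f"/var/www/{captured}.html"
    # the confusion: this code thinks 'target' is a URL, so it drops
    # everything after '?' even though 'target' is really a file path
    return target.split("?", 1)[0]

print(rewrite_to_filename("/html/page"))                  # the intended case
print(rewrite_to_filename("/html/../../etc/passwd%3F"))   # suffix stripped away
```

Two components, one shared field, two different grammars: that semantic gap is the whole bug class.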
Yeah, yeah.
All right, well, that concludes our research wrap.
Thanks a lot for explaining all of that to me
and I'm sure a lot of the listeners found that really, really valuable.
So that was great, Adam.
And, mate, thanks a lot for your time this week.
We're going to roll into the sponsor interview.
Yeah, thanks so much, Pat.
I will dig through the rest of the research because Lord knows there's more of it to come.
And maybe we'll do another round next week. All right.
That was Adam Boileau.
And it is time for this week's sponsor interview now.
This week's show is brought to you by Trail of Bits, which is a security engineering firm.
They do all sorts of really interesting work.
Quite active in the crypto space as well.
They can audit all sorts of crypto stuff.
And I don't mean just encryption. I do unfortunately mean the cryptocurrencies and various crypto instruments.
They're also doing a lot of work around AI these days, all sorts of work from model safety
through to model security and whatnot. And they're also participating in the DARPA AI Cyber Challenge.
And this is where DARPA is running a contest
where contestants can essentially, you know, come up with systems that leverage large language
models to try to discover vulnerabilities and also figure out how to patch them. So yeah,
really interesting stuff. And there was just a round of that contest in Vegas last week. And
Dan joined me to talk all about the DARPA AI challenge. Here he is.
I hope you enjoy it. So the DARPA AI Cyber Challenge is a competition to apply AI to
bug detection and bug patching systems. DARPA has challenged us to create an automated system
that has no human involvement, that can find and fix bugs in real software, things like the Linux kernel, Nginx, Apache, Jenkins, or other really large software programs.
of contestants? Because there was just the big challenge that happened at DEF CON,
where people got to show off their various approaches to this. How did the
contest participants try to go about using AI to do what you just described, right? To find those sorts of
vulnerabilities. Because I'd imagine that there's a lot of ways to skin this cat, right?
Yeah. So the constraints on this challenge were pretty extreme. In the DARPA Cyber Grand Challenge, almost 10 years ago at this point, the challenge projects, the software that we had to audit, were very simple. But in this case, because we've been given real software, it's significantly more difficult to find bugs in it. We've also had constraints like you're only allowed to test each piece of software for four hours at a time. And we have a dollar
limit on how many LLM tokens we're able to use. So we can't just infinitely call out to ChatGPT to get recursive feedback from it about our bug finding. So they really put the constraints on us very,
very tight where we had to come up with something almost magical in order for it to be able to find
a bug in a complex piece of software with hardly any analysis time whatsoever.
There were a lot of teams that signed up for this. 90 teams attempted to participate.
There were 39 teams that created a working, what's called a cyber reasoning system, or a CRS, that were actually evaluated during this competition. And out of those, seven were able to find a large enough number of bugs, and patch a large enough number of bugs, that they're proceeding to the final round that's going to happen next year. And I'm really lucky to say that Trail of Bits was one of them.
So you just mentioned it there. So my question was really about how did people go about using AI to do these sorts of things? And you said they built, you know, cyber reasoning systems. So what are those? Because that's what I was thinking. I'm like, okay, you're given the rules, which are you can use AI and LLMs, so what do you do? Do you sort of keep prompting it until it builds some sort of model of the software? Or, like, what's the rough shape of how this works?
Yeah. So from a high level, the way the competition works is you've been given this
very tiny little pinhole through which you can see the target software. So there's a function
that you can call called run POV, run proof of vulnerability. And that's where you can send input to execute against say the Linux kernel or Jenkins. That's the only porthole that you get
to test through. So all the other stuff that happens on the other end could be very fuzzing
forward, could be very LLM forward. You're basically trying to create inputs. You're
generating inputs that could potentially crash or somehow exploit one of those systems. And it's up
to all the competitors to figure out how they want to optimize for that.
Some teams used a very LLM forward approach.
I believe the team, All You Need Is Fuzzing Brain, had one that was almost purely based
on LLMs.
And then there are teams that were all the way on the other end of the spectrum.
I'd say Trail of Bits was a bit more conservative about our use of LLMs, using it as one potential
input among many other input
generation strategies. So the point of the competition was really to encounter that variety,
like what are the strategies that work? And that's what DARPA was trying to get out of there,
trying to figure out like, how would you actually use AI to help when finding bugs?
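Dan's description of the shape of a CRS, several input-generation strategies (a fuzzer-style mutator, LLM-suggested inputs as just one source among many) all feeding the single run_pov pinhole, can be sketched as a toy loop. The buggy target and the canned "LLM suggestion" below are invented for the demonstration.

```python
# Minimal CRS-shaped loop: generate candidate inputs from multiple
# strategies, feed each through the run_pov pinhole, keep whatever crashes.

import random

def run_pov(target, data: bytes) -> bool:
    """The competition pinhole: send input, learn only whether it crashed."""
    try:
        target(data)
        return False
    except Exception:
        return True

def buggy_parser(data: bytes):
    # planted bug: a "length" byte trusted without bounds checking
    if len(data) >= 2 and data[0] > len(data) - 1:
        raise IndexError("out-of-bounds read")

def mutate(seed: bytes) -> bytes:
    b = bytearray(seed)
    b[random.randrange(len(b))] = random.randrange(256)
    return bytes(b)

random.seed(1)
seeds = [b"\x01AA"]              # fuzzer corpus
llm_suggestions = [b"\xffAAAA"]  # stand-in for an LLM-proposed input
crashes = []
for candidate in llm_suggestions + [mutate(seeds[0]) for _ in range(200)]:
    if run_pov(buggy_parser, candidate):
        crashes.append(candidate)

print(f"{len(crashes)} crashing inputs, e.g. {crashes[0]!r}")
```

Where teams differed, per Dan, is how much of that candidate stream came from LLMs versus conventional fuzzing; the pinhole interface is the same either way.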
And so what were the approaches in the contest? I mean, you were just there,
right? You would have seen which teams went well
and had a rough understanding of their approaches.
You know, what were the approaches
that turned out to be more successful than others?
You know, what did we learn through this one, right?
Because I'm genuinely curious because this is all new.
So it's really interesting
because there aren't a lot of details out
about the challenges yet
or about the CRSs that competed in it yet because there is a final round coming out in a year. So
a lot of the teams are playing their hands close to their chest, and we don't know exactly what these teams did, beyond the teeny little one-liners where DARPA congratulated several of the teams during the closing ceremonies. Trail of Bits has been pretty open about some of our details of this. Like, for instance, I have a video up where I talk through our patching strategy. Our patching strategy uses
an agent-based approach with LLMs in order to generate what I'd call idiomatic style patches
that fit into the program and get feedback from a fuzzer and from testing tools and from other
sorts of more offensive things to see if it actually works to address the bug.
But I don't think we're going to find out a lot
about exactly what each competitor has done
until next year after the finals.
And when the finals do happen, we'll find out everything.
DARPA has mandated that all of, not just our strategies,
but the code that we use to run our CRS
has to be open source two weeks after the finals run.
So just curious though, like the software, I'm guessing they gave you a list of the software
that was sort of, that you would be targeting for the challenge. So did you do, like, I imagine you
would have done a lot of the testing before you turned up and that when people arrive, it's just
about showing off how their system elicited, you know, vulnerability information out of that system,
right? Yeah. So in fact, most of the challenges that we had to play on during the live competition, we did not see ahead of time. So this was, again, one of these enormous constraints.
Right. Okay. So they give you the parameters of the contest and then like on the day, they're
like, okay, today your system is going to be targeting this. And then you get to see how it
performs against something that it doesn't know. Because that's what I was wondering, like, how do you actually begin to set up a competition like this so people don't just optimize it to one target or another. And that answers that question. Yeah, yeah, they gave us a couple of what they call exemplar challenges, so that we could at least build something and test it, you know, over the last few months. But we had no idea what kind of software they were going to throw in there. We had some idea, we knew that they were partnered with the Linux Foundation, we expected it to be open source software.
But we didn't know it was going to be SQLite.
We didn't know that it was going to be Apache Tika.
We didn't know a whole lot of different targets for this.
And our systems, luckily enough, especially for the seven people that qualified,
were robust enough to deal with that.
So given an unknown target, set up a whole fuzzing harness automatically,
find bugs in it, patch bugs in it, identify a commit that introduced the bug, and prove that
you understand what the bug was. Submit with your report what type of bug you've discovered,
and all of that using AI and some fuzzing. So I'm guessing that kind of answers my question about the approach that I had earlier, which is that, you know, you're using AI to set up a fuzzer, basically, to configure a fuzzing harness or whatever you call it. Yeah, it could be, you know,
again, there's a variety of approaches that teams can take. Some can be more LLM-heavy, some less so.
And it's sort of, you know, within these constraints, I think it's actually created a diversity of approaches
that I don't know if the competition organizers even expected.
It does sound like the seven teams that qualified
had very different takes on what was required to do this.
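The automatic harness setup Dan describes, where a CRS is pointed at an unknown target and has to generate its own fuzzing harness, might look roughly like this minimal libFuzzer sketch. The `parse_header` target and its magic bytes are invented for illustration; only the `LLVMFuzzerTestOneInput` entry point is real libFuzzer convention:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A hypothetical target function standing in for real competition
 * software like SQLite or Nginx; the parser itself is invented
 * purely for illustration. */
static int parse_header(const uint8_t *buf, size_t len) {
    if (len < 4) return -1;                  /* too short to parse */
    if (memcmp(buf, "RBIZ", 4) != 0) return -1;  /* wrong magic */
    return 0;                                /* header accepted */
}

/* The standard libFuzzer entry point -- the kind of harness a CRS
 * has to generate automatically for each unknown target. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;  /* non-zero return values are reserved by libFuzzer */
}
```

Built with `clang -fsanitize=fuzzer,address`, a harness like this is what lets the rest of the pipeline (bug finding, patching, commit identification) run unattended.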
Now, the big question, right, which is what did you find
when you sat down to go against, you know, SQLite or whatever,
using your approach in the contest?
I know you've been working on this for some time.
So did you manage to find any bugs?
And what was the nature of the bugs?
Were they exploitable?
Were they just, like, DoS conditions or logic bugs?
How did that all work?
Yeah.
What did you find?
Tell us what you found.
Yeah, absolutely.
So these bugs were inserted into the programs.
They're synthetic bugs that
were created by the competition organizers, except for one case where, not us, but
Team Atlanta, which I believe is Georgia Tech, their CRS was able to identify a bug in SQLite
that was not part of the competition: a live bug in the SQLite code base.
This happened at, God, what was it?
It was an Australian. It was either a training event or a conference, I think, or a CTF. I think
it was a CTF in Australia where a bunch of kids turned up and found a whole bunch of 0day in the
networking equipment by accident because they thought it was part of the competition. I love
it when that happens. So good. Yeah, totally. So in the Linux kernel,
Nginx and SQLite, those are three challenge projects that were written in the C language. We had to find five different CWEs. So things like out-of-bounds reads, integer overflows, use-after-frees, null pointer dereferences, and out-of-bounds writes. So we were lucky because this occurred in rounds where we did one target per round.
And the very first one was Nginx, and the Trail of Bits CRS came out of the gate pretty strong.
We immediately found and fixed, I believe, six different bugs that put us sort of in the lead for the CRSs that were there.
A lot of these other targets ended up being really difficult for a variety of reasons. Java,
if you've ever had to test a Java application, just has massive numbers of dependencies that
take a long time to compile and work with. The Linux kernel is just massive. And beyond being
massive, a lot of the bugs are hidden like 10 layers down. And it can be very difficult for
an LLM or a fuzzer to reach one of those points where a vulnerability might lie.
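Dan's point about bugs hidden many layers down can be illustrated with a toy function (entirely invented here): a coverage-guided fuzzer has to synthesize an input that satisfies every nested guard before execution even reaches the buggy read.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy example: the flaw sits behind several nested guards, so a
 * fuzzer must get past each layer before coverage reaches it. */
int deep_bug(const uint8_t *d, size_t n) {
    if (n < 8) return 0;                  /* layer 1: length check */
    if (d[0] != 0x7f) return 0;           /* layer 2: magic byte */
    if (d[1] == 'E' && d[2] == 'L') {     /* layer 3: format tag */
        if (d[3] % 17 == 3) {             /* layer 4: arithmetic guard */
            uint8_t buf[4] = {0, 1, 2, 3};
            return buf[d[4]];             /* CWE-125: OOB read if d[4] > 3 */
        }
    }
    return 0;
}
```

Four layers is already painful for random mutation; real targets like the Linux kernel stack far more on top, which is where LLM-guided input generation is supposed to help.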
Well, I mean, I'm guessing this is why DARPA would have picked it for the challenge, right?
There's no point giving people an easy challenge. It's got to be hard, right? And that's going to
be hard. Both of those things, like sending an LLM, you know, giving it a map and a compass
and sending it off into Java land, you know, it's going to get eaten by bears, man. Yeah. I do admire that we have real targets this time around. You know, the CGC back 10 years
ago used a more synthetic, you know, simpler testing system; we know how to find bugs on real
software today. So I think it's a good bar to set as we think about continuing to design automated vulnerability detection tools in the future.
I mean, I guess what you're saying, though, is that they've had the round of this competition in Vegas and that so far, you know, the results are encouraging, right?
People were actually able to demonstrate that their systems performed.
They were able to actually identify the vulnerabilities that DARPA had inserted in there.
You know, there was even a team that turned up one that they didn't put in there.
So, you know, I guess, look, with all of these things, and particularly these DARPA challenges,
right, what usually happens is by the end of them, you're sort of left with the impression
that, you know, further study is required, right?
Like, it's not going to be like, wow, this is a panacea for vulnerability research.
And it's not going to be, this is a complete dead end. It's always going to be one of those situations where,
you know, LLMs and AI tech is just going to be one more tool in the toolbox, right? For doing
vulnerability discovery. Is that about how you're expecting this to shake out? So I do think
that the likelihood that this results in an immediately usable tool after the competition
is higher than average for a DARPA program. They're making us produce
direct information that proves that we have a bug. In order to score points, we have to produce a
patch. In order to produce a patch, we have to understand the bug. We also have to
identify the commit that introduced the bug. And all of that taken together,
demonstrated on a real piece of software, means the likelihood is high that these things could actually be helpful to you in a year.
So I'm, you know, from a Trail of Bits perspective, this is not the only entry that we have into this
field. We're putting a lot more energy behind our automated vulnerability detection tools that use
LLMs and use generative AI because I think it shows a lot of promise. And I want to make sure we're
out in front. I think we've learned a lot about how to engineer software with AI that produces
good results. And that's going to pay off for a lot of other activities that Trail of Bits invests in
and the way that we consult with our clients about their own use of AI.
All right. Well, Dan Guido, thank you so much for joining us on Risky Business to have a chat about the DARPA AI Challenge and the round that was just held at Vegas.
I guess we'll chat again in about a year and find out how it all went.
Yep. Looking forward to it. Thanks.
That was Dan Guido there with this week's sponsor interview.
Big thanks to him for that.
And big thanks to Trail of Bits for sponsoring this week's show.
I've linked through to Dan's blog post about the AI Cyber Challenge in the show notes for this episode. And yeah, that's just about going to wrap
up this week's edition of the Risky Business Podcast. Thanks a lot for joining us. And until
next week, I've been Patrick Gray. Thanks for listening. Thank you.