Risky Business #735 -- AnyDesk fails the transparency test
Episode Date: February 6, 2024
In this week’s show Patrick Gray and Adam Boileau discuss the week’s security news. They talk about:
Thought eels were slippery? Check out AnyDesk’s PR!
Why Microsoft’s 365 is a nightmare to secure
Cloudflare’s needlessly hostile blog post
US Government introduces “Disneyland ban” for spyware peddlers
Much, much more…
This week’s feature guest is Eric Goldstein, the executive assistant director for cybersecurity at CISA. He’s joining the show to talk about CISA’s demand that US government agencies unplug their Ivanti appliances. He also chimes in on why the US government is so rattled by Volt Typhoon and addresses a recent report from Politico that claims CISA’s Joint Cyber Defense Collaborative is a bit of a shambles.
This week’s sponsor guest is Dan Guido from Trail of Bits. He joins us to talk about their new Testing Handbook. Trail of Bits does a bunch of audit work and they’ve committed to trying to make bug discovery a one time thing – if you find that bug once, you shouldn’t have to manually find it on another client engagement. Semgrep for the win!
Show notes
AnyDesk initiates extensive credentials reset following cyberattack | Cybersecurity Dive
AnyDesk says software ‘safe to use’ after cyberattack
Former CIA officer who gave WikiLeaks state secrets gets 40-year sentence
Arrests in $400M SIM-Swap Tied to Heist at FTX? – Krebs on Security
Microsoft Breach — What Happened? What Should Azure Admins Do? | by Andy Robbins | Feb, 2024 | Posts By SpecterOps Team Members
Cloudflare hit by follow-on attack from previous Okta breach | Cybersecurity Dive
Thanksgiving 2023 security incident
US announces visa restriction policy targeting spyware abuses
Announcement of a Visa Restriction Policy to Promote Accountability for the Misuse of Commercial Spyware - United States Department of State
Deputy Prime Minister hosts first global conference targeting ‘hackers for hire’ and malicious use of commercial cyber tools - GOV.UK
New Google TAG report: How Commercial Surveillance Vendors work
A Startup Allegedly ‘Hacked the World.’ Then Came the Censorship—and Now the Backlash | WIRED
American businessman settles hacking case in UK against law firm
Crime bosses behind Myanmar cyber ‘fraud dens’ handed over to Chinese government
Another Chicago hospital announces cyberattack
Deepfake scammer walks off with $25 million in first-of-its-kind AI heist | Ars Technica
As if 2 Ivanti vulnerabilities under exploit weren’t bad enough, now there are 3 | Ars Technica
Two new Ivanti bugs discovered as CISA warns of hackers bypassing mitigations
Agencies using vulnerable Ivanti products have until Saturday to disconnect them | Ars Technica
The far right is scaring away Washington's private hacker army - POLITICO
Our thoughts on AIxCC’s competition format | Trail of Bits Blog
How CISA can improve OSS security | Trail of Bits Blog
Securing open-source infrastructure with OSTIF | Trail of Bits Blog
Announcing the Trail of Bits Testing Handbook | Trail of Bits Blog
30 new Semgrep rules: Ansible, Java, Kotlin, shell scripts, and more | Trail of Bits Blog
Publishing Trail of Bits’ CodeQL queries | Trail of Bits Blog
The Unguarded Moment (2002 Digital Remaster) - YouTube
Boy Swallows Universe | Official Trailer | Netflix - YouTube
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray.
You might be wondering why we used different music this week and yeah, it's because like a lot of you, I've been watching the terrific Australian series Boy Swallows Universe, on Netflix. It's a great Australian show with a great Australian soundtrack.
And if you're not watching it, get on it.
And this song is from the soundtrack,
and it's completely appropriate for a cybersecurity podcast.
It is called The Unguarded Moment, and it's by The Church.
Adam Boileau will be along in just a moment
to talk through some of the week's news,
and then we'll be hearing from this week's feature guest,
Eric Goldstein, the Executive Assistant Director for Cybersecurity at CISA. He's joining the show to talk about
CISA's order for US government agencies to disconnect their Ivanti appliances and he'll
also chime in on why the US government is so rattled by Volt Typhoon, and he also addresses
a recent report from Politico that claims CISA's Joint Cyber
Defense Collaborative is a bit of a shambles, but he pushes back on that one pretty hard.
That's coming up later. And yeah, in case you haven't noticed, feature interviews are back
most weeks. I killed them off years ago because we just had too much news to cover in the show,
but now our colleague, Catalin Cimpanu, is covering so much cyber news in the Risky Business News podcast.
I feel like we can make space for these sort of interviews
in the main show again, and I do hope you're enjoying them.
Also joining us this week is sponsor guest Dan Guido
from Trail of Bits, and he's popping along to talk
about Trail of Bits' new Testing Handbook.
Trail of Bits does a bunch of audit work,
and they've committed to
trying to make bug discovery a one-time thing.
You find that bug once,
you shouldn't have to manually find it
on another client engagement,
let alone on the same client engagement.
Semgrep for the win.
That one's coming up later,
but first up, it's time for a check of the week's
security news with Adam Boileau.
And Adam, the big news story of the last week has been the AnyDesk fiasco.
Looks like they got themselves popped, and despite finding out at a different time, they chose
to announce this, like, on a Friday evening, which is a PR sin in my book.
And it's still not entirely clear exactly what happened here.
Yeah, AnyDesk has been involved in quite a lot of people's security journeys
because the software is pretty regularly abused by scammers and so on.
So it's a name we're familiar with, but it being compromised,
it sounds like they had part of their network,
I think a couple of boxes in Europe, got themselves popped.
And then the idea of a supply chain attack into AnyDesk would be pretty concerning
to a lot of people.
They are saying their source code probably got nicked along with it.
They're a little bit unclear, but it pretty strongly implies
their private keys for code signing were lifted as well.
And they are saying their software is still safe to use as long as you get it from legitimate sources.
And they're in the process of rotating the keys used for signing their software releases.
They also rolled a bunch of credentials on their portals.
They said that was out of an abundance of caution.
But overall, you know, not a great look.
And some of their comms, as you say, the Friday afternoon release
and, you know, there's some lack of details
that they've been getting a bit of flak for.
Yeah, I mean, we were talking about this before we got recording today
and just some of it's a little bit weaselly.
Like they say, oh, all versions of our tools are fine
as long as they come from like official sources, right?
Which kind of implies that these keys were stolen.
And when you read through like their FAQ,
they're just being a bit cagey about it,
kind of unnecessarily.
Like it's very clear that they've lost confidence
in the integrity of those keys,
but they haven't just come out and said that they were nicked.
So I don't know what to make of that,
whether that's just that they're being extremely cautious
in rotating those keys
and there is no evidence that they were stolen.
I just don't know what to make of it.
But either way, yeah, it's a bit of a shambles.
Yeah, it smells a little bit much of corporate lawyering
and not quite enough of technical detail.
I think Alex Stamos was mouthing off on LinkedIn about it
and he said, look, give us the hashes of the no-good releases,
give us the details of the certs that you've lost confidence in
so that we've got something concrete to work with.
Plus, of course, Friday afternoon, a bit rude.
Yeah, and they haven't done that?
Why would they not actually release information on the certs that they suspect may have been compromised? What possible justification could there be for not releasing that information? It does seem a little bit weird, which is why it smells like lawyers, you know.
Yeah, but even then, like, what would the legal justification be?
That's a good question. Who even knows, with lawyers? But yeah, it's not exactly the flawless disclosure and detail that we really need these days, especially for software that is pretty widely used and high risk.
Yeah. Well, look, moving on, and Joshua Schulte, who is now, you know, the convicted Vault 7 leaker who leaked a bunch of CIA tools to WikiLeaks. And you know, this has just been a
nonstop drama with this guy. He was using devices in prison. Didn't he fire his lawyers? And I think
he was trying to represent himself at some point. Like he just seems like a completely deranged,
unhinged asshole basically. And didn't really get him very far when it came time to sentencing because he's going to be in the clink for 40 years
and even when he gets out, he's 35 now,
so you'd think if he does his full sentence he'll be 75,
then he's going to be on a lifetime of supervised release.
So, yeah, they have thrown the book at him
and I would just think if you are Edward Snowden reading this,
you would be getting a bit of a shudder, I would think.
Yeah, I mean, this certainly, you know, comes out pretty serious looking.
And I mean, the scale of that breach and the kind of audacity of it,
you know, I'm not surprised that it's got a pretty stern sentence.
There was also a bunch of other stuff, like he had some child sex abuse material
on his computers and, you know, things that, like, this did not seem like a good guy.
Yeah.
And, you know, beyond the operational security aspects that presumably the CIA has learned a few things from, like, perhaps their staff screening also needs a little bit of work, given that this guy was working in, you know, the depths of their sensitive environments and being a dick at the office, by all the accounts that we've seen.
No one really liked him and he was combative and nasty to work with.
Well, I mean, it looks like this whole thing was rooted in a workplace dispute.
Yes, yeah.
In him not getting along with his colleagues, being a, you know,
I mean, it looks like some of his colleagues might have been
like not the best either, you know what I mean?
But it just looks like it was a complete, the part of CIA where he worked
looks like it had toxic elements and this is how it spilled over. It's just,
you just wouldn't believe it, right? Like if someone's predicted this, you would just say,
no way would this happen. But, you know, the US government argued in court that this
placed a bunch of people at risk. It really hampered CIA's ability to continue with various operations.
So, you know, big deal stuff.
And I sort of, you know, you do get the sense
that if Edward Snowden were to find himself back
in the United States that, you know, this is what it looks like
when you get the book thrown at you for this sort of thing.
So I guess we've got a bit of a template here.
Yeah, I think so, yes.
And, you know, at least with this guy, some of the stuff that he dropped was legitimately useful to people outside. Like, I used one of the exploits from this dump on an engagement once, caught myself some nice shells. So I guess that's the silver lining, you know, a CIA MikroTik exploit I got to go use in the wild. So, you know, there's that. Whereas by contrast, Snowden, that was a lot of Sturm und Drang, but, you know, perhaps it wasn't like there was sweet zero-day in there that you could actually go out and use.
But either way, it's good to see them getting to the bottom of this,
sorting it out and, you know, pretty strong deterrent message
to the others, you know, other people who might be considering the same thing.
Well, he's 35 now.
I forgot that he's been in prison for a few years,
so I don't know that he'll be 75 when he gets out.
And who knows, maybe he'll get out earlier.
But, you know, either way, you know, you get less for murder.
Yes.
So, you know, hectic.
Now, great story here from Brian Krebs,
who has pulled together a bunch of reporting into this piece looking at the arrest of some people. Here we've got a Chicago man, Robert Powell, and we've also got Emily M. Hernandez, who have been arrested, and it looks like they've been arrested for a SIM-swapping attack which was behind the theft of, like, you know, nearly half a billion dollars from FTX, like, the day they went bankrupt. Like, we all remember that, and thinking that was probably an insider. And then gradually we've learned that, no, no, that was a real attack. And probably, you know, my theory was always that someone had access, and when they saw that FTX was filing for bankruptcy, they realized they needed to move quick. We have seen arrests there. And yeah, three Americans have been charged this week
with stealing more than $400 million
in a November 2022 SIM-swapping attack.
Just nice reporting here that pulls together
a bunch of work from, you know,
Ars Technica, Wired and some original reporting.
Yeah, and interesting to see this kind of lined up with some, you know, details from some of the usual blockchain transaction tracing people like Elliptic, you know, kind of explaining how this worked. And I think your idea that this is probably someone realizing they have to move quickly and then making a few mistakes in that process kind of gets borne out. Like, they were using some Russian cryptocurrency washing scheme to try and get the coins out fast enough. But in the end, like, this much value moving around did still make it possible to trace. And perhaps if they'd had more time, they would have come up with a better scheme. But either way, you know, trying to steal half a billion dollars and not getting away with it, that's how it's meant to work.
Yeah, I mean, there's this whole criminal soup
in the United States, right?
With these, like, Scattered Spider slash 0ktapus
slash Lapsus$ slash the Com slash whatever.
Like it's more of a mindset than an organization.
But, you know, it looks like the people here
are sort of from that same soup.
And another interesting fact here is that they,
it looks like they were cooperating
with Russian organized, you know, cyber criminals to launder money and whatnot. So, you know, we've seen that with the
Scattered Spider stuff as well, which is Americans cooperating with organized crime groups based out
of Russia. I think this is, you know, pretty much an established trend now is that, you know, some
of these, some of these Americans are now really seeking out and doing business with these Russian organisations.
I think it's going to work out better for the Russians
than for the Americans because, as we've seen,
they're starting to get arrested now.
Yeah, and I think some of the frustration
that we can't extradite Russian criminals
and can't bring them to justice
ends up getting taken out on those people
that we can detain overseas and extradite to the US, or in this case, where there are Americans involved,
I think they're going to get a harsher judgment because of some of that frustration and seeing
Americans tied up with those Russian criminals just rubs everybody the wrong way. And that
will be reflected in the sentencing. Now, we've got a fantastic blog post up here from Andy Robbins,
who's over at SpecterOps.
Just a disclaimer, SpecterOps are a sponsor of ours, right?
But this blog post is terrific, right?
Because it is the first really clear walkthrough
of what this SVR password spray slash OAuth app
lateral movement and escalation attack looked like.
It's a really clear description of what that sort of thing looks like
and what Azure admins should do to prevent this sort of thing from happening.
And that's why we're talking about it.
There's also some details in here that to quote you from Slack a short while ago
means Microsoft deserves a bit of a kick in the teeth.
Walk us through this one, Adam.
So Andy Robbins has kind of written a blog post that goes step by step through a vulnerable Azure environment
and configuration that is representative of the details we've seen from Microsoft so far,
with some kind of screenshots and explanations and a few sort of inferences
based on SpecterOps' experience in Azure about how it must have been configured.
Yeah, this is a likely description of how it would have gone down. Like this isn't an
incident response document. It's like, well, they've pieced it together and they're like,
here's what we think happened, but it all makes a lot of sense.
Yeah. So they walk through, you know, kind of how your OAuth applications will be configured, how it gets trusted into corporate Active Directory,
and, you know, can infer from that also,
like, what permissions were necessary.
Like, essentially, it concludes that
to be vulnerable to an attack that meets the parameters
Microsoft has talked about,
essentially, the attacker had full, you know,
global admin rights to Microsoft corporate, right?
It wasn't limited to the few mailboxes that we saw, etc., etc., based on what we know. It's got screenshots and details, and part of the point of this blog is, if you're in charge of an environment that's similar, how would you understand if you would be vulnerable in the same way? And it talks through, you know, clicking through the Azure Entra ID admin panel,
trying to figure it out.
And then about halfway down the blog post,
you get to the instructions, which are,
okay, now open the Chrome DevTools
and start looking for GUIDs in the HTTP responses
because the data you need isn't surfaced
in the user visible portions of the web interface.
And now you have to start, like, finding GUIDs and copy-pasting them and searching through the source of the page to discover the necessary information to figure out if you're vulnerable or not. Which, good job, Microsoft. You can see why you end up with vulnerable Azure configs when, even as an admin, you can't see the necessary details to understand what your configuration is without having to know magic GUIDs for magic roles that you then have to kind of cross-reference by hand. Like, you can see why SpecterOps make tools to solve these problems for their customers. But yeah, just what a mess.
And yeah, if you're an Azure pen tester
or you're an Azure admin,
I would say this is probably required reading
so that you understand the state of the art of attackers' use of your things.
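To make that GUID cross-referencing concrete: once you've fished the role assignment JSON out of DevTools or a Microsoft Graph export, matching the GUIDs against a name map is a one-screen script. This is a minimal sketch, not how SpecterOps or Microsoft do it; the GUIDs and the `assignments` shape here are placeholders for illustration, so look up the real role template IDs in the Entra documentation for your tenant.

```python
import json

# Hypothetical GUID-to-name map. These are placeholder values for
# illustration only; substitute the documented role template IDs.
ROLE_NAMES = {
    "00000000-0000-0000-0000-000000000001": "Global Administrator",
    "00000000-0000-0000-0000-000000000002": "Application Administrator",
}

def risky_assignments(assignments):
    """Return (principal, role name) pairs for any assignment whose
    roleDefinitionId matches a known high-privilege role GUID."""
    hits = []
    for a in assignments:
        role_id = a.get("roleDefinitionId", "").lower()
        if role_id in ROLE_NAMES:
            hits.append((a.get("principalId"), ROLE_NAMES[role_id]))
    return hits

if __name__ == "__main__":
    # Stand-in for JSON copied out of DevTools or a Graph API export.
    sample = [
        {"principalId": "app-sp-1234",
         "roleDefinitionId": "00000000-0000-0000-0000-000000000001"},
        {"principalId": "user-5678",
         "roleDefinitionId": "deadbeef-0000-0000-0000-000000000000"},
    ]
    for principal, role in risky_assignments(sample):
        print(f"{principal}: {role}")
```

The point is only that a lookup table beats eyeballing GUIDs by hand; anything like this still depends on you knowing which role IDs matter in the first place, which is exactly the information the portal buries.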
Andy's been all over social media on this one
and saying all sorts of interesting stuff.
So that's who I've been paying attention to
to try to better understand all of this.
But you would have heard me say previously that years ago we were criticizing Microsoft because, for a long time, the only way that you could control OAuth apps in your environment at all was via PowerShell. You know, it's just insane. You can't expect people to be able to properly configure their environments when you make it so hard. And just, anyway, if anyone wonders, like, why we say this is Microsoft's fault, this is why.
Yes, this is why. You have to have so much depth of understanding of the inner gubbins of Microsoft world to be able to work with this stuff. And then, at the same time, at least Active Directory kind of stayed the same for a long time, whereas this stuff is moving constantly underneath you, and, you know, you really have to be breaking this stuff or working with it all day, every day, to even understand that it is changing in the ways that it is.
Now let's turn our attention to Cloudflare's
unnecessarily hostile blog post of the week.
So you remember a short time ago, you know,
there was the whole incident that happened
where someone hacked Okta support.
They grabbed a bunch of HAR files that turned out to be unsanitized, which is totally Okta's fault, they should have been sanitizing those files. And they grabbed some auth tokens and used them to onwards attack a handful of companies, right? And Cloudflare was one of them, and they came out and said, look, we're the ones who detected this and we shut it down, we contained the incident. Well, turns out maybe not so much.
Yes. So it's Okta's fault, but it's not just them. Cloudflare has put out a blog post describing
an incident they had around Thanksgiving last year, whereupon what feels like pretty sophisticated actors got into their Atlassian software environment, their Jira, their Confluence, their Bitbucket, and were rummaging around looking for stuff. They eventually got snapped when they added a new account with some extra privileges, and then Cloudflare started investigating. And these attackers were using some credentials which were exposed during that earlier Okta breach, but which Cloudflare had failed to rotate because they thought they weren't being used. But this was sufficient to come in through Cloudflare's, you know, external authentication process, and from there into Jira. They shelled their Confluence, I think it was, and backdoored it with, you know, an open source tool, which, yeah, they eventually spotted and dealt to.
Now, blaming that on Okta
does seem a little bit,
I mean, okay, yes, they were
involved, but
Cloudflare decided not to rotate their
creds after they got exposed.
Kind of on them.
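On the HAR file side of this, stripping the session material out before a capture leaves your hands is mechanically simple; the hard part is remembering to do it. Here's a minimal sketch in Python of what that could look like, assuming a standard HAR 1.2 layout; the header names being redacted are the obvious offenders, illustrative rather than exhaustive.

```python
import json

# Header names that commonly carry session material in a HAR capture.
# Illustrative, not an exhaustive list.
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

def sanitize_har(har):
    """Redact auth headers and cookie values in a parsed HAR dict."""
    for entry in har.get("log", {}).get("entries", []):
        for side in ("request", "response"):
            msg = entry.get(side, {})
            for header in msg.get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    header["value"] = "REDACTED"
            for cookie in msg.get("cookies", []):
                cookie["value"] = "REDACTED"
    return har

if __name__ == "__main__":
    with_secret = {"log": {"entries": [{"request": {
        "headers": [{"name": "Authorization", "value": "Bearer secret"}],
        "cookies": [{"name": "session", "value": "abc123"}]}}]}}
    print(json.dumps(sanitize_har(with_secret), indent=2))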
Yes, and if you read this post, it is just so hostile. It's like really, really hostile. And, you know, they keep saying that, like, this is the second Okta breach and whatever. And, you know, the first one was that support person's terminal that got screen-capped, and that was, like, the extent of the breach. So I don't know, man, it just seems like Cloudflare's knives are out for Okta, and I don't entirely understand why.
You do get the feeling from the blog post
that this was a very serious incident for Cloudflare.
Like the amount of people that were involved
and the amount of work that they had to undertake
to triage it and deal with it.
And so I can kind of see, like,
there probably is a little bit of saltiness there.
And some of the other work that they did in remediating it
certainly sounds pretty thorough.
And they've got some good stats for kind of like, for example,
how many Jira tickets were accessed
and how many wiki pages in Confluence were looked at.
So they did a pretty thorough investigation.
They also talked through, you know, re-imaging their Jira boxes
and bringing in CrowdStrike to do a subsequent investigation to follow up after their own. So, like, you know, they definitely investigated this very thoroughly, and that brings a bunch of confidence that they have, you know, understood what happened and so on and so forth. But yeah, I feel a little bit bad for Okta in this.
Moving on, and United States Secretary of State Antony Blinken has announced visa restrictions. It's a visa restriction policy to promote accountability for the misuse of commercial spyware. So basically, if you're involved in, like, hacking for hire and spyware and whatever, and using spyware to target people in civil society, the United States government is saying, well, when you go to apply for a visa, we might just decline it. So, you know, we've got the release-the-hounds doctrine,
and this is the no Disneyland for you doctrine. I think it's a good idea because when you look
at these sorts of people who work for the companies that develop spyware, I mean,
look at some of these Indian organizations, and then you look at some of the Israeli ones. The sort of people who staff those companies could have global
careers, and they might plan to work in the United States one day. And this just sends a clear
message, which is if you're involved in all of that, you ain't coming here. Yeah, I think this
is a good move, because it does, you know, we love imposing cost. And this is, you know, a lower kind of bar of cost compared to sanctions
or, you know, actually prosecuting, extraditing,
et cetera, et cetera, et cetera, the various people involved.
And as you say, like, these people often have
plenty of career options.
They can work, you know, probably lots of different places
in the world and probably make a reasonable amount
of money whilst doing so.
Well, see, and here's the thing.
Like, if this were just targeted at the director-level people
or the owners who are going to make millions of dollars,
they'll be like, well, I'm happy not to go to Disneyland.
I'm happy not to work in the United States.
But this seems like if it's targeting the workforce,
it's going to make it hard.
It's going to make it harder for some of these companies
to get everyone on board that they want to.
Yeah, and I think that's a pretty smart move. And the wording for this said, like, it also could be extended to spouses and children. So, as you say, if you can't take your kids to Disneyland with all that money you're making selling spyware, maybe you should think about your life choices. Yeah, that seems like a good move to me. You know, I'm not sure what some of our associates and friends and colleagues who work in exploit dev on the, you know, the friendly side of the world kind of feel about not going to the equivalent of Disneyland in other countries, if we saw it, you know, tit for tat. But I mean, you know, I don't know that I would feel comfortable going to China at the moment either, so, you know, we're kind of already self-imposing some of these restrictions.
Yeah, but that's government-related stuff.
I mean, this is more about private sector stuff, which is, you know,
I don't know, I just think this is a good initiative.
Yeah, me too, me too.
There's been a bunch of, there's been a flurry of coverage
and things happening when it comes to so-called, you know, hackers for hire.
There's some sort of powwow confab
happening in the UK and France,
hosting 35 nations at an inaugural conference
to tackle the proliferation and irresponsible use
of commercial cyber intrusion tools and services.
So a lot happening on this.
It's taken a while, but it feels like finally governments
are kind of where they need to be on this.
Yeah, like it's been a long road to go from some of the kind of false starts that we've had, like with the Wassenaar Arrangement, trying to regulate exploit sales or exploit dev or information sharing, none of which had much support in the industry or seemed particularly effective. Now it's a few years down the track, and we kind of understand better how the commercial spyware world has shaken out into an actual business and where the weak points are. And yeah, we're finally starting to see some movement that feels potentially more effective.
Yeah. And yesterday, Google TAG put out its report on commercial surveillance vendors, and it's a, you know, big old PDF there that we've linked through to in this week's show notes.
Yeah, definitely worth a read, because as usual with Google TAG stuff, there was a bunch of very real-world experience behind these, you know, these publications. And there's some good data in there about, like, what percentage of the bugs that they have looked at probably went through commercial exploit vendors versus, you know, individuals or regular hackers or whoever else. And also, like, a good breakdown of the industry and the various players and so on. So yeah, definitely worth the read, and good job, Google TAG.
Now, staying on the subject of, like,
hackers for hire and whatnot, Wired has a piece up titled "A Startup Allegedly Hacked the World. Then Came the Censorship, and Now the Backlash." And this is referring to the whole Appin schemozzle. So Appin is an Indian company that was doing a lot of this, you know, hacker-for-hire stuff. Reuters did a bunch of work that wound up resulting in a story that tied Appin to an individual in India, and published that story. That story was subsequently pulled by Reuters due to a legal, well, not just a legal threat, but a finding from an Indian court, where the person who was named sued, or got some sort of stay or some sort of injunction. But then the lawyers for this guy have gone on a tear. And I mean, we've been touched
by this. Okay. So Tom Uren's weekly newsletter, Seriously Risky Biz, is actually syndicated by
Lawfare, which is the Brookings blog, Lawfare Media.
And they got a takedown demand, which they had to comply to,
and we had to pull some stuff down as well,
because it's difficult to stand behind a story
that Reuters itself has unpublished.
You can't very well be repeating claims made by Reuters when Reuters can't repeat them itself, right?
So, but I guess the point is lawyers for this individual have been amazingly aggressive, just incredibly aggressive in trying to remove any reference. So in the case, we hadn't even named the individual in our coverage, in Tom's
coverage. But because it linked through to the Reuters article, we got the nasty gram.
So what Andy's done is he's named the individual, which I'm not going to do.
But a bunch of people have got together and they are challenging these lawyers and their
takedown demands. And that's kind of where all of this sits at the moment. But yeah, I mean,
we didn't announce
that we'd pulled some content down
because first of all, it was old content.
Literally no one has noticed.
And second of all, the reason I didn't mention it
at the time is because it happened
during our Christmas break.
In case anyone's wondering
why we didn't mention it earlier.
But yeah, Adam, what do you make of all of this?
And please be careful and don't get us sued.
I mean, the classic word for this is the Streisand effect.
And one of the people involved, I think from, is it TechDirt,
is the guy that coined the term Streisand effect previously?
Yeah, it's Mike.
Mike Masnick.
Yeah, I've chatted to Mike a bunch.
He's cool.
Yeah, so that kind of irony is pretty funny.
But yeah, I mean, it's a good example
of how complicated it is to talk about this industry sometimes.
And I am glad to see a bit of light being shone
on the use of injunctions and whatever else
on this kind of stuff.
Yeah, wonderful to see EFF doing something useful for a change.
Moving on.
Oh, and just following up on some stuff we've been covering lately,
some of these crime bosses behind the, you know,
pig butchering farms in Myanmar have been handed over
to the Chinese government.
So I think they're about to have a bad time.
We've got a write-up here from James Reddick over at The Record
featuring some photos of some rather unhappy-looking suspects.
Yes, so these were like crime families
that were eventually allied with the Myanmarese government
that were running pig-butchering schemes and so on
in that particular region.
It looks like now that there is an alternative government
in control in that particular region of Myanmar,
they've been handed over to the Chinese.
And the picture, it's very well worth going through the show notes,
clicking on the link for this one,
just because the picture of these very sad dudes
sitting between burly Chinese policemen wearing helmets
whilst on board the plane back
to China, you know, that's pretty funny and worth clicking on. Unfortunately, it does sound like some of the operations of these groups have just moved to other regions of Myanmar further south of the Chinese border, because clearly there is still money to be made, and clearly the same kind of corruption and, you know, the forces that drove it in the first place still exist. So I think we're going to see this as an ongoing problem. You know, whether the Chinese are emboldened to deal with it more, we will see.
Yeah, well, I don't think it's really up to the Chinese when it's in areas of the country that are not under the control of groups that are friendly to them. So, you know, it's just, what a situation.
Anyway, just moving on and, you know, scumbags of the week are the people who staged a ransomware
attack against a Chicago children's hospital.
That one's been getting plenty of play on social media for obvious reasons.
I mean, it's just like, you know, what a bunch of turds.
Yeah, really.
This is like one of the biggest children's hospitals
in the Midwest of the US.
And yeah, it's just horrible.
So the hospital is still functioning.
They're back on kind of manual processes and things.
But yeah, just what scumbags.
Now, this next story we're just going to talk about quickly.
But the reason we're talking about this at all is because you and I are both very, very skeptical about this.
The South China Morning Post first wrote this up.
Then it went into Forbes and we're working off a version that is appearing in Ars Technica.
This report in the South China Morning Post says that, you know, this company, a multinational company based out of Hong Kong, or it had a Hong Kong
office, they lost $25 million US dollars in fraud. So a staff member transferred this money out,
and the staff member has claimed that they were on a video call with a bunch of the employees,
including the CFO, who authorized the transfer, and that this was all done with deep fakes, which is what fooled them into making this transfer.
This sounds a little bit like "my dog ate my homework" to me.
It does.
And I am skeptical.
But we have heard of things like this happening with, like, audio deepfakes and
whatever that sounded like bullshit,
but turned out to be true.
So I'm going to keep an open mind here. But you and I, our initial gut reaction to this is that, you know, this
guy's probably lying.
Yeah, I mean, that was my initial read. Because, you know, getting to the point
where you can do a real-time video deepfake, interactively, of multiple people, you know, that's
a pretty high bar, and I'd want to see, you know, pretty strong evidence
that that is what had happened.
And as you say, the person involved, you know,
covering either their own involvement or something else,
you know, that seems like a more likely explanation.
But, you know, deepfake tech is moving really quickly
and modern AI stuff is pretty fancy
and we've been surprised by the quality of some AI stuff before,
but, yeah, I'm pretty dubious.
All right, Adam, mate, that is it for the news.
So now we're going to bring on this week's feature guest now.
Eric Goldstein is the Executive Assistant Director for Cybersecurity at CISA
and he joins us now.
So, Eric, thank you for being here.
And, yeah, CISA has ordered US government agencies to disconnect their Ivanti equipment from the internet, which is quite,
you know, a bit of a radical step. Can you walk us through that order and, you know, why that
decision was made? Absolutely. One of the authorities that we have at CISA is the ability
to issue emergency directives to federal civilian agencies.
And these are, and there's specific statutory text here, where we identify an exigent risk
to the federal enterprise. And so we use these pretty sparingly, and actually we use them less
than we used to, because as listeners might know, we now run a known exploited vulnerability catalog
that drives mitigation of a lot of vulnerabilities out there.
So we actually use this authority less than we did a year or two ago.
We really reserve it for cases where agencies and the broader community need to take some really urgent action. Our action here is grounded in the fact that we know adversaries are pervasively targeting the network edge in order to gain access to credentials and identity and access management systems.
And so what we have seen in recent weeks is pervasive targeting of Ivanti Connect Secure and Policy Secure devices.
And we have seen the need for organizations to take some really urgent action in response.
And so what this directive requires is actually pretty simple.
It requires federal civilian agencies that are running these appliances to go ahead and
disconnect the devices if they haven't already, upgrade to a supported version, deploy the
patches that Ivanti has provided, reset and rotate their credentials,
and then make a risk decision to reconnect, all the while conducting persistent hunting
in their environment to assess whether compromise has occurred. And our sense is this was necessary
given the degree of targeting and compromise around the world of the now three exploited vulnerabilities
affecting these appliances. So I guess the reason you gave this directive is because you just have
to assume at this point that there's a solid chance that these appliances have been owned,
right? So it's not a case of like, just, you know, upgrade, just patch it, right? Which might be the
normal advice. It really is: take it offline, image it,
rebuild it, and then put it back.
Every organization running these devices
absolutely needs to assume targeting
and assume compromise.
And that's why we think every organization
needs to take these important steps
just to make sure that they can have confidence
in the integrity of their environment
and make the right risk decision for their enterprise. Now, I guess my question here is, this is the first
time we've seen CISA give an order quite like this one, but I imagine we might see it again,
right? Because this is not a super unusual set of circumstances where we've got lots of
exploitation in the wild of a border device, right? Like, so say the next time it's Fortinet or Pulse Secure or whatever it is,
are we going to see this?
Is this the new template for the advice?
You know, it is certainly the new normal that these sorts of edge devices
are being targeted to an extraordinary extent by APT actors
that we are really focused on in the US government
and around the world.
And so where we see targeting of this kind of device to this degree, this is absolutely
the sort of action that we'll direct where needed to drive the right level of urgency
and response.
Now, what do you think, like stepping back for a moment, right?
Like we've talked about some you know, some comments you made
at some public forum where you were saying,
we need to move beyond, you know, just, you know,
responding to these things reactively by patching and whatever.
You know, Adam Boileau, who's here with me now,
he's a big fan of yours for these comments because he's, you know,
doing the full fist pump when we were talking about that one.
But, you know, what more broadly can be done, you know,
from a government agency perspective, right,
when you've got this big constituency of departments and agencies
and whatever, like what can you do to more broadly mitigate
the sort of issues we're seeing pop up at the moment
from these, you know, absolutely atrocious edge devices?
That's exactly the right question.
And, Adam, the fanship goes right back at you. We know that this model isn't sustainable, right? We cannot,
in perpetuity, be chasing vulnerabilities in edge devices that grant extraordinary access,
that grant extraordinary privilege, given the sophistication of the adversaries that we're up against. And so I will note that, you know, Ivanti, the vendor in this case,
has been a terrific partner, working closely with CISA and other federal agencies from day
one.
Sorry, I'm not going to let you just heap unchallenged praise on them on this show.
they did take a while to get some patches out, Eric.
So, you know, absolutely, that's the case. I will just reflect on their partnership with government, which certainly was positive and transparent. But to a broader question, you know, we know
that a lot of these sorts of vulnerabilities in general are of a class that we can address before
they are pushed to production. And so our call to action, many listeners may have heard our director,
Jen Easterly, make this point in front of Congress just a few days ago.
We need to get to a point where enterprises can have an enhanced sense of trust and safety
in the products they're using, and we are wiping out classes of known vulnerabilities,
SQL injection, path traversal, memory safety,
those classes that we know how to fix in development, we need to wipe those off the
face of the earth, because this cycle of chase and patch and remediate isn't sustainable and
defenders aren't winning. So you think that there's at all a chance that vendors can actually fix
this? Because the cynic in me, which is 90% of me, just says, no way, that's not going to happen.
Do you really think that CISA, the US government, has the leverage to actually get these people to do their jobs?
I mean, they're all making money hand over fist selling crap.
So what makes you think that this is something that will change?
Yeah, the organizations who are being hurt by insecure technology are the customers of those technology products.
And so I do think that if we can change the conversation to focus on the right that every
organization has to use safe and secure product, we can move the narrative away from blaming
victims for not remediating and patching stuff, and we can shift accountability to where we can
actually drive scalable security. And that's with product vendors who can address not every
vulnerability every time, but at least the most prevalent classes of vulnerabilities that are
still how most intrusions occur. Now, I think that seems like a noble medium to long-term goal. It's
one that I think is actually a little bit ambitious, sorry to say.
You know, what about sort of more in the short to medium term? You know, I mean,
you've also got things like the zero trust executive order, right, which should bear some fruit, you know, any decade now. We'll see, you know, we'll see that we won't need these sort of
devices once that order is fully implemented. But, you know, as I alluded to, that's a very long road.
You know, I mean, what else can you do here from a CISA perspective? You know, apart from asking vendors to make products that aren't complete piles of shit, right? And moving glacially towards the
zero trust, all singing, all dancing future. I mean, what else is in the mix here?
We don't think that we have the time to move at a glacial pace. And so on both of those
points, yes, moving to a world where entire classes of vulnerabilities are rendered extinct,
yes, that will take time. But we also think that, for example, even near-term steps, like when
vendors disclose a vulnerability, they tag the CWE that goes back to that vulnerability so we
could actually understand the development weakness behind it and make more informed risk decisions
in the products that we are using. That is something that we can do today. Similarly,
we know that there are zero trust controls that can be adopted in the nearer term,
even in the absence of moving to a purist zero trust architecture.
And those are moves that federal agencies are investing in now. And so, yes, we are a ways away.
Can you give us a couple of examples there?
Yep, absolutely. So certainly making sure that we are enforcing least privilege,
that we are rigorously locking down administrator and privilege accounts, that we are enforcing multi-factor
authentication, phishing-resistant, for every user, every time. Those are steps that we can
take today that aren't easy, but certainly they also aren't going to take a generation to accomplish.
So when we see vulnerabilities in these things, a lot of the vulnerabilities are because
the administrative interface or the privileged interface is shared with the end user facing interface. And that seems like a really natural
point to go back to vendors and say, this is a thing that should be segregated by design. And
most vendors don't take that step because, you know, as far as they're concerned, a web server
is a web server. Like that seems like an easy thing to do. Is that one of the things that,
you know, in CISA's guidance to vendors
could be more emphasized? Because that guidance is rather lost in all of the other things
that they're not going to do. You know, I promise that I didn't plant this question.
As part of our Secure by Design efforts, we have, on the one hand, been focused on
really broad strategic guidance as best reflected by our
joint-seal Secure by Design papers with 15 or so other countries.
But a few months ago, we also started a new series called Secure by Design alerts.
And what these are is we see an intrusion, we see a campaign, we extract the secure by
design lesson to be derived from that intrusion, and then we push it out as a secure by design alert.
The very first secure by design alert, Adam, that we released in late November last year was focused precisely on web management interfaces and the need to both harden these interfaces, segregate management interfaces from end user interfaces, and ideally take them off the open internet as a default configuration.
We have done subsequent alerts on issues like, for example, eliminating default passwords,
and on hardening small office/home office routers. All of these are tied back to
actual intrusion campaigns that are actually causing harm around the world.
Now, Eric, look, while we've got you here, we've got to talk about a Politico report into the Joint Cyber Defense Collaborative,
and this was launched back in 2021,
and the idea is it's a government and industry sharing community.
Politico has just run a report claiming
that private sector participation in this community has dropped off.
Curiously, the report cites a couple of different reasons
for this, you know, mismanagement on the government's behalf. They also say that it's
understaffed. But curiously, they also say that part of the reason people are dropping back is
because they're worried about what they're calling, what is it, conservative criticism or something,
or conservative scrutiny, that's right,
when really what they're talking about are the fringe loonies going after organisations like the CTI League,
claiming that they are censoring political speech or something.
I don't know, it's all a bit nuts.
You're also quoted in this piece as saying everything's fine
and that this is all rubbish.
How do we explain the discrepancy here
between what Politico is reporting and what you're
claiming? Because you are definitely saying different things. Building a sustainable,
persistent, effective partnership between government and the private sector is hard,
complicated work. And we work through those challenges at CISA every single day. I have a
bit of benefit of perspective here because I had the privilege of working at CISA's precursor agency, which I left for a few years for the private sector and worked
for a big multinational bank and came back. And so I reflect a lot on how public-private
partnership used to work, which was largely conference calls and ad hoc meetings, entirely
reactive to the events of the day, largely a one-way street. And we've come
collectively, government and private sector, really around the world in extraordinary way
since then, where now we have persistent collaboration channels using Slack that are
running continuously, over 30 now with over 200 companies. We are now doing joint planning to try to get ahead
of the risks that are coming next.
And every single one of our cybersecurity advisories
that we push out now benefits from input
from the private sector.
And so I look at this from a fairly longer term view
to say, wow, this model is working so much better
than it has ever worked before.
Now, is it perfect?
Is it optimized?
Are we at, as we say in government, full operating capability?
Absolutely not.
We are working every single day to reduce friction, increase reciprocal value, make
sure that government and CISA is a place where companies and researchers feel like
they can work in a trusted environment without fear of
retribution and where they get value from their important time well spent. And that's work that
we're doing every single day. And I will tell you, we have an extraordinary team, not just here at
CISA, but across the US government, across our international allies, and with industry who are
trying to make this model the most effective it can be, because frankly, our adversaries aren't
waiting around and we've got to get it right. Now, look, I would rather pinch myself until I bleed than try
to pull together an initiative like this, right? I understand it's a very difficult thing. Do not
want. Okay. I get it. But it seems like in what you said there, you know, you are admitting that
this is not going perhaps as well as it could have. I mean, look, I understand that, you know,
doing things like this is quite hard.
You know, if there's been some pullback,
I guess what I'm curious to ask you about is like,
what's the root cause of that?
You know, because as I say,
Politico's thrown a bunch of stuff here.
Oh, people are scared of, you know,
quote unquote conservative scrutiny
where people are going to say
that this is some shadowy DHS thing trying to silence political speech or whatever. So there's that, you know, people are
pulling back because of that fear, or just that it's not running as smoothly as possible. Like,
where, you know, where's the room for improvement, I guess, is what I'm asking here. Because it
sounds like, you know, you're happy to say, yeah, look, you know, these things could always go
better. You know, what's the plan to improve this?
I'll have a couple of points.
The first is, you know, we have not, in fact, seen any broad pullback from our partners.
And we continue to work with companies and researchers every single day.
And, you know, we spent a bit of time talking about the recent campaign targeting Ivanti products. We have done over 100 notifications of victims of Ivanti intrusions based upon collaboration driven through the JCDC.
We're also doing joint planning as we speak around Volt Typhoon activity targeting U.S. critical infrastructure that would have been impossible without this kind of a platform. The biggest thing that we can do to improve is make sure that we are showing reciprocal value. Because every minute that an analyst at a company or a researcher spends working with the government, that is time that they could spend doing something else.
And so we need to make sure that that time is as well spent as it can be, that we are providing the platforms for them to share information effectively, that we are sharing back transparently, and that we are maintaining that environment of trust
so that that collaboration can be sustained.
That is going to be a different model
for different companies, different organizations,
different researchers.
And so we know we need to meet people where they are,
understand what they want to get out of the collaboration,
what success looks like for them,
and then build a model that's enduring over time. And again, that is hard, complicated work. We are
on the way to achieving it, but it's absolutely going to take time until we get it absolutely
right. Now, look, one thing that's been going on for the last, what, year or so, I guess,
is this Volt Typhoon campaign, or less than a year, this campaign out of China
that appears to have US government officials rattled.
I mean, the transparency we're seeing
in terms of government officials from the United States
actually talking about this, it's unusual.
I'm not used to government officials talking so openly
about a single group, you know, as opposed to like a general threat.
You know, what can you tell us about why it is that Volt Typhoon has everyone in the
US government so rattled?
Yeah, there's really two key reasons.
And I would just encourage listeners to take a look at the hearing just a few days ago
with the directors of CISA, the FBI, the NSA, and the National Cyber Director, the first time that they've ever all testified together, and it was about this exact
threat. And there are two reasons. The first is the stated doctrine of the People's Republic of
China to potentially launch destructive cyber attacks against the U.S. and our allies, and that
is a qualitatively different kind of risk than we have focused on with PRC actors in the past, where we've, of course, been focused predominantly on espionage and IP theft.
Now we are focused on pre-positioning for potential future destructive attacks.
You combine that with, as our director, Jen Easterly, noted at the hearing, the fact that CISA hunt teams have identified Volt Typhoon pre-positioning
on American critical networks across multiple sectors.
And so now we know, because they have said it,
that they may try to launch disruptive attacks
against critical infrastructure,
and we have actually found evidence
of them pre-positioning to achieve exactly that goal.
And so for us, that is a significant shift
in the threat environment
affecting America and our allies, and one that we really need to take urgent actions in response to,
both to make sure that we have the visibility we need to identify these intrusions, that we are
driving urgent investment in detecting and mitigating what we call living off the land
techniques, which as your listeners know, is extraordinarily hard for many enterprises to actually wrap their arms around and detect,
and that we are focusing on resilience, recognizing that we are not going to detect
every possible intrusion at every organization. And we also have to make sure that we can maintain
our critical functions under all conditions. Can you share with us any insight about the
types of organizations that Volt Typhoon is hitting? I mean, we've got listeners from all
sorts of industries tuning into this show. It might be useful to them to know that they're in
a sector that's being targeted. I mean, we've heard about telcos being targeted. Where else
should people be particularly aware of this group? We are particularly concerned about targeting
of sectors including transportation, energy, water, telecommunications, but we also need to
pull back a bit to understand these PRC actors' stated intent, which again is to cause disruption
of critical functions. And so what we would say is any organization that is in a sector that is providing a critical function to society or the economy, they are a potential target,
and they should be taking urgent action in response, including working with government to make sure
that they understand the threat and the risk. Eric Goldstein, thank you so much for joining
us on the show and being this week's feature guest. Great to chat to you. Great to meet you.
Thanks for the work that both of you do every day. We appreciate it.
Thanks, Eric.
That was Eric Goldstein
from CISA there
and of course Adam Boileau. Adam,
you know, some interesting stuff there.
Like, certainly they are not
demanding that people throw their Ivanti gear in the bin
and never plug it back in.
Yeah, exactly, right?
Like, maybe it's time for them to go one more step.
What do you think?
I would be in favour of that.
I mean, I feel like,
how many strikes do you get before you're out in sports ball?
So, yeah, I feel like Ivanti
probably should be in the bin by now.
But then they'll have to get rid of the Fortinets
and the Pulse Secures and the everythings.
And the Citrix and then maybe Azure.
And then the government just won't be connected to the internet.
That sounds amazing, though.
We'd be out of a job.
Yeah, I mean, yeah, we could both retire.
That would be great.
It was great that Eric could join us.
He was a real sport under questioning.
So we really appreciate his time. And Adam,
always great to talk to you, my friend, and we'll
check in again next week. Yeah, thanks very much, Pat.
I will talk to you then.
It is time for this week's sponsor interview
now with Dan Guido of Trail of
Bits, and Trail of Bits has been
busy. They've released their testing guide,
which is informed by a lot of the
code audit work that they do.
They don't just do fancy
engineering projects. They do do plain old audit work as well, but they try to do it a bit more
systematically than most, I guess. And they're also getting involved in DARPA's latest challenge,
which involves using AI to do bug discovery and remediation. And that's where I kicked
off this interview. Enjoy. Yeah, so we're participating in that. That's the AI Cyber
Challenge. Really ambitious project that falls in the footsteps of the Cyber Grand
Challenge from back in 2016 that we also participated in. The rules for that finally
came out. We have a breakdown of how the competition works on our blog. And I think
really importantly, one of the things that got clarified when the rules came out after the
announcement, they announced it several months ago. And this new recent development says that
you have to produce proofs of vulnerability with every single scoring submission. So not only are
we going to have systems that can automatically patch and harden
software, but also systems that can discretely identify unique vulnerabilities in them with the benefit of AI.
Yeah. Yeah. And I mean, what are you, what's your immediate feeling on what sort of fruit
this might bear? Cause I'd imagine it's going to, you know, we're going to find some,
you know, some useful stuff and also a whole lot of nothing, you know, it's usually how it goes
with this sort of stuff, but you know, I don't know. I'm an AI skeptic, so who knows? Yeah. So funny enough, we're majorly invested in AI. I have several
teams inside Trail of Bits that are all dedicated and focused on it. We have machine learning
assurance, we have a machine learning research team, and then I have a machine learning focus
group for business operations within Trail of Bits. But I am a skeptic as well. In our testing, and we've done lots of rigorous
testing with the UK government, in fact, they have an AI task force that's hired us to start
evaluating large language models for their effectiveness at cybersecurity tasks. So we
have a lot of internal benchmarks about how good these things actually are. And we're finding out that for things that aren't natural language-based, they're not that
good. Which is kind of funny because, you know, one of the major uses for like large language
models at the moment seems to be as an assistant for writing code, right?
That's just it. Like, where we see it is that it can really add a lot of value in terms of education, of helping me
rapidly understand a large volume of code. That might be really good. So if I'm a security auditor
and I have two weeks to audit a code base of 10 million lines of code, where should I target my
effort? Maybe I can use an AI to point me in certain directions that I would otherwise not
be able to discover. But the ability to just hand in a chunk of code and tell you where the bug is,
that's not how it's going to work. Probably not for the AIxCC. Maybe in 10 years.
So where we're seeing it apply a little bit better is sort of taking these traditional tools and
adding AI as an accelerant to certain behaviors.
Like I think the OSS Fuzz team
has talked a lot about fuzzing harness generation,
where you give it sort of a couple functions
and the AI can help you write a fuzzing harness
that targets that function.
Yeah, that's what I mean.
I've seen it being used to like do code creation,
you know, write me a little chunk of code that does this. And it seems quite good at that. But you're right. That's why I keep coming back to that thing where I keep describing it as an interface, as a better Siri, right? Not as a brain. It's glue, not a brain.
Yeah, I do think that the opportunity in education for AI is tremendous. And I know that DARPA is going to be doing a study of that pretty soon as well.
Yeah, all right.
Now, another thing that you've got going on is you have released the automated testing
handbook.
Tell us about that.
One of the reasons we wrote this book in the first place is that we want to be systematic
in our application of these technologies to our clients.
When people come in the door and ask us to help them start using Semgrep, now we've got the exact steps to go
through to help you go from zero to hero. One of the things that we've leaned into pretty hard over
the last few years is automated testing. We want to do variant analysis on every single bug that
we find on an engagement, which means I never want to find the same bug twice. If we find one bug in your code base, I want to find it everywhere else it exists,
not just in that code base, but in your whole company and maybe even the whole internet.
So there are tools that you can use to do that that are quite good now. Things like Semgrep,
CodeQL, and these high leverage fuzzing platforms, things like OSS Fuzz.
And the automated testing handbook is our guidebook for how we employ those
techniques. Anybody can follow it. We give it out to clients before they do engagements with us.
And we're adding new chapters every month. So right now it covers CodeQL and Semgrep.
Very shortly, it's going to cover fuzzing with AFL. And soon after, it's going to cover Rust
specific testing and Burp. We think this is really amazingly valuable, and it's something that really only Trail of Bits can pull together.
Just talking about this whole idea of being able to auto-discover the same bug in multiple locations in a project or whatever,
is that something that usually happens when you're on a job? Like you find a certain bug
and maybe it's because they have a dev
who just happens to do this one thing
in a slightly suboptimal way, shall we say,
and then you wind up finding the same bug in multiple places.
Is that typically how it tends to shake out?
Yeah, absolutely.
There's a real green field here when it comes to writing rules and embedding the variants
into tools like Semgrep.
So alongside the testing handbook, we've also put out our own rule bases, our own knowledge
bases for Semgrep and CodeQL.
So you can grab the Trail of Bits rule sets for each of these that are based on our experience
auditing code for clients.
That's got stuff like tests for Ansible, tests for machine learning code,
for instance, a lot of stuff around pickle files and model file formats.
And it covers Go and Python quite well. But a lot of the problems that a client will have
aren't necessarily that unique. We go through a learning process after every single client engagement to extract
out reusable common knowledge for whatever that project was.
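For listeners who haven't used Semgrep, encoding a bug variant as a rule looks roughly like this. This is an illustrative example only, not one of the published Trail of Bits rules; the rule id, pattern, and message are all made up:

```yaml
rules:
  - id: flask-debug-enabled
    # Flag any Flask app started with debug=True, which exposes the
    # Werkzeug debugger (arbitrary code execution) if it reaches prod.
    pattern: app.run(..., debug=True, ...)
    message: Flask app started with debug=True; disable debug in production.
    languages: [python]
    severity: WARNING
```

Running `semgrep --config rule.yaml src/` then flags every call site matching the pattern, which is how a bug found once on one engagement gets re-detected everywhere else without anyone manually looking for it again.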
I guess what you're saying here is this isn't about just like finding a bug on one client
job and then applying it to that client work.
It's about applying it to all of the work that you do.
Absolutely.
Yeah.
This stuff can be used
across the whole internet. And in many ways it is. When we do projects, we get hired for a lot
of open source projects. We just published two blogs about our collaborations with the OTF and
the OSTIF on lots of critically important infrastructure of the internet. And we employ
these testing techniques
that are based on knowledge we've gained from our clients to make sure those things are safe.
And it goes the reverse way as well. So you're doing some of this work for essentially NGOs
that are trying to make open source better, but you've been submitting some stuff through
to CISA as well. They reached out to the cybersecurity industry, I guess, for some ideas around open source security.
And you chimed in with this.
I mean, were you essentially talking along similar lines about how the open source ecosystem could be improved by trying to find similar bugs across the board in the open source ecosystem?
Yeah. So this was a blockbuster RFI that the government put out a few months ago. It was from
CISA, the National Cyber Director, National Science Foundation, DARPA, OMB. They all wanted
to know what can the government do to help secure open source software? And this is great because I
feel like a lot of the conversation in government has been dominated by threat intel and public-private partnerships and all these things that are easy to do but provide very low value, whereas securing the open-source software that we rely on is a really hard problem.
And they solicited for community input of, hey, what should we do?
I think Log4J was good for something, right?
Which is it got everyone in government all panicked about this stuff and forced them to actually think about it.
Yeah, you have to focus on hard problems, not just the easy ones. And it is a really hard problem; there's tons of software out there that's filled with bugs.
And I honestly think what happened is senior policy makers in governments all over the world got sat down by their cyber people and got a 10-slide presentation put in front of them about how open source works, in terms of how the supply chain for open source software works, and it scared them so much that they've actually got interested in this. That's just my personal little theory on why there are all of these initiatives kicking around, you know, SBOM and automated testing and whatever.
But yeah, anyway, I've interrupted you. Go on.
No, I mean, never let a good crisis go to waste. When these things happen, like, thank goodness that people actually
internalize the lessons from things like Log4j and have now thought through how are we going to
secure the software ecosystem that we depend on.
But this is a really hard problem. You need to find things that are high-value and low-effort that the government can actually do. And Trail of Bits wrote an RFI response to this. We published it on
our website. I think it's a fantastic vision for what security could look like between the
government and open source software. But it also explains some of the benefits of things like variant analysis
and how to employ techniques like Semgrep, CodeQL,
or large-scale fuzzers like OSS-Fuzz or similar systems
to help us bring down the volume a bit
in terms of the pervasive number of vulnerabilities that are out there.
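To make the variant-analysis idea concrete: once you've found a bug class, you encode it as a rule and rerun it everywhere. This toy checker (a hypothetical illustration using Python's standard `ast` module, not Semgrep or CodeQL themselves) flags every `pickle.load`/`pickle.loads` call in a piece of source, the kind of one-off finding a real rule would capture once and apply across all future engagements:

```python
import ast

# Toy "rule": flag any call to pickle.load or pickle.loads.
# Illustrative sketch of variant analysis, not a real Semgrep/CodeQL rule.

SOURCE = """
import pickle

def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)   # should be flagged

def parse(data):
    return data.decode()        # fine
"""

def find_pickle_loads(source: str):
    """Return line numbers of pickle.load / pickle.loads call sites."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in {"load", "loads"}
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == "pickle"):
            findings.append(node.lineno)
    return findings

print(find_pickle_loads(SOURCE))
```

A real Semgrep rule does the same match declaratively in YAML, with taint tracking and autofix on top; the point is that the pattern is written once and reused on every codebase.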
All right. Dan Guido, thank you so much for joining us.
It's a pleasure to chat to you as always,
and we'll look forward to doing it again soon.
Cheers.
Thanks, Patrick.
That was Dan Guido from Trail of Bits there.
Big thanks to him for that.
And big thanks to Trail of Bits
for being a long-term sponsor of this show.
And that's it for this week's podcast.
I do hope you enjoyed it.
I'll be back next week with more Risky Biz.
But until then, I've been Patrick Gray.
Thanks for listening.