Risky Business - Risky Business #714 -- Microsoft vs Wiz: pistols at dawn
Episode Date: July 25, 2023

On this week's show Patrick Gray and Adam Boileau discuss the week's security news. They cover:

The dust-up between Microsoft and Wiz
MobileIron/Ivanti 0day ho...ses Norwegian government agencies
That'll do TETRA, that'll do…
Microsoft finally agrees to offer decent logging without price gouging
Much, much more

This week's show is brought to you by Resourcely. Travis McPeak, Resourcely's co-founder and CEO, is this week's sponsor guest. Links to everything that we discussed are below and you can follow Patrick or Adam on Mastodon if that's your thing.

Show notes

Hackers exploited Ivanti zero-day to breach Norway's government
Citrix zero day exposes critical infrastructure, one provider hit | Cybersecurity Dive
Interview with the ETSI Standards Organization That Created TETRA "Backdoor"
Researchers Find 'Backdoor' in Encrypted Police and Military Radios
Microsoft attackers may have data access beyond Outlook, researchers warn | Cybersecurity Dive
Risky Biz News: Microsoft feels the heat, gives customers access to more cloud security logs
Risky Biz News: JumpCloud compromised by APT group
North Korean hackers breached a US tech company to steal crypto | Reuters
North Korean hackers targeting JumpCloud mistakenly exposed their IP addresses, researchers say | TechCrunch
Cyberattack on GitHub customers linked to North Korean hackers, Microsoft says
Latest North Korean hack targeting cryptocurrency shows troubling evolution, experts say | CyberScoop
White House secures safety commitments from 7 AI companies | Cybersecurity Dive
Renewable technologies add risk to the US electric grid, experts warn | CyberScoop
Statement on Labor's rush to renewables leaves Australia vulnerable to catastrophic cyber attack
Zenbleed
Firmware vulnerabilities in millions of computers could give hackers superuser status | Ars Technica
Satellites Are Rife With Basic Security Flaws | WIRED
Russia's vast telecom surveillance system crippled by withdrawal of Western tech, report says
Apple issues third mobile OS update after zero-click spyware campaign | CyberScoop
Apple slams UK surveillance-bill proposals - BBC News
Bill that Would Stop the Government Buying Data Without a Warrant Passes Key Hurdle
Kevin Mitnick Obituary - Las Vegas, NV
Transcript
Hey everyone and welcome to Risky Business. My name is Patrick Gray. This week's show is brought to you by Resourcely, a company that will help you wrangle your cloud. Basically, they'll help you provision and manage your cloud infrastructure so you too can provision secure by default Terraform for AWS, GCP and Azure. Your tears will still flow, but they will be salty
tears of joy and no longer salty tears of sorrow. Travis McPeak is Resourcely's co-founder and CEO,
and he is this week's sponsor guest. We had a great chat about how security teams have gone
from being roadblockers under the waterfall model to being ticket spammers under the DevOps model,
and how there are ways you can actually make things a little bit more harmonious,
you know, by like maybe having a Terraform management tool
instead of six million variations of that one simple module you wrote three years ago,
which, yeah, that's a pain point I'm sure a few people listening to this will be familiar with.
Anyway, that is coming up later after the news with Adam Boileau, which starts now.
And Adam, the government of Norway has had an incident.
Apparently 12 government agencies have been popped using an 0day
in MobileIron MDM, which apparently now is called Ivanti,
which sounds to me like an Italian cookware brand,
but there you go.
Yes, they've clearly had a bit of a bad day there in Norway.
This bug, I mean, having a bug
in your mobile device management solution,
like that's a bad day, full stop.
In this case, the bug is like-
In this case, it's a CVSS 10.
It's a CVSS 10.
And you don't see them very often.
I mean, you see plenty of 9.8s, but a solid 10 out of 10? Yeah, it's pretty funny when the likelihood part of the metric is maxed out. I mean, given that they're hacking the government of a sovereign nation, that probably counts for the highest score. They're hard to weasel out of for the vendor. I think the bug is like a straight-up auth bypass, and then onwards into being able to just make API calls into it and reconfigure it. And given these things have to be on the internet pretty much anyway, that, at that point, is a really bad day.
We haven't seen specifics yet.
Obviously, the vendor has patched, but there's still quite a lot of these things kicking around on the internet.
So, yeah, bad day.
Yeah, I mean, that's the thing with an MDM, right?
Is like, if you didn't put it on the internet, it wouldn't be an MDM anymore.
It would just be a DM.
Yes.
The whole point is that you can manage devices that are outside of the organization.
So, you're kind of just stuck with this one.
We have seen, and look, you know, it's unusual for us just to talk about a bug leading the show, but, you know,
it just falls into that category of things that you kind of have to put at the edge of your
enterprise network that are going to get you owned. And we have seen quite a few bad bugs over
the years in MDM solutions, but I feel like now it's just an even riskier time to have these sorts of bugs present in this type of gear
because people are just going wild exploiting them.
And it seems like one group jumps on it
and then everybody jumps on it and it's a bad time.
Yeah, I mean, anything that has to be on the internet kind of by necessity, things like MDM solutions, firewalls, VPNs, voice over IP, things that have to interact with the outside world, those kinds of messaging systems, they're all complicated, they're privileged by design, and they make for juicy targets. So, you know, I think we will absolutely see more of this kind of thing. There's been so much going on already, and the conversations we've had in the past suggest that every one of these things that meets those criteria of being on the edge, being privileged and being complicated is going to have a bad day, as we have seen.
Yeah, yeah. Now let's talk about something that isn't enterprise cybersecurity. Let's talk about TETRA. Now, TETRA is essentially a bunch of, I guess, what would you call it? Radio standards, right? For quote-unquote secure
communications over things like walkie talkies. It's used for things like public safety agencies,
you know, police, ambulance, fire, that sort of stuff. Limited military use as well. Although,
you know, I can't imagine that too many militaries are relying on TETRA radios for, like, combat operations and whatever, because, just looking into it a little bit, it's not exactly hard to see on a spectrum analyser, right?
For example, and yeah, probably just not the sort of thing
they're going to be using.
But apparently it's also quite widely used
for machine-to-machine communications
in critical infrastructure applications.
And a group of researchers has found
that the lowest tier
of encryption in TETRA, known as TEA1, is supposed to have an 80-bit key length, but the algorithm actually truncates that key down to 32 bits, which makes it trivially crackable. And now everyone's saying "intentional backdoor". Now, this whole thing is a lot of fun, right? Like, it's a very fun story, and Kim Zetter gets a special mention this week for talking to someone from the standards body that is responsible for TETRA. This guy's name is Brian Murgatroyd, and there's this great Q&A that she's published with him where he's like, well, yeah, it's export restricted, so of course the key is truncated. You know, just sort of saying the quiet part out loud, and also sort of downplaying it a little bit, but in a way that seems quite reasonable. Like, he points out that this thing's 25 years old and has had a reasonable track record. And look, we'll get to the reasons why I don't think this is a sky-is-falling moment in a bit.
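[Editor's aside: to put the truncation in perspective, here's a quick back-of-the-envelope sketch of what 80 bits versus 32 bits of key means for a brute-force attacker. The billion-keys-per-second rate is an assumed figure for illustration, not a measured one, and this is just arithmetic, not the actual TEA1 cipher.]

```python
# Rough brute-force cost comparison for TEA1's nominal vs effective key length.
# The 10**9 keys/second attacker rate is an assumed, illustrative figure.

nominal_keyspace = 2 ** 80    # TEA1's advertised 80-bit key
effective_keyspace = 2 ** 32  # what the researchers say it truncates to

rate = 10 ** 9  # assumed keys tested per second

seconds_effective = effective_keyspace / rate
years_nominal = nominal_keyspace / rate / (3600 * 24 * 365)

print(f"32-bit search at 1e9 keys/s: ~{seconds_effective:.1f} seconds")
print(f"80-bit search at 1e9 keys/s: ~{years_nominal:.2e} years")
```

On those assumptions the 32-bit search finishes in a few seconds, while the 80-bit search would take tens of millions of years, which is why "truncated to 32 bits" and "trivially crackable" amount to the same thing.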
But first, let's get your thoughts on this, Adam.
So I think this is a really interesting case study on what it looks like when you have standards that are engineered by organizations that want to manage whether or not people have access to it. Think of all of the examples that cypherpunks have given us over the years: oh my god, the slippery slope, or oh my god, the NSA, or keymat, or whatever else. And this is actually just a really interesting worked example of export-controlled cryptography: the key is long enough that other people can't crack it, but small enough that you can. And it's also a great case study in non-public ciphers.
So one of the criticisms here is that these are ciphers that have been in use for,
as you said, 25 years, and they're proprietary designs without peer review.
And in this case, a group of academics reverse-engineered some Motorola radios
that implemented it, pulled out the code, the software,
and then went through and looked for flaws. And they found some flaws in the crypto, but also in other kind of
more workaday aspects of running, you know, a trunked radio network, being able to introduce
new access points, and so on and so forth. And, you know, on the one hand, that feels like a great
justification for not having secret ciphers and not having closed designs.
But on the other hand, it's been 25 years.
Well, that's the thing.
It's like the last point that this guy makes in the Q&A.
Hang on, I'm scrolling.
Kim Zetter says, but no one really knew how secure it was,
so it has that reputation through obscurity, not verifiability.
And this guy says, well, obscurity is also a way of maintaining security.
And, you know, that is not the popular thing to say
in security circles, but dude also has a point.
He does.
And that's why it's such a wonderful story
because, like, in a way, he's pretty unapologetic about it.
Like, they handed off the design.
They got cryptographers to design it.
They handed it off to the British Security Services.
They came back with some changes.
One of them was the key restriction for the export version of it.
There were some other weird changes to some of the S-boxes that no one's really quite
sure about.
And one of the research cryptographers says like, this looks weird, but I don't really
see a way to use it. But who knows? Who knows why cryptographers do the things they do, Adam?
Moral of the story: they do.
Yes. But, as you say, this is mostly in use in the public safety sector, and in some control systems, as a radio system for communications for devices. And
you know there are certainly a bunch of interesting attack avenues.
And I'm sure there's areas in the world where this stuff is used,
where those export restrictions on strong crypto
have been legitimately useful,
being able to passively listen in to emergency services comms
in Russia, for example.
It would probably be quite useful.
But here's the thing, right?
When public safety
agencies have switched to using encrypted
radios that's actually been controversial
in the past because the
media uses those radios
to find out what the police are up to.
This is generally considered to be a good thing.
I mean obviously there's going to be a lot of ambulance chasing
and that sort of stuff as well
but whether these things
should be completely secure or not
is actually a topic of debate.
And in the case of industrial control systems,
okay, sure, it's not great being able to decrypt all of that traffic.
I mean, this person makes the point, too, that that particular technique only allows for decryption. It doesn't allow you to insert messages onto the network. But there are other techniques that these researchers have found where you could set up a
fake base station and overpower the real one. And look, it looks a bit complicated, but you and I both know, Adam, and more so you than me, because I know you've done this sort of work before: these industrial control system networks over wide areas are not physically secure.
No, no, they're not. Are you gonna mess around with spinning up custom TETRA base stations,
or are you going to get some bolt cutters,
cut the padlock off the box and plug into the thing?
Yes, yeah.
There is very much a degree of practical attacks, especially with in-field equipment that's way out in the middle of nowhere. And there is some value in not having to go hike out into the back blocks of Australia to bust into mining equipment, right? It's nice for the testers, or the attackers, to be able to sit in the office and do it remotely via radio, rather than having to get the spiders out of the cabinet first, or the snakes, or whatever else. But yes, it's a great example of what radio systems design looked like 25 years ago, and honestly, it feels like a standard that's done pretty well despite these compromises and trade-offs that they've had to make. The guy makes the point in the conversation with Kim that there are over-the-top mechanisms, crypto, etc., so you're not 100% reliant on TETRA's ciphers, and in the case of control systems running over it, they may reasonably expect people to run their own overlay networks.
Yeah, throw some AES on top of that and, you know, Robert's your mother's brother.
Yeah, exactly. These systems are just super interesting, though. I mean, I remember when, because in New Zealand we use the P25 standard,
the one that's more common in the United States,
and I remember when they turned that on and my radio scanner
for the local police stopped working.
I was quite sad.
But, you know, as you said, there's plenty of interesting reasons
for listening to this stuff.
But, you know, it only has to work so far as to make it not practical
to go to the local radio store and buy an off-the-shelf radio
and listen to the police, right?
And all of these options do provide that capability.
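[Editor's aside: the "run your own crypto over the top" idea mentioned above can be sketched like this. This is an illustration only: to stay within Python's standard library it derives a keystream from HMAC-SHA256 and adds a separate MAC, purely to show the shape of an overlay; a real deployment would use a vetted AEAD such as AES-GCM, not homemade crypto.]

```python
import hmac
import hashlib
import os

# Illustrative overlay encryption for payloads carried over an untrusted
# radio bearer (e.g. a TEA1-protected link you don't fully trust).
# NOT real-world crypto: use a vetted AEAD like AES-GCM instead.

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Expand key+nonce into a keystream of the requested length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt and authenticate one message: nonce || tag || ciphertext."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct

def open_(key: bytes, blob: bytes) -> bytes:
    """Verify the MAC, then decrypt. Raises on tampering."""
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered message")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

key = os.urandom(32)                # shared between the two endpoints
wire = seal(key, b"OPEN VALVE 7")   # what actually rides over the radio
print(open_(key, wire))             # b'OPEN VALVE 7'
```

The point is that the radio bearer only ever carries the sealed blob, so even if the underlying TEA1 traffic is decrypted or a rogue base station injects frames, the overlay's confidentiality and integrity checks still hold.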
Yeah, I mean, I understand that there's a temptation
to write breathless stories about how this is some huge
critical infrastructure vulnerability,
but I guess I just wanted to push back on that a little bit, right? And to say, well, you know, you can run something over the top, which is going
to give you more confidentiality, right? And a little bit more security. And also, you know,
if you're really just relying on the security of your radio network, when you don't have a physically
secure piece of infrastructure, like it doesn't change much, I guess is what I'm saying.
And I have seen, you know, people on social media and stuff
getting a little bit breathless about this one, that's all.
I mean, it's terrific research.
And I think they've made points well about closed standards,
being able to sit there being insecure for a long time,
but let's just not get carried away with this being a disaster, I think.
Yeah, one of the questions Kim was asking was, you know,
about whether the other countries that were buying export-controlled
versions of this stuff to operate, you know,
whether they should have been better informed about the, you know,
80-bit versus 32-bit key material, for example.
And you can't help but feel like, well,
if you're buying your radios from the European Union and, you know, you're on the list that can't buy the full-strength one,
then that kind of tells you what you need to know, even if you don't know the specifics.
Now, let's move on.
Now, of course, let's talk about the dust-up between Wiz and Microsoft.
Of course, last week we spoke about this intrusion into a bunch of, you know, Outlook Online accounts.
Chinese APT crew obtained, mysteriously obtained, or as Microsoft put it, acquired a very powerful key,
which they used to forge access tokens into people's mailboxes.
Wiz did a bunch of really interesting research, actually, saying, well, that's not all you can do if you have a key like that.
And you can mint access tokens for all of these other services as well.
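[Editor's aside: the "minting" described here is conceptually just signing your own token. The sketch below is heavily simplified: it uses a symmetric HMAC-signed JWT with invented claim values, whereas the real incident involved an RSA signing key and Azure AD's token formats. It only illustrates the principle that whoever holds the signing key can mint a token for any identity and any audience.]

```python
import base64
import hmac
import hashlib
import json

# Simplified illustration of token forging with a stolen signing key.
# Claim names/values and the key are made up for the example.

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(signing_key: bytes, claims: dict) -> str:
    """Produce a signed header.payload.signature token."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = hmac.new(signing_key, f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

stolen_key = b"the-key-the-attackers-acquired"  # placeholder value

# Nothing stops the key holder choosing the audience: mail today,
# some other service tomorrow.
for service in ("outlook", "sharepoint", "teams"):
    token = mint_token(stolen_key, {"sub": "victim@example.gov",
                                    "aud": service})
    print(service, token[:40], "...")
```

That loop is the whole of Wiz's point: the scope of a stolen signing key is whatever set of services will accept its signatures, not just the one service the attackers were observed using.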
So great research, but they went a little bit overboard on the PR by saying, oh my God, this is so much bigger and deeper than we knew and sort of heavily implying with their commentary around it that these attackers did mint these types of access tokens
and do this sort of stuff.
And that has predictably elicited a response from Microsoft
along the lines of, this is not based in truth, blah, blah, blah.
So, you know, this is just a case, I guess,
of two companies behaving quite badly
in that Wiz has done great research
and then sort of, you know, overdone it in the storytelling part.
And then Microsoft is just, you know,
weasel mode, engage.
Like this is horseshit, right?
And I mean, that's a fair summary
of what's happened here, isn't it?
Yeah, I think it is.
And I imagine the people at Wiz
who did the technical research
probably wrote it up sensibly and then got PR, got a hold of it.
Yeah, marketing got a hold of it and that was that.
Went a little big.
I mean, I have a lot of sympathy for Wiz in this because, you know, we don't get to see the insides of cloud infrastructure.
We are reduced to inferring the property, the security properties of these systems.
Well, you know why?
Wiz knows all about it because they all used to work at Microsoft, man.
Yes. And to be honest, we kind of need that, right? Because we don't get to see the insides of these environments, we do need people to go have a rummage around, and people who worked there are well equipped to do that. But yeah, that kind of line between "is possible" and "actually got used for badness" is a thing we've seen a bunch of security firms kind of blur for publicity reasons, because they're not the same thing, right? Just because you can doesn't mean the Chinese APT did it, yet. And, you know, I have a lot of sympathy for Wiz in this, and Microsoft doth protest too much, I feel.
Yes, they doth. Now, look, staying on the same topic,
Microsoft is gonna give you free logs. Now, they've added 31 categories of logs to their lower-end cloud computing tiers, and they've extended the default retention on the standard logging tier from 90 to 180 days. This has come after years of pressure. I mean, we've been banging this drum pretty loudly for years, right? Then CISA picked it up and kind of ran with it, so there's been a lot of us out there saying this is ridiculous. And finally... you know, it was a big thing after SolarWinds, where the true extent of what happened during that campaign wasn't really clear because of this logging issue.
And it really was the case that the next time it happened, they were going to get hammered for it.
And that's what's happened in this case.
They've been hammered and they finally capitulated and hooray.
Yeah, absolutely.
I think this is great progress, even if not for the greatest reasons.
One of the things that came out of this Chinese APT intrusion was that the US government agency only spotted the keys being used to sign weird authentication tokens because it had that extended logging. And everybody else who didn't was left with not enough information to determine what happened, let alone spot it in the first place. So, you know, Microsoft's opportunity to argue that it had sufficient logging
kind of evaporated with that.
You know, if they had shown up themselves and said, hey, we got broken into, that would be one thing. But the fact that it took a customer to spot it, and a US government customer at that, whatever it was, the State Department or something,
like, it shouldn't get to that point before something changes.
And I'm glad that, you know, we are now seeing Microsoft
be a little bit more sensible about this.
But I mean, two words that shouldn't go together
are premium logs.
Yes, agreed.
What the f*** is that?
Yeah, upselling people on logging
is not a thing that, you know,
your salespeople should be doing.
And if you're doing that,
then you're going to end up like this,
you know, getting wrecked
and then having to walk it back quite publicly like Microsoft are now.
Last week we spoke about the JumpCloud breach, and I mentioned that Catalin Cimpanu had a source who was saying it was North Korea. A single source, so he wasn't sure. But by the time his newsletter went out, he'd picked up another source, he'd nailed it down, and he got it out there. That has actually turned out to be quite right. Reuters has confirmed it.
I think even Mandiant has come out and said so as well.
And very nice of the Reuters staff
to link back to Catalin's story on this.
So that was nice.
Big Catalin W on that one.
Yeah, absolutely.
And the fact that it's North Koreans going after,
you know, JumpCloud as a supply chain vector
into crypto things,
also a good reminder for everybody who's involved in hosting
or operating or running crypto infrastructure
that the DPRK are going to come for you
and help themselves to your customers.
Yeah.
And Zach Whittaker for TechCrunch has a nice report up,
I think speaking to some more Mandiant people.
And yeah, I mean, the North Koreans who did this
just made a couple of mistakes
that made the attribution pretty easy.
Yeah, it looks like they had some issues
with one of their VPN components
and they ended up just like straight up
logging in from Pyongyang.
So that's a little bit awks.
Yeah, yeah.
VPN not working, just turn it off.
I mean, you know, you've got to ask, though. It was always going to eventually get attributed to them.
Well, like, maybe they're at the point where it doesn't matter anymore. They can just do it. What are you going to do, extradite them?
Yeah, exactly right. Exactly. It's such a weird thing, right, this whole North Korea stuff, because they're actually out there using really innovative
techniques, you know, doing supply chain infiltration and whatnot. Like we've got another
one here. This one's from John Greig over at The Record, but, you know, they're doing some pretty
interesting stuff on the old GitHub there too, you know, and this is partially social engineering
and whatever, but it just shows that they've got a pretty well-rounded capability. Yeah. I mean,
I think one of the things
that North Korean hackers show is that,
you know, given evolutionary pressure, right,
you're going to get your family shot by anti-aircraft cannons
if you don't hack this stuff,
you end up with very, very workable,
productive, useful techniques, right?
That's bleak, man.
But anyway, I mean, you could look at it the other way
and say, you know, when you've got a place that's as indoctrinated as this and there's so many true believers, you could get at that. But sure, let's go with dismemberment by anti-air cannons. Thanks, thanks for putting that in my head.
Something wrong with you, man. Anyway, keep going.
Yeah, like, you know, they are very much outcomes-based. You know, they reward success and they use what works, and it's that kind of blend of, you know, we don't care about supply chain, we're not constrained in the way that anyone else is.
Norms, what are they?
Yeah, exactly, right?
That willingness to just do what it takes to get the job done.
I mean, I don't want to hand it to them, but yeah.
But I mean, that's the thing. Imagine if they were actually out to create, quote-unquote, real problems, right? And it seems like they're going after banks less because... I mean, give SWIFT some credit here, right? Because the North Koreans were going after SWIFT terminals and whatever, and SWIFT actually put in some work, banks actually put in some work, to sort of lock that stuff down. And the North Koreans, as best I can tell,
they don't go after that stuff anymore
because it's just gotten too hard.
So now they're all in on cryptocurrency.
And I think a lot of people just think,
you know, just observe this and it's like, it's fun.
It's like watching sports, right?
Because it's cryptocurrency
and it's sort of disconnected from real,
quote unquote, real enterprise,
real business, real government, right?
I mean, they're still doing the espionage stuff as well, but they're so active in this cryptocurrency stuff. And it just doesn't feel... you know, can you imagine if they just turned their attention to doing destructive attacks using some of this same tradecraft? Like, we'd be really worried. Instead, we get to watch all of this cool stuff and not really stress about it, which I find a really strange situation. But look, walk us through this GitHub stuff,
like what they're doing there.
So their campaign on GitHub is somewhat of an extension,
it feels like, of the work they've done,
putting out fake job ads to infosec people
and trying to lure in either developers
or security researchers to collaborate on GitHub somehow
and then push out some code
that if it ends up being used inside an organisation,
then it's going to give North Koreans access,
which is a smart combo of supply chain social engineering and of creating backdoors in the supply chains of NPM libraries or Perl modules or whatever.
Well, it's like social engineering,
but there's this element of community infiltration to it which just makes it really interesting
yeah it is it is really interesting i'm going after people you know kind of in the social circles
of you know people who are building stuff at crypto companies so as to write you know get
into their software supply it's just it's it's smart and it's well done and there's also you know nerds
um who who you know think of themselves as smart don't miss like they look down on people who fall
for social engineering like normal phishing style things so they think they're not gonna
get convinced by targeted attacks and yet it's i know it's just very slick and you know i i
appreciate it in a way that i feel bad doing so. Yeah, yeah, no, totally.
And AJ Vincennes over at CyberScoop,
he did a really fun write-up actually.
And this is the sort of story I normally hate.
It's normally the sort of story
that's kind of feels a bit like filler.
But the headline is
latest North Korean hack targeting cryptocurrency
shows troubling evolution, experts say.
And you think, groan.
And then you read it and you're like,
actually, yeah. And I mean, this piece kind of inspired my rant there, about how, no, they are legit doing some pretty innovative stuff. And I mean, I don't find it troubling, because they're stealing made-up internet money.
Yes.
And, you know, if they want to steal apes, go right ahead, have at it. But, you know, I guess the point that this piece makes
is that the tradecraft in itself is alarming.
Yes, they have definitely learned a lot.
I mean, it wasn't that many years back
that you'd look at North Korean attackers and laugh, right?
You'd be like, ha, ha, ha.
Although Dmitry Alperovitch and I have had big arguments about that
because he says they've always been innovators
and it's just people have just started paying attention lately.
Yeah, that's also entirely possible, yeah.
Well, but then my counterpoint is like it's a matter of scale, isn't it?
Like they might have been innovative,
but they weren't operating at this same level.
So I think we're both right.
But that's just what I think.
I mean, they're certainly onto a winner target
in the cryptocurrency world because, as you say,
to a lot of us it feels kind of like a victimless crime, stealing someone's internet apes. And I feel so bad making that joke, but yeah. But it makes real money to fund a real nuclear weapons program, so, you know, it's not victimless. But as you say, compared to going after the Bank of Bangladesh's SWIFT terminals...
Yeah, right.
It feels victimless compared to stealing hundreds of millions of dollars from a poor country.
Yes, exactly. Lol. That's pretty much it in summary.
I mean, I do wonder, looking back at this years in the future, how much of a brake North Korea will have put on the rise of cryptocurrency stupidity, and maybe their crime will have saved us from crypto becoming too big and too powerful and too big to fail.
That's certainly an interesting glass-half-full take there.
Yeah, there you go. See, I can do it. I can do glass half full.
Yeah, but I mean, you know, you're saying this disaster is a positive, which is a really interesting twist on glass half full, I guess.
That's quite unique.
Now, look, this next one, man, like we had a similar story.
It was in our newsletter.
Catalin wrote it up.
I think I actually cut this one from the Risky Business News podcast script
because, okay, so Google has spun up some sort of red team
to look at, you know at the security of AI models.
Okay, this to me seems like a bit of a PR story.
So when it found its way
into the Risky Business News podcast script, I cut it.
By the way, if you are not subscribed
to the Risky Business News RSS feed
to get those podcasts three times a week,
there's a bulletin read by an ex-ABC news presenter.
It's prepared by Catalin. It's edited
by you and me, Adam. It goes out three times a week, and you're going to hear a lot of news
in that news bulletin that you will not hear on this show, okay? It's definitely worth subscribing
to. But anyway, so I cut it from there because I'm like, okay, yay, Google has got, you know,
red teamers to look at LLMs, you know, whoop-dee-doo. And then, my God, like, I couldn't get away from that story in the mainstream media over the next couple of days because, you know, it might have
been a PR thing, but it just landed everywhere. There's so much of this public fear about AI.
And now, you know, we've got this story here from Lindsay Wilkinson over at Cybersecurity Dive,
looking at how the White House has secured
safety commitments from seven AI companies.
Yeah, I mean, I read that headline and I was also a bit kind of, eh, you know, that's not very exciting. But like it or not, AI stuff is important, and the lack of transparency in it is certainly a concern. And so in this case, you know, we're talking about the White House securing commitments
from Amazon, Google, Meta, Microsoft, et cetera, et cetera, to do a bunch of things to improve
the robustness of, you know, the controls around AI systems and more transparency about
things that go wrong with them.
And, you know, I don't know how much is PR and how much is real, but like it or not, we're stuck with AI systems getting better and crazier and more involved in our stuff.
Yeah. So, okay, the thing that I find interesting about this is that regulators are jumping on it early. So you think back to when Facebook was a new thing. And then
all of a sudden, you know, we went from no one having a Facebook account to basically everyone
having a Facebook account. And there was no White House investigation to look into, you know, what
could be the problems that this thing is going to create for society? How should we think about
regulating this sort of thing? So the reason I find this one interesting is that it's a new technology that does have massive implications for things
like economic productivity, labor, privacy, you know, there's security concerns as well,
and governments are all over it. Now, is that for the smart reason, in that they've learned that just allowing these huge technology phenomena to go completely unchecked is a bad idea,
or are they doing it for the stupid reason
that they think it's going to be like the Terminator
is going to come and get us, right?
So that's what I can't decide
is if this is happening for the smart reason
or the stupid reason.
But it is interesting.
We're seeing announcements from companies like Google
saying, look, we've got a red team and we're going to be looking at model safety
and stuff. And I think that is in large part due to, you know, the fact that governments are paying attention and are threatening them with the regulation stick.
Which is, it's a very good point you raise, that we are seeing this discussion so early in the life cycle of a new technology, as compared to social media or whatever else.
When you think about the disruption from video streaming,
online video streaming to movie industries and whatever else,
it's weird for it to be happening this early.
Well, and it's always been like the regulatory,
like the government way of thinking about internet-based technologies
has been, no, no, don't apply regulation
because you'll stifle innovation.
Just let it all happen.
Worry about it later.
And, you know, they're not doing that with this stuff at all.
Yeah, and whether it's the good reason or the dumb reason,
I mean, that's a hard one.
Well, maybe it's both.
Maybe. I think it's probably smart people using the dumb reason to get it done.
I mean, that's also entirely possible
because there is so much pop culture around.
AI's gone rogue and smart robots and killer robots.
And I think also the idea of non-human intelligence
does strike us right in some primitive heart of, you know,
our being proud about the fact we came from organic evolution
instead of being designed.
So, you know, it's just a very personal thing for humans
to feel like a machine can, you know,
make up Jay-Z lyrics just like Jay-Z.
Like that's a...
Yeah.
I mean, I don't know.
I mean, to me that just seems like a parlor trick anyway. Everyone's heard my thoughts on these LLMs. But, you know, I'm a little bit biased on this, because I've been writing for a long time professionally, and I just think it writes terribly and it gets facts wrong. And I'm like, you know, it can generate stuff that sounds like someone who's not very good at writing wrote it. And, you know, "oh my God, we're all gonna die" isn't what I get from that, you know.
Yes, I mean, that's it. We do use AI tools in the production flow for Risky Business these days.
Yeah, but they're not language-based tools, right? So just before anyone thinks that... we certainly don't use any LLMs to do any writing, but we do use an ML-based tool to enhance bad audio when we have to deal with it. We don't even use that in this segment because, you know, keep the machines away from it.
But it's funny hearing that hallucinate things. You know, you gave us an example at one point of like a dog barking in the background being turned into...
It sounded like the guy was barking, yeah.
And there was a great one where I interviewed H.D. Moore
and he was typing on his...
While I was asking him questions and stuff, he was typing
on his keyboard and
the model actually hallucinated
syllables with every keystroke.
So it was like...
But in
H.D.'s voice.
It sounded like he was speaking in tongues.
What else have we got here?
Okay, so now we've got people raising a legitimate concern
in the absolute dumbest way possible.
How unusual for our industry.
Oh, my God.
So we've got a headline here from CyberScoop,
which is "Renewable technologies add risk to the US electric grid, experts warn". And look, in it there are people raising some very valid points about a lot of these solar inverters. And we did see, over the last month, there was one of the companies in China, Sungrow, was it? Ah, anyway, a company that makes solar panel controllers, they had some horror show bugs in the box that controls the panels.
And that's really not what you want, okay,
because that stuff can make fires and make things go bang.
So that's bad.
And, of course, a lot of this technology for controlling rooftop solar
at residentials, like I have, and also solar farms,
a lot of this stuff is connected technology
and it's made by Chinese companies.
And people are making the point that like, look, there's so much solar now in these
developed economies, you know, maybe having Chinese technology running these things is something we
need to think about, you know, whether that's going to be banning the import of certain technologies,
or finding out ways to mitigate the risks that come with them, you know, this is something that
deserves to be raised as an issue.
So first of all, I just want to get your thoughts on that, Adam.
Yeah, I mean, I think, you know, much like the conversation
around 5G mobile networking and communications,
building your national infrastructure for, you know,
all of the things in society on top of a technology stack
that you don't necessarily control or trust
or have concerns about the origin of, that's a thing that you're going to regret. So in that respect,
you know, especially, you know, you look at the amount of renewable energy we're aiming towards
and how much of that, you know, comes from China and, you know, we're pretty price sensitive as
well about these things. You know, it's certainly worth having a conversation about
in the same way that we did around mobile networks
and other bits of important infrastructure.
But we're still quite a long way from
you can turn off the power grid in America.
Yes, that is right.
Now look, a few days after that CyberScoop story ran,
we get a release from an Australian senator
who is the shadow minister for cybersecurity.
And I'll give you the title of the release.
Labor's rush to renewables leaves Australia vulnerable
to catastrophic cyber attack.
Now, this is James Paterson.
He is basically, if the conservatives were to win the next election,
he would have Claire O'Neill's job.
Now, I've got an interview with Clare O'Neil,
our Home Affairs and Cyber Minister, that is going out tomorrow afternoon.
I spoke to her on Friday, her and Ciaran Martin,
the founding CEO of the UK's NCSC.
That is a terrific interview.
She's a tremendously talented woman who got sort of parachuted into the cyber portfolio at, you know, next to no notice, and has done a terrific job.
And then you contrast that with this statement, where the guy is saying, you know, that we're vulnerable to a catastrophic cyber attack.
Well, I mean, are we?
I mean, I agree that it's worth thinking about.
Yes.
Right?
And also the rush to renewables, like taking a poke at renewables in it.
Like he wrote for the Institute of Public Affairs here in Australia, was editing one of its publications or whatever.
It's a very conservative think tank that just shits out
like the stupidest stuff you can imagine.
And you read this and it is just brain worms. And I think, James, you probably listen to this show. One thing people in security don't like, and it doesn't matter what political stripe you are: don't turn this into partisan shit. Just don't do it. Everyone will hate you.
Yes, absolutely. No one likes a FUD merchant, and it's even worse when it's, you know, kind of politically integrated.
Yeah, no, so this is just absolutely his copy-paste release, but with added shade at renewables. And, you know... anyway, sorry, I mean, I've been ranting now. But when you read this, did you also groan and roll your eyes?
Yeah. I don't even know the guy, and Australian politics are not my first love.
Yeah, I was just like, what the hell sort of trash is this?
And just like, shut up.
But the fact that he could be the next Cyber Minister,
that's also problematic.
Yeah.
I mean, Malcolm Turnbull, the Australian conservative Prime Minister,
he had Alastair MacGibbon as an advisor, who you now work with at CyberCX.
Alastair MacGibbon did a tremendous job.
He was not a government minister, but he worked for the Prime Minister.
Did a tremendous job.
The conservatives here have done good work in cyber.
They did some good stuff.
They were pretty transparent.
They had a good outlook.
There was good advice going around.
This is a misstep.
That's all I'm getting at.
Yeah, yeah, exactly.
I mean, Al Mack and I certainly don't agree on our politics,
but, you know, he's a solid cat and he understands
how computers work and how they get wrecked.
Because this isn't a partisan thing.
And that's really what I'm getting at, you know?
It doesn't matter what your politics are.
Anyway, that's enough of me just ranting and raving.
Let's talk about Zenbleed.
Tavis Ormandy, back with another hit.
Oh, yeah, this is such good research,
as you would expect from Tavis out of Google.
So Tavis has been fuzzing AMD CPUs
to try and find weird microarchitectural bugs and things,
given the long history of finding that kind of bug over the last few years.
And they came up with a rig that would run fuzzed instructions on a real CPU in parallel with a
virtualized CPU that has the ideal behavior, and then compare the results to try and find things
that were strange. And he found a doozy. So this is, he's called it Zenbleed. It's an attack where you can leak data from other parts of the CPU
across virtual machine boundaries, across process boundaries.
We haven't yet seen an implementation that does it in a web browser,
but I'm sure that's what Tavis is spending his weekend doing.
That's what he's thinking about in the shower.
Yeah, exactly, exactly.
It involves a flaw in how,
in the case of speculative execution gone wrong,
AMD rolls back a certain,
like clearing some particular register state
such that it doesn't actually clear it
and you can leak material out.
And this is a relatively high bandwidth channel.
It's not a timing-based thing like we've seen with other ones. You can get 30 kilobytes a second off the CPU out of its registers and just kind of watch stuff going on. And the relevant instructions that he found are used in things like hardware-accelerated strcpy or memcpy, a bunch of operations that are very, very commonly used. And GNU glibc's optimized string routines hit the vulnerable instructions in normal operation.
So it's widespread: it applies to a whole bunch of AMD processors,
from desktop to data center,
and there's exploit code out there.
So if you were an AMD running cloud provider at the moment,
you're going to want to start patching.
AMD has released some microcode patches for this,
but you're going to need cooperation from your operating system
and your BIOS and whatever else to apply them.
So it's fiddly to fix, very powerful, quite workable,
and yeah, it's cool, cool research.
Yeah, so I think this is most relevant to those cloud providers who might be offering, you know, hypervisor-based access to shared hardware.
Yeah, yes, exactly. Or, you know, anyone who relies on a virtualization boundary for real security things.
I mean, the idea... what a great idea. And, like, usually when we've seen bugs like this, often there tends to be an equivalent thing on the other platform, because, you know, hardware engineers tend towards similar solutions to similar problems, and they're both implementing the same x86 instructions.
They make similar mistakes, right?
It also does vary a bit between operating systems,
how vulnerable they are.
So for example, on OpenBSD,
their compiler is kind of less optimized,
so it doesn't generate the very high-speed
string copy or whatever else
that uses this particular hardware,
you know, these very long registers,
to do hardware acceleration, et cetera.
So, you know, the applicability of the bug does vary a bit
based on compiler and operating system dependencies.
But overall, there is a lot of glibc out there.
I don't know what it looks like on Windows.
These are pretty serious bugs nevertheless.
Yeah, yeah.
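The differential-fuzzing rig described above, running randomly generated operations through the real CPU and through an idealized reference model, then diffing the results, can be sketched roughly like this. This is a toy Python illustration of the technique only: Tavis's actual harness drives real AMD silicon, and the "injected flaw" here is a loose stand-in for the zero-upper rollback bug, not its real semantics.

```python
import random

REG_BITS = 256   # model a 256-bit vector register
HALF = 128       # the "upper half" that zero_upper should clear

def reference_step(reg, op):
    """Idealized CPU model: zero_upper always clears the top half."""
    kind, val = op
    if kind == "load":
        return val
    # kind == "zero_upper"
    return reg & ((1 << HALF) - 1)

def hardware_step(reg, op, mispredicted):
    """'Real' CPU model with an injected flaw, loosely modeled on Zenbleed:
    a zero_upper rolled back after a misprediction leaves stale data in
    the upper half instead of clearing it."""
    kind, val = op
    if kind == "load":
        return val
    if mispredicted:
        return reg  # bug: stale upper bits survive the rollback
    return reg & ((1 << HALF) - 1)

def fuzz(trials=1000, seed=7):
    """Run random op sequences through both models, collect divergences."""
    rng = random.Random(seed)
    mismatches = []
    for t in range(trials):
        ref = hw = 0
        for _ in range(8):
            if rng.random() < 0.5:
                op = ("load", rng.getrandbits(REG_BITS))
            else:
                op = ("zero_upper", None)
            ref = reference_step(ref, op)
            hw = hardware_step(hw, op, mispredicted=rng.random() < 0.1)
        if ref != hw:
            mismatches.append((t, ref, hw))
    return mismatches

found = fuzz()
print(f"{len(found)} divergent traces out of 1000")
```

The point of the differential setup is that you never need to know in advance what the bug looks like: any trace where the silicon disagrees with the ideal model is interesting by definition.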
Look, we don't really have a great deal of time
to talk about this one,
but Matt Burgess has a write-up for Wired about how satellites have typical sort of IoT-style security bugs in them.
I mean, no real surprises there, are there, Adam?
No, but I guess the thing that's important about this particular set of research,
which is out of some German universities, was they actually did it.
They found some satellites on orbit that are available for people to test with,
and they remote code exec'd at least one of them,
which is like an Estonian research one,
which, in its defense, had a mechanism on its command and control channel
where you could just like read and write arbitrary memory unauthed
if you could speak the relevant protocol, which is somewhat standardized.
So, you know.
So it's like sort of having a debugger on a port, right?
It's kind of like having a debugger.
But they did also find some like legit memory corruption
and stuff in other satellite bits of software and things.
So they looked at three different sets of satellites
and found bugs in all of them.
So just nicely practical research instead of theoretical stuff.
Meanwhile, Gavin Wild from the Carnegie Endowment
for International Peace,
and Gavin's appeared as an interviewee
in Seriously Risky Business in the newsletter before,
he has published some research looking at Russia's problems
running their so-called SORM surveillance system.
So obviously a lot of this stuff is run with like Nokia and Ericsson kit
and it's just atrophying.
It's not working as well.
The Chinese stuff isn't as good and yeah
Russia apparently is having a bit of trouble maintaining its, you know, surveillance dragnet on its telco system.
Yeah, I imagine that both Finland and Sweden, as, you know, Nokia and Ericsson respectively, probably don't feel so great about supporting Russian systems at the moment. And as you say, the bit rot is starting to set in.
They're having capacity issues.
They're having challenges.
We're talking about deploying a 5G mobile network, for example.
Getting the relevant gear is becoming difficult for them.
And so running a functional national surveillance network,
when you can't even build the regular network first
before you can surveil it,
yeah, it's difficult times for them.
Yeah, Apple has issued another patch
covering the 0day used in that campaign targeting Russian iPhones,
which the FSB attributed to the NSA, and said that Apple was complicit, and Apple denied it, and whatever.
So yeah, Apple's still patching Triangulation-related bugs, so that's fun.
Now, also, I just want to draw attention to the lead paragraph in this BBC story.
The headline is "Apple slams UK surveillance bill proposals". The lead says: Apple says it will remove services such as FaceTime and iMessage from the UK, rather than weaken security, if new proposals are made law and acted upon.
Now, this is being repeated everywhere
by everyone saying if the UK passes
the Investigatory Powers Act changes, right,
that Apple's going to shut down these services.
That's not what they're saying.
They're saying if they're asked to do something stupid
once those powers come in,
they will pull their service rather than comply.
This is distinct from the Online Safety Bill,
which is the one about client-side scanning for CSAM, but Signal has said the same thing in that context. Which is not "we're leaving if the law passes", but "we are leaving if the law passes and then the government asks us to do something stupid".
This is an important distinction that a lot of people covering this have been missing, so I just wanted to get that out there.
What else have we got here? Yes, the Fourth Amendment Is Not For Sale Act has passed the House Judiciary Committee. This is the act that would stop law enforcement agencies in the United States from buying data from data brokers, basically.
Yes, they'd need some kind of warrant, and there would be a process around acquiring data commercially that they would have needed a warrant for if they weren't getting it commercially.
And finally, Adam, some really
sad news, actually. Kevin Mitnick has passed away. I'm sure, you know, everyone listening to this would have seen that by now, because the reaction was huge. He died at 59 from pancreatic cancer. And, you know, there was a time when Kevin and I were in pretty regular contact.
The last time I spent good, you know, quality time with him
was he found out I was going to be in Vegas by myself the Monday after DEFCON.
And so he said, hey, you know, I'll come and pick you up
and I'll take you out to dinner and just keep me company.
And he did.
And look, Kevin's a really, you know, was a really misunderstood person. He was a lot of fun, a lot of fun,
very, very smart and quite a nice bloke, to be honest. But there's this line from his obituary,
which I loved, which said, to know Kevin was to be enthralled, exasperated, amazed, amused,
irritated and utterly charmed in equal measure. And, you know, I've got Kevin Mitnick stories for miles. You know, I had him track down Christopher Boyce, of The Falcon and the Snowman fame, when he got out of prison. Kevin found him, and he told me that, you know, he was going to give me his details once he found him, and he did this for me. And then, he says, he got excited and rang him up, and the guy just shut him down and said, no, I don't want to talk to you. Eventually I interviewed him years later, because he had a book coming up. But I was never sure that that was true, if he actually did find him and did talk to him. I had no way of verifying that. And Kevin was a prankster, right? And you could never quite be sure with him when he would say something. I'll give you another example.
About nine years ago,
it was reported that Kevin spun up a thing called the Mitnick Vulnerability Exchange
and he was selling Oday and stuff
and it was this huge controversy.
He told me,
and I was sworn to secrecy on this
because he wanted to be able to do this sort of thing again
and that doesn't matter now because he's gone.
He told me the entire thing was an elaborate troll.
That is a beautiful thing.
And I'm like, why would you do that? He's like, because I think it's really funny to trick journalists, you know. And now, is that true? I also don't know. But I had some great times with Kevin. I'd known him something like 20 years, you know, and we weren't like BFFs or anything, but we've hung out in Byron, man, we've hung out in the States. And I did meet his wife, who is pregnant with their
first child. I met her just a couple of times in Vegas, just bumped into him and she was with him
and she seems like a really lovely person. And my thoughts go out to her obviously, but yeah,
look, a really, really misunderstood guy who was, you know, it's hard to describe it,
but like in many ways he didn't care about how he was perceived and in other ways he really did.
But the guy that you would get when you took him away from a conference, took him away from the
security scene and actually hung out with him, very genuine, very warm, someone who was actually
very easy to connect with, and you would have real talk with the guy. Just a great guy, you know. And as I say, he turned into a bit of a hate figure among some in the industry, but they didn't know him, you know. And if you did know him, it was pretty hard to dislike him. Let's put it that way.
Yeah, no, infosec certainly has, you know, a lot of real interesting characters, and it's sad when we lose them, because there's so much uniqueness in some of the people who've done well in this scene or made a big impression.
So, yeah, I think I've only ever met him in passing, you know, as you say, at a conference in the hallway or something, so I didn't really know the guy very well.
But yeah, it's always sad when we lose, you know,
such influential people.
Yeah, I mean, coming home from that dinner,
I just have in that memory now,
we were driving down the strip in Vegas
and we were talking about, he'd been,
and this is another thing about him,
he was in a beef with teenagers, an online beef with teenagers.
Like, I'm like, why do you care?
He's like, because they were coming after me, man, you know?
Like, he would just engage in that stuff.
But I think, you know, this was in the aftermath of his beef with Lizard Squad, where one of them had been arrested because they posted a geotagged photo of their girlfriend's boobs to Twitter or whatever. And so we're talking about that, and him and I, we actually got the giggles talking about how hard it was for him to stay on the run from the FBI for two years, versus this guy just tweeting his GPS coordinates. And we were just absolutely in hysterics, and that's the way I'm going to choose to remember Kevin. Vale, Kevin Mitnick.
Anyway, Adam, that's actually it for the week's news. Thank you so much for joining me, and we'll do it all again next week.
Yeah, thanks very much, Pat.
I will talk to you then.
That was Adam Boileau there with a check of the week's security news.
And a quick note before we get into this week's sponsor interview,
if you are headed to Vegas this year for Black Hat and DEF CON,
well, Black Hat, if you want to get a Risky Biz sticker,
the team at Airlock Digital actually had some printed up and they're giving them out at their stand, which is
a very nice thing for them to do. So yeah, if you want one, you can get one there and big thanks to
them for doing that and they are repping Australia in force at Black Hat this year. It is time for
this week's sponsor interview now with Travis McPeak from Resourcely. Resourcely is a company that will help you to generate Terraform for the major cloud services in a way that is not insane.
You're going to hear in this interview how and why things can go wrong with cloud provisioning.
But we started off by talking about how the relationship between security teams and developers has changed with the move to DevOps.
Here's Travis.
Yeah, so traditionally a security team in a waterfall model is this sign-off function.
So it's like you have developers, they create something, and then at some point QA is going
to pick it up. They're going to say, are there bugs, fix things. And then at some point security
is like the sign-off function. Is this release good to go from a security standpoint? Now,
most companies have moved to a continuous delivery model, and it's all cloud, and it's multiple deployments per day.
So what does security do?
And that's really a decision point.
Some security teams have said, what we do is we are going to do a whole bunch of scanning and other kind of risk assessments.
We'll do periodic threat modeling.
But what we're going to do is a DevOps process.
We're going to integrate with developers and give them JIRA tickets where they're used to living. These JIRA tickets are
going to have vulnerabilities. But the issue is this really creates an adversarial situation
between developers and security. Developers are trying to get their job done. They're trying to
ship code. Security's flinging a bunch of crap over the wall and saying like, oh, here's a bunch
of JIRA tickets for you. Developers, they don't have the context to know like whether these things are valuable. There's too much noise
in here. And security is not really adding a lot of value aside from here's vulnerabilities that
you might have. So, but hang on, isn't that just, isn't that just situation normal, right? Because
under the waterfall model, it's, you know, security is a roadblock, you know, that you
shall not pass until I'm satisfied that this is cool. And now it's like, okay, you're allowed to go through, but I'm just going to throw tickets
at you all day.
Like, it's just an evolution of the adversarial relationship, based on what
you're saying.
Yeah.
You've basically taken your big waterfall sign-off and then made it into a bunch of little
small waterfall sign-offs, which is not, in my opinion, the right way to go.
What you want to do instead, the security team should really take this complex topic, which is what do we need to do to be secure
within reason for the business to be compliant? Take that complex knowledge and break it down
into specific actions that developers should take. So developers shouldn't need to know about
cross-site scripting and all the different types of it, SQL injection, how those things work.
What they should do is they should engage
with some product that the security team has made for them
that makes it like really hard
to introduce these vulnerabilities.
Or if there's some action that they need to take,
it should be very specific
without any kind of security knowledge that they need.
Like here's an issue,
here's why we think it's important to the business
and here's a specific action that we'd like you to take.
I mean, one of the issues here though,
is that to really get to a wonderful place with this, you kind of need
those unicorns who are security people understand dev and dev people who understand security. And
for a long time, you know, people have been trying to, um, uh, you know, people from both
camps have been trying to develop skills from the other side, but you know, the success there
is limited. I mean, there are people who do both,
but they are unicorns and they charge accordingly.
Right.
Yeah, I mean, my opinion is maybe spicy,
but security people that are working with developers
in the product need to understand software engineering.
There's not any room for this.
I don't think that's a controversial opinion, but yeah.
Well, it's not to traditional security folks.
Lots of people, you know, don't have the empathy for what a developer goes through and what their skill set is.
You know, this is where, like, this ivory tower flicking-stuff-over-the-wall comes in.
Well, I think that that perception of, like, you know, the disdain from both sides, where the security people think that the developers are idiots, and the developers think that the security people, you know, act superior.
Right.
Yeah.
I think this comes from a lack of empathy from both sides.
So if you're a security person and you've been a developer, you know what it is they
go through, how much stuff they have on their plate, then you can break it down and you
can say, what is the simple thing that I can get this person to do?
How can I give them context that's in my head in a way that's simple and they understand
what it is?
It's not that they're idiots.
It's that they're busy and they have all kinds of stuff on their plate that's not handling security stuff.
Now, so much of the conversation about how to make these problems better has focused on stuff like, you know, static analysis of code, right?
So I know some people who run AppSec programs who had a lot of success.
Like they came into places that didn't really have appsec programs and things were just a mess like i'm talking big companies with multiple development
teams and i think what they did was smart and i've i know of a few people who've done this which is
that they built developer infrastructure for the developers that had a lot of nice security stuff
baked in where which would just sort of encourage the developers, like the
developers just wanted to use it because it was better than the stuff that they built themselves.
It always reminded me, I saw a TED talk like 10 years ago about someone in Europe who built a
foie gras farm where the geese, there were no fences, like the geese flocked to this place
because they planted all of the right plants that the geese liked, and they would just fly in and get fat eating all of this delicious, delicious vegetation, right?
So a similar sort of approach. But I mean, your business is not so much about the application
code security, right? Like that's not really what it's about. This is about the next part of that,
which has kind of been neglected a little bit in terms of it being an integrated thing,
which is the sort of infrastructure side. Right. Yeah. And I think that that's a really
powerful pattern. If you're a security team, we all talk about how important partnership is,
but what does that actually mean? If you're just flicking stuff over the wall and you're saying,
here's a bunch of risk, you go and handle it. Here's every single JIRA ticket for all the
vulnerabilities we have. You haven't added any value. What you want to do instead is say, here's the business context I have. I understand that you're not
going to fix everything. These are specific things based on what you're working on,
our risk profile that we recommend. And by the way, here's a dead simple way to do it.
I think the dead simpler, the better. So if you're getting a developer,
they've already introduced some vulnerability, whether it's an AppSec vulnerability or an
infrastructure vulnerability, it's going to be really expensive to go and fix it at that point.
You have to go and make code changes. You need to test it out. You need to make sure that stuff
isn't breaking. So this is where, you know, in the industry, we talk about shift left all the time.
Yeah. I think the further you can shift it left, the better, but what's even better than that,
if you don't want to... But I guess what I was getting at is, we've been, excuse me, we've been
really focused on shifting left when it comes to the application code, not so much on the infrastructure side.
That's right.
Yeah, I think shift left from the infrastructure side today means we're going to integrate into your CI process and tell you about a whole bunch of issues there.
But a step further than that is, like you said, the foie gras farm.
So we actually are going to lure developers to come and use something that's easier.
It's like when you want your dog to take medicine.
You don't just give the medicine to the dog. They're going to avoid it. You put it into
the peanut butter and then the dog wants the peanut butter and they get the medicine. That's
really the best way to go is developers don't even know that they're getting security. They're just
like, Oh, I need to do this infrastructure thing. It's complex. I can go through all of these steps
if I want to, or I can use this nice, easy thing where security is in there, but I don't have to
think about it.
Yeah. So it is a case of making something that's just,
you know, makes provisioning easier, right?
Yeah, exactly. Yeah. The other interesting thing to me is who owns risk? So the traditional answer
to that question is the security team owns the risk. This is why they're the sign-off, right?
If you get a breach, whose head's going to roll? By default, it's the CISO's head.
So in a farmed out model where really developers are responsible for their full application,
then that shifts a little bit.
Developers are actually responsible for the security of their application, but we don't want to
train every developer to be a security expert.
They're not going to be able to get their job done.
They're going to be so slowed down with training and all this other overhead that they can't
get done what they need to get done.
So the only sustainable way to approach this is to just make those things as much baked in as possible.
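The "baked in" idea Travis describes can be sketched as a thin factory that developers call instead of touching the raw resource knobs. This is purely an illustrative sketch, not Resourcely's actual implementation; the resource shape, field names, and defaults here are hypothetical.

```typescript
// Hypothetical "paved road" sketch: developers ask for a bucket by name,
// and the secure defaults (encryption on, public access blocked) are baked
// in. The risky knobs aren't parameters, so they can't be mis-set.

interface BucketConfig {
  name: string;
  encryption: "AES256"; // always on; not a parameter
  publicAccess: false;  // always blocked; not a parameter
  versioning: boolean;  // a safe knob developers may choose
}

function createBucket(name: string, opts?: { versioning?: boolean }): BucketConfig {
  return {
    name,
    encryption: "AES256",
    publicAccess: false,
    versioning: opts?.versioning ?? true, // safe default, still overridable
  };
}

// The developer just provisions a bucket; the guardrails come along for free.
const logs = createBucket("app-logs");
console.log(logs.encryption, logs.publicAccess);
```

The point of the design is the peanut-butter trick from the conversation: the easy path and the secure path are the same function call.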
This really is the same approach, I think, as a lot of the companies that have made the static analysis stuff, right?
Like, and again, like those AppSec programs
that I was talking about where, you know,
if you want people to do the right thing,
you just make doing the right thing a lot easier,
you know, than doing the wrong thing.
You just make it simpler.
That's right. Yeah, I think my favorite work in this area is companies that have, instead of running some static analysis that says here's a big pile of cross-site scripting, just invested in some framework that makes cross-site scripting basically impossible.

Well, I mean, yeah, but you're still going to need the static analysis stuff, right?
For other stuff.

The most powerful use of static analysis in that case is
just detecting if people are using the right pattern or not. So it's like, oh, you can use
this React framework and it bakes out all of the cross-site scripting just inherently and how it
works. And then now your static analysis is, are you using this good pattern? And anytime it's not,
it's not like, go fix all of these things. It's like, here's some documentation on an easier way
to do this. It's going to make your life a lot easier.

Yeah. So look, you know, you're a new company, right? Like you are a startup. Where are you seeing interest for this stuff? I'd imagine it's at places with, you know, large development teams and rapid churn in terms of versioning and, you know, whatever. Versioning, do you even call it versioning in DevOps? I don't even know.
You can call it whatever you want. DevOps is one of those things; DevOps is in your mind.
Yes, that's right. But yeah. So who's interested in this, right? Because I'm just wondering where
you would get the early adoption. Yeah. So here's a kind of a traditional
arc. You have a small company and you have one poor security person that's responsible for a
whole heap of stuff. What they'll do is they'll shore up the basics. It's like, here's how we're going to do identity. Here's some kind
of password management system, things like that. And then at some point, you also may have the
DevOps person. And that poor DevOps person helps developers with all the infrastructure that they
need. So we have one of these at our company. Preston is our poor DevOps soul, and he can
handle everything. Companies can get by with
that until a certain point. And then at some point it's like, okay, that poor person is overwhelmed.
So they either become a team or you start investing in patterns that make it so developers can kind
of self-service this stuff. What a lot of companies will do at that point is they'll
have Terraform modules. So cloud is like 90% the same for all organizations, with 10% differences. The 10% differences make a module approach really hard. So basically companies will start building patterns for infrastructure, and they put these into modules, and those modules have opinions baked in. So it's like, this is how we do naming; the naming is baked into the module. What then ends up happening is you kind of get this explosion problem, where it's like, oh, well, I need what's in that module, but I need it a little bit different.

Yeah. So you wind up with another module.

Yeah. Now we have two modules and I have to move.

Yeah. And before you know it, you've got 6,000 of them, right?

Exactly. Right. And I've talked to people that are so frustrated with this approach. They're like, should we even do Terraform anymore? Maybe we just go back to ClickOps and just rely on scanning to go and tell us when we have problems.
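The module-explosion problem being described is, at bottom, copies diverging: each team forks the shared module to express its "10% difference". One common alternative is to keep a single base pattern and let teams supply a small delta, with organization-level guardrails that the delta cannot touch. The sketch below is a hypothetical illustration of that idea in TypeScript; the setting names and the locked-key mechanism are invented for the example, not anything Terraform or Resourcely actually ships.

```typescript
// Hypothetical sketch: ONE shared base pattern plus per-team overrides,
// instead of a forked module copy per team.

type Settings = Record<string, string | number | boolean>;

const basePattern: Settings = {
  naming: "org-{team}-{env}", // the opinion is baked in once, here
  encryption: true,
  logging: true,
  retentionDays: 30,
};

// Merge a team's small delta over the shared base. Keys the organization
// wants enforced are "locked" and silently ignored if a delta tries to
// change them.
function resolve(delta: Settings, locked: string[] = ["encryption", "logging"]): Settings {
  const out: Settings = { ...basePattern };
  for (const [key, value] of Object.entries(delta)) {
    if (!locked.includes(key)) out[key] = value;
  }
  return out;
}

const teamA = resolve({ retentionDays: 90 });    // the team's 10% difference
const teamB = resolve({ encryption: false });    // kept on: it's a locked guardrail
console.log(teamA.retentionDays, teamB.encryption);
```

With something in this shape there is still only one pattern to maintain, and the 6,000-module blowup never happens, because a difference is data rather than a fork.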
Yeah, no, that doesn't sound like a good option either.
Well, and I guess this is how Resourcely was made, right?
Yeah, exactly.
Yeah, so we want to solve those problems with modules.
What we want to do like more than that though
is think about what it takes to get to that point.
So you have to hire enough cloud expertise
that they understand like what are the common patterns?
What kind of guardrails should we even have in there?
Like, what are the things that we care about and want to enforce?
It doesn't make sense for every single company to have to have that.
And then they're reinventing the same cloud patterns with small differences that other companies have.
So, yeah, that's why Resourcely was made.
We just don't think that every company should have to reimplement basically the exact same wheel with tiny little differences in there.
All right.
Travis McPeak, thanks a lot for joining us
to talk about all of that really interesting stuff,
and we'll chat to you again soon.
Thank you.
That was Travis McPeak from Resourcely there.
Big thanks to him for that.
And yeah, apparently a bunch of you out there
really dig Resourcely because Travis had a huge response
from his previous appearance in one of our Snake Oilers shows.
So if you want some more information, you can find them at resourcely.io. So that's R-E-S-O-U-R-C-E-L-Y.io.
And that is it for this week's show. I do hope you enjoyed it. I'll be back tomorrow with another
edition of the Seriously Risky Business podcast in the Risky Business News RSS feed. But until then,
I've been Patrick Gray.
Thanks for listening.