Risky Business - Risky Business #751 -- Snowflake, operation Endgame and Microsoft's looming FTC problem
Episode Date: June 5, 2024

On this week's show Patrick Gray and Mark Piper discuss the week's security news, including:

What on earth happened at Snowflake?
A look at operation Endgame ...
Check Point's hilarious adventures with dot dot slash
Report says the FTC is looking at Microsoft's security product bundling
More ransomware hits Russia
Much, much more

404 Media co-founder Joseph Cox is this week's feature guest. He joins us to talk about his new book, Dark Wire, which is all about the FBI's Anom sting.

This week's show is brought to you by Resourcely. If your Terraform is a mess or your CSPM dashboards are lighting up with insane and stupid things, you should check out Resourcely. Its founder and CEO Travis McPeak will be along in this week's sponsor interview to talk about all things Terraform.

Show notes

The Snowflake breach and the need for mandatory MFA
Snowflake at centre of world's largest data breach | by Kevin Beaumont | Jun, 2024 | DoublePulsar
Cloud company Snowflake denies that reported breach originated with its products
'Operation Endgame' Hits Malware Delivery Platforms – Krebs on Security
Treasury Sanctions Creators of 911 S5 Proxy Botnet – Krebs on Security
TikTok warns of exploit aimed at 'high-profile accounts'
SEC clarifies intent of cybersecurity breach disclosure rules after initial filings | Cybersecurity Dive
SEC.gov | Disclosure of Cybersecurity Incidents Determined To Be Material and Other Cybersecurity Incidents
Nurses at Ascension hospital in Michigan raise alarms about safety following ransomware attack
London hospitals declare emergency following ransomware attack | Ars Technica
North Korea's 'Moonstone Sleet' using fake tank game, custom ransomware in attacks
OpenAI models used in nation-state influence campaigns, company says
National Vulnerability Database | NIST
More than 600,000 routers knocked out in October by Chalubo malware
Hackers steal $305M from DMM Bitcoin crypto exchange | TechCrunch
Germany's main opposition party hit by 'serious' cyberattack
Cyberattack disrupts operations of supermarkets across Russia
Rare earths miner targeted in cyber attack prior to removal of Chinese investors - ABC News
Check Point - Wrong Check Point (CVE-2024-24919)
Kevin Beaumont: "The latest Risky Business epis…" - Infosec Exchange
This Hacker Tool Extracts All the Data Collected by Windows' New Recall AI | WIRED
FTC-industry talks over possible Microsoft probe raised recent hacking incidents - Nextgov/FCW
Tim Schofield 🏴 🇬🇧 🇪🇺🗺: "@riskybusiness @metlstorm I d…" - Infosec Exchange
Dark Wire: The Incredible True Story of the Largest Sting Operation Ever: Cox, Joseph: 9781541702691: Amazon.com: Books
Distant Field Labs
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray. This week's show is
brought to you by Resourcely. And yeah, if your Terraform is a mess or your CSPM dashboards
are lighting up with insane and stupid things, you can check out Resourcely. Its founder
and CEO, Travis McPeak, will be along in this week's sponsor interview to talk about all
things Terraform. But we've got a few things to get through first.
Adam Boileau is chilling in Fiji right now, probably holding one of those cocktails with a little umbrella in it.
So Mark Piper is going to fill in for him this week to talk about the week's news.
And then we're going to hear from Joseph Cox, a co-founder of 404 Media and the author of a book that just came out today,
Dark Wire. His book chronicles the Anom operation. That's the one where the FBI wound up running a
crime phone network that carbon copied all messages sent over it into an FBI database.
The book is great and Joe will be along to talk through all of that in just a bit. But first up,
it is time for a look at the week's security news with Mark Piper, who's
filling in for Adam Boileau.
Pipes, thanks for joining us, mate.
It's been a while.
It's good to be here, Pat.
Now, look, let's kick off the news section with a discussion around this whole Snowflake
mess, right?
Because we saw some initial reports over the last week that Snowflake had been breached, and that's where all of this Ticketmaster stuff had come from. I think Santander Bank is another company that has experienced a data breach as a result of this event. So some vendor put up a blog post saying that, you know, there'd been an intrusion in Snowflake and that's how all of this data got out there. Turns out that's probably not what happened
here. Yeah, this is one of those interesting ones where, as the story unfolds, the truth will become more clear, right? And even now I don't think we have the full picture. But really what they're saying, through, let's call it muddled communications as this has unfolded over the last couple of days, is that it really looks like someone took an infostealer and went looking for Snowflake access.
And one of the great things about Infostealers is not only can you sort of broadly obtain
a whole bunch of credentials
and then evaluate them afterwards,
but you can also target it down at the same time.
So that seems to be what's happened here. A group's gone out, they've mined up through infostealers a bunch of credentials to Snowflake data, and they've hit it.
The problem here is really just the confusion and the communication. Snowflake saying they weren't affected; it turns out they might have been somewhat affected, but it might have been a demo account, and therefore nothing was taken, but their customers are affected.
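The broad-harvest-then-target pattern Pipes describes can be sketched in a few lines. This is purely illustrative: the record shape, field names and domain check are assumptions for the example, not how any particular stealer or log market actually works.

```python
# Hypothetical sketch of infostealer-log triage: credentials are harvested
# broadly, then filtered down to one SaaS target for evaluation.
from dataclasses import dataclass


@dataclass
class StolenCredential:
    url: str        # the login page the stealer captured this credential from
    username: str
    password: str


def filter_for_target(records: list[StolenCredential],
                      target_domain: str) -> list[StolenCredential]:
    """Keep only records whose captured URL points at the target service."""
    return [r for r in records if target_domain in r.url.lower()]


# Broad haul first, targeted evaluation afterwards.
haul = [
    StolenCredential("https://acme.snowflakecomputing.com/console/login",
                     "analyst1", "hunter2"),
    StolenCredential("https://mail.example.com/owa", "bob", "pass123"),
]
snowflake_hits = filter_for_target(haul, "snowflakecomputing.com")
```

The point of the sketch is that the same pile of records supports both modes: evaluate everything later, or cut straight to one provider's domain.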
Look, I think I've been able to piece this together, an approximation of what's happened here,
which is: Snowflake allows people to spin up demo accounts. People were spinning them up, putting company data in there, and they weren't using MFA. Catalin actually quotes me in his write-up of this in Risky Biz News, where I said, yeah, allowing your customers to do dumb stuff is actually going to hurt your brand if something like this happens. So it looks like, yeah, demo accounts with no MFA were getting logged into because people had creds from infostealers. There was a vendor called Hudson Rock
that wrote this up as, well, Snowflake got breached because they found a Snowflake employee
account had been breached and they had this whole theory about how that was being used to generate
tokens. It looks like that blog post that they wrote is now gone and they said they got a legal threat
from Snowflake. And I don't think this is one of those usual examples where there was a threat
just over nothing. I think probably Hudson Rock got it wrong and that the Snowflake employee
account was just another one of these demo accounts without MFA. So I think that's really what's happened here is Snowflake allowing people to spin up powerful
demo accounts and stick their company's data into those, I don't know, for want of a better term,
buckets has turned around and bitten them on the bum. Yeah. And I think there's a couple of
key takeaways here, right? That is universal. The first is attackers will always follow the data, right?
Whether it was mail spools in the 90s, through to databases and web apps in the early 2000s, through to enterprise networks: wherever you put your data, the attackers will go.
And the other is, yes,
having MFA as a requirement, super important.
And I think for anyone who's running
an as-a-service platform,
it's time to start really embracing the shared responsibility, right?
Defaults that are not going to get your customers impacted
are a good place to start, including MFA.
And I know there are several applications now that are out there
that will only let you use production data if you've enabled MFA.
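That "no production data without MFA" default can be sketched as a simple policy gate. This is a minimal sketch with invented names, not how Snowflake or any named platform implements it.

```python
# Illustrative sketch of the default being described: an as-a-service
# platform refuses to accept production/company data on any account that
# hasn't enrolled in MFA, while demo tinkering stays frictionless.
from dataclasses import dataclass


@dataclass
class Account:
    name: str
    mfa_enrolled: bool


class MfaRequiredError(Exception):
    """Raised when a data load is blocked pending MFA enrolment."""


def authorize_data_load(account: Account, is_production_data: bool) -> bool:
    """Gate real data behind MFA; allow everything else."""
    if is_production_data and not account.mfa_enrolled:
        raise MfaRequiredError(
            f"{account.name}: enroll in MFA before loading production data"
        )
    return True
```

The design point is that the platform, not the customer, holds the safe default: the demo path stays open, but the moment real data shows up, the gate closes.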
Well, that's what I was going to say, right?
Like, that just seems like a no-brainer to me,
is as soon as you become aware that people are actually
shoving company data into these demo accounts.
And I don't think it's actually about shared responsibility, Mark.
I think it's self-interest, right?
You should do this out of self-interest,
because look at the week Snowflake has had, right?
Look at the headlines screaming that they've been breached,
because customers, and they might not have even been customers.
That's the crazy thing because people were using demo accounts to handle corporate data.
So I don't think you even need to make this an argument
about altruism or shared responsibility.
I think there's a decent case that it's in your best interests
to not allow people to do dumb stuff with their data on your
platform. That makes sense to me. Now, another big bit of news that broke over the last week
is Operation Endgame. And this is a big combined law enforcement operation that has shut down a
bunch of malware dropper and loader services like IcedID, SmokeLoader and TrickBot.
We've seen a bunch of people doxed as well. Brian Krebs shared his thoughts with me about this. And
you know, as you'll hear the gist of what he's thinking is what I'm thinking as well, which is
that law enforcement have decided that yeah, doxing and disruption is a good way to have an impact
here when you can't get people in cuffs.
Operation Endgame is, you know, it's another acknowledgement that, okay, yeah,
we know who these guys are. We've known for a long time, but as long as they remain in Russia or Belarus or Cyprus or wherever, they probably don't have much to fear from Western law
enforcement. So what do you do? Well, one thing you can do is try to disrupt as many, you know,
criming as a service operations as you can. And you really want to focus on services that touch
a broad range of cybercrime communities and loaders and droppers are a really great place to
start. The feds are trying to sow confusion and doubt in the minds of their customers about
whether these crimeware products and services are safe to use and, you know,
not simply operated or backdoored by law enforcement.
And that looks like the approach that they took here.
Pipes, what do you think about all this?
Yeah, I think it was really interesting.
And I agree with what yourself and Brian have been saying around like,
you know, disruption appears to be working, right?
And I think as soon as the first precedent was set down earlier this year
with some of the ransomware takedowns,
it really sort of has shown that it's being effective.
And I think, for example, you and Adam have previously talked
on the podcast around proceeds of crime.
And if you look at some of the large figures,
like one of the guys taken down has $69 million in cryptocurrency. They make it clear he's going to have a hard time cashing that out. Just having that high figure doesn't mean that you had that money, right? And the other parallel I kind of thought of was, you know, gloves are off here, right? They have a precedent, they have a technique that works. And this reminded me of the Silk Road takedown and the 2013-2014 drug market takedowns. As soon as the first precedent was set, law enforcement were able to repeat that pattern across a number of different operations, over and over and over again, to sort of disrupt it. And it kind of has that same energy to it, you know? They now know they can get it through the courts, they can get the warrants, they can work together as teams, they know how to sort of make an impact through this sort of takedown. And it's just wonderful to see that they've finally caught up.
Yeah, I agree.
Although I do think the Silk Road template only works so much for these sorts of crimes.
I think what's really good here, though, is to see the doxing.
You know, Silk Road was interesting in that the admin was actually based in the United States, which was just so incredibly dumb,
which is why he's in prison. But yeah, I mean, I think the doxing component of this is interesting.
I mean, what comes next are probably travel bans and sanctions. And speaking of, another report
from Brian Krebs here, a few people have written this up. The US Treasury Department has sanctioned
three Chinese nationals for operating 911 S5, which was some sort of proxy service that I think
it was shut down. Yeah, it was shut down back in 2022, but now they've sanctioned these people,
right? So I think these sorts of non-law-enforcement actions, you know, it's kind of the release-the-hounds doctrine that we've been trying to will into existence for the last six or seven years here on the show.
Yeah, and I think we're going to see that it's causing frustration, right? Because part of it is how all of these operate in ecosystems. And if you have a look at 911 S5 as an example, they provided last-mile sort of geographic location through residential proxy networks. So, you know, I think when you disrupt some of these operations and put down the sanctions, you're actually also disrupting the ecosystems and the trust value of them as well.
Now switching gears.
And it looks like,
and the details are not particularly clear,
but it looks like there's some sort of 0day web bug, or at least was,
in TikTok where people were able to send a DM to an account. And if someone viewed that message,
it was triggering some sort of condition. I'm guessing maybe it was some sort of token theft
or something. But it was triggering some sort of condition whereby someone could then
steal the account and CNN lost their account and whatnot.
What I find interesting about this is it's just the sort of bug that you would never
find on Instagram these days, right?
And it just reminds us all that TikTok is kind of new.
Yeah, new and complicated, right?
I mean, one of the things that surprised me as I'm not a TikTok user, but taking a look
into this, for example, was I wanted to try to understand what the attack surface of direct messages is, and it turns out it's reasonably large. Like, there's a lot of functionality that has launched on TikTok in the last 24 months, and all of that functionality comes with risk of bugs, right? The messages can be chat, but also any video that you can post on TikTok, you can generally also ship into a DM, right?
So I don't know what it takes on the client side to do that or how much they're doing on the website,
whether it's Wasm or whatever's going on. But clearly, as soon as you start, you know,
on the fly accepting voice and video data and mixing and integrating it, then you've got a
bunch of complexity. Yeah. And it's not really clear whether or not this is like a web bug
on all versions of the app or whether this is some desktop thing.
It's really not clear, but interesting nonetheless.
Now, I have been wondering when this would happen.
And it's something I've mentioned on the show a few times,
which is when is the SEC gonna issue some more guidance
around when people should be disclosing breaches
as these Item 1.05 Form 8-Ks.
So this is when the SEC issued guidance saying,
if you determine that a cybersecurity incident is material,
you must report it to us within four days, right?
And of course, what we saw is people reporting stuff and even saying in these disclosures, "we haven't currently determined that this is a material thing, but we're reporting it out of an abundance of caution". The SEC has come out now and said, don't do that, that's dumb. Only report it to us from the point
that you determine that it's material.
Now, okay, on one hand, that's great
that the SEC is giving this guidance,
but I think the core issue here
is that it is very difficult to really know
when something is gonna turn out to be material
in the eyes of an investor.
They've tried to issue some guidance there as well. I've
linked through to the statement on the SEC website. They're saying, you know, you've got to consider
things like regulatory impact and costs and all of this sort of stuff. But it's, I think the problem
with determining materiality when it comes to a cybersecurity event is it's a, how long is a piece
of string kind of issue, right? Yeah. I mean, my take on this was kind of twofold,
right? One was exactly that. And this was one of the arguments that's existed for the longest
period of time around mandatory disclosure, right? Which is, can the system be flooded and overwhelmed with stuff that doesn't matter? Can that be done early and deliberately? Sort of, you know, "well, we reported it early, we got through the headline phase early". But then, on the other side of it as well, I don't think these things happen in isolation, right? So the abundance of caution from CISOs wanting to report early could also be related to the amount of SEC threats and actions going against CISOs at the moment, right? There have been numerous times where they've called out that, like, you know, as a CISO you have a responsibility, a governance responsibility, for the security of your environment. And while we've got cases like SolarWinds going through the courts, I think there's a lot of CISOs that kind of look at it and go, I'd rather be early and open and honest than be accused of hiding something, even if it's inadvertent, right? So I think this will give them some cover, though. Right? Yeah. So, I think we'll see.
Now, turning our attention to the attack
against Ascension Healthcare in the United States,
John Greig has a write-up here
where the nursing union or the nurses union in Michigan
have just come out and said,
look, we need management to do a few things
differently. We need to get updates on when this is going to be resolved. We need more headcount.
We need to be listened to basically. But what they're describing is a hospital in disarray
and, you know, possibly patients being put at risk.
Yeah. I mean, it sounds like they're not getting any guidance on recovery of systems. They talk about using Google Docs and text message chains for the treatment care plans of patients inside the hospital. And their list of requests or demands was super reasonable. Like, can you give us a regular update
on if systems are recovering?
Can you give us some insights into what we can do
to help make sure things are going on?
Can you alleviate some of the non-emergency procedures?
Like, can we stop accepting new patients
while we're managing under these extreme conditions?
But electives is where the money is, right? So you've
got management who are still trying to run business as normal and the nurses union saying,
you're nuts. This isn't business as usual. Yeah. And so I don't know. To me, it feels like,
I mean, if you work long enough with anyone in IR, right, they'll tell you the number one thing
is actually communications, both internal and external, right? And rumors are born in a
vacuum as well, right? So I'd hate to imagine what it feels like on the floor there,
like, you know, what they are.
I mean, just even from a job security point of view,
like, you know, all of that sort of stuff is just,
they're asking for some assurance and confidence
that things are going to get better,
and I think they deserve to get the answer.
Yeah, I agree.
And meanwhile, Dan Goodin over at Ars Technica
is reporting that London hospitals have declared an emergency because of a ransomware attack that has impacted a large London-based medical testing and diagnostics provider, like a big pathology lab. So they're having to delay or cancel elective surgeries as well. And the whole thing is a disaster. So this absolute head kicking that
the healthcare sector is getting is just continuing. Yeah and one of the things that's
night and day between the previous story and this one is that the hospital seems to be managing it
right. They've cancelled non-emergency surgeries that require blood analysis and transfusions.
They've kind of looked at their load and what they can and can't handle
and they've taken action accordingly.
Now, look, this next bit of news,
I think is, look, in isolation,
I'm not terrified.
But if we keep seeing more stories like this,
you know, this could be the start
of something really bad.
So it's another one from John Greig
at The Record, and it's about an APT crew called Moonstone Sleet from North Korea, which is doing espionage attacks
against like software companies and defense firms. Fair enough, they're doing their collection
operations, that's fine. The problem is when they're done doing the collection, they're now
starting to deploy ransomware, and this isn't like fake ransomware designed to act as a wiper and destroy evidence.
We've seen APT crews do that a few times.
This is financially motivated.
Now, North Korea makes hundreds of millions of dollars out of crypto theft.
If we start seeing them get into ransomware, I mean, I don't even know what we could do about that.
That would be just disastrous.
Yeah. What are your thoughts here? I had the exact same sort of thought around disaster,
right? If they decide to get serious about it. One of the things that we know about North Korea is they're really good at evolving and learning over time. They're really good at adapting
and changing. And as to your point, they're really motivated to do so because they're generating revenue for the country, right? And one of the things that I did wonder is why now, why this new group now? Like, what's with the evolution? And I can't help but wonder if it's a coincidence, or if it's associated with the fact that there has been a disruption to some of the ransomware environments out there, right? And whether or not they think there's an
opportunity here somehow.
Well, I wondered, I mean, I wondered something different, which is whether or not the shake
up in the crypto markets, whether that's maybe affecting some of that stuff.
Yeah, I don't know.
But either way, like, yeah, we don't want North Korea getting in on ransomware.
That would be very bad.
And it's always concerning when they're changing tactics.
Yeah, yeah, 100%.
What else have we got here?
James Reddick over at The Record has a write-up
of OpenAI's first ever sort of transparency report.
This is a type of report we see come from all of the major social networks.
It's good to see that OpenAI is doing this.
They have disclosed that their models have been used in nation-state influence campaigns,
but they were able to sort of cancel or evict, I guess, those user accounts or whatever.
But I guess, yeah, the interesting thing is here they were able to analyze the impact of the language that was generated using their models
and have determined that at this point,
it's not really a big deal.
I don't know that we're going to see most of this.
When we look at the way LLMs are likely to be used
in sort of more automated campaigns,
not just language generation,
I don't know that it's going to be OpenAI
and the large providers that are going to be
that important in all of this.
But it is good to see them put out this report.
Yeah, I mean, it's really good to see them put out this report.
And it very much feels like an extension of the work they did in joint partnership with
Microsoft in February, where they kind of looked into that. And it has a lot of overlap
with Google's, you know, Threat Analysis Group reporting and Meta's reporting on the disinformation
campaigns as well, right?
They're basically seeing the groups now try to leverage OpenAI's tools, which is good catching, good work. My take on this was, I think they have to. It's an election year. Now, I know Alex Stamos has the view that he's more worried about private entities doing this sort of work for influence campaigns during the election. But being able to be seen as having, one, the capability to detect it; two, that they are actually leveraging that capability to detect it; and three, that they are prepared in an election year, I think is a very strong policy message, more than there's actual impact going on on the
offensive side here from the disinformation providers. But the other flip side of it as well: I really loved the fact that in this report they discussed the defensive trends, right? Like, not only are they seeing, you know, the use of the LLM capabilities for disinfo campaigns, but they're also seeing it on the defensive side, to help identify, disrupt and detect these campaigns as well. So to see that sort of dual use of the capability was really nice as well. But I think it's really just a case of, this to me felt like a "they have to". And let's also not forget that the majority of their superalignment team just left as well, right? So they're under the microscope on that stuff. To your
point around, you know, where does it go from here? I mean, I'm with you. I think local LLMs now, like the likes of Llama 3 and stuff, we're starting to see researchers sort of neuter some of the capabilities that make them safe, the sort of sense of, you know, "no, we can't help you with that query". Like, we're seeing those in the open more and more now, with the likes of abliterated models. So I think that's where the shift will go, because those models are pretty powerful and can be run on commodity hardware, or some nice stolen cloud accounts, with very little effort. So that'll be interesting to see where it goes
from here, but yeah, some really nice work from OpenAI. Yeah. I mean, we were talking before we
got recording as well, because you've been doing a little bit of work on LLMs and doing some analysis there, and we'll plug that at the end of the recording. But, you know, really, the general models are a little bit of a distraction, I feel, and you agree with me, from where the interesting stuff is happening with this type of AI. And, you know, it's often going to be more specific models, models that only understand one context or another, that are really going to be
the ones that get things done both positively and negatively. I will say too, I had a very
interesting chat with someone from Kroll Cyber in a sponsor interview recently. And I thought the
funniest thing that came out of that is you can't really do input sanitization for an LLM. So what you have to do is output sanitization. And what better way to sanitize the output from a large language model than to use a large language model? Which, I think, it's just, we have to rethink certain things,
don't we? Certain security approaches. Oh, we do. And personally, I'm looking forward to
understanding where that adversarial model filtering goes, because I don't know how
sustainable it is. Yet one more from John Greig over at The Record. Elephant stamp for you, buddy.
He's got this great story here, which, initially, you know, you look at the headline and you're like, ah, whatever. And the headline is "More than 600,000 routers knocked out in October by Chalubo malware", and you think, okay, some bit of destructive malware went out there. And then you read that all of the routers belonged to a single ASN, and it looks like someone took out all of the ActionTec routers belonging to the customers of a single ISP. Someone really doesn't like that ISP. So someone managed to push a bad update to 600,000 customers of a single ISP. And when I was reading this, I was just thinking,
and this happened last year, but I was thinking, can you imagine the day they had at that ISP?
Yeah, look, the write-up from Lumen on this was really good.
They call it The Pumpkin Eclipse.
I'm with you.
It blew my mind, right?
Like, okay, sort of 49% of a single ASN's routers have been taken out.
It's kind of curious, and it wasn't really reported on at the time, but I actually know they were bricked and, you know, effectively taken out of service.
They go to great lengths in their analysis to say, we don't think this is nation-state, it doesn't match the, you know, various other actors we've seen running residential proxy networks, and we're not really going to go into attribution. But it's like, yeah, tell me you're a disgruntled employee without telling me you're a disgruntled employee, or customer, right? Like, it's just phenomenal.
What else have we got here? Daryna Antoniuk over at The Record is reporting that Germany's opposition party has been hit with a serious cyberattack. You know, we're just seeing more and more, well, I mean, I'm guessing this is Russia, we're just seeing more and more Russia, you know, starting to target European countries that are allied with Ukraine.
You know, everything from destructive attacks to, you know,
what looks like hacktivist campaigns that have the tacit approval of the state.
And, you know, this is all the sort of stuff we expected to happen immediately after the invasion began in 2022 is starting to ramp up now.
So I wonder where we're going to be with this in six months.
Yeah, I mean, and as the various sort of countries go through their election cycles and they're having various different changes in government and all the rest of it, I expect we'll follow the patterns, right?
Follow which countries are having elections over the next 24 months and then see where either the central government or the main parties are getting compromised as that goes around.
Yeah. Now, it was either last week or the week before that Adam and I spoke about the attack
against the Russian logistics firm CDEK. There's been another large ransomware attack
impacting a supermarket chain. It's a discount retail chain in Russia that has a thousand stores.
Their payments were down. People had to pay cash, customers very annoyed, etc., etc.
It looks like this is an ongoing incident.
I think that the more ransomware starts to impact Russia,
the better the chance that the Russian state
will actually start to want to take action against it
as a crime type, right?
So, yeah, I mean, I've got mixed feelings about this, Pipes.
Yeah, I'm the same, right? Like, I mean, I'd be interested to see where this goes and if the
pattern continues. I mean, this is number three, I think, in recent weeks.
So yeah, yeah. I don't know who's behind it. Yeah, who knows? Anyway, speaking of other stories
where the details are not entirely clear, we've got some news that broke yesterday here in Australia
that a rare earth mining company here in Australia was targeted
in some sort of attack.
And this is a company where the Australian government
has forced some Chinese investors to divest.
So we've got journalists and opposition politicians trying
to weave this narrative on top of this, where they're trying to connect the investment stuff
to this attack. But the attack happened in March and the forced divestment only happened recently
and the timeline's clear as mud and everybody's trying to spin this one a certain way.
And I just don't think the details are here
and that we can say that these things are connected yet.
No, I mean, I had to read through a couple of articles
on this particular incident.
And to me, it feels disconnected.
I mean, bottom line is ransomware's BAU
runs nine to five Monday to Friday.
Like, it just happens, right?
Was it ransomware?
I thought it was data theft, because it's BianLian,
which is a ransomware developer,
deployer and data extortion crew.
I don't know that the attack itself was a ransomware attack,
but the crew that did it is associated with ransomware campaigns.
And I think there has been prior hypothesis and evidence that,
you know,
you are more likely to be hit by an initial access broker if you're in the news anyway, right? Because they're really looking at what your capital is and all the rest of it. So it could just be as simple as they were on the front page of a newspaper somewhere around being forced to divest, and someone looked it up and said, well, they've got reasonably good capital, they'll probably pay a nice ransom. Yeah, well, I mean, we will be getting
more details on that one, because, yeah, it's sort of hitting the fan here as we speak. So, another one I wanted to check in on. Last week, Adam and I, we didn't really have details on this Check Point vulnerability that was being exploited in the wild. I think that's an APT campaign as well. Someone's reversed the patch. And, you know, we were like, oh, maybe it's username disclosure, some sort of info leak. It is so much worse than that, to the point of being quite funny, actually. Yeah, the team at watchTowr did some pretty good reverse engineering, good old-fashioned BinDiff, and walked through the patch difference and figured it out. And it's a 2024 directory traversal running in some kind of privileged context, where you can straight up steal the shadow file, get some hashes.
And I'm sure, like,
I'm not familiar with Check Point architecture, right?
But I'm fairly certain that there would be some other sort of files
with cryptographically hashed values
that may be easier to crack than SHA
on an appliance like that.
Well, I mean, the point is attackers were exploiting this
and using it to get access.
So it is practical, right?
But what's amazing is, yeah, it's like there's a directory from which you can escape.
So you go, you hit aCSHELL slash dot dot slash dot dot slash dot dot slash.
And it's just so wild in 2024 seeing exploit write-ups that include dot dot slash.
It's crazy.
Yeah.
It's like those logic bugs will never die.
Yeah.
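(For readers following along: the bug class being described can be sketched in a few lines of Python. This is a generic illustration of directory traversal, not Check Point's actual code; the web root path and function names are invented.)

```python
import os.path

WEB_ROOT = "/var/www/appliance"

def resolve_naive(requested: str) -> str:
    # Naive concatenation: the kind of handler that makes "../" exploitable.
    return WEB_ROOT + "/" + requested

def resolve_safe(requested: str):
    # Normalize first, then verify the result is still under the web root.
    full = os.path.normpath(os.path.join(WEB_ROOT, requested))
    if not full.startswith(WEB_ROOT + os.sep):
        return None  # traversal attempt rejected
    return full

# A payload in the spirit of the one discussed: enough "../" segments
# to climb out of the served directory and reach /etc/shadow.
payload = "../" * 6 + "etc/shadow"
print(os.path.normpath(resolve_naive(payload)))  # -> /etc/shadow
print(resolve_safe(payload))                     # -> None
print(resolve_safe("css/site.css"))              # -> /var/www/appliance/css/site.css
```

The whole fix is that one normalize-then-check step, which is why write-ups containing literal dot dot slash in 2024 are so striking.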
Absolutely nuts. And we made a little mistake last week, or I should say it wasn't so much a mistake, but we've got some clarity now. It turns out, to extract and exfiltrate data from Microsoft Recall, you don't need SYSTEM or admin-level privileges on a Windows machine. If you land on that box as a user, you can pull that data.
Yeah.
And someone went and proved it with a new tool called Total Recall, which will take
care of the process for you.
You don't need physical access to the laptop to make that happen.
You just need to be able to execute some code in the user context.
Yeah.
I mean, what are your thoughts on this whole recall thing?
Because I think it's nuts.
Every now and then we come across a period in compute where you can't believe the headlines as you read them.
And then you think it must be The Shovel or The Onion or something like that.
And that was my take on recall when I first saw it. And I don't know, it's just...
It was kind of mine as well. I was thinking it can't be that bad. People are getting a little bit hysterical.
And then the more you read into it... Like, I think one of the things that is still true, right, is that we're trying to throw a whole bunch of use cases out there and see what sticks in the public collective consciousness of "this is what I want", right? And memory with AI is one of those things. We've seen various pins that you can pin to your lapel, or, you know, glasses you can wear, that'll record everything you do. Like, somewhere in the heads of, you know, various tech executives overseas, usually in the US, there's this obsession with recall, this memory associated with AI, our personal assistant watching everything we do.
I think what we're discovering is that people don't necessarily actually want that, right? Because of all the reasons that everyone expects, right? Privacy.
I think, though, we've got to be careful, because we're in a bit of an echo chamber, right? Like, the type of people that we know, the type of people who listen to this podcast, they don't want it. But that doesn't mean the masses don't.
I'm yet to read a write-up from someone who says they do.
Fair enough, fair enough. Now, look, staying with Microsoft,
and this is some sort of late-breaking news over from NextGov.
It's an early report, but David DeMolfetta over at NextGov
is reporting that the FTC in the US
is holding meetings with tech industry executives
to gather information for a possible antitrust probe
of Microsoft's
licensing and bundling practices.
And one of the major reasons the FTC is doing this is because of some of these intrusions into Microsoft and data theft pertaining to the State Department and whatever.
So this is really interesting because the FTC seems to be thinking that, well, because
of the way Microsoft bundles this stuff, it discourages people from using best-of-breed security stuff. Apparently, and the word I got, because someone from DC sent this to me, is that, like, the DC water cooler is just going into overdrive on this. Let's see if it actually launches, but an FTC antitrust probe
into Microsoft over its bundling of its security software,
I mean, I'd love to see it.
Yeah, I think it's one of those ones, right,
where it's kind of like, yeah, we can rationalize, right?
We can sit there and go, you know what?
For Microsoft to provide that functionality
is actually X, Y, Z, more capability and compute
and resourcing and all the bits and pieces
that they got to do.
And so therefore you should charge more for it.
That works fine if you're not posting double-digit record profits every quarter, right?
And I think that's the bit the FTC is looking at, going: you're telling us it's not feasible, but at the same time, we feel like maybe it is actually feasible and reasonable to offer those capabilities. And that seems to be the line of interviewing that they're doing with procurers of Microsoft services at the moment. How's it impacted their business? How's it affected their view? What was their procurement experience? Did they feel like they had a choice as well? Like, can you move outside of the ecosystem if you want other capabilities? So it'll be interesting to see what comes of those interviews. It feels like early days, even though they kind of kicked it off back in, like, March or something. But those wheels move a little bit slow, especially when it comes from government influencing private. But yeah, it's going to be really interesting to see if anything comes of those interviews and that
discovery process. Yeah. Well, my joke recently, and it's only half a joke, is Microsoft's business
model is to sell you a foot gun and a bulletproof shoe. So this looks like the FTC thinking along
the same lines. The other thing I wanted to
mention quickly is that last week I was talking about how the Chinese call Aegis, the Aegis
missile defense system, Zeus's shield. You can tell I don't have a humanities education because
it turns out in Greek mythology, Zeus's shield was in fact called Aegis. So this wasn't the Chinese
coming up with some badass
name. It's just them knowing their Greek mythology better than me. And thanks to listener Tim
Schofield who pointed that one out. So we're going to wrap it up there Pipes, but of course,
you know, you used to work at Insomnia Security with Adam Boileau, which was then acquired by
CCX, CyberCX. You're now out of there and you're doing sort of a bunch of analysis work.
Yeah, look, at Distant Field Labs, our goal is to provide what we call decision intelligence to executives. And we do that through sort of reporting, targeted research, and we're opening up some work now in what we call our open intelligence hub, The Observatory. So our goal is to make sure that decision makers making critical decisions around emerging technologies, such as the AI that we've discussed today, feel like they have someone independent and non-biased in their camp to sort of help with that decision process.
And you've just done a big report on AI.
Is that publicly available or is that one that people have to buy? Yeah, we've done a couple
of reports that are available at distantfield.space. And we've covered two. One is sort of that build
versus buy on narrow AI, if you're kind of looking at how you can bring it into your processes. And
the one we've just released this week is very much focused on generative AI in the modern workplace.
So, you know, will it actually meet the needs to save you money and time with your stuff?
Or is it just a waste of time?
All right. Mark Piper, Distant Field Labs, thanks so much for joining us and filling in for Adam Boileau this week.
It's been a lot of fun to chat to you again.
It's been a pleasure, Pat.
That was Mark Piper with a check of the week's security news.
Big thanks to him for that.
We're going to check in now with Joseph Cox.
Joe is a journalist and co-founder of 404 Media.
And a lot of you would remember him for his work at Vice Motherboard.
But 404 is something like six months old now and, yeah, seems to be going really well.
Joe has written a book which came out today.
It's called Dark Wire,
and it's all about the FBI's Anom sting operation. I finished reading it a couple of days ago,
and I definitely recommend it. It's a lot of fun. But for those who don't remember, Anom was one of
those crime phone businesses like EncroChat or Sky or Phantom Secure. But it was a little bit
different to the others in that it was owned and
operated by the FBI and all of the messages sent over the network wound up in an FBI database in
real time. Now this obviously down the line led to a lot of convictions and bad times for the types
of criminals who use encrypted phones, so the book covers all of that. But beyond Anom, it also tells
the broader story of contemporary crime phones and starts off in its early chapters by looking at Phantom Secure.
It was the takedown of Phantom Secure that kind of created the Anom opportunity for the FBI.
A Phantom Secure reseller known as Afgoo had been developing his own crime phone service to compete with Phantom Secure. But when law enforcement cracked down on Phantom, that's when Afgoo approached the FBI and proposed
running the Anom operation for them as a way to keep himself out of jail.
And the rest, as they say, is history.
But what struck me as particularly interesting about the Phantom Secure thing is the guy
who founded it, because he didn't seem like the criminal type.
He was just someone who kind of lost his bearings and like the proverbial frog,
gradually found himself in a pot of boiling water. Don't get me wrong, he made mistakes,
but yeah, it seems like it was a gradual sort of thing. So here's Joe talking about
Phantom Secure's founder. Vincent Ramos, the CEO of Phantom Secure,
he started the company just as a legitimate business. He was selling it to VIPs in night
clubs, that sort of thing. And eventually criminals, especially one called Hakan Ayik,
who I'm sure we'll talk about as well, you know, who was Australia's most wanted man,
he starts using the devices because he's somebody who really needs a secure phone to stay ahead of
the cops. So more criminals start using the devices. Soon, Phantom Secure starts to lose
some of its competitive edge in Australia, I would say other companies are coming in.
And that puts a lot of pressure on Vincent to actually lean into
that criminal user base. So rather than just being sort of a passive salesman who's just putting a
product out into the world, he deliberately facilitated that criminal market, which of course,
that could be good for his bank balance when he's selling to the Sinaloa cartel. But it was
ultimately his downfall. Like, it's one thing to develop or just
simply sell a piece of encryption technology. That's not illegal. I don't think it should be
necessarily. It changes when you are providing that explicitly and deliberately to drug traffickers.
Well, and I think the interesting approach the feds took to all of this, you know, the legal mechanism they used here, was the fact that with Phantom Secure, if you rang them up and said, one of my guys got arrested, can you please wipe the device?
Well, they're destroying evidence and they know they're destroying evidence at that point.
And that's extremely legally problematic.
So that was really how they decided to go after him.
Yeah. The Canadians didn't have legal mechanisms for going against Phantom Secure where it was based. The Australians didn't have any sort of legal way to do that either, even though Phantom Secure phones were incredibly popular with some very, very bad people in Australia. So the Americans come in, and they have RICO, this sort of law that was traditionally used to target mob bosses, you know, people who don't
get their hands dirty, they're not going out and doing the hits necessarily. But they could take
RICO and apply it to Phantom Secure and treat it as a criminal organization in its own right. And
I mean, if anything, that's probably the most valuable thing the Americans actually brought to the table. Obviously, when they go on and run Anom, later on in the story,
that's all of their stuff as well with the Australians. But it was really this legal
weapon and apparatus the Americans brought to the table that actually allowed some of the
undermining of this encrypted phone industry. Yeah. Now, the amazing thing is, as you mentioned,
the FBI wound up essentially running Anom. And this was because one of the other providers they rolled up, you know, someone was going to get arrested who was involved in that. And they said to the feds, hey, by the way, I've been working on my own encrypted service. I was going to leave and go into competition. How about, instead of throwing me in prison forever, I run this service for you, and you get to surveil the whole thing? And to the
FBI's credit, like, it's somewhat amazing they said yes to that. And then the story that follows from that, they really were running a startup, you know. And it's staggering to me that a lot of the development for this stuff was just, like, outsourced to people via freelancing platforms in Asia. And they ran it like a real startup. And, you know, part of me was really impressed that a law enforcement agency was actually able to steward this project to the point that it became so successful.
Yeah, I guess that's something I should stress is that,
yes, the FBI took it under its wing and it did develop it, but there's something else which is pretty key, which is that this was a very small crew of agents inside, specifically, the San Diego FBI, and prosecutors from San Diego as well. But the ones I've spoken to, there's something of an underdog mentality, you know. Big cases don't usually come to San Diego. They're dealing with the border a lot of the time. That's what some of these prosecutors were doing, very low-hanging fruit of, you know, like, often overlooked communities. They wanted to do something else that would actually generate some impact. So when this case came along, and Phantom before it as well, they basically leapt at the chance. They wanted the backdoor into Phantom Secure. That didn't happen. And then this character called Afgoo, as you say, comes along and offers their own encrypted phone company, and they leap at the chance, basically.
Yeah. Yeah. No, it's extraordinary. And what winds up unfolding is that Anom grows.
It never becomes as big as the bigger ones. But I think at
its peak, there was something like what, 9,000 users, something around there? Yeah, I think 9,000
concurrent, but like 12,000 in total. So it was basically Phantom Secure. They built a Phantom
Secure. But not Sky or EncroChat, right? No, they were the juggernauts. Yeah, yeah. But you know,
still a lot of very serious people using this. And of course, the Americans decided that it would be legally problematic to surveil phones based in
the United States. So what you essentially had was a group of FBI agents operating a real-time
interception center and clearinghouse and passing on intelligence to the Europeans.
I think, you know, one country that really rolled with this, though, was Australia. And it comes
through so clearly in the book, where when this whole thing kicks off, the FBI and DOJ are very nervous and cautious about it, but the Australian Federal Police were just like, hey, this is great, let's go. When one of the prosecutors, Andrew Young from the American side, said to everybody else in a certain hotel room,
we're going to go ahead and try it. The Australians look like kids on Christmas morning. And you can
completely understand why. Because as I said, cryptophones have been an issue around the world
and for a lot of different agencies. For Australia, especially, they've been a real, real problem.
Because Australia is a goldmine for drug trafficking. If you can get cocaine into
Australia, it can go for like seven or eight times the price as it does in the US or elsewhere.
There's even a saying in Sydney, which is cocaine addiction is God's way of saying you have too much
money. Yeah, there you go. And presumably along with that is a phantom secure phone as well.
I mean, just people pay extraordinary prices for cocaine in particular in Australia. That's like a well-known thing. Now, one thing that you posit in your book is that the FBI lost control of this operation. It didn't really appear like that to me from reading it, but what did you mean by that? What I found through reporting is that the FBI
did start to lose control of Anom. And by that, I mean a few things. There's one where
the way the phones were flashed was through these small black box computers. Initially,
that was very much a bottleneck in Hong Kong. Eventually, Hakan Ayik and some of his other
lieutenants find out how to clone those phones. So they are the ones flashing the Anom devices rather than the FBI having control over the
supply chain.
There's some other stuff.
I mean, did that matter though, when they had control of the entire back end of the
thing and could pull the plug on it at any moment?
I mean, you know, they were letting it run, but do you think it's fair to characterize
that as out of control?
No, I say starting to lose control.
And I know that sounds pedantic, but I absolutely do mean that because, yes, they say they would
or could pull the plug whenever they wanted to.
But then also the agents themselves were saying, we can't do this anymore.
It's going to break.
It's going to get too big.
That's what the agents themselves were saying in this operation.
And I guess it's sort of hard to say. So less about losing control of the operation and more about, like, business problems?
Business problems. I mean, they were, in a way, almost too successful. They had hockey-stick growth after Sky collapsed, and they tripled in size basically overnight. And it was like a resource problem. So: we don't have the resources to keep this going. And that was a big part of the reason why they eventually did close it at the time that they did.
Now, the most delicious thing in all of this
is that the FBI had known criminals out there selling these devices for them. And, you know,
they sort of even had their own contests and competition about who could become like the top
dog seller. One of them who really did well out of all of this
was an Australian criminal by the name of Hakan Ayik,
who features prominently.
It's funny, actually, because at the end of your book,
you're like, oh, he's still at large.
Last I heard, he actually was arrested in Turkey.
Why don't you just tell us a little bit about Hakan Ayik?
Because he's a fascinating character, if not a lovely man.
Yes, so Hakan Ayik is the head of the so-called Aussie cartel. That's what the Australian authorities call it. And that's a collection of the criminal entities that Ayik has control over, like the Comancheros. Then you also have the Hells Angels and a bunch of other criminal entities as well. And they control a massive slice of the drugs that get into Australia.
I mean, they're a multi-billion dollar business enterprise.
He is in Turkey.
Yes, he's been arrested now.
That should be updated in the latest version of the book.
I think it is, yes.
By the time it gets to publication, I did scramble to get that in.
And then there are various other people in Dubai
and some have been arrested, and that sort of thing. But, you know, from what I heard, exceptionally violent and intimidating. And then people I spoke to inside Anom who worked with him had this flip side of him being very professional and polite, and he would listen to his phone resellers on marketing techniques and all of that sort of thing. I'm definitely not trying to flatter the guy. It's just, there is some nuance here, in that he can be a really, really bad person. He ran a very, very successful business as well.
Yeah. I suppose another thing that came through on this was, okay, so there's the flashy loud
people who everyone knew about, like Hakan Ayik. But the one thing that authorities kept saying over and over and over, and it comes through in the book, is that the most valuable thing Anom did, it wasn't necessarily about who got arrested.
And in Australia and New Zealand, that was like hundreds of people.
It was extraordinary here.
Australia did the best out of any country through this whole operation.
But it really taught them a lot about the way
that organized crime operates, right?
Like this was perhaps the most valuable thing
the Anom operation taught them.
And they identified all sorts of people and networks
that previously they had no idea about.
Yeah, it was all about the drug smuggling techniques as well. So you'd have stuff like: let's take the cocaine, put it inside liquid fertilizer. You then smuggle that across, and then when it gets to the destination, you do basically chemical magic and you extract the cocaine again, and then there it is and you can sell it. Hiding it in leather, in stone slabs, putting drugs inside shipments of energy drinks as well.
That'd be a hell of an energy drink.
Yeah, Monster, maybe, I don't know. Pure cocaine. Don't drink the wrong one.
But any scheme you could think of, they thought of that and more, because they're just so resourceful, and it's just worth it for them. And as you say, that is probably actually more valuable than the arrests themselves, because you've burned their tradecraft. You've actually burned their techniques for smuggling drugs now. And now the next crew can't really do that either.
Yeah. Yeah. Well, I mean, people should check out the book. You can hear all about Australian outlaw motorcycle gangs, the 'Ndrangheta, which is the Calabrian mafia, which has a big presence in Australia, and just the scale of this thing, the scope of it.
It's a brilliantly researched and very well-written book.
Joe Cox, thanks so much for joining us.
It's been a pleasure to chat to you.
Thank you so much for having me.
That was Joseph Cox of 404 Media there.
The book is Dark Wire, and I've dropped the Amazon link into this week's show notes.
It is time for this week's sponsor interview now with Travis McPeak, the co-founder and chief executive of Resourcely.
The idea behind Resourcely is that you can use it to manage all of your Terraform.
And yeah, Travis is here to talk about how some companies are trying to develop their own ways of abstracting away Terraform from their developers with things like, you know, hacked together YAML crap. And of course it winds up
becoming pretty janky, pretty pointless, pretty quickly. So have a listen to this interview with
Travis to find out why. Here he is. So, I mean, Terraform is a necessary evil. People want their
developers, for example, to produce Terraform because now it can go through a review process.
Things don't just happen in the cloud environment. But when I talk to companies that
can afford to abstract away Terraform from their users, they almost all do. They're building
these kind of like YAML interfaces where developers, instead of actually producing
Terraform, they produce a config list. And then that gets translated into Terraform behind the scenes.
The companies are trying to move away from even showing their developers Terraform just because it's so painful for them. And then I've also seen
customers that have accidental issues in Terraform, right? People will do things and not even know
that they were doing a destructive action. People will regularly, you know, Terraform will give you
a plan. It'll say, this is all of what's about to happen in the environment. And the plan is
very verbose. There's a lot of stuff buried in there, and people will make changes like accidentally deleting infrastructure and not realize that
they were doing that. I'm guessing what you're trying to do is replace that part that people
are doing where it's some YAML config to Terraform. I guess you're trying to be that
instead, right? And a little bit more polished and developed.
Yeah, exactly. So the YAML to Terraform sounds great. And then you quickly get into a problem
where the YAML has to be pretty complex because it has to account for all of the things that
developers want to do. And then instead of screwing up Terraform-
So you wind up with something like Terraform, but less documented in the end, right?
Exactly. Yeah. You just write bad YAML and then you find out in CI that you've screwed that up
or when it tries to deploy. So yeah, that's just shifting complexity to a later stage.
Yeah. So how is yours different then? What's the approach that makes it fundamentally different?
Yeah. So where we start is a really guided blueprint, right? Which is a default set of
inputs. The inputs are things that developers are commonly going to need, but that's just a
base starting place. We make it easy for folks to add properties, right?
So in a YAML file, you might have to say, okay, these are the six inputs that we're
going to allow people to use.
And that works until you pick a seventh.
And then now you need to extend your YAML syntax and then templates and all of that.
So that's one thing.
The other thing is in YAML, you're just going to get, okay, I need to put in a series of
strings and then actually helping users to know what are the correct strings to fit in there is itself a lift.
So for example, most companies have some kind of a regional policy, right? Amazon has like 25
regions or whatever. Most companies don't want to do business out of 25 regions. So for a developer
to actually stuff the right thing into that region field, they need to go look up in docs. They need
to have some semblance of, okay, these are the regions that our company is supposed to use. And a lot of times
that gets screwed up. What we do instead is we'll just show the actual list of options. So
where Resourcely starts is basically the full universe of all things that the cloud offers.
And then with our guardrails, we actually start removing options that your company doesn't want
you to pick. And so by default, you're only going to get left with sane options.
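(The guardrail idea Travis is describing, starting from the full universe of options and filtering down to what the company allows, can be sketched in a few lines. This is a toy illustration, not Resourcely's actual implementation; the region names are real AWS regions, but the policy is invented.)

```python
# Full universe of options the cloud offers for a field.
ALL_REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-2", "sa-east-1"]

# Company policy: the guardrail removes everything not on this list.
COMPANY_POLICY = {"allowed_regions": {"us-east-1", "us-west-2"}}

def options_for(field: str) -> list:
    # Instead of asking the developer to type a free-form string,
    # present only the choices that survive the guardrail.
    if field == "region":
        return [r for r in ALL_REGIONS if r in COMPANY_POLICY["allowed_regions"]]
    return []

def validate(field: str, value: str) -> bool:
    # Deterministic check: a value is valid only if it was an offered option.
    return value in options_for(field)

print(options_for("region"))            # -> ['us-east-1', 'us-west-2']
print(validate("region", "sa-east-1"))  # -> False
```

The point is the developer never sees the disallowed regions at all, rather than finding out in CI that they picked a bad one.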
And then the one other trick that we have up our sleeve is context.
So one of the reasons that configuration is so hard
is because user A needs to pick US East 1
and user B needs to pick US West 2, right?
And what they're actually trying to do
informs that decision a lot.
And so we can actually take that context up front.
A lot of times if a developer goes to an infrastructure team and they say, oh, I need a database,
the first thing the infra team is going to do is ask them some questions like, all right, great.
What kind of data is going in the database? Is this prod or test? Does it need to be HA? And
then from the answers, then they'll go and help them set up the database correctly. So we've
actually replicated a lot of that decision-making and automated questions.
Basically, the goal is to take all of that context from every expert at the company,
your infra team, your security team, your compliance team, and then feed that all into this system that just guides developers about how to pick the right option.
So, I mean, this is kind of like the next phase for you, right? Because originally it really was
just about getting all of that Terraform into kind of like a library, I guess,
or a catalog. And I guess now, you know, the dev work that you're doing is much more about like,
well, let's help people not do insane things, even if everything's cataloged and neat.
Right, exactly. So the security teams are very incentivized to buy us because they're the ones
that have to deal with all of the misconfiguration when things get set up poorly,
then they're in vuln management.
But security teams are, you know, political-capital-driven organizations.
In most companies, they have limited budget
to go and screw with developers.
And so the bet for us is that we can actually make something
that's easier and faster for a developer to use,
like even faster than going and asking some team for help.
And that thing's all self-service
and they can do it themselves
and you prevent the misconfiguration.
So it's a true win-win.
Yeah, I remember seeing a presentation years ago
from someone who was like a, you know,
CISO type at an org that was very development centric.
And they realized that, you know,
creating a really nice build environment for the developers,
they'd all go and check in their code there, right?
Like if you were running great infrastructure for them,
like it would just, it just solved so many problems.
So I guess this is that same thinking, right?
Which is you make this beautiful little, you know, garden of paradise
for developers to go and get their infrastructure from and they will use it.
Exactly.
If it's the fastest and easiest way for a developer
and they don't have to worry about all this stuff,
developers will, en masse, use it.
You're going to have some that are like,
okay, I'm a power user.
I want to figure out how to do my own stuff.
But most developers are just trying to get a job done, right?
They're trying to move the needle for the business.
So if you can make that all easy for them
and then you build a bunch of stuff like security
on top of that,
you have this nice kind of like watering hole where you only need to secure one kind of system and not a thousand
systems. Yeah. I mean, I guess the question becomes, right? Like, so at that early stage,
setting up the guardrails, who does that? Do you do that for all of your customers or is that
something that the person who admins resourcely does, and can they make mistakes there?
Yeah. Yeah. Great question. So where we start is with basically a big menu. And the menu is,
of course, a bunch of security stuff. These would be things like, I want to have approval before things are put on the internet. This is the breach, oopsie prevention mechanism. There's
a lot of stuff like that, but there's also standards like tags. There's things like,
are we backing this up correctly?
Is it replicated the way we want?
Are we using the right instance types for cost management?
So big catalog of all of those things.
We have out-of-the-box defaults, so you can quickly just say, check, check, check.
Okay, get our buckets covered.
But we also allow our customers to completely customize this stuff because as much as we might think we have a good set of defaults, every organization is different.
Every single configuration option in the cloud is a configuration option because
some customer needs to pick it in way A and some customer needs to pick it in way B. So we
allow that full customization within the guardrails themselves as well.
And what happens if someone who's the resource admin running this system, what happens if they
try to start selecting defaults that are a bit nuts? Do you have some feedback in the product that says, whoa,
hey, you sure you want to do that? So we're not going to know if it's nuts or not, right? Like
any configuration option is good, but what's going to happen is probably a user is going to go and
try and actually create infrastructure this way. And they're going to say, I can pick no regions.
I have no options in this thing. We're deploying to Asia and our company runs completely out of
America. So a lot of these things are pretty reasonable and sane. And then the other thing is
when customers will start rolling us out, infrastructure teams that are very used to
having to review everything and watch it will pay very close attention. And they're usually the
first ones to actually use blueprints and guardrails
themselves because it saves them time.
So it's pretty easy for them to see, okay, we've left no options, or all of the options in the list that we're giving are bad.
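The "no regions" failure mode described here can be sketched in plain Terraform (again, not Resourcely's syntax; the allowed regions are assumed) with a variable validation that rejects a nonsensical default at plan time:

```hcl
# Illustrative sketch: a region allow-list, so a default that leaves
# developers "no options" fails loudly instead of silently.
variable "region" {
  type        = string
  description = "AWS region to deploy into"
  default     = "us-east-1" # assumed default

  validation {
    condition     = contains(["us-east-1", "us-west-2"], var.region)
    error_message = "This company deploys only to US regions."
  }
}
```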
Now it is not possible for me to do a sponsor interview in the last couple of
months without Gen AI coming up, right?
Because everybody's looking at it.
You ran a hackathon over a couple of days looking at Gen AI and how you might use it with Terraform. And the results, at least at
the moment, are somewhat hilarious. Yeah. So I have talked to a subset of customers and they say,
why do we even need Resourcely? Isn't Gen AI just going to generate all of the configuration in
Terraform for us? And so nobody's more curious about the answer to that question than me. And I've looked at it a few times, but in this two-day
hackathon, we started off with, okay, let's just ask the best version of GPT available to go and
create Terraform. And the first thing it'll do is just start spamming properties, right? It'll add
encryption. I never asked for encryption. What I want is I need a bucket and an IAM role and
compute. And then it gives you kind of this contrived Terraform that's never the same twice in a row. The first thing we did to make that better
is essentially a RAG approach. So we fed it, these are the required properties and you should only go
with those. Unless I ask for something else, we'll do required. And then the next thing that we did
is we added our guardrails on top of it. So the way we've built guardrails, we can definitely
check in CI and make sure that something's done correctly. And that's kind of commodity, right? A lot of folks do that.
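A minimal sketch of that kind of deterministic CI check, expressed in plain Terraform rather than Resourcely's guardrail language (the resource names are invented):

```hcl
# Hypothetical generated bucket plus a machine-checkable assertion that
# a CI run of `terraform plan` can fail on deterministically.
resource "aws_s3_bucket" "generated" {
  bucket = "example-generated-bucket" # assumed name
}

resource "aws_s3_bucket_public_access_block" "generated" {
  bucket                  = aws_s3_bucket.generated.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Terraform 1.5+ check block: flags the run if the condition is false.
check "generated_bucket_not_public" {
  assert {
    condition     = aws_s3_bucket_public_access_block.generated.block_public_acls
    error_message = "Generated bucket must block public ACLs."
  }
}
```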
But the point of us is to push that guidance all the way to the left. And something cool about our guardrails is that they're actually so close to English that without any prior training, GPT, the best version of it, knows how to apply those guardrails on a configuration.
And then the winning sauce that we have is that we
can deterministically check and make sure that something that comes out of an LLM actually met
all of these requirements. So the answer for me- Well, I was going to say the idea that you
would replace, because what you've got with Resourcely is the ability to set enterprise-wide
policies for what your Terraform should look like and then have a single place
where you can store it, track it. You've got certain types of infrastructure, certain subtypes.
I mean, just generating Terraform with LLMs and spinning it up and it's just going to be a giant
mess. Who cares whether it's human- or Gen AI generated? It doesn't make
a difference. You know what I'm saying? Yeah. Well, the real problem with the Gen AI stuff is
that a lot of the times it's not valid Terraform. And the reason, the only way that you'd know it's
not valid Terraform is if you're yourself a Terraform expert, which the reason we're all
doing this is because we don't want developers to have to become Terraform and policy experts. So
it basically just pushes the complexity down the line and you're going to have to heavily review and edit the Terraform anyway. So at that point, just have a
team do it for you. Yeah. Do you do an audit function as part of like going to a new customer
and they say, Hey, here's, here's all of our Terraform. Do you, do you have a phase in your
like sales process? Cause I'm just guessing, like we haven't talked about this. I'm guessing that's
part of it, right? That you come in and, you know, run some analysis on what they've got and point out a whole bunch of
horrible things that are going on. And what have been some things you've found lately, right?
Yeah. So yeah, a lot of folks will come to us after they got visibility into their cloud,
right? So they rolled out a CSPM and they said, okay, we actually have tons of misconfiguration.
We need to do something about this. We need to not be in management hell.
So you're not really doing that yourself. You're just relying on the existing CSPM to find those issues. Yeah. I mean, that makes sense, right? Why reinvent the wheel?
Exactly. But we can quickly take that report and say, okay, here's where you're falling down,
right? So like a whole bunch of things are public and they weren't designed to be public.
That's an easy fix. We've got a whole catalog of guardrails for that. A lot of the CSPMs don't even go down to the kind of configuration that folks necessarily want,
right? Like there's whole additional categories of products that will help you set up data backups,
for example. That's not necessarily something the CSPM is going to look for. It's not strictly
cloud security misconfiguration, but it's the kind of thing that folks are like, okay, yeah,
we have all these data stores that aren't set up correctly for backups.
And then, yeah, like obvious standards. It's like, okay, well, I actually have these CSPM findings and I don't even know who owns them.
Like, I don't know who to go and ticket because there's no record of the cloud ownership here.
And so then things like tags become pretty important.
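One common way to guarantee that ownership metadata exists for ticket routing is provider-level default tags. This is an illustrative sketch only; the team and service names are invented:

```hcl
# Provider-level default_tags so every resource created under this
# provider carries an owner, making CSPM findings routable.
provider "aws" {
  region = "us-east-1" # assumed region

  default_tags {
    tags = {
      owner   = "team-payments" # assumed values
      service = "checkout-api"
    }
  }
}
```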
A lot of companies will have started to build some basic like patterns and blueprints, you know, the things that developers commonly need.
You're like, all right, I need database. I need compute. They'll have created Terraform modules. We can load in those modules. But on top of that, we can make it much more flexible and
nimble to, one, create new blueprints. You don't have to go through this module hell anymore.
And then two, extend your blueprints with new policies on top without having to do a full review with the security team. How out of control does it get, right? Like, have you just gone into a couple of customers and just
gone, wow, you know, like, because I imagine there would have been at least one or two where
you're like, my God, how, how does your, how has this been allowed to get this bad?
Yeah. I mean, yes, absolutely. It gets really bad. We've been in security a long time, but it gets as bad as you can imagine and more. No matter how much we train them, they're not going to understand all of the four ways to make a bucket public. So yeah, I mean, there's a lot of just developers need to
get something done. They take the path of least resistance and then we go fix it in
management. Yeah, they get it done. It works. They move on. All right, Travis McPeak,
thanks for the update on everything that's going on over there at Resourcely. A pleasure to chat to you as always.
Thank you.
That was Travis McPeak there from Resourcely.
Big thanks to him for that.
And big thanks to Resourcely for sponsoring this week's show.
You can find them at resourcely.io.
And that is it for this week's show.
I do hope you enjoyed it.
I'll be back with more Risky Biz before you know it.
But until then, I've been Patrick Gray.
Thanks for listening.