Risky Business - Risky Business #750 -- Why Microsoft's Recall is an attacker's best friend
Episode Date: May 29, 2024On this week’s show Patrick and Adam discuss the week’s security news, including: Russian delivery company gets ransomware-wiper’d A supply-chain attack targe...ts video software used in US courts Checkpoint firewalls get hacked, details as clear as mud Microsoft Recall delights hackers Aussie telco Optus gets told its IR report isn’t legal advice Cyber insurer says you’re 5x more likely to get rekt if you have a Cisco ASA And much, much more. This week’s episode is sponsored by Kroll Cyber. Alex Cowperthwaite, Kroll’s technical director research and development for offence joins to talk about how his team attacks AI models, in ways both classic and new. Show notes Major Russian delivery company down for three days due to cyberattack Stark Industries Solutions: An Iron Hammer in the Cloud – Krebs on Security CVE-2024-4978: Backdoored Justice AV Solutions Viewer Software Used in Apparent Supply Chain Attack | Rapid7 Blog Check Point Software customers targeted by hackers using old, local VPN accounts | Cybersecurity Dive US pharma giant Cencora says Americans' health information stolen in data breach | TechCrunch Microsoft’s New Recall AI Tool May Be a ‘Privacy Nightmare’ | WIRED Kevin Beaumont: "I got ahold of the Copilot+ so…" - Cyberplace Kevin Beaumont: "For those who aren’t aware, Mi…" - Cyberplace Patrick Gray on X: "You know it’s coming… Microsoft Defender Advanced Security for Recall" Microsoft Edge for Business: Revolutionizing your business with AI, security and productivity - Microsoft Edge Blog Optus loses appeal to keep Deloitte report on cyberattack secret Optus says it will defend allegations it failed to protect confidential details of 9 million customers in cyber attack - ABC News Nearly 3 million affected by Sav-Rx data breach Spyware app pcTattletale was hacked and its website defaced | TechCrunch #F**kStalkerware pt. 6 - tattling on pcTattletale Spyware maker pcTattletale shutters after data breach | TechCrunch Jeremy Kirk: "Cyber insurer Coalition releas…" - Infosec Exchange Coalition_2024-Cyber-Claims-Report TikTok says it disrupted 15 influence operations this year — including one from China Israeli private eye accused of hacking was questioned about DC public affairs firm, sources say | Reuters RansomHub claims attack on Christie’s, the world’s wealthiest auction house Open-Source Assessments of AI Capabilities: The Proliferation of AI Analysis Tools, Replicating Competitor Models, and the Zhousidun Dataset Shashank Joshi on X: "Additionally, OpenAI will retain and consult with other safety, security, and technical experts to support this work, including former cybersecurity officials, Rob Joyce [@RGB_Lights], who advises OpenAI on security, and John Carlin."
Transcript
Discussion (0)
Hey everyone and welcome to Risky Business. My name is Patrick Gray. This week's show is brought to you by Kroll Cyber Risk and this week's sponsor guest is Alex Calperthwaite, the Technical Director of Research and Development with Kroll Cyber's Offensive Security Team. on AI models. And, you know, besides prompt injection, there hasn't been all that much
discussion about how to attack Gen AI models. But that is what Alex is here to talk about this week.
He and his team have been beaten up on them and looking at the underlying infrastructure that
hosts them. And it's much what you'd expect out of any new technology. There be dragons,
basically. So we'll get into that in a little while.
But before all of that, of course, it is time for a check of the week's security news with Adam Boileau.
And Adam, we're going to start with a major ransomware attack in Russia, which we don't often see.
This company CDEK, I'm just going to call them CDEK for the purposes of this segment.
They're like a FedEx or a UPS-style courier company in Russia,
and they've been down for days, apparently,
because a bunch of hacktivists deployed ransomware to their network.
But I'm thinking this is just one of those examples
where someone's used ransomware effectively as a wiper.
Yeah, that does seem to be the case.
This is attributed to a group called Hedmere,
who appear to be Ukrainian activists, you know, kind of out there fighting that particular conflict.
And yeah, it makes sense that Vanceware is pretty good at being a data wiper.
It's optimized for speed.
You've got all the tools and things you need to deploy it very fast across an entire network.
And this does seem to have made really quite a mess of CDX network.
But yeah,
I mean,
it is interesting.
I'm not sure that we know that they're Ukrainian.
I mean,
we know that they're attacking a lot of Russian orgs.
So,
I mean,
it's a,
it's a reasonably safe bet,
but it could just be people who don't like Putin.
But that seems to be the,
the motivation.
But,
you know,
I just sort of wonder at what point, if we see more and more
of this, and it's something I've said on the show before, if we see more and more of this sort of
stuff weaponized and used against Russian targets, if that might motivate the Russian authorities to
sort of crack down on the ransomware ecosystem. I don't even know if that would, if it would work
out that way, but I'm curious. Yeah. I mean, we don't know like whether the ransomware variant in question was like an
actual Russian one. And of course, you know,
the Russian and Ukrainian cybercrime worlds were very much joined at the hip
until, you know, until this conflict kicked off.
It's been a pretty messy divorce, right?
Yes. Yeah, exactly. So like, you know,
kind of tying it back to one particular group or the other these days,
you know, especially when the origins are so muddy, pretty difficult, but yeah, like, you know, kind of tying it back to one particular group or the other these days, you know, especially when the origins are so muddy, pretty difficult.
But, yeah, like, I think it is going to make tolerating the crime ecosystem inside Russia, you know, just that little bit more complicated.
And I'm here for that.
Staying in the region, we got this great report from Brian Krebs here on Krebs on Security. And it's a rundown of one of those weird little corners
of like the cybercrime business where people are sort of being incentivized to install proxy agents
on their machine. And it's just one of those things where I read it and my brain hurts. But
Adam, walk us through it. Yeah. So this is a write-up of like a bulletproof hosting provider
called Stark Industries Solutions that popped up just before the war in Ukraine
really started to kick off. And this is a hosting provider that's been providing all sorts of
bulletproof hosting services for cybercrime operators and other people. But in particular,
they've been very popular with proxy networks for allowing attackers to move onwards through
their environments. And a number of the sorts of proxy services you see around are backed by stuff hosted in their environment and
indeed they've been offering essentially free hosting or free proxy services to people and
then other people have been like taking that and reselling it as for you know for money paid proxy
services but in particular there are a bunch of ties
from this to Russians and kind of Russian use.
And there's also been some campaigns
to promote it to Ukrainians.
So the idea that a free proxy service
ultimately operated perhaps by Russians
was being used by Ukrainians
might be a thing that the Russians
would be interested in,
getting real IP addresses and that kind of thing. But Krebs has done his usual, you know, high
quality work of tying together some of their people and organizations behind. And one of the
things that's noteworthy here is that a lot of this stuff actually ends up being hosted in the
Netherlands, which, given Europe's feelings about what's going on in Russia and Ukraine at
the moment, is kind of surprising in a way. But on the other hand, there's such a long tail of
hosting providers in that part of the world that have a history of being a little bit dodged.
But one wonders whether the result of this article is going to be a bit more of a crackdown
from the Dutch authorities because... Well, dutch are either watching it or pulling it down as we speak right yeah well exactly right
like whether it's more used to go in there and help yourselves and have access or whether it's
just time to torch it but yeah the piece has tie-ups with a bunch of other people who've been
doing you know just spamming and hosting and things like that um but you know there's a lot
of detail in there that's probably only interesting to the sort of people who like to track down the
you know nitty-gritty nuts and bolts of the stuff yeah but this agent is called d dossier and they
are like paying people to install it and it's just i don't know i just can't believe some of
these businesses function you know what i mean like when i'm reading and i'm like when people
are doing it like what on earth you know what i mean it's just the whole thing's weird it certainly is yeah it's
weird you can that you can make margin on such obscure things right yeah exactly and you find
a willing pool of people who are like just install whatever random code on their machines which is
you know funny because you would think the people who are most likely to see the advertisements for
this stuff are likely to be engaged in cybercrime and you'd think that type of person wouldn't want to
just run arb code that you know on their machine that's going to allow other attackers to proxy
through it so i'm like how does anyway as i say i just rub my temples and i don't quite understand
how all of that works but people are a little bit of a mystery to me adam uh now i want to talk
about some research out of uh rapid seven
on a supply chain attack that's getting surprisingly little attention this company uh justice av
solutions they make like audio and video recording software that's used in places like courtrooms
and they discovered a backdoor in a version of their product that was being offered for download from their servers.
And when you think about the sorts of environments
that this is going to give you access to,
like we've seen the Russians in particular
take a real interest in hacking into court networks
because they can get access to all sorts of juicy documents
that might be under seal or whatever.
I'm just sort of surprised this one isn't getting much more attention,
but can you walk us through exactly what's happened here?
Yeah, so it looks like a Rapid7 customer was rolling incident response
after they identified some weird stuff going on in the network,
traced it back to an install of the viewer from this Javs company,
and when they investigated it turned out that actually, yes,
the version that was available for download
had some kind of backdoor in it that, you know,
like connected to command and control,
downloaded some PowerShell, et cetera,
like pretty normal stuff from a technical point of view.
It was also signed with a code signing certificate
for an entirely unrelated entity.
So not the company that made this particular software and that that download or that installer had been on their site for quite
some time there's no information as to kind of exactly how long or how they got compromised but
of course rapid seven reached out to the company and they've you know all started to roll response
but yeah it's an interesting supply chain attack
and an interesting niche.
The actual backdoor itself was in,
they bundled FFmpeg for processing videos,
the open source kind of video compression system
as FFFmpeg.exe, which is, you know, that's kind of nice.
No one's going to notice that many Fs.
This is also used in things like prison systems
for video conferencing for prisoners
and for dealing with court interactions and stuff.
So definitely a kind of concerning place to find backdoors.
Yeah, but it's like it makes total sense
when you think about it, right?
Like what an interesting place to do a supply chain attack
because obviously, you know,
a company that makes recording software for courtrooms, it's such a niche field. I can't imagine that they've got a
200 strong security team. Exactly. Right. So it is the perfect place to do this sort of supply
chain attack. Yeah. And I think, you know, it really calls back to the point that you've made
a bunch of times that not all software is, you know, Apple and Google grade in terms of the amount of resources they
can expend on security testing or security engineering. And, you know, there are so many
pieces of software like this out there in the world that do something really important in really
important places, but just don't have the resources to care about security in the way that, you know,
we kind of need these days. Yeah, I mean, this is a slightly related thing, but even when you look at exploiting vulnerabilities
in uncommon software, right?
So I remember someone you worked with a million years ago,
Paul Craig, did that presentation.
I mean, this is going back like 15 years or something.
Did a presentation on just fuzzing for bugs
in scientific software so that you could just like,
you know, find a way to send someone
like a CAD file or something. And, you know, the options available to you as an exploit dev in a,
you know, ancient piece of poorly maintained software, poorly maintained from a security
perspective, at least, I mean, you know, the options are limitless there, right? So, you know,
if you're targeting R&D, you just find the sort of software that they're using to do the r and d
do some fuzzing and robert's your mother's brother as we say yeah exactly right i mean taking those
things that we know against hard targets i mean like if you've grown up writing exploits for
browsers you know which is probably the hardest target you're going to hit these days and you go
back to looking at any software niche it's just just going to be easy. Yeah, like Adobe Audition, like we use.
Like I can imagine the horror show that that is, right?
Oh, yeah, exactly, yes.
Nick Freeman, who's another New Zealand hacker,
did it with movie-making software,
like how to hack movie studios to steal, you know,
movies before they get released,
and exactly the same thing, right?
It's all the old classic bug classes are still there,
and it's still easy mode hacking. There's a lot of parsers and they're optimized for speed not security and uh yeah yeah
fun times but you know speaking of hard targets adam of course you're not going to go up
against security software because because that stuff's so well put together uh let's talk about
checkpoints interesting week this story This story is a little bit
odd, right? Because we got David Jones's version from Cybersecurity Dive. Checkpoint's put out
some sort of warning about how its customers are being targeted, but it's really unclear what's
happened here. There is a CVE, which is an info leak or info disclosure CVE that's linked to this
campaign somewhere, but they've put out
this sort of statement that says something about local vpn accounts and no mfa and something
something info leak but we don't really know yeah like the cve is very very vague and the fix that
checkpoint has provided essentially allows you to just require multi-factor auth which it's the
year 2024 like surely that was a knob that the
checkpoint already had um but yeah it looks like single factor maybe service accounts maybe you
know like not particularly common user accounts are being attacked and it you know it's presented
like it's probably cred stuffing but there must be something to it and i think you know the idea
that this info leak is perhaps username disclosure that you can then use to set up your password reports, like that kind of fits the available data, but it's a bit weird that a major manufacturer of network perimeter software has a bug that's being actively exploited and we security news people don't know what it is. News people don't know what it is. Yeah. And I'm trying to figure out if this is because they're worried about a reputational hit,
if they do a full disclosure, or if the bug is so dumb, which is often the case with these
sorts of events, that if they talk any more about it, everyone will figure it out and
off they go.
And I do wonder, too, if it's like an account name enumeration thing, and there's maybe
some dormant service accounts that people haven't put MFA on
because it didn't make sense to put MFA on them.
And I don't know, maybe you enumerate the username
and then it enables a brute force.
I got no idea, but that's the whole point, right?
Is where we got no idea.
Yeah, exactly.
So I mean, I guess if you've got checkpoints
on the edge of your network,
maybe go have a chat to your checkpoint,
you know, local representative.
At the very least
their hotfix maybe is a thing that you want to have anyway but yeah i mean who knows right i
gotta mention too like after we recorded last week's show uh at osso i wound up meeting two
different people from uh fortinet and it's funny because it's like oh hey you know i'm from
fortinet and they listen and i'm like yeah you know haha but they were
total sports i gotta say uh two people from fort yeah you met one of them right yeah one of them
came up to me afterwards as well introduced himself and i went past the fortinet booth
briefly absolute sports so uh shout out shout out to the uh to the fortinet reps uh in australia for
for having a sense of humor. Good sports, bad software.
Perhaps, yes.
But I mean, you know, what's good?
That's the point.
That's true.
I think one of them was like,
go talk to Pan about their dot dot slash.
And I'm like, yeah, okay, fair enough.
What else have we got here?
Oh, there's been a major breach of healthcare related information in the United States.
There's some company called Sencora.
I think this is a disclosure about a breach that already happened
like earlier this year, but like a lot of data was taken.
Yeah, this was a company that handled medical data on behalf
of a bunch of other drug manufacturers and medical companies.
They got themselves breached somehow.
They've had to go through the mandatory disclosure process.
We don't know a whole bunch about it, but at least they handle something like 20% of
the pharmaceuticals sold in the US.
So it's a big company, and I guess there is a bunch of customer data in there.
And this is the sort of thing we probably normally wouldn't have mentioned except that the healthcare sector has been getting such a pounding lately
yeah we've seen some pushback from hospitals and other medical you know organizations about
the sorts of you know requirements are going to be pushed on them but it doesn't look good when
it's like this no it doesn't and i mean you get to the end of this story and you see the um
you know you see the numbers involved.
And I can't believe this isn't a typo,
but it's Sankora made 262 billion in revenue in 2023.
I mean, that's a lot of dollars, right?
That's a lot of dollars.
And one would hope that for that,
you would be able to afford multi-factor.
And we don't know that single factor was how they got owned,
but I mean, that's...
Yeah, like let's find out if it's another one
where it was like Citrix, no MFA.
Yeah, 1FA Citrix on the edge, yeah.
So with some creds that someone bought in the dark web for 10 bucks, you know?
I mean, even if it isn't, it sounds believable to me.
And they're not the only one in the US that's been hit with something similar.
This company, SavRx, same sort of thing uh three million affected in this one so i mean
it's very much like the medisecure one that happened here in australia and now we've got two
examples in the united states yeah it's certainly not a great time um to be handling medical data
that is for sure and you know there's all these little companies that turns out are not that little
that are handling super sensitive data for you know such large swathes of of population
and yeah i don't know like we we clearly got to do a better job right well i think a part of this is
a lot of these companies eventually they sort of get managed like utilities right like they're a
profit producing asset uh and people just get a you know management i'm talking about upper
management can get a little bit lazy when it comes to thinking about the security of these things because you know it's
a stable business it's out of its innovations stage it's just expected to sit there and and
you know money all day uh and they don't really think oh well maybe we need to do all this
engineering uplift to modernize our infrastructure and stuff it's just not part of the thought
process with a lot of these type of companies you know what i mean yeah i absolutely
agree like this is a great example of you know security is not a thing until it gets broken and
if you've been running this business for 10 15 20 years and you haven't been wrecked then clearly
there isn't a problem and you know that's that's what biz school tells you to think and yeah here
we are yeah here we are now let's talk a little bit more about uh
recall oh god this is the microsoft feature that takes a continuous stream of screen grabs and then
like ocrs everything into a database now microsoft of course did the big announcement saying everything's
encrypted it's local only and whatever and you know i even did get a little bit of pushback
uh based off of our discussion last week with,
I think, one person on social media was like, oh, it's local only. Well, first of all, you know,
that's still a problem, right? Because if you can get access to a user account and then just
rifle through a complete history of all their activity, you made a good point to me, which is,
the first thing you want to do when you get on a machine often, if you're up to no good,
is to figure out how someone did their job. So we saw that in the case of the SWIFT attacks
against central banks years ago, that the attackers would have to get in there and actually
do screen recordings to figure out how transfers worked and whatever. That was when the North
Koreans were doing that. This would speed up that sort of discovery process very quickly.
But thanks to Kevin Beaumumont gossy the dog
we now have a little bit more clarity on how this thing works and it's not pretty it's really not
he uh managed to get a hold of copy of it and actually make it run on systems that weren't
like the full copilot plus grade pcs that have ai you, AI units doesn't actually really need that. So it takes screenshots,
it sticks them into a SQLite database in your like user profile directory somewhere, which
local administrators or, you know, people who've got system on the machine can help themselves too.
And then yeah, processes the data and sticks the results of the machine learning process
into the database as well and as an attacker like landing on a machine that's got you know
it's already got a keylogger running it's already got a screen grabber running like that's amazing
that's handy right because like deploying those things as an attacker is also kind of fiddly
right because you know heuristic av or things that are looking for weird behavior,
you know, that's kind of a bit risky
to go land on a box and drop a screen recorder.
Plus the performance hit, right?
You're always worried, like, is someone on battery?
Are they going to notice their laptop grinding?
You know, it's tricky.
Like, you're taking risks doing that.
And so having a built-in, you know, screen recorder
that already generates search indexes for you
so that you can just search for the password field
and find what they've been doing.
And yeah, it's wonderful for attackers.
And the idea that this is going to be on by default
in corporate environments is just...
It's lip-smacking good.
It's so good.
I mean, that's wearing your attacker hat.
It's lip-smacking good.
But I mean, you just mentioned,
oh, if you've got system level access,
you can get access to this database.
I imagine that if you've got the same access as that user,
you would also have access to that database, right?
So I've seen a couple of...
So Gossy the Dog was saying that you do require a system
or admin.
I'm not 100 percent sure how that's
implemented like i would imagine it's using the regular dp api security mechanism that you use
for encrypting other things on windows and there are some controls you know kind of around access
to that uh so which is good i mean that's that's something right like we've seen the hoops that
again to mention the north koreans we saw the hoops they had to jump through to do a silent install of a chrome plug-in in the background
like that was engineering work uh to make that work right so it can be fiddly to do this sort
of thing on windows so i guess that's something but it's i mean it's still like can you honestly
think there would be many cso's listening to this who are like, yeah, I'm going to leave that on. This is awesome.
Yeah.
No one wants this.
No.
And like I think the ultimate example of no one wants this is Microsoft have in the same breath are announcing, you know, Microsoft Edge for business, which has screenshot taking controls. And to stop you screenshotting Excel or PowerPoint or Microsoft Office apps in the browser
so that you can stop people exfilling your corporate data.
So like in the same breath, Microsoft, what are you doing?
I mean, my joke about this is that you know it's coming.
Microsoft Defender Advanced Security for Recall.
Right?
Because Microsoft's business model is to sell you a foot gun
and then to sell you a bullet-resistant shoe.
That is their entire business model.
I mean, I do think this edge for business thing is interesting, though,
because I'm all about it, right?
Like, I think that we need the browser makers,
particularly Microsoft and Google,
they need to be building enterprise-grade browsers
because there's so much, like, whether you're EDR, whether you're on the network, like as a defender, there's so much
happening inside the browser that you just can't see. You know, there's a company that I'm working
with called Push Security that does a browser plugin that captures a lot of that identity
information and telemetry and can detect phish kits and stuff. And that's really cool. So you
don't necessarily need an entire browser stack for everything but it sure is handy especially for things like dlp use cases and that's where
you've got things like um you know island browser i think is really cool and it's a lot further
ahead of what microsoft's announced but i think we need multiple participants making enterprise
browsers because it's just nuts that we're essentially using a bit of consumer grade
tech you know as our primary computing tool in enterprise it's nuts yeah absolutely right as so
much stuff has moved into the browser and as the browser has become more full featured and turned
into the operating system it makes sense that a bunch of the security controls that traditionally
would have been operating system things have to move into the browser to be effective and you know microsoft's probably the biggest browser maker on the planet
so makes sense that they are going to go there themselves but yeah i think you're absolutely
right about you know ultimately consumer tech being used to run you know warships and nuclear
power plants and every business on the planet you know life does need to think a little bit about some of those things you know i think it
made sense when we had visibility anyway right so there was a time when you could see a lot more on
the network you know what i mean and then of course tls came it's like oh well you can still
pull domains and you know now we're at sort of tls 1.3 and that's getting a bit tricky and
you know break and inspect is horrible. And,
you know, I just think it's time that we actually started hooking browsers for telemetry, you know,
on the security side. And then, you know, it's the obvious place to do DLP, but, you know,
in Microsoft's announcement here, they got improved leak protection in sensitive Word,
Excel, and PowerPoint documents in Edge, which means, you know, to use that, you're going to
have to be doing all of that classification stuff, right? And that's not what people want.
Microsoft's document classification framework
is not fun. No.
No, it's not. So, I don't know. I mean, good
for them for pushing into this
space a little bit, but I think it's
yeah, I think it's not going
to be that simple. I don't think.
I think they've got a long ways to go. And the irony
of announcing that whilst
also announcing recall,
it's just like, what are you?
I thought you were taking security seriously, Microsoft.
Yeah.
Well, yeah.
Yes.
What are you doing?
Let's just talk about something going on in Australia.
Optus, man.
Like, this is one of those things, you know, those domino memes, right?
Where, you know, something small happening and, you know,
it's the progressively bigger domino meme.
I mean, just when I think about the, you know, something small happening and, you know, it's the progressively bigger domino meme. I mean, just when I think about the, you know, couple of years Optus is at, where they had that hideous
data breach as a result of just exposing an API endpoint with no auth on it, right? Which is what
we all assume actually happened. You know, sometime after that, they actually had an outage too,
which people outside of Australia probably wouldn't necessarily know about.
But the whole network just fell over.
I think it was some update that went wrong.
And they were down for a day, more or less.
And they got fined $1.5 million over that because people couldn't make emergency calls.
And that's like a regulatory requirement that, like, that can't happen.
So that was a screw-up.
And then within a couple of weeks of that,
the CEO was gone, right? Now she might've survived the exposed API endpoint. She might've survived
the outage. No way she was going to survive both, right? So she was gone. Now there's a class action
against Optus and being run by a law firm called Slater and Gordon that's got a history of of running these sorts of things they did the stuff in the 80s here about asbestos mining and
uh you know and injuries to workers uh so they're very well-known uh law firm that does this sort of
thing they've been trying to get their hands on deloitte's ir report uh into the optus breach
optus are like no way you know privileged. It's been back and forth a couple of times.
Optus just lost their appeal
and they're going to have to fork it over.
Yep.
And we've seen companies take all sorts of steps
to try and get their incident response reports
not be discoverable
or not be a thing that you can get hold of easily.
And yeah, interesting to see the court in Australia go,
actually, no, this doesn't feel like bona fide legal advice.
This feels like you're trying to cover, and no, hand it over.
Yeah, and I don't know if that's because they messed something up.
Like, we've seen it happen too with Capital One.
That happened during the Capital One lawsuit as well,
and in today's Risky Business News newsletter,
which, of course, you should be subscribing to,
news.risky.biz, you know, catalan has taken a look at this as an issue and pointed out that these days like we are hearing
i mean i've heard it he's heard it you've heard it where companies are now asking for oral reports
on incidents so nothing is written down like that's the lengths to which some of these companies are
going to to try to keep this stuff hushed up, which is insane.
Yeah, it is.
It is pretty nuts.
And, you know, trying to shield it behind legal advice or just because you've hired,
you know, Mandy via your lawyer doesn't automatically make it legal advice.
So yeah, the next logical steps are things like no paper reporting,
no records being kept, do it all in person.
You know, we'll be in
you know interpretive dance reporting before we know it you know you won't even be able to say
it out loud you just have to come in there and do swan lake in the style of a you know of a
ransomware gang to explain what's happened what a mess yeah yeah i mean do it in encrypted morse
with a you know with a key that you eat afterwards right like? Like it's just, it's got really nuts
the lengths to which companies are just like,
yeah, trying to hide this stuff
because of the legal risk.
I mean, yeah, I don't approve of that.
No, and I think the push and pull we've seen
from like the market regulators,
from the SEC in the US, for example,
trying to get this information out of companies
so that investors can make good choices
and the market can make good choices,
you know, when organizations are just not incentivized
to be open and upfront about,
you know, things going wrong for them.
Yeah, and there's also another action against Optus.
They're being taken to court by ACMA,
which is the Australian Communications and Media Authority.
And that's over the breach, not over the outage.
Like, it's just a world of hurt.
And again, going back to that Domino meme, authority so and that's over the breach not over the outage like it's just a world of hurt and and
again going back to that domino meme you know one exposed api endpoint yep and here we are in the
class action class action law you know when we think about all of these incidents right that
we've seen recently like change healthcare citrix with no mfa and now we've got optus where they
just accidentally exposed an api endpoint there's lawsuits, there's regulatory actions,
there's resigning CEOs,
and you just think it's just that one little thing.
Like if you were just doing a bit of attack surface mapping,
you wouldn't be, like in both cases,
if they were actually looking at what was on their perimeter,
none of this would have happened.
Yeah, exactly right.
I mean, if it wasn't always a case of treating
security as a cost center that has no benefit and it's a place you can just trim some money out of
yeah you know there are definitely some lessons to be learned here people but then again you know
we as an industry have also sold people a lot of security products that didn't work and made it
worse so you know yeah a little bit of blame to go around. A little bit. Everybody gets the blame. Everybody.
Now,
let's talk about PC Tattletail,
which is a,
like,
stalkerware app.
What even platform does it run on?
So this is a,
primarily a Windows stalkerware app,
but I think they also have an Android version,
but I think mostly it was known as being one of the ones that has a functional Windows
spyware stalkerware.
Yeah,
I mean,
I figured with a name like PC Tattletail, that would be the one.
Anyway, you know, horrible, immoral software.
Thankfully, though, having a real bad time.
Well, and they're having a terminally bad time.
They are out of business by the look of it.
So somebody found a bug in their like PHP,
you know, backend system, attempted to report it.
I think this is basically a one-man shop.
Didn't pay much attention.
Some reporters, I think at TechCrunch,
I think Zach Whitaker over at TechCrunch
got handed that story, went and investigated.
But very quickly, somebody else found out about it,
bust in, defaced the site with a whole bunch of details
and then downloads of their customer data
and things like that.
Maya Arson-Crimew, the Swiss hacker,
has actually a very good write-up about the software
and some of its flaws over on her blog.
This thing had like 300 million screenshots
of Windows boxes in their S3 bucket
because the Amazon, they were backed by Amazon.
And ultimately one of the bugs was the client software
just had a token to get access to the Amazon with full privs.
Oh, it's one of them, was it?
And it was the same token for everything.
Yes.
Yeah, so not great at all.
And yeah, all the custom data has been leaked
you know can open worms everywhere uh another interesting wrinkle uh beyond this was that
somebody whilst looking at the the dumps from the site found that there was a backdoor in the php
that somebody else probably put there a decade ago that was like straight up code exec you know
in some obscure php script you script that they left lying around.
So yeah, this thing has clearly been used and abused
by both customers and somebody else.
And attackers and all in sundry, right?
Yeah.
The only reason that it's even more interesting, I guess,
is that this stuff is actually cropped up in places
like Wyndham Hot hotels reception kiosks
or reception like check-in terminals um so whether that was being used by corporate to keep an eye on
things whether there was some local hotels that were using it I don't know but yeah more widely
used than you thought less well maintained than you thought and more owned than you thought yes
exactly right that's uh yeah great combo they had a bad time and we're all about it.
Yes.
Basically.
Yes, we are.
Now, we got some great stuff out of the insurance industry.
It feels like insurance is hot again, right?
At least for us.
Because we spoke about insurance last week and talked about how, you know, you went through your initial optimism and then your, you know, trough of disillusionment, as they call it. And now we're reaching, what did
Gartner call it? The plateau of sustainability or something like that in their mystical quadrangle.
Insurance era is what it is.
But it looks like we got some really interesting data being published by the insurance industry.
Jeremy Kirk, who's a former journalist turned threat intel guy, we got his Masto post up here,
and he's just pulled out some interesting stats here.
So policyholders who are running RDP exposed to the internet
were 2.5 times more likely to experience a claim in 2023
due to, you know, attackers going after RDP.
And we've got similar numbers for ASA.
If you were running Cisco's ASA in 2023,
you were five times more likely to make
a claim. And Fortinet, you were twice as likely to make a claim in 2023. So it's really interesting
when you start looking at this sort of data and saying, yeah, maybe I don't really want to run
this thing that means that I'm, you know, statistically like five times more likely to
make an insurance claim on if I have it.
Maybe that's a good idea.
I mean, yeah, I think this kind of data is great
because so much of infosec is smoke and mirrors.
We don't get to see what happens.
Whereas insurers are in a position to get some good data.
And I just love that buying a Cisco ASA
makes you 4.71 times more
likely to get wrecked so yeah i mean i don't know i mean there's there's the correlation and
causation thing yeah i mean i know i know i'm gonna be a little bit facetious about it because
exactly that but i mean the type of company that's running an asa right like it's an older enterprise
right it's not some new hot thing that's been built with the latest stuff. Like, you know, it's more of an indicator.
I don't think you can directly say that it makes you five times more likely to get owned.
But it's like the type of company that runs ASA is maybe five times more likely to get owned, you know?
Yeah.
Yeah.
No, this is a great read.
And certainly, like, if you're in an organization that is one of these kind of, like, you know, legacy tech stack kind of places, some of these, you know, reports and graphs got to be useful for your next, you know these kind of like you know legacy tech stack kind of places some of
these you know reports and graphs got to be useful for your next you know kind of meeting where you're
trying to stride strategy and justify getting some budget to rip some of this junk out and if even if
that's only maybe our premiums will go down rather than we get owned less then hey you know whatever
works right whatever works now uh this I don't think will work.
TikTok has published a report into disrupting, you know, influence operations, including one from China.
Look at us.
We're completely independent.
Woo.
This ain't fooling no one.
But it's, you know, it's still good that they're publishing the data. And I, you know, I'm sure that they managed to squash a Chinese op, right?
I'm sure there is a group within TikTok that is tasked with cracking down on inauthentic behavior, is the term of art.
But I don't think this is going to do much to help TikTok out of its current predicament in the US.
The only thing that can do that is Donald Trump getting elected.
Yes, TikTok's problems definitely run a little bit deeper. In this case,
they wrote up, I think, what, 15 campaigns that they'd identified, some of which were related to Iran, one of which was run out of China, as you mentioned. And that was, I think they said, 16 accounts with 100,000 followers.
So from a TikTok scale point of view,
very, very small bickies.
The other stuff that they reported about
was kind of what you'd imagine.
There was a bunch of, you know,
Israel Hamas kinds of accounts,
you know, doing influence ops and things,
stuff in Venezuela and Indonesia and Ecuador.
And, you know, kind of what you'd imagine.
It's interesting that we don't see – I don't know.
There was nothing that seemed to be Russia and Ukraine related,
but again, this is part of the comprehensive.
Is TikTok big over there?
Yeah, I don't know.
I would have thought there would be something going on,
but yeah, I mean, it's good that they've done something.
As you say, I don't think they're really fooling anyone.
It's certainly not going to make any significant headway for them in terms of their political situation in the US.
Yeah, I mean, I really think it'll be interesting to see in November.
Just off topic for a minute.
I think Trump is a real shot at winning the presidency in the United States again.
And it's crazy because he, you know, initially the TikTok ban was his idea.
And now he's reversed it because, I don't know, anyway.
But I think he has a solid chance of, you know, reversing the ban, which would be very
popular with a certain section of the voting public in the United States.
He's also promised to commute the sentence of Ross Ulbricht of Silk Road. So he was doing a
speech to the Libertarian Party. He spent most of the time getting booed and yelled at, but he got
a raucous cheer when he said, you know, on day one, I'm going to set Ross Ulbricht free, which
I thought was wild, right? Because I guess Ulbricht is a bit of a icon in the libertarian movement um but you know
i was i was you know i just just in talking to like drivers bartenders just you know normal people
uh in the united states like they are furious about the state of the economy in the united
states because of the inflation uh and they're blaming joe biden and i could definitely see
trump winning again and as i explained to my American friends, it's particularly traumatic as an Australian
because they get their dose of Trump throughout the whole day, right?
Whereas us, we're waking up as the US is winding down.
So we get a whole day of American politics injected into our eyeballs as soon as we pick
up our phone in the morning.
And it's a hell of a way to start the day getting yeah you know getting getting 10 hours of trump
condensed into a five minute um stream of notifications on your phone right crazy times
but i think he could win and that's gonna you know at the very least adam it'll make life
interesting won't it i don't know that i want that interesting like peace and quiet thanks
very much america but uh what are you doing what are you what America. But what are you doing? What are you doing?
What are you doing?
Now, just real quick, I wanted to mention.
Oh, and look, if you want more details on that TikTok thing,
our colleague Tom Uren, who writes Seriously Risky Biz,
which is available again at news.risky.biz,
he's writing up a bit on that tomorrow.
And he and I do a podcast every week in our second channel,
Risky Business News,
and we'll be talking a little bit more about that TikTok stuff. And I promise you that Tom has better takes on this than you or I do a podcast every week in our second channel, Risky Business News, and we'll be talking a little bit more about that TikTok stuff.
And I promise you that Tom has better takes on this than you or I do,
as is often the case when it comes to serious stuff.
So he's our in-house thoughtful person who's less about half-baked hot takes like we are.
We've got a report here from Raphaafael sata and chris bing uh in reuters
about this israeli uh private eye who is connected in some ways to these you know
mercenary hacker for hire firms based out of india and whatnot but they've got a report here that
says the fbi actually questioned this private detective about his relationship with
a Washington public affairs firm called DCI Group. And, you know, what the Reuters team are doing
here is they're just cataloging every little bit of this, every little bit of this, right, along the
way. And you really do get the impression once you take in the totality of the work that they've done on this hacker for hire stuff being used to, you know, it's likely been used by some powerful corporations through various entities to do things that they shouldn't do.
You kind of realize that ultimately in a few years, this could culminate with someone going to prison.
This is important work.
Yeah, yeah yeah it
certainly is because when we first talked about this guy he could have been arrested i think in
like heathrow he was flying out of london and the british cops had done some they made some
procedural error and then he got out and that was where we had left it last time we talked he was
then subsequently re-arrested and i guess they did it right this time
so he's out on bail now in the united kingdom but yeah you do get the feeling that this is
one of those iceberg stories that you know as people grind through all the details and especially
as you say there's going to be some you know murky high profile corporate or government or
whatever entities um that you know use high npr
firms in washington dc that's it's just going to turn into a bigger thing and i hope they uh
you know the journals in question keep digging um and i look forward to reading you know the
proper story once we've got all those details because yeah it feels significant and of course
there you know of course uh this reporting also includes you know
the reporting on that indian firm that uh wound up taking court action against reuters and getting
their report pulled and you know we even got a legal threat on that one uh via tom's column you
know seriously risky business being syndicated via lawfare they got a takedown notice and we
didn't even mention the company we just linked to the report so they're really aggressive legal campaign um we we decided to pull down uh our reporting uh based on reuters
reporting because like if reuters have pulled it you know we can't really stand behind something
that doesn't exist on the internet anymore um but yeah they're just doing a lot of work here
and i think it's one that's worth you know just keep your eye on it because i think absolutely
if this thing blows up uh which it has a coin toss chance of doing think it's one that's worth, you know, just keep your eye on it because I think if this thing blows up,
which it has a coin toss chance of
doing, like it's going to be a scandal
when it eventually goes. Yep, I'm
looking forward to it because it's always nice to see people
getting their comeuppance. Now last week
we mentioned the
data extortion attempt against Christie's,
the auction house, and
yeah, that data has
popped up. You know, there was some sort of attack right
and we kind of guessed that it was probably going to wind up being data extortion and in fact it is
the wrinkle here though is that a sample of the data has hit like ransom hub and the attackers
are saying you know we tried to negotiate a settlement but you know they wouldn't go for it
so now we're going to sell the data and expose it live you know unless they um uh unless they pay us but christie's for its part says the
the data that was taken is just like limited data on some customers so they don't seem very
interested in paying now last week we did mention that you know if a full database from christie's
auction house went out there that would be bad because you're talking about rich people who own very expensive things with their, you know, names, addresses and personal
details. But yeah, Christie's doesn't seem too concerned. So I sort of wonder who's right here,
you know? Yeah, they've said there was no evidence of financial or transactional data relating to
our clients being taken, but that it did include a limited amount of personal data.
So, I mean, I guess they must be pretty confident
that the limitedness is pretty limited.
But yeah, again, you're messing with kind of sensitive people here.
So we will see.
The Illuminati.
The Illuminati are going to come and get you.
Yeah, but I mean, it's like when we've seen, you know,
breaches involving like gun registration databases and stuff like there's just certain categories of data that
you know if they go out there they're going to result in further crimes yes absolutely yeah yeah
now uh let's talk about something real funny which is you know occasionally like the chinese
intelligence world just screws something up uh pretty spectacular. We've got an example of that here, Adam.
Yes, some academics from the Berkeley Risk and Security Lab at the University of California,
Berkeley, they stumbled across a data set and one of these kind of like, you know,
AI data sharing sites, which was a set of images of US Arleigh Burke class destroyers
with their Aegis missile defense systems labeled for training.
So it's about 700-ish pictures, mostly of American destroyers.
Actually, there is one Australian destroyer in the data set as well.
So there you go.
And they have been hand labeled with, you know,
like missile radars and vertical launch tubes
and a couple of other, you important components of ships and this data set had been uploaded from like some university in shanghai the shanghai tech
university um so these academics uh took note of this uh trained their own models based on the
data and then concluded that actually the training data is not very good and that if you were to
build a
missile that used this for targeting then it might work in certain circumstances but because a lot of
their training data were satellite pictures of american warships they're not very good at other
angles so not a great data set but stop giving them tips what are you doing nevertheless uh to
see this stuff pop up so i mean i'm guessing this is some sort of university
project that is has been commissioned by you know the pla or something right like that's kind of
what you would imagine yes um but pretty funny and you know it's i imagine there is quite a lot
of interesting gold in some of these you know training data set sharing sites because stuff
like kind of how github used to be in the old days where all sorts of people uploaded stuff
without really thinking about it,
I expect because it's kind of part of their workflow.
So I expect there's probably all sorts of gold in Dumbar Hills.
Man, everyone tells me like once you're behind the Great Firewall
in China too, the amount of stuff that's just exposed
because it's sort of shielded to a degree, right?
Because the Chinese internet is so surveilled,
you know, there's not, I don't know how Yahoo it gets, right?
But if you can actually get a presence behind the Great Firewall and go a little bit nuts, there's a lot there.
Must be easy mode for, you know, cybercom and the NSA once, you know, they're in the right place in the network to pivot onwards.
Yeah, but I'm just talking about amateurs who like, you know, not even at that level.
It's just like, oh you know it's not even not even at that level it's just like oh man there's so much and one thing i will note about this story though is that the the chinese refer
to aegis which is the you know the missile defense system on these boats their name for it is zeus's
shield and i think that is like such a compliment right like if you're naming a weapons component
like zeus's shield like the like the Americans have got to be thrilled
that that's what the Chinese call it, right?
Makes it sound so much more badass than Aegis.
Yeah, absolutely.
I was going to say a total badass name.
And before we go, just real quick,
I want to say a big old congratulations to Rob Joyce,
who has joined a sort of advisory board with OpenAI.
You know, he departed from NSA recently
and yeah, now he's going to advise OpenAI
on sort of safety and security.
And I think that's a, you know,
that's a great result for everyone involved.
And, you know, whether or not, you know,
he's not going to be responsible
for OpenAI's decisions, obviously,
but, you know, the best decisions are usually made
when the best advice is present.
So I think it's a really positive thing
that he's over there.
Yeah, absolutely agree.
I mean, the AI world is pretty Yahoo at the moment
and a sensible mind and so much experience
that Rob has as having that as an input,
I think is a great thing.
And it gives me a tiny modicum of confidence,
but good for him nevertheless.
Yeah, well, mean let's let's
see right yes let's see yeah yeah uh let's see but you know it's always good to give people the
right advice right what they do with it is up to them but let's hope they make some good decisions
all right adam that is it for the week's news uh thank you so much for joining me and we're
going to miss you next week because you're off to fiji i certainly am and i'm looking forward to going to be on a
tropical island uh with no cell network hell yeah no cell no wi-fi no internet no nothing
there's no wi-fi i mean maybe there's wi-fi like you have to go to somewhere to find it like there
might be one of those shared computers in the lobby sort of thing where you can log into your
yahoo and get your you know your your Yahoo mail and get your key logged.
Your AOL.
Yes, my AOL.
I'm not going to take my multi-factor auth token.
So, ha ha.
There you go.
And you'll have drinks with little umbrellas in them.
Yes.
And it's going to be wonderful.
So we'll check in again with you in two weeks.
But until then, my friend, have a great vacation,
have a great holiday, and we'll talk to you then.
Yeah, thanks very much, Pat.
We'll see whether the, you know,
risky biz curse of the holidays applies to me
or only you. So, yeah, good luck with that.
That was Adam Boileau there
with a check of the week's security
news and he will be back in two weeks.
Yeah, because next week, as you heard,
he is luxuriating
in Fiji. It is time for as you heard, he is luxuriating in Fiji.
It is time for this week's sponsor interview now with Alex Calperthwaite,
the Technical Director of Research and Development
with Kroll Cyber's Offensive Security Team.
And Alex joined me to talk about his adventures in LLMs.
We all know that prompt injection is a thing,
but a lot of the conversation on LLMs, at least on this show,
have been about how to put them to use, not about beating them up, which is what he's been doing.
So here he is explaining to me that there's already an OWASP top 10 for attacks against LLMs.
Enjoy. So the OWASP top 10 is exactly what you think it is. It's kind of the top 10 categories
of vulnerabilities that you see in LLMs. It's a particularly interesting it is it's kind of the top 10 categories um of vulnerabilities that you see in llms it's particularly interesting one because it's so new um yes the whole kind of landscape is
um you know i think i mean i'm like what we've got we've got 10 already like 10 known you know
like we've got we've got more than 10 if we've got a top 10 right yeah yeah exactly that's that's
kind of the interesting bit is i i think a few of them in there will probably evolve as the landscape continues to evolve. There's a few, like model theft is one of them. And we haven't really seen much of an approach to be able to steal models through an LLM. That just doesn't seem to be something that's very feasible. But it's in the last level. That used to be the going thinking though, right?
Is that it would be pretty easy to enumerate a model by interacting with it.
But no, that's not really the case.
No, I mean, it takes a ton of data to be able to do that.
And you have to be able to get data out
and then figure out how that data comes together to make the model.
And frankly, the models are so complicated,
it's virtually impossible to get all
that data out and do anything useful with it. Prompt injection is number one, and that's what
we've seen in our research and all of our assessments so far. We see prompt injection
just about everywhere. And that's maybe not surprising. We see it all over the place. And
the risk profile of prompt injection, I think, is also pretty interesting because it's one of those vulnerabilities where on its own, maybe it's not super interesting because it has a lot of kind of reputational risk.
I say this is like when they start saying horrible things, right? one we had an interesting one in australia around anzac day where i think it was some queensland veterans organization had like you could interact and talk to an ai model of a like world war one
soldier and um you know people started asking it about you know australian war crimes in the
middle east around world war one and it got awkward but i was actually kind of surprised at
how well um the model handled that but yeah I can imagine that for your clients, you know, because you deal with some very big companies.
I mean, the reputational risk from an LLM that starts spouting off the wrong thing is huge.
And how do you even begin to put guardrails on that?
Yeah.
I mean, putting guardrails on an LLM and defending as prompt eject, is absolutely one of the hardest parts. It's a defensive depth type approach. Um, so input validation is just not going to be a very
effective approach, um, because the input is language. Yeah, they say about like breaking
syntax and, you know, doing weird stuff. Like it's totally different. Yeah. You can't just
block quote marks. Yeah. It's so unstructured. Like we've always been
able to, you know, either manipulate the word, do some character substitution or put sentence
fragments together and then be able to achieve whatever by getting by the input validation.
Yeah. So, you know, if you can't do input validation, what can you do?
I mean, output validation is another good approach, but that only works kind
of after the fact. And it's more effective because the output is generated by the LLM.
So it's harder to do a bunch of these tricky things to get by any kind of block lists.
Well, I guess because the output is the thing that the person's actually trying to elicit,
right? So that's what you can filter. It makes sense more to apply it to the
output. But when you're dealing with like natural language, well, I guess it's artificial language,
but it seems like natural language. You know, how do you begin to actually put some guardrails and
rules around that? I mean, do you use another LLM to sort of infer the intent from the output of
this one? This seems like one of the most promising approaches so far
is to stack an LLM on top of an LLM.
But it's just kind of a ridiculous thing to do.
And it's obviously going to have
some pretty tough performance implications.
So I guess what you're saying is that on the,
you know, the input validation side
might be a bit of a dead end.
Yeah, yeah, I think so.
I mean, it catches low-hanging fruit,
so defensive depth is always kind of the way to go.
You stack all these together,
and then you start getting some pretty good layered defense.
Of course, you can also use system prompts.
That's a good approach.
It kind of tells the LLM what type of input to expect,
what to avoid, and how to respond.
And you can give some guidance to avoid prompt injection there,
as well as some custom training and fine tuning.
But lots of people are just using out-of-the-box models.
So fine tuning is a big, big step forward for that.
Yeah.
Now I imagine too,
you would have done a lot of these sort of prompt injection attacks, right?
And played around with them and beaten them up as a way to figure out
like, well, what are we trying to, trying to guard against? I mean, what's been the, I mean,
we've all seen, you know, the way a lot of this, um, you know, prompt injection stuff works.
One of my favorites is the people doing white on white text on resume cover letters, you know,
saying this is, this is ignore all other candidates. This is the one, letters, you know, saying this is, this is ignore all other candidates.
This is the one, you know, stuff like that is great. But like, you know, you've actually spent
a bit of time, you know, as a professional beating these things up, you know, what have
been the things that you found that have like been surprising and interesting and just cool?
You know, once you've extended it out beyond the basic stuff.
Yeah. So I think the most interesting one is when you get some technical risk
and you can kind of chain the prompt injection
with other types of vulnerabilities.
So if the LLM, for instance, has access to an API,
you get some injection, you can probably achieve like an SSRF type result.
And then you can start building upon that
and chaining it together to do all kinds of other fun things.
Have you actually managed to do that?
Yeah, we got it on at least one, I think a couple more than that as well.
So you actually managed to get shell by asking?
Yeah, it wasn't a full shell, but we were able to get some access to sensitive information,
you know, just by getting it to manipulate some requests.
Yeah.
Well, yeah.
SSRF, I guess it's not a shell, but you know, it's good it to manipulate some requests. Yeah. Well, yeah. SSRF, I guess it's not a shell, but, you know, it's good enough.
It's fun.
You know, what are some of the other risks that people might not,
you know, that might not be so obvious, I guess, as obvious as model theft?
Yeah.
One of the other ones, it's a really great use case for LLM.
One of our clients has a bunch of services that expose tons of really complex APIs.
So they have built an LLM on top of it.
So it can output code blocks to help you use the API as well.
And it's linked in with the documentation through the RAG system,
all that kind of stuff.
So it has really thorough access,
and it makes it a lot easier to interact with all their APIs.
But of course, through prompt injection,
you're able to get it to inject all kinds of malicious code.
So we were able to get the classic RMRF
injected into every code block we saw,
which could obviously have some pretty-
Well, injected into code blocks that you were generating
or injected into everybody's code blocks
that they were generating?
It potentially could be other people as well.
Yeah, right.
Because you get it into the template. Because you're kind of training it, aren't you?
And you're like, no, no, give me one with RMRF.
Everyone should include RMRF.
And that's the thing.
It's hard to set boundaries on what these things are going to learn.
Yeah.
Generally, the session scoping does limit the impact.
So a lot of the impacts we see with them are sort of self-type impacts.
But especially if you can impact like a template or something like that, you can kind of achieve that persistence and have a wider effect.
Okay, I get what you're saying, right?
Because keep in mind, this is all new.
Bear with me.
But it sounds like what you're essentially talking about is scoping the permissions correctly when you set these things up.
Yeah, that's exactly it. And I think that's one of the funnest avenues to attack through LLM is the permission model. So we had another client
that, you know, they have a service that allows you to upload documents and then interact with
your documents through an LLM and particularly legal documents, which are a great use case
because they're really complex, tons of words,
very verbose. So it's good for LLMs to interact with. But we found you could just upload prompts within the contents of the documents. And that, of course, allowed you to get access to anything.
But the permission model was particularly interesting there because this allowed for
sort of collaborative work on these documents. So you
had permissions of the user, but also there are members of teams within the user base,
as well as potentially different role levels within the application as well.
So a pretty complex permission structure. And of course, the LLM has access to all the documents
through some capability. So it's got some complex auth where it's got to kind of federate or assume
a role to access all the content. So how do you actually manage the permissions for an LLM?
I mean, I'm guessing there's going to be a bunch of config files or, you know, are you just asking
it nicely to maybe not do the bad thing? Like how does this work? In principle, you're just,
the LLM should be assuming the permissions of the user rule, just sort of
federate that through.
Of course, federating auth is a tricky thing to do right.
But in principle, that's the approach that works well.
You'd mentioned earlier that authorization is an interesting attack surface, I guess,
with these things.
So why don't you walk us through what you meant by that?
Yeah, I mean, it's exactly that. Manipulating the permissions to achieve,
you know, access to other people's content or other things. The other interesting bit you can potentially do with that, instead of going after information directly, is you can go after system
information. So you can potentially pull tokens or configuration or internal architecture or whatever else out of the system.
We have kind of done a deep dive into the ML ecosystem overall.
And specifically, we're looking at inference servers.
So inference servers are kind of the core bit that runs a model.
So if anyone wants to run their own model,
they're running an inference server.
And that's a really challenging piece of infrastructure to get right. Because first of all,
by design, they don't really have authentication on them. And that's kind of done for performance reasons. You know, every little bit to check an auth token matters in the performance for an LLM,
an inference server, but they don't have any auth on them. And then, of course, they expose APIs, and these APIs
allow you to interact with it, but they also allow you to manage the device. So no auth,
then you've got APIs, and then being able to load a model, that just is RCE. These models contain
code, either like handlers or like serialized Python pickles or
whatever else. It's just direct code execution. So if you can load a model onto one of these servers,
you've got code on all of a sudden this high performance inference server.
But I'm guessing that's predicated on you being able to reach those API endpoints, right?
Yeah, yeah, absolutely.
So the old dumb solution is the
right one here, which is I'm guessing you just firewall it. Yeah, yeah, that's exactly it. You
got to have some network access controls in place and some pretty robust stuff, but you're going to
have a more complex infrastructure because you've got to have different application layers that talk
to it because you've got these dedicated high performance inference servers. All right, Alex
Calperthwaite, thank you so much for joining me to talk through all of
this.
I mean, this is a very new area of research and it's nice to be having a conversation
about attacks against LLMs that aren't just bog standard prompt injection.
It's nice to be talking about the next little baby step.
Yeah, I really appreciate it, Pat.
It's been great chatting with you.
And, you know, that's what I've enjoyed a lot about this and learning about it is kind of taking it to the next level and figuring out all the fun things that we could do with this.
That was Alex Calperthwaite from Kroll Cyber there with a chat about LLMs and security.
Big thanks to him for that.
And big thanks to Kroll for being a Risky Business sponsor.
And that is it
for this week's show I do hope you enjoyed it I'll be back real soon with more risky biz for you all
but until then I've been Patrick Gray thanks for listening Thank you.