Risky Business #716 -- This ain't your grandma's cloud
Episode Date: August 8, 2023

On this week's show Patrick Gray and Adam Boileau discuss the week's security news. They cover:

Tenable gives Microsoft a spray over Azure bug fix delay, quality
Lateral movement fun via Azure Active Directory Cross-Tenant Synchronization
Ransomware targets hospitals, special needs schools
Japan's cybersecurity has some catching up to do
Much, much more

This week's show is brought to you by Corelight. Brian Dye, Corelight's CEO, is this week's sponsor guest.

Links to everything that we discussed are below and you can follow Patrick or Adam on Mastodon if that's your thing.

Show notes

Tenable CEO accuses Microsoft of negligence in addressing security flaw | CyberScoop
Microsoft resolves vulnerability following criticism from Tenable CEO
New Microsoft Azure AD CTS feature can be abused for lateral movement
Hackers force hospital system to take its national computer system offline
Israeli hospital redirects new patients following ransomware attack
Russia-linked cybercriminals target school for children with learning difficulties
Hackers accessed 16 years of Colorado public school student data in June ransomware attack
Marine industry giant Brunswick Corporation lost $85 million in cyberattack, CEO confirms
China hacked Japan's classified defense cyber networks, officials say - The Washington Post
Comrades in Arms? | North Korea Compromises Sanctioned Russian Missile Engineering Company - SentinelOne
Ukraine says it thwarted attempt to breach military tablets
The Mystery of Chernobyl's Post-Invasion Radiation Spikes | WIRED
Radiation Spikes at Chernobyl: A Mystery Few Seem Interested in Solving
U.K. election regulator says hackers had access for over a year but elections still secure
Exclusive: DHS Used Clearview AI Facial Recognition In Thousands Of Child Exploitation Cold Cases
Eight Months Pregnant and Arrested After False Facial Recognition Match - The New York Times
New 'Downfall' Flaw Exposes Valuable Data in Generations of Intel Chips | WIRED
New Inception attack leaks sensitive data from all AMD Zen CPUs
Spyware maker LetMeSpy shuts down after hacker deletes server data | TechCrunch
'Crypto couple' pleads guilty to money laundering, as husband admits to carrying out Bitfinex hack
Google Online Security Blog: Android 14 introduces first-of-its-kind cellular connectivity security features
Risky Biz News: Russian bill will hide the PII data of military, police, and intelligence agents
Transcript
Hey everyone and welcome to Risky Business.
My name's Patrick Gray, and this week's show is brought to you by Corelight. Corelight's
CEO Brian Dye will be along in this week's sponsor interview to talk about the various
merits and drawbacks of the three detection models: the SIEM model, the SOC triad model,
and the XDR model.
Corelight of course maintains Zeek, the open source network
sensor that is used in all three of those scenarios and they also offer their own standalone NDR
product. So that interview is coming up later but first up it's time for a check of the week's
security news with Adam Boileau. And Adam, there's been a big brouhaha over the last week or so between Tenable and Microsoft. Tenable had found some pretty bad bug in the guts of Azure
and had notified Microsoft about this in March.
They didn't patch it or fix it until last week.
And even then, the fix was incomplete.
And Tenable CEO Amit Yoran wrote a big, open, nasty letter to Microsoft saying that they're negligent, and basically saying a lot of true things. And Microsoft then went off and fixed this bug.
Look, it'd probably be helpful to everyone if you started off by explaining to us what this actual
problem was in the first place.
So the researchers from Tenable had been looking into one of the features of Azure Power Apps and a few other bits of Azure that build on that platform, so you can write your own code and have it run inside Azure to do things. And quite often people are doing that to integrate with third-party bits of software and other components. There is a mechanism for supplying your own code to run during this integration process, and this integration code ends up getting run by Microsoft in a context that doesn't require auth.
Like, you have to find where these secret kind of glue functions are running. But if you knew the name, then you'd be able to call midway into this authentication integration and get it to send you the creds it's going to provide to some third-party service. So if you're integrating with Salesforce, you might end up with a token or valid credentials to talk into the organisation's Salesforce. And that essentially had no auth, other than you had to guess the domain name. And there were some mechanisms for figuring out what those are, in that they're only really per tenant, so you have to guess the tenant part, and after that you can enumerate them.
So anyway, net result was
you could steal authentication data
for other things out of Azure functions
with no authentication.
And one of Tenable's examples was
we found creds for a bank and that...
Creds for what though, exactly?
Well, for whatever thing they were integrating with, yeah.
Okay, which, you know, could be other Microsoft stuff, it could be other third-party stuff. We don't really know in that case. But either way: bank thing, not good. Creds for banking, not good.
And then they reported it to Microsoft, and Microsoft went to fix it. And Microsoft obviously took some time, which is one of the aspects. And then the other was that Microsoft's initial fix fixed it for new deployments, but not for existing ones.
And that was one of the big criticisms that Amit had about Microsoft's response: that they did not provide their existing customers with information about the risks that they were exposed to, you know, obviously forever, but Microsoft knew about it for months.
Yeah.
And that's pretty understandable to be concerned about. And then Microsoft turned around and retroactively applied the fixes that they were rolling out to existing customers, basically the day after Amit unleashed on LinkedIn.
Yeah, which is, I mean, he's done us all a public service here, I think.
So good job, Amit Yoran.
I guess, though, this is typical sort of Microsoft stuff, right?
And even if they had disclosed the bug and issued a warning
and said people need to go and, you know, do this pointy clicky thing,
you know, we know now the internet is old enough,
cloud services are old enough that we know
that people
wouldn't act on that information anyway. The only way to fix this is going to be for Microsoft to actually make a change on its end, to roll out this fix and risk breaking stuff. And that's the thing that they didn't want to do. They did not want to risk breaking things for their customers. I mean, that's what I presume is behind this half fix, this initial half fix, right?
I would imagine so, yes.
It kind of makes sense and, you know,
you don't want to break people's production stuff for a bank,
but, you know, the fact that we don't have a way
to handle cloud vulnerabilities well
means that the cloud operator kind of has to take more responsibility
and more risk in breaking things
than we would have in the old model.
And one of the things that Amit said in his post
was that Microsoft didn't issue a CVE for this bug,
because it's kind of in their code.
It's not for outsiders to work with;
we don't need to coordinate or name it.
Well, I mean, you know, we've had this conversation before
and CVEs, I mean, they're issue trackers, right?
Like you don't need an issue tracker
for something that's going to be resolved on Microsoft's end.
Exactly.
And so like all of the sort of shared knowledge
we've built up about how to communicate security issues
and even like the whole model of full disclosure
and how we deal with bugs, it kind of doesn't work.
I see what you mean,
which is we've tied disclosures to CVEs
and that doesn't work anymore.
It doesn't work anymore.
And even just the way people think about a bug
and think about security issues,
it's so tied in that older,
you run software made by a software manufacturer on premise
and you are responsible for maintaining it.
That whole way of thinking just doesn't apply anymore.
And we have to learn what the new model looks like
and what our expectations of a vendor like Amazon or Microsoft or Google
will be with this kind of infrastructure.
Are we willing to trade it?
But I mean, that was the whole selling point of this sort of infrastructure
is you don't need to maintain it, you know?
Well, yes, right.
That was predicated on the idea that they would actually maintain it for you,
not be nervous Nellies about pushing something that might, you know,
in some exotic configuration
break things for a couple of customers, so they're going to leave you vulnerable. Like, that's not what we signed up for here.
No, it's not. And it's just funny, you know, I spent so much of my early career in very conservative environments with very strict change control, where you couldn't just YOLO stuff into prod. And now we've moved to the cloud for security and availability reasons, and they get to YOLO stuff into prod. And all the customers of Microsoft don't get to manage those choices themselves. And as a result, we either get over-conservativeness, or your entire business can be dead in the water because Microsoft changed one thing or decided to A/B test you or whatever.
So this is an interesting thing. I wonder if this cloud stuff has got complicated enough
that we're at that point where patching these things
is going to start breaking stuff.
Now, in this example, we haven't seen reports
of Microsoft fixing this causing problems for its customers.
I mean, it may have happened.
We just don't know about it yet, right?
But, you know, you look at this next story we've got here
from Bleeping Computer, which is some research into Microsoft Azure AD.
It's CTS feature, which is a cross-tenant synchronization thing.
And, you know, being able to use this for lateral movement once you've obtained access to a, you know, Azure AD instance.
And you just start looking at some of the issues popping up in stuff like, you know, Amazon and Azure, less so in GCP because no one uses it. You start looking at some of these issues popping up, and you start wondering how much of this stuff is sort of semi-structural, right? Like,
you look at all of the problems that came with early versions of PowerShell, for example,
right, where everybody started deploying stuff with PowerShell and there were all of these fundamental security problems with it that when Microsoft started
making changes did invalidate a lot of the prior work that people had been doing around PowerShell.
So I guess my question is, how close are we to the point where we're going to see some published research that Microsoft needs to act on to fix, but if they fix it, they're going to break a whole bunch of stuff, and it just turns into this completely awful dilemma? I just feel like we're headed to that sort of situation, and it's not a good feeling.
Yeah, I think you're right. There will come a point where someone finds some fundamental flaw, and they are just screwed. They have to make, you know, one bad decision or a different bad decision. Yeah, we're absolutely heading into that world. And I think there are some examples, I suppose, of similar sorts of bugs. I'm thinking of the .NET padding oracle bug that let you disclose .NET keys out of the web apps. And, I mean, I can't believe it lasted that long before we spotted it. But
also, that's the kind of thing where, if that was in the cloud, if all of those apps were in the cloud and they had to deploy the fix, there would be a whole bunch of breakage. And doing the trade-off of how you would patch a thing like that, doing change control in the context of one company, is kind of difficult but manageable. Doing it for thousands of organisations at once, all with different sets of requirements and expectations about availability and when a reasonable outage window is, it's just very, very complicated. And this particular one you were talking about from Bleeping Computer, the synchronization of users between tenants,
the cross-tenant synchronization feature,
it just, it really reminds me
of early Active Directory features
for doing like synchronization,
like different domains inside individual forests
and trusts and so on.
And the problems that this feature is trying to solve are the same business problems that a bunch of the trad Active Directory features were there to solve. So all the same problems are there, and they're going to re-implement the solutions to them in the cloudy way. And then we're going to end up with, you know, all sorts of novel and interesting and new ways to attack them. And then, yeah, one day we'll get one where fixing it requires something really bad.
Yeah.
Yeah. I mean, I guess what this all boils down to is: today's cloud services, they ain't your grandma's cloud.
No. Grandma's cloud was: you can throw a virtual machine into a box in a data center and not worry about the hardware. And that was fantastic, that was great, it was all singing, all dancing. And then things just started getting a little bit more complicated, initially with the stuff that would allow you to manage that process, the stuff around managing those things. But now it's all of these complicated services, right?
Yeah, as it has moved further and further up the stack and away from being just virtual compute machines.
Well, exactly. And now these cloud services, and we've said it a million times on the show, they're like the new operating system, right? But the problem is everybody's got their own way of using it, and you're eventually going to bump into some fundamental problems here that require big fixes. And, you know, I don't necessarily trust Microsoft to make the right decisions. I don't even know what the right decision is.
Well, exactly. We're in pretty uncharted territory.
And if you were building a software platform
to run every business on the entire planet,
that's quite a complicated problem.
And that's where we have headed with Azure
and Microsoft's cloud services.
And at some point, some really important tenant,
like the US government, is going to turn around and say,
like, you need to do more than you are doing, right?
And obviously the letters from Ron Wyden
that we covered last week are a good example of that.
Like, there's just a lot of complexity here.
And I don't know that Microsoft's,
maybe they've bitten off more than they can chew.
Maybe, I don't know.
I don't know if anyone can chew it.
I think that's more the point, right?
We're in a post-chewing world.
Yeah, I'm just starting to feel a little bit tense
with some of this, you know,
especially around the Azure stuff
because it looks like the way, yeah, I don't know.
This is what happens when you build a bunch of really powerful,
easy to develop for cloud services
and you haven't thought about stuff.
Yes, yeah.
And I mean, all of the ways that we were used to thinking
about compute in terms of
servers and networks in the older,
you know,
infrastructure cloud way still kind of worked.
Like you could still do firewall and you could still,
you know,
control individual services and configure them.
And,
you know,
our old,
all of our hard learned lessons kind of still applied.
And now, in the post-infrastructure, whatever-Microsoft-is-now cloud,
it's just crazy town, and it's all moving so quick,
and yeah, it's really hard.
Yeah, yeah, it is.
Now, look, some big ransomware news over the last week or so.
There was a medical company in the United States,
Prospect Medical Holdings.
They had their national computer system taken offline, which affected patient care in something
like, what was it, four US states, four or five US states, you know, to the point where
they're turning patients away.
We also saw a similar ransomware attack affecting a hospital in Israel against a medical center there, and they
were turning away patients. So, you know, it just seems like, you know, ransomware crews are still
attacking hospitals with impunity. Now, I think it's great that the FBI did a long-term infiltration
and shutdown against Hive ransomware and stuff. But when you see, you know, stuff like this,
you wonder if there isn't space for organizations like Cyber Command, the more, you know, military and intelligence end of things, to go in and do somewhat more rapid, you know, baseball batting of these people, right?
Disruption, perhaps.
Yes, yes, that's the word, sorry.
The less meaty word.
Yeah, it is really hard when you see organisations like hospitals and schools. And, you know, we've got some other horrible examples this week as well.
Yeah, there's a school for children with learning disabilities, I think that one is in England, and that's been attacked by LockBit. And you just think, you know, airstrikes, let's go. It's just unforgivable. It's a school for disabled kids.
I mean, come on.
Really?
Really?
And yeah, it would be nice to see some hounds unleashed,
as we always love.
But you see what I mean, right?
Like, I think it is great that FBI is doing these sort of disruption activities,
but they're focused on the long game.
And I think that's a really valuable thing.
Yes.
But there's also that tension there,
which is when you see stuff like this, you know, I feel like a rapid response involving wipers and doxing is probably warranted and probably going to be more useful in this case. And I understand that there's a tension there, right? Because if you've got some, you know, Cyber Command cyber hounds going around and making a lot of noise and a big mess, it might disrupt some of these longer-term activities that the FBI is involved with.
So I understand that there's tension there,
but I think that anyone who claims to know
what the correct balance is here
between sort of speed and thoroughness,
I don't think anyone knows.
I think everyone can have a theory,
but I don't think anyone knows.
Yeah, we're still all trying to figure it out,
but I don't know.
It just would be nice to see
the old thrown-off-a-bridge in St. Petersburg answer to some of these problems.
Oh, man.
You know, it really is that case, isn't it, where you read some of this stuff and your blood just boils and you're like, look, if someone were to throw them off a bridge, like, that would be a result.
Yes, exactly.
Certainly.
And, of course, there was the Colorado Public Schools student data stuff.
Like, anyone who went to a Colorado Public School
between 2004 and 2020 had their data breached.
It affected staff as well.
And that one's made big news in the United States.
And all of these pieces here are from the record
and we'll link through to them, of course.
They do excellent coverage of ransomware incidents.
John Greig also has a report
on a company called Brunswick Corporation
that they've calculated their losses in a ransomware attack
at $85 million because their operations were impacted.
So, you know, it's a good week to sort of just survey
what's been happening in ransomware news
and to just go, yeah, okay, still a problem.
Yeah, I mean, the last few weeks we've been mostly going, oh look, MOVEit's still happening, MOVEit's still happening, and otherwise haven't really covered a whole bunch of ransomware stuff. And I wouldn't want listeners to think that it had tapered off, because it is still horrible, and terrible things are happening to all sorts of people who don't deserve it, out there on the internet.
Now, Ellen Nakashima at the Washington Post has a monster write-up on some problems Japan has been having
with intrusions emanating from China.
And, I mean, it's a very long piece.
But the gist seems to be that the penetrations are pretty bad,
that they were so bad that Paul Nakasone
actually jumped on a plane to Japan,
and that Japan is a long way behind
in terms of its cyber defences,
which would vibe with what I hear just generally, right?
And also that they're maybe not catching up as quick as they need to.
I mean, how did I go for a dot point summary there of the piece?
Yeah, I think so.
Yeah, I mean, the story raises, you know,
the US intelligence and military having concerns about sharing data with Japan because of the amount of penetration in their environments.
And you get some hints of, you know, some cultural aspects to how you deal with widespread intrusions, and how you deal with being honest with your partners when discovering bad news, and so on.
That's all kind of like subtext, which, you know, kind of makes sense, I guess.
But, you know, the change in the security environment in that part of the world,
I think means that Japan has started taking things a bit more seriously.
They've got a lot of work to do, seems to be the main takeaway.
Yeah. I mean, the origins of Australia's ASIO, you know, the domestic intelligence agency: that was spun up so that partners, the Brits and the Americans, could actually share sensitive stuff with us, at a time when we had a bit of a problem with people from certain other countries, you know, sniffing around for information that might be shared, right?
So, you know, this isn't a new dilemma in that you want your partners
to be able to protect the secrets that you share with them.
But I guess just with everything that's happening with China and Taiwan,
it's becoming more important, I guess.
Yeah, clearly Japan does need to work on upping its game
and being a bit more – valuing that.
And we have seen the Japanese are starting to spend a bunch of money
on domestic cybersecurity
and supporting the sorts of things they have to do.
We've talked a bunch about the intrusions
into some, like, contractors to the Japanese government,
and the cloud services provided by the Japanese government.
Right, they do have to take it seriously,
and it sounds like they are beginning to,
but as you say, a few years behind, perhaps.
Yeah.
Now, Russia and North Korea are fighting, and it's great.
Well, not fighting, but a North Korean APT crew has been going
after a sanctioned Russian missile engineering company.
So I guess this is a case of with friends like North Korea
who needs enemies.
Exactly right.
And not just any rocket manufacturer. This is NPO Mash, which is one of Russia's biggest manufacturers, making all sorts of things. And, you know, if the North Koreans were up in there helping themselves to Russian missile tech, I'm sure they would get a bit of a leg up on their work. But there's probably some awkward conversations to be had now between the Russians and the North Koreans about staying out of their stuff.
When you've been kicked out of all of the US defence industrial base networks, you have to go and hack the Russians. Unfortunate. Although, you know, Russian scientific research is up there, right?
Yeah, and their rocket and missile tech has a very long lineage, good work and all those kinds of things. So it's a sensible place to go. But it must feel bad to be thrown out of Lockheed and end up in Mash.
Now look, speaking of all things Russia-Ukraine,
there's a really interesting report from Ukraine's security service, the SBU, and Daryna Antoniuk has written that up for The Record. She is based
in Ukraine. So yeah, Sandworm have been going after the battlefield management tablets that
are used by the Ukrainians. That system, I mean, this is an assumption on our part.
The system is called Delta. It's a Ukrainian developed Android-based battle management system.
And what looks like has happened is the Russians have recovered some of these tablets from the battlefield and have tried to use them to gain entry, move laterally, and drop malware with Tor C2. Look, it's the sort of stuff we expect from Sandworm, but it looks like they're trying to do some battlefield-related intelligence gathering using some fairly sophisticated methods.
Yes, and it was an interesting read. I went and read the Ukraine SBU's release that they put out about it. And yeah, these are Android devices, and once they've got them, they've got access to the VPNs and the networks that they have connectivity to. And then they've got some tools for doing scanning, I mean nmap, I guess, and a bunch of other things for moving laterally around those networks. They seem to have been getting into the Android tablets in the kind of not-really-exploit way, through open debug services on the devices, and then pivoting through those, as you say, dropping Tor and Dropbear for SSH back into the environment.
They've also got some tooling in there specifically to go and find Starlink terminals
and query them for information, which I'm going to assume includes their locations and things like that.
So that's an interesting tidbit.
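The specific tooling isn't public, but the "open debug service" entry point described above is easy to picture: Android's ADB daemon, when enabled over TCP, listens on port 5555 and accepts connections without any exploit. Here's a minimal sketch of sweeping a network for exposed devices; the address range is a documentation range, and this is a guess at the general approach, not the SBU's description of the actual tooling.

```python
# Hedged sketch: locate Android devices exposing ADB over TCP, the kind of
# open debug service described above. Network range and approach are illustrative.
import socket

ADB_PORT = 5555  # default port for ADB over TCP ("adb tcpip 5555")


def adb_port_open(host: str, port: int = ADB_PORT, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on the ADB port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def sweep(prefix: str, host_ids: range) -> list[str]:
    """Connect-scan a range of addresses, e.g. sweep('192.0.2.', range(1, 255))."""
    return [f"{prefix}{i}" for i in host_ids if adb_port_open(f"{prefix}{i}")]
```

Anything a scan like this turns up is then reachable with stock tooling (`adb connect <host>:5555`, then `adb shell`), which matches the "not really exploit" characterisation; the pivoting, Tor C2 and Dropbear pieces would come after that initial foothold.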
The SBU does seem to have nipped this in the bud, but it looked like quite a lot of work went into building the tooling, getting everything into a state that they could try and use it in the wild.
But, I mean, nipping this in the bud, I've got to say, is extremely impressive.
I mean, did that also occur to you that like, wow, okay,
because it looks like this is Sandworm, you know,
a crew that's very well known for being very good at this sort of stuff,
doing what looks to be some fairly sophisticated things, and the SBU managed to squash it. And I just think, geez, how many orgs could actually squash something like this?
Yeah, I mean, it's a great catch. And I guess they must have been kind of expecting it, right? If you've got devices that are in the field and being taken, you know they're going to be looking. So, yeah, I'm sure one day in the future we will hear more about these stories and understand exactly how they went down.
But, yeah, it certainly is pretty impressive.
Yeah, I mean, again, is this cyber war?
Probably not.
This is intelligence gathering,
but it is very much the sort of, you know,
intelligence gathering that will have direct impacts
on the battlefield, right?
And that's, you know, that makes it quite interesting, I think. So probably one for Tom to study a little
bit more in coming months. Now we've got time to put on our tinfoil hats, Adam, because we've got
a bit of a theory bubbling up at the moment. Kim Zetter is leading the charge on this one. During Russia's occupation of the Chernobyl power plant site
in Ukraine, we saw radiation sensors around that site spike. But now Kim is reporting that
the researcher Ruben Santamarta has looked at the data from these sensors and concluded that it has been manipulated.
Walk us through what exactly Santamarta is alleging here,
and then let's talk about it a little more.
Yes, so Ruben hasn't done his talk yet at Black Hat, but he's about to, and he's got a big paper to drop with all of the details. But anyway, there are a bunch of sensors, and there were radiation spikes reported by those sensors. The data ends up going back into some Ukrainian agency that collects it and then publishes it, where it's consumed by the International Atomic Energy Agency and anyone else interested in monitoring
radiation sites around the world. Now, some of this data has spikes in it and some doesn't, and the patterns don't make a whole bunch of sense geographically. Like, we've got one sensor that's between two others; it doesn't report a spike, and the two either side of it do. Seems a little bit weird. And the early explanation, when these spikes were noticed, was: well, it's Russian tanks churning up the soil and leaving radioisotopes in the air that are being read by the sensors. And that passed the initial sniff test, but I'm not an expert. And now Ruben says that some of these readings, you know, the regularity of them or the geographical dispersion of them, look a bit sus. And his theory seems to be that they have been hacked at some point: either in the actual collection devices, across the networks that collect them, or in the Ukrainian systems that aggregate that data and then publish it to the internet, which I think his suggestion is the most likely place to do it. But the questions of why, and to what end someone might want to manipulate those, and if so, why manipulate them in this specific way, those seem a bit more unanswered to me.
Well, I mean, you've said it best, I think, when we discussed this before we started recording, which is if you were in a position to change this data, why would you change it in such a strange way?
Yes, yeah.
Like, surely, if you wanted to cause some particular outcome, you know, fear amongst Ukrainians, or blame the Russians for something, or in the other direction maybe, it seems you would craft more believable-looking data. And I don't know who benefits from just weird.
Yeah, and I mean, weird data tends to have weird explanations that you're not necessarily going to just think of off the top of your head. It could be all sorts of things. So, yeah, I mean, he's going to drop a 100-page report on this, right? And until we sort of have a look at all of it, and even then, though, I don't know,
the whole thing just seems a bit odd, doesn't it?
Yeah, I think Kim Zetter brings up a suggestion they had seen,
that maybe Russian electronic warfare things,
broadcasting signals or whatever,
were interfering with the collection devices,
but we just don't know.
Maybe one of them fell over and was lying face-down in the dirt. I don't know.
See, this is the thing. I don't know, and we're not experts. We don't understand much
about measuring ambient radiation. We've got no idea.
But it's just an odd claim, I guess.
It's just an odd claim, so I guess we'll just have to
see if anything else sort of unfurls from this, right?
Yeah, it just seems weird,
and sometimes things are just weird for non-malicious reasons.
Yeah, yeah, yeah.
But, you know, you never know.
I mean, there was a lot made out of the fact
that Russia were supposedly, you know,
putting everyone in danger by churning up and disturbing radioactive material around that site.
Digging trenches in the exclusion zone.
Like, that doesn't sound like a great way to go.
No, no.
So, I mean, it was certainly advantageous to Ukraine politically
on the world stage to make a charge against Russia that,
you know, this was extremely reckless.
It was also advantageous, I guess, to the Russians as a big scary thing to scare Ukrainians
with.
So, you know, you can certainly see that.
But would it have been worth it, I guess, is the question.
And, you know, hard to know.
Hard to know, yeah.
Hard to know.
Now, moving on, and we've got two stories.
I mean, this isn't technically cybersecurity, but I do find this a
very fascinating area, right? We've got two stories about facial recognition technology and its use in
the United States that are both fascinating, but for different reasons. The first one is a unit of
the Department of Homeland Security has been using Clearview AI. This is courtesy of Tom Brewster over at Forbes.
It looks like he's got some sources
who've identified the technology that they're using
and it is Clearview.
What they've done is they've fed in historically seized CSAM material,
basically dead leads, to do facial recognition
on individuals captured in these images.
And they've got hits on it,
which has enabled them to arrest some truly awful people
who've been doing disgusting things to infants.
This is terrific.
This is a fantastic use of facial recognition technology
to bring to justice the worst type of offenders that we have.
So that's great.
The second story is from the New York Times
and is about how an eight-month
and is about how a eight-month
pregnant woman in Detroit was arrested for carjacking because she was mistakenly identified
by a facial recognition system as being an offender. And it's an interesting story because
it points out a few things. First of all, in all documented cases in which facial recognition has resulted in false charges being brought against someone in the United States, in every single instance it's been a Black person. So that is something that I think we should probably pay attention to.
Yes.
And second of all, the story does a good job of pointing out the flaws in the processes used by police. So they will feed an image from CCTV or whatever into a system and say,
show me someone who looks like this offender. The system will do that. And then they print off that
offender's mugshot. And I think this woman had a mugshot in the system because of an unpaid license
fee or something from 10 years prior. They'll take that mugshot, they'll throw it in a six-pack
and then throw it in front of a witness and say, do you see anyone who looks like the offender?
Now, obviously, if the computer says that they found someone that looks like the offender, the witness is going to say so as well.
And then bang, you've manufactured a positive identification of a suspect based on nonsense,
which is certainly what seems to have happened in this case. Now, she was detained for 11 hours,
you know, arrested in front of her child or children, I can't remember if it was one or more,
and had charges hanging over her for a month. Now, she's suing, obviously, and good luck to her,
I hope she wins. The reason I find these two stories just great to read back to back is
it shows us that there's room for policy development here where we get to capture
the benefits of this technology, but also prevent the misuse of it. And we've got two awesome examples of both in one week.
What did you make of these stories, Adam?
Yeah, I think the contrast is obviously the most striking thing here, right? I mean, this is a good use of technology and a bad use of the same technology. And trying to come up with a way to use and regulate technology as it develops that captures both of those extremes, it's really difficult.
I don't think it's unachievable, though.
No, no. I mean, I hope it's not, because it feels like it ought to be. There must be some kind of middle ground where we can use it and not make quite such a mess. But this stuff is just kind of scary when you see it used at scale. I mean, being able to compare against every photo that they've got on record, you know, billions or whatever it is, and find people who've done horrible things, that's great. And then at the same time there's the kind of prejudice aspect of it, or the giving witnesses, in this case, a selection of images, where of course one of them is going to look like the person.
Yes.
There must be a way to resolve this in a way that we can still use it, rather than flailing around, coming up with bad regulation, and everyone just getting scared.
Yeah, and that's why I find it interesting, because I think we've definitely got to be aware of the risks.
And I think another big risk with this technology is over-enforcement: when it becomes such a powerful tool that you capture someone on CCTV littering and then automatically send them a fine. You know, that's the problem, is it becomes too tempting to enforce absolutely every single rule in the book, and there's just so much scope for this to be overused.
But then again,
you're taking down dangerous people with this who would otherwise still be free in the community.
You know?
So really, I think what we all need to do is formulate
a nuanced, considered approach to this sort of stuff.
Anyway, I just thought it was very interesting
having those two pop up in one week.
Nuanced, considered approaches are the speciality of this industry,
so, yeah.
Now, look, we've got one last, you know,
major area to discuss this week.
I should have probably paired this with the cloud stuff
earlier in the show,
but there are some new attacks against AMD Zen CPUs. They're called Inception.
We've also got some attacks against Intel-based stuff called Downfall. But look, I think
the TL;DR from reading about all of these speculative execution attacks and things like that,
and, you know, things like SGX not really being that reliable, is that hardware isolation hasn't quite panned out
the way that we'd hoped it would.
I think you and I, we've been talking about attacks against SGX
and similar tech for a long time,
and have always sort of thought,
well, this stuff might get there eventually.
It just hasn't yet, though, has it?
No, it's really complicated.
And the threat here really is that you are sharing a CPU
with someone you don't trust.
And then all of the work we've had to do to make CPUs fast,
once we ran out of megahertz, is kind of coming back to bite us.
Most of these are in performance optimizations,
branch prediction, and all those sorts of things.
And it's hard to bolt that much complexity on and then also provide security guarantees, especially in the case of an architecture like x86 or x64 that's, you know, a bit long in the tooth and has a lot of complexity in it. Trying to make those meet security goals that we didn't really have when we started is hard.
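The hardware attacks discussed here exploit speculative execution and branch prediction, which is hard to show in a few lines, but the underlying principle, that how much work an operation does can depend on secret data, can be sketched with a toy non-constant-time comparison. This is an illustrative stand-in, not the Inception or Downfall technique, and the secret, pad character, and alphabet are all invented for the sketch:

```python
def leaky_compare(secret: str, guess: str) -> tuple[bool, int]:
    """Early-exit comparison: the number of character checks performed
    depends on how much of the guess matches the secret. The op count is
    a stand-in for the timing/cache signals real side channels measure."""
    ops = 0
    for s, g in zip(secret, guess):
        ops += 1
        if s != g:
            return False, ops
    return len(secret) == len(guess), ops

def recover(secret: str, alphabet: str = "abcdefghijklmnopqrstuvwxyz") -> str:
    """Rebuild the secret one character at a time by picking, at each
    position, the candidate that makes leaky_compare do the most work."""
    known = ""
    for _ in range(len(secret)):
        def work(c: str) -> tuple:
            # Pad with "*" (never in the alphabet) so a correct prefix
            # always runs one comparison longer than a wrong one.
            padded = (known + c).ljust(len(secret), "*")
            ok, ops = leaky_compare(secret, padded)
            return (ops, ok)
        known += max(alphabet, key=work)
    return known

print(recover("zeek"))  # prints: zeek
```

The "attacker" never reads the secret directly; it only observes how much work each guess causes, which is exactly the class of signal these CPU side channels leak through caches and timing.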
And reading both of these bugs, they're both great.
One's from a Google researcher, in the Downfall case,
and there's a bunch of academics from Europe
that worked on the other one.
And it's super clever work,
combining all of the things we've learned so far
and bypassing all of the previous preventions
for these kinds of side channels and leaks. But, you know, in a big-picture look, it's kind of hard to imagine this ever running out, because of the amount of complexity in there. So at this point, I don't think we've really seen it in the wild, but you get the sense that we will eventually.
And once that happens, every single CISO in the world who's worth their salt is going to contact their cloud provider and say, we want our own cores.
Yeah.
Now, what's that going to do for compute demand?
Yes. I mean, the economics of the cloud are already kind of challenging, in a way, when people are not finding it as cheap as they expected, and you get nickel-and-dimed by Amazon every time you so much as do anything. It was already kind of hard to justify, especially as you scale up. And if we go back in and say, okay, now we all demand our own cores, maybe there's scope for a move towards very small, very numerous ARM platforms for cloud stuff.
You know, if the cores are smaller and cheaper, you can dedicate them to customers.
It makes economic sense, perhaps more so than giant, expensive Intel or AMD cores.
But it does feel like, you know, the cloud…
But the whole economics of this is based around shared utilization.
Well, yes, right?
And that's what I'm saying is a risk in the future
is that people will just say,
oh, well, we're not doing shared cores anymore
and then we're all going to run out.
It's sort of like when COVID hit
and Azure started falling over because everyone was on Teams.
You know, like that, but much, much worse.
Yeah, when we were talking earlier, I thought about the move away from mass PHP web hosting, where everybody's in one Apache instance with one PHP interpreter, and all the problems that came with sharing then. We all moved to virtual machines for better isolation. It does feel like maybe now CPU side channels are the new PHP mass hosting, and we're going to have to move to dedicated cores for everybody.
You could even think of shared Linux hosts, right?
Yes.
And then, you know, you privesc your way into everybody else's business.
Yeah, which we certainly did do.
Yes. And this is the same thing. And, you know, you think, oh, well, it'll be right, you never know where you're going to land in a cloud environment. But I mean, if you're sneaky enough to get into a cloud environment, just start grabbing stuff, who knows what you're going to wind up with? You're going to wind up with tokens, with SSH keys, all sorts of stuff, if you are careful enough.
Yes, exactly. And, I mean, the insides of everyone's cloud environment, there's still plenty of interesting stuff to look at, even if you're not finding your exact target. Maybe it's easier to go through the cloud provider first and then back down the other side of the architecture.
Well, people are smart, and I just don't see an end to hardware isolation bypasses like this, because the researchers are just having a field day with the gubbins of Intel and AMD's optimizations over the years.
And it's hard stuff to fix. Like, patching this stuff is hard.
So, as I said, it just feels like hardware isolation ain't working out, and it's probably not something we can pin our hopes on, in the medium term at least.
No. And, you know, we're just learning so much about security as we go along, and it's hard when it's in the physical infrastructure, when you either have to throw the CPUs out or patch them in ways that cause significant performance degradation,
which removes the benefit of sharing it in the first place.
So, yeah, these are hard problems.
Yeah.
InfoSec, building the plane as we're flying it since forever, basically.
As we're crashing it.
Real quick, LetMeSpy, which is that Poland-based spyware maker.
They got owned in June.
All of their customers' email addresses got leaked,
as well as the data that they'd stolen from their victims.
They've shut down.
Good riddance.
So that's nice.
Now, you remember a while ago,
there was that married couple in New York,
Ilya Lichtenstein and his wife, Heather Morgan.
They got arrested because they'd been laundering
the stolen Bitfinex Bitcoin. And we wondered at the time, well, where did they get that Bitcoin?
Well, Lichtenstein has pleaded guilty to stealing that Bitcoin, which was 95,000 Bitcoin in their
control at time of seizure. I think they stole 120,000. Yeah. So that was worth 71 million US
dollars at the time of the heist and is worth, I think 4.5 billion now,
something like that.
So yeah, they ran an elaborate money laundering scheme,
but they have been caught
and now he has confessed to the original crime.
So sentencing coming up.
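As a back-of-envelope check, the figures quoted above hang together. This sketch uses only the round numbers mentioned in the story, so the implied prices are approximations, not official valuations:

```python
stolen_btc = 120_000        # roughly what was taken in the 2016 Bitfinex hack
seized_btc = 95_000         # roughly what was in their control at seizure
value_at_theft_usd = 71e6   # value quoted at the time of the heist
value_now_usd = 4.5e9       # approximate current value quoted in the show

# Implied per-coin prices at each point in time
price_at_theft = value_at_theft_usd / stolen_btc
price_now = value_now_usd / seized_btc

print(f"~${price_at_theft:,.0f} per BTC at the time of the theft")
print(f"~${price_now:,.0f} per BTC at the valuation quoted now")
print(f"roughly {price_now / price_at_theft:,.0f}x appreciation")
```

In other words, it was the appreciation, not the laundering, that turned a $71 million theft into a multi-billion-dollar seizure.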
And finally, Adam,
Android 14 has a couple of interesting features coming out,
mostly around blocking on the cell connection side.
It will not allow you to connect to 2G anymore.
And it will also prevent you from connecting to cell towers that use null ciphers.
And this is an anti-interception measure, and it's good.
Yeah, this is going to give people who look after a big fleet of
Android phones the ability to turn this off and make them less vulnerable to Stingrays and
IMSI catchers and those kinds of things that involve impersonating cell infrastructure.
And most people are surprised when they discover that even their modern iPhones will fall all the
way back down to 2G GSM, given the right radio environment, and it's difficult to turn off.
So having these knobs available is certainly very handy, especially if you're going to Vegas or somewhere where you think there are going to be jerks hanging around with fake cell infrastructure. It might be handy for that.
You know, it's funny, because half the people are like, oh, bring a burner, it's DEF CON, and the other people are like, don't be ridiculous, it's modern iOS. Nobody is going to burn a bug chain on that.
I think it was like 2017, I was at Defcon
and my iPhone, I'm holding it in my hand,
went black, rebooted.
And the funny thing is that's the only time
that particular iPhone had ever done that.
Yep.
Which still made me think,
well, you probably didn't just burn an exploit,
but someone is doing something funky that's causing some sort of crash or something.
And also, like, you put 5,000 people in a room with that many cell phones.
I mean, even just the regular cell infrastructure can struggle a little bit with that.
Although Vegas is better equipped than most for big crowds.
But, yeah, still, I think when I went to Vegas last,
I got one of those carrier settings update pushes.
I'm like, hell to the no.
Hell no.
Hell no.
It's a witch.
It's a witch.
Yes.
Basically, yes.
Also, I'm going to link through in this week's show notes to a piece by Catalin in his Risky Business News newsletter.
He seems to be one of the only people in infosec who noticed a report from TASS that says the Russian government is pushing a new law which will enable it to delete the personal data of military, police, and intelligence personnel from official records. Presumably that's partly about stopping the OSINT identification of people as, you know, not actually being fond of certain types of church architecture and actually doing Novichok poisonings, right? Because a lot of that OSINT is enabled by leaked data sets. And it's also probably a bulwark against Western intelligence collection. So they're actually passing this law saying, oh, well, we're going to withhold this data. But Tom Uren's doing some work on this as well.
And I think the feeling is that it's only going to get them so far
because it was actually the absence of people's data
from the OPM data set that unmasked them
as CIA officers under State Department cover.
So this is a really interesting thing
that I don't think many other people are reporting on.
So Catalin's report is up.
And Tom's newsletter tomorrow will have further analysis.
And Tom and I,
of course, will be discussing this tomorrow. But that is actually it for this week's news.
It's been a lot of fun, as usual, and we'll do it all again next week.
Thanks very much, Pat. I will talk to you then.
That was CyberCX's Adam Boileau there with a check of the week's security news.
It is time for this week's sponsor interview now with Brian Dye, the chief executive of Corelight.
Corelight maintains the open source Zeek network sensor, which is the industry standard data
source for network security events. They also make a commercial version of Zeek that's more
powerful and customizable than the open source version. And they have a pointy clicky NDR version
of the product as well with a full GUI and all of that good stuff. So Brian joined me this week to talk about the three
main models for detection, the SIEM model, the SOC triad model, and the XDR model. Who are they for?
What are their strengths? What are their weaknesses? Here's Brian Dye. At one end, you've got this idea
of a fully centralized, typically large-scale SOC,
where it's all about give me all the data and all the detections you can.
I want to put it all in one place.
It could be a huge Splunk environment.
It could be a data lake.
But I want everything in one place.
I want all the data you can give me.
At the other end, you've got what Gartner used to call the SOC triad.
And this is where it's a mix of different analysts' and different people's interpretations, where you're actually doing analytics and even alert triage
on a per domain basis, but you're still putting all the data in one place because you're actually
adding cross-domain analytics and you're adding threat hunting. So I kind of think of it as you've
got this fully centralized kind of data lake SOC, you've got the SOC triad, and then you've got XDRs,
the third design pattern.
I mean, some of this stuff, in how you've described it, is actually collapsing, though,
like, as we speak, right? Like, I had an interesting interview with someone from Snowflake,
who came on as a guest of another sponsor, talking about their approach to just building
that, you know, what we all call the data lake, right? And then you can have, you know,
different applications to do stuff to the data once it's there.
And then you've got, you know, companies like Panther,
which are, you know, trying to be, I guess,
a bit more like the next-generation SIEM.
So they're a little bit splunky.
They're a little bit data lakey.
You know, they actually do run on Snowflake.
So, I mean, I get what you're saying,
that there's these three categories,
but it seems like, you know, it could be four categories next year or six or two.
It could. And that's what's so interesting: the analytics architectures themselves
are evolving. But I'll tell you, there's a continuum here of what's actually
happening. And the constants are: more data is always a good thing, and more detections are always a good thing, right? Those are the constants. What's actually varying here is how much essentially cross-domain analytics you can do, how much threat hunting you want to do, and how much security engineering staff you have. If you're running this kind of, call it the data lake type strategy, it's because you've got a big security engineering team that can build all the workflows and handle all the cross-domain
analytics for this massive amount of data you put together. At the other end of the spectrum,
the kind of SOC triad world, it's because you really don't have a security engineering team.
So you're willing to go with kind of these stovepiped kind of domain-specific tools because
that's all the engineering talent you have. And then, you know, you've got this middle ground, this XDR ground that's
emerging where you're getting more out of the box functionality, right? The XDR vendors are doing
more of the cross domain analytics. So you get a little bit of the best of both worlds, right?
Where you get domain specific analytics, because you're triaging those in some specific tool,
but you've got more that's centralized that enables a little bit more out of the box because the folks that are driving that
strategy, they don't have 100 people or more in their security team, which is typically what you
see in those big data lake environments. And that's where this stuff gets interesting,
right? It's because a lot of what's going to work best for an organization is going to depend
on so many different factors,
right? It's not like you could just say large companies need to use this one, small companies
need to use this one, you know, companies with big security staff need to do this, need to do that.
How should organizations be thinking about which approach to take? How should they evaluate these approaches and pick one?
You absolutely hit it.
There's a couple of variables here.
Number one is actually how big your security team is and specifically the security engineering
team.
Because you may have a SOC and you've got a bunch of analysts, but if all those analysts
are being supported by two people in security engineering, there's only so much customization
and analytics and whatnot that you're going to be able to do. And so that will kind of be a pretty heavy lever in driving how
much out-of-the-box functionality you need versus how much you can afford to go create with in-house
kind of engineering work. So that's a really, really big one. The second big one is how much
are you enabling threat hunting as a discipline? Because if you have no desire or no bandwidth to be able to go drive threat hunting, then that SOC triad starts to look a lot more interesting.
If you want to do threat hunting, then you need to be in one of the other two architectures
because otherwise you don't have your data in one place. It's really hard to do threat
hunting on a per domain basis. And I think the third one really is how much ability do
you have to do in-house detection engineering?
And that sounds like a fancy word, but bear in mind that also runs a huge spectrum, right? If you're kind of taking the initiative to go and download a bunch of Sigma rules and bring those into your SIEM, you've done a level of detection engineering. And by the way,
nice work, keep it up, you're ahead of the curve, right? That's actually really great.
All the way up to folks that truly have kind of aggressive red team programs, or are actually building new custom
analytics based on their unique domains and the unique attackers that are going after them.
So to me, those are the three big things that folks are kind of making this decision based on.
I mean, I just realized one of the reasons that you can talk about this is Corelight's
probably one of the only companies I can think of that supports all three.
That is exactly why we see this: we've kind of thought about the world very differently than a lot of providers, right? Because there are so many people,
so many vendors, I should say, trying to fight the analytics game. They want to fight for
the SOC analyst eyeball. And that's awesome because that competition generates great things
for defenders and there's a lot of good that comes from that. But we've had the opposite view, which is,
let's make sure that we have the best data you can possibly get and the best detection you can
possibly get. And then let's put that in wherever it needs to go, right? And it turns out that
wherever it needs to go has a lot of places it can go. And so, yeah, that's why we see the spread
here. No, no, I get it. I mean, you know, Corelight, obviously you've got the sensor,
and then you've got the, I mean, more recently, it's like a year or two old, you know,
you've got the sort of fully featured NDR thing. And I'm guessing you license probably to other
vendors for the XDR side of it. And even if you're not licensing to them, you're providing compatibility
for their stuff to do the network collection, right? Yeah. And one of the things that we've
seen a lot of success with recently is partnering not just with the XDR providers, because they're the ones that are driving that
middle column. And we've always worked with the SIEM providers and the data lake providers for
obvious reasons, but also working with the incident response teams, because a lot of them have this
same problem in a very acute way, right? They need fantastic data to go and actually
find and diagnose really advanced attacks. And that tends to be a sweet spot for us. So yeah, the SIEM providers are big partners. The XDRs are big
partners, again, across all three of these architectures. And then the IR groups are the
other folks that we love working with. I think before we got recording, we spoke about how
one of the reasons we're having this conversation is that companies don't often
make a conscious choice really with this
stuff and they just sort of wind up where they wind up. You know, how does that happen? Does
this stuff just sort of evolve out of a team? You know, maybe the CISO is just telling various
people to handle it and you just wind up with this sort of inertia that, you know, and that's
how Babby is made. I mean, like, does this do people
sort of wind up picking one of these three approaches by accident, I guess, is one of
the questions. Well, I think, actually, I don't think it's by accident, actually. I think there
is everyone who has a SOC, has a SOC architecture now based off of the available technology 5,
10, or maybe even 15 years ago. And so the key word that you had that I totally agree with is inertia.
It takes effort to go change that analytics architecture. And what's been happening in a really interesting way just in the last two or three years is the architectures that are
available, the technology that's available to do the analytics has really changed pretty radically
from what was there 10 years ago. So I think the call to action for folks is to recognize that
inertia is real, recognize that the decisions we all made five or 10 years ago were probably the
right ones at the time and may still be the right ones now. But it's worth stepping back and asking
that question because there really are interesting pros and cons of each of these big deployment
architectures. There's very different technologies available in a bunch of different categories that
enable these to be easier or in some cases even feasible in ways that they weren't five or 10
years ago. So I think the key is step back, recognize that inertia is a thing, and say,
now is the time to think about, gee, are we good or not? And if we're not, put in place a plan,
start next year, whatever that's going to be. A lot of folks are thinking about budgets for second half, budgets rolling into
next year, things like that.
What are the things you might want to do differently?
Now's the time to ask yourself the question.
I'm not saying that it's necessarily right for everybody to change, but it's definitely
right to ask the question.
Yeah.
So let me see how I go with some pros and cons on each of these three, right?
Let me just take a stab at it. I'm guessing the massive data lake Splunk style approach,
it's the most flexible if you have a big engineering team.
It's also the most expensive,
but if you are successful, it's extremely effective.
And that's a big if, right?
Because there's plenty of people
who make mistakes with that stuff.
So I'd imagine that's a rough breakdown on that side of it.
With the SOC triad, it's almost like, you know, it's beyond a starting point, right? But it is for that case where you've struggled to get the budget from the board, right? You've got to stand something up. Here it is. We're going to take all of these logs,
throw them into a place. If we have to actually do some IR, it'll be a pain in the ass, but at least we can do it.
And we're getting some meaningful detections along the way, but it's not gold standard.
You know, we're not a defense contractor worried about APT crews, right? We're a cardboard box
factory, but we need a bigger security environment because, I don't know, we make classified boxes.
I don't know. And then there's the third one, which is the XDR, which as you pointed out is a blend between the
two. And I'd imagine the advantage there is going to be cost, ease of deployment. And the
disadvantage with that one is going to be the absolute lack of flexibility and dumb vendor
stuff that they've, they've made it do dumb things and you can't turn them off. Am I about right with
all of those three breakdowns? Yeah, you're pretty close.
I mean, there's only a couple of tweaks I would put in.
Number one, for the very large environment, it is the most expensive in raw dollars.
But if you compare that to one of the other two architectures at the scale that those things
are actually at, it may be even more expensive.
So I actually think that one gets more cost effective at large scale, but the raw dollars
are still super large, right?
Yeah. So Splunk's licensing looks really expensive until you look at per user licensing for XDR across a mega enterprise. And not just the individual tools, but think about a lot of
these organizations will have 50, 100, 200 analysts. So the amount of training and workflow
standardization and then customized tool sets and SOAR integration and all this other stuff that's
going on there, there's a lot kind of baked into that, right? And then to the SOC triad piece, it is a great
place to start. I totally agree with you that having the data is the first non-negotiable,
right? Because you can't solve, I don't have the data. But what you're really trading off is
swivel-chair integration when you have to go look across, which, as you said, is going to be more
of a pain in the ass, but at least you can actually get it done. And then on the XDR side, I think
it's less about being locked into what the vendor can do because you've still got all the data in
one place, right? That's kind of the advantage. I think the catch is that all those solutions
are still on a very fast innovation trajectory, right? So what the XDR providers could truly do with cross-domain
analytics now versus 12 months ago versus 24 months ago looks totally different.
It's getting better.
Yeah, it's getting a lot better a lot quicker. Yeah.
All right, Brian Dye, thank you so much for joining us for this conversation.
Always very interesting stuff.
Appreciate the time, Patrick. Fun as ever.
That was Brian Dye there from Corelight.
Big thanks to him for that.
And big thanks to Corelight for sponsoring this week's show.
They've been a sponsor for a while now, too.
And as I said, Corelight's network sensor is the industry standard for network detection these days.
So if you're not familiar with Zeek, that's probably something you should remedy.
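For a quick sense of why Zeek data is so widely consumed, its sensors write tab-separated logs (conn.log, dns.log, ssl.log and so on) that are trivial to parse downstream. Here's a minimal sketch in Python; the sample lines follow Zeek's default TSV layout (a `#fields` header names the columns, other `#` lines are metadata, `-` means unset), but the field values themselves are invented:

```python
def parse_zeek_tsv(lines):
    """Parse Zeek's default TSV log format into dicts, one per record."""
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#fields"):
            # The "#fields" header names the columns for the data rows.
            fields = line.split("\t")[1:]
        elif line.startswith("#") or not line:
            continue  # skip other metadata lines and blanks
        else:
            # "-" is Zeek's marker for an unset field.
            values = [None if v == "-" else v for v in line.split("\t")]
            yield dict(zip(fields, values))

sample = [
    "#separator \\x09",
    "#fields\tts\tid.orig_h\tid.resp_h\tid.resp_p\tservice",
    "1691500000.123\t10.0.0.5\t192.0.2.8\t443\tssl",
]
for rec in parse_zeek_tsv(sample):
    print(rec["id.orig_h"], "->", rec["id.resp_h"], rec["service"])
    # prints: 10.0.0.5 -> 192.0.2.8 ssl
```

In practice you'd point something like this (or Zeek's own zeek-cut, or a SIEM ingest pipeline) at the log directory a sensor writes into.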
Not going to lie.
And that is it for this week's show. I do hope you enjoyed it. I'll be back tomorrow with another episode of the Seriously Risky Business podcast with Tom Uren in the Risky Business News RSS feed. But until then, I've been Patrick Gray.
Thanks for listening.