Risky Business #765 -- The Kaspersky switcheroo
Episode Date: September 25, 2024
Patrick Gray and Adam Boileau discuss the week's infosec news with everyone's favourite ex-NSA big-brain, Rob Joyce. They talk through:
Musk and Durov bow to government pressure
TikTok rushes to ban authoritarian propagandists
The US doesn't want Chinese software in its cars
Kaspersky replaces itself with an AV no one has ever heard of
Aussie police chalk up another crimephone takedown
Press Win-R Ctrl-V to prove you're human
And much, much more.
This week's show is brought to you by Stairwell, and Stairwell's founder Mike Wiacek will be along to talk about how people are using their platform to hunt down detection resistant malware.
A video version of this episode is also available on Youtube.
Show notes
Elon Musk backs down in his fight with Brazilian judges to restore X | Elon Musk | The Guardian
Telegram says it will share phone numbers and IP addresses of 'bad actors' to authorities
Jane Lytvynenko on X: "Ukrainian cybersecurity officials are limiting the use of Telegram for military, critical infrastructure, and other authorities. Budanov said he has 'substantiated data' on Ru authorities having access to personal messages on TG, including removed ones. https://t.co/xOcnf7am9R" / X
TikTok blocks dozens of Kremlin-backed media accounts
Biden administration proposes rule banning Chinese, Russian connected vehicles and parts
Some Kaspersky customers receive surprise forced-update to new antivirus software | TechCrunch
Russian cyber firm Dr.Web says services are restored after 'targeted cyberattack'
Police announce takedown and arrest mastermind behind criminal comms platform 'Ghost'
Turning Everyday Gadgets into Bombs is a Bad Idea « bunnie's blog
Iranian-linked election interference operation shows signs of recent access | CyberScoop
Republicans demand FBI hearing on Iran theft of Trump documents
Ermittlungen im Darknet: Strafverfolger hebeln Tor-Anonymisierung aus [Darknet investigations: law enforcement undermines Tor anonymisation] | tagesschau.de
DOJ charges hackers for stealing $230 million in crypto from individual
This Windows PowerShell Phish Has Scary Potential – Krebs on Security
You can now use Apple's best iPhone Mirroring feature on your Mac and iPhone | TechRadar
Transcript
Hi everyone and welcome to Risky Business. My name's Patrick Gray. This week's show is brought to you by Stairwell and Stairwell's founder Mike Wiacek will be along in this week's sponsor interview to talk about how people are using their platform to hunt down detection-resistant malware. That is coming up later, but first up,
it's time for a check of the week's security news
with Adam Boileau and our special guest co-host,
former US Presidential Cybersecurity Advisor
and NSA Cybersecurity Director, Rob Joyce.
These days, Rob is advising startups
as well as some very large companies about all things cybersecurity.
And of course, he has time to join us here on Risky Business every now and then, which is great.
So guys, we're going to get into the news and I'm going to find a slightly odd way to start this week's discussion by talking about a little victory I had as a parent recently. So I have two
kids. One is six years old. One is three years old. And my three-year-old boy was just steadfastly
refusing to use a nappy or a diaper for our American friends when he wanted to do a number
two. So he'd always come up and say, yes, I'm ready for my nappy because I need to do a number
two. And we'd be like, buddy, can't you use the potty?
Can't you use the toilet?
And he'd always say, absolutely not.
He'd jump up and down and scream and the whole sort of thing.
So it's been an ongoing battle until recently where I said, that's it.
You will not have any dessert.
You will not have anything sweet.
You will not have any treats until you learn to use a potty.
And, of course, there was a lot of jumping up and down and screaming. After two days, he cracked, and he is now using a potty. Very
proud of himself. Huge parenting win, really good. Now, something similar has
happened in Brazil with Elon Musk's X, right? Which is, after all of the sound and the fury around, you know, this ban on X, and X refusing to censor accounts
or to take down accounts as demanded by the Supreme Court, and saying that the
Supreme Court judge was a dictator and they're going to stand for free speech and there's going
to be protests and there's going to be this and that. They folded like cheap lawn furniture after a couple of weeks
of a ban. I am not surprised by this and I am surprised that other people are surprised by this.
First, we saw SpaceX capitulate and block X. Now we've seen X reappointing local counsel
in Brazil. They're paying the fines that they owe to the Brazilian government and they've agreed to
act on the Supreme Court judge's orders. Adam,
let's start with you on this. Are you surprised? No, I'm not at all surprised. I mean,
the idea that Elon Musk would have to go potty, like, I'm here for that. And no, not even slightly surprised. And I think it's, you know, it's one thing to fight the free speech fight in the US.
But the moment you show up in Brazil or India or all sorts of other places around the world,
you know, it's just kind of a different kettle of fish.
Not everywhere is the US.
And, you know, you can't be the same free speech man-child everywhere.
Yeah, I mean, I love how they're so committed to free speech until they're mildly inconvenienced,
which seems to be the case here.
And I would note, too, that in India and Turkey, they have no problem whatsoever nuking accounts
at the request of the government.
You know, we do have an American with us, though, Rob, who I imagine is deeply committed
to First Amendment principles.
You know, what have you made of this whole episode?
You know, I love a good free speech, but the reality is these are sovereign countries and Musk is a businessman.
So he's got a choice between staying in the country or having his free speech.
And it's clear that he voted he wants to stay with the capitalism, not the free speech side.
Yeah, I mean, you do wonder, though, I mean, okay, he is a businessman, but he doesn't really
tend to run X like it's a business, which is perhaps why revenue has collapsed by something
like 83%, according to reports. But yeah, look, the other one that we're going to talk about this
week, too, is Telegram. Because same thing, you know, Pavel Durov did not want to go potty. He did not want
to go potty. And then he was arrested for basically, you know, facilitating all manner of
crimes. And now he said, okay, I'm going to use the potty, basically. Adam, let's get your take
on this. I mean, again, I mean, what they're saying now is, look, we've updated our privacy policy and we'll provide user information like IP addresses and
whatnot in response to law enforcement requests. You know, I mean, again, are you surprised here?
Because I am definitely not. I think given the choice between sharing a cell with a bank robber
in France or continuing to fly around the world on your private jet with your Instagram model
girlfriends seems like a pretty clear decision for him. Yeah, yeah, it certainly does. And I think
the question here is going to be exactly how does, what form does this cooperation take? Like,
they're willing to share these identifiers with law enforcement as part of what they said, like,
you know, legitimate law enforcement investigations or something. But, you know, doing that consistently,
building the frameworks, building the mechanisms for law enforcement to submit those
requests, servicing them in reasonable time, providing the data. Like, the proof is going to be
in the pudding with Telegram. At least with X in Brazil it's pretty clear: it's either
blocked or not blocked, or the people are deleted or not deleted. We can kind of see that response.
Telegram, you know, we'll have to wait
and see whether they do actually make a good faith effort of it or not. Yeah, and which jurisdictions
they're going to be responsive in. Rob, I did have a question for you on this, which is so much of
the discourse around cybersecurity and, you know, online privacy and stuff focuses on anonymizing
technologies and E2EE. But ultimately, you know, even if you've just got a service like this,
like Telegram,
which just uses basic like TLS encryption for its users,
they might be collecting logs and whatever.
But if they're not going to cooperate
with law enforcement,
I mean, the people using it
are kind of enjoying the benefits of anonymity anyway, right?
So you don't need to be using some crazy Tor-based thing
to get that benefit.
I mean, how much of that, you know,
I understand that you're limited in what you can say here,
but how much of a problem is it just when you're dealing
with platforms that just won't cooperate with governments,
you know, when you're either in a SIGINT agency
or, you know, your friends at the FBI and whatever?
How much of a problem is that compared to the harder core, you know, anonymization tech?
Yeah, I really think the, you know, the services like Telegram and some of the other major
platforms have been growing their ability to provide warrant proof communications.
Clearly, France thought that Telegram either could or
should supply information. And they went at them hard and Durov blinked. So I think the fact that
the French went to this extreme shows that it's a real problem and they were concerned.
And I think you've seen some laws, you know, like even in
Australia, where the law is kind of indifferent as to how you supply the information, but you've
got to make sure that you can respond to a lawful intercept request. And, you know, I do believe
over time that companies do need to figure that out. You know, civil society needs the ability
to enforce the rule of law.
And that's what we're talking about here.
Yeah, we've got another piece on this that we're going to talk about later.
And I've definitely got something to add on that.
Just while we're on Telegram, though, the Ukrainian authorities have now banned the
use of Telegram across military critical infrastructure and a bunch of, you know,
government departments and whatnot.
Kirill Budanov, who's one of the intelligence chiefs in Ukraine,
has said there's solid evidence and substantiated data
that Telegram is cooperating with Russian authorities.
I think that's an interesting angle to this whole thing as well,
which is it does seem there's a lot of smoke around that,
that Telegram was cooperating with
Russian authorities, but not with Western authorities, which I think was even more of
a motivator for the French. Would you agree with that? Oh, absolutely. You know, you look,
Telegram has a long history and Durov has a long history with the Russian government. He made a big deal of trying to stand up to the FSB way back in 2013 and departed Russia when they were squeezing him.
But he quietly returned to his home base in St. Petersburg in 2014.
And he's been able to come and go from Russia ever since.
And that tells you something, because the Russians have a very strict law.
It's the SORM law, S-O-R-M.
And, you know, it started with telephone and cellular intercept.
And then it moved into Internet in like 1999.
And then it grew in 2014 to include all forms of comms, to include social media.
And the idea that he could come and go while defying Russia is inconceivable.
It is very clear that he reached some sort of agreement.
The Russian government got what they want, both under law and just under what the FSB can demand.
And they've even made some statements, you know, in the Russian government that they found a compromise with the FSB.
There was a quote that Telegram installed the equipment so that it can monitor all dangerous subjects. That's a pretty wide lane to drive through when you're the Russian
government.
So I am highly confident, based on a lot of public information, that Telegram is absolutely
cooperative.
And the Ukrainians are super wise to get out of Dodge and make sure that their information
is not on Telegram.
Yeah, I mean, I think what you say, the fact that he was not arrested by the Russians gives you an indication there, you know, but it's just, I guess it's a bit of a plot twist
that he gets arrested by the French. Now, let's move on to another story here. And this one I
find pretty interesting, which is TikTok has now, you know, we spoke last week about how Meta has
basically kicked RT off its platforms, given recent revelations from US authorities
that say that RT is acting as an extension of Russia's intelligence community. We've now seen
TikTok follow suit, which I guess, you know, TikTok, people forget, it might be owned by
ByteDance, which is a Chinese-controlled company with majority, I think, Western shareholders,
but they don't really have the
controlling authority over the company or whatever. But TikTok is really the Western-facing
app. There's another one that ByteDance operates for the Chinese domestic market. So this isn't
like they've removed RT for their Chinese audience. But I do find it interesting that
TikTok is at least trying to put up a bit
of a fig leaf here and say, see, we're not a malign influence, you know, we're deeply concerned about
misinformation. Adam, let's start with you. What did you make of this? I mean, I guess it makes
sense for them, and it's an easy move, right? You're following Meta, there's not going to be any push
back. And as you say, like, it's good for them to be able to have something to point to and say, hey, look, we're playing by the rules, you know, we're part of your media ecosystem and we're, you know,
we're doing what you want. But, I mean, it does fall a little flat for people like us that
have been following this story all along and kind of know about how the Chinese work. So, you know,
good for them, but, you know, come on. I think we can see through it,
right? Yeah, exactly. I mean, I do think it's, I also think the, you know, people have kind of
forgotten that ByteDance is going to be forced to divest TikTok, like actually quite soon. That's
coming up in January and there's court challenges and whatever. And I think everybody's in a holding
pattern to see who's going to win the next US federal election. But I would think that this is a sensible thing to do
if you want to at least reserve the option
to maintain that respectability
so that you might be able to find a US consortium
to actually buy the thing.
What's your sense, Rob,
on how this is actually going to play out?
Do you think that the...
I mean, look, let's just assume for a moment,
and it could be a dangerous assumption, we know what's going to happen under a Trump admin,
right, which is that the ban won't go through. But if Harris wins the next election, you know, how
do you see this playing out? Do you think that TikTok will wind up being divested, or do you think
the CCP will just put pressure on the company to shut it down as kind of like
on the principle sort of thing? Yeah, I don't know where the China government goes. I don't think
Harris administration blinks, right? I think its policy on China is going to be fairly consistent
with the current administration. And Congress has been very
clear about where they stand, right? This was also almost a bipartisan issue where everybody
could rally behind. So I think the U.S. is going to continue that pressure. I just don't know if
China will allow their company to be seen buckling under in almost Elon Musk Brazil fashion.
Yeah, let's see.
Let's see if they go potty, right?
We're going to end up with like Oracle TikTok.
Yay.
I know, right?
Crazy stuff.
Now, look, we included this one this week, Rob,
because you're joining us
and I figured you'd have plenty to say about it.
The Biden administration is proposing a rule
that would ban Chinese and Russian automotive software
from cars sold in the United States.
It looks like the idea is this would take effect
for model years from 2027
and the ban would become law in the
year 2030. You know, this is something I know you have opinions on. So I wanted to get your
thoughts on whether or not you think this is a positive development. I'm going to just go ahead
and guess that you think it's a good thing. I think it's an important thing. I realize it's
going to come with a lot of pain, both economically,
technically, and in business and even international relationships. But the reality is
the regret factor is going to be super high if you let the Chinese throughout the
infrastructure that we use to do self-driving cars and automation in that space.
So it took an enormous amount of effort to highlight the Huawei challenge. And both
financially and technologically, that was a huge, huge problem. We got to the point where there
weren't Western options. So I think the
intent here is to get ahead of the challenge while there still can be Western options before
this market is completely mature and get those Western options in there so there can be a trusted
provider. I mean, the Chinese automotive industry is just a juggernaut. I mean, there's no other
way you can put it. You know, I live in a country where we don't have tariffs on Chinese cars, because we don't really have a local car industry to protect
anymore, and Chinese cars are just everywhere, and they're quite good. My last two cars are not from
Chinese brands, but they were both made in China. My Tesla, that we got rid of because of
Elon, was made in China, and then we replaced that with a BMW iX3, which was also made in China.
So just even on the manufacturing scale side,
China's just come out of nowhere to be such a major player in automotive.
So I think what you're saying about this trying to encourage others
to serve the US market, you know, I really get what you're saying there.
I do fear that perhaps, you know, Americans might wind up driving inferior cars as a result,
though, if the innovation is going to continue as it has in China.
Yeah, they certainly are a manufacturing powerhouse. But we've got to get some
alternate supply chains. Yep, yep. Now, another topic near and dear to your heart is Kaspersky.
And this week it looks like they have completed their withdrawal from the U.S.
market by switching over the installations on computers in the United States.
Basically Kaspersky users in the U.S. woke up and Kaspersky was gone and something called Ultra AV was installed in its place.
There was initially some confusion here and people saying that they had no idea this was going to happen, although it does look like Kaspersky had notified its customers via email that they were going to make the switch.
Ultra AV is owned by an American company. Personally, I think this was a responsible thing
for Kaspersky to do
because it means that users will continue
to enjoy some level of protection.
Whereas if Kaspersky had to stop shipping signature updates,
you know, there was going to be some bit rot there.
So I think they kind of did the right thing here.
Adam, let's start with you.
Do you agree with me
that they actually did the right thing here?
Because they're getting a lot of flack for it, but I kind of think it was justified.
Yeah, I mean, I think, you know, you're right.
Like for them not shipping an update and just letting it bit rot and letting people think
they were still getting updates and still being protected, you know, that was not the
responsible choice.
So finding someone to buy it, take over that installation, ship a smooth update process, I mean, and there's argument about exactly how smooth it
was, like, that's the right thing to try and do. I mean, I'd never heard of Ultra AV, and certainly
the impression you get from some of the Kaspersky forums is that nobody seems to have heard of Ultra
AV before. But, you know, I guess so long as it does something, maybe, you know,
it remains to be seen whether it's any good, but yeah,
the Kaspersky forums are like,
I tried to read them to get some vibes and like,
it's just such a frothing mess of, you know,
of people who don't understand what antivirus software is or how computers
work and, you know, just so much frothing at the mouth.
That was kind of like,
I ended up just closing the tab and deciding I didn't want to think about it anymore.
Yeah, I mean, I've met Eugene Kaspersky a bunch of times.
He seems like, you know, when it would come to something like this,
he would be motivated by trying to do the right thing.
I think protecting, you know, the average internet user at scale
is something that he does personally care a lot about.
I think it's been his sort of driving motivation throughout his career. But Rob, what's your opinion here? Do you think
they did the right thing? Yeah, so two points, Patrick. One, I do think it was reasonable to
replace the AV and to have a plan to do it, I think they executed really poorly. You know, there are a
bunch of people, I went to the same forums, Adam, and yes, it was angry people with pitchforks and
torches. But many of them said, I definitively didn't get an email, right? They went back through
and, you know, I'm sure some of them hit the
delete key. Some of them ignored it, but I am also certain many didn't get an email. And there was a
big question as to why they just didn't get a pop-up that they had to click through on their
Kaspersky, you know, a week or a day or some period of time beforehand saying, hey, this is what's
happening. Because there were others who read the thing that said they'd be transitioned. And they assumed they'd get a key code for their new AV
that they would download and install. The second point I'd make is this was an educational moment
for a lot of people. It really woke them up to why we were saying having an untrusted AV in a position of root
access on your computer is not the place you want to be. And when Kaspersky disappeared,
and all of a sudden this new program was automagically installed with no user action,
there were people that were dumbfounded. Now, many of the listeners of this program will understand that, yes, that's a very achievable thing for the level of access
that your AV has. But there are a lot of people who didn't realize it. And that just drives home
the point of how much control the antivirus has. And it doesn't matter whether it's a Western American antivirus or, you know,
a Russian or a Chinese, you have to pick your trusted partners. And this is, to me, it's akin
to when TikTok, you know, went ahead and messaged all their users with the phone numbers of their
congressmen and incited them to call congressional
offices, right? It just told everybody the position of power they had. And that, you know,
was a shoot yourself in the foot moment for TikTok. I think, you know, if Kaspersky had any chance in
this space, this was a shoot yourself in the foot, no recovery. Not that they were likely to be able to recover anyway,
but it demonstrated.
I think, Rob, they knew that the US as a market,
That's a door that's closed to them forever.
The TikTok thing that you mentioned,
I almost mentioned it earlier,
which was, I mean, that was the most spectacularly idiotic
bit of government relations I've ever seen
when they were literally giving people,
and they gave people pop-ups
where you could actually just tap the number to call directly off your phones. Absolutely insane, hardened the resolve, really turned things around. I mean, if you're looking for, you know, a couple of key things that really did TikTok in, it was a lot of the content that was on there pertaining to the Israel-Gaza situation and then that bit of absolutely insane government relations work.
So I think both of these, you know, both of these actions will go down in corporate history as, you know,
bad own goals where they did themselves a disservice.
Now, real quick, Dr.Web, which is another Russian cybersecurity firm,
they shut down a bunch of services after claiming that they had been attacked. I mean,
we don't really know much here. This could be hacktivists. This could be just Ukrainians who
want to cause some damage. It could be SIGINT agencies. I mean, you know, I would think it
would be a hard time to be
a Russian software vendor with a presence in a lot of Russian networks, Rob. I mean,
I'm sure you would agree with that. I do, right? I think all of those and, you know,
and their friends are coming after these opportunities, right? We just got through
talking about, you know, what a position of privilege an AV provider has.
And so if you want to brick a bunch of machines, if you want to understand and get data back
about the structure of networks and the places they're located, that's a position of power.
So it doesn't surprise me that somebody's coming at them hard.
Now we've got some news out of Australia, which is Ghost, the crime phone network, which was sort of smaller than a lot of the other ones that we've seen, but it was starting to grow, was starting to really take off.
The guy who was running it was actually Australian, 32 years old, Jay Je Yoon Jung.
And he's been arrested and charged with running the platform
and it looks like the Australian Federal Police were all over it
it's a classic of the genre, right?
they managed to somehow access their update server
and push an update, I'm guessing it was signed
out to all of the handsets
collect a bunch of intelligence
then drop the hammer
I mean, it's just insane
I think, for an Australian citizen to be running a service like this, when we have entirely,
you know, competent law enforcement agencies who are going to catch you, was that your impression
too, Adam, when you read about this? Yeah, I think it does show some, you know, poor threat modeling
on behalf of a person deciding to stand up a crime network, especially given the history of, you know, of
attacks by the Australian law enforcement, and I mean, also other law enforcement as well. But
picking a jurisdiction to operate that kind of business in seems a pretty important choice. But,
you know, I guess a lot of people fall into this, like they start small, they start making it for
some friends or some associates, and then it, you know, kind of balloons out of control pretty quick. So maybe
there wasn't a lot of thought up front. But, I mean, you know, advice from Risky Biz to aspiring
criminals is, if you're going to run the replacement crime phone network, maybe choose a different
jurisdiction, maybe think about your opsec early on. Yeah, do it from a country where you can bribe
the local law enforcement
so you don't wind up in prison, I think,
is the boilerplate advice for something like this.
And also, you know, the Australian Federal Police
were really all over the Anom operation,
working with the FBI,
and showed that they were very happy to act
extremely aggressively on operations like this.
I'm just, you know,
as again, I'm just, this guy was always going to get caught. Was that sort of your impression as well, Rob? Oh, I love what law enforcement's doing in this space, right? You talked about how fast
ghost grew. They grew fast because law enforcement pulled all their competitors off the playing
field. And so, you know, when people
looked around and tried to find the next best thing, there was Ghost. So they became the bright,
shiny object for law enforcement to target. And they had a spectacular win. You go back,
you mentioned Anom, but Phantom Secure, EncroChat, Sky Global, there's a whole, you know, just parade of entities that have been pulled down.
And now, you know, there's everybody looking over their shoulder about who knows what and who's been pressured because there's evidence against them and have flipped who's in the chats in other places.
And this is just the gift that keeps on giving. So this is wonderful work by all the law enforcement agencies involved. Now, moving on: the Israelis, it seems, were able to sneak explosives into pagers that were detonated about a week ago across Lebanon, belonging to people in the military wing of Hezbollah.
So last week I said I'd seen some speculation that the explosive charge may have been contained in the batteries.
It looks like that's a pretty plausible theory.
Adam, you found this blog
post and, yeah, walk us through the gist of it. Yeah, so Bunnie Huang is a guy that's been in the infosec scene, but has done a lot of hardware work over the years, a lot in, like, games consoles and
other kind of, you know, hardware-adjacent hacking stuff. And he's also built a bunch of products, including custom battery manufacturing.
And he talks through, like, how do you produce lithium pouch battery cells, what kind of machines
are used, how do you manufacture them, how much is the equipment to buy, and, like, if you wanted
to build a battery that contained a layer of plastic explosives, and then you lay that up inside
the batteries and then trigger it
through the battery using the existing circuitry without having to do too much
custom work, like using the existing microelectronics to control it. And it talks through that process,
describes how to, you know, detonate it, etc. And, you know, it's all thought experiments, but it's
describes how to you know detonate it etc and you know it's all thought experiments but it's
thought experiments by a guy who has a bunch of experience, you know,
building these batteries and kind of what's involved in getting the equipment and doing
it yourself. So I found it really interesting just from a like, you know, this is, you know,
you always want to read the nitty gritty details, right? And our audience is into that kind of
thing. So I thought that was super interesting. You know, there's no, you know, we don't know how
Israel actually did it, but this certainly seemed pretty
plausible to my reading. Yeah, well, you know, my thinking around this is that Hezbollah,
you know, have a pretty sophisticated counterintelligence operation, and they would
have inspected these things, and hiding an explosive charge in the battery would be one way
that they would be able to get these things into the hands of fighters without them
realizing that was what was happening. Rob, I wanted to get your thoughts broadly on this
operation, because it occurs to me that as someone with a long history in, you know,
a signals intelligence agency, this is the sort of thing you would look at and say,
I mean, in some ways, just wow, because it achieved a couple of objectives, one of which was to
severely injure a whole bunch of fighters.
And the other thing that they did was really undermine their command and control. I mean,
I would think you would be watching this and thinking that it was an effective operation.
But, you know, do you have any feelings about how much of a military benefit there would have been
from the Israelis actually carrying out
this operation. Because of course, when we spoke last week, it was just the pages. Of course,
the next day, there were walkie talkies that went off as well. You know, so what are your thoughts
on how effective this may have been from both, you know, degrading their C2 and personnel
kind of perspectives? Yeah, so I think it's going to be highly effective for a window of time. The reason that Hezbollah went to pagers was because they had been shaped off cell phones.
They had watched people have very bad experiences from using cell phones, having cell phones around them and communicating on cell phones.
And so they decided they needed a one-way communication that was safer, and pagers certainly are that. And now they've gotten to
the point where it's very clear that they can't trust pagers at least for the time being. Right.
Yeah. And you know, where else do you go? You go to tactical push-to-talk radios, and they can't trust those for a period of time.
So a whole series of the infrastructure was taken down.
And the pagers that didn't explode, you can bet the people that own those have no desire to clip those onto their body.
So the actual interaction and communications that have to happen in a time of tension and war,
there's a lot of friction now. And I've watched a couple of the, there's been a couple of high
profile strikes by the Israelis as well. And you've got to wonder if those became opportunities
because people had to use insecure comms, or they were shaped
into comms that were exploitable. But, wow, it's, you know, at least in the short term it's going to
have a pretty devastating effect on their ability to coordinate and orchestrate. Well, indeed, we've
seen a series of airstrikes now targeting southern Lebanon, and you get the
impression the pagers and the walkie-talkies were sort of the opening salvo in what looks to be a much bigger campaign
targeting Hezbollah. It's interesting because last week I described the operation. You know,
I said it was very sad that a child had died, and it is. It's tremendously sad. It's a tragedy
when a child is killed, you know, an innocent child is killed. But, you know, people were very
angry at me, and they said that I had just mentioned that, you know, in a token way, even though our
conversation really did talk about the civilian deaths that occurred as a result of this military
action. The reaction to this one has been quite extreme, really. And it's made me go back and just check my work, actually,
because the other thing that people took issue with is I described this as a legitimate
operation. So I've been to two people who are experts in international humanitarian law
to ask them if this was indeed a legitimate operation. I'm going to read from
the replies that I got here, just because I want
to cover this off, because I've been getting a lot of hate on social media for what I said last week.
And I will just say too, that if you describe something as legitimate, that's very different
to saying that it's a excellent idea that's going to achieve your long-term policy goals, right? So
just something to keep in mind. But this source said, look, I listened to your show
last week and your analysis, of course, was correct. Step one, there's an ongoing armed
conflict between Israel and Hezbollah. Step two, there is a military benefit and lots of it to be
had by disrupting military command and control and also killing fighters and commanders. Step three,
the means used plainly show targeting of those military means and capabilities. Step four,
as you said, this actually seems much more protective against collateral damage than an airstrike.
Step five, as we discussed, it's not a violation of the booby trap rule either. So there's been a lot
of people saying, oh, there's, you know, conventions against booby traps. Those conventions are really
designed to stop militaries booby-trapping devices that might be, you know, left behind by
retreating soldiers and then picked up by advancing troops or even civilians, not really about,
you know, giving someone a Trojan device in this way. So I've got it on two sources,
from two sources that they do not believe that that convention has been violated. Unfortunately,
these sources do not
want to talk about this publicly because they are concerned at the social media reaction. They just
want to stay out of it. So I can't tell you all in the audience who they are, but I have told you,
Rob, and I have told you, Adam, who my sources are, and I'm sure you would agree with me that
they are impeccable sources of this. They're world-class. Yeah, yeah. And the other key piece of the analysis, Patrick,
is these were communication devices
intended for participants in military action, right?
Yes.
And that's the other thing, too.
Overwhelmingly, these pagers were carried by people,
Hezbollah officers in the military wing, right?
So these were not distributed to, as best we can tell,
to people who work in the civil service side of Hezbollah.
Despite the Iranians actually coming out and saying
they were primarily used by kindergarten teachers,
which I thought was like, my lord, that's a bit rich, right?
So that they could warn about incoming airstrikes and whatnot.
And I just think, you know, people are saying, oh, you know, but civilians were killed, therefore it's a war
crime. I mean, you know, this is what war looks like. You go back to the explosion on the Kerch
Bridge that connects Crimea to Russia. That resulted in the death of the poor truck driver
who was carrying the bomb, unbeknownst to him, and a car containing a couple who just happened to be driving past at
that moment. Civilians are killed in these sorts of actions. Civilians are killed in airstrikes.
And one interesting thing that my source did say on this, which was, you know, so why so much angst
going the other way? Simple. First, as we saw 20 years ago with early drone strikes, technical
novelty combined with attacks that catch military targets out in civilian settings leads many people to just categorize the whole thing as something other than war, especially if it is attributed to... Anyway, I just wanted to cover that off, because I've had, you know, people really saying some pretty hostile stuff to me over the last week, when really what I'm
trying to say is: this, in terms of, like, war history, is up there with the Trojan horse.
Okay.
This is a remarkable, incredible operation.
Uh, it was legitimately targeted and as best I can tell from the sources that I've spoken
to, it does comply, does comply with international humanitarian law.
So, you know, I'm just here to report
what expert sources tell me.
So please, you can stop with the messages.
Let's move on though to something else.
And again, Rob, I'm glad you're here
to talk about this one this week
because it looks like the Iranians
are still going after the Trump campaign,
stealing documents
and trying to send them off to the Harris campaign, journalists, anyone who's going to actually
do something with the material. The funny thing is, though, everybody just seems to be completely
ignoring Iranians bearing documents, right? So this is a, you know, it's a weird old world
at the moment.
Yeah.
Would you expect the Iranians to do any less, right? I don't think they would run the first operation and say, shucks, we were caught, we're done.
I expect that, you know, this is not the last of some of the nation state interference or at least interference efforts. But the good news is
we're so much more educated having lived through a couple cycles of this. You know, there was the
previous election where there were attempts by the Iranians to weaponize a Proud Boy video,
and the U.S. came out and pre-bunked it, got in front of it as it was about to run and talked about it.
And that just brought a lot of people to understand the games that were trying to be played.
In this case, you know, the media has been super ultra responsible looking at these things that are coming in and just understanding that, you know, they are intended as pawns in this disinformation, misinformation.
And so they're taking a knee and thinking about it.
I am sure if something amazingly newsworthy were dangled in front of some reporters, it will get some press.
But the good news is this is kind of pedestrian tactics, and the immune response is dealing with it.
Yeah. Meanwhile, House Republican leaders
have sent a letter to the FBI, and they're sort of demanding an investigation into this. And
I would think that's a reason, you know, they're demanding a hearing, and I would think that would
be a reasonable thing for them to request. But you do get the impression that probably the intent
here is to beat up the FBI in a hearing and sort of make it look somehow equivalent to, you know, the Russia
hoax from 2016. So you do worry that perhaps this is a bad faith request. I know it's going to be,
I mean, look, I can ask you this question now that you don't work for the US government anymore. Is
that your read on this as well, Rob? You know, elections are influence operations.
But I can ask.
Elections at their core are influence operations.
We just want to make sure that it's our politicians and our electorate that are influencing and being influenced.
Right. We don't want any external manipulators. And so, you know, the hearings are part of the politics and they certainly will have
players in them that have some intent, both to defend and attack.
And so-
You can still dodge a question like a senior bureaucrat. I love it.
I was sitting there like, masterclass.
That's like Neo from the Matrix dodging that question. Slow motion. Very well done, Rob.
We're going to move on to a more technical item
here. A more technical item here. We've got some reporting here from Germany and it looks like
they were able to run some timing attacks against users of a secure metadata resistant instant
messaging tool called Ricochet that I actually have some history with. Something like a decade
ago, I helped them get a grant. I was all hopped up on the Snowden leaks and thought it would be
great to have this sort of anonymizing technology in the hands of ordinary people. Very quickly,
though, you know, it was sort of made not usable in places like China where people could benefit
from it. And, you know, these days it's used mostly by Nazis and
people who like to distribute child sex abuse material. So, you know, I hate to say it, but I do
kind of regret my involvement with this project, even though the people involved, it's not a slight
against them. They're all very well-intentioned people. But yes, it looks like the German
authorities were able to run some sort of timing-based attack against the Tor network to find some pretty awful people involved in CSAM.
And yeah, I mean, that's the story. Adam, I don't think we should be that surprised, should we,
that if you've got enough passive visibility over the internet, that you can, over a long period of
time, you could start to hone in on people who are operating it. You know, this service operates by using Tor hidden services to enable chat, so,
you know, it makes sense, doesn't it, that you'd be able to gradually figure it out.
Yeah, like, if you've got enough visibility of the, you know, the entry and exit points
to the Tor network, or to the things where the hidden services are entering the Tor network,
yeah, given enough visibility you're going to be able to figure it out over time.
And because of the jurisdictional reality,
so much Tor infrastructure is in Germany and the Netherlands and bits of Europe.
It's quite concentrated in places where the overall environment
is amenable to anonymity services.
But now that we're at the point where law enforcement are cooperating,
you know, you do have enough visibility, I guess,
to be able to do this kind of work.
And it's, you know, we've seen the Tor project, you know,
talk about introducing, you know, chaffing and things
to kind of complicate timing attacks.
And no one's been really keen to do it because Tor is already so slow
that adding those kind of technical countermeasures was always going to come at a
usability cost. But, you know, I think we're going to see steps towards that, because,
you know, clearly law enforcement have got to the point where, you know, it's a capability they can
and do use. Yeah, I mean, this doesn't affect the Tor Browser necessarily, but, yeah, I mean, Ricochet,
the way that it works is, you know, hidden-service based. I think newer versions might not even be vulnerable to this particular type of timing
attack. I'm not 100% on that. But Rob, I want to bring you in on this because one thing that I've
noticed is we never hear the Five Eyes agencies ever complaining about Tor, which to me, you know,
they'll complain about end-to-end encryption. They never complain about Tor. What can we infer from this?
Well, I think the reality is that it is one of many things. And when people talk about
end-to-end encryption, they're talking about Tor as well. But in reality, you've got to figure out
the way to get at the targets you're looking for. And that may mean going to the endpoints
before it's encrypted.
And that's one way around Tor.
You've seen the law enforcement solution here.
I think overall, when you're denied access,
people are going to get creative
to figure out how they could do the collection
to chase the targets that they need to go after.
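To make the "timing attack" idea from that discussion concrete: the core of traffic confirmation is just correlating when activity is seen at two vantage points. The sketch below is purely illustrative, not how the German investigators worked, and the data is made up; it buckets timestamps and scores how often both sides are active at the same moment.

```python
# Toy illustration of traffic-timing correlation, not an actual Tor attack.
# Assumes you already hold timestamped observations from two vantage points,
# e.g. traffic entering a guard relay and traffic leaving a suspect's ISP.
from collections import Counter

def bucketize(timestamps, width=1.0):
    """Count events per fixed-width time bucket (seconds)."""
    return Counter(int(t // width) for t in timestamps)

def correlation_score(side_a, side_b, width=1.0):
    """Fraction of active buckets on side A that are also active on side B."""
    a, b = bucketize(side_a, width), bucketize(side_b, width)
    if not a:
        return 0.0
    return sum(1 for bucket in a if bucket in b) / len(a)

# Made-up observations: the closer the score stays to 1.0 over a long window,
# the more likely the two flows belong to the same session.
guard_side = [1.1, 2.3, 2.9, 7.4, 9.0]
suspect_side = [1.2, 2.4, 3.0, 7.5, 9.1]
print(correlation_score(guard_side, suspect_side))  # 1.0 in this toy case
```

Real-world correlation has to cope with jitter, cover traffic and enormous candidate sets, which is why it takes sustained visibility over relays rather than a one-off capture.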
Yeah. Quickly, we've kind of run out of time here, but the DOJ in the US has charged a bunch
of hackers for stealing $230 million in crypto from an individual investor, it looks like. And
the investor was in Washington, DC. And this is a 20 year old and a 21 year old. And yeah, they went
out and they were spending their money on luxury cars and just having a great time. And now they
are busted. So having less of a great time. Adam, I wanted to get your thoughts on this one from
Brian Krebs from Krebs on Security. It's just a novel phish. But I just, everybody who's read this
is like talking about it because it's really dumb and really clever all at the same time.
Walk us through it.
Yeah, this is great comedy.
So this was a phish that showed up and said, hey, look, there's some security vulnerabilities in your GitHub repo.
Click here to see the details.
And then it throws you to a web page that has a captcha.
And the captcha, you need to prove that you're human.
And the way that you prove that you're human is by pressing Windows-R and then Ctrl-V and then Enter, which means open the Windows Run
prompt, paste in the contents of the clipboard that the web app put in there for you, and then
receive your, you know, PowerShell-dropped infostealer trojan, which, you know,
anyone technical is going to look at that and go
you, excuse me, you what now? But, I mean, for the average user it probably does work
enough. So, like, my hat is off to them for just the sheer, you know, sort of brazenness of
it. Because, like, I wouldn't, I mean, if someone came to me on a, you know, on a pentest gig and said, I want to try this against our target, I'm like, oh, come on, like, you know, work harder, go find some
bugs and, you know, shell them the old-fashioned way, don't get lazy. But, hey, if it works, it ain't dumb.
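For defenders who want a quick way to hunt for this particular trick on Windows fleets: anything a user pastes into the Win-R Run dialog is recorded in Explorer's RunMRU registry key, so a rough sketch like the one below can surface suspicious entries. This is our illustration, not something from Krebs' write-up, and the indicator strings are just examples.

```python
# Windows-only sketch: look for suspicious entries in the Run-dialog history.
# Commands pasted into Win+R are recorded under Explorer's RunMRU key, so a
# PowerShell download cradle showing up there is a strong phishing indicator.
import winreg

SUSPICIOUS = ("powershell", "mshta", "bitsadmin", "curl ", "http")

def run_dialog_history():
    # Raises FileNotFoundError if the key doesn't exist (Run dialog never used).
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU",
    )
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:  # no more values
            break
        if name != "MRUList":
            yield value
        index += 1

for entry in run_dialog_history():
    if any(marker in entry.lower() for marker in SUSPICIOUS):
        print("Suspicious Run entry:", entry)
```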
Now, we just got one more thing we're going to talk about before we wrap it up. And, Adam, you
actually have been playing around with the iPhone Mirroring feature that shipped with the
latest version of macOS.
Of course, we first spoke about that when Rob was on the show last,
and we all got it spectacularly wrong
because there was some bad reporting early on
that suggested this was going to be like a cloud service.
It's really not that.
And Apple really does seem to have put some effort
into making it not present insane security risks so you've been using it
What are your general impressions? Uh, so, yeah, it seems pretty sensible. It's based, I think, as we
discussed last time, on CarPlay or that kind of, like, remote control infrastructure that already
exists. So there's a pairing process you do via, you know, your iCloud accounts, but after that, you
know, you have to authenticate to your local
machine, you know. So on my Mac, for example, it prompts me to biometric auth, and then it
connects to your phone and you can use it. You get pop-ups on the phone when it's being used.
And then one of the questions I had was, like, well, how much access do you have to the phone?
Does it do things differently than when you're physically at the phone? And so, for example, if you're in the Settings app, the, like, Face ID and PIN changing functionality,
that menu option doesn't even appear in Settings when you're connecting to it from the remote thing.
If you try and do something that does require Face ID auth, then it will prompt you to go use
your real phone. So there's, like, a few little edge cases that, you know, I'm glad that they thought about. Um, you know, whether long term this
exposes, you know, gets abused in ways that we haven't really thought of, we're gonna have to
wait and see. But, like, they've made a reasonable effort, which is kind of what you expect from Apple,
right, to have at least thought somewhat about it. Yeah, yeah. I mean, I think it's one of those things where our initial reaction on it was wrong. It looks like
they've done it sensibly. I mean, I still won't use it. I mean, you saw me earlier in this podcast,
for the people watching on YouTube, you would have seen me reading off my phone from a source who
I didn't want to identify. I don't want to expose that conversation to the screen on my Mac because
it is much easier to get a shell off my Mac than
it is off my phone, right? So even just from the point of view of wanting to avoid screen recorders,
I don't feel comfortable using it. But I think for the average person, it's fine. Rob, any thoughts
here? Yeah, I think as the details have been coming out, you know, Apple's got a lot of smart
people, and they have the advantage of that closed ecosystem where they start all the way down at a
hardware root of trust and build up through their stack. So when they put their mind to it, they can
do a pretty good job with security. And it looks like, you know, there's some new attack surface
in any new feature, but I think they're putting a lot of effort into protecting it.
Yeah. So apologies to Apple people who no doubt wanted to throw their
iDevice that they were listening to the
podcast on out the window when we had that initial conversation,
but it looks like, yeah, bravo job well done or job done as best you can do
with that sort of thing. And yeah,
they've certainly crossed their T's and dotted their I's.
But we're going to wrap it up there.
Adam Boileau, thanks for joining us.
And Rob as well, thank you very much for coming back to co-host with us again.
And we'll look forward to doing it again with both of you soon enough.
Cheers.
Thanks a lot.
Thanks very much, Pat.
That was Adam Boileau and Rob Joyce there with a check of the week's security news.
It is time for this week's sponsor interview now with Mike Wiacek, the founder of Stairwell.
Stairwell is a really interesting platform that does analysis of all of the files present in your organization to help you do things like track down malware variants
that might not be known about yet.
I once described Stairwell as kind of like an NDR platform,
but for files instead of network data.
And funnily enough, Mike actually liked that one,
so I'm going to stick with it.
But yeah, the idea is Stairwell ingests a copy
of every single executable file in your organization and lets you do all sorts of analysis on that corpus of data.
So you might find some malware on your network and then you can go to Stairwell and say something like, show me similar files.
And it does. And that's very handy.
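As a rough feel for what a "show me similar files" query involves under the hood, here is a toy sketch using fuzzy hashing with the ssdeep library. It is our illustration of the general technique only, not Stairwell's actual similarity engine, and the paths and threshold are made up.

```python
# Toy "show me similar files" using fuzzy hashing. This illustrates the general
# similarity-search idea, not Stairwell's implementation. Requires python-ssdeep.
from pathlib import Path
import ssdeep

def index_corpus(corpus_dir):
    """Fuzzy-hash every file in the corpus once, up front."""
    return {path: ssdeep.hash_from_file(str(path))
            for path in Path(corpus_dir).rglob("*") if path.is_file()}

def similar_files(known_bad_path, index, threshold=60):
    """Yield corpus files whose fuzzy hash is close to the known-bad sample."""
    reference = ssdeep.hash_from_file(str(known_bad_path))
    for path, digest in index.items():
        score = ssdeep.compare(reference, digest)  # 0 (unrelated) .. 100 (identical)
        if score >= threshold:
            yield path, score

# Hypothetical usage: you found one malware sample and want its relatives.
corpus_index = index_corpus("collected_executables")
for path, score in similar_files("malware_sample.bin", corpus_index):
    print(f"{score:3d}  {path}")
```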
So today we're talking to Mike about the efforts malware crews and APT operators are putting into evading detection. The amateur crews are still
getting snapped because they're doing things like uploading their samples to VirusTotal to see if
they're going to be detected. But the real professionals have built their own testing
rigs and are a little bit more advanced. Here's Mike. When you start talking about the really
high-end adversaries, they're not uploading it to VirusTotal.
They're probably replicating a copy of something like VirusTotal internally.
If you think as a software engineering company, you have continuous integration,
continuous deployment tests, you're running end-to-end tests, you're running unit tests,
that if you are a malware developer or a team of malware developers,
you can pick what term you want to use, you spend money developing your tools, you spend money
developing your capabilities, and you want to get an ROI on that. And so there's an economic
argument here that it doesn't cost very much to set up a bunch of VMs running the latest builds that are kept up to date of AV products,
EDR products, so on and so forth.
And you want to make sure that as part of your testing process that you don't get caught
before you ship this.
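The same continuous-testing habit works in the other direction for defenders: gate every release by scanning build artifacts with a locally hosted engine. A minimal sketch, assuming a local ClamAV install with `clamscan` on the PATH and a `dist/` directory of artifacts (both assumptions for the example):

```python
# Minimal CI-style gate: scan build artifacts with a locally installed ClamAV
# before shipping. Assumes `clamscan` is on PATH (exit code 1 means a detection).
import subprocess
import sys
from pathlib import Path

def scan_artifacts(build_dir="dist"):
    hits = []
    for artifact in Path(build_dir).rglob("*"):
        if not artifact.is_file():
            continue
        result = subprocess.run(
            ["clamscan", "--no-summary", str(artifact)],
            capture_output=True, text=True,
        )
        if result.returncode == 1:  # 0 = clean, 1 = detection, 2 = scan error
            hits.append((artifact, result.stdout.strip()))
    return hits

if __name__ == "__main__":
    detections = scan_artifacts()
    for path, report in detections:
        print(f"DETECTED: {path}\n{report}")
    sys.exit(1 if detections else 0)
```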
And so that kind of raises the question of, well, if the bad guys are able to test in advance, how do we
protect ourselves from someone who already knows that they're going to walk by? That's kind of the
unspoken truth that we're trying to talk about today. Even what you described, which is having
a bunch of VMs with the latest EDR and stuff in them, that's still risky, right? Because it's not
like the old days of little engines that would just sit there doing detections. These days, everything's instrumented,
everything's going back to a team, whether it's like, you know, in the case of your EDR vendors,
it's like, you're up against teams at Microsoft, CrowdStrike, SentinelOne, you know, you got to
make sure this thing really doesn't get detected. So, you know, there was the VirusTotal risk, where you'd submit something to VirusTotal
and then people would run it down. I think there's still a bit of a risk, even if you're
doing this as an adversary, that you're going to burn your stuff. So, I mean, to a degree,
I'm kind of surprised that people would still expose their, like, 0day malware to some of these
contemporary instrumented scanning engines. I mean, you still have to think about the fact is like a laptop that's in airplane mode
can still have a bad thumb drive plugged into it, right?
So like the fundamental challenge with the products is...
So what, you can just run them offline, right?
Yeah, I mean, I wouldn't run these VMs connected
because I don't know what the answers go.
If I was doing it, I wouldn't know what the answer
is going to be a priori at all.
And I think like that's fundamentally, you have to support it.
I will run a less than best effort attempt to secure a device just because I can't be on the internet.
You never know what's going to be connected to it, what people will share via AirDrop or so on and so forth.
The concern's real.
And so it's actually not, it should not be hard to test because the products have an obligation
to do what they're designed and sold to do,
which is protect people.
Yeah, so when I think of stairwell,
I kind of think of it as being akin to the tools
that are actually used by those companies,
you know, in their offices to, you know,
obtain information about
files and various artifacts and whatever, and do the analysis work that helps them drive their
products. But with Stairwell, you get to do that yourself. So, you know, I imagine you would have
had customers who have used Stairwell to play that cat and mouse game with crews that are trying to
do evasion. Like, why don't you walk us through a couple of examples there?
Yeah.
So I think you have those cases where, you know, the fundamental way that Stairwell works,
we try to build it from an architectural sense to make it resistant to attempts like this. So with Stairwell, when people install our file forwarder on a machine, it sends us a copy of any pre-existing or new executable
or executable-like files because we would consider Python scripts and Ruby scripts and stuff like
PowerShell scripts as executables under that definition. But the file forwarder has no real
security logic to it. When it sees a new file,
if we've never seen that file from that particular customer before,
it uploads a copy to us.
So that fundamentally,
like again, at that economic level,
changes the argument
that there is no feedback loop
to that machine in real time
that says,
Stairwell thinks that this file is good or bad.
It's simply just a-
Yeah, I mean, if you want to get that insight, you need to be a customer who's got access to
like a console, right? Exactly. You would need to be a customer who has access to a console,
but also even then, the adjudication is not necessarily going to be the same from customer
A to customer B. Exactly, because it depends what else has been in that environment and how
it's changed over time. Exactly. What's common here is not common there. So if you think about just taking extreme ends here,
what's common software on computers
on a offshore oil rig
is probably not the same software and files
that would be common at...
Weta visual effects kind of pops into my head.
So we're thinking of New Zealand for a second.
But they're going to have a different set of unique software footprints that would not be
found on an offshore oil rig. And so when you start thinking about that, you start taking into
account like the vertical and the space that's there, the risk factors change. It's much like,
you know, I often tell the story that if you're walking down the street and you find
a credit card and it has the name Bill Gates on it,
it probably has a pretty high credit limit. And what would set off an alarm for my sad little credit card would not set off the alarm for Bill Gates' credit card, right? It allows you to offer
a much more tailored approach. The challenge here is not necessarily to sit down and say,
we're going to have better detections. We always try to.
But the logic at the end of the day is to say, how do I remove the certainty from the attacker's ability to operate?
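To make the file forwarder behaviour described above concrete, here is a rough Python sketch of the "upload anything this customer has never sent before, with no local verdict" idea. It is purely illustrative: the extension list and the upload callback are assumptions for the example, not Stairwell's actual agent or API.

```python
import hashlib
from pathlib import Path

# Extensions treated as "executable-like" in this sketch. The real forwarder's
# criteria are broader (scripts count as executables too); this list is an assumption.
EXECUTABLE_LIKE = {".exe", ".dll", ".py", ".rb", ".ps1"}

def sha256_of(path: Path) -> str:
    # Hash file contents so "new" means new content, not just a new filename.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def forward_new_files(root: Path, seen_hashes: set, upload) -> None:
    # Upload any executable-like file this customer hasn't sent before.
    # Deliberately no good/bad decision is made here: the agent only collects,
    # so an attacker on the box gets no real-time feedback about detections.
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXECUTABLE_LIKE:
            continue
        digest = sha256_of(path)
        if digest in seen_hashes:
            continue  # already held for this customer, nothing to send
        upload(path, digest)  # hypothetical upload callback, not a real API call
        seen_hashes.add(digest)
```

The design point is the economic one above: the agent never tells the machine whether a file looks good or bad, so there is nothing on the endpoint for an attacker to study and play the cat and mouse game against.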
And so since we are preserving all those files we're collecting, we have a really unique capability of doing detection work.
But then we're also doing continuous retroactive analysis of those files as
well. And so that means as we change our models, if you think how often your EDR updates, if they're
distributing new updates every hour or every couple of days, whichever the interval is,
any new change in our own internal models and detection profiles, we re-evaluate the history of everything that has ever been or is on
a particular device. And so that means if we make an improvement and we realize, hey, this is a new
way to detect something bad, and we find that, we can actually come back to a customer and say,
three months ago, there was a RAT of some sort on this machine. It's not there anymore.
The machine's been re-imaged, but we have a copy of it.
And we know what else was happening on that machine around the point in time when that RAT was on there.
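As a sketch of the retroactive analysis being described, this is roughly what "re-evaluate the history of everything that has ever been on a device" looks like when a new detection lands. The StoredFile shape and the rule callback are illustrative assumptions, not Stairwell's data model.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Iterable, List, Tuple

@dataclass
class StoredFile:
    sha256: str
    device: str
    first_seen: datetime   # when the forwarder first uploaded it
    content: bytes         # retained copy, kept even if the host is later re-imaged

def retroactive_sweep(
    archive: Iterable[StoredFile],
    new_rule: Callable[[bytes], bool],
) -> List[Tuple[str, str, datetime]]:
    # Apply a newly added detection to every file ever collected. A hit on a
    # file first seen months ago becomes a "this was on that machine back then"
    # finding, even though nothing changed on the endpoint itself.
    findings = []
    for f in archive:
        if new_rule(f.content):
            findings.append((f.device, f.sha256, f.first_seen))
    return findings
```

Each hit carries the original first-seen timestamp, which is what lets a finding read as "three months ago there was a RAT on this machine" even after that machine has been wiped.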
That allows us to bridge.
There's a mental chasm that exists between threat intel, security operations, and incident response work.
And I often tell people, Stairwell is
security operations, threat analysis, incident response done well, because when you actually
look at the way we approach the problem, we're actually really connecting these three fundamental
areas together. We have a lot of threat intelligence, big data analysis at the core of our
system, but it allows you to do detections across time. And so then you start thinking about,
well, you get told about a detection that happened
two months ago versus detection that's happening literally two minutes ago.
You can kind of triage, diagnose, and debug all of this stuff in one spot so that you
effectively can turn any tier one SOC analyst into someone who has the equivalent of years
of experience working at like a high-end
incident response firm or a top tier threat hunting firm literally with a day or so of
training on how to use the product and start thinking like that. A lot of the hard work,
I think as we were talking earlier, you mentioned, startups are, you said it best. How did you put it about
edge cases? Well, yeah, it's my saying these days, which is that a startup, well, you know,
a software product is not its core features. You know, if you want to make it mature and something
that people buy, you have to turn it into a collection of solved edge cases, really, because
you can build a core product that's fine. And then, oh, look,
I've been using it for half a day and it fell over or doesn't do this or doesn't do that because of
this weird thing that popped up. Right. So, yeah, it's a collection of solved edge cases.
That's the hard part. That's absolutely the case. And I think the thing is in security,
security lives in the edge cases. The more we try and think of like there is a playbook to
solve every problem or address every issue, you know, it's again an area that I think bad guys almost bank on. When you start
thinking about like, I often think my mental model for a lot of the way security products work is,
if you think back to like old black and white prison escape movies or something, they're in the prison
yard walking along the wall. And for some reason, as long as that moving spotlight does not illuminate
them, they're invisible. No one will ever see them. And so you always see them trying to duck
down or hide from the spotlight. And so if you know the pattern of where the spotlight's moving,
you just walk around it. And I think that's not how the real world works. And so you
need to basically have ways to detect and see and observe things and also plan for failure, right?
Like no one catches everything. There are no magic bullets. And I think, you know,
security account executives get a bad rap because they will often like tell you that they solve all
of the world's problems. And I think one of the things that I really try and stand on with Stairwell is like,
look, I can't solve all the problems. No one can. However, we've baked into our design at that
fundamental architectural level because we are trying to build a platform that bad guys cannot
study and evade anywhere near as easily as you can with more traditional
solutions, we're bringing in the concept of incident response into a security operations
team. They can sit down and say, hey, there's this weird file in this machine. My EDR told me
there was. And then it's like, hey, Stairwell, can you enrich this? And our ability to go over and
say, oh, there were similar files on these other machines two weeks ago.
So that was really the question I was asking earlier, which is, can you think of examples where customers have actually used this to counter evasion in exactly like that scenario?
I'm guessing there have been instances where customers have had an alert on something that their EDR slash EPP has identified.
And then they've been able to pivot off that and find other variants that the EDR wasn't detecting?
So there was one that...
Or is the scenario more likely to be that you then find historical stuff that the EDR would catch now,
but didn't catch then because they didn't have that detection then?
Both.
I think there's been cases where,
geez, going back, there was a case earlier this year. Our research team wrote a publication about
VileRat. And it was undetected by things in the past. And so when we finally discovered VileRat
in our global object set, which is basically just an amalgamation of a bunch of different
malware feeds.
We were able to go search for that across our customer environments. And we were able to go to a couple of customers and actually say, hey, several months ago,
you had VileRat on these machines.
And here's exactly where it was and so forth.
And it actually kind of requires a bit of a mental leap to understand what do I do about that,
right? Because immediately the first thing is like, I know how to handle if it's there right
now, like quarantine the machine. But then you start thinking about, well, there was a RAT,
a malicious RAT, on a couple of systems months ago. Those systems were acting a bit odd,
so they got re-imaged. But now you actually still do have to think about, like, what is
your incident response plan? Just because the immediate threat is not there, that doesn't mean there was not a risk to the business or anything along those lines.
Like the question I have is, what did those machines have access to? One that segues nicely from that would be, there was one particular case where we had a customer
who was running a top tier EDR platform, and someone plugged a thumb drive into that machine
and the malware that was on there was APT malware. We'll leave the country and all that off.
But it did not run.
And since it did not run, their particular EDR did not generate any sort of an alert or an alarm about it.
But it's nice to know, isn't it, that someone tried.
And I was just thinking, as you were saying that, too, historical information about boxes that got popped.
It's probably useful to go back and have a look. Well, how did that box get popped? What other
compensating controls can we put around these sort of things? Like that shouldn't happen, right?
Exactly. I mean, exactly. I think that's exactly the point where we're going. And the other part
for that was that machine with the thumb drive, that thumb drive was then plugged into another
device that was not capable of running an
EDR platform.
Then the question is, what happened on that other device?
Once you start thinking about it like that, there is a... I hate to bring in... I hate
when people make biological analogies with cybersecurity, but there is some sense of
a herd immunity.
It's not just that one machine is one cell
and then you have another machine being another cell.
Not all the cells are equal.
And so while, if that malware had run on this machine,
it would have been flagged instantly,
because it was an older strain of some malware,
that machine was the point to catch it.
And then what happens when it's plugged into another machine
that's not able to run a sophisticated EDR package on there
due to resource constraints or so forth, did that execute on that machine
and that was the actual target?
You have to think about these things in terms of a network effect and not as isolated
instances. And that's, for us, one of the
true value parts of the platform as we've built it.
Everything is connected.
Everything is connected.
All right, Mike Wiacek,
thank you so much for joining us on the show
to have a little bit of a chat
about the current state of play
when it comes to adversaries
trying to be very sneaky,
which they want to do.
Great to see you as always
and we'll chat with you again soon.
Cheers.
That was Mike Wiacek there with a
chat about malware evading
detection. I do hope you enjoyed that
and you can find them at stairwell.com.
Don't forget
you don't need to be sending
all of your company's files to Stairwell
to actually use the platform. You can
kind of use it like a private VirusTotal
if that's something that interests you.
So yeah, go and sign up at stairwell.com
and you can have a play with the tool.
But that is it for this week's show.
I do hope you enjoyed it.
I'll be back next week with a Snake Oilers edition
of Risky Business.
But until then, I've been Patrick Gray.
Thanks for listening. Thank you.