Utilizing Tech 3x14: What Scares InfoSec About AI? With Girard Kavelines
Episode Date: December 14, 2021

AI is coming fast to the information security world, both in terms of tools and threats. In this episode, InfoSec professional Girard Kavelines discusses the reality of AI in security with Chris Grundemann and Stephen Foskett. With AI assistance on both sides of the security divide, will we see an escalation of attack and defense? On the defense side, threats have evolved into advanced attacks that look like system processes and legitimate connections, and machine learning can help process more data than ever before. ML-based systems can also judge unknown threats that a rules-based system would never catch. On the other hand, we are already seeing AI used to generate more effective attacks, from phishing to fuzzing APIs.

Three Questions:
Chris Grundemann: Are there any jobs that will be completely eliminated by AI in the next five years?
Stephen Foskett: Can you think of any fields that have not yet been touched by AI?
Amanda Kelly, Streamlit: What is a tool that you were personally using a few years ago but you are not using anymore?

Links:
TechHouse570 - Cisco Champion Highlights

Guests and Hosts:
Girard Kavelines, Founder of TechHouse570. Connect with Girard on LinkedIn and on Twitter at @GKavelines.
Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris at ChrisGrundemann.com and on Twitter at @ChrisGrundemann.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 12/14/2021
Tags: @GKavelines, @SFoskett, @ChrisGrundemann
Transcript
I'm Stephen Foskett. I'm Chris Grundemann. And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI, the podcast about enterprise applications for machine
learning and deep learning and other artificial intelligence topics. Today, we're going to dive
deep into, well, kind of some scary stuff. In fact, that's the topic. That's kind
of where we're going with this conversation, right, Chris? That's right. Yeah. So we're
really talking about what is scary about AI from a cybersecurity professional's perspective,
right? So what are all the little things that are hiding under your bed when you're worried
about security for your company, organization, even at your house? What are the things that AI could do to throw you off, right? And I think there's some talk probably
going to be about shadow AI and AI that you're not even aware of. I also think one of the scariest
things is the kind of spy versus spy, AI versus AI war that I think we're going to see in the
coming years. But obviously, we'll dive into that. Yeah, this is something that's come up quite a lot
at our AI field day, but of course,
also the security field day event, networking field day, where we're kind of confronting
all these AI applications out there.
And everybody's like, oh, this is the best thing ever.
AI is great.
It solves all your problems.
And did somebody ask the security pros?
So let's ask a security pro.
So we have invited on our episode today, Girard Kavelines, who, well, introduce yourself.
Hey, what's going on, everyone?
I'm Girard Kavelines.
I'm the founder of TechHouse 570.
I'm also a managed services systems analyst for Helium Systems LLC.
You can find me on LinkedIn at Girard Kavelines and on Twitter at @GKavelines.
So Girard, we have talked to you in the past about security and
networking and all sorts of good stuff. But here's the thing, you're one of the IT pros out there
doing the work on a daily basis. So I guess let's just put it to you. What scares you about AI?
Well, I can tell you for one thing, I mean, right, the ongoing joke has always been Skynet watching, but, you know, AI is one of those topics where the more I dive into it and I
learn and I get stronger and I'm really trying to understand it, it's terrifying, right?
Because it's almost a sense of it has a mind of its own, you know, the way it can capture
data, the way it monitors our platforms, our infrastructures. Our monitoring tools are heavily,
heavily based a lot
more on AI-based technology. So, seeing that, it gets to a point where you could train it,
you could dictate it, but when you get it to a point, it almost does the work for itself. I mean,
barring some slight monitoring on your part and, you know, looking at a dashboard,
it really takes in a lot of analytics, and it feeds on and eats that data. It takes it, and
again, it kind of has a mind of its own. So I think that's where one of the biggest challenges
for me is really understanding where it could go. And I know automation, as we move forward in
technology, that's becoming bigger and key, but when you have something that powerful, I mean,
you kind of wonder where, where could it go? You know, I mean, that's today we're in 2021,
where could that be in five years? How can AI expand out?
Absolutely. So Girard, that's really
interesting. I want to dive in and kind of maybe tease apart a couple of things right off the bat.
Because, you know, obviously, we're talking about security. But I think, you know, as far as AI
being involved in data science and data collection, and those kind of things, which you alluded to a
little bit, I look at that as more privacy issues, right? And I think it's interesting to talk about
those separately, because privacy and security can sometimes be at odds with each other. And so
it's always interesting for me to kind of in conversations to point out the differences
between privacy and security. And so I wonder, you know, looking at that privacy aspect,
is that a big part of your concerns or not so much? That's been a big part of it, especially
to keep in mind from a security standpoint, you know, as we're monitoring newer threats and things
out there, these attackers have, and they are, they're getting more and more advanced every
day, right? With the different types of threats and ransomware attacks we're seeing on all these
enterprise organizations. But you got to ask yourself, well, if, you know, we're utilizing AI
for certain tools and technologies to help kind of combat that, you got to imagine that they're
using those same tools and AI-based tactics as well. So who's to say we don't see more of AI-based
threats in the coming future if they're not already existing? Yeah, that's a big fear of
mine, or at least something I'm definitely keeping an eye on. And you see the beginnings of this,
or maybe it's an analogy of this in the financial markets, right? And so the very early
quants and quantitative traders and algorithmic traders started being able to kind of game the system, so to speak, right. Using AI to understand
economic indicators, but also the movement of the market and kind of get ahead of that,
right. By, by watching things. But then what happened is everyone started doing that because
they had an edge. And so now you've got AIs trading against AIs and you see things like
flash crashes and some strange things like that. You've also seen in pricing, this happens where you've got maybe Amazon and Barnes and Noble
are both running pricing algorithms that are trying to beat each other. And maybe Barnes and
Noble always wants to be a little bit more expensive than Amazon. And Amazon wants to be
a little cheaper than Barnes and Noble. But as Barnes and Noble raises the price, Amazon does.
And you'll see this every now and then you'll go and find a paperback book that's $1,500 on Amazon.
And obviously, this is AI gone rogue.
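That runaway-pricing story is easy to reproduce. Here is a minimal, purely illustrative sketch in Python; the multipliers, starting prices, and loop length are invented for this example and are not taken from any real incident:

```python
# A toy simulation of the "two pricing bots chase each other" story above.
# Pure Python, no external data; all numbers are made up for illustration.

def undercut(competitor_price: float) -> float:
    """Seller A's rule: always be a little cheaper than the competitor."""
    return round(competitor_price * 0.99, 2)

def overprice(competitor_price: float) -> float:
    """Seller B's rule: always be a little more expensive than the competitor."""
    return round(competitor_price * 1.27, 2)

a_price, b_price = 20.00, 25.00
for day in range(15):
    a_price = undercut(b_price)   # bot A reacts to bot B
    b_price = overprice(a_price)  # bot B reacts to bot A
    print(f"day {day:2d}: A=${a_price:,.2f}  B=${b_price:,.2f}")

# Because 0.99 * 1.27 > 1, every reaction cycle ratchets both prices upward,
# which is how a paperback can end up listed for $1,500.
```

The same feedback-loop dynamic is what the hosts worry about when defensive and offensive models start reacting to each other.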
And so bringing that back to security, right?
When we're using AI to secure our environment,
and attackers are using AI to attack the environment,
at some point, you've got this kind of,
what I said earlier was like spy v. spy, right?
I can think of the little black blob and the little white blob
trying to outdo each other.
And where does that leave us, right? How secure can you actually, you know, can you actually
become in an environment where the machines are running things and we don't even know what's going on?
Yeah, and I think part of that too, and I was just, you know, recently speaking on this on an
episode of Cisco Champion Radio, but where it's at is you have those conflicting matters,
right? And a lot of organizations can't
invest or, you know, develop just a dedicated SecOps team to really focus on those threats.
So now you're kind of double and tripling back. You have to find the threat, quarantine it,
remediate it. And for the most part, you know, the inevitable saying, cut the head of the snake off,
right? But where is that? So now you're diving in. And like I said, more threats are coming into your environment. It's kind of like a double-edged sword, but it is interesting, that conflict, you know, with the good AI versus the bad AI, if we put it that way.
Yeah, and I think that it's only, it's inevitable that we're going to be seeing that because, and it will, it's got to lead to this kind of escalation where you basically have, you know, AI systems out there sort of fuzzing interfaces and applications and networks and trying to figure
out ways to get through. And at the same time, we've got AI-assisted tools on those networks
trying to keep unknown threats from coming in. So, I mean, if we want to kind of focus for a second
on one or the other, let's start, I guess, with something we've talked in the past on the podcast, which is basically how AI is able to help defend the network from threats.
And for me, that comes down to the ability for machine learning to do a couple of things.
Number one, to deal with a lot of data.
You know, you can't expect an IT pro to be monitoring firewall logs by hand and catch, you know,
literally, you know, thousands and thousands of things rolling past them every second.
We're actually seeing an escalation of data.
In other words, we're feeding more into, we're capturing more and we're feeding more into
the systems because we can handle it.
Whereas in the past we couldn't.
So that's one thing.
On the other side of the equation as well, machine learning might be able to detect out of the ordinary things that a normal rule-based
system might not. Because one of the things machine learning is really good at is kind of
taking a stab at quantifying, is this this or is this that? You know, is it a hot dog? Is it not a hot dog? Well,
guess what? A lot of attacks may not look like hot dogs, but a machine learning system might
actually be able to identify them better than just a simple rule. So let's talk about that.
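To make that "better than a simple rule" point concrete, here is a minimal sketch of anomaly scoring over connection records, assuming scikit-learn and made-up feature names; it illustrates the general technique, not any particular vendor's product:

```python
# Score flow records for anomalies instead of matching a fixed signature.
# Feature layout (hypothetical): [bytes_out, duration_s, dest_port, packets]
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "normal" traffic standing in for historical firewall/flow logs.
baseline = np.random.default_rng(0).normal(
    loc=[50_000, 30, 443, 400], scale=[10_000, 10, 1, 80], size=(5_000, 4)
)

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def rule_based(conn):
    # The kind of static rule a human might write by hand.
    return conn[0] > 1_000_000 or conn[2] not in (80, 443)

suspicious = [9_000_000, 3_600, 443, 90_000]   # huge exfil-like flow over 443
odd_but_small = [120_000, 300, 443, 2_000]     # would sail past the static rule

for conn in (suspicious, odd_but_small):
    print("rule flags:", rule_based(conn),
          "| model flags:", model.predict([conn])[0] == -1)
```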
How is AI being pitched at you as an infosec professional, and how realistic are those pitches?
I could tell you that a lot of them, I mean, a lot of the ones that I see, they come at me more so, okay, so let's break it down, right?
Threats aren't what threats used to be, and I think we know that, and I brought this up more than once, threats have gradually evolved from a workstation level to the enterprise level and from a security
standpoint that we see it at today. But more often than not, they're getting so advanced that a lot
of the ones we see mimic actual system processes and services. So how do you really detect those?
And I think that's one of the many key areas that you see AI really heavily
utilized in a lot of endpoint solutions and in EDR solutions, is you see them really being
addressed in real time and pulling like self-replicating, like it gives you a full view
of everything. But for me, I mean, that's kind of been one of the hardest challenges too, that I've
also seen as you're right. Cause you know, when you're a one man show, or even if you're working
on a dedicated team, we're not dedicated to just that section we're not dedicated to just really finding
those threats we have to wear 800 hats so kind of minimizing them and finding a good solution that
really ties that in is key yeah one area where i think i've seen at least the glimmers of ai right
and and this may be taking a broad view of how we define artificial intelligence versus just an algorithm or statistical regression or something. But
the realm of user behavior analytics and user and entity behavior analytics, right? The UBA or UEBA
tools seem to be an area where we've kind of gone in and said, okay, instead of just looking at
alerts that are coming from devices, let's actually see what's happening on the network, on the applications, and track when that goes off baseline. It's knowing the fingerprint of a good
actor and looking for anything else, which I think is a start into this kind of really
expanding the efficacy of AI in the enterprise. What do you think, Girard?
I see that more so. You're trying to cross-compare and kind of see what's familiar,
what's not. And especially when you have certain processes or tasks and things that aren't identifiable, I think that's one of the cooler features of AI, right? And a lot more pieces, again, of endpoint solutions and security software out there have defined that it's not just about finding threats in real time and updating definition servers now; they have to think smarter to stay a few steps ahead of the attackers as best as they can.
But that is something that I've seen quite a bit.
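A stripped-down sketch of the UBA/UEBA idea discussed above might look like the following; the users, session fields, and the three-sigma threshold are invented for illustration:

```python
# Learn what "normal" looks like per user, then flag activity off that baseline.
from statistics import mean, stdev

# Hypothetical history: (user, login_hour, mb_downloaded) per session
history = [
    ("alice", 9, 40), ("alice", 10, 55), ("alice", 9, 35), ("alice", 11, 60),
    ("bob", 14, 200), ("bob", 15, 180), ("bob", 13, 220), ("bob", 14, 190),
]

baselines = {}
for user in {u for u, _, _ in history}:
    rows = [(h, mb) for u, h, mb in history if u == user]
    hours, mbs = zip(*rows)
    baselines[user] = (mean(hours), stdev(hours), mean(mbs), stdev(mbs))

def off_baseline(user, hour, mb, z=3.0):
    """True if this session deviates from the user's learned behavior."""
    h_mu, h_sd, m_mu, m_sd = baselines[user]
    return abs(hour - h_mu) > z * h_sd or abs(mb - m_mu) > z * m_sd

print(off_baseline("alice", 10, 50))   # normal working pattern -> False
print(off_baseline("alice", 3, 900))   # 3 a.m. bulk download -> True
```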
Yeah, and that's kind of what I was trying to get at with this idea that ML can see an unknown threat,
something you've never seen before.
And sometimes it might be able to put it in the right box.
Another question that I have for you as an InfoSec person
is quite frankly, one of the challenges for machine learning is that you do get false negatives.
In fact, you get a lot of false negatives.
And you might also get false positives, of course.
But from the false negative perspective, my question is, how big of a problem is that in information security?
Because that could be a big problem in, you know, autonomous driving.
But how about in information security?
Are false negatives a big deal to you?
I try to take them with a grain of salt.
So what I mean by that is, you know, it's more so, number one, it depends on the piece
of software you're utilizing, right, the endpoint solution.
So it's almost like anything.
And I use this analogy, I kind of blanket it, but there used to be software, well, there
still is, I think.
It's called Dragon Software, right?
And the point of Dragon software was voice recognition, speech-to-text.
So you have to train it.
And I remember working at retail and that was one of the things.
And I was explaining that to customers, like when you have it, it's not going to work right
away the way you want it to.
What do you mean?
Well, that's what it is.
You have to train it to get it accustomed to your voice, the sounds, and then, you know, it'll pick up things over the
course of, you know, multiple times using it. It's the same thing, right? And that was like 15 years
ago or so. So same analogy, just different situation. The better, more you tweak the
policies, the shaping, you know, what hashtag, what hash values to look for in certain key areas,
the better the software is and the better the endpoint solution
is in detecting those threats.
Because a lot of the times too,
nine times out of 10, it'll go,
oh, hey, just a very basic,
it'll flag explorer.exe, it's very broad, right?
And it's just you opening a Chrome task
or so nine times out of 10,
that is a false positive.
And you just take that with a grain of salt,
but it's other ones like isxe.exe
or certain ones that may stick out.
It helps you filter it more. But again,
as you shape and tweak those policies, it gives you a lot more flexibility as far as knowing,
you know, kind of what's a legitimate threat and what's not and what to really dive into.
Because again, when you're a one-man show, or even if you work on a small to medium-sized team,
how much time can you allot to diving in without, you know, taking away from your other day-to-day
tasks? Yeah. And that's really one of the recurring themes that we get here is that the AI should be a co-pilot. It should be an assistant.
It's not taking the place of an InfoSec pro. Also, I do want to point out that our other
co-host here for utilizing AI was actually one of the original architects for that
Dragon Nuance software. So, hey, hi, Frederick. Chris, I see you. You want to jump in?
Yeah. So I just wanted to kind of dovetail on that because I think Girard's right that there's definitely this period where you're going to see false negatives when using AI for anything really, but for security in particular while you're training it, right?
Because it has to be trained for that specific environment to make it tailored for that environment. And then, but I also think that in the longer run, this is another reason for defense in depth, right? Is that, you know, if you've got AI at the different layers, just like a human looking at the different layers, you can miss something. And so I think the way to combat AI false negatives is the same way to combat human false negatives, which is layering your security tools and making sure that you're looking at things from at least two different perspectives, you know, with two different meshes. So you're seeing what's going on there. And then, you know, speaking of that, I wonder if another area where, and I don't know
if I've seen any practical applications of this yet, but it seems like deception technology and
kind of honeypots or honey tokens and that kind of area would be a great place for AI to live
and be useful in kind of watching a set of honeypots potentially, instead of people
kind of waiting for those alerts, you could have AI potentially. I don't know if there's an
advantage there or not. Girard, have you seen anything like that? I mean, I personally haven't,
especially from that area of security. I haven't seen much, you know, in the means of like honeypots
and different, you know, tactics and things of that nature. So I personally haven't seen them
in any day-to-day environments now. Yeah, that is one of the things that we've seen actually quite a lot in the past in other technologies, but you're right.
I don't think I've seen an ML-based one of those, but man, that could be fun, couldn't it? You know,
train a system. Oh my gosh, let's start a company. Train a system on what kind of responses an API gives, and then let it
just give fuzz responses back at you. I think that would be hilarious. And you could build
something that would just be spewing nonsense at an attacker, and it would be all credible
or pseudo-credible nonsense. Man, that's a great idea.
I'll take full credit for that.
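For what it's worth, the "pseudo-credible nonsense" honeypot Stephen is joking about can be approximated with nothing but the Python standard library. This toy version just randomizes fake JSON fields and logs every probe; a real take on the idea would swap the random generator for a trained language model. The field names, port, and log file are placeholders:

```python
# A decoy HTTP listener that answers any request with plausible-looking fake
# JSON and logs what was probed. Standard library only.
import json, random, logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="honeypot.log", level=logging.INFO)

FAKE_FIELDS = ["user_id", "session", "balance", "api_key", "last_login"]

class DecoyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who knocked and which path they tried.
        logging.info("probe from %s path=%s", self.client_address[0], self.path)
        body = {f: random.randint(1000, 9999) for f in random.sample(FAKE_FIELDS, 3)}
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Keep stdout quiet; everything goes to honeypot.log instead.
        pass

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DecoyHandler).serve_forever()
```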
But it also then turns over my mind,
I think, Stephen, to where you were going is kind of the second point,
which is the other side of this coin
we've been talking about,
which is attackers using AI.
And I think just like we've seen,
or just like we could see AI being used in security tools
to go hunt attackers and go
look for attackers. I think there's a possibility, right, for AI to be used by attackers. And I'm
wondering if that's something, Girard, have you seen, in the wild, AI being used? Are there attacks
that are known that are AI backed in any way? Right now, no. And again, because I think more so where we're still facing
is we're still heavy into the ransomware and a lot of them, like diving into them,
even articles I've seen, they're not really AI based, but that kind of brings me back to my
initial question I mentioned a few minutes ago, where AI based attacks are, which they're very
minimal that I've personally seen. I mean, there could be more out there that I just haven't been
exposed to in the organizations I've been in.
But that's not to say that in the next year, two years, even five years, 80 or 90% of those attacks
won't be a lot more advanced than ransomware.
Because as I said, these attackers have the same tools and technologies we're using, but
they're using them for a much more lucrative aspect, right?
And to kind of pull that data and, you know, to mine for any kind of information they can.
And I think that's why another key piece is segregating your data, which I can go to that, but that's also key, but not that I've seen,
you know, but I mean, that's, that's not to say that it's not becoming a growing threat.
It's brewing and it just hasn't risen to the surface yet.
Sure. And we may not even know, right. There could be attacks that are happening. There's
some AI behind it. We don't necessarily know that. As Stephen was talking about, you know,
the, the potential for using machine learning to generate a more realistic honeypot and kind of spewing things at the attacker that were actually reasonable and sensible.
I immediately thought of spam, which is maybe not the biggest cyber attack out there.
Most of it's just nuisance.
But that is somehow how a lot of the malware gets in, with folks clicking on links in spam.
And I'm just thinking of something like GPT-3 that can really create at least snippets of human sounding text.
Because in the previous ones, right, when you used to get those, what was like the Ethiopian king who had some lost fortune, he was trying to get you to recoup.
Most of them were pretty badly written.
So even if they were written by a person, it didn't seem like they were.
And so I can see potentially using machine learning with natural language processing to create better spam as one potential attack vector. We've also seen AI used to generate text that passed a plagiarism filter, but was plagiarized from our sites, in order
to run a kind of Google AdWords scam. So basically, they're ripping off, you know, thousands and
thousands of websites of blogs and news sites and so on. But they're using machine learning to
generate text that isn't the same. So it passes the plagiarism tests
and gets counted as new text. I could totally see that being used in phishing attacks and so on.
I could see a future where right now phishing is a very careful task. You have to very carefully
decide what elements do I need to include in order to make this thing look good. One of the reasons that spam works so well is because it's
spam, because you can just send out a ton of it. And even if your response rate is like one in a
million, it doesn't matter if the million are free. Well, if we could make phishing be or not,
God, if people make phishing something that can be done in a one in a million
kind of way, that would be, I think, a major escalation and would be a major entry point,
because you could literally have a phishing, machine learning phishing bot that would like
target every employee at this company with reasonable-sounding text. And sure, most of
them would come out totally ridiculous, but some of them, some of them might pass the test. What do you think, Girard? I definitely could see that. Um, you
know, especially more and more, because as I said, the more advanced these attacks get, the more it
gets to the point, like, I'm just going to say it if I haven't already, you know, you're not going to be
able to recognize what's a threat and what's not, and then that creates bigger challenges, like a trickle-
down effect, right? The more advanced, the more granular the threats get.
You can't separate the difference between a legitimate process and an infected process that you actually open, like executable code or a Trojan that has executable code.
And then that presents new challenges for the definition updates and kind of how these endpoint solutions and these softwares update in real time, because you won't be able to.
And then it's just a trickle-down effect. So I could definitely see that happening for sure.
Yeah. And that's exactly the fear, right? I think the biggest fear perhaps is that you won't be able
to tell the difference and the difference become less and less, especially if there's, you know,
valid or good machine learning behind this. You know, in that example that Stephen gave with
phishing attacks, if there's machine learning, that's able to learn which combinations of words get the best responses
and keeps building on that the same way that, you know, Facebook and Twitter and everybody
have, you know, tuned their algorithms to get clicks and to get likes and get comments.
If you did the same thing with phishing and malware, that's pretty terrifying
indeed. Yeah. I mean, I mean, I could definitely foresee that.
I think the way too it's going,
and I'll give you an example.
A lot of organizations, more so the one I was just at,
were doing this a lot more frequently
with those phishing campaigns.
Because believe it or not,
we're talking about AI and threats,
but at the end of the day too,
a big portion of all this comes from the end user, right?
So it comes down to the end user, educating them, because a lot of them don't know, you know, how many of
them click on those links daily, how many of them, it's something very basic. So all the code could
be out there, but it just takes one person or it takes multiple who don't really understand, okay,
well, how do we really approach this? You know, we had a few people like, you know, kind of respond
back to the tickets, like, hey, we set those phishing campaigns up, but they're designed to not be more of a tool to yell at people or to,
you know, like let them know, like, this is, you shouldn't do this, but to educate them.
Right. Because again, they don't bear the knowledge that we know they're not the experts
in this. So that's our job, to kind of guide them. Yeah. And that's, I think, maybe where
machine learning can come in as well is the ability to, to detect sort of the, the unknown,
undetectable things, you know, the idea that maybe you can train it on phishing campaigns,
and, you know, and then have it sort of say, you know what, this looks an awful lot like phishing.
I mean, obviously, I'm anthropomorphizing it. But, you know, to have a machine learning system that has been trained on, you know, thousands and
thousands of phishing attacks and what they look like, you know, there, you know, you might have
some false positives, you might have some email blocked or whatever. But I think that in many
cases, these attacks do look similar enough that a system might be able to be trained on it.
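As a rough sketch of what "train it on thousands of phishing attacks" could look like in code, here is a toy text classifier using scikit-learn; the five example messages and labels are invented, and a production system would need far more (and far better) training data:

```python
# Train a simple phishing/legitimate text classifier and score a new message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked, verify your password here immediately",
    "Urgent: confirm your payroll details to avoid suspension",
    "Wire transfer needed today, reply with the account number",
    "Team lunch moved to noon on Thursday, see you there",
    "Here are the meeting notes from this morning's standup",
]
labels = [1, 1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(emails, labels)

test = "Please verify your password to keep your account active"
print(clf.predict([test])[0], clf.predict_proba([test])[0])
```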
But this leads me to the next
thought, which is what about fuzzing? And of course, a lot of security breaches start
by basically just knocking on as many doors as possible until you find one that opens or you find
an unexpected door that can open. You know, so we go from, you know, something as simple as Nmap,
you know, go look at all the ports on the system, to something much, much more advanced.
Well, that's another thing that you could really train an artificially intelligent system
to do, right?
I mean, you could train a system to basically attack everything.
You could train it with attacks that have worked in the past and try things like that
with other things.
You know, maybe, you know, these are all the vulnerable ports that were found on all the systems that we know of.
What can this tell you about vulnerabilities generally and what we should be looking for
in the new, as new software is released?
I could totally see that as another opportunity for the bad guys to use
machine learning against security pros, right? I mean, basically this kind of unbridled creativity
that you can get from something as simple as a GPT-3 that can come up with all sorts of crazy text.
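On the defensive side of that same idea, fuzzing your own service before an attacker does is straightforward to sketch. The example below assumes the requests library and a hypothetical local test endpoint; the payload shapes and loop count are placeholders, and the point is simply to flag anything that does not fail cleanly:

```python
# Throw generated junk at an endpoint you control and flag unexpected failures.
import random, string
import requests

TARGET = "http://localhost:8000/api/login"   # your own test instance only

def random_payload():
    junk = lambda n: "".join(random.choices(string.printable, k=n))
    return random.choice([
        {"username": junk(5000), "password": "x"},               # oversized field
        {"username": "admin'--", "password": junk(8)},            # injection-ish input
        {"username": None, "password": {"nested": [junk(3)]}},    # wrong types
        junk(64),                                                  # not JSON at all
    ])

for i in range(50):
    payload = random_payload()
    try:
        r = requests.post(TARGET, json=payload, timeout=5)
    except requests.RequestException as exc:
        print(f"case {i}: transport error {exc!r}")
        continue
    if r.status_code >= 500 or "Traceback" in r.text:
        print(f"case {i}: unexpected failure {r.status_code} for {str(payload)[:60]}")
```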
Does that scare you, Girard? Yeah, and that's, too, that's where I just wanted to chime
in there. You know, as you mentioned, it gives them the opportunity to kind of take that source code and then manipulate it, right? Because they have vast amounts of time, and it's not that hard to take it, mask a threat, and then put it out there. And
especially again, making sure you're, you know, your source code is credible when you're updating,
you know, those EDR solutions. Because again, you get the wrong one, you're unpackaging that
threat in there. And then that creates a whole other slew of problems, which thankfully,
I have yet to face and hope I hope I don't. And another thing actually that occurs to me
is that there's actually another whole other world
of vulnerabilities opened up by the very AI systems
that are being deployed in enterprise.
In other words, companies are going to be bringing in
machine learning systems, big data sets,
all sorts of things that they may not actually be
all that familiar with, new tools, new techniques, new approaches to doing the core business of the
business. And of course, anytime there's a whole bunch of new products that are deployed, there's
a whole bunch of new vulnerabilities. So this actually has nothing to do with AI or ML.
It's just basically the fact that you're bringing in a whole bunch of new tools that are developed by people who know all about AI and ML, but may know nothing about security or very little about security.
They may not have the lessons of the past.
Frankly, that kind of scares me.
What do you think about that?
A whole new world of opportunity for these attackers?
It's terrifying to me, you know, and as much as I can keep up with it, and as much as you know,
we can kind of monitor at the end of the day, something's going to get through. Not on our
watch or on my watch, but just in general, right? Because as I said, if we don't have dedicated time
and more so resources and even the hands to do it, it becomes hard. I mean, how do you juggle
being, again, narrowing down the threat, finding the threat, remediating the threat, and then kind
of originating where did this come from and how do we prevent it from happening again? Much less,
you know, the much more hierarchical and bigger picture of getting it out of our infrastructure.
So I think one of those things that, I mean, the more it goes on, it terrifies me. And as I said,
we just have to be as diligent as forward thinking as we can
and being proactive as these newer threats arise.
But trust me, it's terrifying.
It scares the crap out of me.
I agree.
This whole new attack surface around AI and AI applications
and whether it's using AI applications in your IT stack
or developing AI applications for your customer facing,
whatever you're doing,
both of those things are just being done, right, in the last few years.
So I don't think any of us know 100% how to secure everything.
And it definitely opens us up to new attacks.
On the other side of that coin, you know, one of the things we're seeing on call it
the good guy side of the house is something like GitHub's co-pilot, where you've got this
AI as a pair programmer.
And now that we've been talking about this, and Girard, you mentioned source code a couple of times, and that just made me think of coding and writing the vulnerabilities themselves
or writing the things that will exploit the vulnerabilities. And so if you're talking about
writing malware, and now all of a sudden, not only can you be a script kiddie and just go copy
somebody else's code, but you can use an AI pair programmer to write better code.
I mean, that seems like a potential explosion of malware on the scene.
Is that a realistic fear, do you think?
I think so, too.
And believe it or not, I haven't gotten to dabble with Copilot much.
I want to, but I haven't got to.
But, you know, thinking about it, you know, this discussion, it really does.
Because it's like anything, you're opening that proverbial can of worms and you're opening those endless possibilities to being exploited. So somebody who has no scripting knowledge,
who's never touched, you know, PowerShell, who's never touched a batch file in their life, can go
ahead and take this, manipulate it, tweak it, and then just deploy it out there. And the damage it
could do could be substantial, you know, at any level. So again, I think it's one of those things
where we have to be more proactive in how we do it. But I know that a lot of developers are doing as best of a job as they can, but there's going to be that one that gets through.
And then all it takes is that one.
Yeah, that leads me back to some of the other solutions we've seen here, Chris, on the podcast of, hey, let's have something that's easy to use that anyone can use.
You know, we can create an AI-based application and deploy it in minutes. We can drag and drop systems. These are literal quotes from some of the guests that we've had over the last three seasons talking about how easy it is. I wonder how many of them have considered the information security implications of these things.
I think it's probably more likely that they considered storage than information security,
frankly. And I say that as a storage pro who gets ignored all the time. So Girard, thank you so much
for this, you know, kind of this perspective from somebody inside the infosec industry.
We're now coming to the part of the podcast where we transition a little bit into something
different. So this is time for three questions. Tradition started last season, and we're carrying
it through with a little bit of a twist. We're going to be bringing in a question from a previous
guest as well. So a note to the
listeners, Girard has not been prepped for these questions ahead of time. So we're just going to
get an off-the-cuff answer right now live. So Chris, go ahead and ask your first question.
Yeah, so I think we've established that cybersecurity professionals are not going
to be replaced by AI anytime soon. But Girard, in your opinion,
are there any jobs that will be completely replaced by AI
in the next five years?
I think if I was giving my honest opinion on it,
I think more so, not even, well,
I really have to think about that.
But I feel like a lot of the jobs that could be
are more so like specific analysts,
like a lot of analyst-type roles could be potentially one, because if you take a look
at it, right, they're breaking down that source code and all of that. It's, you know, it's hard,
right. To really think about it. I mean, we can always say like general stores and things like
that, but for me, I'm not really too sure. Like I couldn't give, you know, I'd have to really
think about it. I'm not too sure, but I can tell you that, you know, I mean, anything that usually requires a lot of that data, that it could be self-automated, right?
Like you're going to see AI.
And again, the more you tweak it, it can pull that data.
It might be able to do it a little faster.
So one of the things we talked about as well was you were saying sort of where you have and haven't seen AI.
So I'm going to ask you the opposite side of Chris's question.
My question is, can you think of any fields, and not just in IT, but just in general, can you think of some fields that have not yet been touched at all by AI?
They have nothing to do with AI.
Some fields? I want to say manufacturing, I haven't seen much.
And now I have some friends that, you know, work in factories, and they've let me know, I've talked to them from their technology standpoint, and they're
still dated, right?
But I think manufacturing hasn't really seen a whole lot of, it could see more growth in
the AI space.
Healthcare, for one, because, you know, I've worked
in the healthcare industry. They could definitely use a lot more AI-based tools and
solutions to move it forward. Because as I said, especially with everything going on out there,
I think that would be key. Cool. Yeah. Cause that's one of those things where, well, there,
there you go. Listeners, you got some ideas on where to develop your next system. So finally, as promised, we're going to use a question from a previous podcast guest. The following question is brought to us by Amanda Kelly, one of the co-founders of Streamlit. I would like to know, what is a tool that you were personally using a few years ago, maybe you were very hot on, but you find you're not using anymore, and why?
Well, again, a very common tool, and again, it's just one I wrote about a while ago, but I stumbled upon it by accident. And the more I dived into it, I really, and that's kind of one of the big reasons, just one of the smaller portions of
the bigger reason why I really took a genuine deep dive interest into threats and security
was because of how granular it could break it down, what it was capable of, how it pulled
definition updates, how it detected in real time. And that was something, I haven't used it in
years, but it was just something that was always like a callback. That was definitely a big piece of software that I used daily.
Well, thanks so much for that, Girard.
We look forward to hearing what you might have as a question for a future guest on Utilizing
AI.
And if you, the listeners, want to join in on this, you can.
Just send an email to host at utilizing-ai.com, and we'll record your question for a future
guest.
So Girard, thank you so much for
joining us today. Where can people connect with you and follow your thoughts on enterprise AI?
Or is there something you've done recently that you want to give a shout out to?
Yeah, definitely. I mean, I've just recently completed a post for a new series that I'm
writing for my blog, Tech House 570. It's called Cisco Champion Highlights, where we really get a
deep dive, a granular look at Cisco Secure Endpoint, kind of some new updates to that, what's coming down
the pipeline, some really exciting stuff. Hope everybody sees, but if anybody's looking for me,
I'm always available in the community, always willing to help. I'm available at TechHouse 570,
and again, at my LinkedIn and Twitter. Yeah, so actually with one of the hats I wear as the
category lead for security and risk at GigaOm, I work on a lot of reports in these areas. We have an upcoming report from analyst Chris Ray on deception and on UEBA. So two topics we talked about today. But for my own personal things, everything you need to know is at chrisgrundemann.com or on LinkedIn or on Twitter at @ChrisGrundemann. And as for me, I'm just going to give a quick
shout out here. We are doing a security field day event in March. So if you're interested in
information security, please do tune in March 23rd through the 25th for security field day,
and also an AI field day in April, April 22nd or 20th through 22nd. So please check those out as
well. So thank you everyone for listening to the
Utilizing AI podcast. If you enjoyed this discussion, please do give us a rating and
subscription. We're available in most podcast applications, and it really does help. Also,
please do share this episode with your friends in information security or AI or beyond.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. For show notes and
more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI.
Thanks for joining us and we'll see you next time.