Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x01: AI-Driven Information Security with Steve Salinas

Episode Date: January 5, 2021

AI will be part of everything we do in the future, not replacing us but augmenting our work, and this is especially true in information security. In this, the first episode of Season 2 of Utilizing AI, Steve Salinas joins Chris Grundemann and Stephen Foskett to discuss AI as a “co-pilot.” Enterprise security saw an explosion of threats in the last decade, outstripping the ability of information security professionals to identify and prevent intrusions. The goal of enterprise AI in security is to help identify threats both known and unknown, through deep learning as well as simpler pattern-matching machine learning. Of course, if AI is a co-pilot inside the company, it will also be used by intruders, and adversarial machine learning is on the rise. The industry needs to be ready for anything! We finish the episode with a new feature: three questions about the future of AI!

Guests and Hosts

Steve Salinas is an information security product marketing professional. You can find Steve on Twitter as @So_Cal_Aggie or on LinkedIn as StevetheMarketingGuy.

Chris Grundemann is a Gigaom Analyst and VP of Client Success at Myriad360. Connect with Chris at ChrisGrundemann.com or on Twitter at @ChrisGrundemann.

Stephen Foskett is the publisher of Gestalt IT and organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 1/5/2021
Tags: @SFoskett, @ChrisGrundemann, @So_Cal_Aggie

Transcript
Starting point is 00:00:00 Welcome to Utilizing AI, the podcast about enterprise applications for machine learning, deep learning, and other artificial intelligence topics. Each episode brings experts in enterprise infrastructure together to discuss applications of AI in today's data center. Today, we're discussing how AI is transforming the everyday work that we do. Quite often, I've said that AI is our co-pilot, and that's really what we're going to talk about here. AI is not just our co-pilot,
Starting point is 00:00:31 it's our required co-pilot. It's becoming part of what we do, and it will be part of what we do, no matter what we think. But first, let's meet our guest. Today, we're joined by Steve Salinas. Steve, tell us a little bit about yourself. Hi, Steve. My name is Steve Salinas. I'm head of product marketing at Deep Instinct. You can find a lot of information and a lot of content that I create on the Deep Instinct website at deepinstinct.com. My LinkedIn profile is available.
Starting point is 00:00:59 Take a look at what I've been doing. But I'm glad to be here joining you today. And I'm Chris Gunderman. You can find me on Twitter at Chris Gunderman and chrisgunderman.com and the co-host of many Utilizing AI podcasts. Excellent. Yeah, it's great to have you, Chris, joining us again. So as I said, I mean, this has been something that I've really been kind of going back to the whole time we've been doing this podcast. And it was really evident during the AI Field Day event as well, that a lot of products are starting to incorporate AI-based features, not to kind of take over the jobs of administrators or, you know, kind of like, you know, even, you know, to do things that we couldn't do before, but just to like,
Starting point is 00:01:40 you know, assist us, just to augment our everyday work. And it seems like that's what you're doing, Steve, over at Deep Instinct. So, you know, talk about this a little bit. What is the goal of AI in your mind in terms of how enterprise products are using it today? Sure. So I think what we're seeing a lot in security is a few years ago, let's say maybe five or eight years ago, there was a huge explosion in the amount of threats. So the threat actors, the people that are carrying out these attacks we hear about all the time, ransomware, or the targeting hospitals and education institutions,
Starting point is 00:02:18 the amount of threats exploded. So the number of different ways that organizations were being attacked outnumbered their ability or outstripped their ability to really stay ahead of it. So you started to see organizations, security vendors, adopt new ways to try to identify threats and try to keep these organizations secure. And one of those ways, one of the things that you'll see a lot is artificial intelligence. So there are different kind of evolutions of that usage. The first being machine learning, which we can talk about, but essentially that was one way to identify threats, to try to help these organizations stay ahead of it. But one of the things that we've seen, the attackers,
Starting point is 00:03:00 they are very good at evading ways to capture their threats, to kind of neutralize them. So what we're doing is we're using kind of the latest advancements in AI, which is called deep learning, to really try to help these organizations stay ahead of it. And as you mentioned, I love the way that you positioned this. The use of artificial intelligence and deep learning in our specific example is meant to augment the human expertise that these organizations have it's very scarce like if if you have a if you're a security analyst today you're in extremely high demand there just aren't enough of them there's a huge skills gap so that's where ai can really help fill in that it's the autonomous ability for these AI solutions to act as a security analyst,
Starting point is 00:03:47 to really augment what the team itself can do in ways that human beings cannot do sometimes and at a pace that they can't do. So it really helps with scaling and trying to stay at pace with the threats that are impacting the organizations. Yeah, it's interesting. I mean, I think every time we talk about any form of automation, and especially AI coming into the workplace and in any workplace, there is some folks who get a
Starting point is 00:04:17 little worried about that, right? And they think things are coming for their jobs. I think there's a really interesting seven stages that Kevin Kelly laid out out where you kind of, it's like the five stages of grief, but it's the seven stages of automation or AI. And it goes something like a computer cannot possibly do what I do. And then, okay, okay, it can do a lot, but it can't do everything I do. And then stage three is, okay, it can do everything I do, but it needs me when it breaks down, which is often. I've got to care and feed it, right? And then fourth is, okay, it can operate without failure, but I have to train it to do everything i do but it needs me when it breaks down which is often i've got to care and feed it right and then fourth is okay it can operate without failure but i have to train it to do the new tasks eventually it says you know you come to the conclusion that that job
Starting point is 00:04:52 was you know no human was meant to do it let's go find a new job and then in you know level six is so my new job is made way more fun pays way more and uh you know i'm so glad that a computer is doing my old job and the seventh one of course is I'm so glad that a computer is doing my old job. And the seventh one, of course, is I'm so glad that a computer cannot possibly do what I do. And I think that's interesting. We've seen that through time in a lot of other areas. Do you see that as kind of the way that AI is going to roll out within the cybersecurity space in particular? I mean, is this something that allows people to do more or allows companies to do more with less people? And then how does that dynamic play out in your eyes?
Starting point is 00:05:27 Yeah, I think that's a really great way to look at it because exactly what we see a lot of times we have, you have security teams where if they are lucky enough to have really experienced analysts and security experts, they spend a lot of their time what I call break fix. Right. So a compromise has occurred. So they're having to kind of repair they're playing i.t person to where i mean their skills are well advanced of that but because the fact that the threats are coming in at a pace and they're being successful they're having to kind of do stuff that is that isn't really challenging to them and we've actually seen that since they're not challenged what do they do they leave they go try to do something else. So with AI, I love the way that you position that.
Starting point is 00:06:10 That's it. We're trying to take over those jobs that can be automated. That's what AI can do for security. So one, just a really basic example is when you're trying to access a file, right? So someone sends you a file, you download a a file that file can either be good or bad right it can either be a threat or it can be something that is completely useful to you for your job or whatever you're trying to get done well the analysis of that could be done by a human they could go and they could try to reverse engineer it and pull apart and look at all the code that is an absolutely ridiculous approach, though. That's not efficient. It doesn't scale. Well, the good news is that you have the ability to use AI to
Starting point is 00:06:50 do that type of function, right? You train, it's AI, it's a model that can be trained to identify, all right, I can see this file, I can analyze it in milliseconds, and I can determine, yes, this is good or bad. And they can take a decision because that's really what these teams need. They need that initial phase. That's just one example of how you can use AI. So now someone, a security expert, isn't having to spend time to try to do that. They can do other things, right? Look for more advanced threats because that's where the attackers are really trying to do. There's an that that's very common it's a denial of service attack right so they will pound a website or something and really that attack is meant to divert attention right so oh wow my my website is just getting pounded so that the attackers can kind of go around you right and and that's why you're over here trying to put the fire out
Starting point is 00:07:41 they're doing some bad stuff over here so it's very possible you could use AI to help try to eliminate or mitigate that situation to where that denial of service attack doesn't occur or it's minimized so your experts can remain focused on the bigger picture. So that's a really great way. I love the way you position that. Yeah, and that's actually one of the things that I've been saying for a while. That's one reason that I was happy to have you join us here, because it seems like security is one space where AI is particularly well suited. Because in the security space, especially in terms of detecting and fending off specific threats, a lot of what you're doing is just sort of sifting through and trying to separate the good from the bad. And, you know, humans, you know, we don't have great focus, we don't have great ability to, you know, to stay on target, you know, constantly. And of course, we can't work 24 by 7. And so things like that, tasks like that actually are tasks that AI is particularly suited for. Is that something that you've found? And is that something that separates security from
Starting point is 00:08:52 some other areas where AI might not be as good? Yeah, I think it's a really good point you make there because there are certain aspects of security. So there's a function, and I'll get right back to AI, but there's a function in security called threat hunting. So this is where imagine you have, I'll call it a security ninja. Like this person is like extreme security expert, years and years of experience. And they will literally go into the data set, like the data lake of an organization. And they're mining out threats that are not known. And it's a very unique skill. So that is a little bit more difficult when we're talking about that advanced threat hunting. But there's a lot of other stuff, like you mentioned, where we're trying to sift
Starting point is 00:09:35 through the good and the bad. One of the issues that happens today with organizations that are trying to still secure their environment the old way, you get way too many threats. So imagine a scenario where you have a simple rule looking for, I'll say, failed logins, right? So I constantly am putting my password in wrong. So on my machine, every time I do that, it writes a log message, you know, login failed. Well, that rule potentially could generate tons and tons of alerts a lot of them are just you don't need to do anything with them but it clogs up the window right it clogs up your space so using ai though what we can do is you can play some intelligence around instead of just relying on rules for example you can use some rules and then use ai to identify those those threats that can
Starting point is 00:10:26 just be weeded out immediately so now instead of like taking having this huge haystack of threats and potential threats it's a much smaller amount that is much more manageable and that's what we're seeing when an org kind of when the light bulb goes off for an organization that hey this this ai using ai and kind of building an ai centric approach to security it's going to allow me to deliver a much better experience for my employees for my customers for my staff because that's one of the other things and we've talked a little bit about it if the staff feels they're not being challenged, bored, frustrated, they will leave, right? And that's a big risk for a lot of organizations. But if you can give them a tool that says, hey, we're going to take this redundant stuff off your plate, we're going to limit the time that we're trying to put
Starting point is 00:11:18 out fires, and you can start to use some of that thing, that knowledge that you've built, they're going to be happy. They're not going to leave, right, they're going to want to stay. Yeah, I agree. And I think that that reminds me kind of harkens back anyway, to what you said, kind of the beginning there, Steve, about, you know, the evolution of AI a little bit, right, from machine learning to deep learning and different types of AI, and we've used them, because I think, you know, basic anomaly detection or unsupervised learning is just looking for deviations from norm has been around for a while. The problem, I think, you know, basic anomaly detection or unsupervised learning is just looking for deviations from norm has been around for a while. The problem, I think, to your point is that it creates a ton of false positives and so well maybe you're not looking at the raw log data anymore, you're now sifting through a bunch of reports from, you know, some kind of anomaly detection system. And so really moving and then especially moving to like the age now big data we're actually recording a lot more data taking a lot more telemetry off networks and applications and so those false positives just balloon um and so really digging
Starting point is 00:12:08 into getting into that supervised learning that deep learning and starting to connect those dots so that people don't have to sift through all that information i think that that definitely seems to me to be something that would both increase job satisfaction as well as increase efficacy of your teams uh is that what you've seen to be true for sure i mean i think that you're getting the head right there i mean with and i'll talk a little bit about deep learning we haven't increase efficacy of your teams. Is that what you've seen to be true? For sure. I mean, I think that you're hitting the nail on the head right there. I mean, and I'll talk a little bit about deep learning. We haven't really talked a ton about it.
Starting point is 00:12:30 Maybe your audience already is pretty familiar with it, but essentially a deep learning approach that we're taking or that you can take, I should say in security, is to, it's all about the data, obviously, right? It's about the training data that you use. So if you're trying to identify threats, you need two types of data. You need threat data and non-threat data, right? And you need to be able to feed this into a neural network. And the thing about the neural network,
Starting point is 00:12:56 and this is where when I've tried to explain this to some people, you kind of get some, I don't know if I understand, but essentially it works like the brain, right? It's a neural network. So there's not a specific, we're not identifying specific characteristics of a threat that needs to be looked for or specific characteristics of a non-threat. It's literally here is, here's a threat that's known to be a threat.
Starting point is 00:13:20 We're labeling it as a threat, right? Supervised, right? And so this neural network, give me an answer that shows that this is a threat compared to the same thing with the non-threats. But you have this massive amount, the size of the training data, it's huge, millions and millions of files. So what you end up with though is this model
Starting point is 00:13:38 that develops the innate ability to identify threats at a really, really high level, because it's harder it can't it's harder to be fooled for one because we're not giving it a specific path to follow like a decision tree specifically it's take it can take any path that it wants to get to that result and our data scientists and data scientists can modify the algorithms and whatnot until they get to that result that that over and over gets a really high efficacy because one of the things in security and you touched on it if you really want to identify more threats generally speaking the traditional thought is you have to be willing to deal with
Starting point is 00:14:14 more false positives right that's just the way that it is but with what they well what artificial intelligence is going to start to do and we we're seeing it today, is that paradigm is shifting. We call it the trade-off. You don't have to have that anymore. You can really, by adopting deep learning or artificial intelligence, the latest advancements in it, you can start to see really high prevention capabilities and false positives that are extremely low, which is something like, I've been in this space for quite a while. It was kind of like, oh, wow, that's really an innovative step. That's like a step change in a security tool's ability to really identify threats without getting flooded with false positives.
Starting point is 00:14:59 I'd like to follow along that path for a second, though, because one of my biggest concerns would be, you know, let's say it's the future and we have, you know, AI driven, you know, threat detection. My concern would be sort of like the archetype of, you know, autonomous driving system that doesn't know how to deal with snowflakes. Like, what happens if something totally unknown, totally unseen before comes at it? Could we find ourselves basically relying on a system that was trained on last year's attacks that can't even see this year's attacks? Yeah, that's a good point. That's a good question, comes up quite a bit.
Starting point is 00:15:43 So what we've found in our research and in the real world is when, and I'll talk just briefly about the model that we create for our solution, just briefly. when you're using artificial intelligence, specifically deep learning, I'd say probably even machine learning. Your standard laptop, you could not train on that. It doesn't have enough horsepower, right? There's just no way to do it. So we deliver something that's fully trained, right? So it gets deployed and we have seen, and I just, just the other day, there was a major ransomware, a brand new variant called,
Starting point is 00:16:23 I don't know how you exactly say it, Ryuk is what I call it, R-Y-U-K, right, was attacking a lot of hospitals and healthcare organizations. I pulled down 100 samples, right, to see what we would identify. And I used a model, the model that I was trusting against was trained in November of 2018. So two years ago, identified every one of those pieces of malware. Right? So that's what you get. It's we call it, I call it resilient because it's not relying on any specific characteristics. One of the things that attackers do is, you know, they're, they're looking to optimize their performance as well. So it's, you know, you don't really see a ton of from scratch,
Starting point is 00:17:07 like never seen before methods and things like that. So when we're deploying, we still have a very high resilience or ability to identify threats way after something has been trained. And that's where with the latest advancements in using deep learning, it allows an organization, what I find a lot, especially now with COVID and everyone's working from home and it's harder to kind of manage the assets that people are using. A lot of security decision makers are concerned that, you know, their attack surface has exploded, right? The ways that they could be attacked because people are working from home, using their home computers, but using a deep learning approach that doesn't need constant
Starting point is 00:17:51 updates, that doesn't need connection to the internet. It can make those decisions locally. It makes that system much more protected, right? So certainly we still continue to advance to get your, to come back to run to your point. We are always looking at different ways that attackers are using our different file types and things of that nature. But that it's an easily scalable right with deep learning. We can do some additional training because all we really need. We have these algorithms is the training, the training data. Right. So if they're starting to use a new new file algorithms, is the training data. So if they're starting to use a new file type,
Starting point is 00:18:28 we need training data on that file type, and then we can support and identify threats within it. Yeah, and where that takes my head is to that next step, which is, while I think we all agree that there are definitely use cases for artificial intelligence to really increase our ability to identify threats and move the ball forward from a defense standpoint on cyber security. I started to think about the other ways to interfere with that. Once AI becomes more widespread within cybersecurity, obviously, as we know from everything else,
Starting point is 00:19:01 attackers will find a way to attack whatever we're using to defend with. And the first thing I think about is those adversarial type attacks. And I remember I saw somebody did an exploit in the wild on just a SQL database where they printed out a SQL command and placed it on the front of their bumper. And then running through a red light, it actually causes the database to dump and you weren't able to attack. And so injecting that bad data into the system, causes failure. And I think what we were talking about there is adversarial machine learning and potentially, right,
Starting point is 00:19:35 if you train on a certain set of data, is there a way for an attacker to then inject unexpected data to cause the AI to either miss, you know, a threat or mischaracterize a threat, or, you know, how do you look at that? Or is that something that's on the radar yet? No, that's cutting edge, that's happening now. So, and we are, you know, we spend a ton of time researching
Starting point is 00:19:59 how that can happen. And you're right, the attackers, the way I always look at it is there has never been a security control delivered that an attacker hasn't defeated, right? But it just hasn't, right? I mean, turn your computer off and put it in the desk drawer, right? You're secure. But so the attackers now, they're using AI to defeat AI, to try to defeat AI. And we are actively researching and developing to identify if the attack that we're seeing is adversarial AI, right? If they're looking to defeat it. So it is around the training data still, but there are different ways. And I am no data scientist,
Starting point is 00:20:39 just to be clear, but I've talked to the data scientists and they know that's happening. We don't see a ton of those attacks yet, but they are coming, they're happening, and we are developing. This is the way that it's going to happen. These are how evolutions occur. One day long ago, signature based AV worked great. It was fine. You didn't need anything else but then then that got defeated and then machine learning based solutions started to really see great improvements and the attacker said nope gotcha we know what you're trying to do there we're going to avoid the the features that your machine learning is using so i mean you could you could argue that was kind of trying to defeat the machine learning right adversary in an adversarial approach. So the latest, what I like
Starting point is 00:21:25 to call like the third wave is deep learning, right? Which is going to be much harder to defeat, but the attackers, that's what they try to do, right? That's their business. This is a feat and where AI can be put to use for good, it can be put to use for bad. So the good news is we have a lot of good guys trying to make sure that doesn't happen, right, and trying to stay ahead of it. So on that note, you know, I mean, if that begs the question, what you just said, you know, AI can be used for bad. Whoa, whoa, whoa. and I don't know, allegedly North Koreans creating real malware. How long before we see, you know, an AI-based malware system that's, you know, basically, you know, attacking based on its own learning about the adversaries, which in this case is you.
Starting point is 00:22:21 Yeah, I mean, that's going to happen, right? It's kind of, I don't know if it's the right analogy but you remember spy versus spy those comics this is what it kind of reminds me of right so like we are the good we i'll say generally the royal we are trying to ensure that we we can we can mitigate those attacks so it can happen right it's not and i don't want to scare people but then but the good news i think if to scare people, but the good news, I think if you're a security practitioner, the good news is there are companies like ours that are looking at this.
Starting point is 00:22:52 They're cutting edge threats that are out there, right? And identifying those new threat vectors and adversarial AI is something that everyone needs to be aware of. And I think too, is it going to happen tomorrow? No, I think that there's a long way I'm thinking about the evolution, even even like the quote, unquote, good application of AI, you know, it's step
Starting point is 00:23:13 changes, even like, let's look at, you know, it doesn't happen overnight. So I think that's where we do have the ability to mitigate those threats. But I think once they start to show up more and more in the wild, the good news, and I always, you know, I try to always look at the positive here, is that there are lots of organizations like Deep Instinct that we are really trying to help, right? And there's a lot of resources being put behind
Starting point is 00:23:38 new and innovative ways to use AI and other advanced ways to protect organizations, right? So it can be complicated to identify what's right for your organization. But, you know, I think the good news is like podcasts like yours, right? It's good ways to learn about different things that might be available. So fear not, the sky isn't falling. You know, I think if I could give one piece of advice to a security decision maker, take a look at the security stack you have and make sure, or at least, you know, see how you can start to incorporate some artificial intelligence in there, right? To take care of some of those redundant tasks that are driving your team crazy and making them not really satisfied with their job and also are kind of leaving you exposed
Starting point is 00:24:26 to threats that can be mitigated with AI. I think that's kind of something I always like to just get across. Yeah, that makes sense. I mean, I think definitely we have to continue to look at our tool set and then continue to sharpen that saw over time. You know, it does beg the question though,
Starting point is 00:24:45 I think hearkening back to Steven's intro, you know, are we at a point or will we be soon where AI, you know, or, you know, deep learning, machine learning techniques in our cybersecurity stack is just table stakes. I mean, how far away is that already? Is that today? Is that tomorrow?
Starting point is 00:25:00 How far away are we from just this being something we have to do? I think I don't, I'm not sure if it's at table stakes yet. I mean, I think from my perspective, if you're not preparing for that, I think that you are going to find yourself behind a little bit behind the eight ball. I kind of look at it like a few years ago, when if you think, if you told people about, you know, using the cloud as part of your infrastructure be like whoa cloud. You know that's how that's you mean putting my data in the cloud or you kidding me, you know, stuff behind my firewall, but think about the cloud today it's ubiquitous right people are using it. We personally use it organizations have gone all in on cloud and things like that. So I think it's the same sort of thing to where you have, it's the adoption cycle, right? So you have your early adopters. If you went in, you would probably see someone that's
Starting point is 00:25:48 adopted AI. Here's where I look at it. As part of the security stack, their stack is probably going to be much smaller. It's probably not going to have a ton of redundant capabilities compared to like maybe a laggard or someone that's a little bit later to adopt. They're probably going to have lots of technology that overlaps that's probably not implemented fully because it relies a lot on on the human expert to configure and maintain and care and feeding of it kind of like you said your seven phases if you when you're adopting ai you're going to get to a point where yes the security experts are always going to be extremely critical but you're going to get to a point where yes, the security experts are always going to be extremely critical, but you're going to have a part of that process that's going to be handled by
Starting point is 00:26:31 this technology. This AI can do it for you. Right. And it's going to be like you say, your co-pilot. And that's where I think it, I think it's now's the time to look at that as table stakes or where you need to drive to probably within the next year or two. I mean, I think really the timeline here is not long. You need to hop on the AI train as soon as you can. Absolutely. Hop on the AI train.
Starting point is 00:26:57 I think we can all agree to that. So one thing I've been thinking of, this is actually the first episode of season two of Utilizing AI. So thank you guys for joining us here. One thing I wanted to add to the podcast this time around is basically some fun questions. And I'm catching you by surprise, Steve. And that's on purpose. Off the cuff, when do you think we're going to see a fully self-driving car? A fully self-driving car, probably within two years, we're going to see a fully like, you know, a robot AI that can pass the Turing test in a discussion, like an audio discussion with people?
Starting point is 00:27:54 Oh, I wouldn't be surprised if that's within like a year. Okay. I think we're there basically. Yeah, I mean, I think, yeah, I think we're very close on that. All right. And when does AI destroy the earth? Hopefully not for quite a while. Okay.
Starting point is 00:28:10 Well, I'm going to ask every guest that same question, and we'll see what the over-under is on some of these answers. That's not a great answer, but yeah, I think we're okay. I don't think we're going to get destroyed by AI. Well, thank you so much for joining us. Steve, where can we, again, follow you or find more about what you're doing? That's great. I'm so glad that I was able to participate with you guys today.
Starting point is 00:28:37 Most of the content that I do create is on the Deep Instinct website. So that's probably the best place to go. We have lots of video content, blogs, to learn about deep learning. We try to mix it up a bit between the technical aspects of it and then the business perspectives of it. So it's a good resource to go learn about that. You can look me up on LinkedIn, Steve and Salinas. I'm always looking to make new connections there as well. Well, thanks a lot. And I'm Steven Foskett. You can find me at S. Foskett on Twitter. You'll find this podcast at gestaltit.com, which is where you'll find a lot of my enterprise IT
Starting point is 00:29:20 writing. And Chris? Yeah, I'm Chris Grundemann. You can find me on Twitter at Chris Grundemann, online, chrisgrundemann.com. And obviously, in your favorite podcast, you're here with the ULAs in AI, as well as the Gestalt IT blog and around the webs. Great. Well, thank you, everyone, for joining us today. It's been a great discussion. And this is exactly what we're trying to do with the Utilizing AI podcast. If you enjoyed this discussion, remember to subscribe, rate, and review the show on iTunes. No joke, that really does help get the show out there and in front of folks. And please do share this show with your friends. As I said, this podcast is brought to you by the folks at gestaltit.com, your home for IT coverage from across the enterprise.
Starting point is 00:30:10 For show notes and more episodes, you can go to utilizing-ai.com. You can also find us on Twitter at utilizing underscore AI. Thanks, and we'll see you next time.
