Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 02: Ethical and Moral AI with @AndyThurai
Episode Date: September 1, 2020. Stephen Foskett and Andy Thurai discuss the ethics and morality of AI. Taking a cue from an article written by Andy for AI Trends, we focus on the various biases that can influence AI, and how to prevent these from interfering with the results. Andy recommends modeling knowing that biases would come into the input data, recognizing limitations in technology and data, teaching human values and validating AI models, and setting an ethical tone in the organization creating the AI model. Like people, AI models can only be as unbiased as their environment and training, and it is critical to recognize these limits when deploying them. This episode features: Stephen Foskett, publisher of Gestalt IT and organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett Andy Thurai, technology influencer and thought leader. Find Andy's content at theFieldCTO.com and on Twitter at @AndyThurai Date: 09/01/2020 Tags: @SFoskett, @AndyThurai
 Transcript
    
                                         Welcome to Utilizing AI.
                                         
                                         I'm Stephen Foskett, your host,
                                         
                                         and today we're talking about the responsibility to make sure that AI is ethical and moral.
                                         
                                         Before we get started, why don't we meet our co-host, Andy?
                                         
Sure. Hey, I am Andy Thurai,

a guy who's fortunate enough to have AI in my last name, though I didn't plan it that way.
                                         
                                         I am the founder and CEO of thefieldcto.com.
                                         
                                         Again, that's thefieldcto.com.
                                         
    
And I also regularly tweet at @AndyThurai.
                                         
                                         I am an emerging tech strategist, particularly concentrating on AI, ML, edge, and cloud-related technologies.
                                         
                                         I'm Stephen Foskett, organizer of Gestalt IT and Tech Field Day,
                                         
                                         and enthusiast for all things tech.
                                         
                                         You can find me online at sfoskett,
                                         
                                         and you can find more about this podcast by going to utilizing-ai.com
                                         
                                         or utilizing underscore AI on Twitter.
                                         
                                         So, Andy, you wrote an article last year for AI Trends.
                                         
    
The title was "It Is Our Responsibility to Make Sure That Our AI Is Ethical and Moral."
                                         
                                         And I think this is right in line with what we've been seeing from a lot of, I don't know,
                                         
pop culture coverage of AI and so on.
                                         
The fact that AI sometimes has trouble with Black faces, or that AI makes, you know, immoral decisions when confronted with a certain strategy or situation.
                                         
                                         I think that this is a really important discussion.
                                         
                                         Maybe you can summarize the whole thing here.
                                         
                                         What do you think about AI and ethics and morality?
                                         
                                         That's a pretty big field in my mind,
                                         
    
and it's lagging pretty big time. And again, that's true not just with AI, but with any

innovative technology, right? For example, think about cloud, or think about other enterprise

technologies that came up 10, 20 years ago. Think about even computers when they came up.

At first, you're excited about the innovation.

For example, when Apple and Google phones came out, people were focused on the innovation

aspect of it, what it brings to my palm, to my hands, and what I can do with that.

Only later did they start worrying about it spying on them, right? But now, all of a sudden, we're like,

oh, wait a minute. You know where I am, and when, all the time,
                                         
    
and you can track whatever I'm saying to people.

For example, you quoted an example

in one of the podcasts of someone saying "Siri"

and then Siri woke up.

That kind of listening to you all the time,

where does it put us?

I mean, those are the things

that the ethics need to catch up with.
                                         
    
And to answer your specific question,

one of the things people don't realize

is how many human biases

have been officially identified.

180.

There are 180 different kinds of biases that have been classified.

And most times we don't even think about a lot of them.

But those are biases.
                                         
    
And bias comes from data that's somewhat partial,

or decision-making that's somewhat partial.

That's what bias is about, right?

You get some information, you make a partial decision;

that's bias.
                                         
So again, AI doesn't program itself, at least not yet, unless we get full

Skynet, when Cameron is ready to do that version. Right? But until then, it's humans who program advanced AI systems and expert systems. So when humans program certain things,
                                         
                                         if they don't spend enough time in researching
                                         
    
and doing the work to remove the biases from the systems,

it can be a problem.

They may not purposely put bias in there,

but if they don't make enough effort

to remove the biases from the decision-making process,

then it's going to be a problem.

And the same problem comes from the fact that if you don't have the right data,

if you put garbage in when creating a model, it's going to create garbage out.
                                         
    
                                         So you need to make sure that the data is actually scrubbed to produce proper results out of that.
                                         
                                         So essentially what that means is data has to be unbiased.
                                         
                                         You need to clean it up, scrub it, and make it clean quality data.
                                         
                                         I wrote another piece on that.
                                         
And then the second piece is, when you're creating a model in AI systems, make sure that it

doesn't inherently make decisions that are biased as well.

So there are two pieces to that equation.
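The "scrub it and make it clean quality data" step can be sketched in a few lines. This is a minimal illustration only; the field names (`age`, `income`) and validity rules are hypothetical stand-ins for whatever a real pipeline would check:

```python
def scrub(records):
    """Drop records that are incomplete, implausible, or duplicated."""
    seen = set()
    clean = []
    for r in records:
        # Reject incomplete records: missing values silently skew a model.
        if r.get("age") is None or r.get("income") is None:
            continue
        # Reject implausible values rather than letting them poison training.
        if not (0 <= r["age"] <= 120) or r["income"] < 0:
            continue
        # Deduplicate: repeated rows over-weight a single observation.
        key = (r["age"], r["income"])
        if key in seen:
            continue
        seen.add(key)
        clean.append(r)
    return clean

raw = [
    {"age": 34, "income": 52000},
    {"age": 34, "income": 52000},    # duplicate
    {"age": None, "income": 41000},  # incomplete
    {"age": 250, "income": 60000},   # implausible
]
print(scrub(raw))  # only the first record survives
```

The point is that each rejected row is a documented decision, not a silent side effect, which matters later when a model's behavior has to be explained.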
                                         
I think that some people may be surprised by this because, frankly, I think a lot of us are sort of, I don't know, techno-utopians who believe that

you know, a system like a computer doesn't make mistakes. How would a computer have bias? How would it possibly have cultural bias when it's a computer and not a living thing that has lived in a culture? But as you point out in your article, and here as well, there are a number of different issues that can lead to bias in AI. Garbage in, garbage out, basically: if you train your AI system only with a certain type of data
                                         
                                         that reflects a certain use case, that's the only thing that model is going to be able to deal with,
                                         
right? I mean, if you throw a completely different situation at it,

it may make no decision, or completely the wrong decision, or just an

unexpected decision.
                                         
I mean, maybe to kind of bring it home: imagine if you trained

a facial recognition system only with white male faces. It might not even be able to recognize the

face of someone who's not in that category. Is that what you're trying to say here?
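The failure mode Stephen describes, a model that simply never matches anyone outside its training group, can be shown with a toy sketch. This is not a real face recognizer; the two-dimensional "features," the groups, and the threshold are all invented for illustration:

```python
from statistics import mean

def fit_centroid(samples):
    """'Train' by averaging the feature vectors of the training group."""
    return [mean(dim) for dim in zip(*samples)]

def matches(centroid, sample, threshold=1.0):
    """Declare a match when the sample sits close to the training centroid."""
    dist = sum((a - b) ** 2 for a, b in zip(centroid, sample)) ** 0.5
    return dist < threshold

# Training data drawn only from "group A" (clustered near [1, 1]).
group_a_train = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
centroid = fit_centroid(group_a_train)

print(matches(centroid, [1.0, 1.0]))  # a group A face: recognized
print(matches(centroid, [4.0, 4.0]))  # a face unlike the training data: never matched
```

The model isn't malicious; it just has no representation of anything outside the cluster it was fit on, which is exactly the "only thing that model is going to be able to deal with" problem.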
                                         
    
Yeah, yeah, to an extent, right? So actually, it's funny you mentioned that.

I actually wrote a follow-up article to that AI Trends article, published on Forbes.

If you Google "Forbes Andy Thurai," you'll be able to see it. In it, I talk about how AI is more about the data than the AI itself.

That's one of the points I make there. And funny enough, about the facial recognition software...
                                         
So I wrote a follow-up article to that one recently, I forget where I published it,

but in it I talk about the fact that there are companies pulling their facial recognition software

from agencies that use it, because they are worried about biased decisions.
                                         
But in that, when research was done, they found that facial recognition

software is fairly accurate when it comes to the white male category.

And then it is very error-prone, with very high rates of both false

positives and false negatives, when it comes to people of color, particularly

African American women, right? I don't know why. I don't know if they didn't

train the model with enough data input, but it is what it is, right? I'm not here

to judge one specific system against another. But when you have a system that's not, you know,

good enough to make a decision... I'll give you this example. This is a scary thought. I'll forward it to you and we'll also link it here.
                                         
This happened in Michigan, right? And again, this is a published article; I'm not making this up.
                                         
    
I think the guy who was affected published an op-ed column, either in the New York Times or the Washington Post.

He was a professional guy, I believe in IT. When he was pulling into his driveway, cops pulled in and arrested him, because a facial recognition system had identified him as a suspect in one of their cases. They came to his home and

arrested him in front of his wife and daughters. And when he asked them,

what did I do wrong? Their answer was: a machine made a decision saying that your face matched a guy who committed a crime some time ago.

So they arrested him.

He went to jail, and he was not allowed to speak to a lawyer, with no other recourse, for up to 24 hours, lying in filth in prison for a crime he didn't commit. And at the end of it, not only was the decision-making process faulty,

but nobody owned up to the responsibility.

They said it's the machine that made a wrong decision.
                                         
    
                                         I mean, the machines are assisting.
                                         
You shouldn't make the machine make the decision.
                                         
                                         It can feed into you, and then you can make a decision.
                                         
                                         A human should be able to look at it saying that,
                                         
                                         you know what, maybe I agree, maybe I don't.
                                         
The ultimate decision, as they say again and again, has to lie with humans, to make a proper decision.
                                         
Because at the end of the day, it's not only about decision-making, but transparency.

And if you have to prove it in a court of law, you have to prove, if your machine made the decision and not a human,

that, given the same set of data and parameters,

every single time a human would come up

with the same decision as the machine did.

That's very important

if the machine makes an automated decision.

Or you have to prove the machine assisted with the process, but it was a human who made the decision.
                                         
So my problem right now with most of these systems is that those things are not set up.
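The process Andy is asking for, where the machine suggests, a human decides, and a record proves who made the final call, can be sketched as a small audit trail. The record fields, the score threshold, and the names here are assumptions for illustration, not any agency's real schema:

```python
import time

def machine_suggest(face_score):
    """Hypothetical model output: a match suggestion plus its confidence."""
    return {"suggestion": "match" if face_score > 0.9 else "no_match",
            "confidence": face_score}

def record_decision(log, suggestion, human_decision, reviewer):
    """The human's call is final; the log proves a person owned the decision."""
    log.append({
        "timestamp": time.time(),
        "machine": suggestion,         # what the system suggested
        "final_decision": human_decision,
        "decided_by": reviewer,        # never "the machine"
    })

audit_log = []
s = machine_suggest(0.93)                                 # machine says "match"
record_decision(audit_log, s, "reject", "officer_jane")   # human overrules it
print(audit_log[0]["final_decision"])
```

The design point is that the machine's suggestion and the human's decision are stored side by side, so both the reproducibility question and the accountability question have an answer.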
                                         
    
Yeah, and in a way that's kind of a process problem.
                                         
                                         And that's kind of the issue, too, that you pointed out in your article, which is essentially if the machine is making a judgment call,
                                         
                                         how do we validate the judgment call?
                                         
And I think that in that case, for example,

that could have been solved by having the machine suggest

that perhaps this was a match,

but then having a part of the procedure be

that the human needs to look at it, validate it,

and, I don't know, sanity-check it in a way to make sure that it really makes some sense.
                                         
                                         And this is one of the problems here is that, you know, there's a difference between using
                                         
                                         this technology to assist us and using this technology in a way to replace us. You know,
                                         
                                         similar to the whole idea of, you know, automated driving, right? Is it an assistive
                                         
                                         technology or is it fully autonomous technology? And the same can be true in business, right? If
                                         
                                         you, for example, have an AI system that's analyzing the stock market and making decisions
                                         
    
                                         on buying and selling based on a model that it's following, it could very easily go way off the tracks
                                         
                                         if you sort of set it loose without having the technology in place to kind of validate
                                         
                                         the judgments that it's making.
                                         
                                         Another thing that you're mentioning here is the limited collection of data that we
                                         
                                         have, right?
                                         
                                         You know, we still are in our infancy with data collection and we need to basically figure
                                         
                                         out how to make the most out of that data without expecting too much.
                                         
AI is not just about the algorithms and fine-tuning the models themselves. It's more about the data

than the AI itself. So if you don't have the right data... I mean, when I talk to

enterprises, people say, Andy, we collect an unbelievable amount of data. Our data

collection today is 20 times more than what we used to collect yesterday, and

tomorrow it's going to be 20 times more again. But the problem is, that doesn't

mean anything. If what you collect is garbage, that doesn't mean

you will produce quality models out of it.

First of all, this is what I recommend to my customers. If you're going to be creating certain models, or creating some systems, creating whatever, right,
                                         
you've got to figure out what data you need. Look, at the end of the day, we have moved into a data economy, whether you like it or not.

Right? We used to be in the computer economy. Now we're in the data economy. So in order for the data economy to work, to make decisions and create models out of that data, you should be

able to collect the right amount of data, the right quality of data, and then make

it ready for your models. That article actually goes into a little bit

more detail on that. But at the end of the day, if you look at the work of data

scientists, they spend the majority of their time,

about 80 to 90 percent of their time, getting the data ready to create the models.
                                         
Right? Yeah. Exactly. Model creation is easy. Yeah. And I think that what

you're saying is important too. Let's again take a real-world scenario, right? Imagine you had some kind of telemetric system that was constantly giving us telemetry. Something simple, like whether the lights are on. If all the system ever sees are examples of the lights being on, and no examples of the lights being off,

then the system is obviously not going to be able to make any judgment when that situation happens.

You know, you have to have a variety of data, not just a lot of data. And in fact, it can even go

the other way, right? I mean, if you basically have just a preponderance of data that indicates one thing, it could actually overbalance the

system in a way that makes it unable to adapt when something doesn't match, right?
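A simple practical guard against data that doesn't match the rest is a robust-statistics screen before the data reaches a model. A minimal sketch, with invented numbers, flagging values that sit far from the median; real pipelines would use stronger anomaly detection:

```python
from statistics import median

def flag_outliers(values, k=5.0):
    """Flag points far from the median, scaled by the median absolute deviation."""
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0  # avoid division-by-zero scale
    return [v for v in values if abs(v - med) > k * mad]

# Hypothetical weekly demand figures with two injected spikes.
weekly_demand = [100, 104, 98, 101, 99, 1000, 103, 950]
print(flag_outliers(weekly_demand))  # → [1000, 950]
```

Because the median and MAD are barely moved by a few extreme values, the screen still works even when the bad points are large, which is exactly when a mean-based check would fail.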
                                         
Right. So I'll also give you another example, now that you brought it up.

There is a concept that's actually a security issue with some of the AI models,

which I talked about at one of the conferences. It's called

data poisoning, right? You basically corrupt the data. For example, let's say you're a competitor. I'll give a specific example. Let's say you're a
                                         
    
                                         competitor of a certain company, say a car manufacturing company, I'm just making this up.
                                         
                                         I'm not saying that this ever happened, just a hypothetical situation,
                                         
that you somehow skew their data collection, suggesting the demand for a particular car

is so high in a certain market that you make the company believe it, and they gear all

the cars they manufacture toward a particular region, Europe, India, whatever that

may be, and export them all there. And then, you know, they land all the cars there, only to find out the data

and the model they created were completely wrong, right? And meanwhile, the competitor who
                                         
poisoned their data, poisoned their process, poisoned their model, knows exactly where to go.

Maybe it's a crude example, but they know where to go, and then they capture the market. That

could apply not just to cars but to any commodity, for that matter. The soybean market,

maybe, or the wheat market, or the gold market. Gold may not work, because everybody

needs gold, right? But soybeans or something else, where you kind of, you know, skew their data models

by feeding them bad data. That's the next corporate side of wars I see:

it could go there, by confusing the data collection and the

modeling and prediction of certain companies, and using it to your

competitive advantage. Yeah, absolutely. And I think that this maybe gets to the fact that, you know, there's a whole value
                                         
    
                                         judgment in here too, in terms of, you know, morality and values, like, should we, you know,
                                         
can we rely on these systems to make good judgments? And another one of the issues that you bring up in your article is specifically

teaching AI systems human values. And I love this idea, this

technique of basically putting rewards on socially acceptable behavior.
                                         
So in your case, for example, or even in the cases

we've been talking about, many times

systems might be able to be hardened against malicious actions, or even just poor data, if they

have some kind of reward system in place to reward socially positive, or maybe even not socially

positive, just business-positive outcomes, right? Take my stock trading example, right?
                                         
                                         I mean, maybe if you reward the system when it makes money
                                         
                                         and you punish the system when it loses money,
                                         
                                         it's not gonna cause some kind of a flash crash
                                         
                                         or something, right?
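The reward idea Stephen describes here could be sketched, very roughly, as reward shaping: combine the raw business objective with a penalty for behavior you want to discourage. All names, weights, and the volatility measure below are hypothetical, purely for illustration.

```python
# Illustrative reward shaping: reward the business objective (profit),
# but penalize actions that destabilize the market.
# All names and weights here are hypothetical, not from any real system.

def shaped_reward(profit, caused_volatility, penalty_weight=10.0):
    """Reward profit, but penalize destabilizing behavior.

    profit: money made or lost on this step
    caused_volatility: a measure (>= 0) of market disruption attributed
        to the agent's action, e.g. its contribution to a price swing
    """
    return profit - penalty_weight * caused_volatility

# A trade that makes money without disruption is rewarded...
print(shaped_reward(profit=100.0, caused_volatility=0.0))   # 100.0
# ...while a profitable trade that destabilizes the market is punished.
print(shaped_reward(profit=100.0, caused_volatility=50.0))  # -400.0
```

The point is that the agent's objective is no longer "make money at any cost"; the penalty term encodes the value judgment.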
                                         
                                         You know, how do you teach values to an AI system?
                                         
That's the most difficult part, right? It's not just about teaching human values, but also human feelings, as they call it, right? Can you teach a machine how to love? I mean, there are movies that have taken on the subject. There's a Will Smith movie, I forget the name, which was a really good movie on the topic, right? As someone once said, the art of love and fear, and the human feeling of enjoying beauty, are hard to teach machines, right? And when it's hard to teach those things, it's hard to teach similar human values. Let's say, for example, I'll give you an idea, right? I'll take some
                                         
scenario. Again, purely hypothetical, not trying to polarize this. Let's say a machine makes a decision saying that at certain coordinates lies a gang leader or a terrorist, you know, who is in this place at this time, and that by doing a precise, coordinated attack we could get rid of this person, and that would save a lot of issues for a specific country, right? And once a machine makes
                                         
    
a decision, if it were a human, there were a couple of movies I watched in the past that really touched me, because the humans analyze what the casualty around it would be among innocent people, how many people will get affected, and how you either mitigate it or do a risk-reward analysis and then pass on it. Even though we'd get the number one terrorist in the world, there are about 100 innocent civilians who have got nothing to do with that who might die. A human values that more than the target, so you call off the strike. But would a machine do the same thing if it comes down to it? And how do
                                         
you teach that? And how do you say, okay, whether one human life is worth more than 10 or 100? How do you teach that value to a machine?
                                         
                                         And this is the classic trolley problem, right?
                                         
                                         The idea, so the trolley is coming down the tracks
                                         
                                         and you've got the switch
                                         
                                         and there's five people that would die
                                         
    
                                         if it goes this way.
                                         
                                         And there's one person that would die
                                         
                                         if it goes that way.
                                         
                                         Do you flip the switch?
                                         
                                         Right.
                                         
                                         And the truth is that, you know,
                                         
no one can say what the proper decision is. I've heard people say that, you know, allowing systems to make these calls, to make these decisions, could lead to serious ramifications. Even if it's not a life and death situation, even if it's
                                         
    
                                         not, you know, bombing terrorists or switching trolley tracks, it could be a situation simply
                                         
                                         where a system, a runaway system can, you know, blow all the money and make the business go out
                                         
                                         of business or ignore the hackers
                                         
that are getting in through the firewall, or whatever. There are all sorts of places and cases where a judgment call can be wrong. Making a judgment call when the decision is not black and white, that is the most difficult decision-making an AI system will face. Not today, not tomorrow, forever. And when a machine makes a decision, when it comes to questioning that, it's easy to question a machine. For example, I'll talk about this. Was it Uber or Lyft that had a self-driving system down in Arizona that accidentally killed a person who was trying to cross the road? I think it was Uber's self-driving mechanism.
                                         
                                         And when I looked at that accident,
                                         
                                         the videos all came out and everything.
                                         
                                         And by looking at it very closely,
                                         
                                         I'm not trying to say whose fault it is and all that,
                                         
    
                                         but the reaction time that the machine had to react,
                                         
                                         the human would have had even less
                                         
                                         and they wouldn't have acted any differently, right?
                                         
                                         But if it's a human who was in there,
                                         
                                         the judgment process that we go through
                                         
                                         to judge an incident when a judgment call is made
                                         
                                         is a whole lot different
                                         
                                         than when a machine makes a decision.
                                         
    
It's that way today, it'll be that way tomorrow, it'll be that way forever. Okay, so that's why letting machines make judgment calls is a very, very, very dangerous thing. There has to be a human involved in the loop, as far as I can foresee.
                                         
                                         So even if the system is set up correctly to make the correct call in development,
                                         
                                         how do you know that that's really what it's going to do in production? I mean,
                                         
                                         you know, the system isn't just the model, it's the model plus the data. And if it's
                                         
                                         confronted by different data, how do you validate it? How do you make sure that the system is, you know, correct in its decisions?
                                         
So, a few things. One is, as I again talk about in my article, this is my favorite saying I love to use all the time. You know the old saying from the historians, that Caesar's wife must be above suspicion. The same way, your model should be proven to make the right decisions 100% of the time, all the time.
                                         
                                         That's one.
                                         
And the second thing is, when you make a decision, you know, certain systems do this and certain systems don't: you have to give what's called a confidence score. All right, when the machine says, yes, I recognize that that's Andy's face, how confident are you? Right? Are you 99.9 percent confident, or are you roughly 20 percent confident? So you've got to not only factor the decision into your decision-making process, but also how confident the machine thinks it is in its decision, right?
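That confidence-score idea can be sketched as a simple gate: only act on the model's answer when its reported confidence clears a threshold, and otherwise route the case to a human. The threshold value and function names below are illustrative, not from any particular system.

```python
# Illustrative confidence gating: accept the model's decision only when
# its reported confidence clears a threshold; otherwise defer to a human.
# The 0.95 threshold and all names here are hypothetical.

def route_decision(label, confidence, threshold=0.95):
    """Return an action based on how confident the model says it is."""
    if confidence >= threshold:
        return ("accept", label)
    return ("human_review", label)

print(route_decision("andy", 0.999))  # ('accept', 'andy')
print(route_decision("andy", 0.20))   # ('human_review', 'andy')
```

A 99.9-percent-confident face match gets accepted; a 20-percent guess goes to a person, which is exactly the "factor in the confidence, not just the decision" point.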
                                         
                                         And the third thing would be,
                                         
                                         this is a very important thing,
                                         
                                         because the real world scenarios are so different.
                                         
This is why, even though people were thinking Google was giving us free maps to use, they were mapping out the entire transportation system, and then also sending cars out to map real-world scenarios, so they could advance their self-driving mechanism for all the anticipated real-world scenarios. So you have to not only train your system on the model, you have to let it out in the wild and follow it, to see if it is making the right decisions consistently before you can deploy it. Consistently is the key, right? And the last one is, a lot of
                                         
companies don't do that. This, again, is my pet peeve. When a machine makes a certain decision, you need to capture not only the decision, but also the data inputs and the parameters that were used. Put it in, as the old political saying goes, a lockbox, and have that available at a later time, because when it comes to people questioning it, if a legal system or a court questions why it made the decision and how it arrived at that, you will have a locked, sealed, signed record to prove: these are the data inputs and parameters we had, this is the decision the system made, and I'm 100 percent confident that if you put a human in to evaluate those terms, they would come to the same exact decision. And that is not in place in most places, in my mind.

So, in closing, one of the
                                         
things, you know, is how you close your article, and I think that this is an important way to close this discussion as well: this core question of the organizational culture and the organizational ethics.
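Andy's "lockbox" point above, capturing the decision together with its inputs and parameters in a sealed record, could be sketched like this. The field names and the hashing scheme are hypothetical; the idea is just that tampering with any captured field becomes detectable later.

```python
# Illustrative "lockbox" audit record: store inputs, parameters, and the
# decision together, with a hash seal so later tampering is detectable.
# Field names and the sealing scheme are hypothetical.
import hashlib
import json

def make_audit_record(inputs, params, decision):
    """Bundle everything that produced a decision, then seal it."""
    record = {"inputs": inputs, "params": params, "decision": decision}
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["seal"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_audit_record(record):
    """Recompute the seal to check the record was not altered."""
    body = {k: v for k, v in record.items() if k != "seal"}
    payload = json.dumps(body, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == record["seal"]

rec = make_audit_record(
    {"image_id": 42},
    {"model": "v3", "threshold": 0.95},
    {"label": "andy", "confidence": 0.97},
)
print(verify_audit_record(rec))  # True
rec["decision"]["label"] = "someone_else"  # simulated tampering
print(verify_audit_record(rec))  # False
```

A real system would also sign and timestamp these records, but even this minimal version gives you something to hand a court: the inputs, the parameters, and the decision, bound together.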
                                         
    
Essentially, if you have an immoral organization or an amoral organization, or if you have one that doesn't understand that you need to make calls like this, you can't train a system to be more moral than the people who are trying to train the
                                         
                                         system, right? I mean, if they don't keep these things in mind, if they don't focus on these
                                         
                                         issues, they will never be able to come up with an AI system that does because they, you know,
                                         
won't have it. And so, for example, you mentioned, you know, you have to have specific goals toward making sure that things are culturally sensitive to the whole world of people, not just the limited training set. The same thing is true all across: you don't want to make a system that's immoral.
                                         
                                         I think that this is honestly a hopeful message from you, though, that perhaps, you know, the culture of the organization can be a valuable aspect here.
                                         
    
Yeah, I mean, it comes down to whether you're making the right decisions. Who is producing those models, which organization is producing that? Are they known to produce good models, or are they Cambridge Analytica? It depends. If you are known to be biased politically, socially, morally, or otherwise, then what guarantee would people have that you would do the right thing?
                                         
                                         Okay, doing the right thing, as I say, when nobody's watching, this is, I think, the famous
                                         
                                         quote by the Intel founder, wasn't it?
                                         
    
                                         It's about integrity.
                                         
                                         Integrity is about doing the right thing when no one is watching, right? So that's all we can hope for.
                                         
                                         Absolutely, that is. And I think that really that is the ultimate answer to
                                         
                                         this whole question is figuring out how we can all make systems that do the
                                         
                                         right thing. And I appreciate it. Thank you very much for bringing up this topic,
                                         
                                         Andy. Tell us a little bit more. Where can we find, first, I guess, where can we find this article and where can we find more of your work?
                                         
Right. So I post all of my work at my website, thefieldcto.com. Again, that's thefieldcto.com. And you can follow me on either LinkedIn or Twitter at @AndyThurai. And the articles, I publish in different places.
                                         
    
                                         I think a couple of the ones we talked about,
                                         
                                         one is in AI World, one is in Forbes.
                                         
                                         But if you go to my website,
                                         
                                         there are links under article sections
                                         
                                         that you could find them all.
                                         
                                         Thanks a lot.
                                         
                                         And as for me, if you're interested
                                         
                                         in corresponding with me,
                                         
    
you can find me on Twitter at @SFoskett. I also write at gestaltit.com. And of course, this is the Utilizing AI podcast. You can find us at utilizing-ai.com, or @utilizing_ai on Twitter. And please do look this up in your favorite podcast feed; you can like and rate and review the podcast there. That helps with folks finding it.
                                         
                                         And you can also subscribe through that or at our website. So thank you very much for joining us.
                                         
                                         Again, this is Utilizing AI, and we'll be back next time with another discussion of how
                                         
                                         we bring AI technology into the real world.
                                         
