Utilizing AI 3x05: The Philosophical and Religious Aspects of AI
Episode Date: October 5, 2021

In this episode, we consider the moral and ethical dimensions of artificial intelligence. Leon Adato, host of the Technically Religious podcast, joins Frederic Van Haren and Stephen Foskett to consider the boundaries of technology and the choices we make. Leon suggests that the unintentional, unconscious, and undetectable impact of AI is the key consideration, not the science-fiction questions of AI and religion. Many religions seek to apply the lessons of the past to new technologies and situations, and these can provide a unique insight into how we as a society should proceed. We must also re-evaluate the systems we put in place to ask whether the machine is doing what we wanted it to do and what the side effects are.

Three Questions
Can you think of any fields that have not yet been touched by AI?
Will we ever see a Hollywood-style "artificial mind" like Mr. Data or other characters?
Tom Hollingsworth: Can AI ever recognize that it's biased and learn how to overcome it?

Guests and Hosts
Leon Adato, Head Geek at SolarWinds and host of the Technically Religious podcast. Follow Leon at adatosystems.com. You can also connect with Leon on LinkedIn or on Twitter @LeonAdato.
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 10/05/2021
Tags: @LeonAdato, @SFoskett, @FredericVHaren
Transcript
I'm Stephen Foskett.
I'm Frederic Van Haren.
And this is the Utilizing AI podcast.
Welcome to this episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
Each time we meet, we bring up a different idea with our guests.
And one of those ideas that has come up again and again is the sort of moral and ethical implications of all these things that we're doing.
To quote Jurassic Park, just because we can do something, does that mean we should do it?
What do you think, Fred?
Well, I think just like we talked a little bit about in the previous episodes, AI is
all about the data, right?
You process the data, but you have no idea where the data is coming from.
So you kind of have to put some boundaries and mechanisms in place to understand what those AI models are representing, and maybe also look a little bit more at where the data came from, such that we can make a solid understanding and a solid review of what those models really represent.
Yeah, I've been thinking about this basically my whole career. For what it's worth, I actually
majored in the interaction of society and technology back in the day.
And I've talked to a lot of people in the industry about those moral and sometimes even
religious questions about technology and how this impacts us, how it impacts the world.
And that got me thinking of a good friend of mine.
And we've invited Leon to be our guest today. So Leon,
tell us a little bit about yourself so that we can jump into this conversation.
Sure thing. My name is Leon Adato, and I am a head geek. Yes, that's actually my job title
at a company called SolarWinds. Yes, that one. And I have been in IT for about 30 years. I've worked in the monitoring space
of technology for about 20 of those years. I'm also one of the hosts of a podcast called
Technically Religious, which talks about folks who have a strong religious, ethical, or moral
point of view, and also a career in IT and how you make those two things support each other,
or at least not conflict horribly where you don't know what to do next. So I also identify as an Orthodox Jew, and frequently
my rabbi will admit to the fact that I belong to the congregation too, although sometimes that's
a little bit more difficult for him, depending on what I've been saying on podcasts over time.
So I've actually been a guest on Technically Religious, and it was a lot of fun, because
frankly, people don't ask these questions all the time. I feel like a lot of the time,
it's not that we're not thinking of these things. It's simply that technology sometimes runs away
with it. At least that's how it seems to me. One of the books that I read back in college was focused on technological
determinism, which is something that we've brought up previously on the podcast here, along with people like Bertrand Russell and so on. The idea that, basically, once you create an atomic bomb, it basically drops itself, because there's no way in the world we're going to be able to keep that genie in the bottle.
Is that how you look at these things or do you have a different perspective?
No, it's, I mean, it's Chekhov's technology. And just to frame that, there's an old rule about
Chekhov's shotgun, which is, you know, if in act one of a play, there's a shotgun hanging over the
door, by the end of the play, that shotgun will get shot. Like it can't not, it wouldn't have been there otherwise. I also, and I was looking for the source of this quote,
I can't find it. So I apologize for not giving full attribution, but all technical decisions
are ethical decisions. And I think that's something that we often wish weren't the case.
And we sort of actively try to live in a world where that might be the
case, but even saying, I'm not going to choose is an ethical choice because you have chosen
something. You just have acknowledged that it's not going to be harmful enough to you that you
need to change it. And I think that that is absolutely relevant to machine learning and AI in that it's so big and it's so complicated and the use cases are still not quite completely fleshed out.
The boundaries of what it can be are not clear.
And therefore, people are doing a lot of choosing not to choose and thinking that, well, that means that it won't be bad.
No, it can be horrifically bad
by not choosing. Right. Well, we talk a lot about ethics, and you made some interesting points. Now, purely from an AI perspective, what do you see as the biggest issues or the biggest feedback in the market regarding religion and the AI models that are deployed today?
Okay. Well, there are two. And I'll go with the fun one first, which is: can an AI become a member of a faith?
And I can't speak for any other faith except for mine.
I can't even speak for mine holistically.
But I will say that this is something that Judaism and the Talmud have already wrestled with in a few different ways. Believe it or not, a thousands-of-years-old document did consider whether a non-human intelligence could be considered a member of Judaism. And the answer is no. I apologize to Data and Marvin and all the other androids and robots out there that might have been hoping to join the tribe. It's not a thing. It's a human-based concept. But they really have thought about that. So that's the fun one, right? And we can play around with whether, you know, Buddhism or Christianity or whatever would accept a non-carbon-based life form, an artificial intelligence, into the faith as a valid member of some kind.
More realistically though, I think where religion
and artificial intelligence come into play
or think about each other is in impact.
How is this entity, if we can even call it an entity, going to impact us? I'm really nervous about applying science fiction principles of AI to what we have as AI, which is in many cases just a really sophisticated algorithm, so, machine learning. But I think it still comes down to: what is the unintentional,
unconscious, or undetectable impact of this technology in our
lives? But I will say from a Jewish standpoint that that was the same conversation that was had
with television and with other external influences that insinuated themselves into your life sort of uninvited. And I'll say that back in the late 60s, early 70s, the decision was made in a lot of Orthodox communities that televisions weren't okay. And believe it or not, computers and pads and phones snuck in the back door because they were seen as, you know, not media-consuming devices but computing devices. And so they ended up in houses before people at large recognized how much media and how much of that kind of influence they were going to have.
So if I got it correct, you're saying that it's very difficult for AI to learn completely about Judaism. Is that because people who practice Judaism will not accept an AI model that has learned it, or just because it's not a human? So that's a really good question. Will Judaism
accept an AI-based decision-making system in certain cases? Yes. I don't think that there's any issue with
Judaism vis-a-vis self-driving cars. That's effectively really, really sophisticated
automation, which we have today. I think in the case of, for example, medical decisions or
educational decisions or any sort of life impact decisions, I believe,
and I can't cite my sources, that that would be absolutely disregarded. Those are conversations
you have with somebody, a human who knows you, who is cognizant of not only your life situation and your details, but your emotional state and your tolerance for growth
and risk and things like that. And I don't see anyone accepting an AI as being able to
accommodate all those variables along with life experience, along with all the rest of it.
I think that to bring this a little bit, another dimension into here,
Leon and I both live in Ohio and so are familiar as well with the Amish. And I wanted to point out
that if you've ever wondered why Amish people don't drive cars or have telephones or bathrooms
or whatever, it's not because they're Luddites and anti-technology. It's because the core question
that their communities ask is, is this technology
going to support our community or destroy our community? And so the real reason that they choose
not to have telephones, for example, is because it allows a type of communication that would be
destructive to community. And that's the real reason that they don't want to use, you know, electric power, for example, because it relies on a community outside their community, and that would be destructive to their community. And I think that that's a really interesting insight.
And if I can draw a parallel, it seems to me that people who have thought on the moral implications
of artificial intelligence are often asking themselves a very similar question.
They're not asking, you know, that anthropomorphic question that Leon mentioned, like, can Mr. Data be a member of a religion? Because that's, well, foolish. Like, that's not a real question. That's not a
real situation. What they're saying is, should we use the lessons that we have cultivated over thousands of years, potentially?
And how do those lessons apply given this entirely new situation? At least that's how
I see it. Is that kind of what's going on there? Yeah. I mean, again, speaking for my own area of knowledge and faith, Orthodox Judaism takes a day off from technology, not everything off the way the Amish community has. And by that I mean once a week, on Shabbat, anything, as I like to say, anything with an on switch is off limits. And it is categorized as work, which is a really poor translation of the thing that is forbidden. But the idea is that, you know, I can walk three miles to synagogue, but I can't flick a light switch. Like, that doesn't sound right, because it's not about work. It's about doing the things that build connection and community. Also, in the case of the Sabbath, it's the idea of avoiding imposing our will or our creative energy on the world. We take a day
to just step back. And again, I think artificial intelligence is interesting, but it almost
doesn't have that off switch capability. It's very difficult to let it be what it is and also
have it stop being what it is and take a break. I'll give you a very simplistic example.
I don't have any Alexa or Google type devices, listening devices, because they're constantly on, and there isn't a routine, a program that will say: if it's Friday at sundown, shut off, stop listening, stop behaving, stop thinking, stop everything. Listening and recording data and transmitting data and possibly responding to data all the time, all through that Sabbath period, is not consistent with the values of the day. And there's your analog to the Amish example that you gave.
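Purely as an illustration of the kind of routine Leon is describing, here is a minimal sketch of what such a sundown shutoff might look like. Everything in it is hypothetical: the set_listening() call stands in for whatever control a real device SDK might expose, and the sunset time is a hard-coded placeholder rather than a real calculation.

```python
# A hypothetical sketch of a "Sabbath mode" for an always-listening assistant.
# set_listening() and the fixed 6:30 PM sunset are placeholders, not a real SDK.
from datetime import datetime, timedelta
from typing import Optional

FRIDAY, SATURDAY = 4, 5  # datetime.weekday(): Monday is 0


def set_listening(enabled: bool) -> None:
    """Stand-in for whatever call a real device SDK would expose."""
    print(f"listening {'enabled' if enabled else 'disabled'}")


def local_sunset(day: datetime) -> datetime:
    """Placeholder: pretend sunset is 6:30 PM local time every day."""
    return day.replace(hour=18, minute=30, second=0, microsecond=0)


def in_shabbat_window(now: datetime) -> bool:
    """True from Friday sundown until roughly an hour after Saturday sundown."""
    if now.weekday() == FRIDAY:
        return now >= local_sunset(now)
    if now.weekday() == SATURDAY:
        return now <= local_sunset(now) + timedelta(hours=1)
    return False


def enforce_quiet(now: Optional[datetime] = None) -> None:
    """Disable listening during the Shabbat window and re-enable it afterward."""
    now = now or datetime.now()
    set_listening(not in_shabbat_window(now))
```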
Yeah, exactly. I think the difference between AI and the ability to use or not use a phone, or to watch a TV show or not, is that AI will be all over the place. It's everywhere. Maybe not today yet, but it will be in the future if there are no boundaries and checkpoints, right? And I think the point is that you cannot turn it off in the future. I mean, even a simple thermostat in your house is already driven by AI, and it will only get worse, right? And even kids' toys and so on will be using AI. Yeah. And I want to emphasize, as an IT person, as a technologist, that's what it's built for. That's part of the design. The whole point of AI-driven systems is that they will be all-encompassing so that they can bring everything in. I realize that we can point to, like, Minority Report. At the beginning of the movie, the main character is walking into the store and it says, hi, Joe, the shoes that you bought last week,
how are they working out for you?
What about this?
It will match things up. The thing that knows all about your behaviors is now able to integrate all sorts of experiences
into your life that you wouldn't have thought of
or wouldn't have realized you needed or wanted.
That's the system.
That's the point of it. But the problem is there's
no getting away. It either is all-encompassing or it's not AI, almost.
I think that this pervasive aspect of AI is really one of the key questions that we as a society need to decide on. And as Frederic said, if this thing is always on, everywhere, what does that mean for the rest of us and for the choices that we are going to make?
And frankly, I think that some of these questions have appeared recently in the news, in the stories that we're hearing about artificial intelligence and people over-relying on it. I mean, certainly we've heard the asleep-at-the-wheel Tesla driver stories, many stories about that sort of over-reliance on artificial intelligence systems. Certainly we've seen the comics and jokes about how artificial intelligence systems just get your question completely wrong. As my daughter learns every time she asks the Apple assistant to play a song she wants to
hear and uses the words of the song instead of the title of the song and then is disappointed
that it won't play it for her. You know, what are some of the examples, Leon, that you can give of AI kind of going off
the rails in a way that's sort of predictable and that you saw coming? So I want to address
the boundaries thing first, and then we'll talk about the going off the rails. The boundaries
thing is interesting because in Judaism, there's this concept of, the Hebrew word is yichud,
which is privacy or
aloneness. And when you think about like, what is it that makes a married couple a married couple?
You know, the public ceremony isn't it because people get married without it and the blah, blah,
blah, blah, you know, the gift registry and all that, that's not it. What it is, is the privilege
of two people to be alone in a room together unsupervised. And I recognize that for some of
the listeners, that may be a completely wild and out there concept. And I would be happy to answer questions on Twitter or whatever,
but that is in Judaism, one of the primary things is that you're allowed to be alone together.
There is a boundary drawn between the public space and the private space. And within the private
space, you are either privileged to go or not privileged to go, depending on your status. And I think that one of the issues that technologists need to think about is how to
create those boundary spaces, those liminal spaces where AI is not permitted to go because then we
stop being human or it stops being AI. It intrudes on an area where it shouldn't go. So having said that, having talked about these liminal spaces, I think where AI goes off the rails is when it's trying to cross into an area where it's not ready or for which it's uniquely unsuited. So, some places where AI has gone completely wrong: I think the amusing example is where they put an AI online and had it just continually consume everything off of social media. And within, I think it was less than 24 hours, the AI had become this rabid, racist, conspiracy-theory-spewing hate machine that, you know,
I mean, literally garbage in, garbage out, which is amusing because it's not a real person,
so you can turn it off.
But it's also horrifying because I think we all sense on some level that that's us to
a lesser extent, but it's us and it could be more of us if we're
not careful. And so there's the funny example. The bad example, I think, is what you see happening with, for example, the YouTube algorithm, where it's taking kids, little bitty kids, pre-verbal kids who are just clicking on funny videos, but the more they click, the weirder the videos get, and the more off-putting they get. And parents who
leave their child alone for an extended period of time and come back and see the videos that
the child is consuming, like, why did you put that there? Why would you ever put that there?
Because the algorithm doesn't know. It's not tuned for that kind of
thing. I also think that, and this is something that was true before the digital age, but even more so now, there's things like Instagram and the effect it has on body positivity and body image for people of all ages, where the algorithm is choosing who you get to see and which of those pictures you get to see. In fact, as of this recording, you know, last night we heard one of the Facebook whistleblowers say that it was not intentional,
but they knew it was happening and they intentionally chose not to stop it from
happening, that Instagram was making people, especially teen girls, feel worse about themselves.
But if they changed the algorithm, people would stop using the platform as much and they didn't
want that. So again, every technical decision is an ethical decision. And I think that that's a great example of a bad example of that.
And the thing that struck me about that is essentially, well, if you build a machine that does thing A, you can't be shocked when it does thing A.
You know what I mean?
If the machine's job is to mimic the speech that's seen on the internet, well, then you can't clutch your pearls when it does exactly that. You know, if the machine's job is to make people click, like, hate, or discuss more in the comments, and it's an artificial intelligence, I mean, a machine learning system that's constantly running and refining itself, you can't be shocked and annoyed when that's exactly what it does. Because often, I think what we'll see is that the machine doesn't care, and the machine will just head in whatever direction is the most efficient way to go. I mean, it would be the same as if I'm sitting in my car, and I turn it on, and I say, I need to get over there. I can't be mad that my car will drive directly in that direction when I stomp on the gas, despite the fact that there's, I don't know, a road that goes some other way, right? I mean, the car doesn't care. I said, go that way. It's going to go that way. And to me, that seems like what Facebook is encountering, and Instagram and Google and Apple. Everybody is encountering this question of, I made a machine to do a thing, and now I'm mad that it's doing that thing. Right. Although for me, the fact that
social media, Instagram, et cetera, are doing a thing isn't the shocking part. The shocking part is that within the halls of the social media empires themselves, they're aware that it is causing harm and they're actively choosing not to fix it. Here's an example from a few weeks ago: Tesla had a problem where the self-driving was automatically engaging. Cars that were parked were automatically engaging into self-drive and going forward at ludicrous speed.
And I know that because one of my friends ended up in a tree.
The car literally just popped itself into gear, went forward, and hit the first thing that it ran
into, which was a tree. And that was obviously a bug. Now, I'm not saying Tesla knew about it. They didn't. But if they had known about it and said, yeah, but think about how many more Teslas we'll sell if we keep this in place, everyone would be up in arms. Like, that's not okay. And that's effectively what the admission last night was saying: we know we are actually running people into emotional trees, we know that's happening, and as a result they're using it more, so, all good. That's not okay. I don't know if on this series you've talked about the show The Social Dilemma. It's actually available for free now on YouTube.
And it talks to the creators of social media tools, Facebook, Google, Instagram, et cetera, and the decision-making that went into it. The thing that got me most though was that the people
who created the algorithms, who actually built the rules that cause you to get sucked into social media, whether it's positive
or negative, they themselves admitted that they can't avoid it. So the thing that was surprising
to me was that the magician thought that the trick was magic, even as they were the ones who designed the trick. And that tells you how
immensely powerful, and perhaps inadvisably powerful, these systems are. My hope is that after the news last night, and with the ongoing discovery of what's happening inside, there will be more thoughtful decisions. I'm not saying oversight, because that's a trigger word, but more thoughtful discussions about the real human impact and whether anybody should be okay with that real human impact. Because in no other situation would we accept that. You know, a few of our pilots crash into mountains, but most of the planes land okay? Not an okay statement. Or, yes, we know that this toy falls apart completely, but you'll buy more of them, so isn't that great?
Our stockholders think it's great.
Nobody would be okay with that statement. And yet in social media, because it's emotions and it's sympathies and it's micro-behaviors, you know, we somehow let it slide. And honestly, I think that's where religion can play a key role. You know, Rabbi Jonathan Sacks, who was the chief rabbi of England and who passed away just about a year ago, one of the things he said famously was that science and technology take things apart to see how they work, and religion puts things together to see what they mean. And that second part, the "we built the machine, now let's take a look at it and see what it's doing to people," is, I think, sorely lacking in technology in general. And I say this as a
technologist. Right, but I think it's the commercial aspect of it, right? Like with any development
cycle, trying to make a change after the product is in the market is a very costly event.
And a lot of organizations would prefer to work around it than to solve it.
I mean, I saw a tweet from Elon Musk last week to his followers basically saying, hey, please stop reporting car incidents with the beta software of the car. Right? So it's kind of using social media to manipulate the market. But earlier you said, you know, actively not doing anything. I think that's still nicely said. I think the phrase passively doing nothing is really more of an indication of what's happening. And I think one of the things that came out last night
in the report, you know, 60 Minutes,
is that commercial is still the number one topic, right?
Anything that commercially provides an advantage
is something they will focus on.
And that doesn't mean that everybody in the organization thinks like that, but it does mean that in capitalism, money and the economy are still the number one issue. I mean, I'm wondering, because the challenge is, if religion is not built in at the early stages of the product, it's extremely difficult to introduce it or to get it accepted afterwards. So what do you do, right? Do you reject the usage of the model, which is a challenge because AI is going to be all over the place? How can we then, from an ethical and religious standpoint, engage and introduce ourselves into the process as early as possible?
Well, I think, first of all, to clarify ethics versus religion: obviously it is much more difficult to be ethical without some sort of foundational religious base. It's not impossible, but it is more challenging.
And what we're talking about is helping technical organizations make
ethical decisions, even when those decisions may be fiscally disadvantageous. But that's nothing
new. I think part of the problem we have here in America is the idea that a company actually has the status of a person in certain legal contexts, when it's not one. You were just hinting at it: not everybody in the company thinks a particular way, but the company doesn't think. The company is a fiscal entity that exists to create value for the shareholders. That is what a company is. That's what Facebook is. That's what Google is. And to imagine that they are some sort of thinking, breathing, believing, ethically behaving entity creates that false premise, just like Stephen mentioned: if I point my car in a particular direction, I shouldn't be shocked that it goes that way. That's what a company is. So I think, first of all, acknowledging that that's what a company is. And therefore, we don't need to feel bad about calling a company on the carpet to behave in an ethical way. They can make choices to only make a billion dollars
this year, not to make $3 billion this year. That is a choice that the people within the company
can make. The only ways that we have ever had to do that are either through economic pressure or legal pressure. The problem is that the legal route takes an inordinately long time, and I think it makes us feel like justice can be obtained when it can't. So our only other choice is to stop using it. Or to... well, no, there's no "or to." To stop using it. If something isn't behaving in an ethical way, don't use it. Look, Facebook is down today. Facebook, WhatsApp, all the Facebook properties are down today. They're down because it's DNS, because it's always DNS. That's a different story. But it's allowed several of us to have a conversation: I've been off of Facebook for three and a half years now, I haven't felt a pang of regret; and so-and-so says, I've been off for six months, I deleted my account, I this, I that. I think those are the kinds of conversations that say, this was never healthy for you. It was never good. What minimal value you thought it had can be recouped in other ways. Let's continue as humans to have that conversation. Back to the
Amish that Stephen mentioned, is this building community or is it tearing down community?
If it's tearing down community, let's highlight those things. Let's talk about it. And let's
talk about the benefits of the other options. Yeah. And I see that as a very useful tool from religion that we in technology can borrow: asking ourselves that basic question, you know, is this accomplishing the goal I wanted it to accomplish? Is this supporting, you know, positive outcomes? And I think, too, another thing that strikes me after hearing you both speak,
you know, Frederic, you mentioned, you know, capitalism, and there's really no difference between capitalism and a machine learning model. These are just systems that we build, and then we engage them and they do the thing, and we have to continually ask ourselves, you know, do we need to refine this thing? Do we need to change this thing? Should we be asking questions? I think people sometimes maybe just assume that AI is magical or done or something, but it's not. It's constantly in flux and we're constantly refining it. And that's a good thing, not a bad thing. You know, when your digital assistant gets your song wrong, that's a learning opportunity, not, you know, an abject failure.
So we shall see. We are getting along here in the podcast, though. So the time has come.
The time has come, Leon. As you know, we ask each of our guests three unexpected questions at the
end of each episode. One of those questions comes from me,
one of them from my co-host, Frederic, and then a third is actually coming from a previous
guest on the podcast. This is a new tradition that we started in season three, and we're really
enjoying the unexpected questions that we get from our guests. The guests have not been prepared for these questions, so we're going to get some off-the-cuff
answers and it's going to be a lot of fun, I hope.
So let's go ahead and do this.
Frederic, why don't you go ask yours first?
You can go first.
Sure.
So, can you think of any fields that have not yet been touched by AI?
Give me a second.
Wow, the more physical trades was my initial thought, but no: carpentry, farming.
Those all have benefits to be gleaned from AI. Talmudic studies, the yeshiva system, has been utterly untouched by AI in almost, if not, every single way. The partner-based, person-to-person, face-to-face learning that occurs
from teacher to student and between students to explore ideas, AI can't help with that.
Even when you say that some of those texts could be put
into electronic form, AI isn't helping with that. There's some wonderful study guides where if you
highlight a phrase, it will show you all the other occurrences of that phrase, but that's not really
AI. It's not making any choices. It's just lookup. So that's going to be my answer: the yeshiva has been untouched by AI. The yeshiva. Okay. Well, that's an answer we've
not yet gotten to that question on this podcast. So that's new. All right. My question, you brought
it up. So I have to ask it. It's one of our questions. You literally teed this one up.
So here we go. Will we ever see a Hollywood style artificial mind like Mr. Data or are those just fictional fantasy
characters? Never say never, but I would say that, to your point, having that would be as ridiculous as having a human-shaped robot. Like, I don't need one, apologies to Elon Musk and his recent unveiling of the pretend butler thing that he showed. I think that having an artificial intelligence doing human things is unnecessary. I think that having an artificial intelligence doing things that humans can't do is the whole point of any force multiplier, any augmentative technology. Could it be done as a fluke, for fun?
Now, are you saying, will we ever see artificial intelligence become self-aware, truly becoming an entity that can ask why, look for meaning in its own actions, and visualize a better way to be?
I think that's possible. I think that that's built into the idea of being able to glean more
information and things like that. I think there's a possibility to have basically another life form,
a silicon-based life form. But a Mr. Data or a Marvin or, you know, things like that? Probably not, because it's a waste of time.
I think that was one of the better definitions of mind that we've gotten in answer to that question. So thank you for that. I should have known.
So as promised,
we're also going to use a question from a previous guest.
And this following question is brought to you
by someone you actually know personally.
Tom Hollingsworth, the networking nerd
here at Gestalt IT and Tech Field Day,
joined us recently and asked this question.
And frankly, Abby and I felt like this was a good one for you. So here
goes. Tom, take it away. Hi, I'm Tom Hollingsworth, the networking nerd of Gestalt IT and Tech Field
Day. And my question is, can AI ever recognize that it's biased and learn how to overcome it?
I just want to say that I'm so glad that you didn't ask me a question about BGP, because
I was not ready for that.
Okay.
I think not only can it, it must.
I think that is the singular challenge of our age and of where we are today with AI: if we cannot develop a better set of algorithms, machine learning, AI principles, where inherent bias can be detected and then corrected, then the end of AI is nigh, because it will be utterly useless to us as a real tool, because we won't be able to trust anything it provides for us in the long run. So again, not only can it, we must be building systems that do that, or else the entire concept is not going to be worthy of any more time or development money.
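As an illustrative aside, and not something discussed on the show, one narrow version of the bias detection Leon is calling for can be sketched as a simple fairness check on a model's outputs: compare positive-outcome rates across groups and flag the result when the ratio falls below the common four-fifths rule of thumb. The records, groups, and threshold below are assumptions for the example only.

```python
# Illustrative sketch of one narrow bias check: the "disparate impact" ratio.
# The records and the 0.8 threshold are example assumptions, not a real system.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def disparate_impact(records: Iterable[Tuple[str, bool]]) -> Tuple[float, Dict[str, float]]:
    """records: (group_label, predicted_positive) pairs from a model's output."""
    totals: Dict[str, int] = defaultdict(int)
    positives: Dict[str, int] = defaultdict(int)
    for group, positive in records:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    if not rates or max(rates.values()) == 0:
        return 1.0, rates
    return min(rates.values()) / max(rates.values()), rates


def check_for_bias(records: Iterable[Tuple[str, bool]], threshold: float = 0.8) -> float:
    """Warn when the minimum-to-maximum positive-rate ratio drops below the threshold."""
    ratio, rates = disparate_impact(records)
    if ratio < threshold:
        print(f"Possible bias: positive rates {rates}, ratio {ratio:.2f} < {threshold}")
    return ratio


# Made-up predictions purely for demonstration
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
check_for_bias(sample)
```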
Yeah, in a way, it kind of reminds me of a driving system that wouldn't be able to recognize that it was going off the road or about to hit something, right? You kind of have to build that in too. It's not enough to be able to go and turn and stop. You also have to have the guardrails built into it as well.
Well, thanks so much, Tom, for that question. Leon, we're looking forward to hearing what your
question might be for a future guest. And if you, the listeners, want to be part of this, you can.
Just send an email to host at utilizingai.com, and we will arrange to record your question
for a future podcast guest. So thank you, Leon, for joining us. I knew it would be an interesting
conversation, and it was. And frankly, I think we could have talked to you for another few hours.
If somebody does want to continue this conversation, where can they connect with you?
Mention your podcast and so on.
Where can they continue this conversation?
Sure.
So you can find me on the Twitters.
And I say that purely to horrify my children every time they hear it.
You can find me on the Twitters at Leon Adato.
You can also find me on LinkedIn.
And as I mentioned before,
Technically Religious,
which is www.technicallyreligious.com.
I also have a website
where I pontificate on things,
both technical and religious,
and that is adatosystems.com.
Yes, I'm Frederic Van Haren, and I'm the founder of HighFens, active in the HPC and AI markets. And you can find me on Twitter and LinkedIn as @FredericVHaren.
And more recently, I'm focusing heavily on data management, which seems to be a problem that a lot of enterprises are going through.
And as for me, Stephen Foskett here, you can find me at gestaltit.com. You can also find me
running the Tech Field Day event series. Every Wednesday, we also do a rundown of the week's news on Gestalt IT, and I do urge you to give that a listen. We put a lot of pride and time into that,
and we really enjoy being able to react to the week's enterprise IT news.
You can find that in your favorite podcast application, on YouTube at gestaltitvideo, or just at gestaltit.com.
So thank you very much for joining us today
for the Utilizing AI podcast.
If you enjoyed this discussion,
please, please give us a subscription.
It's free pretty much in every player you can find.
Give us a review maybe if you liked this or if you didn't.
I guess that's legit too.
And please do share this show with your friends.
This podcast, as I mentioned, is brought to you by gestaltit.com, your home for IT coverage
from across the enterprise.
For show notes and more episodes, though, you can go to our special website for this
podcast, which is utilizing-ai.com, or you can find us on Twitter at utilizing underscore AI.
Thanks for joining us, and we'll see you next time.