Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x16: Utilizing AI in 2022 with Chris Grundemann and Frederic Van Haren
Episode Date: January 4, 2022. AI is now widespread, and companies are starting to look at the real-world impact of machine learning. In this special episode of the Utilizing AI podcast, the three hosts look forward to AI in 2022 and revisit some of our guest questions from season three. First, we turn to the specific markets and verticals served by AI applications. We feel that datasets and models will increasingly be focused on specific business uses instead of being general-purpose tools. Next, we consider how the AI industry is increasingly concerned about ethics, bias, and privacy of data. Industry leaders like Timnit Gebru and Cynthia Rudin are showing how important social responsibility is to artificial intelligence. Finally, we turn to the continuing progress seen in AI technology. New methodologies, larger models, and increasingly critical real-time applications are transforming the technology, and ML hardware and instructions are everywhere from mobile devices to the datacenter and the cloud. "Three" Questions: Chris from Tony Paikaday: Can AI ever teach us to be more human? Frederic from Sriram Chandrasekaran: What do you think is the biggest AI technology that will transform medicine in the future? Chris from Leon Adato: What responsibility do you think IT folks have to ensure the things that we build are ethical? Frederic from Amanda Kelly: What is a tool that you were personally using a few years ago but you find you are not using anymore? Chris from Ben Taylor: Are you living up to your potential legacy? Frederic from Gerard Cavalinas: What scares you the most about AI? Stephen from Chris: What do you think will be the most surprising thing to come out of AI in the next three years? Stephen from Frederic: When do you think AI will be able to automatically identify if an AI model is ethical or not? Stephen from Chris: What do you think is the most overhyped application of AI people are talking about today?
Hosts: Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann. Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren. Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett. Date: 1/4/2022 Tags: @ChrisGrundemann, @SFoskett, @FredericVHaren
Transcript
I'm Stephen Foskett.
I'm Chris Grundemann.
I'm Frederic Van Haren.
And this is Utilizing AI.
Welcome to another episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
This is the first episode of 2022,
so we thought that we would take a look
back at what we've talked about, that we would look to the future about what we're going to see,
and maybe even ask each other three questions. That's right. There's no guests, just the three
hosts. So Chris, it's nice to have you here with me today. Yeah, thanks a lot, Stephen. And I'm
excited to kind of peer into our scrying pond, as it were, and look at what's coming
from 2022.
I think we've got a lot to draw on for the last three seasons, and it's going to be exciting
to chat about what we might see coming next.
Yeah, I agree.
I mean, I'm looking forward to 2022.
There's a lot of advancements going on in AI, and certainly a lot of organizations are
getting more exposure, and enterprises and organizations are more than ever willing to pursue AI and
looking for some information to transform their business.
When we started this a couple of years ago, now,
AI was just starting to get real.
It was just starting to come to the enterprise. And that's why we decided to start the podcast, because, quite frankly, I looked around, I was looking at a lot of the podcasts that existed,
a lot of the blogs, a lot of the presentations and seminars and discussions. And there were still
an awful lot of folks that were focused on data science and machine learning and sort of the ins
and outs and bits and bytes of how to do AI, as well as a lot of kind of like weird futurist
consumery, I don't know what,
like the robots are gonna come kill us all
kind of discussions.
But there wasn't really a lot of sort of
how do we do this thing discussions.
And so that's why I wanted to do the podcast at first.
That's why we had Andy and Chris and Frederick
joining us here as our hosts.
Do you think we achieved that? And how do you think that the
market has changed over the last few years, Frederick? Well, I think it has changed massively.
And I think you hit it right on the nail by using the word real, right? It used to be that AI was a
marketing term where enterprises just needed to play along. And you could see organizations heavily playing
the AI card from a marketing perspective, but lagging seriously from a technology perspective.
And I think from that perspective, it's important that people get information from
all different angles. And I think that's what utilizing AI is bringing to the table.
Yeah, I agree with that.
And I think that one of the things,
it may be a little niche,
but one of the things I've really seen happen is the starts of convergence of operational technology
and information technology, right?
That OT and IT.
And it's just the very beginnings,
but I think AI is part of that kind of amalgamation
of bringing manufacturing and sometimes healthcare and some of these kind of industrial control systems into the realm of digital transformation.
And I think that overall, the impact of AI on what we're calling digital transformation has been pretty huge.
So really kind of echoing what Frederick said, that this is becoming real and that people are actually finding practical applications for AI in more and more services. Yeah, that certainly is reflected
by some of the more practical episodes we've had. You know, when we had Tom Hollingsworth and Gerard
Cavalinas and so on, people talking about how AI is being used in, well, I guess, AIOps and so on.
And one of the things I think that we're seeing too,
and I guess this is the sort of the big question going forward, is AI seems to be focusing more
and more on specific markets, specific verticals. We've been asking our guests, you know, where has
AI been deployed? Where hasn't it been deployed? But it seems like it's really, really zeroing in
on specific markets and
verticals.
Frederick, what do you think of that?
Yeah, I think it's absolutely right.
I think transforming AI requires a lot of baseline technology, if you wish.
So it's not a surprise that from a vertical standpoint, that certain verticals get a lot
of attention.
And certainly AI is driven by open source mostly,
meaning that it drags other similar organizations
into the same vertical.
However, I do think that certainly for 2022
and moving along is that considering AI
is all about learning,
that the learnings from a particular vertical kind of
transpire into other verticals. And I think that's where the future of AI will benefit from
the heavily focused verticals to something more horizontal, if you wish, where there's a lot more
sharing and we will see other verticals and markets being penetrated by AI
and the knowledge that has been built up so far. I don't disagree with that directly, but I do think
that the nature of AI and machine learning in particular does have a predisposition to stay
verticalized. And what I mean is when you train up a model in a certain environment, it works best in similar environments. And I think we're going to see that for quite a bit longer
where folks who really target healthcare, for example, I think is one area that's really ripe.
As I mentioned before, like industrial applications and I think manufacturing is another area.
Obviously, logistics is an area that's been touched greatly by AI. And I think that there
are particular approaches and particular
models that work within these verticals for specific problem sets that maybe won't translate
very easily to others. Not that it won't ever happen. I think it will. But I still think we're
a couple years at least away from really seeing kind of across the board AI adoption and the same
models and same techniques. Right. I do see two points there. One is the methodologies
in order to build a model, right? So for example, if somebody is analyzing videos or pictures,
you know, there's multiple verticals, we could use that methodology. And then there is a second
component to what you're talking about is, is the reuse of certain, certain models that have been
built specifically within that vertical.
But mostly what I've been saying so far is the ability to build those models, right?
Reusing a model has always been vertical, right?
I mean, if you build a model for self-driving car, it's not going to be that useful in a
toaster, but you could use the methodologies you used in order to build your
self-driving car. So I think there's two components to that, but overall, I do agree with you.
Yeah, it really does come down to that question of models versus data. And I think that that's
one reason, one thing that makes AI an unusual application in that it does have very specific,
vertical specific, industry specific,
domain specific data that needs to feed the specific model. And I think that the pairing
of those two has caused us to have to diverge a bit market to market. And I can also see as well
that the sort of the black box nature of AI means that if we train it on data from the wrong vertical, right, then we could end up with a very surprising outcome.
I mean, I can just think right off the top of the bat, you know, I mean, if you have a sales AI assistant that's been trained on sort of B2C sales processes and communications patterns and so on, and you try to put that into a B2B
market, that might cause a big problem right there. And then of course, you can go nuts with
that too and think about like, okay, what's the difference between like a customer relationship
bot that has been trained on, you know, retail versus medical, right? Oh my gosh, that would
cause problems, right? So we have to have, I think we have to have verticalization of applications according to different domains. And I think
that that's what we're seeing too. So if you look at what Microsoft, Google, and now Amazon after
AWS reInvent are doing with a lot of their applications is that they're increasingly
verticalizing them along market segments instead of trying to come up with some sort of general
purpose kind of Lego brick tool. And that I think is, from my perspective, what's happening with AI.
Let's ask another question too. Another thing that's come up quite a lot in the Utilizing AI podcast is the question of ethics and, you know, considering the implications of the things that we're working on, this has been brought up in episodes talking about ghost workers, talking about inherent bias, talking about data sets, what you're feeding it.
Basically, you're going to get out what you put in.
We even talked a little bit about the religious aspects of this.
So I think that's another thing that we need to be considering.
And I think that's something we're going to see a lot of in 2022. Yeah, I do agree. And I see organizations still
struggle with ethics, right? As an example, we have Timnit Gebru, who used to be working for
Google, who now started her own institute, which is called the Distributed Artificial Intelligence Research Institute,
abbreviated DAIR. And she wants to use that to document AI's harms on minority groups. And I
think the reason why people cannot get a certain level of success in larger organizations is
because the larger organizations are still struggling with ethics and how to introduce that how to implement that and how to
deal with it. Absolutely, and this is, as Stephen said, definitely a topic we've touched on a couple of
times in the last couple of seasons. And Frederick, to your earlier point, right, this is
one area that I think really does cut across verticals and across markets, where approaching things in a way that will not introduce bias is super important and something that is,
you know, a common lesson that I think can be learned and then repeated in multiple places.
And what I mean is, you know, one of the things we've talked about a couple of times
is just the really simple application of bringing together both data scientists,
but also domain experts.
And then having, you know, a conversation around not just ethics, but bias in general,
and really bringing those two groups together. So we understand what data do we have? How can you actually use this data? What does this data actually mean? And not just looking at it as
simply a math problem, because I think that's where you can sometimes get into some trouble.
If you don't pay attention to what that data actually represents in the real world and are just doing, you know, calculations on it and running algorithms against
it, right? And so really having that domain expert in the room with the data scientists and
understanding how to use the data is super important. I think we'll see a lot more of that.
And then I think the piece of bias that we'll see, you know, more and more is definitely there
are some big ethical problems with whether it's
minorities or other disenfranchised or underrepresented groups. But bias can be just a
really simple business problem as well. If you're introducing bias into your data, even though it
has nothing to do with any kind of groups of humans in particular, it can still upset your
results and be detrimental to your company. And so I think looking at bias from a really
clear vantage point and then taking kind of some of the politics maybe and staying out of it
is going to be super important. And we're going to see a lot of that this year.
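Looking at bias from a clear vantage point, as Chris suggests, ultimately means measuring it. As a toy illustration (not from the episode; the predictions and group labels below are entirely hypothetical), one of the simplest checks is demographic parity: comparing the rate of positive outcomes a model produces across groups.

```python
# Minimal sketch: measure the demographic parity difference of a model's
# predictions. All predictions and group labels here are hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rate between any two groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [selection_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model predictions and group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)
print(gap)  # group "a" selected at 0.75, group "b" at 0.25 -> prints 0.5
```

A gap near zero doesn't prove a model is fair, but a large gap is exactly the kind of signal that gets the data scientists and domain experts into the same room.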
Yeah, I think one of the challenges too is data, right? If you have a lot of data
that doesn't really comply with ethics or ethical statements, then you have to recollect that data in such a way that you comply with ethical ruling.
And that might take a while.
It's not as easy as saying we're going to flip a switch and suddenly it's going to be ethical.
You're going to have to look at all the data you have and revisit that whole environment.
Yeah, and that brings up the point of privacy, right?
And I think some of the things that, you know, collecting these massive amounts of data and then analyzing it and having people involved in analyzing that data along with the machines can bring up some really big issues around privacy and anonymity, easy for you to say, in the workplace and in the environment in general.
And so I think we'll continue to see advances in combinations of cryptographic techniques along with machine learning in order to be able to make the calculations needed without exposing data in an unneeded way.
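One widely used technique in this space is differential privacy, a statistical cousin of the cryptographic approaches mentioned here: aggregate queries are answered with calibrated noise so that no single record is exposed. A minimal sketch, with hypothetical data, of a differentially private count query:

```python
import math
import random

def laplace_sample(scale, rng=random):
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Count matching records, with Laplace noise calibrated to epsilon.

    A count query changes by at most 1 when one record changes (sensitivity 1),
    so noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical example: how many users are over 40, without exposing any one record.
ages = [23, 45, 31, 52, 38, 61, 29, 44]
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(round(noisy))  # close to the true count of 4, but randomized
```

Smaller epsilon means more noise and stronger privacy; the design choice is exactly the trade-off between making the calculations needed and not exposing data in an unneeded way.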
And this is, I think, something that is just inevitable to happen. I mean, of course,
we brought this up again and again, the questions of ethics, the questions of bias,
the questions of data set. And as you mentioned, Frederick, there are people out there raising
these questions and raising them loudly and eloquently. So of course, you mentioned Timnit
Gebru. We've had a few people
on the podcast here. Another AI hero that I want to point out is Cynthia Rudin, who just won the
Squirrel AI Award, the "new Nobel" for artificial intelligence, for her work on socially responsible AI. I think that it's
just wonderful that these people are starting to ask these questions. And again, I mean, the way to make sure that AI isn't biased is to have people asking these
questions and to have tools and technologies focused on the core questions of ethics and
bias instead of just sort of hoping that things are going to be right.
So, you know, what do you think, like, practically speaking, is the forecast there?
Do you think that we're going to be able to have more, I guess, more unbiased or maybe less biased applications in the future,
Chris? Yeah, I think so. I do think that bias is something that is inherent in many ways to
humanity. And so anything we create is going to have inherent biases. I think the real key is going to be in learning to identify, understand, and mitigate those
biases, right?
I don't know that we're ever going to eliminate bias, definitely not in the next 12 months.
But what we can do is be aware of it, be cognizant of it, and find workarounds and make sure
that we're calling it out, changing it where we can, and mitigating it where we can't.
Yeah, so I think there will be increased
focus on ethics, but I think the implementation and making models more ethical is going to take
much longer than the next 12 months, really. Chris, that's basically what you said, right? It's
not going to be in the next 12 months, but I think the focus on ethics, it's right away. I think that is going up really
rapidly. Yeah, it's almost like with technology, we tend to have these things kind of slip in a
little bit. And we start using technologies without really understanding the ramifications.
I think that people and technologists and engineers in particular have a tendency to ask
how to do something instead of asking if we should do it. And that definitely has happened over and over again in the IT realm. And I think AI is another
example where we've kind of started using it in a bunch of different ways. And now we're slowing
down and backing up and saying, wait a minute, wait a minute, how do we actually approach this
in a responsible manner? And so that's great, right? And the best thing is in two, three,
four, five years, hopefully we'll resolve a bunch of those questions and built it into the process.
And we won't really need to pay attention to it as much anymore.
We'll see. As somebody who studied the history of technology for my undergrad, I think you're right that we're going to be looking at it, but when new technology is introduced, it does introduce new social implications for the use of that technology.
And we have to be aware that this happens.
And we also have to continually be on the lookout for the ways in which technology can go wrong. But let's turn now and talk about some of the improvements that have come down the road in artificial intelligence over the last year and what we're likely to see for improvements in the future. With everything from mobile devices to servers now having specialized machine learning instructions or co-processing or even integrated
processing in the CPUs now, we're going to have AI applications literally everywhere.
And pretty powerful stuff too.
I mean, that's the other crazy thing.
You look at the capabilities of the latest phone or even, I guess, the latest doorbell,
and they've got some heavy-duty specialized instructions there
to perform AI work.
What are you seeing coming down the line
that's going to be the big things in 2022?
Well, I think you're totally right, right?
So AI is all over the place.
And we live in a great time, at least from an AI perspective. There is very capable hardware and there is an ever-growing operational stack, DevOps and other methodologies in order to play around in the AI market. Now, at the same time, because there is very capable hardware,
people are thinking bigger and bigger and bigger.
So, you know, we talk about
the billion, trillion parameter models.
However, one thing that I also see happening
in the market is time to market, right?
You know, if you want to deal with billion parameters,
that's all great,
but are you willing to wait two and a half years
for a result?
No, you want to see this much faster. So you see a lot of the things we had issues with in HPC for the longest time, which is the parallelization. So today it's not saying, hey, I can do AI with a GPU. It's can you run and create a model with 60, 80 GPUs at the same time, which, of course, is a significantly larger problem to scale, to maintain, to comply with ethics, and also making sure that your software is always up to date. So I do think, in summary, that the hardware-software stack is great and
gives you a lot of capabilities.
You want to have a lot of parameters, but your time to market,
the time for you to go from data to a model is very important now
in a competitive market where people are seeing AI as a competitive advantage.
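The multi-GPU scaling Frederic describes is, at its core, synchronous data parallelism: every worker computes gradients on its own shard of the data, the gradients are averaged (an all-reduce), and one shared update is applied to the model. A toy sketch in plain Python, with hypothetical data standing in for the GPU shards, fitting y = w * x by least squares:

```python
# Toy sketch of data-parallel training: each "worker" computes a gradient
# on its own shard of data, and the gradients are averaged (an all-reduce)
# before one shared weight update. Real systems do this across 60-80 GPUs;
# here it is plain Python fitting y = w * x.

def gradient(w, shard):
    """d/dw of mean squared error over one shard of (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr):
    """One synchronous data-parallel step: average the per-shard gradients."""
    grads = [gradient(w, s) for s in shards]  # computed "in parallel"
    avg = sum(grads) / len(grads)             # the all-reduce
    return w - lr * avg

# Hypothetical data generated from y = 3x, split across 4 workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards, lr=0.01)
print(round(w, 3))  # converges toward 3.0
```

The averaging step is where the scaling pain lives: in a real cluster that one line becomes network communication among dozens of GPUs, which is exactly the parallelization problem inherited from HPC.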
Well, from that note, what I'm seeing a lot of is sort of cloud bursting of machine learning
training and processing. Certainly, that's been a topic of a lot of the recent cloud conferences
and cloud technologies. How about you, Chris? Yeah, well, riffing off of that look towards
hardware a little bit that Frederick kind of took us down, I think that that's an interesting area where we're seeing maybe not quite an explosion yet in the last couple of years, but maybe in the next year or two, a true explosion of different form factors of hardware that's specific to AI or that, to your point earlier, just has AI built in, right? And we definitely see some of the big chip manufacturers adding in different instruction sets and different ways to get a regular CPU up to speed with some kind
of AI functions. But we also see folks building specific chips that are hardware neural networks
and optical chips that are specifically built for AI and TPUs in addition to GPUs. And I think
anybody who's asking the question, is it better to have purpose-built AI hardware or a combined kind of CPU that does both
is missing the point. And I think what we're going to see is this continued diversification
of hardware for different applications in different areas. Yeah, I totally agree. And
you could also see that a lot of the innovation, you know, at the hardware level isn't coming from the usual suspects. It's coming from smaller organizations that have a fresh view on the problem
and specifically and purposely build technology for solving AI problems.
And I think that is a major influence to the markets
where organizations are willing to go off the road from the mainstream
chip makers to get the biggest bang they can get from new technology.
But as I said, too, another interesting aspect that I'm following is the increasing deployment
of specialized instructions, even in mainstream processors that don't need offloads.
So that was one of the big topics, for example, with Intel and their Ice Lake,
their Alder Lake, they're coming out with this new Sapphire Rapids generation soon.
All of these processors are expected to have pretty advanced AI capabilities built into the CPU.
And as we said, with mobile devices too, Google's latest phones and of course
Apple's all have these processes built in. Do you think that there's going to be sort of a turn
away from specialized hardware? Or do you think that that's sort of what we're seeing as a split
between training and implementation of machine learning? Well, I think today, if you see the workflows that are in production, they just don't use
one type of hardware.
They use multiple types of hardware.
And I do believe that eventually it will be a handful of hardware technologies that will
win.
But I don't think there will be one type of chip that would be able to help you go through the whole workflow, unless it's a piece of hardware that acts like a chameleon and can act like a CPU when it needs to be, acts like an FPGA in other situations, or like a GPU in another situation. But I do think that the market will go to a handful of vendors,
but I don't think you could say or end up with only one or two. I think it's going to be still
a few of those that will be required. And not all AI problems are the same, right? Some are,
you know, like pictures are different than textual and so on.
So one of the things that happened during our season three of Utilizing AI is we continued our season two tradition of three questions, but we extended it to invite questions from
guests and from our other hosts and others.
I wanted to, for this special episode, turn that around. So basically, all the episodes of Utilizing AI, you've got me, and then you've got either
Chris or Frederick as a co-host.
So Chris and Frederick actually weren't there for some of these questions.
So I want to ask you guys some of the three questions that you maybe haven't had a chance
to think about or be exposed to.
So again, as always,
listeners, our guests haven't been prepared for this ahead of time. I'm going to spring them on
them in real time, and we're going to get a fun off-the-cuff answer from Chris and from Frederick.
So let's alternate here. Chris, I'm going to throw the first one to you, and this question comes from
Tony Paikaday, Senior Director of AI Systems at NVIDIA.
Hi, I'm Tony Paikaday, Senior Director of AI Systems at NVIDIA.
And this is my question.
Can AI ever teach us how to be more human?
Wow, I do think so.
I think technology in general has the ability to teach us to be more human in ways, whether
that's to contrast against humanity.
So you can see the difference between an AI decision without common sense and then appreciate a human's ability to have common sense, but also in delineating the tasks that are best
handled by a human and what those are. And I think by understanding the things that we can do that a
machine can't, we really understand ourselves a lot better.
Excellent answer, Chris. Frederick, over to you. Your question comes from Sriram Chandrasekaran, assistant
professor of biomedical engineering at the University
of Michigan.
Hi, I'm Sriram Chandrasekaran. I'm a professor at
the University of Michigan. I work on AI and healthcare. And my question is, what do you think is the biggest AI technology that will transform medicine in the future?
Well, it all starts with data. And I think the challenge with a lot of the medical AI methodologies and models they're trying to build today is that they don't really know what they're looking for, right? So yeah, it's an interesting question. I think I'm going to stick with data. It's like with everything else, right? So they're probably going to stumble over a piece of data and then find a solution to, let's say, cure cancer, so to speak. I don't know. I think beyond data, I don't see it. You know, my background is not in healthcare, so I wouldn't know.
But I would stick with data, yeah.
I think at some point they would collect enough data from the right sources and analyze it
and let the AI figure out that it can solve certain problems.
All right.
The next question is for Chris.
This one comes from Leon Adato, one of the hosts of the Technically Religious podcast.
Hi, my name is Leon Adato, and as one of the hosts of the podcast Technically Religious,
I thought I would ask something that has something to do with that area. I'm curious,
what responsibility do you think IT folks have to ensure that the things we build are ethical?
That's another really good one. I think every person has a responsibility to
imbue ethics into everything they can, really. And so as IT pros, we definitely are handcuffed
sometimes in implementing systems that maybe weren't our choice or to get to outcomes that
maybe we didn't choose. But we can always find other jobs as well. So I think we
do have a responsibility. I think that in a professional environment, there unfortunately
are some roadblocks, but as we've seen, you know, from folks leaving Google when they were
challenged on their ethical beliefs, you can actually vote with your feet. Excellent. Excellent.
And I think that we're seeing that really. So Frederick, this next question comes from Amanda
Kelly, co-founder of Streamlit.
Hi, I'm Amanda Kelly.
I'm one of the co-founders of Streamlit.
I would like to know, what is a tool that you were personally using a few years ago?
Maybe you were very hot on, but you find you're not using anymore.
Oh, as a developer, I used to work with C all the time.
And C is like, you compile everything ahead of time. It's very
strict. I don't use it anymore. Now it's all about DevOps, right? It's Python. C has completely
been replaced by Python. Although the execution of Python itself is slower than C, the fact that
you can develop and be more efficient as a developer using Python than with a language like C makes up for it. So yeah, C for me, C and C++, if you wish, are the tools that I don't use anymore.
And Python are the new ones. Seems like it's a Python world. Pythons and pandas, right? Right.
Chris, your final question comes from Ben Taylor, Chief Evangelist at DataRobot.
Hi, I'm Ben Taylor. I'm the Chief AI Evangelist at DataRobot.
So for my question, are you living up to your potential legacy?
I sure hope so.
It's something that I work on and I'm trying to do.
I use a combination of long-term, medium-term, and short-term goal setting to make sure that
I'm aligned with the things I want to accomplish in this lifetime. And I work as hard as I can to get there. So
I think so, but there's always more work to be done. So I'm my own harshest critic,
as most of us are. So it's hard to answer, but I hope so. I guess we all hope so. Frederick,
your final question comes from Gerard Cavalinas, founder at TechHouse570. Gerard, go ahead.
Hey, this is Gerard Cavalinas, the founder of TechHouse570. My question is, what scares you the most about AI? I would say, you know, the inability to make a decision.
I mean, the best example that people brought up to me was, you're driving in a car that has
AI built in it, it has control over your steering wheel, your gas pedal, your brake,
at some point, there's going to be an accident, and the two cars that are going to be involved in an accident have a conversation with each other.
And in one car, there is a young couple with a baby, and I'm over there by myself.
And the two cars then decide that I'm the one that should probably die.
And the fact that there's not much I can do about it,
that scares me actually, yeah.
The classic trolley problem, huh?
Right.
So Stephen, we didn't get to ask you any questions yet.
I've got a couple here and I think Frederick has one as well.
So after three years of utilizing AI,
you've got a little bit of an inside track on what's happening. So what do you think will be the most surprising thing we're going to
see come out of AI in the next three years? I mean, you know, what do you see happening that's
going to shock the rest of us? I'm going to sound like a super downer here, but I think the most
surprising result of all this AI technology is going to become some kind of massive, massive
disruption on society. In other words, it's going to go terribly, terribly wrong in a very specific way.
So maybe all of our Siris get a bad model pushed out to them or a bad data set pushed out to them
and cause some terrible problem. Or maybe, I think maybe more likely is that, you know, all those self-driving cars get a bad update,
and we have horrible, horrible issues. So yeah, I'm totally terrified about what comes out,
and I think it's going to be something pretty bad. Yeah, so my question for you, Stephen, is
when do you think AI will be able to automatically identify if an AI model is ethical or not?
Really? Never. Never. I don't think that AI has that capability. I think if there's anything we've learned in our conversations, it's that it's all about the people, and about the people asking
the question. And I do think that you're right. I do think there's going to be an ethics lifeguard
AI bot that people are going to create, but I just don't know that it's going to work. I think at the end of the day,
there has to be a person behind the keyboard looking at it. And it has to be a moral and
ethical person, like some of the folks we've talked about that are doing some really great things.
And they need to look at it and say, you know what, that's not right. And I'm not going to
put up with that. I think it's got to be a person.
So, Stephen, speaking of things that won't work, what do you think is the most overhyped application of AI that people are talking about today? What is it that we're all buzzed about that's just
not going to happen? Without a doubt, it is self-driving cars. I'm sorry. We've asked this
question on this podcast so many times. So my daughter thinks that she's
never going to need to learn to drive because she can just get in the car and tell it where to take
her. And if there's one thing we've learned, it's that the most hyped AI application is autonomous
driving. And the biggest letdown is autonomous driving. I'm sorry, it ain't going to happen
anytime soon. In fact, it may never happen. As one of our recent guests said, there's a difference between autonomous driving in highway conditions and autonomous driving on a dirt road in the backwoods of Montana, right?
That's just never going to happen.
And I think that that's going to be the biggest letdown for people.
So thank you both so much for joining us on every episode of Utilizing AI this season, and thank you for playing along with the three questions and coming up with three good, challenging ones for me.
If our listeners want to join the fun, you can.
Just send an email to host@utilizing-ai.com with your own question, and we'll record it and ask it on a future episode.
So before we go, I want to give you guys one last chance here. Where can we connect with you and follow what's going on with you in 2022 with artificial intelligence? Frederic?
Yeah, so I'm helping enterprises with efficient data management and designing large-scale AI clusters, and you can find me on LinkedIn and Twitter as Frederic Van Haren.
Chris?
Yeah.
One of the main things I do is help technology companies tell their story.
So if you've got a great new AI idea and want to figure out how to take it
to market, that would be an interesting reason to come talk to me.
You can find me at chrisgrundemann.com, on Twitter at chrisgrundemann, or on LinkedIn as well, same name, and I'd love to have a conversation.
Excellent. And as for me, you can find me at sfoskett on most social media networks.
And of course, the thing that I'm really excited about is we're going to do another AI Field Day
event in 2022. Please, if you're interested, give me a holler. I would love to hear from you,
maybe as one of the delegates around the table, or maybe as one of the presenters on the other side of the
table. And we would love to have you be part of AI Field Day. So please drop me a line. I'm
sfoskett@gestaltit.com, or just sfoskett on pretty much any social media network.
So thanks for listening to the Utilizing AI podcast. If you enjoyed this discussion, please do subscribe, rate, and review; that really does help. And please continue to share it. Our growth in the last year has been phenomenal. I kid you not,
999% growth in listenership in 2021. That's an actual number that we got out of Spotify.
Pretty cool. Thank you guys for joining us, and thank you everyone for being part of Utilizing AI.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. For show notes and more episodes, go to utilizing-ai.com or find us on Twitter
at utilizing underscore AI. Thanks for listening, and we'll see you next time.