Utilizing AI 3x15: Utilization of Shadow AI with Run:AI
Episode Date: December 21, 2021

Shadow IT is as old as our profession, so it's no surprise that shadow AI is becoming a major issue. In this episode of Utilizing AI, Ronen Dar and Gijsbert Janssen van Doorn join Frederic Van Haren and Stephen Foskett to discuss resource utilization and shadow AI. One of the biggest issues with shadow IT is the low utilization of resources that can result when they are purchased and used by a single corporate group or application. This is true both on-premises and in the cloud. But even if enterprise IT operations and infrastructure groups can come to understand AI, they must offer a compelling AI solution if they are to get control. The easiest way to do this is to deploy a much larger centralized solution than any single group could procure on its own and deliver it with a flexible, cloud-like access method. Another issue with shadow AI is that it often relies on a single individual and is difficult to reproduce, put into production, or scale.

Three Questions:
Frederic: Is it possible to create a truly unbiased AI?
Stephen: Is MLOps a lasting trend or just a step on the way to ML and DevOps becoming normal?
Tony Paikeday, Nvidia: Can AI ever teach us to be more human?

Guests and Hosts:
Ronen Dar, CTO and Co-Founder of Run:AI. Connect with Ronen on LinkedIn.
Gijsbert Janssen van Doorn, Director of Technical Product Marketing at Run:AI. Connect with Gijsbert on LinkedIn.
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on HighFens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 12/21/2021
Tags: @runailabs, @SFoskett, @FredericVHaren
Transcript
I'm Stephen Foskett.
I'm Frederic Van Haren.
And this is the Utilizing AI podcast.
Welcome to an episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, data science, and other topics.
Today, we're talking a little bit more about shadow AI.
This is one of those topics that's come up quite a lot, both here on the podcast and in the AI community in general, because, just by the nature of it, it seems like there's a lot of AI being brought into companies by different groups than traditional IT applications. Is that what you're seeing as well, Frederic?
Yeah, indeed. I mean, shadow AI is kind of an evolution of shadow IT.
It's typically somebody with a bright idea.
They want to apply some AI on it.
They throw some hardware at it.
And eventually they have, let's call it a prototype for the sake of the name of it.
And in the end, it is kind of an isolated event.
And so from an organization standpoint, where your goal is to bring a product to market, there is a lack of efficiency of the hardware.
The software stack and the hardware stack are not really defined.
And so there is really a need to pull that all together and to use some kind of an orchestrator to help you go from shadow AI, I should say, to a production environment.
But that's an appropriate slip there because, of course, shadow IT has been with us forever.
I think that if you're a student of computer history, you could reflect on the success of
x86 as being a result of shadow IT. People bought IBM PCs, they bought Magic Pencil as their spreadsheet,
they put it on their desk, and then later corporate IT had to figure out how to deal with this. And so
we've been dealing with this for a long, long time. So that's why we decided to invite on the podcast
today two folks from Run AI. So go ahead and introduce yourselves. Ronen, why don't you go first?
Yeah, so hello everyone. Great to be here. Stephen and Frederic, thank you so much for inviting me. So I'm Ronen, Ronen Dar. I'm the CTO
of Run.ai, one of the co-founders. We started Run.ai about three years ago and I'm happy to be here.
My name is Gijsbert Janssen van Doorn.
I'm really happy to be here as well.
I'm responsible for the technical product marketing at Run.ai.
I recently joined.
I joined in September.
Before that, I spent some time in multiple other startups
in the infrastructure area.
And it's great to now hop on a journey that's more related to AI and its infrastructure challenges.
I think that, Gijs, that's actually a good way to introduce this
topic because both you and I came from this world of enterprise IT operations and IT infrastructure.
And we saw this happen again and again and again, where you had shadow IT. Somebody said,
I need a solution.
I found my solution.
I'm not gonna wait for IT to deploy this thing.
I'm not gonna wait for them to come up to speed
or approve it or whatever.
I'm just gonna buy it.
I'm gonna make it happen.
And since I'm the business,
since I'm where the money comes from,
then that's gonna have to be okay.
And it's not like a negative attitude.
Honestly, it's a positive attitude.
It's an
innovative attitude, but it does cause some problems, right? I absolutely agree. We've seen it
basically since IT started, all the way from cloud where engineers and developers were just
pulling out their credit card and launching cloud resources, to Dropbox, which is a great example as well. Everybody had a Dropbox account, a private Dropbox account somewhere, to share files. And then all of a sudden IT is put in this position of, okay, we need to control this. We need to centralize this. We need
to make sure it's secure. We need to make sure it's efficient. And I think as with any technology, like I mentioned, sharing of files,
cloud, AI is now in a similar phase where people are buying resources because they need
them. And then they sit somewhere without the control of IT, making sure it's efficient.
Hence, shadow AI.
Right.
So what do you see as being the biggest challenge to help a customer that is heavily involved in shadow AI and wants to go to production?
So what are the biggest challenges you see there?
Yeah.
So I think, yeah, it's a good question. In shadow AI, you know, you have data science teams or AI teams just buying hardware, buying compute resources to run their workloads, to train models, and do what they need to do, right? And of course, it's like they get what they need
at the same moment, but they have this scattered
infrastructure which is not really efficient
and not really scalable.
And that's the place where IT can come
and centralize the infrastructure and provide a pool of resources that can be
shared and can be scalable, a scalable solution for the data scientist teams, for the AI teams.
I think in terms of challenges, we see two things, two main problems.
First, we see organizations just starting
to centralize their infrastructure, right?
They decide that AI is important in the organization.
They see it grow and then they want to scale it.
And the first thing,
they want to centralize the infrastructure
and the IT wants to take control
to do good things for the data scientists.
And then you run into the problems of getting started.
There's not a lot of best practices today on how to establish an AI infrastructure,
on which hardware should I buy, which storage should I buy,
what should be the software stack, what's important and what's not important.
So we see time after time that organizations are struggling to understand what the right solution is and what a good AI infrastructure should look like. So we see that for sure. We also see organizations that already built the infrastructure, right, or started to build it, and they need to grow it in terms of infrastructure, in terms of compute resources, in terms of the services data scientists need to do what they do best.
And what we see also very, very often is that data scientists are using a centralized infrastructure, but they need more.
They need more computing power.
They're asking for more GPUs, for example,
for more resources to do their job.
And many times they feel like they are limited
by the infrastructure.
So that's from one side.
And then from the other side,
the IT looks at the infrastructure
and the utilization is so low, right?
So time after time, we see 10% utilization,
20% utilization, right?
The infrastructure, GPUs are just sitting idle,
for example, and then while data scientists
are needing more compute, right?
So we see that as well as a very big challenge.
Yeah, I think I see two items in the market
that are big issues with shadow AI.
And the first one is shadow AI typically is done
by people who are more technical
while the corporate IT environment or organization is,
I wouldn't say they're less technical.
I think they're more focused
on the traditional type of hardware, right?
So in other words, a traditional IT organization might be more familiar with CPUs
than with GPUs, right? So what's a GPU and how do I use it? And why does one GPU cost so much more
than a CPU, for example? And I think the second piece is the methodology. It's the corporate IT organization understanding that DevOps is the way to go. And I think the challenge there is that DevOps is a little bit like: rock the boat all the time, but rock it a little bit, while I feel corporate IT is more like: don't rock the boat at all. How do you see, in the market, those kinds of, well, I wouldn't say confrontations, but different approaches between corporate IT and the shadow AI organizations?
Yeah, and I totally agree with what you're saying, right? The GPU is a new creature in the data center, right, for IT.
So there's a lot of challenges and questions
on how to manage GPUs,
how to build the infrastructure around GPUs.
And then it comes with DevOps and so on.
And there is conflict, right?
So I think the AI teams, the data science teams, they understand the benefits that a centralized infrastructure could bring, with a lot of compute that can be offered to them, right? If they could get that compute and easy access to that compute,
if they could get good services and good tools to consume that infrastructure, that would
be amazing for them.
They could be more productive, right?
They don't want to deal with infrastructure hassles.
They don't want to deal with compute, right?
They just want to run their workloads, train models, experiment with data, right?
Do what they do, but they don't want to deal with infrastructure.
If someone can provide it to them,
if IT can come in and be like the hero
that provides a cloud-like experience,
provide services, provide easy access to compute resources
for the data scientists, for the AI engineers,
that could be awesome, right?
And so I think that the conflicts are there.
So it's very clear what the data scientists and the AI engineers want.
They want easy life, good life, in terms of when they work with the infrastructure.
And the IT needs to provide it.
And I think that that can be done, right?
That can be done.
The IT enterprise can be the heroes
and the data scientists, the engineers
can get what they need.
I think that that's a really important point
that you're making there, Ronen,
because from my experience in enterprise IT,
the only way to get control of shadow IT is to offer a better and more
compelling solution. You can't just go in there and say, you're not supposed to have your own
stuff, so I'm going to take it. I mean, you go into your kid's room and you say, oh, I didn't
buy you this toy. I'm going to take it. Well, that's not going to lead to a good solution.
But if you can actually lead them to a better solution, if you can say, look, here's the thing. If each of you in application
groups buys one, you know, medium sized, you know, system, then we're going to have 10 of these
things throughout the business. But for the same money, we could buy
a mega size system, share it with you, and you can get your work done in a tenth of the time
because you're sharing these resources. It becomes more compelling, especially if you can offer it as
a service, in a more flexible way, to say, look, not only that, but we're not going to make
you like come to a meeting and schedule things. We're going to make sure that, you know, you can access it when you need to,
you can, you know, you can have the resources you need, you can get more done, but that's only
possible if we share these resources. Because, you know, utilization of resources and shadow IT really go hand in hand. I mean, they're really two sides of the same coin, right?
Yeah, absolutely.
I agree.
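To make the pooling argument concrete, here is a back-of-the-envelope sketch in Python. The 10-20% silo utilization echoes the figures cited earlier in the episode; the group count, GPU counts, and the 70% pooled utilization are purely illustrative assumptions, not numbers from the conversation.

```python
# Back-of-the-envelope: ten siloed 4-GPU boxes vs. one shared 40-GPU pool.
# Silo utilization reflects the 10-20% figures mentioned in the episode;
# all other numbers are illustrative assumptions.
GROUPS = 10
GPUS_PER_GROUP = 4
SILO_UTILIZATION = 0.15   # GPUs mostly idle inside a single team's silo
POOL_UTILIZATION = 0.70   # assumed: a scheduled shared pool runs much hotter

total_gpus = GROUPS * GPUS_PER_GROUP
silo_busy = total_gpus * SILO_UTILIZATION   # effective "busy" GPUs, siloed
pool_busy = total_gpus * POOL_UTILIZATION   # effective "busy" GPUs, pooled

print(f"Siloed: {silo_busy:.0f} of {total_gpus} GPUs doing useful work")
print(f"Pooled: {pool_busy:.0f} of {total_gpus} GPUs doing useful work")
print(f"Same spend, ~{pool_busy / silo_busy:.1f}x the effective throughput")
```

The exact ratio depends entirely on the assumed utilization numbers; the point is only that the same hardware spend goes much further once the idle time is shared away.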
Yeah, and I think IT has learned that over, let's say, the past decades. It's not the, how do I say it, the B-O-F-H, it's not that time anymore. They understand they need to deliver value. And I think cloud was a big enabler there, in that things like on-demand, pay-per-use, and self-service are important right now, because that enables their users to be happy, to do better, to do their job in a better way, in a more
efficient way.
And I think that's what IT is looking for.
It is that cloud-like experience.
I think that's what you mentioned, Stephen.
And that's exactly what they need to deliver.
And yes, I think GPU is a different beast. But I think with all the lessons learned
from how the IT departments dove into cloud,
how they delivered that as a service to their organization,
the whole DevOps movement,
I think IT now knows how they want to deliver it
and how they need to deliver it.
There just needs to be the right,
let's say, building blocks to ensure that they can deliver that.
Yeah, so you both talked about public clouds.
How do you see consumption of AI coming from shadow AI?
Do you see that on premises in their own data center?
Or do you see shadow AI kind of converting more to a consumption model
where you really don't know how much capacity you're going to need
and you're really not in the mood to spend a lot of money on GPUs ahead of time?
Do you see like a split between on-premises and public cloud?
Or how do you see that?
Yeah, that's a great question. I don't see a split. I think this similar situation also appears in the cloud, right? We spoke with a company that has a lot of data science activity, a lot of AI activity, and they have both an on-premises infrastructure and a cloud infrastructure. And they spend $10 million a month on AI, right? Just on AI and data science. And, you know, we're speaking with the IT and they don't really know why they're spending $10 million a month on that. And if they look at the utilization of all the cloud machines, they will see the same problem, right? They spend a lot, and the utilization is around 10%, 20% when it comes to GPUs, right? So it's very similar also in the cloud. You get AI teams, data science teams, that are just going to the cloud, spinning up a few machines or a cluster, and just running the workloads. And typically they don't have the right tools to run the workloads in an efficient way,
to use the cloud in an efficient way. Many times they just spin up a Jupyter notebook on a machine and build models or debug the models in a very inefficient way like that.
And so I think it's very similar also in the cloud.
Teams are just using shadow AI, shadow infrastructure,
and really in an inefficient way.
Yeah, so you're saying it's basically collect your metrics,
know what you're doing, and then tune your efficiency and cost control
and all that. So do you see customers focusing on one particular item? And maybe another way to say
this is, do people understand that they have to collect metrics as opposed to just let it run and
hope everything will run efficiently? Yeah.
Monitoring is a, we spoke about challenges.
So monitoring is a big challenge.
Typically,
organizations don't even know what the utilization of the cluster is, right?
They don't even know that the utilization is so low.
So I think it starts with monitoring.
It starts with just getting visibility
into the infrastructure,
getting visibility into the utilization,
to the usage patterns,
to who's running more,
who's using less,
who's using more efficiently,
who's using less efficiently.
So things like that.
So visibility is really the first step.
And then, I think, it's understanding the problems,
understanding the inefficiency, the challenges around, you know,
around getting access to that compute power in an efficient way
and, you know, starting to solve those problems.
From our work with organizations, we see time after time, right?
We're getting in and we're getting visibility
into the infrastructure.
We see the low utilization and then we see the problems
and then we solve them, you know, one step after the other.
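As a concrete illustration of the visibility step described here, below is a minimal sketch of sampling per-GPU utilization, assuming the NVML Python bindings (pynvml) on a host with NVIDIA GPUs. A real monitoring agent would ship these samples to a time-series store rather than print them.

```python
# Minimal GPU visibility sketch using the NVML Python bindings (pynvml).
# Assumes NVIDIA drivers and `pip install pynvml`; prints a few samples
# where a real agent would export them to Prometheus or similar.
import time
import pynvml

pynvml.nvmlInit()
try:
    gpu_count = pynvml.nvmlDeviceGetCount()
    for _ in range(3):  # take three samples, five seconds apart
        for i in range(gpu_count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {i}: compute {util.gpu}%, "
                  f"memory {mem.used / mem.total:.0%} used")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```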
Yeah, another challenge with shadow AI
is typically it's put together by a small team, in some cases, one individual.
And so the ability to reproduce and regenerate a model consistently, and even take it to production, you know, it's not an easy task. Where do you think the tools should come in, to go from shadow AI to creating this ability to reproduce consistently,
the ability to validate that your data is coming from an ethical background?
And how about scalability and time to market, right? Another thing with shadow AI is that typically it's not built for performance, right? It just gets the job done. But how do you tune it, right? So there's a lot of challenges there. And I cannot imagine how difficult it is to have a conversation with a customer that has something that works, but is far from being production-ready.
You hit the nail on the head.
I think that's also the reason why the number of organizations that actually don't get their
AI models into production is enormous.
I think most organizations don't get their AI models to production. And I think part of that is because it's developed on shadow AI,
and then there's no plan, there are no resources to actually take those models and put them in
production. The easiest thing to do is buy a GPU workstation, put it there, start developing your model,
training it, but then how do I take this into production?
So organizations that understand that know they need that multilayer approach.
One, yes, they need to buy the resources.
They need to buy those GPUs. But that's actually the easiest thing.
You call Nvidia, you say, I need a bunch of GPUs, wrap them in a nice DGX and put them in my data center. That's easy to do. But then you need to efficiently use those resources. One challenge
there, which Ronan actually mentioned, is the monitoring.
Then we need to get insights into how they're being used.
But to be able to really provide insights and to really understand what you need to do with that, there's another layer on top of that that you need to know about or at least understand.
And that's the, what are the types of workloads that are running on my resources, on my platform?
Is it a data scientist that's developing? Is it training a model? Is it running inference,
right? Is it running in production? All these different types of workloads have different
resource requirements. And you need to ensure that your system is
built for all of those. And you need to understand the different challenges with those different
workloads. And I think if an organization wants to successfully put a model into production,
they need to understand every phase of it and every challenge that comes with that phase.
And they need an efficient way to do
that. Whether that's, one, delivering an interactive development environment with a Jupyter notebook
for their data scientists that are developing the models. They need to have the right resources for
them to train those models, hopefully in a set and forget way.
I need to train my model, do it, and come back to me as soon as it's done.
And then they need an efficient way to put it into production, whether that's in the
cloud or on-premises or wherever on the edge, wherever they want to apply that AI model.
And I think that's why most organizations that run shadow AI or have shadow AI or have this shadow AI challenge don't get to that production phase because they're steps behind.
They know how to develop it.
They know how to train it, it's probably slow because
it doesn't scale, and then they'll never get to production.
So do you think that shadow AI is a necessary evil or do you think it can be avoided
for organizations that want to play in the AI market? I think it's both.
I think you need an enabler.
I think the more traditional,
the more, I don't want to say legacy,
but the companies that run traditional IT,
they've had many shadow IT issues
that eventually evolved into them
delivering a service to their end users
so that they didn't need those
shadow IT services anymore. And I think that it's part of the process. Some organizations
need shadow AI to get to that realization of, oh, I need to take this seriously. I need to
centralize this. I need to manage this. I need to make sure that
IT can deliver that as a service. Other organizations that are probably newer,
that are built around their AI developments, they understand that they need a scalable solution that
helps them do all the steps of the AI development phase, right? All the way from build to train to inference.
So I think when you need shadow AI as an enabler,
yes, it's a necessary evil for some organizations
and other organizations will understand
that they need to do this right from the get-go.
Right. I mean, I do think that AI is definitely,
you know, trial and error and
prototyping. But when people talk to me about shadow AI, I kind of look at it as somebody had a great idea, but it's not really a corporate idea. And you really don't know where this is going to go, right? So somebody might be doing shadow AI and nobody might ever know anything about it, right? So I agree with you. I mean, it's necessary,
but I think from an organization standpoint, because it is shadow,
it is in the shadow, you know,
organizations might not know that it's going on good or bad. Right.
And I think that's where prototyping, and maybe using a starter kit, comes in. Like you said, you know, you buy some hardware and you get going.
But I think it's a challenge, right?
I mean, for organizations, it's not easy
to kind of go from traditional IT to AI.
And certainly there's a lot of definitions about AI,
but, you know, either way, I mean,
I think shadow AI will always be there.
I just want to shed more light on it, right?
So make it less shadowy, if I can say it.
Bring it out of the shadows as it were.
But it really does remind me a lot, Frederic,
of the discussion that we're having around DevOps as well,
because in a way, DevOps is a mature answer to shadow IT or shadow infrastructure.
It is like, as I said earlier, how do we give these people a good answer, an answer that they
will embrace instead of just saying, no, no, no, you're bad. You can't do this without having IT
operations and IT infrastructure involved. And I think DevOps was a way to say, look, let's have a conversation. Let's invite these people together and let's give
them the power that they need to provision infrastructure to support their ideas in a way
that is sensible to the corporation. So in a way, aren't we always, aren't we all just sort of
talking about MLOps here? I mean, is this just an MLOps discussion,
or is there some unusual nuance that I'm missing?
That's an amazing point, Stephen.
I think, yeah,
the shadow AI, I think it starts in the research space, right?
Data scientists at the end,
a lot of them are researchers, not engineers.
Right. So many times AI starts small, as a small research project, right? And then, right, researchers want to, you know, do stuff very fast.
Right. If they need compute, then they quickly want to get that compute
and they want to have the freedom
to do what they need for the research, right?
To move the research very quickly, right?
So if it means compute that they buy for themselves,
then let it be, right?
They have the freedom and the compute.
And that's amazing.
They can move really fast with their research. And then it becomes a problem, however, when you want to scale the research, right? When you have a lot of researchers, or when you want to scale the research projects, then if you do it in a small-scale environment, that becomes a problem, right? So I think that's one thing. And then the second is what we described here, but also in terms of the infrastructure, right?
And it comes also to MLOps, right?
You need to put processes around production and processes and, you know, best practices and centralized things to get good control of your production environments.
And then it comes to MLOps and you need to find good ways to move research to production
also in terms of infrastructure and operations.
And then MLOps is an amazing new space that comes in right now. And it's great.
Yeah, I think MLOps is a methodology that kind of identifies that the market is going much faster
than it ever has been going,
meaning software releases are going much, much faster
because it's driven by the open source community.
And you could actually go to GitHub
and download the latest sources, right?
It doesn't have to be a quote-unquote official release.
And then secondly, the hardware, right?
And I think MLOps is kind of addressing
the extreme dynamic behavior of this
where MLOps gives you a methodology
where it says, well, we understand
you're working with Python
and some other tools and CUDA versions,
but in the end, you're not locked down.
And that's what's different with traditional IT,
where you try to lock as much down as possible
because you don't want any variables.
And MLOps is basically saying, sure,
I realize the world is all about a billion different types of variables,
and here is a methodology to handle that. And I think MLOps is very difficult to define in the sense that it's a moving target, right? I mean, it's going so fast. But we need those methodologies, and I think that's the only way to get out of shadow AI, or at least to go to production, maybe that's a better way to say it: the MLOps methodology, and accepting that change is guaranteed, right? You need a methodology for that.
I couldn't agree more. So MLOps is really moving fast, and it's really, really important when it comes to machine learning in production. And there are new challenges there.
You mentioned some of them.
The fact that machine learning is a statistical algorithm, right?
We're not talking anymore about just deterministic code
that I'm programming.
I'm programming a deterministic algorithm,
and then I run it as a web application.
It's a model, a statistical model that I, as a data scientist, as an engineer, trained on a sample or a set of samples, a set of data
with some distribution. So it has statistical nature. When I run that model in production,
I'll get good predictions or good results with a certain accuracy, with a certain
probability, right?
So how do you manage that in production?
How do you monitor that in production?
How do you put all processes around something with statistical nature, right?
That's a new thing.
And then MLOps is emerging right now, and it's totally important, and it's so exciting to see what's going on there.
A lot of exciting stuff happening there.
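To make the point about monitoring something statistical concrete, here is a toy sketch of one common pattern: compare rolling production accuracy against the accuracy measured at validation time and alert on degradation. The numbers, names, and alert action are illustrative assumptions, not a prescribed implementation.

```python
# Toy drift monitor: a statistical model doesn't fail like deterministic
# code, it degrades. Track rolling accuracy against the validation
# baseline; thresholds and window size are illustrative assumptions.
from collections import deque

VALIDATION_ACCURACY = 0.92   # measured offline before deployment
TOLERANCE = 0.05             # alert if we drop more than five points
WINDOW = 500                 # number of recent predictions to track

recent_hits = deque(maxlen=WINDOW)

def record_outcome(prediction, actual):
    """Call whenever ground truth arrives for a served prediction."""
    recent_hits.append(prediction == actual)
    if len(recent_hits) == WINDOW:
        rolling = sum(recent_hits) / WINDOW
        if rolling < VALIDATION_ACCURACY - TOLERANCE:
            # a real system would page someone or trigger retraining here
            print(f"ALERT: rolling accuracy {rolling:.1%} is below "
                  f"expected {VALIDATION_ACCURACY:.1%}")
```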
So to summarize the conversation, you know, now that we've kind of reached the end of the discussion here, what is your prescription?
What should the listeners be thinking of?
I mean, so if we've got IT people listening, what should they be thinking of? And if we have, I guess, ML and data science people,
what's the quick hit that they should take away?
Yeah, and I think, right, we spoke about shadow AI,
so the importance of centralizing the infrastructure
and creating that infrastructure stack that could provide
the best efficiency for the data scientists.
So providing simple access to compute power, providing tools and services to the data scientist
is so, so important for the IT, for the centralized infrastructure to be successful.
I think if that happens, then data scientists become more productive. They can run more jobs, train more models, and be much more productive.
So if data scientists and IT can go together
and create that AI infrastructure
and create what they need to develop AI
and change the world with AI,
that could be really something amazing.
Well, thank you so much for that.
And thank you for this great discussion.
As I said, this does have so much resonance to me.
It reminds me so much of what we heard
from back in the days of various aspects
of IT operations and IT infrastructure. So
it's definitely another face of a familiar question. So now comes the time in the podcast
when we ask you three questions. This is a tradition we started last season, and we're
continuing it now. As a note to our listeners, our guests have not been prepared for this or prepped or told what the questions are. This is a surprise for them so that we can get some off-the-cuff
answers and see their personality a little. We're also changing things up this season by introducing
a question from a previous podcast guest for our guests here today. I'll ask a question as will
Frederick, and then the third question comes from our previous guest. So Frederick, I'll let you go first and you can address it to either of them,
whoever you think is most appropriate.
Let's go with Gijsbert, and the question is: is it possible to create a truly unbiased AI?
Oh, that's a very interesting one. I think, and we've seen it multiple times, that it's hard to really build unbiased AI, because it's all about data. It's all about the data you feed into the model. I don't think the model will be biased by nature. It's not biased because it's built biased. I think it's the data that feeds into the system.
Can it be truly unbiased?
I don't know.
If the data is truly unbiased, truly balanced,
then I think we can eventually.
But it's a challenge.
It's a challenge for every single AI model out there.
And we've seen it, unfortunately, happen on multiple occasions that even though the goal was to
build an unbiased model to make something unbiased, the results were biased.
And again, there's no developer to blame there.
The only thing you can blame there is the data.
And I think, so where does unbiased AI start?
It starts with the data.
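The "it starts with the data" point can be made concrete with a toy sketch: before training, at least look at how the labels are distributed. The sample labels and the imbalance threshold below are illustrative assumptions; real bias auditing goes far beyond label counts.

```python
# Toy pre-training check: how balanced are the labels we feed the model?
# Sample data and the 60% threshold are illustrative assumptions only.
from collections import Counter

labels = ["approve", "deny", "approve", "approve", "deny", "approve"]
counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.most_common():
    share = n / total
    flag = "  <-- possible imbalance" if share > 0.6 else ""
    print(f"{label}: {n} ({share:.0%}){flag}")
```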
Thank you.
I think that that's a more nuanced answer
than maybe the average person might hear,
but I think it's true.
And I think it lines up with what we've heard
from some of our other guests as well.
So Ronen, I'll throw this one to you
since we were just talking about MLOps.
I wanna know is MLOps a lasting trend
or is this just another step on the way
for machine learning and the ML infrastructure
and DevOps and everything just to be normal IT?
Hmm, good question.
I think MLOps is amazing, right?
It's really needed, right?
There is room for MLOps, right? For definitions on how to manage
and deploy machine learning in production.
The DevOps world did wonderful things
for the technology world, right?
So, MLOps brings amazing stuff.
And in the long term, let's wait and see.
I have big hopes for MLOps, but let's see.
And our final question comes from a previous guest.
This question is brought to us by Tony Paikeday,
Senior Director of AI Systems at NVIDIA.
Tony, take it away.
Hi, I'm Tony Paikeday, Senior Director of AI Systems at NVIDIA.
And this is my question.
Can AI ever teach us how to be more human?
I really have to think about that one. That's more of a philosophical question, right?
How can a human be more human, right?
That's, to me, impossible.
The other way around, can we teach AI to be more human?
I think that's actually more possible
than making humans more human.
I think that the beauty of humans
is that they are so unpredictable
and they are conscious of themselves.
And there are so many things that, well, not necessarily define, but make up a human.
I don't think AI can make us more human.
Yeah.
Yeah, I totally agree.
I think AI for sure will cause us, will change the way we act as humans, right?
So we as humans are totally different from what humans were thousands of years ago, right?
And so with AI, for sure, it's going to change so many things that we as human beings will change as well.
Well, thank you very much for those thoughtful answers.
And thank you for joining us today for this discussion of shadow AI and utilization.
We look forward to hearing what your question might be for a future guest.
And if our listeners want to get in on the fun, you can just send an email to host at utilizing-ai.com and we'll record your
question for a future guest. So, Gijs and Ronen, where can people connect with you and
follow your thoughts on enterprise AI and other topics? Yeah. Best way to find me is on LinkedIn.
I'm reachable there. So happy to connect there.
And same for me.
I'm active on LinkedIn.
I'm active on Twitter.
I think if you want to know a little bit more about both me and Ronen,
and I know it's a couple of months away,
but GTC in March will be a big event for Run AI as well with many sessions. So
you'll be able to see us there and listen to our thoughts on how to build AI infrastructure
that's ready for the future. Yeah, I know that we're both looking forward to GTC as well. So
probably everybody in the audience is going to be paying attention to that. So you can find all of us involved there
in one way or another.
Well, thank you very much for listening
to the Utilizing AI podcast.
If you did enjoy this discussion,
please do subscribe
in your favorite podcast application
and give us a rating and review while you're there.
This podcast is brought to you by gestaltit.com,
your home for IT coverage from across the enterprise.
For show notes and more episodes, go to utilizing-ai.com, or you can connect with us on Twitter at
utilizing underscore AI.
This is our last podcast of the year 2021.
We're going to take next week off to spend some time relaxing with our friends and family
here at the end of the year and into the new year.
And we want to thank our
listeners for listening to us and making 2021 such a great year for us. We'll be back with
more Utilizing AI on January 4th, 2022. So we'll see you then.