Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 3x24: Challenges of Building Successful ML Programs
Episode Date: March 1, 2022

With so many AI tools available, it can be a challenge to integrate everything into a productive platform. Orly Amsalem of cnvrg.io joins Frederic Van Haren and Stephen Foskett to discuss the challenges of managing data and resources for AI training, development, management, and deployment. Orly discusses her journey from software development to AI and the challenges people face. Many in the AI community are following the same path, and are looking for tools like cnvrg to help them bring AI to their day-to-day work. AI blueprints, provided by cnvrg and the community, can help developers and data scientists get started with AI projects. In a recent survey, only 10% of developers said training was their main challenge; nearly everyone said that deploying a model to production was the biggest. Orly then discusses the main bottlenecks to MLOps in production and how to break through and normalize AI in the enterprise.

Links: "Five Ways to Shift to AI-First"

Three Questions:
Frederic: When do you think AI will diagnose a patient as accurately as (or better than) a human doctor?
Stephen: Is MLOps a lasting trend or just a step on the way to ML and DevOps becoming normal?
Eitan Medina, Habana Labs: If you could choose something for AI to do for you in your day-to-day life, what would it be?

Guests and Hosts:
Orly Amsalem, VP of AI Innovation & Business Development at cnvrg.io. Read "Five Ways to Shift to AI-First" here.
Frederic Van Haren, Founder at HighFens Inc., Consultancy & Services. Connect with Frederic on Highfens.com or on Twitter at @FredericVHaren.
Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 3/01/2022
Tags: @SFoskett, @FredericVHaren, @cnvrg_io
Transcript
I'm Stephen Foskett.
I'm Frederic Van Haren.
And this is the Utilizing AI podcast.
Welcome to another episode of Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, data science, and other artificial intelligence topics.
Frederic, one of the things that we've talked about quite a lot
is that building artificial intelligence and machine learning applications
and putting them into production,
well, it's not really about the technology, is it?
No, it's not.
It's, you know, creating end-to-end applications and pipelines
is very challenging. The tool chain to go from training, building models all the way to production is not easy.
And that's why I think it's really interesting to talk this week to Orly about what Converge.io can do, as far as tools are concerned, to help build those pipelines in a consistent and repeatable manner.
Yeah, exactly.
And we've got lots and lots of tools.
The challenge is bringing them together.
So let's meet our guest today.
As you mentioned, Orly Amsalem from Converge.io.
Welcome to the show.
Hi, thank you for having me here.
So tell us a little bit about yourself.
Sure. So I actually started quite some time ago as a software developer. I kind of experienced every type of software development there is. I worked with BI applications in large organizations, like banking, hospitals, healthcare, and so on: massive databases, getting data from multiple sources. And for me, the natural next
step would be machine learning and data science
because once you have all this data in place,
you have to do something with it.
And then I think it was like about 15 years ago or so,
I started to look into data science.
I started to learn more about it
and started to think what can I actually do?
And back then, we didn't have a lot of deep learning.
It was very classic machine learning,
more like classic data science algorithms,
like random forest and stuff.
And it wasn't enough. When you have masses of data, you also always need to evolve. So from there, I moved on. I worked at a startup company, where I was leading a team of data scientists and data engineers, and I also moved to manage the product side and the business side of products
that are actually based on machine learning and data science.
And from there, the way to Converge was kind of natural. After working in big organizations and enterprises, banking, hospitals, and also big companies like Fortune 500 companies and media companies, I switched to a smaller company doing cutting-edge technology to solve all the problems that we used to face in those big organizations.
Yeah, the slogan on the website, you go to cnvrg.io and you find the slogan is "everything you need to build AI," which is kind of an expansive slogan. But what do you mean by that? What exactly does that cover?
So first, it's about the data. How do I connect to those data sources? I might need to build APIs to certain databases, to certain data sources, and so on.
It's also about managing your experiments, right? Data science is research work. It's not something where you know in advance how to start and how to end. So how do you manage all this research? You have your data, you might try different things with this data, or maybe with parts of this data. So you need something that helps you manage all this data, the versions of the data, each one of those experiments, with some easy tools and frameworks to manage all the experiments. Let's say that you
have a machine learning model
and you actually want to run it multiple times
with different parameters for each time.
So do you need to write the code to create this fancy framework yourself? No. Actually, at Converge, this is what we do for you.
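The kind of repeated runs Orly describes, the same model trained once per combination of parameters, can be sketched in plain Python. The `train` function here is a hypothetical stand-in for real training code, and its metric is a placeholder; the point is the sweep scaffolding you would otherwise write yourself:

```python
from itertools import product

# Hypothetical stand-in for real model training; returns a validation loss.
def train(learning_rate, batch_size):
    return 1.0 / (learning_rate * batch_size)  # placeholder metric

# The scaffolding you'd otherwise write yourself: run the same model
# once per combination of parameters and keep every result.
grid = {"learning_rate": [0.01, 0.1], "batch_size": [32, 64]}
results = {}
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    results[tuple(values)] = train(**params)

# Pick the combination with the lowest loss.
best = min(results, key=results.get)
print(best)  # (0.1, 64)
```

A platform takes over exactly this bookkeeping, plus logging each run, its parameters, and the data version it used.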
Also, it's about managing the resources.
Let's not forget that training a model can take lots of resources during the training phase. It could be that you will need to utilize a GPU or multiple resources. Your data may reside on-premises but your resources in the cloud, and it could be that you have a very complex, hybrid environment.
So at Converge, we also manage all this for you so you can have like a smooth
experience bringing all the things that you need. And of course, the last and very, very important
and very challenging piece of this puzzle is the deployment, which means to take this model that
you created in your research, in your development environment, and putting it into your production environment.
And this is very challenging to many companies.
And at Converge, we provide a very smooth way
to also take your model from your research environment
and deploy it easily on any infrastructure that you have.
So eventually this model will be accessed and utilized
and will provide the value it was intended to provide.
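Once a model is deployed the way Orly describes, interacting with it usually comes down to a plain HTTP call. A minimal sketch, assuming a hypothetical JSON-over-HTTP endpoint; the URL, payload shape, and response shape are all illustrative, not Converge's actual API:

```python
import json
import urllib.request

ENDPOINT = "https://models.example.com/sentiment/predict"  # hypothetical URL

def build_request(text):
    # Package the input the way a typical JSON model API expects it.
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def predict(text):
    # POST the input and decode the model's JSON reply, e.g. something
    # like {"label": "positive", "score": 0.97} for a sentiment model.
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.load(resp)
```

From the application's point of view, that is the whole integration: the training, versioning, and serving machinery stays behind the endpoint.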
I was listening to your intro, and it's very similar to what I experienced. And what I, and probably others, are very interested in is: how do you go from being a developer into the AI/ML market? I mean, as you described, the hidden Markov models and all that good stuff in the early days, what I think you called the classic AI, were quite challenging. You had to come up with a lot of things yourself.
How do you, what were the challenges, do you think, to go from development to AI? And then maybe a follow-up question: for people that are thinking about AI and are doing more classical development, what are the challenges they would probably see, and what do you think they should do in order to get into the AI market?
Yeah, so I really love this question
because it's really like telling my own story.
So I think that the main challenge
is the changing mindset
because as a developer,
you know what you are about to accomplish
and you definitely know that you can accomplish it.
You know that to take you from A to B, you need to take certain steps, and you'll get the results that you want.
Usually you also work with software analysts
and product managers, and you have like a framework
on what your product or whatever the software
that you're developing should look like
and should behave and should function.
But when it comes to AI, you don't really have this.
It's something that you need to come up with yourself. You need to research. You need to try. You need trial and error until you get to the right thing. And it's also about being an expert
in a certain domain. I remember one of the times I was experimenting with a model, I didn't really know what would be a good set of features to solve the problem, because I was not a domain expert.
So you need to go to find, I don't know, economists or doctors
or whoever is the domain expert to help you,
but then you need to make it accessible to explain what exactly you're doing.
And back then, if I remember, when I'm thinking about this time, machine learning was much less accepted; there wasn't a lot going on with machine learning, right? So it needed to be very explanatory. If you couldn't explain your model and what it was doing, it would not be accepted well. So how do I know this model works? This was a question
that I would hear very often. Today, I think many business people, who maybe don't know exactly how those models are trained, trust it, because they see that everything is working with AI; you see the results speaking for themselves. Every simple application already has AI in it. But when you're just starting out with it, this is, I think, the main challenge: how your customers are going to accept it. And also the change in mindset that you need to go through, from doing something very certain to doing something very uncertain.
I think that's so insightful. And I think that one of the things I've heard from the smartest
people, some of the smartest people I've ever met is when they talk about the things they don't know,
not the things they do know. I think some of the non-smarter people tend to talk about the things
they do know instead. And it's so true that when it comes,
especially to machine learning and AI applications, a lot of the people that are out there trying to
do this, a lot of people that are listening to this podcast, certainly, because that's sort of
the world we live in, are faced by this challenge. And they understand that they don't know what they
need to do. They don't know what they need next. And I think that they're seeing a lot of discussion
of sort of deep details of machine learning
when in reality, what they need
is maybe something a little bit more simple.
So how do companies or individuals like yourselves then,
what's the practical way that they get from here to there
in terms of getting into the world
of AI applications?
So actually, this is the question that we asked ourselves not so long ago, a couple of months ago. Because Converge, the machine learning operations platform, is very oriented towards data scientists: experienced, educated, who know how to train fancy models and so on. But then we said, OK, even myself, I am not this fancy data scientist, right? I came from a development background. I did some machine learning.
And we thought about how we are going to take these models, because it's not only about building those fancy models. It's just about using them, right? So how do we take this and make AI accessible for everyone?
And we put an emphasis on software developers
because this is the audience that we feel very connected to. This is our community. This is us. Converge, the company, started as a development effort in data science, of course, but we are very, very connected to software developers.
So we took this question and we thought a lot
and we came up with, I think,
one of the most amazing solutions that I've seen recently in terms of machine learning. And I'm not just saying that because it's Converge's; it's because it's really very intuitive to developers. We created a solution that we call Blueprints, which are ready-to-use pipelines, built by data scientists for the use of everyone, with an emphasis on software developers.
Because we understand that now software developers
also need to use AI as part of their day-to-day work.
It needs to be something that they have in their toolbox.
And I'm not expecting them now to take classes
and educate themselves and become data scientists.
They are excellent in what they are doing.
They are professionals in software development.
They are creating applications, but they need to enhance those applications with machine learning.
So I think, and it also relates to your question, Frederic: what should they do?
I think that software developers don't need to be afraid of utilizing machine learning in their applications, but they need to find the right solutions that will work smoothly with what they have. They need to be very careful, I think, not to lose focus on what they're doing. You are a software developer, you're excellent at creating applications, and this should be your main focus.
If you need to add some AI because you need to solve a problem, say you're developing an insurance application and you need some OCR capabilities because you want to, I don't know, scan some manual claims and translate them into something the machine can understand and respond to.
Or you need a virtual agent that is trained on your data.
You don't need to go and learn and study from scratch how to do it.
Search for those ready solutions.
But you need to find those solutions that will work smoothly with your environment.
Sometimes you go to a cloud vendor that requires you to shift everything to its cloud. But you might have some privacy issues that prevent you from doing this, or you might be in an organization that chose a different platform.
So you need something very flexible,
something that will take care of the entire pipeline
from the connectivity to the data that you have
through the training and deployment until
you can actually interact with something. And I think that developers also need to put a big emphasis on how to interact with those models in a very easy way: an API, just plug it into your code and start working with it.
Yeah, I think AI is definitely all about learning
and trying and trial and error, so to speak.
And I like the concept of blueprint, right?
It's like you said, it's a great bootstrap
for people to get going.
Now, the blueprints, are those something that you would provide? Or something that the community provides? Or is it a combination of the community and yourselves?
So we actually started some blueprints by ourselves because we
wanted to contribute to the community, but we definitely welcome and invite it to become a community thing. This is something that we give to the community, and we hope that the community will start working with it and create their own blueprints, and each one will contribute, and then we'll have a beautiful marketplace of many, many use cases in different domains, different industries.
So AI is all about data. Does a blueprint also come with data, or is there an abstraction layer between the code and the data?
Yeah. So actually, because we wanted to simplify this process: we had a conference, MLCon, we had one in 2021, and we ran a survey and asked, what are the challenges that you have in the machine learning phases? Of course, data was the first thing, right? Number one, because people are still struggling with data. So we took this in, and we understood that those blueprints also need to provide a solution for the data problem. So one of the things that we created
is the connectors to different data sources. So you can connect to your Salesforce or your Marketo system, or even just to Twitter or other social media, or Wikipedia, whatever source you want. We have some pre-built connectors, and all you need to provide is your Twitter account or keywords that we can send to Wikipedia to get the information, and so on. So we really try to make it very simple. You don't even need to struggle or think, how am I going to pull the data from my organizational Salesforce? You just have the connector; you just put in what you want to retrieve, and then you get it. So those blueprints definitely come complete, from the data to the model to the deployment; it's all stitched together as one pipeline.
And of course, it provides all the flexibility; it depends on your skill level. If you want to change stuff, if you want to retrain models, we open it up to any type of change, any type of combination. You want to replace the Marketo connector with something else? Go ahead. It's yours; it's the community's.
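The connector idea Orly describes can be sketched as a common interface that hides each source's details, so a pipeline can swap Marketo or Salesforce for something else without touching the downstream steps. All class and method names here are illustrative, not Converge's actual API, and the network calls are deliberately stubbed out:

```python
class WikipediaConnector:
    # Would fetch an article summary for a keyword over HTTP.
    BASE = "https://en.wikipedia.org/api/rest_v1/page/summary/"

    def fetch(self, keyword):
        raise NotImplementedError("HTTP call omitted in this sketch")


class SalesforceConnector:
    # Would authenticate against an account and run a query.
    def __init__(self, account, token):
        self.account, self.token = account, token

    def fetch(self, query):
        raise NotImplementedError("API call omitted in this sketch")


def run_pipeline(connector, query, steps):
    """Pull data from any source, then push it through the same steps."""
    data = connector.fetch(query)
    for step in steps:
        data = step(data)
    return data
```

Because every connector exposes the same `fetch`, replacing one source with another is a one-line change to the pipeline call.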
Yeah, when we talk about AI, we have training and inference, or production if you wish. And there are many, many challenges with that, because the approaches and the methodologies between the two can be very different.
I feel like the market now has a much better understanding of training, and the tools around training are maturing much faster than those for production. Production today is still the number one challenge in the pipeline.
Would you agree with that?
I would more than agree because I can back it up with numbers.
So this is the second question that we asked on the survey.
And we had like hundreds of participants, data scientists and developers across industry,
any industry you can think of.
So after getting the data and struggling with the data, let's put that aside.
So training: only 10% said that training is their challenge. It's not a challenge anymore; there I totally agree. But the number one challenge, after struggling with and handling the data, was deployment: how do I take my model and make it accessible and usable?
Yeah. And it's also repeatability, right? I think one of the challenges nowadays is that it's not just creating a model and deploying it.
There's a lot of demand for data lineage, right? Where did the data come from? Can you prove the data came from the different sources you used? There are also requirements around the ethical background: what data did you use, and how can we figure out which data should or shouldn't be used in the model?
I think from a production standpoint,
there's many challenges
and I'm not even sure the challenges are always technical.
Sometimes they can also come from a different angle,
meaning that enterprises
don't have a great direction or what they want to do production-wise. Is that something you see as
well? Yeah, definitely. And I can also back it up with numbers because we see, you know, we looked
at the companies that are running massive-scale experiments, right? I'm talking about hundreds or thousands of experiments per month. And 57% of those companies running at massive scale said that they deploy less than a fifth of what they are creating in their development environment into production, because they don't really know how it will work and how it will play out. And it's not necessarily something that they find useful.
And something else that I think is related to this: we asked if they have some kind of framework to manage those models in terms of their life cycle, because a model can drift and change as the data changes.
So what do you do about that? You need something to alert you and tell you that something has changed with the model, and most companies are not working with these kinds of tools. So together, I think this creates a gap between what you plan to do, what you have in mind, what you have in your development or research environment, and what you can actually implement.
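The drift alerting Orly mentions can be illustrated with a toy check: compare a feature's live mean against the distribution the model was trained on, and flag a large shift. Real monitoring tools use proper statistical tests (PSI, Kolmogorov-Smirnov, and so on); the numbers and the 3-standard-error threshold here are purely illustrative:

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    # Flag when the live mean sits too many standard errors
    # away from the training mean.
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    stderr = sigma / (len(live_values) ** 0.5)
    z = abs(statistics.mean(live_values) - mu) / stderr
    return z > z_threshold

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
print(drift_alert(train, [10.0, 10.3, 9.9, 10.1]))   # False: looks like training data
print(drift_alert(train, [14.0, 15.2, 14.8, 15.5]))  # True: the data has shifted
```

In production you would run a check like this on a schedule and wire the alert into retraining, which is exactly the life-cycle machinery most teams in the survey lacked.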
And so I think utilizing an MLOps platform is not an option anymore. I think it's a mandatory thing, because it takes a high percentage of all the work that you do around managing your resources and allocating your resources off your hands.
We had customers that had this Excel file for their very expensive GPUs, saying, OK, in this hour this team can utilize the GPU, and at this time that team can. And they would fight over what time they could utilize it. With an MLOps platform like Converge, you just submit your job and the system manages the resources for you. If something crashed, then we know exactly where to run it again from. And if you want to use spot instances to save costs, that's also an option, because we know how to move jobs between those spot instances and so on. So you need
something to take the load of all that complexity off the data scientists, to something which is more automatic.
I know that some of the
people listening were probably like, oh man, I wish we had an Excel spreadsheet. We don't even
have that. We just yell at each other. Or actually, probably the number one way of dealing with contention for hardware resources is "mine, nobody else can have it," which is also very common.
But when you have the most expensive GPU, it's something you can't keep to yourself. It's such expensive hardware that you have to use it to the fullest, to get to 100% utilization. Otherwise, why invest in it?
Yeah.
And I think that that's the situation that a lot of companies are in.
And especially when it comes to production ML applications,
they're looking at this as, well, you know,
basically the sandbox and picnic table days are done.
We're going into production.
We're going to offer this as a part of our enterprise IT portfolio.
And, as you mentioned, AI is indispensable in modern applications. It's being integrated more and more everywhere.
Everywhere you look, there's different AI use cases.
And I think that overall, what we're seeing is sort of the normalization of MLOps in all
enterprise IT settings.
So given that, Orly, you're probably seeing this ahead of many people because you deal
with a lot of different companies.
And of course, you're deeply involved in the MLOps pipeline.
What are the main bottlenecks that are happening right now in MLOps? What's holding people back?
And how can we, I guess, grease the skids and help to move MLOps forward into this new world
of being just part of IT?
So I think the first blocker would be management buy-in. Managers and leaders sometimes get the "let's be more AI, let's be an AI-first organization" message, but they don't really understand what it means in terms of equipping their developers with the right tools and the right budgets.
So they say, okay, let's hire some data scientists and get some help from our developers; they can be machine learning developers or machine learning engineers. And let's get going.
Sometimes they would even think, why do we need to invest in an MLOps platform? Let's build it in-house. Let's tailor it according to what we need. And they don't understand that it's such a lot of effort. It's not something that you build once; it's evolving all the time. It is changing. You always need to add something to it.
So I think it should come from above. Leaders need to understand that if you want to really transform your organization to be AI-first,
give the right budgets, help your teams to learn more about those tools, be more committed to
providing them those tools. And sometimes we had this case that, you know, you get a budget for, you know, the entire team.
So you invest in your infrastructure and you invest in your storage and everything.
But then the team comes and says, OK, now we also need an MLOps platform, and the response is, why? Why do we need that? We are paying this cloud vendor and we are paying this GPU vendor.
Why do we need something else? But if you want to orchestrate all of this and you want to give the flexibility
of hybrid environment to utilize your on-prem,
your cloud, your everything all together,
then you have to understand
that this is something that you need
your teams to use.
Something else that we noticed also is knowledge.
Sometimes even data scientists are not very familiar with these applications.
And they don't really know what exactly to expect.
They have their code.
They have their Python.
They have their notebooks, the frameworks that they are using.
They know how to write some scripts to spin up and tear down the jobs that they're submitting to their clusters or their infrastructure, but they don't understand how much time they're spending on doing this. So there's also some knowledge that is missing, and we need to do some market education: this is not a luxury.
This is something that you really need.
Yeah, I totally agree.
I think, I mean, there are a couple of facets, right?
There's the ecosystem of storage, compute and network.
It's not because you buy the fastest GPU that it will give you results on its own. And then the second one is more about people, right? So you have the C-level executives, you have
the data scientists, the data engineers, and the whole IT, DevOps, MLOps crew, and all of this
needs to work together. One of the things I've always seen in the market
is that when they come together,
they look at the solution as being static,
meaning they buy an environment
and then they hope they don't have to touch it
for the next three years,
which we all laugh about, considering we understand that with AI, even three hours is a challenge to keep it all static.
But I do think that there's still a lot of messaging that needs to happen. But it also
comes to the fact that the tools now are commodity, right? You don't have to build them
yourselves. You can actually, you have access to a decent amount of tools, which is good and bad,
right? If people know what they're doing and they have the right ecosystem, it's a bootstrap. If they don't know what they're doing, then they end up creating bad output, and it's very difficult to undo that.
Yeah, totally agree.
You just reminded me of a large company where I have some ex-colleagues and friends; they're one of the largest companies providing recommendations. I'm pretty sure everyone has gotten a recommendation from them at some point. I was talking with them, some light talk about MLOps and everything, and they said, no, we have such specific and unique use cases, so we want to tailor it to our use case. And they were struggling for a long time, I would say even more than a year, definitely, just thinking about what exactly they would need in such an environment and how to tailor it directly to
whatever they specifically need. But all this time, they're wasting a lot of time. They have those data scientists who cannot work at full efficiency. And you can just solve it by getting one of those decent tools. There are very good tools out there that you can use to solve so many problems.
So overall, I suppose the question then becomes, how can people build a successful ML program?
If you were to talk to the folks in our audience and say, you know, I see where you are.
This is what you need
to do next. Let's just give us a summary. What should we do? What should we be doing to build
a successful ML program?
So I think first you need to understand that AI is a team sport. It's not something that just data scientists are doing, or just developers; it's something that you do together.
If you want it to scale and you want it to be sustainable, then you need to have your entire team around it, in the sense that you have infrastructure and things running on that infrastructure, so you want your DevOps team to have easy access, to understand what's going on, and to be able to monitor system health, model health, and so on.
It's also about the domain experts and those who are the key to get the data sources.
Those are also part of the process.
It's also about those ML engineers that can bring the data and build those data pipelines, and also, of course, about the data scientists and the models.
So you need to have very strong communication, have a tool that everyone can collaborate on the same tool.
It can't be fragmented.
Each group will have their own tool, and they will never meet.
By the way, this was one of the challenges that blocked the smooth transition of a model
from development to deployment
because every team was using a different set of tools,
different set of automation,
but when they tried to connect together, they couldn't.
So one platform that will serve as a hub
for all those stakeholders,
which are very important in the process,
I think this is like the baseline.
The next step, I would say, is to be more open-minded, in the sense that there are certain things that your data scientists need to do, like training those big models and so on. But once those models are trained, have your software developers use them, as a way to scale your machine learning efforts. It's very hard these days to get a data scientist. There is a shortage of data scientists. It's a long education path, and every company now wants to have a big team of data scientists,
but you also have your developers and they can utilize machine learning for whatever
application they are doing.
So don't be afraid to transform other functions in your business to create and help in the machine learning efforts.
And of course, in terms of infrastructure, we see today that most companies are not using just one infrastructure, each for their own reasons. Sometimes it's about budgets.
It's about privacy issues.
It's about many things.
So you need a tool to be able to manage all this complex environment.
And I think eventually it comes down to a good MLOps platform that will help you to scale, to manage your hybrid resources, and that will also support every type of user that eventually will be a creator of machine learning.
Well, thank you for that. I think that's a great summary and a great
place to leave this conversation of the challenges of building ML
programs. So now's the time to shift gears in our podcast. We are going to ask our guests three
questions that they are unprepared for. It's kind of a fun off-the-cuff opportunity to express
some opinions about AI that might not have come up during
the podcast.
So, Frederic, why don't you go ahead and go first?
Sure.
So my question is, when do you think AI will diagnose a patient as accurately as a human
doctor?
That's a tough question.
I think it's not too far off. I'm not an expert in healthcare, right? But I think for certain diseases, there are already some solutions, in terms of skin cancer or MRI images being diagnosed, and they're getting better and better. But because it's such a sensitive thing, our health, everybody's health, I think we will always want someone whom we trust, even in terms of emotionally trusting, to take a look and interpret that for us, and not just trust a cold machine.
That's what I think.
All right, my turn.
We talked quite a lot about MLOps, and my question is quite simply, do you think that MLOps is a lasting and continuing trend, or is this just another step on the way for ML and DevOps concepts to be just normal parts of IT?
I think eventually it will be a normal part of IT. Just like today DevOps is a normal part of IT processes, and you can't even imagine developing your code and deploying it without DevOps tools and a DevOps team, it will be the same thing for machine learning.
Well, thanks for that. And now we're going to bring in a question from a previous podcast guest.
Our question comes from someone I think you know: Eitan Medina, Chief Operating Officer at Habana Labs, an Intel company.
Eitan, take it away.
Hi, I'm Eitan Medina, Chief Operating Officer at Habana Labs, an Intel company.
And my question to you is, if you could choose something that AI would do for you in your
day-to-day tomorrow, what would that thing be?
Wow, that's a good question. So as a mom, my natural instinct would be to say, just, you know, get this mess done, and, you know, children, COVID, schools, and everything. But I think that, you know, we all want to have better health care.
Just stay healthy, stay positive.
And if this is something that machine learning can help us with, identifying when people are headed toward a less healthy situation, then for me that would be something great.
Well, thanks so much for that.
I think that's a hopeful answer for a lot of people these days.
And thank you for joining us for the conversation today. We really enjoyed it.
We look forward to hearing what your question might be for a future podcast guest. And if our
listeners want to join in, you can just send an email to host at utilizing-ai.com and we'll record
your question. So, Orly, where can people connect with you and follow your thoughts?
Or is there something recently that you'd like to point out that you've done?
Yeah, so I just published a short article, a blog, on Datanami. It's "Five Ways to Shift to AI-First."
A couple of steps, some of them we discussed here,
which can help you transform your organization to be AI first.
Excellent.
Frederic, what's new with you?
So I'm providing services around data management
and designing and deploying large-scale AI clusters for customers.
And you can find me on LinkedIn and Twitter as FredericVHaren.
And as for me, you can find me at S. Foskett
on most social media networks.
I'm pretty excited that we are planning
our next AI Field Day event.
Just go to techfieldday.com
to learn a little more about that.
And we'd love to see people join us as presenters
or perhaps as delegates or just tune in live.
That's going to be May 18th through 20th.
So thank you very much for listening to the Utilizing AI podcast.
If you enjoyed this discussion, please do subscribe.
We're available in pretty much every podcast platform.
And while you're there, give us a rating or review since that does help.
Also, please do share it with your friends.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise.
For show notes and more episodes, go to utilizing-ai.com or follow us on Twitter
at utilizing underscore AI. Thanks for joining us and we'll see you next time.