Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 2x28: Offloading ML Processing to Storage Devices with NGD Systems
Episode Date: July 13, 2021

Today's storage devices (disks and SSDs) already have processors and memory, and this is the concept of computational storage. If drives can process data locally, they can relieve the burden of communication and processing and help reduce the amount of data that gets to the CPU or GPU. In this episode, Vladimir Alves and Scott Shadley join Chris Grundemann and Stephen Foskett to discuss the AI implications of computational storage. Modern SSDs already process data, including encryption and compression, and they are increasingly taking on applications like machine learning. Just as industrial IoT and edge computing are taking on ML processing, so too are storage devices. Current applications for ML on computational storage include local processing of images and video for recognition and language processing, but these devices might even be able to execute ML training locally, as in the case of federated learning.

Three Questions

- Are there any jobs that will be completely eliminated by AI in the next five years?
- Can you think of any fields that have not yet been touched by AI?
- How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices?

Guests and Hosts

- Vladimir Alves, CTO and Co-Founder at NGD Systems. Connect with Vladimir on LinkedIn.
- Scott Shadley, VP of Marketing at NGD Systems. Connect with Scott on LinkedIn or on Twitter @SMShadley.
- Chris Grundemann, Gigaom Analyst and Managing Director at Grundemann Technology Solutions. Connect with Chris on ChrisGrundemann.com or on Twitter at @ChrisGrundemann.
- Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen's writing at GestaltIT.com and on Twitter at @SFoskett.

Date: 7/13/2021 Tags: @SFoskett, @ChrisGrundemann, @SMShadley, @NGDSystems
Transcript
Welcome to Utilizing AI,
the podcast about enterprise applications for machine learning,
deep learning, and other artificial intelligence topics.
Each episode brings experts in
enterprise infrastructure together to discuss
applications of AI in today's data center.
Today, we're discussing moving AI to the storage devices.
First, let's meet our guests, Vladimir Alves and Scott Shadley from NGD Systems.
Hi, yeah, my name is Vladimir Alves.
Yes, I'm the CTO and co-founder at NGD Systems.
And we've been on this journey since 2014, and I'm very happy to talk about a concept that we as a startup have introduced: computational storage.
And hi, my name is Scott Shadley. I'm the VP of marketing here.
You can find me on Twitter at SM Shadley.
And I've had a great journey in storage and looking forward to talking about the next level of storage technologies on this call.
Hey, everyone. I am your co-host today, Chris Grundemann.
I am a content creator, consultant, coach, and mentor.
And you can find more at chrisgrundemann.com.
And I am Stephen Foskett, publisher of Gestalt IT and founder of Tech Field Day.
You can find me at sfoskett on most social media networks.
And of course, you can find me here as the host of Utilizing AI
and the Gestalt IT Rundown every week.
So Scott and I go way back
and we've seen quite a lot happen
in the enterprise storage industry.
And frankly, what NGD is doing is something
that a lot of our audience may not be really aware of.
So many times we've talked about the various places that AI and machine learning are appearing, and how machine learning is going everywhere from the cloud to the data center to the edge to 5G to cars.
We've talked about all these things, industrial IoT.
But one of the things that we haven't really talked about in too much detail yet is computational storage. And so the idea here is that, well,
pretty soon there's going to be processors on the storage devices themselves. And I don't mean a
storage array. I mean like the disks and that stuff's going to be doing processing all the way
out at the drive level. And that is a real sea change. And so I figured it would be good to kind of cover this to start with.
So maybe Scott, maybe you can kick us off a little bit.
Did I do justice to what NGD is doing?
Absolutely.
I would say that the one little hint would be that the processors are already there in
some of the products that are out there on the market.
So the idea of AI, ML, all of these things and making the computer more intelligent,
everything has to be
done on data. They have to use data to be more intelligent, to do training models, to do
inferencing, to update any kind of artificial intelligence applications. And if you're pulling
data from the disks or the solid state drives, then of course you're moving it into memory,
you're using the host CPU to do that work. Why not let the disk do it itself?
And so you've heard the term intelligent storage from HPE,
where they're using the storage devices to better manage their wear or their characteristics.
What we're talking about is really making the drives self-aware so that the drives act as a server themselves.
Full-on CPU can run applications.
And we've got some great activities that we've got in place.
As you mentioned, as far as where it's being played out, being able to do AI in the disk itself allows
us to do things as far out as in space. And we can even get into that in a few ways.
So with the chips that are already out there, I mean, how common is this for an SSD or an HDD to have some kind of
processing ability right there locally, where it has access to the data and can do things without going through a traditional CPU architecture? So I would say that it's out there,
but the type of processing is very limited and not very flexible. So for example, most solid
state drives nowadays support some form of encryption. And encryption means that you're processing data, right,
to keep it safe.
And you can argue, okay, yeah,
then I'm doing some sort of processing, and it's true.
But it's very limited and it's very rigid, right?
Some, you know, SSDs out there also do some sort of compression,
which is another form of processing on the data
so that you can store more on the SSD.
It has its advantages and disadvantages,
but nevertheless, it's processing.
So what we did is go to the next level, I mean,
pushing that boundary really far
and allowing generic processing to happen
inside the SSD, not just something that is purely
data storage centric and simple and not programmable.
We made it programmable, generic, and open to all sorts of applications.
And when you talk about the kind of processing that's happening there,
right now it's not really machine learning processing that's happening on drives.
It's more like, you know, compression encryption.
But you're saying that in the future,
things like that can go all the way down
to the drive level, is that right?
Well, yes and no.
I mean, the future is here
because there are drives out there
and we as a company, we manufacture and sell those drives
that are capable of running an operating
system and running applications on this operating system that can be of different
types, including machine learning. So it's not just machine learning.
So yeah, the future is here, I would say.
So when I think about this, you know, kind of from the naive point of view, right, we have, you know, potentially more and more powerful processing available right on, you know, some form of storage, you know, disk or drive of some sort.
And on the other side of the spectrum, right, we see other folks who are doing more kind of like hardware composability in the data center.
And there's a number of companies that are working on kind of looking at the data center as the unit of compute. And this seems to be going the opposite direction, which is kind of
driving down and you're almost recreating a computer inside of a drive. And so I just,
maybe this is an ill-formed question, but my imagination goes to just, are we redefining what
a computer is at this point, right? I mean, because I think there's components that make up
what a computer is, and we're seeing those at both a macro and a micro level, obviously for different advantages,
right? And I can see in this case, the advantages of latency and data locality make a lot of sense.
But is this actually a shift in what we call a computer or am I overthinking it?
So to your point, von Neumann is what we've known as the architecture. It's a CPU,
a memory, and a storage block. And they're independent blocks that have to be combined in some way. And when you think
of computational storage, we're putting all three of those boxes into the storage device itself.
And the reason that we're doing that is the amount of raw data that we're generating as a people
is growing in exabytes and zettabytes. And being able to grab and store the data is great for
current existing architecture. Being able to do something with that raw data and doing value with that raw data
becomes more of a challenge. And so when we get into talking about the composability piece of it,
what we're offering is kind of in a lot of people's minds, and you have a very good point,
is disaggregating all of those items into being able to allow you to mix and match them
appropriately in a rack or in a data center. But all of that still requires the raw data. And that raw data can
be better handled at the device level. And especially things like AI for inferencing or
similarity searches or other types of object detection that are all in the AI realm, bring
in all that raw data and then just do some of that pre-fetching or pre-work before you ship
it off to the composable architecture that you have. What that does is it allows you to
better manipulate the composability. So instead of needing 62 processors, I only need 60, because
the devices are offloading that little bit of work that those extra two processors would need
to be doing, and they can be composed off somewhere else. Or composing seven GPUs instead of 15, because I'm doing a lot
of that work inside the device first and saving a lot of that data movement. And it's really about
the moving of that data that comes into play where this is of so much value. Yeah, I would like to
make an additional comment there. I think the advent of the edge, right, the computation happening
far away from the data center and actually
distributed everywhere. And that's a reality, right?
It's happening now and it's just going to grow at an exponential rate,
even faster than what, you know,
what happened with the data center or the hyperscale world.
That is changing everything. And that is bringing, you know,
an opportunity for new, I mean, actually existing concepts, right?
Concepts that were developed already, but have not been extensively deployed, such as
heterogeneous computing, near data processing, distributed processing out there to help solve
the new problems that come, right?
We have to reduce latency a lot. We have to absorb an enormous amount of data
and be able to respond to inputs very quickly
coming from everywhere at the same time.
And centralization in the data center
is not the answer to everything.
It helps in many aspects of it,
but it will be a problem going forward, creating artificial
bottlenecks that we don't need. So what does computational storage do? It brings together
a lot of these concepts. So heterogeneous processing, because most likely the processor
that's being used inside the computational storage device is not the same that you have on your host.
For example, an ARM CPU versus an Intel or an AMD CPU.
It brings the concept of near-data processing because the computational storage device
processes mostly data that's stored inside its own media.
And the concept of distributed processing
because in most deployments out there,
whether it's at the edge or in the data center,
there's multiple storage devices
that are directly attached to a host CPU.
So therefore, having multiple intelligent storage devices
where you can run real applications on their local data
then creates, or at least creates the opportunity to use distributed
processing, meaning that all those storage devices can work now in parallel together
to solve a problem that involves a substantial amount of data that needs to be stored in
separate storage devices. So, I would say that using these technologies is what is going to make the actual growth of the edge possible.
It's not just, okay, I need to absorb more data.
It's not just high-capacity storage or more storage devices.
It's beyond that.
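The distributed-processing pattern Vladimir describes — many drives each working on their local data in parallel, with the host only combining small results — can be sketched in a few lines. This is purely an illustrative sketch, not an NGD API: the drives are simulated with threads, and the shard layout and function names are assumptions.

```python
# Each "drive" runs the same query over its own local shard of the
# data; the host merges only the small per-drive results. On a real
# computational storage device, drive_local_count would execute on
# the drive's own processor, next to its own media.

from concurrent.futures import ThreadPoolExecutor

def drive_local_count(shard, keyword):
    # Work that would run on the drive, right next to the stored data.
    return sum(keyword in record for record in shard)

def distributed_count(shards, keyword):
    # One task per drive; only an integer per drive crosses the bus.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        counts = pool.map(drive_local_count, shards, [keyword] * len(shards))
        return sum(counts)

shards = [["error: disk", "ok"], ["error: net", "error: disk"], ["ok"]]
print(distributed_count(shards, "error"))  # -> 3
```

The point of the pattern is in the return type: each drive ships back a single number rather than its whole shard, which is exactly the data-movement reduction being discussed.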
To me, it seems really analogous to what we were talking about when we
were talking about industrial IoT, when we were talking about basically having, for example,
imagine every camera having local machine learning processing on the edge devices,
so that we didn't have to transmit as much data back to the core, and so that we could have lower latency of processing at the edge. In this case,
the edge is not the edge of the world as some of those industrial IoT applications are,
but the edge of the computer itself, because the computer itself has all these devices attached to
it. And if you think about it, I know that it's sometimes hard, but you can kind of
imagine, okay, an oil rig in the North Sea is going to have more latency and less bandwidth, and is going
to need to do local processing. It's kind of the same thing within the computer itself, because
storage devices have limitations to bandwidth and have latency, even on that little short cable.
And, you know, the volume of data we're talking about here
is just massive. I mean, these days, it's not unusual to have multi-terabyte SSDs, and to have
the ability to do some processing there to reduce the amount of data that has to go over the storage
bus is just as important as it is to, you know, reduce the amount of data coming from that oil
rig in the North Sea. And I think that to me is the thing that got me excited about the prospect of computational storage. I don't see it as an alternative
to composability or even centralization. I see it as sort of an inevitability because as you're
saying, most of these devices now have a full operating system on them anyway. I mean, you may
not see it, but it's running the same kind of
processor that's in your phone and your phone is doing ML processing. So why not the storage device?
Is that right? Yeah, absolutely correct. And I agree with you. It's not an alternative to
composability or anything else, right? People often, not often, but sometimes we hear, oh,
okay, so are you going to displace GPUs? Absolutely not. GPUs have their
place, for example, in the ecosystem, right? We add another dimension to the problem, right, and
try to help solve concrete issues in processing large amounts of data. Bringing some of the
computation into the storage, right, doesn't have to be all of it.
It just needs to help the system augment the performance of the system and augment the energy
efficiency of the system. These are the goals, this is the vision of the companies, you know,
that are in the computational storage space, including ours, is to, you know, make it easier
to process data. And it could be running something as complex as ML workloads
or as simple as filtering data by parsing metadata, for example.
So all of this can help.
So every case is different,
and it requires some understanding of how computational storage can help you.
But yeah, that's it.
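The simple end of the spectrum Vladimir mentions — "filtering data by parsing metadata" — is easy to picture in code. The sketch below is hypothetical (the record layout, field names, and `on_drive_filter` function are all assumptions, not an NGD interface), but it shows why the technique saves bus traffic: only matching records ever leave the drive.

```python
import json

# Hypothetical record layout: JSON with a small "meta" header and a
# large "payload". The drive-side routine parses each record, checks
# the metadata, and returns only the matches, so the filtered-out
# payloads never cross the bus to the host.
def on_drive_filter(records, want_type):
    return [r for r in records
            if json.loads(r)["meta"].get("type") == want_type]

records = [
    json.dumps({"meta": {"type": "image"}, "payload": "<large blob>"}),
    json.dumps({"meta": {"type": "log"},   "payload": "<large blob>"}),
]
print(len(on_drive_filter(records, "image")))  # -> 1
```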
Well, I think that's a path that I'd like to go down a little bit,
which is those use cases, right?
And maybe specifically here, the AI use cases.
And what are the things you've seen maybe already,
or at least have thought of as far as where this computational processing
can really help for artificial intelligence in general,
maybe machine learning more specifically, is this something where we can go in and be doing
training on data while it sits on individual disks? Or is it something where we can actually
be performing inference and making predictions based on that limited data set out of the bigger
pool? Or fully training? What pieces and parts can this actually help with, or is it all of them, and how?
Sure. So, yeah, I mean, absolutely, both are valid. So one of the cool things
about being a startup is that you have the opportunity to do a little bit more than just
you know engineering development and we do a lot of research and development in our company,
for example, but there are other groups out there that do the same. And we have a strong
relationship with different groups in academia, including the University of California,
Irvine, right there where we are. And we're lucky to have many PhD students
as interns in our company.
And one of the things that we have explored over the past
two, three years is actually, first, running inference
inside the storage device using common frameworks
such as TensorFlow, Keras, et cetera.
And we've been very successful,
especially when we use the concept
of distributed processing.
So usually, for example, if you're doing,
you know, image recognition of some sort,
like facial recognition or things of the sort,
you have tons of data in form of images that are,
you know, when you talk about terabytes
or multiple terabytes,
they need to be stored in multiple devices, right?
So multiple storage devices.
So we can quickly and easily harness the power
of computational storage by executing these algorithms locally
on the local data and let the host also do the same type
of process or a different process.
And in the end, we have shown in multiple papers that have been published in well-known
conferences that we augment the performance of those systems when they are performing
those machine learning applications.
And as well as there's a significant increase in energy efficiency.
Now, then after that, we've been pushing the boundary and looking at training.
There's already been a lot of work done by, you know, Google, for example,
on distributed, you know, training, especially using, you know,
the platforms such as cell phones, et cetera.
We piggybacked on that kind of study, developed on those algorithms, and showed actually
that you can perform things such as
language processing, for example,
on databases that have been released or made public by Twitter, for example,
to infer on the mood of people that are writing,
for example, a comment on Twitter,
whether they are sad or happy, et cetera.
So this kind of thing that's usually done
in the framework on a data center and high-powered
machines and so on, we can actually reproduce that and run it locally on the storage device
and harness the power of all the storage devices working in parallel together with the CPUs.
We're not just eliminating CPUs, but then we can show, and we have shown in these scientific papers, that we can augment the performance of the system.
So one of the other things that occurs to me is that by having a huge number of, we already have a huge number of storage devices.
And if we're putting processors on those devices, essentially we're getting a huge number of processors working as well.
And as I think everybody in the machine learning space
has seen, massive numbers of processors working in parallel
is the secret to making all this stuff work.
So there again, there's another aspect to this as well,
that we're starting to see more and more processors
teaming up and even distributed
processors, as in the case of NVIDIA's announcements with their Grace platform and so on,
even distributed processors teaming up on offloaded data. And so to me, again, this seems analogous
to what we're seeing in networking with the DPU trend and with NVIDIA Grace and so on. Do you
guys see that as well?
Is this sort of the same sort of thing?
It's like a DPU, except for storage?
Yeah, so I'll jump in there and kind of bring to light
that I would say that that's a very good way to look at it.
You think about everybody's favorite book,
The Secret to the Universe, the number is 42, right?
For us, the number is 6,048. If you take a rack of 1U servers full of our devices, you get 6,048 additional processors in the system. That's a lot of distributed processing power and capability there. But one of the key aspects that you brought up with, like, the GPU, for example, is a GPU
takes a slot in the back of the box or a drive that is a non-storage device in a drive slot.
We're giving you these processors inside the storage device.
So you're getting a two for one.
You're not losing any of your access to any of your additional connectivity or capacity,
but you're gaining the processing power.
And that's what makes the computational storage drives so much more valuable to some of these customers.
And to kind of piggyback on, you know, back to Chris's comment and what Vladimir was saying,
the concept of federated learning is a perfect example of what these products can do. Federated
learning, which is a process being developed by multiple companies, even down to VMware,
is to take a lower-performance processor and allow it to do more meaningful work.
And so since we are a small ARM core inside of a device, versus an x86 with 128 threads and 64 cores,
federated learning is a perfect example where you can do that distributed learning capability in these types of devices. And from a market landscape perspective, that's one reason why NGD has been so prevalent in the SNIA organization around standards,
is to help the market understand how to do this and create a common framework to interact with the different versions of these products that exist.
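The federated learning idea Scott raises can be sketched concisely: each low-power node trains on its own local data, and only model weights — never the raw data — travel to a coordinator that averages them. This is a toy federated-averaging sketch in plain Python under assumed conditions (a one-parameter linear model and hand-picked learning rate), not a description of any NGD or VMware implementation; a real deployment would use an ML framework on the device.

```python
# Toy federated averaging: nodes fit y = w*x on local (x, y) pairs,
# then the coordinator averages the locally trained weights.

def local_train(w, data, lr=0.01, epochs=50):
    """One node: least-squares gradient steps on its local data only."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w, node_datasets):
    """Coordinator: average the weights the nodes trained locally."""
    local_ws = [local_train(w, d) for d in node_datasets]
    return sum(local_ws) / len(local_ws)

# Two nodes whose data both follow y = 3x; the averaged model
# converges toward w = 3 without raw data ever leaving a node.
w = 0.0
for _ in range(20):
    w = federated_round(w, [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]])
print(round(w, 2))  # -> 3.0
```

Only the scalar weight crosses the network each round, which is what makes the approach a fit for small processors sitting next to their own data.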
That makes a lot of sense. And I guess the obvious question, which we've touched on a little bit,
because I think, I forget who it was mentioned that there's potentially cost savings here involved in kind of doing some of this kind of offloading from main memory, main CPU by doing this in the drives.
At the same time, I have to ask the other side of that question, which is, okay, if I'm now adding an extra 6,000 processors to this rack, how am I getting away with not spending more power, keeping all that lit?
So where's that trade-off there?
And how do I actually save power in this paradigm?
Initially, it may sound counterintuitive, but there have been many studies that show
that most of the power that is spent in any computing system is actually spent in moving
data.
So moving data, for example, from the memory
to the processor registers so it can actually be worked on results in a
lot of power spent, right? And the same for moving data from the storage device where it lives. So if you're talking
about processing a massive amount of data, let's say terabytes and terabytes of data, and want to run, for example,
a machine learning workload to search for a face in a gigantic database. Every one of the faces
or the images corresponding to each face needs to move from the storage device through the PCIe bus, in our case, into the root complex of the CPU,
and then into the host memory, and so that it
can start being processed.
So that movement of data results in a gigantic amount
of power being spent.
By reducing the amount of data that needs to move,
then we can save a lot of energy.
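A toy back-of-envelope makes the argument concrete. The energy constants below are illustrative order-of-magnitude assumptions (common architecture rules of thumb, not measured NGD figures): moving a byte off the drive to host memory costs far more energy than a simple in-place operation on that byte.

```python
# Assumed, illustrative energy costs (picojoules per byte):
E_MOVE_PJ = 100.0   # byte moved drive -> host memory (assumption)
E_OP_PJ = 1.0       # byte scanned/compared in place (assumption)

def scan_energy_uj(total_bytes, selectivity, on_drive):
    """Energy (microjoules) for a filter keeping `selectivity` of the
    data. On-drive: scan everything locally, move only the hits.
    Host-side: move everything across the bus, then scan it there."""
    if on_drive:
        pj = total_bytes * E_OP_PJ + total_bytes * selectivity * E_MOVE_PJ
    else:
        pj = total_bytes * E_MOVE_PJ + total_bytes * E_OP_PJ
    return pj / 1e6

gb = 1_000_000_000
host = scan_energy_uj(gb, 0.01, on_drive=False)
drive = scan_energy_uj(gb, 0.01, on_drive=True)
print(host / drive)  # under these assumptions, roughly 50x savings
```

The exact ratio depends entirely on the assumed constants and the filter's selectivity; the qualitative point — that avoided data movement dominates the added on-drive compute — is what the speakers are describing.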
Okay, yes, we are adding a processor that consumes power and so on,
but the proof is in the pudding.
And all the experiments that we ran so far, all of them,
all the POCs show that we save energy at the end.
And it's not something that's particular to the technology
and the products that we as NGD Systems have
developed. It's a generic concept, whereby reducing or eliminating the amount of data that
is moved from one location to the other saves energy.
And also, you're talking about using low-powered chips anyway. So, I mean, you mentioned ARM earlier. I know that there's also this RISC-V thing
that a lot of storage developers
are getting excited about.
But on the ARM side,
we do see a lot of deployment
of specialized ML processing cores and so on.
Are you seeing those kind of specialized
ML processing IP coming right into the drive?
Yeah, I believe this is going to be
a trend in the near future.
Yes, I do believe so.
I mean, just accelerating ML processing is not enough.
You actually need to have a programming model
that makes it easy.
That's fundamental.
But having said that, adding this extra processing capability
will help a lot to accelerate and enable tackling very complex, hard-to-solve problems in the storage device that today can only be,
you know, done in a GPU, for example. So I think this is going to be a real trend,
but it does have to come with a good programming model, which is, in my opinion, essential.
Yeah, I think that's the most important thing. We've seen that here
in the past as well, that, you know, a lot of this technology relies on programming frameworks.
are you all working in that area as well or are you really focused on the hardware side of things?
No, actually, what we've done is work on a programming model that allows us to use these frameworks that are being developed.
This is a gigantic amount of work that's done.
Usually it's an open source development.
It's thousands and thousands of developers around the world.
You don't want to be, you know, like replicating some sort of, you know, ML framework.
Who wants to, you know, reinvent TensorFlow?
I mean, it's a gigantic task.
So I wanna use TensorFlow, right?
And that's the thing.
If you don't have a programming model
that allows you to go and say,
well, I wanna use TensorFlow today,
tomorrow I want to use a different framework
or to deploy my applications, I wanna use Kubernetes,
does my drive support it?
So these are the very important questions, right?
So if you have, like, very basic processing capabilities
that maybe even, you know, provide high performance
for some specific functions,
it may not solve your programming model issue.
I want to circle back a little bit to something.
I think Vladimir,
it was you that was talking about this earlier, was just this move to the edge that's obviously happening kind of across the board. And we've definitely talked about it a lot here on this
show, as well as in other fora, where for a lot of use cases, a distributed approach to computing
at the edge makes a lot more sense. And I think you mentioned that this
is kind of hand in hand or hand in glove anyway with computational storage. And that makes sense
to me, right? If I can get better performance locally on the disks with lower power, that seems
to play right into the edge. I mean, does this technology enable new business models for edge computing from your perspective?
Or am I getting, again, ahead of myself here?
No, I think it does.
So first of all, I do 100% agree.
Computational storage is a very good fit
for what we're seeing in terms of needs at the edge, right?
So what we see is the need for tons of storage capacity at the edge,
much, much more than people thought a few years ago. So, you know, high capacity solid state
drives and so on are the solution. They're more rugged and provide higher capacity than conventional old magnetic hard disk drives.
But then there's a lot of need for processing.
A lot of it can be done by simply moving the data to a central location,
a data center, and performing the computation there,
and then reacting to it, and, you know, performing another action, or sending
commands, or actuating something. But more and more, we see use cases where this is not possible.
You have to make decisions right there, and then you need to process the data, you know, closer to
where it is being collected and stored because you don't have the luxury,
the time to wait for data to move to the data center
for the decision to be made.
Also, the cost of moving data becomes much, much higher.
Data centers charge a lot for data coming in
and data going out.
So it becomes more and more interesting to perform computation at the edge.
And we're not talking about a very small amount of data, either. For cases such as, but not limited to, processing video or images,
then definitely, in many cases,
we see that computational storage as a concept
can be used very effectively.
And I mean, we're seeing a lot of need for that in autonomous vehicles.
Maybe Scott can talk a little bit about that.
A lot of exciting work happening in edge analytics in general, something that's kind of public, for example, the work that we're doing with VMware. So there's a ton of examples there that show there is a new
business model out there that's getting ripe, let's put it this way.
Yeah, I'd like to actually finish up before we get to our three questions here with just a quick
question about where are we in the development and the deployment of this?
So I think a lot of listeners have probably said, I had never heard of this. I
had never thought about this. This is something totally new to me. How real is this? Are these
products you can buy? Are they products that are deployed? What's going on here? Yeah. So I'll go
ahead and jump in there. And from that perspective, yeah, the market is evolving. It's in what I classify as infancy. So there are products
available from multiple vendors; we're one. The SNIA group has now published a document that
gives the framework for easy adoption, if you will, of various different
products. Think back to Fusion-io and PCIe SSDs: when they first came to market, it was kind
of a wild west where everything you plugged in you had to use uniquely, which creates more
challenge for adoption. By using SNIA to develop this, and having companies like Samsung on board alongside
ourselves, and companies like Micron and Kioxia participating in the effort, we see the momentum
behind it, but we're still in the growth phase. So, to your point, a lot of people
simply haven't heard about it, but the products are available.
They are shipping, they're in volume
at various different places.
As you mentioned, things like autonomous vehicles,
we're going to space,
VMware projects that are being launched
that are gonna open up greater opportunities.
So just for our company alone,
let alone the other players in the space.
Excellent.
Well, thank you so much.
I think that everybody listening has probably learned something today that they didn't know about before.
So before we move on, it's time for our three questions.
We're going to handle these jump ball style. In other words, I'm just going to throw them out there and Scott or Vladimir, you guys can decide who's going to answer this one. So we'll start off with the first. Are there any jobs that you think will
be completely eliminated by AI in the next five years? I'll jump in there. So from a jobs
perspective, I think we already see that there are some jobs that are being
impacted by AI.
I just look at like all the robots running around the warehouses, picking and pulling
content, stuff like that.
I think the evolution of the job force is going to be taking place because of AI.
We may not see as many jobs eliminated, but they just migrate to other opportunities,
whether it be more and more computer science guys needed to help drive the AI,
let alone the pick and placers type of thing.
So I think we are seeing an evolution of the workforce,
but I'm not really seeing
that it's gonna wholeheartedly replace people.
I mean, I used to be in the fabs at Micron,
where I would move wafers myself by hand-carrying them.
Robots move them around the sky now using AI,
but there's still an operator that hits go on the machine. Well, let me flip that on its head. Maybe Vladimir can take this one. Are there any fields
that have not been touched at all by AI? Whoa. Maybe very few. Well, even, yeah, if I try to
think, it's complicated because even in the arts, we see people experimenting with AI maybe just for fun.
But no, I think AI is going to be pervasive.
I'm not afraid of AI the way maybe, you know, people like Elon Musk and so on are. But I think it will completely
change, and very soon, the job market in general. Lots of activities are
going to be completely eliminated, in my view, but it will create a lot of new opportunities as well
that we even can't imagine today.
And that has been the case, you know,
with the industrial revolution, et cetera, et cetera.
So we, I mean, hopefully we're getting smarter
and we don't make the same mistakes as, you know,
the human race has made when we transitioned
from each great phase of history. But AI is definitely going to
completely change the panorama. In 50 years, we'll look back and say, wow, things have changed a lot.
That's my belief. Well, the final one then that I've got for you. Maybe this is an appropriate question as well. If we're putting ML
processors in the drives and in IoT devices, how small do you think we're going to get with ML?
Do you think we'll have ML-powered household appliances? Do you think we'll have ML-powered
toys for kids? How about disposable ML? Yeah, I believe we'll have ML everywhere. And actually, it's going to be very helpful.
The other day I was discussing that concept where you would actually have a little AI bot that you can use to talk about your feelings, like how you're feeling and so on, and then this can be used to kind of gather information
about yourself to help you when you, you know, talk to your doctor, and
help diagnose complex problems like mental health issues, or even health issues where
you're having a hard time figuring out what's going on with you.
So it's just an example.
I think it's going to be absolutely everywhere.
And like cell phones, it may have good sides, but probably also bad things that we'll need to deal with.
Yeah, that's a really good point. And frankly, just like cell phones,
where the components of the cell phone have powered revolutions in all sorts of other areas,
because suddenly we've got massive volume
on accelerometers and gyros and flash memory
and all these other things.
It allows us to see other areas to apply this technology.
And I think ML is going to be there as well.
And certainly in devices, storage devices.
I think we can all agree that there's going to be ML processors there.
So thank you so much for this conversation.
Where can people connect with you and follow your thoughts on enterprise AI and other topics?
Scott?
Yeah, absolutely.
So you can find me @SMShadley on Twitter, Scott Shadley on LinkedIn, and the company is @NGDSystems on Twitter and
LinkedIn as well. We've got a great blog site, including a post written by Vladimir about how to
deploy ML that was recently reshared. So feel free to check us out on the web, Twitter, LinkedIn,
we're kind of pervasive, and just search the term computational storage and you'll be able to find quite a bit of content. Great. And Vladimir, anything you want to add?
No, I'm one of those nerdy guys who is shy about social media. I'm not on Twitter. I'm
lurking on Reddit and so on. I follow stuff, but I'm not exposing too much of myself.
But the company kind of reflects the work that we do.
So you can follow our company on Twitter.
Great.
And how about you, Chris?
What have you been up to lately?
Yeah, I'm all over social media still,
to an extent anyway.
LinkedIn is a great place to have conversations.
You can also follow me on Twitter
or look at all of my exploits
kind of documented at chrisgrundemann.com.
Great.
And as for me,
you can find me every Tuesday
on Utilizing AI
and every Wednesday
at the Gestalt IT News Rundown.
So do check out the rundown
where we run down the weekly news
with a little bit of snark.
Thank you very much
for listening to Utilizing AI.
If you enjoyed this discussion,
please do share it with the world,
share it on your social media networks
or maybe on Reddit, if that's your place.
And please join us again next week for the next episode of Utilizing AI.
This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise.
For show notes and more episodes, go to utilizing-ai.com or find us on Twitter at utilizing underscore AI.