Utilizing Tech - Season 8: AI at the Edge Presented by Solidigm - 08x01: Accelerating and Protecting Storage for AI with Graid Technology
Episode Date: March 31, 2025

Modern AI servers are loaded with GPUs, but spend too much time waiting for data. This episode of Utilizing Tech, focused on AI at the Edge with Solidigm, features Kelley Osburn of Graid Technology discussing the latest in data protection and acceleration with Scott Shadley and Stephen Foskett. As more businesses invest in GPUs to train and deploy AI models, they are discovering how difficult it is to keep these expensive compute clusters fed. GPUs are idled when data retrieval is too slow, and failures or errors could prove catastrophic. Graid not only protects data but also accelerates access, allowing users to achieve the full potential of their AI server investment.

Guest: Kelley Osburn, Senior Director of OEM and Channel Business Development at Graid Technology

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
Modern AI servers are loaded with GPUs,
but if they spend too much time waiting for data,
then you're not getting much out of them.
This episode of Utilizing Tech,
focused on AI at the Edge with Solidigm,
features Kelley Osburn of Graid Technology,
discussing the latest in data protection and acceleration.
Welcome to Utilizing Tech,
the podcast about emerging technology from Tech Field Day,
part of The Futurum Group.
This season is presented by Solidigm and focuses on AI at the edge and related technologies.
I'm your host, Stephen Foskett, organizer of the Tech Field Day event series.
Joining me today from Solidigm as my co-host is Mr. Scott Shadley.
Welcome to the show.
How's it going, Stephen?
It's great to have an opportunity to join you this season.
So I'm excited about what we're going to be talking about today
and for the rest of the season as well.
Absolutely. And what we're talking about
is basically all the various components
that are required to build AI applications at the edge,
AI applications in the cloud, AI applications everywhere.
But really we're going to focus in on the importance
of basically the full stack under AI.
Yeah, it's a great opportunity to kind of talk to that point
because AI is our shiny object today
and we know that it's here to stay for the foreseeable future
but we still have to start looking at 2025 as a year
we start to optimize our infrastructure around all the AI stuff
that's been booming for the last year and a half, two years.
So taking this opportunity to meet with folks like our guests today on ways to manage, manipulate,
protect, and take care of all that data that we're dealing with around all these AI workloads
is very important for us.
Yeah.
As a storage nerd, it always gets me when people don't take storage seriously because
the truth is, you can't just assume that it's going to work.
You can't assume that it's going to be reliable and redundant and performant and so on.
That's why we've invited Kelley Osburn, Senior Director of OEM and Channel Business
Development at Graid Technology, Inc., as our guest today.
Kelley, welcome to the show.
Thanks, Stephen and Scott. Appreciate the opportunity.
As Stephen mentioned, my name is Kelley Osburn.
I'm with Graid Technology and I'm in business development.
You can find me on LinkedIn at Kelley Osburn.
Not hard to find.
Just make sure you put the E in the URL and you can find me.
Kelley, tell us a little bit to kick things off.
What is Graid Technology?
I mean, I think most people have heard of RAID technology.
What is Graid technology?
So we were founded about three, three and a half years ago
to solve a problem in the space
around NVMe high performance flash storage.
And essentially the problem is when you develop a server
or install a number of these drives in a server,
once you impose some sort of RAID data protection on them,
you create bottlenecks that don't
allow you to achieve the full performance of those drives.
And when you spend a lot of money on drives like that,
you want to get all that performance.
So we identified that as a problem in the market.
So Graid is actually kind of a twist on GPU RAID,
and that's exactly what we're doing.
So we have a product called SupremeRAID,
and that is a software RAID stack
that deploys on NVIDIA GPUs
to accelerate the RAID operations
and allow you to achieve very close to 100%
of the performance of those expensive drives.
It's very interesting, you know, to that point, because with Solidigm being so focused on storage,
and me being a self-proclaimed storage geek much like Stephen, one of the unique things about
NVMe drives when they first came out, with Solidigm being one of the pioneers on that
front with our long history, was that NVMe stripped out a lot of what you could do with some of
the other interface technologies.
So it's nice to see that Graid is coming in and finding a new way to work at a long-standing
problem. Yeah, we, I think, identified two problem areas
with traditional hardware RAID where you plug your drives into a controller that's
sticking in a slot. You artificially create lane limitations. Think of a toll booth on a superhighway.
When you think of these Solidigm drives, each of them, to hit maximum performance,
needs four PCIe lanes.
So if you connect four of those to a card that has 16 lanes,
you've already hit 100% of the performance of that card.
So how do you go past that?
We allow that because the drives in our
world are connected directly to the motherboard and if you have 10
drives, you need 40 lanes. We deliver those 40 lanes of performance because we have
a patented out-of-path technology we call peer-to-peer DMA that
allows the data from those Solidigm drives to make it to the CPU or the GPU
directly across the
motherboard without going through a gate, if you will.
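To put rough numbers on that tollbooth analogy, here is a back-of-the-envelope Python sketch. The ~2 GB/s-per-lane figure for PCIe Gen4 is an assumption for illustration (it ignores protocol overhead), not a number from the episode:

```python
# Back-of-the-envelope lane math for the tollbooth analogy.
# Assumption: ~2 GB/s per PCIe Gen4 lane, ignoring protocol overhead.
GBPS_PER_GEN4_LANE = 2.0
LANES_PER_NVME_DRIVE = 4

def aggregate_bandwidth(num_drives: int) -> float:
    """Theoretical bandwidth if every drive gets its full x4 connection."""
    return num_drives * LANES_PER_NVME_DRIVE * GBPS_PER_GEN4_LANE

def behind_raid_card(num_drives: int, card_lanes: int = 16) -> float:
    """Same drives funneled through a single x16 RAID controller slot."""
    return min(aggregate_bandwidth(num_drives), card_lanes * GBPS_PER_GEN4_LANE)

for n in (4, 10):
    print(f"{n} drives: direct ~{aggregate_bandwidth(n):.0f} GB/s, "
          f"behind an x16 card ~{behind_raid_card(n):.0f} GB/s")
# 4 drives: direct ~32 GB/s, behind an x16 card ~32 GB/s
# 10 drives: direct ~80 GB/s, behind an x16 card ~32 GB/s
```

Four drives already saturate the x16 card, exactly as Kelley describes, while direct attachment keeps scaling.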
The other side of that is software RAID, where as you have more and more of these drives
that are really fast, they really want a lot of attention, so you get a lot
of interrupts and the CPU gets really, really busy.
CPUs also are not very good at mathematical calculations
compared to a GPU.
So you start to choke on the parity calculations
for RAID 5 and 6 in those kinds of environments.
And so the hardware RAID scenario doesn't scale
and the software RAID scenario doesn't scale.
We really shine when you get beyond four or five drives
in a server and go all the way up to 32
on a single chassis.
Yeah, this is really a big trend in the HPC and AI sector overall,
is basically having, as you said, DMA,
direct memory access, between peripherals and chips generally.
And I say chips rather than CPUs because that's also what's going on in many cases
with a lot of the GPUs and other AI acceleration
engines in modern systems.
Instead of being sort of a star topology
with the CPU at the middle
and everything going through the CPU,
it's almost a mesh or fabric technology
where a topology where things are going direct.
It's actually more like a web
where the CPU is still at the center,
but there's links that go sort of around the CPU.
Is that right?
Very much so.
So the way NVMe works,
it plugs directly into the motherboard of the server chassis.
So the CPU, the memory, the PCIe slots, the drives,
everything can see each other.
We call that a peer-to-peer direct memory access,
if you will.
So what we do is act more like a traffic cop.
When a read comes to us, we know where the data is,
and we know where it needs to go.
So we just tell that drive, send the data over here.
We don't have to read it in and forward it.
That would introduce latency and create hops
and other things in your data path.
So in the GPU world,
NVIDIA created something called Magnum IO,
or GPUDirect Storage.
And that allows drives to send data directly to a GPU
and bypass that host, which eliminates that extra hop
for those high-performance workloads.
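As an illustration of that GPUDirect Storage data path, here is a minimal sketch using NVIDIA's kvikio Python bindings for cuFile. The file path is hypothetical, and whether the transfer is truly peer-to-peer (rather than falling back to a host bounce buffer) depends on platform, driver, and filesystem support:

```python
import cupy as cp   # GPU array library
import kvikio       # NVIDIA's Python bindings for cuFile / GPUDirect Storage

# Allocate the destination buffer directly in GPU memory.
buf = cp.empty(64 * 1024 * 1024, dtype=cp.uint8)

# Read straight from NVMe toward GPU memory; with GDS support enabled,
# this skips the CPU bounce-buffer hop discussed above.
with kvikio.CuFile("/data/checkpoint.bin", "r") as f:   # hypothetical path
    nbytes = f.read(buf)

print(f"read {nbytes} bytes into GPU memory")
```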
The problem is when you impose RAID and data protection into that, you still have to force
everything to go through something else, and we now have that built into our new version.
Yeah, that brings up some interesting ideas. I mean, you mentioned earlier, you know, kind of
the idea of how many slots, right? And we look at new reference designs and things like that, and
all the fun stuff that's getting put out there around how to fit so many of these chips,
to Stephen's point, into a system,
there's still a limited number of slots
associated with that,
whether it be the PCIe slots
where the product you guys are developing sits,
or even the number of bays where we can put drives.
So we get into those conundrums of,
okay, how do we best stuff the box and make sure
that we're getting the maximum use out of that
from both the storage technology and the partner technologies,
like the SupremeRAID product and things like that.
And you guys have recently worked on something new too,
that has just made the press.
So I'd love to hear a little bit more about that
as you kind of started down that path just a moment ago.
Sure, we created a product called SupremeRAID.
Once again, that's our product.
The new edition that we have is called SupremeRAID AE,
which is the AI edition.
And it implements several new features
like GPUDirect Storage, intelligent data offload,
and that peer-to-peer piece.
We also are incorporating a lot of NVMe over Fabrics
and incorporating our product
into parallel file system environments
like Ceph and BeeGFS and Lustre, and providing
that local data protection for those storage nodes.
The other thing that we've done: our traditional product
requires a dedicated GPU.
And in these large GPU servers like a DGX,
or like you see some of these servers
from big manufacturers that have eight Hopper or Blackwell
SXM chips that are NVLinked together,
there's not a lot of real estate to put another card in.
But you have so much GPU performance,
we now have a version of our software, the AE version,
that can run on one of those GPUs you already have.
So you don't have to have yet another GPU
in there powering our software.
So, you know, we just demonstrated that AE version
and showed it off at GTC in San Jose.
So that's getting a lot of attention
because when you buy servers like that,
that have so many GPUs,
if you're not feeding that data to that beast fast enough,
you're not getting the full utilization out of that.
And those are typically very expensive
and you want to get very close to 100% utilization
if you can.
Yeah, we were recently at AI Field Day.
We had a couple of presentations focused on that
very topic.
And one of the things that was pointed out was a study that showed
that the majority of enterprise AI deployments,
and this is not hyperscaler AI, this is not, you know, sort of wannabe AI,
this is actual enterprises running AI workloads,
were utilizing their GPUs at a very low percentage.
If you had asked me, I would have said,
during actual work, a production GPU cluster,
yeah, you're probably getting 50%, 60%, 70% utilization out of those things.
No. And these aren't really self-reported numbers,
because this is actually a metrics-based survey.
So it was actually measuring it.
35% of the respondents were getting less than 15% utilization
out of their GPUs.
Can you imagine if a factory bought a $30 million,
you know, stamping machine or something,
and they let it idle, you know, 85% of the time?
Somebody would be fired. And yet that's what we're looking at when it comes to these GPUs. There are a lot of reasons for that, but again, this
was during active use, so the main reason was those GPUs were not being fed with data.
Exactly.
And so what we see in those environments, a lot of times
those GPU servers will have 8 or 16 local NVMe drives, and they'll run that
scratch space in RAID 0, and that'll give them the best performance they can
possibly get because they don't have any RAID bottlenecks.
But then the issue is they have to start a whole job over again if there's a drive failure.
And it's not a negligible situation that you could have non-recoverable errors or a PCIe
error or an actual drive failure.
And so then you have to roll back to a known checkpoint.
So it may look upfront like you're getting better utilization, but in the long haul,
you may not.
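A toy Python model of that trade-off. Every number here is an illustrative assumption, not a figure from the episode: the job length, the failure rate, and the guess that protected RAID runs at ~95% of RAID 0 speed:

```python
import random

# Toy model of RAID 0 scratch vs. protected RAID for a long training job.
# All numbers are illustrative assumptions, not figures from the episode.
JOB_HOURS = 100          # useful work the job needs
P_FAIL_PER_HOUR = 0.002  # assumed chance per hour that the RAID 0 stripe loses data
PROTECTED_SPEED = 0.95   # assume protected RAID runs at ~95% of RAID 0 speed

def expected_hours(protected: bool, trials: int = 20_000) -> float:
    total = 0.0
    for _ in range(trials):
        done, elapsed = 0, 0.0
        while done < JOB_HOURS:
            elapsed += 1.0 if not protected else 1.0 / PROTECTED_SPEED
            if not protected and random.random() < P_FAIL_PER_HOUR:
                done = 0      # RAID 0: a failure means starting the job over
            else:
                done += 1     # protected: failures are survived, work continues
        total += elapsed
    return total / trials

print(f"RAID 0 scratch: ~{expected_hours(False):.1f} h wall clock")
print(f"Protected RAID: ~{expected_hours(True):.1f} h wall clock")
```

Under these assumptions the "faster" RAID 0 setup ends up slower in expectation (~111 hours vs. ~105), which is the point being made here.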
So our theory is, let's provide RAID 5 or 6 or 10 or 1,
or even erasure coding protection, for those drives
in that machine, but let's not impose a bottleneck to do it.
So you get close to RAID 0 performance,
and yet you have that protection being able to lose a drive,
continue running, rebuild the drive, the kinds of things
that we're used to with RAID.
And so that's really our focus. It's an interesting step too,
because I mean, as the drive guy, right,
I'm gonna sit here and tell you,
I have a high quality product.
But there's always those situations
where you do need that protection.
We know you can't get rid of it.
And so you have to do something to manage it
and eliminate the overhead, to your point,
of what you're trying to do with those particular products. And as the data sets grow, there's always this term of blast radius. And
one of the focuses of RAID, the idea of RAID, not your product
but just RAID in general, has always been to prevent things like that blast radius and other issues.
But even with these high quality drives, to your point, there's fewer of them in the system,
and you've still got the legacy infrastructure that's around them. NVMe has definitely
taken over the world a little bit, which is nice to see, but it's not, you know, all
in everywhere, and systems still aren't optimized for it. So having solutions
like yours tied together with our products and things like that, you know,
we recently showed off a fun new toy at GTC too. So it's exciting to see how, you
know, our advancements in technology
combined with your advancements in technology
aren't just there for fun,
but to actually help our end customers solve problems.
Yeah.
And that's one of the interesting things, isn't it, Scott?
You and I go way back in history.
Storage has been a bottleneck for a very long time.
And in many ways, I'm sorry to say this, but storage is still the bottleneck,
but it's not the storage's fault. It's the way the storage is being used. That's the
bottleneck. You know, I mean, these systems have incredible amounts of bandwidth,
especially when you take these modern SSDs. And as Kelley was saying, you
know, NVMe drives with four PCIe lanes,
that's a lot of bandwidth, especially when you have multiple ones.
But the problem is most people don't have the capability
to actually use these things efficiently.
They don't have the capability to use all that bandwidth.
Or as Kelley was pointing out, they're strangling it behind a controller
that kind of acts as a bottleneck on it.
It's a little frustrating, isn't it?
Because we finally have all this incredible performance
and yet people are still not using it.
Yeah, that's another aspect of kind of
as we evolve this ecosystem and these reference architectures
and putting companies together like we are with
Graid and Solidigm and even the GPU guys,
if you will, like NVIDIA.
As we evolve these ecosystems,
the need to isolate the bottlenecks and solve them
is one thing that we're actually starting to see
with things like the new EDSFF form factors
that Solidigm's been kind of a big leader in,
because adding this new form factor,
which is flash-only, you can start to realize that. Now, I know I'm not killing the
hard drive. It'll still be there; it has its reasons. But in the performance
bottleneck situations where you really do need to make those systems
work better, it's about truly developing the system to work with this new
interface and the subsequent technologies that are tied to it, because
we want to make everything work well together.
It's nice to see, I mean, to your point, Stephen,
when we go back far enough, I remember when putting one SSD in a box
was a huge success and now we're talking about,
let's put 32, 64, however many they can start shoving in these boxes.
So it's a great evolution. I'm excited about what's next.
And I want to call attention to another thing that Kelley said
and make sure that people heard it.
One of the things about these modern AI servers
and frankly, AI servers are going to be everywhere,
not just in the cloud or in training or anything,
I mean, especially at the edge.
One of the interesting things about these
is that they have a ton of compute power on the GPU or, you know, AI accelerator side,
way more than they do on the CPU side. And yet, many, many supporting systems,
I'm just going to say that generally, and that would include storage, use the CPU to do the work.
Kelley, you said that you're actually doing the RAID calculations in the GPU.
Tell me a little bit more about that.
Are you really doing them in the GPU
right there in the cores?
Yes, so we have a technology that's
basically intelligent parity calculation.
And what we're doing when a write comes in,
we actually allow that write to go straight
to the drive across PCIe.
It doesn't flow through our card.
We do have to get a piece of that data into the GPU, the NVIDIA GPU, and that's where
we calculate the parity.
So then we lay the parity down, so only a tiny piece of data is written
that comes from the GPU itself, and then we acknowledge back.
So that's how we achieve very high write performance
and very high IOs per second.
On the read side, I mentioned when reads come in,
we just do a redirect and tell the drives to send the data straight
across PCIe to improve that performance.
CUDA cores on GPUs are far faster
at mathematical calculations.
RAID parity is nothing new.
It's based on Reed-Solomon.
It depends on which level you're using,
but those are just intensive mathematical calculations
to generate this information
that allows you to create data protection
around these drives and around your data.
And we're not doing anything different in the algorithm.
We're just implementing it in a way
to maximize performance of these NVMe drives.
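For readers who want the idea in code, here is a minimal sketch of the single-parity (RAID 5-style) case, the simplest member of the Reed-Solomon family mentioned here: parity is just the XOR of the data chunks in a stripe, and the same computation serves drive rebuilds and the URE recovery discussed later in the episode. RAID 6 adds a second, Galois-field syndrome on top of this:

```python
import numpy as np

# Four data chunks in one stripe (stand-ins for blocks on four drives).
rng = np.random.default_rng(0)
stripe = [rng.integers(0, 256, size=4096, dtype=np.uint8) for _ in range(4)]

# RAID 5-style parity: XOR of all data chunks, stored on a fifth drive.
parity = np.bitwise_xor.reduce(stripe)

# Lose any single chunk (drive failure or URE) and it can be rebuilt
# by XOR-ing the surviving chunks with the parity.
survivors = [c for i, c in enumerate(stripe) if i != 2]
rebuilt = np.bitwise_xor.reduce(survivors + [parity])
assert np.array_equal(rebuilt, stripe[2])
print("chunk 2 rebuilt from parity")
```

This is exactly the kind of bulk, embarrassingly parallel bitwise math that spreads naturally across thousands of CUDA cores, which is the point being made about GPUs versus CPUs.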
When you look at hard drives, we mentioned hard drives,
hard drives benefit greatly from things like write cache.
So traditional hardware RAID controllers
that you see out there, you'll have batteries on them,
you know, or capacitors, and what that is there for is write cache.
So the data comes in, we tell the application the data is down on the drive,
and then it gets written to the drive on the side.
And then if the power goes out or something, you need to be able to have,
you know, that cache backed up, because the application thinks that's all written to
disk. Ironically, in our world, we identified that that kind of thing, caching, write caching,
actually introduces latency into the data path.
That's why we allow the data to go right from the CPU
or right from the GPU straight to the drive.
All we're doing is writing the parity.
And so that improved performance greatly.
And so that's why we do it the way we do it.
And it's a very simple concept
once you start to understand what we're doing.
Yeah, those of us who've actually done some work
on AI at the edge,
I know that we've experienced the joy
of tensor processors and GPUs and offloading this stuff.
I mean, you go from trying to inference on a CPU
to adding in a moderately powered GPU or a TPU module,
and suddenly you're able to do not just a little bit more,
but an order of magnitude more inferencing
using basically the same hardware.
In fact, many of us are using pretty low powered systems
at the edge with pretty high powered GPUs there.
And so it's analogous to exactly that.
So you're doing object detection or facial recognition
or whatever it is using the GPU cores.
Well, now you're using those same cores to do the raid.
It's the same thing and it's offloading that CPU
and it's allowing you to use a smaller, lower powered,
less cooling, etc. in the CPU side,
and you're able to leverage that as well.
And as we said, in many cases, the GPUs are not maximally loaded.
In many cases, there is a little bit of bandwidth there,
there's a little bit of slack space
that you can use to do these calculations.
So that's a big benefit from all the way out to the edge.
Do you have many people using this sort of technology there
in edge and inferencing use cases?
Oh, yeah.
So I'll even mention some real customer scenarios.
We have one.
This is a company that
has kind of started to blend AI machine learning
with CFD, computational fluid dynamics, and simulations.
And they spec'd out a server from a well-known server
manufacturer.
And it had 24 NVMe drives, Gen 4.
So that should be three and a half times 24,
so somewhere around 80 gigabytes a second at RAID 0 for write performance.
To get their application to run properly,
the customer needed around 40 gigabytes a second to write to those drives. So they bought the server, set it up,
tested it with RAID 0. It was great. They set up the software RAID that's built into Linux,
Linux MD RAID, and I'm not picking on any one Linux,
because it's ubiquitous freeware.
And once they set up RAID 5 on these drives,
they got one gigabyte a second.
So the drives are theoretically capable
of writing at 80 in total, and they were only getting one.
We demonstrated 68 out of 80 with our technology,
and so the customer has solved that problem.
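The arithmetic from that example, worked through in Python using the round numbers as stated on the show (3.5 GB/s per Gen4 drive is the approximate figure used; 24 x 3.5 is 84, so "around 80" is the rounded version):

```python
# Numbers as stated in the episode, rounded.
drives = 24
gb_s_per_drive = 3.5                     # approximate Gen4 NVMe write bandwidth

raid0_ceiling = drives * gb_s_per_drive  # theoretical RAID 0 write ceiling
required = 40                            # GB/s the application needed
md_raid5 = 1                             # GB/s measured with Linux MD RAID 5
demonstrated = 68                        # GB/s stated as demonstrated

print(f"RAID 0 ceiling: ~{raid0_ceiling:.0f} GB/s")                    # ~84 GB/s
print(f"MD RAID 5: {md_raid5} GB/s ({md_raid5/required:.0%} of requirement)")
print(f"Demonstrated: {demonstrated} GB/s ({demonstrated/raid0_ceiling:.0%} of ceiling)")
```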
We have another similar situation in a customer
that's got a large high frequency trading model database.
They actually have 22 servers that happen to have
32 Solidigm QLC drives in each one.
They're writing huge amounts of data really, really quickly, and they have to keep this
for analytics and forensics and historical logging.
And similar problem: they were only getting one or two gigabytes a second with this traditional
software RAID.
So in that kind of environment, we've come in and, you know,
helped out the customer, retrofitted the customer site, whatever you want to say. The challenge now is how do we start to condition server manufacturers like this to understand where
the performance bottlenecks are and how we can solve that with Graid technology.
And we're making a lot of inroads with these server manufacturers.
That's great to hear. And to that point,
like I've talked a couple of times now about the ecosystem.
It's good to hear where that's, you know,
seeing some traction from your perspective as far as those efforts and things like
that. When we kind of shift back to what Stephen was mentioning
about the edge environments,
there's also the concept to keep in mind of throughput capabilities. He
mentioned underpowered CPUs, for example, in those systems; being able to do anything to relieve that
stress is definitely valuable to these customers.
I love that, you know, between the two of you we had the superhighway with, you know,
the pinch points, and the manufacturing company that spends way too much money on a stamping
machine that makes one stamp an hour instead of a million stamps an hour.
It all shows to what we're trying to do here
in this amazing new ecosystem we're developing
and excited to see how the AI train continues to move forward
and gets more properly utilized,
I guess is the best way to put it.
We've thrown billions,
the companies have thrown billions and billions at it,
and recent snippets between the fight of ChatGPT
and DeepSeek and all that kind of stuff have thrown us all into a little bit of a loop, showing that what we're working on, the
stuff that we're doing behind the scenes, under
the hood, whatever you want to call it, really does make a difference in what's going on in the world.
And it's almost thankless in some ways because we don't necessarily expect the recognition,
but we can see what we're doing as having a major impact.
You bet.
We're working with some medical device companies
that build CT scan machines and MRI machines
and things like that.
One of their gating factors is how quickly they can take
the information that's generated from these scanners.
How do they get that down onto media
where it's stored securely?
How do they get it done quickly?
If we can double the speed, that machine can handle twice as many patients.
You can help more people.
The clinic can pay for that machine more quickly because they cost millions of dollars.
The real thing that we keep seeing over and over in our market, and it's not just storage,
anytime you put something faster,
anytime you improve the performance of one component
in a system, you rarely get the full performance
of that component because it just exposes
a bottleneck somewhere else.
I kind of think of it as, you know,
you put a bigger, faster engine in your car
or a supercharger on your car engine,
and then you find out your brakes are really bad.
So now you gotta go upgrade your brakes
so you can stop the thing or you need a better
transmission to handle the torque.
You can't just make one thing faster and expect the whole system to be faster.
And so, you know, we're specifically focused on protecting data.
That's our A number one.
You know, your data is more important than anything.
But if we can do it and get out of the way and allow that data to flow and allow you to
achieve higher performance, then that's even better.
Yeah, that's actually a key point there, too. We've spent so much time talking about
performance. Protection is critical as well. And unfortunately, what I've seen is many use cases,
especially in edge, use unprotected storage instead,
and that's a big risk. That's a big challenge.
There's constraints, sure. You may not have as many drive bays.
You may not want to invest in RAID cards.
You may have been burned by software RAID or whatever
that didn't work and didn't perform as well as you had hoped
because it was reliant
on old technology and CPU cores and things like that.
But a lot of those problems are being solved.
We're kind of in a new world.
Flash and SSD-based computers are kind of like electric cars
in a way.
It's sort of a completely different paradigm
of the same thing,
but you have to treat it somewhat differently.
And when it comes to data protection and storage,
as Scott mentioned, there's new form factors
that will allow you to pack multiple drives
into a compact form.
There's new technologies that would allow you
to very, very efficiently serve data
and also protect data in those environments.
So I could see this as really a transformative technology
while maintaining that sort of compact
and less expensive form factor, right?
Absolutely. Yeah, I mean, to your point,
we did kind of gloss over some aspects.
I mean, RAID by definition is of course a protection scheme,
but for those that may not be as familiar with it
or the implementation challenges,
hopefully through the course of the conversation
Kelley's done a great job of explaining a lot of that
to us as well.
But data protection is absolutely paramount.
There's things that you have to do
at all the different levels to preclude all the wonderful things we're hearing
about cybersecurity problems and whatnot,
let alone just data integrity problems for the base hardware.
So it's going to be an interesting ride
for the foreseeable future.
We're excited about the partnerships
that exist with Graid and the whole solution stack
that we're working together on.
You know, we've talked about a lot of the high performance
features we're adding, like GPUDirect Storage,
but there is a slew of data protection features
that we've added, like write journaling.
So in a degraded mode
where you're rebuilding a drive, we now journal every write.
That way, if you have another failure,
you eliminate a corner case known as the write hole,
which comes from a double fault where it's unrecoverable.
Now we can always go back to the journal
and say, this is where we can pick up where we left off,
because we know that was written.
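A minimal Python sketch of the journaling idea, with a hypothetical append-only journal file. This illustrates the concept, not Graid's actual implementation: the intent record is made durable before the stripe is touched, so a crash between the data write and the parity write (the write hole) leaves a record to repair from:

```python
import json
import os

JOURNAL_PATH = "raid.journal"   # hypothetical journal location

def journaled_stripe_update(stripe_id: int, payload: bytes) -> None:
    # 1. Record intent and make it durable before touching data or parity.
    with open(JOURNAL_PATH, "a") as journal:
        journal.write(json.dumps({"stripe": stripe_id, "len": len(payload)}) + "\n")
        journal.flush()
        os.fsync(journal.fileno())
    # 2. ...then update the data chunks and the parity chunk here.
    # If we crash between those two writes, recovery scans the journal
    # and re-derives parity for every stripe listed, closing the write hole.
```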
There are transient errors that can happen across PCIe.
And even in these drives,
you can have something called an unrecoverable error.
And a URE happens when something's changed in the data
and the data is now no longer what you wrote.
Maybe a gamma ray hit that cell in the flash
and it's flipped a bit, you know.
Okay, if you're running RAID 0,
you are having a bad day now, because you have a URE
and it can't be recovered from.
Well, we can go reach across the drives
with the striped parity that we have
and recreate that on the fly.
So we have now built in URE recovery
on the fly, and the application doesn't even know what happened.
And we just rewrite it somewhere else
and then pass it on to the application.
We have customers that believe
they can run RAID 0 because they're looking at just MTBF
and drive writes per day specs from a drive manufacturer.
And those are really, really good.
But when you have 20 of those,
you increase your failure potential by 20.
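That scaling follows directly from independent failure probabilities. A quick sketch, where the 0.5% annual failure rate is an assumed illustrative figure, not a Solidigm spec:

```python
# Why 20 drives means roughly 20x the failure potential.
# Assumes independent failures and an illustrative 0.5% annual failure rate.
afr = 0.005

for n in (1, 8, 20, 32):
    p_any = 1 - (1 - afr) ** n   # P(at least one of n drives fails in a year)
    print(f"{n:>2} drives: {p_any:.2%} chance of at least one failure per year")
```

For small per-drive rates, the combined probability is approximately n times the single-drive rate, which is the rule of thumb being used here.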
But a lot of things can happen that aren't the drive's fault.
Many of my customers that are doing edge type things,
we're talking about military battlefield installations.
We're talking about harsh environments like ships,
mapping the sea floor, like data collection in an airplane.
And drives can fail in that scenario
through no fault of their own,
but it could be because of EMI interference,
it could be because of heat,
it could be because of vibration,
it could be because of corrosion in a salt environment.
So you've got to take that into account as well because the drive specs are fantastic
if you're in a hermetically sealed data center that's at the right temperature all the time,
but that's not always the case.
So if you don't protect your data, you're just asking for a problem at some point.
Yeah, that's a great way to kind of segue us right into what's coming soon
in the rest of the series as well, as we start focusing on AI and edge
and getting further away from those perfectly cleansed environments
that a lot of these systems sit in.
So I really want to thank you a lot, Kelley, for taking the time to join us.
And I'll hand it back over to Stephen here to kind of wrap us up.
Yeah, thank you so much.
And that is exactly what we're going to be talking about.
Actually, that's why I love Edge Field Day and all these Edge companies.
You know, it's in a way, it's kind of like the real world.
You know, it's when things get real, you know,
it is all well and good to build these systems
in an environment
that will never face challenges.
But edge by definition is systems that are not in the conventional spaces.
They're not in data centers, they're not in the cloud.
They are everywhere.
And they could be under the fryers, they could be on a battleship,
they could be in the back of a self-driving car. They could be on an oil exploration site
out somewhere out in the middle of nowhere.
And in all of those cases,
we have to think about all the things that can go wrong.
And in all of those places,
AI is making a tremendous impact on what people expect
and how people are using this data.
These sensors are pulling in way more information than ever before.
They're processing more information than ever before,
and that demands better and better capabilities
in terms of GPU, sure, but in terms of storage, especially.
And that's really what we're going to be talking about
all this season on Utilizing Tech.
Thank you, Kelley, so much for joining us today here on Utilizing Tech.
As we wrap up the episode,
where can people connect with you and continue this conversation?
Absolutely. So, graidtech.com, graidtech.com is our website.
We have a number of parts of the website
where you can look at different use cases, customer case studies, etc.
We also go to lots of the big shows.
We mentioned GTC.
We're going to be at CloudFest over in Germany.
We're going to be at the supercomputing show over there called ISC.
We're going to be at SC25, which is in St. Louis this year.
I'll personally be at NAB.
I'll be at Dell Tech World.
So we're going to be at a number of different events like that,
and would love to connect with anybody
who is gonna be there and would like to speak with us.
Just reach out to me on my LinkedIn page,
and I'd love to talk.
Well, it's great to have you.
How about you, Scott?
Where can people catch up with Solidigm?
Yeah, so for good old Solidigm information,
www.solidigm.com. You'll find all
kinds of cool stuff there related to what we're doing across the entire AI pipeline. Myself,
I'm SM Shadley on the different social platforms and Scott Shadley, of course, on LinkedIn. So,
happy to chat with anyone about what's going on in the world as we start to move this stuff forward.
And Solidigm will also be kind of traipsing the world
with CloudFest and Data Center World and Dell World and SC,
all the fun shows where we can kind of showcase
some of the really new innovative stuff
we're doing together with all these great partners.
And as for me, you'll catch me most Tuesdays on Tech Strong Gang,
most Wednesdays on the Gestalt IT Rundown,
and of course on the Tech Field Day and Utilizing Tech podcasts.
Look for Stephen Foskett.
Thanks for listening to this episode of the Utilizing Tech podcast.
You can find this podcast in your favorite application
as well as on YouTube if you want to see what we look like.
If you enjoyed this discussion,
please consider leaving us a rating or a review.
We would love to hear from you.
This podcast is brought to you by Solidigm and by Tech Field Day,
part of The Futurum Group.
For show notes and more episodes,
head over to our dedicated website, utilizingtech.com,
where you'll also find the previous season focusing
on storage with Solidigm.
You can also find us on X/Twitter, Mastodon, and Bluesky at Utilizing Tech.
Thanks for listening and we will see you next week.