Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 07x01: Proving the Performance of Solidigm SSDs at StorageReview
Episode Date: June 3, 2024
Analysts and press spend a lot of time talking about specs and performance numbers, so it's always a treat when we get to talk to people who are testing and using these products. This episode of Utilizing Tech is focused on AI Data Infrastructure and features Jordan Ranous from StorageReview and is co-hosted by Stephen Foskett and Ace Stryker from our sponsor, Solidigm. StorageReview has constructed an experimental environment focused on astrophotography as a way to demonstrate AI applications in challenging edge environments. Their setup included a ruggedized Dell server, an NVIDIA GPU, and Solidigm SSDs. This is the same sort of setup found in edge compute environments in retail, manufacturing, and remote use cases. StorageReview benchmarks storage devices by profiling real-world applications and building representative infrastructure to test. When it comes to GPUs, the goal is to keep these expensive processors operating at maximum capacity through optimal network and storage throughput.
Hosts: Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/ Ace Stryker, Director of Product Marketing, AI Product Marketing at Solidigm: https://www.linkedin.com/in/acestryker/
Guest: Jordan Ranous, AI, Hardware, & Advanced Workloads Specialist at StorageReview.com: https://www.linkedin.com/in/jranous/
Follow Utilizing Tech Website: https://www.UtilizingTech.com/ X/Twitter: https://www.twitter.com/UtilizingTech
Tech Field Day Website: https://www.TechFieldDay.com LinkedIn: https://www.LinkedIn.com/company/Tech-Field-Day X/Twitter: https://www.Twitter.com/TechFieldDay
Tags: #UtilizingTech, #Sponsored, #AIDataInfrastructure, #AI, @SFoskett, @TechFieldDay, @UtilizingTech, @Solidigm
Transcript
Analysts and press spend a lot of time talking about specs and performance numbers,
so it's always a treat when we get to talk to people who are testing and using these products in the real world.
And when I say in the real world, I literally mean out there in the world, as you're going to hear this time.
This episode of Utilizing Tech is focused on AI data infrastructure and features Jordan Ranous from Storage Review
and is co-hosted by myself and Ace Stryker from
our sponsor, Solidigm. Welcome to Utilizing Tech, the podcast about emerging technology from Tech
Field Day, part of the Futurum Group. This season is presented by Solidigm and focuses on AI data
infrastructure. I'm your host, Stephen Foskett, organizer of Tech Field Day, and joining me today
as my co-host is Ace Stryker of Solidigm.
Ace, welcome back.
Hey, Stephen. Thank you so much. I'm very excited to be here and looking forward to this season of the podcast with you.
Absolutely. It's going to be a lot of fun.
We're going to be able to bring in a whole bunch of folks all season long from different customers, different practical applications.
And that's kind of what we're talking about today, right, Ace?
I mean, you know, we hear a lot about speeds and feeds.
We see a lot of numbers.
We see a lot of bragging from vendors.
But really nothing matters until this stuff is out there in the field and being used, right?
Yeah, I'm really excited about the guests we've got lined up today.
You know, in a lot of these AI conversations, what we sort of hear is the industry talking to itself, right? And we hear various solution
providers and hardware and software vendors sort of hawking their wares and extolling the virtues
of their products. But where it gets really exciting for me is to be able to hear from folks
who are out in the field, who are using these things in the real world,
where the rubber meets the road, and producing really cool results.
And so in that spirit, I'm excited about the conversation today.
Yeah, totally.
And although they're not, I guess, strictly speaking, an end user,
one of the sites that I love,
one of the companies that has this incredible video
content that I enjoy is Storage Review. I love running into you guys at the shows. Jordan,
you guys are always playing with the coolest toys. Welcome to the show.
Yeah, thanks for having me on. I'm Jordan. I'm from Storage Review. More officially,
my title is the Advanced Workload Specialist. So anything kind of advanced, whether it's AI or HPC, it's my job to take all the cool toys that we have in our lab, put them all together and make them do fun stuff.
And that's really what we're talking about here.
So this season of Utilizing Tech is focused on AI data infrastructure.
You're the guy that's out there taking these things, trying them out, trying to see what these systems will do in terms of AI data infrastructure.
So I guess give us a little bit of background in terms of what level of coolness have you put together with Solidigm SSDs?
Yeah, so Solidigm has been a longtime friend of Storage Review. We got together with them not too long ago and had a fun idea for some, as Ace alluded to, where rubber meets the road, almost literally in this scenario, plan to take some of these high capacity, pretty quick SSDs and stick them out in the field and do some real field AI work with them.
We came up with a pretty neat concept around astrophotography,
which is one of my personal favorite hobbies.
And we filled up a Dell XR7620 with four of the Solidigm QLC SSDs
and took it out to the frozen wilderness
and shot some pretty incredible space pictures with it.
And we're able to take that data and bring it back to our lab and do some AI work with them.
So, Jordan, what's the connection there? Where does AI play into this?
You know, I've been out there and I've done a little stargazing.
I've got a little telescope that the kids and I, you know, look through at home.
Obviously, the images you're producing are much higher quality. Is it the AI that enables that level of quality, or how are
you using AI models to kind of develop your output? Yep. So there's a couple different ways that we're
taking advantage of it. The first aspect, we're using the high-capacity SSDs to capture all of the data.
Our images are about 62 megapixels each raw, and that's before you add color and chroma data to
them. From there, we were able to use that and fine-tune, manually go through the data and comb
through it and get a really good subset of images, combine that with some actual Hubble legacy data as well,
and run them through a novel convolutional neural network
to create a more advanced, modern, CNN-based
denoise and sharpening algorithm
that outperforms the traditional kind of
Richardson-Lucy algorithms that you see out there.
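To make that concrete for readers, here is a minimal sketch of what a residual denoising CNN looks like in PyTorch. This is a generic DnCNN-style network, not the actual model Jordan's team trained (their architecture, layer count, and training details aren't spelled out in this conversation), and the channel count and image size are placeholders.

    import torch
    import torch.nn as nn

    class DenoiseCNN(nn.Module):
        # Generic DnCNN-style residual denoiser; depth and feature width are illustrative.
        def __init__(self, channels=1, features=64, depth=17):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1),
                           nn.BatchNorm2d(features),
                           nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(features, channels, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, x):
            # The network predicts the noise residual; subtracting it denoises and sharpens the frame.
            return x - self.body(x)

    model = DenoiseCNN()
    noisy = torch.randn(1, 1, 256, 256)  # stand-in for a monochrome sub-frame
    restored = model(noisy)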
Because of the amount of data and the amount of time that we were able to
spend out in the field, it helped us really kind of drive that model forward,
which we can then bring back out into the field and do edge inferencing on the
images in real time to see if we're having some other issue, maybe we're
slightly out of focus, or there's a little too much dew on the lens, or
there's a vibration happening from a car driving by.
And we need to know about that so we can make an adjustment. We shrink that feedback loop down by using a neural network to help make decisions in real time out in the field.
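As a simpler stand-in for the kind of per-frame check that neural network performs, here is a rough sketch of a classical focus heuristic. The threshold is invented and would need calibration against known-good frames for a given rig; the pipeline described above uses a trained network rather than this.

    import numpy as np

    def focus_metric(frame: np.ndarray) -> float:
        # Variance of a simple Laplacian; low values suggest a soft, out-of-focus frame.
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        return float(lap.var())

    def check_frame(frame: np.ndarray, threshold: float = 50.0) -> bool:
        # Threshold is illustrative; calibrate it per setup before trusting the flag.
        ok = focus_metric(frame) >= threshold
        if not ok:
            print("Frame looks soft: refocus or check for dew/vibration before the next exposure")
        return ok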
To add a little bit of additional color to that, traditionally, you know, you would take a photo and you could spend hours processing through it, working through the sharpening and the stacking, only to find
out that the entire weekend that you went out on your camping trip and took all of your photos,
it just didn't work out for some reason because you were slightly out of focus. We're helping to
drive methods that can bring, like I said, that actionable real-time feedback into the scenario. And this has a lot of applications beyond just the astrophotography
side. Any sort of imaging is actually compatible with the network that we put together. It was just
specifically that we trained it on the space photos because that was something that was near
and dear to my heart and something that I'm fairly decent at, or at least I like to think so, in order to prove out the idea and flesh it out. It sounds to me like
you've basically created sort of an edge computing workload or test case there because, you know,
you've got poor connectivity, bad environmentals maybe, and yet some high throughput AI applications that are running there.
So, I mean, if you sub out telescope and stars, you know, you could easily think that this similar situation could apply in energy, medical, military, all sorts of different areas. And yet you might not be able to talk about those
use cases because they're sensitive. Whereas what you're doing is something that you can try out and
experiment with and play with in a way that is wide open, that you really can have that conversation,
right? Right. The stars aren't going anywhere. And if we mess up the AI, it's not going to turn off,
you know, Andromeda on us. But yeah, the couple of the big things that come to mind, like you said, the other industries,
self-driving cars, providing more real-time image clarification to the cameras that are needed on
a self-driving car, for instance, or exploration. Oceanic exploration is another one where you're
capturing a lot of data and you need to move quickly
and having the ability to go back and clean it up in a meaningful way is really important
because of the value and the amount of costs associated with that data acquisition.
Jordan, a lot of what we're exploring in this season of the podcast is around data infrastructure
and what are the kind of requirements needed to support use cases like the one you're
talking about.
So we talked a little bit about what's happening at the edge and the inferencing that's going
on there.
Can you speak a little bit to the upfront work of developing the model and sort of what
your infrastructure requirements look like there, maybe particularly around the storage and kind of
take us through that process of developing the model that you ended up using?
Yeah, so without getting too nitty gritty into the math of it, there was a lot of back-end work
that happened in our lab initially. We were doing some training on some Supermicro blade systems.
We were doing some distributed training
that involved putting together some really fast SSDs
to be able to move the data in and out of the GPUs
as fast as possible.
Our dataset wasn't exceptionally large,
but we ended up requiring large amounts of VRAM
due to the nature of the image processing
that was actually happening.
We were, I think, at over 2000 layers for the neural network at one point,
which was able to eat up enough VRAM to span four H100s.
So getting that data in and out of the GPUs,
specifically the model checkpointing,
and being able to take all of those different checkpoints and save them out
so we can go back after the fact and assess the performance,
was one of the keys that we needed. And we had selected some of the faster SSDs that were maybe a little smaller
on the capacity side because we were only working with 320 gigabytes worth of VRAM that we had to
then condense out and flush out for the checkpoints. But the key there was the speed
in getting data into the GPUs, because we only had them for
a short amount of time, and getting the data back out of the GPUs and getting those checkpoints saved.
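To picture the write pattern he is describing, here is a minimal PyTorch checkpointing sketch. The mount path and interval are placeholders, not StorageReview's actual setup; the point is that each checkpoint is a large burst of sequential writes the SSDs have to absorb quickly so the GPUs aren't left waiting.

    import torch

    CHECKPOINT_DIR = "/mnt/nvme/checkpoints"  # assumed fast local NVMe mount

    def save_checkpoint(model, optimizer, step):
        # Each checkpoint dumps full model and optimizer state in one large sequential write.
        state = {
            "step": step,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        }
        torch.save(state, f"{CHECKPOINT_DIR}/ckpt_{step:07d}.pt")

    # Sketch of use inside a training loop:
    # if step % 500 == 0:
    #     save_checkpoint(model, optimizer, step)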
So what's next for you in the world of astrophotography? Is this a project that
continues to live on here? I've seen the images on the StorageReview site. They're gorgeous. And I
highly encourage our listeners to go
check those out, but I'm curious about, uh, are there, are there bigger challenges you're looking
to tackle, uh, in the astrophotography realm specifically, as well as, uh, are there ways
for others with similar interests to leverage some of the work that you've done recently in the space?
Yeah. So the next, the next goal immediately on the horizon
is to finish up the paper
with my partner in crime on this model
and get that pushed across the line
and then actually open source this model out
for folks to be able to use
in some of the existing software
that's out there right now
and really put it in the hands of the developers
who can help make those implementations
and some of the open source capture software happen and help give the real-time feedback to the users.
The whole idea here is to help improve the hobby as a whole and not get too hyper-focused on one specific thing,
but just help everybody out with the network.
Because at the end of the day, it is actually relatively small to run.
And being able to take all this information and all this time and data that we spent capturing
it and condense it into a really small, really efficient model, I think that's one of the
more rewarding aspects of doing this project.
So what did you find in terms of, you know, what were some of the nuts and bolts, nitty-gritty, kind of the fun things that you found when you were experimenting with this setup?
Yeah, so I'll touch back on the edge server that we used for this. Dell sent over an XR7620,
which is one of their ruggedized platforms, and we had it in a big ruggedized road case that was impact-resistant, weatherproof, and shock-resistant.
And when we were looking at kitting it out and kind of building everything up, obviously a GPU was on the list.
We ended up with an NVIDIA L4 in there.
It's a Lovelace card, 70 watts or 75 watts, I believe.
But it's got 24 gigabytes of VRAM to be able to work with.
So we were able to do some kind of real-time work at the edge as far as fine-tuning goes.
But when we started looking at the storage solution, what's generally provided, we found, wasn't quick enough for what we were trying to do.
Traditionally, when you're going out to capture this stuff, you're going to be a guy with a laptop and a camera and a USB cable connected to your scope, and you don't need all this overkill stuff. But when we start getting into needing to collect a lot of data, needing to look at it in
real time, and needing to make decisions on whether we're doing what we're doing properly,
so we are capturing the best data to provide the best training data,
all those factors combined led us to needing a lot higher throughput from both the CPU
perspective as well as the storage perspective. So we did go with four of the U.2 NVMe SSDs.
We had two flavors of the P5336: the 60-terabyte guys and the 7.68-terabyte drives.
Both of which performed great. The main reason why we ended up going with these, especially for
the high density, was actually the old sneakernet story: getting the data back to the data center
for larger processing later. It was more efficient for me to fill up one of these 60
terabyte drives and drop it in a FedEx mailer and have it two-day back to the lab where I could then
interact with it on bigger, big iron, so to speak, as far as the H100s and the bigger AI machines go
than to try and upload it over limited bandwidth. Because we actually ended up doing this out in
the middle of nowhere on the banks of the Great Lakes.
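The sneakernet trade-off is easy to sanity-check with back-of-the-envelope math. Assuming a 100 Mbps uplink (the conversation doesn't say what connectivity was actually available on site), uploading a full drive would take weeks:

    # Time to upload ~60 TB over a modest uplink vs. a two-day FedEx mailer.
    drive_bytes = 60e12       # roughly one full 60 TB drive
    uplink_bps = 100e6        # assumed 100 Mbps uplink, purely illustrative
    seconds = drive_bytes * 8 / uplink_bps
    print(f"{seconds / 86400:.1f} days")  # about 55.6 days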
You're someone who, from my point of view, very much has their finger on the pulse of what's going on in the AI world by virtue of your work and your personal kind of pursuits as well.
I'm curious about this general trend of more and more AI work, you know, moving closer to the data, moving closer to the edge.
You mentioned in your case, you know, there's a benefit of having a better sense of how you're doing while you're out in the field, right?
And sort of what your outputs are going to look like so you don't find out down the road that you've ended up with nothing usable. Can you talk about maybe other ways you're seeing
that trend manifest across the AI landscape today, you know, in the course of your work
and your interactions with lots of folks across the industry? How prominent is this trend and
how quickly are you seeing AI work moving from kind of the core data center where a lot of it has lived historically to where more intensive work is happening at the edge?
Yeah, so I mean, your question almost is: I have a data center full of GPUs, and I think I
have an AI model, and now I need to deploy it to 1,500 retail locations. Or actually, just the
other day, this was kind of a fun one, I went through a McDonald's drive-thru that had
AI ordering for the first time. That was kind of a wild experience actually. But the name of the game
for everybody is we've got all this data. We've got it right now, very centralized. And a lot of
folks are taking the kind of the similar steps that we did with this project, which was, okay,
let's keep that data out at the edge. Let's inference on it out there and then feed back just the key metrics
into either the data center
or into the management interfaces,
real-time dashboards, that sort of stuff,
and just getting it down
to that real granular level really quick.
Traditionally, with the big data stuff,
working in large databases,
a lot of those things have to deal with ETL jobs.
They take a long time.
Whereas if you can process that stuff
and save that important data out at the edge,
even though you might not be able to send it
all the way back to the data center right away
at the time of capture,
having that real-time performance at the edge
in order to be able to make a decision on it as a business is where things are going.
And I think it's what a lot of people already see is this is what we need to do with AI. And this
is how AI can actually help the business. We don't have to move truckloads of data. We can move
metadata or a few values or a few model outputs for analysis and then worry about moving the heavy stuff later.
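As one hedged sketch of what "send back just the key metrics" can look like in code, here is a tiny reporter that posts a few hundred bytes of JSON per frame instead of the raw image; the endpoint URL and field names are invented for illustration.

    import json
    import urllib.request

    INGEST_URL = "https://example.internal/ingest"  # hypothetical central endpoint

    def report(frame_id: str, results: dict) -> None:
        # Ship a small metadata payload; the raw frame stays on local high-capacity storage.
        payload = json.dumps({"frame_id": frame_id, "results": results}).encode()
        req = urllib.request.Request(INGEST_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)

    # e.g. report("frame_000123", {"label": "ok", "score": 0.97})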
Yeah, that's something I really want to zoom in on there, Jordan,
because you're completely in line with what we heard in previous seasons of Utilizing Tech, when we focused on AI.
We talked about industrial IoT and industrial vision applications.
We talked about media production and all of these
industrial IoT applications. That's exactly what they're looking for: a way to
move not just data collection but processing to the edge,
to use AI as a way to collect more data, process more data, and get more value at the edge, and then ship back the anomalies, the interesting elements.
We're seeing that more and more, I think, in a lot of these edge use cases.
I mean, think about pretty much any kind of IoT vision environment. They're collecting all sorts of camera and sensor data all the time.
The last thing they want to do is just be writing all that data and then taking it back home.
Instead, what they want to do is they want to have intelligent processing of that data and then take
the good bits back home. And similarly, a lot of the things that you're describing to me,
it sounds like a lot of the same constraints that people are facing at the edge.
You know, you're talking about ruggedized servers.
You're talking about power.
I think that's another aspect here too.
If you're going to have a fairly high-powered GPU,
you need to think about cooling.
You need to think about power requirements.
You need to think about adverse conditions that these things may be installed in.
Back of a pickup truck, pretty adverse, but actually not that wild when it comes to things like energy exploration or military applications or things like that.
That's actually a pretty nice environment compared to what the military faces.
And also, you know, if you've got something like SSDs that have
large capacity, high performance, and a low power footprint, that really helps. I mean, I know that
your stuff, for example, is battery powered, right? And so, you know, having
the ability to reduce that power envelope gives you hours more processing time, right?
Yeah, I mean, the difference of 25 or 50 watts that you would
have to spread out across spinning rust versus consolidating it into a single large capacity SSD
at the edge, certainly that can mean the difference of hours depending on your system,
especially if you're loading up the GPU. You brought up something quite interesting there, though. We did a demo with Solidigm back at FMS in 2023 where we did just that: we had a camera up, we were capturing all 1080p, 60 fps of the camera and saving it down to large-capacity hard drives that could then be yoinked
out and shipped back to your data center, but we were doing real-time inferencing on it. It's a lot
cheaper in more than one way, not just financially, to send a single line of text back 60 times a
second than a whole video frame, or a whole 4K frame even. When we look at that, that was kind of our proof of concept
of this idea that everyone's been talking about, that everyone's been saying: this is what AI can
do for you, this is how you can use it. You can do that edge inferencing and take back the data.
But that's where the larger storage stuff comes in: you don't have to
throw away that raw data, because that's valuable.
At the end of the day, everybody's data,
they want to save every bit and byte of it
because who knows what the next evolution of AI
is going to look like.
Oh, you can just point it at a drive full of stuff
and it'll figure it all out for you maybe.
But without saving that,
if you were just throwing it away
because you didn't have enough storage
or your storage wasn't fast enough to keep up, right? That's where you kind of get the double-edged benefit of things
like these huge QLC SSDs at the edge. And you mentioned the rugged thing, I've got a little
fun anecdote. There's a video floating around on the internet somewhere of me actually
running these things in a blizzard, and I had sent it to my PR friend at
Solidigm. And I think the first question was, what's the temperature? And the response was,
I'm pretty sure we're not rated for that. But they were fine. They were rugged enough,
they could handle it. You know, that's not something where things with moving parts would
necessarily be able to handle it. But a storm ran through in the middle of our capture,
and the rugged hardware chugged right through it.
Jordan, I want to pivot a little bit here because one question that a lot of folks are
asking, we heard it at GTC earlier this year, we've heard it in various forums and conversations with our customers,
is how do you evaluate storage performance for AI, right? And I know this is an area where you've
done a lot of work, so I'd love to pick your brain here a little bit. You know, where I go,
you know, as a PC guy, is if I want to sort of measure CPU performance, I run Cinebench, right?
And I can make an apples-to-apples comparison between a couple of processors, or I can run PCMark on a system level.
And there's a lot of interest in understanding how that's done for storage for AI in the data center specifically.
So what's your view of kind of the state of things in that space now?
Are there emerging tools that are designed to serve that purpose?
How well do they work?
What are you learning there?
And what do you see as recommendations for folks with an interest in kind of understanding
that a little better? I thought we talked about no loaded questions.
So there's a lot to unfold there, right? So if we think through our phases of AI training,
right, you've got data ingests, data prep, the actual training that goes into it,
checkpointing, and then out to inferencing once you're done with your model. So when we look at
those five different phases, there's different needs for each of those phases. When you take
that into consideration, you don't want to have to go maybe necessarily buy out five different SKUs of an SSD
to fill up your data center, but you need to thoughtfully design all the way through the
platform that you need for your business, right? Every AI is not going to be the next Llama or ChatGPT or DALL-E, you know, everything's
going to be unique. And so what we do right now in our lab is we're taking a look at a lot of
different types of storage, QLC, TLC, throwing in cache layers, looking at utilizing the CPU
and the DRAM going GPU direct, and kind of
profiling these different workloads that we're seeing coming out, whether it's synthetic
benchmarks or open source projects, looking at how those are actually impacting the system,
how they're treating the disk, how they're working with the system memory with the CPU,
and then mimicking that using some open source tools
like GDSIO, for example, is really powerful.
If you know what kind of AI you're going to use,
because there's 15,000, or I'm sorry, 16, no, 17,000,
no, 18,000 now kinds of AI, it's always changing.
But if you have a general idea of what you're going to be doing,
you can actually go out
and profile and set up, you know, these tests. And that's what we're aiming to do with kind of a
script that we've been working on to look at storage from a total perspective and kind of
Gantt chart it out and say, okay, this type of device or this type of, you know,
this specific NVMe drive is really good if you're doing this type of AI.
But if you're doing this kind, you need to look at a storage appliance like this.
Or if you're doing, you know, another kind of AI, shoving your GPU server full of as much NVMe as you can get in there is the way to go.
And then let the CPU worry about offloading those checkpoints
over your network at a later date.
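Jordan mentions mimicking these phases with open source tools like gdsio and fio. As a hedged example of the idea, here is a small Python wrapper that drives fio with two illustrative profiles, a 4K random-read pattern for ingest/prep and a large sequential-write pattern for checkpoint flushes; the job parameters are placeholders, not StorageReview's actual scripts, and fio needs to be installed on the system.

    import subprocess

    def run_fio(name: str, rw: str, bs: str, target: str = "/mnt/nvme/fio.dat") -> None:
        # Rough stand-ins for AI pipeline phases; sizes, depths, and runtimes are illustrative.
        subprocess.run([
            "fio", f"--name={name}", f"--filename={target}",
            f"--rw={rw}", f"--bs={bs}", "--size=10G",
            "--iodepth=32", "--numjobs=4", "--direct=1",
            "--time_based", "--runtime=60", "--group_reporting",
        ], check=True)

    run_fio("prep_randread", "randread", "4k")   # data prep / ingest style reads
    run_fio("checkpoint_write", "write", "1m")   # checkpoint-flush style sequential writes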
When we see stuff like NVIDIA just came out with,
and I've got a video on this, their 800 gigabit networking,
we're starting to see that be less and less of a bottleneck,
but then the storage servers and the storage appliances
are going to have to start to catch up to that too.
We were doing a test recently.
I can saturate 200 gigabit Ethernet, or 200 gigabit InfiniBand rather,
very easily with a single A100 GPU.
And then we start talking about H100s, and now the Blackwell
stuff's on its way.
You're going to need bigger, faster, stronger stuff. And that's where Gen 5
gets really exciting, as well as Gen 6. I know Gen 5, right, when we look at the speed on there,
the name of the game at the end of the day is to keep the GPU working as fast as possible,
as well as nonstop as possible. Everybody knows that. Selecting your storage infrastructure and
what you're going to be doing around that is something that's getting more and more focus pretty much every day right now.
We need a way to say: if we're doing really read-heavy stuff because we're doing ETLs,
or we're using the NeMo framework that does real-time augmentation of the data, so we're not,
you know, overfitting our model or something like that, we need a lot of random 4K read performance.
And so there's layers to it, right? And that's kind of what we're aiming to do at
Storage Review, at least: provide, maybe not the textbook, but give out the playbook for folks
to be able to profile their storage, or make a smart decision on either their existing
infrastructure or some infrastructure that they might be looking at, and say, this is where this
fits and this is why it's important. You need the density above all for this reason, or you
need the Gen 5 speed above all for this reason. And providing that is,
I think that's kind of what everybody's looking for right now.
So it's, yeah, it's a loaded question.
I think we're onto it
as far as getting this GDSIO stuff up and going,
getting our own FIO scripts
because you can mimic a lot of these workloads in FIO.
There's multiple ways to skin this and to make it happen.
But I think we're on the right track. It's interesting you mentioned this, Jordan, because I've actually found that it is
actually, this equipment is getting so fast and so good that it's actually kind of difficult to
push it. I mean, how do you test a 60 terabyte SSD that can push, you know, gigabytes per second of throughput?
How do you test a processor that has, you know, over 100 cores?
How do you test, you know, terabytes of memory?
You don't find that that's a challenge?
You're bringing back memories to when the AMD 96-core Genoas came out.
And that was actually one of my first testing tasks that I had when I joined Storage Review was,
hey, go test these 96-core CPUs.
By the way, here's a terabyte and a half of DRAM and a bunch of 30-terabyte SSDs in it.
And I was like, okay.
I like the approach of looking at
each individual piece of the system and profiling it, right? So when we looked at our
CPUs, we decided to do something crazy with the CPUs where we run the traditional benchmarks that
could scale, that could handle that level of thread count and that level of core count, and then run those tests down both the product stack, current and historical,
to be able to show that scaling. When we started talking about SSDs, we got to look at,
you know, the total performance of them by absolutely hammering them with writes.
So we took one of our more traditional tests, right? So for these big SSDs and big CPUs,
we took y-cruncher, and then we needed to test SSDs with it, so there's a swap partition in there.
We just did 105 trillion digits of pi with that and set the world record. And then we just did 202
trillion digits of pi on the 60 terabyte drives and took that
world record.
But that was all basically in the name of, let's put these things up there.
And Solidigm was willing to send them to us and say, our SSDs can survive this level of
abuse.
We put absolute petabytes through these things when we were doing it.
And it was just that total system test. But then I can go out
when people approach me at shows and say, hey, is that QLC good enough for my workload? I can say,
well, if you're going to put 20 petabytes through it in a year, yeah, you'll be fine. It'll last
you maybe 10 years. That's a pretty good thing to be able to say. We did it. Here's the raw data.
Apply it for yourself.
See how it works out.
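That endurance answer can be turned into a quick back-of-the-envelope calculation from the numbers in the conversation; the drive's actual rated endurance isn't stated here, so treat the datasheet figure as the real limit.

    # Drive writes per day implied by 20 PB of writes per year on a ~60 TB-class drive.
    writes_per_year_tb = 20_000    # 20 PB expressed in TB
    capacity_tb = 61.44            # assumed usable capacity of the 60 TB-class drive
    dwpd = writes_per_year_tb / 365 / capacity_tb
    total_pb_10yr = writes_per_year_tb * 10 / 1000
    print(f"~{dwpd:.2f} drive writes per day, ~{total_pb_10yr:.0f} PB written over 10 years")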
Yeah, that's so cool.
And it's a crazy ability that you guys have over there at Storage Review.
That's one reason that I'm a reader and that I enjoy looking at what you're doing.
So thank you so much for joining us on here today.
Ace, before we go, how would you summarize this? Real-world testing, real-world benchmarking,
proving out AI data infrastructure with Storage Review?
Well, it sounds like everything else in AI. It's moving rapidly, right? And I think what we're
learning here is that the possibilities are opening up in terms of what you can do at the edge, which is really exciting, enabling a lot of innovations and probably touching our daily lives
in more and more ways going forward.
And so I'm thrilled to hear
kind of some of the developments there
and firsthand from you, Jordan,
about your own findings and some of your projects,
very, very exciting stuff.
Glad you also found a way to plug
the Pi project in there as well. I think
that's super cool. That's become one of my kind of go-to party facts. Hey, did you know the 105
trillionth digit of Pi is a six? And maybe that's why I don't get invited to too many parties. But
anyway, I find it super interesting. And I appreciate your thoughts on the benchmarking
piece as well. That's something that is not as easy as it sounds, right?
There's so many variables in there.
It depends on the use case.
It depends on the architecture.
But I think it's of interest to a lot of folks in this space
as we look forward that that continues to mature
and that the industry finds a way to measure
and communicate
storage performance within an AI infrastructure in a way that enables folks to make easy,
you know, A to B comparisons. And so we'll certainly look forward to your continued work
on that front as well. Yeah, it's been great to be able to go through and test all of these
different kind of permutations and the stuff that everybody's talking about when you go to the trade shows and
you see the keynotes and you see all the booth demos. It's really fun to see those and then
actually to be able to take them out into the real world. I feel like I'm one of the luckiest
people in the world. I have the best job ever. I get to take this fun stuff, take it out and actually,
like you opened with, make the rubber meet the road, put it to the test.
And it's been absolutely great.
Partnering with Solidigm has been great; they're longtime friends and great friends to work with in the industry.
We can throw some links in the description to our AI story as far as the CNN goes.
And we can throw in our pi records as well for you guys to take a look at. StorageReview.com for all the latest and greatest
in data center hardware, tech news, and reviews.
Thanks a lot, Jordan.
Ace, before we go,
where can we continue this conversation with you
apart from listening to Utilizing Tech every Monday?
We'll be very busy this summer
at events all over the place as well.
So keep an eye out for us at all the
major OEM conferences and industry events going forward. You can continue to track the latest on
our products and work that we're doing through partners at solidigm.com/AI. And as for me,
you'll be seeing me at Tech Field Day events this month.
And after a bit of a summer break here, we're going to be doing an awful lot of stuff.
Of course, you can also catch me here at Utilizing Tech every Monday, the Tech Field Day podcast every Tuesday.
And of course, our rundown of the week's news every Wednesday at Gestalt IT. Thank you for listening to this episode of Utilizing AI Data Infrastructure, part of the Utilizing Tech podcast series.
You can find this podcast in your favorite application.
Just look for Utilizing Tech as well as on YouTube.
If you enjoyed this discussion, please do give us a rating or a review.
We'd love to hear from you. The podcast is brought to you by Tech Field Day, home of IT experts from across the enterprise,
now part of Futurum Group,
as well as our friends at Solidigm.
For show notes and more episodes,
head over to our dedicated website,
utilizingtech.com,
or find us on X/Twitter and Mastodon at Utilizing Tech.
Thanks for listening, and we will catch you next week.