Podcast Archive - StorageReview.com - Podcast #142: What’s Next For Storage in the AI Era?
Episode Date: November 12, 2025
Brian and crew recently landed in Montreal to tour the Hypertec lab facility and witness some interesting immersion-cooling trends with the leader in the field. Following the lab tour and product demos, Brian sits down with Scott Shadley, Director of Leadership Narrative at Solidigm. Scott and Brian have a lively, candid discussion as they trace...
Transcript
Hey, Brian Beeler here with the StorageReview podcast, and we're live in person in Montreal with Scott Shadley of Solidigm.
Yes, it's awesome to be here with you and it's always fun to get out on the road with the storage review team.
It's really interesting. My first time to Montreal ever. Your second time in 20 something years.
And we've been in this building for a little while. We just got a whole tour of the Hypertec facility here, and it's pretty wild.
It's very unassuming from the street.
Yeah, I did not realize it extended that far out into the beyond, if you will.
Well, and we've been so focused with you guys, too, on immersion cooling with Hypertec,
that you've got your E1S and your E1L drives in a lot of their systems in a variety of use cases,
but the world is much bigger in their brand than I ever knew.
Yeah, you start talking to them, and they've done some acquisitions and rebranding and things like that,
and to talk with them about still having air-cooled systems as part of their shipments,
plus the immersion cooling, they even do some of the direct-to-chip,
direct liquid cooling, whatever term you want to use for it.
So it's interesting to see just the scale of something like this, because generally we see
it from different views or different angles, but being here is actually a very fun little
walk around.
It is a fun walk around, and it's always fun for guys like us that love hardware to see a whole
bunch of hardware.
A whole bunch of hardware.
Because many times, even when that happens, we don't get to see all the gritty stuff
behind the scenes, so that's pretty cool.
Seeing them crank the immersion thing out and it's like so quiet and we go over to their production lab and it's like, oh, there's the fans.
So that's one of the great things about immersion.
I know you were just at OCP.
I didn't make it out this year, but I know immersion and liquid cooling generally was a big talk track there.
You go to all these events.
What's your read on OCP and what's going on there?
It kind of followed suit pretty well after a couple of the other events of the year like GTC and everything else where a lot of people are looking at the show going,
I need to become a plumber as much as a system architect or a solutions architect, what not,
because we're starting to see just there's more fun with red and blue piping going through all the different booths and stuff like that.
And at the end of the day, going that direction, actually, for technology that I work on, especially, it's more valuable to us because it gives us the consistency that actually makes our products work better.
Thermal shock is a huge problem.
And that's a really interesting thing.
So, you know, we're talking about OCP, we've got Supercompute coming up in a few weeks.
But you guys started this message with the cold plate bit at GTC back in, what was that, March.
And so this has been a thematic, I guess, course for Solidigm, to really be on the leading edge of what the next-gen technologies are, the cooling technologies, to get the most out of your products.
Yeah, it's something that comes back to when I joined the company and I had the opportunity to pick where I landed:
the innovation and the desire to work with partners and customers for the right solution,
not just the next solution.
I worked so many years in this industry, right?
I'm topping out on almost 30 now.
It's kind of crazy.
But it was always this race to first of something.
And yeah, in this particular case, it's another first.
But it's a first with partners.
It's a first where it's something that we co-designed.
I mean, the direct-to-chip liquid cooling solution we put together has got Solidigm plus
NVIDIA IP in it. And so yeah, we have our own special secret sauce that makes our drive
the perfect solution today. Others will come along. But then the beautiful part about it is
anything we do, we also donate. So the IP of the cold plate architecture is not our IP that
we're holding on to. We shared that. So it's going to show up in all the reference
architecture, things like that. And then the drive stuff, what we can share and helps give our
customers what they like, which is that whole multi-sourcing capability, is we donated some of the
changes to the drive infrastructure to SNIA for the form factor.
Right.
So, and as a board member of SNIA, I kind of like that part, right?
So it just helps with how Solidigm has always been on that.
Yes, we want to innovate.
We always want to be looking forward, and we're willing to take some leaps.
But we're doing it not because we're the first to the next PCIe node or the first to the next
interface, but because we're first to what customers actually want from us.
And that's a big thing for me.
You said 30 years.
I mean, you've been around and you've seen, I guess, all of it really at this point in time.
And you talk about jumps like in PCIe versions, and that's going to be something that the whole industry is sort of racing to support as that new stuff comes out.
Not necessarily because there's a ton of slots, but there's a lot of halo wins, right, to have the next leap in performance out of your drives or whatever.
But go back a little bit.
Let's take a walk down memory lane of the – I mean, you were there for the origins of Enterprise Flash.
Yes.
And I remember very well the whole thing, the very first SATA drives, the Intel X25-Ms.
That was a thing.
And then the SAS drives that were out there.
The Fibre Channel drives.
Fibre Channel drives.
I mean, you were there for all of that.
Based on what you saw back then, what surprises you the most about where we are today?
When it comes to the new architectures like that, if you're building something to replace an existing infrastructure, the crossover points are always the interesting thing to look at.
So you build a Fibre Channel SSD to replace a Fibre Channel hard drive;
when does the Fibre Channel hard drive kind of lose its dominance?
Or when does the SAS drive, if it does, lose its dominance?
It's funny, though, to even think about something like a Fibre Channel drive.
Yeah, 73 gigabytes, $37,000 in the old days, yes.
That puts it in perspective.
But then, when we started, what really did it for me was when we actually broke the box.
So for years, for decades, from 2007, '08, all the way up until, you know, NVMe finally hit the market, we were just a me-too product. And
then all of a sudden, ooh, we can do something that's just flash-based. And along comes the fight
between NVMe and SOP, SCSI over PCIe, that used to exist. But then we did that. And so we got
this cool new interface. And it's like, oh, we can break the box too. So enter the Enterprise and Data Center Standard Form Factor we all know and love as EDSFF now. Those types of paradigm changes
are the fun parts to watch in the whole ecosystem as it grows.
And being able to have been part of almost every one of those little steps
with whatever organization I was working for in the industry
has been a wonderful thing to see.
And now, like here at Hypertec, the immersion systems we're looking at,
it's all EDSFF.
It's E1L, it's E1S.
There's no, I mean, can you put a spinning drive in an immersion?
I mean, you can.
It's a sealed drive.
Yeah, so actually it's funny you bring that up, because at OCP, maybe three years ago,
somebody had a little demo of a bunch of hard drives in the, I can't remember the name of the OCP box, the storage box.
It didn't do a lot for the hard drives in terms of anything.
But the one thing that we do see, and you talked about it already a little bit, is that in the immersion or in any of the controlled cooling environments, you get a lot of stability to the platform.
And so the drives or the CPU or the GPU or the RAM or all the other little things soldered on the motherboard have less highs and lows, less thermal expansion, and the endurance and reliability of those parts seems to be going through the roof.
Yeah, if I delve way back in the wayback machine to device physics levels, right?
We have these temp ranges on all of our products, zero to 60, zero to 55, 25 to whatever, and we do that because we realize that doing one thing in one temp and then trying to repeat it at the next temp,
those fluctuations are what really caused the problems.
Like, flash could have infinite endurance
if I'm not constantly going hot, cold, hot cold, hot cold.
Because if I program hot and try to read cold,
those electrons like to move around
or shake differently right in their electron spin.
And so by keeping it at one flat, constant temperature level,
you can extend the life of almost anything.
That way, just think of cars, for crying out loud.
If my car never went through Montreal weather,
for example, since we're in Montreal,
versus SoCal weather.
SoCal cars last a little longer than Montreal cars
because they don't get that kind of temperature.
Yeah, but they're softer.
They don't have a toughness too.
Yeah, we won't go there.
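(As an aside on the cross-temperature point above: one way to see how tight a drive's thermal window really is would be to poll its composite temperature over time. A minimal sketch in Python, assuming nvme-cli is installed and /dev/nvme0 is the drive of interest; the JSON field name and Kelvin units reflect common nvme-cli behavior but may vary by version.)

    import json, subprocess, time

    DEVICE = "/dev/nvme0"  # hypothetical device path; adjust for your system

    def composite_temp_c(device: str) -> float:
        # nvme-cli typically reports the composite temperature in Kelvin
        # in its JSON smart-log output under the "temperature" key.
        out = subprocess.run(
            ["nvme", "smart-log", device, "--output-format=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["temperature"] - 273.15

    # Poll a few samples; a drive behind a cold plate or in an immersion tank
    # should show a much narrower min/max spread than an air-cooled one.
    samples = []
    for _ in range(10):
        samples.append(composite_temp_c(DEVICE))
        time.sleep(30)
    print(f"min={min(samples):.1f} C  max={max(samples):.1f} C  "
          f"spread={max(samples) - min(samples):.1f} C")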
So the temperature control is one thing,
and that's really interesting.
And you talked about form factors.
It's a great opportunity,
but there's also a little bit of confusion, too,
in the industry with all of these different things.
And you talked about all the EDSFF things,
but now we're looking at E2 and, like,
there's so much going on.
You lived through SATA, Fibre Channel, SAS, the very first PCIe Gen 3 SSDs, the add-in cards.
I mean, that was one of the ways to consume NVMe or PCIe early on, in the slot, the riser slot.
But now all these form factors provide a great opportunity for system designers, but also a challenge to kind of figure that out.
It's a challenge for you guys to produce these things.
Yeah. At the end of the day, I mean, a board, put some chips on it, wrap it in a case, is one aspect of it, but then SKU management, all the other things that everybody else worries about. The challenge has been that when EDSFF, the E1S that we know and love, that's in a lot of these platforms we've been looking at, was designed, it was designed as: I have an existing slot in a hard drive-based platform that I want to replace with flash in high density. That was the original thought process of the guy that I call the godfather of EDSFF. And then you let all these other
vendors get into it, and it's like, well, I'd rather have a little more flexibility.
So let's go, let's do E3.
But now, oh, we can do E3, let's do short, long, double fat, whatever.
We can make it a universal drive bay.
It's like, yeah.
So there are challenges that actually come with it.
But I personally love the fact that all of those options, E2 even coming out now,
it's all about not looking at it from a what was the box that has existed for 50 plus years.
This is the first time where the industry has really thought about it this way, or at least, I guess it's been thought about and talked about,
but it had the wherewithal to go make a change.
Because you're right.
I mean, the very first SSDs were all two-and-a-half-inch form factor because it felt
similar to the hard drives.
And I think, too, that's why E3S is kind of working because it feels very similar to...
And it allows a density factor you just can't get with anything else, but still fits
in that traditional, what everybody likes in this 2U box form factor.
I will say that we have had some challenges.
And for you, when you ship the blister packs, it doesn't
matter because the drive's well secured, but having that connector external to the device,
oh gosh. Well, that's actually very interesting. The whole thing with the cold plate design
that we did with NVIDIA was focused around how do you insert the drive perfectly straight
but still maintain perfect contact with the cold plate because of the edge connector, because
the edge connector has to go in straight and has to meet that orthogonal connector just right. Otherwise,
it could snap or break. But at the same time, having the direct
gold lead fingers on that board like that allows for things like PCIe Gen 5,
and even, especially, Gen 6, because with the connector and all of the transformation there, you end
up with signal integrity issues trying to keep it in a U.2-type edge.
So we had to go to something exposed. Maybe that's not the best play; maybe an inserted
version of it or something down the road may come out.
But yeah, a lot of the genesis was just making sure that you could address the
up-and-coming, forward-looking form factor requirements as well as speed
and interface requirements.
So in your view, you talked about signal integrity, and I think that's one of the
underplayed challenges around storage these days.
Consumers of storage in the enterprise just want to buy stuff that works and
they don't really care about this, but signal integrity is a big deal.
Why is it, in your view, that going from three to four maybe wasn't quite so challenging
as four to five, and then eventually to PCIe Gen 6?
Well, if you want to get into the trace windows and the PCIe eye, right, that become so much more
challenging, there's just the introduction of noise from the rest of the system. We're talking
about, like, for example, when we get into PCIe Gen 6, the optics that are going to be used
to connect up to us inside servers have to be properly sealed, because even air can influence
the ability for that signal to reach us and get back without some kind of inflection, let alone
dunking it in a tank of fluid, which adds a whole other viscosity problem for that.
It's kind of like, as a more real-world reference, cell phone networks:
3G to 4G, we had more towers go up.
5G, we're literally having to go to street-corner towers: better, faster, weaker total signal.
And that's why that's such an important thing.
It's the same kind of physics that are involved with all the PCIe in a box.
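(For reference on why each jump gets harder: every PCIe generation roughly doubles the per-lane signaling rate, which halves the unit interval and shrinks the timing and voltage margins the channel has to preserve. A small illustrative sketch with approximate published figures:)

    # Approximate published per-lane raw rates by PCIe generation.
    # Usable throughput is roughly raw rate / 8, minus protocol overhead.
    GENERATIONS = {
        3: (8.0, "NRZ, 128b/130b"),
        4: (16.0, "NRZ, 128b/130b"),
        5: (32.0, "NRZ, 128b/130b"),
        6: (64.0, "PAM4, FLIT mode"),
    }

    for gen, (gts, signaling) in GENERATIONS.items():
        approx_gbytes = gts / 8  # ~1 GB/s per 8 GT/s lane, before overhead
        print(f"PCIe Gen {gen}: {gts:g} GT/s per lane "
              f"(~{approx_gbytes:.0f} GB/s raw), {signaling}")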
Well, we get so worried about performance out of SSDs, but I think what Solidigm's done
the last couple of years, if anything, is to show us that there's a lot of other ways to
skin the storage cat, if you will, and density is a big part of that too.
And I know we were talking before about shifting the bottleneck or breaking the box, but
we're kind of getting to the point, even with the current technology, where you can only get
so fast in the box that eventually getting out of the box is the challenge.
And if you look at storage arrays versus just, like, software-defined compute servers,
I mean, there's so many companies putting that on top of, like, a PowerEdge box or whatever
to do shared storage.
Do you ever feel like we're fast enough?
I love that, and I didn't even prompt you for that one, so thank you so much for that.
But that is exactly one of the things about this whole race to being the first to fastest.
When SSDs first hit the market, it was a one to many.
It was a cache drive.
It was, I need performance to supplement all the hard drives I'm stuck dealing with.
And as we continue to supplement more and more of the hard drives in the system, that cache performance requirement reduces.
So there's a certain point where the number of lanes of PCIe that storage has will max out, and it's no longer the bottleneck in a box.
So take a current GB200 that has four PCIe Gen 5 products in it, or eight, four per GPU, you know, or two per GPU, but eight total in the box.
I can maximize those.
So yes, I need something fast.
As soon as I jump to a 2U-24 or any other type of storage cluster that's not sitting right
next to that guy, Gen 4 is more than fast enough.
I mean, and even capacity points, there are certain vendors, certain customers, that really
need the dense drives, but a lot of the enterprise is still sitting way, way back at 4, 8, 16 while
we're overselling the living daylights out of 122, so we have to do balancing acts there
too.
But I'm less concerned about the speed of the individual drive versus just how
to maximize the system architecture
to get the throughput you're looking for
and the capacity you need.
Well, that's the duality of it, right?
So you talk about needing the performance
in a couple cases where you need that full Gen 5
for those GPU servers, but then, yeah,
I mean, we're also seeing in the work we're doing
that you can put 40 E3S in a platform.
You're gonna get two lanes to them,
but then the question is, what can I get out of that box?
And now if we're looking at riser allocation,
and how many 400-gig NICs or 800-gig NICs you can get in the box,
And if I can get 160 gig out of the box, then you're right.
Gen 5 doesn't do as much for me.
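(A rough back-of-envelope version of the box-level math Brian is describing, with purely illustrative numbers rather than any specific product:)

    # Aggregate SSD bandwidth vs. network egress for a dense storage server.
    DRIVES = 40               # e.g. 40 E3.S bays
    LANES_PER_DRIVE = 2       # x2 allocation instead of the usual x4
    GEN5_GB_PER_LANE = 4.0    # ~4 GB/s usable per PCIe Gen 5 lane (approximate)

    NICS = 2
    NIC_GBIT = 400            # two 400GbE ports

    drive_bw = DRIVES * LANES_PER_DRIVE * GEN5_GB_PER_LANE  # GB/s inside the box
    net_bw = NICS * NIC_GBIT / 8                            # GB/s out of the box

    print(f"aggregate drive bandwidth ~{drive_bw:.0f} GB/s")
    print(f"network egress            ~{net_bw:.0f} GB/s")
    print("bottleneck:", "network" if net_bw < drive_bw else "drives")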
I mean, that just may be what the modern drive speaks and an E3S, and that's fine.
But what else do you have to do or where else can you differentiate?
Is it around reliability?
Is it around latency tightness?
What's your focus?
I would say that the sheer IOPS, millions-plus IOPS, has really kind of gone away,
with the exception of the one, like this whole big storage project that's been talked about
around NVIDIA, but they're changing the dynamic. They're making it more like a memory-read
level, like 512 byte, not the 4K and everything we're used to in storage. So set that little
guy aside, which has some merit and value. And the rest of the systems, we are going to all
SSD where we don't need to focus on necessarily how fast it is. Because again, we're saturating
in the bus. Consistency, wearout capabilities like the idea of an unlimited wear on a drive
at a certain capacity point with the lower cost, because cost always comes into these questions,
the media brings significant value to customers. And I have a joke. We build a, you know,
what we call the Solidigm Advantage deck to help our salespeople sell the company. And I did a 4.75-star
kind of graphic. And I said, our customers love us. We have a 90% rating from our customers.
We're not perfect, we're darn close, right?
And being able to go into a customer meeting and have them say, yeah, we rank all of our vendors by a quality score.
And it's one to six, Solidigm's a 1.05, and somebody else is down in the fours.
I'm happy all day long, not just because I'm at the top, but because the customer trusts that we know how to build a product for them.
Because at the end of the day, our relationship is part of it.
But yeah, you have to be able to deliver something that doesn't need to be fixed.
Storage has gotten this moniker for years and years and years, from the 50 years of a hard drive that always has a head crash or whatever.
I mean, I lost, you know, hundreds of personal photos on a hard drive in my house when that drive broke.
And it would cost me like 10 grand back then to recover it.
Now, these guys, they still think they have to be removable.
We've gotten to a point where they're almost to the reliability of a DIMM or other products that really don't need to come out of the box.
So I kind of see that as just the fact that we can show that kind of level
of quality to customers is important.
Well, I remember, and you'll know this well, that supporting drive blink and things like
that was such a big deal because the hard drives were the things that failed the most
in any data center.
So from a serviceability standpoint, the IT admin would ask for things like, do you support
drive blink on the caddy?
So I know which amber light to go and yank out.
But yeah, the metrics are changing.
But talk about capacity to it because we've spent a ton of time with your 5336 family
all the way up to 122 terabytes now.
What's that feeling like?
And is 122 just a halo product or are customers really putting these to work in a significant
way?
We've, or at least I've seen, and I think to some extent Solidigm is seeing, as soon as you
have something that is really unique, people gravitate towards it.
And it's not an instant on, right?
This whole market is never, as soon as I drop it, it's going to sell millions and millions
and millions.
But it just so happens that with this 30, 60, 120, the TCO models and people paying attention to it,
throw in lead times of other components that add to it, it's not a halo, right?
It is something that's going to continue to grow.
You've seen announcements from everybody, including we've made the soft launch of, you know, 245 next year.
We're going to see them continue to grow in size.
And part of it is simply because we keep building the NAND at a denser point, too.
so I can't build 2 terabyte anymore with real ease.
It's funny.
It gets harder to make a small drive than a big drive.
So as you move up, it's kind of our own natural progression up to lower costs.
We make everything bigger, therefore we get everything bigger.
And it's been a very nice nuance to see that this 122 mark for today has that perfect umbrella TCO for the hard drive replacement concept and things like that.
And there's even aspects of that, even from a hard drive replacement at a capacity
point, where they're still overperforming.
So we're even looking at ideas, how can we make a product that is literally a displacement
product at certain capacities?
That's sort of been the dream for a long time in the industry of hard drive replacement,
and you can do that on the performance cycle, because that was easy.
That was done in 2007, a long time ago.
But then it's the cost metrics, and how do you detune what you provide from a drive standpoint,
an SSD standpoint, so that, you know, you still have the reliability,
but customers aren't paying for the performance and stuff that they don't need?
Yeah, I would love to find out whoever it was that said there's a special halo of success for SSDs when they get to that 3x mark on the dollar per gig.
Yeah.
That was the early dream.
Wow, yeah.
And I think dollar per gig, you guys can do simple math, even if we're a penny a gig, a 122 is not cheap, right?
And it's not a penny a gig, but can you get things that make it more palatable to that?
Because there are so many sockets in this world, especially with the amount of data that we're
growing: number of street cameras, number of smart cars, all this data we're generating
is exabytes to yottabytes to whatever.
You've got to put them somewhere, but you just don't need them fast, not SSD fast.
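(Purely illustrative arithmetic on the "penny a gig" point; these are hypothetical price points, not actual pricing:)

    CAPACITY_GB = 122_880  # a ~122.88 TB class drive
    for dollars_per_gb in (0.01, 0.03, 0.05):
        print(f"${dollars_per_gb:.2f}/GB -> "
              f"${CAPACITY_GB * dollars_per_gb:,.0f} per drive")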
Well, and that's part of the challenge too, right?
Is that there was a time eight, nine years ago where there was kind of a movement to get
rid of data for legal compliance, for potentially getting yourself in trouble by keeping
things too long, that's gone.
That notion was here for just a flash, and now, with AI, what we used to call business
analytics but now call AI, these models where we can have a finely tuned model, point it at a large
data set, and inference on it, you want that data to be available in a relatively
performant way, but you don't need the best out of it in most cases.
Well, and as we move into this next phase of the AI model efforts, we get into the sovereignty
stuff.
We get into the, I want my private data private, but I want to leverage those massive public
models.
So now I'm building this massive data center as a cloud or a neocloud or whatever you want
to call them, AI factories, that are generating these models that everybody has access to.
You're snapshotting them into a local environment that's now going to continue to grow at a stellar
rate in your personal data center,
where you now need capacity to take and, you know, model-train on your own version.
Solidigm has Solidigm GPT.
We took a snapshot of GPT and we continue to update that snapshot, and we use our sovereign data
to make sure that we don't push something out that would be non-public-domain-wanted
types of architectures.
And that's, we're one of a very small group of people that have a tiny set of that.
But there's countries now that are going for the sovereignty and that is another capacity
play because I've got to copy massive data to a local environment so that I can play with it and
then maybe never see it again, or at least do a dedupe once in a while from that massive set.
So now the capacity growth is growing in two ways, but one's an inference growth based on a training
model growth.
It's funny, though, because two with these big drives, then the pushback is, well, whoa,
blast radius is scary to me now.
And we just did a couple pieces with Dell on their PERC hardware RAID controller.
So hardware RAID is back for NVMe.
And then we've done a ton of work with Graid.
They've got a great RAID algorithm that runs on a GPU, so it gets out of the CPU complex.
It's wildly efficient and extremely high performance.
What are you seeing out there to address that concern from a blast radius?
If I lose one of these drives, oh my gosh, 100 plus terabytes of data.
Well, that's the beauty of these things.
Yes, it's a lot of data.
But if I lose a hard drive in an environment behind a PERC card, the whole system slows down until
the hard drive can rebuild with its replication, and that will take significantly longer than
an SSD RAID architecture that loses a drive and rebuilds from an SSD architecture.
The physical time required to rebuild that drive, even though it's a lot of data, it's
still faster than any of the previous replication replacements.
And the failure rates of our drives are minuscule in comparison to the storage products, right?
So the blast radius will always be a problem, and we'll always have to worry about it.
But the loss of the data is no longer a concern.
Nobody has a single copy of any set of data unless you're literally on the street camera SSD
and well then you're just, you can't do anything about that.
We've got double, triple, quad, whatever, you know, if you're using the AWS or anything
like that.
They put it in six different locations around the globe for you.
So the data is really not gone.
The blast radius doesn't really exist from a loss of data.
The blast radius exists of how long is my system down until I get my performance back.
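(A simple sketch of the rebuild-time comparison Scott is making: capacity divided by a sustained rebuild rate. The rates are illustrative assumptions; real rebuild times also depend on RAID geometry, host load, and controller behavior.)

    def rebuild_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
        # capacity in TB -> MB, divided by sustained rate, converted to hours
        return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

    hdd = rebuild_hours(24, 150)        # 24 TB HDD at ~150 MB/s sustained
    ssd = rebuild_hours(122.88, 2000)   # 122.88 TB SSD at ~2 GB/s sustained
    print(f"24 TB HDD  @ 150 MB/s: ~{hdd:.0f} hours")
    print(f"122 TB SSD @ 2 GB/s:   ~{ssd:.0f} hours")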
Yeah.
And that's a fair point.
I guess it's on either the RAID providers or the software-defined solution or whoever
to architect for that.
And they've done that before with erasure coding, with all sorts of other techniques to make
sure that when you, because drives will eventually fail.
Not as often, but when you put that one in, do you need it right away or do I keep a
cold spare in the system?
And the resilience.
We're seeing a lot of data center operators that really don't want to go service failed
parts and they build the resilience into the architecture.
Oh yeah, we have customers today that are fail in place.
We want your drive to fail elegantly, not fail badly.
We want it to say, I'm going bye-bye, see ya, I will have no impact on the rest of your system,
go find somebody else to replace me.
And that's exactly what you're talking, that's exactly.
They will literally let racks start to die off and when you get to the point where it's
now, you know, ends the means of replacement, they go in and play with it.
But yeah, they'll let systems and racks go down and just fail in place.
And so now if we do fail a drive, it fails elegantly, fails cleanly.
I feel like you just gave your retirement speech when you said, I'm going bye-bye.
Exactly.
Replace me.
So as you think about all of this that we're talking about, what we're seeing with Hypertec today, a lot of this emerging technology,
you've been to all the trade shows, you've been all over the world this year, you know, meeting with customers and looking at all of this.
What's your takeaway of where storage sits as we close out 2025 here, and what do we look forward
to next year, maybe something that's not on everyone's radar?
I know it's all AI all the time, but that may not work forever.
Like what else are we not seeing that we should?
Yeah, I think to your point exactly, we know where the markets are today.
So we've got all the cool innovation on how to build out and scale out the AI factory, the
data center, whatever.
immersion cooling, liquid cooling, whatnot.
From a storage perspective, we also know that there's a significant bubble of tightness
of supply.
You take what happened with the post-COVID '23 dip, and now this immaculate jump of data
consumption for AI, and we can't supply it all comfortably, right?
We can supply everybody, but there's lead times and things; our friends in the spinning world
are at, you know, year-plus lead times now.
So it actually provides a great opportunity for our customers.
and our partners to engage on that what next, right?
Step back, you can't really do something net new today because you can't get today's product.
Let's think about what would work better for you.
So what have we seen as a problem and what can we do to address it?
So something like...
Better planning is really what you're talking about.
Yeah, we have a window of opportunity that we haven't had in a decade to actually kind
of just step back and say, okay, you've got your parts, you have problems, let's go find a way
to solve them with something different.
So hopefully, I'll be honest, I'm not looking for a new form factor, but there could be one,
or there could be going to flash in a one-U pizza box all soldered down.
I don't know.
Or we could see ways of interacting better with the PCIe networking and figuring out how to put
networking plus storage plus this all together better.
And so those are the kinds of things that I look at now for what's next. I don't have
the crystal ball to say what's the perfect thing,
but it's a grand opportunity to sit down with someone and say,
instead of going, where's my PO, how many drives do you want,
how can we work together to do something different that you need?
And there's lots of people out there that have a desire to do that
if you're willing to sit down with them and talk to them about it.
Well, that'll be interesting because you've got the events that you've done,
but we've got Supercompute coming up.
I guess that's probably about it for shows for you guys this year.
As far as big events, yeah.
There's a few other things out there,
but the next big thing past that is really back to GTC, right?
Because there's some CES buzz around the consumer space,
but of course, Solidigm doesn't play there.
Well, actually, so this is interesting, not really,
but I think about some of the new stuff that we're seeing,
like the DGX Spark.
I mean, it's just got M.2 storage.
It's only got a 2242 on it,
and there's just not enough performance there
to really give that GB10 everything it needs,
or the bigger ones that are coming out with the GB300,
still with M.2.
So what we did in our Spark review, and we're just starting to dabble with it because we didn't have a ton of time, but we thought more about, well, it's got 200 gig on it.
Why is the spark on my desk and not in the data center?
And we connected it to a system with PEAK:AIO, and it had flash behind it.
And now I can get some really fast, reliable throughput to that spark.
I think even some of these consumer devices, or not really consumer, but desktop devices, are going to make us rethink where they should live, how they should
be accessed, and what kind of storage they need.
You could almost call that the epitome of the edge
AI computing, right?
Because everybody's asking what is the edge.
It's like, well, the edge is really the DGX Spark.
Well, that is, because they put a 200-gig NIC in it.
Like most edge devices, 10 gig, maybe,
if you get a nice onboard port.
But these little NVIDIA boxes are really changing.
I think, I mean, you remember the data center shelves
of Mac minis and stuff, they would pull out,
and they would all be stacked in there.
We're going to see that with the Sparks, there's no doubt.
we will, especially because they're going to make them available, and it has a platform
that's fast enough for a lot of things, where you can't get some of the more performant
systems. And even if you don't need them, that's, again, I think absence of availability
is the key to innovation, right? You could look at it from an economics perspective, a technology,
whatever it is. If someone can't get what they think they need, they will find a way to augment
it with something else. The creative innovation is certainly something we're seeing now.
And you reminded me to your point about putting the spark and networking it over to something.
One of the fun things that we did this year was an NVMe offload of RAG when we did the Metrum AI white paper that we've got available.
And we're now doing an NVMe offload of inference project with that same thing that we'll be working on over the next few months and things like that.
So there'll be some cool stuff.
So now we're looking at, okay, we've had this huge HBM conversation.
I'm looking forward to when I can go into, like, a GTC
and instead of having Jensen talk about HBM and DRAM,
he's like, check out these cool storage innovations.
If I can get that, that would be like, you know, the peak of the next era.
Well, GPU Direct was the first dream, right?
Especially for gamers because they didn't want, you know,
load sequences anymore while you're freshening up your graphics memory.
But yeah, I think there's got to be a way where that tiering gets better and faster
and let those GPUs really reach this larger data pool.
And if you look at it from kind of how to bring that all together,
that's one reason why Solidigm has a significant footprint in SNIA.
So there's this...
You're just saying that because you're on the board.
Oh, hush up.
There's this thing that we created, right?
So SNIA has what we call Storage.AI.
And it's where we've got all these storage vendors,
all these networking vendors,
and all these consumers and builders
that sit in different workgroups that are all focused on their linear aspect of what to do for AI.
And it's not often you get an organization that looks at all of that, that can bring it together
and try to cohesively align everything up.
So that's going to be kind of fun for the next year, too, is to work in that frame of reference.
Because you have NVME, which does its thing, and you have Ethernet, which does its thing.
But when you can bring those together with someone in between that allows frenemies to talk is
the best way to put it, I look forward to that.
And yeah, fine, I'm on the board.
But that's just, that's something that all of the companies in the space need to think more about is, yeah, I need my, my dollars out the door and I want my innovation.
But sharing resources to help our end customers is really where the wins come from, because that builds on everything else.
Well, we've got more to do today to go play in the oils and see the immersion servers at work.
I think they're just about done frying our lunch.
Yeah, probably, hopefully.
I know we talked about the thermals, and that's something that I'm excited about,
because we've been hitting one of their GPU servers really hard with your E1S drives in there,
and they just lock in on this really tight window of thermals. And the utilization, I mean, you
can make all the energy benefit arguments you want, but the utilization and the
performance of these things when they're not getting too warm... throttling is just the
natural way to stop that from happening. And if you don't get throttling on your
expensive components, then all the better. So we're going to go do more of that. Consistent latency
performance and everything else as well. Yeah. And I've been impressed, honestly, with not just
the Solidigm leadership on performance with Gen 5, the capacity with the 5336, but your willingness
to go play in uncommon playgrounds like full immersion. Yeah. We're in the process of, and looking forward to,
having the first fully qualified SSD for immersion liquid,
a set of immersion liquids, mostly focused on hydrocarbons today, by the end of the year.
Well, that would be exciting.
For systems, I mean, these guys would love that.
Exactly.
You know, help our customers sell their customers.
All right.
Well, thanks for doing this pod.
I appreciate you.
Good to see you, Scott.
Thanks for coming all the way to Montreal.
I know.
Next time I'm going to be at at least one of our headquarters.
Yeah, good old Cincinnati.
I'll be down soon.
Thank you.
