Podcast Archive - StorageReview.com - Podcast #143: KIOXIA 245TB SSDs Are Here!
Episode Date: November 16, 2025. KIOXIA's Maulik Sompura joins Brian for an in-depth, informative discussion about all things flash and other industry events. The topic is timely and relevant, given the massive expansion of AI ... and modern workloads. Maulik Sompura is the Senior Staff Director, Product Planning and Management at KIOXIA, with over 13 years of experience in NAND, memory, ...
Transcript
Hey, everyone, welcome to the podcast.
I've got KIOXIA with us today, talking about all things flash, and it probably couldn't
be more timely with the huge expansion in AI and other modern workloads.
You know, thanks for getting in on the podcast.
We're going to get to AI, but I want to start with some other things first.
First of all, tell us a little about what you do at KIOXIA and who you are.
Thank you, Brian. Thank you for having me. My name is Maulik Sompura. I lead the product management team for SSDs at KIOXIA America.
And that is a bigger and bigger job these days. I mean, in the early days of Flash, you had a couple models and now the portfolio is unbelievably diverse.
When you think about managing SSDs, what's the hardest part with such a big portfolio these days?
That's a great question. Just the complexity of very fast-paced, rapid changes happening,
especially with the AI boom, you know, we are in a great time right now. You know, honestly,
storage is at the forefront after, you know, if you think about GPUs and HBMs. So it's a
great place to be. However, as you said, the complexity is growing immensely. And especially on
the enterprise side with hyperscalers specific requirements, very custom requirements for their
own needs. And they, they are obviously huge customers. So they demand certain things specifically
for themselves. The form factors are growing like crazy. So, you know, we have so many form factors
coming in, especially with, you know, higher capacities. And now you see the liquid cooling
coming in with Gen 6.
So there are new changes happening for liquid cooling in terms of performance.
You know, the GPUs are pushing the performance boundaries like anything we've seen before.
And we are growing at an unprecedented pace.
So, yes, it's a very challenging landscape right now.
So we are just coming off OCP.
I didn't make it out there this year, but you're talking about hypers.
And it seems to me that historically, the hyperscalers have really dominated the direction of storage, if that makes sense.
And I don't know that everyone really understands that.
You know, the metas of the world and Google and Amazon and all of these guys will say, you know, this is what we want out of a drive.
Go build us a couple million of them.
And that really drives the direction for enterprise.
Talk a little bit about, you know, maybe some of the benefits that the enterprise derives by hyperscalers sort of driving the performance and the form factors and whatever else you think is relevant there.
Yeah. So, yes, I was there at the OCP event this year. It has significantly grown. And now, at that event, we are talking about everything from very detailed storage stuff all the way to the data center, you know, even real estate. And it has grown significantly in terms of data center infrastructure.
So I think one of the key things is the OCP specification itself. You know, when the hypers started and then the OEMs joined, the OCP specification kind of honed everybody in one direction, and it's simplifying at least all the stuff that we used to do in silos.
And, you know, there were lots of changes which firmware had to accommodate and we had to validate.
So now, having one specification on top of the NVMe specification helps a great deal, even for the enterprise customers.
There are minor changes.
Enterprise customers specifically, they sell from, you know, point of sale all the way to the data center.
So their requirements could still be much bigger in scope.
You know, they have to address everything from a single small customer up to a gigantic deployment, compared to a CSP.
So in that regards, there might be changes.
They do require certain stuff; you know, three drive writes per day is one example, right?
So those things are different, but otherwise it does help overall to get into the same space from specification.
And I think, as an industry, it's coming together, even for form factors. As I said, while there's a wide gamut of form factors, I think the industry is slowly coming together to kind of reduce that.
Everybody's worried about the increased number of SKUs.
Yeah, I mean, you hit on a couple things.
You hit on performance.
You know, that alone would be a couple of levels of SKUs.
You hit on form factors.
that could be, I mean, at this point, a dozen different form factors when we start factoring in
all the EDSFF variations and the thicknesses, Z-height, and then you've got endurance, which you didn't
get to yet, but I'm sure we will, which is another factor. And as you slide all those knobs
around, you could end up with a difficult portfolio to wrangle with too many options.
But it sounds like if you feel like at least, I mean, in the enterprise, I think we've seen it already, right?
We still have a little bit of U.2 hanging around, but E3.S seems like the way most modern servers, and then eventually storage appliances, will go.
I'm sure that's consistent with what you're seeing, but what else are you seeing in terms of thinning down the varieties of form factors?
Yeah, that's always a question.
In fact, we, I'm going to Japan next week and that's one of the discussions, right?
Internally, of course, we're talking to the external customers for all of these, but we have
even E1.L, we have different varieties, ideas of even E1.S. Now liquid cooling is joining that. E3.S comes in 1T and 2T, and so far 2T is
very few, you know, almost nonexistent. And then we have 2.5-inch, as you mentioned.
So 2.5-inch will still continue. But Gen 6, as you say, people are moving to E3.S 1T.
And I think that's the right direction in terms of power and thermals. That's where we are making
progress and, you know, we are recommending customers to move toward E3.S to align
as an industry. And so we are semi-successful.
I would say there's a lot of legacy stuff and people still want certain things and plus
these things run for a very long time, right?
So people have to support for five years, seven years.
So yeah, I'm hoping that by Gen 6 and Gen 7, you know, we kind of come to a much smaller
number of form factors and SKUs.
Yeah, I know it's funny though.
As soon as you say that, I mean, we were just writing not long ago about E2, which is
Yes. Another one, which is unbelievable. For people that don't know, this is a much larger physical SSD, really designed for capacity. And I don't know that it's imminent, but, you know, I think of it as maybe the true hard drive replacement in form factor, capacity, and performance profile. But was that a popular topic at OCP this year?
Yes. Yes. There were a couple of discussions on E2.
And the way they designed E2 is very clever.
So we are using the same connector,
same height as E3.
It's just that, compared to E3.L, it's longer.
But it also is slightly thicker.
So when you look at like front mounted SSDs,
you know, it's basically you can't use the same slots
that you build for E3.S, because this is now 9.5 millimeters
instead of 7.5, and therefore you still have to design a different chassis to take advantage of that. And it's longer, so, you know, if
you have components on the backside, you have to shift everything. But
it does allow you to reach huge capacity. So in the future, if we have a four-terabit die, for example, and you can go much higher in stacks, then
you can go up to one petabyte just on a drive.
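To make that capacity math concrete, here is a rough sketch of how die density, stack height, and package count multiply into raw drive capacity. The die sizes, stack heights, and package counts below are illustrative assumptions (only the 2-terabit die and 32-die stack are figures mentioned later in this conversation), not product specifications.

```python
def raw_capacity_tb(die_gbit: float, dies_per_package: int, packages: int) -> float:
    """Raw NAND capacity in decimal terabytes for a given build.

    die_gbit         capacity of a single NAND die, in gigabits
    dies_per_package number of dies stacked in one package
    packages         NAND packages that fit on the drive's board(s) (assumed)
    """
    total_gbit = die_gbit * dies_per_package * packages
    return total_gbit / 8 / 1000  # gigabits -> gigabytes -> terabytes

# A 2 Tbit (2048 Gbit) QLC die in a 32-die stack needs on the order of 30
# packages to land in 245 TB territory (the package count is an assumption).
print(raw_capacity_tb(2048, 32, 30))   # ~245.8 TB

# The hypothetical future case: 4 Tbit dies, taller stacks, and the extra board
# area of E2 pushing toward a petabyte on a single drive.
print(raw_capacity_tb(4096, 48, 40))   # ~983 TB, roughly 1 PB
```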
That's amazing.
I mean, we saw some people concerned about, you know, things like blast radius and rebuild
and whatever as we went from even just four to eight terabytes and eight to 16.
And now that 122 has been shipping for a while, you guys have announced a 245 class drive.
Yes.
already. I mean, fundamentally it just comes down to needing to build systems that
are more intelligent, more resilient, so that we can eat a drive loss and not lose availability,
or whatever the QoS, the performance guarantee, is in that particular system, right?
Yes. Yeah, so that's a great point, actually. So I think what has happened in the last,
you know, I think a decade, especially the CSPs, the hypers, they are taking these large
density drives. And you touched upon the LC9, the 245 SKU that we announced; that was purposefully
built for large-scale repositories and data lakes. And the primary driver is rack consolidation
and power efficiency, right? And so when you look at it
from a RAID-rebuild point of view, you mentioned blast radius and RAID rebuild.
What hypers are doing is, you know,
they are sharding the data.
So the data is, you know, very small chunks,
you know, and distributed across many storage devices.
They are doing erasure coding across the racks.
They are also now with smart monitoring,
latency monitoring, you know, there are so many tools
for predictive analysis, so that before the drive
goes bad, they decommission and replace the drive, so they don't have to deal with the failure.
And even the locations are separate, and, you know, you can also
rebuild from multiple drives now instead of rebuilding through just a few drives, right?
So in general, you are actually accelerating the rebuild times.
And, you know, if you compare today, say, a 30-terabyte HDD versus a 245,
even at Gen 4 speeds, that gives you read speeds of 7,000 megabytes a second.
So you could rebuild a 245 in roughly, say, one and a half hours, versus a 30-terabyte hard
disk drive, which will take about five hours.
So not only do you have six times the capacity, you are almost three, three and a half times faster.
So you're almost 20x faster in rebuild times also.
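As a rough sanity check on that comparison, here is a small sketch of the time-to-rewrite math. The throughput figures are illustrative assumptions (7,000 MB/s for a Gen 4-class SSD stream is the number quoted above; the HDD figure is assumed), and real erasure-coded rebuilds are sharded across many drives, which is what pulls the wall-clock times down.

```python
def rebuild_hours(capacity_tb: float, stream_mb_s: float, parallelism: int = 1) -> float:
    """Hours to rewrite a drive's worth of data.

    capacity_tb  capacity to reconstruct, in decimal terabytes
    stream_mb_s  sustained per-stream throughput in MB/s (assumed figures)
    parallelism  how many drives/streams share the reconstruction work;
                 sharded, erasure-coded layouts make this much larger than 1
    """
    seconds = (capacity_tb * 1_000_000) / (stream_mb_s * parallelism)
    return seconds / 3600

# Single-stream sequential fill (illustrative, not vendor specs):
print(rebuild_hours(245, 7000))   # ~9.7 h for a 245 TB SSD at Gen 4-class speeds
print(rebuild_hours(30, 280))     # ~29.8 h for a 30 TB nearline HDD stream

# Shard the SSD rebuild across several devices and the wall-clock time drops
# toward the shorter figures quoted in the conversation.
print(rebuild_hours(245, 7000, parallelism=8))   # ~1.2 h
```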
So yes, there is obviously a concern.
And I think the other way to think about it is a two-terabyte drive.
If I'm giving you a five-year warranty and 2.5 million hours MTBF for a 2-terabyte drive, and the same
for 245 terabytes, then from a reliability point of view you are still getting the same reliability
underneath.
So on top of having all of these provisions, I think now the
blast radius concerns have come down significantly.
But for OEMs, as we touched upon earlier, there may be some concerns for point of sale and
stuff like that.
But in general, I think at least at a large deployment, it's much, much better handled.
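For context on that reliability point, an MTBF rating translates to an expected annualized failure rate that is independent of capacity. A minimal sketch, using the 2.5-million-hour figure quoted above:

```python
HOURS_PER_YEAR = 8766  # average year length in hours, including leap years

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Approximate AFR implied by an MTBF/MTTF rating (small-rate approximation)."""
    return HOURS_PER_YEAR / mtbf_hours

# The same 2.5M-hour rating on a 2 TB drive and a 245 TB drive implies the same
# expected per-device failure rate, roughly 0.35% per year, regardless of capacity.
print(f"{annualized_failure_rate(2_500_000):.2%}")
```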
So speaking of large deployments, you hit on liquid cooling a couple times.
I want to visit that because we're seeing that in a number of different ways, but I suppose
the NVL72 or the GB300 nodes now are really what's pushing that from the AI training
perspective at a large scale, with the E1.S drives.
What are you seeing there that's interesting, or what do you have to do as KIOXIA to be ready
for big shifts like that in technology?
Yeah, so we have been participating in the standards; the SNIA SFF group was driving
that, and so we've provided some comments to improve certain things. But primarily the idea is that you
have secondary and primary sides, and the top and the bottom, which are the thinner sides of the
E1.S. And, you know, the idea is to have a cold plate on one side from the host, and you change, you know,
there are minor changes so that we have the right connection for thermal conductivity, right?
So in that regard, there are changes in the material, the tooling.
You need certain flatness and roughness.
And some of the customers wanted like a chamfer around the connector.
Some customers may use TIM materials.
Some customers may not use TIM materials.
So we kind of helped the industry and tried to say, hey, you know, there are certain things.
We need to kind of do it this way.
But otherwise, I think we will be ready. You know, we'll start with Gen 5 drives and then go to Gen 6, because that Gen 5 will be our POC. Some customers are going liquid-cooled whether it's Gen 5 or Gen 6; it's like, you know, whatever the drive is, it has to be liquid-cooled.
So we will start with Gen 5 and then move to Gen 6.
Okay. So that, I mean, that makes sense for the big training systems, of course. We can get fans out of those, get those to be more efficient. You know, we all know the reports of how much power AI is consuming these days. And I think it's generally, you know, by most sort of underappreciated how much electricity, obviously the GPUs, but the fans, to cool the GPUs and to cool the system, use up a tremendous amount of power. So there should be some good savings there.
Do you see that as you look down the road translating to the enterprise market?
Obviously, we've seen systems from Dell and HPE and Lenovo and Super Micro.
They're all using liquid cooling to different extents, but the enterprise is just a little more reluctant than most that are putting in these training clusters to rework their data center and bring in water.
In some cases, they can't do it.
In others, they don't want to do it. But it seems like for next-gen systems we might be getting
so close to the benefits of fully liquid cooling these systems, from a power consumption standpoint,
that it may just tip the balance. You know, what do you see in the enterprise world?
Yeah, that is a very true observation, Brian. What we are seeing, especially on the
E1.S side, is most of the companies are following the NVIDIA direction and going with this liquid-cooled
design on E1.S. But on the OEM front, where they are using external boxes, E3.S takes precedence.
And now we are talking about E3.S because the connector can go much higher in terms of, you know,
it allows for a lot more power. Especially with Gen 6, we would need more power than 25 watts
if you want to saturate certain corners, especially on the write side.
So therefore, you know, they are looking at liquid cooling on the E3.S side as well.
And that's also being discussed.
So in the industry overall, I think that's going to be a de facto standard going forward.
Yeah, I think OCP has a couple of working groups for liquid cooling, both direct cold plate
and then a whole separate chain for immersion, which is another place we've spent a lot of time.
you know, the storage impact there is a totally different set of concerns, which is interesting
in its own right. Those working groups must be fun, fun to be part of, too, because we're just in
this shift in data center technology that's so dramatic that really we haven't seen this
in a very long time, maybe even since Flash first came into the data center. I agree.
AI is basically changing a lot of things, at an incredible pace also, right?
This is, as you say, an unprecedented pace that we are picking up, and things are changing so rapidly.
But what we have, at least on the AI side of things from a storage perspective, right,
when you look at AI workflows and AI infrastructure, we feel that two
vectors will be dominating for storage: one is capacity and one is performance. And capacity is
where, you know, we basically announced the 245 terabyte, the first 245, and we did that
with the first two-terabit QLC die, with an innovative 32-die stack on the NAND. So
everything is a first, right? We never built a 32-die stack. We never built a two-terabit die.
We never built a 245 terabyte drive.
We never built an E3L form factor.
So there's a lot of changes coming very rapidly at us, yes.
And now, you know, that's compounded with PCIe interfaces.
You know, the cadence is much faster because of GPUs.
Well, and it's kind of fun too because the storage really isn't the bottleneck anymore.
I mean, there's different ways to look
at it within the GPU system,
whether it's a traditional rack server or those NVL nodes;
the storage is typically fast enough to keep up
with the GPU's ability to process data.
But when we look at some of these real dense boxes,
we're doing some work on a system now with 24 drives,
four lanes, the ability to generate 250, 280 gig a second
in that box is pretty stunning in itself. Yes. But now, if we want to take that box and
share it out to six or eight or ten GPU servers, now we've got a problem, sort of, right?
In that we can only go as fast as the network will let us go. The storage boxes have
gotten very, very good, and the drives inside, and now it's a fabric issue of how many
NICs can we get into this box to share it out to all these hungry servers.
Data transport is not really your problem, but what's your view on fabric these days?
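To put that fabric point in numbers, here is a quick sketch comparing the aggregate bandwidth inside such a box against what the NICs can export. The per-drive and per-NIC throughput figures are assumptions for illustration, not measurements from the system described above.

```python
import math

def nics_needed(num_drives: int, drive_gb_s: float, nic_gbit_s: float) -> int:
    """NICs required to export the box's aggregate drive read bandwidth.

    drive_gb_s  assumed sustained read throughput per drive, in GB/s
    nic_gbit_s  per-NIC line rate in Gbit/s
    """
    aggregate_gb_s = num_drives * drive_gb_s   # GB/s available inside the box
    nic_gb_s = nic_gbit_s / 8                  # GB/s per NIC at line rate
    return math.ceil(aggregate_gb_s / nic_gb_s)

# 24 Gen 5 x4 drives at an assumed ~11.5 GB/s each lands in the 250-280 GB/s
# range mentioned above.
print(24 * 11.5)                  # ~276 GB/s from the drives
print(nics_needed(24, 11.5, 400)) # ~6 x 400GbE NICs just to break even
print(nics_needed(24, 11.5, 800)) # ~3 x 800GbE NICs
```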
Yeah, I think you hit the nail on the head there. So if you look at all
the large deployments where all the money is being spent, right, they are spending a lot
of money to solve that problem also, because, you know, every bottleneck, right?
Once it's the GPU, then HBM, then storage, then, you know, everything down the pipeline has to meet the requirements.
And therefore, that is why the scale of money, the amount of dollars being spent, CAPEX and even OPEX, is huge.
We've never seen this kind of investment overall in the data center at this rapid scale, at this scale and this speed.
And so I think that is being solved.
You know, and if you see, the Ethernet guys are coming out with 800 gig native, and the SmartNICs are getting much smarter and faster, CX8, CX9;
NVIDIA is coming out with all those. So, you know, when you look at it from a big-picture point of view, I think that is being solved. For smaller customers, yes, that is a challenge, right? So how do you scale, and now you have to make
sure that the cost is also, you know, balanced, right? So 100 gig, 200 gig, maybe 400. So I see it
solved at a grand scale, but I think there will be a lot of the middle tier and lower tier that still
needs to come up to that, and I don't think we're there yet. So that's a challenge.
Yeah, I mean, AI is obviously driving that, but the good old Fibre Channel networks are still going to exist,
and we're still going to have databases.
Can you imagine that?
I mean, this is about the longest I've ever gone
without talking about database performance.
I assume you still have customers that care about that.
Yes, yes.
Yeah, I mean, especially vector databases, right?
You had to bring it back to AI.
Well, no, I know. I know what you mean.
Yeah.
Well, so there's another thing that I think is interesting about KIOXIA,
and this may be a hair
outside of your wheelhouse,
but you guys are doing a lot in software too
and open sourcing some projects.
And I'm not sure the industry really knows that much
about what you're doing on the software side.
Again, I know probably a little outside your scope,
but just give us some of the high level beats
on your view of software and some of the open source work
you guys are doing.
Yeah, so a couple of things.
So there is a group called
the storage pathfinding group.
And we have a senior fellow, Rory Bolt,
who's managing that group.
And he was on the software side of things
before also, at KIOXIA, when we had the software-defined storage.
And primarily right now, we are focusing on AiSAQ.
You may have seen the demos or some videos,
where what we are trying to do is: what can we achieve
in terms of performance without the DRAM that's normally required?
How can we reduce the cost
of that expensive memory?
And so Microsoft had DiskANN,
which was a piece of software which would utilize this.
And what we did was we took that
and added our own recipe to reduce that significantly.
So now, you know,
I don't remember the numbers off the top of my head,
but basically AiSAQ is reducing the DRAM
to almost bare bones and still getting the same performance.
There are minor
things that may come up in terms of latency, but otherwise, it's significantly improved.
And so that's a huge cost savings for the customers.
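The exact AiSAQ numbers aren't given here, but the DRAM-saving idea behind DiskANN-style indexes, which AiSAQ builds on, is easy to sketch: keep only compact product-quantized codes resident in memory and serve the graph and full-precision vectors from the SSD. The corpus size, embedding dimension, and code size below are illustrative assumptions.

```python
def dram_footprint_gb(num_vectors: int, bytes_per_vector: float) -> float:
    """Rough DRAM needed to keep one representation per vector resident."""
    return num_vectors * bytes_per_vector / 1e9

N = 1_000_000_000            # one billion vectors (assumed corpus size)
full_precision = 768 * 4     # a 768-dim float32 embedding = 3,072 bytes/vector
pq_code = 32                 # e.g. a 32-byte product-quantized code per vector

print(dram_footprint_gb(N, full_precision))  # ~3,072 GB if everything sits in DRAM
print(dram_footprint_gb(N, pq_code))         # ~32 GB with only PQ codes resident
# DiskANN-style engines use the small in-DRAM codes to steer the graph search and
# fetch exact vectors from the SSD, which is where the memory cost saving comes from.
```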
Yeah, and you've got some, I think, some RocksDB work too or something else that's kind
of interesting.
But it does sort of highlight how you guys are taking a multifaceted view toward, you know,
whatever the problems are that you're helping customers solve.
And then there's the whole fab side.
So you're headed over to Japan, I think you said.
And that's got to be pretty thrilling to be able to see those fabs and the wafers rolling off the line.
Yes, yes.
I have been to Yokkaichi before.
This time, no, mostly I go for strategic discussions and mostly for the product roadmap and strategy discussion.
But yeah, I don't plan to go this time.
But yes, it's fascinating to see those wafers.
And we have Kitakami, which we announced with SanDisk recently, where K2 is also ready for production.
So, yeah, we are growing and we are very excited.
Talk about that piece of it for a moment because I think there have been some concerns about supply.
We went through the COVID cycle where supply got diminished and then maybe came back and we overbuilt or overproduced.
And now it feels like we're sort of in an in-between, where plenty of people tell me that supplies are maybe even a year out at this point for large orders.
For some vendors, anyway, I don't know.
But what is going on with supply and what can customers expect, at least from your perspective?
Yeah, that's a million-dollar question right now.
So a billion dollar question maybe.
Yeah, so we talked to the analysts, we talked to the customers, we talked to the suppliers,
and the shortage is real. We all see it. And what's happening in the industry is, as you say,
when COVID hit, you know, there was a lot of inventory. You know, it was a huge downturn
for the industry, and as an industry we lost 30 billion dollars on the flash side, the SSD side. So after
that, and let me say that when we go to BiCS8, BiCS9, BiCS10, you know, as we ramp
the technology, and similar for our competitors, as you scale the layers or shrink
the lithography, your cost, the CAPEX, is increasing very,
very rapidly, and it's huge.
Like before, you know, it used to be a two-year NAND cycle.
Now, you know, this is changing rapidly and also that just to keep up with that kind of
speed and increase the capacity, reduce the cost, while it's a great opportunity for
the storage industry, it's also, you know, there's always that fear that, hey, what if
the downturn comes again?
And I feel that the storage industry will be cautious about growing,
but it will grow; you know, the trajectory is upwards.
But how much more can each vendor do?
And how much fab capacity do they have idle today?
Or can they increase anything in terms of ramping up the production?
That has to be seen.
But yes, it's going to be, it's going to be interesting to see how we,
you know, as a lot of analysts say that this is a super cycle.
So compared to, you know, that regular standard cycle,
this is a super cycle.
And if we believe that, this could take,
you know, maybe another four or five years,
it could last for that or more, I don't know.
But, you know, not a typical one-and-a-half to two-year cycle, right?
In that case, it'll be interesting.
Yeah, it, well, it certainly feels like we're in something,
you know, a super cycle maybe,
but with AI becoming more productive and more real
for more organizations, no one's going to delete data.
So all we're going to do is keep creating data,
keep stacking data.
And then now we've got to figure out where to put it,
with these new tiers, what we used to call data lakes.
I mean, that's even shifting the way organizations
are thinking about availability of data
and where AI is run.
I mean, we've even seen backup repositories
being targets for running AI against them,
for non-critical, non-real-time kinds of things.
But wherever the data is, I think we might see some of the AI get pushed at the data.
I mean, we've seen that already at edge.
I mean, how many smart cities, you know, conversations have we had, right?
Where it's getting more tech into the edge devices.
More work in robotics, more work in other industries where even a little NPU or something
can bring a lot of analysis and insights
to data. But, yeah, slowing the storage market down, I don't see that at this point.
I think it's a question of where the spend is going to go, what the flash or drive spend
mix looks like, and do we start to get closer, with these big drives
that we've been talking about, to really disrupting the hard drive market? I mean, I'm sure you'd
like to take a swing at that.
Yeah, definitely, most definitely, actually.
I was just looking at the 245 terabyte that we announced, right? And if you look at a 42U
rack, you can fit about, say, 10 drives per U, so 420 drives of 245, just for
the calculation of, you know, how that works out. So that's a hundred-petabyte rack,
a 103-petabyte rack. If I take that and compare it to today's 30-terabyte HDD, you
would need seven racks. And now you need the additional networking, the switches, everything.
So if I compare, the biggest challenge, as you touched upon earlier with all the GPUs and
the power being a big constraint, what's happening is GPUs are taking more and more power.
You know, NVIDIA and others, you know, they want to cram more GPUs per box. Okay.
So, NVL72, NVL144, now you're looking at bigger and bigger numbers, right?
Now, the challenge is, you know, power has to come from somewhere.
You know, the data center pipeline is fixed.
So while you can build bigger data centers with more power and so on,
what are you going to do about today's power?
And now you're also going to liquid cooling.
So the power reduction could be a huge benefit if you go to SSDs, right?
So in this case, right, where you have a 42U, 100-petabyte rack versus seven racks of hard drives,
IOPS per watt is almost 1,300x.
And while they are increasing, while HDDs are increasing, fitting more spindles to, you
know, get the capacity, now you're, you know, most likely at 30, then say 40, and then 50.
But even with 30 terabyte, your performance is not growing, right?
So performance density is a huge problem.
There's no scaling there.
So IOPS per gigabyte, right?
Even with 245, which is a huge denominator to have at the bottom,
even that is still 600x better.
And then your throughput is 7 to 8 times better.
So overall, TCO is a huge improvement.
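The rack math is easy to reproduce. The drives-per-U figure comes from the conversation above; the HDD-per-rack density is an assumption for illustration, and real densities vary by chassis.

```python
import math

# SSD rack: 42U, roughly 10 E3-class drives per U, 245 TB each.
ssd_rack_pb = 42 * 10 * 245 / 1000
print(ssd_rack_pb)              # ~102.9 PB in a single rack

# HDD racks needed to match it, assuming 30 TB drives and ~500 drives per rack
# (the per-rack HDD count is an assumption).
hdd_drives = math.ceil(ssd_rack_pb * 1000 / 30)
hdd_racks = math.ceil(hdd_drives / 500)
print(hdd_drives, hdd_racks)    # ~3,430 drives across ~7 racks
```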
Yeah, we've seen those charts, right?
And the efficiency numbers that you talk about and just the raw growth of data.
I think for many, that table will flip and there won't be any other choice.
We talked a lot about OCP and hyperscalers driving the industry, but now we're coming up on sunny St. Louis and Supercomputing 25, which is going to be an interesting one.
I'm not sure St. Louis is ready for this.
I went to the first Supercomputing after COVID that they had there, and it was okay, but it was only a couple thousand people.
This time, I've heard from dozens of people that have to stay 20 or 30 minutes away from the convention
center. We're at four different hotels just for our organization. It's going to be wild, but people
seem really excited for it. I'm sure you guys will have several people there as well. What
do you look forward to from, you know, what the HPC world can teach us next in terms of
where technology is going?
Yeah, that is very interesting. We
did attend the last few supercomputing events.
One was, I think last time was Atlanta.
And this time it's St. Louis, so we will be there.
And I think in most cases, especially in the supercomputing,
we are there to listen.
You know, we are there to learn, especially with Cray
and, you know, the supercomputing giants, right?
What are they doing, right?
If you think about today, like, you know,
compare it with 20 years ago.
You know, now you are having supercomputing in, you know, multiples of racks;
literally, your GPUs are so powerful that now you're just, like,
supercomputers all together. But we go to listen. You know,
specifically, we talk to the customers, we attend the sessions,
and learn what the new technologies are from a thermal point of view,
a cooling point of view, performance, what are the pain points, right, and what are
the solutions that they are bringing.
So yeah, generally, it's more of, you know,
absorbing things from the industry
and seeing, you know, how we can help from a standards perspective,
or, say, from an SSD feature or performance point of view.
But yeah, nothing specific,
but mostly learning on all the stuff that is going around.
I mean, I don't know about all your customers,
but most of your non-hyperscale customers will be there,
all the server vendors, all the storage guys,
the ones, the big ones, Weka and Vast and DDN
and all those guys that are quite popular at shows like this.
Because again, we're back to some of the conversation
we've already had feeding these large clusters
and scheduling time on them and scheduling access to data.
I mean, the orchestration
of data movement just becomes really important to this audience,
is what I expect we'll hear more of
as that show comes up.
Agreed.
Agreed.
And yeah, I think it is growing, as you said; the popularity is growing, and
I see the OEM customers and now I'm seeing hyperscale customers also having a big presence
there.
So, you know, and rightly so, right?
It's more of, you know, you've seen announcements from Google on quantum computing.
And, you know, even on the large language models, I think last month
they announced that they hit a quadrillion tokens in a month. So, you know, this kind
of scale requires new things, because even from supercomputing, I think there is
a chance to derive some goodness for your, you know, standard computing also. So there's
also that hope that it kind of intermingles here. Well, that's why I personally like OCP and
Supercomputing so much, because you get a vision into where the enterprise will end up.
We'd like to think, I think, that the OEMs have a lot of discretion in the systems they build.
They do, of course, but it is so greatly influenced by what's happening outside that world,
in the hyperscalers. Even things like form factor: I mean, 21-inch servers are the OCP standard
now, and things like busbars; there's all this other technology that will start to feed into
the way enterprise servers are designed. And really, data center designers and IT admins and
everyone that's involved in the acquisition process of any of these systems has got to be
juggling so many balls and thinking about so many different challenges. It is not as simple as it
used to be, and it will continue to be, you know, a big diversity in technology, I think. Yeah.
Yes, completely agree.
You know, I attended OCP, and I don't know if you saw the Meta keynote,
and the VP, I think his name is Dan,
he was explaining, you know, they have that ORV3, the wide rack.
So now they have the ORV3 wide, and now they're talking about even bigger racks.
And they're talking about special, like, troughs
and tools made for that rack, because there's nothing like this; it's like an
elephant, one big elephant. Just to move that thing around is also another
challenge. And they are obviously doing it through OCP, so this is, like,
open source, you know, you can do this. And you talked about busbars; we are
talking about this 800 volt, you know, it's just getting crazy. And everybody will
benefit from that, as you say, you know, when OEMs and hyperscalers
get together. It's kind of solving each other's problems together, hopefully as an industry.
Yeah, well, I mean, ultimately, they're after infinite resiliency, right? And I think,
you know, that's also important in the enterprise. We've talked about the nines forever,
five, six, seven, all of the nines. So that's clearly the directive. Well, this has been
fun. I appreciate you hopping in. And I think you have done a good job of articulating a lot of the
benefits and things that you guys are working on: the 245, the software advancements,
some of the thoughts on liquid cooling.
I mean, these are all important topics that I think our audience will take, you know,
great, great joy from getting your perspective.
So thanks for that.
I appreciate you hopping on today.
Oh, thank you, Brian.
Thank you for having me.
