Podcast Archive - StorageReview.com - Podcast #139: Immersion-Cooling, The Valvoline Way
Episode Date: July 24, 2025. Following a tour of the facilities, Valvoline's VP of R&D sat down with Brian… The post Podcast #139: Immersion-Cooling, The Valvoline Way appeared first on StorageReview.com.
Transcript
Hey everyone, welcome to the podcast.
Brian Beeler here hosting for Storage Review, and we've made the short journey down to Lexington,
Kentucky to talk with Valvoline about what's going on in the data center liquid cooling
space and there's a lot going on.
I've been trying to convince George's people for years that there's more technology in
the Midwest than anyone knows.
We're in Cincinnati.
I call it the storage hub of the Midwest.
I think that's mostly not true.
But you guys have a legitimate claim
to data center technology down here in Kentucky.
Before we get into all of that,
you've been here for quite some time,
well before data centers were even a thing,
but just give us a quick intro on who you are
and what you do.
Sure, sure, thank you, Brian.
So I'm George Danso.
I joined Valvoline in 1999 after I got my PhD in chemistry. So when I joined the
company I was studying motor oil, dispersants, detergents, formulated motor
oil, so 26 years ago. And over those 26 years we actually created the
high-mileage oil segment with MaxLife. And we also did NextGen, which is
re-refined base oil used to make the motor oil.
And also we did the MaxLife ATF,
quite a few lubricant products over the years.
It's wild, so as we walked in here,
we walked past, what, 100 years of automotive engineering
from Model T to now the high-end race cars
and a motorbike, and you guys are involved
in all of this high-end work around motor sports.
It's wild to me to think about, even over all that time,
that so much engineering could take place in something,
and maybe I've just never thought of it
in an elevated sort of way,
but motor oil, it's wild, huh?
Yeah, I mean, people don't really think about motor oil, it's an afterthought, right?
They think of engineering, mechanics, how the parts are moving, but when you have moving
parts, you have to lubricate.
So motor oil actually has four functions, not just lubrication.
Lubrication is number one. Then you have to clean, keep the system clean.
You have to seal the combustion chamber.
That's where the viscosity comes into play.
And finally, heat transfer.
You have to cool the engine parts.
That's where we have to understand how the heat is being taken away from the combustion
chamber to the oil, then to
the cooler, then rejected to the air. So that's the whole heat transfer loop.
I think I learned about viscosity from motor oil commercials during sporting events through the 80s as a child growing up.
It's a great word, viscosity.
That's right.
So what is it then about what you do with automotive lubricants and heat transfer?
And what is it about the entire petroleum industry that's getting into the data center
space?
How are they similar?
Or maybe how are they different?
I don't know.
Yeah, so they are more similar than different.
So number one, again, we talked about the heat transfer.
We learned so much from the high-end machinery,
how do we manage the heat, right?
So for the data center, the number one thing is heat management,
the heat rejection.
How do you take the heat away?
If you talk about, like say, a thousand watt GPU,
that's really the, let's say the energy it takes, right?
But the majority of the energy is rejected as heat as well
after it's done the calculation.
So how do we manage that heat?
So we understand that from the automotive side,
quite well, from the oil to the coolant.
So that's transferable understanding, let's say,
to data center cooling.
We understand the chemical, physical, rheological, thermal,
and we add one more thing, which is electrical.
We never really worried about electrical.
You know, of all the properties of the oil,
now we have to worry about the electrical properties of the oil.
Yeah, I mean, it's funny. You've got, I don't know,
how many people are in this facility here in Lexington?
Here, roughly about 700.
Okay, and of the 700, almost all of them know
how an engine works, probably.
Yes.
How many of them know how a data center works?
Or does it not even matter because you're just dealing
with the same fundamental physics issues?
We're just dealing with the same fundamental physics issues,
but a lot of the people... we use, I mean,
we all use data centers every day, right?
You do a search on ChatGPT.
That's going back to a data center somewhere
to do the calculations, right?
I mean, we were shocked to learn that last year, 2024,
all the searches on ChatGPT
consumed as much energy
as the entire country of Ireland consumed in 2024.
That's just amazing, right?
But again, that energy is consumed, it's become heat.
You have to manage that heat.
And we're just getting started.
Everyone thinks we're so deep into this AI journey.
I think on our 90 minute ride down here,
we spent 75 minutes talking about the impact on colleges
and teaching and coursework and all this stuff.
It's so fresh.
And if you design a rubric for a college class today,
in six months it's going to be out the window
because we're changing that fast.
And the demands on the data center
are obviously what you're interested in:
adapting these fluids and your history of engineering
into the next step for data centers.
That's right, that's right.
So what, you said they're very similar.
What's new in the data center
that's not in the automotive world?
So what's new is really, number one, we have to make sure...
this is mission-critical equipment,
much higher-value equipment.
We have to make sure it's working,
it's stable, all the time.
Compared with our automotive world, where you do have downtime, you shut down for quite a bit
of time, then you use it again, right?
Data center, all the time, constantly running, right?
So that's slightly different.
But on the other hand, it's less demanding compared with automotive.
In automotive world, you have combustion, which is chemical reaction.
You have the mini explosion in the combustion chamber,
which create a lot of combustion byproducts,
which will get into your oil,
maybe even get into your coolant, right?
You have to manage that.
But in data center, you don't have that.
So that's, let's say, a little bit less stressful situation.
But one thing that is a little bit different
is that you have different materials used for your computer,
for the data center, for the servers,
compared with automotive.
We have to make sure they're compatible.
So to us, compatibility is really, really important.
Yeah, the metals are different,
the PCBs are different than what you'd have in an engine.
Yeah.
Okay.
Think about even labels.
Oh yeah.
The plastic sheets.
The adhesive.
Yeah, the adhesive, yes.
Yeah, if you dunk those in, that was one of the first things I learned when we spent some
time with immersion cooling: anything with a label, within a few minutes those labels have
all sunk to the bottom.
That's right.
And you talk about the need for cleanliness.
I guess you have a little more flexibility in immersion cooling.
But when you're doing direct to chip with something like PG-25 mixed in,
the purity of that fluid becomes critical.
Because as you well know, but for anyone listening,
all those heat sinks have little channels that the water, the fluid goes through.
And as those gunk up, then the efficacy of your cold plate starts to degrade.
And now we've got problems. So we'll get into all that. But actually, since I already brought it up,
tell me about the cooling segments that you're interested in.
Obviously, air cooling is the most common in most data centers, fans, giant heat sinks.
Even coming off Dell Tech World,
we were there a couple weeks ago,
they're showing now GPU servers that are
10U, giant heat sinks, still trying to manage
sort of the last gasp of air cooling
on these very high-end systems
until they have to do something else.
But direct-to-chip is one where you guys play
and immersion is the other, right?
Both.
So you mentioned the heat sinks today;
what we call the clearances inside, right, the gaps,
they are actually smaller than a human hair,
which means they are less than probably 70 micrometers.
Again, if you have any particles, any, let's say,
even bacteria or bio growth, it's going to gunk up the
system.
So it's, like you said, critical to keep the system clean.
So when we do the direct to chip type cooling using PG-25, we have a patented formulation
which is low solids, which means our formulation inherently has much less solid content.
So there's much less chance of gunking up the liquid passages in the system.
And even if we have a little bit of a leak, because those additives we use are liquid-based,
they will just evaporate away.
So they're not going to form salt bridges on your electrical components.
They're not going to cause any shorting either.
Okay. Yeah, so let's stick with direct-to-chip, because the cold plate is what is very
popular now, with Nvidia showing the NVL72, and people are starting to kind of
wrap their heads around it. Liquid cooling direct-to-chip has been around
forever. I mean, we have to go all the way back to the 60s,
probably, to the big mainframes that had it and used it.
That kind of went away, because efficiency caught up.
But gamers know closed loops from their own systems
and are familiar with the concept;
this isn't much different at scale.
We're still trying to get the heat off of the CPU, GPU.
Even now in some of these new systems, SSDs, all the NICs,
everything could be liquid cooled.
And then we've got to get the heat out somewhere,
cool it down, cool the fluid down,
and then bring it back in.
When you think about your value add there
in that ecosystem, what do you do as Valvoline
to differentiate? You talked about some of the patents and the design, and I guess you're able to carry that
forward from decades and decades of what you've done before.
But for PG-25, for instance, how much difference is there in that?
So not all PG-25 is the same. But like I mentioned, right, our formulation is patented, so that's one. And we do carefully choose the ingredients to formulate the product to have the best
thermal transfer efficiency.
Again, the base is roughly the same, 75% water, 25% PG.
The critical thing is the additives: the corrosion inhibitor, the anti-foam, and also the other components we're putting in
to make sure the system stays clean and lasts for a long time.
And also we, from the coolant days, automotive coolant,
we understand the testing,
we understand the condition monitoring.
So we will be working with our customers
on the condition monitoring too.
So we'll help them understand the fluid property
to monitor any situation that could be happening, right?
And one last thing: because we have worldwide
production scope on the coolant side,
we can produce the PG-25 globally.
So from a supply chain perspective,
we are prepared to support any customer
that has a global deployment anywhere.
You hit on a couple of things that I think IT admins get concerned about when they start looking at bringing in a liquid loop of any kind into the system.
Even if it's, you know,
we've seen plenty of servers that have a closed loop in with a little radiator.
That's kind of like the starting point, right?
Because you don't need to connect anything. You don't need a CDU.
You don't need the heat rejection system.
But as you go upscale,
we're immediately concerned about leaks for one.
And when you think about the ecosystem
that gets involved in a liquid loop system,
I can go buy my servers and GPU servers
from Dell or HP or whomever,
but ultimately there's so many other players and there's the hoses,
the cold plates, the CDU, the whole ecosystem.
There's so much there.
And quick connect. That's actually another critical component, right?
You have to make sure you can disconnect quickly and no leak.
Right. That's critical too.
Well, you're part of that,
but you've got to be integrated with CoolIT
and all these other big vendors,
Vertiv and everyone that's out there
selling these solutions.
The number of partners involved
to pull something together like this is really big.
And I guess the collaboration becomes more critical
than maybe ever for these large-scale systems.
Absolutely, absolutely.
This is a system perspective, not an individual component.
For us, we work with the device manufacturers,
the cooling device, and we together will be working
with the hyperscaler to make sure the whole system works,
that we talk with each other, right?
So we actually heard about one supercomputer
system where the server maker, the coolant provider,
and the service provider
were not talking to each other.
They had deposits forming in the system,
nobody knew where they came from,
and they were all pointing fingers.
So that's a problem when you do not have communication,
you do not understand who is responsible for what.
So for us, it is critical, like you said,
collaboration, communication,
understanding the system design,
bring the best benefit to enable the system
to work the best.
Yeah, well, I mean, you look at a typical data center
and they don't have a water quality guy on the team.
They've got plenty of electricians and HVAC guys
and controls guys, they have all that.
But they don't have a water quality person.
And so when you look at a solution where external water
could be involved, well, the water in Cincinnati
could be different than other places.
I guess you've got to do things to be more aware there.
But when you think about the role of your PG-25
in these systems and what the cooling system has to do
to get back to IT to say,
hey, there may be a service problem here,
or there could be a leak somewhere in the system.
That's not necessarily your responsibility,
but you're part of that ecosystem.
That's one of the things that those guys
are most concerned about is,
how do I identify something before it happens,
or before it's a problem, I guess.
And I don't want my GPUs getting hot
to be my first indicator that we need some sort of service or one
falls out of the cluster because it overheats.
Absolutely. So one thing we are working on,
not only with the cooling device providers or the hyperscalers, we're also working with sensor
manufacturers. So we're putting sensors into different places in the cooling loop so we can find a problem,
if there is a problem, way ahead of time, right? So we
measure some critical properties of the fluid on the PG-25 side. For example, pH, turbidity,
right, the conductivity. So we will understand what's happening with the fluid. And as you know,
these days the sensors are getting better and better. And also the response latency is almost none.
So you can real time monitor the situation,
the health of the fluid without being too late.
So that's the key.
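For a rough sense of what that kind of condition monitoring implies in practice, here is a minimal sketch of a threshold check on loop-fluid readings. The parameter windows and the sample values are hypothetical placeholders, not Valvoline's or any vendor's actual specifications.

    # Minimal sketch: flag coolant readings that drift outside an acceptable window.
    # The limits below are illustrative placeholders, not real product specifications.
    LIMITS = {
        "ph": (7.0, 10.0),                  # acceptable pH window
        "turbidity_ntu": (0.0, 5.0),        # cloudiness, a proxy for particulates or bio growth
        "conductivity_us_cm": (0.0, 100.0)  # electrical conductivity, microsiemens per cm
    }

    def check_coolant(sample):
        """Return a list of alerts for any reading outside its window."""
        alerts = []
        for key, (low, high) in LIMITS.items():
            value = sample.get(key)
            if value is None:
                alerts.append(f"{key}: no reading")
            elif not (low <= value <= high):
                alerts.append(f"{key}: {value} outside {low}-{high}")
        return alerts

    # Example reading from a loop sensor (values are made up for illustration)
    print(check_coolant({"ph": 8.2, "turbidity_ntu": 7.5, "conductivity_us_cm": 40.0}))
    # -> ['turbidity_ntu: 7.5 outside 0.0-5.0']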
Yeah, I wonder what,
because the cold plates now are,
I don't wanna call them dumb,
but they don't have much technology.
They've got hard technology built in, but they don't have sensor technology in most
of them.
But it would be something that maybe at the connector level or somewhere in there to be
able to get that real device specific kind of information might be interesting at some
point as that matures.
Sure.
Because we think about all these systems like iLO from HPE or iDRAC from Dell, we can monitor every
DIMM slot, every drive slot, the CPU, the heat, the thermal, the fans, we can manage
all that. But the liquid loop just injects a whole lot of uncertainty at times into that
sort of typical IT motion.
I think for them today they have the capability for pressure and temperature, right? But we
can add, like I say, more parameters on the fluid itself,
and then we can translate that into the whole-loop situation, let's say.
Yeah, it's interesting because as this ecosystem is developing so quickly,
and I'm sure you see this as well, there's a pretty wide chasm between organizations
that have really nice complete solutions
with the CDUs, the cold plates, the heat dispersion units, all that and there's
there's others that are not quite so far along in the journey which I guess for
you guys you've got to be somewhat selective in terms of the right
partners to make that work.
Absolutely, we want to, again, we want to work
with the partner to be able to provide the complete solution, right,
instead of only a partial solution.
So in the US, I think our split has got to be pretty heavy on cold plate from a liquid cooling perspective.
That is correct.
Do you have a sense on immersion versus cold plate?
We see immersion as more of a longer-term deployment.
Not currently; most of the users are still focusing on the cold plate, or direct-to-chip.
The reason is that direct-to-chip can be retrofitted to the current data center infrastructure.
If you talk about the whole tank immersion, the whole infrastructure will change quite
significantly. So the data center construction will be from the ground up.
For you to retrofit, it's relatively difficult. But again, immersion has
its advantages.
So let's talk about that. What do you guys see as the key advantages of immersion?
The key is, for direct-to-chip, no matter what, you still have some components
that are hard for you to cool.
So in most direct-to-chip systems today, we still see a little bit of air cooling.
For the residual parts, you cannot use the cold plate to cool.
So you still combine the liquid with a little bit of air.
Your PUE still is not as low as you would like to achieve.
So for the immersion, you can cool 100%.
So, I mean, a simple number is that the liquid,
or a typical fluid, is roughly about 1,300
to 3,500 times better at heat conduction than air. So you know, of course,
water has better heat transfer, but oil is a little bit less.
Water is harder to dunk your servers in.
That's right. That's right. So that's the range, from 1,300 to 3,500, right?
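For context on where a range like 1,300 to 3,500 times can come from: one way to arrive at numbers of that size is to compare volumetric heat capacity, density times specific heat, against air, rather than thermal conductivity alone. A back-of-the-envelope check with approximate room-temperature property values (rough textbook figures, not numbers from this conversation):

    # Rough volumetric heat capacity comparison: density (kg/m^3) * specific heat (J/kg.K).
    # Property values are approximate room-temperature figures for illustration only.
    fluids = {
        "air": 1.2 * 1005,
        "mineral oil": 850 * 1900,
        "water": 1000 * 4180,
    }
    for name, vhc in fluids.items():
        print(f"{name:12s} {vhc:>12,.0f} J/m^3.K  ~{vhc / fluids['air']:.0f}x air")
    # air ~1,206; mineral oil ~1,615,000 (~1,340x air); water ~4,180,000 (~3,470x air)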
But for the oil we are using today for the immersion coolant, it's a little bit lower
in thermal conductivity, but it's extremely low in electrical conductivity.
So basically it's dielectric.
So you can put the whole system into the liquid, and then every part is soaked in the
liquid, right?
So then heat is transferred to the liquid, then you have your pump in the back, you can
circulate the liquid.
Yeah, that's part of the trick, right?
As the TDPs get higher on these components, the fluid can't just sit there or else it
just gets hot next to the thing, right?
There's got to be movement there. What is the theoretical limit of what you think you can do
with immersion in terms of, I don't know how you measure it,
watts per rack U or something?
How do you think about that?
So what we are seeing is that
there's a couple technologies, right?
So the most, let's say the mature technology today
is probably tank immersion.
We've seen tank immersion can go as low as, let's say, on the PUE side, 1.03, 1.04,
which means you're significantly lower than your air, right?
Air is roughly 40, 50%.
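As a quick reminder of what those figures mean: PUE is total facility power divided by IT power, so the overhead fraction spent on cooling and power delivery falls straight out of it. A tiny illustrative calculation:

    # PUE = total facility power / IT equipment power.
    # The overhead fraction (cooling, power distribution, etc.) is PUE - 1.
    for pue in (1.50, 1.40, 1.04, 1.03):
        print(f"PUE {pue:.2f} -> ~{(pue - 1) * 100:.0f}% energy beyond the IT load itself")
    # PUE 1.50 -> ~50%, 1.40 -> ~40%, 1.04 -> ~4%, 1.03 -> ~3%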
Here you're talking about 3 to 4% extra energy used for cooling.
Yeah. I guess we should get to the differentiation that there are kind of some different ways
to do immersion. In my mind, every time I say it, I'm thinking of the tank on its side,
Submer, Midas, whomever, where you're slotting the servers in from the top. You talk
about power efficiency though,
I was at Addison's data center in DC not long ago,
and they're a big supporter of immersion,
and one of the things that they said was,
their failure rates on every single component
in the immersion systems go through the floor, almost none.
And RAM, storage, CPUs, GPUs all do way better
in that design.
And while it's still a little bit scary for,
you know, another IT admin challenge,
because now you're talking about a crane
and tables and drips and that kind of thing.
The savings and lack of service
may be kind of an interesting byproduct
of the benefits that immersion can provide.
Yes. So again, we are comparing air versus the, let's say, the immersion fluid, right?
So I actually have a model here.
So hydrocarbon.
So we use hydrocarbon as a cooling fluid.
It's only carbon with hydrogen, right?
Nothing else.
So when you have heteroatoms on this molecule, then you will
create a problem. For example, if you have fluorine or chlorine, that's a forever chemical, right? We are
only using single-phase immersion, which is hydrocarbon. So it's a very, very, let's say,
pure chemical. It's very inert. It's not going to react with anything. Same as air: you put your system in air,
and it's similar to putting your system in immersion fluid.
Yeah, we've seen some creative immersion solutions.
Hypertec is one that's really designing servers
for immersion, which I think is interesting.
And then there's a bunch of guys that just,
sort of retrofit a Dell or HP or Supermicro or whatever and dunk them in.
Which can be fine, but I think what the industry learned very quickly, and I'm sure you guys saw this in the labs,
is that cables designed for air-cooled servers don't last long in oil, and stickers and glue and things that are on drive labels or DRAM labels
will fall off and foul up your oil cycling
system in those tanks.
What were some of those early learnings like for you guys?
For us again, you're absolutely right.
So the compatibility is really the first thing we have to do.
So if you have a system that wants to implement immersion cooling, we will ask what kind of
materials they use, what kind of system they're using, right?
The metal, the plastic, the elastomer, the
thermal interface material. If they have thermal glue, we want to understand what glue they're using.
Early on, we've seen with some of the cables that the color or pigment can be picked up by the fluid.
The fluid will change color.
So even though we all know, like I say, the color
is not an indication of performance,
to the end user, that's everything.
It looks, yeah, it looks foul.
So we have to understand that,
we have to work with the system maker or the hyperscaler
to make sure we have the compatibility problems
resolved at the
beginning, before we really go into implementing the liquid.
By the way, I don't think the mic's picked it up, but I'm pretty sure that was a V8
I just heard, which really lets you know you're at Valvoline when you hear a nice V8 fire
up in the background.
But staying focused on the data center for a little while longer. So
it's not, I mean, I guess it's not really your job to qualify cables, but you end
up there anyway in a certain way, don't you?
That's right. So one other study we recently did is on wicking. So you want to make
sure your liquid is not wicking in between the wire,
the copper, and the casing, right? So those things are really critical for us to
understand: what kind of, let's say, manufacturers are producing the
best cables we can recommend to a customer, that's probably what you can use.
And some of the PVC or some of the type of plastic for the cover could become really brittle, right?
So we will suggest people use polyurethane if possible, things like that.
We've got to be at the point now as an industry where that's pretty widely available though, right?
A liquid-immersion-certified cable.
Yes. We have two organizations
we are very actively involved in. One is OCP, the Open Compute Project.
So we are in there, like some weekly calls; we get together with them, we understand the material list, right?
That's one. That's mostly immersion and PG-25.
Then there's another organization called ASHRAE. So ASHRAE is a little bit more on the PG-25 side
because it's the HVAC route, right?
ASHRAE also has a wetted materials list as well.
So the lists are getting longer and longer.
But again, those are giving the customer
or the user a guideline on what materials
you should be careful with,
what materials you should be testing with
before you implement the liquid cooling.
The other question we get all the time is about networking and the light transfer for
some of the high-speed fabrics through the oil.
If the lubricant gets involved in there, does it manipulate or deflect light or do something? I know
you've worked through that. What's your feeling on that concern?
So today a lot of the, let's say, hardware manufacturers solve that
problem by moving the switch out of the liquid, right, so it sits on top of the
liquid. So that should bypass the problem. But for most of the systems we are seeing today,
with the switch or the connections in the fluid we're using,
we have not seen much of a problem.
We've heard there are some, what we call, signal integrity issues,
but so far our studies have shown
the fluid has not caused that problem.
But if there were something, we could manipulate
the dielectric property
or dielectric constant of the fluid
to mitigate that problem.
We do have that, let's say, tool in our toolbox.
So you talked about OCP,
and I think that's an interesting one.
I think you guys were there last year in October.
I think I remember seeing the booth.
It's gotta be weird for people that don't know you
when they walk into this little expo
in San Jose Convention Center.
They see Valvoline and think, what the hell are those guys doing there,
if they're not used to seeing that kind of thing,
because OCP talks to a wide group of people.
True, true.
I don't know if you were there, but what's the feedback you get from that audience?
Again, like the number one, like you said, people think, oh, you guys are car guys.
Why are you in the computer show?
But to us, again, we try to reiterate what we have learned in 150 years.
From the lubricant, from the coolant side, we take that learning to a high performance computer.
And we try to educate people: heat transfer is
heat transfer, and material compatibility is chemistry, right?
We have both, we understand both,
we can put that both into use.
So again, we are, let's say relatively new
into this industry and we have a lot to learn,
but we are learning and we are progressing.
OCP, I think, has two different working groups
or whatever they call them.
I believe they've got a liquid cooling immersion
and a liquid cooling directed chip.
That's right.
Two separate working groups, right?
Yes.
So they're getting really serious.
I mean, Meta's put a lot out into OCP
about what they've done.
Intel as well, right?
Nvidia, Microsoft.
Again, these are impacting the performance
of their hardware in the future.
Well, Nvidia, really, right?
Because they're driving TDPs through the roof
with their GPUs.
And whatever comes next, I mean,
it's only going to get more intense.
Yes.
We've talked a lot about the Direct-to-Chip
and then also Immersion,
but you work with some partners at Data Center World
a couple, I guess, two or three months ago now in DC.
You were there with Iceotope.
I know you've been a partner of theirs for some time.
They do it a little bit different.
That's right.
And unfortunately, I think most of their servers
have metal lids on them, but the display unit
with the acrylic lid on, you can really see in there
with the lights.
See what's inside, yes.
But talk about what they're doing.
I know it's not your product, but you're familiar with it.
Yeah, again, so when we talk about tank style immersion,
you put quite a bit of fluid, right?
Typically, like 200, 300 gallons per tank.
Okay.
You dunk the whole server, like say, vertically,
coming into the liquid.
But for Iceotope, it's precision immersion.
So the idea is kind of taking the direct-to-chip approach,
but you have a little bit less fluid per rack.
Then you pump the fluid onto the top,
and you spray the fluid onto the hot components.
Yeah.
Typical GPU, CPU, SSD, the DRAM, things like that, right?
Then the fluid comes down to the bottom of the chassis, as we call it, and gets pumped
back again slowly.
So the pump is a very small, let's say, power consumption.
Right.
Again, for using this precision immersion,
I guess one advantage is that you don't use... I mean, to us, it may not be an
advantage, but to the user...
You use less fluid?
Yeah, you use less fluid. But on the other hand, because it's
horizontal, retrofitting becomes a possibility.
Well, and it's more
comfortable because you're now back into a standard rack system, right?
And then you're breaking up in the back to move that fluid.
Yeah, it's interesting. I know they've been showing that with HPE systems for some time.
And they had a couple new things that were pretty cool. So yeah, I think that one's a fun one to explore.
I think we'll see maybe some of that in the lab tour later today.
Yes, I think so. And also they have a little
bit newer design these days. It's more active spray. You probably have seen that, right?
The surfaces are not sitting horizontal or vertical, they're at
an angle. When your fluid comes in, sprayed from the top, the fluid is touching everything as it goes to the bottom.
Well, that's the kind of the neat thing about their design is, yeah, it cascades over the entire motherboard.
Yes.
And you're going to pick up coverage for all those little capacitors and transistors, and your out-of-band management, and your NICs, and all that stuff is going to benefit from that cooling.
That's right.
Things that aren't even that hot, but they still wear.
That's right.
And we've seen some of these servers,
we've lost motherboards, not because the CPU socket died,
it's because some 49 cent part blows up on it.
And there's not.
No, but that you can do.
No, yeah, you don't have a solder kit,
normally, at the data center to go in there and fix it.
I mean, Kevin probably does, but most people.
Most people don't do it.
They don't solder anymore.
Yeah, they don't teach that, I don't think,
in server admin.
We didn't talk about it. You hinted briefly at it earlier in the conversation,
but multi-phase cooling. What's your take on that?
So multi-phase cooling has the advantage of the heat of evaporation of your liquid
going from the liquid phase to the vapor phase, right?
It takes more heat away from your components.
But the biggest problem is that right now, almost 100% of the molecules used for
two-phase or multi-phase cooling are the forever chemicals, the PFAS, right?
So the PFAS so far, I mean, as the industry,
as a whole society, we have not found a way
of managing it without causing long-term
environmental problems.
So for that reason, even though we studied it and, let's say,
it has an efficiency benefit, it's detrimental
to the environment, so we decided not to pursue it.
Yeah, it sure looks neat when it bubbles.
Yes.
But I guess that's not a good enough reason.
You highlighted the thermal properties.
I went for the visual.
Yes.
I think that when you say bubbles,
there's a second, let's say, complication there,
which is that you have to have a closed system.
Right.
Escape of the gas is problematic.
Even more problematic, yeah.
So for full immersion or tank immersion, if we look at this in 10 years, how different will that fluid be than what you have today?
We've seen the fluids getting better. Again, let's say, we look at four
major properties of a fluid when we design a system, right?
Again, we talk about viscosity.
That's number one, because you want the fluid
to be low viscosity, so you can flow easily.
You gotta move it.
Number two is thermal conductivity.
Number three is heat capacity.
Those two are linked together,
but they're not same property.
Thermal conductivity is how you take the heat away.
Heat capacity, how you're holding the heat, right?
They are linked.
Number four is the density.
So the denser the fluid, the better.
So the four properties you put together,
there's not a lot for us to manipulate if it's hydrocarbon.
So we are still working on making it, let's say,
more purified, better on these properties,
a little bit denser, a little bit lower in viscosity.
But you can imagine when the viscosity gets so low,
it becomes flammable, right?
That's problematic also.
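To see how those four properties interact, here is a small sketch of the standard heat-balance relationship: for a given pump flow rate and allowed temperature rise, the heat a fluid can carry scales with density times specific heat capacity. The property values below are rough assumed figures for illustration, not any specific product's data sheet.

    # Heat carried by a circulating coolant: Q = density * volumetric flow * specific heat * delta T
    def heat_removed_w(density_kg_m3, flow_lpm, cp_j_kgk, delta_t_k):
        flow_m3_s = flow_lpm / 1000 / 60   # liters per minute -> cubic meters per second
        return density_kg_m3 * flow_m3_s * cp_j_kgk * delta_t_k

    # Illustrative comparison at 30 L/min and a 10 K temperature rise (assumed property values)
    print(f"hydrocarbon immersion fluid: {heat_removed_w(850, 30, 1900, 10):,.0f} W")   # ~8,100 W
    print(f"PG-25 (75/25 water/glycol): {heat_removed_w(1020, 30, 3900, 10):,.0f} W")   # ~19,900 W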
Yes, so.
I think Europe or some other areas
have some more stringent requirements on.
More Asia, yeah.
Oh Asia, okay, right.
Have more stringent requirements, right?
We look at the flash point.
Right.
So 150 C and above is the North American requirement.
And in the Asian countries, it's 200 or 250 C.
Much higher.
Much higher.
But then when it gets much higher, the problem is that you have to use a much more viscous fluid.
Right.
Right.
Then you lose pumping power, yes.
Right.
So still oil-based.
Can you put additives into the oil to help any of those
four challenges?
Yes, you can. But then you have to be very, very careful. The number one thing you
don't want to break is the dielectric property, right? The non-conductive property.
Yeah, I guess if I'm going to dunk a couple million dollars of GPUs in some oil, I'd like it to not destroy my
equipment.
That's right. That's right. So there are quite a few additives
we can use, but to manipulate these four properties is quite difficult without
causing other properties to go...
Out of balance.
So then for the closed loop business with the PG-25, what does that look like
going forward? Can that be further enhanced, the fluid itself?
Yes, before we go to the PG-25, let me
actually go back to this. So a couple things we can do for the
immersion side. Number one is how do we make the hydrocarbon with less carbon
footprint? We do have ways of doing that, that's one. And second, can we use
bio-derived?
So for bio-derived, the problem is that
here we have all single bonds.
For bio-derived, you have some double bonds.
The double bonds are the problematic part.
They're reactive.
So how do we create bio-derived hydrocarbon
but with all saturated single bonds?
And then, is the cost acceptable,
right? We do have research going on; I think in the future we may be able to go that
route. Okay, so that's on the immersion side. If you look at the PG-25 side, it's similar. The
PG today mostly comes from a petroleum process, more like a by-product process. You take the propylene
oxide, you make the propylene glycol. But there are also wood-chip-based and plant-based
PG as well. But again, the cost is really, really high. You do get the benefit there,
so can we make the process better,
or can we make the volume bigger,
so eventually the cost is comparable?
Those are the things we are researching into the future too.
I'm regretting getting a C-
in high school chemistry right now.
Chemistry is the key.
So who would have thought, for aspiring IT professionals,
that a chemistry background would be useful?
It is useful.
Thankfully it's too late for me.
I'll just rely on smart guys like you to help.
We can work together.
Okay, we'll collaborate.
Yeah, we'll collaborate.
You do the nerdy stuff and I'll run the social media.
We'll figure that out.
That'll be no problem.
So speaking of that, we're going to go after this,
we're going to wrap up, we're going to go
to your data center labs.
Tell me what's in there. What are we going to see when we get in there?
So here in Lexington, Kentucky, we have the lab where we do the compatibility testing,
like I mentioned, that's the number one thing, right?
And because this is a working lubricant lab, we do a lot of oil formulation.
You will see that, and I can explain a little bit of it to you, but that's not data center
related. But we do have an immersion rack over there.
You will see that.
And also we will have some of the thermal
and electrical measurement devices in the lab too.
And finally I may show you guys some of
the research I'm most proud of, which is tribology. It's
not too closely linked with the IT side, but again, it is how lubrication, friction, and
wear come into play, right?
So that's the lab where we house the study of the rheological, the thermal, the electrical
properties.
Okay.
And beyond that, we have a lab in Ashland, Kentucky,
where you mentioned the closed-loop cooling system.
We actually set up a closed-loop cooling system there
without a CDU; we just pump the fluid around,
change the radiator or the heat exchanger, right,
from copper to aluminum, to see the efficiency.
Okay. And the same fluid we develop over here,
we test over there, and then if we see any benefit,
we'll come back, reformulate, and test over there again.
So you're really doing a lot of your own testing too,
not just relying on partners to do that work for you?
We do quite a bit of our own testing.
Okay.
Well cool, we'll go check that out next.
If you want more information on Valvoline
and what they're doing in the data center,
we'll put a link in the description for that.
And George, I mean, this was an awesome conversation,
I appreciate your time.
Thank you very much.
Thanks for having us down here.
Thank you.
Thank you.