Podcast Archive - StorageReview.com - Podcast #109: Direct-on-Chip Evaporative Liquid Cooling
Episode Date: August 9, 2022
There continues to be increasing interest in liquid cooling technology but also trepidation in… The post Podcast #109: Direct-on-Chip Evaporative Liquid Cooling appeared first on StorageReview.com.
Transcript
Hey everyone, Brian Beeler here, and welcome to the podcast.
Today we're talking more alternative cooling.
I'm not sure Udi would appreciate the "alternative" label, but we'll see.
Really interesting innovations that his company, ZutaCore, is involved with in terms of bringing
cooling technologies to the data center that are maybe easier to consume than some of the other choices.
So we'll get into all of that.
And without any further ado, Udi, thanks for joining us today.
Hey, Brian.
Thanks for having me.
Exciting times.
Not really alternative cooling.
This is the real cooling now.
I think some would argue that air is still the preferred methodology, but...
I'd say air is definitely still predominant.
But air is choking.
Air is choking.
I mean, we are speaking with major data center owners,
edge developers, partners, very large partners across the ecosystem.
I think the numbers are starting to show.
Air is choking. It's simply that applications are requiring high-performance processors.
To fuel the growth, customers need to find better ways to utilize their resources and really free cooling resources up for computing in order to achieve that.
And last but not least, thank God,
people have awakened to the need for a more sustainable future, and data centers, at their pace, are going to make a dent.
So I think there's a perfect storm happening.
Well, yeah, I mean, we've talked about this a lot, right?
About the,
especially the accelerators coming into these systems and driving these thermal challenges.
It was for a while a bit of an argument on density. So a lot of 1U servers stacked
together had some airflow challenges, especially when designers wanted to get four or eight CPUs
in there. Now it's kind of enough x86, and then
throw as many accelerators in there as we can, whether it's 1U or 3, 4, 5U, whatever.
The density of these high-performance cards is really what's driving a lot of this.
But I'm interested in your solution, and we'll get into the technical bits,
because the first time I saw you was at Dell
Tech World where you were showing, I think it was an R640, taking off the traditional heat sinks,
putting your cooling plate on, so re-gooping it on there, connecting your tubes and getting into
the system and off you go. I mean, that's just a traditional server, but that's perhaps one of
the lightest lifts, I suppose, when you look at liquid cooling for these on-plate systems.
What was the reaction at Dell World when you started talking about things this way? This is
a little bit different than we commonly think about it.
The reaction was actually great enthusiasm. I think the best was coined by one of the key executives at Dell, who really
came to say, this is liquid cooling for the mainstream.
Okay. I mean, he was walking to our meeting from the big exhibit room, where it literally took forklifts to bring in alternative liquid cooling solutions.
Whereas we literally walked into the room with that R640. I think that demonstrated and really made tangible
one of the key benefits of the ZutaCore technology
in terms of ease of implementation and use,
making it really viable for the mainstream market,
not just in greenfield, but also brownfield.
So let's talk about that a little bit,
because you've got some technology differentiations that I think are really important to understand.
When we think about liquid cooling, we've talked a lot about it already across a wide chasm of everywhere from a discrete loop on a GPU, like in a PC gaming rig, or Lenovo's doing some of this in their SR670 V2 to cool
the socketed GPUs. You've got a radiator and fans and it works the same way, but you're getting
closer. You've got rear door heat exchangers that'll take traditional servers but run liquid
through that panel to help cool off some of the exhaust and deal with some of those thermals.
You've got things like full
immersion where we saw some of that at Dell World too where the tank will be tipped on its side and
servers are slotted into the liquid and we have to get into this as well. That can either be
single phase or dual phase, because that becomes important to you, where it's oil-based and
single phase and just kind of plops around in that puddle, or the dual phase
where there's actually a phase change occurring that creates a liquid and vapor cycle for the cooling.
So let's get there. But there's so much going on. And now what you're doing is you're talking about
two-phase on the chip, which is something that I haven't seen anybody else doing.
And we've been looking at this space pretty closely.
So take that little summary of liquid that I did.
Talk about probably where I made mistakes, but where you guys fit in there with your solution because it is so unique.
You're not just running a cold water loop because that's kind of been done or being done.
This is really fundamentally different in the way that you're handling the cooling cycle there.
So actually, I think you've been pretty accurate.
I'd be happy to kind of provide a very quick overview of how the implementation of ours makes it so simple and scalable.
Essentially, we are very much like single-phase direct-on-chip solutions.
We use cold plates, we use manifolds, and we use condensers versus radiators that you mentioned before.
And on the surface, it actually looks pretty similar, except that the similarity stops there.
We approached it such that the liquid that we're using is dielectric, so there's no water in the server, which is one of the key enablers for both the customers and the technology,
not just that it eliminates the risk of IT meltdown altogether.
By virtue of using this dielectric in the specific way we use it and implement it, it makes the whole system much smaller
and more compact. Just to put a number on it, the amount of liquid that is needed in our closed-loop
system is a fraction of what single phase would require, and even smaller in comparison
to immersion. In a 100-kilowatt rack, we'd use less than three gallons in the whole system,
which means that we also, with the implementation of the two-phase happening as this
pool boiling on the chip, I don't know if the camera, yeah, you can see it, sure.
Yeah, for the audio guys, he's holding up a plate so you can see what it looks like.
Yeah, so I'm holding a plate, a cold plate.
It's actually an evaporator.
You can see how small and compact it is.
And it would remain compact, because unlike single phase, we are not built on specific heat and therefore not dependent on flow and inlet temperature.
In the case of the dual phase, it's all about latent heat, which means that this small pool
of dielectric material will be boiling if and only when the heat is generated, which means that it would evaporate upwards.
And that means that we are not bounded, and we're not dependent on flow,
we're not dependent on the temperatures, and therefore we're not bounded by the heat flux,
nor by the amount of heat that is generated by the device, which gives a future
proofing within the same compact slim design.
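To make the latent-heat point concrete, here is a rough back-of-envelope comparison per kilowatt of chip heat. The property values are generic textbook numbers (water, and a representative dielectric with a latent heat around 140 kJ/kg), not ZutaCore's published figures:

```latex
% Single-phase water loop, sensible heat with a 10 K temperature rise:
\dot{m}_{\text{water}} = \frac{Q}{c_p \, \Delta T}
  = \frac{1000\ \text{W}}{4180\ \frac{\text{J}}{\text{kg K}} \times 10\ \text{K}}
  \approx 0.024\ \tfrac{\text{kg}}{\text{s}}

% Two-phase dielectric, heat carried as latent heat of vaporization:
\dot{m}_{\text{dielectric}} = \frac{Q}{h_{fg}}
  = \frac{1000\ \text{W}}{1.4 \times 10^{5}\ \frac{\text{J}}{\text{kg}}}
  \approx 0.007\ \tfrac{\text{kg}}{\text{s}}
```

The two-phase flow is also independent of inlet temperature as long as the pool sits at its boiling point, which is the flow- and temperature-independence Udi describes, and is consistent with the small closed-loop volume he quotes for a 100-kilowatt rack.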
Now, if you compare it, and this is why, again, connecting it to the previous discussion about
mainstream, you can actually take existing servers, and definitely new ones that are coming, and without the need to compensate with
bigger mechanical devices to allow higher flows, and with less dependency on inlet temperatures,
you can maintain all of them in a very compact way in those 1U or 2U chassis, allow the
densification into the rack, and therefore
bring it in a very simple, easy, cost-effective way into the data center.
All right. There's a ton to start to unpack here because I think we've got to do a little work
because again, what you're doing is different. Not that fundamentally, but your delivery mechanism
is substantially different. So a lot of the liquid cooling loops that we've seen,
if we go into an HPC data center or somebody that's embracing these things,
is that you normally see the rack or two of gear, of HPC gear,
and if you look around the back, there'll be a bunch of black piping,
and all the systems will have an inlet and an outlet plugged into the back.
And then there's normally a rack next to it, which is just a pump, effectively,
and a water exchanger that moves the liquid around.
And eventually they've got to go cool that liquid somehow
and run it, in some cases, through a swimming pool or off the roof line or something
where there's some sort of way to recapture that heat or exhaust that heat, recapture ideally.
In your system, just walk us through what that looks like because the little heat sink plate
you were holding up is only a few millimeters tall. So you're talking about a reaction of when that
dielectric fluid comes in, makes contact with the plate that's hot, and it causes that fluid to,
I guess, boil at that point? Is that the right, or off-gas? What is that process?
So the process is indeed boiling. It's a little bit the other way around. So this evaporator, this cold plate, which is
acting as an evaporator, holds a small, tiny amount of this dielectric liquid, and that
creates, quote unquote, a pool, a small pool of this liquid sitting there idle. And instantly when heat is being generated,
right, the processor is starting to warm up, instantly, without any
wait, the boiling happens at the desired temperature that we need to maintain.
That process of turning from liquid to vapor
is what carries the heat in a very efficient way
to the tubing that takes the vapor out.
Both the liquid tubing and the vapor tubing are
connected to a set of manifolds, in-rack manifolds, and those would go into this heat rejection
unit, which essentially is a dual-function device, known as a CDU in single phase, but in our case it's a condenser.
So one function is condensing the vapor back into liquid and making it available again for
the system in a closed loop.
And the other is to reject the heat to the primary loop, whether it's air or another
liquid. There are typically primary water loops; in some cases it might
be another dielectric material that we're working with other partners to take out. That process
is both very efficient in terms of cooling but also very scalable and available in a
variety of use cases, because those heat rejection units, those condensers, come in different shapes and forms,
anywhere from in-rack solutions, so you can actually
rack and roll into an existing hybrid environment, which is a big benefit because
data center owners do not need to commit themselves to a major forklift up front.
But of course, if they want to, they can.
And then the family extends to end-of-row solutions, as well as external solutions,
which would be super efficient from a sustainability or energy point of view.
We also have solutions that keep the room neutral, and would actually capture the heat generated by the fans
back into our system and keep the room neutral,
although it's part of a larger data hall.
So again, it's very nearly 100% heat capture that we provide.
I guess later we'll talk about scenarios that would make the heat reuse
in a further extension of the benefits of this particular technology.
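As a hedged summary of the loop just described, the steady-state energy balance is simple: whatever heat the chips put into the dielectric through boiling is handed to the primary loop at the condenser (an idealization that ignores losses and the small pump and fan power of the heat rejection unit):

```latex
% Evaporator (cold plates): chip heat absorbed as latent heat of vaporization
Q_{\text{IT}} = \dot{m}_{\text{dielectric}} \, h_{fg}

% Condenser (heat rejection unit): the same heat rejected to the primary loop,
% e.g. facility water with a temperature rise \Delta T
Q_{\text{IT}} = \dot{m}_{\text{primary}} \, c_{p} \, \Delta T
```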
Yeah, we should do that. So what you're talking about then is still just like a standard liquid
cooling loop. You've got an inlet and an outlet basically where your fluid is going in
and then coming out as gas and then being sort of repatriated back into fluid
and then sent around again.
But are you actually pushing or are you using gravity for this as well?
I'd want to be clear on that.
It's gravity, and you hit the nail exactly on the head.
This pool of liquid is sitting idle in the evaporator and would not boil at all
if there's no heat generated. So you can imagine multiples of those sitting across the servers and
across the rack and across the data centers.
Some processors would work at turbo mode.
Others would be idle.
Others would be just running at some percentage of load.
All of them are connected in parallel, but each one of them is self-regulated,
meaning the process of evaporation, and therefore the process of
cooling, happens instantly but on demand, only when needed. That's one of
the reasons why we managed to build the system
with such a small amount of liquid, one of the reasons why it's so compact and why it's so energy efficient.
The condensers that I described before, just again to put a number on it: an in-rack condenser that
can handle upwards of 70 kilowatts of computing would draw just about half a kilowatt when the system is running at full
load. So that tells us how energy efficient the system is.
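Put another way, using the figures just quoted, the cooling overhead at the rack is tiny:

```latex
% Condenser power as a fraction of IT power (figures quoted above):
\frac{P_{\text{condenser}}}{P_{\text{IT}}} \approx \frac{0.5\ \text{kW}}{70\ \text{kW}} \approx 0.007 = 0.7\%
```

That is, the in-rack condenser adds only about 0.007 on top of the IT load, a rack-level cooling overhead of under one percent; facility-level loads are not included in that figure.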
Seems that way. Talk a little bit too about the process. So again, I saw the video, and
we can link to that in this
description so people can check it out, because it is relatively easy: you're unscrewing the
large air heat sink, repasting it, dropping your guy on, running tubes out the back, presumably
connecting, you know, one to each manifold there, and off it goes. Now that was a CPU example. I imagine you can work on
anything else, socketed GPUs. Are you also looking or doing RAM or other things within the system
that perhaps aren't as intuitive as the CPU and GPU?
The answer is yes, yes and yes. Today we're privileged to be
qualified by both Intel and AMD with their existing lines of processors,
including Sapphire Rapids. We're working with both on next-generation
processors. We are extending the coverage as we speak to a number of GPUs,
as well as going beyond processors of various kinds into DIMMs and VRs (voltage regulators). That
solution would come out soon. In fact, applying this technology is rather attractive and technically simple.
To extend it to 100% of the board, it's a matter of running the economics
and seeing, in a system solution, whether it makes sense to invest in the on-board solution versus the kind of 100% heat capture solution at
the rack level that I described before.
Right.
So it strikes me, though, that your solution is overall then relatively simple, and it's
a bit of a retrofit.
What does that go-to-market look like today? I mean,
you can work on any system. It's somewhat irrelevant, but do you have a network of
resellers that do this work? Because it's not something that, as you're configuring your PowerEdge,
you can select liquid cooling today, or your dielectric cooling. That's not available just yet,
but how do these things get consumed?
Obviously, in the HPC space, those are often very bespoke custom jobs, and so that's one thing.
But what about the rest of the world?
And as we said, this is going mainstream, so your question is right on.
And we pride ourselves on developing the ecosystem in general and the system integration partnerships in particular from day one.
We've been working very closely with some of the largest data center customers from a very, very early stage of developing the technology, and thereafter bringing the products to market, and it was clear to us that the go-to-market needs to be predicated on working
with and through system integration partners that would perform the work, under training and
formal certification of course, just like they do, by the way, on server
configurations regardless of direct-on-chip cooling.
So for them, it's a natural expansion of their services
and their customers are counting on them
and have the confidence in them providing the services.
If you go on our website, check it out.
We're really grateful to have some of the largest system integration
partners on board, fully certified, fully trained, and actually performing this kind of
role as we speak with some of the most demanding customers across various verticals,
among them, of course, World Wide Technology, WWT.
We're also working very closely with the server OEMs,
a number of them.
And again, if you follow some of the publications,
you'd see that we're working, for example,
very closely with Dell Technologies
and particularly with Dell OEM.
Eventually, those would be provided as SKUs from several OEMs.
And meanwhile, this job is performed under certification
by the system integration partners.
Okay.
So let's talk then, too, a little bit more about the loop
because a lot of the feedback we get
is that traditional enterprises that haven't
conceived of putting these systems in are afraid of leakage. So they're afraid of one of the lines
springing a leak, and now their 48U or 2U of server gear is wet. So let's understand that.
And then let's talk a little bit more about servicing this and how often fluid needs to be added.
Let's talk to some of the scary administrative details of these loops.
So in your case, if one of those, let's start with something simple.
One of the blue or red tubes to a CPU springs a leak. What happens?
So nothing of catastrophic failure will happen because this is dielectric. In fact,
we demonstrated and people loved the fact that you can pour the liquid onto the board and nothing will happen.
So from that perspective, it's safe.
Now, we designed the system, of course, to be leakage-free,
and have all the monitoring in place to keep a close eye on it.
Actually, that's one of the strengths of the system,
the software, and we'll talk about it later. And if there is a leakage through one of the tubes, you
know, one of the tubes being cut somehow, what would happen is a normal shutdown
of that particular CPU. That's it. That's the worst that would happen in that case. This is one of the key
differentiators of the system, of course. Not just that eliminating the
catastrophic failure is a huge value and allows the data center operator to sleep well at
night, but also, again, it is one of
the things that feeds into why the system is so much simpler.
Whereas in case of bringing water to the server because of
the catastrophic failure risk,
they need to create
much more stringent and robust mechanical tubing.
Ours is very flexible and easy to route into any server.
And you talked before about how much more densified those servers are becoming,
with multiple processors and GPUs.
It's becoming hard to route those kind of robust bulky elements into the server.
It also eliminates the need to put in all kinds of additional layers of safety,
which cost money, take space, and create more complexity,
such as negative pressure systems.
Those are not needed in our case.
As I said, the worst that would happen by design is that this particular processor or board
would basically shut down.
Okay, so that's one thing, and then you could service it, replace your tubes
or whatever, and be off and running again. The other one you talked about
was needing
just a couple gallons for effectively a rack of gear. How often, in your case, does this two-phase
fluid need to be replaced or added to? Is it like a car's oil, where it's somewhat consumable if you
drive heavy and may need a top-up before, you know, your six-month checkup? How does that work?
So the system is a closed loop. Practically, with the exception, of course,
of any significant breakage that would cause leakage, there's no loss of liquid. There's a tiny little diffusion over time, but we're talking about less than a percent over the term of usage of the system. If there's a lot of insertion and extraction of servers, the reason is every time you put one in and out, you bring in and out this small pool in those evaporators,
so it might aggregate to a lot, but this is not a typical data center environment or use case.
So in most cases, there's really no need to refill the system.
In fact, we have systems running at, again, I speak about what we can speak about publicly, Equinix.
For a couple of years now, we haven't had to fill them up.
The system is up and running.
Now, because it's a closed loop and because of the characteristics of the liquids we are using,
actually the level of purity remains intact.
And a few years later, if and when you need to remove the liquid,
the liquid remains intact and can be reusable.
Okay.
So that's the physical service.
And I'm sure you do regular checks, or a customer would,
or their partner, on fittings and tubes and connections and all that sort of thing, as some sort of, I don't know, semi-regular physical check.
So as we think about it as the loop continues through, then the last real big thing is the heat. So I still have to dispose of this heat somewhere, which is both an opportunity but also a challenge,
depending on what sort of scale you're talking about.
What are your customers doing with heat?
At what different sizes?
I mean, if you just have a rack of gear, can you exhaust into your attic AC?
What's going on?
What are the choices there?
Okay, so the key word that you used, Brian, is choices.
The choices are determined by the customer environment.
As we know, there are multiple different environments
and even within similar environments, there are some
heterogeneous environments that they need to deal with.
This is where ZutaCore introduces a variety of heat rejection units, which is really where
the solution would be determined based on the needs.
So let me touch on just a few examples here. In a typical data center environment that is only running on air cooling today,
there's no facility water primary loop. This is actually one of the areas where
the ZutaCore solution would shine, because we are bringing in the heat rejection units such that they would extract the heat, in the in-rack solution, out into the hot
aisle, and typically the existing infrastructure to remove the heat from
the hot aisle, or cool it back, has enough capacity, for a variety of reasons. Among them: if you deploy it at scale in these
kinds of situations, you can increase the ambient temperature of the cold aisle, which would
result in a very significant energy saving. With a variety of different heat rejection unit
families, such as our rear-door condenser, so that's not the
typical rear-door heat exchanger, this is a condenser, you could even reduce the
CFM and achieve even greater saving.
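The CFM point compounds, because under the standard fan affinity laws (a general rule of thumb, not a ZutaCore-specific claim) fan power scales roughly with the cube of airflow:

```latex
\frac{P_2}{P_1} = \left(\frac{\dot{V}_2}{\dot{V}_1}\right)^{3}
\qquad\text{e.g. } \dot{V}_2 = 0.8\,\dot{V}_1 \;\Rightarrow\; \frac{P_2}{P_1} = 0.8^{3} \approx 0.51
```

So trimming airflow by 20% cuts fan power roughly in half, on top of the savings from raising the cold-aisle temperature.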
So the handling of that, and quite uniquely for ZutaCore,
provides for a solution in air environments
that is able to not just deal with high-power processors
at very high density,
but do it within existing data center power envelopes, and at the same time
achieve power efficiencies. This is a distinct benefit if you look at the other options to do it
with single-phase cooling. It's not coming even close to the densities that we can deal with.
Another family is a family of heat rejection units that would reject the heat to a liquid, most
typically, in many data centers, into the primary loop of facility water.
And there again, we can use very high inlet temperatures, so you don't need
chillers; if they have them, fine, but you can
really work in environments that would go upwards of W4 with ASHRAE, which again makes for very
efficient use cases. And complementing those is what I alluded to before, the end-of-row units, which from a power savings, PUE, and therefore energy efficiency point of view would provide
the most compelling solution. Of course, those would be designed for much larger deployments
into the hundreds of kilowatts and into megawatts. So that gives you the wide scope of
the family of products that provide
solutions for different data center environments.
This is also, I think,
a good opportunity to just introduce, and if you'd like
we can discuss it further, that with some of those products
we can provide the highest quality of heat for heat reuse, and do it in a way that not
just provides upwards of 70 degrees Celsius, but guarantees it even in fluctuating loads of the data center and a
heterogeneous environment of CPUs. But let's keep that for later.
Well, one of the great things with guys like you, that are operators but
also nerds, is that you understand technology and can go deep there as appropriate.
Because I'm sure this is a real conversation.
When you go talk to an organization that hasn't done something like this before,
if they're already in on Liquid, then they're in,
and the sale gets a lot easier.
But for first-time and for some of these enterprises now
that have AI practices that didn't before
and have to deal with these things. The conversation with someone at the CXO level versus the practitioner, I mean,
there's going to be a wide chasm there. I hit on some of the maintenance concerns that we get hit
with every time on these things. But then there's the operational concerns and to be able to hit on
all of that's important. But fundamentally, every time I get the pushback from, you know, we're never putting liquid in our
data center. Well, I don't think you have any choice in another, maybe two server cycles,
maybe one kind of depending on your workloads. But if you're after these high performance workloads
and leveraging the GPUs, you're either going to have to have
twice as much gear so you can space it out or figure out some other way to cool these things
in a more dense environment. And maybe in the US we're somewhat spoiled in terms of data center
space to be able to spread out a little bit. Or maybe even we have to because a lot of the
colos, as you know, don't have enough power in that rack.
If you wanted to populate 42U of GPUs, you just can't physically do it. But in Europe and other
parts of the world, the density problem is very real. Are you seeing, I'm sure you're seeing that,
but are you seeing a greater adoption of liquid cooling in some of these other places that might
have a stronger green initiative than we do in the U.S., and more of a density challenge?
You know, I would say yes
if we were speaking maybe six, twelve months ago. I think what's happening, I mean even in the U.S., is phenomenal. We see clients, data
center owners and operators that would be known to be the slowest to adopt, the most
conservative, moving into action.
And the reason they are moving into action is exactly tying back to the points you just said
and that we discussed at the beginning of this call.
There's this confluence, this perfect storm that is happening.
I mean, some of them are signed up by their CEOs to 2030 goals. There's no way in hell
that they will get close to those goals if they don't start implementing liquid cooling today
as part of the strategy. I'm not saying it's a magic wand, that it's the only thing they need to do.
They need to do some other things to realize that.
And, just as an example,
in their existing data centers,
they can deploy today,
but they would need
to redistribute the power to the racks, right?
Power distribution is going to be key.
But that's something that is, relatively speaking, easy,
and easier than going and starting to build a new data center
that would be designed for that, that would come online, in the best case, in say
three years, but will never help them achieve their goals. And it's not
stopping there. As you said before, those applications are requiring the
higher-power processors, they are moving up with the families of processors, they need
to cool them, they bring in GPUs for some of those applications, and they are still
within the power envelope that they have.
So now another forcing function for them is to shift this power from cooling to computing.
Otherwise, again, just operationally,
it's not going to happen.
Now, it's not taking away the absolute need
for them to go through all the testing and qualification
and make sure that this solution
is really delivering what it's promising to deliver
and that all the operational and maintenance and economics
need to work.
Budgets are not unlimited.
They're actually very stringent on those budgets.
This is where the other point we discussed before
comes into play.
It's one thing, with all due respect, to look at ZutaCore, but those customers are customers that have deployments at scale and globally.
They need to be assured that their providers can deliver it, that their providers can service it, and do it at global scale. And this is where, again, ZutaCore is now
really enjoying the fruit of the tremendous investment we've made in developing our
go-to-market partners. So let me ask you this too. I mean, obviously, when we're talking rack scale,
hyperscale, HPC scale, there's a lot of easy wins there for you. Is there a play
or does there need to be a player? I'm still looking for the easy wins. If you
send me some easy wins I'll take them. The easy emotional wins. As you look
further down the stack, as you look at edge computing, as you look even at SMB, is there
maybe not just for a file server for an eight-person accounting office, but is this technology
a player in smaller physical sizes, or is it just not quite necessary yet?
The answer is absolutely yes. This is exactly the point that this executive at Dell made. This is mainstream. Yes, it serves the hyperscalers and the large deployments well, but this is mainstream. And to this discussion: when we're working with our HPC partners, some of whom have done this for life,
they really put their finger on a very interesting point there that relates to this question.
There's the HPC, you know, the supercomputers, the very high end of the HPC clusters,
but then there's what they coin very nicely
the forgotten middle. They use the words "the forgotten middle," and this is actually
what you're describing. Those are the enterprises that have significant
capacity of non-HPC, but given the applications, and we see it across
verticals, we see it in manufacturing, we see it in healthcare, we see it definitely in finance, you see it
across the board.
There are applications that are requiring them to move to an HPC-like environment, and
although it's not going to be hundreds of racks, only portions of the racks within
their installation, they have the need for liquid cooling.
Now, you also mentioned the edge.
In its broader sense of edge, right, we know edge can take different shapes and forms,
but this edge has some commonality in the sense that it is being done in many
cases in urban areas where space is a big, tremendous constraint.
It is done in modular data center environments, for a variety of benefits that drive modular. And actually, even in the non-urban areas,
there are requirements that have to do with things we talked about before,
needing to do it with little to no service.
So all of those use cases, when you put them together, drive the need, and now the
solution, for the mainstream that is not necessarily the hyperscalers. Right. Well, it's a problem that
we face here in Cincinnati, Ohio. We've got eight racks of gear. Our main lab has six. And in the summer, right now actually, when it's 95,
96 degrees, we can't run everything we want to all the time. Further, to be
efficient, we're open-air cooled, so that works for us about nine months of the
year, but three months of the year it's not so good pumping in the 96-degree air.
Brian, now I know what you meant by an easy win. There you go.
Yeah, but you haven't seen my budget. But no, that's a real challenge for us. If we had something like this... I just think that, and we've talked about it a couple
times today, there's a general discomfort with traditional IT in making this decision, whether it's you or
any of the other liquid solutions, because it's just different. It's a little bit scary. It's a
little bit, oh my gosh, now what do I have to do? I've got to pump these pipes into the attic or
into the basement, or I got to do something with this heat and run this complicated loop that I don't have.
Or if I'm in an office building that I can't modify, where I have very little control over
that, not a traditional elevated floor data center.
So I think these are some of the things that pop into customers' heads in terms of, can
I even do this?
Ooh, yuck, it's scary.
No, let's say we can't and just stick with air for as long as possible.
Again, that's very good observation.
That's reality.
I think liquid cooling in general, and this is not ZutaCore in particular, for the first time, I mean, liquid cooling has been there for a long time,
but I think in what we're seeing now, liquid cooling requires a convergence between the traditionally separated silo of infrastructure, power and cooling, and the other silo of IT. By virtue of the fact that the liquid now somehow gets to the IT,
regardless of the liquid cooling technology, there's a convergence. And that forces those
groups to actually collaborate. Sometimes it's the IT that takes the lead because the pain is bigger there, and sometimes it's the infrastructure.
But very quickly, those two silos are now getting to work together.
And yes, it's a change.
The beauty, and again, I'm saying it humbly
because this is really what our customers and partners are telling us, the beauty in the way ZutaCore implemented it, as I said before,
is it takes away many of the concerns.
The simplicity makes it much more attractive for those different groups
to realize that, hey, yes, we need to deal with some level of
change, but it's minimal. We are
eliminating the need for
forklift projects altogether.
In many cases, we eliminate the need to even deal with
water infrastructure.
So this whole notion of no forklift, of little to no change to not just the infrastructure,
but also to the trades, to the existing operational processes, to even the service agreements, leveraging the same service agreements,
same providers, puts customers at much greater ease than dealing with other alternatives that
are, in fact, more difficult. Well, so talk about that because the simplicity should help then on this next
question of economics. So how do you rationalize the investment required to do this? Because it's
not, obviously, nothing, and it's more capital expenditure than just buying your air-cooled
PowerEdge or whatever and being off and running. There is some equipment. There is an investment in infrastructure on one hand.
On the other hand, you could drop some or all fans
depending on the configuration.
So you've got some power savings there.
You've got additional efficiencies from cooling the equipment
that perhaps you pick up on and maybe even some electrical savings.
But how do you help customers rationalize the financial decision?
Again, you already described it.
There are typical ROIs, by the way, that we've seen,
and it's project dependent, of course.
The typical ROIs are in the one-to-three-year range,
which is completely viable in most cases.
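As a purely hypothetical illustration of how such a payback pencils out (every number below is made up for the sketch, not ZutaCore pricing or measured savings):

```python
# Hypothetical simple-payback sketch -- all figures are assumptions for illustration.
RETROFIT_CAPEX = 40_000.0      # $ per rack: cold plates, manifolds, in-rack condenser, labor (assumed)
FAN_POWER_SAVED_KW = 5.0       # server fan power removed or turned down (assumed)
COOLING_POWER_SAVED_KW = 10.0  # facility cooling power avoided vs. air (assumed)
ENERGY_PRICE = 0.12            # $ per kWh (assumed)
HOURS_PER_YEAR = 8760

annual_savings = (FAN_POWER_SAVED_KW + COOLING_POWER_SAVED_KW) * HOURS_PER_YEAR * ENERGY_PRICE
payback_years = RETROFIT_CAPEX / annual_savings

print(f"Annual OpEx savings: ${annual_savings:,.0f}")  # ~ $15,768 with these assumptions
print(f"Simple payback: {payback_years:.1f} years")    # ~ 2.5 years, inside the quoted 1-3 year range
```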
There's also the notion of, I think, pricing elasticity that you can expect
as volume ratchets up. Essentially, customers that are rationalizing it for near-term projects
at smaller scales are enjoying the benefits, just like you said, of eliminating CapEx
and definitely enjoying OpEx savings that are very tangible, in order to rationalize the
economics. And customers that are doing it with a view into the
future, and I'm not talking about the far future, I'm talking about volume,
right, when they are moving into significant amounts of megawatts, aggregated megawatts,
at the end of the day,
they do understand, and we are not hiding it,
what the cost of those elements is,
and one of our key partners actually did
a phenomenal detailed analysis of that.
At volume, this is going to be competing with air.
Now, it's not yet there, but at the end of the day it's literally the cost of materials. Now, it goes beyond that, and I want
to touch here a little bit, kind of segue a little bit, into the software, right? We
talked a lot about the hardware system that is the core offering. Having said that,
for this work we've done with those large customers and partners, ZutaCore has
invested a lot in developing
complementary software. I'm saying complementary
because it's not needed for the
actual operation of the hardware system.
But this was developed and is called software-defined cooling,
because it comes as another branch under the software-defined data center of its various kinds,
you know, networking, power, et cetera.
The idea there is twofold. One, we are sitting at literally the
endpoints in those data centers. We're sitting on the processors, we're sitting on the GPUs,
we're sitting on the DIMMs. So imagine the basically
almost infinite number of endpoints that we're sitting on.
That enables us to build the software-defined cooling based
on a lot of data that we're collecting through the system,
performance, temperatures, flows, etc.
This becomes a big data pool.
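To make the endpoint idea concrete, here is an illustrative sketch of the kind of per-endpoint record such a layer might aggregate. The field names and values are assumptions for illustration, not ZutaCore's actual schema or API:

```python
# Illustrative per-endpoint telemetry record for a software-defined cooling layer.
# Field names and values are assumptions, not ZutaCore's schema.
from dataclasses import dataclass
import time

@dataclass
class CoolingTelemetry:
    endpoint_id: str          # e.g. "rack12/server3/cpu0" (hypothetical naming)
    device_type: str          # "cpu" | "gpu" | "dimm"
    device_temp_c: float      # reported device temperature
    evaporator_temp_c: float  # cold-plate / evaporator temperature
    heat_load_w: float        # estimated heat being removed
    timestamp: float

sample = CoolingTelemetry(
    endpoint_id="rack12/server3/cpu0",
    device_type="cpu",
    device_temp_c=62.0,
    evaporator_temp_c=48.5,
    heat_load_w=280.0,
    timestamp=time.time(),
)
# Records like this, gathered across thousands of endpoints, form the "big data
# pool" that monitoring, alerts, preventive maintenance, and the optimization
# features described next would build on.
```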
First and foremost, we are shipping our system today
with what we call the software-defined cooling platform,
which is a very highly granular
monitoring and control system. No
customer would want to fly blind, and it gives them
the opportunity to really know exactly what's happening in the system everywhere, including all
the alerts that feed into preventive maintenance in case it's needed, etc. But it also provides us
the ability to build on the big data and really venture into additional features and functionalities that bring tremendous
value. I'll touch briefly on two.
One is dynamic frequency scaling, in some applications that can enjoy it.
We actually optimize the run of the jobs over a larger number of processors that run at lower frequency or lower power
instead of running them at full throttle. The job would end at the same time, and it
would account for upwards of 20% of additional power saving on the servers.
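The intuition behind that saving is the usual CMOS dynamic-power argument, shown here in an idealized form that ignores static power, memory-bound phases, and parallel overhead (so the real-world number lands lower than the ideal one):

```latex
% Dynamic power: P \propto C V^2 f, and since V scales roughly with f, P \propto f^3 per core.
% Spread the job over 1.25x as many cores at 0.8x the frequency:
\text{throughput ratio} = 1.25 \times 0.8 = 1.0 \quad\text{(the job finishes at the same time)}
\frac{P_{\text{new}}}{P_{\text{old}}} = 1.25 \times 0.8^{3} \approx 0.64
```

That is about 36% less power in the ideal case; with real-world losses the net saving comes down toward the "upwards of 20%" figure Udi quotes.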
A completely different application of that is going into this heat reuse that I mentioned
before, which allows us to control the system such that we can guarantee 70 degrees Celsius
of heat coming out under fluctuating loads. Very, very tough to do without this kind of software.
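As a toy illustration of that control problem (this is not ZutaCore's software-defined cooling, just a minimal proportional-control sketch with made-up numbers), one could hold the heat-reuse outlet near 70 °C by modulating the secondary water flow through the condenser as the IT load swings:

```python
# Toy sketch: hold the heat-reuse outlet near 70 C under fluctuating IT load
# by modulating secondary water flow. All values are assumptions for illustration.
SETPOINT_C = 70.0      # desired heat-reuse outlet temperature
CP_WATER = 4186.0      # J/(kg*K)
INLET_C = 45.0         # assumed heat-reuse loop return temperature
KP = 0.002             # proportional gain, kg/s per degree of error (assumed)

def outlet_temp(it_load_w: float, flow_kg_s: float) -> float:
    """Idealized condenser: all IT heat ends up in the secondary water."""
    return INLET_C + it_load_w / (flow_kg_s * CP_WATER)

def control_step(it_load_w: float, flow_kg_s: float) -> float:
    """Too hot -> raise flow; too cold -> lower flow (never below a minimum)."""
    error = outlet_temp(it_load_w, flow_kg_s) - SETPOINT_C
    return max(0.1, flow_kg_s + KP * error)

flow = 1.0
for load_w in [60_000, 80_000, 100_000, 70_000, 50_000]:  # fluctuating rack load
    for _ in range(200):                                    # let the loop settle
        flow = control_step(load_w, flow)
    print(f"load={load_w/1000:5.0f} kW  flow={flow:.2f} kg/s  outlet={outlet_temp(load_w, flow):.1f} C")
```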
By the way, it doesn't specifically require the ZutaCore
hardware to run underneath.
It can run with practically any hardware.
So all of that is to say that there's the tremendous benefit
from the liquid cooling.
As we talked on the hardware side,
there's actually a whole new level of benefits
that customers can retrieve from the software.
Well, I mean, that's pretty cool too, right?
Because you talk about software
as if it was inherent in your design,
but we've seen throughout IT history
where companies get very myopically focused on
the product that they're delivering and kind of forget about the broader ecosystem, the visibility
into how things work. And I mean, we often see that today of like, oh yeah, you know, just plug
our API into whatever, like, okay. I mean, that's cool for a lot of people, but for many, they want
a little more of a curated, manicured experience in terms of
seeing that data and trusting you to be the expert to help show that and what that should look like
and how that should be interacted with. So all of these things, you're meeting people at these
trade shows and having these conversations. What does it look like if somebody wants to check this
out? Because this isn't something where you can just send them a box of your
plates that you were angrily shaking at me before
and say, just chuck them onto your servers and off you go. What is a
POC in this model? Is there
one or is it sort of binary? How does that work?
It's actually simpler than it sounds.
All that the customer does
and typically the partners that we train and certify
are the ones that are leading those. That tells you how easy
it is. It's not like a very complicated
process. Partners aren't very smart so you have to keep it simple right?
is that what you're saying?
I'm saying that, with their investment,
they are actually smarter than us,
because they figure out ways to do things
at a large scale and globally.
And I'm saying one of the reasons
that it is so attractive for those partners,
and therefore for the customers too,
is that it's designed to be simple
even from a POC point of view, right?
So we've done and are doing
a growing number of POCs.
I actually just recently looked
at the global map, right?
The globe.
And we have so many of them spread around
across a few continents now.
All that's needed to know is really what
processors they're using. So we make sure that we have those cold plate
evaporators ready. And if not, by the way,
it's a process that takes about three months to come up with
a new one if there's justification for the volume of course.
Okay.
So we need to know what the processors are.
We need to know what server models they prefer to do,
just so we optimize the tubing for those, or the partner does that.
And then the number of servers,
and practically we'd like to know the type of rack,
just so we check that there's nothing out of the ordinary.
And of course, into which data center environment
it's going: is it air, do they have facility water,
what's the total
power that they want to do, etc.
That's practically it.
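Collected together, the intake Udi describes amounts to a short site survey. Here is a hypothetical record of that information, with field names and values invented purely for illustration (not a ZutaCore form or API):

```python
# Hypothetical POC site-survey record covering the items listed above.
# Names and values are illustrative assumptions only.
poc_request = {
    "processors": ["Intel Xeon (Sapphire Rapids)", "AMD EPYC"],  # determines which cold-plate evaporators are needed
    "server_models": ["Dell PowerEdge R640"],                    # so tubing routing can be planned
    "server_count": 16,
    "rack_type": "standard 42U",                                 # confirm nothing out of the ordinary
    "environment": {
        "heat_rejection": "air",      # or "facility_water" if a primary loop exists
        "facility_water": False,
        "total_power_kw": 40,
    },
}
```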
We can turn it
around today, and again
this is actually a great segue.
We can turn around
units,
not just for POC, but also for
production deployments.
Not at unlimited scale, but at scale, in rather
quick lead times.
And that's not because we are sitting on a tremendous amount of inventory, nor that we figured something
out in supply chain that others didn't.
It's because, in comparison to other liquid cooling systems, because of the simplicity and because of inherent characteristics such as the fluid being dielectric, the materials that we're using are more common.
A specific example: the quick disconnects that we're using are not required to be
completely dripless. You can have a drop when you connect and disconnect and nothing happens, right? Those are much cheaper
and much more available than the ones that are very stringent, you know. So the dielectric decision
lets you have a little more flexibility in terms of that. Yeah, because if you're dripping
oil or something, it needs to be... right, water. One example. And there are many examples like this, which allows us, in this very challenging environment of supply chain,
to be pretty nimble. So to your question, and maybe hopefully you would consider one as well.
We can turn it around in just a few weeks. No, I was about to say you missed the part where I was about to list off our Eaton racks, our Dell servers, the couple others we needed.
But the fundamental difference between what you would like in a customer, again, comes back to that budget.
So we're going to have to work on that one.
We might have to do some eval testing for you guys. But all right.
So it sounds like, and this is news to me, I actually had no idea that you guys would
support a small environment like that for a POC or even a small environment.
So I think that's really worth noting because I think in my head, and obviously incorrectly,
I often think about these as large scale solutions and not something that's consumable as much
in a smaller organization. So that's great to know.
Where do people go? Zutacore.com, Z-U-T-A-C-O-R-E?
That's a great place to start. And again,
you there can see the links to
customers or partners of ours, such as WWT, I mentioned, Boston.
There are a number of partners.
You can go directly to them as well.
Well, this has been great.
I appreciate your time. When I first saw you at Dell Tech World with that R640, I had no idea that there was even another choice out there in terms of, you know,
just what's available in this cooling business. So you guys are doing something that's very
distinct and very cool. And so we're glad to help spread the word. Thanks for coming on and talking
to us about it. Appreciate it. Thank you. Thank you for getting the word out to the community at large.
And we're looking forward to continue.
Very good.
Thanks again, Udi.
Thank you.
Have a great day.