In The Arena by TechArena - Ventiva Brings Innovative Solid-State Cooling to the AI Data Center
Episode Date: January 6, 2026
CEO Carl Schlachte joins TechArena at OCP Summit to share how Ventiva's solid-state cooling, proven in dense laptops, scales to servers, cutting noise, complexity, and power while speeding deployment.
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein.
Now, let's step into the arena.
Welcome to In the Arena. My name's Allyson Klein. We're coming to you from the OCP Summit in San Jose.
And I am so glad, because Carl Schlachte, CEO of Ventiva, is back in the Tech Arena studio with us.
Welcome back.
Glad to be here.
Good to be back.
So Carl, you were recently on the program discussing your innovative cooling solutions,
and we were talking all about laptops and different types of device deployments.
And here you are at OCP Summit.
So I'd have to ask, that seems like news in itself.
What does this mean for the data center?
I think what it means for the data center is that the work we put into designing cooling for laptops is directly applicable to everything you see in servers. To give you some quick metrics, power density in a laptop is actually higher than power density in a server.
That's a good data point.
Yeah. The reason is physics: you're talking about a much more constrained space with higher compute values than in a server. In a server, a 1U rack, for example, might be something on the order of 12 liters of space.
In a laptop, that's more like three liters of space, if not a little bit smaller, depending on the size of the laptop.
Yet the compute density tries to approach the same level.
And the result of that is everything that we've learned in laptops is directly applicable
to the stuff that you can do in servers, and that has some pretty concrete benefits for a server.
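To put rough numbers on that density comparison: the 12-liter and 3-liter volumes are from the conversation, but the wattages below are purely illustrative assumptions, not Ventiva or vendor figures.

```python
# Back-of-the-envelope volumetric power density (W per liter).
# Chassis volumes come from the episode; wattages are assumed.

def power_density_w_per_l(watts: float, liters: float) -> float:
    """Heat dissipated per liter of chassis volume."""
    return watts / liters

laptop = power_density_w_per_l(watts=180.0, liters=3.0)   # high-end laptop, ~180 W sustained (assumed)
server = power_density_w_per_l(watts=600.0, liters=12.0)  # 1U server, ~600 W (assumed)

print(f"laptop: {laptop:.0f} W/L, 1U server: {server:.0f} W/L")
# ~60 W/L for the laptop vs ~50 W/L for the server at these
# assumed loads, consistent with the point that the laptop is
# the denser thermal problem.
```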
Now, it's very timely that you bring this up, because we're hearing all about the cooling challenges at OCP Summit this week: rack designs cresting beyond a megawatt and facilities beyond a gigawatt. Can you describe how this challenge offers an opportunity for Ventiva to step in?
Sure. So I guess the easiest way to think about this is a 1U server.
So a 1U server is roughly, as I just mentioned, 12 liters of space. In implementations today, only three of those 12 liters is actually compute equipment. The rest is space. And the reason the rest is space, that other nine liters, is to allow airflow through the system to get rid of all of that heat.
Part of the reason it's that way is fans. If you remember what a fan looks like, just a square box pushing air in one direction, what you're doing is moving as much air as you can through that server to get cooling. And you want to do everything in a nice straight line. You don't want turns, you don't want baffles, you don't want any of those things. The challenge is, as processing power climbs to the kind of numbers you're describing, you get to a point where you actually can't move enough air to cool the thing, right? So you start to see things like cold plates show up. So wouldn't it be great if you could actually just put cooling right where the heat is? And that's where we show up.
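The air-moving limit Carl describes follows from the standard air-side sizing relation Q = ρ · cp · V̇ · ΔT. Here's a minimal sketch of it; the flow rate and temperature rise are assumed for illustration, not taken from the episode.

```python
# Heat carried away by an airflow: Q = rho * cp * V_dot * dT.
# Air properties are standard; flow and temperature rise assumed.

RHO_AIR = 1.2      # kg/m^3, air near sea level
CP_AIR = 1005.0    # J/(kg*K)
CFM_TO_M3S = 0.000471947

def heat_removed_watts(cfm: float, delta_t_k: float) -> float:
    """Watts of heat a given airflow can carry at a given temperature rise."""
    return RHO_AIR * CP_AIR * cfm * CFM_TO_M3S * delta_t_k

# A 1U server pushing ~60 CFM with a 15 K inlet-to-outlet rise (assumed):
print(f"{heat_removed_watts(60, 15):.0f} W")  # ~512 W
# Doubling chip power means doubling either the flow or the
# temperature rise, and both hit practical limits, which is why
# cold plates appear at the high end.
```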
So I know that we've been talking with a lot of vendors about liquid cooling solutions for the data center. They're talking about direct-to-chip. They're talking about immersion. They're talking about rear-door heat exchangers. But where do you see the move for more innovation?
So I would say, speaking as an engineer, the first thing you want to do
is use what you already have rather than try to create something new that brings all sorts of challenges with it. So we take an existing rack and say we can get rid of some of the fans in that system by replacing them with a device. This is our device. It's an air mover.
It looks almost like a harmonica.
Yeah, it kind of looks like a small harmonica. It's completely solid state, no moving parts, and it will move between one and two CFM of air. If you can put this directly where the heat is occurring in a server, what you end up with is a reduction in power needs. About 15% of a server's total power budget is devoted to cooling, specifically to the fans, actually.
So we've done a lot of work with some server folks on this, and if you directly implement cooling where the heat problem is and remove some of the fans that are in there, you can get a reduction of anywhere from four to five percent of the overall power budget for a server. And that sounds like a small amount.
It's huge. It actually turns out to be huge. If you do the math with just the deployments that exist right now in the United States, just talking about the United States, and everybody did what we said you can do with this and retrofitted all those servers, that's three Hoover Dams' worth of power.
Oh, wow.
Or 750,000 homes per year in savings. And you're not changing the square footage of the box at all.
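As a sanity check on the scale: the four-to-five-percent reduction is from the conversation, but the US fleet power and per-home consumption below are rough ballpark assumptions, so treat this as an order-of-magnitude sketch only.

```python
# Order-of-magnitude check on the savings claim. The 4-5% server
# power reduction is quoted in the episode; fleet size and per-home
# usage are rough assumptions.

US_SERVER_POWER_GW = 20.0   # assumed average US data center server draw
SAVINGS_FRACTION = 0.045    # midpoint of the 4-5% claim
HOME_AVG_KW = 1.2           # ~10,500 kWh/yr per US home (assumed)

saved_gw = US_SERVER_POWER_GW * SAVINGS_FRACTION
homes = saved_gw * 1e6 / HOME_AVG_KW   # GW -> kW, then homes supported

print(f"saved: {saved_gw:.2f} GW (~{homes:,.0f} homes)")
# ~0.9 GW continuous, or ~750,000 homes at these assumptions,
# in the same ballpark as the figures quoted in the episode.
```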
So I have a feeling that there's some tech in that piece of equipment that you just showed that's
more than a harmonica. So why don't you just talk about what you've actually invented?
Sure. There are actually two things. I didn't bring our power supply with me, but there's a little power supply that goes with this as well. What's going on in here, if you take a step back: in a fan, we're using power to turn a motor. That motor is attached to a blade. The blade spins, you know what I'm talking about, and all of a sudden I'm pushing air, right? What we do is remove that middleman. There's no motor. There's no blade. We power the air directly, and we use that power to move molecules of air. If you can sustain that process long enough, you get air moving.
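Powering the air directly, with no motor and no blade, is characteristic of ionic (electrohydrodynamic) air movers. As a toy model only, the textbook corona-thrust relation F = I · d / μ gives a feel for the force involved; the transcript gives no device parameters, so every number below is an assumption.

```python
# Toy estimate of ionic-wind thrust: F = I * d / mu, the standard
# relation for corona-driven EHD flow. All numbers are assumptions,
# not Ventiva device parameters.

MU_ION = 2e-4     # m^2/(V*s), typical mobility of ions in air
current = 50e-6   # A, assumed corona current
gap = 0.004       # m, assumed emitter-to-collector gap

thrust_n = current * gap / MU_ION
print(f"thrust: {thrust_n * 1000:.2f} mN")  # ~1 mN at these assumptions
# A millinewton-scale body force is tiny, but applied continuously
# along a short channel it can sustain a steady low-CFM airflow
# with no moving parts.
```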
That sounds a little like Star Trek. That's kind of cool. That's amazing. Now, there are unique opportunities within data center infrastructure for micro-delivery of cooling. Where are you targeting specifically in those data center racks?
So the evolution we see
happening in a data center rack is that the very, very high performance components,
CPUs, GPUs, things like that,
are all going to eventually move to things like cold plates.
You just literally can't move enough air across those under any circumstances. Cold-plate liquid cooling is going to happen there.
But you are now going to be in a situation in which you still need to, for example, cool memory.
You know, spaces between DIMMs get very tight, and you can't get enough airflow through them.
You can take something like this
and we can make it in different sizes.
You can have it blow air directly at that spot and reduce things like the single-bit errors that happen because of overheating, et cetera. That opportunity means, as soon as you can start to do those things, you can start to use less bulk airflow in the system to achieve the same ends.
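For a sense of what a one-to-two-CFM device can do at a hot spot like a DIMM bank, the same air-side relation applies; the temperature rises and the per-DIMM wattage in the comments are assumptions for illustration.

```python
# Heat a 1-2 CFM solid-state air mover can carry at a hot spot,
# using Q = rho * cp * V_dot * dT. Flow values come from the
# episode's 1-2 CFM figure; temperature rises are assumed.

RHO_CP = 1.2 * 1005.0        # J/(m^3*K), air near sea level
CFM_TO_M3S = 0.000471947

for cfm in (1.0, 2.0):
    for dt_k in (10.0, 20.0):   # assumed air temperature rise across the DIMMs
        q = RHO_CP * cfm * CFM_TO_M3S * dt_k
        print(f"{cfm:.0f} CFM, dT={dt_k:.0f} K -> {q:.0f} W")
# Roughly 6-23 W per device at these assumptions: small against a
# whole server, but on the order of a few DDR5 DIMMs' worth of
# heat (assumed ~4-8 W each) delivered exactly where it's needed.
```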
Yeah, that seems incredibly important when you start opening up density through cold plates at the highest points of heat.
I would say that's the second thing, right? So there's one about
increasing the overall efficiency of cooling in a system by doing this, but the other thing,
because we do this in laptops and because the power density of laptops is that much higher, we know how to deal with power-dense systems really well. So what you now have the ability to do with something like this is actually pull components closer together on the rack, meaning you can increase the compute density per server.
What does that mean?
That means that in the exact same footprint that I have in a data center, I can have
greater power efficiency and more compute density without changing the footprint at all.
And that's actually a very huge step in the evolution of those kinds of devices.
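To make the same-footprint argument concrete with the 12-liter chassis and 3-liter compute split from earlier: how much of the nine liters of airflow space could actually be reclaimed is an assumption here, not a figure from the episode.

```python
# Illustrative density math using the 12 L chassis / 3 L compute
# split quoted earlier. The reclaimable fraction is assumed.

CHASSIS_L = 12.0   # 1U chassis volume (from the conversation)
COMPUTE_L = 3.0    # compute equipment today (from the conversation)

for reclaim in (0.25, 0.50):   # assumed share of the 9 L of airspace reclaimed
    new_compute = COMPUTE_L + (CHASSIS_L - COMPUTE_L) * reclaim
    gain = new_compute / COMPUTE_L - 1
    print(f"reclaim {reclaim:.0%} of airspace -> {new_compute:.2f} L compute (+{gain:.0%})")
# Reclaiming even a quarter of the airflow volume adds ~75% more
# compute volume in the same rack footprint, which is the
# "more density, same footprint" point in concrete terms.
```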
Very cool. Now, you're at the summit, which you could call the who's who of the data center zoo.
What have conversations been like with the data center ecosystem as they've seen this approach?
Amazingly good. I say amazingly good because we've been working on laptops for five years.
So just a quick step back in the laptop space, we're going to be announcing some stuff probably at CES, some groundbreaking laptops that really change the nature of what a laptop is.
But the folks that have been developing that and the folks that we're working on that with
also happen to have data center groups.
And so word filters from one to the other. Word filters from the laptop folks: hey, you should take a look at this. And now we have a situation that is totally unique to us, because we're an upstart. In this situation, we are being reference-sold by the laptop engineers themselves, directly into the data center, without me having to say, oh, you should take a look at it.
Yeah, that's a nice referral.
It is.
I would also say the other thing that's super cool about it is that, because laptops are a very intense environment, we're going to be announcing that we've passed qualification. We're going to be announcing lifetime numbers. We're talking about very massive scaling. All of that stuff is already done. So the way we talk about our deployment in the server space is that we actually get the server for no additional engineering effort on our side. We don't have to create anything new. The thing that we are going to do is put a lot of application engineering into helping people understand how they can use it.
Right. I mean, there's
customized form factors and those types of things, but it seems like you're in the late stages of development.
Exactly right.
That's so exciting. We love featuring innovative technology
on Tech Arena, and I got to say you brought it today, literally and figuratively. Thank you so much
for being on the program. One final question for you. If folks are interested in what we've talked
about, where do you send them for more information and to engage with your team? Yeah, so they can go on
our website, www.ventiva.com, that's V-E-N-T-I-V-A dot com. And there's all sorts of contact information in there. There's FAQs, there's data sheets, there's all the rest.
Awesome. Thank you so much.
Thank you, Allyson. Lovely as always.
Thanks.
Thanks for joining Tech Arena. Subscribe and engage at our website, Techarena.AI.
All content is copyright by Tech Arena.
