Not Your Father’s Data Center - The State of Innovative Liquid Cooling in Data Centers
Episode Date: July 26, 2021
Isn't the first rule in data centers not to get the servers wet? They're kind of like Gremlins in that way. But someone forgot to tell Daniel Pope this fact. Pope, the CEO of Submer, thinks data servers belong submerged in liquid. He was thrilled to speak to host Raymond Hawkins about his immersion cooling technology, which provides a new green approach to data centers, on this episode of Not Your Father's Data Center. Raised in England and now residing in Barcelona, Pope began his career at 16 when he started his first data center. Beginning with a single server in his bedroom, Pope grew his business to more than 20,000 servers. With his data center expertise established, Pope now loves the challenge of pushing technology and the industry further. Today's increasing demands on data centers pose a cooling challenge, one Pope knew he wanted to solve. And that set him off to help develop immersion cooling technology, the solution Submer focuses on today. "Instead of cooling the electronics and the server components with air, we leverage a dielectric fluid, a nonconductive fluid, which captures and transports the heat in a much more efficient way than air," Pope said. The initial challenge for Pope was how to use this technology to cool the racks without disrupting the rest of the data hall and data center design. "And, now, further into this journey, we're looking at it from the whole data center point of view," Pope said. Pope's immersion cooling system works for solid-state drives, NVMe, flash drives and helium-sealed drives. Immersion cooling reduces IT power load in the center by removing all the fans from the servers. And, from a density perspective, the density is ten-fold. "We are deploying immersion tanks that are in the range of 100 kW that operate with extremely warm water, which means the overall facility PUE (power usage effectiveness) is reduced to around 1.04 to 1.05," Pope said. And that PUE number is before the energy savings from the fan removal are calculated.
Transcript
Welcome to another edition of Not Your Father's Data Center. I'm your host, Raymond Hawkins,
and we are recording today on Thursday, June the 17th, to give you some perspective.
The world continues to climb out of the pandemic and things are looking better. I got to ride in my
Uber today without a mask.
My driver said, hey, I'm fully vaccinated.
Are you?
Let's ride without masks.
So things are looking up.
Today, we are joined by the founder and CEO of Submer, Daniel Pope, out of Barcelona,
Spain.
Daniel, welcome today to the podcast.
And we're so grateful for you joining us.
Thank you, Raymond.
It's a pleasure.
So, Daniel, I got to tell you, we talk about some unique and interesting things here on the podcast.
Talking about dunking my servers in liquid is not one I ever thought we'd cover.
Talk about a little bit of outside-of-the-box thinking. This is different, and I'm anxious to hear how it works, because for my entire career
we've been worried about getting liquids on our servers and how that could ruin our data center, and very anxious about where the plumbing runs and what we use for fire suppression.
Liquids and servers have always been bad, so I'm fascinated to hear this story.
But before we get into how we dunk servers in liquid
and keep them running, I would love to hear a little bit about you. Where are you from?
Grew up? Maybe a little bit of work history and how you ended up with a British accent in Barcelona.
Absolutely. Thanks, Raymond. Yeah. So maybe a little bit about myself then. Daniel Pope, I was a professional rower, started at a really young age rowing
professionally. And I guess that's the kind of discipline you need in a data center, right?
So the next thing I actually did, apart from rowing, was start a data center business at a
really young age. At the age of 16, I switched
on a server in my bedroom and started developing a little website and providing web hosting and
email services on that server. Now, that was back in 1999. And that very quickly, with the dot-com boom, grew into a pretty large business.
I ended up with more than 20,000 servers, not in my bedroom.
Of course, my parents chucked me out by then.
I was going to say that's a big bedroom, Daniel.
Yeah, we went through the process of building, I think,
four different facilities on that journey.
We just couldn't believe the scale that things were getting to.
And that's the journey where I became an expert, I guess, in data center design and operations after building four of them. And one of the biggest challenges that we always had in the data center
was cooling and how to support the next wave of equipment
that was being deployed back then.
I guess looking at really low densities in the racks,
probably in the range of, I'd say, five to seven kilowatts,
most probably. That didn't change. I sold my
business in 2009, and I was still in the range of seven kilowatts per rack. Now, it was only in 2015
when we realized that a new challenge was going to get to the data center floor, which was GPUs making their way into the rack.
And that's really where we started to see rack densities skyrocket,
and especially some specific applications and workloads
really pushing to levels that were not even possible to cool with air anymore.
So we set off to develop immersion cooling technology,
and that's what Submer does today. Our flagship technology is around single-phase immersion
cooling. Essentially, that means changing the medium around the server. So instead of cooling
the electronics and the server components with air, we leverage a
dielectric fluid, a non-conductive fluid, which captures and transports the heat in a much more
efficient way than air. And I'm sure we're going to talk quite a lot about that.
Well, so we got the rowing history, we got the "I started a data center in my bedroom" history, and you ended up doing three more data centers after that. Give me a little bit of where you live, your personal background. I am fascinated that you're in Barcelona, but you sound a bit like a Brit. So tell us a little bit about the personal front, if you don't mind.
So yeah, I'm based in Barcelona. I was actually born in Switzerland.
Interestingly enough, my mother was working for the United Nations for some time.
And that's where my accent came from,
of course. I lived in the UK until the age of around nine, so I grew up in the UK,
but then at one stage my mother said to my father, Steve, the food and the weather here is terrible. Can we please go
back to Spain? And he didn't really have an option. How in the world could she prefer the
food and weather in Barcelona over London? I mean, it's lovely in London for the whole first week of
July. That's about it. Yeah. So it was a no-brainer, and I've lived in Barcelona ever since. I feel
very, very Spanish, and specifically Catalan, from this region of the world where I sit.
And Barcelona is an awesome city, very forward-thinking, and it's very easy to find
great resources and candidates for all the
stuff that we're doing here at Submer.
And essentially it was in the summer of 2015, when it was as hot as today,
so I think around 35°C, and I was sitting with my co-founder,
Paul, looking at the swimming pool and saying,
Paul, if we feel so much better in this swimming pool,
what would happen if we stick servers into a dielectric liquid and cool them with that?
And we started doing some initial experimentation that summer,
built some of the first prototypes,
started to test some of the
fluids that were available, and immediately saw the benefits and the potential.
Literally, you're sitting by the pool and you decide, I like the pool better in the hot weather,
maybe my server will too. That's how this came to you?
I swear, literally, that's it. Yeah, that's what happened.
Holy cow. Yeah, because as a guy who was designing data centers early,
you saw not much change in the density of the rack, you know, five, seven kilowatts,
before you saw that first real jump and thought, wait a minute, cooling with air might be tough.
It didn't strike you in that environment.
It struck you by the pool.
That's a fascinating story.
I like it.
You said earlier, Daniel, you said, hey, let's change the medium around the server.
Let's not make it air.
Let's make it something else.
Did you think about it from a single server perspective, or were you thinking about, hey, I'm going to have a really dense rack,
and let's do it for a rack, or did you really think, hey, this might be a way to manage an entire data center?
Tell me about those early thoughts after the pool.
So we were looking at it from the rack level to start with.
Obviously, all the first prototypes were at the server level.
And you can see tons of those photos on our website. But actually, if you go to the Wikipedia article on immersion cooling, one of our first prototypes is on that page.
So we started testing the thermals at the rack level and how we could deploy these new liquid-cooled
racks without disrupting the rest of the data hall and the rest of the data center design.
That was kind of the key thinking. And now further into this journey, we're really looking at it from
the whole data center point of view. So what does a hybrid environment
where you have low-density equipment
and higher-density equipment look like?
And what benefits can one of these facilities leverage
by rolling out immersion cooling?
So as I think of dipping my servers in liquid,
it makes me incredibly anxious because of three decades of worrying about the stability of my servers.
As you thought about that for the first time, there's challenges there, right?
I mean, first of all, just putting liquid around the servers scares me.
But I think about the disks.
I think about the plugs.
I think about, you know, I/O devices, whether it's drives
or thumb drives. How do you start to think through those problems, and what do you have to do to the
server to make dipping it in liquid an okay thing?
So one of the things that surprised us first, when we were running these tests on the single-server configurations, was how simple
it was for a node that was designed for air to actually work in an immersion environment.
So we didn't need to change a lot of things.
And I'll go now into the things that do need to be adjusted.
But essentially, we removed the fans, we made sure that the BIOS alarms weren't
going off when the fans were removed, and we didn't need to do much more. Back then,
in 2015, maybe SSD drives weren't as common as they are today in the server space. We can't leverage spinning disks.
That's the only thing that we can't leverage in immersion.
The only spinning disks that can be leveraged are helium sealed drives.
But a traditional spinning disk is not pressurized.
It's at ambient pressure, and it has a hole in it to make sure that that's always the case.
So obviously, through that hole, the fluid gets in, the speed at which the disk spins is hugely reduced, and so it becomes useless.
But solid-state drives, NVMe, flash drives, helium-sealed, they all perform perfectly in immersion.
When it comes to the design
of the nodes, well, this is a tank, it's an immersion tank,
so look at it as a big deep-freezer kind of system. And the biggest challenge
back then was that you can't reach the back of the rack, right?
That's one of the biggest challenges for immersion, I guess.
Servers that are designed to be manipulated from the front and the back are a substantial obstacle.
So we've been leveraging some standards that are out there, like the OCP systems, the Open Compute systems that hyperscalers leverage.
In fact, we're a platinum member of OCP. Those systems are designed to be manipulated from only one side of the rack, and they have an additional benefit, which is that the power is distributed
not through cables but through power bus bars at the rear of the rack,
which in our case is at the bottom of the tank. And it makes it super interesting because we lose
hundreds of cables in the facility that we don't need. And it simplifies things.
The rat's nest goes away, right?
Yeah, yeah.
But then in the 19-inch type of form factors,
there's lots of servers that can be leveraged perfectly in immersion now.
The whole idea of just having everything on one side
of the rack is becoming more and more common.
You see it not only in OCP, but also in Open19
and in some other standards.
So that journey is much simpler now, I guess.
So Daniel, I didn't even think about that.
I mean, you're living it every day and have for years,
but the challenge of, yeah, we're not doing liquid immersion standing up, right? I mean,
you almost think about I've turned the rack on its side and I'm sliding the servers in from the top
so that the fluid stays all around it, right? Because otherwise, if we stood the rack up
vertically, right, it'd be hard to keep the fluid at the top. I got you. So I've laid my rack down,
I've got access from the top, and I'm dipping it into the liquid from the top, like you said, a tank.
It just took me a minute to think through why that works that way.
That's correct.
Servicing it all from the top instead of the backside. Got it.
That's right. They're horizontal tanks.
The servers are installed vertically instead of horizontally, I guess.
And then the first question that would come to mind is,
oh, then it uses up double the floor space than a vertical rack, right?
We don't have the height.
So one of the first questions that pops up is,
okay, so then this must be lower density than a traditional data center deployment
because you don't have the height.
That's one of the most common questions.
More physical footprint, right.
Now, so from a server U perspective, maybe you do have less server U's per square foot.
But from a density perspective, the density is tenfold.
So we are deploying today immersion tanks that are in the range of 100 kilowatts that operate with extremely warm water,
which means that the overall facility PUE is reduced to around 1.04, 1.05. And that doesn't account for something which is really
hard to simulate, I guess, if you're not used to immersion, which is you remove the fans from the
systems. The very moment you remove fans from servers, you're typically reducing the IT
power load, anything between 7% even up to 15% or 20%, depending on the system design.
So all that power, which is-
Yeah, how hard those fans are working. Yeah.
Yeah. That's considered compute in a data center because it sits inside the server.
Right, because it's inside the server.
Right.
Yeah.
Right.
Understood.
Yeah.
Considered IT load.
Yeah.
Not considered heat rejection.
Right.
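(To make the arithmetic behind those last few exchanges concrete, here is a minimal back-of-the-envelope sketch. The 1.5 baseline PUE, the 10% fan share of server power, and the 1 MW load are illustrative assumptions, not Submer figures; only the roughly 1.05 immersion PUE and the 7-20% fan range come from the conversation.)

```python
# Back-of-the-envelope comparison of air cooling vs. immersion cooling.
# All inputs are illustrative assumptions, not vendor figures.

def facility_power(it_load_kw: float, pue: float) -> float:
    """Total facility power implied by an IT load and a PUE."""
    return it_load_kw * pue

# Air-cooled baseline: 1 MW of IT load at an assumed typical PUE of 1.5.
air_it_kw = 1000.0
air_pue = 1.5
air_total_kw = facility_power(air_it_kw, air_pue)

# Immersion: removing server fans trims the IT load itself (10% assumed,
# within the 7-20% range quoted), and the facility PUE drops to ~1.05.
fan_fraction = 0.10
immersion_it_kw = air_it_kw * (1 - fan_fraction)   # same useful compute, fewer fans
immersion_pue = 1.05
immersion_total_kw = facility_power(immersion_it_kw, immersion_pue)

savings = 1 - immersion_total_kw / air_total_kw
print(f"Air-cooled facility draw: {air_total_kw:.0f} kW")
print(f"Immersion facility draw:  {immersion_total_kw:.0f} kW")
print(f"Total facility savings:   {savings:.0%}")   # roughly 37% under these assumptions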
I want to make sure I'm following something.
So I'm taking this tank, you know, for lack of a better word, I'm laying a rack on its side.
I know it's not a rack, but I mean, physically in my mind, I've doubled the physical footprint.
But instead of having a rack that's 15 kilowatts, or let's even just go with a really aggressive 20 or 30 kilowatts in the rack, I can now do 100. I might be using twice as much physical floor space, but I can cool up to 100 kilowatts in what would be considered conceptually a 42U tank.
Is that approximately right?
That's correct.
And there's other substantial parts of the infrastructure that are not needed anymore,
like the CRAC units or CRAH units or air handling systems, et cetera, which tend to use a lot
of floor space in the data center.
All that goes
away as well. So we don't need to think only in the actual rack itself, but all the supporting
infrastructure to cool those racks that also goes away with immersion. Right. So yes, if I was a
completely immersion data center, I could do away with all air handling. As I think through that tank now
laying down and having 100 kilowatts of IT load in it, in a normal, in an air-cooled data center
where I have hot aisles and cold aisles and I have a certain pitch, I need to have a certain
distance between the front and the back and I've got to manage that air. All of that, do you need any distance between the racks? Can they line up next to each
other? I just can't think of thermals impacting the air. So I guess you
could stack them all right next to each other end to end. Is that practical other than physical
access? That's correct, Raymond. So what we typically do is we deploy them back to back
and side to side, which means that you end up with, let's say, islands of tanks.
The tanks don't dissipate. They dissipate less than 1% of the IT load into the actual data hall. So air renewal is very,
very basic. That means we're capturing essentially 99 plus percent of the heat that the
IT equipment is releasing and transporting it in that warm fluid and water loop subsequently.
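(Here is the footprint trade-off Raymond walked through earlier, written out as a rough sketch. The square-footage figures and the air-cooled rack load are assumptions chosen for illustration; only the 100 kW tank figure comes from the conversation.)

```python
# Rough power-density comparison: air-cooled rack vs. immersion tank.
# Footprints and the air-cooled load are rounded assumptions for illustration only.

AIR_RACK_FOOTPRINT_SQFT = 10.0   # rack plus its share of hot/cold aisle (assumed)
AIR_RACK_LOAD_KW = 7.0           # typical air-cooled rack density mentioned earlier

TANK_FOOTPRINT_SQFT = 20.0       # horizontal tank, roughly double a rack's footprint (assumed)
TANK_LOAD_KW = 100.0             # tank load quoted in the conversation

air_density = AIR_RACK_LOAD_KW / AIR_RACK_FOOTPRINT_SQFT    # ~0.7 kW/sqft
tank_density = TANK_LOAD_KW / TANK_FOOTPRINT_SQFT           # ~5.0 kW/sqft

print(f"Air-cooled density: {air_density:.1f} kW/sqft")
print(f"Immersion density:  {tank_density:.1f} kW/sqft")
print(f"Ratio:              {tank_density / air_density:.1f}x")
# Add the CRAC/CRAH floor space that disappears and the whole-room figure
# approaches the "tenfold" density Pope describes.
```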
So, Daniel, what does the tank fluid renewal look like?
Is the tank set up and it's good, or are you having to pump fluid from somewhere
or exchange the fluid, or does it happen all in the tank?
Do you mind talking a little bit about it?
When I think of liquid cooling, I know I'm running a chilled water loop, and that's a totally different solution, but I think the water or the fluid is moving.
What's happening inside the tank with that fluid as it warms up or cools down?
So with the design that we have here at Submer, the tanks don't have a single hole in them, which really guarantees that they're leak-free and very easy to manufacture as well. The immersion fluid that
sits in those tanks is just gently pushed through the IT equipment. The speed at which
the fluid is pushed through the equipment
is all controlled by a cooling distribution unit,
a CDU, that sits inside our immersion fluid.
It has a server form factor,
and it sits inside the tank as well,
like another server, essentially.
So that device, what it does is it makes sure that the fluid is constantly moving and it also does the heat transfer between the immersion fluid and the water loop. So
the CDU has two quick disconnect hoses that come from the water loop to deliver the heat from the
dielectric fluid to the warm water loop.
The dielectric fluid does not evaporate.
It's surprising.
It's a fluid that doesn't evaporate.
Bacteria can't grow in it.
It's non-toxic.
It's biodegradable.
You can drink the stuff, although it doesn't taste good.
We have not worked on the flavor of it.
But it is super safe.
If it goes in your eyes, in your mouth, it's absolutely okay.
There's zero risk when it comes to that.
And it's not a fluid that needs to be topped up.
It's designed to be truly part of the infrastructure, part of the cooling infrastructure.
Wow.
Okay, so the fluid doesn't evaporate.
It's not dangerous.
And it is, I guess, absorbing the heat from the servers
and then going back towards your CDU
and swapping that heat out with a water loop.
Is that what I heard?
Did I understand that right, Daniel?
That's right.
So we capture the hot fluid at the top of the tank
through some channels that we have.
That fluid goes into the
CDU, the heat gets exchanged to the water loop, and then we re-inject the cooler fluid into the
bottom of the tank in an area that we call the fluid distributor, which evenly distributes the
fluid across the lower end of the tank again so that we can start, we can commence that process again and again.
Maybe something I didn't mention,
but the fluid has an expected lifespan of 15 years.
So it truly is a piece of infrastructure.
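(As a rough feel for the warm-water loop the CDU feeds, the sketch below estimates the water flow needed to carry away one tank's heat. The 100 kW tank load and ~99% capture come from the conversation; the 10°C temperature rise across the heat exchanger is an assumed value for illustration.)

```python
# Rough sizing of the warm-water loop for one immersion tank.
# Assumed values for illustration only: 100 kW tank load, 99% heat capture,
# and a 10 degC rise across the tank's heat exchanger.

TANK_LOAD_KW = 100.0
CAPTURE_FRACTION = 0.99
DELTA_T_C = 10.0
WATER_CP_KJ_PER_KG_C = 4.186     # specific heat of water

heat_to_water_kw = TANK_LOAD_KW * CAPTURE_FRACTION
# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
mass_flow_kg_s = heat_to_water_kw / (WATER_CP_KJ_PER_KG_C * DELTA_T_C)
volume_flow_l_min = mass_flow_kg_s * 60          # ~1 kg of water per litre

print(f"Heat rejected to water loop: {heat_to_water_kw:.1f} kW")
print(f"Required water flow:         {mass_flow_kg_s:.2f} kg/s "
      f"(~{volume_flow_l_min:.0f} L/min)")
```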
Yeah, it's going to outlive your servers.
So it's got plenty of shelf life.
We really refer to it as a future-proof system. Maybe today 100 kilowatts is a bit too much for some of the IT systems that you're rolling out,
but you're investing in a piece of infrastructure that in 15 years' time will be able to dissipate 100 kilowatts from a rack. So
if it's not today, it will be tomorrow.
All right, Daniel, so this is a really practical question.
I've got a server, it's sitting in a tank, it's running. Things go wrong with servers.
It happens all the time. The fact that we have no spinning components helps, but still, something will break.
Right, the disks aren't spinning, we don't do spinning disks, and we don't do fans, two spinning components that break
a lot in servers. So I think you might actually help my mean time to failure on my servers
by taking the spinning fans out, but I'm still going to have something break on the server.
What happens when a technician comes in and his server is covered in this fluid? How do you service a machine?
How does that, what's a technician do?
How do data center technicians need to be trained?
Because this is a totally different paradigm,
thinking about that the server is inside a fluid now.
Yeah, so typically we train data center personnel
in a half a day training session to get them up to speed to be
able to do the same tasks that they do in traditional racks in immersion. So it's not a
two-week course or anything like that. And the process is quite simple. You just need the right
tools. You will be wearing gloves and some protective goggles, probably, just to make sure
that if the fluid does go into your eyes, you don't get
scared and drop the server or something like that. But essentially, we have maintenance rails lying on
top of the tank that you can move along depending where you want to pull a server out. Then depending
on the weight of the server, you'll either pull it out manually or you'll use a server lift to lift it out. And you lie it on top of these maintenance
rails where you can remove whatever, replace whatever component you need to replace. And
essentially you'll put the server back in. So you're not taking it away from the rack
or the tank in this case. The maintenance task
is done immediately on top of the tank so that any dripping liquid just falls into the tank and
you can run that process in a very clean and tidy manner. Daniel, I've got to ask a silly question.
Do I have to pull it out of the tank and let it sit for an hour to dry? Can I work on it?
I mean, if it's running with liquid on it, I can work on it with liquid on it.
It doesn't have to be perfectly dry, right?
I mean, I know that's a silly question, but as I think through it, do I have wait time?
So as I mentioned, the fluid, it's quite surprising because we're all used to seeing
fluids dry and evaporate and essentially disappear.
But if you were to leave a server that you've extracted outside of the tank for a whole year, the fluid would still be in
it, or on it. It does not evaporate. It truly does not evaporate. So you pull it out and
you immediately run the maintenance on that node, even with the components all soaked in the dielectric liquid.
The dielectric liquid, although I guess we're not used to seeing electrical components
looking like they're in water, wet essentially,
is non-conductive.
It's eight times less conductive than air.
And that's kind of the most surprising initial experience that operators will have when they run through this exercise.
Yeah, it's got to be a little bit of a mind meld to go, wait a minute, there's liquid on my server and it's okay. I'm assuming you get over it fairly quickly,
but it just seems to the mind like it's not the way it's supposed to be.
But yeah, if it's running in liquid,
you ought to be able to be maintained in liquid.
That makes complete sense.
And so you don't need to turn it on its side
and let all the fluids run out and all of that.
You can just work on it and slide it right back in.
It's a fluid which is super innocuous to the IT components.
It's protecting them from dust particles, from electrically charged particles.
So going back to the meantime between failures that you were referring to before, first,
there's no moving parts in the system.
So that already is a humongous improvement. But then because you
have better thermals in the systems, the components in general are cooled much better.
And there's not this variance between the front and the back of a rack or the bottom and the top
of a rack. It's all really identical across the tank.
And you add to that the fact that there's no dust particles, no electrically charged particles being blown aggressively through the server plane.
What you see in immersion is a two thirds drop.
So a 60 percent drop, let's say, in hardware failure rate compared to traditional deployments. So we have customers that don't touch immersion tanks in a whole year.
It's quite common.
So, Daniel, what does a typical install look like?
Does a typical install look like I've got some very intense workloads and I need a few
tanks in my data center? Does the typical workload look like I've got an edge deployment that I need
to do cooling and I don't want to set up all the infrastructure for cooling? Or is it a full-blown
data center where instead of air, I'm just doing a whole room full of tanks. What's typical for you guys, or does it span all of those?
It has moved a lot into Edge.
And if you go on our website, you'll see that we have a specific product
for Edge called Micropod, which is an immersion tank
with an integrated dry cooler on its side,
designed as a ruggedized device that can be placed directly outdoors.
We have a lot of customers in the telco space that leverage it for base stations and edge
infrastructure, but also customers in the industry 4.0 space that deploy these compute racks on the
factory floor to manage their robotic platforms and things like that. So it's edge
infrastructure where you don't, either you need to protect the IT equipment from a harsh environment
or you don't want to build the whole infrastructure. And on the other side of the spectrum,
our most common deployments are the SmartPod platform. So it's this bigger 45U immersion tank, tens of them in a data hall.
We don't believe that data halls, or data centers, let's say, will be 100 percent immersion. But a lot of our customers today are building for a scenario of 80 to 90 percent in
immersion and 10 percent in air. And obviously, there's always going to be
lower-density equipment that there's no justification to put into immersion,
so they'll split the data hall. They'll have a small area which is cooled by air
and where they have their routing equipment
and their own legacy systems, AS400s, you name it.
And then they'll try and build a hyper-converged type of infrastructure
where they can just replicate this tank design,
which has a lot of hyper-converged compute
and some high-speed networking equipment
and replicate that building block a number of times.
So I'm going to ask a weird technical question.
In that hybrid environment where I've got some legacy equipment,
could I take servers and put them in a tank and run a disk subsystem
that is spinning drives next to it and connect to those servers?
Is that doable?
Is there a backplane or a way to connect the tank to traditional spinning disks that aren't submerged?
Yeah, so the tank is designed as a rack. We have an area called the dry zone where,
if we're using standard 19-inch equipment, we'll deploy the power distribution units, the typical zero-U rack PDUs, horizontally on the side of the tank.
We have customers that typically deploy the top-of-the-rack switch in the immersion tank as well, and customers that choose to deploy it in the dry zone.
So there's a dry
zone on each side of the tank that can be leveraged for this. And it's also leveraged for cable
management. So getting cables in and out of the tank towards the standard rack infrastructure
where they need to connect the immersion tanks to. So for a lot of the customers, the uplinks
go to the air-cooled
portion of the data center where they have their core distribution switches and Cisco routers and
so on. And the immersion tanks are designed in a way that when you put them one next to another
and back to back, they have latches that allow you to pass cabling between them and interconnect tanks and so on.
Well, Dan, you're sitting by the pool on a 35°C day, and you say,
boy, I like it here, my servers might like it here.
So I get how the inspiration came about, and let me cool my server in an efficient way in the data center.
But as I think about where our industry is headed and the talk about the data center industry being responsible about its power
consumption and as the world continues to digitize, what percentage of the planet's energy
do we use to power all these servers? So much of that is cooling. I can see a massive advantage from a power consumption perspective for submerging your servers.
Could you just take a little bit of time and tell us how you see this from a global environmental perspective, how submerging servers can change what's outside the data center, not just what's inside the rack?
Absolutely, yes.
So the first thing is actually floor space.
So we're talking about a reduction typically to around one-tenth of the floor space that's required.
That's the level of density that we tend to see in the new designs.
So that's the first, I guess, humongous benefit when it comes to how many millions of square feet we need for these data centers.
And that's also because there's a lot of these components, as I mentioned, inside the data hall or around the data hall that we don't need anymore.
When it comes to the infrastructure that sits outside, well, immersion cooling will typically reduce the PUE, as I mentioned, to something in the range of 1.03 to 1.05.
Approximately, that's where it tends to be. That means that it's slashing the cooling overhead behind the typical data center
PUEs that are out there by 60, 70, 80 percent. So that's the immediate benefit
of deploying immersion cooling. Plus, you have to consider this humongous reduction in the power consumption from the IT side of things. With the capacity that has been made available by removing the fans, you end up with a much bigger
critical available IT load versus the cooling infrastructure. What we think is really exciting,
apart from the PUE, of course, is that all this energy is now captured in a warm water system.
And that warm water system today is operating at probably something in the range of 100°F, but we're working towards making sure that it operates in the range of
120°F or 130°F.
That's where we are today.
And we're on the journey of getting that up to 160 Fahrenheit, 170 Fahrenheit.
And when you have water in that temperature range, you can do some very exciting things
like deliver it to a district heating system, which is quite common here in Europe,
and we're seeing more and more of that happening.
But you can also enter into kind of symbiotic relationships
with nearby facilities and neighbours
on supplying them with energy,
with this warm water transferring to business parks
or industrial parks.
We at Submer are convinced that for future data center site selection,
the primary criterion will be the energy monetization rate or factor. So people will start selecting sites based on a new capability in their data center,
which is just going to destroy all the TCO models that are out there
and that everyone's designing against today.
It'll stop being how much do I pay per kilowatt,
but how much can I sell my thermal per kilowatt?
Turning that whole equation on its head.
Today, it's megawatts, hundreds and thousands of megawatt-hours, that are just getting released into ambient air.
And that's something that will have or has today the potential to be monetized.
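(To put a toy number on the "energy monetization" idea, the sketch below estimates the heat a 1 MW immersion-cooled IT load could export to a district heating network in a year, and converts the target loop temperatures to Celsius. Every input here, the load, the utilization, and the heat price, is a hypothetical illustration, not a Submer or market figure.)

```python
# Toy estimate of reclaimable heat from an immersion-cooled deployment.
# Every input is a hypothetical illustration, not a vendor or market figure.

def f_to_c(temp_f: float) -> float:
    """Convert Fahrenheit to Celsius (district heating specs are usually in C)."""
    return (temp_f - 32.0) * 5.0 / 9.0

IT_LOAD_MW = 1.0                 # critical IT load of the facility (assumed)
CAPTURE_FRACTION = 0.99          # share of IT heat captured in the warm-water loop
UTILIZATION = 0.80               # average load vs. nameplate over the year (assumed)
HOURS_PER_YEAR = 8760
HEAT_PRICE_EUR_PER_MWH = 30.0    # assumed district-heating price

heat_mwh_per_year = IT_LOAD_MW * CAPTURE_FRACTION * UTILIZATION * HOURS_PER_YEAR
revenue_eur = heat_mwh_per_year * HEAT_PRICE_EUR_PER_MWH

print(f"Exportable heat:   {heat_mwh_per_year:,.0f} MWh/year")
print(f"Potential revenue: EUR {revenue_eur:,.0f}/year at the assumed price")
print(f"Target loop temps: 160-170 F is about "
      f"{f_to_c(160):.0f}-{f_to_c(170):.0f} C")
```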
And here in Europe, there's some really aggressive policies
to push data centers in that direction
and really start thinking
about these type of implementations.
The technology to do that is now available
in the temperature range,
which is directly ready to be plugged
into the new district heating systems
that are being built.
So we believe it's super exciting times for the data center industry.
And it's an opportunity to transition from being a burden for society and your neighbors to being an actual benefit, really allowing the data center industry to be seen in a completely
different way, as a power and energy supplier to the community.
All right.
So the future is bright and it's submerged.
How about that, Daniel?
Absolutely.
So let's close this up.
Born in Zurich, grew up in London, live in Barcelona.
Which football club do you support?
I mean, this has got to be a challenge for you.
That's a no-brainer.
It's obviously Football Club Barcelona.
Yes, absolutely.
Okay.
All right.
Very good.
Well, Daniel, this has been great.
I'm super fascinated.
It's still hard for me to wrap my head around components that are wet, but glad that you guys have figured out a way to do it
and love to see where the future goes with Submer and how much it changes the data center industry as we think about how we're burning up megawatts all over the planet.
How do we do it in a more environmentally friendly way? Been super great to have you.
Really, really grateful that you spent the time with us
and look forward to seeing where things go for you and Submer.
Thank you, Raymond.
Always a pleasure.
Stay safe.
Thank you, Dan.
Take care, bud.