@HPC Podcast Archives - OrionX.net - @HPCpodcast-96: DDC Solutions on Datacenter Cooling – Industry View
Episode Date: January 17, 2025
Datacenter power and cooling has rapidly emerged as a critical topic in HPC and AI. System roadmaps only point to additional requirements. In this episode of the @HPCpodcast's Industry View, we are joined by Chris Orlando, Founder and Chief Strategy Officer of DDC Solutions, to discuss high density cooling, dynamic control, safety and compliance, and the enduring role of air cooling in the datacenter.
Audio: https://orionx.net/wp-content/uploads/2025/01/096@HPCpodcast_IV_DDC_Chris-Orlando_Datacenter-Air-Liquid-Cooling_20250117.mp3
Transcript
You truly can have a variable density data center that is like a living breathing being that adjusts
based on where the workload is. With the way AI and other HPC workloads function, that is becoming
even more important. We looked at how we could make an individual rack fully dynamic based on real-time load dynamics
and use air and liquid in a hybrid way to provide super efficient cooling that flexes and adjusts
based on the workload, the type of customer, or the design of the data center itself.
And so that's really where I think things are headed.
From OrionX in association with Inside HPC. This is the At HPC podcast. Join Shaheen
Khan and Doug Black as they discuss supercomputing technologies and the applications, markets,
and policies that shape them. Thank you for being with us. Hi, everyone. Welcome to the
At HPC podcast. I'm Doug Black with Inside HPC, along with Shaheen Khan of OrionX.net
technology consultancy. And our special guest today is Chris Orlando. He is co-founder and
chief strategy officer at data center cooling solutions provider, DDC, which is based in San
Diego. Chris, welcome. Thanks, Doug. Excited to be here. Obviously,
a lot going on in the industry around our space right now. Happy to talk with you and Shaheen about it. Okay, great. Yeah, so we'll be talking about this vast and growing problem of data center
cooling, which is emerging as a potential limiting factor on the adoption of big AI,
big HPC AI. Let's start by stepping back and sort of looking at the overall problem. We all know
that systems and data centers are getting bigger and more powerful with more density and more heat
generation. And the problem is growing as HPC and AI systems become increasingly compute intensive.
Chris, tell us where things are with liquid and air cooling and whether those approaches
address the massive
needs of data center operators. Yeah, absolutely. So I think we've been
looking at rack level density and the potential increases there for a long time. And while they've
been predicted to happen quickly, I think it took longer than many of us expected. And then all of
a sudden, we've arrived at the moment that's with us now, where we're at this hockey-stick type of growth and things are just jumping exponentially. And so if you look at the
ways that we're addressing those thermal densities today, air cooling traditionally has had significant
limitations from an overall density perspective. As we started to get more intelligent about how
air cooling worked with things like containment, we were able to increase overall density to approximately 40 kilowatts per rack on a really good day at volumetric scale, give or take. And that's where we've seen
the limitations happening. Now, the good parts about air cooling is that we know how to do it,
right? We've been doing it for decades. The challenges are that there is a physical
limitation in the current design, mainly raised
floor systems or non-forced air systems, where the overall thermal limitation is around 40
kilowatts on an ideally designed solution.
The other challenge is obviously that it's inefficient from a power usage perspective.
Trying to exchange heat through that methodology uses a lot of power, and with the focus on ESG, power savings, and the costs driven by those systems today, it's falling short of where I think the industry needs to be. And so part of those challenges is what drove us to
look at liquid cooling. And there's a variety of forms, both liquid to the chip and chassis,
as well as hybrid liquid cooling systems that you see in things like legacy rear door heat exchangers, and even in the latest versions of the DDC cabinet.
The benefits of those technologies are obviously they're extremely efficient.
By doing the heat exchange so locally to the chip or chassis itself, we improve the efficiencies
of those systems.
And the other benefit is obviously that they're very high density.
That efficiency allows us to move a lot of heat very efficiently off of whatever substrate we're working with. And so it allows us to
increase those densities from 40 kilowatts to a rack to much, much more, hundreds of kilowatts
per rack, depending on who you talk to. The shortfalls are obvious, I think. Right now,
there are very few standards in that market. It can be costly to deploy those solutions.
Bringing liquid to the data center is something
that we did a long time ago. It has not been standard for many decades now. And so doing it
again increases the risk of potential collateral damage for equipment that's near or around where
we're bringing into the data center. Additionally, the variety of different cooling solutions that
are out there kind of limit the amount of hardware, the types of hardware that can be supported with that platform. And so operators themselves are trying to find a solution that
manages density, allows them to keep servicing any type of customer hardware that comes into
the data center and do so cost-effectively within budget and without making a decision that will
have to be remade years down the road as equipment and requirements change. And so there's certainly ups and downs to both sides. And a lot of folks today are figuring
it out on the fly as these rapidly changing thermal densities happen for us. So when we were
all at SC24 a couple of weeks ago, it seemed to me that for the first time, liquid cooling and
cooling in general was finally front and center. It's been around for 10
years, I know. But this time, it just felt a little different. And one thing that comes out of that
discussion is just the sheer complexity of heat transfer, fluid dynamics, air cooling, liquid
cooling, dielectric, immersion, all these new acronyms and vocabularies that are coming in, compliance,
and we'll talk about all of that. So one thing that comes out of this and seems like you're on
this is the idea of doing the best that you can do with what you've got while preparing yourself for
new requirements that are also emerging. If you wouldn't mind commenting on this.
Yeah, you're right. At SC, cooling was certainly front and center. And I think the fact that you could walk around and look at anywhere from five to ten different kinds of liquid cooling solutions
was very obvious. I think the challenge with that is that the solutions out there today sometimes
have limitations in the types of hardware they can support, the types of facilities they can go in. They have complexities around deployment.
And what operators are looking for is, how do I support what I'm doing now,
get the best out of the facilities that we've developed and built, while giving myself a little
bit of time to let some of these solutions mature, let some of the standards evolve around liquid cooling.
I think everyone's at this inflection point of trying to say, when is the right time to
pull the trigger on a solution?
I certainly don't want my customers to show up saying, I have a high density requirement
that exceeds the air cool capacity of your site.
What options do you have for me?
Everybody wants to have an answer, but I think the answer is going to be a hybrid one for many years to come where we both have to improve the way we manage and efficiently deliver
air cooling while adopting a liquid cooled solution that supports as broad of a customer
base as possible. And I think that's the challenge that everybody faces here today.
That's excellent. So the idea of using air and liquid on an ongoing basis,
that by itself is like a big challenge. And tell us what some of those challenges are.
Certainly. So while facilities have had chilled water available to them for many years,
distributing water into the IT room itself, much less down to the rack level, can be seen as
daunting and sometimes complex, right? Additionally, operators are tasked with choosing a solution that will service a broad spectrum
of customer requirements over a variety of densities and over the life of the data center
itself.
One of the things that a lot of people were talking about at SC was, we don't want to be making decisions and investments today in solutions that two, three, or five years down the road we're saying weren't the right ones, because things have changed slightly and I'm going to have to put a significant amount of investment into my facility again to try and manage whatever the changing densities turn out to be. And so, to our previous point, doing the best we can with our current air-cooled systems and making good decisions about how we deploy liquid in the future is going to be paramount if customers are not to have to reinvest a few years down the road, as we look into our crystal ball and see how the requirements we see today might change.
Chris, talk about some of the latest facts and figures related to data center cooling,
the percentage of power usage for cooling, the cost that represents, and are there
sort of goals that data center operators have in mind that they're trying to achieve in this area?
Yeah. So we're in year 15 of operating air and liquid cooled systems. And so we've become known
as the high density guys, and that's afforded us the ability to interact with folks like NVIDIA
and Intel, AMD, Dell, Supermicro, etc., where next generation server deployments are rumored to be exceeding 300 kilowatts a rack.
Some going as high as 500 kilowatts a rack in rare cases.
Something I don't think is being talked about is the expectation around how the air-cooled workload versus liquid-cooled workload will land in these extremely high-density racks.
Depending on who you talk to, we're hearing ratios of air-to-liquid cooling anywhere from 10% air with 90% of the heat load being transferred via liquid cooling,
all the way up to 30% being air-cooled with only 70% being liquid.
And that has a tremendous amount of impact on how we plan for data center space
moving forward. If you think about a 300 kilowatt rack that's 30% air cooled, that leaves 90 kW of air cooling in each rack, with the remaining 210 kW transferred off via a liquid cooled solution.
If you look at the more aggressive of the numbers that we're seeing where only 10% needs to be air
cooled, if we are looking at these extremely high density racks
where 500 kW is the target,
that still leaves you 50 kilowatts of air.
And that's just, that's not possible
in nearly all of today's traditionally built
air cooled facilities.
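As a quick back-of-the-envelope check of those ratios, here is a minimal sketch; only the rack sizes and air fractions come from the discussion above, the code itself is illustrative:

```python
# Back-of-the-envelope split of a rack's heat load between air and liquid
# cooling, using the example rack sizes and air fractions cited above.

def cooling_split(rack_kw: float, air_fraction: float) -> tuple[float, float]:
    """Return (air_kw, liquid_kw) for a rack with rack_kw of total load."""
    air_kw = rack_kw * air_fraction
    return air_kw, rack_kw - air_kw

for rack_kw, air_fraction in [(300, 0.30), (500, 0.10)]:
    air_kw, liquid_kw = cooling_split(rack_kw, air_fraction)
    print(f"{rack_kw} kW rack at {air_fraction:.0%} air: "
          f"{air_kw:.0f} kW air, {liquid_kw:.0f} kW liquid")

# 300 kW rack at 30% air: 90 kW air, 210 kW liquid
# 500 kW rack at 10% air: 50 kW air, 450 kW liquid
```

Even the conservative 10% case leaves an air load per rack that exceeds what most traditionally built air-cooled rooms can deliver, which is the point being made here.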
So even then, bringing a liquid cooled solution to the rack level may not be enough to support densities beyond what we see coming over the next 12 to 18 months here in the HPC world. So customers really have to look at how to improve and build the
best possible air-cooled solutions to support today's and future density while integrating
the right type of liquid cooling solution for the workloads that we have. And what I think
is a resounding fact that we've talked about decade after decade is building for the future and not wasting space or time, not stranding assets in the data center.
And if you simply say, hey, maybe we deploy liquid in this one area of the data center, without looking at how efficiently you do air, then depending on where some of these air-to-liquid cooling ratios we're hearing end up landing, you could end up having to redesign space again.
And that's something that we want to try to help protect customers from. Chris, is the reality of the data center such that air
cooling is here to stay in one shape or another? I think there are certain HPC workloads and
operators who say this particular operation could achieve 100% liquid cooling someday.
But I don't think operators, either of enterprise or
co-location facilities, or even hyperscale data centers, will ever get to the point where every
single rack is liquid cooled. I think we're going to see air in our data centers in perpetuity in
some way. Even in the most advanced liquid cooled racks today, you're seeing components that have
thermal thresholds that don't do well if the air-cooled piece of the data center isn't working. And so I think designing for the future, affordably and
cost-effectively building really effective air cooling capability into your data center, and then making sure that you're as universally ready as possible to deploy liquid to the racks that require it, is the best thing an operator can do today to protect the investments they're making in the data center
space. I think air is here to stay for a long time.
And if we can do it efficiently and at very high densities,
it provides a bridge and more time for us to understand
and make good decisions around the types of liquid cooling that we deploy.
And it can certainly lower costs and help with objectives that you listed earlier,
ESG compliance, green compliance,
et cetera. A typical data center, I remember in the old days, it was like 10 kilowatts per rack,
and then it became 20 kilowatts in the HPC world. And now, as you mentioned, you've got these racks
that are 100 kilowatts a rack going to 300, going to 500. I think at SC24, I also heard of like one megawatt per rack,
which may be too ambitious, but it just tells you that this is not stopping, that the energy density
and processing density that goes with it is just going to carry on. And if air is going to remain
part of it, what are some of the strategies that you've used to meet the environmental requirements as well as live within the cost envelope?
Sure. Yeah, I think for many years, decades even, we've built data center level solutions.
As densities got outside of the 10 kilowatt per rack range, we started to move to things like containment.
Then we moved up to things like rear doors and row level cooling.
And all we were really doing is going from one size fits all at the data center level
to smaller chunks like a row that we had to manage.
And even in row-based containment-type solutions, you can get one rack that is pushing an extreme
amount of density.
The ability for an operator to see that, understand it and make the changes necessary to adjust
cooling to that one rack when you're cooling at the row level is very difficult to do.
So decades ago, when we began looking at the DDC model, we said, what do we think the end result is going to be?
And I think we're close to that today in that we have to look at data centers at the rack level.
The unit of compute has to be an individual rack, and we have to be able to control that rack
very efficiently and very cost-effectively.
And if you can do that in an automated way
that's efficient from both an operator perspective
and from a cost deployment perspective,
you truly can have a variable density data center
that is like a living, breathing being
that adjusts based on where the workload is.
With the way AI and other HPC
workloads function, that is becoming even more important as workloads are both up and down
significantly in density. You can see an HPC rack idling at 10 kilowatt, ramping up to 80 kilowatt
and going back down. In a row level or data center cooling solution, that's extremely difficult.
It's almost impossible if you're not
doing cooling at the rack level. The challenge for operators is that not only are we building
for what we see today for the next two or three years, but if you're a co-location facility,
those customers in each individual rack could change regularly over the course of 10 years,
a new customer, a different workload, individual customers could change what type of gear is racked
in an individual rack over the course of their agreement for hosting with you. And so how does
the data center dynamically adjust the density of each rack unless they have those levels of
controls built in? And that was part of the genesis behind DDC, which stands for dynamic density
control. And so we looked at how we could make an individual rack fully dynamic
based on real-time load dynamics and use air and liquid in a hybrid way to provide super efficient
cooling that flexes and adjusts based on the workload, the type of customer, or the design
of the data center itself. And so that's really where I think things are headed.
Excellent. So anytime you mention the word dynamic or real time or things like that, it also implies software. So what are you doing on the software front, or
what do data centers need to do on the software front to be able to dynamically manage and
optimize things? Yeah, you both have to have a cooling system that is responsive enough to adapt
to dynamic changes in environment. You have to have monitoring and telemetry
that provides you the feedback from the rack
so that you know exactly what's happening at the rack level.
And then you have to have a system
that is obviously able to control those
without creating a monster amount of work
for an operator that sits and watches those things.
It pretty much has to be on.
So as we were developing our DDC S-series cabinets, which integrate both a liquid heat exchanger and an air-cooled system, we wrote software that looks at all of the data that's available when you start measuring thermal
and environmental conditions at the rack level. With air pressure, airflow, and temperature sensors
and power monitoring coming out of each individual rack, that's a tremendous amount of data flowing
out into our DSIM management system.
And based on that, we're able to see what the environmental conditions are and how to adjust both water flow to the heat exchanger,
as well as airflow to the front and back of the rack to ensure that the right environmental conditions are being met,
to ensure that hardware is able to perform at its maximum while still efficiently managing heat exchange and not spending a lot of money where it's not necessary. And then dynamically adjusting all of those criteria
based on what's happening in the rack in real time. And if you think about it, power efficiency
and cost savings are paramount in every data center operator's minds. When we were building
data centers at the platform level and trying to cool, we were spending a lot of money cooling air
that wasn't necessarily being consumed by the servers. As we went down to a row level,
we got a little bit more efficient in that we were cooling slightly less. But with the DDC model,
where each individual rack is your unit of measure, we only have a couple cubic feet of air in the
front of the rack that we're actually managing to a specific environmental and temperature set point.
And so right there, you can see that is a tremendous amount of savings.
And if you multiply that by not just one rack, but hundreds or thousands within a facility
and then use the real-time data from that platform and say, hey, this is the true live
load demand happening in the data center and plug that into your external systems that
are looking at your generators, your pumps, your chillers, all those things, and run those
dynamically,
the energy savings become compounded and sometimes exponential.
And so that's the level of efficiency that we're driving to, not just from a cost management
perspective, but from an overall density perspective at the rack level.
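To make the kind of rack-level feedback loop described here more concrete, below is a minimal, hypothetical sketch. The telemetry fields, setpoints, and simple proportional logic are illustrative assumptions, not DDC's actual DSIM software:

```python
# Hypothetical sketch of a rack-level cooling control loop of the kind
# described above. Sensor names, setpoints, and actuator interfaces are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class RackTelemetry:
    supply_temp_c: float      # front-of-rack supply air temperature
    return_temp_c: float      # rear-of-rack return air temperature
    airflow_cfm: float        # measured airflow through the cabinet
    power_kw: float           # live IT load from rack power monitoring

@dataclass
class CoolingCommand:
    fan_speed_pct: float      # precision fan setting (0-100)
    water_valve_pct: float    # chilled-water valve to the heat exchanger (0-100)

def control_step(t: RackTelemetry,
                 supply_setpoint_c: float = 24.0,
                 cfm_per_kw: float = 120.0) -> CoolingCommand:
    """One pass of a simple proportional controller for a single rack."""
    # Scale airflow to the live IT load so we only condition air the servers consume.
    target_cfm = t.power_kw * cfm_per_kw
    fan_speed = min(100.0, max(10.0, 100.0 * target_cfm / 6000.0))  # assume 6000 CFM max

    # Open the water valve further as the supply air drifts above setpoint.
    error_c = t.supply_temp_c - supply_setpoint_c
    valve = min(100.0, max(0.0, 50.0 + 20.0 * error_c))
    return CoolingCommand(fan_speed_pct=fan_speed, water_valve_pct=valve)

# Example: a rack that has ramped from 10 kW idle up to 80 kW under load
print(control_step(RackTelemetry(25.5, 38.0, 4800, 80.0)))
```

The point of the sketch is the shape of the loop: per-rack telemetry in, fan speed and water flow out, evaluated continuously rather than at the row or room level.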
So Chris, I saw a story concerning you guys that had this statement, the whole room approach
to data center cooling
is out. Precision monitoring and predictive cooling at the equipment level is in. So describe,
if you would, one of your cabinets and everything that goes into it, as well as how those cabinets
facilitate different air cooling and liquid cooling infrastructure that the data center customer has in place.
Absolutely. So our principle behind DDC is an enclosed rack, standard 19-inch equipment rack,
enclosed with a NEMA 3R sealed enclosure. The bottom part of the cabinet itself is the IT
environment. And on the top is a second segment of the cabinet that connects to it. And there's
actually a liquid-to-air heat exchanger inside that, along with two precision fan
systems that supply air control inside the cabinet.
The original design of that system was meant to provide efficient heat transfer through
a liquid air hybrid cooling model that allows us to do extremely high densities while getting
the efficiency of liquid cooling, but without introducing liquid
into the cabinet environment itself. And for the last 15 years, that platform has been deployed
throughout ScaleMatrix data centers around the country, providing the efficiencies of liquid
cooling with the risk-free kind of flexibility of air. And customers have been able to leverage
that system and enjoy the benefits of it without having any
risk of water being in the data center itself. And because the cabinets are modular, they're
able to be deployed in any quantity, whether it's a handful of cabinets or deploying them at the
entire facilities level. But inside the cabinet itself, there are a variety of sensors on the
front side supply area, which is the only area we really focus on from an environmental and temperature management perspective. Those sensors are providing us
feedback on air volume, CFM, temperature, humidity, et cetera, all at the front end.
Then we have a similar mirrored set of sensors in the back of the cabinet that are talking to us
about the same things. And we're able to build a visual picture of each individual rack through
our DSIM that
shows us that we're meeting these very stringent environmental and temperature criteria at the
front of the cabinet, guaranteeing a certain amount of CFM per KW deployed, a certain amount
of temperature set point at a variety of different locations throughout the rack, ensuring that we
have a very small temperature differential between RU1 and RU45 or RU60.
And then we have those sensors in the back and all of that data is fed into our DSIM. We then have a visual image of
what happens inside the rack. And then you can zoom out from there through the DSIM and see
what's happening at the row and data center level as well. All of that is run through our
intelligence platform. And so you can monitor and manage the data center through a fully automated platform, or you can go into each individual rack and tweak settings around airflow and temperature
set point to get specific pieces of hardware to operate at a different level. And so what we're
really doing is now managing the data center down to a single compute unit, which is the rack,
but allowing you to use the intelligence and data coming out of it to run the whole facility as efficiently as possible.
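As a rough illustration of the per-rack guarantees just described, delivered CFM per kW and a tight top-to-bottom supply temperature differential, here is a hypothetical check; the field names and thresholds are assumptions, not the DSIM data model:

```python
# Illustrative per-rack check of the guarantees described above: CFM per kW
# delivered and bottom-to-top supply temperature differential. Thresholds and
# field names are assumptions for this sketch.
from statistics import mean

def check_rack(front_temps_c: list[float], airflow_cfm: float, power_kw: float,
               min_cfm_per_kw: float = 100.0, max_delta_c: float = 2.0) -> dict:
    """front_temps_c holds supply-air readings from the bottom (RU1) to the top of the rack."""
    delta_c = max(front_temps_c) - min(front_temps_c)
    cfm_per_kw = airflow_cfm / power_kw if power_kw else float("inf")
    return {
        "mean_supply_c": round(mean(front_temps_c), 1),
        "ru_delta_c": round(delta_c, 1),
        "cfm_per_kw": round(cfm_per_kw, 1),
        "within_spec": delta_c <= max_delta_c and cfm_per_kw >= min_cfm_per_kw,
    }

# Example: three supply-air sensors up the front of a 45 kW rack
print(check_rack([23.8, 24.1, 24.6], airflow_cfm=5200, power_kw=45))
```

Running such a check per rack, and rolling the results up to row and facility views, is essentially the zoom-in/zoom-out monitoring model being described.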
And we just recently at SC announced version four of that particular cabinet, which comes
with a variety of other upgrades.
Our most recent one allows us to take the thermal cooling in that rack all the way up
to 100 kilowatts of air cooling inside the environment itself.
And while we know that densities in the data center are increasing, especially around HPC, we think that giving customers 100 kW of air cooling, while providing the universality and inputs for liquid cooling at a variety of different points throughout the cabinet, and mounting areas for manifolds so that when customers are ready they can deploy air and liquid homogeneously inside of a cabinet, provides a tremendous amount of future-proofing and flexibility for customers
as we try to predict where the data center industry is headed over the next couple of years.
What are the cost savings and energy savings for what you're providing relative to, say,
traditional conventional data center cooling practices? We believe that when you're minimizing the amount of air that
you're environmentally and temperature controlling down to just a few cubic feet per rack, the
savings can be significant. If you're utilizing our DSIM to also connect to your building management systems or other data center control platforms, and to efficiently manage chillers, pumps, generators, et cetera, using that data about what the real-time load dynamics are, you can save up to about 30%
of energy over a traditionally built data center.
And that's on the cooling side, right?
And that's a lot.
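To put that 30% figure in facility-level context, here is a rough, hedged calculation; the PUE and the share of overhead attributed to cooling are assumptions for illustration, not numbers from the episode:

```python
# Rough illustration (editorial assumptions) of what a ~30% cooling-energy
# saving means at the facility level, assuming a conventional baseline PUE
# of 1.5 with most of the overhead being cooling.
it_load_mw = 15.0          # example deployment size mentioned later in the episode
pue = 1.5                  # assumed baseline PUE for a traditionally built site
cooling_mw = it_load_mw * (pue - 1.0) * 0.8   # assume ~80% of overhead is cooling
savings_mw = 0.30 * cooling_mw                # the ~30% cooling savings cited

print(f"Cooling load ~{cooling_mw:.1f} MW, savings ~{savings_mw:.1f} MW "
      f"({savings_mw / (it_load_mw * pue):.1%} of total facility power)")
# Cooling load ~6.0 MW, savings ~1.8 MW (8.0% of total facility power)
```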
Operators should be taking a significant look at how they're managing those things because 30% energy savings for cooling is a significant amount. We also, though,
have a case study on our website: we had a customer recently that did a 15 megawatt deployment at one of their sites, and it was a rapidly deployed customer requirement. So the cabinets, delivering a lot of the data center capabilities already built out of the box, allowed them to
speed up their deployment time for this particular project. But they did a significant engineering
evaluation of our technology, rear door heat exchangers, hot and cold aisle containment,
et cetera. And in a published white paper that they produced, they came in with numbers very
similar to what we estimated. Somewhere in the high 20s was their energy savings over their
other cooling platforms, along with being able to deploy the resources quicker,
and providing both the energy efficiency and the future-proofing of being able to support higher densities down the road than the customer actually needed in this deployment.
We also helped this particular customer by introducing our fire suppression systems at the cabinet level. This just added a whole other layer of risk mitigation for the customer that they didn't find available in any other
solution on the market today. Chris, would you characterize the approach we're talking about
as a kind of a modular data center inside a traditional data center? Because you're essentially compartmentalizing the data center into smaller chunks that are enclosed and sealed. Yeah, I think that's a great way to look at it.
And it leads us to another thought. Because we're doing a lot of the things that happen
inside a regular data center in the cabinet, it eliminates the need to continue to invest
and deploy those resources. The cabinets are very simple in design. There's a connection for
liquid into the heat exchanger, hot liquid out of the heat exchanger, and then connections for power at the
cabinet. And that's pretty much it other than a network connection. It makes it simple to deploy
and it reduces the amount of time to have customer-ready data center space available
because a lot of that work is already being done in advance. But that modularity of the cabinet
also means that the platform is ideal for designing
new data center space, whether you're speccing the data center to utilize these cabinets
as a future-proof platform that allows for maximum density, air cooling, and fully ready
liquid-to-chip, liquid-to-chassis capabilities when the customer is ready. You get a lot of
benefits out of building from spec with DDC integrated into the overall building design. But an equally valuable customer use case that we're seeing today
is customers that have existing data centers that have been operating for a number of years or
decades, where, simply because of the design, the air-cooled limitations of the floor are making
those data centers a bit dated. What customers can do is utilize the existing power and chilled water resources that are
already built into the facility, but go in and retrofit the data center floor itself
with DDC cabinets.
And what this means is that operators that previously may have had a 10 or 15 KW limitation
on existing air-cooled racks in their data center can add DDC cabinets based on the requirements
of the new customer and have 100 kilowatts of air
cool capability literally within days or weeks as opposed to months or years that a retrofit would
take. And customers would be able to use existing water and existing power to cool these cabinets
and deliver next generation cooling and data center operating capability in a very short
timeframe. So for those customers, retrofitting
a data center and becoming valuable for any type of workload that's coming down the pipe today
is a really easy prospect. And we're seeing more and more customers do that.
Now, this may have to wait for a future episode, and we'd love to have that with you. But you
mentioned NEMA as part of your description. For our audience who might not be familiar, that's N-E-M-A, and it stands for National Electrical Manufacturers Association, if I'm
not mistaken.
But what it tells me is that as we go towards cooling and liquid cooling, especially, we're
going well beyond the traditional certification compliance requirements of
electrical equipment, whether it's UL or FCC or whatnot. And we're moving into NEMA, ASHRAE,
fire safety, and a whole bunch of other standards and compliance that are coming at us. So we'd love
to drill down on that at some point. But I was happy to hear you mention NEMA. And of course,
they do like a thousand standards. So I don't know which one applies to what. But that complexity is headed
our way too. I think it's important because DDC originally designed our cabinets to utilize liquid
as the heat exchange methodology, but to keep the water out of the IT environment itself. And so
today customers can buy DDC cabinets, deploy them in their data centers,
connect liquid into the heat exchanger, but that liquid never enters the IT environment itself.
And so you get a tremendous amount of cooling, 100 kilowatts of air-cooled density,
without having the risk of having it in your IT environment itself. But when the time is right,
because the cabinets are designed to integrate liquid to chip or chassis into any individual cabinet, data center operators will find themselves running liquid into the data center and connecting it to the cabinets that require it.
And we know that we can do so safely, but every mechanical system breaks down at some point.
And so there are water leaks and things that will happen in data centers of the future because we are running liquid in it. And the NEMA 3R rating that DDC cabinets have today ensures that if there
is a leak with some system in your data center that's providing liquid cooling, it will not
endanger any of the other rack environments, because the water is kept out by virtue of the rating of the cabinet. And that's a tremendous thing that people really need to
think about from a risk mitigation perspective. But one of the other assets and facets of that
protection is that by having the NEMA 3R rating on the cabinets themselves, the dollar value of the equipment we're now deploying
in data centers is skyrocketing. We just deployed a 15 megawatt data center for a customer and they
were talking about a half a billion dollars worth of GPUs that went into the facility. If a fire or
any other type of false positive or another customer in the facility
happened to set off the smoke and fire detection system
and were to trigger the dry pipe sprinkler system
in that room,
burning up a half a billion dollars worth of hardware
would be a really rough day for a data center operator.
And by having each individual cabinet be 3R rated,
the DDC cabinets will protect customer equipment
from those types of issues.
So there's a lot of things that we're doing at DDC to future-proof design of both air and liquid cooling inside a singular cabinet deployment
that can be utilized by operators ubiquitously for any type of hardware that customers want to throw at it
with a mix of liquid and air cooling and be able to do so, hopefully, with the most future-proofed ideas possible
so that they're not having to look at reinvesting in data center space because of changing requirements.
Excellent. It's all coming at us and I'm glad you've got a team working on it and
you're advancing the technology here. We're certainly trying hard. It's definitely keeping
us busy. Yes, for sure. Okay, Chris, that was great stuff. Great to be with you. Good conversation.
Hope to have you back sometime.
We've been with Chris Orlando of DDC, and thanks so much.
Thanks very much.
Appreciate being here.
All right.
Thanks a lot.
Take care.
That's it for this episode of the At HPC podcast.
Every episode is featured on InsideHPC.com and posted on OrionX.net.
Use the comment section or tweet us with any questions or to propose topics of discussion. Thank you for listening.