Not Your Father’s Data Center - Innovations in Generator Land with Chris Brown
Episode Date: May 24, 2022
How did a Texas boy, who swore up and down he'd never leave Texas for Oklahoma, wind up in the data center industry in Oklahoma? Chris Brown, CTO at Uptime Institute, isn't 100% sure himself, but spending the past 25 years in the Sooner State means something went right. And Brown said the data center life has treated him well so far. "I was fortunate enough to get into some different companies in times when they were doing a lot of work and a lot of changes," Brown said. "So, it allowed me to get thirty years of experience in about ten or fifteen years. And I've enjoyed working in the industry, and I look forward to many more years to come." And with his passion for data centers limitless, Brown knew he eventually wanted a role where he could help other industries increase their data center capacity and knowledge. Brown's journey led to the Uptime Institute. "Once again, most opportunities are those that are surprises rather than things that are planned," Brown said. After a stint working for Sabre and a few other related mechanical engineering data center opportunities, a friend from his past called him up to join the Uptime Institute to help them with some engineering challenges. Over the past twelve years, Brown has worked his way from consultant to CTO. "When I started at Uptime Institute, there were four people delivering the technical work, and they were all US-based," Brown said. "Today, we have thirty-four engineers scattered across thirteen different countries and still growing. It's definitely changed a lot. But the data center industry's changed a lot. When I first started (in the industry), chilled water plants were the norm because the electrical power that was required to run a chilled water plant was about 25% of what it would be with direct expansion." Technological advancements in direct expansion today make that method much more affordable.
Transcript
All right. Thank you for joining us for another edition of Not Your Father's Data Center.
I'm your host, Raymond Hawkins, Chief Revenue Officer at Compass Data Centers.
I am joined by Uptime Institute's Chief Technology Officer, Chris Brown.
Chris, thank you for joining us.
Glad to be here.
And if I remember right, Chris, aren't you joining us from somewhere outside of Tulsa?
Is that the best, as close as we want to get?
Yeah, I am just outside Tulsa, Oklahoma.
Chris, thank you so much for jumping on with us. We would love to hear a little bit about you.
I know Oklahoma is home, but you did a stint down here when you went to school.
Talk a little bit about your time in and out of Texas and in Oklahoma and what path led you to end up being at Uptime Institute.
Certainly.
Well, I'm actually from Texas, north central Texas, just north of Dallas.
And I went to school at the University of Texas in Arlington and pursued a degree in electrical engineering.
And it was kind of interesting how I got into the data center industry.
I just fell backwards into it. You know, I graduated from college in 1995.
Data centers were, you know, I mean, they were going strong, but they weren't widely known.
We didn't publish, the industry didn't publicize itself very well.
You know, and while I was in college, in my senior year, trying to look for a job,
American Airlines was on campus and they approached me and wanted to talk about, you know, uh, their Sabre division.
And so after a little bit of that, uh, they, you know, we kept the conversations going and,
uh, that's how I found myself in Oklahoma. So a Texas boy that swore up and down
he'd probably never live in Oklahoma has spent, uh, the last 25 years here.
On the wrong side of the Red River.
On the wrong side. But it was a good thing because I got into data centers through Sabre.
And that life has been very good to me. I've been fortunate enough to get into some different companies at times when they were doing a lot of work and a lot of changes.
So it allowed me to, you know, get 30 years worth of experience in about 10 or 15 years.
And so it's been very good.
And I've enjoyed working in the industry and look forward to many more years to come.
All right. So if I remember right, didn't Sabre have a big presence up there outside of Tulsa, right?
Wasn't there a big data center facility in the Tulsa area? Am I remembering correctly?
Yeah. Sabre had two data centers right on the airport property. So if you think about it in Tulsa, Tulsa International Airport, American Airlines
had a long-term lease on a lot of property just on the edge of the airport. And so when American
Airlines built their data center, they built their first data center out of an old office building.
And then they actually built a purpose-built underground data center there on the airport property that could continue to operate even if they had a crash, an unfortunate crash there.
And so they had two decent-sized data centers there.
Then when they spun off Sabre, Sabre built a third data center a little bit further off the airport, about 10 miles away, but still in that area.
And it's a fairly large data center, and it's still in use today.
It's about five acres under roof.
Oh, my goodness.
Five acres.
That's a chunk.
All right.
Okay, so Sabre, which at one point wasn't Sabre part of American,
and then it became its own outsourcing, for lack of a better term, business.
Do I understand that correctly, Chris?
Yeah, Sabre was created by American Airlines to handle their own reservations systems,
which then expanded into their operations systems.
Basically, every aspect of the airline was managed through the Sabre systems.
They had tried for years and successfully in some areas to expand and provide those services to other companies and other airlines and rental car companies, hotels and things of that nature.
But they were seen as competitors. And so about 1997, they decided to spin it off into its own individual company. And so Sabre kept operating the data centers and providing the IT services
and tried to expand into different companies in different areas.
So walk me through the transition from doing American and Sabre and travel and the data
centers there in Tulsa. Where do you jump rails and head over into the Uptime Institute
and start teaching the industry how to think about this stuff?
Well, it was, once again, you know, most opportunities are those that just are surprises
rather than things that are planned.
I worked with Sabre for a little while.
Sabre was getting ready to spin off its data centers to ultimately EDS.
But at the time, we didn't know who it was.
And all I knew was Sabre was getting out of real property.
And as an electrical engineer, real property is pretty much what I do.
And so I left Sabre, did a little stint in the petroleum industry with Citgo Petroleum,
working with their laboratories, their pipelines, their lube plants.
You know, when they found out I knew something about data centers, I kind of got into a lot of their laboratory stuff.
But then, you know, EDS was operating those data centers, the old Sabre data centers.
And I knew some folks that were there and they were having some troubles. And I kept telling them that, you know, how to fix the
problems. And then one day, a gentleman from Trammell Crow Company, who was their operating
company, called me up and it was essentially put up or shut up. And it was an opportunity to go
work for them. So I worked for them for a few years. I did a little sideline as well
after that. So Trammell Crow for about five years and then did some contract engineering work due
to some family issues. And I needed to be home more often because, as you probably know, if you're
working in a data center, it's pretty much an 80 hour a week gig or more. And then those family issues got solved. And I was looking to get back in with
the team. And a friend of mine that I'd known for years and worked with in Sabre and some other
places called me up because the Uptime Institute needed some engineering help. And so I took that
opportunity and started working with Uptime Institute. And so with Uptime Institute, I started that in 2010 and have worked my way up from, you know,
a consultant delivering their certifications for data centers to being the CTO of the company.
It'll be 12 years here in the next month or two.
Is that about right?
It will be.
I started in January 2010.
All right.
12 years. That's a long gig for anywhere, so congratulations on that. All right, so talk to us. So we've gotten you, a Texas
boy that ends up on the wrong side of the Red River, uh, got schooled on both sides, right? UT
Arlington and, I think, Oklahoma State, right? So you got edumacated on both sides of the Red
River. So, um, and understand the American
Airlines piece of it. So now you're at the Uptime Institute. Tell me what in the early days,
because if I think about it, I know you're talking about data centers in 95 and 96. I mean,
there've been computers living in buildings for a long time, but as far as the wholesale sort of
commercial approach, that's an early, you know, 2000s kind of a thing. What did the world look like five,
six, seven years in when you joined the Uptime Institute and your commercial wholesale data
centers were still fairly new in 2010? What were you guys doing? What were you guys teaching the
industry? Talk to me about the early days and love to hear how it's transformed in your 12 years.
Well, back then, when the retail data center industry was just getting going and establishing its footprint,
you know, they were needing to capture business from the enterprise.
And convince the enterprise companies that it was better to use them for their data center space rather than build their own and operate their own data center space. And one of the challenges that was happening at the time was having all those folks that had spent,
you know, 10 years, 15 years, 20 years designing and building data centers for specific companies
and operating them for specific companies to trust anyone outside of them as a place to house the IT. So that was a big challenge then.
And we worked with some of the co-location providers at the time and talking to them about
the tier standards, because there were some that were not really adhering to really any standards at all. And they were trying to learn
the ropes as well. It was a burgeoning industry. And so we started talking to them about the tier
standards and tier certification and things to prove that they were designed, but also
compliant and performed to those tier standards. So Chris, did the concept of tiers, is that an
Uptime concept? Is that where tiers started, or was there already a notion? Okay. So putting that
title, you're a tier two, tier three, tier four data center, started at Uptime. It did. It started
at Uptime Institute. And it was one of those things that, you know, Uptime Institute had been talking about tiers since before 2006.
Okay.
And there was a lot of discussion there.
And Uptime Institute had been involved in the original, I guess you could say,
birthing of tiers well before that.
But they were starting to commercialize it, get the market and the
industry familiar with the tier standards, tier requirements, things of that nature.
And so when I came on board at the Uptime Institute, I'd had exposure with tiers before then.
And when I came on at the Uptime Institute, we talked to and convinced a lot of players in the
co-location industry that the way to communicate to enterprises that their facilities were quality,
that they were designed and built to rigorous standards was to, you know, use tier certifications for that.
And so that was what was going on at the time with co-location data centers.
And, you know, things have changed so much since I first started
in data centers, since I first started at Uptime Institute. You know, when I started at Uptime
Institute, there were four people delivering the technical work and they were all U.S. based.
Today, we have 34 engineers scattered across 13 different countries and still growing. So it's definitely changed a lot.
But the data center industry has changed a lot.
When I first started, chilled water plants were the norm
because the electrical power that was required to run a chilled water plant
was about 25% of what it would be with direct expansion. And so the chilled water
plants were all customized. The people that were operating them were well-trained and had to be
well-trained and were highly experienced. You know, when I got into the industry, most of the
mechanics and the chiller operators were, you know, they were in their late forties
and had been doing it forever. In fact, when I was responsible for operating data centers,
one of the things that I did was stole people from the hospitals because they understood mission
critical and hospitals had the same systems as data centers. But over the years, what we've seen
is as technology has improved, the direct expansion technology, as well as using evaporative cooling
and other approaches, has brought the cost of using direct
expansion down to pretty close to what a chilled water plant can run. And so then that little
elevated cost of direct expansion in terms of energy costs can be offset because the rigorous
skill sets of the operators are not quite as much. You know, if you're having to worry about running chillers, you got to worry about flows, hydraulics, pressure differentials, temperature differentials, dress expansion.
Most of the time, you know, you tell what set point you want it to be.
You tell it to run. It runs.
And if it doesn't run, well, then you have to have a technician anyway, because you're dealing with refrigerant gases and other things that require special licensing to deal with. So that's been a major change that we've seen in the industry in the
last 10 or 15 years. It's just the move to large-scale direct expansion plants.
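To make that energy comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The IT load and COP (coefficient of performance) values are assumptions chosen only to illustrate the ratios Chris describes, not figures from the episode.

```python
def cooling_power_kw(it_load_kw: float, cop: float) -> float:
    """Electrical power (kW) needed to reject it_load_kw of heat at a given COP."""
    return it_load_kw / cop

IT_LOAD_KW = 1_000  # hypothetical 1 MW critical IT load

# Assumed, illustrative coefficients of performance -- not measured data.
scenarios = {
    "chilled water plant (assumed COP ~6.0)": 6.0,
    "legacy direct expansion (assumed COP ~1.5)": 1.5,
    "modern DX + evaporative assist (assumed COP ~4.5)": 4.5,
}

for name, cop in scenarios.items():
    print(f"{name}: ~{cooling_power_kw(IT_LOAD_KW, cop):.0f} kW to cool the load")

# With these assumed numbers, the chilled-water plant draws roughly 25% of the
# legacy DX figure (~167 kW vs ~667 kW), while modern DX narrows the gap --
# the shift Chris describes.
```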
So before we get too far down the technological changes, especially since you were there when
tiers became a thing.
Let's go back. It's kind of funny, a little bit like Coca-Cola or Kleenex, right? Everybody just
says Kleenex and what they mean is tissue paper. When people say tiers, I mean, that's an uptime
thing, but everybody knows, right? I see it in RFPs. I see it in any communication with brokers.
They say, hey, here's the tier. Will you walk us through? I don't even know if I can even begin to
say what a tier one data center is. Just quick bullets, tier one, tier two, tier three, tier four.
What are the major differences? The tier rating system has four tier levels,
one through four. They're progressive, which means that the subsequent tier levels have all
the requirements of the previous tier levels and then add some on top of it. The first three tier levels are about providing increasing levels of opportunity for maintenance
without impacting the critical load.
So if you start with tier one, a tier one data center has just enough of everything.
So just enough on-site power production, which is typically engine generators,
just enough cooling capacity, just enough UPS capacity for whatever load that
the data center is going to operate. So you could have a single UPS, a single chiller,
a single engine generator, and that's tier one. Tier one provides no opportunities for
maintenance without impacting critical load. Then you step up tier two, which is redundant
capacity components.
So if you think of a data center, all systems have capacity components.
So a chiller is a capacity component, something that creates the capacity.
The piping and the pumps are distribution paths.
All right.
Same thing in electrical.
The UPS system itself is a capacity component, but all of the cabling, the PDUs, things of that nature, are distribution paths. Tier two requires at least one redundant capacity component for all systems. So you'd have
to have at least one redundant chiller, one redundant air handler, one redundant UPS.
It does still allow a single distribution path. So where tier two steps up is you have the
opportunity to conduct maintenance on your capacity components
without impacting your critical load, but the critical distribution path is still a single path
and doing any work on the critical distribution path will still impact your
critical load. So then you get to tier three, which has all the requirements of tier two. So
you have redundant capacity components, but tier two only has a single distribution path.
Tier three requires redundant distribution paths. So tier three gets into full concurrent maintainability. So every capacity component, every distribution path, every system touching a critical system must be able to be isolated for planned activities.
And those planned activities could be maintenance, upgrade or replacement.
And it has to be able to be isolated without impacting the critical load. We have redundant
capacity as well as redundant paths, so I can do all the work without changing delivery to my
critical systems. That, I think, is the key designation for tier three. So hit us with tier four, Chris.
Okay. And tier four, so tiers one through three is about increasing opportunities for maintenance
from no opportunity for concurrent maintenance without impacting critical load, which is tier one, all the way through tier three, which is full maintenance
without impacting your critical load. Tier four adds the idea of fault tolerance.
So if you think about it, tier three is about planned activities. You plan to go perform
maintenance or do an upgrade on a system. Well, Tier 4 adds in the idea of an unplanned activity, which is a fault or a failure.
So with Tier 4, the requirement basically is that all systems must be able to respond to a fault or failure
without operator intervention and without impacting critical load.
So if you think about it, if you're running a chill water plant
and you lose a chiller and the chiller spins offline,
the system has to detect that the chiller has been lost,
start the redundant chiller up, and continue to serve the load.
And it's without operator intervention.
It's all automatic, autonomous response.
And so Tier 4 gets into being able to deal with any single event, whether it be a
planned activity or an unplanned activity. Got it. All right. And there's the four that you hear
thrown around pretty casually in our industry today, which is pretty incredible. You were there
in the early days. I mean, tier four, tier three, pretty casually mentioned standards that everyone largely has their arms around or at least thinks they do.
So I appreciate the refresher.
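As a rough illustration of how those requirements stack, here is a toy Python sketch. It is not Uptime Institute code and leaves out nearly all of the real Tier Standard's criteria; it only mirrors the progressive structure Chris outlines.

```python
from dataclasses import dataclass

@dataclass
class Facility:
    redundant_capacity_components: bool  # spare chiller / UPS / engine generator, etc.
    redundant_distribution_paths: bool   # second piping or cabling path
    concurrently_maintainable: bool      # any component can be isolated for planned work
    fault_tolerant: bool                 # autonomous response to a single fault or failure

def tier_level(f: Facility) -> int:
    """Return the highest tier (1-4) whose stacked requirements are met."""
    tier = 1  # Tier I: just enough of everything, no maintenance opportunity
    if f.redundant_capacity_components:
        tier = 2  # Tier II adds redundant capacity components (single path allowed)
        if f.redundant_distribution_paths and f.concurrently_maintainable:
            tier = 3  # Tier III adds redundant paths / full concurrent maintainability
            if f.fault_tolerant:
                tier = 4  # Tier IV adds fault tolerance without operator intervention
    return tier

print(tier_level(Facility(True, True, True, False)))  # -> 3
```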
All right.
So the business has grown incredibly.
You're in, I think you said, 13 or 14 countries.
Talk to me a little bit about as the industry, and I liked at the beginning, Chris, you talked about we had to convince enterprises that they could put their computers somewhere else without a doubt.
Had to convince them that it could be run safely, securely, reliably in another facility.
What is it that Uptime is helping providers and customers with today?
I certainly get those early days of I'm not sure I'm ever going to let these servers out of my building.
That question seems to have been answered.
What challenges are you guys answering for people today?
Well, we still answer some of those same challenges.
A lot of our tier certification work is about helping people to ensure that their facilities are designed to provide the availability and resiliency that they
need. If you think about a lot of companies, a lot of companies don't hire their own engineering
staff. When I worked at American Airlines, I was on a team of engineers that the company actually
employed. And a lot of companies just don't employ those detailed design engineers anymore.
So one of the things that we do is we help companies that are building data
centers to ensure that they're getting what they paid for when they engage with design engineers
or construction companies. So we ensure that their designs are tier compliant. We ensure that the
facility was built to the design, but also performs to it and meets all the tier requirements.
We help clients with understanding how to operate
their data centers, because as you know, designing a data center to the highest standards only gives
you the opportunity to meet your availability needs. But you have to, you know, the operations
is where that investment is realized over time. So you invest in a large facility with high quality systems and
well, a good design. But if you don't have a good operations team, you're not going to
realize the availability that that facility can give you. So we help them with that as well.
We're also helping clients understand even non-tier rated facilities, existing facilities,
we help them understand what the risks are.
So we help them to look at their operations teams, their actual facilities, and help them understand what that facility can give to them over the long haul, how it's going to help them
meet their specific business needs, but also what those risks are and where they need to
kind of plug some holes of those risks.
So that's where we're at today.
Gotcha. Good stuff.
So, Chris, as you guys look at the industry and you got a great view on where our space is going, I'd love it if you'd tell me, hey, Raymond, one is no one's talking about this, but we ought to be.
So with that, no one's talking about it, but we ought to be.
What would fit in that category for you? No one in the industry is really talking about it, but I think
this is going to be something we're thinking about in the future. I'm hitting you with that one by a
little bit of surprise. And then everyone's talking about this, and I don't think it's as big a deal
as it is, as everyone's making it out to be. Both of those two categories. One that few people are talking about,
that I think they need to be talking about, kind of goes hand in hand with the example I would have
for both of your questions. They go together. So sustainability is a big industry topic today,
and everybody's looking at sustainability. And countries are looking at how to get their grids
more sustainable and how to advance their grids to have more renewables, uh, and data centers are doing the same sort of thing.
In that, though, there's, you know, we're going to be in periods where there's going to be
a little bit of instability in the grids. Texas saw that this year, right? I mean, they had some
problems because we had what I call the great freeze. So cold temperatures that no one had seen
in a long time, no one expected. Even up here in Oklahoma, where we had 19 degrees below zero,
something we hadn't seen in a long time. And we see some colder temperatures than Texas does.
But it created some moments of instability in the grid because, of course, there was less power available than there was load.
Right. And so one of the things that, you know, everybody's talking about sustainability and it's a big deal.
And along with sustainability, you have engine generators and everybody looks at the engine generators from the data center world as being big polluters.
And the reason they look at them as being big polluters is when they operate, sure enough, they do pollute into the air.
Right. But of course, there's processes to help reduce that, such as urea treatments, catalytic converters, those sorts of things.
But most people don't realize that they only pollute
when they run. And they don't run that often, but they're a good insurance policy for the business,
right? Because when you need them, you need them. And if you didn't have them when you needed them,
you'd be wishing you did. Well, one of the things that we're seeing, and what I'm thinking,
and only a few people are talking about this, is that we can't move our energy grids in the U.S. or any other country, any modern country, from where we stand today straight to a green, carbon neutral, resilient grid.
There's going to be problems along the way, right?
I mean, we understand how to operate with coal-fired plants and with nuclear plants and with natural gas-fired plants because all that's baseload generation.
Renewable is not baseload generation. So over time, we're going to try some things, and we're going to stub our toes. And one of the things that I think the industry needs to be looking at and sort of touting, if you will, and we need to be ready to do, is when there's instability in the grids because there's not enough capacity for the load.
Data centers consume a huge amount of power.
We all know that in a very small footprint, but we have our own systems for generating power on site when we need to for emergency situations.
And I think that data centers can serve a big role in the sustainability world, helping us
figure out how to get the right, what is that right balance between baseload generation
and renewable sources on the grid? And how's that look? What are the control systems? How do they need to
change? How do those algorithms need to change? Because when you stub your toe and there's a time
when for some reason the power companies didn't anticipate a heat wave like you'd have in Texas
or a sudden cold snap, which is going to put a lot more load on the grid, data centers can help out because
they can pull huge amounts of load off of the grid and help to stabilize the grid. But they can only
do that with engine generators and other reliable sources of power that we serve today. And so I
think that that's a big piece that we're going to be spending a lot of time, you know, the data
center industry can help that out. So I think the thing that people are talking about is sustainability.
What they're not talking about is how we can help not just our industry, but the
larger society as a whole get to where we want to be. Yeah, I want to make sure I understand what
you're saying. What I think you're saying is, hey, Raymond, everyone talks about sustainability. And
yes, that's a thing. And they look at the data center industry and
say, wow, you guys eat up a lot of juice, right? So let's think about the juice for a state as one
set of capacity. What I think I hear you saying is, hey, when a system is strained,
when the state of Texas's capacity is strained, all the data centers conceivably could go offline and run on their generators and produce their electricity on site, thus providing relief to the grid.
And when's the right time to do that?
And how do you do it?
How do you compensate them for that?
How do you think about that?
Is that what I hear you saying?
That's exactly what I'm saying. You know, the stint that I had with the petroleum industry at Citgo Petroleum, Citgo is a company that has roots, you know, years and years in the past. And they have
a lot of power contracts that go back to the '50s and the '40s and things of that nature. And they had
set asides with the power companies because they run huge pumps, right? 2,000 horsepower pump motors, all powered by electrical power.
And they would get, they could get at their facilities called by the power company and
say, hey, look, we need this much load pulled off the grid for this amount of time.
Now, if you're just pushing product through pipelines, you're just pushing that product through the pump lines.
Well, if you have to wait
an hour, that's okay.
So they'd pull the power off the grid. That would free up capacity.
And then the power companies would sell that on the spot market. The difference between what Citgo was paying per kWh at their contract and what they would get on the spot market was a lot.
There was a big delta there. Sure. Because it was a high demand window. Yeah.
Yeah. And Citgo would get a small cut of that action. And so it did a couple of things. They
were good corporate citizens, so they could help free up power when power was really required.
And they got a little bit of reward from it from the power company because they got a little cut of the profits off of that.
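To put hypothetical numbers on that arrangement, here is a small Python sketch. The megawatts, prices, hours, and the 10% revenue share are invented for illustration; real demand-response programs vary by utility and contract.

```python
def curtailment_value(shed_mw: float, hours: float,
                      contract_price_mwh: float, spot_price_mwh: float,
                      facility_share: float = 0.10) -> dict:
    """Rough value of shedding load onto on-site generators during a grid event."""
    delta = spot_price_mwh - contract_price_mwh   # the "big delta" per MWh
    gross = shed_mw * hours * delta               # utility's upside from reselling
    return {
        "gross_delta_usd": gross,
        "facility_cut_usd": gross * facility_share,  # the "small cut of that action"
    }

# Hypothetical event: shed 10 MW for 4 hours, contract at $50/MWh,
# spot spiking to $400/MWh, 10% of the delta shared back to the site.
print(curtailment_value(10, 4, 50, 400))
# -> roughly $14,000 gross delta, with ~$1,400 back to the facility
```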
And so what I'm saying is that, you know, and it's already happening in some places where data centers can sign up with power companies to be part of that set aside.
And if a power company sees, hey, look, my grid's becoming unstable because I don't have enough capacity. It's going to take me a while to spin up some more turbines or something.
Then they can call up the companies that are on that list and they'll pull that load off of the grid.
Well, that helps to stabilize the grid because you're now starting to match capacity to load and give them time to respond with some of their base load generation. And so I think that's
an approach that we need to be looking at as an industry partnering with the utility companies,
because as the utility companies go to modernize the grid and transition how we're producing power,
we're going to have times where we're going to stub our toes, right? I mean, living in Texas,
I remember those times where you'd wake up in the morning and it would be,
you know, 50 degrees in the morning,
but by the afternoon it was in the eighties or nineties and everybody comes
home and throws on their air conditioners because they didn't have them on
previously. Right. Right. And you'd get, and you get times of brownouts. Yeah.
And, and that was with years and years of experience of baseload generation with coal
fire and natural gas fired power plants. Now let's add some wind into it. Let's add some solar into
it. And you've got a whole different system now. Yeah. Yeah. And I think that's one of the,
we're not going to get lost in electrical engineering, but I do think that's one of
the things that people struggle to understand is that storing electricity is hard. You generate it when you need it, largely. I'm oversimplifying.
And you can tell a coal-fired power plant or a nuclear-fired power plant or a hydro plant,
you can tell it to run or not to run. You can't tell the sun to shine, and you can't tell the
wind to blow. It blows when it wants, and it shines when it wants.
So you don't have the on-off switchability of generation that you have from traditional
power facilities. So it's, like you said, when the demand comes, if you don't have the supply,
we're breaking the system. Well, and the other part of the problem is the wind can stop blowing, or you can get heavy overcast all of a sudden.
So you can reduce, you can lose some of your capacity from your renewable systems, right?
And it takes time to bring those large turbines online, get them spun up producing power.
You know, if we think in our data centers,
you can have a quick start engine generator online producing power in 10 seconds.
Right.
Not so much at the power company level.
Right, right.
The bigger it gets, the longer it takes.
No question.
Very cool stuff.
Well, Chris, this has been awesome.
I really appreciate you spending a little time with us and hanging out and talking.
Not only are we good friends with the Uptime Institute as an institution, but also with lots of folks there.
And we're grateful to be partners with you guys and grateful to have you come tell us a little bit about your story, as well as help people understand what Uptime does in the space and how you guys are really, I think,
shepherding the industry and advising people on how to, I like the way you
said it, hey, you paid for this design, you paid for this deployment, is it actually there and does
it actually work? Because at the end of the day, all people want to know is, hey, are my servers
on and can I talk to them? And you guys help folks do that. And we appreciate it and appreciate the
standard you guys set in our industry. It's been a pleasure. Chris, thank you so much.