In The Arena by TechArena - Equinix & Solidigm on the Real Cost of AI Infrastructure Demands
Episode Date: October 5, 2025. Equinix's Glenn Dekhayser and Solidigm's Scott Shadley discuss how power, cooling, and cost considerations are causing enterprises to embrace co-location among their AI infrastructure strategies....
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein.
Now, let's step into the arena.
Welcome to In the Arena. My name's Allyson Klein, and we are back for another episode with Equinix and Solidigm.
That means Glenn Dekhayser, Global Principal Technologist at Equinix, and Scott Shadley, Leadership
Narrative Director at Solidigm, back in the house. Glenn and Scott, why don't we just start with
a brief reintroduction to the audience for those who did not catch our first episode. Glenn, why don't
you go first? Sure. Glenn Dekhayser, as Allyson said, I'm a global principal technologist at Equinix.
I'm one of a team of, I believe it's 12, 11 or 12 at this point around the world. All of us have
our own subject matter expertise topics. Mine happens to be storage and data as it's applied
around the world by our customers.
And I also get involved with a lot of our tech manufacturers as well to help advise them
on their storage products and how they are used at Equinix.
And Scott?
Hi, guys.
I'm Scott Shadley.
I'm the Leadership Narrative Director at Solidigm.
My job is to help people understand how to use technology, involving both
the products we make as well as the whole ecosystem, along with partners like Glenn.
Our topic today is the AI surge.
and I can't wait to dive in.
Question number one goes to Glenn.
We spent a lot of time talking about AI advancement on tech arena,
but from an infrastructure perspective,
can we just foundationally discuss, Glenn,
how AI workloads are different from what came before?
Two words, power and heat.
It's about as simple as you can make it.
In the past, we talked about GPUs.
GPUs used to be the passengers on the bus.
Now they're driving the bus, right?
They've created tremendous and very well-documented scarcity for power around the world, right?
I think Eric Schmidt just testified something like 290 gigawatts of power is going to be required
to power all of the data centers that have been registered as being built. When you consider that
in the U.S. the average nuclear power plant puts out one gigawatt,
the math doesn't work.
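To spell out the arithmetic behind "the math doesn't work," here is a tiny worked version using the figures as cited in this conversation (the numbers are Glenn's, not independently verified):

```python
# The scale Glenn describes: cited AI data center power vs. typical nuclear plant output.
required_gw = 290   # figure cited in the conversation
per_plant_gw = 1    # roughly one gigawatt per average U.S. nuclear plant
print(f"Equivalent new one-gigawatt plants needed: {required_gw / per_plant_gw:.0f}")
# -> 290 plants' worth of new generation, which is why "the math doesn't work."
```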
But inside the data center, for an enterprise, right, if you start to get into these more dense GPUs,
the later generation ones that have been coming out,
enterprises are going to have to deal with direct-to-chip liquid cooling.
There's just no way around it.
We've been able to do some stuff with air cooling.
But going forward, this need for direct-to-chip liquid cooling
requires additional infrastructure, additional management operationally.
And this changes the TCO and the ROI for all these kinds of infrastructure investments.
Now, if you think about that fundamental change, one of the things that I want to think
about is what are enterprises actually trying to achieve for their businesses with AI?
And how is that ambition translating into new infrastructure challenges?
Scott, do you want to lead on that one?
Yeah, it's an interesting one, right?
Because the whole thing is everybody needs to use AI in their architectures.
You have to have AI for this or that.
Copilots everywhere, if you will, if you're a Microsoft user.
But when it comes down to the infrastructure around it, there's a lot of stuff that's been
thrown out there about was it model training or is it inference?
We can get into a whole ball of wax around the different types of architecture required for each of those.
But Glenn hit it on the head. It's all about how to manage the heat and power that are associated with it.
And when you get into liquid cooling and you get into immersion cooling, you create a whole new infrastructure challenge that we have to work through as in the ecosystem to understand the ways to make that work.
Glenn, anything to add?
Notice you didn't bring up training and tuning, which is interesting.
And rightfully so, most enterprises usually don't need to train unless they're doing it
for a specific domain or expert models or distilling,
as we're seeing a lot.
But tuning is really just training after the fact,
kind of how I view it.
And it requires a certain kind of infrastructure with GPUs
where they are all talking together.
And that's where you get the NVLink networking going.
And inference is more of a load-balanced GPU paradigm,
but you still have companies that are using
the same kind of clusters to do inference and training.
But the one difference is that all the enterprises
will all need inference, and they all need
to get their private data into that inference engine, either by augmentation, RAG, and now with
the agentic tooling, right? We start to see this concept that's being called distributed AI.
You're going to have inference engines all over the place. You're going to have these MCP tools
being referenced over the internet, which is probably the wrong way to do it, because over the internet
you have no control over the network, or private MCP servers, like even within an organization
serving to itself. Both private and public interconnectivity to AI
services, both compute and data-oriented services, is going to be key to success.
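To make the idea of getting private data into the inference engine concrete, here is a minimal, hedged RAG-style sketch; the document store, embedding function, and model call below are illustrative placeholders, not any specific Equinix or Solidigm tooling:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed(), DOCS, and call_llm() are hypothetical stand-ins for whatever embedding
# model, private document store, and inference endpoint (cloud, private MCP server,
# or on-prem engine) an enterprise actually runs.
import numpy as np

DOCS = {
    "policy.txt": "Employee travel must be approved by a director.",
    "sla.txt": "Inference latency target is 200 ms at p95.",
}

def embed(text: str) -> np.ndarray:
    # Placeholder: hash-seeded pseudo-embedding so the sketch runs end to end.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank the private documents by similarity to the query and return the top k.
    qv = embed(query)
    return sorted(DOCS.values(), key=lambda doc: -float(np.dot(qv, embed(doc))))[:k]

def call_llm(prompt: str) -> str:
    # Placeholder for the inference engine; in practice this is an API call.
    return f"[model answer grounded in: {prompt[:60]}...]"

question = "What is our inference latency target?"
context = "\n".join(retrieve(question))
print(call_llm(f"Context:\n{context}\n\nQuestion: {question}"))
```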
Now, that gets into pressure points within the infrastructure itself.
And when we look at AI training and inference, they introduce radically different power, storage,
and interconnect demands. And we've talked a little bit about that with the previous answer.
But Glenn, how are you preparing your facilities to meet these needs?
First of all, a lot of AI infrastructure can still be satisfied with air cooling.
It's very important to understand.
So we've got a lot of organizations.
It's kind of a funny thing.
Sometimes they'll want the latest, greatest, all direct liquid cooling, until they get to see the quote.
It's like, maybe I can get a bunch of L40s instead.
They're cheaper, or H100s, and I can go deal with that.
And then we've got a lot of organizations that have even said, interestingly enough,
I don't want anything more than L40s.
It's all I need, for instance.
I'm not doing large language models.
I don't need to stuff it in there.
I can use the smaller models, the 8 billion or the 70 billion parameter ones,
and they're fine for what they need.
And so you'll start to see, like, the service providers and the AI platforms, they're using the large language models, the big ones, and they'll have lots of big infrastructure in these dedicated data centers for these GPU farms.
And that's all good.
And I think that's where Nvidia was talking a lot at GTC about with Vera Rubin Ultra, it's going to be kind of nuts when you're talking about 600 kVA in a rack.
But most organizations aren't going to need that.
And they're not going to need the efficiencies gained by that.
We still see a lot of customers that want the H200-based factories, and I was talking to some this week, even.
It's a very popular option.
Now, it's only once density gets above about 40 kVA a rack where liquid cooling becomes an issue, right?
Where you've really got to do it.
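As a rough, hedged illustration of the density math behind that threshold (the server counts and wattages below are illustrative assumptions, not vendor specs or Equinix guidance):

```python
# Back-of-the-envelope rack power estimate; all figures are illustrative assumptions.
AIR_COOLING_PRACTICAL_LIMIT_KW = 40  # rough per-rack threshold Glenn cites (~40 kVA)

def rack_power_kw(servers, gpus_per_server, gpu_watts, other_watts_per_server):
    """Total rack draw: GPU power plus CPU/fan/NIC/storage overhead per server."""
    return servers * (gpus_per_server * gpu_watts + other_watts_per_server) / 1000.0

# Example: 8 GPU servers, 8 GPUs each at ~700 W, plus ~2 kW of other components per server.
kw = rack_power_kw(servers=8, gpus_per_server=8, gpu_watts=700, other_watts_per_server=2000)
print(f"Estimated rack draw: {kw:.0f} kW")
print("Likely needs direct-to-chip liquid cooling" if kw > AIR_COOLING_PRACTICAL_LIMIT_KW
      else "Air cooling may still be workable")
```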
From our perspective, as far as how we're preparing customers for that, it obviously means we have to get water to the customer cage or the customer rack within our data center.
We have a multi-tenant data center.
So we don't have the luxury of having one customer take up an entire data center, except for our xScale data center
types, which is a different thing. But for our retail data centers, that takes some time to do.
But we've identified 100 sites around the world where we've got liquid cooling available to
accommodate a customer's requirements. Also, we've trained a team, a subset of our global
architects and engineering team, our GTST team, that's the team I'm on, that are SMEs in liquid
cooling. So we have folks all around the world who are like ready to have these conversations,
not just with the customers, but also working with the tech manufacturers and the partners
that are bringing the outcomes to bear to make sure that you're not getting any surprise at its deployment time.
You know, it's funny, when you were talking it made me think about when I was marketing CPUs,
and it was really easy to get myopic on everybody must use the top-bin SKUs because that set
the market, right, when we were setting world records. But the reality is people are buying across the
stack and they're utilizing technology in different ways. And I think you bring up some good points.
I guess the follow-up to that is how pervasive are we seeing this?
You know, are you seeing AI as a driver of change across your customer base?
Or is this still isolated to particular verticals or innovators in different spaces?
And where do you think we're going with that?
It's everywhere, every customer, every conversation.
Even the conversations where they're saying, okay, look, this isn't about AI.
We're not ready for AI yet.
Halfway through the conversation, we're talking about AI.
because they're going there.
Look, there are proven efficiencies, especially in coding, right, and content creation,
where if you're not even evaluating it, if you're not down the road on this and you haven't created
an AI center of excellence within your organization, right?
You're putting yourself at a competitive disadvantage, right?
You can get out of Gen AI, by the way, and into specific domains, right?
So if you're talking about verticals, like pharmaceuticals, they're doing all sorts of drug
development and DNA folding, all that kind of great stuff.
That's not Gen AI, but it's still AI, same kind of infrastructure.
and we've had HPC for quite a long time that uses this kind of infrastructure.
So this is not new for Equinix and it's not new for the industry,
it's just the sheer volume and ubiquity of it across every vertical for the Gen AI thing.
It's just really, I don't want to say democratized,
but it's just that everybody has the need for it now.
So we're just seeing it all bleed together in ways we didn't before.
And every conversation has some angle to it, right?
Whether you're going to be a consumer or provider or some middle service provider for data
and we've got a lot of companies that are working on our platform,
that all they do is prepare data for RAG or for training,
and that's their entire business model.
They're locating at Equinix so they can have access to the customers
and then easily to the cloud and to the service providers.
So the ecosystem does matter,
but it's every conversation at all the different layers of the AI business.
It's a real asymptotic moment,
and I think that in early conversations about AI adoption
and proliferation in the enterprise,
a lot of people thought,
Oh, this is a moment that's just going to push even more workloads into the public cloud.
But it's really not playing out that way.
We're seeing a lot more interest in co-location.
Why is that, Glenn?
It's expensive.
It's really expensive.
And also, besides that, the rate of change in the AI space,
this rate of change has never been seen before.
So the primary public cloud provider that you chose maybe a couple of years back and went all in on,
they may not have the AI services or the capacity that your company is looking for.
I don't want to name the clouds because I'll make somebody mad.
But you might be in one service here, and these other guys have the shiny object,
or the model you like isn't available there, or a specific service or a data platform,
or maybe none of them have it.
And you've got to do it either on-prem,
maybe a little for sovereignty, right, and other things,
or you have some SaaS provider you want to get in the game, right?
AI does start with the data platform and that's probably driving most of that stuff on-prem
because of the privacy and sovereignty and the performance stuff.
The ability to use that data on all the different platforms,
it's becoming kind of table stakes to have control over that data
instead of having it locked in a corner of one cloud.
I'd say all this kind of helped create the conditions
that bring equipment back into interconnected colo, right?
So you can have access to all of these services and clouds.
So, Scott, what are some of the advantages
and new responsibilities of co-locating AI workloads
in hybrid environments that you're seeing customers talk about?
It's very interesting to your point.
Glenn's done a great job of giving some great detail here and even in the previous conversation that we had.
It's really around, to his point, the speed.
I've had the luxury of having conversations with people that have worked at Los Alamos National Labs in the HPC space on AI before we called it AI, to his point.
It's not really net new.
It's the speed at which it's increasing the volume that it's been driving.
And there are slow movers,
there are fast movers, and even in the public cloud space, a lot of them are fairly slow moving
on net new infrastructure that everybody can get access to. And so when you're trying to do this
infrastructure where you're creating the next level of AI, you've got to be able to have multiple
points of access to it. And you really don't want to go to every cloud out there, right? You want to
pick the one that you like to work with, but you need another instance somewhere else, either
closer to the data, like you said, with sovereignty involved, or colo because I want to be able to look
at my data in multiple places because we're all on a 24-hour time zone now, right? Nobody stops
overnight like we used to way, way back when. So there's always someone who wants access to the
data and needs to be easily reached. And to Glenn's point, access to the internet is not always that
great, or has its issues, and we see downtime coming up as a big problem for a lot of infrastructure.
And the more and more we rely on these AI operations and these AI tools to assist us, the more
challenges we start to introduce when it comes to things like that. And you see that with
airline companies shutting down, ground-stopping, because they can't do weight and balance with their
AI engine because some server somewhere went out. At the end of the day, it's going to be a
server that crashed. I thought it was DNS. I think it was only DNS. Yeah. So those are the
kinds of things that we're looking at when we're dealing with this: you just can't put it in
one spot. You never really wanted it there in the first place, but you've got to balance it
appropriately. You've got to load balance it, to use the term that Glenn used, across the
infrastructures that are available.
Glenn, anything you want to add to that?
Yeah. So from the advantage perspective, I'd say it's agility, right? It comes down to agility,
both in the ability to innovate and iterate more quickly. I use those terms a lot,
innovate, iterate, then operate, right? So innovate and iterate. Also in the ability to respond
to both regulatory and macroeconomic changes in the environments, right?
So in order to be able to respond, to be agile in both ways, right, you need to have that one actionable copy of your data platform on equipment you control in locations you can access.
It's my mantra.
And you need to have that place where you can connect to as many of the services and locations as possible in order to respond to those changes.
If you fail to do this, it comes to the responsibility part of this, right?
You'll be watching others respond well.
Meanwhile, you're getting victimized by this change.
And you have a choice with change.
You can either be ready for it or be victimized by it.
It's one or the other.
And we've seen plenty of companies that kind of saw their market leadership just go away because they weren't ready.
We're seeing one right now in the chip industry, right?
Yeah, that's true.
Glenn, I have a question for you.
I'm going to change topics a little bit.
We all know that AI is compute hungry.
There's no question about it.
You need GPUs to be churning that data and ensuring that you've got time to results.
How do you balance as an IT organization?
How do you balance efficiency goals and even broader organizational sustainability goals
with the reality of increased demand for power and cooling to drive this technology forward?
I can only really speak from that perspective on this, right?
And really, I would say we don't balance at all.
Our sustainability goals are going to be the same.
And a corporation's, an enterprise's, sustainability goals should be the same
regardless of whether they're doing AI or not.
And our goals will remain as aggressive as ever throughout the AI craze.
I mean, we've worked extremely hard to cover our power consumption over the years with sustainable energy, right?
Whether we use it directly in our data centers or we cover the utilization of power in the data centers
with creation of other renewable energy sources elsewhere to put back into the grid.
We've set net zero goals in the EU by 2030.
And we're not alone in this.
There's a lot of organizations that are doing this.
And they're not changing.
People are not changing. Enterprises are not changing their sustainability positions, regardless
of any kind of geopolitical changes. Everyone seems to be staying on their goals. So I would say that Equinix
has been an absolute leader in sustainability. We intend to sustain that. And we haven't seen from our
enterprises anything different. Some of them are doubling down in Europe and APAC and Canada,
just really strong pushes to remain in that sustainable kind of world. So I don't believe that there's a balance
because of AI. It doesn't change anything. Now, the interesting thing is going to be when we
get all this new power generation. Typically, nuclear power is not considered sustainable. It's
renewable, net zero, but it's not sustainable because it uses water. So we'll see how that all
plays out and how that's treated in all of these efforts, because like I said, 290 gigawatts
are required, and we're not even close to that. So it's got to come from somewhere. So we'll see
what that means from an overall power generation standpoint.
But it doesn't mean an enterprise's goals
or certainly not Equinix's goals, are going to change.
You know, and I think that even if we take a step back,
half step back from sustainability and just talk about energy efficiency,
there's so much that the industry has already gone down the path
in terms of delivering, and there's always an opportunity to do more.
Scott, when you think about that and you think about facility design,
compute architecture, and even closer to your own world,
storage optimization,
how do all of these things go into mitigating
AI infrastructure's power demand
and ensuring that the AI infrastructure deployed
is actually delivering the most compute capability per watt?
It's a very interesting one.
And to your point, this is a story that started well
before the AI surge, right?
One of the key things that Solidigm has been very proud of,
if you go back to my favorite, right, the storage part of it,
is if you look at when we introduced NVMe devices,
we had this idea that it must fit in
the same box as someone else's box so that we can make these boxes interchangeable and things
like that. And we led the charge along with several other organizations to solve that problem
by giving flexibility to customers. So when you talk about efficiency of design, not every one of the
providers is building exactly the same box. So therefore, a perfect example is one of the form factors
for our drives is the E1.S. It comes in four different flavors now. You have liquid cooled,
which is an introduction from Solidigm, and you have no fins, medium fins, and big fins. And the
reason you have those is to address the air cooling aspect. One vendor needs more cooling
but wants to use less fan, so you put a bigger fin on to satisfy that airflow. So you have things like
that at our product level. And then as you move up the food chain, it's how does that system
architect it? Do we put the drives in the front or put them in the back? Does the rack have top-
down access versus front access? Do you do horizontal or vertical racks? All these kinds of
things are implementable changes that you can address with modern storage infrastructure.
because you're not looking at having to deal with that vibration concern that exists
in your more historical architectures and things like that.
So as we move forward with AI, there's all kinds of different little tweaks you can do to
the networking architectures, how many different ways you do the torus or you do the spine
and leaf, and all those kinds of things impact also what kind of boxes you have to attach to it,
how much redundancy you have, those kinds of aspects.
So let alone just the density of the needs from a memory footprint and HBM footprint to SSD
footprint, that kind of thing.
Glenn, you guys are building out data center capacity all over the world.
Tell me about what we missed.
One fact to remember, and the reason that direct-to-chip liquid cooling has become so important,
is that it can capture and dissipate up to 3,000 times the amount of heat that air can.
So that's great.
The issue is that for the industry, let's say the chip manufacturers, the consumers of
this power, the new efficiency paradigm becomes almost like a hermit crab.
It outgrows its shell, goes and finds a bigger shell, and it keeps growing.
So we've gotten a lot more efficient.
But the tech manufacturers have used the opportunities that these efficiencies create to get you
to go and consume more and more power.
Yeah, it's more efficient.
But the overall power consumed in these successive platforms that are coming out is still
higher.
We're still using more power.
It's like saying, oh, I went to the store and I saved 20%, but I spent $2,000
to save that 20%, when I only would have spent $1,000 before.
You didn't really save 20%.
You spent more overall, but you gained an efficiency on it.
That's kind of what we're dealing with right now.
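A tiny worked version of that shopping analogy, using the dollar figures straight from Glenn's example:

```python
# Glenn's analogy in numbers: per-unit efficiency improves, yet total spend
# (or, by analogy, total data center power) still goes up.
old_spend = 1000  # what you would have spent before
new_spend = 2000  # what you actually spent, even after "saving 20%"
print(f"Total spend went from ${old_spend} to ${new_spend}: {new_spend / old_spend - 1:+.0%}")
# -> +100%: more efficient per unit, but overall consumption still rose.
```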
So then you couple that with the fact that the efficiencies aren't getting better
anywhere near the rate that the resource requirements are accelerating.
That's the other problem, right?
Obviously, liquid cooling brings that big 3,000x for heat. But just from the consumption of power
perspective,
I don't want to say it's completely unsustainable, but I don't think people have
the 600 kVA rack problem yet, not at scale for sure.
And like I said, there's a big gap in what power is available versus what people are going
to want to use.
I'm not sure how this gets answered.
So we are investing. I think we announced something like a $15 billion investment in North America
alone in data centers.
And we're not the only ones.
There's a lot of data centers being built.
I do like to say anybody can build a data center, but not anybody can connect it and
actually make value out of it.
But we'll be a place where customers are going to be looking, no question, to deploy
this technology at an enterprise scale, which could be anywhere from one to maybe ten of these
racks, which is a lot of computing power. If you look at Vera Rubin Ultra, we're solving
the problems. I want to take the conversation into the heart of what you guys care about, which is
the AI pipeline and storage. I think it's very clear that at no time in the history that I've
been involved in tech has storage been cooler and held more cachet than it does in terms of
serving that AI pipeline. AI workloads require rapid access to large volumes of data.
Regardless of what part of the pipeline you're thinking about, from training to fine-tuning to inference, that access
to data is so important. How does storage strategy affect AI performance and cost, particularly
in co-located environments? Scott, do you want to take that one? It's an interesting conundrum
that we're dealing with and playing a little bit more off of Glenn in the comment that I have a fixed
power budget, I'm always going to pay for that power budget. How do I best utilize that power
budget? So I start at the top and work my way down. So when you're talking about looking at the
AI pipeline and I need the stuff that's going to focus on training, we all know it sits mostly in
memory and we have that whole marketplace that's going there, but it doesn't fit at all. It just
never will. It's kind of like the scale. And SSD is never going to replace a hard drive because
there's just not enough volume to do it or a cost to do it. So you put in your performance SSD is
there to balance that at your training or tuning.
side of thing as you need it. And then as you start to push it back out and you get into
these large scale inference clusters and things like that, you start going into more of a capacity
play. And you can then start leveraging infrastructure and partnerships and design techniques where
we've done things where we've shown that we can offload RAG to a drive and not impact your
net performance of the system, even though the existing system was designed with DRAM. So you can
start to look at how you can utilize the resources that are sitting in any given location, whether
it's your direct data center, public cloud that you've paid for, or some variation on a theme,
and re-architect your performance expectations based on what exists there from both the
storage, the memory, and even the compute perspective. And having the right solution is important.
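As a hedged, generic illustration of the "serve retrieval data from the drive instead of DRAM" idea Scott describes (this is not Solidigm's actual offload implementation; the file name, sizes, and dtype are assumptions for the sketch):

```python
# Generic sketch: keep a large embedding table on an NVMe SSD and memory-map it,
# so similarity lookups stream from flash rather than holding the whole index in DRAM.
import numpy as np

DIM, N_VECTORS = 128, 200_000
PATH = "embeddings.f32"  # hypothetical index file living on the SSD

# One-time build step (normally done by an ingestion pipeline).
np.random.default_rng(0).standard_normal((N_VECTORS, DIM), dtype=np.float32).tofile(PATH)

# Query path: mmap the file; the OS pages in only the chunks actually touched.
index = np.memmap(PATH, dtype=np.float32, mode="r", shape=(N_VECTORS, DIM))
query = np.random.default_rng(1).standard_normal(DIM).astype(np.float32)

best_score, best_id = -np.inf, -1
for start in range(0, N_VECTORS, 50_000):   # chunked brute-force scan keeps peak DRAM use small
    chunk = np.asarray(index[start:start + 50_000])
    scores = chunk @ query
    i = int(np.argmax(scores))
    if scores[i] > best_score:
        best_score, best_id = float(scores[i]), start + i

print(f"Nearest vector id: {best_id} (score {best_score:.2f})")
```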
When these things first came to market, when Flash was first introduced, it was one drive
to a rack, right? You had this cache there. Now it's ubiquitous. Flash is storage
and hard drives are archive, is pretty much how people look at it.
And so I don't need super, super, super fast in an inference cluster, but I need lots of data available
at a very fast rate. And that's the difference that we're starting to see as these architectures
are moving forward. Do you guys think that we're coming to a moment where customers are finally
going to fully embrace a holistic view of the infrastructure across compute, storage, and interconnect
as a unified system rather than isolated decisions
that organizations need to make?
I'd say for sure because enterprise customers
are now looking at full-stack offerings
from the major tech manufacturers
who are providing AI factories, right?
So the tech manufacturers are thinking more holistically
on the customer's behalf up front for these,
and it's absolutely necessary.
Again, everything's changing so fast,
so many of the new players and offerings.
A lot of it's in the open source world, by the way.
And for an enterprise, it's impossible to keep track of all this stuff.
No enterprise that is not in that business is going to invest in the resourcing, the people,
the tooling, the security in the open source world to be able to go and make sure they've got
the best in breed and can go and experiment and figure out what's best for their business.
They're going to need their tech vendors to form opinions, do this work for them,
work with global systems integrators.
And those guys have a great history of working with the line of business and helping these
customers create centers of excellence where they can understand AI, go after the right use
cases, work with those tech vendors, and by the way, with the vast ISV ecosystem that's
out there to go and sit on top of those tech vendors, to get the business value out of these
AI solutions, that gets filtered down through the GSIs, through the tech vendors, to get
the actual technology that this will all run on. Now, it's still going to be some cloud.
That's where you innovate, and then you iterate out to something else when you want to get
rationalized. Now that can go on-prem hardware, can go to as-a-service. This is going to be a world
of and, not or, right? So you're going to have everything. This comes back to the conversation where
this is why it's so important for an enterprise to architect for that mobility, that workload
mobility and data mobility while still maintaining governance and security and leverage over your
own data and sovereignty. These are conflicting paradigms. So I think the enterprises will be well
focused on that data portion of it, let their partners, let their tech vendors worry about the
technology of it, and then let the GSIs get their business outcomes done. So the enterprise,
I think from a technical perspective, can take care of the data side of things and then focus on
the business and let their partners and their tech vendors deal with the technology in the
middle. Yeah, I have to admit, that's a very unique spin on it, right? So if you think about it from an
enterprise customer, we have different people that are the actual customer. We have a lot of people
that, when they think about infrastructure, it's a prompt. It comes up on their screen. They're coding away in some open source platform. They don't really understand what's underlying behind it. There are people that do, that need to. But a lot of the immediate people that are really taking advantage of this massive AI boom that we're seeing right now are people that expect hardware to just exist. And so our ability to talk to those people and understand what they expect the result to be helps us put that whole pipeline that we've just been
talking about together, and that makes that infrastructure behind it actually work as expected.
An SLA is all a software guy gets. It's not a hardware license. It's not an OS. It's not a memory
footprint. All that kind of stuff. That's superfluous to them. But we as the people building
that infrastructure behind them need to understand what they're talking about. So if I'm not talking
to those people, they're never going to get built what they really need to satisfy what they're
working on. So it's a fun way to look at how the unification is really top to bottom. And everything in the
middle is everything Glenn talked to, all these other players and pieces that we have to put together.
Now, final question for both of you. First, we'll start with Glenn. When you think about an AI
native data center, what is your vision for that? And what are you building now to support that
future? Okay, well, I can't give too much away. All right. We've recently broken ground on a 240
megawatt data center complex in Hampton, Georgia; that's been publicized. And we do have plans to build more of these
types of campuses that support power-hungry, very dense AI infrastructures, both, by the way,
from a retail perspective, which is how people are used to thinking about Equinix, but as well as
a wholesale perspective, which is what our xScale offering is all about. And all new data centers,
and this isn't just us, but all new data centers or expansions to existing data center
footprints, they're going to have to accommodate these new requirements. They're just going
to have to. Our data center design teams, I work with them often enough. They're the best
in the business. But you're just going to have to wait until these things come up, and visit
and get the tour to get the specifics of how we're going to go and handle this. That's about
as far as I can go on that one. Now, I'll give you something that's more of a softball.
We know that AI is going to continue to advance and become ubiquitous across enterprise
applications and verticals. What do you think enterprises need to change today to support AI at scale
tomorrow?
This one I'll start.
Last time, I forced the last word.
I guess Scott's got the last word on this one.
So first, modernize your network.
Because you need to be able to first, before you do anything else,
take advantage of agile interconnectivity.
That's kind of table stakes before you can do anything else.
Once you've got that done, and we've got lots of customers that we work with on that,
then you have to get an understanding of your data, all the places that it's at,
consolidate your data platform, get it down to as minimal a footprint as you possibly can,
as homogeneous a footprint as you can.
And then at the end of the day,
make sure you've got physical control
over at least one actionable copy of your data platform
in a location you can access.
Once you've accomplished that,
all your other options are now open
and you can go wherever you need to go.
So that's why I would say first,
modernize that network,
discover your data,
get that data platform consolidated
and homogenize as much as possible,
and then make sure you get that one copy
of that data platform
where you can use it pretty much anywhere
with that modernized network.
Scott, you want to take us home?
Yeah, I would say that one of the things that you need to change today
is just your expectation.
At the end of the day, we all are realizing that anything we build today
is outdated tomorrow, and it's literally becoming that fast.
To Glenn's point, infrastructure around what you think your net ownership is, is key.
And so making sure that every way that you talk to your data
is as good as it can be, or at least
flexible enough to integrate something new.
I spoke to a customer recently.
They're like, I love some of these new technology things we're talking about.
I simply can't access it fast enough, right?
So back to the modernized infrastructure.
But then start to think through the right balance of the architecture.
I don't need fully loaded this and fully loaded that today because we know tomorrow it's
going to be something different, maybe faster, maybe slower.
You get someone like Jensen on stage saying, stop buying Hoppers.
I want to sell my Blackwells.
So things like that.
And as Glenn mentioned, they don't necessarily want to be 100% Blackwell today, right?
So think it through, architect for today as if it is tomorrow, and realize that even once it's built
and it's online, you better start thinking about it again.
It's unfortunate, but it's very real right now for platforms and customers.
You think you're buying the best and you're not because the best is always changing at such a rate today.
It's unachievable to think you can get it all done perfectly the first time.
Very cool.
I love these discussions. We should do them all the time. Glenn and Scott, thank you so much.
Final question for you is how people can continue the dialogue with each of you and learn more about the solutions we talked about today.
Scott Shadley, SMShadley, on all the wonderful social platforms that are out there, and at solidigm.com.
We've got solution architects as much as we do storage architects available to help.
And you can see a lot of the stuff I write at blogs.equinix.com, or you can get in touch with me
on LinkedIn. I do connect with a lot of people. I have a lot of conversations there.
Awesome. Thank you so much to both of you for being with us today. It was a fantastic discussion.
Can't wait for more. Looking forward. Appreciate it. Thanks.
Thanks for joining Tech Arena. Subscribe and engage at our website, TechArena.com.
All content is copyright by Tech Arena.
