In The Arena by TechArena - Inside Data-Centric Strategy with Equinix and Solidigm
Episode Date: October 1, 2025
Equinix’s Glenn Dekhayser and Solidigm’s Scott Shadley join TechArena to unpack hybrid multicloud, AI-driven workloads, and what defines a resilient, data-centric data center strategy.
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allyson Klein.
Now, let's step into the arena.
Welcome to the arena. My name is Allyson Klein, and I am really excited about today's interview.
I've got two industry experts with us, and we're going to have a fantastic conversation.
First, Glenn Dekhayser, Global Principal Technologist at Equinix, is in the program.
And then Scott Shadley is back, Leadership Marketing Director at Solidigm.
Welcome, guys.
Thanks for having us.
Great to be here.
Looking forward to another fun conversation.
So why don't we just start with quick intros?
Glenn, do you want to go first,
just talking a little bit about what it means
to be the global principal technologist at Equinix?
First of all, I'm part of a team. We've got, I believe, 11 of us around the world. We don't have field CTOs at Equinix, so each of us has our own subject matter expertise. Mine happens to be data storage and just enterprise data as it's deployed around our platform globally. We don't sell storage necessarily at Equinix, but a lot of companies and a lot of enterprises locate their infrastructure with us. So how that's deployed, how it's interconnected, how it's used, especially in hybrid multicloud, that's kind of where I get involved, working with both enterprise customers as well as our tech partners. That's awesome.
And Scott, Solidigm, you've been on the show before, but why don't you remind our audience about what your role is as Leadership Marketing Director? Yeah, my current role is officially leadership narrative. So my job is to help people understand the value, the benefits, and what Solidigm is really doing in the ecosystem. So everybody knows we make a drive, but it's really not about the drive, to Glenn's point. We sell storage, but people don't want to buy just storage. They want to buy architectures, systems, and solutions. And so part of the role that I have is to help with that conversation and education, making sure customers understand the holistic system and approach.
So, you know, today's topic is evolving workloads in the future of data center strategy.
And I just want to start it off with you, Glenn.
Let's talk about data center workloads and how they're evolving today.
What are you seeing in terms of the biggest shifts in what customers are actually trying to do with the workloads that they're running on their infrastructure?
So I'd say first, hybrid multicloud has been, for the last few years, one of the biggest drivers. AI has definitely 10x'd hybrid multicloud architectures, I'd say, as people aren't quite sure how to deploy it or where they're going to deploy it: cloud, GPU-as-a-service, on-prem, and there are edge implications as well. And as those AI workloads start to move to production, I would say that the next change in infrastructure is dense power solutions and liquid cooling, as customers start to rationalize all these innovations onto stuff they can control and get a better cost ratio out of it than they would necessarily get in the public cloud, where you're buying
by the cup. Now let's open it up to both of you. The term data-centric comes up often, and data is obviously at the center of a lot of what organizations are trying to do with their data center workloads right now. What does this actually mean in practice? And how is it changing how organizations think about their holistic infrastructure strategy?
Scott, do you want to go first?
Sure, yeah.
So from that perspective, data-centric is, to your point, very simple, but yet very complex.
Because the idea is if you're a hardware vendor, you're worried about where you're putting
the data physically.
When you're a software ecosystem provider, you're worried about how quickly you can get to your data.
And all of that plays into physical versus logical location of that information and things like that.
It's been an interesting run to watch how these things are evolving and changing
and the need for how quickly we can get to that data and how centric it is to what we're
trying to accomplish, not necessarily just the traditional holistic architectures of there's
a CPU, there's a memory block, and there's a storage block, and I just play with them
appropriately. It's evolved, especially as the AI ecosystems are starting to grow.
Glenn, do you have anything to add to that?
Yeah, I mean, data used to have a one-to-one relationship. Data sets, I should say. They had a one-to-one relationship with the application they were associated with. With AI, you have data sets that have a many-to-one relationship. Many applications are accessing those data sets, and you have enterprises creating entire data marts where they can centralize their governance around all these different data sets, even if the original source of that data was various production line-of-business applications, ERP and what have you. So customers within the enterprise are creating various value streams out of these data marts, right? So the nature of how data is used in an enterprise has changed dramatically over the years. Because of that, it kind of implies that you're going to have centralized data, and that's where data lakes are coming from. Then they call them data lakehouses, or data warehouses; it's all variations on a simple term. But this idea of bringing all your data together, where now you can go and extract new value streams out of it, requires this term data-centric, right? Because now everything you're doing starts with the data. So really, the long-winded version is that data-centric means whatever you're doing to create value starts with your data, and the workloads will come to that data.
Now, when you describe that, one of the things that comes into my head is that there are so many intersections in how you actually manage your infrastructure, how you manage your data, how you manage that strategy: things like globalization, privacy regulations, energy consumption, regulatory pressures, and then something that you brought up, hybrid multicloud, especially when you start considering things like latency-sensitive apps. Glenn, how do you see companies making these data-centered decisions today based on that very complex landscape?
To bring it to something a little bit more simple, the first characteristic of a workload is that workloads are relatively easy to deploy and bring up. Kubernetes, an orchestration framework, is a great example of how to do that, right? But that doesn't address the data. Data sets have lots of constraints. They're slow to move. They require governance. And by governance in this context, I'm talking about copies of this data, how it's retained, how it's secured, and then there's also compliance and sovereignty, which is a big topic today.
It makes a lot of sense to create a strategy where you're keeping your core data assets
in a sovereign location on equipment you control in locations you can access, right?
At least one copy of that dataset, but then you maintain the ability to project the appropriate
data sets to where they need to be for the most optimal governance, compliance, cost,
performance, all the things together.
And that will be in one or a couple of public clouds, as a service, perhaps in a SaaS provider, right?
Or perhaps at your edge or in your core data center, right?
So it's going to be the result of a calculation of all those parameters.
But the trend is that enterprises are starting to realize that they're not necessarily set up to take advantage of all of those places.
And if their data architecture, where they store their data, how they govern it,
if it's not ready to accommodate this mobility, they're going to find themselves at a
competitive disadvantage.
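To make Glenn's "calculation of all those parameters" concrete, here is a minimal sketch, in Python, of how a team might weigh governance, compliance, cost, and performance when scoring candidate venues for a data set. The venues, weights, and scores are illustrative assumptions made up for this writeup, not an Equinix tool or recommendation.

# Hypothetical weighted scoring of placement options for one data set.
# Weights, venues, and 1-5 scores are made-up illustrations only.
WEIGHTS = {"governance": 0.3, "compliance": 0.3, "cost": 0.2, "performance": 0.2}

venues = {
    "interconnected_colo": {"governance": 5, "compliance": 5, "cost": 3, "performance": 4},
    "public_cloud":        {"governance": 3, "compliance": 3, "cost": 2, "performance": 4},
    "gpu_as_a_service":    {"governance": 2, "compliance": 2, "cost": 4, "performance": 5},
}

def weighted_score(scores):
    # Combine per-criterion scores using the agreed weights.
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(venues.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name:20s} {weighted_score(scores):.2f}")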
Now let's click down on that, Scott.
Glenn just put a great premise on what's guiding enterprises today.
How are you seeing that in terms of data storage technology in particular and the power
and efficiency, proximity to compute, and workload-specific performance that are required in this complex model?
Yeah, it's interesting, because to that point, he made a comment, and you did too, about governance and security and things like that, where people are wanting to have better ownership of their actual data. And that means the physicality of it becomes even more of a challenge. And so when you're trying to store that data, to put enough data in a useful place that is also under that control, that's when you start getting into these wonderful large-capacity solutions, right? You're talking 60 and 100 terabytes and beyond, because it's not necessarily that they have to have it, but they need to have it, because they want to be able to have that data close and they want to have control of it. And to Glenn's point, when you're migrating data from point A to point B, you need a fast path to do that, and you don't want the data storage to be the bottleneck. So you have to think through the architecture enough that you're pushing the bottleneck to where the bottleneck is truly unavoidable, right? You can't get over certain aspects of the protocol transfer from location to location, or from your on-prem versus your public cloud. So make sure that you're optimizing how the data is available to the user within the confines of your geopolitical requirements, your geophysical requirements, and things like that.
So being able to store more for less power is important.
So that's when these large-cap drives come in.
Being able to perform fast on the data you have is when you need the performance.
And so that's why there's an architectural requirement to look beyond what we used to call traditional architecture. I would call it more of a modern infrastructure, where flash tiering, flash plus hard drives, is actually becoming more and more valuable.
Storage guy always wants to make the server guy the problem, not him, right?
Exactly. Always pushing the problem somewhere else and making sure that you can deal with what you need to. But the amount of data that we're also driving is key to this kind of concept too, because we're just collecting so much more information in real time, and you have to be able to sort and scan and filter it without moving it where it doesn't need to go. And that's really another big picture as you get closer to the edge: consume a lot, move a little. And how do you do that effectively?
Now, if we look at workload placement,
one of the things that I think about with Equinix
is that you guys as a large data center capacity provider
are having strategic conversations around workload placement
based on everything we just discussed.
How do you guide your customers
towards those types of decisions?
You've got choices now
that you didn't have in the past, right?
If you go back to 2019, right? Cloud first was the rage, right? And that was what everybody was dealing with. That skewed the conversation into something sort of unnatural, right? A lot of workloads didn't belong there. And it was kind of a bipolar choice: you had either on-prem, or you had the one cloud that you were going to work with.
Then people said, I'm going to move to the cloud so I can get out of the data center business. They cut off the branch they were standing on, and now they've got one choice, and that's the public cloud, whether it fit or not. A lot of those workloads came back, if they even could; like I said, a lot of folks got out of the data center business. And I'll say that Equinix got a lot of business over the years from customers who got out of the data center business, went cloud first, and realized, oh, some of these things are going to move back, or I've got to have at least a portion of this stuff that doesn't fit the cloud. Think mainframe, AS/400 or iSeries type applications, and a whole host of others that just didn't fit the public cloud, whether they knew it or not. So either stuff couldn't go up, or it went up and they realized it was not a good fit operationally or cost-wise and it came down.
We also now have service providers, GPU-as-a-service providers, neoclouds, and also the SaaS providers. Now, these guys do very specific things, but they need access to data as well. And so they're workloads, technically, but they don't normally get direct access to the data. You usually need to put a copy up to these providers, whether it's SaaS, which is completely out of your environment, or a GPU-as-a-service provider, which can be connected to your environment through a secure firewall. Still, you're creating copies of data. There are governance problems, security problems, sovereignty issues; that's the new paradigm, right? Like I said before, it usually comes
down to cost, compliance, performance. I don't usually hear sustainability too much in these conversations, because sustainability is an expectation across all these platforms. Whether it's public cloud or GPU-as-a-service, you're going to be looking at sustainability in the vendor that you choose. So most of the public cloud providers are pretty good about providing Scope 2, maybe not Scope 3, but at least Scope 2 reporting. Just like us: we have a great reporting capability and also a great sustainability story. That usually doesn't come up in an individual conversation. I'd say it normally stays with the other parameters we've already discussed, which are performance, cost, governance, and sovereignty, the regulatory perspectives.
And in this landscape, you know, I think the interesting thing is that enterprises are using a very large selection of providers, and potentially on-prem utilization, to house different workloads. How do you both see enterprise IT leaders weighing the tradeoffs in those selections?
I'll start with that one.
So many of our customers have figured out how to use interconnected colo, like Equinix, to de-risk their public cloud endeavors.
Now, public cloud will remain the place for innovation and fail fast.
It's what it's made for.
It's elastic, right?
But then as a customer, you ask, which public cloud am I going to do this in? Am I just going to do one? Which service within that public cloud? Or am I going to use a SaaS provider, right? How do you try multiple things at once? How do you compare? How do you do performance testing, A versus B, right? Or perhaps I want to actually have a multicloud experience. How do you rationalize the cost once you move these workloads into a production situation, right, where you don't need the elasticity or fail-fast characteristics of on-demand cloud?
What we counsel our companies is to build what we're calling an authoritative core,
or authoritative data core, which is essentially one copy of your active data sets
on equipment you control in locations you can access.
And that's kind of a way of keeping one foot in and one foot out of a given public cloud
or service, always maintaining leverage over your own data.
Around that core, you wrap an agile interconnectivity platform like Equinix has, with Fabric and Network Edge and our various interconnectivity products.
And this allows the enterprise to connect to all of its ecosystems.
And so we solve for all those questions.
That's how we approach it, right?
And others do it in their own ways, but that's how we do it.
Scott, anything to add on that one?
Yeah, as you look at it and you're talking about IT leaders and how they decide where and why and that type of thing, a lot of it comes down to what Glenn was talking about. It's ownership of your information flow, right? So if you're going to go build on-prem, you can control every piece of what's going into it: the hardware you're putting in, the rack you're putting it in, the power constraints on it. When you decide to push something to a cloud-type environment, or even outside of your own premises, you're now acquiring a service that runs on hardware you can't control. And so it becomes, okay, do I need to have exact ownership of where it's being placed and trust over that hardware and that infrastructure? Or am I okay letting someone else manage that piece of my data management? And that's kind of where I see it from that perspective. And being able to understand how that flow works and where it goes is actually quite important for someone even like Solidigm, because we need to help customers understand where their data is most valuable, based on the infrastructures that we understand and help build. Now, obviously, there can be some complexity that
comes with exactly what you described. So how are companies simplifying operations without losing control of either their data or their workload operations? And what role, Glenn, does Equinix play in helping them achieve that? Well, at Equinix, the way I see our customers doing it, it's the judicious use
of strategic operational partners, right? So I see a lot of the enterprise clients, especially at the high end, leveraging IT outsourcing companies and global systems integrators, but then you also have to couple that with a holistic observability platform so that both you and that provider can stay in control of all of the mess, because you're never going to get down to one platform. You do want to minimize it, and we'll talk about that a little bit later on, I think. But Equinix has a robust global network of global systems integrators, tech manufacturers, the ones I won't name because you probably know the names, and VARs, value-added resellers.
These are the ones that get the customers to the outcome, right? And they're not only getting them there, you know, they're doing the system designs. Remember where we sit in the food chain, right? We're underneath, providing the plumbing, the electrical, the foundation of the house. These guys are building the house that this customer is going to live in for a long time. So we depend on our partners to create those outcomes.
From an operational perspective, though, we participate in the observability platform. We can feed all of the environmentals, from power and cooling, et cetera, up into our customers' observability platform. It's important, though, that the customer not have multiple of those, because then it becomes impossible to control. You're looking for operational simplicity: one observability platform. It gets expensive, and this is too complex to start bifurcating where you're going to look for stuff to get to root cause, right, or even be proactive.
The way we also help our customers out is we've got a global team of architects and engineers. That's the team I sit on. We are the ones who are the glue between all the parties, the partner, the tech manufacturer, and the customer, to make sure that all the workloads and all the infrastructure are in the right places.
There are a lot more choices than there used to be. And it's a very interesting dynamic, where in the past the customer had a data center, and so the customer thought monolithically and said, I need a data center here. When you start looking at companies like Equinix, they have lots of data centers, even within given metros around the world, and it's all interconnected. The world is kind of your data center, and you do not need to be monolithic anymore about where you put your stuff, especially in this day of scarcity. You can put part of your infrastructure in one data center, maybe the part that needs latency close to the cloud, and you can put a whole other set a little further away. So there are options now that customers don't even know about. So that's what our team helps all the different parties understand, and how to lay this stuff out. It's still operationally simple, but you get an optimal outcome, and the customer is connected to their ecosystem in the most cost-effective way.
I love what you're saying.
And I guess one question to both of you is, within this hybrid, distributed, data-centric world, are we seeing any best practices among customers who have really successfully navigated this landscape and are delivering great business value?
From that perspective, I would say that one aspect of best practice is exactly what Glenn was talking about. Don't go too wide. There's this whole concept of RFPs and RFIs and all this kind of stuff to figure out who can do it the cheapest, or whatever the case, the fastest, type of thing. The architecture really comes down to working with people who really know what they're doing. Those partners are important. So much like Equinix has multiple engagements, so does Solidigm. And we talk to the end customer, not about buying our drive, but about how their platform needs to be put together. What do they need to do with it? So that when they do come to that final decision, the best practice is trusting the advisors you've picked to actually give you the right answer, and not necessarily going for the cheapest or the fastest. One of my favorite lines is from the movie Armageddon, when the really geeky guy says, do you realize we're sitting on a $10 million rocket built by the cheapest provider with the lowest-cost parts, right? That's not always the best-case scenario. So you want to look for the partnerships and the engagements where people are helping you solve the problem and not just trying to sell you the bit or the byte or the server or the location, things like that. And that's the kind of best practice you have to look at: there's so much information flowing out there, and weeding through it and making sure the right decisions are being made is really important. And the rest of it kind of falls together from there.
Steve Buscemi always gets the best line in every movie. But if you're going to put yourself in the best position to execute on an operationally simple and executable architecture for hybrid multicloud, especially as you go into the AI world, the first thing you've got to do as an enterprise is rationalize that data platform. And from a technology perspective, I would say that means keeping it as simple and as homogeneous as possible, but don't go overboard with it. Don't try to shoehorn stuff that just won't fit. There are storage platforms and data management platforms that can cover 80 to 90% of the use cases within an organization fairly comfortably:
multiple protocols, data motions that can accommodate all of the different ways you need to use data in clouds, either in place, or projecting and deleting, or caching, or getting to the edge, right, or streaming. There are data platforms that can cover a lot of these different use cases. So the goal should be, just as you had cloud first, you should have your data platform first. So your first effort is, can I get this onto the data platform that I've standardized on? If not, okay, I'll make an exception. But that exception should be just that. You can never get to one technology, but I believe that by reducing them as much as possible, it's easier to scale and to focus on the logical data architecture, which is where users are going to interface with all of the corporate enterprise data. And that logical data architecture is really what they're looking at to go and create their enterprise value. Nobody wants to talk to storage. Okay? If your enterprise customers are thinking about or using storage, you're doing it wrong. The storage should be felt as a service to the upper layers.
Now, I know that efficiency is a top challenge and a key metric when deploying these complex technologies. Glenn, how do you see enterprises measuring success in today's environment, and how do you help your customers at Equinix?
It's an interesting dynamic, right? When the customer has their own data center, efficiency is all about cost. How much data can I stick into a storage environment? It's not about power consumption. It's not about sustainability. And even in our data centers, right, customers contract for a certain amount of power they're going to use. So if they're able to get more terabytes for a given amount of power, they're able to get more data into that same power number. They're not going to save power, because they're still contracting for that power. They're not saving any money; they're just going to get more into that envelope. But you're never running storage devices at 100% full anyway. So from a power consumption perspective, also in these big environments, I don't see storage being the biggest ticket anymore.
Obviously, GPUs have become the stars of the show. Even in a SuperPOD, I think storage only consumes, what, maybe 10% of the power, 13% of the power, from what I'm seeing? If you remember my Phoenix Project book-reading days of Gene Kim, you look to save money in the place that's got the biggest problem. You solve that problem, because solving the smaller problem isn't going to really save you much. So if you save 30% of your storage power, you've gone down from 13 to about 9; in the grand scheme of the SuperPOD, that's not going to help you very much.
So obviously, if you're a GPU-as-a-service provider and you have that at tremendous scale, you want every little percentage you can get. Or if you're a hyperscaler, obviously. I'm not talking about those environments. I'm talking about an environment where the customer is contracted for maybe 200 or 300 kilowatts, and that's what they're going to use. I don't think efficiencies at the storage level are going to gain you too much.
One thing you do have to look out for, though, with efficiencies, is whether the use of that efficiency will slow down the most important asset you're trying to maximize, which is your CPU and GPU, right? Again, we're trying to make the performance problem the problem of the guy on top of us. Efficiencies are fine; squeeze as much out of your asset as you can. But the minute that thing starts to cause any degradation, where I can't fill the CPU or GPU, where I can't get the throughput or latency, in an AI world, when I'm doing tuning or training specifically, that's a problem. That I can't have. Sometimes these efficiency technologies get in the way of performance. They just necessarily do. If I've got to decompress in the CPU or the storage device, am I going to be able to feed my high-level applications data at the speed they want? I'm not saying they will or they won't. You have to make sure they do before you choose to use that efficiency. But sometimes efficiency can cause problems.
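As a rough check on the percentages Glenn cites above, here is a small worked example in Python with assumed numbers (a 1 MW pod, storage at roughly 13% of total draw); it shows why a 30% cut in storage power trims total facility power by only a few percent.

# Back-of-the-envelope power math; all figures are assumptions, not vendor data.
total_power_kw = 1000.0   # assume a 1 MW AI pod for round numbers
storage_share = 0.13      # storage at roughly 13% of total draw, per the discussion
storage_saving = 0.30     # hypothetical 30% reduction in storage power

storage_kw = total_power_kw * storage_share            # 130 kW
saved_kw = storage_kw * storage_saving                 # 39 kW
new_share = (storage_kw - saved_kw) / total_power_kw   # about 9.1%

print(f"Storage share drops from {storage_share:.1%} to {new_share:.1%}")
print(f"Total pod power falls by only {saved_kw / total_power_kw:.1%}")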
Scott, you know, I think that Glenn made a great point about the role of storage
and the tight coupling with driving data center compute.
When you look at some of these broader infrastructure strategies, how do you see storage media and storage platforms fitting in, especially when customers want performance without complexity or high energy costs?
How do you do it?
Yeah, it's to Glenn's point. You have to look at new metrics, right? It's no longer, you know, if you think of storage, we've always been fighting this gigabyte-per-dollar problem. And it's not that anymore. It just won't ever be that again. It's terabytes per watt. It's IOPS per watt. Again, it's all around the power consumption, because every little ounce of power that I can pull back in an infrastructure system brings extra power to the processing engines, or in this case, the larger memory footprints and things like that.
So you build storage products that can perform across that bandwidth and that strategy, or even the new architectures. For example, take liquid cooling as something that's very interesting. Right now, everything in a server is liquid cooled except the storage devices, until now, because the storage devices had to be hot-swappable, because they didn't trust them. But as Glenn said, the storage isn't really the problem anymore. We need to make sure that we're architecting whole, holistic solutions. That's why Solidigm has the world's first liquid-cooled, hot-pluggable drive, so that we can provide these levels of efficiency in the storage, and it also helps with power draw too, right? Because now there are no fans in the system. Think about how much power it takes to spin up a fan in a box. We can move aisles closer together, giving more server footprint in the system. I don't have to worry about hot and cold aisles.
Things like that are the innovations on the storage front that go well beyond talking about media and density and dollars per gig. It's really now about how you make the system deliver the most performance per watt you can, and not starve a very expensive, high-powered GPU because something decided to hold on to a bit of data too long.
That's all about PUE, right?
Yep.
Exactly.
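Scott's terabytes-per-watt and IOPS-per-watt framing is easy to compute. The sketch below, in Python, uses made-up drive figures (not Solidigm or any vendor's specifications) purely to show how the two metrics compare a high-capacity SSD against a nearline hard drive.

# Illustrative efficiency metrics; capacities, wattages, and IOPS are placeholders.
drives = {
    "high_capacity_ssd": {"tb": 60.0, "active_watts": 25.0, "random_read_iops": 800_000},
    "nearline_hdd":      {"tb": 24.0, "active_watts": 9.5,  "random_read_iops": 170},
}

for name, d in drives.items():
    tb_per_watt = d["tb"] / d["active_watts"]
    iops_per_watt = d["random_read_iops"] / d["active_watts"]
    print(f"{name:18s} {tb_per_watt:5.2f} TB/W  {iops_per_watt:9.0f} IOPS/W")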
Okay, final question for both of you. If you ran into an enterprise data center manager who is rethinking his or her data center footprint right now, what's the one question that they need to ask themselves that they maybe weren't thinking about five years ago? And the second part of the question: looking ahead, what will define a successful data center strategy over the next five years? I'll let Scott go first, and give Glenn the last word.
The last word? I like it. Glenn with the last word. I've got it.
No, I would say five years ago, everything was about how expensive it is to set it up. How costly is it to put this system together? And they didn't think about what it costs to run the platform. Stop worrying about the CapEx. We all know CapEx numbers are big and they're ugly and they consume EBITDA or whatever financial number you want to talk about. But at the end of the day, it really comes down to how you operate efficiently over that five-year window. Everybody's looking for ways to extend systems and get more life out of platforms. If you build it right the first time, targeting what you're looking for as far as operational execution, your net-net at the end of the day is going to come out better than you ever would have thought going in trying to build it too cheap, too fast, or whatever the case may be. So take the time, architect it, and consume the savings in your operational expenses and your operational control of the entire infrastructure. That's really where I would go.
I mostly agree with you. However, take into account that if you went back five years from today and tried to predict where we'd be, if anybody had told you where we'd be today, you'd have told them they were reading you science fiction books. Understand that the rate of change is not just unpredictable; we're on a factor-of-two acceleration of change, ironically aided by AI. AI is now helping us accelerate AI faster, right? So the main question I ask, and I do this today with enterprises, and I usually shut up after I ask the question because I want to hear their answer, is: how are they architected for change? How are you set up for all of this change that you know is going to happen, when you don't know what it's going to be? You don't know which way the next Mike Tyson punch is going to hit you from, but you know it's coming. How are you going to stay standing up? Is your data in a place where you can choose to do something new and do it quickly and painlessly, by the way, without massive egress costs? I have 30 petabytes in Frankfurt S3 and I want to get it to Chicago; okay, it's a million and a half dollars for that egress, by the way. And it'll take you a few months.
Is the data located in a place most likely to succeed and have access to all of the services that you want to connect to, both today and the ones you don't even know about tomorrow? Four years ago, CoreWeave was a Bitcoin mining company. Crusoe was doing natural gas arbitrage. Lambda Labs was more academic. These companies weren't doing GPU-as-a-service three years ago. Now they're like household names. So who's going to be the household name in three years? You have no idea, but you do need a strategy to make sure that the data, you know, they're going to need access to that data when you put it up there, or they can access it directly. So you need to make sure that data is located in a place most likely to be where those services are going to be, and those services are going to locate in the place most likely to have access to the customers that they want.
It's a feedback mechanism, right? So this is the question that I ask customers, because I'm looking for that answer: how are they architected for change? I know, that was probably three or four questions, right? It wasn't one question. But it usually leads to a pretty cool discussion, because most organizations haven't thought that way. They weren't that way when they were in their own data center. When they did cloud first, they basically recreated the same mistake they had made by being in their own data center. And now they've got to move large sets of data.
And my advice to them is, it's like the saying that the best time to plant the tree was 50 years ago. So the problem of getting the data out and getting that one copy, even if you're staying in the cloud, that's all good. But you want to get that one copy onto that equipment-you-control, locations-you-can-access paradigm, because tomorrow, or next year, or two years from now, that 30 petabytes is turning into 45. It's going to be more painful to do it when you really want to do it. So there's no reason not to go and get to that architecture now and protect your enterprise.
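Glenn's egress figure is easy to sanity-check. Here is the arithmetic as a short Python snippet; the per-gigabyte rate is an assumption for illustration, since cloud egress pricing is tiered and varies by provider, region, and destination.

# Rough egress-cost estimate; the $/GB rate is an assumed figure, not a quote.
petabytes = 30
gb_per_pb = 1_000_000          # decimal units for simplicity
rate_per_gb = 0.05             # assumed blended egress rate in USD

cost = petabytes * gb_per_pb * rate_per_gb
print(f"Moving {petabytes} PB at ${rate_per_gb:.2f}/GB is about ${cost:,.0f}")
# Prints roughly $1,500,000, in line with the figure mentioned above.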
Glenn and Scott, it was a fascinating discussion. Where can folks engage with you and learn
more about what your respective companies are doing in this space? I'm at Equinix.
You can go to blogs.equinix.com.
I've got a bunch of stuff published there as well as some great content from a lot of my colleagues.
And I'm also on LinkedIn by my full name.
Fantastic.
And you can find me at SMShadley on most of the social platforms, and then, of course, solidigm.com. Again, we've got a lot of solutions and product briefs.
So it's not just about the SSD itself, but how it's being used and where to find it.
Thank you so much for a fantastic discussion, guys.
I can't wait to have the next one.
Looking forward to it.
Yep.
Thanks for joining Tech Arena.
Subscribe and engage at our website, TechArena.ai.
All content is copyright TechArena.