Storage Unpacked Podcast - Storage Unpacked 265 – The Enduring Benefits of Centralised Storage
Episode Date: January 17, 2025
In this episode, Chris discusses the enduring benefits of centralised storage, particularly with reference to storage virtualisation, with Dan Kogan, VP of Enterprise Growth and Solutions, and Cody Hosterman, Senior Director of Product Management, both from Pure Storage.
Transcript
This is Chris Evans, and today I'm joined by Dan and Cody from Pure Storage.
Chaps, how are you doing?
Doing well. Thanks for having us on, Chris.
You're doing great.
Excellent. So let's start with Dan and then Cody.
Do you want to introduce yourselves and just tell everybody what your job titles are at Pure,
and then we can dive into our topic for the day.
Sure thing. Dan Kogan.
I am the Vice President of Enterprise Growth and Solutions at Pure Storage. So what I focus on are new areas of opportunity for us as a company and kind of core growth
markets, cloud, AI, cyber resiliency, and working across our ecosystem to build differentiated
technology solutions around those areas and helping take those to market.
And my name is Cody Hosterman.
I'm senior director of product management here at Pure.
I'm responsible for two parts of our business, our public cloud investments and where we're taking the multi-cloud approach of Pure, and then also our virtualization
ecosystem and driving the strategy for our customers in the data centers and beyond.
Excellent. Okay, so the reason for our conversation today really sort of sprung from
discussions we've had over the last couple of months, which have related to VMware and features that you've added to that platform in terms of support.
And one of the things I think that sort of came out from that discussion for me was that even though we might have thought it would go away at some point in time, or a lot of people thought it would go away at some point in time, centralized storage actually has become much more important to business
than it probably ever has been.
And really, I just wanted to dive into today why that is,
what the choices are that people might have had
if they didn't use centralized storage,
and why actually going forward, it probably will be
a sort of a really prominent feature of modern enterprises.
So that's really our conversation for today,
based on adding in some of those discussions about VMware and your products and your solutions
and what you've been doing in this area.
So let's start with the history of this stuff
and where it's come from
and why centralized storage
is really sort of being such a perennial thing
in the enterprise.
Who wants to kick that off
and give me a little bit of a background and your opinion on that?
And then I'll add a bit of my opinion on it.
I'll nominate Cody for that because he's been doing VMware at Pure Storage
for coming up on a decade, I think, and then even before that with EMC.
So I think you've got a lot more history on this one than most people.
Yeah, yeah. So, I mean, Dan's repeated my work history.
I've had three jobs.
I worked at Pure, I worked at EMC, and I worked at Hollywood Video, a video rental store.
So most of my career has really been on external storage.
Of course, I guess that's another form of media, I suppose. But yeah, if you really look back at it, when virtualization screamed onto the stage, at least for open systems 20-ish years ago, it was about efficiencies at the compute layer.
That was really the whole point of it is how do I better use my CPUs?
How do I better use the memory?
How do I not over-provision my assets?
And so then the next question is, well, how do I get more efficiency out of my storage, right? Because it's the same concept applied in the sense that like,
I have 100 gigs internally of my server, but I'm only using 30 of them. So how do I take advantage
of that somewhere else, right? And so the concept of like, all right, taking that external storage,
consolidating, getting the efficiency from capacity usage was that first step. And then,
of course, the features that come along with that, the data protection features,
snaps and replication, you can get a lot of economies of scale and benefit around that consolidation from an efficiency perspective there too, right?
And so that added value.
But as we moved forward, especially in the virtualization space, again, the concept around
data mobility, expanding clusters, making sure that data was
available here and here without having to move it was becoming more and more important.
And how do I, if this workload's not busy, how do I make sure this server has access to this
storage, right? So sharing that, having consolidated approach for overall TCO of the product, but then
having the data where you need it to be, right? And so a lot of the work we as a storage industry have done with VMware, if you want to take
that as an example over the years, is about how do we offload more of these processes
back to the underlying storage?
Because they have a consolidated approach.
They can control it.
They know where it is.
They can move it without moving it.
And then how do we allow the VMware ecosystem, virtualization, the hypervisor to do the work
that you want it to do, which is running my applications,
managing my virtual machines or containers,
as the case may be.
And I will probably talk more about this,
but as we fast forward into these days,
it's the same concept around GPUs.
How do I make them busier?
It's the same idea, the same problem and so forth.
And so making sure they don't have to wait on the storage from
a latency perspective and literally getting the data there is a key piece of making them busy.
And so that's the lightning round approach of really the history of where this comes,
but it's all followed. It's all followed the same path is how do I make more efficient use
of my infrastructure and make sure that I'm not over-provisioning a given asset or my entire
infrastructure overall.
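To make that utilization argument concrete, here is a quick back-of-the-envelope sketch in Python. The 100 GB provisioned / 30 GB used figures echo Cody's example; the fleet size and growth headroom are purely illustrative assumptions, and the sketch ignores data reduction, which would improve the consolidated case further.

```python
# Illustrative only: raw capacity purchased with per-server DAS versus a
# shared pool sized to actual usage plus headroom. Numbers are hypothetical.

servers = 100                      # hypothetical fleet size
provisioned_per_server_gb = 100    # local capacity given to each server
used_per_server_gb = 30            # capacity each server actually consumes

das_total_gb = servers * provisioned_per_server_gb     # capacity bought as DAS
actual_used_gb = servers * used_per_server_gb          # capacity really needed

headroom = 1.25                                        # 25% growth buffer (assumption)
shared_pool_gb = actual_used_gb * headroom             # consolidated pool sizing

print(f"DAS capacity purchased:  {das_total_gb:,} GB")
print(f"Capacity actually used:  {actual_used_gb:,} GB "
      f"({actual_used_gb / das_total_gb:.0%} utilization)")
print(f"Consolidated pool size:  {shared_pool_gb:,.0f} GB")
print(f"Capacity not purchased:  {das_total_gb - shared_pool_gb:,.0f} GB")
```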
I would say that it's actually evolved since the days that you worked for EMC.
I mean, the fact that you called it EMC and you didn't call yourself Dell shows
how long ago it is that you actually worked there.
So if you look at it, we now don't just look at block products.
We look at file, we look at objects.
You know, it's sort of irrespective of the protocol.
You know, centralized storage
suits all those requirements. And especially with AI, you're talking about probably a lot
of unstructured content there. So really, you know, people might be thinking, oh, we're just
talking block here, but absolutely not. This is a, you know, a multi-protocol, multi-platform
discussion. Yeah. I mean, it crosses objects, it crosses block, it crosses file, like the use cases
and how it's consumed might be slightly different, but the backend concepts is the exact same thing.
It's like, how do I make sure my data is shareable, accessible, and efficiently used,
right? And sometimes these, you know, the access patterns are slightly different. Am I running a
VM? Am I accessing the guests? Regardless,
right, having that direct access and fast access is the same thing, whether it's object, file,
block. In the end, it's an underlying storage architecture. It's a data access architecture,
not a protocol thing, right? Because, you know, you don't run between iSCSI and Fibre Channel,
right? It's the same thing. I think data access is the key point in there. And I mean,
Chris, you hit on it with AI, but it's been a bigger need even before that
is just if you want to actually do something with your data,
that access to your primary systems,
your secondary systems, it all matters, right?
And you have to have kind of a cohesive way
for those things to come together
and be able to access that data
and use that data for analytical applications,
now increasingly AI applications
and the data pipelines between them.
And so block as a protocol is typically used in primary applications.
Now, unstructured data has been much more popular and prominent in secondary applications.
That's all kind of coming together in a centralized platform, right?
And I think, just for end users, what matters is you're able to access
those different types and manage it in a simple way and kind of have everything effectively
in one big virtualized environment.
Yeah, but you know, we could have gone down a different route,
because we could have stayed with DAS, which would have been unbelievably crazy. You know, why would
we possibly want to go back to a model where we put storage into individual servers? But we did,
and we went down the HCI route, and we looked at it and said, you know, we don't need shared storage anymore. We
could just scale out the compute infrastructure and the storage infrastructure at the same time.
And in one respect, that looks good. But personally, I don't think that's a scalable
solution. And I think it introduces other challenges into the environment.
Yeah. I mean, I think it goes full circle in IT. You've seen it
many, many times, and it's the same thing, right? We saw hyperconverged come into the market.
And I think a lot of the driver around that was, I want a turnkey solution, right? I think that's
really more than anything else that was pushing that. But as it's evolved, right, we saw it kind
of top out from a market share perspective. And then what ended up happening, these hyperconverged solutions offer like storage nodes, right?
You know, so it just kind of went that full circle.
Whereas there's use cases where, you know, things make sense.
Everyone's got a little bit of their swim lanes, but overall it's come full circle, especially at the high scale.
And this is, I mean, frankly, the primary focus of our conversations with Broadcom over the past six months. They're like,
we want to work with you on scale-out enterprise external storage. This is why we need you.
This is why you're important. This is what we see from our customers. And so, as I said, these things
all go full circle in many different ways. We're seeing that even in the public cloud vendors
around consolidated storage platforms for TCO reasons and data management reasons.
And so these things all tend to come around.
But I think it's hard to beat the TCO and overall efficiency of what we can do on a
consolidated platform.
Yeah.
And I think just the need for that got much more pronounced with, you know, kind of some
of the recent Broadcom licensing changes and how customers are adapting to that environment.
It's shed a real light on the urgency of having that centralized storage environment decoupled from
a hyper-converged environment. I was going to mention that because ultimately, you know,
you look at that decoupling piece and, you know, you look at licensing and you look at the fact
that if you are running a platform where you're adding services into your VMware environment,
which, really, I can say this because I'm the independent person, you know, you look at it and think,
well, I'm paying quite a lot for this license, and now you're expecting to run your services
behind the scenes on my infrastructure. And what do I get for
that? I have to pay for the licenses for that, and I lose some of my virtualization license
and my capacity for VMs on that platform. So if that's an expensive license, that becomes an expensive issue, especially
if I have to scale out that environment in order to gain more capacity, more storage capacity, and,
you know, add nodes I maybe didn't even need. So the decoupling thing, I think, is quite
an interesting one too, and I wonder whether that's something we're seeing more in the industry, that disaggregation,
because simply we can't afford to try and bundle everything into one platform in a sort
of mainframe-esque style like we used to.
Yeah, I think that's exactly what we're seeing.
I mean, certainly the customer demand is there for that.
And again, there's been a lot of urgency because of the cost you mentioned. Without getting
into the specifics, we're having conversations with a number of other historically HCI platforms
about un-HCI-ing and offering external storage.
So it seems to be the trend and the way things are going and the other vendors are realizing
the need for that if they want to tackle larger enterprise environments.
I think they found HCI works great for, you know, shops without a lot
of IT staff and, you know, the turnkey simplicity that comes with it. But once you hit a certain
scale and very often storage is going to be the pressure point on the scale of just adding more
data, it doesn't make sense economically as well as, you know, just kind of performance and general
scalability-wise. It looked like you were about to say something there, Cody. So go ahead.
Yeah. I mean, I think there are two things that come to mind here around this.
One is, going back to your point, Chris, around the history: maybe
12 years ago there was Fusion-io, there was PernixData, moving this stuff into
the server. And then once FAs came out, that industry just completely disappeared, right?
Because the efficiencies weren't there. And having all that data and acceleration in one place was a
key part of that. The other one too, is this is a conversation I recently had with a large customer
of Pure's in the US, where they're building these AI training models, right? They want these systems
to learn like the people that work there. They want to take the history,
their learnings over the past, not 10 years, 20 years, 30 years, 40, 50 years of data.
And they want them to learn like the humans have learned and take that knowledge, not just what we
figured out from it, but take all of it and be able to run their models against it. And the way
to be able to do that, right, in a cost-effective way is through consolidation, right? And so
there's things, they're literally taking handwritten notes from the 70s, right, and putting them on their platforms so the systems can read it and understand it.
And so having that in an efficient way of provisioning it, not consuming all these servers, but putting it in the storage, this is why QLC has gotten more important around those densities in the platform.
This is changing everything in many ways. I mean, conceptually,
similar paths and ideas, but it's becoming more important in different ways. And the problem's only getting larger. And that's why I think this is a key piece of the infrastructure moving forward.
I was just going to say that I think that, you know, if you look, this may be a massively,
a massive generalization, but I think if you look back and you think about what we were looking at,
say 20 years ago, when we first saw centralized storage, you know, there was all of that benefit of being able to say, well, we're optimizing here,
we're putting all of our resources into one place, we're not distributing it between lots of
different physical servers, we're making life easier in terms of admin, management, you know,
cost saving, all the sort of things that you sort of mentioned at the very beginning there,
Cody. But as time's gone on, we're more focused on the value of our data and keeping data
for many more use cases.
So you said about the fact that we might collect data from different sources that could be
many years old.
It's not all about structured databases and the traditional stuff we used to have.
And as a result, I think we're focusing more on data now rather than applications that
are running on infrastructure. So things like centralized storage start to be the center of
the infrastructure and the compute is just stuff that sits around the outside and actually does
something with it. Rather than it being compute that's the center of the infrastructure and
somehow compute and data are sort of shoved together into an infrastructure model. And I think that
centralization of data as the core
is probably partly the reason why centralized storage
becomes more of a logical sort of move to make.
Yeah, I mean, I certainly agree with that.
And so the compute is, well, I mean, I think an interesting example,
once I go back into history, right,
is that I think almost every storage vendor I've seen,
including us to a certain extent, have looked in the past at putting and running applications inside the storage,
right, inside the storage platform. And that's just never really worked out because in the end,
it's more efficient to use your servers that you bought to run your applications and leverage the
storage to do the storage, present the data, manage that, right? And so I think it's definitely
a key piece. I think
we've learned some lessons, we're getting better as an industry at figuring out where these things go,
but it's definitely pointing down that road.
Yeah, I never thought the idea
of running applications on the storage was a goer. It sort of was in some respects.
I mean, you could look at it and say, okay, we could do certain things, but you hit a million and one compromises, because now, if it's block storage, you have to
have some way of actually interpreting what that block looks like. If it's file, it's not too bad; if
it's object, it's not too bad, you could do that with that. But now you're compromising
it because, you know, you've got to think about another layer of security that's going to be
locked into that level. There's a million and one things that make that model really, really complex. So let's perhaps bin that one and never do that one again. But somebody will come along, somebody will
do it again in the future. There'll be another startup that'll come along and decide
that's a great idea. But okay, so we sort of talked about some of the things that we think
are important here, but I think it's probably a good idea to try and formalize this a bit more
and think about this from a user's perspective, because there's a million and one
different things you just mentioned, and I think some of these things are operational, some of them
are financial, some of them are resiliency-based, and, you know, we really need
to think about how these actually fit together. There's a lot of operational benefits, for instance, to doing this, isn't there? So, you know, I know where all my stuff
is when I'm going to upgrade it. I can build systems that I can upgrade in place. And so
operationally, just even just starting with that one, there's a huge benefit.
Yeah. I mean, I think we'll go back to the, you know, the use case of VMware and VMware
Cloud Foundation, right? One of the benefits around that architecture and that offering is that
you can deploy new workload domains, aka clusters, essentially your vCenters, if you want to get into
the product name, at will, right? Scale those clusters up, scale them down, repurpose them.
And so having to move and shift data because you're decommissioning a server, you're changing
the version that maybe it's running because you're trying to test something different.
What? There's a million different changes that you might want to make. Not having to wait and deal
with that and be able to truly take advantage of the, really the flexibility of your compute is a
key piece there. And so operationally, it allows the VMware admin, the infrastructure admin to do
more faster by not having to think about like, okay, is taking the server down going to affect
my availability of my storage? Is it going to affect the performance of my storage for another computer?
You don't have to think about that piece. The less you have to think about, the faster and easier it is to do everything else, right? It's a
simplicity play. I mean, it's not a
profound point, but it's very, very true. And I think also operationally, when you think about
it, this is, you know, you mentioned security before, kind of in a different example, but there's a security piece around operationally there too.
One of the points we've always made around protecting your data, which I think is a key piece, is if you can do it, if you're an admin and you can do it, anyone can do it.
And so this is a piece of that.
If your storage is in those compute nodes, you're an admin to that.
You install the OS, you own root,
you own administrator.
And so if you can get in, someone else can get in.
So this is a key piece of what we've done,
like what we call safe mode
on the FlashArray platform and FlashBlade
is immutable snapshots
that cannot be deleted by any admin, right?
We own that.
We build that into our software.
We don't let customers get that level of access.
And so those restore points are there too.
So operationally, from a security perspective,
I think there's an advantage there too of what we can do.
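To illustrate the idea Cody is describing, here is a minimal sketch of a retention-locked, immutable snapshot. This is not Pure's SafeMode implementation or API, just a hypothetical model of the concept: the lock is enforced by the storage software itself, so even a caller holding admin or root rights cannot delete the snapshot inside its retention window.

```python
# Hypothetical sketch of an immutable, retention-locked snapshot store.
# Not Pure's SafeMode code or API; it only models the concept that no
# administrator role can delete a snapshot before its lock expires.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Snapshot:
    name: str
    locked_until: datetime          # retention lock fixed at creation time


class SnapshotStore:
    def __init__(self, retention: timedelta):
        self.retention = retention
        self.snapshots: dict[str, Snapshot] = {}

    def create(self, name: str) -> Snapshot:
        snap = Snapshot(name, datetime.now(timezone.utc) + self.retention)
        self.snapshots[name] = snap
        return snap

    def delete(self, name: str, caller_role: str) -> None:
        snap = self.snapshots[name]
        # Enforced by the storage software itself, so the caller's role
        # ("admin", "root", anything) makes no difference.
        if datetime.now(timezone.utc) < snap.locked_until:
            raise PermissionError(
                f"{name} is retention-locked until {snap.locked_until:%Y-%m-%d %H:%M} UTC"
            )
        del self.snapshots[name]


store = SnapshotStore(retention=timedelta(days=14))
store.create("vmfs-datastore-01.daily")
try:
    store.delete("vmfs-datastore-01.daily", caller_role="admin")
except PermissionError as err:
    print(err)   # delete refused regardless of privileges
```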
I mean, that one in itself,
that's an entire discussion on its own, right?
Isn't it that whole idea of how security could be an issue
for things like ransomware and you name it,
the fact that by integrating the storage into the compute,
as in if you're running it as a VM,
you are really exposing yourself to potentially significant issues.
You know, from that perspective, I definitely see that.
But one of the things that sort of sparked this conversation at the very beginning was the idea of VMware
and being able to do stuff with VMware.
If I might be so bold as to suggest that VMware is not the only play in town anymore,
and people might be looking at other alternatives. If you're locked into the VMware way of doing things with
its own integral storage solution, then, you know, potentially with any solution, by the way,
it doesn't have to be VMware. It could be another solution that you're also locked in. You might be
wanting to move to VMware. Having the flexibility to be able to move between different hypervisors is enabled by the idea of having shared storage.
You know, that's another example of where you can change that compute, change that hypervisor,
I think, and have that flexibility. Well, and I think that's, yeah, again, I think the sort of
shock to the system that came around the VMware ecosystem today put that in focus for a lot of customers. And obviously, we're not an unbiased player in this space, right?
We want to sell customers storage and sell them external storage.
We try to come at it from a customer's best interest sort of standpoint.
We think there's very good reasons for that.
But we obviously have a slightly biased opinion in this.
But there are very, very few customers that we're not having this conversation with right now.
Right.
I mean, the vast majority of our customer base is running VMware environments and applications,
you know, in those environments on our FlashArray product.
And they're all trying to, and others are with other hypervisors.
Again, increasingly, we're getting asked about other hypervisor platforms that we don't
quite support yet today, but those conversations are all ongoing.
The point being, across our 12,000 plus customers, we're having this conversation with almost
all of them of what does your virtualization future look like as you put this into context?
And it doesn't really matter.
The vast majority are going to stay with VMware, at least in the near to medium term.
But for almost all of them, that flexibility to be able to make a change and change their environment and potentially add non-VMware workloads for other applications, or start in the cloud and other areas, matters. We've seen a few, and a few really large ones,
move over to OpenShift
and kind of working with Red Hat
or SUSE in those environments.
So we're seeing more and more things happening
across the customer base.
But that key piece that we keep bringing them back to
is if you want to control your destiny
and you want to be able to have,
you know, really this is a decision you make
on your business needs and your business priorities and the other vendors you want to work with.
You don't want to be locked in by a singular hypervisor and then this locked in HCI stack.
We're your best partner in doing that.
And that's why we, again, continue to work with them. What we'll say is, you want to work with
VMware and continue to work with VMware?
That's great.
You know, Cody and his team have done a phenomenal job of leading best-in-class integrations there and giving customers a really, really good experience there. If you want to work with OpenStack, we support that as well and have really good reference architectures and references in that space. The recently, just yesterday, announced replacement for VMC on the AWS side,
those are all options too, right? And so we've done that work to support almost all of the other
options a customer might want to go to, and do that in the context of: we're giving you back kind
of the full power to make your own choice here and control your data.
Yeah. I mean, Dan, I like to call it an informed opinion,
not a biased one, you know,
but regardless, jokes aside,
pretty much every RFP
that I think we've gotten in the past year has changed.
It used to be like, hey, there's 47 features
that we need you to be able to support
in the VMware environment, right?
Those are still there.
Those apps are still there,
but there's also: what about these other hypervisors?
What are these alternative approaches, right?
Because they're all looking for,
we might make a new step, we may not, right?
To Dan's point, probably many of them won't.
Most of them won't.
But most of them need to think about it, right?
And so it's like, hey, do you support these platforms?
And strategically, the thing that we've always done,
to Dan's point, I've always tried to build this
into our virtualization
ecosystem is to support data mobility.
How can we make open formats where it's easy to make moves,
share data, move it in, move it out,
share it across different platforms, whether it's a VM,
whether it's a different hypervisor,
whether it's a container, whether it's bare metal,
whether it's to the cloud.
So that's always been a design principle
for what we've done on our product
is not only being able to support multiple platforms,
but make it easy and or seamless to move that data
or shift it or redirect it right between different platforms.
And I think that's a really key piece.
And we truly try to take advantage of that on our platform
whenever we possibly can through partnering
with these companies.
I think you raised an interesting point there
that looking back at it, VMware, I think, did a great job in terms of supporting external storage. I know they wanted to go down the route
of vSAN; they wanted to bring in their own solution there as well. But they did initially do a great
job of external storage support. And I remember using probably ESX 3.5 and 4, maybe mid-2000s,
when we were trying to work out how we were going to get Fibre Channel
storage to connect to those platforms. It was really early on, and we had issues with things like,
if we had too many devices on a path, it would just scan forever trying to catch up with
all of the LUNs, and the boot would take forever to run. But that was,
you know, really early days. It always struck me that they did a really good job in supporting external storage, and it was really simple to do. You
know, things like self-discovery: when you connected another cluster node member, it would
just spot everything and be able to diagnose, oh yeah, that's a LUN I recognize because it's
got a datastore on it, and stuff like that. All of that stuff to me seemed really good, really
well written, well integrated. And it does
strike me that possibly one of the challenges that competitors will have is that they need to look at
how they do something similar, how they will actually address some of those abilities that
VMware had. So, you know, that's just a throwaway sort of comment: I think as you're looking at
whether you should leave VMware, one of the things you might actually have an issue with
is you might want to look at, you know,
how good the storage support is from the other vendors.
But what I think that sort of leads us on to
as the next stage of the discussion really
is just to understand that model
in a bit more detail and just, you know,
your effort as a vendor to support people's requirements
for accessing storage in different ways,
whether that's a hypervisor,
whether that's in a different type of platform.
And that, to me, is quite a key part of, I think,
any storage vendor's offerings is how well you support
the ecosystem that goes around it.
Yeah, I think there's a couple pieces here around what we've done.
One is it's clear, especially in the mid to enterprise, that customers are moving to the
hybrid cloud or leveraging the hybrid cloud, right?
So they have application services in both places, right?
And so our investment in building our cloud business, which has been my primary focus
over the past couple of years, has been a key piece there because we see customers like, hey, I'm running production in my data center,
but I need copies of it. I need things to move there. And so supporting what we've done in the
public cloud is really important. And a couple of weeks ago, we announced our native Azure service
to help not only bring the data there, we've done that with our previous CBS product, Cloud Block
Store, but now in the service-based approach that the customer's expecting in the cloud. So one is,
how do we make sure the data isn't in both places, but also how do we build it in the way that
customers expect? And a key piece of connecting all of that, that connective tissue around our
platform is Fusion, right? And you spoke to our team about that not too long ago around
how it's helping customers deal with placement
and data moves without having to get into the details of actually doing it. How can we make
that policy driven? And that's really a key piece of building not only a hybrid cloud, but also a
cloud in your data center is abstracting away the concept of an array, right? And really control
things via policies. And that's a core piece. And if you dig more into the ecosystem, the infrastructure,
one piece, as I mentioned before,
is making sure that when we're not only existing partners,
but as we're expanding our ecosystem to Dan's point
of having these next level conversations,
learning from the past has been the first conversation
that we've had.
What should we not do?
What do we have to do?
What have we learned? What do customers actually use? What value do they get out of this? Right?
So let's, we can skip lap one, two, three, four, five, six, seven, and go straight to eight and
say, this is really, this is where we got. We can skip these things. We don't need to do it,
or we can abstract it away. And so that's a key piece of these next level conversations
is to take what we've learned and improve it.
Right. And do something different, not just, hey, let's add support to this hypervisor as well.
How can we build it differently? How can we simplify it and remove a lot of those things that customers don't actually really need anymore or they don't truly need or don't get a lot of value out?
And the breadth of support is important. I think there's some standardization around that, but overall, OpenStack has its own
methodology around the Cinder driver. We've been putting support into that for a long time. We have
huge customers running that, almost exclusively huge customers running that. Obviously, on the
Kubernetes side, there's a CSI spec and everything we've done well above and beyond that with
Portworx to make sure that Kubernetes across the board is supported and the data can be easily moved in and out.
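As a concrete picture of what consuming a CSI-backed array looks like from the Kubernetes side, here is a minimal sketch using the official Kubernetes Python client to request a volume. The StorageClass name is a placeholder for whatever CSI driver (Portworx or any other vendor's) is registered in your cluster, not a specific product name; because the volume lives on external storage, a pod can be rescheduled to another node and simply reattach to it.

```python
# Minimal sketch: requesting a volume through a CSI-backed StorageClass
# with the official Kubernetes Python client ("kubernetes" package).
# "csi-block-example" is a hypothetical StorageClass name.

from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-block-example",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core = client.CoreV1Api()
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC submitted; the CSI driver provisions the volume on the external array.")
```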
And so continuing to drive on the VMware ecosystem, but really like our partnership with Microsoft,
as I mentioned, around the services, like we're seeing a lot of renewed interest in
the Microsoft compute layer, Hyper-V itself.
Windows Server 2025 just came out, right?
And they've publicly mentioned they're working on VMware Fabric support.
So they're putting a lot of effort into driving that forward too on the external storage
side. So really, as I said, the benefit you get out of external storage in many ways, to your point,
is we work with everyone, right? And so it's my goal, my job to make sure that that data is
accessible and supported with that partner. It's easy to move it in and out or share it across the board.
And that we not only are reactive to customer demands, but we're thinking a couple steps ahead about where is it going? What have we learned here? How do we work with them, work with these
new partners or existing partners to get there? I think a lot of people got Windows Server 2025
who didn't even ask for it from what I saw in some of the things that were published about unexpected upgrades.
Windows as a service.
Which is quite interesting.
Yeah, exactly.
Yeah, whether you like it or not.
So yeah, I think if you talk about all those additional
pieces, Cody, I think what perhaps it makes me think
is that rather than talk about centralized storage,
actually we should be thinking about storage as a service
and this being more of a transition
into storage as a service rather than centralized storage.
So yeah, okay, go back to the old EMC days
when we put in centralized storage
for very sensible reasons of consolidation
and cost saving and all the rest of it.
But actually, centralized storage
is actually evolving into storage as a
service where you can provide me my storage services in the cloud on-prem. You can automate
a lot of stuff, all the stuff that you just said about simplification. You're taking away a lot of
the overhead and management I have to think about as an end user and actually what you're giving me
as a service and an end point. And actually, I think perhaps that's where I think centralized
storage is really likely to head.
Yeah, it's a means to an end, right?
The end is not centralized storage.
The end is I want storage and I need it where I need it.
I want the data on that accessible as I need it, when I need it, without having to deal with the ins and outs around my compute and the location and this provisioning and this array and so forth.
It's absolutely a means to an end. There's benefit.
There's TCO that comes from it,
but absolutely that's our point
is building a storage platform,
storage as a service across the data centers,
across the cloud for the end customer, right?
This is what we see in the public cloud, right?
In the end, their core block storage services
in AWS and Azure,
they're backed by consolidated racks of storage, right?
But it's provided as a service.
And this is exactly, conceptually, right,
what we're providing for our customers,
but obviously they can build it
and customize that service
to their own application needs.
You actually said the perfect word, Chris,
which is endpoint.
And that's very much how we think about it.
So, you know, our strategy revolves
increasingly around our product Pure Fusion,
which is the control plane; the way we think about it is Fusion sitting across all those different endpoints.
So we've got, you know, a handful of storage products, FlashArray, FlashBlade, Pure Storage Cloud, which Cody heads, and then Portworx for Kubernetes environments.
Those FlashArrays and FlashBlades may live in your data center.
They might live at a colo or a partner data center; obviously, Pure Storage Cloud lives in the
public cloud environments, and Portworx in all sorts of different environments, public
cloud, in data center, whatever it may be.
All of those are endpoints, is how we think about it. You interface as a customer with a master control plane in Fusion, and it doesn't matter
which array type you're using, which protocol you're using. They are just endpoints out of that virtualized storage environment. That is
essentially how we kind of view the world. Yeah, that makes sense. So I think sort of
summing up our discussion, initially I thought, okay, we'll talk about centralized storage as a
product as to how it would actually continue to have some value in the modern world and
with the volumes of data we see being used across various different applications, especially
with AI.
But in reality, what we're transitioning to is a service-based approach, which just happens
to have a logical view of the storage as being centralized, even if it physically doesn't
necessarily sit in one location. As you just said, Dan, that could be boxes in my enterprise,
it could be in the cloud or whatever. But what's centralized now is my view of that
and my access of that. And ultimately, I think that's where we are. We're at that sort of
centralized access position rather than necessarily centralized from a physical
infrastructure perspective.
And I think that's probably a good way to look at it.
Yeah, exactly.
Exactly right.
Okay.
Do you know what?
This was interesting because we sort of started in one place and ended up somewhere else. But actually, in reality, what we have done is we've demonstrated that centralized storage still has value.
It just happens to be in a different way of looking at it,
in a service-based way of looking at it,
which I guess we should have got to eventually.
It's the logical conclusion of where we're at.
So I think that's probably the best we can say.
And just out of interest, have we got anything that's come up?
It might be worth just you touching on the Azure thing, Cody,
because I don't know whether we've even talked about that.
So people might not be familiar with that.
If you could just tell people what that is,
and then I think that's probably us done for our conversation.
Yeah, absolutely.
So to give a little bit of history, just a quick background,
is that a couple of years ago we introduced a product
called Pure Cloud Block Store.
This was a re-architecture of our Purity software
that runs on FlashArray
and building it specifically for AWS and Azure as a customer-managed deployed application from
the marketplace. It brought all the features and everything and the data connectivity to the data
center. Actually, two very different products between AWS and Azure. We're not just hosting
a VM. We literally engineered the operating system
to work with the very different backends offered
by those two vendors.
Yeah, it was not a lift and shift of Purity.
It was a true re-architecture of it.
One customized for AWS and one customized for Azure.
And we're constantly updating it
because the cloud's constantly updating the pieces,
the components, the performance,
the characteristics of the product.
But one of the pieces of feedback that we got,
it's no surprise, is that customers looking to use the public cloud
are looking for a service.
And so we engaged in an invite-only partnership from Microsoft
called Azure Native ISV Services
that allows us to take our SaaS offering
and integrate it natively into Azure itself.
There is a Microsoft engineering team that integrates our service natively into Azure.
So from a customer's perspective,
from purchasing, from provisioning, from management,
it's a service approach.
It looks and feels exactly like an Azure service,
Azure APIs, SDKs, CLI, UI,
but it's built and managed by Pure.
We have our own SRE team supporting the service.
And so not only do we get the value of our product,
the native experience of Microsoft,
but we also get to bring the support experience
that we're pretty proud of here at Pure as well.
And so it's an exciting opportunity, I think,
for us to help our customers accelerate migrations
and deploy their applications in Azure.
And so we just announced it a couple of weeks ago,
and we're looking for preview customers as we move close to the public preview of the offering.
Great. Okay. I think that was really useful.
It just sort of summarizes our discussion and gives you an exact example of what we've been talking about.
So great conversation. Really, really interesting.
And it's nice to do it in such a way that we sort of, you know, we can jump between different things and use a bit of history and a bit of experience. But, Dan, Cody, thanks for your time. Really a
great conversation, and I look forward to catching up with you soon.
All right, thanks so much, Chris.
Thanks.