Podcast Archive - StorageReview.com - Podcast #135: HPE Alletra Storage MP Sanjay Jagad
Episode Date: January 17, 2025. Brian talks with HPE's VP of Product Management for Cloud Data Infrastructure while on-site… The post Podcast #135: HPE Alletra Storage MP Sanjay Jagad appeared first on StorageReview.com.
Transcript
Last week I got to get on-site with HPE down in Spring, Texas. They've got a wonderful headquarters campus down there, and should you ever be in the area and have an invite, or know how to get in the back door, door number three by the parking garage I think is the most likely one to sneak in through. It's a great campus, a great facility, and the team down there is fantastic. I got to check out the Alletra MP storage array, and we got into what that universal storage platform enables while I was on-site. But today, what I want to do is talk about the business benefits, what customers are experiencing, and what the future of storage looks like from HPE's perspective. So I've brought in my good friend Sanjay to help me out with this.
Thank you for doing the podcast. Glad to have you here.
Hey, wonderful to be here, Brian. Looking forward to the discussion.
So thank you. Set it up for us: tell us a little bit about yourself and what you do at HPE.
So my name is Sanjay. I run product management for our hybrid cloud storage, looking after our storage solutions, and I'm super excited to talk about all the new stuff that we are doing out here.
Okay, so set up MP for us. It was about 18 months ago or so that you launched the storage down there in Spring; I was there for that. And the concept, I'm going to tell it through my lens and then you correct me where you think I'm wrong. The concept is basically that you have a universal storage platform: you've disaggregated the storage hardware from your storage operating systems, or operating environments, and now have a hardware platform and multiple software platforms that can be paired with that hardware. It's really like a universal storage server for high-availability storage. That's my take on it. Tell me about your view of MP when that came out about a year and a half ago.
So let me take a step back, right, if you don't mind, so that I can set the context before I jump into what Alletra Storage MP is and why we decided to do this. You know, we all have been doing storage for quite some time; I've been doing this for 20-plus years, right? And look at how storage systems were built 15, 20 years ago versus what they are today.
It's pretty typical.
You have a box, mostly an engineered system.
You have two controllers in it.
You have a midplane, and then you have a bunch of drives attached to that.
That universal design, a legacy design as I call it, hasn't changed.
But if you look at what's happening in the market in terms of the evolution of
applications and workloads, in terms of the amount of data that we generate and that needs
to be stored. And when we talk about storage from an HPE perspective, there's a saying that we have internally here: storage is where people used to store their data, but in the modern term today, storage is where data comes alive. And what I mean by that is people are looking at ways to
convert and monetize that data. They're looking at ways in terms of how do I store this data more
effectively? How do I cater this data to the modern applications
which are more and more distributed from edge to core to cloud?
And that required different ways of looking at storage.
Like I said, block storage,
if you look at all the offerings out there,
they have not fundamentally deviated from the architecture
that I was talking about.
Two controllers in a box, active-active or active-passive.
But that does not cater to modern workloads.
So when we decided to do something different, I call it HPE reimagining storage. And that's the genesis of Alletra MP storage.
Now there are two fundamental components to this. One is what you were alluding to: how do I standardize on a common hardware platform to remove all the custom engineering, and make it standard building blocks, like Lego blocks, that I can purpose with different personalities, whether it's for compute, disk, JBOF, or an appliance, and simplify the way hardware systems are built, taking in the concepts of SDS, right? Where I can streamline my supply chain, I can streamline my planning activities, I can streamline how those new boxes are built, and so on and so forth.
Then the second component is just reimagining the software stack.
And this is where the secret sauce is, right?
Which is: we have broken down the paradigm of storage design. We have decoupled the hardware and added more value in the software, which means my new software stack can take advantage of the evolution of the technologies underneath it. The NVMe fabric becomes more of a scale-out, high-bandwidth backplane for storage. The evolution of compute in terms of CPUs, that becomes my new compute nodes, with the ability to add GPUs. The evolution of the disk, whether it's QLC, TLC, density, or different form factors, those become my JBOFs. And the ability to compose them together and scale compute and storage independently, such that my software can accommodate that kind of growth and scale, is the fundamental building block that is different now.
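To make that composition idea concrete, here is a minimal Python sketch of the Lego-block model Sanjay describes: one standard chassis type that takes on a software-assigned persona and composes into a cluster where compute and capacity scale independently. The class and persona names are my own illustration, not HPE's software.

```python
# Conceptual sketch only -- not HPE's implementation. One standard
# chassis type takes on different personas and composes into a cluster
# that scales compute and capacity independently.
from dataclasses import dataclass, field
from enum import Enum


class Persona(Enum):
    CONTROLLER = "controller"  # compute node running the storage OS
    JBOF = "jbof"              # just-a-bunch-of-flash capacity shelf
    APPLIANCE = "appliance"    # controllers plus drives in one box


@dataclass
class Chassis:
    """One standard 2U building block; the persona is assigned in software."""
    serial: str
    persona: Persona
    drive_count: int = 0  # only meaningful for JBOF/appliance personas


@dataclass
class Cluster:
    """Disaggregated cluster: every controller reaches every JBOF over the NVMe fabric."""
    members: list[Chassis] = field(default_factory=list)

    def add(self, chassis: Chassis) -> None:
        self.members.append(chassis)

    def controllers(self) -> list[Chassis]:
        return [c for c in self.members if c.persona is Persona.CONTROLLER]

    def jbofs(self) -> list[Chassis]:
        return [c for c in self.members if c.persona is Persona.JBOF]


# Scale compute and capacity independently: add a single controller for
# performance, or a single JBOF for capacity, without touching the other tier.
cluster = Cluster()
cluster.add(Chassis("C1", Persona.CONTROLLER))
cluster.add(Chassis("C2", Persona.CONTROLLER))
cluster.add(Chassis("J1", Persona.JBOF, drive_count=24))
cluster.add(Chassis("C3", Persona.CONTROLLER))  # one more controller, no new JBOF needed
print(len(cluster.controllers()), "controllers,", len(cluster.jbofs()), "JBOFs")
```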
Now, there are nuances into it, Brian, and I can start getting into it.
But at the highest level, we have reimagined primary storage.
We have decoupled the complexity of engineered hardware.
We have decoupled the software stack so that we can bring in the innovations and be able
to take advantage of the latest technologies, whether it's compute, network, disk, so on
and so forth.
Well, it's interesting, because everybody knows, as you said, the two-controller storage SAN. It's been a staple in data centers for a very, very long time, with different methodologies to scale. The way you've architected the hardware for Alletra MP, especially now with the switch support, gives you quite a bit of flexibility in terms of scaling controllers independently from storage via JBOFs, and gives you not just the scalability but tier-zero-workload sorts of availability as well, with the ability to cross-connect every controller node to every JBOF so that everything's available all the time.
Talk a little bit about that, about the advancements in the hardware platform that enable some of these scalability and availability dimensions.
Yeah, and I think this ties into the value story, right?
So when you look at this architecture of disaggregation, right?
We are the only vendor from a block storage perspective that does that.
Now, when I decouple the hardware and software, that means I have removed the dependency on the hardware, right? So my controllers are now stateless. Each and every controller can access each and every disk connected to the fabric. So when a controller dies, failover is instantaneous; I don't have to worry about that. Plus, from a performance perspective, I can deliver higher bandwidth and throughput through the NVMe fabric and deliver superior performance. Those are the fundamental principles of this architecture. Now, what does it mean from a value perspective to the customer? First things first,
as you mentioned, I can scale granularly, meaning if I want to just add more capacity because my workloads need it, I can do that in two-drive increments, and I can add up to 16 JBOFs. If I look at just 15-terabyte drives, that's about six petabytes of raw storage. That's huge. And as drive capacities increase, that will double and triple.
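A quick back-of-the-envelope check of that figure: the 16-JBOF and roughly-15-terabyte numbers come from the conversation, while the 24-drives-per-JBOF slot count is my assumption for illustration, not a confirmed spec.

```python
# Rough arithmetic behind the "about six petabytes" claim.
JBOFS = 16
SLOTS_PER_JBOF = 24   # assumed 2U NVMe slot count (illustration only)
DRIVE_TB = 15.36      # a common "15 TB class" NVMe drive capacity

raw_tb = JBOFS * SLOTS_PER_JBOF * DRIVE_TB
print(f"{raw_tb:,.0f} TB raw ≈ {raw_tb / 1000:.1f} PB")  # ~5,898 TB ≈ 5.9 PB
# Doubling the drive size (e.g., a 30 TB class drive) doubles raw capacity,
# which is the scaling Sanjay alludes to.
```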
On the compute side of it, remember, in the old legacy world you had to do it in HA pairs. I don't have to worry about that; I've broken down that HA paradigm. I can just add one controller if I need to scale performance and get the bump I want. If I want more, I can add a second controller dynamically. I call it point-in-time, OPEX-style usage, where I only buy what I need, so I'm not over-provisioning, thus reducing the dollars I'm spending and maximizing the utilization of my infrastructure so that I get better value
and better TCO, right? So this is what the advance is.
Yeah, while you're on that expansion piece, I think that one was really interesting to me, and actually kind of surprising. Because if you look at the MP hardware on its own, it's a 2U box that looks very ProLiant-y on the front, and on the back it looks very SAN-y, because it's got two 1U sleds that can come out, which are your IO nodes, or your controller nodes if we're going to use some more common parlance of the storage day. But the fact that you could start with two and add a third one without having to do a fourth one is kind of a big deal, I think.
It's a huge one, I think, especially for customers
who don't know what their workload needs are going to be, right? And they don't want to
lock up their investment in CapEx or infrastructure spend when they don't know whether they need it.
Plus, if you look at it, most of these customers are trying to figure out how they're going to take advantage of these newer workloads, how they monetize the data, and there is a dynamicity of workloads that they don't even know about and haven't anticipated. For them, this flexibility is huge: hey, I don't have to try to figure out how to stitch together these HA silos. In scale-out storage where all my controllers can see all the disks, I can keep adding one controller at a time, because I know that controller will be able to see all the storage that was there before; it's a shared storage environment. That is very, very helpful for dynamically scaling the performance of these particular workloads.
The dynamicity of this architecture is what brings value to the customers because modern
workloads are dynamic.
Modern infrastructures are going to be dynamic.
We all know that.
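A minimal sketch of why stateless controllers make that work: the volume data lives on the shared, fabric-attached JBOFs, so losing a controller only reassigns ownership. This is my conceptual illustration, not HPE's failover code, and all the names in it are invented.

```python
# Toy model (illustration only) of stateless-controller failover:
# volume state lives on shared fabric-attached drives, so any surviving
# controller can serve any volume. No data moves, only ownership.
volumes = {"vol1": "J1", "vol2": "J1", "vol3": "J2"}   # volume -> JBOF holding its data
controllers = {"C1", "C2"}
serving = {"vol1": "C1", "vol2": "C2", "vol3": "C1"}   # current owner per volume


def fail_controller(dead: str) -> None:
    """Reassign the dead controller's volumes to any surviving peer."""
    controllers.discard(dead)
    for vol, owner in serving.items():
        if owner == dead:
            serving[vol] = next(iter(controllers))  # any peer sees the same JBOFs


fail_controller("C1")
print(serving)  # vol1 and vol3 now served by C2; the data never moved
```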
Well, you talked about some of the hardware, we've got a long form video that's going
to get into this really deep.
But the other thing that struck me, when you talk about things like supply chain and having a universal platform for HPE storage solutions, is what that means: instead of many different chassis, we've got essentially one, with different controller nodes depending on whether it's a JBOF or the storage controller. But even within those 1U trays, it's AMD CPUs, it's standard DDR, it's OCP NICs, which I think is really compelling, because that makes it easy for you guys to support new fabrics as they come out and gives your customers more options in a NIC that's not proprietary.
I don't want to diminish your engineering, but there's nothing particularly fancy about
the NICs or HBAs. You hit the right word, nothing proprietary. I think that's key because this is what customers want, right?
They don't want to lock themselves into an engineered system, right?
Where now if I have to upgrade them, I have to go through forklift upgrades and costly
migrations and so on and so forth, right?
So when we did this, and I literally mean it, it's like a Lego brick approach. We have a standard 2U box, and depending on which AMD CPUs are in there and how much memory, we can put on different personas. We can put on the persona of a high-performance controller node. We can put on the persona of a JBOF. Or we can put on the persona of a system, an appliance, for when people want to treat it as a legacy storage appliance. Not only that: across our software stack, whether it's block, file, or object, we will have the same universal hardware platform powering all our storage offerings. This gives me economies of scale. Now think about this. We went through that supply chain nightmare during the COVID timeframe. Now I have to figure out a way to make sure my supply chain is not disrupted. So I have efficiencies built in where, if I build the next-generation box, I can seamlessly add it into my existing scale-out disaggregated cluster and slowly phase out the old ones, without even having to go through disruptions or upgrades or migrations.
I call it an always-on infrastructure.
This is why we embarked on this journey: to standardize on the hardware, remove anything proprietary, and then put the value in the software stack.
Well, I think it's important to note, though, too,
that this lets you embrace and adopt new technology faster,
more likely, I would say.
So as we look at things like Gen5 SSDs as those come to market, or different form factors, or different protocols as already highlighted, or new CPUs, or DDR5, or whatever it is. Obviously, you've got to engineer in the support for those, but it's not as if, when I take any one of those components, I've got to throw everything away and start over. If I add a Gen5 JBOF in the future at some point, that doesn't mean all the other stuff goes away. It just folds into the organization. It can coexist.
It can coexist and deliver value. So at the end of the day,
the goal here is this architecture will deliver superior value for our customers because it will
give them maximum value for their investment on their hardware, right? It will give them a
point-in-time scale, so I don't have to over-provision. Plus, and this is very important, Brian, we talk about investment protection from a customer perspective, right? We have something called the Timeless program that gives the customer an assurance: hey, CPU tick-tock speeds change every two, three years, right? So when they need to refresh their environment, we give them an assurance that I can give you the latest, whether it's DDR memory, CPUs, or IOMs, as a seamless upgrade into your environment at no additional cost, so that you can continuously get value out of this architecture, and literally keep doing this every time the generation of CPUs, tick-tocks, and everything else changes. That's where our partnership with AMD also comes to fruition.
Right.
Yeah, I know we talked about interconnects and HBAs already, but I think that's one of those that's kind of easy, where if you had bought this, say, a year and a half ago at launch, you had 32-gig fiber in it, or 16, or whatever you supported at the time. And now my organization makes an investment in 64-gig fiber. It's a lot easier now to come back and pick that up in MP when you support it, if I want to immediately pick up that doubled speed, and maybe some latency gains on that fabric, versus the old days where you'd wait three to five years until you take your old array down and put a new SAN in. Little things like that make the adoption of new technology so much faster and provide more value from the existing deployment with very minimal changes.
Exactly.
And I think this is why I go back to my original comment.
We have reimagined how primary storage is done.
We reimagined how storage architectures were.
We have reimagined how storage systems are built.
We've thought through all of those things.
We have gone through the pains of understanding what the customer lifecycle
is in terms of how they go about deployment, how they go about refreshing.
And the challenge for us was: how do we break this paradigm of the legacy architectures so that we can deliver a superior story for our customers, not only for their current issues, but moving forward, right?
So you are absolutely spot on, Brian, right?
You know, the hardware boxes were built such that some of these components are seamlessly hot-pluggable, whether it's your HBAs and OCP cards or your IOM controllers.
The standard 2U design fits that concept.
And then the intrinsic value inside the software to take advantage of it in a dynamic way so
that we can deliver value is huge.
Our customers are seeing it.
We have had many customers who, when we talked with them about this architecture, said: this is exactly what we are looking for.
Fantastic.
They are looking to refresh their entire infrastructure.
We have had a few huge wins where we literally swept out some of our competitors' arrays and replaced them with this new architecture, because customers see the value.
It's interesting to think about this too because software-defined storage has been around forever.
We typically think of that as a different class of storage
than the traditional SAN array, right?
Because conceptually, for the customer, it was always easier: go pick whatever hardware you want, go pick whatever software you want, mash them together, and off you go. And while that works sometimes, for what we're talking about, for enterprise storage, for enterprise data features and availability and resiliency and all of those things, you often need more than that. And we've seen a hundred times the number of problems you run into when you take a software package and try to throw it on hardware that should work, but doesn't always work, because no one ever did the integration. And maybe you're the first person, or one of the first couple of people, through the tunnel trying to make that happen. That's not a place where your customers want to be. So I want to be clear that
just because you've disaggregated the two from each other, it doesn't mean they don't seamlessly
fit together. And clearly that's what your customers are seeing as they've deployed the
MP. Yeah, exactly. And I think you raise a very good point, right? SDS brings in its value,
but it has its own challenges because they have decoupled the stack, but they don't have a way
to standardize on a hardware building block. It's literally like we have taken the best of both breeds.
We did the SDS approach for our software stack.
Then we supported it with the standardized building block
on the hardware, which is deprived of any proprietoriness.
So it's not like I'm locking them into something
that they cannot move away from.
But together, it delivers
a superior story in a way that we have never done it in the industry before.
So what are the early results, in terms of what your customers have seen over the last year and a half, almost two years now? Obviously, you had some in early. You've added scalability over time. You've added more data services; anything that was numbered has probably gone up in the number of things supported, right? Your LUN counts or whatever. But what's been some of the feedback that you guys have taken to heart? And maybe even some things that have surprised you in the way customers are using this? Because often you guys conceptualize it, the customer puts it to work, and then they start doing something a little different than you expected. I'm just curious if you have any feedback on that front.
I think I'll state two things, right? And I've alluded to this a little bit. The feedback has been awesome. If you look at where the market adoption is for our Alletra MP, it is by far the highest-performing product in terms of adoption from an HPE perspective. Even in the short period you referred to, about a year and a half, we are ranked in the Leaders quadrant of the Gartner Magic Quadrant. For a new product to achieve that and beat some of the incumbents out there, who from a legacy perspective have been doing this for 15, 20 years, that's huge. Customers are seeing tangible value, not only in the sense of reduced spend on their infrastructure because of the scale-out architecture I'm talking about, 30%, 40% overall TCO.
The other thing I will probably need to highlight out here is this is also supported with all
the innovations that we have been doing on the GreenLake Cloud Platform and our AIOps
story.
Because once you have the architecture and the hardware building block, the software piece is huge: eliminating mundane day-to-day operational tasks and using AI, which is now much more powerful given the data points we have, to simplify operations, the support experience, and how people provision their storage, and to speed up their time to value.
So if you look at all the three layers,
GLCP from an operational perspective, software architecture to derive value,
and then the hardware, the feedback has been tremendous.
Especially when you look at some of the big enterprises who are now trying to standardize on this, and our ability to support this across file, block, and object. I think this is like a perfect storm for us in terms of getting more momentum behind us.
Well, you hit on a couple of interesting things that I want to dive into. Let's talk about ease of use, ease of management, lifecycle updates, things like that. The hardware: you've made it easier to acquire because you've controlled for some of the supply chain challenges, maybe made it more cost-effective as well, and easier to update and manage. Physically, that's one thing. On the software side, what are you having to do? You talked about AIOps a little bit, but what else are you doing to make this an easy platform to manage? Because if we've seen anything, it's the continuing trend of IT admins being tasked with more and more activities. We're seeing fewer and fewer classically trained storage admins entering the workforce. So the trend, it seems to me, is that we need our IT staff to be more flexible. And for that to be real, systems have to get easier to use, easier to administer, update, troubleshoot, et cetera. So you did touch on that; I'd like to hear a little bit more about the way HPE is thinking about that challenge.
Yeah. And I think you said it, right?
The role of an IT admin has evolved, right? It has been evolving for the last 10-plus years. It became the virtualization IT guy, then the cloud IT guy. Now, because of the way they do cloud, you have the DevOps guys and the application guys who come and say, hey, I need this storage. And they don't want to wait weeks to get their volumes or LUNs or shares provisioned. They come at it from a workload perspective. They see storage as something that should be made available to them when they want it, because that's what the hyperscalers have brought to the game.
Because of what we have done in the GreenLake Cloud Platform from an operational perspective, we have mimicked a lot of those workflows and added more to it because of the amount of data points that we have. If you look at it, InfoSight is by far the leader in AIOps, though we don't get enough
credit for it.
But we have taken it to the next level.
It has gone beyond support experience in trying to figure out how I'm going to proactively
support and troubleshoot the arrays.
But the integrations that we have on the workload side, for example, our plugins are on the
private cloud so that I can manage the lifecycle of the VMs, including the storage.
I can do workload-based provisioning, where the developers can come and say, hey, I need this many resources for this particular workload, and we will go and find the best place to put that workload so that it accommodates their performance needs, their scale needs, and so on and so forth.
We have dynamicity in terms of monitoring some of these things, like any drift in workload application and performance, so that we can adjust the QoS policies and they can meet their SLAs. We are now leveraging an internally built AI model that monitors all of these different data points and dynamically adjusts the infrastructure resources so that the SLAs are delivered on. That enables some of these newer personas, whether it's the DevOps guys or the modern cloud IT admin guys, to take advantage of that seamless experience plane. And by the way, because this software stack is now SDS, I can run the same instances in the cloud and give you a true hybrid cloud control plane, on-premises or in the cloud, where you can enforce your data policies, your security policies, your governance policies, your operational workload deployment, and so on and so forth. When you think about it, the architecture is great, the value is great, the performance is great, but we also have to deliver on the operational side of it, to give that experience where you are taking the time it takes for people to use your storage from weeks, days, months down to minutes and seconds.
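As a rough illustration of the drift-and-adjust loop Sanjay describes, here is a toy sketch. The thresholds, share values, and workload names are invented for the example; the real AIOps model is internal to HPE.

```python
# Toy sketch (illustration only) of SLA-drift detection adjusting QoS:
# if a workload's observed latency drifts past its SLA, grow its QoS
# share and rebalance the remainder across the other workloads.
sla_latency_ms = {"oltp": 1.0, "analytics": 5.0}
qos_share = {"oltp": 0.5, "analytics": 0.5}


def adjust_for_drift(observed_ms: dict[str, float]) -> None:
    for workload, latency in observed_ms.items():
        if latency > sla_latency_ms[workload]:          # SLA drift detected
            qos_share[workload] = min(0.8, qos_share[workload] + 0.1)
            others = [w for w in qos_share if w != workload]
            remainder = 1.0 - qos_share[workload]
            for w in others:                             # rebalance the rest
                qos_share[w] = remainder / len(others)


adjust_for_drift({"oltp": 1.8, "analytics": 3.2})
print(qos_share)  # oltp's share grows until its latency returns under SLA
```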
You're talking about some of these things. Bring it all the way around, though, and tell me a little bit more about how HPE views the benefits of being able to take something like this hardware with block storage software and present that through the GreenLake Cloud Console, in addition to all the other little tiles that are available that you may be using, or maybe not. Maybe you start here and then decide, well, I need a better hybrid cloud strategy, so I can add that capability; or I need better data protection, so I can add that Zerto service or whatever it is. Can you talk a little more about the management of the larger data estate, not just MP? I know it's stepping a little bit outside of your expertise.
No, no, this is perfect.
I talk about this to our customers all the time.
We in HP have this thing which we call a three-layer cake.
The bottom layer is the infrastructure and the architecture that we talked about. The middle layer is basically all the services, as I call it. So once you put up the infrastructure, you go to the GreenLake Cloud Platform, you set up all your devices and everything else, it's configured and paired, and then you start provisioning services, whether it's block, file, object, private cloud, data protection, and so on and so forth; all of those things are services.
Once you have those services up and running, now those services are what is consumed by
the application and the workloads.
If somebody coming from the workload down says, I want block, they go to the block side of it and say, hey, I need to provision block storage; how do I go about doing it? Then there's a guided way of doing this, so that a person who doesn't understand storage, or doesn't understand IT, doesn't even need to worry about it. We have a guided selling motion, or guided configuration motion, where we guide them through exactly what they are looking for from a workload perspective and then get them the resources that they need.
The third layer is where you now bring in value-added services.
Now, if I want to run, let's say, some form of AI services, or if I want to run a platform as a service, how do I package all of those pieces and convert them into a service that can then be used by the organization, so they can actually get more value from their infrastructure in a way that aligns with their business objectives?
Bottom layer is the infrastructure.
Middle layer is all the services, data services, storage services.
And then the third layer is all the value-added services, which are basically outcome-based,
where a customer is trying to say,
I need to do this.
Let's say I want to use AI or I want to use hybrid cloud.
I want to deploy some of these things in the public cloud.
Those value-added services are also plugged in on top of all of this.
And all of this is done on the GreenLake Cloud Platform. So it's a universal control plane, an operational plane, a platform story that allows you to manage all of those pieces seamlessly, alongside security and governance and so on and so forth.
Have you found customers that are net new to HPE, that start with block and then are blown away or excited about what they can see within the GreenLake Cloud Platform?
Because I would think a lot of traditional SAN customers
wouldn't be used to saying,
okay, well, now you've addressed my storage need,
but oh my gosh, look at all these other little widgets
that I've got access to that I can pull in
to make my entire infrastructure more resilient
or more capable than maybe I ever thought possible.
And I think that is why the phrase "reimagining storage" is very, very important. Because if you go to a classic IT guy, they will say, okay, fine. But if you now go talk to the right personas at the customer, like a person who deals with data, the chief data officer, or a cloud guy, or a DevOps guy, they will immediately gravitate toward it and understand the value. So you have to win over the personas who are looking at and evaluating this, because the traditional IT guy still thinks from a traditional IT perspective. And it's not that they don't see value; they're also asking, hey, what does it mean to me?
Yeah. Well, they're measured against different things.
Exactly.
And if they're legacy guys that prefer CLI, then maybe they're not used to some of the
cloud management capabilities of modern storage, right?
Yeah. And we need to cater to all of those audiences. So we have robust APIs, not just a CLI but REST APIs, that integrate into different workloads and applications. We have APIs that allow them to integrate into orchestration tools, whether it's Puppet or Ansible or whatever it is.
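To give a flavor of what driving storage from such tooling looks like, here is a hedged sketch of a REST provisioning call. The base URL, endpoint path, payload fields, and token handling are placeholders of my own, not the documented GreenLake API; it only illustrates the pattern an orchestration tool or home-grown framework would follow.

```python
# Hypothetical REST provisioning call -- endpoint and fields are
# placeholders, not HPE's documented API. It illustrates the generic
# "provision a volume over REST" pattern described above.
import requests  # pip install requests

API = "https://greenlake.example.com/api/v1"   # placeholder base URL
TOKEN = "REDACTED"                              # obtained out of band

resp = requests.post(
    f"{API}/volumes",                           # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"name": "dev-vol-01", "size_gib": 512, "workload": "oltp"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g., the new volume's ID and placement decision
```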
Because different customers have different ways. Some of them have their own little framework that they have built to be more heterogeneous, right? They want us to integrate into that, so we do that too. You cannot take away options. We need to make sure that during the transition phase you get people from point A to point B by telling them, hey, we will support you throughout this, by the way, and this is how you can move over and see more value on the GreenLake Cloud Platform.
Does that make it harder or easier to sell, then?
Because if you're appealing to so many different personas within an organization, from the SAN
administrator to maybe an IT generalist, all the way up the stack, they all care about very
different things. So what does the trial process look like now?
In the old days, it was: drop your array off on a customer's dock for some indeterminate amount of time, hope they put it in, hope they deploy it correctly, and then have a good experience and buy it. But I don't get the sense that works as well as it used to.
So what does the modern buying process look like for block storage?
I know transitions are never easy, right?
And we are in the midst of those transitions, right?
If you look at how the workloads are pivoting, how the infrastructure is pivoting, how the cloud guys are now figuring out that they need hybrid storage. So it's never easy, but this is where what we are doing right now, educating and showcasing why we are doing this and what it means to customers and the different personas, is more valuable. That means our sellers need to be aware of what we are trying to do. Our buyers need to be made aware of what this brings to them. Then the analysts and other communities out there need to see the value and be able to talk about it. If you don't talk about it, like we are doing today, then it's the best-kept secret that nobody knows. It is tough, but I think the reception so far has been phenomenal. This is what we are looking to build momentum on.
We talked a little bit about the high availability nature, the switch fabric option that lets you
have all the nodes seeing all the storage and that sort of thing.
When you're thinking about block storage, HPE has other things in the portfolio, like XP8 for mission-critical storage, but there's a lot here that gets pretty close in terms of availability. And I know you've got an availability guarantee. How should customers think about both ends of the spectrum? Where do you start with MP block, or Alletra MP for block storage on GreenLake? And how far up the scale does this system go?
So we are a year and a half into this and there is a long road ahead.
I can go and tell you right now, my roadmap is loaded for the next three years, if I may
say that, because there's so much we can do with this architecture.
The way I look at this is, like I said, transitions are happening. We are transitioning a major part of our storage portfolio to be MP-based. But there will be corners where the XPs will have a play and the MSAs will have a play, right? And we need to make sure that eventually we get there. Those products bring unique value in terms of their own specialty, what they do. And the customers in some of those segments have a long tail when it comes to switching over to a new generation of architecture. We understand that. We are not looking for a clean sweep in just one go. This is a journey; we are well on our way to running it and winning it, but it's going to take time. Having said that, where we are right now with Alletra MP and the capabilities that we have, I think we can cover 75% to 80% of those use cases, and there is more to come.
Well, I think it's a good point that it's a robust platform.
And as you say, your development list is long
and probably will never be complete
because there's always something to-
It's never complete, right?
There's always something to make better, some service that can always be improved.
But fundamentally, the platform, you're using it for block, you're using it for file, one day maybe object as well. The same platform gives you, from a hardware perspective, more flexibility than ever. I mean, if you look back at some of the storage brands that HPE has had and developed and acquired, very much this traditional SAN business was: this one's good for this, that one's good for that, and this one's good for this other thing. And you end up with a lot of different solutions. The path is pretty clear, though, that this universal hardware platform, with the appropriate software and scale for these different solutions, is definitely where you're going, and it wouldn't have been possible under those traditional SAN hardware and software development guidelines.
I think the legacy approach to SAN storage needed to be changed, right?
You know, it's just been like this for 20 plus years.
I remember I started my career at Sun, where we built those SAN arrays. It's been 25 years now. I think that's a long time in a technology world for things not to have changed. This needed to be done. We have had a lot of technology evolutions, like HCI and SDS, but nobody has fundamentally deviated from this HA-pair notion that we talked about. Practically, that's also because of the technology available. The NVMe fabric now allows you to get the scale you want so that there's no performance bottleneck. The CPUs and GPUs on the compute side can now deliver enough to keep up with the performance that flash drives brought. Now we are at that cusp where
all the technology innovations are lining up. With the right architecture, I think we
are by far leading the way right now. I think that's where I'm super excited about this.
Well, we're excited too.
Like I said, I was down at the Spring office outside Houston, and we got to play with the hardware, see it all come together, see the scalability. And I know I've said that a hundred times, but when it started, it was a smaller solution, and you guys have really quickly escalated that and grown the capabilities to scale the controller nodes up and to scale the JBOFs up. And I think it'll be fun to watch, as new hardware, new protocols, new drives, new everything comes to market, how you continue to adapt the Alletra MP platform to pull those in and give your customers more flexibility, more choice, and get it to the point where they don't have to worry about the underlying technology, you know, whether it's Gen5 or 6 or CXL or DDR5 or whatever it is.
None of that should really matter.
At the end of the day, you'll deliver a management plane,
a hardware platform that's easy to consume and scale. Hopefully, the trends
continue to be with you on this one. It's pretty exciting. It is super exciting. Like I said,
we have a lot of new stuff that is right around the corner. Super excited. I think this is by far,
if you look at it, and you mentioned this briefly, and I just want to reiterate, right? In the year and a half, we have delivered significant functionality in a predictable way.
And this goes to the strength of this team.
Also, the commitment that we have towards the roadmap.
Like I said, right, my roadmap is loaded.
There's a lot of exciting new innovation that is coming.
And this is where HPE is going to change the landscape when it comes to storage, and primary storage in particular. So we are super excited about this, and we should continue talking about the innovations that we bring to the table.
Well, no problem. I know release 4 just came out this past summer, which is a pretty quick cadence that you guys are working at, bringing new capabilities and features to the platform. So that's great.
We will make sure to link to the HPE site so you can check it out and learn more about Alletra MP, learn more about GreenLake, all that good stuff.
Sanjay, thanks for doing the pod.
This was a great conversation.
Appreciate your time.
Super. Always appreciate it, Brian. Nice talking.
And go Alletra MP. Thank you.