Storage Unpacked Podcast - Storage Unpacked 261 – Pure Storage Platform Announcements at Accelerate 2024 (Sponsored)
Episode Date: June 19, 2024
In this podcast episode, Chris discusses the platform update announcements from Pure Accelerate 2024 with Prakash Darji, VP and GM of the Digital Experience BU at Pure Storage....
Transcript
This is Storage Unpacked. Subscribe at StorageUnpacked.com.
This is Chris Evans and today I am joined by Prakash Darji from Pure Storage.
Prakash, good to see you again.
Yeah, likewise.
So I think the last time we bumped into each other you were in London.
Yeah, it was great. It was good to see you out there.
Yep, that was an event where you were updating us all on some technology.
And we're here today, actually, funny enough, to talk about the announcements you've made at Pure Accelerate 2024.
And there's quite a few things to get into.
Now, before we start, I think I was going to make a joke about AI here,
but I'm not sure whether it's a joke or not. Everybody seems to be focusing on AI, and you
have got some AI stuff to talk about. But thank goodness, it's not all about AI. There's quite
a number of other things to talk about in this conversation. Yeah, it's interesting. AI is a
workload. AI is a capability. We're treating it as such, but it's not the only workload and it's not the only capability. So, you know, I think there's way more things
happening in the world, but obviously, you know, we don't have our head in the sand ignoring that
one either. Absolutely. And it's interesting because everybody seems to have forgotten the
fact that there are still traditional workloads out there that still need to be serviced and
things that still need to be done. So thankfully, we're going to be talking about a mix of all of those things as we talk about
your announcements. Now, I'm just having a look down the list here. And the first thing I think
is really updates to the Pure Storage platform. So why don't you kick us off with that section?
We'll have a discussion about that. And then we can go on and talk about the other pieces that
you've got that are being announced today. Yeah, well, everyone can say platform, but let's actually first define what we're considering the Pure Storage platform.
We typically talk about it under four components.
One, our platform has a unique Evergreen architecture.
Two, it provides a single operating system for scale up and scale out, managing all types of workloads.
Three, it provides simple and unified management.
So it's not disparate management tools and disparate APIs and those types of things.
And four, it delivers outcomes via SLAs, via our Evergreen One service offering.
So as we talk to customers, we're anchoring everyone on those platform things. Now, from a platform announcement standpoint, the key core platform capabilities are: one, improvements in cyber resiliency, capabilities in the management and protection space; two, more flexibility in our Evergreen One service; and three, on the management side. Pure's been known for our simplicity of managing, but it started with the simplicity of managing a box: how do you rack, stack, and cable it, all the way to how do you operate it as a plug-and-play box. But now we have customers with thousands of boxes. So how do we extend that
simplicity non-disruptively via Evergreen to the fleet? So we've redesigned the management paradigm to think namespace first versus box first, extending that management paradigm via a virtual namespace across the boundary of a box.
Yeah, I think that's really important, because it's very interesting to think back to the way we used to deploy and manage the
technology. We would very much think about it as a piece of hardware. And I think really, you know,
if you look at the evolution of where you've gone over the years, you've tried to pull people away
from that, especially with things like the licensing terms and the consumption models,
but also in terms of being able to actually load balance and do things like that with the workloads that are sitting on your platform.
And I think more and more you should be getting customers, and you are getting customers,
to think about application workloads and the data rather than necessarily the physical
boxes that are being put out there.
You know, it's evolved, because now we collect about 25 petabytes of information a year that we can fingerprint to understand workload types.
This is an Oracle workload.
This is a SQL Server workload.
This is a Splunk workload.
We're doing workload identification.
And we think about policy management for workloads.
So I've got an Oracle workload.
I need to provision it.
What type of protection policy might I need? And that might be provisioning my primary on a FlashArray//X and provisioning my snapshot offloads on a FlashArray//C, or setting up an HA topology. All of those things you've
had to consider, but they were almost always like disparate. It's like, okay, this is the
provisioning workflow. This is the replication workflow. This is the, you know, tiering workflow. And, you know, why don't you
just think about everything from a workload lens first? You can't do that unless you understand
workloads. And, you know, we've been focused on building up that intelligence over time.
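To make that workload-first idea concrete, here is a minimal Python sketch of policy selection hanging off workload identification rather than off a box. Everything here is hypothetical for illustration: the WorkloadPolicy record, the POLICY_CATALOG, and the array-class strings are invented names, not Pure's actual API.

```python
from dataclasses import dataclass

# Hypothetical policy record: what a workload needs, independent of any box.
@dataclass
class WorkloadPolicy:
    primary_tier: str       # e.g. a performance-optimized array class
    snapshot_offload: str   # e.g. a capacity-optimized array class
    ha_topology: bool       # whether to stand up an HA pair
    snapshot_interval_min: int

# A fingerprinted workload type maps to one full policy, rather than to
# separate provisioning / replication / tiering workflows.
POLICY_CATALOG = {
    "oracle":     WorkloadPolicy("flasharray-x", "flasharray-c", True, 15),
    "sql-server": WorkloadPolicy("flasharray-x", "flasharray-c", True, 30),
    "splunk":     WorkloadPolicy("flasharray-c", "flasharray-c", False, 60),
}

def provision(workload_type: str) -> WorkloadPolicy:
    """Resolve everything a workload needs from one identification step."""
    try:
        return POLICY_CATALOG[workload_type]
    except KeyError:
        raise ValueError(f"no fingerprint match for {workload_type!r}")

if __name__ == "__main__":
    policy = provision("oracle")
    print(f"primary on {policy.primary_tier}, "
          f"snapshots offloaded to {policy.snapshot_offload} "
          f"every {policy.snapshot_interval_min} min, HA={policy.ha_topology}")
```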
So what's new? What, in terms of the detail, have you actually announced?
Well, so the first set of platform announcements that are new are our next generation Fusion.
This is not the first time we've talked about Fusion. We've introduced it in the past.
And we had that workload-oriented paradigm in mind.
But what's different between this release and the earlier one is that the earlier one was more of a new management paradigm.
We initially started saying customers could go and start either managing by box or managing by fleet. I think
we've learned that that probably wasn't great because you had to make a choice between two
worlds. So what if we could give you the best of both worlds where we said, it's not a choice,
right? So we took the Fusion concept and we natively integrated it into Purity in the box, meaning now you could be in a box and operate and manage within that box, or you can join that box to a fleet and manage across the fleet, regardless of what box you're on. So we kept the concepts of Fusion we had in our first version, but made it broadly available and backwards compatible for people to adopt easily, to link into a fleet without thinking about a new management paradigm.
Okay, so the sort of thing that would make sense to me here is, for example,
let's say I've set a set of policies on a particular box
and I move the workload to another box,
the policy goes with
that workload. So now rather than thinking that the two are disparate and I have to go and
reapply everything in each box I want to move things to, I've now got flexibility to, I guess,
use both. I can have the box level of policy or I can have workload-based stuff, which is
taken from Fusion. Yeah. And it's not incongruent. So, you know, I think that makes it easy and we'd
like to get people to think beyond the box, right? So, you know, a lot of times it'll start with,
okay, I started with a workload. It was on this box. I've had this policy attached to it and
I'm moving it. But we want to take customers further than that saying, when you think about
the workload, don't think about the box. Let's think about the policy and you can place it in the box that's most efficient. And that actually, that to me works
two ways. One, the customer can make that decision and decide where they want it. But also if you're
delivering this as a service, the customer shouldn't necessarily have to care, because if you put that box in and you're managing it, you're the one that can be flexible with what hardware you put in and how you manage that hardware. And I just think that makes life easy for everybody if you've got that ability to
step back from that hardware. Well, that's why we think about it as a platform, right? I mentioned
the four components, management and Fusion is one area, but like with Evergreen One,
we're taking the responsibility of delivering that service. We'll use Fusion to go ahead and implement policies and manage our services
and those types of things. So it creates a unique environment for a customer where they could say,
I want to retain control, I want to set my policies; or I can delegate it to a vendor SLA where it's Pure doing it on my behalf and I can just use storage.
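As a rough mental model of that box-or-fleet idea, here is a hedged sketch: a box stays individually manageable, joining a fleet is additive, and a workload's policy travels with the workload. All class and method names are invented for illustration; this is not the Fusion API.

```python
class Array:
    """One box: locally manageable whether or not it belongs to a fleet."""
    def __init__(self, name: str):
        self.name = name
        self.policies: dict[str, str] = {}

    def set_policy(self, workload: str, policy: str) -> None:
        self.policies[workload] = policy

class Fleet:
    """A virtual namespace spanning boxes: the same operations, fleet-wide."""
    def __init__(self):
        self.members: list[Array] = []

    def join(self, array: Array) -> None:
        # Joining is additive: the box keeps working standalone too.
        self.members.append(array)

    def move_workload(self, workload: str, src: Array, dst: Array) -> None:
        # The policy travels with the workload, not with the box.
        dst.set_policy(workload, src.policies.pop(workload))

fleet = Fleet()
a, b = Array("array-a"), Array("array-b")
fleet.join(a)
fleet.join(b)
a.set_policy("oracle-prod", "gold-protection")
fleet.move_workload("oracle-prod", a, b)
print(b.policies)  # {'oracle-prod': 'gold-protection'}
```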
Okay, excellent. So that's one thing on the platform, but come on, there must be something related to generative AI going on here. I can't believe we're going to get away with the whole
presentation without it being mentioned once. Yeah, so it's interesting. You're leading the
witness here. So we're introducing a generative AI co-pilot as well. I referred to that 25 petabytes of information we collect. Well, about 10 years ago we introduced a concept called Pure1 Meta. It was an AI engine built on models we developed and trained on dozens of parameters, and we've been using it as the engine behind Pure1 to do that workload fingerprinting and the things we talked about.
But what's changed now is with community models, you can train on millions of parameters, not
dozens of parameters, and interact with natural language versus algorithmic language.
So using large language models, we've introduced that and started training large language models on our data set. So customers now can use this AI co-pilot. And we're announcing that in tech preview, because we want to ensure that we don't end up in the hallucination space or any of those kinds of crazy places, so customers that want to experiment with it will help us train and reinforce the learning around their environments. But you're going to ask questions like, what should I do to improve
my security posture? Which arrays have problems? Where should I place this workload? All of those
types of things that if we have the data and the intelligence, you can interact in a fairly
seamless way. And that helps with, I don't know about you, but I don't see too many people coming out of school with storage admin degrees nowadays, you know? It's that problem of what happened with mainframes for a while. Everyone thought mainframes were going away; they never went away. And now it's like, well, who's going to manage the mainframes, right? So there always are these cyclical things.
And I think this is a leapfrog in the technology space that may make it more accessible to have more consistency of operations management and, you know, just allow people to not worry about all the nerd knobs that we typically talked about in storage in the past.
Yeah, I look at it and think, you know, if you go back a few years, you'd probably have had to write some code to go and extract stuff
from a database. Or you'd have to go through and look at some graphs and tables and things like
that to extract the data you want to see. So you would almost need to have to be aware of what data
was being collected, how it was being collected and presented to you in order to actually understand
how to extract the right level of information. And to me, the gen AI type stuff really seems to be about getting a better interface to that data without having to understand too much of the detailed structure, so you can ask more natural language questions, not necessarily know the detail, and get back more comprehensive answers. So I think I see that as quite a positive benefit
for this sort of technology. Yeah, I think so. And, you know, what's interesting is we've had
this kind of cyclical loop where, yeah, we're introducing and using, you know, generative AI
in our product delivery, but our end customers are going through the
same journey of how do they adopt the technology. So we took a look at some of the learnings we had
and saw there were different requirements for training and inference. We typically see that when you're doing training, the millions of parameters make it a large compute problem, but typically not a large data problem. In inference, you're applying it to a larger bulk data set, where your throughput
to data ratios change. So in a traditional workload environment, we've typically seen
you scale performance and capacity based on the number of workloads you put on.
In AI, it's completely different. So we're also announcing a new Evergreen One for AI service tier that is optimized for GPU utilization when you don't know what you actually want to do. So if you're in the GPU-hoarding space, with the fear of missing out and not knowing what workloads you'll run, what's going to change, how to size it, or any of that,
then we believe that we've designed an offering that makes a lot of sense because
Evergreen One for AI is kind of akin to optimizing for an unplanned environment.
And I kind of talk about this with people like their water bill in their home.
Typically, when you have a home, the throughput of water to your home is a one-inch pipe, two-inch pipe, three-inch pipe, four-inch pipe.
When I look at my water bill,
it actually says that on my thing,
here's my connection fee for my pipe to my home.
And then here's my consumption
in terms of cubic feet of water that I've consumed
in the last whatever month or quarter,
depending on my billing cycle.
Well, we designed this Evergreen One for AI offer based on
that concept where we're like, well, you could go ahead and say, what provisioned pipe do you need
for throughput? And you can change that for training and inference and those types of things.
And then you pay a marginal rate for what you consume, very similar to that water bill design,
which as far as I can tell, that kind of model doesn't exist in traditional workloads.
It doesn't necessarily exist in hardware deployments because you can't.
Hardware deployments are fixed.
With a service, we as a vendor deploy whatever hardware we need to meet whatever requirements
under our SLAs.
So when you take an app, like previously, a lot of vendors thought about subscription
as like, okay, here's my box.
How do you want to pay for it, cash or credit?
But when you flip the game saying, no, here's my service, how do I want to deliver it?
That gives you a lot of flexibility, and that's how we're thinking about Evergreen One for AI,
where we can align to whatever our customers' needs are.
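The water-bill model described here reduces to simple arithmetic: a fixed fee for the provisioned throughput "pipe" plus a marginal rate on what you consume. A minimal sketch, with invented rates since the episode quotes no pricing:

```python
def evergreen_one_ai_style_bill(provisioned_gbps: float,
                                consumed_tib: float,
                                pipe_rate_per_gbps: float = 100.0,      # illustrative $
                                consumption_rate_per_tib: float = 20.0  # illustrative $
                                ) -> float:
    """Water-bill model: connection fee for the pipe + marginal consumption."""
    connection_fee = provisioned_gbps * pipe_rate_per_gbps
    consumption = consumed_tib * consumption_rate_per_tib
    return connection_fee + consumption

# The two vectors move independently: a training month might buy a big pipe
# with modest data, while an inference month buys a smaller pipe and more data.
print(evergreen_one_ai_style_bill(provisioned_gbps=50, consumed_tib=100))  # 7000.0
print(evergreen_one_ai_style_bill(provisioned_gbps=10, consumed_tib=400))  # 9000.0
```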
Yeah.
I see this as very cloud-like in the sense that if you look at the public cloud vendors
and the way that they deliver, they will allow you to choose a product tier, they'll allow you to choose the IOPS and the performance, and you'll literally pay for what you use on that basis. So why shouldn't I be able to do that on-prem? That doesn't seem like a hard thing to ask. I think
possibly it hasn't been detailed in that sort of way that, you know, somebody said, well, actually, there are certain types of workloads like AI that need this level of performance.
And therefore, we need to have that as a service offering.
And clearly, that's what you're doing here.
Yeah, you know, what I would state, though, like there is even a difference between the public cloud, where in the public cloud, you would choose an instance type.
You know, GP2, IO1 on Amazon,
you choose an instance type.
And that instance type has a set of characteristics that are both performance and capacity.
And you're choosing an instance type with a set of attributes and characteristics, and
you're basically scaling based on those attributes of that instance type.
That was very similar to what we were previously offering for Evergreen One.
What's unique about this is we've decoupled the performance
and capacity vectors away from that instance type box
concept.
OK.
Right?
And because of that, I think this even
offers more flexibility than you would see based on choosing
an instance type and scaling that way in a hyperscaler.
Yeah, the hyperscalers, I'd agree.
They're quite fixed in that sense.
They haven't completely disaggregated the storage performance and capacity
away from certain instance types,
and it is still quite specific.
So yeah, I see there is definitely a difference there.
So anything else on the AI side that we should discuss?
Because clearly AI is, you know, flavor of the moment for everybody.
So there must be more that's going on here.
Well, I think if you go to any conference nowadays,
everyone's trying to get Jensen to show up at their conference
and, you know, talk with NVIDIA.
So we are working with NVIDIA as well.
And we're announcing that we're working with NVIDIA on SuperPOD certification for Ethernet.
Because a lot of people have Ethernet-based networks, and we want to make sure that you can use your existing Ethernet-based networks and your storage connections and all of that, in a way that still gets you the throughput you need for GPU utilization.
So we're working with NVIDIA and announcing SuperPOD certification planned for the back half of this year. We're going through it with NVIDIA now.
They're committing with us that, you know, we can do that with them for certifying our products for
SuperPOD. And we're announcing that in AI as well. I think there must be four or five Jensens, because I just can't see how he manages to turn up to every event otherwise.
I think he's just cloned himself and there's just lots of different
versions of him coming around. Yeah, I don't know if you remember the old Multiplicity movie, right, where I think it was Michael Keaton who cloned himself. So yeah, it does feel a little bit like Jensen is everywhere. Absolutely. And within this, how are
customers protecting their data? Because it seems like we're starting to get into using things like SuperPOD and those sorts of technologies. In fact, I was just looking at something today that was talking about more exposures to ransomware and stuff like that. This starts to make me wonder whether we need a greater degree of security around it.
Well, I think there's kind of two things going on in that space.
One is AI is not new.
It's been around for a while, but it was primarily in research labs.
It wasn't, you know, mainstream enterprise.
It was in the high performance computing arena for a really long time.
It was expensive.
So it's become more accessible now. And large language model development has made it easy for people to consume these models without a large upfront cost. But I think we're on the early
side of people thinking about how to secure their AI workloads. Because in the research labs,
you secure your perimeter. You just assume no one could get into your lab and you're done. In an enterprise application, it's not that clean,
right? So you have to rethink your deployment model. And interestingly enough, as we were using and deploying our own AI co-pilot, we saw this idea of needing a secure application workspace, where when you're deploying a data agent, you typically do it in a container.
And you need to look at that data agent and the slice of data you're making available to it, and have a secure tenant space all the way through your storage where you can isolate that environment.
So we've linked our Portworx technology with secure multi-tenancy and delegated security to create this concept of a secure application workspace for LLM deployments. That will allow customers to actually deploy LLMs in a secure way and isolate the data on a shared storage environment to an isolated tenant per data agent, where you manage the policies all from the data agent container in Portworx.
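To picture the "one isolated tenant per data agent" arrangement, here is a hypothetical sketch of the enforcement boundary; the classes and volume names are invented for illustration and are not the Portworx API.

```python
from dataclasses import dataclass, field

@dataclass
class StorageTenant:
    """Isolated slice of shared storage, scoped to a single data agent."""
    name: str
    allowed_volumes: set[str] = field(default_factory=set)

@dataclass
class DataAgent:
    """Containerized LLM data agent bound to exactly one tenant."""
    container: str
    tenant: StorageTenant

    def read(self, volume: str) -> str:
        # Delegated security: access is enforced at the tenant boundary,
        # so the agent only ever sees its own slice of the shared storage.
        if volume not in self.tenant.allowed_volumes:
            raise PermissionError(f"{self.container} may not access {volume}")
        return f"data from {volume}"

tenant = StorageTenant("finance-rag", {"vol-finance-docs"})
agent = DataAgent("rag-agent-1", tenant)
print(agent.read("vol-finance-docs"))   # allowed: inside the tenant
# agent.read("vol-hr-records")          # would raise PermissionError
```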
Okay. I think this is very interesting. First of all, from that angle, you know, you
explained that really well. And I think that's really an important angle. The other one that
I look at, and I thought about this as well, is the whole sustainability and deployment of
technology. Because so many times we deploy new infrastructure to try and solve a problem. So
somebody will say, let's build out this AI model.
And suddenly, oh, let's put a new box in and another box and another box.
And before you know it, you've got many boxes.
And we literally just talked about the need to get away from thinking about boxes
and to think more about application workloads and data.
So this, to me, seems like a requirement to help you get past that "deploy many more boxes all the time" scenario. Yeah. And it's interesting. I think I read an article from you
a while back where you were talking about, you know, the need for shared storage versus
disaggregated and like the pendulum that goes between disaggregated and shared. Right. And I
think, as you mentioned in that article, there's been this pendulum for a long time around, okay, do we isolate? How do we scale? What's the need for shared? And when you share, how do you virtually isolate? That thing has been floating around in our industry for a long time. But the best of both worlds is, you know, you have logical
separation versus physical separation, right?
And as long as you can enforce that, I think you can achieve the best of both worlds without
losing the efficiency of shared, which is space power efficiency, cooling, no stranded
capacity, no stranded performance, et cetera.
So I think this approach allows running infrastructure very efficiently without isolating the workload
saying, well, those are my lab arrays for high performance computing in the corner. Exactly. And I'd also admit that's what ends up happening. And it just seems that this is a much more logical way of actually getting around that,
which is good to see. I just want to touch on something you led us into earlier, actually, Prakash. And that was the whole cyber recovery and resilience piece
and that side of it. And I'd like to go on and talk
about that now because you sort of touched on it, but I think it's a good opportunity to go back and
talk about what's new there. Yeah. You know, this is one of the areas where, when I met with you in London, we talked about cyber resiliency as well, with our data protection assessment at that time, right, and on our last podcast. I think every release you'll probably hear us talking about cyber resiliency, because it's a moving target. And it's moving faster now, I think because of AI. And I hate the AI wash, but I do
think I am genuinely scared of AI in the hands of the bad guys. Previously, it was like, oh, what are the attack patterns? Okay, let me look at data reduction. Let me see if someone's encrypted my data. Well, now a malware signature variant might be just encrypting an isolated block without encrypting the whole data set, right?
So the attack patterns and signature variants become exponential with AI very easily.
It's actually very easy to do that.
So we separated out how we want to approach, you know, protecting an environment.
And I think the framework that NIST provides is a pretty good way of thinking about how to secure your landscape. They talk about their six functions, previously five: how do you identify issues, protect against them, detect, respond, recover, and govern. We have capabilities along all of those areas. Now what's new in this
announcement across cyber resiliency is one, we're introducing a security assessment for
operational security. This is not about detecting signature variants. It's about detecting human configuration issues: my front door is open, my ports are open, I haven't rotated passwords, those types of things. Because the attack patterns are making it easy to run a large scan over a large surface area to look for open front doors and for passwords that are on the dark web. All those things become easier. Yeah.
And so, you know, we're introducing that security assessment
and that's about operational security
to help people identify.
And we're providing even with Evergreen One,
a resilience SLA, where our customer success managers
will look into the environment
based on that security assessment
and guarantee that we'll fix your configuration issues
for you and we'll give you a validated security timestamped document for your CISO
saying these are the things we detected, recovered, and solved on your behalf.
So customers can ensure that they're getting the best practices
from the vendor's discovery around best ways to secure their storage.
So that's kind of the security and resilience bit.
And we've trained our AI co-pilot. So if you want to know what to do, it's like, oh, my security assessment score
is 2.4. What can I do to improve it? How do I benchmark across other people? You can even ask.
And it's trained off our knowledge base, which will even go down to run this command on this
array to set this config file, right? The co-pilot right now is recommending; it's not going in and starting to do things on your behalf. We're still building trust in the co-pilot. But, you know, it's a recommendation all the way down to, run this command.
You know, it's pretty detailed in terms of how we trained it. And then, you know, as you get into,
okay, yeah, we've done that, but what about these signature variants and attack patterns like
denial of service and exfiltration? So we've now introduced a new anomaly detection capability
that's performance-based. So we look at the management commands that exist. And these are
backup commands, these are coming from these applications, et cetera. And we build a profile
of what performance should look like from a latency and bandwidth standpoint.
And we can then see anomalous things like denial of service, where you're in a high-latency state, or data exfiltration, where you see high bandwidth off-cycle.
Now, using AI, we can get pretty sophisticated in our detection patterns, beyond, hey, my data is encrypted and my DRR has changed. Right. We've had that in the past. Now we've made that broader in terms of new types of anomalies, like data exfiltration and denial of service.
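In spirit, the performance-based detection described here is baseline-plus-outlier analysis: learn what latency and bandwidth normally look like for a workload, then flag sustained deviations. A toy sketch using simple z-scores (the production system is presumably far more sophisticated than this):

```python
import statistics

def detect_anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Flag indexes whose value sits more than `threshold` sigmas off baseline.

    For bandwidth, a high outlier on reads can suggest exfiltration;
    for latency, a sustained high outlier can suggest denial of service.
    """
    baseline = samples[: len(samples) // 2]  # learn from the earlier half
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(samples)
            if abs(v - mean) / stdev > threshold]

# Read bandwidth in MB/s: a quiet profile, then a suspicious bulk-read spike.
bandwidth = [120, 115, 130, 118, 125, 122, 119, 121, 950, 980]
print(detect_anomalies(bandwidth))  # -> [8, 9]
```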
I was just going to say there's a lot in there, actually. But you obviously had one extra point. Finish that off first and then I'll go back and we'll talk about some of these. Yeah, no. And look, the last set of capabilities we're introducing in the cyber recovery area are, okay, we've put all this identification, protection, and detection capability in place, but what if something happens?
Well, we're leaning into just providing an SLA to help you get up and running as fast as possible.
Previously, we had a ransomware SLA that guaranteed a clean room array, allowed you to keep the
previous array for forensics
purposes, and gave you guaranteed professional services resources to move your data and bring you
back up and running from your last known good copy. We're now extending that capability to
disaster recovery as well, not just ransomware. So customers can get guaranteed SLAs on the time window to get back up and running, from a response and recovery standpoint.
And then, you know, with Fusion, as well as, you know, our self-service upgrade capability
we've had in the past, we're introducing governance capability where we can look at
compliance and drift to ensure you're provisioning the same way and setting policies the same way.
Because the more variance you have in operations, the harder your pen tests and STRIDE tests are to run to secure your operations. So if you can streamline your operations across compliance and governance, then you create a smaller attack surface from an operational deviation standpoint. And if something happens, you can mass remediate. Like Log4j: we can do fleet-wide upgrades now versus box-wide upgrades, where you can push updates to the fleet in one bulk operation as well.
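Compliance-and-drift checking of the kind described amounts to diffing every array's configuration against a golden baseline and reporting (or mass-remediating) the deviations. A hedged sketch; the settings and array names are invented, just to show the shape of the idea:

```python
GOLDEN_CONFIG = {
    "mgmt_ports_open": False,
    "password_rotation_days": 90,
    "snapshot_policy": "gold-protection",
}

FLEET = {
    "array-a": {"mgmt_ports_open": False, "password_rotation_days": 90,
                "snapshot_policy": "gold-protection"},
    "array-b": {"mgmt_ports_open": True,  "password_rotation_days": 365,
                "snapshot_policy": "gold-protection"},
}

def drift_report(fleet: dict, golden: dict) -> dict:
    """Map each array to the settings that deviate from the baseline."""
    return {
        name: {k: v for k, v in cfg.items() if v != golden.get(k)}
        for name, cfg in fleet.items()
        if any(v != golden.get(k) for k, v in cfg.items())
    }

print(drift_report(FLEET, GOLDEN_CONFIG))
# {'array-b': {'mgmt_ports_open': True, 'password_rotation_days': 365}}
```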
Yeah, so there's a huge amount in there, and I think this is an area that's perhaps good for a separate conversation, because there is so much in this. But the things I would highlight here are, first of all, the fact that this is a never-ending situation, what I've called a Darwinian problem that will never be solved. It's like cat and mouse, isn't it? One side improves their techniques, and then you have to go and write your defenses against it, and it will be a constant thing. But I think having some of those things you've just talked about, for instance that ability to have somebody else come in and say, here's what your best practices would be and here's where the environment is.
And I would hope that obviously you'll change your practices over time and you'll tighten your security.
And I'm sure then it means that I can just go into your AI co-pilot and say, run that assessment again.
Is it current?
Well, no, let's do it again.
And all I have to do is just ask that one question.
So now I don't have to go trawl through manuals and look at the latest set of requirements. I can just say, co-pilot, go and double-check that everything looks good. That's where I'd be really happy to get to.
Yeah. And look, at some point in the future, we hope we can get to where, instead of it telling you, you can trust it enough to subscribe and have it save and correct your environment for you, right? I'd say we're in the early innings for that, but I could see a world where that becomes possible in the next horizon or so. Yeah, I mean, not everybody trusted vMotion to move workloads around dynamically from day one, did they? But you leave that technology, you try it out, you see what you think, and after a bit of time you get used to it, and then you decide you're happy with it. So those sorts of things, definitely when you come to talk about skills, I think are really important, because it means that you can actually, A, be quicker.
You can be more responsive, but also you don't have to have that deep dive skill to go and double check every single setting and parameter.
And if you get something wrong or you miss something, that could be the difference between being hacked and not being hacked.
You don't want to get to that point.
So having something else that does that for you,
I think is really important.
Yeah, it's interesting, because we're a big believer in certification when you're training new people, right? We have bootcamps to train our new sales guys. For support engineers, we have certification and training programs, et cetera.
I imagine a space where, when you have this type of capability and your new hires are in there doing things, your certification can be not pre-job but during the job, where you're like, okay, you've gone 90 days actually doing all the right things on a daily basis. You could probably even start looking at which employees are violating best practices from a security and configuration drift standpoint.
Right.
And then use that as a way of achieving levels of certification in your staffing.
So I think there's a lot of interesting places where this could go.
Yeah, definitely.
Definitely.
Okay.
So that sort of talks about some of the cyber stuff.
Is there anything else we haven't covered that is going to get announced?
I think there must be something around storage as a service here. There must be something in that section, because that's always an area where you've released new features over the years.
Yeah, look, just like cyber, that's an area where,
you know, it is early innings
and we're continuing to innovate in that area as well.
So you'll probably almost every conference or release
hear a lot of innovation in that area.
This conference is no different.
So, you know, obviously our Evergreen One for AI service tier was one innovation in that area where we're getting better.
But we break up what storage as a service is in terms of three areas. One is how you buy, right? Up front versus cash or credit, buy or rent, that kind of model, consume. The second piece is how you operate, right? What are your operating SLAs that give you efficiency, where
is how you operate, right? Like what are your operating SLAs that give you efficiency, where
you're trusting the vendor to do something on your behalf. And the third area is your experience,
right? What are we doing to improve your experience? And we think of this like Netflix.
How do we make this the Netflix of storage where everything's recommended for you in product where you don't have to think? So in the area of the cloud economics, how you buy,
we've introduced this AI-powered reserve expansion recommendation. So in the product,
it'll tell you the last three months you've been on demand. If you increase your reserve commit to
this, you could save this much money. Or if you rebalance your reserve between these sites, this is what it would be.
And it's giving you that recommendation and allowing you, with our workload planner and simulator, to not just trust us: you can click on that recommendation and simulate what it would be in your environment based on your actual usage.
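The reserve-expansion recommendation is, at heart, a break-even calculation over your own usage history: at what commit level would the last few months have cost least? A sketch with invented rates (on-demand priced above committed reserve, as is typical); the function name and numbers are hypothetical:

```python
def reserve_recommendation(monthly_usage_tib: list[float],
                           current_reserve_tib: float,
                           reserve_rate: float = 15.0,    # illustrative $/TiB/month
                           on_demand_rate: float = 25.0   # illustrative $/TiB/month
                           ) -> float:
    """Suggest the reserve level that would have minimized recent spend."""
    def cost(reserve: float) -> float:
        # Each month: pay for the full reserve, plus on-demand for any overage.
        return sum(reserve * reserve_rate
                   + max(0.0, used - reserve) * on_demand_rate
                   for used in monthly_usage_tib)

    candidates = sorted(set(monthly_usage_tib) | {current_reserve_tib})
    best = min(candidates, key=cost)
    savings = cost(current_reserve_tib) - cost(best)
    print(f"recommend reserve {best} TiB (saves ${savings:,.2f} over the period)")
    return best

reserve_recommendation([120, 130, 125], current_reserve_tib=100)
# recommend reserve 125 TiB (saves $625.00 over the period)
```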
In the area of the operating SLAs, one of the areas that we see happening a lot right now is people looking to consolidate data centers, or they end up with stranded capacity across sites, where you might have more data in Singapore and be out of space, and less in the UK where you have excess capacity.
So that concept in like AWS, Azure, or Google is you set up a reserve commit per site because you're deploying or locking
something in to a physical site. And we designed Evergreen One that same way where your reserve
commit is site-based. So how do you deal with the stranded capacity or this data center consolidation
without being locked into this three-year reserve commit type world for the cost savings? So we're
trying to create that additional flexibility. So we're introducing this new site rebalance SLA where our customers can actually once
annually go ahead and alter their reserves at different sites or consolidate all of their
reserves to one site.
And that's inclusive of Pure moving all of the data and the workloads, because it's our
gear, right?
We're managing the environment.
So as a customer does that, we'll take on the expense of going ahead and moving the data and ensuring that we can do
that consolidation and provide the customers the flexibility based on their goals.
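And the site-rebalance idea in miniature: compare committed reserve to the actual footprint per site, and shift commitment from where capacity is stranded to where it's short. Hypothetical names and numbers again, purely to illustrate the arithmetic:

```python
def rebalance_reserves(sites: dict[str, dict[str, float]]) -> dict[str, float]:
    """Redistribute the total reserve in proportion to each site's actual usage."""
    total_reserve = sum(s["reserve_tib"] for s in sites.values())
    total_usage = sum(s["used_tib"] for s in sites.values())
    return {name: round(total_reserve * s["used_tib"] / total_usage, 1)
            for name, s in sites.items()}

# Singapore is out of space; the UK has stranded capacity.
sites = {
    "singapore": {"reserve_tib": 100.0, "used_tib": 160.0},
    "uk":        {"reserve_tib": 100.0, "used_tib": 40.0},
}
print(rebalance_reserves(sites))  # {'singapore': 160.0, 'uk': 40.0}
```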
And then lastly, the continuous improvement kind of category of Evergreen still applies to Evergreen
One. So with our unified block and file tiers, the Ultra and Premium tiers, we're
increasing our performance SLAs by 50%. So that's just something we're continuing to do where,
hey, your SLAs are just getting better over time. Don't worry about it. It's not like the
hyperscalers where you chose an instance type, and if you want more performance, go to a different
instance type. It's fascinating that even the hyperscalers have missed the concept of Evergreen,
right? So where you get stranded on an instance type, well, even in our service,
you're getting the continuous benefits of Evergreen where it gets better over time.
So I think these sorts of updates are interesting, because when we go back to talking about platform, we talk about getting away from physical hardware and trying not to be hardware-focused all the time. These sorts of features allow me to do that.
So I might think, well, OK, I was running a workload in one data center,
but I actually want to run in a different one now because from a sustainability perspective
or maybe latency or other reasons, suddenly now that new data center or that location is going to be much more important.
And that might be business driven, it might be for other reasons. To not have that ability to rebalance the technology and say, actually, let's move all of that storage there, rather than leaving it where it is and having to double pay, becomes a real impact on the business. Or I have to find a good reason to use the old stuff, and then there's a lot of messing about to try and make that work.
Knowing I've got that flexibility means I can actually step up a level and start thinking, actually, let's just move that workload, because that's got business benefits and I don't have to worry about how it affects my storage. So those are really important features when people are in dynamic environments, and it's those little enhancements you make each time that are really useful for people. Look, our due north is SaaS. We think of ourselves as a SaaS.
Why is this different than your Salesforce instance? You don't think about the capacity
you need in your Salesforce instance, and you constantly get new features like Einstein and
other things in your offering, right? So due north for us isn't the cash or credit world.
Due north for us is how do we continue to deliver SaaS?
And the evergreen architecture as part of our platform is the baseline capability.
If you don't have it, you don't have the ability to non-disruptively continue to innovate.
I've seen other offerings where at time of renewal, they're like, yeah, and we'll do a data migration as well to ensure we can drop in hardware just so you can keep your subscription current. I'm like, that's
the anti-SaaS. So without the Evergreen architecture, you're kind of toast. Then the next
element is, okay, you need this management of a fleet level. So you're abstracting away the box.
And Fusion provides a lot of that capability. You need a single operating system with APIs and management and policies, so you're not saying, hey, if I had this version of this... There are some vendors that have four or five or seven operating systems and APIs, and when you're trying to manage across that environment, you can't have continuity and use any of those things in a consistent platform way. So that single operating system approach provides the ability for us to say, for one of our file service levels, we can choose whether we want to deploy FlashArray or FlashBlade. It doesn't matter. It depends on the performance SLA. It's not that you're getting a box on subscription; you're getting a performance SLA. Right. And then finally, delivering these SLA outcomes will, I think, expand as people trust us
more. Absolutely. So lots of things we talked about there. So we've talked about platform
changes, we've talked about AI being integrated into your platform, but also AI support for the platform, as in the AI storage as a service through Evergreen One.
A whole lot of cybersecurity features and obviously some enhancements that allow us to be more efficient.
When will we see these come in?
Are they going to, will this be immediate?
Will it be later this year?
So when can people expect the sort of delivery of this?
We'll go through the capabilities. The Fusion capabilities and the SuperPOD capabilities are planned for the back half of this year. Okay. The AI co-pilot is in
tech preview and all the other capabilities we're planning on being GA within the month of Accelerate.
Okay, great. So an awful lot for people to go there and dig into. And obviously having an event
where you can go and do that, hopefully people are attending and they'll be able to go and dig into this
and find out more details.
I think it's nice to have a discussion
where AI is both used and supported, shall we say,
without it being all about, I'm just going to sell you AI.
So I'm really pleased that we've had that discussion.
Well, it's like saying I'm going to sell you Web 2.0.
Yeah.
Like, you know what I mean?
What does that mean, really?
Like, AI is a capability.
It's not something I can sell you.
Yes.
And, you know, I expect you to have a story around it.
But the fact that we've got some actual concrete use cases here I think are really important.
So that's great for me.
As usual, I'll put links to all of these new features and the different technologies in the show notes.
And I tell you what I want to do. I think I need to go in, try and map each time we get some of these against things like
efficiency and performance and scalability, because now I'm starting to see them sort of
fall into different categories and maybe I need to start tagging them. I think that's a job for me to
do. But, you know, for now, Prakash, enjoy the rest of the event and thanks very much for spending
the time to take me through it.
Yeah, thanks.
I appreciate connecting with you as always, man.
You've been listening to Storage Unpacked. For show notes and more, subscribe at storageunpacked.com.
Follow us on Twitter at Storage Unpacked or join our LinkedIn group by searching for Storage Unpacked Podcast.
You can find us on all good podcatchers,
including Apple Podcasts, Google Podcasts, and Spotify. Thanks for listening.