StorageReview.com Podcast #114: Dell Alpine Delivered – PowerFlex on AWS
Episode Date: December 20, 2022
Brian recently caught up with Dell's Michael Richtberg to get the latest on PowerFlex OS…
Transcript
Hey everyone, Brian Beeler here, and thanks for checking out the podcast.
We've got an interesting conversation today coming up with Dell Technologies.
They've taken their first storage system and ported it to the cloud.
And I'm sure Michael's cringing at me already for calling it a storage system and porting it.
But basically, PowerFlex OS is now available in AWS
as part of Dell's Project Alpine,
which is making all of their storage system software
available in the cloud.
So Michael, what have I done wrong to bristle you already?
Yeah, that's all right.
Brian, thanks for having me here.
I appreciate you giving us some time.
The role I play here for
our team is on the product management side for our primary storage team. I lead advanced planning
and product strategy initiatives that are often aligned with the things that are emerging for
us in this industry and for the Dell primary storage products. PowerFlex has its genesis in being, first of all, software defined
from the very beginning. Dell and EMC, through the acquisition of EMC, acquired, I'm sorry, ScaleIO. ScaleIO is the genesis of what we are talking about here. So this has been through
many, many iterations of maturity, and the genesis of this product has always been software defined.
It's always been cloud enabled.
So this is just about basically enabling us to provide a validated deployment mechanism
that happens to be on the cloud.
They offer physical x86 platforms, and this runs on virtually any kind of x86 infrastructure
without specialized hardware required.
Yeah, I mean, we saw ScaleIO, and you and I talked in Vegas at AWS's event.
We saw ScaleIO near the very beginning from them.
And it was really great.
One of the best software defined solutions we had ever seen.
And still today, I think it's one of the most comprehensive in that you can run it
in a variety of different ways. You can run it in a hyper-converged, and they were doing that with
four-node 2U systems before hyper-convergence was even really popular and in vogue. And then
it can also be run just as a standard storage array as PowerFlex. So I think it's one of your neater offerings, but it also makes sense.
I mean, we talked to Caitlin back at Dell Tech World about what you guys were doing
with Alpine.
Flex is, of all of them, probably the easiest since it was already disaggregated from hardware.
It was already standalone.
You guys were running it on PowerEdge servers and other things,
but probably the easiest to go take that and put it in AWS.
How much work was required on the code base,
if any, to make that happen?
The remarkable part of this was that we didn't have to do
any porting to get this to work in a public cloud environment.
AWS being a market leader in this platform of infrastructure as a service
offers the necessary ingredients that we fundamentally need.
We need x86 processors,
we need a proper network,
and in this case, we have
data store options that can leverage either the local instance stores,
where we can federate the necessary NVMe media
together to form the cluster or we can use EBS as a data store as well.
In this case, there was no code change necessary for us to make that happen.
So the version that we ship to our customers that takes the form of an engineered system
that runs on a PowerEdge server equally operates on the exact same code base
without any changes to it on AWS.
We use instance stores and/or EBS for the construction of that.
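As a rough illustration of the two data-store options described here, a minimal boto3 sketch follows: discovering EC2 instance types that expose local NVMe instance storage, and provisioning an EBS volume as the alternative data store. The region, instance types, sizes, and IOPS figures are placeholder assumptions, not Dell's validated PowerFlex configuration.

```python
# Illustrative only: explore the two AWS data-store options discussed above.
# Instance types, sizes, and IOPS are placeholder assumptions, not Dell's
# validated PowerFlex configuration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Option 1: find instance types that expose local NVMe instance storage,
# the kind of media a software-defined cluster could federate.
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
)
for page in pages:
    for itype in page["InstanceTypes"]:
        info = itype.get("InstanceStorageInfo", {})
        if info.get("NvmeSupport") in ("required", "supported"):
            print(itype["InstanceType"], info.get("TotalSizeInGB"), "GB local NVMe")

# Option 2: provision an EBS volume (io2 shown here) as an alternative data store.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB, placeholder
    VolumeType="io2",
    Iops=16000,          # placeholder provisioned IOPS
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "purpose", "Value": "powerflex-data-store-example"}],
    }],
)
print("Created EBS volume:", volume["VolumeId"])
```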
We can grow that cluster anywhere from three to over 512 nodes.
We scale to over 2,000 compute nodes that interface with that in a two-layer, or what you would call in your
description a disaggregated, form. The two of those ingredients working together produce
millions of IOPS and extremely high throughput, which you can imagine might be very necessary for workloads
that need to go from potentially just starting off with maybe a dev or test environment to a
full-blown production deployment.
Being able, though, to asymmetrically scale either the storage or the compute has been extremely important to our customers on-prem and, in this case, now in the cloud.
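To make the asymmetric, two-layer scaling idea concrete, here is a small hypothetical helper that checks a proposed storage/compute split against the rough limits quoted in this conversation; the constants come from the discussion above, not from Dell documentation.

```python
# Hypothetical helper: sanity-check a two-layer (disaggregated) cluster plan
# against the rough limits quoted in this conversation. The constants come
# from the discussion above, not from Dell documentation.
MIN_STORAGE_NODES = 3      # "anywhere from three..."
MAX_STORAGE_NODES = 512    # "...to over 512 nodes" (treated here as a cap)
MAX_COMPUTE_NODES = 2000   # "over 2,000 compute nodes" (treated here as a cap)

def validate_two_layer_plan(storage_nodes: int, compute_nodes: int) -> list[str]:
    """Return a list of problems with a proposed storage/compute split."""
    problems = []
    if storage_nodes < MIN_STORAGE_NODES:
        problems.append(f"need at least {MIN_STORAGE_NODES} storage nodes")
    if storage_nodes > MAX_STORAGE_NODES:
        problems.append(f"storage layer capped at ~{MAX_STORAGE_NODES} nodes here")
    if compute_nodes > MAX_COMPUTE_NODES:
        problems.append(f"compute layer capped at ~{MAX_COMPUTE_NODES} nodes here")
    return problems

# Asymmetric scaling: grow only the layer that needs it.
print(validate_two_layer_plan(storage_nodes=3, compute_nodes=40))    # []
print(validate_two_layer_plan(storage_nodes=2, compute_nodes=2500))  # two problems
```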
So I don't want to drive past this notion of how storage is allocated, because if we think about Flex on PowerEdge, it will typically be, I don't know, say it's just a 2U, 24-drive-based server.
Each one of those SSDs adds incremental performance benefit to the system as you load them into the system
and generate the performance gain from having more drives available to you, more nodes, then more drives, etc.
In the cloud case, you've done something that's pretty interesting here,
I think, as we've talked through it prior in how you collect and aggregate the storage. So you said
EBS or EC2 storage that's NVMe attached local, but talk through that in a little more detail, if you would, about how customers can leverage Amazon's existing resources
to let you still get the performance
and or the economics you want out of Flex in AWS.
Yeah, exactly.
So if you take the analogy of a PowerEdge-based server
where the NVMe media is installed on the server,
as we add more and more nodes, you add more and more capacity.
With that, you incrementally add more IOPS, more throughput.
What you don't add is latency.
So these are sub millisecond
mission critical workload types of environments.
The same exact environments are there on the AWS platform.
The EC2 instances that are available with NVMe media
are clustered by us in exactly the same way.
We federate all of the capacity.
You're adding more IOPS, you're adding more throughput,
you're not adding latency to this.
One of the differentiating elements of doing that though,
isn't just being able to cluster the nodes,
but to be able to use something in PowerFlex
we call fault set architectures.
On-prem you would
have been looking at this as I want to make sure that I have a means of protecting my rack if I do
lifecycle management and I need to bring a rack down. I have the rest of the racks in my data
center to take over for what's going on when I'm doing that administrative event. Or it might be
that you actually had something unintentional,
such as a failure, something that rarely occurs,
but in the event that you had a rack outage,
the fault set architecture takes over for all of the nodes
and all of the drives.
And think about how dramatic that has to be
in terms of being able to keep up also with the performance.
And in a cloud environment,
that fault set architecture translates
to an availability zone.
So with three or more availability zones,
we can stretch essentially the cluster of all of these nodes
across that multiple AZ environment.
And in doing so, what we have not done
is create a two or three X replication,
like you might see some alternatives in the marketplace do.
What we're able to do is still federate all of those nodes in all of the AZs and still
provide a federation of all of the capacity that's presented by all the nodes in all of
those AZs and federate the performance and throughput with low latency.
So there's an example of something that we already had in the product, already was designed
because the software defined capabilities allowed us to take advantage of this in the cloud.
So again, without having to change the core function of the product, the hero functionality
of this fault set architecture mapped extraordinarily well into the concept of a multi AZ implementation
in a cloud environment.
So for those that don't know, "availability zone" is AWS parlance.
Is that the same as like US East and West?
Or are there multiple availability zones
within a physical location?
How does that work?
Yeah, no, they're not regional.
So what you just referred to there would be regional.
So within a region though,
there are multiple availability zones.
They are in proximity to one another, typically within 30 miles of each other in AWS's case.
And what we're able to do is take advantage of their very high-speed connections between those AZs. With the advantage of how PowerFlex works, taking advantage of all of those nodes that are across those availability zones
and their low-latency, high-bandwidth connections between them,
we're able to take advantage of all of the things I mentioned about those fault set architectures.
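A minimal sketch of that fault-set-to-AZ mapping, using standard boto3 calls to spread storage nodes round-robin across three Availability Zones; the AMI, instance type, and node count are placeholders, and Dell's own deployment automation handles the real layout.

```python
# Illustrative sketch of the fault-set idea mapped to AZs: spread storage
# nodes evenly across three Availability Zones so the loss of one AZ behaves
# like the loss of one fault set. AMI, instance type, and node count are
# placeholders; the real deployment is handled by Dell's automation.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

azs = [z["ZoneName"] for z in ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)["AvailabilityZones"]][:3]

NODE_COUNT = 9  # e.g. three storage nodes per AZ, as in the example later in the call

for i in range(NODE_COUNT):
    az = azs[i % len(azs)]          # round-robin placement across the AZs
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="i3en.6xlarge",       # placeholder NVMe-backed type
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": az},
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "powerflex-storage-node-example"},
                     {"Key": "fault-set", "Value": az}],
        }],
    )
```

Tagging each node with its AZ mirrors how a fault set groups nodes that are expected to fail, or be taken down for maintenance, together.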
Now, with that, though, we do have an asynchronous replication product functionality in the offering.
And what that allows you to do is incorporate a BCDR, a business
continuity disaster recovery architecture, which could take the form of one region to another,
where we have up to a 15-second RPO. So again, world-class capabilities in terms of being able
to do something that should a disaster event occur, you have an ability to quickly get back
to production workloads.
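As a quick back-of-the-envelope illustration of what a 15-second RPO means for exposure, assuming an illustrative (not measured) change rate:

```python
# Worked example of what a 15-second RPO means for data exposure.
# The change rate is an assumed illustrative figure, not a benchmark.
rpo_seconds = 15
write_rate_mb_per_s = 200          # assumed steady change rate at the source

worst_case_loss_mb = rpo_seconds * write_rate_mb_per_s
print(f"Worst-case unreplicated data at failover: ~{worst_case_loss_mb} MB")
# -> Worst-case unreplicated data at failover: ~3000 MB
```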
And I failed to mention earlier
that with that AZ protection mechanism,
should an AZ outage occur,
this is without disruption to the workload
or what the workload sees as available performance.
So we may detect that within four seconds or so
that there is some disruption
that's occurred. All of the automation in the product takes a look at what's going on from those
AZ or node outages, takes over for re-protecting it and rebuilding that without disruption to
your production environment. In the case of the replication, I just want to add that one of the options might be from
on-prem to cloud as a mechanism to use our replication as well.
So let's stick on this AZ thing for a minute because I think the resiliency benefits here
are pretty important.
If you wanted that level of resiliency on-prem, how much infrastructure would be required for that? Because at a certain point,
it would become untenable, wouldn't it? I think what you're asking is you're dealing
with something that's going to end up with a multi-rack configuration, which is not an uncommon
deployment scale that our customers are using for PowerFlex.
But the multi-AZ, though, gets you some physical separation,
which may be harder to do on-prem, right?
Yeah. Certainly, when we look at a three-AZ deployment,
having three nodes in each of three different AZs, and a total of,
in this case, nine just to use that example,
it's a very reasonable thing to separate in the AWS environment.
Would you build a rack one at a time
with only three compute nodes in it
and physically separate them?
That's probably not as likely as you would fill up a rack
and have multiple racks in a data center,
in which case using an AZ would be for a large deployment.
And that's just because of what it is already capable of doing.
Dell Digital here, our own infrastructure,
is heavily deployed on it. Any major database
that you can imagine today that's on the planet
is used by Dell for our own infrastructure.
And our primary deployment for mission-critical applications
is using PowerFlex.
So this is a good point. And probably we should have done this a little bit at the beginning.
You know, sometimes I get locked in and you and I have talked already quite a bit. So we know each
other. But for those that don't know PowerFlex, let's take one giant step back and talk about
how PowerFlex is differentiated in the
portfolio that has other unstructured offerings. So maybe just look at, well, I don't know,
you do it how you want to, but talk to us a little bit about where PowerFlex sits within
the broader Dell Storage portfolio. Sure. So if you look at where we are servicing multiple primary storage segments,
you have PowerStore, and our product there has been market-leading
in servicing a lot of the customers who are looking for our mission-critical mid-market storage products.
PowerStore is phenomenal in its eclectic ability to do block and file capabilities.
Very, very easy for customers to implement.
It has been a logical progression as we've consolidated
our portfolio to make things
simpler for our customers to understand.
PowerMax for our ultra high-end customers.
PowerMax being the ultra option for people who are looking for
bulletproof mechanisms for
relying on mission-critical applications.
PowerFlex is here to offer our customers some combination of those, where we're able to help
people who might want to modularly scale. As I say, we start with three nodes on a cloud
infrastructure. On-prem, we typically start with somewhere around four, but we can grow that
implementation as customers need and do it very
modularly. So if there's an unpredictable amount of capacity that's needed, then a great option here
is for people to be able to add more storage modularly. Now, because this is a product that's
capable of full stack implementation, when we deliver an on-prem infrastructure, our engineered systems
are inclusive of everything, right? So the nodes that deliver the storage, the nodes that are
delivering the compute, the networking, all come together as an ingredient set that is oriented
towards what we do when we go through our customer conversations. And that is, what's your
problem that you want to solve? What are the implementation objectives? What are the applications that you're trying to deliver?
We spend a tremendous amount of our roadmap time on working out how to best architect for
certain workloads. We've got a tremendous arsenal of white papers that talk about what the results
will be if you're looking for any type of application.
Containers, virtual machines, bare metal.
Our ability to mix and match those is a tremendously valuable part of this
because it makes it a bit future-proof.
Not knowing how much of your deployment
might change from being VM-based
to potentially bare-metal containers,
and that's a trend that is taking place here.
It might be something where you're not quite sure how fast or how much you're going to need.
But a non-disruptive option is to use something like PowerFlex. You've got an ability to run the
workloads on the compute side. You've got the storage to keep up with whatever that is.
And to do those things asymmetrically means that you're not going to over-provision one or the
other. We can, as you mentioned earlier at your introduction, do hyperconverged where it might make sense initially if somebody was starting off knowing what their ratio is of compute to storage.
Great place to start. If I want to add more compute, great,
we can add more nodes that do just the compute. If I need more storage,
we can add more nodes to add more storage without disrupting anything that they already invested in. So we have the ability to combine hyperconverged and two-layer, which we call
two-layer because it has separate compute and storage. They're not siloed so that you have to start one
over again. All of these are mixable and matchable.
You hit on a couple things. Draw me a parallel or
a differentiation real quick, in the unstructured world, between what you can do with Flex versus PowerScale.
Sure. In PowerScale, you're doing something similar, which is that you're adding file services as a node-by-node expansion needs to occur.
Very, very much so an important part of what people might be more familiar with as Isilon. PowerScale is now that brand name for Isilon and what we have done
here is essentially that for block storage. Now what we have done
recently in version 4.0 of PowerFlex is we have added transactional
NAS as a controller mechanism in this as well. So using the power of that extensible block storage as a back end,
we can add NAS controllers to this that scale from 2 to 12 nodes.
That adds the capability for this to be a multi-element product with the scalability
I mentioned: 512 nodes of block storage can be backed by these 2 to 12 nodes of NAS controller functionality as well.
Okay. Okay, that's helpful.
I mean, you've got a lot there, and that's even a consolidated portfolio from where you guys were, I don't know, 24 months ago, right?
So that's pretty good.
A lot of those have added up over the years with the acquisitions and everything else. So this has been with early
customers for a little while, I assume, the AWS version. And I know it's consumable in the
marketplace. So that'll be how customers go to get started. But what have you seen in early customer
adoption or use cases? Where are you seeing some trend lines
with kind of the early returns on Flex on AWS?
Yeah, so as customers look at what they think they hear,
and that is storage, or they hear block storage,
there's almost an immediate assumption
that everything that they've been used to using
with enterprise-class storage on-premises is there for them to use elsewhere.
What we are providing customers is a transition that makes it
a seamless and equivalent type of experience to what they've been used to using
in an on-prem environment.
So for some or all of these mission-critical
workloads that need to get to
the kind of scale that we can provide, the value add of working with Amazon is that we provide
the ability to give people basically this elasticity of modular growth, the ability
to deliver these extremely high-performance workloads, and the ability to show proven outcomes that
keep up with what they may have been doing when they were doing something with us on-prem, and doing it also in a public cloud environment. That allows us to show that there's now this extensibility, and it can take a couple of forms. The ability to do replication to a cloud environment is something that somebody may choose to do in case something were to go wrong, a BCDR environment.
Another might be test and dev, where I've got to have something that works the same exact way as it will on-prem, because I might do the dev and test there and I need instant gratification. My developers expect me not to say, I've got a PO and I've got three or four months
for you to wait before I get more gear in here,
But rather to give them instant gratification,
a benefit of the cloud is doing exactly that.
How do I help them with an internal as-a-service experience
for something that's literally as quick
as you can respond to a trouble ticket,
to deploy what they need to do that dev and test work,
and then come back and maybe do the deployment?
The other, though, is if I do need to deploy
a mission-critical workload,
I need this thing to be able to do what I expect,
and that means I need this to be resilient.
So that's why the value of that multi-AZ environment matters,
if I need to be capable of doing something
that's literally bulletproof. In terms of the functionality I mentioned earlier, being able to do region-to-region replication
is also the kind of thing that people are expecting they're going to have, that they
were used to using all along on-prem. Databases are often the things people
gravitate to using in our environment, in a cloud environment,
being able to run any combination of those that might be containerized or what you'd consider conventional databases, and mixing and matching those.
So a particular client of ours is set up to do basically a database as a service internally, right? They're trying to service their developers' needs to be able to run
literally any kind of environment, do it quickly, and deploy it without
going through any kind of physical transformation, which is extremely valuable.
Let me ask you about the database example specifically,
because Amazon's actually done a pretty good job of developing instances and offerings around databases, and I know there's always going to be
overlap between your services as Dell and theirs, and yours on theirs, and all this
sort of thing, right? But what does PowerFlex in the cloud get you in
terms of running those applications versus running them on services that Amazon provides?
Yeah, so certainly you understand if there's a database that's running on infrastructure that's
set up with ingredients that Amazon provides, then that's what you're going to have as a
substrate under which those services are rendered. And what we've done here with this multi-AZ,
with the scalability getting you to literally any capacity that you want, the ability for you to grow this
and deliver the sub millisecond latency performance
envelopes that I mentioned earlier,
makes it an extremely attractive option,
not to necessarily compete with AWS,
but rather complement what it is that they can do for
their customers. The whole point is to be able to have a way of eliminating any obstacles that
customers may feel that they have. If they've gone through something that was maybe less
performant than they had expected, rather than regressing back to doing something the old way,
then this would potentially be there for them to continue to work through,
working with our partnership with Amazon.
They're very familiar with what we have been putting together here.
We've partnered now here for quite some time.
And as a result,
they're very impressed with the outcomes that we've been able to demonstrate
to the degree of some of this being almost
unbelievable if you think about what we've talked about here on this call the ability to get to that
distributed scale out mechanism is truly remarkable so all the work you're doing now is
with existing customers right uh existing well i don't want to assume, are they all existing PowerFlex customers?
Or do you have some customers that are big Dell shops
that are saying, I've really been waiting for this
and now you've given me access to a beta version of it
and now I want to jump in and try.
I'm curious, have you had any of that?
It's a combination.
Certainly those customers that are very familiar already
with using PowerFlex in an on-prem environment
are faced with a, hey, I would like to be able to use
the flexibility a cloud environment offers.
Why vacate one to go to the other rather than,
why don't I tie the two of those together?
This is certainly generating, as a part of our announcement
earlier this year of Project Alpine,
net-new customer interest in what it is that we're able to do that, frankly, they've not
seen satisfied with other options before. So whether they are
PowerFlex customers or not is less important than the fact that most customers in the world are
somehow working with Dell
in some capacity anyway.
So what this is doing is helping to broaden
the addressable opportunity space
and to help customers with some things
that they may have found to have been a little bit
encumbering in terms of how to get to what they wanted
to get out of doing something with a transition like this.
You've talked a lot about the large scale opportunities
for mission critical apps.
Do you see, how small,
and I know you've talked about three nodes
and that's a common building block for the cloud deployment,
but how small do you see these going?
And is there an edge play here that makes any sense?
Or do you have feelings that maybe there are other
Dell products that are a better fit there? I'm just sort of thinking about, as we look at where data
is being created, we're seeing more infrastructure physically being driven to those locations, but
some of that could be cloud-driven too, I suppose, or some sort of hybrid. Is it too
early, or what are you seeing there?
Yeah, there's a couple of ways to answer that. First of all, there's usually some aspect of a conversation with the customer
first about, where are you going? So starting off with something as small as a three-node
configuration may be the appropriate step that they want to take because they're looking for
something that helps them with the performance. Just to give you some idea what this looks like at that node count,
we're over a million IOPS. Now, do you need that amount of performance?
If you do, then that's a frankly unmet need that we help fill.
That is important. But also if they're in a state of,
let's see how this goes for an initial deployment,
then that might be the first step. If that's all they want to begin with,
then they may be perfectly happy with the offerings that primary storage from Amazon
offers, such as EBS in the form of the various io2 and io1 options. If that's fine, that's fine; we're not
here to displace the existing needs. We're here to help with the needs that customers may find
are not going to keep up with the pace of change. That's where the next step would be: where do you want to go?
And if part of the conversation is, look, here's where we're starting off, at this point here's what we need to
get to, and this is the growth trajectory that we're on, then this would be a non-disruptive way
to get there. As I say, adding node by node what you need, add compute nodes, add storage nodes, being
able to put them
together in that asymmetrical manner is a perfect example of how customers will try and make sure
that they don't outgrow something, or in this case, match better what the cost is for the workloads
they need instead of overspending to start off with something that might not necessarily be
fully utilized for some time. As far as edge implementations go,
one of the things that you've seen us do is qualify with the EKS Anywhere option. So
the Amazon option for people who are saying, I want to do containers. I'm picking the distribution
from Amazon. EKS Anywhere is an outstanding platform for me to use. I'd like to be able to
use it on a PowerFlex implementation, and that's because I need on-prem proximity to where the
workloads are occurring. I need the responses, the latency. I need to be able to expand this
similar to what our existing on-prem customers are doing today. Then this is an excellent story
for people who are looking for, again, some hybrid mechanism.
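For the container angle, a hedged sketch of how a Kubernetes StorageClass backed by Dell's CSI driver for PowerFlex might be declared from Python on a cluster such as EKS Anywhere; the provisioner string and parameter keys below are assumptions based on the driver's historical "vxflexos" naming, so check the driver documentation for the exact values.

```python
# Hypothetical sketch: declare a StorageClass for PowerFlex-backed volumes
# on a Kubernetes cluster (e.g. EKS Anywhere). The provisioner name and
# parameter keys are assumptions; verify them against the Dell CSI driver docs.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
storage_api = client.StorageV1Api()

sc = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="powerflex-example"),
    provisioner="csi-vxflexos.dellemc.com",           # assumed driver name
    parameters={
        "systemID": "<powerflex-system-id>",           # assumed parameter key
        "storagepool": "<storage-pool-name>",          # assumed parameter key
    },
    reclaim_policy="Delete",
    volume_binding_mode="WaitForFirstConsumer",
    allow_volume_expansion=True,
)

storage_api.create_storage_class(sc)
print("Created StorageClass:", sc.metadata.name)
```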
They prefer to use an Amazon distribution of their Kubernetes options. There could be
future conversations here about extensibility, about what Outpost potentially could be doing
in this environment as well.
Outpost is another interesting angle. I often lament the lack of hardware at AWS re:Invent,
but Outpost always comes through,
and the Snow devices are always there.
No Snowmobile this time, or at least not that I saw,
but the Snowballs and Snowcones.
They didn't have the dunk tank this year.
In previous years, they had the rainstorm going inside of the little container with the Snowball inside of it.
Yeah, those things are neat.
And actually, we did a podcast on just Snow a couple of episodes ago.
So for anyone that is super into those products, check that out.
Yeah, the outpost angle is interesting.
Maybe we'll come back to that.
Listening to you talk about the use cases, though,
it's got to be exciting because there's so many,
but also challenging because there are so many.
And it's really hard, I think,
for Dell to take and communicate to the market what PowerFlex on AWS is capable of because it's maybe so broad.
That's got to be a challenge.
How do you maintain the ability to do so much for so many different use cases, but check that against ease of use,
ease of deployment, ease of management,
because the cloud's got to do both, right?
Exactly, and if you think about what the cloud
has been leveraged by many organizations to do,
it's the instant gratification,
the I can procure what I need now,
and I can do so with the intent of being able to deliver
some outcome to my business.
I mean, and that's really what both we and Amazon
very much are focused on is,
how do we address customer needs?
How can we be there and help the customers
with their future direction?
So yes, the diversity of what we can do
is very much oriented towards
those mission-critical applications.
We're here to help customers with sustaining something that's got an important sensitivity
to conducting transactions typically for their organizations, being able to provide insight
that helps them run their businesses more efficiently, being able to do so with the
type of performance and levels that we can achieve.
Here are, I think, the kinds of things that customers are always going to continue to care
about. I mean, why is it block storage? It's block storage because those are the ones that
are servicing these most performance-sensitive use cases. If all we did was provide something
that was, quote-unquote, like you said, ported to a cloud environment,
we might not have necessarily been very optimal. But in this case, that was not a necessary step.
Our whole platform has been designed from the very beginning to be scale-out, and performance
is the design tenet that's most critical to what we deliver. Being able to do this modularly,
being able to do it
asymmetrically with the compute and storage separately, all have been important for those
kinds of workloads that you're asking about. So yes, there's a diversity to that. And that's
why the type of white papers that we publish very regularly here are trying to help people
understand that we've already been through it. We're here to show you that you're not the first ones to do these things. These are the outcomes you can expect. Give them
some expectations of performance levels per node. What we've shown is that the performance
levels are not just good; they represent a very important cost savings, because the level of performance that
we can achieve with relatively few nodes is a testament to
efficiency. We didn't cover this previously, but some of the
important parts of the data services that are in what we
deliver on PowerFlex, especially in a cloud environment, are
important for customers that are looking for not only
performance but some cost savings. So, we're able to help
our customers go through
a profile conversation, explain to them,
how is it that we can reduce what they're spending?
It's a very unusual circumstance for me
in product management to have the ability to say,
good, fast, cheap, pick three.
We've got the very, very important part of this equation
that people always will care about.
And that is, is it going to be good, and is it cost-effective? And the answer is yes, absolutely, and we're more than happy to discuss
that as part of how we profile customers and help them with their journey in this direction.
So you brought up another thing that reminded me. We talked about lifecycle management a little bit, but let's also talk about operating system and software management a little bit in terms of PowerFlex.
So on-prem, you're releasing updates on a pretty regular cadence, right?
That's correct. Yes.
And obviously, we're keeping up with things that have to do with our software-defined ingredients.
We're helping customers as they need to go through
updates that may be at the hypervisor level,
also obviously physical layer.
The whole engineered system experience for PowerFlex when we
deliver something that's PowerEdge-based is not
just saying we take care of only one layer, but the entire stack.
All the way from the BIOS or iDRAC level on up is how we treat the system
when we say we deliver an engineered experience. So how does that parallel to the cloud? So when
you have a PowerFlex OS update, is there some lag to make that available to AWS customers? How does
that work? Yeah, it's the same code. So the same version, same updates to it.
There's less things that we have to physically manage because obviously in the case of AWS, they're in charge of what needs to happen for the physical layer.
Right.
All the way up to inclusive of the Nitro-based infrastructure, which is where you're hosting your virtualization layer.
Many people probably don't pay too much attention to
hypervisor enablement when you're dealing with an AWS environment, because frankly, you do have
that already built in. That's fine. We are agnostic to the virtualization layer.
And that may be an important point that we didn't cover here, but we don't require, nor do we need to use, the bare metal
instances that Amazon offers. And why am I making that point? In a software-defined
system that you might typically associate with hyperconvergence, the hypervisor is a
co-resident ingredient of the software-defined stack; we don't have that. In this case, we certainly
do use a Linux-based kernel for delivering our software-defined infrastructure, but we're
not requiring a hypervisor to be inclusive with this. So we just use what, in this case,
Amazon provides. But similar to on-prem, our multi-hypervisor approach, or bare metal,
is capable of then hosting multiple environments,
whether you choose one hypervisor or another or bare metal,
where there isn't any requirement for a hypervisor,
then that's fine.
In this case, we've got the ability to mix and match those
in the on-prem environment.
And in the case of AWS,
we run right with them on what they're putting into
the Nitro infrastructure, in this case,
the Xen-based hypervisor.
Okay, well, that's helpful because we have seen times
where release schedules will lag
and for customers that are in a more agile manner,
accepting new updates to their on-prem stuff,
you'd like to match as fast as possible in the cloud.
It sounds like that problem has been solved here.
Well, it being the same version,
the updates may be applicable to certain parts.
So if a release update were being put out by us
for, say, a set of changes
to firmware that's in the network switches
or in a PowerEdge
server or something like that, it obviously wouldn't be necessary for a cloud environment.
Those things that are going to get changed that might be applicable to the PowerFlex
layer are released in, again, the same version as being used.
So there's no discriminating about whether it's cloud or not.
There isn't a quote unquote cloud version.
We did do some things here, though, as a utility to automate this,
and you'll hear more about how we're going to make this a universal
direction across the Dell portfolio.
But a lot of what we enabled customers to do is get this right
the first time by us having picked all the necessary ingredients
out of the possible combinations that you could get wrong in an AWS catalog of instance types and storage types. The purpose for us here is to help our customers get it right, get it right the first time, and eliminate variables that could potentially cause a drift in what their user experience is, where performance could end up being a detriment if somebody isn't picking the right things. The whole point for us is not letting you get into that kind of trouble.
What we did here was make it so that you could literally start with zero of anything:
being able to pick up what's necessary for the instance types, being able to install the
ingredients from a PowerFlex perspective, also going through all the VPC wiring,
so your virtual private cloud networking
is all automated and set up.
We're literally leaving you, in the time
we've been on this call, with an infrastructure
that can be scaled to whatever capacity you want,
ready to provision volumes.
So automating that whole thing,
being able to give you literally only about four questions
to answer for you to have that outcome occur
is the experience that we really want customers to enjoy.
How do you get from here to there quickly, easily,
and with no errors is the whole objective.
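To give a feel for a slice of what that automation covers, here's a minimal boto3 sketch of the VPC and per-AZ subnet plumbing involved. It is not Dell's utility, and the CIDR layout is an arbitrary assumption.

```python
# Illustrative sketch of the kind of VPC "wiring" the deployment automation
# handles: a VPC plus one subnet per AZ. CIDR ranges are placeholder choices;
# Dell's utility makes these decisions for you.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id],
                Tags=[{"Key": "Name", "Value": "powerflex-example-vpc"}])

azs = [z["ZoneName"]
       for z in ec2.describe_availability_zones()["AvailabilityZones"]][:3]

subnet_ids = []
for i, az in enumerate(azs):
    subnet = ec2.create_subnet(
        VpcId=vpc_id,
        CidrBlock=f"10.0.{i}.0/24",   # one /24 per AZ, placeholder layout
        AvailabilityZone=az,
    )
    subnet_ids.append(subnet["Subnet"]["SubnetId"])

print("VPC:", vpc_id, "subnets:", subnet_ids)
```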
So let's talk a little more broadly about Alpine
because you started to go there a little bit.
Obviously here you're starting with AWS, biggest cloud, makes a lot of sense.
They also have, from what I'm told, I'm not a cloud developer,
but a mature API stack that makes it a little easier to work with
in terms of getting,
you know, your hooks or their hooks or whatever together
to make things work.
What is the Alpine vision both internationally
and across multiple clouds?
Yeah, so you think about what we're trying to help
our customers do is worry less about all of those APIs
that may differ across all of these different platforms, making it such that the customers are relying on us to
have worked out those things. So if at the end of the day, I could simplify it by saying, look,
I want 100 terabytes and I need it to be block storage. I should probably be able to say those
two things and I'll let you, Dell, take care of the rest.
In this case, wire up everything I just mentioned, install it for me, get me to a place where I can then start to do what's necessary,
which is assign a volume to an application. And then I'm off to the races.
If you are to do it yourself and be able to hand wire all of that, there are a lot of steps.
And many people potentially are not going to appreciate
having to do all of those steps,
even if they have the experience
of having done it before.
Why require them to do that
when we can automate that?
So the whole concept here
is for us to deliver our provisioning
and lifecycle administration experience
that makes this simple
for our customers to get to what's important.
I want to get to business, and business means applications.
And so what does that mean then?
How does that translate to your expansion either to other geos or other clouds or both,
really?
What has to be the emphasis there?
Yeah, I think you've seen what we've published before on Alpine's direction, and that is
for us to be capable of helping customers with wherever they have chosen to, if you will, commit to their spending direction.
And that's a reality.
Customers have often picked a particular commitment and a cloud vendor for what they're going to use.
Now, will they have different preferences?
Absolutely.
And we recognize the market leadership positions of each of those platforms. So we're here helping our customers to achieve the results regardless of where they prefer. Are you setting up, helping them set up a specific,
a real live instance in AWS?
Do you have something else?
I know you obviously have demos and config videos
and all sorts of other stuff
if they really want to get into the weeds.
But if I'm thinking about this as an org,
what is the easiest path to actually get hands on?
Yeah, so we have an outstanding group
of sales specialists here that are very well acquainted
with what PowerFlex has been doing all along.
Cloud environment for them is no different.
The whole point is to first have a little bit of a discussion
about what is it you want to get out of the implementation.
Our guidance to them is generally based on
what they say about what they want their workloads to do, how performant they want them to be, where they are now, where they want
to be, giving them recommendations on best practices.
As I mentioned, we've got flexibility on how this gets done.
So part of this is a guided journey through our sales experts to help them with making
sure that they get it right.
Well, I guess the good news is, if you want to do a POC on AWS,
it shouldn't take long to stand one up, right?
Exactly, that's exactly right.
I mean, a heck of a lot faster
than you guys shipping them, you know,
a bunch of nodes and sitting on the dock bay
and getting someone to rack it and provision it.
I mean, we've seen how many dozens, you know,
you've seen, I'm sure, hundreds of those POCs die on the vine
because they never get set up or operationalized the right way.
I mean, this should be dead simple.
Absolutely. I mean, it usually ends up being that after we get you to the point of being able to provision and load your applications,
it's mostly in the hands of the customer to install what they're going to use for the workload. The length of time it takes for them to go through and do that is generally going to be longer than the setup itself. As I say, it takes anywhere around 20 minutes or so
to set up a configuration of whatever scale or size they would like, which means that, you know,
then it's in the hands of what they want to do with it from an application perspective.
Well, it's very cool. We've got a detailed piece up on the website from last week's launch.
I know you guys have got a microsite set up. We'll link to that in the description for this event, for this podcast.
So for anyone who wants to check it out, there's plenty of materials.
You guys have, you've said it a couple of times, you know, lately, or at least the way I've been paying attention in the last year,
been creating a ton of marketing and technical information around these
solutions, not just for this one but for all of them. So there's lots of resources
out there. We'll link to a bunch of them in the notes. And Michael,
appreciate the time, thanks for doing this.
Yeah, absolutely. And listen, I would
be remiss if I did not mention that part of this is leveraging our inventory of products across the Dell portfolio.
So the visibility into monitoring and what we're doing operationally comes through CloudIQ, the hygiene our customers should still be following. For backup, this
leverages Data Domain Virtual Edition, which is already in the AWS platform for people that want to do their backup and do it with extreme data efficiency and cost efficiency,
which allows them to write this to S3.
All have been tested and converged into options
that customers can hopefully take advantage of.
All right, well, you got those last plugs in, that's good.
Thanks again for doing this and it's great.
We're looking forward to seeing more
and seeing
where you guys take all these Alpine services.
Great, as am I. Thank you, I appreciate your time.