Podcast #121: Simplifying Data Protection with an Integrated Appliance
Episode Date: June 14, 2023
We recently spent some time with the Dell PowerProtect Appliance, posting an in-depth review…
Transcript
Hey everyone, Brian Beeler here. Welcome to the podcast. We've got a great conversation coming up
around data protection and some of the latest and greatest things that are going on there. David
Noy makes a return visit to the podcast from Dell Technologies. David, how are you today?
I am doing pretty well. Thanks Brian.
All right. So I set it up with data protection,
but I assure you, I promise you, I will not be giving you 35 minutes of ransomware stuff,
because while that's an important topic, I find that excruciatingly boring to the point of
discussing it that long, but we'll probably glance along that subject a little bit. But David, just at a high level for those that may not know the portfolio,
what is in the Dell Tech data protection portfolio
these days, what does that look like?
It ranges across a number of products, Brian.
There's, you know, we have a deep heritage
in software and hardware for data protection.
So we have the Avamar and NetWorker products,
very widely adopted, very widely used
to provide a very deep breadth of functionality.
And you'll see them everywhere from the smallest customers
to the largest Fortune 100s.
And of course we have our PowerProtect DD,
which is the world's leading backup target,
with just rock-solid resilience,
unbelievable reduction rates.
And you'll see that underneath our own software,
but you'll also see that underneath a lot of other third-party backup vendors.
So it's kind of the vendor-neutral storage target
that gives you the best TCO for your backup
and data protection deployments.
Now, having said that, there are a number of derivative software products that we've built,
things like DPA for reporting.
There's capabilities like cloud snapshot management.
We sell an as-a-service offering called Apex Backup Service.
We're doing kind of hybrid or in-cloud
PaaS backups.
And then a couple of years ago we introduced something called PPDM, PowerProtect Data Manager. It's kind of a
next generation
backup software, data protection software.
Really
re-architected
to be more containers-based, to be more modern in terms
of the underlying constructs.
And that's going to be kind of our big bet going forward.
It doesn't mean that we stop investing in Avamar and NetWorker.
Those are so deep in terms of breadth and functionality.
But there's a coexistence strategy, and I'll talk about that as we go through the discussion.
And, you know, a while back, we took PowerProtect DD,
we virtualized it, and then that gives you a software-defined version
of our backup target, PowerProtect DD.
It's available in cloud. It's available on-prem.
Yeah, that guy shows up in cloud and also in VxRail. I know it's in the little store there, to kind of plug in and add data protection to your VxRail vSAN setup.
Yeah, people use it on-prem in VxRail, but in cloud, actually
what people don't know is we protect about 17
logical exabytes of data with the PowerProtect
DD virtual edition in cloud today. In cloud, 17 exabytes. It's a lot. And it's not something
that we commonly talk about. You don't usually think, oh, wow, those guys are really just
crushing it in cloud. It's a lot of data. Well, not to interrupt you, but that's obviously a huge number, but you're right. Like, the virtual edition was kind of put out there to bridge some
of these gaps with things like hyper-converged. And then I know when you launched it in the cloud,
it was sort of a software defined first way to approach data protection in the cloud.
The fact is, that little engine that could, probably, in your wildest dreams, I don't think you guys mapped out that much data protected under DDVE in the early days.
I mean, you couldn't have imagined it would grow that fast.
I don't think, no way.
And even, you know, a little while ago I was looking at how fast it was growing.
It's only accelerating.
So we're just seeing more and more of it.
And, you know, the value proposition is pretty straightforward.
You're backing up stuff in cloud.
Cloud infrastructure is not cheap.
If I can give you 10 to 1, 50 to 1 reduction on your backups in cloud,
I'm dramatically reducing your infrastructure cost
to actually store images and keep them for some period of time.
So it's an arbitrage mechanism against cloud costs.
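To put rough numbers on that arbitrage, here's a back-of-the-envelope sketch; the storage price is a hypothetical placeholder, and only the 10:1 and 50:1 reduction ratios come from the conversation:

```python
# Back-of-the-envelope sketch of the cloud-cost arbitrage described above.
# The $/TB-month price is a made-up placeholder; only the 10:1 and 50:1
# reduction ratios come from the conversation.

def monthly_storage_cost(logical_tb: float, reduction_ratio: float,
                         usd_per_tb_month: float) -> float:
    """Cost of holding deduplicated/compressed backups for one month."""
    physical_tb = logical_tb / reduction_ratio
    return physical_tb * usd_per_tb_month

LOGICAL_TB = 100.0      # logical backup data before reduction
PRICE = 20.0            # hypothetical cloud object-storage price, $/TB-month

for ratio in (1, 10, 50):
    cost = monthly_storage_cost(LOGICAL_TB, ratio, PRICE)
    print(f"{ratio:>2}:1 reduction -> {LOGICAL_TB / ratio:6.1f} TB stored, "
          f"${cost:,.2f}/month")
```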
So customers love it.
We've actually built some innovation on top of it.
It's not available in cloud yet, but, you know, obviously we're qualifying it with the cloud vendors. It's called SmartScale.
SmartScale is the ability to federate multiple PowerProtect DDs or DDVEs,
32 of them together into one giant namespace.
So it's kind of like the beginnings of the ability to scale.
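As a purely conceptual illustration of that federation idea, the sketch below presents several targets as one logical pool; the placement policy is invented for illustration, since SmartScale's internals aren't described here:

```python
# Purely conceptual sketch of one namespace spanning several backup targets,
# as SmartScale is described above (up to 32 DDs/DDVEs federated together).
# The placement policy here is invented; the product's internals aren't public.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    capacity_tb: float
    used_tb: float = 0.0

    @property
    def free_tb(self) -> float:
        return self.capacity_tb - self.used_tb

class FederatedNamespace:
    """Present many physical targets as one logical pool."""

    MAX_TARGETS = 32  # the federation limit cited in the conversation

    def __init__(self, targets):
        if len(targets) > self.MAX_TARGETS:
            raise ValueError("SmartScale federates at most 32 targets")
        self.targets = targets
        self.catalog = {}  # backup name -> target holding it

    def place(self, backup: str, size_tb: float) -> str:
        # Naive policy: send the backup to the target with the most free space.
        target = max(self.targets, key=lambda t: t.free_tb)
        if target.free_tb < size_tb:
            raise RuntimeError("no target has room for this backup")
        target.used_tb += size_tb
        self.catalog[backup] = target.name
        return target.name

ns = FederatedNamespace([Target(f"ddve-{i:02d}", capacity_tb=96.0) for i in range(4)])
print(ns.place("sql-prod-nightly", size_tb=2.5))   # -> ddve-00
```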
And then for customers, you know customers who want to buy a full integrated
appliance, we have a product
called IDPA. We continue to sell IDPA,
the integrated data protection appliance.
But we've introduced a new product,
again, taking that
PowerProtect Data Manager,
the next generation backup application
with that DDVE,
that little engine that could,
and wrapped it together into one fully integrated appliance. And I got to tell you, it's a beautiful
thing. Yeah. So the IDPA appliance, I actually remember that well. We looked at it in the very
early days. And I know what you were trying to do back then was take some of these services that you mentioned, six or seven, and aggregate
them into one unit with licensing that was a little more easy to understand rather than six
or whatever the count was. And that was a really good step forward. It wasn't as far as definitely
where you are now. And we'll get into that because back in the day the IDPA appliance still had multiple logins and it wasn't it was clearly on the road to your vision for a unified
easy to manage appliance but but was uh only a step in that direction I would say
but I still understand it was successful for you and did it and got customers you moving in the
direction you wanted them to or wanted to enable them to, right?
Well, it continues to be successful.
So we're still doing well with that product.
But to your point, it's one thing to take a couple of different products and kind of make that experience a little bit easier to deploy and use. It's another thing to build a fully integrated solution
where you're really only interacting
with an operating environment.
The fact that there's components running underneath
that we're reusing our technology
should be completely seamless and invisible.
And that ranges from everything from
how you upgrade the system to how you secure the system to how you provide disaster
recovery and uptime for that system or maintenance. It should be just one seamless product as a single
offering. And that construct is something that we're going to continue to evolve. So we just
put it out this year, but it's definitely on an evolutionary path with a North Star in sight.
Right. And so you've got some new appliances there, and I want to talk through that as well.
But you made me think of something while we're talking about making things easy, right?
Making the operations of any of these pieces of equipment or software packages easy.
I mean, that seems to be one of the driving factors that we hear, not just from you on data protection and backup.
I think it's across the board, right? Whether it's adding BlueField DPUs to VxRail,
making sure that I can update them through iDRAC and through the common tools that
people are used to for lifecycle controller management. I mean, just some of the little
stuff, and, you know, it's still very important. And I know it's not simple engineering,
but taking all of these functions and trying to put them in a spot or two spots max where people
can go and manage and interact with these things
so that their IT departments are not having to spend a lot of time looking for places to do
things that they can just go in and administer things easily or their partner, because I know
a lot of these things are deployed through the channel and sometimes a partner manages it for
you. So keeping it simple so that you're not logging a bunch of partner time
to do a lot of these day-to-day tasks.
I mean, all this stuff is really important.
I'm sure you're getting that feedback from your customers.
Yeah, so actually the channel partners who've tried it,
and we have, you know, I won't tell you how many,
but there's lots of seed units out there that have gone out to channel partners.
And the feedback that comes back consistently is, wow, this thing was just a breeze.
So, you know, our litmus test is pretty simple.
You find anyone in the organization who's got a vice president title or higher, because
at that point, operating a conference call, let alone, you know, a data protection appliance,
is hard enough, right?
And we set them loose on the appliance.
So from unboxing to getting it up to first backup,
consistently getting about 15 minutes to get that thing up and going,
which is phenomenal.
And when we compare the time it takes to actually perform operations like virtual machine backups to the time it takes to actually configure and get things set up,
the combination of performance and simplicity makes it easy to get it operating, and easy and fast to operate in your day-to-day experience.
We compare that to some of our competitors
and we're anything from 30 to 40% faster than
pretty much anyone out there.
And again, if someone like me or an SVP of engineering who hasn't coded in decades can
sit down and get that thing up and running in 15 minutes across the board without a glitch,
I think that's the hallmark of success.
Well, I will say that I went out to Hopkinton,
to your labs and got hands on
with the new PowerProtect appliances.
I hadn't yet interacted with them at all.
And we started with one at zero.
So it was powered on and had an IP address.
And then we went to a conference room and connected over a laptop, and we recorded that session.
We'll publish that video at some point.
But, yeah, I mean, it's a lot of basic blocking and tackling at the beginning.
It's some of the administrative info.
It's setting up a backup security account, and I actually like that some of these need two keys to do anything, you know, real serious on the system. And plugging into an existing VMware environment, I mean, it's insanely simple, to the point where, and I hope you take this as a compliment, it felt like I wasn't
administering an appliance and a backup application. It was just a thing. It was one thing
that did that. And I know you probably like that, but I also don't want to diminish the
power of any one of those pieces, but it was pretty fluid.
Well, there's a lot that goes into orchestrating
all of that, so imagine that we have
different development teams, ones who are working
on the backup application, because it can run
as a standalone backup application.
We sell quite a bit of it that way.
There's the data protection target appliance that's running in there, virtualized; as you just mentioned, 17 exabytes of that is running separately in cloud. There's a whole team working on how do I put these things together in a way that obfuscates the fact that these are different products, but makes a very clean interface by which you work with the product. And then aligning all of the release schedules so that the features come out on the same cadence.
Like there's a lot of orchestration that goes into getting that to work,
you know, like clockwork, turnkey. And it's just coming off without a hitch. I love it.
Yeah. And the hardware is pretty cool. I mean, I like the dual-shelf design for your dense systems, but you've got some flash in there too, which is good for some of the rapid restores, and it is good to have on board. Walk through some of the hardware highlights, or some of the things that you like the most from the hardware side. I mean, for our tech nerds, you know, as much as the appliance is a thing and you're buying the appliance, some of the underlying bits, I think, are pretty cool there.
Well, it is on a PowerEdge server, right? So you have that, and we will continue to evolve that as the new PowerEdge lines come out. I think the other cool thing is that we ship it fully populated
to 96 terabytes out of the gate.
So we'll be adding an expansion shelf to that,
to 256 terabytes shortly.
But the idea is that for smaller customers,
so for channel or going after more of the commercial mid-market,
if you want to start small and grow large,
the expansion is just a license key based expansion.
And again, this is in the spirit of simplicity, right?
We don't have to ship any more capacity to you.
And so we want to continue to keep things as simple as possible,
whether it's expansions or what have you.
You know,
because you have this operating environment that's wrapped around the sub
products that are built in, we can really lock down that box as well.
So one of the big things you hear about all the time,
you know, about even going into the cybersecurity
is just security in general.
We have a lot of our customers
who are looking at penetration testing and,
hey, you know, can I go in and spoof retention lock
by modifying the clock on the system
and basically pushing it forward 10 years
and all of a sudden all the locks have expired, and now I can go delete everything or modify any file. So we just locked the whole thing down. It's a completely controlled environment, which is great. That gives you a level of security that you wouldn't have otherwise had.
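To illustrate the clock-spoofing attack David describes, and why a retention lock can't trust the mutable OS clock, here's a conceptual sketch; how PowerProtect actually secures its clock isn't public, so this is only the idea:

```python
# Conceptual illustration of the clock-spoofing attack described above, and of
# why a retention lock can't trust the mutable OS clock. How PowerProtect DD
# actually secures its clock is not public; this only sketches the idea.

import time

class TrustedClock:
    """Refuses to follow implausible jumps of the system clock."""

    MAX_STEP_SECONDS = 3600  # treat any jump beyond an hour as tampering

    def __init__(self) -> None:
        self._last = time.time()

    def now(self) -> float:
        wall = time.time()
        if wall - self._last > self.MAX_STEP_SECONDS:
            # The OS clock was yanked forward; hold the trusted time steady.
            return self._last
        self._last = max(self._last, wall)
        return self._last

class RetentionLock:
    def __init__(self, clock: TrustedClock, expires_at: float) -> None:
        self.clock = clock
        self.expires_at = expires_at

    def can_delete(self) -> bool:
        return self.clock.now() >= self.expires_at

clock = TrustedClock()
lock = RetentionLock(clock, expires_at=time.time() + 7 * 365 * 86400)  # ~7 years
# Even if an attacker pushes the OS clock ahead 10 years, TrustedClock ignores
# the jump and the lock still reports that deletion is not allowed.
print(lock.can_delete())  # False
```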
Well, that's fair, because in every report that any infrastructure or backup vendor puts out, it's about the vectors for cyber attacks, and they are very much focused on the backups, because that's your last line.
It's your primary defense, but it's also your last line of defense, right?
Because if they nuke your backups and then go after your production, you've got no leg to stand on when it comes to remediating that.
Yep.
So again, exactly.
We try to lock it down.
It is integrated with our entire cyber stack.
So we'll be able to use this to go into a cyber vault.
But the form factor, very slim, nice little 2U box. And again, the ability to go add expansions down the road
makes it easier to go grow, expand in the box itself.
The performance we're getting out of it is great.
So the restore and backup performance is just fantastic.
Part of it's because of the hardware specs.
Part of it is also because of some of the technologies
that we use, particularly around VMware. So, you know, the backup application that's built in, PPDM, PowerProtect Data Manager, has a feature called transparent snapshots,
which basically is a very lightweight mechanism for taking backups of VMware. That really doesn't cause any impact
to the VMware cluster it's backing up,
but as kind of a side effect of that,
those lightweight snapshots are just a lot faster to move across the wire.
So the whole restore and backup speeds
are just sped up and accelerated dramatically.
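As a toy illustration of why lightweight snapshots move less data across the wire, the sketch below ships only changed blocks; this is the general incremental-backup concept, not Dell's transparent-snapshots implementation:

```python
# Toy illustration of why shipping only changed blocks is faster across the
# wire, which is the general idea behind lightweight VM snapshots. This is
# NOT Dell's transparent-snapshots implementation, just the concept.

def changed_blocks(previous, current):
    """Return {block_index: data} for blocks that differ since the last backup."""
    return {i: cur for i, (prev, cur) in enumerate(zip(previous, current))
            if prev != cur}

BLOCK = 4096
prev_disk = [bytes(BLOCK) for _ in range(1024)]  # 4 MiB toy "disk", all zeros
curr_disk = list(prev_disk)
curr_disk[7] = b"\x01" * BLOCK                   # only two blocks changed
curr_disk[512] = b"\x02" * BLOCK

delta = changed_blocks(prev_disk, curr_disk)
full_bytes = len(curr_disk) * BLOCK
delta_bytes = len(delta) * BLOCK
print(f"full copy: {full_bytes:,} bytes; delta: {delta_bytes:,} bytes "
      f"({100 * delta_bytes / full_bytes:.2f}% of a full transfer)")
```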
So overall, I think just a solid product.
And again, just going to continue to get better. It's just in its first iterations. So you talked about the high-end build at 96
terabytes. I know you've got a smaller one too. So in terms of the audience that you're after
with this, is this primarily mid-market down? I mean, how are you thinking about this, and where's the line that you're seeing with customers
between consuming an integrated appliance like this, and then doing something like a
data domain or Avamar data domain, whatever, a larger infrastructure off to the side?
Where do those swim lanes kind of fall?
They're not always around just capacity.
So if you think about Data Domain, it could be customers who already picked their third-party software vendor and want a best-of-breed target underneath.
That's where we're seeing success.
And Data Domain scales to just over a petabyte.
So pretty good amount of capacity in one package. The initial offering of this integrated appliance, we call it the DM5500, is a 96-terabyte box, but again, starting at 12 terabytes internally provisioned, easily extensible via software licensing to 96. And then later on this year we'll add a 256-terabyte expansion shelf. So it'll grow to 256 with the expansion shelf.
Now, over time, you can imagine this is part of a broader strategy to get much bigger than that.
So the frameworks that I talked about, that orchestrate bringing together the backup software and that backup target as a software extension are all being built to be containerized,
fully modern architectures, and you can kind of imagine where this might go.
As a software-defined offering, underneath the covers, it's just running on PowerEdge.
So it is fully a software-defined offering.
This doesn't necessarily have to run on our tin.
It could run in the cloud.
And because it's been containerized internally, the software has been containerized,
you can start to imagine that this might become a multi-node system down the road.
So this has all evolved very quickly into something that I think is going to be pretty exciting for the industry.
For the customers right now that we're targeting with the DM5500 specifically, it's focused on the mid-market up to around the mid-size enterprise, because that's where that 256 terabytes would play. A larger enterprise might be looking for something larger,
and that's where we've got plans to go and expand this in the years after this year.
From a growth perspective, though, it sounds like you're enthusiastic about where you're
starting with the DM5500, but where you're going with it to be able to hit more market segments,
to hit a bigger slice of that overall market with an appliance that's super easy to consume.
To be fair, backup and recovery is not always super simple. I mean, I think it's right next to networking, where somebody's always complaining about something not being set up right or configured right or whatever. Those two trade off with internal IT admin complaints more than anything else I hear.
Yeah, and the other thing is that
if you look at our PowerProtect DD ranging to a petabyte,
we don't get that many requests for customers like,
hey, go build me a 20 petabyte backup something, right?
Because with backup being your last line of defense,
you don't necessarily wanna put all your eggs in one basket like that.
So what you'll tend to see is more deployments in that 200 to kind of 800 terabyte range, maybe all the way to a petabyte in a larger enterprise.
Even in a large enterprise or a Fortune 100, it's very rare that someone would ask you for, oh, go build me a five
petabyte backup target.
They'll say, build me five one petabyte backup targets.
Make them easy to manage.
That way, if one goes down for whatever reason, something catches on fire, then I have reduced
my blast radius significantly by not putting all the eggs in one basket.
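The blast-radius arithmetic is simple enough to sketch; the figures just restate the five-times-one-petabyte example:

```python
# Quick arithmetic behind the blast-radius point: the same 5 PB spread across
# more independent targets means any single failure touches less of the estate.
# The figures are illustrative, taken from the example in the conversation.

TOTAL_PB = 5.0

for targets in (1, 5):
    per_target_pb = TOTAL_PB / targets
    exposure_pct = 100.0 / targets  # share of backups lost if one target dies
    print(f"{targets} x {per_target_pb:.0f} PB target(s): one failure "
          f"affects {exposure_pct:.0f}% of the backup estate")
```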
So you can do some of that with the cloud too, right?
With the 5500, because there is cloud connectivity.
Why don't you walk through some of that in terms of the cloud's relevance in the data protection world for this mid-market target?
Yep. So the next release of the software that goes on the product will be able
to either tier to cloud, and we'll also be able to replicate to a DDVE that's running in cloud, so that can be your DR copy, for example. And then, if you want to have that then go off to a vault for cyber protection, that can be the
So that on-prem DM5500 can drive cloud consumption, can tier, can replicate, and then use the
cloud as its vault destination as well.
All of those things it'll be capable of doing.
At some point, we'll actually lift the entire operating environment off and drop it in the cloud so you can have a mirror image of that appliance running in
cloud as well as on-prem.
Is there an analog for that in more of an enterprise hub and spoke?
I'm thinking retail.
I mean, that's been a hot topic lately with so much like AI inferencing and stuff being
driven out to retail.
They're creating more data.
There's all this customer intent data in store.
I mean, there's so much going on there
and that orgs want to preserve that,
but maybe they don't want to ship it to a public cloud.
Maybe they want to drive it to an internal cloud
and back up the same sort of way.
If I'm going to create all this POS data
and time clock and all this and back it up on-prem,
but I want to send it back to the core data center. Is there a way to make that happen with
an internal cloud? So, remember that all these components that are running inside of this
integrated solution, even though it's presented as an integrated solution out to the customer,
they're all components that we use. So, for example, the target, the storage target management
is actually just that PowerProtect DD software.
So I could put this little box into all of my retail locations, you know, in the closet room, wherever it is, and, you know, be backing up onto it, and then replicating that back to a large, you know, big-iron PowerProtect DD that's running in the data center somewhere. And that could be basically where all the data is being sent for the long term, or for a different retention policy, or even for the genesis of a vault that's being kept actually on-prem as opposed to in the cloud. Or for disaster recovery purposes, just in case that little closet thing goes down,
there's another copy back in the data center.
But the data center could be my hub
and all of these retail locations could be the spokes.
And this thing is so small that it's actually viable
for that kind of a use case.
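A conceptual hub-and-spoke replication map for that retail scenario might look like the sketch below; all names, schedules, and retention values are made up:

```python
# Conceptual replication map for the retail hub-and-spoke scenario above:
# each store's small appliance replicates back to one big DD in the core data
# center. All names, schedules, and retention values are made up.

from dataclasses import dataclass

@dataclass
class ReplicationPolicy:
    source: str          # spoke appliance at a retail site
    target: str          # hub target in the core data center
    schedule: str        # when replication runs
    retention_days: int  # how long the hub keeps the copies

HUB = "powerprotect-dd-core-01"
STORES = ["store-0042", "store-0107", "store-0311"]

policies = [
    ReplicationPolicy(source=f"dm-appliance-{store}", target=HUB,
                      schedule="nightly 01:00 local", retention_days=90)
    for store in STORES
]

for p in policies:
    print(f"{p.source} -> {p.target}  ({p.schedule}, keep {p.retention_days}d)")
```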
Well, we see plenty of half racks in those types of use cases where it's two or three servers,
a switch, maybe a UPS or something. Another 2U for backup is not a very big lift
figuratively or in the reality of racking this thing. So that's easy from a space standpoint or
relatively easy. You talked a lot about VMware integration and backing up. One of the things
that I thought was interesting as you go through the setup configuration process, or I guess after
you're done and now we want to set up a backup job, you've designed it to be very wizard-like where it's pick VMware and then
add the credentials and select the VMs and go start your jobs. But it's not only VMware, there's
all sorts of application options and then there's more modern things like Kubernetes is there too.
Talk to me a little bit about Kubernetes and that integration and why that's important
to you.
I mean, I think you picked up on it, which is again, inside that integrated appliance
is our next generation data protection product, our data manager. It has a rich set of functionality
around not just VMware, Oracle, SQL. There are Exchange backup capabilities.
We have Kubernetes in there.
And so obviously all of our modern workloads
that we're going to go back up
are going to be handled by this PowerProtect data manager,
including NAS backup as well.
Now, that's not a modern one, but we try to handle it in a much more high-throughput way, at some point being able to handle multi-petabyte NAS data sources.
Really, the focus of all of
our innovation around modern workload support is
going to be adding it into PowerProtect Data Manager
as it comes into that product,
which is again available as a standalone product
and people use it that way.
But then because the releases are all orchestrated
the way we talked about earlier,
every release of the product that adds new workload support,
that integrated appliance by default
then picks those up as well.
So we'll be looking to add more support around Hadoop, for example; when it goes into PowerProtect Data Manager, we'll pick up Hadoop. When we add support for any of the next-generation databases, it's picked up and will basically go into the integrated appliance.
So there are a lot of benefits to having that embedding
of our flagship next-generation data protection software.
So do you have to have native support to be able to back something up? I'm just thinking about a
customer that has something that you don't have a wizard for or an integration for specifically.
What do we do there? In general though, we'll be building out the operating environment in conjunction with the actual backup software.
So again, because those are orchestrated to come out in lockstep, the support comes out in both simultaneously.
So you'll never kind of be in a case where we added something into one and it's not in the other. There are some generic pre and post script capabilities
for just kind of manually scripting certain types of backups,
and those are always there and available to you.
But in general, we want to make sure that experience
is not going to be a hacky one where it's like,
I'm going into the backup software and trying to work around the fact
that this doesn't exist in the overarching packaged solution.
We want to make sure that that's a smooth transition.
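For illustration, a generic pre/post hook for a workload without a native integration might look like the sketch below; the service name is hypothetical, and how a script gets attached to a job is configured in the product:

```python
# A hedged sketch of the kind of generic pre/post hooks David mentions for
# workloads without a native integration. The service name is hypothetical,
# and how a script is attached to a backup job is configured in the product.

import subprocess
import sys

def pre_backup() -> None:
    """Quiesce the application so its on-disk state is consistent."""
    subprocess.run(["systemctl", "stop", "my-legacy-app"], check=True)

def post_backup() -> None:
    """Bring the application back once the backup job has finished."""
    subprocess.run(["systemctl", "start", "my-legacy-app"], check=True)

if __name__ == "__main__":
    # A backup job would call: hooks.py pre   ...run backup...   hooks.py post
    {"pre": pre_backup, "post": post_backup}[sys.argv[1]]()
```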
Okay. Well that makes sense.
So you're talking a lot about the on-prem workloads. Have you paid, you know, much mind to any of the as-a-service workloads? Because that's another big push in the backup space, and I guess part of the reason why large organizations average, what is it, like five backup applications in their environment.
It's some kind of ridiculous number, but a lot of it now is driven by SaaS apps like
O365 or anything, any of the SAP workloads that are running in clouds, I mean, those
workloads are fundamentally not protected by that service provider in almost every case.
So are those targets for something like this too, or is that too far outside of the scope
of what you're doing with the DM5500?
Not just yet. So we have two different ways of backing up in cloud, or maybe three, I should say. Let me try to enumerate them.
So one is that the PowerProtect Data Manager
that runs inside this integrated appliance
also is available in the cloud.
So you can deploy it in AWS.
Simple CloudFormation template,
you get PowerProtect Data Manager
and the DDVE instance.
And so for doing application integration,
it works fine.
That's application running on cloud compute.
We're actually backing up cloud infra
like compute instances using snapshots.
We have something called
Cloud Snapshot Manager.
It's not fully integrated
into PowerProtect Data Manager yet,
but it is a SaaS solution for doing snapshot-based backups.
So basically it takes snapshots and it sends them off to storage
for deduplication and retention.
And then we also have, for some of the SaaS apps that you mentioned,
like M365 and so on, the ABS,
the Apex Backup Service offering.
Those are not integrated into PowerProtect Data Manager yet,
but you can see where our goal is to get
everything into that PowerProtect Data Manager console,
so it's an all-in-one UI.
As a user, if I want to take a snapshot-based approach to backing
up, say, EC2 instances running in
Amazon, and I want to have
backups of my M365 and my SQL or whatever, what have you,
and I want to also back up
my enterprise applications like Oracle
and
some NAS workloads,
I want all of that to be in one pane of glass.
Today, it's still a little bit segregated.
There are some link-and-launch capabilities,
but that's all coming together,
and that PowerProtect Data Manager
is to be the one UI to kind of rule them all.
And that'll be also embedded
into any of these integrated appliances.
So we're on the journey to go do that.
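As a hedged sketch of the AWS deployment path mentioned earlier (PowerProtect Data Manager plus DDVE from a CloudFormation template), stack creation with boto3 looks like this; the template URL and parameters are placeholders, not Dell's published template:

```python
# A sketch of the AWS deployment path described above (PowerProtect Data
# Manager plus a DDVE instance stood up from a CloudFormation template),
# using boto3. The template URL and parameter names are placeholders, not
# Dell's published template.

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="ppdm-ddve-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/ppdm.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.2xlarge"},
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Block until the stack reports CREATE_COMPLETE.
cfn.get_waiter("stack_create_complete").wait(StackName="ppdm-ddve-demo")
print("stack is up")
```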
Yeah, it makes a lot of sense, because, well, unifying it all in one spot makes a tremendous amount of sense. What are organizations going to have to do to adjust to this more modern world of
disparate apps on-prem, in the cloud, as a service, whatever? Because it seems to me, and you probably know this better than I do, that a lot of organizations are buying these subscription services without having any notion of data protection at all, forgetting about the fact of having a backup and recovery process in place. And the clouds have made it so easy to buy and consume these things,
which is great, but I do worry a lot that we're one mistake
or one outage away or one bad event from a lot of maybe smaller orgs
getting burned because they didn't understand the implications
of what the SLAs are from
any of these providers.
And I don't know, I'm just curious what you're seeing.
Yeah, I mean, and it happens, right?
It does happen.
So, you know, one is to adopt a multi-cloud strategy, right?
And that means that, you know, where you can,
customers have to be looking at what services are available in both clouds.
In cases where you're kind of beholden to one,
if I'm using M365, there's no analog in any other cloud vendor,
then I want to make sure that not only do I have a data protection strategy
that's augmenting any of the simple data protection that's built into the offering itself, the service by the cloud vendor,
but then I've also thought through like, hey, do I have this protected in multiple availability zones or multiple regions?
So if an entire region goes down, that I can restore my operations and actually bring them back online in another region.
Having planned that out and mapped it out and gone through and kind of built a runbook for
if something bad were to happen, how do I get things back up and running is really important. And to that extent, we're about to release a pretty sophisticated orchestrated recovery
capability. This orchestrated recovery capability says,
look, if you look at the different services
and applications that you have in your environment
that you're protecting,
how do you ensure that they kind of come back
in the right order,
that the most critical ones come back first
and that the dependencies are basically restored
appropriately so that you can get specific applications
back up and online as soon as possible. If, for example, this group of resources has to come back together, let's make sure they come back together; if they're the most critical ones, let's make sure they come back first. So that's a pretty important kind of planning process to go through.
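The "right order" problem is essentially a dependency graph walk; here's a minimal sketch using a topological sort, with an invented service graph purely for illustration:

```python
# A minimal sketch of the dependency-ordered restore idea: critical services
# come back first, but never before the things they depend on. The service
# graph here is invented purely for illustration.

from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# service -> the services it depends on (which must be restored first)
dependencies = {
    "database": set(),
    "auth": {"database"},
    "app-server": {"database", "auth"},
    "web-frontend": {"app-server"},
    "reporting": {"database"},  # less critical, can come back later
}

restore_order = list(TopologicalSorter(dependencies).static_order())
for step, service in enumerate(restore_order, start=1):
    print(f"step {step}: restore {service}")
# e.g. database -> auth -> app-server/reporting -> web-frontend
```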
And then we encourage customers to go and test their recovery processes. Make sure that, hey, guess what, let's run a fire drill.
Let's make sure that in a case that I actually do have to come back,
I can come back.
And that includes everything from fire-drilling a normal recovery operation, to, oops, there was a catastrophic data loss because of a region going down in the cloud or a data center going down, to a malicious data loss because somebody's gone and actually infiltrated a system. All of these things are things that you really should take the time to practice. You know, it's a little bit like what we learned back when I was younger: stop, drop, and roll, right? Because I grew up in California.
Wait, well, I heard that one too.
Yeah, yeah. It was like, you know, there's an earthquake, where do you go? What do you do? Make sure you just practice it until you're really good at it. And I think, you know, our customers really have to do that too. So just plan it out.
I agree, yeah. Make a plan, and then execute it, right? And test it. I will add to that stop, drop, and roll, though: in Ohio we don't have very many earthquakes, every now and then we have one, but our answer to everything was hide under your desk at school. So yeah, not if you're on fire, but earthquake,
the Russians, you know, nuke us,
which was a thing we were scared about in the 80s.
I mean, whatever else it was,
it was always hide under your desk.
Exactly.
With moderate success.
No, that makes sense.
And I do think we need to do a lot more
of getting these playbooks ready and then testing.
I mean, a lot of organizations will create the book, never update the book, never test the book,
in which case, you know, why even create it in the first place?
You're just going to run into a problem along the way. But going through those processes is an important thing. The other thing that you guys did recently, and I know this was part of a recent blog, I believe, and I haven't had a chance to play with it yet, but you're doing some PowerProtect integrations with other parts of the Dell portfolio as well. So it's not just backup and recovery, not just cloud, not just services. You've got some new PowerStore stuff too, don't you?
Yeah, so if you think about,
we want to make sure that the product that we're building
has a lot of great differentiation.
And so you brought up the stuff that we've done with VMware,
some of the next generation workloads,
the stuff that we're doing around orchestrated recovery. But you have a really good point,
which is that one of the things that we have
as an advantage with Dell is that we do have
a very broad footprint of Dell storage in the data center.
And for customers who are backing up stuff
from PowerStore or from PowerMax,
who want sort of a direct from storage to backup path
that they can basically set up,
okay, I'm gonna back up these storage targets
directly into my DD or my PowerProtect DD environment.
We're building that storage direct capability
in the PowerStore.
We had it in PowerMax,
we're kind of rebuilding it to make it easier to use. But that gives you a very high-performance
direct path from storage
into the backup target.
It's easy to configure, easy to use,
and extremely beneficial
for mission-critical workloads
where you need that performance.
You need that direct connectivity.
Well, I mean, you guys have been telling the Better Together story forever, since you really had, you know, the server business, then networking and storage all coming together. I mean, it's been a big part of the Dell Tech trumpet. But seeing these little, you know, I call them little, it's probably a bigger deal than that, but seeing these little integrations across the portfolio does really hammer that point home, that there's reason for working across the portfolio and there are benefits to doing that.
Absolutely. And it doesn't just stop at the block products.
So even with PowerScale, we have that high-speed NAS backup as well. So, you know, as I say, when we go in, we're able to go and position, hey, for all of your storage needs, we have best-of-breed file, we have best-of-breed block. And oh, by the way, if you want to back it up, we have tight integration. And so to your point, it is very much a better-together story.
Well, and I suppose, too, as you continue to disassociate software and hardware, I mean, obviously you need both to run, but that's the Alpine project, right? To pull some of the storage software, you know, apart from the purpose-built hardware, so that it can run in the cloud or run on more PowerEdge-type infrastructure, conceivably. But having those connections between the applications and the underlying OSs is really going to add, I think, more value potentially as you go further down that path. Absolutely. So your choice of deployment,
your choice of where you want to home things,
your choice of consumption model.
So obviously, with Alpine, we're sort of moving things to more software-defined, and consumption models become more flexible as well. And then your licensing models: am I going to pay as I go, you know, which is more of a cloud-like model?
And then how does all of that then tie together
and how do we orchestrate data mobility
between these products as well, right?
So both backup and the restore
or replication between products that exist on-prem
and potentially in the cloud.
And you start to get some very interesting topologies
between people running things on-prem and in different clouds
and then moving data between on-prem and the cloud
or into colo locations where they can get potentially reduced
or zero egress fees and then serve multiple clouds
from one set of infrastructure.
All of these things become possible.
And so then it becomes up to the customer to go and say, okay, well,
let's work with Dell as a trusted partner to figure out based on the problem
we're trying to solve, where is the best place to put our infrastructure?
So it meets our needs and gives us the best TCO gives us the best
performance and reliability.
Yeah. Yeah. Well, it makes a lot of sense. I mean, as I've said throughout
this conversation, we played with the deployment, the VMware integration, a couple other points
along the way, and it's all very simple. And that's one thing that backup and recovery, in my
view, has really suffered from over the years is that it was always thought of that it had to be robust and reliable and
so many nines of availability because you just can't lose data there. And that's a great
underlying tenet. But usability and ease of management, I think, somewhere along the way, didn't quite fall by the wayside but weren't the priority. But clearly with the DM5500,
you guys have made tremendous strides
there. And it's pretty clear to see where you're going with it too, which is kind of neat because
with many products, you can't often telegraph too much for all sorts of reasons. But the
direction you're headed down here makes a lot of sense to me. I think it was hard from a marketing perspective.
You always want to hone in on three words to describe your product, right?
But if I want to just throw away the three-word rule for a minute,
really what we're trying to aim for is simple, secure, modern, resilient, right?
And then, as you can see, we're adding a scale factor
that's coming in with the expansion shelves
and then where this could potentially go
because it's containerized.
And of course, making it multi-cloud
or hybrid cloud,
whatever you want to call that,
it's kind of another even dimension
that we could build in
or will be building in
to that integrated appliance experience.
And so it's hard to pick which of those three you want to hone in on.
But the first four are certainly the key tenets, right, that simple and secure, along with a modern resilient architecture.
Resilience is really important, because when we're shipping a new product like this, we want people to understand that this is not, hey, we-just-built-this-yesterday type technology. For example, the Data Domain virtualization, or DDVE, that ships inside that product, that
is tried, true, and tested technology. Very resilient, super efficient, but made better with that wrapper around it.
Yeah, absolutely. Yes, there are a lot of tools, but at the same time,
part of the allure for the 5500 is that the customer doesn't really need to know that. If
they want to know that and you want to have that conversation with a senior technology official at an
organization or even the practitioners, that's great. But like you said,
at the onset, the person, a VP or up, that you want to have hands-on experience with this, they'll never know and never have to know what's behind it. Right.
I mean, that's part of the beauty of it.
Outwardly simple, inwardly rich and complex. It's got to have the depth on the inside
to give you the assurances that this is a tried and tested technology, but the presentation tier has got to be elegant and simple. I feel like you're describing somebody I just met at a dinner party about two weeks ago, but that's funny. All right, so we've played with this thing a little bit.
We've got content out. We'll link to it in the description here. We'll link to the product page
so that people can check that out. How else can someone interact with one of these appliances?
What's the best way to check it out to get hands on?
We have hands-on labs, I believe.
Gosh, I was just looking at the URL the other day.
We'll fire that in too.
Yeah.
And then, you know, reach out to your channel partner, reach out to your Dell sales reps
and we'll figure out how we can get you a demo. Those demo systems are available, and there are seed units all over the world at different channel partners, where they're actually floored and you can actually go and get hands-on. Like you said, it's very simple. And it's modern: it's got the containers that we talked about, support for Kubernetes and all sorts of other applications.
And I know your software rollout cadence is relatively quick here too.
So as you guys are developing new technology in the overall platform, it's getting pushed
out here too relatively quickly.
So that's all good to see.
So I think this will be a dynamic platform and something to watch to continue to see
what you guys do with it, so we're looking forward to that growth as well.
Thanks. Yep, I'm very excited.
Good. Well, thanks for doing the pod again, David. Appreciate the repeat performance. It must not have been too painful last time if you came back, and I'm sure we'll talk again soon. We've got Dell Tech World right around the corner.
Always happy to chat with you.
All right. Thank you.