Grey Beards on Systems - GreyBeards talk with Lee Caswell and Dave Wright of NetApp
Episode Date: April 11, 2016. In our 30th episode, we talk with Dave Wright (@JungleDave), SolidFire founder, VP & GM SolidFire of NetApp, and Lee Caswell (@LeeCaswell), VP Products, Solution & Services Marketing, NetApp. Dave's been on before as CEO of SolidFire back in May of 2014, but this is the first time for Lee. Dave's also been a prominent guest …
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks.
Welcome to the next episode of Greybeards on Storage monthly podcast show where we get
Greybeards Storage and system bloggers to talk with storage and system vendors to discuss
upcoming products, technologies, and trends affecting the data
center today. This is our 31st episode of Greybeards on Storage, which was recorded
on March 29th, 2016. We have with us here today, second-time guest star Dave Wright,
VP of SolidFire of NetApp, and first-time guest Lee Caswell, VP of Product and Solution
Marketing. Dave, why don't we start with you, and why don't you tell us a little bit about
what you're up to nowadays? Yeah, thanks, Ray. Well, as you mentioned, I'm now
running SolidFire within NetApp as the Vice President and General Manager of SolidFire.
And that really means that in a lot of ways, I'm doing the same job I used to do as CEO of
SolidFire, now with a much larger supporting cast in NetApp. And so we're working on, obviously, continuing to develop the product.
We've got a great roadmap we're working on here,
continuing to grow the team, invest in engineering resources
as well as field resources,
and obviously tap into the tremendous field and channel
and distribution capabilities that NetApp has
and integrate with their go-to-market engine.
So are you finding working with unlimited funding like this relatively interesting?
Well, I wouldn't say we have unlimited funding by any stretch,
but I think that we do have certainly access to funding that we didn't before
and, more excitingly, access to a whole set of resources that are already funded at NetApp
that we didn't have access to before and really figuring
out how and where we can use those and ultimately get our product into more customers' hands
is the goal here. Yeah, yeah, yeah. The multiplication of boots on the ground must
really be an interesting impact for your operations folks. Yeah, I know it is, obviously.
A lot of inbound inquiries to deal with here.
NetApp has a reputation of not handling acquisitions as well as, say, Cisco.
But, sorry, Lee, didn't mean to go right for the ribs at the first blow.
There you go, yeah.
But I'm pleased that they've decided to leave you guys more or less alone in Boulder to keep doing the good things you've been doing. I think that's a very encouraging sign for me. Yeah, I think so. I mean, I think
NetApp has learned both good and bad lessons from acquisitions in the past. And, you know,
they've had their issues with some, but they've been very successful with others. And the Engenio
acquisition, the acquisition of Onaro that became their OnCommand Insight product, as well as the
Bycast product that's now StorageGRID Webscale. All of those have turned into real products,
generating real revenue. And I think they've learned lessons about how to do that successfully.
SolidFire, as the largest acquisition ever, is one that they have a keen interest in making sure they
continue the momentum that we have here. And obviously, leaving our engineering resources
in Boulder,
having us continue to invest in a center of excellence here is a big part of that.
Yeah, so the cloud aspect of all this and the scale solution that, you know,
I listened a little bit to the storage field day. Sorry I wasn't there when you presented and stuff like that.
But the "Lies That Flash Vendors Tell" session was extremely interesting.
But as you got farther into it, you started talking about, I would call it product positioning of ONTAP, Flash ONTAP, and EF, and SolidFire.
And the scale aspect of it became much, much more interesting.
I just don't understand how the cloud applications take advantage of Flash.
Maybe you can talk a little bit about that, Dave.
Sure. Well, I think part of it is,
you know, there's this thinking out there that, hey, only really high-performing applications
need or benefit from Flash. And I think that is a pretty old-school way to think about Flash at
this point. You know, Flash is really just the future of primary storage, and anything that
needs, you know, real-time access to data is going to run better on top of Flash.
Even if it's not consuming hundreds of thousands of IOPS, just the low latency on the IOPS that it does need is a huge part of that.
And a lot of what's happening with next-generation application development is really about a range of data access patterns, a range of databases, but all of these things, whether it's Hadoop or Cassandra or MongoDB or
just traditional MySQL and Oracle databases, all of them need consistent performance access to data.
And the challenge when you get into a cloud scale environment is not how do I serve one of these
applications, but how do I serve hundreds or thousands of them at the same time? And Flash
in and of itself doesn't solve those problems. And that's really what we set out to solve with
the architecture of SolidFire, designed to use our scale-out and quality of service capabilities to
deliver consistent, reliable performance to large numbers of applications at the same time.
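The quality-of-service idea here can be sketched as a per-volume IOPS budget. This is an illustrative model only (the volume names and numbers are invented), not SolidFire's actual implementation, though Element OS QoS similarly gives each volume a guaranteed minimum and a maximum cap:

```python
def allocate_iops(total_iops, volumes):
    """Share a controller's IOPS budget across volumes: every volume
    gets its guaranteed minimum first, then the surplus is spread
    around without letting anyone exceed its maximum cap.
    Assumes total_iops covers the sum of the minimums."""
    alloc = {name: v["min"] for name, v in volumes.items()}
    remaining = total_iops - sum(alloc.values())
    while remaining > 0:
        # Volumes still below their cap share the surplus evenly.
        hungry = [n for n, v in volumes.items() if alloc[n] < v["max"]]
        if not hungry:
            break  # everyone is capped; leftover IOPS go unused
        share = max(1, remaining // len(hungry))
        for n in hungry:
            give = min(share, volumes[n]["max"] - alloc[n], remaining)
            alloc[n] += give
            remaining -= give
            if remaining == 0:
                break
    return alloc

# Hypothetical volumes sharing a 60,000-IOPS controller.
volumes = {
    "oltp-db":  {"min": 15000, "max": 50000},
    "web-tier": {"min": 2000,  "max": 10000},
    "batch":    {"min": 500,   "max": 20000},
}
print(allocate_iops(60000, volumes))
```

The point of the min setting is the noisy-neighbor case: no matter how hard "batch" bursts, "oltp-db" never drops below its guaranteed floor.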
So it's the consistency of the I/O that becomes the determining factor, to some extent, of why they're moving to Flash, and the cost per performance unit coming down low enough to do this cost effectively.
You know, Flash is really a key part of that, but you have to combine it with the scale-out and quality of service pieces to really be effective.
And especially since what we really want is just one pool of storage, not, you know, where we isolated these applications so they wouldn't bother each other.
Yeah, absolutely. And that's, you know, the real key aspect here is when you
think about a cloud or cloud-like architecture, whether it's a public cloud or just an enterprise
private cloud, everybody's trying to move to these pools of resources that they can put workloads
into, take workloads out of, and have it be something
where the infrastructure and applications can be very much decoupled and the application owners
don't need to worry about what piece of hardware they're running on and the hardware owners don't
need to worry about what application gets put on that piece of hardware from day to day.
They can just interact with each other with SLAs around the performance and capacity requirements
that they have. Yeah, and from the storage guy's point of view, the annoying thing about cloud is we don't get
information about applications until they're up and running. So you really do want a deep pool
that'll service anybody rather than saying, well, this application, I can plan for it and
build the storage to support it.
Absolutely. I mean, that is kind of the huge challenge of these cloud infrastructures is when you create an infrastructure that's designed to be very dynamic and designed to be very easy
to provision new workloads on, that's exactly what's going to happen. Things are going to show
up and you've got to be able to deal with them. And it would be nice to be able to deal with it
by having an infinite pool of capacity and performance.
But, you know, that's just not the case for anybody.
Well, it requires an infinite pool of money, right?
Yes.
And sadly, nobody seems to have that these days.
As we just established, not even NetApp.
Apparently.
You know, I look at what's going on in the cloud and stuff like that.
You know, when I fire up applications in the cloud, I don't typically use SSDs and stuff like that.
Of course, I'm not doing these 100,000-user video game MMOGs or whatever.
I'm just going with strictly magnetic disk in some form or another.
Well, it's actually a little surprising.
Most people don't realize that the default cloud instances on Amazon and Google these days are SSD-based. The default EBS storage on Amazon and the default block storage option on Google are SSD-based storage. And the disk-based instances that still exist are as much just kind of a legacy of the replacement cycle as anything else. And that's not to say
that disk is going to go away in the cloud. Obviously, the object storage and cloud archive
tiers and things like that are continuing to run on disk. But we are seeing cloud providers
rapidly replace disk, at least as a primary storage medium for both local attached storage, as well as network block storage with SSD. Yeah. Yeah. I mean, I look at some of the
instances I fired up. Some of them, it's kind of interesting. The operating systems are on SSDs,
but when you go after the volumes and stuff like that, you've got a lot more options to specify.
I'm not sure if it's defaulted across the board. Depends on the instance, maybe,
or something like that. I'd have to look at that, that they're tied to.
Yeah, I mean, Amazon still has some of their legacy EBS options that are disk-based,
but the newer options they've introduced are all flash-based,
and like I said, on the newer instances, those are essentially the defaults.
Yeah, you're talking about the ephemeral storage, right, Dave?
Well, actually both.
I mean, obviously the newer instances use SSD ephemeral storage,
but also the EBS, the elastic block storage on Amazon,
the newer kind of flavors of that, if you will, are all flash-based as well.
Yeah.
I guess I'll have to check next time I fire up an instance,
see what it says from an EBS perspective.
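For what it's worth, that check amounts to looking at each volume's VolumeType in the DescribeVolumes output: "standard" is the legacy magnetic EBS type, while "gp2" and "io1" are SSD-backed. A sketch against a canned, trimmed response (volume IDs invented; a real script would pull this with boto3's describe_volumes):

```python
import json

# Trimmed, DescribeVolumes-style response with invented volume IDs.
response = json.loads("""
{"Volumes": [
    {"VolumeId": "vol-0aaa", "VolumeType": "gp2",      "Size": 30},
    {"VolumeId": "vol-0bbb", "VolumeType": "standard", "Size": 100},
    {"VolumeId": "vol-0ccc", "VolumeType": "io1",      "Size": 50}
]}
""")

SSD_TYPES = {"gp2", "io1"}  # "standard" is the old magnetic type

for vol in response["Volumes"]:
    media = "SSD" if vol["VolumeType"] in SSD_TYPES else "magnetic"
    print(f"{vol['VolumeId']}: {vol['VolumeType']} ({media})")
```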
But, yeah, I think you're right to a large extent.
A lot of the new instances, all their operating systems are on SSDs and stuff like that. So Lee, where does Cloud ONTAP and
NetApp Private Storage and all that stuff fit into this new cloud world we're talking about?
I think you're bringing up a number of great points about how flash adoption is
first off changing, right? I mean, Dave says it in a nice way, I think, is that SolidFire really is enabled by Flash, but it's not about just all Flash.
It's about how you use Flash to go and build out new, simple, seamless scale-out systems.
And then, you know, inside of kind of classic NetApp offerings, what we're finding is that customers are adopting Flash in order to simplify management.
It's a remarkably different change. I mean, two,
three years ago, we would go surgically find an application that had to be accelerated,
if you will, and go and apply accelerator because Flash was relatively expensive. But
as the prices of Flash have come down, what you're finding is that customers are looking to say,
hey, I could go and remove the likely culprit of performance issues,
right? I mean, storage has always been the likely culprit because it was still on mechanical drives.
When you remove that, all of a sudden, right, it just gives a lot of power back into the
administrative team saying, hey, I may not know where a performance issue is,
but I'm pretty sure it's not the storage. That's a very different dynamic that's happening.
You know, one of the things-
That's why we blame the network guys, right?
You know, it's so interesting, right? Because a lot of the storage guys, I was with a group last
night of storage folks, and they were basically saying, you know, I spent a lot of my time proving
it's not the storage. They're basically going back saying it could be the application, it could be the
servers. And what you find is it takes a lot of time to go and diagnose where issues, where bottlenecks are.
And so certainly we'll move the bottleneck around with Flash.
But the idea is that I can suddenly now, you know, bring the management aspects in. Because for a lot of folks, what they realize now is that it's only price that keeps disk in play. Outside of price, right? I mean,
Flash dominates disk on whether it's noise, vibration, power, cooling, rack space, even
reliability. And so it's so important right now to be watching some of the dynamics around
the efficiencies you can apply. And then some of the business practices as well around Flash,
you know, how we do controller
upgrades, for example, or how the warranties are extending. I mean, all these things are making
Flash actually even more economic than disk has ever been.
Yeah, I actually got involved in a Twitter argument yesterday with someone who was arguing that 15K RPM disks are still cheaper. And I just went to the Dell site
and fired up a server configurator.
And a 600 gig 15k RPM drive is 600 bucks.
And a 1.9 terabyte SSD is 1900 bucks.
So, you know, there is no price advantage
to the hard drive,
considering the fact that I got to pay
for four times as many slots in
the array. Flash is actually cheaper than high performance disk. You know, Google did an
interesting study recently, right, that they published a couple of weeks back. You may have
seen this, right, that basically their conclusion was, if you ignore the high-performance part of
the disk market (where that is true), their assessment was, hey, for
SATA drives, let's say the just data tub drives, that for the next decade, there wasn't going to
be a crossover point. Let's just assume that that's true for a moment. But then they said,
the differences are so nominal that it's only users at the very largest scale
who will actually care. And then they just, you know, proceeded to describe what that scale is, right?
And they said, you know, so the YouTube property alone is generating a petabyte of new data every day.
So that's scale, right?
And so, but the idea was that for everybody else, right, you know, the management, power, every other benefit, right, says that Flash, you know, will be used on the on-premises and even in any sort of hosting environment.
And then, as Dave mentioned, right, if you looked into any of the performance elements of the cloud.
Now, you asked a little bit about things like NetApp private storage.
You know, how does that play?
Because what you get out of the cloud, right, is clearly you get the flexibility of being able to, you know, burst into the cloud and expand into the cloud at times.
But what's expensive and actually not possible yet is to get guaranteed IOPS or guaranteed performance on a storage level.
And so one of the things we're looking at how customers are investing is they're investing in storage that they own in places where they want to differentiate based on performance.
And increasingly, storage now through Flash is able to contribute to revenue. And storage has
always been a cost center, if you will, right? It was kind of an unnecessary evil of doing business.
But now if you can think about using storage as a way to go and deliver actionable information
more quickly to a society that's increasingly impatient,
you have the ability now to go and basically change the revenue projection or revenue
opportunity for companies based on investing intelligently in storage. And we think that
Flash offers a pretty interesting opportunity for that. Two things I would say, and I'm the only
disk guy left here in the world, but I listened to a little bit of the Storage Field Day Dave Hitz session,
and as he goes around talking to a lot of his customers,
less than 20%, 30%, 40% of them are actually using all flash arrays at this point.
Do you think this is because of just how long it's going to take
the legacy storage that's out there to be transitioned out?
Is that the phenomenon that he's seeing?
I'll answer from our side, Dave, and you can go from there.
From a NetApp standpoint, I mean, I would have to say somewhere around 90% of our customers are using Flash in some way.
Yeah, yeah, yeah, yeah.
But, you know, hybrid Flash or caching Flash and all Flash arrays are distinctly different solutions in that environment, as Dave would say, right?
Yeah.
So what you found was that customers made great economic buying decisions that said, hey, if what I needed was a little bit of flash for high-capacity storage, great.
Flash cache, flash pools, right?
I mean, these were great ways to go and accelerate data ingest and egress out of and into relatively slow drives.
And very economic for data that I may not be accessing all that often.
And now what we're finding, right, and this is a huge opportunity for NetApp right now, is we are finding that customers have a choice.
They can either upgrade their SANs or they can go and find out what is the next platform I want to consolidate onto.
And for customers that have an existing NetApp portfolio, let's say, they're looking and saying, hey, the All Flash FAS is a terrific way to go and add all-flash capability now into probably a flash-enabled high-capacity environment. And the fastest-growing customer segment we have in the All Flash FAS segment today
is SAN workloads that are, wait for it, Fibre Channel attached.
We are taking existing Fibre Channel SANs.
That segment of our customer base has tripled since the All Flash FAS product launch.
And so for customers that want to go and take their existing portfolios
and basically remodel, if you will, that's a fantastic way to go and consolidate onto what was great NAS
performance or NAS manageability and now offers terrific SAN performance. And then for customers
that want to rebuild, we've got SolidFire as like the terrific way. And here I'll just turn it over
to Dave to say, hey, how they're looking at building out an all-flash data center that's really offering a new level of simplicity, elasticity,
and scale. Yeah, I would just echo that, is that customers are still obviously in the early days
in terms of broad adoption of all-flash, but part of that is just the life cycle of these storage systems and,
you know, storage systems that typically last, you know, three, four, five, six, seven years.
There's going to be a transition point over the next couple of years as those come up and refresh
that people are going to move to all flash configurations. And now that you have pretty
much all the large vendors, not just NetApp, having kind of all flash configurations of their
premier arrays that are cost-competitive with, if not less expensive than, performance disk-based systems.
There really is no reason that customers will refresh with disk unless it's purely inertia
that caused them to do that.
And I'm sure for some customers that inertia will continue to have them buying disk for
some period of time.
But for customers that are, you know, keeping track of the way
things are going, there's really no reason not to go to all flash. And SolidFire is really a next
step beyond that. It's really not simply about a commitment to flash. It's a commitment to a new
way of building their storage infrastructure and the elasticity, scalability, and simplicity that
SolidFire represents.
If we could only get corporate America to adjust the budgeting process to fit scale-out better.
No, it's absolutely.
The use it or lose it part just makes me crazy.
It absolutely is a challenge.
When we go out and quote against competitors and one of the big value propositions we bring
is that we can tell them, look, don't buy three years worth of storage.
Buy one year worth of storage.
Come back next year and we can pretty much guarantee your prices will be lower and you can buy what you need next year.
And they say, yeah, but I've got my three year budget right now.
And so the other vendor, that's the only way they sell it.
And so I'm going to try to compare apples to apples.
So quote me three years worth of storage.
And, you know, we can we can do that.
But, you know, it's a good point, right? And the fact that customers' IT buying behavior has not fully caught up with
what the technology is capable of and even where a lot of these customers want to go,
because they don't want to have idle equipment sitting around on the floor. They don't want to
have storage that's consuming space, power, and cooling that doesn't have any data on it. It just
doesn't make sense. But that's always the way that it's been done because architecturally, that was the way that
it had to be done. You had to put in that footprint on day one because it wasn't easy
to expand it down the road. So you had to overbuy and hope that it lasted you for a couple of years.
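Dave's buy-one-year-at-a-time pitch is easy to put numbers on. A toy model, with the $1/GB starting price and the 20% annual price decline invented purely for illustration:

```python
def upfront_cost(total_gb, price_per_gb):
    # Buy all three years of capacity on day one, at today's price.
    return total_gb * price_per_gb

def incremental_cost(yearly_gb, price_per_gb, decline=0.20, years=3):
    # Buy only what you need each year, as the unit price falls.
    return sum(yearly_gb * price_per_gb * (1 - decline) ** yr
               for yr in range(years))

# Hypothetical: 100 TB of new capacity needed per year.
print(upfront_cost(300_000, 1.00))      # 300 TB on day one
print(incremental_cost(100_000, 1.00))  # 100 TB/yr as prices fall
```

Under these made-up assumptions the incremental buyer pays roughly $244K instead of $300K for the same three years of growth, before even counting the space, power, and cooling of capacity sitting idle.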
One of our fellow storage field day delegates, whose name shall be withheld to protect the
innocent, had a VMAX on
the floor in his data center for 17 months without powering it up. They paid for it, but they didn't have any demand; the project never developed and it just sat there wasting money. Yeah. And
it's, you know, it's a shame and it really goes to show where when we talk to customers about,
you know, where the savings opportunities are, it's not just about what's your dollar per gigabyte.
That is really not the way to look at the efficiency of the next generation architecture in terms of dollar savings.
Intel processors still cost a certain amount and SSDs still cost a certain amount.
It's all about how efficiently you can deploy it, what your utilization rate is on that infrastructure, and how moving away from
a bunch of poorly utilized islands that represent years and years of unused capacity sitting there
idle and consolidating those can not only save tremendous amounts of capital expense, but a lot
of operational cost as well. All right, all right. Second question about disk, and I'll end it there,
and then we'll start talking about ONTAP. From a purely throughput perspective, I mean, if you look at SPC-2 results and stuff
like that, there aren't a lot of flash, all-flash devices that have
excelled. Now, recently, VMAX all-flash and 3PAR
made a big splash. But historically speaking, and from just
a plain bandwidth perspective,
SSDs have not done real well on SPC-2 benchmarks
or all flash arrays.
I think there's a couple of reasons for that.
One is that the bus connections themselves
have been huge bottlenecks when it comes to bandwidth on SSDs.
If you're limited by a 3 gig or a 6 gig, you know,
SAS connection, that's a bottleneck relative
to what the chips are capable
of. And now that we're getting to SAS 12 gig and even better NVMe and PCI direct connections,
that uncorks a tremendous bandwidth advantage for flash. The other big difference is that,
you know, disk sequential throughput was the one place where it wasn't completely horrible when it
came to performance. So from a cost per gigabyte per second perspective,
you could make a reasonable looking system with that. But here's where it falls apart.
That looks great on a single stream benchmark where you really are testing sequential performance on a
single effective workload across spinning disk. But that's not the way that people want to deploy
their infrastructure anymore. They don't want to dedicate a storage array to a single sequential workload. They need to be able to share that
resource between a lot of different workloads, some of which may be sequential, some of them
may be random, some of them may switch from day to day. And that is where disk completely falls
apart. It's great on a synthetic benchmark for a single workload, but it's horrible as soon as you
start putting two, five, 10, 20 workloads together on it. Well, even the same, I mean, Lee knows a little bit about
the video and disk surveillance market. Yeah. Right. But if you leave the video surveillance market
and go to video on demand, you know, you're a big hotel and you want to, you know, not pay
LodgeNet and do your own, that disk system is going to work fine when 10 people are watching. But when 200 people are watching 200 different movies, that workload doesn't look very
sequential anymore. Yeah, it starts looking random in a hurry, right? And this is another reason why
I think, you know, disks in like some large hyperscaler applications will persist, not just
because of cost, but actually because they run very concentrated applications, right?
You can actually build, if you have an application at such large scale that you can build custom infrastructure for it,
you might be able to find a workload where disk still makes sense.
But this idea that we're using broad application spaces, and this is pretty interesting, right?
So what we're finding, we've been doing some surveys of All Flash FAS customers, for example, and showing that, as you'd expect, databases, virtual environments, VDI tend to be the top workloads.
But here's some surprises.
SharePoint.
Like, something like 10% of our customers are running SharePoint on All Flash FAS.
I don't normally think of SharePoint as a high-performance app.
Well, no, but SharePoint is a pig.
And you can't say that because you have to be nice to Microsoft,
but I can.
Ask you well.
And throwing all flash on the back end is,
you know,
how you deal with a pig.
We all know that a really good DBA and developer can make bigger performance changes than anything we can do in infrastructure.
I can't tell you how many meetings I've been at where I'll run into the purist who says, you know what, if we could just get the guys to rewrite the applications.
And the answer is, yeah, you could do that.
Or you could throw a flash at it.
Just make it sunshine every day.
Yeah, there you go.
That's why I moved to New Mexico, and I've still got 20 cloudy days a year.
Interesting, interesting.
So disk surviving in throughput and hyperscale applications is probably the only real long-term solution that you see?
Well, and the applications with low I/O density.
But only when you can isolate those applications from ones that don't have low I/O density. And again, that's where, as you move into
cloud infrastructure, it gets harder and harder to do that. When you say I/O density, you're
talking about I/Os per data footprint? Like I/Os per gigabyte?
I/Os per gigabyte. That if you have
the PACS archive for a large hospital of images that
were taken more than 90 days ago, it's an object storage application on 10 terabyte hard drives,
because we're talking about 10 to 12 accesses an hour to the petabyte of data. And those
applications don't go away. And as the size of our objects gets bigger
because it's higher resolution
and the time we retain things gets bigger
because lawsuits keep dictating that,
there's a lot of that data.
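Howard's I/O-density arithmetic for that archive case works out to a strikingly small number (using the rough figures from the conversation):

```python
accesses_per_hour = 12       # "10 to 12 accesses an hour"
dataset_gb = 1_000_000       # against a petabyte of images

iops = accesses_per_hour / 3600   # steady-state I/Os per second
density = iops / dataset_gb       # I/Os per second per gigabyte

print(f"{iops:.4f} IOPS total, {density:.2e} IOPS/GB")
```

At a few billionths of an IOPS per gigabyte, flash buys nothing; capacity-optimized disk (or object storage on it) is the right medium.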
And I love the idea of all flash for transactional,
but the concept of the all flash data center
sounds too much like either the paperless bathroom or the Flash Justice League of America.
You know, you don't need Aquaman very often, but sometimes you need him.
One thing I wanted to come back to, though, is, you know, we started off talking about integration of SolidFire into NetApp.
And I thought it would just be interesting.
You know, one of the reasons that, you know, if you looked at past acquisitions, Spinnaker, for example, or other things, right, I mean, we were integrating technologies into
a very big business of ONTAP, right? And so that was, you know, there's a complicated process
around that, that was difficult and complex, but provided lots of benefits. I mean,
the clustered Data ONTAP right now, right, is one of the lead successes, if you look at it from just an actual shipping volume standpoint.
And this is materially different, right? So SolidFire, there's no plans to try and integrate
what SolidFire has from like an OS standpoint into ONTAP. And at the same time, we are looking at
the concept, or our vision, of a data fabric that brings our products together. And we think,
actually, this makes the SolidFire acquisition even more powerful, right, as we go to integrate that.
We've done some early work with OnCommand Insight to make sure that you can manage and monitor systems just like we did with the E-Series teams.
And, you know, what we'll plan to do over time then is bring also this SnapMirror protocol, the idea that you can share capacity-efficient snapshots across dissimilar platforms. We talked about that
and showed that at our last big event, Insight in Berlin, about how we're allowing that or support
that now by taking ONTAP snapshots, for example, and being able to natively put them on something
we call AltaVault, which is an on-premises caching device that then streams data up into the cloud, an S3-compliant cloud, right?
This is a way to go and basically have local restores from on-premises gear,
but then tap into the cloud as data ages out so that you can go and basically offload the, you know, overriding storage requirements.
And so, you know, that idea that we're bringing SolidFire into the data
fabric is a very different concept and one that we're really enthusiastic about. Dave, you may
want to comment on that too, about how that's materially different. That's not an integration
effort, right? That basically says that ONTAP is one of several architectures we have, including
the E-series, right, where we use that for the line of business or just looking for the ultimate,
you know, consistent low latency response times that are still kind of that classic performance.
So somewhere in here, I probably ought to say that NetApp is a customer of mine.
And I've done some work in the data fabric space and in the cloud space as well with NetApp private storage and stuff like that.
So I just wanted to put that out.
So while we're on disclaimers, both NetApp and SolidFire were clients of mine before the acquisition.
Okay.
Hopefully will remain clients of mine in the future.
Yeah, yeah. So, Dave, the only other thing I wanted to say was Howard and I have been talking a little bit about some papers Google presented at the most recent FAST conference. It's not redundancy,
it's latency.
It's actually the far-out latency, the long-tail latency.
Yeah, they did a paper basically throwing every idea they've ever had for designing a disk drive at the wall, seeing which one Seagate or Western Digital would pick up on.
But one of the interesting things they were talking about
was tail-end latency.
It's not the average latency that gets them in trouble.
It's the 99.5th percentile latency,
how long it takes when one of the drives in a RAID array
doesn't actually return the data
and they have to do an error recovery. But the same concept applies to flash. I mean, in flash storage, there are some flash
storage products out there, which will remain nameless, that do very well at low latency levels,
but at tail latency do not do very well at all. Yeah, you know, just going to a
higher level, that's a big reason why hybrid storage has really hit a limit
in terms of the performance benefit that it gives. I always describe hybrid storage as a great
performance benefit from the array's perspective. It offloaded a bunch of I/O from the disks,
but not really much of a benefit from the application's perspective. Because that tail latency on the
I/Os that were still hitting disk was very high, applications themselves didn't really change
their performance profile dramatically. You were able to get more of them onto a similar disk
footprint by having that flash caching in front of it. So it really helped the array itself,
but it didn't really help the applications. And going to all flash where you eliminate at least
the disk tail latency is a big portion of that. Now, obviously, the flash architecture
and how consistent the performance is on the flash architecture
is very important as well, you know, which is a big
reason why, you know, we focus on the quality of service with SolidFire to deliver the consistent
performance and make sure that, you know, a lot of the tail latency type things can come from a
variety of different places. One of the big places they can come from, though, is application
workloads are naturally bursty. And if you have
different workloads sharing the same storage system and one bursts at one time, another bursts
at another time, the impact of that is going to be seen by the other workloads as temporary
spikes of I/O latency on those other workloads. And so that's really where there's a number of
ways to counteract that tail latency on a flash platform, and quality of service is an important part of that. Because as soon as you get multiple applications, as good as your architecture may be, you can't hide that kind of noisy neighbor effect, even for short periods of time, when an individual application does a database flush or something like that.
It causes a spike in activity on a controller and everybody else on that controller
gets to feel that effect. Yeah. I would say there's one exception to that with hybrids and
that's the SME case where I can pin my SQL server that serves my Microsoft Dynamics to the flash
layer and just have one storage system that's cheaper and I don't have to worry about it.
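The noisy-neighbor throttling Dave describes a moment earlier is commonly built on something like a per-volume token bucket. Here's a minimal sketch in Python; the class, parameter names, and numbers are illustrative assumptions, not SolidFire's actual implementation:

```python
class TokenBucket:
    """Toy IOPS limiter of the kind storage QoS features use to cap a
    noisy neighbor. Names and numbers are illustrative, not SolidFire's
    actual implementation."""

    def __init__(self, max_iops, burst):
        self.rate = float(max_iops)   # tokens (I/Os) refilled per second
        self.capacity = float(burst)  # short bursts above max_iops allowed
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        """Return True if one I/O may proceed at time `now` (seconds)."""
        elapsed = max(0.0, now - self.last)
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A workload capped at 1000 IOPS with a 50-I/O burst allowance.
bucket = TokenBucket(max_iops=1000, burst=50)

# 200 I/Os arrive in the same instant: only the burst allowance gets
# through; the other 150 are throttled and must be queued or retried.
admitted_now = sum(bucket.allow(0.0) for _ in range(200))    # 50

# 100 ms later the bucket has refilled (capped at the burst size),
# so another 50 are admitted immediately.
admitted_later = sum(bucket.allow(0.1) for _ in range(200))  # 50
```

The effect is exactly the one described above: a database flush from one tenant exhausts that tenant's own tokens rather than spiking latency for everyone else on the controller.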
Yeah. And again, I think that's certainly the case as well.
Again, it's a case where...
Once you get up to enterprise.
Yeah, if all I need is one storage array anyway and I can get it done with a hybrid array,
then that's great.
But you're right, once you get to a larger scale where you're going to have multiple
storage arrays anyway, you might as well get the important stuff on flash, get the stuff that really is cold on disk, and give that performance benefit to the
applications rather than just to the storage array itself. Certainly once you get to three or four
storage arrays, it doesn't make sense anymore. And I would go back and argue that anybody that
still has a single storage array that is their entire IT infrastructure probably needs to take a closer look at the cloud these days.
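The tail-latency point from earlier in this discussion is easy to demonstrate numerically. This is a toy simulation under assumed numbers (0.5 ms flash hits, 10 ms disk misses, 95% hit rate), not measurements from any real array:

```python
import random

random.seed(0)

def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p percent
    of the samples fall."""
    s = sorted(samples)
    k = min(len(s) - 1, int(p / 100.0 * len(s)))
    return s[k]

# Hypothetical hybrid array: 95% of reads hit the flash cache (~0.5 ms),
# 5% miss to disk (~10 ms). The numbers are illustrative only.
latencies = [0.5 if random.random() < 0.95 else 10.0 for _ in range(100_000)]

mean = sum(latencies) / len(latencies)   # ~0.975 ms: the average looks healthy
p995 = percentile(latencies, 99.5)       # 10.0 ms: the 99.5th percentile is all disk
```

Because the miss rate (5%) is far larger than the tail being measured (0.5%), the 99.5th percentile sits entirely in the disk region, which is why an application sees hybrid storage as slow even when the average latency looks fine.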
There are a couple of vendors out there pushing
hyper-reliable storage arrays that are hybrid today
because they believe
they have to manage petabytes of storage effectively
within one storage array and things of that nature.
So in those cases, it's kind of interesting how it all plays out.
But, you know, their statement is,
if I only need a couple hundred terabytes, an all-flash array
might be the way to go.
But if I'm talking multiple petabytes of storage,
then the hybrid storage makes a little bit more sense.
And it's not, these guys aren't hyperscale.
These are big enterprises that are looking at this sort of stuff.
Well, again, it really depends on,
to your point earlier,
it really depends on the IO density
of those applications.
If they have extremely low IO density,
you know, then a hybrid story
is going to make more sense.
I mean, honestly, if you have the choice,
having at least some flash in your system
is probably better than having no flash,
but you probably don't need
a very high concentration of it
if their IO density is really that low. So there is still a place for disk in the data center, but as soon as
you get past that saturation point, you're just going to be fighting fires left and
right. And that's what customers ultimately want to avoid, this problem where,
yeah, it kind of sort of runs, but you know, every couple of weeks I'm fighting another
performance problem because it's really
not running that great. If you live right at the edge, you fall off the cliff, and that's
always a bad thing. So I have a question for Lee. Are you guys going to be talking more about what
for NetApp, in terms of public image, have been the secondary products, like StorageGRID?
We don't hear about it very much. Yeah, I'll say that. Yeah, I think, you know,
if you look at NAB coming up, for example, we just announced that we've got, you know,
here's 10 customers that are using object storage in the media and entertainment market.
And certainly where we're taking object storage right now is, I think initially there was this
idea, hey, everything's just going to be flash on-prem and, you know, objects, you're going to basically have fast and slow almost, right, or fast and cheap and deep.
And what we found, right, is that, you know, hey, there's been a resurgence of NAS workloads.
I think we were all pleasantly surprised by that as we've watched NAS and even
NAS with flash, right, that dynamic. And so what we've done is we've taken our object
storage, where we have a terrific solution here, backed by E-Series, supporting up to billions
of objects. And we're taking that now with a more vertical approach, media and entertainment, oil and
gas, financial services, healthcare, right? These are the four key verticals that we've-
The usual suspects. The usual suspects, right? They've got very large repositories. Managing that, right,
can start to become an issue. Managing that across distributed geographical territories,
right, can become interesting with the erasure coding across locations. And so we're taking a
more, I think, just more a stepping stone approach to this that
says, let's go get these vertical market solutions well solved, show the success,
give recipes out to say, here's how you would go do that. And we've got a very proven solution here.
I think that is the way that object storage is going to go, as opposed to, hey, everyone's
going to have an object store immediately. We found that probably wasn't the way it was going to play out.
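The geo-dispersed erasure coding Lee mentions trades rebuild traffic for capacity. A quick back-of-the-envelope comparison, using a hypothetical 6+3 code as an example (the specific k and m values here are assumptions for illustration):

```python
def ec_stats(k, m):
    """Storage overhead and loss tolerance for a k+m erasure code: an
    object is split into k data fragments plus m parity fragments, and
    any k of the k+m fragments are enough to rebuild it."""
    overhead = (k + m) / k   # raw bytes stored per byte of user data
    tolerated = m            # fragment (or site) losses survivable
    return overhead, tolerated

# Three full replicas cost 3.0x raw capacity and survive two losses.
# A hypothetical 6+3 geo-dispersed code costs half that and survives
# three fragment (or site) losses.
rep_overhead, rep_losses = 3.0, 2
ec_overhead, ec_losses = ec_stats(6, 3)   # (1.5, 3)
```

That capacity math is why erasure coding tends to win for the very large, geographically distributed repositories in verticals like media and entertainment.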
Oh, yeah.
Well, certainly not.
Just, you know, our friend Martin Glassborow has blogged about how, you know, he's got
the perfect set of data for an object store, 24 high definition cameras of rain at the
Masters tournament.
You know, it's easy for me to say, you know, because I think Dave's come in the most recently.
You know, I came in about, I don't know, just over a year ago, right?
And, you know, if I look back, I think what happened here was that the company basically,
you know, was heads down to make sure that our ONTAP software, right, was fully baked,
at full feature parity, as we went to go and build this scale-out system, a clustered
system, and bring out all the features of that. Now, right now, we're back on offense. And what
you're going to find is the assets that we have internally, whether it's our object storage
software, whether it's the work that we're doing on OpenStack, for example, and we've got a great
set of customers. SolidFire, obviously, a strong leader in that space.
But, you know, NetApp by itself.
Yeah, I was smiling when you said OpenStack with Dave on the line.
Yeah, of course, right?
But then you look and say, well, you know, the lead contributor to Manila has been NetApp.
You know, lots of interesting opportunities for us to go and just take the idea that, you know,
we kind of got through that period of, let's call
it code completion on a major product, one of the leading, certainly one of the leading products in
the industry, and then basically going and expanding out from there. So yeah, expect to see more
visibility on these other devices now that we've got our Flash product. You'll also see that in
Converge infrastructure, by the way.
I mean, the FlexPod solution, you know, with all Flash,
all Flash is reinvigorating, rejuvenating the entire converged space,
you know, which is different than HCI, by the way.
And the way that I think about that is... Oh, very.
It's a question of who has the architectural keys first, you know, and where you scale.
We think the opportunity for
converged infrastructure right now to be converged not just across storage, compute, and networking,
but to be converged across SAN and NAS is immense. We have a terrific opportunity at a time where
EMC customers and partners are fearful that Dell will be in their accounts. There's never been as big, I think, a sea change happening
where those customers today are looking, thinking,
especially if you're a Vblock customer,
like Vblock customers are looking, thinking,
wow, what's happening next?
They have to do something different.
Certainly if you became a Vblock customer because you loved UCS, I would start thinking
that buying Cisco servers from Dell EMC might be an issue.
I can see Michael Dell coming in to help close the deal.
All right.
Lee or Dave, any final comments you'd like to add?
Yeah, super excited about that.
Thanks for the time.
Thanks for getting out.
The idea of Flash, Disk, Cloud, all of these, right, are changing the nature of storage decisions.
And suddenly, like, storage is the key decision.
It is.
It always was in my mind.
And as analysts, we really like it.
Yeah, yeah.
Well, this has been great.
It's been a pleasure to have Lee and Dave with us on our podcast.
Next month, we'll talk to another startup storage technology person.
Any questions you want us to ask, please let us know.
That's it for now.
Bye, Howard.
Bye, Ray.
Until next time, thanks again, Lee and Dave.