Grey Beards on Systems - GreyBeards talk all-flash arrays with Eric Herzog, CMO and SVP Alliances at Violin Memory Systems
Episode Date: July 21, 2014. Welcome to our 10th monthly episode where we return to discussing all-flash storage arrays, this time with Eric Herzog, CMO and SVP Alliances for Violin Memory. The GreyBeards haven't talked with an all-flash array vendor for a couple of months now and it seemed a good time to return. Eric claims to be a fellow Greybeard.
Transcript
Hey everybody, Ray Lucchesi here and Howard Marks here.
Welcome to the next episode of Greybeards on Storage monthly podcast,
a show where we get Greybeards storage and system bloggers to talk with storage and system vendors to discuss upcoming products, technologies, and trends affecting the data
center today. Welcome to the 10th episode of Greybeards on Storage, which was recorded July
11, 2014. We have with us here today Eric Herzog, Chief Marketing Officer and Senior Vice President
of Alliances at Violin Memory. Why don't you tell us a little bit about yourself and your company,
Eric? Ray and Howard, thank you very much for inviting us to participate in Greybeards.
And while I don't have a beard, I am a gray beard, having been in storage for 29 years.
I have worked at several of the large storage companies, EMC most recently, IBM,
Maxtor, the disk drive company, before its acquisition by Seagate, but also seven startups.
So I sort of understand not only how the big companies view the storage market but also
how the little guys do.
Having done seven startups, this is a new world record for us, Howard, I think.
Well, it's the first time we've had a fellow Greybeard. You know, mostly we've been dealing with youngins. Oh, Marc. Yeah, okay, Marc
counts. All right. You know, Rick Vanover hasn't had time to have worked for more than two
startups. Yeah, yeah, yeah. I'm sorry, go ahead. Go ahead and talk to us a little bit about
Violin. So I came to Violin about four months ago from EMC. Violin is one of the
leading manufacturers of all flash arrays. We have 350 employees. We're based in Silicon Valley,
but sell globally. We have a set of global customers, close to 400 now, spanning Asia, Europe, and the Americas. So our focus tends to be in the
mid-enterprise to global enterprise. In fact, three of the 10 largest companies in the world
use Violin arrays in production right now for their flash needs. So broad set of customers,
hardware and software in the frames; we're not just a hardware company. We've introduced a whole bunch of new products this year, our Windows Flash Array,
and most recently our enterprise class data services software suite, the Concerto,
with snapshot replication and all of the normal things you'd expect to see
when you're selling into the large enterprise accounts across the globe.
I was particularly pleased to see you guys finally come out with Concerto.
My impression was that Violin was the leader of the drag racer school of all flash arrays.
Okay.
They go really fast, but that was pretty much all they did for a long time.
Well, that's absolutely a fair assessment.
Having been at a competitor as recently as four months ago, coming from EMC, our perception was that Violin was a really strong hardware company focused only on performance. And as we all know, being the graybeards that we are in the storage space,
particularly in enterprise accounts, while performance is clearly a critical factor,
it's also about the economics of the data center
and also about how you keep that data protected.
If it's a tier one application for Oracle or SQL or your Hyper-V farms
or your VMware farms or whatever it is, the global enterprise,
me being in Silicon
Valley, is certainly concerned about when there's really an earthquake. And as great as anyone's
arrays are, when the earthquake really hits, or that hurricane we experienced last year, I guess
it was, in the Northeast, well, the data center really gets hurt, whether people believe it or
not. And you've got to be able to snap the data, replicate the data, and make sure it's safe.
So, Howard, that is absolutely, A, appreciate the comment,
and B, it absolutely was a very fair assessment.
We were perceived as probably the best hardware company in the all-flash space,
but having no clue about software.
And we've rectified that with the Concerto release and then also the Windows Flash Array,
which also offers replication,
migration, snapshotting, dedupe, etc. So, you know, we've learned that these big giant enterprises
don't just need cool hardware, they need the software to protect the data that's sitting on
that great hardware. Yeah, and, you know, and when violin started, there wasn't a whole lot of competition in really go fast.
You know, there was violin and TMS.
But, you know, as the market has gotten crowded, services have become really significant.
So tell us a little bit about replication, Eric.
How does replication work with violin memory systems? So our replication strategy can be either full box and we scale up to 280 terabytes of raw flash with a single namespace or it can be done
at the LUN level. From a replication perspective, you have that granular control or box level
control, your choice. We have asynchronous replication for long distance, of course, and when we do replicate, we compress the data before we send it if it's compressible.
We dedupe the data before we send it if it's dedupeable.
We encrypt the data using AES encryption, so it's actually encrypted in flight.
We can do bandwidth throttling, which, of course, with a dedicated replication link is not necessary. But for companies that share the
replication with regular data traffic, you may want to throttle the bandwidth allocated to the
replication, depending on your needs. We have synchronous replication, local, so box to box, building to
building, floor to floor. So replicate across the street synchronously. We also have synchronous replication with a stretched geo-metrocluster
up to 100 kilometers distance from the source.
So a wide range, in fact, having come from the largest storage company in the world
and the second largest storage company in the world,
those are sort of the checkbox items we had at those former companies,
and we now have those at Violin ready to roll and actually have customers in production using these systems.
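As an illustration of the pre-send pipeline Eric describes (dedupe, compress if compressible, encrypt in flight), here is a minimal Python sketch. The class and method names are hypothetical, not Violin's implementation, and a keyed-hash XOR stream stands in for AES, since Python's standard library has no AES cipher.

```python
import hashlib
import zlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for the AES encryption done in flight; a real
    # implementation would use an authenticated AES cipher.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

class ReplicationSender:
    """Dedupe, then compress (if compressible), then encrypt each
    chunk before it leaves the source array."""

    def __init__(self, key: bytes):
        self.key = key
        self.sent = set()  # fingerprints of chunks already replicated

    def prepare(self, chunk: bytes):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.sent:
            return ("ref", fp)  # dedupeable: ship only a reference
        self.sent.add(fp)
        squeezed = zlib.compress(chunk)
        payload = squeezed if len(squeezed) < len(chunk) else chunk
        return ("data", fp, keystream_xor(payload, self.key))
```

Synchronous versus asynchronous mode, and the bandwidth throttling mentioned above, would sit a layer above this, deciding when `prepare`'s output actually goes on the wire.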
Yeah, that's great. That's great. So, I mean, you mentioned deduplication as well. I mean,
I understand deduplicating prior to, you know, replication and stuff like that. Do you have
deduplication internal to your Violin storage arrays and stuff like that? On the Windows Flash Array, we
have dedupe post-process, and all I can say is everyone should come see us at VMworld
San Francisco, and they'll see some new things in the inline dedupe and compression market space.
good because that's becoming table stakes. The nature of Flash eliminates a lot of
the performance penalties that deduplication created on spinning disks. And you already
mentioned the word economics. If a little bit of software can let me store five or six times as
much data on that expensive Flash, I'm all for it.
Yeah, absolutely. And we see that as an important need. We also see something that, you know,
personally, having done this for 29 years, I find somewhat misleading is many of the companies lead with their deduped capacity. And that's great if you've got VDI. But if you're not running VDI and
you're running other workloads, such as your Oracle accounting system or your manufacturing system or your HR system or your payroll system or your ERP system, the dedupability of those data sets doesn't help you.
But that data does compress well.
Yeah. And, you know, and while vendors who don't do data reduction in the storage generally
then respond with, but you could do columnar compression in Oracle, let me say, and that
means that I'm going to have to spend more on my Oracle licenses because I'm going to
be using more cores and that stuff don't come cheap.
Well, as I mentioned, people who come see us at VMworld in San Francisco,
VMworld in Barcelona, and Oracle World in San Francisco, we'll see some things
in this technology area that we're discussing right now.
So tell us a little bit about the applications where you find a lot of success.
So for us, it's been tier one and tier two workloads.
You know, we're not really good at helping in the backup space, the archive space, any of that. It's
tier one and tier two applications. Our customers span the gamut. So we have people using us with
Oracle, people using us with SQL, both in OLTP type applications, as well as data warehouse
applications. We actually have a number of people using us in some of the newer databases.
So Cassandra, Hadoop, and MongoDB, we have multiple customers using those databases with
violin technology.
Wait, wait, wait, wait, wait, wait.
I always thought that Hadoop was a DAS configuration and stuff like that.
What is a Violin memory system doing in a Hadoop cluster?
Well, we actually have some people who've decided that,
one of which is one of the largest Web 2.0 companies on the planet,
that Hadoop is a great technology, but it does not have to be DAS.
They've deployed it as a SAN configuration, and they're getting incredible performance.
Now, that's part of the reason they went with Flash,
is that if they tried that with an all hard drive array or hybrid array,
they wouldn't have been able to do it.
But by configuring an all Flash array in a SAN config with Hadoop,
they're able to get incredible performance, and that's what they do.
And they're one of the largest Web 2.0 companies in the world.
Oh, my God. All right. I got you.
So a lot of applications, when you look at it from a business perspective,
not just the app, but what does the CFO care about or what does the CIO care about?
As you fellow graybeards know, I have yet to meet a CIO who started as a storage admin. Not that
we don't love storage people and storage admins, but the CIOs are mostly software guys. And every once in
a while you might find a X server guy, but they're heavily software centric. So for them, it's what
do you do to optimize my application? So if you can close the quarterly financials in 20 hours
instead of 125 hours, that's a big deal. If you can run the nightly logistics software in an hour so the trucks can actually deliver the chicken to the right grocery store,
and the report used to take you 10 hours and you would miss many of your shipments because you couldn't physically get it there because the trucks couldn't physically drive there,
that's the kind of technology that we'll talk about at the two VMworld shows and Oracle World that we most recently talked about. The reality...
We only get to pitch that one more time, Eric.
Okay.
There you go. Watch out. Howard's pretty tight on this sort of stuff.
People really use us for how it delivers business value.
So as an example, two of our customers are very large telcos, one here in the States and one in Europe.
The one in Europe is actually in production on this system.
The one in the U.S. is getting ready to go into production.
So many of the people who may be listening may not like Violin after I say this,
but both of these two telcos, independent of each other,
were having, in their minds,
trouble capturing all of the billable texts and mobile minutes.
They did a survey.
They checked it out.
They rewrote their software.
They tried it on their existing arrays,
which were both hybrids, and in both cases tier one large vendors, although not the same vendor; one had one guy, one had
the other. It didn't work. They tried it on Violin all-flash arrays, and it does. The U.S.
company has told us that they project that they will bill an additional $150 million a year
in minutes. So I apologize up front...
That is an impressive ROI.
Oh God, I would say so.
And the European telco said that they would
be able to bill, I think they said it was about 180 million euros per year, additional for them. So, you know,
that's the kind of reason that people buy us. So do you provide these arrays on like a 5%
of additional revenue deal? We probably should have, because we would have made more money than
selling them to them outright. So that's really where it's coming from. And, you know,
one of the things I think that people look at is there's a lot of talk about cost per gig,
cost per gig, cost per gig. It's not cost per gig. Flash itself as a class, not just violin,
just in general, is at the economic tipping point. And I don't think people realize that.
You know, us graybeards, and although I wasn't that gray because I was in high school at
the time, and then in college in the seventies, remember the shift away from tape as the primary medium
in the data center to hard drives. Yes, tape was used for backup and archive, but anyone who's
seen that old movie True Lies with Arnold Schwarzenegger sees those giant StorageTek
libraries fetching stuff a couple of times in the movie.
I worked on that stuff. So, great stuff. Yeah. Yeah. At the same time, you know, in the
seventies and eighties, as people transitioned to mostly hard-drive-based arrays, there's no
argument that hard drives were way more expensive per gig, or actually per megabyte back then, than tape was. But
the overall economics of using hard drives made a big difference. And I think Flash is at that
economic tipping point. Quite honestly, for people, I think, particularly in the press that
overdo the cost per gigabyte, well, if that was the case, then all CIOs would be running to the
all-tape data center, because the cost per gig of tape is still cheaper
than hard drives and cheaper than Flash. And so far, I have yet to meet a CIO telling me he can't
wait to move to the all-tape data center because, boy, his cost per gig is the lowest out for his
data center because the overriding economics of your data center would be horrible if you switch
to tape for all workloads. So flash is at that same tipping
point today, regardless of what vendor, you know, the listeners buy their flash from. Flash
is at the absolute economic tipping point, just like hard drives were in the 70s and early
80s, when the market switched from primarily tape-based data centers to
hard-drive-based systems, and hard drives were much
more expensive.
That's exactly where you see flash today.
Howard's point about data reduction is very helpful in certain use cases and scenarios,
but even without that, you can just mathematically calculate it out.
If you're a giant global bank, one of our customers, their estimates, not ours, project a saving of $2.5 billion over four years
as they consolidate from 38 data centers to 10, take their 245,000 rack U of storage
using hard drive and hybrid, they have both systems, consolidate all tier one and tier two workloads to Violin all-flash, shrinking that rack U from
245,000 all the way down to 37,000, shrinking their power budget, which today is a billion kilowatt hours a
year, to 69 million kilowatt hours a year, and shifting their storage and server OPEX, which they track together,
from 981 million annually down to about 275 million.
Again, those numbers were provided to us by the customer, not by us.
So that's the type of economic tipping point you're seeing, and you don't need to be a giant global financial institution to do that.
Got a local customer here in the Bay Area.
Their CIO told us that since switching to all flash for tier one and tier two,
he's gone from 12 racks of storage running a $6 billion company to three quarters of a rack
of Violin all-flash running all workloads, accounting, finance, manufacturing, HR,
inventory systems, sales and marketing, the whole company,
and they are now saving about $2.5 million per year by switching.
So those are the things, regardless of what...
The cost per IOP difference is just so significant that, I mean, I will argue the point when we start talking about, you know, where's that tier two line and is it better to have, you know, to spend extra for the all flash for the user home directories, which are arguably a tier two application.
But that's really where the line goes, not is this happening?
Yeah, no. And we would agree 100 percent. It is. We're at that tipping point. It's what happened
when hard drives came in vis-a-vis tape. Hard drives will absolutely still have an important
space. File based data is growing astronomically. You've got all kinds of applications that either
you're going to stay with tape or you're going to move to some sort of hard-drive-based system, and I'm talking now tier three, tier four, tier five
workloads in a big enterprise; for a smaller enterprise, you know, probably their tier three
and tier four workloads. So hard drives absolutely are still going to be a huge part of the market. But
for these other workloads you're going to see a dramatic movement across into flash regardless
of what vendor, whether
it's Violin or one of our competitors. The reality is that's where the market is going,
and that's where the prudent CIO and CFO should be looking to drive their technology. Let's face
it, as much as us Greybeards hate to hear it, IT has become a commodity. It's not quite as much
of a commodity as the desks and the chairs that are sitting in the enterprise. But let's face it, every enterprise is completely
computerized. And come on, aren't Lucchesi's bar and grill
and Howard Marks' pizza joint completely computerized? More or less.
So even in a small business. I just paid the electric bill for the lab.
Oh, God. Yeah, yeah, yeah.
Hearing your client going, yeah, millions of kilowatt hours,
I'm going, damn, and I thought my thousand-dollar electric bill was a lot. Yeah, yeah. Really, really.
So, I mean, with the Violin systems, you've got two systems, you've got the Violin Memory array and the
Windows Flash Array. Are they both SAN? Are they both file? What do you do to talk to these things?
Sure. So the Violin Flash Arrays, fourth generation, are SAN-based,
and they can be either Fiber Channel, InfiniBand, or iSCSI.
The Windows Flash Array is actually NAS-based.
It supports both SMB slash CIFS and also NFS. And basically,
we have an embedded version of Windows Storage Server 2012 R2 literally on the array.
So when a channel partner or an end user buys it, you don't buy the software from Microsoft.
It actually comes preloaded on
the array. We actually buy the software from Microsoft and preload it onto the blade servers
that sit inside of our array box. And so that will give you NAS capability on the Windows Flash
array. And then for SAN connectivity, whether it be Fiber Channel, iSCSI, or InfiniBand, we've got the traditional Violin Flash Array.
Other than that, the hardware is actually the same.
So, you know, really the difference is in the software load.
The hardware uses the same Violin Intelligent Memory Modules, the same V-RAID controllers.
We have four RAID controllers in every shelf, at the same density: 70 terabytes raw, 45 terabytes usable, in three rack U.
So all of that is identical.
The difference is, is it a SAN or NAS-based system?
And there is a difference in the software between the two boxes.
So you mentioned the VIM.
Yes. You guys are one of the very few actually dealing with flash qua flash as opposed to buying a controller from somebody or buying SSDs from somebody.
Right.
Why?
Seems like it's got some serious time to market concerns.
Yeah, yeah, yeah. Yeah, so the reason we do that, there's three primary reasons. First is performance. In a 70-30 read-write mix with 8K block size with the array
full, so let's say 44 out of 45 usable terabytes filled, we can deliver 500,000 sustained IOPS with MLC and under half a millisecond
latency, guaranteed. There's no hotspots. There is no performance hit when you do the infamous
garbage collection. There's no write cliff. And with SLC, we can go up to 750,000, and read-only
we can do a million. And that's per shelf.
You can aggregate shelves.
We have a couple of customers actually that are in the 2 million IOPS range and sub-0.3
millisecond latency in production with Oracle databases.
So that's the performance side.
Second is a resiliency side.
Everyone knows about the you know, the durability
of flash compared to hard drives. Personally, it's a little bit overblown, but it is reality.
So it's absolutely an issue. So one of the reasons we do it is each of those violin intelligent
memory modules actually has a processor. So there's actually a processor there that runs firmware.
And that firmware not only helps with the garbage collection, but that firmware does wear leveling. So instead of wear leveling per SSD, we have 64
Violin Intelligent Memory Modules in a 70-terabyte raw box. We're doing wear leveling
across all of that flash with that controller, versus an SSD, where you've got the flash
behind the SSD controller. So it allows us to have a measure of control that gives us that resiliency.
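The difference Eric is drawing, wear leveling across every module in the box rather than inside each SSD's private pool, can be sketched with a toy allocator. The numbers and names below are illustrative, not Violin's firmware.

```python
import heapq

NUM_MODULES = 64         # one 70 TB raw box in the example
BLOCKS_PER_MODULE = 128  # toy figure, not a real geometry

class GlobalWearLeveler:
    """Always erase the least-worn block across *all* modules,
    so no single module's flash wears out ahead of the rest."""

    def __init__(self):
        # min-heap of (erase_count, module, block) over the whole box
        self.pool = [(0, m, b) for m in range(NUM_MODULES)
                               for b in range(BLOCKS_PER_MODULE)]
        heapq.heapify(self.pool)

    def allocate(self):
        erases, module, block = heapq.heappop(self.pool)
        heapq.heappush(self.pool, (erases + 1, module, block))
        return module, block
```

A per-SSD scheme would run one of these per drive over only its own blocks; the global version keeps wear uniform box-wide.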
And then the last thing is clearly density.
We are 70 terabytes raw, 45 terabytes usable, in three rack U.
One of the other players in the market who happens to use SSDs, it's right on their data sheet.
So they can give you 70 terabytes raw, about 45 terabytes usable, it's right on their data sheet,
but it's 42 rack U. So what they do in 42 rack U, we can do in
three. And that's the difference between using SSD technology and using, quote, raw flash and then
configuring it. We're not the only guy in the industry doing it. There's, you know, at least one other
player who does it that way as well. They also have good density. Our density happens to be a little bit
better than theirs. We're at 70 raw. They're like at 60 raw. But both of us do it for that reason,
is the density, the performance, and the resiliency compared to using an off-the-shelf
SSD. And yes, I can see Howard getting ready to ask the question, doesn't it cost you more money to do so? Absolutely, we have a group of engineers that
have to do that and have to write the firmware. But the value from the performance perspective,
the density perspective, and the resiliency, we felt it was worth spending the engineering money
on that. And by the way, it's a fair criticism, as Howard brought up already. That's part of the reason we were always known as the speed freaks and always had this great hardware,
but didn't have the software because we were spending money on the hardware development.
We've now balanced that out and now have 40 software engineers as well that do nothing but work on software and don't do any hardware at all.
But the main reason we did the raw flash was that performance, that density,
and that resiliency.
Yeah, yeah, yeah. I think you actually left out my favorite piece on the
performance side, and that's that in an SSD-based system, the overall system controller is limited to talking SAS or SATA or, someday soon, NVMe to those SSDs.
But all of those protocols, even NVMe, assume the SSD is a relatively stupid device. And when one SSD really needs to do garbage collection right now,
it's got no way to tell the controller upstream, I'm busy, send this data to somebody else for a
couple of seconds. I understand the VIMs and your V-RAID controllers can do that, right?
Yeah, in fact, that's a good point, Howard.
We use a PCIe switch fabric.
We're currently at Gen 2, moving to Gen 3 PCIe.
So it is a switch fabric infrastructure.
So if one of those Violin Intelligent Memory Modules needs to do its garbage collection,
it communicates both with the V-RAID controllers and with the other VIMs.
And since we do lay the data
out across the entire box, while we look like a RAID 5 configuration, like an SSD box, and do
divide up by LUNs, the reality is literally the data is spread out across all 70 terabytes.
So that does allow us to do the garbage collection and the magic fairy dust that helps out a lot is
not only the firmware and the fact that it's got a controller on that flash, but literally that PCIe switch fabric infrastructure allows the VIMS and
the RAID modules to communicate with each other. And that helps a lot. That is both a performance
issue, why we can get that sustained performance whether the array is empty or the array has got a full 45 terabytes of data sitting on it, and also the resiliency, why we can do, you know, the garbage
collection one Violin memory module at a time instead of across the box. So yeah, actually,
very good point, Howard. I should have mentioned that. All right, I have a question here. Now,
you mentioned the performance word, so I have to ask the question.
I haven't seen any SPC-1 or SPC-2 or SPEC SFS benchmarks from Violin.
So we are currently working on a benchmark with an unnamed server vendor that will be published in the next 90 days.
Like a TPC-C or something like that?
Okay.
I got you.
So we will be doing that. We do have a number of older benchmarks, but we're about to do some
changes in the products. And so right now it doesn't make economic sense for us to do a bunch
of benchmarks based around that. So there is some other stuff that we will be doing at the
end of the year and into Q1 of next year,
but we're holding off because of some product stuff we have coming instead of doing it on the current systems.
Yeah, releasing the wonderful specs for the product you just discontinued isn't really all that effective.
Right, exactly, exactly.
So, I mean, you support RAID 5 across the VIMs? Is that how it works?
Well, basically
what we do is we lay the data out across the entire box, so everything's striped.
Okay, everything's striped across, and you get that parallelism.
The V-RAID controllers, and there's four
of them, make it look to the system like it's a traditional RAID 5
configuration when it's obviously not,
because we're laying the data out everywhere.
It's more 3PAR- or Compellent-like.
Yeah. Yeah, you want to put it in a paradigm that the storage guys can understand. And obviously,
with the Windows Flash array, you could have Microsoft guys playing with it who are not
necessarily storage experts. So you want to put it into a paradigm that people can easily understand.
So, for example, in the Concerto software, our replication software, you can go LUN by LUN by LUN. You could have, you know, let's say 10 terabytes worth of LUNs replicating between
Silicon Valley and New York asynchronously every half an hour. You could have another 10 terabytes
worth of LUNs replicating asynchronously to Singapore twice a day, another 10 terabytes replicating synchronously across the street, another 10 terabytes replicating in a stretched geo-metrocluster from San Francisco to Sacramento, because it's a 100-kilometer limit, and another 10 terabytes of LUNs doing CDP, real-time continuous data protection.
But in order to do that, the paradigm has to be what the storage admins used to, which
is LUNs and RAID groups.
So it appears to be LUNs and RAID groups, regardless of writing the data in parallel
across all the VIMs.
That way, the storage admin or the Microsoft admin or the infrastructure director, depending
on the size of the company,
is comfortable managing it in a way that they're used to managing traditional hard drive storage,
even though obviously you're using Flash and even though what goes on behind the scenes is different. You want to make it easily usable and consumable by the customer, and you want to
make it easy to manage. You don't want to make it overly complex from a management perspective.
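The LUN-by-LUN scheduling Eric walks through can be pictured as a policy table. Everything below, the group names, sites, and intervals, is invented for illustration, not Violin's management model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReplicationPolicy:
    mode: str                         # "async", "sync", "stretch", or "cdp"
    target: str
    interval_s: Optional[int] = None  # schedule for async mode only

# Hypothetical per-LUN-group policies on one array
policies = {
    "finance":  ReplicationPolicy("async",   "new-york",      1800),   # every 30 min
    "archive":  ReplicationPolicy("async",   "singapore",     43200),  # twice a day
    "oltp":     ReplicationPolicy("sync",    "across-street"),
    "metro":    ReplicationPolicy("stretch", "sacramento"),            # <= 100 km
    "critical": ReplicationPolicy("cdp",     "local-journal"),
}

def async_due(group: str, seconds_since_last: int) -> bool:
    # only asynchronously replicated groups run on a timer
    p = policies[group]
    return p.mode == "async" and seconds_since_last >= p.interval_s
```

The point of the paradigm Eric describes is that each row still looks like a familiar LUN-and-RAID-group object to the admin, whatever happens underneath.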
You mentioned CDP. You support continuous data protection as well?
Yes, we do. Real-time CDP, copy on write.
And it's cross-replicated groups and stuff like that?
You can do it however you want to do it. So you could do it asynchronously,
which obviously means you've got potential data loss because you're really doing copy on write.
So you've got to write it from California to New York.
You could do it synchronously.
You could do real-time CDP synchronously.
But obviously, if you're doing a local synchronous replication, that almost is CDP because you're doing a write.
And you're doing a copy to the secondary site, wherever it is.
Let's say it's just across the street.
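The copy-on-write CDP described here can be reduced to a small journal: before a block is overwritten, the old version is saved with a timestamp, so any earlier point in time can be reconstructed. A toy sketch, not Violin's on-array format:

```python
from collections import defaultdict

class CopyOnWriteCDP:
    """Journal each block's prior contents before overwriting it,
    so reads can be answered 'as of' any earlier timestamp."""

    def __init__(self):
        self.current = {}                 # block -> latest data
        self.journal = defaultdict(list)  # block -> [(overwrite_ts, old_data)]

    def write(self, block, data, ts):
        if block in self.current:
            # copy on write: preserve the version being replaced
            self.journal[block].append((ts, self.current[block]))
        self.current[block] = data

    def read_as_of(self, block, ts):
        # the first overwrite after ts holds what was current at ts
        for when, old in self.journal.get(block, ()):
            if when > ts:
                return old
        return self.current.get(block)
```

Shipping that journal to a second box, synchronously or asynchronously, is what turns local CDP into the replicated variants discussed above.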
As you guys probably know, while human error is the number one cause of data loss, the number two cause is still fire.
It's not flood.
It's not earthquake.
It's not volcano.
Not like those things don't destroy your data centers.
They do.
But even with all the fire suppression, it's still fire. So there are plenty of companies that replicate locally, not from San Francisco to New York.
But they replicate from one building to the other because fire is still an issue regardless of all the fire suppression.
So we support local, we support metro, and we support, of course, global.
And then the CDP can be done in a similar fashion. So you decide how
you want to do it. And you can either do box level or, you know, one of the key benefits we see is
granular control, which, of course, again, they're used to. If they're used to, you know,
the very tier one vendors' products that they've been using for 20 years, and now they want to go all
flash with Violin, they still have process and procedure, and that process and procedure is: we're going to replicate like
this from California to New York, we're going to do the following, you know, from San Francisco
to Oakland across the bay, we're going to do the following...
Come on, be real, Eric. It's from Wall Street to Jersey City.
Yeah, something like that. You have no earthquakes there. You're safe. Yes, not anymore, anyway.
So the Windows Flash Array is a NAS product.
When you're replicating there, you're replicating RAID groups or file systems, or what are you replicating there?
You're replicating file systems.
And actually, we are announcing on Monday that we are now certified with Azure.
So you can use the Azure cloud as a replica target.
So you can replicate either from one Windows Flash array to another Windows Flash array.
Obviously, you could replicate within Windows domains.
But actually, I don't know if you were aware that Azure has just come out with a storage certification at the beginning of the year.
So we're now certified with that, which is why we can now replicate.
So if you feel that the cloud is more a cost-effective way to do your replication or your tiering, the Windows Flash Array will support that using DFS.
Or obviously there also happens to be the Azure Data Services as well. Okay.
So in the Flash Windows array, when you're talking replication,
you're talking about DFSR, right?
Yes.
Okay.
Yeah, so it's not really, you know, you can't,
you wouldn't use that to replicate a SQL server
and make sure your transactions were in sync.
Yes, right.
Okay.
Yeah, yeah, yeah. But it is Windows now. Do you guys support my
installing other software on that Windows box?
Whoa, whoa, it's a file storage box.
Yeah, but you know, I might want to install...
Right, you can't be doing stuff like that.
No, but I might want to install DoubleTake to get asynchronous replication.
Oh, yeah.
So right now... Clearly, I don't want to install Adobe Creative Suite on it, but there is some storage-related software.
Yeah, yeah, yeah.
Yeah, right now in the current incarnation, and there will be some hardware revving at the end of the year,
there are actually two fault-tolerant, active-active blade servers embedded in the array,
which run Windows Storage Server 2012 R2. Right now, that's the only thing you can run on them.
When we have our newer version later in the year or early next year, right around Christmas time,
those blade servers are going to be pumped up in the type of processor we use and the amount of DRAM. So then we would be able to run
other things on those blade servers embedded in the Windows Flash Array.
But for right now, it's limited to Windows Storage Server.
Okay. And you guys
dropped the idea of running VMware on those blade servers, right?
Yeah, we don't do that.
We're not doing that right now.
Absolutely not.
Yeah, yeah.
So, you know, Eric, so, you know, there's been a lot.
We've had some discussions in the past with, you know, PCIe, Flash,
caching vendors and stuff like that.
Where do you see the market heading with, you know, you've got PCIe Flash,
you've got all Flash arrays, you've got hybrid storage,
you know, obviously you've got object storage.
How do you see all that stuff working out?
Don't forget the server SANs and hyperconverged boys.
I'm sorry, server SANs and hyperconverged, which also could use PCIe Flash or DAS.
Yeah, yeah. So where do
you see the world going here, Eric?
So as a fellow graybeard: tape was supposed to be dead
already, and it's not. Mainframe was supposed to be dead too, and that's not dead either. Right.
And of course, SANs were never going to take off, because why would you want to put in a SAN? And God forbid there ever be NAS.
And we have all of it.
And iSCSI was going to replace Fiber Channel.
Right.
So I think what's going to happen is you're going to see, based on the workload, the application, and the use case,
customers will tend to use a couple different technologies.
So I think you're clearly going to have SAN and NAS-based
all-Flash. You will have Flash sitting in the server infrastructure, whether that be in the
hyperscale side, something like EMC ScaleIO or Maxta, one of the startups that competes against
EMC. So you'll see that. Or vSAN from VMware. So you're going to see a number of products that'll be host-based,
trying to scale out that way, whether that be hyperscale or, like vSAN, a smaller-scale
deployment. You're still going to have SAN and NAS, but that will transition where it is not
performance-oriented. So tier three and tier four, that will be hard drive based. When it's sort of in the middle of the pack, it'll be hybrid.
And for the larger workloads, the more powerful workloads, it's going to be all flash.
And in fact, you guys know some of the storage analysts that track numbers.
As you probably saw last year, the all flash array market grew from about $275 million a year to almost $700 million in one year.
And that's not all going just for the fastest application.
As we talked about earlier, this economic transformation means that flash will not be in a niche.
All-flash arrays will be a big segment.
I think clearly traditional hard drive arrays will be a big segment, but changed in their focus to tier three and tier four. But as we all know, the growth of
file data, you know, if you want to believe some of the growth projections, is going to take us from,
you know, 4.5 zettabytes of total data up to 45, literally by 2020. If that's even remotely true,
not only will you need tons of all-flash arrays.
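To put that projection in perspective, here's a quick back-of-the-envelope calculation. The transcript only says "4.5 zettabytes of total data up to 45, literally by 2020," so treating 4.5 ZB as a 2014 baseline is an assumption; under that assumption, going from 4.5 to 45 ZB in six years implies a compound annual growth rate of roughly 47%.

```python
# Back-of-the-envelope: the compound annual growth rate (CAGR) implied by
# the projection Eric mentions. The 2014 start year is an assumption; the
# transcript only gives "4.5 ZB ... up to 45 by 2020".

def implied_cagr(start_zb: float, end_zb: float, years: int) -> float:
    """Return the annual growth rate that turns start_zb into end_zb."""
    return (end_zb / start_zb) ** (1.0 / years) - 1.0

rate = implied_cagr(4.5, 45.0, 2020 - 2014)
print(f"Implied CAGR: {rate:.1%}")  # roughly 47% per year
```

In other words, even a hand-wavy version of that forecast assumes data nearly half again as large every year, which is the economic backdrop for the "flash plus object/disk for everything else" argument being made here.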
And since tape is dead, that's all going to have to go on object storage or spinning disks of some
sort. Right. So the reality is they're all going to exist. I think what
people don't realize is storage is a great technology. But whether we like to hear it or
not, storage is more of a commodity. It's a very specialized commodity, but it is a commodity.
And the reality is what really matters is how you tune it and how it interfaces with the applications, the use cases, and the workloads.
So you're not going to use an all-flash array for an archive.
That would not be the smart thing to do.
You would use either tape or disk.
But the guys at Facebook have been talking
about that and similar other stupid things.
No, optical. The Facebook guys are talking optical now.
Yeah, well, they were talking optical, and they were talking about,
we're going to get the cheapest flash, the nastiest, half-dead flash drives.
so i think what will happen is over time, Flash is going to spread everywhere.
It's already, as you know, huge in the hybrid world.
I mean, that's what hybrids are.
And several of the vendors out there, they're the big boy vendors.
They don't ship that many hard drive only arrays anymore.
They ship mostly hybrids.
So Flash is ubiquitous in the array space, not only in the all-Flash world, but in the hybrid world. And the bigger companies, my former employer in
particular, have publicly stated that over 70% of every array they ship out the door that's not all-flash
is a hybrid.
Yeah, and some percentage of those all-disk ones are people using them as backup
targets. When all you have is a hammer, everything looks like a nail. Yeah, well, and you also
have, of course, a bunch of older customers that haven't shifted yet.
But the reality is Flash is going to be ubiquitous.
I think you're going to definitely have server-side
and DAS both hyperscale as well as in the lower end.
So the world is going to, again,
be a plethora of different storage technologies
that are going to coexist.
Well, that's good for folks like us.
Absolutely.
Yes.
And the smart end user is going to figure out what are the two or three that they need.
They're not going to use all six or eight.
They'll use the two or three.
But we're coming to the end of the, you know, I bought a Symmetrix and put everything on
it because that was the conventional wisdom era.
Yeah, no, I don't think that that's going to be
around at all anymore. It's going to be: if I've got file-based data, okay, do I need certain
performance levels for that file-based data? Okay, great. That should be a hybrid NAS box.
I don't want to go all disk; oh, it's really slow. And now all of those
photos that people send to Allstate and State Farm on
the insurance site: okay, great, once the case is closed and Howard's got his check from the guy
that hit his car, right, now it goes into an archive by law. It'll either be a tape archive, or if they decide
they want it on disk, it's not going to be on a
hybrid array, it'll be on all disk drives, because they can save money doing it in that use case and workload,
which by the way, given all the data being created, that's video, photo, you know, that type
of data is not going to be an unusual use case. And I don't mean just because you got pictures
on Facebook or what people put on LinkedIn, it's going to be used all over the business world.
And, you know, we've all seen those Allstate and State Farm ads where the guy's taking the photo of the damaged car.
And they really do use that.
And, you know, that's what they use.
And that's their legal records.
And they keep that.
And it's going to be.
Just because the digital cameras keep getting better and those images keep getting bigger.
You know, the sheer volume of that data grows. So things are good for
Violin and for CleverSafe, and everybody in between is in trouble.
Oh, there you go, there's a matchup made in heaven.
Well, as you know, Howard, the guys that are good in the space will adapt.
Yeah, there are companies that have been through several transitions in storage and have
survived. And guys who don't will be looking for jobs.
Or looking to come to Violin, because we'll need more people as we grow. And we'll hire the best of those whose company didn't adapt,
because there's always good guys, even if the company itself doesn't adapt. So you'll see that transformation. And, you know, let's face it, in some of the bigger companies, they do offer a
wide swath of solutions that include everything from either tape or, you know, their idea of a
backup device or object store all the way up into all flash. And the key thing for those bigger
companies is, will their support teams in particular, and sales teams, be able to adapt? Okay, it's not... we're old graybeards.
But certain graybeards act 18.
As my daughter said, who is 18, that I am the oldest 18-year-old she's ever met.
So you've got to be adaptable.
That's why I've been in seven startups.
But we all know guys in the storage business who only work at the big companies. Some have only worked at one big company. And if the
big company can't adapt, you know, they're not adaptable people. They don't work in startups
or small companies. And some people do both. Right. I've done both. And you guys have also
done both being in big and small and working with big and small. But some people can't. They just
have that security. So some of those bigger companies will not be able to adapt,
and whether it be the server business, the storage business, the software business,
there is no Compaq, there is no DEC, there is no Tandem.
Those companies didn't adapt.
Partially it was acquisition, but part of it is they weren't adapting.
And so someone snapped them up.
And it's going to happen in storage too, right? And so we'll see how it goes. But there will be,
as you guys were both talking about, a number of storage technologies
that will be out there. Customers will choose. I don't think,
like I said, NAS was never going to be successful. It's wildly successful.
Why would we do Fiber Channel? Let's do SSA.
Don't go there.
I know that stuff.
I laugh because of having fought that war and had wonderful experience, really wonderful, trust me, with SSA.
Here's what's funny.
Here's an example of a company that historically, and maybe they will adapt
this time, has been pretty good at adapting. You know, IBM has been around for a hundred years,
and there is no such thing as a cash register anymore, which is what they started on. And I'm using
that specific example because I was at IBM at the time when these wars were going on. They had both
SSA and Fiber Channel. They supported both.
They laid their bets on both. They had Fiber at the time. As you know, they were the second largest
hard drive supplier. And of course, part of the reason all three of us have jobs is that IBM
invented the hard drive here in Silicon Valley in the early 50s and grew this great business,
not only in the raw hard drive business, what Seagate, WD, Toshiba, and Fujitsu
have today, but also all the system houses, through to Violin, which now is flash. All that sort of
spawned out of that invention of the disk drive. But they were smart, and they played
both sides. They played the Fiber Channel side and the SSA side. And if they had ended up
coexisting, they would have had both. And other
guys chose one or the other. Fiber Channel ended up winning. And they had a huge, robust Fiber
Channel business. And I was there at the time. We were the number one provider of Fiber Channel
based OEM RAID controllers. And they were the number two provider of Fiber Channel disk drives
after Seagate. Yet they had been, quote, the kingpin of SSA. So to learn from that,
if storage companies can adapt, these big guys that have all of these various technologies,
that's great. Some guys, as you know, buy. EMC is a great company, but let's face it,
other than the Symmetrix, Isilon acquired. Even the VNX technology is an acquisition from the old data general, right?
Extreme IO.
Data domain.
Data domain.
Avamar.
Legato.
They're very good at acquiring companies.
If you're EMC or Cisco and you can acquire companies and roll them in that well, it's cheaper than doing your own R&D because the VC guys get to place the bets and you get to come in at round B or C.
And reap the rewards.
And reap the rewards.
But on the other hand, you could be NetApp.
Yeah, I was just going to say some of the others.
And not be able to pull off a successful acquisition.
So if you're going to be like that in that example of NetApp, you've got to be adaptable and flexible.
And if you're not, you'll have been great for a certain number of years, but then you'll go away.
And no knock, Ray, but where's StorageTek these days?
It was acquired by some processor company, which then got acquired by a database company, of all things.
It's dead.
You know, the StorageTek library still exists.
There's still some StorageTek technology in Oracle, you know, Exadata and Exalogic and stuff like that.
But that's about it, really.
So it's really about adaptability.
And storage, if nothing else, is adaptable.
Let's face it.
One of the great things about storage is there's startups all the time, whether that be storage software startups, storage hardware startups, or storage solution startups.
And it hasn't changed. And I know some people out in the industry.
When was the last server startup?
Yeah. And even though some of the financial community thinks storage is boring...
I'll tell you what: no CIO can survive without storage.
None. None.
And so, you know, it's an exciting
business. There's a lot going on in storage. And, you know, there was all the CDP startups. Well,
now it's a standard thing. There were all the NAS startups; now it's a standard thing. Now it's all the
all-flash array startups, though Violin is no longer a startup because we're 100 million in
revenue and we're publicly traded. But, you know, you've had all this growth and all this development. And it's all really,
in most instances, been a startup. EMC was a startup. NetApp was a startup. Now, IBM,
obviously, was around a long time before we were even around. Hitachi the same way, you know,
and HP the same way. But even Dell only got started in, you know, basically the
early 80s. So, you know, it's not like General Motors or GE or AT&T, going back to
before 1900. That's just not storage; storage has got that startup vibe.
All right, all right, Eric, you're great to talk to, but we've run a long time over our limit here.
I don't have any other questions.
Howard, do you have any last questions?
Nope.
All right.
I'm going to put this in the close then.
Well, this has been great.
Thank you, Eric, for being on our call.
You're a hard man to get a word in edgewise with sometimes.
I'll say that much.
Next month, we'll talk to another startup storage technology person.
Any questions you have, please let us know.
That's it for now.
Bye, Howard.
Bye, Ray.
Bye, Eric.
And thanks again, Eric.
Thank you very much.
Until next time.