Grey Beards on Systems - Greybeards talk car videos, storage and IT trends with Marc Farley
Episode Date: March 17, 2016
In our 30th episode, we talk with 3rd time guest star, Marc Farley (@GoFarley), formerly with Datera and Tegile. Marc has recently gone on sabbatical and we wanted to talk to him about what was keeping him busy and what was going on in the storage/IT industry these days. Marc is currently curating a car comedy vlog …
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks.
Welcome to the next episode of Greybeards on Storage, a monthly podcast show where we
get Greybeards storage and system bloggers to talk with storage and system vendors to
discuss upcoming products, technologies, and trends affecting the data center today.
This is our 30th episode of Greybeards on Storage, which was recorded March 11, 2016.
We have with us here today third-time guest star Marc Farley, formerly with Quadra, Tegile, and Datera.
Why don't you tell us about what you're up to these days, Marc?
Oh, man, I'm preparing taxes, I'm brewing beer, and I'm working on a curated video site called TheRideCast.com.
What's that?
TheRideCast.com, like I say, it's in-car videos.
You know, I've done the Steering Wheel Camera Society of America.
Ever since I started doing videos, I thought, you know, I want to do videos in cars.
And then you see Seinfeld's Comedians in Cars Getting Coffee.
You see the James Corden interviews, sing-alongs with Adele and Stevie Wonder and all of that.
And there's this thing going on.
And if you look around YouTube, there's a lot of people making videos in cars, right?
The car is the new stage and studio.
That's a very California thing.
Yeah, totally, man.
We are.
You know, this is car culture.
And so what I'm doing is just I'm creating this site that pulls together all of this stuff that people do in cars.
And you find out that there's an amazing variety of stuff.
There's comedy.
There's music.
There's commentary.
There's just chilling out.
And some of these chill-out videos are really cool.
It's like watching a vlog or something.
So anyway, I'm just putting this site together, theridecast.com.
It's a gas.
Cool.
So are we going to graduate to storage geeks in cars getting bourbon?
Oh, yes.
Anytime.
Anytime.
Let's do it.
I don't think I want to do a video of drinking in a car.
No, that would be legally dangerous.
Yeah, so, you know, we could, like, drive a car, have a conversation,
go to a distillery, you know, sip some bourbon, and then, you know, end of show.
Yeah.
Works for me.
All right, Marc.
This is about storage.
What do we know about storage these days?
Oh, man, the storage world is crazy.
We've got the incredible shrinking budget out there.
We've got more vendors than we know what to do with.
And then we've got the transition that's going on in IT,
with people doing things in the cloud instead of in their own data centers.
And we've got the conversion of old ways of putting applications together
to doing things with agile development and DevOps,
and all of that stuff, because it's application-oriented,
ends up affecting storage in a major way.
I haven't seen this industry this uncertain or this shaky for a long, long time.
We can look at all kinds of indicators of that, too.
God, you didn't mention Flash at all in that discussion.
No, I didn't mention flash.
You know, flash is not important.
I mean, it is.
Flash is not important.
Flash is why we have all these vendors.
Flash is why we have all the vendors.
That's right.
And flash is important.
But flash is like, okay, it's another media, and you can do things faster.
I think where flash gets interesting, or solid state gets interesting, is with 3D XPoint,
because that changes the equation, I think, of how controllers work
or what the notion of a controller is.
Well, the concept of addressable non-volatile memory, even if it's slower than DRAM,
starts opening up all sorts of new opportunities for application and system design.
Yeah, it complicates all that activity as well because, you know,
before all that, when you went offline it went away, or, you know,
that sort of memory-type structure all got initialized during boot-up
and stuff like that.
Now you don't want to do that.
So there's a lot of complication with 3D XPoint.
It's surprising to me, again, that we're still not seeing samples of the technology,
but it's an Intel-Micron Flash thing.
They're still working on it, I guess.
Yeah, well, I didn't think they were going to have samples, though, until this summer.
I heard it was samples last year.
At Flash Memory Summit, they talked about sampling soon.
Yeah, but soon.
Yeah, but that's the kind of technology that, well...
samples are samples to people who won't tell us anything about them for another year.
Yeah, and maybe it'll be like samples of the first transistors that came out.
It'll take like 10 years for them to get solid.
Were you there when the first transistors came out, Marc?
I mean.
I was born.
I was born then.
I was born at night.
I mean, you qualify as a gray beard normally.
Gray beard, right?
So, okay, my birthday was in '57.
And I suspect, Ray, yours is before that, and Howard, yours might be a little after that.
I'm just a year after.
So I'm the cream in this cookie.
I'm definitely before that.
Yeah. We were all born about the same time when there were really shitty transistors.
So maybe the first 3D XPoint
will be shitty for a while, too.
And I did spend about a month working at
Bell Labs, Murray Hill, where they got the water tower
with the three legs so it looks like a transistor.
Nice. You worked there?
Yeah. Yeah, I did the
training program
for StarLAN when they rolled it out.
Okay, yeah, yeah.
So what is going on with the Flash in the storage business?
I mean, it's like Howard's right.
It's the reason that we've got 25 different vendors competing in enterprise storage these days.
But Jesus Christ, it's not ending.
I thought it would end after about five years.
No, it continues on.
Oh, it's never going to end.
It's going to end because a bunch of these companies are going to run out of money.
Yeah, there's two things.
First, something else new is going to come around. We've reached the point
maybe last year, maybe the year before
where everybody
understands how to get Flash to work.
And that's putting
some serious pressure on the guys
whose whole raison d'être is, look, we're Flash.
I mean, look at Violin.
Did you just speak French?
Only a couple of words.
Anyway, I'm sorry, Howard.
As long as you don't react like Mr. Adams, we're fine.
Violin is an example of a company that's struggling, you think, with flash?
Well, Violin came out, and they were in the flash market early,
and they got to sell, look, it goes fast because it's flash.
And now that everybody else is selling flash, that's certainly fast enough.
I mean, if you look at one of Marc's other former employers, a 3PAR 7450,
that's a perfectly nice flash device
and it does a million IOPS at
one millisecond of latency.
Why do I want to buy
something special that was
designed strictly to do flash
that could get the latency down
to 750 microseconds?
Or 80 microseconds
as the case may be or something like that.
Yeah, but if you want the data services, it doesn't go down that far.
Yeah, the problem really is what's good enough, right?
If what's good enough is from any number of vendors,
then if you're a smart shopper, you'll just take the cheapest good enough option.
Or you'll take the one from the vendor that you trust.
I mean, Keith Townsend was on the Cloudcast the other day,
and he said
that big enterprises trust the vendors that
take you out to dinner. And I tweeted
him and I said, no, that's not quite right. It's not
that we trust the vendors that took us out to dinner.
We trust the vendors that sent
an SE with pizzas at 11 o'clock
when we were having trouble.
I will pay extra.
The CIOs trust the vendors that take them out
for golf. Yeah, yeah, yeah.
Yes, there is some of that.
But the truth is much of the reason we deal with big vendors is because we had a problem
and they came through the last time.
Right.
And, you know, one of our other Tech Field Day folks who will remain nameless
because I didn't get his permission to say this,
his company is a multi-billion dollar enterprise, and they only buy mission-critical stuff from
people who are big enough to sue for $100 million because that's what it's going to
cost when something goes wrong.
That's an interesting approach.
Yeah.
Well, I mean, I saw that in my consulting days when I was a lone gun consultant, and
as soon as the Deloittes and PWCs of the world had somebody who could do the job,
I wasn't getting business from big companies anymore because they could get that talent
from somebody who was big enough when they sued them to pay off.
The rat bastards.
You know.
One thing that came up this week that was kind of interesting, and I think you tweeted it, Howard,
was the Google paper on disks and all that stuff.
Speaking of disks, one of the things was the focus on the worst-case latency.
The 99.9th percentile latency response was something I had not seen before.
Well, I've seen a lot of interesting papers over the past year or so
that are revealing how much that tail end latency is the tail that wags the dog.
That, you know, in a typical RAID system, the 99.9% latency is, you know,
well, that drive didn't read the block and it did a retry
and its latency went from 20 milliseconds to 400 milliseconds.
And that caused the application to run slower.
It's really interesting when you start taking that data and applying it to SSDs
because the difference between a really good SSD
and an SSD that just looks good on paper isn't the average latency.
It's the 99th percentile latency.
It's how consistent is that latency.
Yeah, I mean, if you think about it, right, 99th percentile is every 100th IO.
Right.
I mean, even if it's 99.9, it's like every 1,000th IO.
Every 1,000th IO.
You can think about it.
Okay.
So you're doing 40,000 IOPS.
That means 40 times a second.
Exactly.
Something's going wrong.
No, you can see why it's an issue, even if it's that low percentage.
Yeah, because you're pushing a lot of IOPS.
Okay.
Yeah.
That's a lot of slow.
It's a lot of bad.
And so, you know, we've revised all of our testing so that, you know,
we don't just report average IOPS. We, you know, show the standard deviation and the 99.9th
percentile and, you know, what the variation is over time, because those things are what really
matter.
Yeah. So it'll be interesting to see if that actually plays out ever. Part of what's going on is all flash versus hybrid, right?
Yeah.
It will be interesting to see how this plays out in hybrid systems that are more likely to have that worst-case latency be a lot higher than an all-flash system.
Oh, yeah, and especially hybrid systems that do data deduplication on the spinning disk tier.
Right. Because that increases the number of IOs that you have to do,
and it makes the 'I missed the flash,
now I have to go to the spinning disk' cliff even bigger.
Yeah.
I publish these champion charts once a quarter on, well, in this case,
flash storage performance and stuff like that.
One of the axes is responsiveness.
But I've been typically using, you know, a measure associated with, you know,
least responsiveness.
But I'm thinking maybe I should change to worst responsiveness,
which is also documented in some of these benchmarks and stuff like that.
But that's an interesting approach.
It pretty seriously varies.
I mean, with Flash, it's pretty flat, I'll say,
until it gets up to whatever its peak IOPS are, and then it goes off the mark.
Well, it really depends on the flash controller.
Yeah.
I keep hearing people saying, you know, all SSDs are the same.
And from my experience, there's a big difference between the low-end SSD whose controller is based on a two-core ARM processor
and when that guy has to do garbage collection, latency spikes.
A guy you should have on the show sometime to talk about this would be Steve Peters.
I don't know.
Do either of you know Steve?
I don't know Steve.
Steve's an old DEC guy and been around the industry, but he was working on flash controllers
at WD.
And I remember talking, I think the last time I saw Steve was a couple years ago here in San Jose,
and he was talking about all of the stuff that controllers have to do.
He's just a terrific computer science guy and a great engineer.
And he might be reluctant to talk to you.
I mean, he typically likes to stay in his engineering cube.
But he gets out at development conferences and those kinds of things.
You can see him there.
Right, where the audience is engineers.
I really recommend him.
If there's anything to know about controllers for Flash, he's the guy.
Yeah.
Cool.
Yeah, I think the challenge with hybrid storage is you're always going to have a much worse 99.9% latency.
Sooner or later, you've got to go to the disk.
When you've got to go to the disk, it goes from microseconds to milliseconds, you know. So there's orders of magnitude change there.
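A quick way to see why, sketched below with made-up service times: once more than one IO in a thousand misses flash and falls through to disk, the 99.9th percentile is a disk number, not a flash number.

```python
FLASH_US = 200     # assumed flash service time, microseconds
DISK_US = 8_000    # assumed disk service time (8 ms), microseconds

def p999_latency_us(hit_rate):
    # the 99.9th-percentile IO is a miss whenever the miss rate
    # exceeds 1 in 1,000
    return DISK_US if (1.0 - hit_rate) > 0.001 else FLASH_US

for hit in (0.90, 0.99, 0.995, 0.9999):
    print(f"flash hit rate {hit:7.2%}: p99.9 ~ {p999_latency_us(hit):,} us")
# Anything below ~99.9% hits makes the hybrid's p99.9 a disk latency.
```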
And that's a... Oh, yeah, no. And as the gap between the cost of flash and the cost of spinning
disks shrinks, you know, I did a blog post a couple of weeks ago where I said I was wrong,
that over the past few years, I had believed that the market was going to shift to hybrids
with more and more flash to address this problem.
And I'm not seeing it.
I think that we're going to have hybrids with a little bit of flash
because the first little bit of flash helps a lot, especially with metadata.
And we're going to have all flash systems because things like deduplication,
while you can do them on spinning disks, don't make any sense.
The really funny part is I did some rough calculations with vSAN 6.2
and the all flash configurations,
because VMware only supports data reduction in all flash,
actually turn out to be cheaper than hybrid solutions.
Interesting.
Very interesting.
On a dollar per gigabyte basis, yeah.
If you can achieve somewhere between 2.5 and 3 to 1 data reduction,
which is pretty typical for a VM mix, then all flash is cheaper.
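The arithmetic behind that claim is worth a sketch. The prices below are placeholders for illustration, not actual vSAN or drive quotes, and data reduction is applied only to the all-flash side because that is where vSAN 6.2 supports it.

```python
SSD_PER_GB = 0.50     # assumed raw all-flash price per GB (placeholder)
HYBRID_PER_GB = 0.22  # assumed blended hybrid price per GB, no reduction

for reduction in (1.0, 2.0, 2.5, 3.0):
    effective = SSD_PER_GB / reduction  # dedupe/compression stretch raw GB
    winner = "all-flash" if effective < HYBRID_PER_GB else "hybrid"
    print(f"{reduction:.1f}:1 reduction -> ${effective:.3f}/GB ({winner})")
# With these placeholder prices the crossover sits near 2.3:1, so a typical
# 2.5-3:1 VM mix lands on the all-flash side.
```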
And, you know, sure, VMware could have done compression on the spinning disks without much cost,
but they didn't.
And it's just becoming the all flashes are becoming more and more appealing.
And, of course, steely-eyed storage guys want to buy the all-flash system because it's new and shiny.
Just like they're willing to pay extra to buy from the vendor who sent an SE with pizza,
they're willing to pay extra for all flash,
whether they need it or not.
You know, I suspect that VMware will include compression in a future rev.
I mean, it's just got to be one of those things on the feature list that they haven't got around to yet.
Well, no, they do it for flash.
They just don't do it for disk.
Got it.
Okay.
And, you know, compression on disk is a perfectly good idea.
Dedupe on disk has a huge performance penalty.
What do you guys think of Infinidat and what they're doing with hybrid, right?
They seem to have something that's a little different.
I spent some time with Steve Keniston lately.
They are a sharp crew.
They've got very knowledgeable individuals.
They're focused on the enterprise, high-end products.
And they fit in the hole that has always existed
between the dual-controller reliability model
and the VMAX/USP reliability-at-all-costs model.
Going to the, yeah, we have three controllers.
So when one fails, we still have two.
Yeah, but the controllers are like this mesh, right?
It's described as Dell machines.
I don't know if it's specifically Dell or if it's any white box or what.
They're x86 servers, and all the controllers are through various sets of cables,
SAS connected to all of the media.
And so it's kind of halfway between the 3PAR 'we have eight controllers'
and the CLARiiON 'we have two,
and when one goes wrong, you're really going to notice.'
What's interesting to me, though, is that it is a hybrid system in all the choices they could
have made. And Moshe, of course, is a legend in the business and a really sharp guy. And you don't
know if he's distracted by flying helicopters these days or not, but he comes out with a hybrid
system. And yet I believe that they're doing fairly well. I think that they've got a good
solution for the people who just want one storage system.
Yeah, I mean, it's got to be a sizable system to make hybrid pay to some extent.
And, you know, by the fact that they've got this 99.999-whatever-percent reliability
with the third controller set makes a big difference.
And the fact is they've got a burning lab that's just, you know,
they burn these things in for days at a time, weeks at a time, to make sure they don't have any early
life failures. So the rock-solid reliability that's been, you know, VMAX and HDS-VSP and,
you know, to some extent IBM DS systems, these guys are going after that market.
Yeah, N plus 2. N plus 2 redundancy on everything in the system.
Yeah, yeah, yeah.
That's interesting.
Yeah, and so hybrids make sense if you're saying
I want one storage system to
solve all of my problems
because all flash is really
attractive for your transactional
workloads.
But everybody has at least as
much unstructured and semi-structured
data as they have
transactional data.
So if you're going to go with an all-Flash array,
and you're big enough to say,
and all my unstructured data is going to go on this object store with a NAS front end,
then you can do that.
But if you're not that big,
or you don't want to spend the time teasing your data apart,
then a big hybrid like Infinidats makes sense.
I mean, it's a good question. Do you think
the unstructured data belongs on Flash?
A lot depends on the application
need that's driving it, I suppose. Yeah, I would say
generally not.
If you're doing analytics, and everybody
is, I would say yes.
Because of HDFS
usage of storage? I don't think so.
You know, are we talking about unstructured
data like, you know, 4 billion Word docs?
Or are we talking about semi-structured data like logs?
It's logs.
Because, you know, analytics, nobody's really figured out how to do analytics
on all those PowerPoint presentations, Word docs, and all that old stuff, right?
Nobody's doing that.
Oh, yeah, no, I mean, we haven't even reached the point where most people are analyzing the metadata on that,
let alone the content.
It's log files.
You know, it's information.
It's, you know, all that Internet of Things crap that people are trying to store up, right?
It's unstructured crap.
Right.
And that stuff makes sense on a hybrid system because MapReduce is a scheduled job.
And if you're really clever with your DevOps, you know, you can change the QoS on that data just before the job starts
and then turn it back to low when that job ends because you only run that job once a week.
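Sketched below is that schedule-driven QoS idea. The set_policy call is a hypothetical stand-in, since each array exposes QoS through its own API or CLI, and the job itself is stubbed out.

```python
def run_job():
    print("running the weekly MapReduce job ...")  # stand-in for the real job

def set_policy(volume, tier):
    # hypothetical stand-in for a real array's QoS API call
    print(f"QoS on {volume} -> {tier}")

def run_weekly_analytics(volume):
    set_policy(volume, tier="high")      # promote the data just before the job
    try:
        run_job()
    finally:
        set_policy(volume, tier="low")   # demote it again when the job ends

run_weekly_analytics("datalake-vol1")
```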
Well, that would make sense.
I mean, if you can tier the stuff back and forth, that would make sense.
But if you're trying to analyze and keep track of things in nearly real time,
I think it's got to be on Flash.
Right.
If we're talking about nearly real-time analytics, that might as well be transactional data.
True that.
That's interesting.
It's certainly not structured, right?
Well, it's obviously driven by transactions, but, yeah, I don't know.
I don't know.
So near real-time Spark and those sorts of things
are where you're seeing the near real-time analytics going on?
I don't know.
I don't.
You know, you hear about it, right?
There's anecdotal evidence that this is happening,
and there's a lot of interest in the developer community around this stuff,
but I'm not sure how much of it is actually being used in production.
It would be interesting.
I don't have visibility into companies like GE.
I'm sure GE is doing that, right?
They've been leading the charge in Internet of Things stuff.
It's a big company.
So I know that it's going on.
I just don't know what the size of it is out there.
There's a bunch of interesting things in this industry right now that could very well be capped by lack of skills. And analytics is one of them.
Where are we going to find data scientists? I don't know. Do we have enough data scientists?
What about DevOps? Do we have enough full stack engineers? You know, we talk about these trends
and we think they're inevitable, but there's human resources that do not exist in numbers
and critical mass yet to allow it to happen.
I think that's a very interesting thing to watch.
Which is part of the attraction of some of the cloud stuff is, you know,
rather than hiring six PhDs to get CloudStack or OpenStack running in my data center,
I'll use a public cloud and rent their PhDs.
Right.
Yeah, I saw this on a Tech Field Day thing that these guys, Platform9,
have a software-as-a-service solution for OpenStack.
So you run OpenStack on your hardware,
but they provide all the OpenStack deployment and configuration and monitoring.
It makes it almost easy to run OpenStack.
I was almost going to do it on my Macs here, but I'd have to boot up Windows, of course, or Linux.
Yeah, we know how you feel about Windows.
I've actually started using Windows.
Boy, what a pain in the butt.
I've got a question for you guys. How interrelated and interdependent
are OpenStack and containers? I mean, there seems to be a lot of overlap there, but you don't have
to have OpenStack to run containers. That's right. Yeah. Right. And containers are a really
interesting thing, too. Obviously, it's taking off in the cloud space.
If you're going to be running anything with microservices, you need containers to scale up.
And it's been proven.
I haven't gotten to the point where I really grok containers.
Nigel's the key there.
Yeah, but all of this stuff I keep hearing about, and it's all the cloud guys, and it's all Greenfield.
I've worked way too long in corporate data centers where it's, okay,
so you've got this whole new microservices architecture that runs with NoSQL,
but all the data is still in SAP.
And how do you build interfaces between those things?
Yeah, or these semi-structured data lakes, right?
And it's like, whoa, yeah.
How do you build that interface in there?
I asked a question a couple of weeks ago,
what's an optimized Docker storage subsystem look like?
I got some weird stuff.
Part of it is like Docker to some extent
relies on non-persistent data.
It's almost a stateless service environment to some extent.
At best, it's NoSQL. At worst, it's no storage whatsoever.
It's just part of a flow of data between one endpoint and another.
And so that makes it work for the business rules layer of your
client-server application. But there is a... Docker does have a plug-in
or something like that for persistent storage. Yeah.
I mean, I don't know if it works or not, but they have one.
I keep getting emails from companies that claim to have solved the Docker storage problem.
Right.
You know, we should have one of them on.
Well, you've got Docker, you've got Flocker, and next there will be Blocker and Jocker.
Mesos and other things.
Yeah.
It's funny how the open source community resembles that scene in Life of Brian
where the Palestinian People's Liberation Front
and the Liberation Front for the People of Palestine are calling each other splinter groups.
Well, it is interesting, and it is very confusing.
It's confusing, of course, for storage too, right, because you just used the word open source.
And to me, that's not anything that's the same as OpenStack.
And containers are different,
but all of this stuff overlaps and it causes a great deal of confusion. What do you guys think
is going on in buying patterns these days, right? When you look at the storage industry and what
they're reporting and what the revenues are, it looks like it's flat or not great growth. Let's
put it that way. Are people really moving to the cloud?
Are people just on the sidelines waiting to figure out what the hell is going on?
I had a good analogy.
I was on a couple of podcasts about, you know, what Flash is doing to the storage business today
because it seems like, Howard, you know, you should chime in whenever you want here.
But, you know, customers are seeing that they don't need a disk or a hybrid VMAX anymore
to get the performance that they need. So they can move to effectively a cheaper all-flash solution and still make out like a bandit.
So that's driving a lot of tech refreshes that are moving away from hybrid architectures to all-flash.
And then the other damn thing that's happening is all...
Or even just from disk to hybrid.
Ten years ago, if you needed 5,000 IOPS, which today we consider very modest,
but which then was a lot, you bought something in the VMAX class. And that meant you bought a huge
amount of RAS because you wanted the performance that came with it. And today, for substantially
less money, you can buy an XtremIO that doesn't have the six-nines reliability,
but has lots of performance.
Or Pure, or Tegile, or M1.
There are so many options.
I specifically stuck to EMC, so there's not a
'you went from one vendor to another and saved money' argument.
Think about it. You know, I bought a VMAX with 100 73-gig 15K RPM drives, and I short-stroked them.
Yeah, and now you do it with, you know, 15 SSDs behind a dual controller.
It just costs less.
Immensely.
And it operates faster.
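Rough numbers make the point. The per-device IOPS figures and device counts below are illustrative assumptions, not an actual configuration.

```python
HDD_IOPS = 180      # rough figure for a short-stroked 15K RPM drive
SSD_IOPS = 20_000   # rough figure for a modest enterprise SSD

hdds, ssds = 100, 15
print(f"short-stroked 15K array: ~{hdds * HDD_IOPS:,} IOPS from {hdds} drives")
print(f"dual-controller flash:   ~{ssds * SSD_IOPS:,} IOPS from {ssds} SSDs")
# Far fewer devices, far more IOPS, and you are buying capacity you can
# actually use instead of spindles you short-stroke for latency.
```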
Yeah, and you combine that with, you know, a not insubstantial fraction of the gigabytes moving to the cloud?
I think that the cloud thing is all derived from the application owners and the application developers
deciding that it's cheaper to do this and quicker to do this on the cloud rather than going through IT.
And by doing that, the applications are developing stuff on the cloud,
so the data is moving to the cloud.
It's almost a virtuous cycle from a cloud perspective, right?
I want your apps, I want your developers, and I get the data as part of the deal.
It's not at all dissimilar to what happened in the early 80s with the PC.
Yeah, movement from the clients.
I submitted a request for a report from IT in CICS for this application.
They said it will be ready in nine months.
I bought a PC with Lotus 1-2-3 and hired a consultant, and it was done in a week.
Guerrilla IT has been with us a long time.
There's an interesting difference here, though, right?
It has to do with data ownership.
And that is when you put something in the cloud, your data is in the cloud,
and all of a sudden it really is separate from the machines and the applications that use it.
When you run it yourself and you run out of capacity, you run out of, you know,
your maintenance contract or whatever, you've got this massive data migration to do.
And I did this video about a month ago, you know, the storage industry driving in circles.
Customers really don't like data migration.
It's their least favorite thing, right?
So the great thing about the storage industry is the products wear out every three or four years,
and it has to be replaced at high cost, right?
I mean, that's what's really fueled this industry in many respects is the replacement,
the overall fast replacement of all of this stuff.
So the storage industry is General Motors in 1957 with planned obsolescence?
Yeah, don't you think so?
Oh, no, I do.
The analogy just came to me.
We've been riding a planned obsolescence wave in this business for decades.
And when you go to the cloud, that goes away, completely disappears.
And if that's not a huge benefit for customers, I don't know what is.
Yeah, but on the other side, the transition to cloud is way harder than people anticipate.
Yes, yes, absolutely.
Because if I'm running everything in my data center,
then I can build interfaces between application A and application B.
It might not be easy, but I can do it.
But you know what it's like, though?
If application A is running in the cloud and application B is still in my data center because we haven't moved that one yet,
look, we've got 20 milliseconds of latency between those two applications.
That would be great.
But think about how much time is wasted on transitions of storage products in a large company, right?
There's always products that are coming off end of life, right?
You're going through the refresh cycle constantly.
You're talking to vendors constantly.
You're worrying about data migration constantly.
The worrying about data migration is a little bit less than it used to be
because of storage vMotion.
Yeah, there's storage solutions outside of vMotion that provide migration services.
But it's still a pain in the ass.
Yeah, yeah, yeah.
So you think the lack of the tech refresh cycle because storage is moving to the cloud is driving down acquisition of storage?
I mean –
I think it is friction.
I think it is friction on the industry.
Oh, you're eliminating the friction by going to cloud.
Well, and just think about it. I have had consulting clients
who were stupid enough to buy a
CLARiiON just to support
Exchange. And now they go to
Office 365. Well, software
as a service is also
driving a lot more data to the cloud.
Yeah, nobody today
says, we're going to have a major CRM
product, let's buy an array to support it.
They go to Salesforce.
So we've got Greenfield applications going to platform as a service.
We've got applications that are migrating to or new applications
we're deciding we're going to get as software as a service.
And we've got some of the secondary storage applications going to S3.
Even if that's 10 or 15% of the gigabytes the corporate world would have bought,
a TAM that shrunk 10 or 15% for HDS and EMC and those guys, that's a big change.
Okay, I guess the real question becomes, do you ever see a plateau, where the reduction in
enterprise-class storage sales is going to flatten out and continue on?
Not unlike cars.
You know, another point is that cars have gotten from a three-year obsolescence to a ten-year obsolescence or whatever.
After 10 or 12 years, people start buying new cars again.
Yeah, that's because the plastic starts getting hard and cracks and looks crappy. Yeah.
Also this week, you know, was the OCP Summit.
As usual, the guy whose name I forget right now,
who used to do architecture at Facebook, who now works at Sony,
wheeled out his latest optical disk thing and said,
you can store data on these disks for 100 years.
It takes 10 seconds to access anything on it.
Leaving aside any arguments about the optical,
the concept that I'm going to have data in my data center online on the same equipment for 20 years,
let alone 100 years, is ludicrous. Because in 10 years, from 2005 to 2015,
tape has increased in capacity 15 times and disk drives have increased in capacity 20 times.
And so 15 years from now, I'm going to be looking at this six rack thing full of optical disks,
and some vendor is going to come in and go,
'I can take all that data, I can put it in this box for you,
and you can free up all that data center space and put something else useful in there.'
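Howard's capacity-growth figures compound quickly, which is the whole argument against fixed 100-year media. A quick check of the implied growth rates:

```python
# 2005-2015: tape capacity grew ~15x, disk ~20x (Howard's figures), which
# works out to roughly 31% and 35% per year compounded.
tape_cagr = 15 ** (1 / 10) - 1
disk_cagr = 20 ** (1 / 10) - 1
for years in (10, 15, 20):
    print(f"{years:2d} yrs: tape x{(1 + tape_cagr) ** years:>6,.0f}, "
          f"disk x{(1 + disk_cagr) ** years:>6,.0f}")
# At disk's historical rate, 15 years buys ~90x the capacity per unit of
# floor space, which is why the six-rack optical archive looks temporary.
```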
Yeah, optical's been a non-starter for a long time.
I just don't think it's ever going to make a difference.
Yeah, I agree.
I don't think optical's ever going to be a substantial player.
I think there's a place for optical.
I think there's a place for tape.
And it's in this long-term archive stuff.
Now, yeah, you don't want data online that's sitting there in these things.
But if you're going to have, you know, let's say video, let's say a movie of, you know,
God, something in the 40s, I don't know, Wizard of Oz.
You want to have that stay around for a long friggin' time.
You don't want it to go away after two years or three years because you've got a tech refresh or whatever.
So you want to keep that someplace where, yeah, access is maybe 10 seconds, but, you know, you still have access to it.
Yeah, but that's not my point, Ray.
My point is that 10 years on media, the media advances so far in 10 years that you want to do the migration.
What you really want is something like a cloud structure or a big data lake
where it doesn't matter if parts fail because the data is redundant over all of these various nodes.
When stuff breaks, you just replace it with new stuff.
Yeah, which is what makes Spectra's BlackPearl so attractive.
Object storage front end.
Yeah, object storage, right.
You hook it up to a huge tape library, and the data on all the tapes is in LTFS,
and it will automatically go, oh, look, you have tapes that hold ten times as much as your old tapes.
Let me consolidate these old tapes to these new tapes and do it all in the background
and, you know, spit the old tapes out when it's done.
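That background consolidation is easy to picture in code. This is not Spectra's implementation, just a sketch of the idea: pack the objects from old low-capacity tapes onto as few new high-capacity tapes as possible, then eject the old ones.

```python
def consolidate(old_tapes, new_capacity_gb):
    """Repack objects from old tapes onto new, larger tapes (first-fit)."""
    new_tapes, current, used = [], [], 0
    for tape in old_tapes:
        for obj_gb in tape["objects_gb"]:
            if used + obj_gb > new_capacity_gb:  # current new tape is full
                new_tapes.append(current)
                current, used = [], 0
            current.append(obj_gb)               # "copy" the object over
            used += obj_gb
        tape["ejected"] = True                   # old tape leaves the library
    if current:
        new_tapes.append(current)
    return new_tapes

old = [{"objects_gb": [100, 250, 50]}, {"objects_gb": [300, 200]}]
print(len(consolidate(old, new_capacity_gb=1000)), "new tape(s) needed")
```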
Speaking of optical, I did a blog post a week or two ago
on this 5D technology coming
out of the United Kingdom that has
a 13.8-billion-year lifetime.
What?
Honest to God, it's fused silica,
and it holds like 200 terabytes per disk.
And it's a one-inch diameter disk, mind you.
For how long? What was that?
13.8 billion years. Longer than the universe.
For what?
Is that the half-life or what?
No, no. That's at like
190 degrees C.
They're storing the data within
fused glass.
And that's the right solution?
With what kind of lossy stuff?
Well, lossy is just a matter of your encoding.
It's five-dimensional encoding.
It's three layers deep.
It's based on polarization of the dots.
It's a pretty impressive technology.
Of course, it's all lab stuff at this point.
Yeah, but if that kind of technology comes out, that's the thing you want for real archives.
Duh.
It's like my ex is an Elizabethan historian,
and she goes to the UK National Archives,
and they bring out scrolls written in 1603.
Those you want.
So the problem is you won't be able to find a controller to read that shit in 100 years.
The funny thing about it is these things are all optical,
so you can read them with a microscope
and a polarization filter, you know?
It's more like microfilm than it is
like a Blu-ray disc. That makes
perfect sense. Okay, so the aliens,
the aliens that come to Earth after we blow
ourselves up will be able to read all about
us. That's correct. Great.
Right, and isn't it important that the
aliens come and be able to read all about it?
Or they could see episodes of Cheers.
Or that one survivor of the human race
that managed to reproduce for the rest of his life.
You know, that sort of thing.
Hey, somebody's got to survive the
zombie apocalypse. Let's hope it's Kristen
Shaw. Or the nuclear apocalypse
or whatever. The biological apocalypse.
And the digital dark age
that follows, yes.
Yeah, of course.
Well, you know, one of the questions was what the hell the format was on his disk.
It had to be, you know, ASCII or UTF or something like that.
I don't know.
PDF/A, it'll last forever.
Yeah, bullshit.
Well, there goes the family friendly part.
Yes.
That's okay.
It's the end of the show.
The kids fell asleep anyway.
Not quite.
Oh, good.
All right.
So what else is going on?
So I did a forecast piece on, what is this, Omni-Path architecture from Intel.
You guys know anything about that?
No.
A little bit.
It's supposed to fit somewhere in between InfiniBand and Switched PCIe.
Yes. It's very low latency, very high bandwidth, host-to-host communication.
Right, it's inter-cluster communication for supercomputers and stuff like that.
What do we need it for?
We've got InfiniBand.
It's the damn latency, man.
It's the damn latency and bandwidth.
When you've got thousands of these supercomputers running along,
you need fast access between clusters and low latency.
So we need faster than InfiniBand then is the answer.
Oh, yeah.
Lower latency than InfiniBand.
Because if you start thinking, so let's posit, because we're storage guys,
a new scale-out storage system.
A new flash scale-out storage system, for instance.
No, let's envision a new hybrid storage system where each node has 256 gigabytes of 3D XPoint
as the performance tier and then Flash.
As the low-performance tier.
Yeah, yeah, okay.
Right, and Flash as the low-performance capacity tier.
This scale-out cluster, over something like Omni-Path, can do direct DMA:
controller one can reach directly into the 3D XPoint in controller two
and write to it, so that we don't have a complicated protocol for maintaining cache coherency.
The controller that receives the data writes it to its local tier,
and via RDMA writes it to another controller,
and we don't need checkpointing and all that stuff, because it got done by one guy.
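A minimal sketch of that write path, with a plain method call standing in for the one-sided RDMA write (no real verbs or Omni-Path plumbing here): persist locally, push the same bytes into the partner's XPoint tier, then acknowledge, so no separate coherency protocol is needed.

```python
class Controller:
    def __init__(self, name):
        self.name = name
        self.xpoint = {}                 # this node's 3D XPoint tier

    def remote_write(self, key, data):
        # stand-in for a one-sided RDMA write landing in our XPoint
        self.xpoint[key] = data

    def handle_write(self, key, data, peer):
        self.xpoint[key] = data          # 1. persist locally
        peer.remote_write(key, data)     # 2. reach into the peer's tier
        return "ack"                     # 3. ack: two durable copies exist

a, b = Controller("ctrl-1"), Controller("ctrl-2")
print(a.handle_write("lba-42", b"payload", peer=b))
print(b.xpoint["lba-42"])                # the replica landed on ctrl-2
```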
Now think what level of latency you need to take full advantage of the 3D XPoint.
The vanishing point latency.
Exactly.
We've reached the point where a microsecond is a long time.
I really never thought that the quantum mechanics I took in school would apply to my professional career when I got out of chemistry.
Yeah, it's happening.
Well, and that 3D XPoint is making, you know, it's making that sort of latency more important
as it starts, if and when it starts to roll out in more of a storage solution space.
I see it, you know, as a memory space as well.
I see that a year from now, there'll be a replacement for the NVDIMM based on XPoint.
And NVDIMMs cost three times as much as regular DRAM DIMMs. And if Intel and Micron can get 3D XPoint to that price point, or to even five times...
They are claiming higher price than Flash, but lower price than DRAM.
That's what their price point is.
Yeah, that's got to be with caveats.
But they didn't put a date on that.
Dates and caveats for that, right?
This is a semiconductor product.
I understand.
And the longer you make it, the better you get at it.
We're reaching the point now where everybody is using 3D NAND.
That's not true.
Only Samsung is currently producing 3D NAND.
Micron is saying... well, we know from earlier in the show,
we were talking about all the differences in flash anyway.
I mean, there are a lot of differences in products out there.
Right.
There's a ways to go before it's settled.
But Jim Handy, who's been a guest on the podcast,
has said for years that Samsung is only using that 3D flash in their own SSDs
so they can hide the fact that they're losing money on every NAND chip
because they just can't get the process right,
because it takes years to get the process and the yield up.
Yeah, so how long is it going to take for 3D XPoint then?
A decade?
I think in 2017 we'll see it as an NVDIMM replacement,
bigger, cheaper, but still in that place.
And then the price will come down,
and it will be what we use on our NVMe SSDs
where we're going for performance for performance's sake.
So it's going to be a big gamble.
I mean, the manufacturing process for 3D XPoint is going to be expensive.
They're going to have to sell the stuff below what it costs to make it
in order to get it in the market and compete with existing Flash.
I think it's easily five years before 3D XPoint is twice the cost of Flash on a per-gigabyte basis.
But they're going to have to sell it.
They're going to have to sell it down there, so they're going to lose money.
This is an interesting investment.
Where's the break-even for Intel on this thing?
Is it seven years out, eight years out, ten years out?
It's going to take a long time.
I think it's probably five years out.
I think there's a market three years out for a new class of hybrids
with XPoint as the performance tier and Flash as the capacity tier.
So to your point, in that amount of time,
the rest of the industry is going to move forward with all kinds of technology to become faster.
I mean, it's interesting.
It's a big investment by Intel over a long period of time while the rest of the industry continues to move.
And everyone's going to figure out Intel just wants a chunk of our money, right?
And so it's not just Intel.
It's Micron, too.
Yeah, Intel, Micron, Flash.
And there's competing post-Flash technologies.
Yes.
Crosspoint is just the one that looks like it's most ready for primetime.
But if we go back to when Marc was working at HP...
Memristor, man.
You know, they were talking about Memristor like it was around the corner,
the way Intel and Micron are talking about 3D XPoint being around the corner.
Around the corner could be two years from now.
We just don't know.
Yeah, big, long bets have a hard time paying off.
Okay, gents, we're almost at the end of the show.
Is there anything that you'd like to say, Marc, at the end here?
Drink craft beer.
Drink craft beer.
Well, we all do that already.
There's nothing new there.
Or is it craft whiskey?
I don't know.
That was the only thing left to say.
That is the only thing left to say.
Howard, any final questions?
I'm for craft whiskey.
Craft whiskey.
I'm actually for craft wine, but that's a different discussion.
Hey, now.
Well, this has been great.
It's been a pleasure to have you once again, Marc, with us on our podcast.
Oh, thanks a ton.
It was a really fun conversation.
Next month, we'll talk to another startup storage technology person.
Any questions you want us to ask, please let us know.
That's it for now.
Bye, Howard.
Bye, Ray.
Bye, Marc.
Until next time, thanks again, Marc.