Storage Developer Conference - #47: NVMe – Awakening a New Titan... Deployment, Ecosystem and Market Size
Episode Date: June 14, 2017...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Chair. Welcome to the SDC
Podcast. Every week, the SDC Podcast presents important technical topics to the developer
community. Each episode is hand-selected by the SNIA Technical Council from the presentations
at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcast.
You are listening to SDC Podcast Episode 47.
Today we hear from Sean Walsh, Managing Partner with G2M Research,
as he presents NVMe, Awakening a New Titan, Deployment, Ecosystem, and Market Size
from the 2016 Storage Developer Conference.
So, part of what we're here to talk about today is the new NVM Express market.
So, G2M Research is a new company that is in this space,
but myself, my partner Mike Heumann, who's here, have been doing
this for many, many eons, hence no hair and gray in the beard.
We started out working together when there was quarter gigabit fiber channel and have
gone through just about every storage protocol ever since.
So when we started looking at NVM Express, one of the first things we wanted to do was
say, guess what?
There's a whole new list of acronyms.
And I've come to the fundamental conclusion that as marketing people,
we create acronyms just so we have a reason to have a job
and we can go to trade shows and explain them.
Probably half of these are in the definition book.
But a couple of the ones that I think are important today are NVMe over Fabrics,
obviously moving that over
Ethernet and Fibre Channel. And this is the new official NVM Express branding, if you
haven't seen it in their guides yet. We use the term NVMe bay a lot in this. So why
do we do that? Well, because in the front of these storage and server boxes, they're
going to have NVMe trays. In there, they might put 2 1⁄2-inch, 3 1⁄2-inch,
or in some cases what we're seeing is the use of M.2s, with multiple of those in a bay.
So as we sort of looked at those,
we sort of viewed them as the basic commodity element of what we were talking about.
NVMe I/O block: there are a couple of companies that,
because we're bringing PCI Express to the front of the machine,
are actually putting I/O cards out there, so Ethernet and Fibre Channel and other things. There was even one vendor talking about doing it with KVM
and HDMI and USB ports for a front-panel console. And what I always find
interesting is that every time you put out a protocol or an interface, people
find new ways to use it that you would have never thought of.
We use the term accelerator block for CPU and GPU.
At Flash Memory Summit, there was a company called NVXL that was actually taking GPUs
and putting them in the front of each of these slots to create clusters based on these systems.
I think most of the rest of this you probably know. SSDs, obviously.
M.2, the small form factors.
U.2, the replacement for the SAS connector.
RDMA, RoCE. I think most of you know all these other things.
We use the term storage class memory a few times in here.
And then, of course, SDS for software-defined storage.
But I always like to start with this in the beginning because you never quite know what acronyms you're going to put out there.
So a little bit about us.
Market research firm, as I said, my last one was with Avago.
And for those of you who don't know the story, I was at Emulex.
We spent 10 years fighting Broadcom.
Then Avago bought both of us and slammed us
together and said, thou shalt make nice, whether you like it or not. So such is the joys of M&A
in the world. So most of our team comes from that background and pedigree; we do stuff on sales
enablement and digital marketing, and I'm not going to bore you with a commercial from here.
So what do we see happening in the market? So a couple of things that are happening. In enterprise, you're obviously seeing the internal traditional enterprise moving into a hybrid-based
model. Not a big surprise. But enterprises still have a role. The theory that the clouds
can take over everything is not true. There is a cost to scale, there's a time to deployment,
and there's also a balance in terms of business and regulatory requirements a lot of businesses can't get away from.
On the cloud side, I call it Gen C.
Our friends at Enterprise Strategy Group,
and I know most quote-unquote analysts don't talk about other analysts,
but they're all friends of ours.
So they did a great thing a couple years ago.
When they did their IT spending survey and their strategic partnership survey,
they took the responses
and split them by age. And for everyone 40 and over, it was the folks you would expect, the IBMs and
Dells and Microsofts as the most strategic partners. Those under 40, it was Amazon and Salesforce.
And nowhere was there a hardware company mentioned. And I think that's one of the biggest things that, as an industry, we need to look at.
So we're seeing AWS certainly drive this business.
Google and Azure are attacking the latency problem.
If you look at what Google is doing with their cable fiber optic network
and how they're trying to change that,
they see that as their strategic advantage in the market.
And one of the things that I thought was fascinating was in the last earnings call,
the CEO of Microsoft said, we're now selling more versions of SQL,
I'm sorry, not SQL, SharePoint, Exchange, and Microsoft Office in the cloud
than we are direct as licenses to enterprises.
And that's the first time that's ever happened.
SQL is still big on-premise.
And again, what you're going to see is a theme where high-IOP, high-transaction,
high-value stuff is still big on premise, but some of the other stuff moves out.
And then in the telco and the IoT space, you're seeing NFV come in.
You know the old joke, Windows 95 equals Mac 86. Well, in the case of the telco space, it's kind of virtualization.
Or NFV 2016 equals virtualization 2010.
So they're moving in that direction.
One of the terms that we coined here for IoT is what we call nanodata.
So lots of volume of data generated by small devices brought in,
sorted at the edge.
The vast majority of that data is thrown away, and then only the metadata and the anomalies are loaded up into the cloud.
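To make that nanodata flow concrete, here is a minimal sketch (mine, not from the talk) of what an edge node might do: collapse the raw samples into summary metadata, keep only the anomalous readings, and forward just that small payload upstream. The threshold and record format are hypothetical.

```python
from statistics import mean, stdev

def summarize_and_filter(readings, sigma=3.0):
    """Edge-side 'nanodata' pass: discard the raw bulk, keep metadata plus anomalies."""
    mu, sd = mean(readings), stdev(readings)
    anomalies = [x for x in readings if abs(x - mu) > sigma * sd]
    return {
        "count": len(readings),   # metadata describing the data we throw away
        "mean": mu,
        "stdev": sd,
        "anomalies": anomalies,   # only the interesting outliers travel to the cloud
    }

# 10,000 local sensor samples collapse to a handful of fields plus one outlier.
payload = summarize_and_filter([20.0 + 0.1 * (i % 7) for i in range(10_000)] + [95.0])
print(payload["count"], payload["anomalies"])
```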
Then you've got, obviously, the changes in terms of Wi-Fi and 5G
and big fan-in networking and edge networking changes as we go forward.
So these are things that we think are driving the need for lower latency,
and that becomes the commodity. So, obviously, you're seeing the move with Flash and SCM getting
closer to the CPU. Intel is doing everything they can to make NVMe pervasive, virtually
short of giving it away. As we said, latency is the new value commodity as we go forward,
and that faster applications need to be more closely
aligned with the storage and the CPU.
I mean, it's motherhood and apple pie, but a lot of times we forget the obvious things
in what we do here.
And then the last part is legacy connectivity always wins in the market.
Say what you will about Ethernet.
The reason it survived is it looks the same, it feels the same, and it gets pretty close to the same.
Now, we'll forget that we've changed damn near every protocol on top of it
and that what we thought of as Ethernet 15 years ago really isn't quite Ethernet today,
but it's pretty damn close.
And whenever you protect the legacy, you end up winning.
And that's one of the things that I think will happen here.
And one of the other things that surprised us is after Flash Memory Summit,
we started the detailed work into this and the survey.
Eighty different companies shipping or have announced NVMe-based solutions right now,
which is pretty surprising for such an early phase in the market.
So we look across those 80 companies, not a big surprise,
storage server manufacturers.
And we split this out in the report.
This was one of the very specific pieces of feedback we got
was don't just tell me it's a bunch of servers.
Tell me what the split is on those servers
between network appliances, storage appliances,
and those sorts of things.
So we took a stab at that.
We really used the data that came from Intel's
data center group earnings call
to sort of do the split out of the server business.
New class of storage arrays for NVMe, obviously all the components that go into that.
And then what surprised us that we really hadn't thought of in the beginning
was the growth of some of the NVMe software companies
that are trying to do basically scale-out architectures using that.
So this is the taxonomy that we've built, starting with classic server models at the top,
the OEM, ODM, and open compute type stuff, broke out the software-defined services,
looked at external arrays.
We pegged a lot of this to all-flash arrays.
We think that the correlation between the growth curve of all-flash arrays
and the adoption rate of NVMe will be very, very close.
And you'll see that in some of the preliminary numbers that we've put out here.
Then we looked at the markets and platforms that we saw going on here.
I think most of these will not surprise anybody.
And then we broke that down into a solution stack that those 80 vendors are spread across
So in terms of applications at the enterprise, I don't think this, again, will be a big surprise:
the database stuff, in-memory analytics, and then scale-out software-defined storage.
And amazingly enough, NVMe over Fabrics becomes the new lifeboat for Fibre Channel.
It continues to live on, and it's found itself a new place in the world,
which I think is pretty interesting.
It continues to show the value of air gap networks,
because if you can't access it, then there's some value in it from a security perspective.
On the cloud, big data and advertising,
this is basically replacing what you saw Fusion I.O. and a lot of those guys doing with PCI Express cards going directly to NVMe.
Content distribution media, this is about concurrent video streams.
And then the one that I personally find the most interesting is the deep learning and AI systems. If you look at the sheer number of comparisons that they want to use in order
to build accurate analytics and accurate AI, this is one of the places where the low latency
at the storage level helps.
And then from vertical sides, not a big surprise again, high performance computing, telco,
NFV, IoT, Fog, these are ones that are driving early adoption in some of the vertical segments.
So we've kind of broken the NVMe market into two segments, the intra-chassis and the inter-chassis.
So in the intra-chassis, as you would expect, you've got your CPU, your PCI Express bus and bridges.
You still have PCI Express slots where you're doing your over fabrics and other sorts of cards in there.
SSDs, M.2s, and then your NVMe drive bays.
And what we've seen as you look at these is that most people today even aren't filling up all those drive bays.
And then when you look at the growth of flash, look at the density of what's happening,
that drives even more opportunity for adding these other capabilities inside the system.
So we expect to see more front-of-rack systems, especially in the telco and the IoT and some of the embedded markets that will drive that as well.
One of the other things that surprised us from some of the discussions we had with vendors in this process
was how many storage vendors actually liked the front-of-bay opportunity
because they looked at it and said, look,
as we move to 1U, 2U boxes, we're limited in IO options with PCI Express slots.
This gives us an overflow valve, and it plugs directly into PCI Express and gives us the
ability to do some of the lower-performance stuff out the front of the system, higher-performance
stuff out the x8 ports on the back.
And when NVMe moves to x8 or x16 in the future,
I think you'll see even more use of that as we go forward.
Then the second level is obviously the inter-chassis, where you start to add the fabrics capabilities.
So you're going to have connectivity through the traditional enterprise fabrics, fiber channel, Ethernet, InfiniBand.
You'll have cards coming out the back of the system,
cards coming out the front of the system,
and then, of course, you'll have the pure NVMe targets on the other end.
Now, this is going to take time.
One of the things that when you look at a lot of these pure NVMe targets,
they give you incredibly fast pipes,
but they don't come with load balancing and failover.
They don't come with all the traditional services
that you've come to expect as part of storage systems.
And there will definitely be a lag time between those.
We've seen deep learning, cloud, content distribution driving some of those early adopter proof of concepts in this space,
but there's still quite a bit of work to be done in that area.
Then top predictions for the NVMe market, total market size $57 billion including servers,
including storage arrays, including SSD drives, M.2, everything all rolled together.
This is going to be a big chunk of change.
When you look at the drive side of it, 50% of the servers will ship NVMe by 2020.
This will be driven by the next turn at Purley.
Software-defined servers, 60% will be running NVMe out of the gate.
And again, you can see the big gap between the server NVMe drives, an average of about 5.5 devices,
versus the software-defined storage stuff at 29 NVMe devices.
Not a big surprise if you're dedicating something to storage versus application usage, but we
did make a very conscious split there. NVMe over fabrics, growing to about 750,000 units,
and 75% of that's going to be running RDMA over Ethernet. Obviously the easiest thing to get out in the market,
and Fibre Channel will make up the vast majority of the rest.
As we've spoken to vendors, those are clearly the two protocols that we expect to see.
Then on the all-flash arrays, expect to see relatively quick adoption here.
And this, I think, is going to be driven more by competitive issues than by user issues.
So we all know that we like to get together and compare our benchmarks.
And when you look at those people who move to NVMe,
you're going to pick up that extra incremental performance,
and that will drive the market more than I think the absolute end user demand will.
It's happened before, and I think it will continue to happen again.
And then the last one that we think is probably the most interesting prediction
is that NVMe price point will cross over with SATA SSD in the next couple years. And when that
happens, then you really start to see the volume. You see it move into cold storage. You see it move
into non-IOP driven types of applications.
And again, when you look at the write endurance characteristics of these drives,
they're different than the enterprise ones.
If you look at some of the 3D NAND stuff that Intel announced at IDF,
they had very high write endurance devices and low write endurance devices.
And when you look at the massive growth of cold storage, when you look at what's happening with those devices, we expect that to be one of the biggest drivers of NVMe in terms of
volume as we roll the film forward.
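As a back-of-the-envelope illustration of that crossover (my own sketch; the starting prices and decline rates below are hypothetical, not figures from the report): if NVMe dollars-per-gigabyte fall faster than SATA dollars-per-gigabyte from a higher starting point, the two curves meet within a couple of years, which is the dynamic behind the end-of-2017/2018 prediction.

```python
def crossover_year(start_year, nvme_price, sata_price, nvme_decline, sata_decline, horizon=6):
    """Project per-GB prices year by year and report the first year NVMe <= SATA.

    All inputs are hypothetical; declines are annual fractional drops (e.g. 0.30 = 30%/yr).
    """
    for year in range(start_year, start_year + horizon):
        if nvme_price <= sata_price:
            return year, round(nvme_price, 3), round(sata_price, 3)
        nvme_price *= (1 - nvme_decline)
        sata_price *= (1 - sata_decline)
    return None

# Hypothetical: NVMe at $0.40/GB falling 30%/yr vs SATA at $0.28/GB falling 15%/yr, from 2016.
print(crossover_year(2016, 0.40, 0.28, 0.30, 0.15))
```

With those made-up inputs the crossover lands in 2018, but the point is the shape of the calculation, not the particular numbers.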
So this is our glorious hockey stick chart.
Every analyst must have a good hockey stick chart or their customers won't buy their report.
So what do we see here?
Obviously, these are all the segments that we talked about stacking up pretty quickly.
Servers obviously making up a large portion of this.
Storage arrays next, and then storage appliances.
That's the vast majority of the business.
Then from there are the drive elements that go into it,
and then the connectivity pieces layer on top.
So when you look at this as a market,
it's going to very much mirror what you've seen in the traditional enterprise.
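As a rough illustration of how a roll-up like this works (a toy calculation of mine, not the report's methodology): the per-box attach figures quoted earlier, 50% of servers shipping NVMe, about 5.5 NVMe devices per server and 29 per software-defined storage appliance, multiply out against a shipment base to give drive volume. The shipment bases below are hypothetical placeholders.

```python
# Hypothetical 2020 shipment bases (placeholders, not from the report).
SERVER_SHIPMENTS = 10_000_000       # worldwide server units, assumed
SDS_APPLIANCE_SHIPMENTS = 150_000   # software-defined storage appliances, assumed

# Attach figures quoted in the talk.
NVME_SERVER_ATTACH = 0.50           # "50% of servers will ship NVMe by 2020"
DRIVES_PER_SERVER = 5.5             # average NVMe devices per NVMe server
DRIVES_PER_SDS_BOX = 29             # average NVMe devices per SDS appliance

server_ssds = SERVER_SHIPMENTS * NVME_SERVER_ATTACH * DRIVES_PER_SERVER
sds_ssds = SDS_APPLIANCE_SHIPMENTS * DRIVES_PER_SDS_BOX

print(f"NVMe SSDs via servers:        {server_ssds / 1e6:.1f}M")
print(f"NVMe SSDs via SDS appliances: {sds_ssds / 1e6:.1f}M")
```

With those invented bases the arithmetic lands in the same ballpark as the roughly 30 million NVMe SSDs forecast later in the talk; the multiplication is the point, not the inputs.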
So where do we see some of the evolution happening?
Again, Intel 3D NAND, pick your favorite next generation storage class memory or NAND replacement.
That volume, that power savings, that scale is going to drive.
We see 2017 as sort of that pivotal year around Purley.
In terms of what we're trying to do with drive bays, we expect 50% of drive bays will be NVMe.
And again, this is understanding that no matter how good a new technology is, we will take years to adopt it.
Why?
We've already bought something, and we're going to spend three years on amortization,
and we're not going to change it once it's put in place.
So it doesn't matter how cool the new thing is, if it came out two months afterwards,
I'm not going to touch it for at least a third of my data center for probably three years.
If I'm doing storage or networking, I have a five-year amortization,
and that's going to drive it even further.
And that's part of what drives this thing out as you look at it from an adoption perspective.
I think it will rapidly be adopted intellectually,
but operationally it will take more time due to those reasons.
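To put simple numbers on that amortization drag (my own arithmetic, not figures from the report): if buyers only switch when gear comes off its depreciation schedule, the refresh cycle caps how fast even a universally preferred technology can penetrate the installed base.

```python
def max_penetration(refresh_years, years_elapsed):
    """Upper bound on installed-base penetration when buyers only adopt the new
    technology at their refresh point (gear amortized over `refresh_years`)."""
    return min(1.0, years_elapsed / refresh_years)

# Even if every single refresh from day one chooses the new technology:
for year in range(1, 6):
    print(f"year {year}: "
          f"servers (3-yr cycle) <= {max_penetration(3, year):.0%}, "
          f"storage/networking (5-yr cycle) <= {max_penetration(5, year):.0%}")
```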
Then in terms of the drive bays, obviously starting at about 5.5 on the low end.
We didn't raise this number significantly as we went through the report because of the capacity growth.
We don't see the number of drive bays changing radically.
And then obviously the top OEMs that are driving this today, they've been pretty aggressive with it. And Supermicro and Quanta, who are really leading that appliance business,
are extraordinarily aggressive here.
If you look at what they've done with some of their 48-bay and 96-bay solutions
with double rows of these NVMe drives that you can tilt up in the middle of the chassis
or the top down load, that's really going to drive that as we go forward.
So when we look at software-defined storage,
again, we think roughly 60% of the drives that support this by 2020 will be based on NVMe.
RDMA support, you see it in OpenStack.
At Flash Memory Summit, VMware finally announced
that they're going to put it into their product for vSAN.
It's already shipping in 2016. It's already shipping in a number of flavors of Linux.
So we see this as a very strong driver. And of course, the use of hyperconvergence will also help with the NVMe over Fabric.
When we talk about the evolution of the arrays, again, very much following a model here based on the all flash arrays.
About 40% by 2020, 30% will use custom flash modules.
So what do we mean by that?
Probably multiple M.2s in 2.5-inch drive bays that they can then scale that storage inside the system.
NVMe for software-defined storage systems will definitely be a big part of it.
We were surprised by the growth of M.2.
I will say, of all the things that we learned in the research phase of this report,
the biggest was just how much M.2 is used by the web giants.
When you look at Azure, when you look at what Google is doing,
and you look at the major web players,
they're moving more to M.2 than two-and-a-half-inch form factors.
And then, obviously, growth in performance and latency.
And then whether they use NVMe or NVMe over Fabrics to export their file systems,
we expect it's all going to be done via RNICs.
So we see that as a big piece of it.
So 30 million NVMe SSDs in 2020.
So still assuming there's some SATA, still assuming there's some SAS,
but you're going to see that move.
About 25 million of those are going to be the U.2-based systems.
Five million will be M.2.
And of this M.2, the vast majority that we're talking about here are the cloud enterprise stuff,
not the laptop, desktop stuff.
NVMe prices versus SAS and SATA, they'll start to turn at the end of '17.
18 will be a year of transition.
19 will be the year of massive rollout.
And the new storage class memory and flash options will definitely lower price and increase speeds.
Now, we can argue about when 3D XPoint will come out
or when some of the other solutions that have been announced will come out,
but there's no question that there is another generation moving beyond NAND.
Now, whether that's a new generation of NAND, which always seems to find a way,
just like my comments about Ethernet, or you truly do see the leap to 3D XPoint, the
end result will be the same. Lower cost, higher performance, greater density on these systems.
That's another reason why we didn't radically jump the number of drives per system.
On the NVMe over Fabric side, as we said,
we expect Ethernet to be the driving force here,
70% of the shipments, lots of stuff around scale-out,
software-defined storage,
and we see this challenging arrays far more as we go forward.
Mellanox is leading this charge right now,
but Broadcom and Chelsio have announced products that are moving in this area.
We expect Cavium will follow, and we'll see where that goes.
On fiber channel, I heard agreement on that.
All right, good, I like that.
And then on Fibre Channel for NVMe over Fabrics, again,
this is going to drive incredible sustainability for Fibre Channel over the years.
A lot of people, you know, it's easy to beat up older markets and that sort of thing, but
we see this as something that's going to continue to go for a very long time.
So if you look at the adoption rate between 1, 2, and 4, that happened in about 2 1⁄2,
3-year cycles.
At 8, we saw a big cost move because of the optics, and that extended the market to more like 4 or 5 years.
And the economic downturn in 2008-2009 didn't help.
But when you look at the 8 to 16 transition,
32 just coming out, serial 64 coming,
the way it matches PCI Express bus rates,
still see that as a strong player in the market.
Lots of folks have already shown their dedication and demoed this.
And again, this is going to be tied more to the transition of the motherboard
around the array side than it is the other.
Generally speaking, when you look at protocols in Fibre Channel,
you see the switch first, the host second, the target third.
And then InfiniBand, you know, Mellanox, this
is an area that they're, if anyone's going to do it, it's going to be them. We were asked
about OmniPath. You know, clearly they're going to be a player in this market, but,
you know, whether they're going to be an RDMA player, we'll see how that goes. And then
the last part is my homage to Jeff Goldblum here.
Life finds a way. He used that line in Jurassic Park. And there was an opening
part in Jurassic Park where they go, well, how do you make sure there's no more baby dinosaurs? They go, oh, well, it's all
girls. Don't worry. And his comment was, you know, life and
nature always finds a way. And that's kind of my belief system
on NVMe,
is that you look at all these different applications that we've talked to people about
over the period of time we were doing the research on this,
and everyone is looking at that front end, bringing PCI Express to the front of the box,
and going, what can I do with it?
One of the things I love about being in this
business is the creativity of the people, is the fact that no matter what you think of, someone
else comes back and goes, hey, have you thought about using it this way? And you go, damn, that's
a good idea. And that's one of the things I think we're going to continue to see in this as we go
forward. So some final thoughts. NVMe adoption, obviously we're proponents of it. We think it makes a lot of sense.
And we think from a market perspective, you're going to see it in the hyperscale markets.
You're going to see it mirror all flash array on the target in terms of deployment times.
The flash devices, NVMe is going to drive the price down. You're going to have new NAND and memory type materials that come in
that accelerate that part of it and make it run faster.
This will go over fabrics.
It will be that mid-life kicker 10 years later for fiber channel.
And we certainly expect Ethernet to be the majority of the market.
And we do expect to see the emergence of new devices,
what we call these I/O blocks or accelerator blocks,
where people invent new things that they really didn't think were going to happen before.
So that, as they say, is the end of the prepared remarks.
This is the obligatory where to find me chart, if you ever want to do that.
And with that, we'll open it up to some questions.
Yes?
On the slide with your hockey stick chart.
Yes.
It seems like you're missing something on this chart,
and I'm not sure if it's rolled into other things like your NVMe arrays.
What about the actual bulk of the NVMe SSDs,
the enterprise-native SSDs?
So that's that blue line there.
And you're right.
We did look at that as if it's in a storage array, it's part of a storage array,
as opposed to a dedicated standalone device.
So as a storage array, then you're not separating out SSDs that are being put in there?
No, that's included in part of it.
That's why we called out the number of drive bays.
So when we look at a storage appliance or a storage array,
we specifically call out 29 drives for the software-defined appliance.
And what did we go to, Mike?
I think 36 or 38?
36 to 38 on the dedicated arrays.
Yes?
So your comments seemed to be
aimed at enterprise components.
You indicated a three-year reinvestment,
a five-year reinvestment.
However, cloud cycles tend to flip over much more quickly.
Amen.
What's your viewpoint on that in terms of adoption?
Well, that's what I said.
We were really surprised.
So Azure was a great example where when we spoke to them,
their new platforms today are going NVMe M.2.
And we see this happening, to your point, about adoption cycles.
So when you look at how cloud deploys, they'll have 6, 8, 10 data centers throughout the world.
They will then say, okay, we're going to roll out a given service with this generation of architecture.
And that generation of architecture is completely independent.
They don't care about amortization cycles, any of that stuff.
They're rolling that out.
And they'll do, you know, pick your number, 1,000 servers per site.
And of those 1,000 servers, you're going to have 600 to 700 that are access.
They're going to be NVMe drives or M.2 drives, as cheap as you can,
on the cheapest servers you can without bezels or anything.
You're going to have a couple hundred servers that are the control, access, big data, content
delivery, whatever the thought-process part of it,
as opposed to the delivery part of the process.
And then there'll be about a hundred servers that are the back-end billing systems, which
ironically still look a lot more like enterprise stuff
and still running Oracle.
One phrase I like to use is that fiber channel is the Swiss banking of networks,
and that's where you still see fiber channel in the cloud.
But to your point, in the beginning of the year,
they may roll out with one architecture,
and then six months later they'll roll out with the next one
because they run these things concurrently over and over and over again,
and they are not like the enterprise by any stretch of the imagination.
So we tried to capture that, especially in the M.2 and some of the other pieces,
and that's why we have the call-out and the breakout of the cloud servers.
So we have that ramping from around 18% to about 24% of total server shipments
that are going to use NVMe as we go forward.
And, again, the big crossover point will be the SATA.
When you have NVMe cheaper than SATA, then those guys will throw out the SATA connection,
move on to the next generation
with native NVMe.
Okay.
Yes.
All right.
What would be the $57 billion by 2020 versus the entire drive market?
Versus drive market?
Entire drive market.
Or entire IT market?
Including all the legacy drives.
I don't know the answer to that.
Damn good question.
Wish I had a damn good answer.
I'll have to come back to you on that.
Yes?
Yeah, you mentioned the M.2.
Yeah.
What's the reason behind that?
Density or cost?
Why is that so surprisingly popular compared to the U.2?
I think it's a combination of density and form factor more than it is capacity or power
because it's the same NAND chip, so it's not like they're changing anything at that point.
I think the controllers are a little bit less,
but it's the flexibility to be able to do them intra-server.
So, again, when we talk about this,
it's usually we've seen the biggest growth on the cloud market in this segment.
So, you know, we've all seen the pictures of the Google and Facebook and Amazon data centers.
I mean, these are the cheapest motherboard.
And if they can get rid of a bolt or a screw because they're deploying X million a year, they do.
And this allows them to get rid of a few connectors.
It allows them to use it in a smaller footprint, squeeze more together. So I think it really has more to do with that than it has anything to do in particular with the M.2 capabilities.
Okay. Yes.
Back to the previous question.
Okay.
Okay. So let's reduce the entire drive market to SATA SSD only.
Okay.
What will be the SATA SSD market share by 2020?
SATA share?
I would see SATA dropping to probably 20%.
You said 2020?
By year 2020.
Yeah, by 2020, I would see SATA dropping to, oh, I'd say probably 25% to 30%
if the price crossover happens because you're going to see all the new servers go.
You'll have three years of amortization on the enterprise side,
and the cloud guys will move quick.
Now, every predictor in the world has always been overly aggressive on this,
but that would be my guess is you're going to see somewhere around 20% to 30%
of the pure SATA market at that point.
And that represents dollar value how much?
Again, I'd have to go back and dig that number up.
I don't know that off the top of my head.
Well, I am willing to make stuff up,
but when I know I have the data, I'd rather go find it first.
And, sorry.
That's okay.
Yes.
I saw two years, one said 2017, the other said 2018.
Are you predicting NVMe pricing will be at par with the SATA SSDs?
Yeah, so we expect that.
Will it be 2017 or 18?
I would say 18 is probably a more accurate term.
But, yeah, that's why I said it.
I think on one slide I say end of '17, and '18 on the next one.
But, yeah, it's in that ballpark, within that half year, first half of the year time frame.
Okay.
I'm sorry.
Any more questions?
I'm just taking all the time.
Anything else out there, guys?
Well, there's one other question.
Sure.
So with the prediction of the pricing drop,
where NVMe SSDs come in at par with SATA-based SSD drives,
do you predict the same reliability that SATA SSD drives offer today compared with NVMe on a PCIe base?
Assuming that we stay with the same material for recording,
I don't see any reason to expect
that there'd be a reliability difference.
If we're still using NAND versus NAND,
then I don't think there'll be any change.
If we go to a 3D crosspoint
or some other type of storage material,
then all bets are off.
You don't know.
And a lot of work has been put into figuring out the algorithms for placement
and read and write in NAND environments to make sure that you have the proper
write endurance.
That took many years of development.
As you move into next generation technologies,
that is certainly one of the risks involved.
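As a toy illustration of what one small piece of that placement work looks like (a deliberately simplified sketch of mine, not how any particular flash translation layer is implemented): a wear-leveling policy steers each new write to the least-erased block, so no single block burns through its endurance budget first.

```python
def pick_block_for_write(erase_counts):
    """Trivial wear-leveling placement: write to the least-erased block.
    Real flash translation layers also separate hot/cold data, remap
    logical-to-physical pages, and schedule garbage collection."""
    block = min(range(len(erase_counts)), key=erase_counts.__getitem__)
    erase_counts[block] += 1   # model the erase that precedes the new write
    return block

counts = [12, 3, 3, 40]                 # hypothetical per-block erase counts
writes = [pick_block_for_write(counts) for _ in range(5)]
print(writes, counts)                   # wear accumulates on the least-used blocks
```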
Clearly all the vendors who are producing those are claiming that, no, no, no,
we're not going to have any write endurance problems, it's going to have all
these latency reductions, it's going to have better density and higher performance. But
as I think everyone who's been in this business knows, we'll see when they ship it.
So again, I reiterate that people are very conservative in the storage business.
And while I think the protocol transition will be easy and all the other stuff,
I think your comment about NAND being reliable
and that being a crossover point for people is very true.
Yes? Do you have any data to report about the x4 versus x8 split?
Like, do you see that market composition?
I don't think we did that.
Mike, did we do anything specific to that?
I think we talked about that in our future predictions.
Yeah, I mean, obviously it's part of the standard,
but in terms of doing sort of the classic area chart that says,
here's the x4 and then the x8 comes in, I don't think we've done that.
But that would be relatively simple to do from the base data that we have,
so we could certainly do something like that.
Yes?
What are your core sources for the data and the numbers?
So it's a combination of things.
Unlike most analysts that try and tell you we know everything, we do not pretend to do that.
We have multi-sourced this data and triangulated it.
So we have used reports on the top-end server side from other vendors and other analysts.
We have looked at flash shipments that are out there both by talking to vendors and by what we see in reports.
Same thing on the controller side.
So we've used those as boundaries to kind of guide where we're at.
We've looked at these S1 filings of Pure and Nimble and a number of the other guys that looked at their growth rates
that were audited to use those to sort of cross-reference some of the other things.
So it's been a combination of those items.
Do I claim that we have spoken to every single vendor
and that every single vendor has given us their shipment data?
No, that's never going to happen for any analyst.
But I think that...
Do you have, like, surveys that you send out to...
We do, in fact.
...what kind of percentage do you get back in terms of how many reach-outs?
When you target, let's say, 100 server vendors or storage vendors,
what kind of rate of return do you get back on these surveys?
On the order of about a third.
Yes.
What do you guys think the transition to PCIe Gen 4 is going to do in the middle of this transition chaos?
Well, if history is any indication, it'll be two generations before that matters.
And again, if you look at the cloud guys, they will go immediately
because they don't care about anything about history, legacy, or anything like that.
When you're looking at the telco and enterprise guys, you're going to see anywhere from three to ten
year cycles for those guys.
Cloud is sort of a 12 to 15 month cycle,
or 12 to 18 month cycle. Enterprise is a three to five year cycle.
Telco, and I think the IoT guys are going to move more like the telco guys,
are in that five to
ten year cycle.
Okay.
Yes.
So you mentioned that with NVMe and NVMe over Fabrics,
there are some features missing compared to the legacy standards.
Could you give any more examples of what customers ask about, what is missing from the existing solutions?
Yeah, so some of the things around drive sharing, drive pooling, and virtualization,
there's still some work that has to be done in that area.
Some of the load balancing and failover capabilities are not as robust across that.
Now, those are coming in the 1.3 standard of NVMe Express,
and I expect them to be addressed over time.
And just like we saw with every generation of new class of storage,
you saw it when we moved from SCSI to Fiber Channel,
Fiber Channel to NAS, NAS to iSCSI.
They start out immature.
They mature fairly rapidly over a couple-year period as they get these products installed in the customer bases.
And then there's kind of a parity in those environments for quite a while.
So I would say right now, if you look at the NVMe market, we see that sort of taking off more at that Purley transition.
So, again, think about array targets.
They're going to bring these things in.
They're going to get the Purley motherboards in.
They're going to make whatever their top-level architectures are to their arrays.
That drives it more than anything else.
Then they'll figure out what they want to do for I.O. and connectivity.
Generally speaking, those target guys are kind of one generation behind.
I always get my tick and tock mixed up, but the tock is the second one.
They're kind of at a tock time frame, and then they'll ship that.
And in most of those cases, these guys are taking the same software and moving that over.
The early guys that we're seeing, like Mangstor and E8 and those guys,
they haven't finished all those features yet.
They'll get there.
But they're focused on markets that are incredibly dedicated to performance.
And they've had some very, very interesting early success,
in particular in the cloud markets, high-frequency trading, and content distribution,
especially around not so much the absolute performance, but the number of concurrent streams that they can provide.
Yes?
Who is G2M's primary client?
Well, after this, hopefully a bunch of you guys.
Honest to God, we just published this report today.
We are, if you look at the people that have done business with G2M to date,
it's Western Digital, it's people like Sanmina, so it's folks in that area.
We've worked with Dot Hill and Seagate in the past.
But for research reports, this is the first one.
So I don't want to misrepresent that I have a ton of research customers at this point.
Okay.
All right.
Thank you.
Thanks for listening.
If you have questions about the material presented in this podcast,
be sure and join our developers mailing list by sending an email to developers-subscribe@snia.org.
Here you can ask questions and discuss this topic further with your peers in the developer community.
For additional information about the Storage Developer Conference, visit storagedeveloper.org.