Grey Beards on Systems - 50: Greybeards wrap up Flash Memory Summit with Jim Handy, Director at Objective Analysis
Episode Date: August 19, 2017. In this episode we talk with Jim Handy (@thessdguy), Director at Objective Analysis, a semiconductor market research organization. Jim is an old friend and was on last year to discuss Flash Memory Summit (FMS) 2016. Jim, Howard and I all attended FMS 2017 last week in Santa Clara, and Jim and Howard were presenters at the …
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks.
Welcome to the next episode of the Greybeards on Storage monthly podcast show where we get
Greybeards storage system bloggers to talk to storage system vendors to discuss upcoming
products, technologies, and trends affecting the data center today.
This is our 50th episode of Greybeards on Storage, which was recorded on August 15,
2017. We have with us here today an old friend, Jim Handy, General Director at Objective Analysis,
and we were all at Flash Memory Summit 2017 last week. So Jim, why don't you tell us a little bit
about yourself, your company, and what went on at Flash Memory Summit? Oh, thanks Ray and Howard, and it's always a pleasure to be on this.
I'm Jim Handy. I'm an industry analyst following mostly chips, which I've done for a number of
years now out of the semiconductor industry. I follow memory chips, and with that SSDs,
and to a lesser degree, storage downstream from the SSDs. So I like to tell people that I look at SSDs from the inside out.
The Flash Memory Summit is something that I've actually had a role in
for the past 12 years, I believe, that it's been going on.
And certainly an interesting show in the way that it's evolved
from being just around chips to now being around storage in a big way,
and including people like yourselves.
I tend to go to FMS to see what's going to hit our market next year and the year after.
Yeah.
I just go to meet everybody else, actually.
And there's an awful lot of that there, too.
Oh, yeah, definitely.
Yeah.
So the excitement this year at Flash Memory Summit obviously was the fire in the exhibit hall.
I came in late Tuesday afternoon, so by that time it was already closed down. Did it ever
actually open up before the fire? No, that's a real shame. The fire, as far as I understand,
started at about six o'clock in the morning on the first day of the show. And that just threw
everything into kind of a tailspin. Fortunately, the fire didn't impact any of the papers, but it had a very significant detrimental effect on the show floor, because they were never able to safely open it.
You know, the fire marshal kept it closed.
Yeah, yeah, yeah.
I saw some of the vendors that actually set up what I'll call auxiliary show floor exhibits, various places in the hotel and stuff like that.
But not everybody could do that, obviously.
Yeah, the ones who were nervous and didn't put their stuff out on the show floor the night before,
they were actually in the best position because they had things in their hotel rooms that they could put somewhere else.
And set up tables in the hall.
It was interesting.
Yeah, yeah.
And, you know, some of them had meeting rooms or were able to finagle a meeting room.
The unfortunate thing is that the convention center has a hotel adjacent to it,
and a number of their ballrooms were undergoing restoration. So those were unavailable, too.
Oh, gosh. So it was kind of a tight space problem.
Yeah. And all the hotel rooms were booked.
Yeah, I ran into that problem myself. Anyway, so at the summit, there was a lot of discussion,
obviously, about NVMe and NVMe over Fabric. We did talk to Mellanox there to some extent.
What's your take on what's going on with NVMe and NVMe over Fabric, Jim?
I see an awful lot of stuff going on there. I defer to people like you who understand
the implications of it a whole lot better than I do, because like I said before, I'm a chip head.
I look at it and I say, well, that's a heck of a good way to take advantage of all of the bandwidth
that Flash has to offer instead of putting it through more conventional disk interfaces.
And it's a nice way to get around the fact that
people have been unable to share the data on their SSDs between servers if they put the SSDs
actually into the servers. So, you know, it seems like from my perspective, it's something that,
you know, certainly should take off well. But there are probably a lot of problems that I'm
completely overlooking.
Well, I've been really surprised at how quickly NVMe over Fabrics, and more generally NVMe over networks, is taking off.
Last year, there were a small handful of startups with proprietary solutions that provided huge performance. Excelero and E8, you know, were both guests on the podcast. This year, vendors like Kaminario and Pure
have integrated NVMe over Fabrics into their systems. And I think the most interesting part to me is we're starting to see NVMe over fabrics and NVMe over
TCP, a closely related technology, not just as software but getting integrated
down into the FPGA and ASIC level. Two years ago, I was sure that the future was going to be rack-scale switched PCIe
infrastructure, you know, resembling DSSD. This year, I look at an NVMe over Fabrics ASIC from Kazan Networks or from Attala that, combined with a PCIe switch chip from Microsemi or LSI,
looks like it'll make a 40 gigabit per second Ethernet NVMe over fabrics JBOD
about the same cost and complexity as a SAS Expander JBOD. And if you can do that
and deliver the vastly better performance, that looks really interesting and hopefully
will mean that we'll get people to be more adventurous about how they build storage systems.
Ray, you and I know most people buy storage systems completely populated,
and they rarely actually add a shelf in the field.
Because with SAS, and even worse with Fibre Channel loops,
if you don't connect all the cables and disconnect all the cables in the right order,
you could create a problem.
But if the capacity expansion just plugs into the Ethernet switch, that's a lot less disruptive.
Gosh, speaking of NAND, the 3D NAND stuff seems to be really taking off, Jim.
Gosh, I think I saw 64-layer stuff starting to come out. Is that true?
Yeah, 64-layer, and Toshiba also has a 96-layer part, although they weren't showing it to anybody. But Micron and Intel are actually shipping SSDs
that have 64-layer 3D NAND. And the whole point of that is to get the cost out of it.
So I mean, from a cost per bit perspective or terabit or whatever?
Yeah, the only thing that's keeping NAND prices up right
now is the fact that we're in a shortage. But the manufacturing costs on 32-layer NAND,
I calculated, should be about $0.12 per gigabyte. This is just for the raw NAND chips. And as a
matter of comparison, raw NAND chips today are selling for about $0.25 per gigabyte.
So that cuts the price about in half. You go to 64 layers, it's not going to cut that $0.12 in half, but it is going to come down to maybe $0.08 or $0.07.
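To put Jim's numbers together, a back-of-the-envelope sketch; the per-layer cost figures are his estimates from the conversation, not published data:

```python
# Jim's rough per-gigabyte figures for raw NAND chips (estimates, not specs).
MARKET_PRICE = 0.25    # $/GB, what raw NAND sells for during the shortage
COST_32_LAYER = 0.12   # $/GB, estimated manufacturing cost at 32 layers
COST_64_LAYER = 0.075  # $/GB, "maybe $0.08 or $0.07" at 64 layers

def discount_vs_market(cost, market=MARKET_PRICE):
    """Percent below today's market price."""
    return round((1 - cost / market) * 100)

print(discount_vs_market(COST_32_LAYER))  # 52, roughly half the market price
print(discount_vs_market(COST_64_LAYER))  # 70
```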
I was at a Micron session, and they talked about, you know, their roadmap is just to keep incrementing by 32 layers each shot.
I mean, they're talking 96-layer NAND sampling maybe next year sometime.
Yeah, yeah. It's not 32 at a shot so much.
It's like they keep adding half again what they had before. I thought 3D flash would be kind of a tick-tock process, where there would be more layers, and then there would be a shrink, and then there would be more layers again.
But it doesn't seem to be going that way.
Can you talk to us a little bit about stringing and things like that?
What the hell is this string stuff?
String stuff?
Well, everything comes back to string theory, but that's a whole other story.
Yeah.
No, it's not like that really at all.
NAND flash has pretty much abandoned the shrinking process that all of semiconductors has followed since the 1960s, which has been about making the transistor on the surface of the chip smaller.
And we got to a point at 15 nanometers with NAND flash where you couldn't store enough electrons to make up a bit by doing that. So they turned everything on its side, and that's where 3D NAND came from. They're not able to shrink anything anymore, but they're able
to stack it up higher and higher and higher. So we're not going to see the tick tock that you
were talking about, Howard. What we're going to see instead is just added layers. Now, where string
stacking comes in is that when you make those layers, you basically put down a whole bunch of layers
like a layer cake, and then you have to bore holes into it, which, you know, it's not as
hard as it sounds.
You do need freaking lasers, don't you?
No, actually, they use a chemical process to do that.
Yeah.
Yeah, there are two kinds of etch.
They either use wet etch, which at those kinds of process geometries is really hard to do,
or you use what's called a plasma etch, where you're actually burning holes into the chip.
Oh, God.
Even better than lasers are.
Plasma beams.
Close enough to lasers for me.
Well, actually, they use lasers for EUV, which is what's going to be used for
microprocessors and probably for DRAM in the future. Those EUV lasers, they call them EUV
because it's a euphemism for X-rays; they didn't want people freaking out about this.
Honestly, they're very powerful X-rays that they're going to be using for making these chips. And those X-rays actually come from Star Wars research: these are high-power CO2 lasers that were developed by the Department of Defense in the United States in order to try to shoot Soviet nuclear ballistic missiles
and make them blow up while they were in outer space,
where they could cause no harm.
Well, I'm glad we got some use out of all that Star Wars technology.
Yeah, it's a real swords to plowshares kind of a thing.
But you don't use that for 3D NAND.
3D NAND is not going away with X-rays or EUV, extreme ultraviolet.
It's going to be going just to more and more layers.
And you were asking about string stacking.
The reason why they do string stacking is because there get to be too many layers.
When you're at 32 layers or 48 layers, you have to bore these holes that are 60 times as deep as they are wide,
and then line the insides of those holes with atomic-layer thicknesses of various compounds
in order to make the things actually work as a memory chip. And that gets to be tough. If it were
96 layers of that, then it might be a 100-to-1 aspect ratio, holes 100 times as deep as they are wide.
If it were 128 layers, then it would be 125 times as deep as they are wide, or something like that.
That's something that, after a while, just becomes too difficult and too costly to do. And so with string stacking, what you do is you build,
in Micron's example, you build a 32-layer 3D NAND,
and then you put a nice, clean topping over the top of it
and say, okay, that's all done.
And then you build another 32-layer on top of that.
And you can keep doing that.
Nobody knows today how many times you can do that before you start running into problems.
So "nobody knows how many times" means probably four, maybe 24.
We don't know what it is in between.
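Jim's aspect-ratio arithmetic, sketched as code. The ratios are the approximate figures he quotes, and the deck counting follows his description of string stacking:

```python
# Approximate hole aspect ratios (depth:width) Jim quotes for a monolithic etch.
monolithic_aspect_ratio = {48: 60, 96: 100, 128: 125}

def decks_needed(total_layers, layers_per_deck=32):
    """String stacking: etch one deck at a time (Micron's example uses
    32-layer decks), cap it, and build the next deck on top.  The hole
    aspect ratio never exceeds the single-deck value."""
    decks, remainder = divmod(total_layers, layers_per_deck)
    return decks + (1 if remainder else 0)

print(decks_needed(96))   # 3 decks instead of one ~100:1 etch
print(decks_needed(128))  # 4 decks
```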
Yeah, yeah.
Could be.
These are really 32-layer stacks we're talking about, right?
Yeah.
So 24, I don't even want to talk.
It's pretty impressive.
Yeah. So what kind of...
The end result is that it gets the cost out of it. And everybody is really intent on keeping
Moore's law going. This is, you know, Gordon Moore found in 1965 that the industry seemed to be
doubling the number of transistors that they could put on a chip every year or two.
And he said, oh, well, that'll bring cost reductions, which it has for the industry.
And so everybody in the industry is trying to keep those cost reductions going,
and that's why they're multiplying the number of layers on 3D NAND.
So what's the bit density per die for today's 3D NAND? Well, Samsung was talking about
one that had a terabit, which would be 128 gigabytes on a single die. And then they said
that they also were going to be stacking 32 of those devices inside a single plastic package.
So what looks like a chip to somebody who's just looking inside a box might actually have 32 chips inside that little black blob of plastic.
You're shitting me.
So it's 32 terabits and a chip?
Oh, yeah.
It would be 32 terabits, which is 4 terabytes.
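The arithmetic behind that figure:

```python
# One terabit per die, 32 dies stacked in a single plastic package.
die_gigabytes = 1024 / 8                         # 1 Tbit = 1024 Gbit = 128 GB
package_terabytes = die_gigabytes * 32 / 1024    # 32 dies, in terabytes

print(die_gigabytes)      # 128.0 GB per die
print(package_terabytes)  # 4.0 TB in one package
```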
Yeah.
I remember when 4 terabytes was a lot of storage.
Yeah.
Yeah, yeah, serious stuff here.
I mean, you have to have access.
I mean, can you access this stuff and, you know, all that other stuff?
But that's a whole different thing.
Yeah, yeah, that's a bottleneck sometimes.
Yeah, yeah.
So speaking of Micron, you know, the 3D XPoint stuff was hot, hot, hot two years ago. It was hot last year. I suppose it kind of cooled off a little bit. Intel is now starting to ship Optane, but we're still waiting to see; there seem to be little snags along the way. I think that they've had some snags.
This is a very new technology.
It actually uses phase change memory, which is something that Gordon
Moore published an article on
a chip that his lab had made using that back in
1970.
Oh, God.
Wait, wait, wait, wait, wait.
3D XPoint really is PCM?
Yeah.
Weren't they vigorously denying that a year or two ago?
Yeah, they were, and nobody really understands why.
Okay.
So it looks like PCM.
It quacks like PCM.
It must be PCM.
Yeah, yeah, yeah.
Tastes like PCM, smells like PCM.
And it was invented in 1970?
Well, the first working chip that Intel had using phase change memory was 1970.
Oh my lord.
Yeah, but it was two bits.
Yeah, maybe one.
No, it was one bit per cell.
And there was a lot missing on that chip.
I think it was a 256-bit chip.
So not a lot of storage in that puppy.
So you're thinking that it's still having some volume challenges,
or I don't know what you'd call them, quality?
Trying to get the yield up kind of thing?
There's one of these jokes that only people in the semiconductor market understand or even laugh at.
And that is, when you're talking about how much product you're capable of producing, you talk about die per wafer:
you know, how many chips can a silicon wafer produce. And for 3D XPoint, people are talking about how many wafers it takes to produce a single working die.
So are these stumbles sufficient
that some of the other post-flash non-volatile memory technologies
get an opening, like MRAM or, you know, Crossbar's filament-based ReRAM?
Yeah, it could. And rein me in if I go down the economist path a little bit too much on this, but
there's something called the memory-storage hierarchy, which
says, okay, a new layer will make sense in
the memory-storage hierarchy if it fits in between two adjacent layers. So these layers are your
caches inside your processor, which could be your L1, L2, L3
cache. Each one is slower and cheaper than the one with the lower number. And then after that,
you go to DRAM, which is slower and cheaper than any of the caches. And then your SSD,
which is slower and cheaper than the DRAM. And then a hard drive, which is slower and cheaper
than an SSD. And then tape, which is slower and cheaper than a hard drive.
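Jim's rule can be sketched as a table, each tier slower and cheaper per byte than the one above it. The latency and cost figures below are illustrative orders of magnitude, not numbers from the discussion:

```python
# (tier, rough latency, rough relative cost per byte) -- illustrative only.
hierarchy = [
    ("CPU cache (L1-L3)", "1-30 ns",  1000),
    ("DRAM",              "~100 ns",   100),
    ("SSD (NAND)",        "~100 us",    10),
    ("Hard drive",        "~10 ms",      1),
    ("Tape",              "seconds",   0.1),
]

def fits_between(upper_cost, lower_cost, new_cost):
    """A new layer (say, 3D XPoint between DRAM and SSD) only makes
    sense if its cost per byte lands between the two adjacent tiers."""
    return lower_cost < new_cost < upper_cost

print(fits_between(100, 10, 30))   # True: cheaper than DRAM, pricier than NAND
print(fits_between(100, 10, 120))  # False: costs more than DRAM, just buy DRAM
```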
Yeah, well, and certainly Flash had this big gap between DRAM and spinning disks to fill.
Yeah, and that's why NAND has caught on so well.
And what Intel's trying to do is to do the same thing with a layer that would fit between DRAM and an SSD.
And that's where 3D XPoint fits in.
Well, the trick, and the reason why I talked about being an economist about this,
is that the only way 3D XPoint makes sense
is if Intel sells it for a lower price than DRAM.
If it's the same price as or more expensive than DRAM
but performs worse, then just buying DRAM makes a lot more sense.
And, well, some of us are in the persistence business.
Yeah, yeah. And that's true. That's a case for it, because you can get an awful lot out of
something like that. But let's go down that road in a minute.
You know, just to finish up here,
the idea with Optane, or 3D XPoint,
is more to make a layer,
to make the system perform better
so you get a better cost performance for the whole system.
And the way to get it to cost less than DRAM is by pushing the volume up to DRAM-like volumes.
And the only way to get that volume up to that point is by pricing it lower than DRAM.
So it's a chicken and egg problem, and basically Intel is going to have to lose money for a couple or few years
before they're able to get it to a point where they can sell it and not lose money.
Well, certainly Intel's in a better position to lose a lot of money to gain a foothold for their new technology than some of the startups with their new technologies.
Yeah, and that addresses Ray's question.
Ray was wondering how Optane compares against these other technologies. The other technologies might be better, but they're not going to have the kind of funding that 3D XPoint is going to have.
So they have to be bootstrapped in kind of thing?
Yeah.
And, you know, I tell people that Intel's motivation is, let's say Intel loses $10 per system on the 3D XPoint memory that they sell into the system. But as a result, they're able to get people to buy a $50 more expensive processor,
so they've netted $40 more by losing money on the memory.
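Jim's bundling arithmetic, as a one-liner sketch:

```python
def net_margin_change(memory_loss, cpu_uplift):
    """Lose money on the 3D XPoint memory, make it back (and more)
    on the pricier processor sold into the same system."""
    return cpu_uplift - memory_loss

print(net_margin_change(10, 50))  # 40: net $40 better off per system
```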
That's kind of a bundled-solution kind of thing.
Yeah, it is.
We can make money someplace else and lose money here,
as long as it's not too much.
I played that game.
While we're playing economists, Jim, let's pull out your crystal ball.
We're very clearly in a shortage on both of the products you follow, DRAM and Flash.
When and how does that shortage end, and what does that shortage ending mean for prices? Strangely enough, the shortages are
caused by two different things. We had a lull in DRAM demand back in 2014-2015. That
caused the manufacturers to not grow their production capacity. They said, oh, this must be the new
norm, lower growth in DRAM consumption. And so they geared up for that. And as a result,
they have too little production capacity now that demand has come back up again.
So that's the DRAM problem. The NAND flash problem relates to how difficult it is to make 3D NAND. 3D NAND uses
technologies that have never been used in the history of semiconductors.
And so those technologies are taking a while to master. And what we're expecting to see at my
company is that at some point, they will solve the last problem.
And that will be like a breakthrough,
that NAND flash production capacity
will suddenly be used efficiently
and will end up flipping from a shortage
to an oversupply almost overnight.
That typically happens.
I can point to some times in history
when it's happened before,
the most notable being
late 1995 when DRAM went from just a very significant shortage to a huge overcapacity.
But it's impossible to tell when that last breakthrough is going to be made. So for the
sake of taking a position, we're just saying it's going to happen in the
middle of 2018. Whenever it does happen, then what we're expecting to see is that NAND flash prices
will drop, you know, possibly by as much as 60% in two quarters. When that happens, then planar
NAND capacity will no longer be cost effective to run. So the people who have
manufacturing plants for planar NAND, not the 3D NAND, they're going to be looking for some other
use for this capacity. And they'll say, oh, we'll point it towards DRAM. That will cause a DRAM
oversupply and it will cause DRAM prices to collapse. And then the DRAM guys will start
deciding to shut down some of their plants or to turn them into something else, which would most likely be foundry capacity.
They're making chips for people like Qualcomm and NVIDIA who don't make their own chips.
That will cause the foundry business to be oversupplied and for prices to collapse there.
And so by the end of 2018, we should have...
Cascading problems here.
Well, unless you're a customer.
Yeah, I suppose.
Yeah, no, it'll be great for people who are buying storage arrays and SSDs because the
prices are going to be going down very significantly.
What typically happens during a shortage like this is prices don't go
up too much. They usually just stay pretty stable. But boy, when the prices go down,
they go down a lot. Right, because the knowledge that leads to lower costs continues and
companies just put that all into profit while they can.
Huh. Well, that's the other question. So QLC seems to be kind of coming. Is anybody shipping any technology like that in volume?
And they're using it for just things like USB flash drives and the SD cards that you put in your cameras and that kind of stuff.
They usually don't say an awful lot about how they're using it, because the way that they look at it, this is their way of being more profitable, but they don't need to make the customers concerned by saying they've got
a possibly less reliable product.
This is 3D QLC, right?
Yeah, yeah. 3D is a lot better behaved than
planar NAND ever was.
And so, I mean, I'm sure it's an oversimplification,
but didn't the cell geometries grow when we went 3D?
You're absolutely right. What happened was
that the transistors used to be printed kind of like, you know, a postage stamp, let's say a postage stamp that was printed on a sheet of stamps, and every couple of years
they were cutting its size in half.
You know, that's kind of the way that it was going.
And now instead of using that kind of a format,
the individual bits are kind of like donuts
stacked on a dowel or a stake or something.
And these donuts have an awful lot more surface area
than the postage stamps ever did.
You know, and with planar, the decreasing surface area was the problem.
Which means we can store a bunch more electrons in there, which makes keeping track of 16 states possible.
Yes.
Okay.
Yeah.
You make me hungry with this donut talk.
Mmm, donuts.
All right, don't go there.
So the QLC stuff is sort of on the horizon from your perspective?
It is, and the purpose of QLC is to knock an extra 16% or so out of the cost of 3D NAND compared to TLC.
And, you know, even in our end of the business, there's certainly uses for that.
I saw Viking at FMS announce a 50-terabyte SAS 6-gigabit SSD.
They rated it at one drive write per day. And then I did the math to figure out how long it would take to fill it at the rate they said it would accept data.
And that was 1.7 days.
So it can't be done.
You can't kill it.
So apparently this is the you can't wear it out.
Yeah. And for applications like video streaming and content delivery networks and the long tail of the hyperscalers, they may be three or four rewrites of a device like that over its whole life.
Yeah.
And so QLC, it's good for 500 cycles.
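Howard's math, reconstructed as a sketch. The sustained write rate is backed out from his 1.7-day figure (Viking's actual spec may differ), and the lifetime estimate at the end is an extrapolation, not a number from the conversation:

```python
CAPACITY_TB = 50
FILL_DAYS = 1.7   # Howard's figure: time to fill at the rated write speed

# Implied sustained write rate, in MB/s.
write_rate_mb_s = CAPACITY_TB * 1e6 / (FILL_DAYS * 86400)
print(round(write_rate_mb_s))    # ~340 MB/s

# Rated one drive-write per day, but a full write takes 1.7 days,
# so the endurance rating can never actually be exceeded.
max_achievable_dwpd = 1 / FILL_DAYS

# At the 500 program/erase cycles mentioned for QLC, writing flat out:
lifetime_years = 500 * FILL_DAYS / 365
print(round(lifetime_years, 1))  # ~2.3 years even at the maximum write rate
```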
Hey, that's good for me.
Yeah.
Yeah, there are a lot of applications that don't need an awful lot of writes.
One that I like is just software.
You know, how often are the rev-level changes on software?
It's pretty infrequently.
And, you know, 500 writes.
Can you imagine saving a drive for 500 revs of
a piece of software? Like Windows or something; it would be Windows 510. I don't think I
want to be here. Well, I'm just thinking system upgrades, you know, even if you get a system
upgrade every two months, which, you know, would be awfully frequent, then still every two months, so that's six times a year, so it would be about 100 years for you to…
That's not agile software, though.
They do it on a weekly basis.
Yeah, but that's still 52 weeks a year.
That's like 10-year life.
Yeah.
Yeah, it's still like enough.
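The update-cycle arithmetic the three are trading, in one place, assuming each software update rewrites the whole drive once:

```python
QLC_CYCLES = 500  # program/erase cycles mentioned for QLC

def drive_life_years(updates_per_year, cycles=QLC_CYCLES):
    """Years before a drive holding only software images wears out."""
    return cycles / updates_per_year

print(round(drive_life_years(6)))      # 83: bimonthly updates, "about 100 years"
print(round(drive_life_years(52), 1))  # 9.6: weekly agile releases, the ~10-year life
```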
So the other thing that kind of was exciting from my perspective was to see some discussion of automotive usages at the Flash Memory Summit.
It seems like that might be a significant market, a new market for NAND.
Yeah.
Do you see that?
Oh, it would be for anything computing-wise, because consider that, in a car, first of all, you have to have something that recognizes the scene around it through cameras and, you know, whatever else there is.
LiDAR.
Yeah, you need something that stitches the LiDAR image with the radar, the forward-looking radar of the car with the camera images and make
sense out of all of that. And there's a pretty high level of computing required to achieve all
of that. So a lot of people are, you know, licking their chops at the idea that they're going to be,
you know, several hundred dollars worth of computing stuff going into cars. Now,
the automotive business being what it is,
they probably won't start putting it into the cars
until it's well under $100 worth of computing equipment,
which will happen with Moore's Law reducing the price all the time.
But still, it will be a thing.
Yeah, but we're already seeing things like auto brake
for front-end collisions being built in.
So it's just, you know,
as the cost of these features
comes down, they move
from the Mercedes and Lexus down
to the Hyundais. Yeah, exactly.
I would imagine that the most expensive cars
won't have it, though, because those will be your Lamborghinis
and Ferraris, and nobody wants a self-driving
Lamborghini when they can drive it themselves.
Yeah. Nobody's
actually commuting with a Lamborghini anymore.
For some obscure reason.
Well, except in LA, but that's a whole different thing.
Hollywood.
The other thing I heard rumors of that was going to be on the show floor,
which unfortunately never opened, was this ruler form factor for SSDs.
Did you see anything about that, Jim?
Yeah, yeah.
Actually, Intel had that in their keynote.
And it's a metal box, much like any other SSD or hard drive metal box.
But it's just a completely different shape.
It's like two inches tall and about a half an inch wide and, I don't know, 13 or 15 inches long, you know, a great long thin thing. And poking out of one side is a dual-port PCIe connector.
Okay, that was the part I never got a straight answer on, whether it was PCIe.
It is. And the whole point is, A, you can put a whole ton of flash into that size box,
and then B, you can put a whole lot of those boxes side by side into a 1U rack or whatever container.
Yeah, and actually being long and thin like that simplifies the thermal design as well,
because it is enclosed, not exposed like an M.2, right?
Yes, it is enclosed.
Yeah, and so the case is a heat sink, and if I only have two circuit boards, I can cool both of them.
Well, that would explain then why the rack that I saw with those in it actually had gaps in between the things instead of having them pressed up against each other. Oh, yeah, no, you would definitely want to do it.
One of the problems with M.2 format drives has been cooling because it's just bare and there's no heatsink space.
And in a U.2 or a 3.5-inch form factor,
you have to stack multiple circuit boards to use all the volume.
And so whatever's in the middle just gets hot.
Yeah, that makes sense.
So long and skinny, a lot of surface area makes sense.
And I don't remember what, what Intel said.
The, the amount of storage was that you'd be able to get into a rack.
But they had a rack that they wheeled out during their keynote.
And they said, you know, here's – you need this entire rack with today's densest hard drives to be able to get something, I don't know, 512 petabytes or something like that.
And then they said –
512 petabytes?
Is that what you said?
I can't remember what the number is.
Let's assume it's a petabyte.
Okay.
It's a nice round number.
Okay, fine.
And, you know, Intel's point was that you could take this one RU thing
and it would have the same amount of storage as an entire rack of hard drives.
Mm-hmm. Which is interesting. You know, I don't hear people arguing SSDs against hard drives so much anymore, but, you know, every so often
you run into one, and this was one where Intel was doing well.
And then there's the, you know, it's been five years now, and every year somebody's told me next year SSDs will be cheaper than hard drives.
Yeah.
And I kind of laugh. But that day is becoming visible.
Yeah, more and more so, especially with QLC and all this other stuff, TLC, the 3D, you know, 96- and 128-layer stuff.
I figure for practical purposes, when the SSD-to-hard-drive cost ratio comes down to about 3 to 1, justifying hard drives becomes difficult.
Yeah, but
unfortunately they're both being driven by the same
mechanism. Moore's Law is
driving 30% per year out of the cost of NAND flash on average.
And Kryder's law drove 30% out of the cost of SSDs for a very long time.
But A, that seems to have slowed substantially.
Well, with the shortage.
Well, the HDD shortage?
Oh, you said SSDs.
Yeah, no, that, well, okay, so Kryder's law has been driving 30% a year out of the cost of hard drives for a long time.
But that's slowed.
And the returns are starting to require some very expensive technology.
We've been talking about HAMR for three or four years, but haven't actually seen drives.
Helium made a big step function.
Well, helium let you put one more platter in, but it was a one-time shot. You know, and Western Digital and Seagate are going to have to invest huge money to get
into something like bit-patterned media to keep up with that. Because the cost of a hard drive is
100 bucks; the question is how much capacity you get for that 100 bucks and can manufacture at a
profit. Well, you know, the 100 bucks is this: the heads cost so much, the spindle motor costs so much, the casting costs so much.
And so, you know, nobody makes 20 megabyte hard drives because they would cost the same $100 as one terabyte hard drives.
Why would anybody buy them?
Yeah. I was in one session with a vendor and they talked about, you know, there's just not enough
fab volume capability to really replace all the disk drives that are going on. But what's
happening here is the fabs start to increase production or more fabs come online over time
as they're able to generate enough cash to do that sort of thing. There will come a point in
time where I hesitate to say it.
Discs will no longer be in our storage future.
Yeah, I'm not expecting to be alive when that happens.
Yeah, let's hope neither of us are.
Any of us.
Being graybeards, we're short on life anyways.
I don't know.
I figure it's seven to ten years from now.
Yeah, no, it's going to be a lot longer than that, I'm sure.
Yeah, no, it's going to be a lot longer than that, I'm sure. Yeah, yeah.
Well, I mean, before they're dead, that's a whole other thing.
But, you know, they're not something I consider for new designs.
I'm thinking seven to ten years from now, the number of use cases for spinning disks is going to become pretty minimal.
Actually, I look at it and I say it's a useful part of the memory storage hierarchy.
Because it's got that upper space, I guess, between it and tape.
It's cheaper and slower than the next faster thing.
Yeah.
Well, and then we come to tape, which is the Greyjoy of the storage industry.
Because that which is dead will never die.
And maybe disk
will never die in that case.
It's possible.
It just gets relegated to a different portion of
the application space, what it's used for.
I mean, tape has kind of moved from backup to
archive to deep archive
kinds of things. Maybe disk
will follow it down that path.
The last thing I thought was kind of interesting at Flash Memory Summit was the NVIDIA session. A CTO from NVIDIA talked about AI, and at the end of that session, he pitched their new AI supercomputer thing.
What's a GPU guy doing at a Flash Memory Summit?
I was struggling to try to figure out what that connection was.
Yeah, I wasn't able to see that keynote myself, and I wish that I had.
But the idea was that this is the next thing that's going to create demand for flash memory.
Yeah. And everybody's talking about IoT and AI and autonomous vehicles, all of these things that they're hoping consume an awful lot of hardware.
But Skynet. From all this we get Skynet.
Yeah, I guess.
I think the thing that kind of drove it from my perspective is that all this deep learning and machine learning depends on vast amounts of data and being able to access that data relatively quickly.
And so in the past, it was kind of computationally bottlenecked.
But now with GPUs with thousands and thousands of cores
and getting faster GPUs,
computation is no longer the bottleneck.
It's starting to become, God forbid, data.
Something that a lot of people were talking about at the show was moving the compute to the data rather than moving the data to the compute.
Yeah.
And I did go to the suite of a company that used to call itself NextGen Data. Now they call themselves NGD. They were showing in their suite that they had an SSD with a more powerful processor in it than your typical SSD has for a controller, that that processor is programmable and has a standard Linux interface, and that you can put programs onto the disks that will allow them to do functions that are coordinated by your server. But the server load will be extraordinarily small while the disks are off doing sorts and searches and that kind of stuff by themselves.
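The compute-to-data idea being described can be sketched with a toy simulation, where a "smart drive" runs a search locally and ships back only the matches. All class and method names here are illustrative assumptions, not NGD's actual interface:

```python
# Hedged sketch of in-situ processing: each simulated drive holds a shard of
# the data set and runs the search itself, so only matching records -- not
# the whole shard -- cross back to the host. Names are hypothetical.

class SmartDrive:
    def __init__(self, records):
        self.records = records          # the shard stored on this device

    def search(self, predicate):
        """Run the filter on-device; ship back only the hits."""
        return [r for r in self.records if predicate(r)]

# Host side: scatter the query to every drive, gather the small result sets.
drives = [
    SmartDrive(["alpha", "beta", "gamma"]),
    SmartDrive(["delta", "beta", "epsilon"]),
]

hits = [r for d in drives for r in d.search(lambda r: r == "beta")]
print(hits)  # only the matches traverse the bus, not the full shards
```

The host's load stays small because the filtering work, and most of the data movement, stays on the devices.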
Yeah.
I don't know.
I mean, the challenge is, the AI stuff is all, you know, heavy floating point.
He was talking teraflops.
And I don't know if you can put that on an SSD.
Yeah, I wasn't even thinking about that problem.
I was thinking about, you know, okay, so each SSD has to have a substantial piece of the data set,
not, you know, RAID stripes or erasure code stripes.
Right.
And so it's kind of like, okay,
what's that do for everything we've done in storage in the past couple of years?
I guess it doesn't lend itself to any kind of redundancy, does it?
No.
No, you know, unless you're, you know, three-way replicating
and then all three of those are trying to solve the problem simultaneously, at which point –
It's going to cost a lot.
Yeah.
Well, I think you could have the cores – I don't even want to go down this architectural discussion here.
Yeah, it's not worth it.
It gets pretty deep pretty quick. Okay, so the last big question I have from the chip side is, you know, when are the Chinese coming?
Oh, yeah.
I keep reading about Chinese companies building flash fabs, but I don't read about them actually developing any products.
So how are we doing here?
They're trying to go it alone right now. I'm expecting them to give up on that at some point and do the exact same thing that Japan did when it took a lot of the chip business away from the US in the early 80s, that Korea did when it took a lot of the business away from Japan in the early 90s, and that Taiwan did after that. When each of those countries put together their initiative to coordinate funding of the semiconductor business, the companies basically bought all the factories and provided all the labor force that was needed, but they didn't have the technology.
And so in the case of Japan,
these companies went to U.S. companies and said,
look, we'll sell you wafers for a cheaper price than you can build them yourself.
All you have to do is show us how to run the fab,
how to run the manufacturing plant to make that happen.
And the U.S. companies thought that that was a good business deal.
They signed like five-year contracts, and they were partners for the five years. And then they went
separate ways and Japan became huge in semiconductors. The Japanese companies helped the
Koreans get into the market the same way with the Koreans supplying cheaper than normal wafers and
the Japanese companies supplying technology. And then both Americans and Japanese companies
helped Taiwan get into the business.
And so I'm just expecting to see something like that.
After China finds that they can't do it by themselves,
they'll probably go and offer really cheap wafers to somebody.
And basically, it's one of these game theory things: if China offers cheaper product to Samsung, Hynix, and Micron to make DRAMs with, and one of them takes that business, then they're going to have a competitive edge over the other two, because they'll be able to get lower costs.
Oh, it's a game theory problem.
Very much so. It's always a game theory problem.
Yeah, but I mean, it's the stupidest thing in the world
because you're trading short-term gain for long-term having them eat your lunch.
Yeah, yeah.
But as soon as someone makes a move like that, it's not just their lunch that's going to get eaten. It's their competitors' too. And so everybody's incented to be the bad guy first.
Yeah.
Yeah.
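The incentive structure they're describing is essentially a prisoner's dilemma. A toy payoff table, with made-up numbers purely to show why defecting first dominates, might look like:

```python
# Toy prisoner's dilemma for two incumbent DRAM makers deciding whether to
# take cheap Chinese wafers ("defect") or refuse ("hold"). Payoffs are
# invented for illustration: undercutting a rival who holds wins big, but
# mutual defection is a race to the bottom.
PAYOFF = {  # (my_move, rival_move) -> my payoff
    ("hold",   "hold"):   3,   # status quo margins
    ("hold",   "defect"): 0,   # rival undercuts me
    ("defect", "hold"):   5,   # I undercut the rival
    ("defect", "defect"): 1,   # everybody's lunch gets eaten
}

def best_response(rival_move):
    """Whatever the rival does, compare my two options and pick the better."""
    return max(("hold", "defect"), key=lambda m: PAYOFF[(m, rival_move)])

# Defecting is a dominant strategy against either rival move:
print(best_response("hold"), best_response("defect"))
```

Since defecting pays better no matter what the rival does, everyone is incented to be the bad guy first, exactly as Howard says.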
And, you know, I'm expecting that to happen. I've seen China really, you know, blow the bottom out of the photovoltaics market by over-investing in it. And now they're going to be over-investing in semiconductors. And so that market is probably going to have a very, very long oversupply.
China won't be the cause of the oversupply. We're still expecting that to be caused by this 3D NAND breakthrough that we're saying is going to happen in the middle of 2018. But sometime in 2020 or so, China will become a pretty important factor in semiconductors. And so we could have an oversupply and constantly declining prices for a pretty long period, like five years.
Huh. Does that technology ever come back home? I mean, we're still doing semiconductor development here, right? Fabs and stuff in the States.
You know, it's interesting, but there's this huge cultural element to it: in Japan and in Korea, there's a bigger focus on market share than on profitability. In the United States, Europe,
and Taiwan, there's a much bigger focus on profitability. And I'm expecting for China to be
on the market share side of things. And so what ends up happening is the United States
ends up being a place that does an awful lot of development of technologies that then get manufactured in other countries
where the cost of labor is cheap and where the focus on profitability is not as great.
I'm always happy to see a buyer's market, but damn.
It should be good.
You know, if you've got anybody who's actually buying chips or buying SSDs listening in, probably the most challenging thing during times like that is that if you go and buy a whole bunch of inventory during a shortage and then you're holding on to that inventory as the price goes down, you end up having inventory write-downs, and that can be pretty tricky with upper management.
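The write-down exposure Jim is describing is simple arithmetic; the quantities and prices below are invented, just to make the mechanics concrete:

```python
# Hedged illustration of an inventory write-down: buy during a shortage,
# then mark the stock down to market as prices fall. Numbers are invented.
units_bought   = 10_000          # SSDs hoarded during the shortage
purchase_price = 120.00          # dollars per unit at shortage pricing
market_price   = 90.00           # dollars per unit after supply recovers

# Lower-of-cost-or-market accounting: carrying value drops to market, and
# the difference hits the income statement as a write-down.
write_down = units_bought * max(0.0, purchase_price - market_price)
print(f"write-down: ${write_down:,.2f}")
```

A 25% price decline on hoarded stock turns into a sizable charge, which is why upper management gets touchy about it.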
Yeah, luckily end users don't face that problem, but
they do tend to start hoarding because delivery times
get extended. Yeah. I suppose it comes baked into those storage
farms at some point. Yep. That's it. All right, gents, well,
it's gotten to the point where we're almost at the end.
Jim, is there anything else you'd like to say to our
listening audience? Well, there was something actually that Howard brought up quite some time ago, and I said I'd address it later on, and that is that part of the reason for Optane, or 3D XPoint, to exist is because of its persistence.
And there's this big move afoot in SNIA, the Storage Networking Industry Association, and JEDEC and other standards organizations, the Linux community and all that, to work on making persistent memory a part of the overall memory storage hierarchy.
This is something that is of crucial importance to people in transaction systems
where they're worried about power failures wiping out their database.
And yet, in order to make sure that the data is written to an SSD, or worse, to a hard drive, it ends up being a very, very time-consuming protocol. And persistent memory offers an awful lot more speed at a very low price for those things. You know, it's cool seeing what's going on
with all of that. And I think that we're going to look back on this year and next year as being this time when all of a sudden there was this huge change in the adoption of persistent memory,
even though today it's a more expensive technology than it was before, through things that are called NVDIMMs and eventually through Intel's XPoint memory. That gets really interesting when we finally have XPoint on DIMMs that's memory addressable, if not literally byte addressable.
Yeah, and then it doesn't lose its contents.
Yeah, unfortunately that didn't happen with the latest scalable Xeon announcement,
so we may be waiting another 18 months for the platform support.
They're just having problems with the new products, you know, the new product problems that you kind of have. I was at a vendor session where they said that as a memory tier, let's call it, it doesn't necessarily have to be byte addressable. You can byte address, and I guess bit address, DRAM, and maybe you can't bit address this anymore, but you could offload data as a block to this persistent memory. So they're talking about it being a block read and write to the back end of this tier. So it's interesting.
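The block-style commit to a persistence tier that the vendor described can be sketched with a memory-mapped file standing in for the persistent-memory device. This is a rough analogy only: real NVDIMM code flushes CPU cache lines rather than calling msync, and the file name here is made up.

```python
# Hedged sketch: stage data in ordinary memory, then push it as a block to a
# persistence tier. A memory-mapped file stands in for the persistent-memory
# device; on real NVDIMMs you'd flush CPU cache lines instead of msync-ing.
import mmap
import os
import tempfile

BLOCK = 4096
path = os.path.join(tempfile.mkdtemp(), "pmem.img")  # hypothetical "device"
with open(path, "wb") as f:
    f.truncate(BLOCK)               # size the backing store

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), BLOCK)
    payload = b"commit-record-0001"
    pm[:len(payload)] = payload     # byte-addressable store into the tier
    pm.flush()                      # block-style persistence point (msync)
    pm.close()

with open(path, "rb") as f:         # data survives reopening the "device"
    print(f.read(len(b"commit-record-0001")))
```

The interesting property is exactly what the discussion lands on: you write at memory speed, then pay the persistence cost once per block rather than per byte.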
Yeah, I don't know.
I don't know how it's all going to play out.
Yeah, it'll be an interesting thing to watch
as it does play out, though.
All right.
All right, gents. Howard, any last questions for Jim?
No, I think we've covered it.
All right.
Well, this has been great, Jim. Thanks
very much for being on our show today.
Always a pleasure. Next
month, we'll talk to another system storage technology
person. Any questions you want us to ask, please let
us know. That's it for now. Bye, Howard.
Bye, Ray. Until next time.