Storage Developer Conference - #107: The Long and Winding Road to Persistent Memories
Episode Date: August 26, 2019...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the
SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage
developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage
Developer Conference. The link to the slides is available in the show notes at snia.org/podcasts.
You are listening to SDC Podcast, episode 107. So I'm Tom Coughlin. I'm Jim Handy.
And we're going to talk to you about the long and winding road in persistent memories.
So this is our sort of Click and Clack, or Mutt and Jeff, or Tom and Jerry.
Anyway.
Big button.
Yes, the big button.
So we'll be talking to you a bit about persistent memory types.
What are some of the memories that we're talking about here?
Drivers in the market — you heard some of this when Dave Eggleston spoke in his keynote, and
we'll be talking about that as well — some of the support requirements, and then finally
some outlook from a report that Jim and I just finished recently.
So first, let's look at persistent memory types.
And the ones we're going to look at in particular are phase change memory.
Despite what Intel says, we still think 3D XPoint probably falls into that category.
Magnetic random access memory, resistive RAM, ferroelectric random access memory, and then there are others as well.
A lot of those are going to be similar to the ones that Dave mentioned,
but we threw one in there that he didn't get into.
So 3D crosspoint — or, you know, other types of phase change memory — has actually been around a long time.
This is an Electronics magazine article from September 28, 1970,
talking about Ovshinsky's phase change memory technology —
the first amorphous semiconductor memory, down there.
So a lot of these things — and this is actually kind of an interesting thing in terms of history —
sometimes take a long time before they reach the point, if they ever do,
where they become a viable entity, something that people can try to commercialize.
And part of the reason is the development of the technology.
The other is the demand for what it can provide.
So, and this is actually one of Jim's slides, but it's another way of looking at kind of a hierarchy of different types of technologies for memory and storage.
In this case, we've got a log-log scale for the price per gigabyte of the different technologies.
So we've got the caches in the processor — L1, L2, L3 — plotted versus price per
gigabyte, and then the bandwidth, the data rates, effective data rates you can get out of these
technologies. So these are very fast, but they're rather expensive. And the faster they are in
general, the more expensive these technologies are. Then we have DRAM here, down here in the middle.
And then the solid-state drives, hard disk drives, and tape down there.
So tape or hard disk drives are less expensive, but they also in general provide lower data
rates to access the information.
So the point of this particular slide is that there's some space between DRAM
and SSDs where some kind of persistent memory may be added as a way of augmenting these other technologies when used in combination with them.
And so in this case, we're showing that being a place where 3D crosspoint would fit in.
And that's kind of another way of looking at what Intel was saying,
where they think their 3D crosspoint technology fits in.
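The gap Tom describes can be sketched numerically. The dollar-per-gigabyte and bandwidth figures below are rough order-of-magnitude placeholders of my own, not numbers from the talk or the slides; the point is only that a tier priced between DRAM and NAND naturally slots into the hierarchy:

```python
# Illustrative sketch of the price/performance hierarchy discussed above.
# All $/GB and bandwidth figures are hypothetical placeholders, NOT data
# from the talk or the report.
tiers = [
    # (name, approx $/GB, approx bandwidth GB/s) -- all invented
    ("SRAM cache", 1000.0, 1000.0),
    ("DRAM",          5.0,   20.0),
    ("NAND SSD",      0.2,    2.0),
    ("HDD",           0.03,   0.2),
    ("Tape",          0.01,   0.1),
]

def find_gap(tiers, price):
    """Return the pair of adjacent tiers whose $/GB bracket `price`."""
    for upper, lower in zip(tiers, tiers[1:]):
        if upper[1] > price > lower[1]:
            return upper[0], lower[0]
    return None

# A persistent memory priced around $1/GB would land between DRAM and SSD:
print(find_gap(tiers, 1.0))  # -> ('DRAM', 'NAND SSD')
```

This is exactly where the slide places 3D XPoint: cheaper per gigabyte than DRAM, faster than NAND.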
So what is a crosspoint?
Well, a crosspoint is basically a memory structure
where I've got what are called word lines and bit lines.
And where they intersect is where,
either with a voltage or a current,
I can access a particular memory cell.
So the memory cell is the material in between here.
So in this case we have a current that's going down here —
that's the green line — which is, in this case,
supposed to be reading that cell, getting information out of it.
And then we also have the possibility of some of that current actually leaking out. That's the
red line here, giving you what's called sneak paths. And so the material that makes up this structure here has both a memory element
and then oftentimes a selector technology, something that basically tries to prevent
these currents from causing problems or erroneous data in adjacent cells.
And crosspoints can be stacked. This is actually that Intel-Micron 3D XPoint memory:
they've got basically two levels
of these crosspoint arrays on top of each other,
so there's a certain amount of three-dimensionality
that they've implemented.
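The word-line/bit-line read and the sneak-path problem can be sketched as a toy resistor model. This is a hypothetical illustration, not Intel's actual array design; the resistance values are invented, and a real array includes the selector device precisely to block these paths:

```python
# Toy model of crosspoint addressing and sneak paths (no selector device).
# Resistances are invented illustrative values, in ohms.
R_LOW, R_HIGH = 1e3, 1e6

def read_with_sneak(array, row, col):
    """Measured resistance at (row, col): the selected cell in parallel
    with the worst three-cell sneak path (out along the selected word
    line, down another cell, back along the selected bit line)."""
    r_sel = array[row][col]
    sneaks = []
    for r in range(len(array)):
        for c in range(len(array[0])):
            if r != row and c != col:
                # path: (row, c) -> (r, c) -> (r, col)
                sneaks.append(array[row][c] + array[r][c] + array[r][col])
    r_sneak = min(sneaks)  # worst offender
    return 1.0 / (1.0 / r_sel + 1.0 / r_sneak)

grid = [[R_HIGH, R_LOW],
        [R_LOW,  R_LOW]]
# Reading the HIGH cell at (0, 0): three LOW neighbors form a sneak path,
# so the measured resistance collapses toward ~3 kOhm instead of 1 MOhm.
print(read_with_sneak(grid, 0, 0))
```

That collapse is why each cell pairs the memory element with a selector that cuts off half-selected current paths.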
Now, phase change memory in general —
the classical one is the chalcogenide materials,
which exist in both a polycrystalline form as well as an amorphous form.
And what you do is — so here we have a metal interconnect.
We have a polycrystalline chalcogenide.
You apply some heat to that, and you create a bubble of an amorphous material,
which has a different resistivity than the crystalline material.
So by having those two different resistive states, we're creating
effectively a memory device.
And the way that works basically is
if we do
oops, there. That's not working.
So if we apply a higher temperature over a short period of time,
we can create an amorphous region.
That's what's conventionally called the reset operation, putting the cell into its high-resistance state.
If we go to a lower temperature,
but we apply that temperature over a longer period of time,
we give the amorphous material time to crystallize,
and that's the set operation, returning it to the low-resistance state.
So we've got these two different states and ways of accessing them.
They differ between one technology and another in how we do this,
but with the phase change memory, that's what's going on.
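The two programming pulses can be sketched as a toy function. The temperature and duration thresholds below are invented round numbers, not real chalcogenide (GST) parameters:

```python
# Toy model of phase change programming pulses, per the description above.
# Thresholds are made-up illustrative values, not real material data.
MELT_T = 600    # deg C, hypothetical melting threshold
CRYST_T = 350   # deg C, hypothetical crystallization threshold

def apply_pulse(temp_c, duration_ns):
    """Return the resulting phase for a single heating pulse."""
    if temp_c >= MELT_T and duration_ns < 50:
        # hot and fast: melt, then quench into the
        # amorphous high-resistance state
        return "amorphous"
    if CRYST_T <= temp_c < MELT_T and duration_ns >= 100:
        # cooler but longer: give the material time to
        # crystallize into the low-resistance state
        return "crystalline"
    return "unchanged"

print(apply_pulse(650, 20))   # short hot pulse  -> amorphous
print(apply_pulse(400, 200))  # long warm pulse  -> crystalline
```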
The interesting thing about this is the current only goes in one direction.
So the kinds of phase change memory that have existed — we've already seen
products in the past — are these NOR-compatible phase change memories. Both Samsung and Numonyx,
now part of Micron, were shipping some of these products in the past. They're both obsolete at this
point. These materials — you know, people have been working on this
since the late 60s, early 70s,
so chalcogenides are a pretty well-understood material.
There's a single current flow direction,
so this is a unipolar technology.
We don't have to change the direction of the current flow
to be able to access the information
or to write information.
The selector device is not very complicated as a consequence of that.
Today, there are not really many commercial products out there;
it's largely experimental and university work
in this classical phase change memory technology.
And this one is Jim's.
Go ahead, Jim.
Indeed.
So Intel is producing 3D cross-point memory
based on this phase change.
And something that I think is very telling about this
is that a couple of years ago, three years ago actually,
when they first introduced it at the Flash Memory Summit,
a month later I said, Intel's probably going to have two years of losses by producing phase change memory.
Now this is the profitability, this is the net profit margin of companies who make NAND flash.
This isn't exclusively their NAND flash profits because those numbers don't exist.
So some of this is polluted a little bit by DRAM profits. And there's another thing that factors
in there, and that is that Intel tooled up a new NAND flash plant in Dalian, China, which also
impacts their profitability. But you look and they've basically lost a lot of money.
Oops, Jim, Jim.
Oh, Stephen Bates had this problem on Monday.
I hate it when that happens.
There were only choice words for it.
I don't know if you guys remember what the words are,
but the word he used was abracadabra.
He needs to learn that word.
Okay, so anyway, all I was saying was that Intel is losing money.
Some of this is from Dalian.
Some of it is from 3D Crosspoint.
But still, one of the things that I'm going to be stressing later in the presentation
is that you have to make a whole lot of something to be able to get profitability out of it,
and 3D Crosspoint isn't there yet.
So, you know, we have a report that we put out
on the 3D Crosspoint memory a while ago
that is now available,
and it talks about 3D Crosspoint,
why, how, when it's going to happen. And the
reason Tom and I are up here is that we sell reports for a living. So if you want to know
anything about 3D crosspoint, then go to the objective analysis website and have a look at it.
But now I'm going to hand it back over to Tom so he can take you through the next few slides.
Thank you, Jim.
Okay, so we're going to talk a little bit more
about some of these technologies.
The next one is magnetic random access memory.
And so this is basically using magnetic materials
in order to be able to store information
as a non-volatile memory.
So the first type of MRAM —
the one that's been the most popular
or the most used to date,
especially by a company called Everspin, which you
heard about before in Dave's talk — is
called toggle MRAM. It's
basically an offshoot of the hard disk drive
read-head transducer
design, where you have
in-plane magnetic materials —
these are actually
metallic magnetic materials —
and an insulating layer between them. The insulating layer is very thin, and as a
consequence I get tunneling electrons that
go through it. And one of the magnetic layers is
actually fixed in its magnetic orientation,
oftentimes by an antiferromagnetic material that is adjacent to it.
The other one is free to rotate.
If both of those layers are oriented in the same direction, then that's a low-resistance state.
It's easy for those electrons to cross the tunneling barrier and move from one of the layers to the other.
So that's basically your reset, or default, state.
In the case where they're actually in opposite orientations,
that becomes a high-resistance state,
and then it's more difficult for the electrons to tunnel from the one layer to the other.
So this is called a magnetic tunnel junction:
I have a magnetic material, an insulator, and then another magnetic
material. The magnetic properties, as well as the thickness of the barrier material,
determine the tunnel junction resistance. This is the technology that has had the most volume of magnetic RAM
being made, but it has issues compared to the more recent technologies, such as spin transfer torque,
or STT, MRAM. And that's the next one we'll talk about here.
So spin transfer torque MRAM
actually deals with some of the scaling issues that we ran into
with this in-plane toggle-mode MRAM —
particularly if, instead of in-plane orientation of the magnetic materials,
you go to perpendicular orientation.
So the magnetic orientation is out of the plane.
And that's what the modern spin transfer torque
magnetic tunnel junctions do.
So again, if the two magnetic layers
are in the same orientation, I'm in a low resistance state.
It's easy for the electrons to tunnel again
through this oxide tunnel barrier here.
In the case of where they're anti-parallel to each other,
they're going in different orientations.
Now you have a more difficult time for the electrons to be able to tunnel through,
and as a result, the resistance is higher.
So MRAM effectively is one type of a resistive memory device,
only it's using magnetics in order to change the resistive state
and therefore be able to store information.
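The two resistance states of the tunnel junction map directly onto a stored bit. A minimal sketch, with an assumed parallel resistance and a hypothetical 100% tunnel magnetoresistance (TMR) ratio:

```python
# Sketch of magnetic tunnel junction read-out, per the description above.
# The resistance value and TMR ratio are illustrative assumptions.
R_PARALLEL = 2_000   # ohms, free layer aligned with the fixed layer
TMR = 1.0            # hypothetical 100% tunnel magnetoresistance

def mtj_resistance(parallel: bool) -> float:
    """Parallel layers tunnel easily (low R); antiparallel layers
    tunnel poorly (high R): R_ap = R_p * (1 + TMR)."""
    return R_PARALLEL if parallel else R_PARALLEL * (1.0 + TMR)

def read_bit(resistance: float) -> int:
    """Compare against a midpoint reference to recover the stored bit."""
    threshold = R_PARALLEL * (1.0 + TMR / 2.0)
    return 0 if resistance < threshold else 1

print(read_bit(mtj_resistance(parallel=True)))   # -> 0 (low resistance)
print(read_bit(mtj_resistance(parallel=False)))  # -> 1 (high resistance)
```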
So Everspin is going to be making stand-alone chips
with this technology.
They've actually shipped some parts.
They're actually being used in that IBM drive
that Dave Eggleston talked about during the keynote.
But this is also the technology that's being talked about
by all the folks who are doing embedded MRAM:
they're going to do the spin transfer torque, perpendicular-orientation MRAM.
So the status: it was once considered a DRAM replacement.
The big issue there is getting the price down to where it is truly competitive,
and that requires getting the manufacturing volumes up to where they could do that.
And we're quite a ways away from that.
There's a possibility — and actually, in our report, toward the end of our projection period,
there's a chance, if volumes get up, that it could happen.
But right now it's probably going to be threatening NOR,
and also SRAM, for different reasons.
So there's only one chip supplier for the standalone parts at this point, Everspin.
They have shipped over 70 million units.
A lot of these are very small capacity devices that are used for caching and buffering.
But they're very fast and they're persistent and they don't wear out. And so that's
why they're popular for these applications. They're in the process of converting from
this toggle MRAM to spin transfer torque MRAM, and they developed a partnership with GlobalFoundries
to make 300 millimeter wafers both for Everspin's own production,
but also as a process that can be used
for embedded memory with MRAM in it.
So they have a whole line of products
with embedded MRAM.
And by the way — because I was at
the GlobalFoundries technology meeting yesterday —
they also apparently have a resistive RAM option,
not magnetic, for an embedded memory as well.
Other foundries have basically said that they're doing things in this area,
either in terms of providing IP or actual production.
A lot of the major, it seems like most of the major foundries
are talking about having embedded MRAM options
that people could use in their embedded products.
So TSMC, UMC, for example, and Samsung.
Also, there are people with IP like Avalanche and Spin Transfer Technologies and TDK,
and Toshiba has also made some mention of using these technologies.
Today's markets are special environments right now for the standalone products,
like space or high uptime systems or caching and buffering applications.
But potentially, if this becomes part of an embedded process, the volume could go up.
That would bring down the total price of putting the MRAM in products, and that could lead
to even more applications potentially becoming one of these major elements in the new non-volatile
memories for persistent memory applications.
So let's talk now about ferroelectric memory.
So ferroelectric memory basically involves materials that
act kind of like magnets do, only with electricity — with static electricity.
They'll create a spontaneous charge polarization inside the material.
We call those ferroelectric materials.
And they actually have hysteresis loops
if you run them through applied fields,
similar to those you get with magnetic materials.
So it's kind of an analogy to the magnetic materials.
So there's a long history
of products being made with ferroelectric memories. And actually, the commercial products
are very specialized applications that have been available for a long time. Ramtron, now part of Cypress,
has partnered with Fujitsu with the idea of some higher-volume applications.
PZT, which is lead zirconate titanate,
is the conventional material that people have used
for a long time.
There are versions of this which are actually kind of interesting
because they're flexible: thin-film, organic
ferroelectric memory devices are out there —
you know, limited capacity,
but you can put them on a flexible substrate.
That could be interesting for some applications.
Another company in this space is Symetrix.
Hafnium oxide is a material that's actually used
in a lot of CMOS semiconductor processing.
It turns out that there is a form of hafnium oxide
that actually can be quite ferroelectric,
and that showed up in the industry in the last few years.
There's a company in Dresden, Germany
that is trying to develop ferroelectric
RAM products using hafnium oxide for that. Today's markets: RFID, and low write-current
applications. A flexible ferroelectric RAM device might be used for wearable devices,
things of that sort, that may have to undergo a lot of flexing
or where the flexibility is of value.
And then the general topic of resistive RAM,
which is a big field.
As I mentioned before, in a sense,
you could say that the magnetic random access memory
is a resistive memory.
Anything that changes resistance in the device
so that I can get two separate states
falls into this category.
In fact, phase change memory is one of those,
and, as I said, MRAM essentially is too.
Some of the more famous ones are the memristor
that HP was working on and announced a few years ago.
There's also the oxygen-vacancy products,
and also CBRAM, conductive-bridge RAM.
And then another one we're going to talk about,
which fits into this general category, is carbon nanotubes.
Okay, so let's look at some of these each in turn.
So resistive RAM, it's any memory, again, with a resistive bit.
In other words, a change in resistance will give you a 1 or a 0.
And it's sort of a generic picture.
I have a top electrode, a bottom electrode,
and then I've got my switching media.
And then I have something that's happening in the switching media that can cause the resistance to either be higher or lower.
Maybe it's movement of ions.
Maybe it's oxygen vacancy movements.
Maybe it's filaments forming
Various sorts of things can do that.
And so the issue with a number of these processes
is that it's a bipolar process —
in other words, I'm going to drive current
in both directions
through the memory device to do different things.
In this case, for instance,
I've got an initial state over here, so this is a high-resistance
state.
Whatever the bubbly conductive things are there, they're not going all the way from
the top electrode to the bottom electrode, so the resistance is fairly high.
Now, I apply a current there in that set mode on the left, and then I can end up dispersing
some of those conductive things
in between the top electrode and the bottom electrode.
Now I have a better conductive path
between those two electrodes.
I'm in a low resistive state.
So I can read that.
And then if I want to erase it
and go back to where I was,
then I have to apply current in the other direction.
Now I'm driving those little conductive bubbles
and forcing them back up to the top electrode from the bottom electrode.
That's a simplified view of what's going on here.
And now I'm back to a high-resistance state.
So I have a low-resistance state and a high-resistance state,
but to go from one to the other, I have to apply current in one direction
and then in the other direction.
So it's a bipolar device versus, for instance,
the phase change memory, which only required
current to go in one direction.
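The bipolar set/reset cycle just described can be sketched as a small state machine. This is a toy abstraction of the filament behavior, not a physical device model:

```python
# Toy state machine for the bipolar switching described above: current
# in one direction forms the conductive path (set), current in the
# opposite direction disperses it (reset).
class BipolarReRAMCell:
    def __init__(self):
        self.low_resistance = False   # start in the high-resistance state

    def apply_current(self, direction: str):
        if direction == "forward":    # set: filament bridges the electrodes
            self.low_resistance = True
        elif direction == "reverse":  # reset: filament is pushed back up
            self.low_resistance = False
        else:
            raise ValueError("bipolar device: direction must be forward or reverse")

    def read(self) -> int:
        # contrast with unipolar phase change memory, where both
        # operations use current in a single direction
        return 1 if self.low_resistance else 0

cell = BipolarReRAMCell()
cell.apply_current("forward")
print(cell.read())   # -> 1 (low-resistance state)
cell.apply_current("reverse")
print(cell.read())   # -> 0 (back to high resistance)
```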
So here's the oxygen-vacancy,
or oxygen-depletion, memory.
To some extent, the mechanisms are varied
or not well understood, depending upon the technology.
A company called Unity pioneered this technology. They were acquired by Rambus,
and consequently it was shut down. There's a lot of research underway at universities and
memory makers on trying to make this into a reality. So this thing has possibilities,
and it's one of the big resistive RAM technologies that's being talked about.
The Memristor is the thing that HP did a few years ago.
They said it was a missing circuit element.
Talking about a paper that was done, I think, in the 1970s,
talking about there being various sorts of basic circuit elements,
and something that had the properties like this memristor showed
was a missing circuit element.
So HP is the only one that basically made this —
although, I think it was SK Hynix
that they were at one time talking with about manufacturing it,
but I don't think anything ever came of it.
A lot of people claim it's really just one of these oxygen vacancy devices,
and it was supposed
to be the basis of the persistent memory in HP's The Machine. I hope there's not anyone
from HP here — I don't want to insult them — but it did end up delaying that project. When
they finally did it a couple of years ago, it was mostly with solid-state drives, or rather flash memory, that they were
using. So the future of this memristor technology is unclear at this point,
but it's kind of famous because they made
a lot of publicity about it at the time.
So here's the conductive-bridge resistive RAM devices.
It may, for instance, be silver filaments in glass.
Adesto is currently shipping a limited volume
of products like that.
Other companies are trying to get in there: Contour was acquired by
Western Digital, and Crossbar
is licensed to multiple foundries.
Crossbar — that company Dave was mentioning earlier —
is also doing some work on neural-network-type
technology using
resistive memory devices.
It's basically one of these conductive-bridge devices here.
Today's markets, for instance, high radiation, medical, and space applications.
So a lot of these memories, as you can see, are used in very specialized niche markets.
In order to get to the higher volume, get the cost down,
they have to become more general markets.
Now, carbon nanotubes.
Carbon nanotubes and buckyballs and all these things are really wonderful, wonderful things.
They've been of great scientific interest and technological interest for a long time,
including the idea of using these as memory devices.
So there's a company called Nantero that has been around for a while,
actually has worked with a lot of companies, provided licenses and stuff.
The most recent one was a partnership with Fujitsu with the idea of making these in a higher volume.
Time will tell; we'll have to see.
I think Gervasi spoke earlier at the conference.
There also is a lot of university research on this.
It holds the promise of potentially making very small — lithographically small —
devices that could provide potentially dense memories.
So all, sorry, this one is yours.
So all of these memories have something in common with each other.
They're all a single-element cell,
so they end up being able to be scaled very small as opposed to something like an SRAM or an E-squared PROM,
which uses multiple elements.
Some of them use a diode select mechanism.
MRAM uses a transistor to select, which automatically makes it larger.
If you can use a diode to select,
then that allows you to use that cross-point
architecture and be able to stack cells. And so because of the fact that they can scale so small,
then there's a possibility that the number of bits on a chip can scale past DRAM and
NAND flash. And if that can happen, then the cost can drop below DRAM and NAND flash. So this is
why the technology is so interesting to these people.
In fact, wasn't there a paper we just saw yesterday?
A Nature paper, I think, from some folks at Stanford.
Yeah, yeah.
There was a Nature paper about MRAM that said that some people at Stanford had found a way to get past using the transistor for the select device inside an MRAM.
And I haven't read the paper.
I just saw something in the storage newsletter about it.
So I look forward to learning more about that.
All of these technologies are non-volatile.
One of the reasons why people put up with DRAM and SRAM is because of the fact that they're fast.
But if they could be persistent, that would be so much better.
And so "non-volatile" is the memory chip guy's way of saying persistent —
and I am a memory chip guy.
They also write in place.
They don't have this really messy block erase
and, you know, page write.
And they have read and write speeds
that are more similar to each other.
They're not off by a couple of orders of magnitude
like NAND flash; they're more in line with each other. And so because of that, they're all very
attractive. The thing that gets in the way of their just moving in and taking over is that
they use new materials, which I apologize to the people in the back who can't see the bottom bullet
on that, but it just says new materials. Anything that uses a new material
is something that is difficult to get past in manufacturing.
I'll go into that a little bit more
when I get into the economics part of this presentation.
This is like an eye chart.
It's something that is for your future reference when you pull the slides off the website,
but it compares the technologies against each other and against established technologies,
SRAM, DRAM, NOR, FLASH, but then there are some other technologies here.
And it's just a whole number of different attributes that each one has.
You know, wear is an important one for some of these,
and power, and read speed, write speed.
But, you know, an easier way to look at this
is if you just take probably the most important criteria here.
And so you've got — once again,
on the bottom, for the people in the back —
it says cell size in F squared. That's
the way that chip people measure what the cell size is. And so you get over toward me, it's a
larger cell size. You get over toward the other end of the room, it's a smaller cell size. And
then the thing that's important to most of you is the bandwidth. How fast is it? And so SRAM is way
up there as far as the speed goes. Oops, oops, oops. The PowerPoint gods hit us.
Oh, this is Scott Shadley's haunted slide.
That's right.
And it showed up in our show.
Scott Shadley's presentation.
Well, he didn't know the word either.
Abracadabra.
Okay.
So if these guys knew the word,
their presentations would have gone
so much more smooth.
It's so much easier when you know it.
Okay.
So anyway, you've got the more expensive things are over toward here.
MTJ MRAM, which is the old kind of MRAM, is way over on my side of the chart.
SRAM is way over on my side.
Those are expensive technologies.
You go toward the center of the chart, and you have STT MRAM.
You know, that's as cheap as a DRAM.
DRAM is there, too.
These are just approximate locations, because STT MRAM and DRAM should be about the same cell size there,
but then you couldn't read them.
NRAM, the Nantero NRAM, should be toward the center there, too.
As far as I understand — Bill, correct me if I'm wrong — that's a single-transistor technology right now, isn't it?
Zero transistor.
Okay, so take that pink thing in the middle and push it over to PCM and NAND flash,
because that's the cheap end of things.
And NRAM has the advantage that it's faster than NAND flash.
PCM has that advantage, too, and that's why Intel is pushing it into 3D cross-point.
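The F-squared metric on that chart converts to physical area by multiplying by F squared. A small sketch; the cell sizes used are commonly quoted approximate figures, not values read off this particular chart:

```python
# Cell area in the "F squared" metric: area = (cell size in F^2) * F^2,
# where F is the process feature size. The F^2 values below are
# commonly quoted approximate figures, used here only as illustrations.
def cell_area_nm2(f_nm: float, cell_size_f2: float) -> float:
    """Physical cell area in nm^2 for a given feature size and F^2 metric."""
    return cell_size_f2 * f_nm * f_nm

# At a hypothetical F = 20 nm process:
for name, f2 in [("SRAM (6T)", 140), ("DRAM", 6), ("NAND flash", 4),
                 ("STT MRAM (1T)", 20), ("crosspoint PCM", 4)]:
    print(f"{name:>15}: {cell_area_nm2(20, f2):>8.0f} nm^2")
```

The spread shows why single-element, diode-selected cells sit at the cheap end of the chart and transistor-selected cells do not.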
So I'll hand it back over to Tom.
Indeed.
All right, so now it's time
for a brief commercial announcement.
So this is the report that Jim and I just finished very recently
where we're looking into persistent memory ecosystem.
We'll talk about all the different types of memories
in some detail, talk about how they're used,
the companies making them,
and then talk about support requirements
and what could make these memories become more mainstream —
what are the requirements to do that —
and then we give forecasts of persistent memory consumption
for both discrete and embedded products.
So it's a 161-page report with 31 tables and 111 figures,
and it is now available.
So with that commercial announcement, you go back to your regular program.
Tom called that his pop-up slide.
Yes.
So now I'm going to talk about markets.
Even though I'm an engineer, I talk about markets.
So market drivers for these things.
You know, we've got persistent memory competing against RAM.
You've got persistent memory and SoCs.
Those are two very different markets.
Persistent memory and SoCs is going in there
for a different reason than persistent memory versus RAM.
And then I'm going to talk about the economies of scale.
And that's an important thing,
the reason why Intel is losing money.
So this is the vision for, and I'm out of jokes,
so I'm not going to have something pop up over this, but this is the vision that everybody has had for a very long time for how these new memory technologies are going to fit in.
The bottom axis, once again for those of you in the back, is process geometries.
It starts with the largest process geometries on that side.
It moves to the smaller process geometries on this side. It's kind of like saying time. And then you've got a logarithm
of the price per gigabyte on the vertical axis. And this is close to the truth, but it's just
straight lines on a chart. The assumption here is that a wafer of the new technology
costs twice as much to make as a wafer of the established — entrenched, I like to say — technology,
which is flash, the red line in here.
And so you've got flash always being cheaper at each process geometry than the new technology
until some point at which flash stops scaling.
Now, that was 15 nanometers for NAND flash.
And for NOR flash, it's 15 nanometers too.
Everybody saw this coming.
And so I actually put this slide together in, I think, 2007.
And it just said, how are things likely to go, given what we knew back then?
And we were expecting around 15 nanometers.
I think it was 8 nanometers for this particular slide.
Then flash would stop scaling. If it stopped scaling, the price stops going down. And so
that's why the red line goes flat there. And that's the opportunity that a lot of these new
technologies have been looking for, is once scaling stops, then they'll have the opportunity
to reach a lower price point than the established technology, even though the wafer might cost more to produce.
Eventually, with economies of scale and everything like that,
then that black line would come down, and it would be cheaper to produce.
But this was the thing that was driving the market.
What ended up happening in actuality was in 2006 or 2007,
Toshiba came out with 3D NAND, and that gave new life to NAND flash.
But that hasn't done a thing for the NOR flash and SOCs.
And so SOCs still are a ripe place for new memory technologies to get in.
So I like to point out that there are only three elements to memory cost.
The cost per gigabyte depends on
the cost of the wafer, like I said in the last slide;
how many megabytes you can get onto the wafer, which is tied into the process geometry
that you're using and also how small your cell is;
and then what is your yield? Are 50% of the chips that you make on a wafer useful, or 99%?
Everybody in the industry pushes to get way up there, near 99 percent.
And the megabytes per wafer are driven by the size of the bit — which was that thing
on that chart with the bubbles on it that I have to correct for NRAM, and, you know, maybe for STT sometime too.
Shrinking the process down to 90 nanometers, 50 nanometers, whatever, allows you to do cost reductions.
And so manufacturers shrink the process to drive the costs down. That's how Moore's law works.
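Those three elements combine into one formula: cost per gigabyte = wafer cost / (gigabytes per wafer × yield). A sketch with invented round numbers, illustrating the earlier slide's assumption that the new technology's wafer costs twice as much:

```python
# The three cost elements Jim lists, combined into one formula.
# All input numbers are hypothetical round figures, not industry data.
def cost_per_gb(wafer_cost_usd: float, gb_per_wafer: float,
                yield_fraction: float) -> float:
    """Cost per good gigabyte = wafer cost / (GB per wafer * yield)."""
    return wafer_cost_usd / (gb_per_wafer * yield_fraction)

# Entrenched technology: cheap wafer, mature ~95% yield.
incumbent = cost_per_gb(3000, 10_000, 0.95)
# New technology: wafer costs twice as much (the slide's assumption)
# and early volumes mean lower yield.
newcomer = cost_per_gb(6000, 10_000, 0.60)

print(f"incumbent: ${incumbent:.3f}/GB, newcomer: ${newcomer:.3f}/GB")
# Volume drives both wafer cost down and yield up, which is why
# economies of scale dominate this comparison.
```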
But the important thing is that wafer cost and yield are functions of the amount of volume that you
push through the wafer fab. And so that's the economies of scale: how many of these things
do you make? This is a very concrete example of pricing of NAND flash and DRAM.
NAND flash is the black line, DRAM is the red line. And I have a dotted line at the beginning of the black line
because this was before the source that I used,
the World Semiconductor Trade Statistics, WSTS.
It was before they actually provided price per gigabyte of NAND flash.
But you can see that there was a crossover there.
What's interesting is that NAND flash is inherently a smaller die size than DRAM,
and yet its price was higher because the economies of scale weren't there. The industry wasn't making enough NAND
flash. The beauty of this is that in 2004, the crossover set the stage for putting SSDs into
computers. Before that time, hard drives were a cheaper way of adding storage to computers,
and DRAM was a cheaper way of adding speed. Now SSDs are a cheaper way of adding speed than DRAM.
In a lot of cases, people get by with a very minimum amount of DRAM
in order to take advantage of that.
And this scale shows up in this chart, which is driven off the same statistics.
This is how many gigabytes shipped of each of these technologies,
NAND flash on the black line and DRAM on the red line.
And so you can see that there was a crossover.
What's important, oh, I thought I had a build on there,
but what's important is that in 2004,
when the dotted line turns into the solid black line,
NAND flash gigabyte shipments were within an order of magnitude
of DRAM gigabyte shipments.
And so what that means is that as long as your bit shipments are more than an order of magnitude
below whatever it is you're trying to displace,
chances are you're not going to be able to reach the scale to get your costs down.
The same is true of all memory technologies.
You can't get a competitive price unless you've got good scale.
So this is a very busy chart to explain how that all works together
and how these different technologies work against each other.
For the people in the back of the room, the chart goes from 2010 out to 2030.
That's because, for Tom's and my report, that's the time range that we looked at.
We took a very wide time range to do our forecasting because we think that some of these changes are going to be relatively slow.
You've got all of these technologies that already exist and then a red line for the new memory
and how we expect it to work as its scale grows and as it takes advantage of the fact that it's got a smaller process geometry
or a smaller cell size than these other technologies.
So E-squared PROM is the most expensive cost per gigabyte,
but that doesn't matter because it's just in little five-cent parts, two-cent parts that are used for the serial presence detect in DIMMs or for storing the identity numbers in Internet-connected devices and that kind of thing.
SRAM is also very expensive.
It's got six transistors in each bit cell, and NOR flash is also a very expensive technology.
DRAM is a far less expensive technology, but we're expecting for it to do what that curve I showed you before is doing,
is that it's starting to hit a scaling limit, and its prices are probably going to level off.
And then finally, NAND flash, down at the bottom, which we expect to keep going for a while.
We're expecting to see the interplay between these things.
Oh, sorry about that.
These technologies, their markets in revenue dollars are shrinking.
And so because of that, nobody's spending money to move them to the next process technology node.
So they're going to just stay where they are price-wise.
NAND flash and DRAM (and I apologize to the people in the back who can't read this word balloon,
but it just says process leaders, DRAM and NAND, cost-reduced per Moore's Law).
Those are the technologies that are
still scaling. They're still being made using finer and finer process geometries. And then you've got
your new memory. And what's happening with the new memory is the same thing that happened with
NAND flash: it's moving up from a lagging-edge process. MRAM
is right now at 40 nanometers, moving there from 90
nanometers, whereas you've got DRAM that's in the 18 nanometer to 15 nanometer area.
The new memories have some catching up to do, but they'll do that, and so their prices will
come down faster. And so, as a result of that, we've already seen MRAM in particular go past E-squared PROM prices. We're expecting a price crossover
with SRAM today. There will be a NOR crossover pretty soon. And then we'll see DRAM stop scaling
at some point. And in this slide, it's built in, I think, around 2024. And when that happens,
then we'll end up having the new memory technology
cross over DRAM a couple of process generations later, just like in that earlier slide.
We don't expect to see NAND flash crossed over by a new technology anytime soon.
So what are the support requirements for this? This is a place where SNIA really shines.
You need to have hardware support. So this is like new DIMMs and that kind of a thing.
You need to have software support, operating system support, and application program support.
And the hardware for these new technologies is in early development. We've got NVDIMM-Ns, which are a good proxy for what's going to happen when the NVDIMM-P comes in. The NVDIMM-Ns are just DRAM
with flash backup. But we've also seen some things that haven't really been noticeable,
and that is like changes in the BIOS to recognize a persistent memory is there,
and maybe you can boot up knowing that the memory has got valid data in it.
And then new signals to the DIMM to allow it to understand a power failure in the case of
an NVDIMM-N. And soon we'll have new Optane modules that have some new signaling
on them, too. We don't know what those are, but those, I'm sure, have been worked out.
There is an NVDIMM report.
This is the last advertisement, I promise.
And I put that out almost a year ago,
but it does cover NVDIMM-Ns and NVDIMM-Ps
and forecasts what all the technology is going to be for those.
But there are ongoing hardware requirements to handle NUMA. You need to be able to deal with different speed memories and non-determinism.
And basically, you hit an Optane DIMM with enough writes in a row, and it's going to slow down.
And so you're going to need to be able to feed that back to the processor somehow.
That's a new support requirement.
Some of this is going to require redesign of the MMU
because you're no longer doing context switches
when you find that what you need is not in there
or that you need to evict something.
They're using polling right now
where the processor actually ties itself up,
reading, reading, reading what the status is of the DIMM,
and then finally when it gets what it needs,
then it will come back out again.
The reason it's doing that is because a context switch
is a very, very slow process in most processors.
And then finally, updates to the DDR bus
to support these non-deterministic access times.
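The polling behavior described above can be sketched in a few lines. This is purely an illustration of the idea, not real driver code: a background thread stands in for the slow, non-deterministic DIMM write, and the main thread spins on a status flag the way the processor spins instead of paying for a context switch.

```python
# Illustrative sketch of polling on a DIMM status word instead of taking a
# context switch. A thread here stands in for the DIMM's slow media; the
# timings and the Event flag are simulation assumptions, not real hardware.

import threading
import time

status_ready = threading.Event()

def slow_dimm_write():
    time.sleep(0.05)       # non-deterministic media delay
    status_ready.set()     # DIMM raises its "write complete" status

threading.Thread(target=slow_dimm_write).start()

polls = 0
while not status_ready.is_set():   # the processor ties itself up, reading,
    polls += 1                     # reading, reading the status; cheaper than
                                   # a context switch when the wait is short
print("write complete after", polls, "polls")
```

The trade-off the talk describes is exactly this loop: burning CPU cycles on repeated status reads is worthwhile only because a context switch would cost more than the expected wait.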
This is SNIA's diagram for the programming model for persistent memory.
And so I love it that they used to talk about the squiggly lines.
Things were different.
But basically the most important takeaway from this is that the application has got to be able to talk directly to the PM device
or run through its normal storage stack to be able to do it.
And so they've put together all of the tools to be able to do that,
so then application programs can be developed to make that work.
And I've got the URL for the persistent memory model there.
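The direct-access path in that programming model can be illustrated with a memory-mapped file: the application loads and stores into the mapping instead of going through the storage stack, then flushes explicitly. This is only a sketch; an ordinary temp file stands in here for a real persistent-memory-backed (DAX-mounted) file, and `mmap.flush` stands in for flushing CPU caches to the media.

```python
# Minimal sketch of the memory-mapped (direct access) path in the SNIA
# programming model. An ordinary temp file is a stand-in assumption for a
# real PM/DAX-mounted file.

import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_stand_in")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)           # size the region before mapping

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)  # load/store access, no read()/write()
    pm[0:5] = b"hello"                # a store directly into the mapped region
    pm.flush()                        # analogous to flushing caches to PM
    pm.close()

with open(path, "rb") as f:
    print(f.read(5))                  # the stored bytes persist in the file
```

On real persistent memory the same pattern is what the SNIA model's tooling (for example, PMDK) wraps: map, store, flush, with no block-storage round trip.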
Application programs are really required for this; there really isn't any benefit
to having persistent memory in your system unless you've got an application program
that's going to take advantage of it.
And this change, I'm expecting to take some time.
Jim Pappas likes to talk about it taking two Olympic cycles,
so about eight years to go from new to something that's in popular use.
And then finally, what is the forecast for this?
Well, I say nothing works in a vacuum.
It's not one thing that's got to work by itself.
It's like the whole memory ecosystem and everything like that has to work together.
So persistent memory is just a part of that ecosystem. And the memory market swings very wildly. Now, what that means is 3D cross point,
for example, has to be cheaper than DRAM to make any sense in that diagram that Tom showed you with
the bubbles on it of the different levels of memory. And so when the DRAM market falls apart,
which it should be doing either late this year or early next year,
and when prices go down by about 60%, then Optane memory is going to have to go down by 60% in price, too.
And that's going to be really tough for Intel if they're already losing money on this technology.
Foundry processes, though, should have a huge impact,
because if these SOCs that I said are going to be using
an emerging memory technology start reaching high volumes,
then the silicon processing community is going to understand.
They're going to learn how to use persistent memories and how to make them yield.
And so that's going to then fall back into the standalone memory chip market.
And that's going to benefit those guys. You know, I'm expecting 3D NAND layer counts to continue to increase.
This is a timeline out through 2022 of how many layers should be in 3D NAND. And it used to be
that the industry thought that it was going to stop at about 100 layers. And then somebody, I don't know who, invented string stacking.
And string stacking allows you to basically build a 64-layer NAND
and then build another 64-layer NAND on top of it and another 64-layer.
And nobody knows how far this can be taken.
There's talk about 500 layers today.
I wouldn't be at all surprised if in two years there's going to be talk about 1,000 or 2,000 layers.
And so that kind of allows NAND Flash to avoid being superseded by these technologies.
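The string-stacking arithmetic above is simple enough to sketch in a few lines. The 64-layer deck size and the deck counts are illustrative assumptions, not any vendor's roadmap.

```python
# Back-of-the-envelope sketch of string stacking: build one deck of NAND,
# then stack decks to multiply the total layer count. The deck size is an
# illustrative assumption.

LAYERS_PER_DECK = 64

def total_layers(decks):
    """Total layers when `decks` string-stacked decks are built on top of each other."""
    return decks * LAYERS_PER_DECK

for decks in (1, 2, 4, 8):
    print(decks, "deck(s) ->", total_layers(decks), "layers")
# One deck stays at 64 layers; eight stacked decks already reach 512,
# which is why 500-plus-layer talk is plausible.
```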
Another thing that's really important to this is that there are cycles in the memory business.
And this is a very simplified way of looking at how price cycles work. But basically, during a time like this, you remember that slide that I showed you
that had everybody's profits and Intel's losses?
Everybody's really profitable, so they're all investing like crazy, putting in new fabs.
Sometime, that is all going to kick in, and that is going to cause an overcapacity.
The overcapacity is going to cause everybody to compete on price, and the prices then will collapse. And so the manufacturers will then
stop investing, and eventually the market will catch up to however much process capacity they
have. And once it catches up again, then I expect it to see the profits regained.
Our overall look at that is a collapse in 2018 or early 2019 and a recovery.
Unfortunately, not until 2022 because of the emergence of China as an important supplier.
So, stable prices to the middle of 2018 driving profits, and then, once 3D NAND becomes cost-competitive,
we're expecting to see planar NAND capacity turned over to DRAM and cause that
to unravel DRAM pricing. But China won't be a factor until the downturn. The 2018 price collapse,
though, will be supply-driven; it won't be because of a lack of demand. Everybody is just consuming all kinds of memory,
but not enough to catch up
with what's going to happen in the industry.
And we have today in DRAM
the largest price-cost gap
that there ever has been in the market.
The impact of persistent memory:
like I said about 3D crosspoint,
persistent memory competes on price
against these established technologies.
So 3D crosspoint must be cheaper than DRAM to make any sense. And a DRAM collapse
will cause a 3D crosspoint collapse, even though 3D crosspoint is sole sourced. Because being sole
sourced in this case doesn't mean that you've got everybody locked in. They're going to be able to
say, okay, well, it was nice using 3D Crosspoint
when it was cheap,
but now that DRAM's cheaper,
we'll just go back to using DRAM.
So we're looking at a timeline for change.
What I have is logic on the top bar.
That's the SOCs.
That is the NOR flash inside of your microcontrollers,
your ASICs, and even inside processor chips
where things like microcode are stored inside of there.
We're not sure which technology is going to be used to displace those.
Right now, the biggest contenders are resistive RAM and MRAM,
but that could change very rapidly.
In NAND Flash, we think that there's a possibility
that it's going to be displaced by resistive RAM,
but that just depends on whether or not NAND flash runs out of layers and needs to be replaced.
SanDisk used to say that there were only going to be three generations of 3D NAND before that ran out of steam.
Now, you know, people are talking about these 500-layer devices.
DRAM, you know, that's another question.
You know, Bill is going to be talking
to you later on. Today you're talking, right? Yeah, and Nantero is going to be presenting their
case as to why NRAM should be a good technology to displace DRAM, and it very well could. MRAM is something
that a lot of people have been talking about in the past,
but I put a question mark by that because it's still not decided.
But, you know, the timeline goes out to 2035.
An awful lot can happen in that amount of time.
You guys have been in technology for a long time, so you know.
And finally, getting off the hamster wheel of supply and demand,
here are our projections, with a high, a baseline,
and a low projection for these emerging memories
out to 2028.
So the bottom line here is that, in our baseline case,
the emerging non-volatile memory market
could exceed about $6 billion by 2023. And so that's from our
report there. And that's basically our presentation today. So thank you all very much for being a
wonderful audience. Thank you. Thanks for listening. If you have questions about the material presented
in this podcast, be sure and join our developers mailing list
by sending an email to
developers-subscribe at snia.org.
Here you can ask questions
and discuss this topic further with your peers
in the storage developer community.
For additional information
about the Storage Developer Conference, visit www.storagedeveloper.org.