Grey Beards on Systems - 110: GreyBeards talk FMS2020 wrap up with Jim Handy, General Director of Objective Analysis
Episode Date: December 2, 2020

This month it's back to storage and our annual wrap-up on the Flash Memory Summit conference with Jim Handy, General Director of Objective Analysis. Jim's been on our show 5 times before and is a well-known expert on NAND and SSDs (as well as DRAM and memory systems). Jim also blogs at TheSSDGuy.com.
Transcript
Hey everybody, Ray Lucchesi here with Keith Townsend.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards bloggers together with storage and system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
This Greybeards on Storage episode was recorded on November 23, 2020.
We have with us here today Jim Handy, General Director of Objective Analysis.
Jim's been on our show a number of times before and recently attended and moderated the Flash Memory Summit 2020.
So, Jim, why don't you tell us a little bit about yourself and what's new in Flash?
Oh, well, thanks, Ray.
Yeah, I'm an industry
analyst, actually came from the chip business. So I like to tell people I look at SSDs from the
inside out. But, you know, I tend to be more on the technical side, but I also speak an awful
lot with financial analysts. We do a lot of forecasting and stuff. And people who go to
our website at objective-analysis.com can have a look at the various things that we do.
I also write a couple of blogs, one called The SSD Guy and the other called The Memory Guy, that talk about technologies and business issues relating to memory chips and SSDs.
So what's going on with flash memory these days? Oh, you know, flash memory, people
are adding more layers, which basically is a way to get the cost out of things. So, you know,
it looks like for a long time to come, we're going to continue to see flash being used in a
broader range of applications. And we see also, you know, SSDs and solid state storage being used in new and different ways. And so,
you know, probably later on, we'll talk a little bit about computational storage,
because that was a big part of the Flash Memory Summit.
Right. So what's the highest layer count shipping these days, versus what's in the lab and stuff like that? So layers, this is the third dimension that flash has started to scale in, right?
So they can scale horizontally, in two dimensions, as well as in a third dimension.
I guess there's actually a fourth dimension.
I'm sure you'll talk about that as well, right?
Every April Fool's Day, I write April Fool's blog posts on both the memory guy and the
SSD guy.
And one year, I wrote about four-dimensional memory that not only went in
the normal three dimensions, but also the fourth dimension is time.
And so this was a memory that had increasing gigabytes of storage over time.
Oh, I love it. That's, I think everybody would like that.
That is flash though, right? I mean, over time it just keeps on trucking here.
Yeah, yeah. But the layer count, you know, the big deal with layer count is that the way that chips have always worked since the 1960s is that transistors have gotten smaller, and so you can squeeze more of them onto the surface of the chip, and nobody ever really thought about going depth-wise into that. At a certain point, flash memory became incapable of, you know, the transistors became incapable of shrinking anymore, simply because of the fact that they ran out of electrons. You know, they wouldn't be able to take a smaller thing and put enough electrons on it to tell whether it was a one or a zero. And so somebody came up with the genius idea of building layers of NAND. And so that's how you end up with these layer counts. You asked about how many layers there are, and the bulk of what ships today is 64 layers.
64 layers. Okay.
Yeah. Oh, you know, people are guessing how many layers it can go up to, and basically nobody has any idea when they're going to run out of steam with this thing.
But SK Hynix is shipping a 128-layer device. A few other companies are shipping devices at 96 layers or thereabouts. They don't all have to go with pioneering numbers. And Micron's got one that's now 176 layers, which they claim to have started shipping in November.
No kidding. And that can be TLC, or the other one, QLC, or something like that? Is it possible to have MLC in 3D?
Yeah, actually, 3D makes it easier to do MLC than with the other ones.
Something happened, you know, I told you about the number of electrons getting down to a certain point when the transistors got too small. When these guys went to 3D, all of a sudden the transistors got enormous again. And the larger the transistor, the more electrons you can store on it, and the easier it is to differentiate between the different levels in multilevel cells. So with 3D NAND, for the first time, you've seen every manufacturer be able to ship QLC, which is four bits per cell. And some are already talking about PLC, which would be five bits per cell.
So help us. So, you know, I hear these terms,
I kind of get MLC, QLC, TLC, and I get the attributes of their durability, performance, et cetera. But where does a technology like 3D XPoint and these other memory types, you know, kind of intersect with these terms?
Well, good question, Keith.
I would like to say that 3D XPoint, you know, first of all, it's a very different kind of 3D than 3D NAND. But also, you know, just looking at how it plays in a system, there are some people who think, is 3D XPoint a threat to 3D NAND? And it's not, because 3D NAND is always going to be significantly cheaper per gigabyte to manufacture than 3D XPoint. 3D XPoint is actually something that people will start using as an alternative to DRAM in a number of systems.
And it also will allow them to reconfigure it. But you know,
I can touch on that later. As far as the difference between the 3Ds in these things, though: with 3D XPoint, the crosspoint structure is what's really key, is that you pattern
rows of bits all side by side with each other. And then above that, you layer another pattern
of rows of bits, 90 degrees offset from the bits that were below them. And that allows you to make
a really dense and really high speed memory. With 3D NAND, you don't do anything of the sort.
With 3D NAND, what you do is you build this layer cake with 176 layers or whatever, you know, a whole lot of layers.
And then you bore holes down through the layer cake and put all kinds of fancy lining materials on the walls of the holes.
And all of a sudden, those holes become strings of bits.
And so 176-layer NAND becomes a string of 176 cells. With MLC, it would be twice that many bits.
With TLC, it would be three times that many bits.
And with QLC, it would be four times that many bits in a single string.
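To make that arithmetic concrete, here's a minimal sketch of the bits-per-string math Jim is describing. The layer counts and bits-per-cell figures come from the conversation; the little helper itself is just illustrative.

```python
# Rough sketch: bits stored in a single 3D NAND string, per Jim's description.
# A string has one cell per layer; each cell holds 1 (SLC), 2 (MLC),
# 3 (TLC), 4 (QLC), or 5 (PLC) bits.

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

def bits_per_string(layers: int, cell_type: str) -> int:
    """Bits stored in one vertical NAND string."""
    return layers * BITS_PER_CELL[cell_type]

for layers in (64, 96, 128, 176):
    print(layers, {t: bits_per_string(layers, t) for t in ("TLC", "QLC")})
# 176 layers -> 176 cells per string: 528 bits as TLC, 704 bits as QLC.
```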
So does that enable, like, faster access? Is the reason 3D XPoint is faster because of lower latency between, basically, between the bits?
Yeah.
It's also a faster technology.
Faster to scale or faster to access?
Faster to access.
What?
The way that
you get to the bits
in NAND Flash is really
clumsy. What you have to do
is you go to that
string, the vertical
hole thing, and you say, okay, I want to know what's on one of the bits on that entire string.
And so you turn off all of the layers in the layer cake except for one.
And then that whole string is dedicated to reading just that one bit.
That's actually, you know, the faster part, because you're just reading it. The writing is incredibly slow. It can take like half a second to write a flash bit. And the reason why is because you're using quantum tunneling, which is a probabilistic process, where you're pushing really hard on it by putting a high voltage on it, somewhere between 15 and 20 volts, and then waiting for electrons to jump across a gap that they don't want to jump across. And the gap is high enough to prohibit minor errors in writes, but with enough voltage, you can actually do the write.
Half a second to write a bit? No way. No way.
Well, that's why they've got all this DRAM and other stuff surrounding it, to buffer the write activity.
Yeah, I never realized that. So 3D NAND is actually quicker to access because you've got all these layers and all that stuff, I guess, huh? That's interesting.
Yeah, but I think what Keith was asking about was 3D XPoint and why it's faster. 3D XPoint doesn't use flash technology, which 3D NAND does. It uses instead something called PCM, which stands for phase change memory, and let's not get into that. But PCM can switch from a one to a zero and a zero to a one really fast.
It does it in somewhere in the order of three nanoseconds, three billionths of a second.
And it's symmetric, right?
It can read and write pretty much at the same speed.
Well, it's more symmetric than NAND flash, but it's not symmetric itself.
It's about three times.
It takes about three times as long to write as it takes to read.
Okay.
Yeah.
All of these technologies are incredibly ugly.
Yeah.
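As a rough comparison of the latency asymmetries just discussed: the ~3 ns PCM switch and the roughly 3x write penalty are Jim's figures, and the NAND number is only the order-of-magnitude claim from the conversation, not a datasheet value.

```python
# Back-of-the-envelope latency comparison using the figures from the discussion.
# PCM: ~3 ns to switch a bit, writes ~3x slower than reads (Jim's numbers).
# NAND: writes can take "like half a second" per the conversation, which is
# why SSDs front the flash with DRAM write buffers.

PCM_READ_NS = 3.0
PCM_WRITE_NS = 3.0 * PCM_READ_NS   # ~3x the read latency
NAND_WRITE_NS = 0.5e9              # half a second, as stated on the show

print(f"PCM write/read asymmetry: {PCM_WRITE_NS / PCM_READ_NS:.0f}x")
print(f"NAND write vs PCM write:  {NAND_WRITE_NS / PCM_WRITE_NS:.1e}x slower")
```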
I'm thinking through, you know, we usually just see this at the interface level.
Yeah.
It doesn't matter if you have an Optane drive or a drive from Samsung or whatever; you know, I get that SATA or NVMe interface. And that's where the story ends for me, for the most part, other than knowing that, you know, previous to last year, I shouldn't buy QLC for enterprise workloads, which is starting to kind of dissipate as a theory.
But other than that, you know, you never bother to look past the interface itself.
Yeah.
I think, Keith, depends on what's in front of that QLC.
Yeah, yeah.
It gets way more complicated now. You know, you have our friends now making QLC look more like TLC from an endurance perspective, but, you know.
And there's been a big debate for a long time as to how much involvement the host should have
in managing the flash and doing all of this, you know, basically playing shell games to make it look like it's fast.
Yeah, there's the open channel SSD, which is something that certain hyperscalers really like.
I think that Baidu was actually a big champion of this some time ago.
What's it called? Open scale?
Open channel.
Open channel. Okay.
Yeah. And this is an SSD where a lot of the housekeeping has been put onto drivers inside the host processor.
Well, this is something that I really will look forward to talking to you about this year,
is this march towards this computational storage,
and where we're getting super cheap CPU prices with Arm.
And as we're able to do more in the drive itself, are we seeing that impact flash in general?
They're hoping it does. And computational storage is an interesting field, because of the fact that, you know, people are saying, okay, it's time to start offloading tasks from the main host processor and start running them other places. It's kind of the opposite of virtualization, where, you know, before virtualization you used to have a mail server and an internet access server and, you know,
servers for various things. And then you started putting all of the tasks into a form that could be
run on any server in the array. And, you know, that went against having local storage.
And then SSDs came along and people started embracing local storage again.
And now people are saying, well, the SSD should be doing some of the computation. And so you're
breaking that out from the main processor. You know, I think there's an awful lot, from a systems architecture standpoint, that people are going to have to figure out about exactly how they want to use these things.
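Here's a sketch of the data-movement argument behind computational storage. The dataset size and query selectivity below are made-up numbers; the point is only that filtering (or compressing) on the drive shrinks what has to cross the bus to the host.

```python
# Hypothetical illustration: how much data crosses the host bus if a scan
# runs on the host versus inside the drive.

DATASET_BYTES = 10 * 2**40   # 10 TiB stored on the SSD (assumed size)
SELECTIVITY = 0.01           # the query needs only 1% of the data (assumed)

host_side = DATASET_BYTES                      # everything ships to the host
drive_side = int(DATASET_BYTES * SELECTIVITY)  # drive filters, ships results

print(f"host-side scan moves {host_side / 2**40:.1f} TiB")
print(f"in-drive scan moves  {drive_side / 2**30:.1f} GiB")
# The drive-resident filter cuts bus traffic 100x in this toy example.
```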
We had... our last podcast was on smart NICs and stuff like that, which is effectively doing similar types of functionality, only from a networking perspective, moving networking functionality from the computational CPU level out to the NIC card itself.
So, yeah, computational storage has been around for a couple of years.
Is it starting to take off, Jim?
Certain hyperscalers are buying it. There are a lot of other people who are kind of...
Hyperscalers. That's interesting. Why would a hyperscaler buy computational storage?
I know that ScaleFlux has got some people who are doing SQL-type database management and also doing some AI algorithms in it. And some people are buying it for the very humble task of doing data compression, or encryption, or even, you know, video transcoding and stuff like that.
I guess this is one of those examples that, at scale, you know, if there's an instance where I can save a penny of cost or a fraction of time, computational storage makes sense at these fringes because, you know, you multiply that by a billion users.
Yeah. You know, there are places, really unexpected places, where the savings can come from, too. And I wouldn't be at all surprised if people are using computational storage to do special encryption in order to not have to destroy the drives when they go and update them. Because what typically happens in these data centers is that they say, okay, we don't want to have any security breaches.
And so every time that we update our hard disk drives or whatever.
They crush them or something like that?
Or they mass erase them or something?
Yeah.
No, they don't do mass erasing.
They do physical destruction.
So I heard another term last year, and I was wondering where it stands this year, and that's zoned named storage.
Is that the correct term?
It's zoned namespace.
Namespace.
Okay, yeah.
You know, the deal is that there are all of these technologies that are really cool and really offer compelling advantages, but they all require system support.
Host-level functionality.
Yeah, all you have to do is rewrite your application program.
And zoned namespaces are like that. With a zoned namespace drive, you don't have... well, you know, Robin Harris had something called the IO blender, where, you know, in a virtualized machine, you end up having just complete unpredictable garbage on your IO stream. More or less random-access kinds of things: regardless of what the little virtual applications might be doing from a sequentiality perspective, everything shows up randomly at the drive. And it gets to be more of a problem as the capacities go up. Yeah. You know, and it was a
bigger problem with hard drives than it is with SSDs because SSDs are a little bit better at
random workloads, but SSDs still perform better with sequential workloads than they do with
random workloads. And so then the question is, what can be done on the host side to make the SSDs understand what was originally intended to be sequential and what is not?
And communicating that information to the drive, right?
Yeah, yeah. And so with zoned namespaces, it's like everything from this particular program that is being written has some kind of a flag with it that says, okay, this is for that program.
And so it's probably in sequence for that program, even though it's intermixed with stuff.
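Here's a toy model of that idea. This is not the NVMe ZNS API, just a sketch of the contract it imposes: each zone only accepts writes at its write pointer, so each stream stays sequential within its zone.

```python
# Toy model of a zoned namespace: writes must land at each zone's write
# pointer, so the drive sees sequential traffic per zone even when many
# programs are writing concurrently.

class Zone:
    def __init__(self, size: int):
        self.size = size
        self.write_pointer = 0   # next writable offset in this zone

    def append(self, data: bytes) -> int:
        if self.write_pointer + len(data) > self.size:
            raise IOError("zone full; reset before rewriting")
        offset = self.write_pointer
        self.write_pointer += len(data)   # only sequential appends allowed
        return offset

    def reset(self) -> None:
        self.write_pointer = 0   # whole-zone erase, much like a NAND block

# Each program (or stream) gets its own zone, untangling the IO blender.
zones = {"db-log": Zone(1 << 26), "video": Zone(1 << 26)}
zones["db-log"].append(b"txn-0001")
zones["video"].append(b"frame-0001")
```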
When I first heard about zone namespace last year, I thought this was a way of, you know, taking a single drive and, you know, letting multiple hosts access that drive
at NVMe speeds.
But then this year,
it started coming back to the sequentiality.
So it's more like a shingled magnetic recording solution
kind of thing.
So if you want to have sequential workloads
on your disk drive,
or in this case, your SSD,
you want to use zone namespace.
It seems like it's almost more important for the sequentiality aspect than
the multi-user aspect.
It is, but they're intertwined, is that the IO blender happens because of the fact that you've got multiple users, and, you know, that's something that's actually caused by virtualization.
Yeah. So it's effectively in a single system, you end up creating this distributed system problem
in which you're using the same underlying storage. That storage looks like a SAN, but it's not.
So, you know, SAN providers are doing this for us on the micro side when we're accessing multiple,
when we're using multiple systems
to access the same access layer of storage,
they do this sequencing for us.
But when you're looking at an individual system
with one SSD, which from a performance perspective,
raw capacity and performance,
it can handle the workloads,
but the randomness of the sequence of reading and writing the datasets makes it perform less optimally. So this is just taking that SAN approach and putting it inside of a single drive.
Yeah, adding it to the drive. Yeah, exactly. And that's just Moore's Law: whatever features a SAN has this year, an SSD will have in five years.
What?
Don't tell me that.
Yeah, that makes perfect sense to me, especially as we start to combine these DPU-type capabilities as we get this more and more distributed, the challenges to IO become less, I think they're less apparent.
But when we look back on it, we'll say, you know, it was fairly obvious that that was going to happen. So, a question for our year-end podcast: moving compute out to the devices, whether it's storage or NICs, does that make sense? If it was non-von Neumann hardware or something like that, maybe it would make sense. But these are all CPUs. They're Arm, or it could be x86. It could be Atom. I don't understand the logic of why we should
have this. It's a broader discussion than just computational storage, but it's certainly,
you know, a significant example, I would say. So, Jim, this does bring up a good point back
to like the core concepts of Flash. We're getting denser Flash with all these layers. We're getting more capacity in a single wrapper and we're asking more of it
because we have more of it.
What impacts are we not seeing that should be obvious, you know, from the flash industry perspective?
Boy, you know, the impacts we're not seeing. I actually think that we are seeing the impacts. You know, this computational storage thing is in response to a problem: you've got the capacity in the SSD increasing at, you know, an exponential rate, which it always does. It doubles every two years or so.
And you've got the processing power in the server increasing at a kind of a similar rate.
And then you've got the network increasing at a slower rate. And so how do you deal with that?
And I remember back in the 70s, hearing somebody tell me something one of his professors at Caltech said, which was that the bandwidth of the channel can be inversely proportional to the computational power of the nodes at either end. Basically, if you've got really smart things, they barely need to communicate with each other. If you've got just basic dumb storage, just bits on one side and all your computation on the other side, then you need to have an awful lot of communication bandwidth between the two. And so this is just kind of a ramification of that: storage is getting bigger and bigger and bigger, and how do you make it not need to communicate at that level with the server?
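A small sketch of that scaling mismatch, with assumed growth rates: capacity doubling every two years comes from the conversation, while the network growth rate here is a hypothetical, slower figure chosen only to show the gap widening.

```python
# Assumed rates: SSD capacity doubles every 2 years (per the discussion);
# network bandwidth doubles every 4 years (a hypothetical, slower rate).

cap_growth_per_year = 2 ** (1 / 2)   # ~1.41x per year
net_growth_per_year = 2 ** (1 / 4)   # ~1.19x per year (assumed)

for years in (2, 4, 8, 10):
    gap = (cap_growth_per_year / net_growth_per_year) ** years
    print(f"after {years:2d} years, capacity outgrows bandwidth {gap:.1f}x")
# That widening ratio is the pressure pushing compute toward the data.
```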
This becomes even more important with IoT and devices at the edge, which could be in network-light environments and stuff like that.
So you've got to do some more and more intelligence with those devices
in order to be able to transmit across the WAN and stuff like that
or satellite radios or whatever.
It's an interesting problem.
Every time we think we've defeated data gravity,
we get more gravity.
Yeah, yeah.
That's extremely interesting stuff.
So I was surprised when I saw the initial announcement
for Flash Memory Summit that they were going to have a keynote on DNA storage.
What's with that, Jim?
They were actually kind of testing things, I think, when they put that together because they had keynotes from a number of kind of offbeat sources.
But the DNA storage, there are people who are actually doing research on that.
Oh, God, we had a podcast earlier this year with Catalog DNA.
They are moving ahead fast with this stuff.
Yeah, it's actually got incredibly high latency.
I think it takes 15 minutes.
Oh, God, yeah, yeah.
Half a day, maybe, something like that to write.
But the bandwidth is coming up, I guess, right?
Oh, I don't know. You know, I don't understand the biological sciences. And I know that, you know, with the human genome thing, somebody charted Moore's Law against the cost of doing it.
Yeah, genome sequencing. And it's just amazing how much faster the sequencing has developed than that.
Keith's my expert on this stuff, right, Keith?
Yeah, I worked for a biopharma and this has been super exciting.
You know, we had a couple of dozen of those things throughout the U.S.
Getting the data is one thing; getting it moved and processed, completely different. So again, as we think through, you know, if we can make those sequencers part of the overall system for storing the data and accessing the data, that really changes the dynamic for research.
Yeah. And so then you look
at the DNA thing and you say, what if that follows the same kind of a path that genome sequencing did?
What if all of a sudden it becomes really fast to do and it ends up being cheap?
The guys that we talked to at Catalog DNA said they were like three orders of magnitude denser than tape today, or something like that. So, I mean, you know, tape can be like a hundred terabytes in a cartridge. We're talking, you know, 10 petabytes on a cartridge sort of space. It's pretty bizarre.
Yeah. Maybe three or four years ago, I sat in on a presentation
of a small startup that was doing this stuff. And the guy who presented said that in theory, you could store the entire contents of the Internet in a shoebox.
I want that shoebox, maybe, but I want to be able to actually read it and maybe update it every couple of years or something like that.
And then just flashing back to the cost of sequencing the genome.
I think I looked at the stat four years ago.
The last time I looked at it, it was $1,000 per sequence or per session, whatever.
That's down from a million dollars.
And then, you know, obviously the pace of change is going to keep changing. I just looked, and there's a stat from genome.gov that says it's now $300.
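Taking the figures as quoted, roughly $1M down to $300, here's a quick sketch of the implied decline. The dollar figures come from the episode; the dates are rough outside anchors, not something stated on the show.

```python
# Implied cost decline from the figures quoted in the conversation.
# Assumption: ~$1,000,000 around 2007 and $300 in 2020; the episode gives
# the dollar figures, the dates are rough, widely cited anchors.

import math

start_cost, end_cost = 1_000_000, 300
years = 2020 - 2007

fold = start_cost / end_cost
annual = fold ** (1 / years)
print(f"{fold:,.0f}x cheaper overall, ~{annual:.2f}x cheaper per year")
print(f"cost halves roughly every {math.log(2) / math.log(annual):.1f} years")
# Far faster than Moore's Law's ~2x every two years.
```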
Yeah. Yeah. I know that the goal is to get it down cheap enough that it becomes a part of
your regular doctor visit.
Yeah. Wouldn't that be nice? A little box, you know, like a USB chip, that takes a prick of blood and does all the analysis. And yeah, I think I can work on that. That's a startup.
Well, I don't expect it to be in the doctor's office. I expect it to be
in the blood lab, but you know, the same place where they analyze your urine sample. Yeah. Yeah.
Something like that. Hmm. Bizarre, bizarre. So what about this disaggregated, composable infrastructure, that sort of thing? Is there any talk?
Is that still going on?
Are you seeing that in the industry?
Easy for you to say that.
It's not, actually.
Yeah.
Yeah, it's not something that I track an awful lot.
So, you know, I don't have a high level of confidence. What I have noticed, though, is that some of these initiatives seem to be things that are attracting the interest of companies like Western Digital or like Kioxia or Samsung, where they're not just saying we're happy to crank out chips and we're also happy to crank out SSDs, but they're saying we want to play some kind of a role in big storage.
And by big storage, you're talking like, you know, just a box of flash or a bunch of flash that they can sit on the server or maybe even a top of rack kind of thing and have it be, you know,
dished out to all the servers almost on demand. Right. And then change that access whenever they want.
You got to take this, you know, with the understanding that I'm a chip guy.
And so I look at everything as a chip and then,
then an SSD is a big bunch of chips.
And then a group of SSDs, a JBOF is a big, big, big bunch of chips.
And, you know, Micron, or I'm sorry, Microsoft at the Flash Memory Summit talked about having millions of servers.
And, you know, I would look at that and I'd say, well, gee, how many chips would that be?
Yeah, and it's pretty funny.
I take a similar approach, but I look at everything as a bunch of servers.
I look at a drive as a server.
We're going to get to the point where these things are way more intelligent than what they've been in the past.
Yeah.
Yeah.
Well, you know, they already use an awful lot of intelligence to sweep up the mess that NAND flash is.
Yeah. NAND is just, you know, I've said it before, it's a wretched medium. Every bit that you write into it, you've got to count on, you know, a certain percent not coming back out without being flipped.
Yeah. Well, you can handle all that with ECC and a few other tricks and stuff like that.
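As a toy illustration of the kind of cleanup Jim means: real SSD controllers use far stronger codes (BCH, LDPC) over whole pages, but a minimal single-error-correcting Hamming(7,4) code shows the principle of reading good data back off a medium that flips bits.

```python
# Toy Hamming(7,4) code: corrects any single flipped bit in a 7-bit word.
# Real SSDs use much stronger BCH/LDPC codes, but the idea is the same.

def encode(d):  # d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]
    d1, d2, d3, d4 = d
    return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

def decode(c):  # locate and fix a single error, then strip the parity bits
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate a flipped NAND bit
assert decode(word) == [1, 0, 1, 1]   # the read still comes back clean
```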
Yeah, yeah. So are disks still alive? Every time I talk to some vendors, they're saying disks are dead and stuff like that. You see this? I mean, you're obviously not a disk guy, but obviously, you know, you're following the density curves and stuff like that. So what's going on with disks these days?
The market has undergone an awful lot of changes.
You know, you look back 10 years and there was still a very healthy market for high RPM disks.
And now nobody makes those anymore.
You know, those have all kind of yielded to flash.
PCs are migrating over to laptops and the laptops are getting smaller.
And so they're going to SSDs, even though it's a more expensive solution, just to be able to get the size down. And it also, you know, lengthens the battery life a little bit more or allows you
to use a smaller battery, but it's, you know, not a really huge amount. It's like 10%. But, you know, that has still eaten away at the PC market. And so then the question is, well, where do disks fit in? And data centers still use
an awful lot of disks. But they want to have very, very high capacity disks. And so that's where
shingled magnetic recording and helium and the energy-assisted...
Multi-platter and stuff like that, yeah. HAMR and stuff.
Yeah, there's HAMR and there's MAMR. And so the disk drive manufacturers' unit volumes of disk drives that they ship are declining.
I don't know at what rate, but it's a noticeable rate.
But the number of gigabytes that they sell is still growing at a rate similar to what it was in the past.
So if you're doing a cost per gigabyte, and if the cost per gigabyte is consistent,
then overall revenue remains consistent.
I don't think the cost per gigabyte is flat.
I'm sorry.
I don't think it's coming down.
Cost per gigabyte for hard disk drives
actually is coming down pretty sharply
with Shingled and, you know,
with the Energy Assist, with the Helium drives.
I mean, there's a parallel track here between NAND and disk and tape and stuff like that.
And they've been kind of, they've been slowly but surely narrowing, you know,
the delta between NAND and raw disk and magnetic tape.
But over time, there's still, I wouldn't say it's an order of magnitude,
but it's almost an order of magnitude cost difference in price per gigabyte.
Yeah, it is about an order of magnitude.
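To put rough numbers on that order-of-magnitude gap: the example prices below are hypothetical placeholders, not quotes from the episode; only the roughly 10:1 ratio comes from the discussion.

```python
# Hypothetical $/GB figures illustrating the ~10:1 HDD-to-NAND gap the
# conversation describes (and that Western Digital projected through 2028).

hdd_per_gb = 0.02    # assumed, e.g. high-capacity nearline HDD
nand_per_gb = 0.20   # assumed, e.g. enterprise QLC flash

ratio = nand_per_gb / hdd_per_gb
petabyte_premium = (nand_per_gb - hdd_per_gb) * 1_000_000  # dollars per PB

print(f"NAND costs {ratio:.0f}x more per gigabyte")
print(f"on a petabyte, that's ${petabyte_premium:,.0f} extra for flash")
# At data-center scale, that delta is why cold data stays on disk.
```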
And, you know, the company that I look at as having the least of an agenda that way is Western Digital. You know, if you want an SSD, they'll be glad to sell it to you. And if you want a hard drive, they'll be glad to sell it to you. And like two years ago, they came out with some curves that showed that through 2028, they predicted there'd be a 10-to-1 price difference between hard drive and NAND flash gigabytes. So, you know, I take their word for it.
Yeah. Long, long ago, I was in a meeting in Japan with a major disk producer, and they were getting out of the business because they thought that NAND was going to kill it off effectively. And they predicted a crossover point, you know, about a decade ago, actually. And, you know, at the time when somebody said that,
I said, okay, well, I represent the contaminated silicon side of this.
Right, right. The electron contaminated silicon side.
Well, no, it's not electron contaminated. The whole way that semiconductors work
is you take silicon and make it incredibly pure. And then you put in tiny little amounts of boron and arsenic and things like that.
And yeah, where you put them defines where a transistor is.
And or flash and computations and all that other stuff.
Yeah, exactly.
Exactly.
Yeah.
You mentioned 3D XPoint, but are there other non-flash persistent memory technologies on the horizon? I mean, when I was at the Flash Memory Summit last year, it seemed like there were a couple of other vendors playing around.
Yeah, there are a lot of them.
And as a matter of fact, one of the keynotes was from a company that makes MRAM, magnetic RAM. And these companies continue to struggle to find a place, you know, or a way to make a big market out of something. They need to have a niche where they can start becoming profitable. And yeah, the thing that is great and beautiful about MRAM and about resistive RAM is that it's radiation tolerant. And so you can send it up into outer space, where you don't have the ionosphere blocking all of these nasty particles being thrown off by the sun, so those things end up getting a lot of radiation. And, you know, it just takes one alpha particle to move a bit from a one to a zero and throw your whole computer off.
So, you know, they have had some success there.
There's a company called Everspin in MRAM that's had some success there and in a few other niche applications, but you know, this is like a couple-hundred-million-dollar business as compared to memory,
which is a $60 billion business. You know, it's just a huge difference,
but still a lot of people work on it.
You remember that I told you that 3D NAND was created as a solution when you couldn't make planar NAND, the stuff on the surface of the silicon, any smaller. Well, a lot of these guys who did a lot of the research and spent a lot of research funds on these emerging memory technologies did that because they knew that NAND flash was going to stop; it was going to reach what's called a scaling limit. And so they were licking their chops, waiting for that to happen so that they could go in and just take the whole market over. And then when 3D NAND happened, all of a sudden, that was the end of that for them.
Interesting how technology can be your boon or your death, depending on which side of the coin you're on.
But it's mostly about cost. You know, these technologies all behave a whole lot better than the technologies that we currently have, but they
cost a little bit more. And at the end of the day, cost is the main driver of acceptance of any of
these things. Now, is PCM radiation tolerant as well as these other solutions? It is, yes.
So it would also work in that environment.
So, you know, with the volumes that they're starting to accrue on 3D XPoint, you'd think even those markets would start to shrink for these other solutions.
Yeah.
And, you know, with 3D XPoint, one of the things that's hugely important in chips is economies of scale. If you make a whole lot of something, then you learn how to make it cheap. And 3D XPoint is headed down that path. Right now, Intel is losing gobs of money on 3D XPoint. You know, by my calculations, in 2017 and 2018 each year it lost two billion dollars on 3D XPoint. Then in 2019 it narrowed that down to 1.5 billion dollars, and it looks like this year it's going to be up around a billion dollars again. But, you know, they're selling it below cost in order to get it accepted. You know, they need to price it below DRAM's price. I love to say that it's a proprietary part with a commodity part's price.
Yeah, it's just the worst of both
worlds. But, you know, what that means is that they're going to force the volume up. And that
means that it's going to have a pricing edge against these other, you know, resistive RAM,
magnetic RAM and everything else. So there's a possibility that it will find its way into
other applications than just Intel, and that it might end up being in satellites someday.
It must be nice to have a business to subsidize $3 billion
of losses over a few years.
Well, I don't know how you feel about it, Keith, or you, Ray,
but I'd love it if they sent a part of that $3 billion to me.
You know what?
It can be a tiny fraction.
In reality.
One percent.
Exactly. Let's not go there. It doesn't pay for us to wax on about these sorts of solutions. We're just not in that space. I do find it interesting from a data center builder's perspective, as we're looking at the cost basis to build this stuff versus what it's packaged and sold to us for.
Intel's messaging, and full disclosure, Intel is a sponsor of my data center, but Intel's messaging is very persistent about the cost benefits of 3D XPoint over DRAM.
The big question is, can they sustain it?
And that is a true consideration as I'm looking at building technologies and data centers
for the next five years or more.
Can I bet the farm on Optane storage? is a legit question.
Yeah, yeah. And, you know, you talk about betting the farm. The way that you get the most out of Optane storage is by rewriting your applications so that they use it as storage.
And what if you rewrite your applications and then Intel says, okay, well, that was fun, let's quit doing that now?
Yeah, you know,
SAP is all in on this with HANA
and some amazing performance and cost benefits for embracing HANA.
But again, these are two really big bets.
HANA in itself is a big bet.
And then 3D XPoint Optane in itself is another big bet on top of that bet.
The only way these companies move the needle is with big bets.
So if it's a small bet,
it's not going to pay the marketing expense.
They got to make those big bets
and live and die on this kind of thing.
In my mind, it's the only way forward
for these big companies to change their business model.
Yeah, I can picture that. If you take a nervier move than your competitor and you end up winning, then you win big.
Yes, yes, yes.
Or, you know, obviously there's some potential for losses here.
Yeah, but again, $3 billion, this is where I'm appreciative, a $3 billion loss for Intel.
It's a lot of money, but, you know, it's Intel.
It's $4.5 billion, according to the last three years of what Jim said. $2.5 billion in 2017, $2 billion in 2018, $1.5 billion in 2019.
And that's less than the bid acquisition. Oh, yeah. Okay. I guess it is. Yeah. Funny.
You know, I should put in a plug here. We've got a report about emerging memory technologies, and we also publish reports on 3D XPoint memory. And so anybody who wants to go to objective-analysis.com and buy the reports would make me happy.
It would make all of us happy, Jim, quite frankly.
Okay, so what else do I want to talk about here?
There must be something else at Flash Memory Summit that was of interest.
So how did the virtual Flash Memory Summit go this year?
I mean, it actually went more smoothly than expected. They were standing on the shoulders of giants, you know; other people blazed the trail, and they changed the structure as a result of what they found about the support requirements for a show that big. Yeah, yeah, that's interesting.
So you had a fairly good population of attendees and stuff like that? Yeah, yeah. And you know,
what's interesting is that this platform, Whova, was originally designed as a way for people to connect if they're at a conference
that, you know, I could walk into a room and as soon as I walked in the room, then I'd know
that Ray Lucchesi and Keith Townsend were, you know, in there with me and I could look around
for you guys, you know, and we could trade notes, snarky comments about the current presenter, that kind of stuff.
We do that over Slack nowadays.
Yeah, yeah.
But, you know, Whova has now become an important virtual conference platform.
And, you know, those guys did quite the pivot.
And I noticed that Zencastr, what you're recording this on, also said, try out our
new video channel.
You know, everybody's finding ways to work around or to improve their business
through this year's COVID phenomenon. But yeah, I'd say that the conference went pretty well.
I think that people are still trying to figure out exactly what level of production they want
to put into it. That some of the keynotes were, you know, just a kind of a Zoom presentation with a guy's face in a little
box in the corner while PowerPoint slides went by. And there were other people like Western Digital.
I saw the Western Digital one last night. It was pretty well produced, I would say.
It wasn't a Zoom thing. Yeah. Yeah. They had like a camera team and, you know, they did post-processing and, you know, that kind of stuff.
So, yeah, you know, it's kind of funny, you know, you watch the news on TV at night and there's this guy who walks through the newsroom with a selfie stick and, you know, cell phone on it talking about all the upcoming stories.
And, you know, it makes sense that he'd do that because you can do a selfie stick and socially distance, whereas it's harder to do that with a camera operator.
Yeah. But, yeah, you know, I think that the virtual conference will become an adjunct to any conference from now on. And it was almost becoming that before.
So that, you know, that people were recording all their presentations and making them available over time.
But there wasn't as much real time of that stuff other than the big keynotes.
But nowadays it's becoming much more prevalent.
Yeah, I'm hoping that the real time nature of in-person will enhance the online virtual experience.
You know, Ray, I've done an awful lot in virtual conferences the past few months, and it is not an easy nut to crack, that's for sure.
Yeah. There are certain things that I think are very fundamental that are going to change as a result, too. At a regular in-person conference, as a general rule, people have 30-minute or hour-long speaking slots. And when people are watching videos, it's hard to keep their attention that long. So, yeah, I wouldn't be at all surprised if people find ways of taking the same message and turning it into a 10-minute message, or even three 10-minute messages, or something like that.
Yeah, yeah. So my
final question: does Flash Memory Summit still have a place in this world? I mean, we don't have disk summits anymore.
We don't have, and maybe there is a DRAM summit out there.
You know, at one time when Flash was changing so dramatically year to year
and technologies were coming on board and going away,
it was pretty important.
Is it becoming a more stable technology, I guess is the question.
I think that, you know, you still have an awful lot of innovation happening around Flash.
And so it's not the Flash itself that's the big attractor to this conference any longer.
Yeah, it's what kind of systems, what kind of software are being designed around it. And the people who manage the Flash Memory Summit have from time to time thought, gee, should we change the name? Because this is the biggest storage show, rather than just being a flash memory show.
It is. It's become large. Besides VMworld, I might add. But yeah, I agree.
Yeah.
From a technology perspective, it's the biggest storage technology out there, certainly.
Yeah. And it started out as a NOR flash show. Your listeners might not know what NOR flash is, but it's basically what's used in little applications. And none of the NOR flash vendors come to the show anymore, or the ones who do come because they've got NAND flash, and they talk about the NAND flash, but they don't talk about the NOR.
So it's kind of funny to see how it's changed that way.
I wouldn't be at all surprised if it completely loses being a chip show at
all and just becomes a system show.
Yeah. I guess my last question is,
so when we go to record this next year,
what will be the big theme looking back, like from the show? Yeah, well, two things were actually
more prominent this year than last year, and that was persistent memory, which, you know,
basically is Optane, and computational storage.
Yeah, I would say computational storage was a big thing. It continues to get some momentum.
And as I find niches out there where it becomes useful,
I think it can be very successful.
Or at least I perceive that to be the case.
But all the networking stuff is becoming pretty important too.
And I'm not sure that there's a venue for really talking about networking by itself.
Keith, maybe we should do a smart NIC summit.
That would probably be right on time if no one has already done it.
It seems like an area ripe for...
You could just look to the Flash Memory Summit for speakers.
And moderators, I might add.
All right, Jim, this has been great.
So, Keith, any other last questions for Jim?
No, this has been educational just like in years past.
Well, thank you.
And Jim, anything you'd like to say to our listening audience before we close?
No, you know, other than shameless plugs and stuff like that.
All right.
Well, go ahead.
Did you want to do one more shameless plug?
No, no, no, no, no, no.
I was just, you know, thinking of saying, you know, I do have fun writing blogs.
And so, you know, anybody who wants to check out the memory guy or the SSD guy, please do.
Okay.
Well, this has been great.
Thank you very much, Jim, for being on our show today.
Well, thank you, both of you.
That's it for now.
Bye, Keith.
Bye, Ray.
Until next time.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast,
tell your friends about it.
Please review us on Apple Podcasts,
Google Play, and Spotify
as this will help get the word out. Thank you.