Grey Beards on Systems - 70: GreyBeards talk FMS18 wrap-up and flash trends with Jim Handy, General Dir. Objective Analysis
Episode Date: August 25, 2018
In this episode we talk about Flash Memory Summit 2018 (FMS18) and recent trends affecting the flash market with Jim Handy, General Director, Objective Analysis. This is the 4th time Jim's been on our show, and he has been our go-to guy on flash technology forever.
Transcript
Welcome to the annual post-Flash Memory Summit episode of the GreyBeards on Storage podcast,
where your hosts, Ray Lucchesi and yours truly, Howard Marks, talk to guests from the storage community about
what they're doing and seeing in the storage world. Since we've just gotten back from FMS,
our guest is our memory guy, or as his website is named, the memory guy, to talk about what we saw
at FMS and the semiconductor end of storage. This is Jim's fourth visit with the Greybeards.
Welcome back, Jim Handy, the storage guy,
with what's big news on this Flash front.
So, Jim, what were you impressed with at FMS,
and what's going on in the Flash world?
Lots going on in the Flash world.
It's gotten to be a really big show,
and there is this huge show floor.
Fortunately, this year was open.
Last year was closed because of a fire. That was a big improvement as far as I was concerned.
I think everybody thought it was a huge improvement. There's, from what I understand,
the insurance settlement process is going on. It doesn't look like it may be ending anytime soon.
So, you know, be glad you're not one of those people who is having to wait for that.
Yeah, I heard rumors, in fact, that Samsung wasn't back because of various insurance problems.
Yeah, my understanding is that Samsung doesn't want to tell anybody why they're not back,
but they just aren't.
So obviously the big news was the NVMe stuff, NVMe over Fabric, really.
Yeah, I wasn't paying quite as much attention to that as you guys might have been.
And so it could be that you could fill in your audience better about NVMe over Fabric than I can.
But, you know, it kind of brings me back to something about 10 years ago when everybody was talking about SSDs and what they were going to do to the system.
A few guys who are even older than I am were saying, ah, we've seen this before.
The processor's fast.
The storage is now getting faster.
And so that's going to move the bottleneck to somewhere else.
And the bottleneck is, yeah, the network now.
And so NVMe over Fabric is the step to take that to the next level there.
Huh.
Interesting.
Yeah.
I saw some very interesting NVMe over Fabric stuff.
I mean, even silicon vendors like Marvell are starting to... God, it was all over the place.
Yeah. Well, I expected it to be all over the place, but I didn't expect that, you know, it would get to the people who make chips that they want to sell by the million at this stage yet.
Right, right. Yeah, it was there. I saw the Marvell stuff; it looked pretty impressive.
The next big thing?
Well, I thought the next big thing was that we're going to finally be able to afford SSDs.
That's coming. And something that I'm working on right now is a report on China, but I've worked a forecast into it because what China is doing is going to change the whole way the semiconductor market works.
And they're getting into the NAND business, which is going to be part of an oversupply.
But the oversupply has already started for NAND flash.
Okay, well, why don't we start this conversation with the folks at Yangtze?
Because the Chinese entering the market seems like something very good for us as consumers.
Oh, yeah, yeah, really ought to be.
You know, NAND flash prices have been pretty flat since 2015. And this has been a very long, very difficult transition from planar NAND to 3D. The whole point of 3D NAND is to get the costs down so that prices can continue to decrease, and they haven't been doing that. It looks like we finally have a situation where the prices are going to
go down. And spot prices, which is basically, you know, overages that are being sold back and forth through gray marketeers, those prices have been on the downturn since late 2017, and contract prices are starting to go down now too. So that means that eventually end users are going to see SSD prices come down.
So spot prices are instant, you know, the overage where some manufacturer has over-provisioned or over-manufactured some NAND chips and just wants to sell them as fast as they can, whereas the contracts are more long-term kinds of things. Is that what you're saying?
Yeah. Actually, it's more like the contracts are, you know, Apple signs a big old contract with
Samsung, let's say.
And then Apple finds out that they sold 12 fewer cell phones than they were expecting
to this quarter.
And so they've got some extra NAND flash rattling around.
Somebody says, clear out that inventory.
So a guy calls up a broker and says, hey, you know, we've got a little bit of NAND flash
here.
If anybody wants it, why don't you sell it to them for half price or whatever?
Okay, I got you. So it's not really a manufacturer as much as it's the
over commits of some contracts that didn't get sold and stuff.
Yeah, it's the stuff that's sloshing around.
Well, I would imagine that a manufacturer sells
contracts for 95% of what he thinks he's going to make this quarter so that he can make sure and fulfill the contracts.
Right.
There's a little bit of overage there too, right, Jim?
Yeah, yeah.
There is sometimes.
Absolutely solid.
But typically they'll do what you say because when they find somebody who desperately wants some parts, they can say, oh, yeah, we'll charge you double.
Yeah.
Oh, yeah.
Desperation pricing is always high. I kind of like desperation pricing. That's a different discussion. If I'm selling, I should say.
Yes. Yeah. But when spot prices are headed south, which is what they're doing right now, then you want to sell 100%, not 95%, of the factory output, because you don't want to have to take half price for those things.
Right. So what kind of rate are we seeing? I think if I looked on, what, DiskTrends or one of those guys, or DRAMeXchange, they were talking about 10% a quarter.
Yeah. Um, well, uh, that's true for contracts. For the spot prices, they're down actually more than 60% since the beginning of the year.
60%?
Yeah, six zero.
Yeah, but for the price that I pay for SSDs, the contract price drives that more than the spot price.
It does, it does. And that's true for anybody. And the contract prices are going down. So SK Hynix announced their earnings a few weeks ago and said that their prices had gone down by 9% from Q1 to Q2. And Samsung, for their Q1 to Q2, they said their NAND flash prices had gone down by low teens, which could be anything from 11% to 14%.
Yeah.
And since Samsung's the dominant provider, that's really the significant number in it.
This is per quarter?
Yeah, it's big.
But typically with memory chips, the prices go down by 60% in two quarters when there's a collapse.
You know, it's never a smooth downturn or a smooth upturn. It's always either the prices go flat and all of a sudden there's a shortage, or the prices collapse.
And so we're on the verge of a collapse if we're not already partly involved in one.
So you're saying it could go down 40 or 50 percent in one quarter here coming up.
Yeah, yeah.
There's a possibility of that.
Ouch.
Well, it's ouch for the manufacturers.
Ouch for the manufacturers,
but what about the OEMs?
You know, what about the guys who,
what about the Howard Markses
and Ray Lucchesis of the world
who are out there wanting to buy an SSD?
Do you buy it today
or do you wait till the end of the year?
I'd wait till the end of the year.
Yeah, at this point.
Yeah, this is,
unfortunately that's contrary to the way enterprise buyers buy things.
Yeah, right.
It's, you know, the enterprise guy goes, I'll take that array and I'll take it full and that'll be how much I think I'll need in three years.
Right.
Where they'd be smarter to, in a world of falling prices, to delay.
Yeah.
Well, you can imagine what a headache it would be if they open up the box and tucked in another SSD and then suddenly the whole system stopped working.
Yeah. Yeah. Yeah. But we're really, I mean, in the enterprise world we're talking about,
and we bought it with five shelves instead of buying it with two and adding a shelf.
Yeah. Something like that. So they've got the space, they've got the access to it. It's just
a question, do they want to populate it all up front or do they want to do it over the course of years?
And with the vendors controlling the price of the media that goes into those slots.
It's that easy, right?
Yeah.
Though, I've heard that in your world, prices don't flail around as much as they do in my world of chips.
No, uh-uh.
No, but what does happen is the CFO says the sales guys can offer another 10% discount this quarter.
Because their media is going down in price, yeah.
The MSRPs change once a year at most, but the actual selling price changes more dynamically.
So I was hearing at FMS that Yangtze is coming into the market and we'll have Chinese competition, but that prices should go down by almost a half. And Yangtze's costs will probably actually be higher just because they're a newcomer
and they're ramping up production.
And so, yeah, their costs aren't going to be competitive
for now, for the next year, for a long time, probably.
But yet they're going to have to sell
at the same price as everybody else.
And so they're going to lose some money early on in the game.
But this is China.
Something important is that China has got a $3.1 trillion foreign exchange reserve. This basically means investments outside of China that they can use. They can shift that money around and use it to buy the equipment to put up a whatever $12 billion fab in China or something like that. And they won't even notice that they did it.
So, so, yeah.
Nice to have that problem, I might add. It's hard to think about paying for a fab with the change you find in the sofa cushions.
But yeah. Anyway, so, you know, they're ready to lose money.
They're saying to themselves, yeah, this is going to be a tough battle for us.
When we get in, we may end up losing 0.1%, 0.2% of that multi-trillion dollar foreign exchange reserve, but it'll be worth it because we'll be self-sufficient in semiconductors by the time we come out the other end.
So that's the approach that they're taking.
And they're not traded on Wall Street,
so they don't have to make a profit every quarter.
That's true.
So you think they can catch up at some point?
Obviously, that's their strategy, I would say, huh?
Yeah, and I look at that.
My way of looking at things is I just say,
well, has this ever happened in the past?
And yeah, in the early 80s, Japan took the market away for DRAMs from the United States.
And in the late 80s, early 90s, Korea took it away from Japan.
And then Taiwan came in and, you know, made a thrust into it, but Taiwanese companies are more profit-oriented than their counterparts in Korea and Japan.
So they kind of backed off.
But, you know, China is just doing the same thing
that these other countries did.
If you go way back in the wayback machine,
that's what the United States did to Great Britain
during the Industrial Revolution.
Great Britain did all these manufactured goods,
and the United States came in and made them cheaper
and took the business away from Great Britain.
So it's more or less the same.
Well, there was some patent infringement involved there.
Oh, there will be patent infringement here, too.
Infringement involved here as well, yeah, no doubt.
The other thing that's going on right now is that, you know, Micron Technology has a big manufacturing plant that they got from, you know, Inotera in Taiwan.
And so some of the employees said, yeah, we're all leaving. We're going to go and work for UMC, who's going to be supplying the DRAM technology to a Chinese manufacturer, Fujian Jinhua.
And Micron said, well, you know, don't you dare take any company secrets with you. And, you know, there was somebody who said, well, I'm not going to do what those other guys are doing, I'm just going to take some time off. And so then he went and downloaded tons and tons of files of Micron proprietary information, took it over to UMC, and showed up at UMC the day after he left Micron.
And another example of why we should all be grateful that most criminals are idiots.
Yeah, that's true. Well, Micron ended up, you know, working with the Taiwanese police and the Taiwanese police raided UMC and found all this proof of this stuff. And also, meanwhile, Fujian,
the China DRAM manufacturer who was benefiting from
UMC's knowledge about the DRAM business, started recruiting people to work on fab processes F18
and F24, which, my God, happened to be the same fab process names that were used by Micron.
Oh, God.
And then they worked with UMC to file a patent infringement lawsuit against Micron, for patents that were just recently granted in China to UMC. And so Micron is now forbidden from selling.
So UMC got patents on stolen Micron technology that Micron didn't get patents on because they
were keeping it as a trade secret and are now trying to get Micron to stop using their own
technology because they hold the patent? No, no. This is technology that was patented elsewhere years ago.
And newly granted patents in China give the rights to somebody else. Oh, okay.
Right, right, right. So they have the intellectual property rights and they can actually then sue Micron with that. That's an interesting approach.
Oh, it gets better. Yeah, it really does. Fujian has forbidden Micron from selling DRAM and SSDs in China.
Somehow or another, a provincial court has been able to do that. Well, there's a DRAM
shortage going on right now. So Foxconn and all of these, you know, Chinese companies that are desperate for DRAM are now being told, yeah, okay, we just
cut off one of your suppliers. Oh, God. Yeah. Well, that makes SK Hynix happy, right?
You know, it doesn't really make a whole lot of difference because what's going to happen is
Micron is going to not sell in China. SK Hynix and Samsung will sell to cover for Micron, but they'll do it by taking away DRAM from people in other countries.
Yeah, yeah. So Micron will be able to support that. It's all fungible, and none of this trade restriction stuff matters.
Yeah, except for the fact that the people who are buying the Micron DRAM are the ones who are suffering.
The Chinese companies are getting hurt by what the Chinese court did.
Okay, so anyway, that wasn't Flash Memory Summit.
I'm sorry.
Yeah, yeah.
But this whole thing of how big an impact Yangtze is going to have on the flash market depends on how much money the Chinese are willing to lose.
Reminds me of 3D Crosspoint.
Oh yeah. Yeah. It's exactly like that. Do we need to go back over that?
I can, I can wax lyrical on that.
Well, I do want to go over, you know, the Intel-Micron divorce and people calling Optane, and therefore 3D Crosspoint, a failure.
It's too early to say. Um, but, you know, my answer about 3D Crosspoint is, first of all, okay, NAND flash taught us something, and that is that if you want to compete in cost against DRAM, even though your die size is smaller than DRAM's and your production cost should be cheaper, you have to get near DRAM-volume shipments to be able to get a cost structure that's close to what DRAM's is.
And, you know, NAND Flash didn't cross over DRAM pricing until 2004 when NAND Flash gigabytes got to be about a third as large as DRAM gigabytes.
3D Crosspoint is battling the same thing.
It's just economies of scale. And so Intel
has, their goal is to sell these very expensive Xeon processors. And because of the fact that they
might be still losing money with 3D Crosspoint when those processors come out, I've been telling
people that, you know, if Intel loses $10 per system in the Optane memory that
they sell into that system, 3D Crosspoint, that they'll make it back by selling a $50 or more
expensive processor. And so it's still good business for them.
So is that the driver for the divorce from Micron, that Micron isn't selling Xeons and
therefore eventually has to make money on Crosspoint itself?
Micron and Intel are not telling us what the details are for their contract.
But apparently, Micron... That's why we're asking you to speculate, Jim.
Exactly.
What I'm saying is that apparently Micron was selling its share of the output of 3D Crosspoint from their fab. It was selling its share to Intel. And at some point, Intel said, well, we don't need that much 3D Crosspoint because it hasn't taken off in SSDs the way that we expected. And so Micron is sitting there holding onto this 3D Crosspoint memory that
they can't use because they don't want to sell it at a loss because they don't have any offsetting
processor price to be able to make up for it. So, you know, Micron right now is looking at 3D
crosspoint. They're saying it's not popular. It's losing money. Why in the world do we want
anything to do with this? And I think that's what's made the breakup really happen.
Ah, that's interesting. So they've got this glut of 3D Crosspoint that they can't unload at the price they want, and they're not willing to take the loss at this point.
Yeah. Well, 3D Crosspoint, for your audience, 3D Crosspoint doesn't make a grain of sense if it sells for a higher price than DRAM. You know,
why would anybody buy something that underperforms DRAM at a higher price
than DRAM?
Yeah. Well, for our audience, it's not even that. It's that once you package 3D Crosspoint into an Optane SSD, it's five times as fast as flash. But, you know, you're taking a thousand-times performance advantage and limiting it down to, you know, only five times, or it's actually seven to eight times according to Intel.
You know, that's like.
Well, according to public benchmarks, I've seen it's five times.
Yeah. But, you know, hide your light under a bushel basket.
Tie a 10 ton weight onto the back of your Ferrari and try dragging it down the road.
It's the DIMM form factor that always was attractive to me. And they just, they, I was at the announcement where they released those. Well, where they announced those. I understand they've shipped one DIMM to a customer who doesn't have a server they can plug it into yet, but that's a whole other story.
Well, they, they named it Google.
And that was part of the Flash Memory Summit, was that Rob Crooke, the guy who gave the keynote for Intel, announced that they had shipped for revenue into Google,
which is one of the goals that they had this year.
Me and a few other guys who kind of watch over this market closely,
we looked at Intel's projection that they were going to sell revenue units this year.
And we said, OK, one is going to somebody's grandma for 10 cents.
And nobody would be able to find it.
Well, it didn't go to somebody's grandma. It went to Google and still somebody, nobody can find it.
And Google is using it, it's in a DIMM form factor. So it's more or less for some memory-intensive application activity, I guess.
The DIMM form factor is more just to be able to take advantage of the speed. You know, for all I said negative about Optane SSDs, I love this technology when it's on a DIMM.
That, you know, if you've got something that, let's say it's a third as fast as DRAM or something like that.
Well, you know, an SSD is one one thousandth or so the speed of DRAM.
And so, you know, this is a memory layer that
makes a lot of sense, especially if it costs half as much as DRAM. It's about a third as fast.
That makes sense. And if you look at in-memory database technology like HANA or Aerospike,
the advantages are just glaringly obvious. Yeah, you know, there's the glaringly obvious,
but then there's just a plain
old computer architecture thing that you stick in a memory level that's way cheaper, that is,
you know, a little bit slower, and it ends up giving you more compute performance per dollar.
You know, a lot of your guys probably found out that they can use less DRAM in a system by putting in an SSD, and, you know, the system will cost the same but the performance will be higher. And, you know, that's the same thing.
Well, most of our audience is virtualizing up the wazoo, and so it's not DRAM, it's, uh, cores.
Okay, that's good. You know, because the CPU is sitting idle less, I can use a smaller CPU.
Just back to the SSD thing, though. You know,
something that, um, I've got this other blog, other than the one that you mentioned. You said something about The Memory Guy, but I also do The SSD Guy. And I've got this post on The SSD Guy that is, why will Optane SSDs be slow? And this was before the SSDs came out, and what I calculated was that if you put an impossibly fast memory, you know, one that had infinitely fast access time, behind an NVMe PCIe interface, you'd still end up with something that was only six times as fast as a NAND flash SSD.
And it's just because of the delays.
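For anyone who wants to see roughly how that bound falls out, here's a back-of-the-envelope sketch. The 80-microsecond and 13-microsecond figures are illustrative stand-ins chosen to land near the six-times result, not numbers taken from Jim's post:

```python
# Illustrative back-of-the-envelope latency math with assumed round numbers:
# suppose a NAND flash NVMe SSD completes a read in roughly 80 microseconds,
# of which about 13 microseconds is PCIe/NVMe protocol, controller, and
# software overhead rather than the media itself.
nand_ssd_latency_us = 80.0        # assumed total read latency of a NAND SSD
interface_overhead_us = 13.0      # assumed non-media overhead per I/O

# Even with an infinitely fast medium, the interface overhead remains,
# so the best possible speedup is bounded by:
max_speedup = nand_ssd_latency_us / interface_overhead_us
print(f"Upper bound on speedup: about {max_speedup:.1f}x")   # roughly 6x
```

The point is that once the media gets fast enough, the fixed interface and software overhead, not the media, sets the ceiling.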
You predicted six.
It turned out to be five.
Pretty good, Jim.
Yeah.
You know, it's based on numbers I got from Intel, believe it or not.
Yeah, yeah, yeah.
So, I mean, the standard game here is like it has to be at least 10x faster or cheaper
or something like that. And 6x is not going to cut it, which would explain the Optane dearth of market in the SSD space, I guess.
Yeah.
But you didn't want to talk about this. You want to talk about Flash Memory Summit, right?
Yeah. So the other thing in Flash Memory Summit, I saw some
more QLC coming out.
Yeah, QLC is a big deal.
And that's because of 3D NAND.
Something that's always been a difficulty for the NAND flash guys is that they've been doing this scaling thing,
which the whole world of semiconductors is built around, where you keep making the transistors smaller and smaller and smaller.
And, you know, in the past, they used to always get faster.
And so that's why processor speeds used to go up. But it also, you know, it allows you to sell the transistors
or basically the gigabytes of NAND flash for cheaper. And that was all good. But one of the
consequences of that was that the number of electrons that accounted for a bit kept cutting
in half every time you did one of these process shrinks.
And so then you go to QLC and you cut the number of electrons per bit down by a further amount.
And people just said, oh, we can't do that. It's a lot easier and cheaper just to shrink the chip.
So they got down to 15 nanometers and they said, oh, well, now we're down to so few electrons per
bit that we can't even detect them with TLC, much less QLC.
And so, you know, it just looked like QLC was never going to happen. Then they went to 3D
and all of a sudden the number of electrons per bit goes up by a factor of about 10.
And, you know, it's like, oh, now we can do this. Now we can do QLC. And so that's why everybody's doing QLC now: all of a sudden, all of the elbow room opened up for them.
Well, I get that part. You know, I can do 16 levels because I have enough electrons to be able to tell the difference. But the part that I'm still having a little bit of trouble with is, when we moved from MLC to TLC, there were longer write latencies
because it had to pump some charge in
and then see if that was the right amount and adjust.
Doesn't that problem exist with QLC even worse?
Yeah, it's a very similar kind of a problem
that they use a feedback mechanism
to decide whether or not they actually program
the bit to the value that they want to.
And so it's, you know, you push it in that direction and then check it
and then push it a little farther and check it.
And so, yeah, it's going to be slower.
QLC.
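To put rough numbers behind the electrons-per-bit point, here's a small sketch of how the number of charge levels, and the margin between them, scales with bits per cell. The figures are purely illustrative and not taken from any vendor datasheet:

```python
# Rough sketch of why more bits per cell means tighter margins and slower
# programming. The numbers are illustrative, not from any datasheet.

def levels(bits_per_cell: int) -> int:
    # n bits per cell require 2**n distinguishable charge levels
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    n = levels(bits)
    # With a fixed total charge window, the gap between adjacent levels
    # shrinks as the level count grows, so the program-and-verify loop
    # has to take smaller, more careful steps to hit the target level.
    print(f"{name}: {bits} bit(s)/cell -> {n} levels, "
          f"gap between levels ~1/{n - 1} of the charge window")
```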
So the write activity will actually be slower,
but does it affect the reads as well?
Not nearly as much.
No, no, nowhere near as much.
The reason why is because the reads don't use this, you know, cut-and-try, cut-and-try thing. Yeah, the reads, they have to wait until the chip settles down for a while before they can actually read the data, and the amount of time that they have to wait is longer, just because of the fact that they're having to do a more sensitive read of what the data is.
So do the QLC chips we're talking about, like some of the previous versions, allow the SSD controller to declare part of them as SLC?
Yeah. And as a matter of fact, Micron made a big deal about this.
They had a little event at the Forty Niners' stadium the day before the Flash Memory Summit began. And they showed off a TLC and a
QLC SSD, both of which used SLC caches that I think were about 10% of the size of the SSD.
And both of them performed about the same as long as you stayed within the cache. But once you got
to the point where you overstepped the bounds of the cache,
then the QLC SSD's performance was just enormously slower than the TLC.
Yeah, but so writes go into the SLC, and then they only get pushed out into QLC as part of garbage collection or such, which would mean that however long that write takes is way post-ack, and as a storage guy, I don't care.
Yeah.
And I think, as long as the destage is hidden.
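As a rough picture of that SLC-cache-plus-destage behavior, here's a hypothetical model, not any particular drive's firmware; the cache size and write rates are made-up numbers for illustration:

```python
# Hypothetical model of an SLC write cache in front of QLC media.
# Writes are acknowledged as soon as they land in the fast SLC region;
# a background task later destages them into QLC. Sustained write speed
# is fine until the SLC region fills, then it drops toward QLC speed.

SLC_CACHE_GB = 100          # assumed ~10% of a 1 TB drive
SLC_WRITE_GBPS = 2.0        # assumed SLC program rate
QLC_WRITE_GBPS = 0.4        # assumed (much slower) QLC program rate

def effective_write_rate(burst_gb: float) -> float:
    """Average write rate seen by the host for a single write burst."""
    if burst_gb <= SLC_CACHE_GB:
        return SLC_WRITE_GBPS                           # whole burst fits in the cache
    slc_time = SLC_CACHE_GB / SLC_WRITE_GBPS            # first part at SLC speed
    qlc_time = (burst_gb - SLC_CACHE_GB) / QLC_WRITE_GBPS   # rest at QLC speed
    return burst_gb / (slc_time + qlc_time)

for burst in (50, 100, 500, 1000):
    print(f"{burst:5} GB burst -> ~{effective_write_rate(burst):.2f} GB/s")
```

Within the cache everything runs at SLC speed; once a burst outruns the cache, the average rate collapses toward the QLC program rate, which is the cliff described in the Micron demo.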
The question is, what kind of workload do you have?
And if you've got a workload that does a lot of writes, this is a really bad SSD for you.
But they are aiming this at the client and client workloads are read intensive.
Yes.
And also they're just they don't do a whole lot of disk access.
Well, even in the data center, we're seeing a much larger range of variation of SSDs for workloads and use cases.
It fascinates me how much people understand their workloads and how much better it is.
Because I remember only five years ago, everybody was saying they had to get more drive writes per day than the next guy. And they were up to about 25 drive writes per day. And apparently there were some people who said, hey, wait a minute, we don't want to pay extra for that. We found out our workloads don't require it.
Yeah. I remember I was at, uh, an HPE deep dive. It might be two years
ago now. And somebody asked a question about SSD endurance. And one of the 3PAR
guys just dialed into their phone home analytics and said, over 80% of the SSDs we've ever sold
still have over 80% of their life left. Yeah, it's pretty amazing. And the larger the SSD you use,
the more likely you are to be there.
I look at these SSDs, you know, and at the Flash Memory Summit everybody goes out and says, oh, we've got a 16 terabyte, or we've got a 32 terabyte, or a 64 terabyte SSD. Well, Thomas Isakovich has got that 100 terabyte thing he says has unlimited endurance, because the SATA interface is such a bottleneck you can only write 0.3 drive writes per day.
Yeah, that was the point I was going to make, because, you know, you put that much storage into it, then you put it behind some kind of a, you know, very slow interface, and, you know, who cares about endurance anymore?
Yeah, but that kind of thing is perfect for, you know, Facebook, whose long tail is hotter than most people's hot data.
Yeah, except it's still expensive.
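For a sense of where a number like 0.3 drive writes per day comes from, here's the rough arithmetic. The sustained write rate below is an assumed figure picked for illustration, not a published spec for that drive:

```python
# Rough arithmetic on why a 100 TB SATA SSD tops out around 0.3 drive
# writes per day. The sustained write rate is an assumed figure for
# illustration, not a published spec.
capacity_tb = 100
sustained_write_mbps = 350          # assumed real-world sustained SATA write rate
seconds_per_day = 24 * 60 * 60

tb_written_per_day = sustained_write_mbps * seconds_per_day / 1_000_000
dwpd_ceiling = tb_written_per_day / capacity_tb
print(f"~{tb_written_per_day:.0f} TB/day -> about {dwpd_ceiling:.2f} DWPD maximum")
```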
From what I understand, Facebook is using hard drives where they put stuff onto the hard drive
and then they just power it down.
And then, you know, once every 12 months or so, somebody says, oh, where's that picture of Uncle Benny that I posted?
MAID?
MAID.
MAID is back.
Yeah, that's it.
Everything old is new again.
Ray, you know that?
I know.
I know.
That's impressive. I'm going to grab the conversation here and just say something that I find absolutely fascinating that I sat as a chair on a panel for is something that some companies call in situ processing.
Others call it computational storage.
And there are like a couple other names for it.
It's, you know, people are starting to say, hey, we've got a processor inside the SSD.
Can we do something with it?
And some other people are saying, oh, well, what if we put a more powerful processor into the SSD? Then what do we do?
And it seems like there's a lot of attention that's falling onto this. It allows you to do
scale-out at the SSD level, which gives you great linear performance increases as you add SSDs.
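There's no standard API for this yet, as the exchange that follows makes clear, but the core idea is to ship a small piece of work to the drive instead of shipping all the data to the host. The sketch below is purely hypothetical; the ComputationalSSD class and its offload_filter call are invented for illustration and don't correspond to any shipping product's interface:

```python
# Purely hypothetical illustration of the computational-storage idea:
# instead of reading every record back to the host and filtering there,
# push the predicate down to a processor on the drive and get back only
# the matches. This API is invented for illustration; no standard
# interface like this exists yet.

from typing import Callable, Iterable, List

class ComputationalSSD:
    def __init__(self, records: Iterable[bytes]):
        self._records = list(records)      # stand-in for data on the media

    def offload_filter(self, predicate: Callable[[bytes], bool]) -> List[bytes]:
        # Imagine this runs "on the drive": only matching records ever
        # cross the interface back to the host.
        return [r for r in self._records if predicate(r)]

drive = ComputationalSSD([b"cat:tabby", b"dog:beagle", b"cat:siamese"])
matches = drive.offload_filter(lambda rec: rec.startswith(b"cat:"))
print(matches)   # only the cat records come back over the "network"
```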
So, yeah, we've been talking computational storage types of things since the late 90s.
We were doing sort functionality and stuff like that out in the storage system.
Even a couple of years back, I think Dell EMC showed that they were running almost a hypervisor in their mid-range systems out there,
and you were able to run different almost applications underneath the storage controller.
Moving it out to the drive, you know, it's a different level.
And, well, the fact that you've got, you know, a cache inside the drive now for SSDs with SLC as well as TLC or QLC. You've got a lot of complexity there,
but if you've got the processors that you can put in there,
the question is how do you get the processing out there
and stuff like that and the information back, I guess?
Where's the standards?
It's too early for standards, Ray.
Well, where's the API then?
I mean, you've got to have some way of getting functionality out and back and stuff like that. I was in one of the sessions, the keynotes, where the guy talked about stuff like that, with cameras, and you can say, well, you know,
these seven cameras send their data to this SSD, which figures out if any of the guys in the black
book are in our casino. Well, here's the question. Should the facial recognition be at the edge at
the camera or should it be at the storage, which is all behind the process? It's a question of
where are you going to put this logic, right? Isn't it always?
It is always. Now you've combined in-situ computing and edge computing.
Ray, you've created a whole new category.
I know, I know.
But it's the problem, right?
Where do you want to put the computing?
You want to put the computing at the server?
You want to put the computing at the edge?
You want to put the computing at the storage?
You want to put the computing at the network?
It's got to go everywhere, ultimately.
But, I know, we've reached the stage where eight ARM cores is so cheap you can put it anywhere. And, you know, this is one of the big trends in computing over its whole lifetime: things get smaller and cheaper and you can deploy them in new places, and now we have to figure it out. Mainframe, servers, PCs, phones.
I got to the point where the phone is
almost like my altar anymore.
I give
sacrifices to the phone.
It's bizarre.
I do everything with my
phone, other than a few other
operations here with desktops
and stuff. It's bizarre.
So you think this computational storage is going to go someplace?
It's hard to tell, but, you know, my inner geek is just in love with it. The thing that I keep in mind, though, you know, to keep me a little bit skeptical about it, is the idea that there were a couple of companies, Schooner Information Systems and Virident, who back in the middle 2000s came out with boxes that were the same products. One of them was a MySQL box, and the other one was a, oh, come on, a Memcached box. And they were just basically programmed servers that would do these operations.
And, you know, if you loaded in all of the database and then you just kept feeding inquiries
to the MySQL thing, then it would give you back answers.
And, you know, it seemed like that was all very good stuff, but, you know, neither company
was able to make a go of it.
So, you know, I kind of wonder about computational storage.
Big enterprises would rather spend 40 times as much on an Exadata
than on something new that runs MySQL.
That's part of the problem. The other part of the problem is
do you want a product that's dedicated to one application
like that? Obviously, the database machines and stuff like that, maybe.
The hyper guys, the hyperscalers might,
I don't know.
But, you know, the hyperscalers have got enough intelligence that they could do it themselves.
Well, at the application appliance level. But they'd rather buy an SSD that they can program, from NGD, I think.
They can program the controller logic itself.
Yeah, we might have to get somebody
from this field on a future episode.
Yeah, possibly.
But this does get you past that
whole network being
the bottleneck thing, because if you've got the processor
and the storage all in the same box,
then it gets rid of...
I don't think so.
You still got to get the...
You're getting the data from someplace,
and you're pushing it through the network to the server and you're pushing it from the server to the storage. And then you want to do the computation in the storage.
I don't know.
Yeah.
Now, if the data is sitting at the storage, okay.
Well, and the computation is, you know, 10,000 times as many accesses because it's all random and parallel.
And the write is from a camera, which is purely sequential. So, yeah.
But, you know, NVMe over Fabrics is kind of the whole other side of this: if it's only 100 microseconds to access an NVMe SSD in a JBOF, what's the advantage of having it in the server?
So we live in interesting times.
And that's not even to mention the whole thing that's going on with storage class memory, which now SNIA wants for everybody to start calling persistent memory instead.
Well, I have problems with storage class memory because I keep seeing vendors declaring that they're fast SSDs or storage class memory.
And it's like, no, unless it appears in the memory space, it's not storage class memory.
Yeah, yeah.
Speaking of storage class memory, is Crosspoint the only one out there at the moment or are there others?
Oh, this was actually a question I wanted to ask.
Samsung and Toshiba have their special NAND?
Yeah.
What makes it special?
With Toshiba's, it's that it uses several planes.
And the way to look at that is just assume that each plane is like a very small NAND flash chip by itself.
And so they can all operate independently of each other.
One of the big problems with NAND flash is that you erase it,
and the area that you erase ends up getting tied up
and can't perform other operations for the 50 milliseconds
or whatever it takes to erase it.
So, you know, by having multiple planes,
then you can have concurrent operations happen
in multiple places at the same time.
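A toy way to picture the multi-plane point, with made-up timings rather than anything from Toshiba's or Samsung's parts: treat each plane as its own little flash chip with its own busy timer, so an erase ties up one plane while the others keep serving reads:

```python
# Toy model of multi-plane NAND (illustrative only). An erase ties up one
# plane for ~50 ms, but reads can still be serviced from the other planes
# during that window.
ERASE_MS = 50.0

class Plane:
    def __init__(self):
        self.busy_until = 0.0   # time (ms) at which the plane is free again

    def erase(self, now: float) -> None:
        self.busy_until = now + ERASE_MS

    def can_read(self, now: float) -> bool:
        return now >= self.busy_until

planes = [Plane() for _ in range(4)]
planes[0].erase(now=0.0)                     # plane 0 blocked for 50 ms
readable = [i for i, p in enumerate(planes) if p.can_read(now=10.0)]
print(f"Planes able to serve reads at t=10 ms: {readable}")   # [1, 2, 3]
```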
But that still just means that they've got faster write throughput.
Yes.
Well, also that writes and erases don't block reads.
Right.
Yeah, yeah, yeah.
So they've got faster performance.
Does that put it as a storage class memory?
No.
I don't know.
I mean, so what's the big difference between Crosspoint and NAND?
It's faster.
It's faster, and it doesn't require power.
Well, NAND's like that.
It can be faster, I guess.
A couple of companies tried putting NAND onto the memory bus, and that didn't work out very well.
This is Diablo.
Well, Diablo tried that, and then Netlist sued Diablo out of existence.
Well, yeah, except Diablo wouldn't have been sued out of existence if they had had a product that had been popular.
Really? Yeah, because they could have paid for the suit, I guess.
Yeah.
Interesting.
And now Netlist seems to have announced something very similar to what Diablo was talking about.
Yeah, like their Memory1.
Yeah, but I never bought it. Yeah.
So, I mean, Crosspoint has got, I don't know, phase change memory. Whatever the technology in there is sort of different than NAND today.
And there's a couple other guys out there with re-ram and stuff like that.
But I haven't seen much noise from them.
Well, the one to look out for, just because of the fact that they're in production with something that can be used, is Everspin, who makes what's called magnetic RAM.
Yeah, MRAM. Yeah, I was going to say Everspin.
And pretty much the other companies have things
that are either used in niche applications
or only exist in paper.
There are an awful lot of the ones that only exist in paper.
But Adesto makes something that's in production
and Ramtron makes ferroelectric memory.
The Adesto thing is resistive memory, resistive RAM.
And, you know, all of them share the same common problem that they're just way, way, way more expensive than anything else you'd use.
And so you have to find an application that can justify that expense.
Adesto sells chips into surgical instruments that need to be sterilized by blasting them with very heavy doses of X-rays, and those heavy doses of X-rays would disturb the bits in, uh, NAND flash. Basically, it would erase the flash.
And so it's like the UV light I used to use for EEPROMs.
Oh, it's just like that, except the UV light required a quartz window
on the package.
Yeah.
Yeah, and we are old.
But anyway, the quartz window isn't required
when you're using x-rays
because the x-rays will just go through anything,
the plastic package.
But so that's one application
that's willing to pay more with,
who is it?
Somebody, maybe it was Adesto too, sent something.
No, it was Everspin, sent something up in a satellite.
And, you know, there's high radiation in satellite.
Cosmic rays.
Yeah, and, you know, the alternative was to put an NAND flash up there and wrap it in lead.
Well, you know, when you're worried about the weight of the satellite, lead is not a good option.
Yeah.
It just weighs too damn much, you know, to fire up into space and stuff like that.
So, you know, there are good reasons.
And they'll spend a whole gob of money on these parts that are not very cost effective compared to NAND flash or DRAM for applications like that.
For the most part, you know, computer users, they look at the price of those things and they say, wow, this is expensive. Now, that said, IBM just recently
announced a system on the Monday before the Flash Memory Summit. I can't remember what the system is. It's based on this, you know, Texas Memory Systems.
Yeah, it's modules for the 9100, if I remember right. FlashSystem 9100.
And those modules have MRAM in them, from Everspin, as a journal. And what they found out is they could make the write journal using battery-backed-up SRAM, like people do, or by using NVDIMMs, like a lot of people do, or they could use this Everspin stuff. The Everspin stuff is faster, consumes less power, and, you know, it was reasonable in cost.
And they get to save the UltraCaps that you need to hold up the DRAM.
Yeah, which is a reliability concern more than anything.
It works for them.
Well, it's a cost concern.
It's a reliability concern.
I've been hearing lately that there's supply problems with them too.
Yeah, yeah, yeah.
Well, listen, Jim, this has been great.
Is there anything you'd like to say to our listening audience?
Next time, we'll talk to another system storage technology person.
Any question you want us to ask, please let us know.
If you enjoy our podcast, please tell your friends about it,
and please review us on iTunes, as this will also help get the word out.
That's it for now.
Bye, Howard.
Bye, Ray.
And bye, Jim.
Bye, Ray. Bye, Howard. Good day, gents.