Grey Beards on Systems - 86: Greybeards talk FMS19 wrap up and flash trends with Jim Handy, General Director, Objective Analysis

Episode Date: August 22, 2019

This is our annual Flash Memory Summit podcast with Jim Handy, General Director, Objective Analysis. It's the 5th time we have had Jim on our show. Jim is also an avid blogger writing about memory and SSD at TheMemoryGuy and TheSSDGuy, respectively. NAND market trends Jim started off our discussion on the significant price drop …

Transcript
Starting point is 00:00:00 Hey everybody, Ray Lucchesi here with Keith Townsend. Welcome to the next episode of the Greybeards on Storage podcast, a show where we get Greybeards storage bloggers to talk with system vendors to discuss upcoming products, technologies, and trends affecting the data center today. This Greybeards on Storage episode was recorded on August 15, 2019. We have with us here today an old friend, Jim Handy, General Director of Objective Analysis. This is the fifth time we've had Jim on the show for our annual post-Flash Memory Summit podcast. So, Jim, why don't you tell us what you saw at Flash Memory Summit 19 and what's going on in the flash market?
Starting point is 00:00:49 Well, I'll tell you one thing that I saw when I was there was you. Yeah, it was a really crowded show. God, it's gotten big. Yeah, yeah. I noticed that, you know, even though I kept seeing you on other sides of Windows and stuff like that, you and I never had a chance to speak while we were at the show. Yeah, these things happen. We're both very busy. You're more busy than I am at the show.
Starting point is 00:01:10 You do a lot of the lead-in discussions and stuff like that, right? Yeah. One of the biggest things that I do during the show is the very first morning I give an overview of the flash memory market, where it is now and where it's been. And, you know, this is a very tough time for the business because prices pretty much decayed all the way through 2018. And, you know, then hot on the heels of that, at the beginning of this year, DRAM prices collapsed, which was, that's like the other thing that is made by three of the major NAND flash firms. And so those guys were hurting. Yeah, so both sides of their business model have collapsed. Not a good sign, but they've seen this sort of thing before, right? And it's not like it's a new gig.
Starting point is 00:01:57 No, it's, you know, one of the things that I make money at is telling people it's going to happen again. Because every time everybody says, no, it's different this time, it's never going to go down. Or, you know, when it's down in the dumps and they say, it's never going to go up again. And, you know, guess what? It does. Yeah. So does this ever translate into cheaper SSDs at, you know, the enterprise level, or does that take a while? No, very much so. Matter of fact, I was looking at a curve that I had drawn of flash prices that, I think, said in 2001 or something like that, flash was ten thousand dollars a gigabyte, and now it's about a nickel. So, you know, things have changed an awful lot.
Starting point is 00:02:39 so so yeah for the enterprise guys uh we've had um you know the flash price has been going down since the beginning of 2008 and so enterprise ssd prices have gone down but usually uh what the way that your uh clients look at it is not so much that the prices have gone down for a certain um capacity level but instead that they're getting more capacity for the same price. So, you know, if they're in the market for $500 SSDs, then that $500 SSD is going to be maybe six times as large this year as what they would have been able
Starting point is 00:03:14 to buy for $500 last year. Six times. That's good. Yeah, because we went from 25 cents per gigabyte for a flash down to about five cents. Yeah, yeah. No, I understand. Maybe five times
Starting point is 00:03:25 yeah anyways lots less lots less yeah yeah that's that's that's because of the the transition of 3d is as well as just uh the overproduction well yeah it actually both of them kind of tied into each other because 3d didn't come online as quickly as everybody thought it would, and it wasn't until they hit the 64-layer configuration that it actually became profitable for anybody to make. And so what they were doing was they continued to extend the life of old planar NAND flash. And by doing that, because they were having problems, there arose the shortage. And then once the shortage was over, boy, was it over. It became an oversupply in the wink of an eye.
Starting point is 00:04:10 Yeah, it kind of reminds me of robotics oscillating from one end to another or something like that. But it's certainly a challenge in that market that has to be managed, I would say. Yeah, but that's nothing that anybody on this phone call really has to worry about because they don't own NAND flash fabs. Well, NAND flash fabs, probably $6 billion these days, maybe more. Double that. Double that. So, yeah, I'm priced out of that market already then. Let me just, you know, because we don't have to talk about market dynamics for the whole call, but let me just put a quick cap on that as to where we are, is that during the cycle that DRAM and NAND
Starting point is 00:04:53 Flash go through is that there'll be a shortage, prices will stay flat, or for DRAM, you know, we had a really rare year last year where the prices actually went up. But anyway, typically they go flat. And meanwhile, costs continue to come down because of Moore's law. And so you've got this growing gap between price and cost. And when the oversupply hits, then the prices collapse down to cost. The only thing that keeps them from going below cost is that there are trade sanctions found against people who sell below cost. The only thing that keeps them from going below cost is that there are trade sanctions found against people who sell below cost. And so they're disciplined not to do that. So right now, NAND flash prices are at cost. So last year, you had about a 60% price drop. This year, we're at the follow cost, which is about a 30% annual price drop. And what we're expecting to see is because of, you know, capital spending that went on,
Starting point is 00:05:50 you know, basically people did too much capital spending in 2016, 17, or I'm sorry, 2017 and 18, then we end up with an oversupply as a consequence of that in 2019, 2020. But now Chinese manufacturers will be ramping up right when the market is about to go back into a shortage again. So that's going to prevent it from coming out of a shortage until 2021. And so what your audience will be noticing is that NAND flash prices have dropped significantly. Sorry, NAND flash prices have dropped significantly this year. And they'll continue to drop in 2020 and in 2021 before they finally level off and become stable for a couple of years in 2022,
Starting point is 00:06:40 2023. So 30% per year cost declines this year and next year for SSDs. Great for us, not so much for the folks producing it. No, no. It'll be difficult for them. So I do have a question around, because we do want a healthy market. You can't have your producers taking a loss because ultimately that hurts the overall industry. Are we at risk of people basically exiting this business? That's a good question, Keith. And so far in DRAM, there have been a lot of companies that
Starting point is 00:07:21 exited the business. Back in the middle 90s, I counted 28 manufacturers, and now there are three. NANDFlash has a little bit of a different dynamic because of the fact that it is growing really strongly right now. It's got 45% annual gigabyte growth, and that will fuel outside investment to keep businesses propped up. So I would expect for NANDFlash to do all right. DRAM might see some more consolidation. We might go from three vendors down to two in DRAM. And the Chinese fabs are starting to come online with their own manufacturing, right? I mean, in addition to the three or four others. Yeah, they're tooling up to do it. And know it's it's if if you look at it from
Starting point is 00:08:07 their perspective it's the most reasonable thing in the world these guys have a lot of cash and they are buying all of their chips from outside as a matter of fact they buy more chips you know that they spend more on foreign chip purchases than they spend on oil. Interesting. Yeah. Yeah, and so they develop a domestic source of supply, and so that's what they're after right now. You know, there's a lot of saber rattling and people, you know, pitching their voices up about Chinese threat and all that kind of stuff,
Starting point is 00:08:40 but it's just a very natural thing that they do. Yeah, yeah, yeah, I understand. Well, we always said that data was the new oil, and here we have an example of it almost. Yeah, and I think the semiconductor manufacturers would dislike having it be said that they had greasy products. Well, we could take that metaphor and throw it away, I guess. So what else happened at Flash Memory
Starting point is 00:09:06 Summit? There seemed to be a lot of discussion on computational storage more than last year. Yeah, there were a few companies who had that in there. And this was tried earlier on with hard drives. And now the SSD guys are trying to harness the processor inside their SSDs to do some computation. You want to describe what computational storage is, Jim? Yeah, what they do is there's a little bit of a computer inside each SSD and all of the storage is right there. So rather than move data all the way to the main processor in the system, the idea is to use the processing inside the SSD to do a part of the processing. So smaller tasks can be done right there without moving the data all over the place.
Starting point is 00:09:55 I was tried by a storage guy once or twice before, but go ahead. I'm sorry. Yeah, it's an interesting concept, and maybe the time has come for it now. I saw NetEnt there, and they were doing something focused on video transcoding. Yeah, that's one that I missed, so I can't really comment on that. Yeah, I don't know if I would call it minimal. They were talking about almost running a container-level functionality on the SSD itself. And then video transcoding is not lightweight, I would say. Yeah, the big things that they're trying to do is to find out which things involve more data movement than actually the data processing. And so they try not to put too much of a processor inside the SSD,
Starting point is 00:10:47 but just try to do the tasks that involve smaller amounts of processing. And maybe video transcoding is something that actually involves a whole lot more data movement than actual manipulation. I think it's something you do over and over again. It's something that can be, I'll call it distributed out, you know, and different chunks of data could be processed at each node. And if it happens to be the SSD, then you don't even have to move the data that much. But you're doing a lot of data bandwidth intensive stuff. Keith, wouldn't you say, I don't know, are you in the media entertainment space encoding is requires gpus to really be effective so i would say that you know for it to be on the ssd you know i spent a lot of money on gpus to encode you know to do my stuff so it i think it depends on the codic and and the bit rate and what they're trying to accomplish. But I can see it being the application.
Starting point is 00:11:47 What I've seen is people talk more of, I think, Ray, it's more in your wheelhouse for AI and the ability to do inference and neural net stuff. The closer you can move CPU to the storage, the better. That's been more of the applications I've seen. And mainly it's been in a storage array and not necessarily on the SSD itself. Yeah, yeah. So back in the 90s when I was working for a storage company, we were doing specific database types of things and sorts and stuff. We were trying to move sort functionality out to the storage system. And it worked for a little bit. And there were certain
Starting point is 00:12:27 customers that liked it. But finding a universal solution out there like being able to run a container on an SSD, that's got some interesting legs to it, I think. Yeah, there's something going on in the industry now where the standards bodies are getting involved pretty early on. Yeah, so that should help out with this. Maybe it was a lack of standards that was a problem before.
Starting point is 00:13:01 Something that is a little bit of a tough decision, though, is how much processor do you want to put into the memory or into the SSD? Because typically you don't have an awful lot of DRAM in an SSD. And once you start moving processing in there, then you'll start wanting to bulk up the DRAM and maybe bulk up the processor some more. Yeah, memory and more cores and stuff like that. Yeah, it seemed like it was pretty straightforward to add a couple of ARM cores. Maybe it's not from a chip perspective, but, you know, it seems like as the, I'll call it Moore's Law, gets more and more farther down the path, getting more chip cores is a pretty straightforward thing to do.
Starting point is 00:13:38 It doesn't really have that much incremental cost. The DRAM and the support logic to support it, that's a different game. Yeah, I think the A16Z guys did a deep dive on this a few months ago. And I think it might have been Martin Casado that was talking about this. But the concept, or it could have been Andreessen or Horace, one of the two, talked about the concept of Moore's Law that at some point, they still believe that CPUs at a certain threshold will basically be free. So I'm wondering if these computational computing guys are kind of banking on that. General question for either one of you, how much headroom is in a CPU that's in a typical SSD? Like how much capacity to do anything other than the functions that a basic SSD does is low.
Starting point is 00:14:26 You know, what's the utilization level of an SSD control CPU? I don't know. I would say it's pretty low. Yeah, the way that they choose how much processing power to put into the SSD is that they determine what is the greatest workload that they'd like to specify the SSD for. And the absolute worst possible workload would be one that just did tons of writing because you end up having to clean out spaces. One of those weird quirks about NAND flash is that you have to erase everything before you write to it. And yeah, yeah, it's really awful.
Starting point is 00:15:10 A lot of garbage collection going on, a lot of work to support that sort of workload at speed. I guess the assumption is that at some point, ARM prices are going to get so low that individual chip design for SSDs may not make sense. And it could just, you just go and buy a couple of ARM cores to go into an SSD, which would be overkill for most SSDs. And you end up with extra capacity just because of Moore's law, that Moore's law is outpacing the need for compute inside of an SSD is kind of my layman assumption.
Starting point is 00:15:45 Yeah, I think that's there today, right? Yeah, yeah. They could give those things away for free. So, and, you know, I think I mentioned that Samsung has six cores inside one of their processors. They're, I think, the top of the heap as far as how many cores they're willing to throw at a problem. But they're really there. And, you know, it's interesting that Moore's law is what's allowing NAND flash to become cheaper and cheaper.
Starting point is 00:16:13 But it's also the thing that's allowing the processors inside the SSDs to get more and more powerful. It seems like these guys are having a real day in the sun. The hyperscalers seem to be the companies that are the most interested in computational storage right now. But it's a little bit hard to tell with them, because very often they'll buy a sizable number of products to go into some sandbox effort, and then they'll end up shifting directions and just dropping it. So, you know, we'll have to wait and see whether or not this becomes real mainstream stuff. So another thing at the Flash Memory Summit that I thought was interesting was Toshiba's XL-FLASH NAND. It wasn't clear to me what they were trying to do there,
Starting point is 00:16:59 but they claim it's a storage class memory kind of thing. Did you hear that, Jim? Yeah, it kind of turned my stomach. Ah! Why is that? It's NAND flash. All it is is it's SLC NAND flash is faster, and also it's got what are called multiple planes, which means that you have one chip that looks like it's actually,
Starting point is 00:17:20 I think it was 16 chips. And so you can exercise an awful lot of parallelism inside that one chip and, you know, get a lot more speed by doing that. You're saying it's SLC flash? It is. Oh my God. Well, I would explain, I guess I explained the endurance levels and stuff like that. Oh yeah. Yeah. It's, you know, really great. and slc always has been great but slc is a niche market and because of that even though technically you would expect slc flash to cost only twice as much as mlc flash or three times as much as tlc um slc flash actually costs or is selling for about 10 times as much okay yeah so Yeah. So it's a huge price difference.
Starting point is 00:18:06 And you look at it, and DRAM is only about 20 times as much as an ANFLASH. And so you get an SSD that's maybe six times as fast as an ANFLASH SSD, and it costs almost as much as DRAM. And I just can't picture there being much of a market for something like that. Does it play in the Optane space? I mean, is it roughly equivalent price kind of thing per gigabyte? Yeah. So that's the other side of it, right? Yeah. What they're trying to do, and Samsung is trying to do the same thing too. Samsung announced their ZNAND a couple of years ago, and their ZSSD. And the idea is that there are two places where Optane is playing. One is in the
Starting point is 00:18:46 NVMe SSD space, and then the other one is into the memory channel as a DIMM. You know, ever since I first heard about that, I said, well, this doesn't make an awful lot of sense. They're not going to sell very many SSDs. Everybody's going to want the DIMM product because that's going to really perform. And, you know, it seems like they've had some success with their SSDs, but, you know, certainly not enough for both Toshiba and Samsung to want to take that business away from them. Yeah, yeah. So they're trying to circumvent, kind of go around that really with a NAND technology that they think they can support. It's not actually easy for me to say, but nonetheless. Yeah, so the big deal is that Intel's got this. My suspicion is that the Optane SSD is mostly selling,
Starting point is 00:19:40 or has been for the past year because the Optane, well, I should use the official Intel name. It's called Optane DC Persistent Memory. That's the BIM product. Yeah. And I think that's probably going to cannibalize the SSD, Optane SSD market. I would thought, you know, it would start, you know, the volume production would increase, therefore the cost would come down, they maybe reduce the price of the Optane SSDs, maybe there would be a place where they could start gaining some traction, but I don't know. Yeah, they're losing a ton of money right now. A different session where I spoke at the Flash Memory Summit actually shows how much money Intel was losing when everybody else was profiting handsomely. Not a good place to be for long. The thing that I've started to quip about is that
Starting point is 00:20:33 Intel is selling a proprietary product at commodity pricing. It's just horribly unhealthy. But it's a part of the Cascade Lake ecosystem. Using Optane, they can make the Cascade Lake perform a whole lot better. And they've got it wrapped up that the only processor that knows how to communicate with the Optane DIMM is Cascade Lake. Yeah, and so AMD can't touch them there. And now all they have to do is wait for software to catch up
Starting point is 00:21:04 that actually takes advantage of everything that Optane has to offer. So where's Micron in all this? I thought they were going to do their own Optane SSD and be full-fledged partners and all this stuff. Micron's in a really cool position because they're the only company that is allowed to alternate source the Optane products. And they also are manufacturing the chips for Intel. And so they know, and you know, they know how much Optane is selling for in the marketplace. You know, that's pretty easy. But they know absolutely precisely what it costs to make. They know costs, they know market is what you're saying. Yeah, that's interesting. Yeah, yeah, yeah. And so they know how much money Intel is losing. And, you know, Intel says, yeah, we'll lose a billion dollars here, a billion dollars there. But, you know, we'll make it up with Cascade Lake sales because, you know, they're there right now. It's on like their ten thousand dollar and up processors. But Micron is sitting there saying, okay, we don't sell Cascade Lake processors. If we lose money, we lose money. And that's all there is to it. And so they have the enviable
Starting point is 00:22:13 position of having visibility into all of that stuff. And they're not introducing Optane until it has a clear path to profitability, which it doesn't have at this point. Hey, Keith, are you seeing a lot of Cascade Lake adoption in the market these days? Yeah, there's a deep, deep desire to have announced a huge joint partnership around basically optimizing SAP HANA for Cascade Lake and Intel Octane, DC memory. are discovering is even with in-memory databases, with the current DRAM design, they can ask questions that they thought of two or three years ago. And what they're discovering is that those questions lead to bigger questions and they need bigger databases to answer them. And you just run into, you know, the limits of cost and physical space.
Starting point is 00:23:26 So Cascade Lake, it's a real thing. I mean, there's a real demand for it. So Intel has a really interesting and great opportunity, I think, for this loss leader for the persistent memory to sell more Cascade Lake. And SAP, in an example of SAP, they really control the heck out of the certification for HANA production workloads. So they can demand that, you know, customers have to have Cascade Lake to get support. And that's a pretty unique position to be in.
Starting point is 00:24:06 For all the server vendors. Right, right. That's interesting. Even operating system vendors. Wow. I didn't realize that they did that. So how many layers are we talking about today? You know, we talked about 64 as being the place
Starting point is 00:24:20 where they started to make some profits, but it seems like that's not standing still. Oh, but you're switching gears. Optane is only two layers. Okay, all right. So I'm talking 3D NAND. Yeah, no, that's okay. Yeah, I'll talk 3D NAND with you.
Starting point is 00:24:36 You know, we're at a funny place right now because both Hynex and Samsung have said that they've got things with more than 100 layers. But what is really happening is that, um, pretty much everybody is starting to ship 96 layer product, except for Samsung has a 92 layer. I think it is, it's a very strange number. And, uh, you know, we're probably, you know, less than 10%, 96 layers right now, with 64 layers being the bulk of the market. Hynex has a 72-layer product. I don't think anybody else has gone down that path. But what's interesting is that every year, the companies talk about what they think is the farthest out that they can go with these layers. And last year, people were talking about 500
Starting point is 00:25:23 layers, and that made everybody sit up and notice hey wow you know 3d nand's going to be around longer than we thought and now this year samsung has mentioned 800 layers so yeah you know it's it's hard to figure exactly how far this technology is going to go but one of those cool things about semiconductors and it's probably true with your business too systems business is that everybody thinks that we're going to hit a brick wall somewhere. And then some brainiac comes out and says, what if we do this? And all of a sudden, you know, you get an extra whatever, an extra mile added to your runway. Yeah, yeah, yeah.
Starting point is 00:26:03 Well, yeah, the whole NAND business, quite quite frankly has been amazing you know i thought slc was great and mlc was going to be kind of iffy and then tlc came along and now qlc is there gosh i think i had 4d somewhere but i haven't figured out what that is yet that's ridiculous that's another one that's the guy it's another one of these marketing pitches and that's hynix and they're basically saying we're copying micron. But they wanted to make it sound like they invented it, so they gave it the moniker 4D. Okay. Every April Fool's Day, I do April Fool's blog posts on the memory guy. And maybe five years ago, I announced that some researchers had come up with a 4D memory that it grew in width and length and height, but also over time, then the number of bits in the memory increased. Yeah, and so that's what 4D means to me. But, you know, Hynex decided that they'd come up with a name and try using it again.
Starting point is 00:27:04 And I'm just hoping that people. I think you should sue him for trademark infringement or at least copyright infringement. No, no, no. I'm, I'm, I'm wanting people to look for the Hynex part and, and come up on my blog. There you go. As, as, as, as the number one item in the search list and stuff like that. That's good. That's good.
Starting point is 00:27:21 Yeah. Yeah. But back to where you started before you got me on this rant about 4d as you were talking about um you know mlc tlc qlc and um uh both toshiba and somebody i think it might have been no western digital talked about having um five bits per cell. They called PLC for penta-level cell. Yeah. And what that means is that with, with, what is it now? With QLC, you're putting 16 levels per bit cell. And with PLC, then you'd be putting 32 levels per bit cell. It makes the parts very error prone and very noise sensitive. But if they can do it then they can probably
Starting point is 00:28:05 cut their costs by about 10% and you know so that that helps them bring further costs out of these things yeah yeah so I mean is you know when they went from SLC to MLC there was an endurance drop considerably yes from MLC to TLC there was an endurance drop. Obviously, there's an endurance drop at QLC. I'm sure the PLC, if such a thing ever went out there, it would be an endurance drop. What they seem to have been doing, though,
Starting point is 00:28:36 is that because they've increased the capacity, let's say it's a 0.1 drive write per day. If it's rather than 30 terabytes, it's now 50 terabytes. That's an awful lot of data. I'm not even sure you could write it that day. Yeah. There was an SSD company whose claim to fame was that the SSD had so much capacity that it was impossible to wear it out. And basically saying, we've got a bottleneck. Yeah, yes. Yeah. But they wouldn't want to talk about the bottleneck. They want to talk about the advantage the bottleneck provides. Yes, yes. Yeah. But, you know, if they can get this cost out, then they might be able to put in a little bit extra over-provisioning and do that.
Starting point is 00:29:19 But also the business, the users are getting far more sophisticated. I remember not too long ago, you know, maybe five, six, seven years ago, that SSD manufacturers, the ones who are making enterprise SSDs, were getting into this, you know, great chess beating contest about how many drive rights per day they could withstand in their SSDs. And guys got up to like 27 drive rates per day. And then they started, the users started getting more sophisticated and saying, well, we might need that for this application over here and we'll spend the money on that. But let's buy consumer SSDs for this application over here because we've measured it and it doesn't require an awful lot of drive rates. Yeah, yeah, yeah, yeah. That's amazing. Yeah. That's amazing that the requirements for drive rights per day seem to have dropped considerably.
Starting point is 00:30:10 Yeah, yeah. And that's just because people are measuring their workloads, which they never did before. You know, it was something with a hard drive you never had to do. And so now that they're measuring it, then they're saying, oh, yeah, most of our applications don't require an awful lot. And so it could be that PLC goes into something like that. There's one other thing I should mention about the SLC, MLC, all those things. That before we went to 3D NAND, every time that you'd shrink the NAND chip to get the costs out, then the endurance would go down.
Starting point is 00:30:43 And then you'd start doing this SLC, MLC thing, then the endurance would go down, you know, for each one of those, how many bits per cell you're writing. With 3D NAND, you are no longer shrinking the size of the bit. You're just stacking more bits on top of each other. And so because of that, then if you learn how to make PLC, then all of a sudden putting PLC into every single flash chip is really easy. And you know that you'll be able to do it for 96 layers, for, you know, 128 layers, for 256 layers, et cetera, et cetera. So, you know, I think that we're probably going to go down that path. Well, it's, you know, I was, we were talking to a vendor who remained nameless, but, you know, they're focused on QLC and they're focused on effectively, they believe they can eliminate disk altogether because the economics of what they're doing with QLC and it's just, it's just amazing. You know, I told them, I thought they were crazy.
Starting point is 00:31:42 Everybody has always done that. No, everybody has always done that. No, everybody has always done that. And something that I find extraordinarily interesting is that Western Digital put out a chart. And Western Digital is the only company that is deeply invested in both hard drives and in SSDs. And so they don't have an ax to grind. They are not trying to position one versus the other. And they put out a curve that went out through, I think it was 2028, that showed that there would always be a 10 times price difference between NAND flash and hard drive bits. On a per capacity basis.
Starting point is 00:32:18 Yeah, I believe that to be the case. I mean, we've seen, you know, they've been trying to kill off tape literally since 1967, right? Tape still exists. It's been, you know, it's gotten to, you know, they've been trying to kill off tape literally since 1967, right? Tape still exists. It's been, you know, it's gotten to, you know, it's no longer a backup solution. It's an archive and it's a cold solution. And now it's even a colder solution. But there's a gap there. It's persisted between disk and tape.
Starting point is 00:32:37 And the same thing is going to happen between NAND and disk, in my mind, for the foreseeable future. I just can't see it. I like to think back. I had a conversation at the Intel Developer Forum probably around 2007, 2008 with some folks at Toshiba. And they were asking, when do you expect for 50% of PCs to use SSDs? And I said, probably not for 10 years or more. They were floored because they thought it was going to be like one or two years out. Well, here we are 2019, you know, and so that's what, nine years later, and it's still not 50%.
Starting point is 00:33:16 And it's still shipping millions and millions of disk drives every quarter. It's not growing, but it's not shrinking either. The thing that did disappear was the enterprise hard drive business, the 15,000 RPM things. And that was a very ripe target for SSDs. It was very costly. They over-provisioned. They had to have more than they needed. And with an SSD, all that goes away. Yep. Yeah, yeah. So the other thing
Starting point is 00:33:48 I found was kind of exciting at the show, there was a lot of PCIe switch vendors. Yeah, it's the bottleneck. What people are starting to do from a computer architecture standpoint, and I'm not that good at this stuff, but they're talking right now a lot about NVMe over Fabric. The idea is that you have an array of servers and an array of solid state storage, and they can all communicate very quickly. And then how do you share those resources so that you can allocate things to different tasks? And the way that they want to do that is over a fabric and have any server be able to communicate with any fast.
Starting point is 00:34:34 It's this whole composable infrastructure thing that they've been pumping for a couple of years now. It's getting more possible, more prevalent because technology for PCIe fabrics is coming online. There's more PCIe switching. There are more things like JBOS, which are just a bunch of flashes that are connected to a PCIe bus that now can be carved up and given to whatever workloads you want. Gosh, I was talking to one guy. I think it was Dolphin Technology. I have never heard of these guys before, but they contend that they can do a composable infrastructure, change where an SSD is actually connected to on the fly. I said, they're crazy. I thought that was the whole idea. Not so?
Starting point is 00:35:20 No, yeah, you can do this, but you have to reboot all the processors and the servers. So every time you change the configuration of the PCIe bus, you have to reboot the processors. They're saying they could do it on the fly. And I said, you're crazy. I mean, because you're changing connectivity between components on a PCIe bus. They called it renting rather than reconfiguring. It's like Airbnb for disks or SSDs rather. It was pretty bizarre. It was some sort of specialized namespace thing that one group was talking about. I can't even think of what it was. Zoned namespaces. I'm going to have to think who the vendor was. It's almost like PMR. It's almost
Starting point is 00:36:08 like zoned disks. So you have a zone of a disk that has certain capacity, has to be written sequentially. There's another zone next to it that has less capacity, it has to be written sequentially. So if you're willing to write sequentially to uh you know pmr disk it's pmr right isn't that the term yeah no there's smr which is shingled yeah yeah that's shingled magnetic recording sm smr smr yeah shingled this is like the zone disk now it's zone flash effectively it It's bizarre. And each zone can be owned by different... Is this NBM namespaces?
Starting point is 00:36:50 No, it's beyond that. Take a namespace and you can segregate it out. I think that part of it too is that you can have different namespaces inside the same SSD. It's an extension of streaming. Yeah, I knew that. but the zone namespaces is a different game altogether.
Starting point is 00:37:09 But it's bizarre. So yeah, the show has gotten a lot larger of late. It seems. It's gotten more successful, it seems. A lot more vendors are going there and the vendors seem to be with bigger exhibits and stuff like that. Yeah, there has been a little bit of a change though that Samsung and Micron
Starting point is 00:37:27 and Intel weren't there this year. They had some whisper suites and stuff like that. Something else that's kind of funny is that the health of the business is that there was a fire there two years ago that closed the exhibit hall. businesses, that there was a fire there two years ago that closed the exhibit hall. So the exhibit hall was only open for a half a day out of the three days of the show. That, you know, that worried a lot of people and it costs a lot of people money that the
Starting point is 00:37:58 insurance companies still seem to be bickering over. But, you know, it didn't keep people away from the show that they're these same exhibitors are still on the floor. Yeah, yeah. I think there's, you know, from my perspective, it's just too much componentry and technology and vendors floating around for me not to be at the show, quite frankly. It's just, it's just too interesting. All right. Well, this has been great. Do you have anything you'd like to say to our listening audience before we leave, Jim? You know, if anybody's interested in chip stuff, come to me. Okay. And we'll put your website. You've got multiple websites, right?
Starting point is 00:38:37 Yeah. There is the regular business website, which is where we sell reports and introduce ourselves to the world and that kind of stuff. There are three of us. And then I've got a couple of blogs that I run. One is called The Memory Guy, which is about memory chips. And the other is called The SSD Guy. Yeah, yeah, yeah. So next time we'll talk with, well, this has been great. Thank you very much, Jim, for being on our show today. Oh, thanks a lot for having me. Next time we will talk to another system storage technology person. Any questions you want us to ask, please let us know. And if you enjoy our podcast,
Starting point is 00:39:08 please tell your friends about it. And please review us on iTunes and Google Play as this will help get the word out. That's it for now. Bye, Keith. Bye, Jim. Bye, Ray. Bye, Keith.
Starting point is 00:39:19 Until next time.
