Grey Beards on Systems - GreyBeards year end storage trends wrap-up

Episode Date: December 27, 2013

Welcome to our fourth episode. In this year-end wrap-up, Howard and Ray talk about three trends that have emerged over the last year or so, which are impacting the storage industry in a big way and will continue to affect it in the years to come. First up is scale-out storage.

Transcript
Starting point is 00:00:00 Hey everybody, Ray Lucchesi here and Howard Marks here. Welcome to the next episode of Greybeards on Storage, a monthly podcast, a show where we get greybeard storage and system bloggers to talk with storage and system vendors to discuss upcoming products, technologies, and trends. This episode, our end-of-the-year episode, is going to be devoted to trends that occurred in 2013 and trends that are likely to continue in 2014. We talked internally about a number of trends we might cover, and we've settled on the three that we believe to be the most critical. It became really clear at Storage Field Day, which Ray and I both attended, that scale-out is becoming more the norm. Of the ten companies we saw, six or eight of them had one form of scale-out system or another. And clearly, at least in the mid-range, the model until recently was scale-up.
Starting point is 00:01:14 Yes, yes. You got something that looked like a CLARiiON. You can't use a pair of controllers to run a thousand SSDs the way you could run a thousand hard drives. I think the other aspect is that Moore's Law is morphing into something that's more core-based rather than speeding up the current cores. It's providing more parallelism, really. Right. Well, the silicon guys hit the wall about five years ago: as they increased the clock frequency, the current leakage became so high that the processors used huge amounts of juice. And so instead of making processors faster and faster, they started giving us more cores. And that meant there was more parallelism. Right.
Starting point is 00:02:23 But it also meant if you had old software, like EMC had the CLARiiON software still running in the VNX. Right. Some core operations ran as a single thread. And when the fastest Intel processor went from being one core at 3.5 gigahertz to 12 cores at 2 gigahertz, things slowed down at the thread level. Things actually slowed down. So getting back to scale-out storage, why is
Starting point is 00:02:45 scale-out storage emerging today in your mind, Howard? I see a couple of things. First of all, there's just the fact that you need to have a certain amount of CPU to deal with a certain level of IOPS. And IOPS have become readily available in the form of SSDs. So we can now pretty easily deliver more IOPS than the processors can handle. Because SSDs are capable of tens or hundreds of thousands of IOPS apiece, and if you put in more than a couple, a handful, you're pegging the CPU of the controller. Okay, go ahead. The other side is that we want more and more data services.
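The controller-pegging arithmetic behind this point can be sketched with a toy model; the core count, CPU cost per I/O, and per-device IOPS below are illustrative assumptions, not figures from the episode:

```python
# Toy model: how many devices does it take to peg a storage controller's CPU?
# Every number here is an illustrative assumption.

CORES_PER_CONTROLLER = 8      # assumed mid-range controller core count
CPU_US_PER_IO = 20.0          # assumed CPU microseconds spent per I/O
IOPS_PER_SSD = 100_000        # assumed random-read IOPS per SSD
IOPS_PER_HDD = 200            # assumed random IOPS per hard drive

def max_iops(cores, us_per_io):
    """IOPS the controller can process before its cores are pegged."""
    return int(cores * 1_000_000 / us_per_io)

controller_iops = max_iops(CORES_PER_CONTROLLER, CPU_US_PER_IO)
ssds_to_peg = controller_iops / IOPS_PER_SSD
hdds_to_peg = controller_iops / IOPS_PER_HDD

print(f"controller ceiling: {controller_iops:,} IOPS")
print(f"SSDs to peg it:     {ssds_to_peg:.0f}")
print(f"HDDs to peg it:     {hdds_to_peg:.0f}")
```

With these assumptions a controller tops out around 400,000 IOPS, which a handful of SSDs can saturate, while it would take a couple of thousand hard drives to do the same; that asymmetry is the push toward scaling out the controllers themselves.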
Starting point is 00:03:32 And so there's more CPU doing things like compression and deduplication and metadata-based snapshots. And especially if you look at the SSD market, or even the hybrid market, the performance is there but the capacity is very expensive on those SSDs. So if we dedupe or compress the data, it gives a substantial advantage to users who are used to buying in terms of dollars per gigabyte. Right, right, right. And that requires more CPU as well. So the computational loads for SSDs, and for the data services associated with SSDs, are becoming higher. Right.
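The dollars-per-gigabyte argument for dedupe and compression can be made concrete; the raw prices and the 5:1 reduction ratio in this sketch are hypothetical:

```python
# Effective cost per logical gigabyte once data reduction is applied.
# The prices and ratio below are hypothetical illustrations.

def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Dollars per logical GB after dedupe/compression shrink the stored data."""
    return raw_cost_per_gb / reduction_ratio

ssd_raw = 5.00   # assumed raw flash $/GB
hdd_raw = 0.50   # assumed raw disk $/GB

ssd_effective = effective_cost_per_gb(ssd_raw, 5.0)  # assumed 5:1 reduction
print(f"flash raw:        ${ssd_raw:.2f}/GB")
print(f"flash at 5:1:     ${ssd_effective:.2f}/GB")
print(f"disk for compare: ${hdd_raw:.2f}/GB")
```

The catch is that data reduction burns controller CPU on every write, which is exactly the extra computational load being described.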
Starting point is 00:04:21 And therefore, if you're going to deliver 20 or 50 or 100 terabytes of capacity, which people need today because they're storing 48 megapixel video or 48 megapixel photos. Right. They used to store one megapixel photos. Right. I'm not quite there yet, but I'm hoping to get there before Christmas. Yeah, I understand. We keep talking about we're drowning in data and part of it is
Starting point is 00:04:52 we collect more data, more types of data. And part of it is just the same data we used to collect is bigger. Yeah, yeah. The 4K ultra-HD stuff and all that coming out.
Starting point is 00:05:03 It's just crazy, crazy. So, I mean, you know, the guys we talked to, GridStore and Coho Data and, gosh, there was a couple other ones out there. There was Overland and Oxygen. Yeah, yeah. Oxygen was kind of an enterprise sync company, but more or less kind of the same sort of thing. They're trying to scale their storage out to support larger and larger environments. The challenge there is, because they have SSDs, they're going to need more compute power. And because they're SSDs, they're more expensive on a gigabyte basis.
Starting point is 00:05:35 So you want to provide data services that utilize that space more effectively, and you're required to have more compute power. So is this the end of legacy storage as we know and love it today, dual controllers, or, you know, the monolithic eight-board, eight-controller environments? Well, when you start talking about the eight-controller environments, that's really a different, you know, the Symmetrix, 3PAR, HDS, DS8000. Right, right. That's a different market.
Starting point is 00:06:11 And a lot of what you're looking for there is ridiculous levels of reliability. And availability. Yes, yes, I agree. We're not talking about 5-9s. We're talking about 6 or 7-9s. I don't think it's ridiculous. It is certainly a lot more than what you can get out of some of the other products. To a guy like me who spent his career in the mid-market where, you know, 20 minutes of downtime was a crisis during the 20 minutes but didn't affect the bottom line.
Starting point is 00:06:46 You know, those systems are just a different beast. Yeah, a different world. But when you start looking at the meat of the storage business, I think the dual controller model – EMC is going to sell a lot of VNXs this year. They're going to sell a lot of VNXs for three to five years. But you can see the end of the road that, you know, scale out. Today, the scale out systems that have been out for a while, the Panasas and Isolons of the world, are corner cases. Those are solutions for edge cases.
Starting point is 00:07:29 Right, right. And we're now getting to where it's, oh, look, there's a solution for the general case. In the mid-market, you look at EqualLogic, which is a scale-out system. LeftHand, yeah, that kind of thing. EqualLogic more than LeftHand. Okay.
Starting point is 00:07:51 Because left hand is really, you know, you have nodes and then A-Lun lives on two nodes. Right. And in the Equalogic world, it can be eight. Yeah, yeah. And, you know, I blanch it calling two scale out. Even though you might have multiples of those in a configuration. Right, you might have 10 pairs. Yeah, but that's a different discussion. But it's still 10 pairs.
Starting point is 00:08:09 It's a pair is the issue. All right, we're going to need to move on to our second or third topic here. But our view of the scale-out market, the computer requirements for storage are getting higher. The software to support that stuff is getting, I'll say, easier to come by and put together. It's certainly easier to develop because your development platform is a server. Yeah, yeah. Five or six years ago when you were developing FPGAs and ASICs, the cycle was so much slower and the cost to do that development was so much higher.
Starting point is 00:08:50 You know, now you can hire guys anywhere in the world to do your coding. They buy super micro servers to develop the code on. And maybe even execute the code on. Well, yeah, that may be your delivery mechanism as well. Yeah, yeah, yeah. But the equipment a software, an engineer to develop a storage system needs is under $10,000 now.
Starting point is 00:09:17 Anymore. I would say even under $5,000. Maybe the storage. Oh, yeah, no, but he's going to have two systems, one to write code on, one to test on, and there's going to be some SSDs. But 10 years ago, what was that, 100,000? Yeah, easily, easily, easily. All right, so I didn't want to cut you short. Do you want to interject any last thoughts?
Starting point is 00:09:41 No, no, that was just – that thought came to me as we were speaking was, you know, the other reason why scale out is the development environment becomes simple. Yeah, I agree. All right, so moving on to our second topic trend for 2013 and 2014, and that is software-defined storage, software-defined anything. Did you want to tee that up, Howard? Well, I think we need to start with definitions. Okay.
Starting point is 00:10:11 What's a definition? Because the truth is, with very few exceptions, every storage device today is defined by its software. Oh, absolutely. Functional software. Even a three-par where there is a custom ASIC in there that's important to making it work fast, what a 3PAR is, is defined by the software
Starting point is 00:10:35 as much as it is by that hardware. And for most systems, there isn't even that custom ASIC. And when I did development back in the late 90s for enterprise storage, the majority of the development dollars went to software development, not hardware development. And we generated multiple FPGAs and ASICs, et cetera. It's still from, you know, the cost of the engineering of the hardware was pretty high because of the chip fabrication and that sort of stuff.
Starting point is 00:11:05 Right, because making the prototype is expensive. Making that first one costs a lot. Yeah, yeah. But from an engineering headcount perspective, we actually had more engineers in the software development side than we had in the hardware development side. Right, because it's the data services. Yeah.
Starting point is 00:11:22 So I think, you know, if we're going to talk about software-defined storage, I'd like to say that, you know, we're going to talk about it in terms of software that runs on the compute platform to provide storage. So, you know, everything from the VSAs that we've been living with for the past five or six years that implement the left-hand kind of controller as a virtual machine to you know the generation that's now coming out like vsan and maxta that implement more a scale out that is not pairs right kind of architecture where you now don't need a you don't you can't point in the data center at what's the storage and what's the compute, that it's the same platform. Anymore, anymore. It's an integrated platform.
Starting point is 00:12:12 Kind of like a Hadoop taken to more enterprise solution space kind of thing. Or Nutanix or SimpliVity without the tin. Yeah, or scale computing kind of thing. Yes, yeah, I agree. I agree. That, you know, it's the same concept and, you know, the difference between hyper-converged and software-defined storage for the purposes of this conversation is the go-to-market strategy. It's whether you provide a software-only solution or 10. Right. Right. Right. And I clearly see why companies like Nutanix and SimpliVity deliver 10 because people are willing to pay more for it that way.
Starting point is 00:12:53 Yeah, scale computing too. Yeah, absolutely. Well, and it gives them a little bit of limited test environment that they can control. And it simplifies support a lot. Yes, yeah. And test, actually, a lot. So that's good news. Yeah, I agree. So the software-defined storage, in our parlance, is something that software implemented on compute servers.
Starting point is 00:13:14 And companies that provide these sorts of solutions are effectively delivering a software-only solution. Yes. And so, you know, the most visible of these is VMware's vSAN, which is still in beta, but there's a lot of talk on the Twitters about it. Yeah, yeah, yeah. And there's a lot of talk
Starting point is 00:13:37 on the Twitters about it from people who, it appears to me, have spent a lot more time in the virtualization world than in the storage world. Yeah. Who, you know, it's like, well, you know, I was a network guy back in the 80s and we had asynchronous transfer mode. Right, right, right.
Starting point is 00:13:57 It was a LAN protocol and it was a WAN protocol. And IBM made 25 megabit per second ATM cards that you could put in your PC and it would be the solution to all of your problems. And, you know, curiously at the same time, Visa had a commercial where this couple is lost in the souk in Marrakesh or something. And they find a street urchin who goes, oh, I solve your problem. ATM, ATM, fix everything. And of course he leads them to an automatic teller machine. And they can get cash and that will solve all their problems.
Starting point is 00:14:36 I am dubious of any solution that claims to solve all of my problems. You know, HP and EMC have five or seven lines of primary storage systems. At least. And you could argue that that's one or two too many, but you can't argue that there should be one product that goes from three terabytes serving a small remote office to petabytes of storage running the billing system for Verizon Wireless. Yeah, or even the SEC or something like that.
Starting point is 00:15:21 Right. There's clearly pieces there. Domains. And so I'm very excited about this whole software-defined thing for a lot of use cases. For SMBs, for remote and branch offices. I would have to include Hyper-V and Spaces and all that stuff that surrounds the Microsoft Windows storage services, you know, kind of thing.
Starting point is 00:15:48 Yeah. I mean, architecturally different, but a similar solution. Right, right. You know, Storage Spaces is kind of more like host hardware virtualization. Right. You know, like Storage foundation on the Linux side. Right, right, right. But when you combine that with Hyper-V and you combine that with Windows Storage Server,
Starting point is 00:16:12 it becomes a software-defined storage solution in and of itself. Yeah. Yeah. And it runs on Windows, of all things. Right. Who'd have thunk it? Well, we knew about it for a while, but that's another story. So why now?
Starting point is 00:16:30 Why is software-defined stuff coming to the fore today? I see a couple of reasons. The first is we are CPU-rich, and we've been CPU-rich for five or six years. So on the one hand, the scale-out computing is coming out because it's becoming more CPU bound, but the software-defined storage, which is coming from the low end to some extent, to a large extent. It's coming from the low end. And it's because we are CPU rich and have been CPU rich for many years. Yes, I agree. But we were CPU poor in the storage system.
Starting point is 00:17:06 Yeah. We're CPU rich in the compute system. Right, right, right. That even with virtualization. Those cheap bastards. Excuse the buzz. Those cheap buffs. Hey.
Starting point is 00:17:20 Profits, margins, it all rolls together someplace. Somebody's got to make money. When was the last time you talked to a CIO who said, I got a 20% budget bump this year? I haven't ever. Ever? Yes. And certainly not since 2008. We're five years into IT, dealing with austerity budgets.
Starting point is 00:17:47 Yes. And it doesn't look like it's going away. Well, no, because the problem is that we've been good at it. Yes, we've gotten better at it. The problem, you know, the first couple of years, everybody was really suffering because they were doing more with less. Right. And the truth is we figured out how to do more with less. We figured out that the 60 crapplications in the data center could all be virtualized to two VMware hosts.
Starting point is 00:18:16 And I didn't need four racks of servers to support them anymore. Yeah. With the CPU richness that came out. So back to the software-defined. Okay. So back to the software-defined. Okay. So back to the software-defined. Why now? Because CPUs, we've got more than we need.
Starting point is 00:18:32 We have plenty of CPU. With 10 gigabit Ethernet, we've got plenty of network bandwidth. Yeah, yeah, yeah. And with a little bit of SSD, we've got plenty of performance. IO performance, IOPS. So now I can take a 2U server. I can put two SSDs and six 2TB drives in it. And I can make that, and I can run my storage process on four of the 24 cores that that bot has.
Starting point is 00:19:03 Yeah, yeah, yeah, yeah. And the other 20 cores I can actually devote to compute activity or even networking stuff. Right. And it will synchronously replicate to another couple of devices, but that just means I need more of those 2-terabyte hard drives. And when you buy 2-terabyte hard drives, capacity is cheap. Right. And the slots in the 2 drives, capacity is cheap. Right. And the
Starting point is 00:19:25 slots in the 2U server were there anyway. Right. You know, it means that I change from buying blade servers to 2U servers, but I save the space in the racks that used to be the storage devices. You know, the problem is there's a lot of moving parts that we really have to be replicating the data and when a workload vMotions from one of these servers to another, do we build locality into the software-defined storage the way that SimpliVity and Nutanix have or do we not the way that vSAN did and now generate more network traffic accessing that data
Starting point is 00:20:06 from the server where it's stored because the compute loads on another server right right and why would you move the compute if you didn't want to move the storage as well you know yeah yeah right and you know what happens when I know, now one of the things that we love about virtualization is I can evacuate the server in order to do maintenance. Right. But now I'm not just evacuating memory. I got all of these active disk drives. Yeah. Do I either leave the data there? Which means I can't do any maintenance on the server.
Starting point is 00:20:44 Well, I'm doing three-way mirroring across the research. Okay, all right. Now you've got the data in other locations. All right, I got you, I got you. But if I shut this server down, first of all, I go down to two-way mirroring, so I'm going to take a small performance hit. I'm going to take a small reliability hit because with two-way mirroring, two bad things could happen at the same time.
Starting point is 00:21:04 Right, right. hit because with two-way mirroring, two bad things could happen at the same time. And then when I bring this server back up, I'm going to generate a huge amount of network traffic re-syncing. Right. So, there are places where it makes
Starting point is 00:21:20 a lot of sense, but I can't see that I would rather have 16 of these two-use servers that make up my storage across the cluster rather than having one Tintree. Yeah, yeah, yeah. Now, if I need two or three, and the Tintree is a $60,000 box, and this is a $20,000 solution, now I know that's what I want. Yeah, no question. I got you. I got you. But when I'm spending the same amount of money, you know, the guys who are fans of this technology, none of whom have it in production yet because it's still in beta, are all going, well, you know, there's no complexity there. With the possible exception of Windows Storage Server Hyper-V and Spaces,
Starting point is 00:22:09 which is out of beta. Which is out of beta, but even that relies on hardware components that aren't readily available. You have to have the SAS JBOD with all of the enclosure functionality. Right, right, right. And Dell, HP, and IBM don't make one. Yeah. Okay, okay.
Starting point is 00:22:34 All right, all right, all right. So we did scale out. We did software defined. Our last and final trend of the year that we think is worthy of discussion is where do hard drives end up in this world of scale-out, software-defined, and SSDs? Yeah. And in the long run, I keep hearing from people that there's no future in the hard drive. We're going to go all solid state all the time. Right.
Starting point is 00:23:05 And I've dealt with way too much cold data to buy that. Yeah, yeah, yeah. When you start saying that, and for a lot of applications where streaming performance is what matters. Oh, yeah, you know, it's kind of interesting. I did a paper the other day on SPC2 performance, and there are not a lot of throughput-intensive workloads that SSDs do well on. No.
Starting point is 00:23:32 Streaming sequentiality is still a significant point of departure for drives, disk drives. Yeah, I agree. You know, a disk drive can deliver 125 megabytes a second. Yeah, it's hard to do that in SSDs. One SSD can deliver twice that. Right. But at 8 to 10 times the cost per gigabyte. Right, right, right.
Starting point is 00:23:56 And those streaming applications, by definition, when you're talking about streaming, you're talking about a lot of data. And they don't like to be deduped, and they don't like to be compressed, and they don't like to be randomized. They like to be serial sequential, and the data comes off the disk and goes right out to drive, right out to network, right out to the application. Yeah, I agree. It's the kind of thing
Starting point is 00:24:17 where, you know, a 12 plus 2 RAID 6 set, or a Reed-Solomon erasure code, that works just fine yeah yeah um the other thing is this drives aren't going anywhere but we've become used to um crider's law where disk drive capacity doubles every 18 months and then you know the disk drives are just going out of out of crazy. There are 5 terabyte drives out there, 4 terabytes in an enterprise system.
Starting point is 00:24:49 It's insane. You and I are old enough to remember 14-inch disk drives with 5 megabytes of capacity. Oh, yeah, yeah. And even a 9-gig, 3.5-inch drive was great news. Oh, yeah. I just bought on eBay one of these Memorex 1982 calendars. It's a rejected 14-inch platter, and they silkscreened the calendar on it. Yeah, that's great. That's insane.
Starting point is 00:25:21 1982, that 14-inch disc, that was probably 5 megabytes per side. Right, right, right, right, right. Welcome to today. 4 terabytes or 5 terabytes in a 4 or 5 platter, 3 1⁄2-inch drive. Well, if you fill it with helium, you can get 6. Yeah, or, you know, the shingled magnetic recording and stuff like that. It's just crazy. Now, the problem is shingled magnetic recording means it's not really direct access anymore.
Starting point is 00:25:49 It looks different. Yeah, I know. You can't just write to any logical block. That means either the applications have to be smarter. Or the controller has to be really smart. The controller has to be really smart or The controller has to be really smart, or the drive has to be really smart. Yeah, yeah. So, you know, my favorite is Seagate's new kinetic drives.
Starting point is 00:26:13 Yeah, the kinetic stuff, object storage on a drive, yeah. Where they said, let's make the drive really smart. Yeah, yeah. And so you write, you know, it's a key value store. The key is up to 4K. The object is up to 1 meg. So if you're storing the latest Thor movie on it, you break it up into 1 meg shards. You write them to the drives.
Starting point is 00:26:38 And because the drive is smart enough to buffer a whole shingle stripe, which is like 10 or 15 or 20, or they don't tell us how many tracks there are in a shingle zone. But it's smart enough to know how to write the data to the shingle zones. Yeah, you don't think they're buffering a whole shingle zone. They're buffering, you know, they're effectively creating a log-structured device out of a random access pattern, or at least an object access pattern. And they're writing this data sequentially to the shingled zone. Right, but there's at least one megabyte of memory on that controller. Oh, at least. Oh, come on. Any controller, you know, any drive today,
Starting point is 00:27:17 8, 16, 64 megabytes is easy for a buffer. I mean, yeah. The other interesting thing I learned talking to those engineers is that those drives do housekeeping. They've got to do garbage collection. How else are they going to survive? They actually physically store
Starting point is 00:27:34 the data at the end of a garbage collection cycle. They actually physically store the data on the drive sequential by key. Each object is in line to deliver the data on the drive sequential by key. Right. Oh, interesting. So each object is in line to deliver the best streaming performance. Yeah, yeah, yeah. Again, that throughput stuff.
Starting point is 00:27:54 So disk drives aren't going away. They've got to – I see it happening. It happened to tape over the last couple of decades. It's becoming a specific segment in the storage tier, which is big, deep, cold data, high throughput intensive data, where that still needs to be, disks will still continue to play a significant part. And what's happened to some extent in my mind is the 15K RPM drive seems to be going away.
Starting point is 00:28:22 Yeah, it is. And I was kind of surprised at that. I thought it was the 10K drive that would going away. Yeah, it is. And I was kind of surprised at that. I thought it was the 10K drive that would go away. And the 10K drive is doing better. More and more subsystems these days are coming with 10K RPM SAS drives. And the performance is actually pretty damn decent compared to even 15K RPM drives just a couple years ago. Yeah, the 15K RPM drive seems to be going away with SSDs. The one absolutely clear thing we can say is there will be no 20,000 RPM drive.
Starting point is 00:28:52 I was hoping for a 22K RPM one-inch drive. I thought they could do it if they wanted to, but apparently there's no market. It would cost more than an SSD and perform less well. Yeah, really. Western Digital just sent me one of their new black drives to play with. Oh, really? Yeah? Yeah, it's a 128-gig SSD and a 1-terabyte hard drive that all fits in my laptop, which only has room for one 2.5-inch drive. Yeah, yeah, yeah. Hybrid drive. A hybrid drive.
Starting point is 00:29:22 It's not a hybrid drive. A hybrid drive. It's not a hybrid drive. It's not like the Momentus XT where the 8 gig of flash was a cache. There's a SATA expander in there and it looks to the computer like two drives. Oh, that's amazing. You can either run host-based caching software. If you want. Server-side. Yeah, yeah, yeah. Or, well, not really server-side, but laptop-side. Right.
Starting point is 00:29:50 Or you can just boot and run your frequent applications from the SSD and use the big drive for data. Yeah. So it's a dumb hybrid drive. It's a two-drive drive. It's a two-drive sandwich. Yeah. All right. So getting back to this. hybrid drive it's a two drive drive it's a it's a two drive sandwich yeah yeah all right so getting back to this so the major trend here is that drives will continue to grow in capacity and we'll have uh at least some segment of the storage hierarchy that'll devote to them and what's happened is the 15k rpm drive is going away but surprising to of us, the 10k RPM seems to be doing well. It's got a
Starting point is 00:30:27 little bit of life left. I don't know. Certainly, as flash prices fall, and flash prices fall a little bit faster, a little bit slower. Wait, a little bit faster.
Starting point is 00:30:44 Than disks are. Flash prices fall about 25% a year. disk prices fall about 20 a year yeah yeah so as that gap narrows yeah you may see 10k drives drop even the 10k rpm drives are getting squeezed out the question is if five years from now we use disk drives for primary storage at all. That for the next five years, we're still going to need hybrid systems because people don't want to be bothered with segregating their data. I want software to be smart enough to do that. At some point, we still want to segregate the archive data,
Starting point is 00:31:27 the really cold data, off because I don't want to back it up every night. It's not even about the cost of the primary storage. It's about how frequently I have to touch it. And at some point, Flash will become cheap enough that the majority of organizations might make the decision that there's my primary storage, it's going to be all Flash. I think the challenge there is that, number one, and we touched upon it earlier before the podcast, that as Flash becomes denser, it becomes less, less permanent among other things, you know? So you have to have some solution there that can keep data for a long period of time.
Starting point is 00:32:12 Five years is, you know, the enterprise class average. And at, you know, at some densities, flash just can't do that anymore. So you have to have some solution that does now maybe PCM,
Starting point is 00:32:24 MRAM and FRAM and all these other solutions, maybe one of them's the final arbiter of that. Yeah, but there are ways out. Yeah, they are. Jim Handy's a semiconductor analyst that I talk to on a regular basis. And he's saying it's 2018 or so or so before a solution a replacement for flash nand comes out yeah well before it becomes cost effective to nand yeah yeah that you know if you look at you know there's there's one or two more process shrinks left in
Starting point is 00:33:02 nand before it becomes untenable. Yeah, yeah, yeah. Well, and then they go 3D. Yeah, yeah. So there's four to five. And then they go 3D. They're already going 3D. Well, some of the vendors got the 19 to 16 nanometer process shrink to work. Right. And so they went with the shrink and are going to go 3D next.
Starting point is 00:33:26 Yeah. Some of the other vendors didn't get that shrink to work and they went 3D first. Yeah. But ultimately, everybody's going to figure out how to do both. Right. But it'll be like 10 nanometer and 20 layers deep. At which point, there's just no place else to go. Okay.
Starting point is 00:33:46 So here we are. Drives are going to continue to exist in the storage hierarchy as we know and love it today for a long, long time. And at some point, you're thinking that primary storage will cut over to a flash or its replacement. I don't think so. In my mind, there's still this throughput-intensive workload that even on a smaller scale, enterprises require for large database queries
Starting point is 00:34:15 or backup processing or their ongoing transaction activity. And I think disk drives can continue to play and perform better than flash in that environment. And the cost is not as much of an issue in that environment as the throughput and the time to first byte, but not necessarily the time to last byte, is more of a crucial issue there. And I think disks have a long time play in that space. So I think there is a 10K RPM niche there for a long, long time in my mind.
Starting point is 00:34:56 I may be wrong. You heard it here first. So whether all flash becomes the norm for primary storage, I think is an open question. I agree. I think before that happens, that the 10K RPM drives will die out. I don't think 10K RPM drives will die out, Howard. I think you're wrong. Okay.
Starting point is 00:35:19 Well, we'll see. Five years hence. We will see. The fifth anniversary of our first podcast. That is actually a five-year-out question. It is. It is. Whether the controller logic gets smart enough that it's flash and trash in that hybrid system and eliminates the performance mechanical device completely?
Starting point is 00:35:45 I don't think it will happen. Not in five years. Not in seven years. That's what he says. That's his prediction. All right, gents. We've taken a long time to go over the three trends, longer than we had anticipated. But next month, we'll probably talk to – we'll go back to our normal mode of talking to a storage industry expert of one fashion or another.
Starting point is 00:36:09 Thanks for now. It's been great. Thank you, Howard, for being on the call. Always a pleasure, Ray. Always a pleasure. We'll see you folks next time. All right. Thanks again.
Starting point is 00:36:18 And goodbye for now.
