Grey Beards on Systems - 40: Greybeards storage industry yearend review podcast

Episode Date: January 3, 2017

In this episode, the Greybeards discuss the year in storage and naturally we kick off with the consolidation trend in the industry and the big one last year, the Dell-EMC acquisition. How the high margin EMC storage business is going to work in a low margin company like Dell is the subject of much speculation. That …

Transcript
Starting point is 00:00:00 Hey everybody, Ray Lucchesi here with Howard Marks here. Welcome to the next episode of Greybeards on Storage, a monthly podcast, a show where we get Greybeards storage and system bloggers to talk with storage and system vendors to discuss upcoming products, technologies, and trends affecting the data center today. This is our 40th episode of Greybeards on Storage, which was recorded on December 27, 2016. As this is our year-end podcast, we have no guests today, and we are here to discuss upcoming and current storage trends in the industry today. So what's happening in the industry today, Howard? It would be hard to talk about what's happening in the industry today without talking about
Starting point is 00:00:49 consolidation and, you know, most specifically the Dell acquisition of EMC. Yeah, yeah. Brocade also got acquired, I think, in there someplace. Brocade got acquired by Broadcom. Yeah. Which is interesting because Broadcom is an arms merchant. You know, they're really chip vendors.
Starting point is 00:01:10 Right. You think they're going to strip the technology and move out or something like that? No, I don't think so. Well, no, it's interesting. You know, this is funny.
Starting point is 00:01:20 This is a topic we actually weren't planning on talking about. That's correct. But if you look into what's going on in the chip business, there's been a lot of acquisitions. Avago bought LSI and Emulex, and QLogic bought up Broadcom's Ethernet client chip business, and PMC bought up Adaptec. So we've got a couple of companies, PMC, Avago, Broadcom, who are consolidating the market for all the chips that go on a server motherboard that don't come from Intel, or that don't exclusively come from Intel, because Intel does Ethernet chips but doesn't own the market like they do for
Starting point is 00:02:10 processors. And the Ethernet switch vendors are moving towards merchant silicon, mostly from Broadcom and Cavium. So, you know, the 25 gig chips are out, and so we're going to start seeing a lot of 25 gig Ethernet switches, because Broadcom and Cavium have the chips to do it. And so now the degree of hardware engineering and the time involved in doing a chip layout have been taken out of that market for making an Ethernet switch. Yeah, yeah. You think this is part of the OpenFlow, software-
Starting point is 00:02:50 defined networking flowing out into the world? Well, I mean, part of it is that the software is also available, and so you can buy hardware, you can buy software, and you can put them together. The network guys talk a lot about white box switches, but the market research numbers I've seen show that they exist, but there's only a couple of very large customers for them. And it's not like SMC, Edgecore, and Quanta are selling hundreds of thousands of switches. So the question is whether that's a transition that hasn't happened yet or whether corporate America just isn't ready to go white box for switches. I think there's a
Starting point is 00:03:32 little bit of both. Yeah. Yeah. But you know, in our business, you know, we had an 800 pound gorilla for as long as I've been in the business, and that was EMC. Yeah. I mean, you've been in the storage business longer than I have, because I started out as a generalist and then a network guy, but these guys have been dominant since the mid-90s, if not longer. Well, I mean, you know, they basically invented the third-party storage business. You know, StorageTek was there, I'll give you that. Yeah, yeah. Okay. Thank you. They really took it off.
Starting point is 00:04:08 They took it off. They took the Fiber Channel side of things and just went bonkers. Yeah. Well, I mean, I remember working on my first SCSI Symm with 24 fat SCSI cables coming out of it. But it was the transition. It was the introduction of Fiber Channel that opened up the market for that. I agree, I agree, for that third-party storage. Now they're a part of Dell. Yeah. And that's really interesting, because, well, you know, from the obvious front, Dell knows how to live on thinner margins than EMC does. Yeah. EMC margins are pretty thick, as is most of the storage industry up to this point, I'll say that much.
Starting point is 00:04:51 How's that? Well, I mean, they always have been. This is what opened the door for the hyper-converged guys. When a storage array visibly became a pair of servers and shelves of media, and I could sit down at the Dell site and go, okay, so those two servers cost this much, and the SAS HBAs cost this much, and the SAS shelf costs this much, so the hardware is this much, and the storage system is 20 times that much. It became obvious that if I could get the software separately, it could be less expensive. Yeah. Oh yeah, definitely.
Starting point is 00:05:33 And that's what happened, obviously. Yeah. And so that's software defined storage in and of itself, you know, right. Well,
Starting point is 00:05:41 the x86 processors became fast enough. You didn't have to spin FPGAs and ASICs. At the high end, you still do. But if you want to support a thousand VMs, you can do that on a pair of x86 servers and media. You don't need ASICs. You need good software, but you don't need ASICs. Yeah, I agree. And the open source guys kept developing pieces of the software. So then, you know, ultimately it became that you could take this open source project and that open source project and this other open source project and stitch them together and write a management layer and be able to deliver, you know, FreeNAS, which is a perfectly good NAS operating system with ZFS. So it's got the data integrity and the compression and the deduplication.
Starting point is 00:06:38 And that puts a lot of pressure on, you know, how much can I charge for a VNXe, which is what I would buy instead. Right. And if I can buy a $5,000 server and $10,000 worth of SSDs and disk drives, and pay the FreeNAS guys, that's iXsystems, you know, $1,000 or $2,000 a year for support, you know, so that's, call it $20,000, compared to $35,000 for a VNXe, that starts to become a difficult set of economics to say, well, yes, I'm going to spend half again as much because I want to feel secure. You know, 20% or 30% more because I want to feel secure is easy to justify. Two and three times as much is hard.
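A rough sketch of the build-versus-buy arithmetic being described here, using the round numbers from the conversation; the figures are illustrative, not quotes:

```python
# Back-of-the-envelope comparison of a DIY ZFS/FreeNAS build versus a small
# dual-controller array, with the round numbers from the conversation.

diy = {
    "server": 5_000,        # commodity x86 server
    "media": 10_000,        # SSDs and disk drives
    "support_3yr": 5_000,   # call it $1,000-$2,000 a year to the FreeNAS/iXsystems folks
}
diy_total = sum(diy.values())   # roughly the $20,000 mentioned above
array_price = 35_000            # the VNXe-class array in the example

premium = array_price / diy_total - 1
print(f"DIY build:  ${diy_total:,}")
print(f"Array:      ${array_price:,}  ({premium:.0%} premium)")
```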
Starting point is 00:07:28 Yeah, which is one of the reasons why they've come down in price on some of the stuff, from EMC's perspective, which is what's driven them to Dell, apparently. Right, because Dell can leverage their manufacturing expertise, and can leverage the fact that they have a sales force that apparently works at a slightly lower commission rate than the EMC sales force. Sourcing leverage and stuff like that. And, well, you know, I never understood the, let's get IBM selling off their desktop division while keeping the x86 server line.
Starting point is 00:08:02 Because now all of a sudden you just moved your volume relationship with Intel down two notches. You know, the accounting might say that the desktop division loses 10 million dollars a year, but it saves 20 million dollars a year in costs on the server side. So you hit margins a little bit, but in the end, I don't think, you know, IBM thought they could survive in that PC market. The margins are so low. Yeah. Well, I mean, it's part of IBM's transition to be a services and products company from a products and services company.
Starting point is 00:08:32 Right. Right. And, you know, and ultimately they sold the x86 server division as well. Yeah. Which was the right move. Right. And I, you know, I'm kind of, you know, my quandary is why didn't you sell them both at the same time? Yeah. Yeah. Maybe they had to work their way through the problem there.
Starting point is 00:08:51 Yep. And so, you know, in consolidations, we saw Dell buy EMC and take them private, and private is a big deal. Everybody looks at the, well, look at how much money we have to borrow to go private. So we've got that consolidation. We've also had some of just the usual consolidation, PernixData of lamented memory. You couldn't build a company on SSD caching, which actually was a big surprise to me. When I first started seeing SSDs come out, it seemed to me that that caching market would be really attractive. Yeah, it seemed like it, you know. I think the problem to some extent is you almost had to tie it to the storage to really take advantage of it. And there were limited attempts by the vendors to
Starting point is 00:09:43 tie a caching layer to the storage, but they never really pushed it, probably because the margins weren't there, I guess. I don't know. So basically, there's three kinds of caching products that we've seen. There's the local read cache, which is the one that I thought was going to take off. I thought that was going to be a band-aid. I thought that people who had two and three years left on the lease for their disk-based systems would flock to: but wait, for a grand a server, I can buy an SSD and this caching software, and I can get the performance, and I can extend the life of my existing array
Starting point is 00:10:22 another two years. But that didn't happen. And then there's the really tightly integrated with the storage system maximizing how everything works class. Yeah. Well, there's two of those
Starting point is 00:10:37 that happened. There's Dell's Fluid Cache, which was too tightly tied to a not very successful, in terms of sales quantity, storage system. It only worked with Compellent. And, you know, Compellent's really easy to use and it's a good system, but Dell never sold millions of them. And the other one's Datrium, which is still new, and we don't know if it's going to take off. Datrium's a different worldview, I think.
Starting point is 00:11:20 Well, Datrium is a storage array designed around the fact that it's got an integrated local read cache. Yeah, yeah. I like to think of it as deconstructing storage, moving the functionality into the server and leaving the drives out on the tail end of Fiber Channel or something like that. Right. So I'm currently working on a piece for a vendor
Starting point is 00:11:39 who shall remain nameless for the moment, about data locality in hyper-converged systems. Interesting. Yeah. And, you know, at first glance at it, you go,
Starting point is 00:11:50 well, of course you want the data local. And then you start thinking about complications that keeping the data local creates. Yeah. And the more I dig into that set of arguments, the more it looks like a solution like Datrium addresses both sides. Yeah, yeah, yeah, yeah.
Starting point is 00:12:10 That by having not one of the two or three copies of the data stored locally, but a cache of the data stored locally. Right, right, right. Works just as well, if not better. It solves a lot. You don't have the, well, what happens when I vMotion? It's like, well, the cache rewarms. You know, things like that become very simple. And, you know, when you start thinking about, okay, so what happens if I have 50 Windows VMs that I created from a template, and so they all exist in the back-end storage as a metadata snapshot. One instance, yeah.
Starting point is 00:12:49 And they're now running across six hosts. Which host has the local copy? They all do. Well, do they? Well, it's a question of who's modifying it. The one that's modifying it ought to have the only local copy, but if they're all just reading, they're fine. Yeah, but they're not. Yeah, they are.
Starting point is 00:13:12 They are modifying their user directories and stuff. They're each pointing to different snapshot instances, and now it starts becoming, yeah, I really want a local cache. Well, the Dell guys seem to have, I believe they sold off Datamation, right? Datamation? No, the content management thing. Yeah, Content Management. Datamation was a magazine. Sorry. You're right. I know exactly what you're thinking of. But yeah, I mean, they sold off, right, they sold off the content management, Documentum.
Starting point is 00:13:46 Documentum, that's it. Yes, thank you. The old mind's not working as well as it used to. Yeah, it happens to the best of us. We are graybeards after all. Yeah, it's true. It's true.
Starting point is 00:13:56 We can't even claim early onset Alzheimer's. It's just the regular kind. Yeah, thanks. I needed that. Documentum was always a strange fit with EMC. Well, what about RSA?
Starting point is 00:14:08 I mean, that would be the next big chunk to spin off. I think, you know, obviously security is getting a lot more heat these days. Yeah, except that, A, if you're Dell. And so I have long thought that Michael Dell is trying to build Thomas Watson's IBM. Good luck with that. That he wants to be that old-time solution to all of your IT problems. One-stop shop. One-stop shop.
Starting point is 00:14:39 The whole floor would be Dell blue, as it were? Well, you know, whatever color they decide to use, I doubt it'll be blue, but yeah. And so they need to have a security play. God, I don't know. I understand where you're coming from, but Jesus. Dell security play, right? They sold off SonicWall and they sold off, you know, the other security companies that Dell had bought. So we can probably look at that as RSA winning the first piece of the product rationalization that comes from that acquisition.
Starting point is 00:15:18 Now, I mean, clearly there's more product rationalization to come. I can't imagine any reason why Dell would continue to invest R&D resources into, say, the DR series of deduplicating appliances when they own Data Domain, that's got 80% market share. I mean, they could probably give every one of the customers they piss off a replacement Data Domain cheaper than keeping the R&D going. And it's been clear for a long time that EqualLogic isn't getting any R&D, and that product will die when the last PO comes in from somebody for one. But there's still, you know, Compellent. Michael Dell personally assured me that they weren't killing off Compellent. But I remain dubious that Unity and Compellent are different enough that it makes sense for them to continue both lines.
Starting point is 00:16:22 Yeah, I mean, there's always this concept from a sales perspective that more products are better, but it confuses the customer at some level, you know? Well, more products are better from the sales perspective, but sales guys only pay attention to the top line. Yeah, yeah, yeah. And so, you know, having 27 products, you know, being the General Motors of 1975, where you have Oldsmobile and Buick and Pontiac. But ultimately, building a version of the same car that's different enough to be recognizable as a Buick and having the little portholes on the side costs more than the incremental sales you get. And so today's General Motors doesn't have Pontiac, Oldsmobile, and Buick anymore. Well, they do have Buick, but that's because it sells in China.
Starting point is 00:17:11 Yeah. I think the key there is, I think in a low margin business, more products are fine. But at the high margin, where you're doing high touch, high relationship sales, it's a different game. Yeah. Well, I mean, then you get into the, how can one sales guy know. Yeah, everything that needs to be known.
Starting point is 00:17:30 Well, just, you know, how do you make a consultative sale, which, frankly, has never been either company's strong point. Not EMC's strong point? Really? Well, not like NetApp. Yeah, okay, I got you. I got you. I mean, if you compare the NetApp sales process to the EMC sales process, the NetApp sales process was always much more consultative. Okay. I got you. They spent a lot more time talking to you about the problem. But if you're trying to do a consultative sale and understand for this customer whether he should have an EqualLogic or a Compellent or a Unity or a VMAX.
Starting point is 00:18:12 And there we're just talking about block primary storage. Yeah, not the rest of the stuff. HA and performance and all of this stuff. Yeah, and so it starts to get difficult. And, you know, I remember talking to the HP guys about this. And that meant that HP had to have two SKUs, one that was all flash that you couldn't add disks to, and one that was all flash that you could add disks to. It seems like more than one vendor has done that over the last year, Howard. Well, because they have to satisfy Joe Unsworth so that they can show up on the Gartner. I mean, when Gartner issues a report that says this is the market share in the, well, he doesn't even call them all flash arrays.
Starting point is 00:19:31 He calls them solid state arrays, because someday there might be something other than flash. And vendors don't show up because they don't punish their customers by having a model you can't add disks to. Yeah. Then Chris Mellor reports that they didn't sell, or that they're in the other category, because they didn't show up. And it's not that they didn't sell enough. It's that they didn't sell enough that qualified for Joe's ridiculous categorization. And so at one point I called Vish and I asked him, how much does it cost HP? To create another SKU? Well, it's not even another SKU. It's another product line, basically.
Starting point is 00:20:09 Yeah, it's not trivial, I would think. It's a seven-figure number. Yes, yeah, I think so. And that's just taking the disks out of an already there product, you know? That's just bookkeeping. That's a million dollars, zero R&D. And when you compare that to, okay, we're keeping Compellent around, and we're gonna have to develop a new version of the operating system, and we're gonna have to continue to add features to it, now you're
Starting point is 00:20:42 talking real money. And so I just, you know, Michael literally personally assured me that they were going to stick around. But I wouldn't do it. And so I remain... So maybe Unity is the final merge of all that code.
Starting point is 00:21:00 Well, Unity is the VNXe code. Right. And taken to the next level, of course. Right, well, they, you know, at the point where they decided to come out with the VNXe, and, you know, I haven't talked to any of the inside people, so I don't know the full story. But from outside, it appears somebody was really smart and said, okay, this, what was the? Celerra. Celerra, right?
Starting point is 00:21:26 Right. Celerra is a CLARiiON with NAS heads. That means that we need four controllers, because we need two CLARiiON controllers, and then we need two NAS heads. Ultimately, we need to integrate the NAS head into the block storage system code. And VNXe was that.
Starting point is 00:21:48 And they managed to sneak it out at the low end so that they could test it in the field. Yeah, get it working well. Yeah. And worry about performance testing later, about building performance into the code in later revs, because they were only selling it at the low end. And then Unity is where
Starting point is 00:22:11 they said, okay, we've sold this enough and we've done enough revs of this, but it's basically a fork. It's ready to rock and roll. And now it's ready to take over being the main line. So, you don't think that Unity could somehow incorporate the Compellent personality without a lot of R&D?
Starting point is 00:22:33 Maybe not. I don't know. You know, it's, what would the Compellent... You know, there's features in Compellent that Unity doesn't have, but many of them are performance-related, and frankly, some of those performance features are specifically for spinning disks. Yeah, which aren't as important anymore. Well, I mean, if it comes down to, I have to give up the data placement algorithm that
Starting point is 00:23:02 puts the more frequently accessed data on the outer tracks of a spinning disk. But I can get that performance by increasing the flash in the system by another 5%. Which is easier. Right. So there's, I mean, there's good stuff in Compellent, but the main thing, you know, Compellent had two huge selling points. The first was ease of use. Unity has substantially caught up in terms of ease of use. My CLARiiON CX500 is such a pain to use
Starting point is 00:23:39 that I haven't turned it on in years. And you have to register every Fiber Channel host adapter by running software on the host, and it's just a pain in the ass. And the Compellent, you would get a screen, and it would go, I noticed there's another HBA that zones to access me, do you want me to use it? Bang. Yeah. Instead of having to type a WWN or load software I don't want. But Unity's got, it may not be as easy to use as a Compellent, but it's beyond the point of diminishing returns. I mean, this is an argument I have with the HCI guys all the time. They go, you can get it out of the box and have it up and running in an hour.
Starting point is 00:24:20 A traditional three-tier system takes a week. That's like, wait a second. I have taken six servers and an EqualLogic out of the box and had them up and running in a day. And you only do that once, and the difference between four hours and a day, once, is not that much. Not that much. Did you want to talk a little bit about the emergence or normalization of object storage? Yeah. So even two years ago, if you said object storage to me, it meant two things.
Starting point is 00:24:54 First of all, it meant you had like a petabyte of data. It didn't make any sense to implement Ceph or CleverSafe or AmpliData for 100 terabytes. And it meant that you had to modify your apps to talk to the S3 or Swift or proprietary API of your provider. And even folks like Martin Glassborow, who is a friend of ours, who runs storage for a major cable channel in the UK. This is a guy who would say, my programmers refuse to write to S3. They really want to have a POSIX interface. Oh, by the way, we were at the Masters last week and it was raining, and I have four hours of rain from 18 high definition cameras that we're keeping. Right. So this is like an obvious use for an object store.
Starting point is 00:25:55 You've got this huge amount of video that doesn't compress. It doesn't de-dupe. It just sits there like a lox. And so three years ago, I talked to him and he said, I can't get my programmers to write to it. And this year I talked to him, he said, yeah, we finally got them to do that. There are better NAS interfaces. And there are several good solutions now for small object stores. That, you know, if you have 50 terabytes of data and you think it's going to grow, or so that each of your developers can have their own object store, you can go to NooBaa or to Scality's S3 server and set up lots of little object stores very easily.
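A minimal sketch of what talking to one of these little S3-compatible stores looks like from an application: it's plain S3 through boto3, and only the endpoint and credentials change. The hostname, port, and keys below are placeholders, not details from the episode:

```python
# Plain S3 calls against a small, local S3-compatible object store
# (Scality S3 Server, Minio, etc.) instead of AWS.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://objectstore.local:8000",   # local object store, not AWS
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

s3.create_bucket(Bucket="dev-scratch")
s3.put_object(Bucket="dev-scratch", Key="hello.txt", Body=b"hello, object storage")

for obj in s3.list_objects_v2(Bucket="dev-scratch").get("Contents", []):
    print(obj["Key"], obj["Size"])
```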
Starting point is 00:26:40 I just got a Raspberry Pi and a 314 gigabyte, I wonder how they picked that number, PiDrive from Western Digital. And I'm in the process of installing Scality's S3 server on it, which means I'll have a 300 gigabyte object store I can fit in my shirt pocket. Yeah, I've done it, it works pretty well on a Mac. Let me know when you're up. I'll start sending you stuff. I did a blog post for them, and I set it up as a VM. But just the idea of running it on the Raspberry Pi intrigues me.
Starting point is 00:27:18 It's kind of like, look, you can do this too. So object storage, I think, is just becoming the standard. And the place I'm really expecting object store to take off in the next year or two is as a backup repository. You know, Data Domains are really expensive. And if you look at the changes that have happened to how we make backups, the duplicate rate within the backup stream... Yeah, it's coming down. The duplicate rate within the backup stream has to be a lot lower today than it used to be. When I did a weekly full, daily incremental, file-by-file backup, even in an incremental backup,
Starting point is 00:27:58 well, there's the rotating log and I keep a week's logs. Well, that means I'm backing up the whole file, but only one seventh of it changed. We switched from that to techniques like incremental forever, which existed in TSM, but, you know, really only makes sense when you're backing up to disk, because consolidating on tape was an ugly, ugly process. And change block tracking in VMware. Now, with change block tracking, I'm not backing up the six-sevenths of that file that didn't change. And with incremental forever, I'm not storing a month's worth of full backups. So instead of four fulls plus all the incrementals, I'll have one full and 30 incrementals. So the 10 times as much that a Data Domain might cost relative to an object store starts to become harder and harder to justify.
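A rough sketch of the arithmetic behind that point, with illustrative numbers: under incremental-forever backups there is far less duplicate data in the stream for a dedupe appliance to squeeze out.

```python
# How much lands on the backup repository over a month under each scheme.
# All figures are illustrative assumptions, not anything from the episode.

full_tb      = 100    # one full backup of the environment
daily_change = 0.02   # assume ~2% of the data changes per day
days         = 30

incr_tb = full_tb * daily_change

# Old style: weekly fulls plus daily incrementals over the month
weekly_full_scheme = 4 * full_tb + (days - 4) * incr_tb

# Incremental forever with changed-block tracking: one full, then incrementals
incr_forever_scheme = full_tb + days * incr_tb

print(f"weekly fulls + incrementals: {weekly_full_scheme:.0f} TB on the repository")
print(f"incremental forever:         {incr_forever_scheme:.0f} TB on the repository")
print(f"duplicate data left for a dedupe appliance: {weekly_full_scheme / incr_forever_scheme:.1f}x")
```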
Starting point is 00:28:54 Yeah, I guess it's a question of recovery time. You know, if we're taking, oh, I don't know, a couple hundred gig or even a terabyte off of an object store versus a Data Domain, there's quite a bit of a difference in performance. But if the object store happens to be local. Yeah, no, I'm thinking about in the data center. Yes, yes. That's a different game. I'm just saying, you know, because remember that restoring from a Data Domain was always slower.
Starting point is 00:29:22 Yeah, because of rehydration, right? The rehydration process takes all the 7,200 RPM disks that make up a Data Domain. And, you know, it's basically reassembling the data, so it's all random. And an object store, you know, it's like, okay, restore that object. Well, they're good at that if it's a big object. Yeah.
Starting point is 00:29:56 It's the small IOs that object stores are slow at, right, in general. You know, there's cases... Like, I'm gonna have to fire up my Raspberry Pi and attach a disk to it or something like that. I'll email you the guy at Western Digital, he'll send you one. Yeah, 314 gig. I'm thinking a couple of terabytes or something like that. I don't know if the Raspberry Pi has got a SCSI interface, though. It's only USB. Yeah, yeah, and it's only USB 2. Yeah, there's a definite performance limitation. Right, right. Interesting, though. So you're thinking an object store that sits on site could potentially displace a lot of Data Domain. Instead of buying a Data Domain, buy a CleverSafe or an HGST Active Archive or a Scality Ring. And all the backup software will work directly with those things now. Nowadays, yeah. Because they all do that to support Amazon S3 for cloud backup. And either turn on the deduplication in NetBackup or Commvault or don't.
Starting point is 00:30:50 But even if you don't, if you're buying an HGST Active Archive at X cents per gigabyte and the Data Domain costs 10X. Yeah, it's worth it. You know, you're not getting 10x reduction anymore, because the backups aren't reducible 10x anymore. Plus you've got all the scalability advantages. And, well, plus you've got the HA advantages of CleverSafe and stuff like that, multi-site, all that stuff. Yeah, scalability the same way. Right. So, you know, I see the writing on the wall for those dedicated appliances. Well, that's interesting.
Starting point is 00:31:30 I hadn't really anticipated that. You think it's the emergence of object storage in a smaller form factor, more capable on-site environment? Yeah, well, I mean, it still doesn't make any sense at a remote site, where all you need is 50 or 100 terabytes of backup space, to put a CleverSafe in. But, you know, now we're talking about, oh yeah, I'm going to run S3 server on a local server there and use that as the repository. Yeah. Or, you know, something a step up from that, you know, six nodes of CleverSafe, and replicate. Now, for those remote sites, there may still be advantages to the Data Domain in terms of bandwidth. Yeah, from a performance perspective, if you're comparing an on-site Data Domain versus a remote site S3 repository, I agree.
Starting point is 00:32:17 Right. And if you're comparing it to a local repository, just the amount of data that you replicate, because it's dedupe before replicate. But, you know, if you look at a big data center today, there's a bunch of Data Domains, because one Data Domain isn't big enough. But you could certainly build one CleverSafe big enough easily, or an Active Archive, any of them actually scale. Yeah. You know, all these guys go, you know, well, you want 10 petabytes? No problem. We can do 10 petabytes. And of course, the problem with object stores is that they're slow at doing random IO, but it's backup and it's pretty much sequential. But, you know, then on the other side of the spectrum, there's the whole new go-fast category. Talk about the go-fast category. So I've taken to calling this category the new tier zero, because if you remember what it was eight, ten years ago, we had TMS and Violin
Starting point is 00:33:11 and that was tier zero, and over here, VMAX and 3PAR. And so it came down to, well, do I want the original Violin stuff, which was basically a rack-mount SSD? It offered no functionality except fast. Or do I want a 3PAR that's 95% that fast, but it's still a 3PAR? And it's got all the data management, cable management, all that stuff. Yeah. Yeah. And that's an easy decision. I want the 3PAR. So as the performance of tier one increased, it squeezed out that old tier zero stuff. Yeah. I mean, they're still there, obviously TMS or
Starting point is 00:34:03 FlashSystems still exist, and, you know, DSSD and stuff like that came out. Well, so TMS and Violin, you know, what we really have to say, you know, Violin is toe-tagged and has a respirator and is on life support. It's sad to see a pioneer like that die, but if they're still in business six months from now, I would be really shocked. They filed for Chapter 11. They're in last days. But if you look at TMS, they got bought by IBM, and the TMS technology got rolled up into and integrated with SVC to create basically the FlashSystem V9000, which is IBM's answer to a 3PAR.
Starting point is 00:34:47 That's tier one. It's fully functional. But those systems deliver a million IOPS at a millisecond latency. It might be two million IOPS, it might be three million IOPS, but it's single-digit millions, and the magic number on latency is one millisecond.
Starting point is 00:35:06 Yeah, under. Tier zero, for me, is defined as, well, it's 10 million IOPS or more, and it's 100 microseconds of latency. And so that's solutions like DSSD that are doing NVMe over switched PCIe. And there are the NVMe over fabrics solutions. I mean, Mangstor and E8 are probably closest to NVMe over fabrics. Although today we're at that funny point where the standard is out, but it's so new that nobody's implemented it yet. And so Mangstor was a leading contributor to the standard. So their system looks a lot like the standard, but it's not standards compliant yet. And E8 is similar, but again, non-standard.
Starting point is 00:36:03 And then there's companies like Apeiron that are taking a different approach. They're more network-centric than storage-centric, and they built a 40-gig switch into the storage array. But it's all about NVMe media on the back end, so that you can get the very low latency between the controller and the media, and a low latency fabric connecting the hosts. That's the main line. They're generally lacking in data services, because data services, well, you have to have a working system
Starting point is 00:36:49 before you can start programming the data services. And it's really hard to implement data services without having any impact on performance. So if you're really saying, we have the absolute best performance you can have, then you don't have data services. I would say NVMe over Fabrics is of interest to a lot of vendors I talk to. For instance, Pure was very interested in talking about it. Well, Pure is in a really interesting position.
Starting point is 00:37:22 And I have to admit, they're a client of mine and I deal with them on a regular basis. But when we were at Tech Field Day, and I forget which one, they rolled out the FlashArray//m. And they said, well, we've done these things. And the biggest thing in the version they were shipping was that they switched from using ZeusIOPS SSDs to NVRAM for the cache.
Starting point is 00:37:57 And they built their own PCIe interface to the NVRAM from the controllers. And it was shared NVRAM, so there's none of this coherency problem. I was like, well, that's good, you know, you've solved the latency bottleneck in your existing system there, I like that. Then they said, yeah, and we have PCIe going to all the drive slots too. There you go. But all the drives are SAS. And I looked at it and I said, oh, so you can put NVMe, you know, and I think we were still calling it SCSI Express then, but it's now U.2, drives in the drive slots, and get NVMe across the whole thing between your controllers and the media.
Starting point is 00:38:42 That could be really nice. I thought it was the FlashBlade where it would really start to take off, because it's all internal. It's all PCIe on the back end of that. But the FlashArray was designed for the next step, which
Starting point is 00:38:59 not saying much out of school, I expect this year, where they say, okay, now you can do a U.2 drive instead of a SAS drive. And so we can vastly reduce the controller-to-media latency. And then in 2018, 2019… Maybe rolling it out the back end and cutting down the host-to-storage-system latency. I understand why they didn't do it at the very beginning, because when they announced their product, there were U.2 SSDs, but there were no dual-port U.2 SSDs. They were all one port of four lanes
Starting point is 00:39:51 and interposers are evil. So, you know, they just didn't have the product to buy. But those products, you know, the two-by-two U.2 drives, are available now. They should be out soon. Yeah, I look at that and I go, oh, yeah, there's a real path to, okay, it's still the Pure controller. It won't be 100 microseconds. It might be 200 microseconds. Right, but still, it's a step up. But 200 microseconds with data reduction. Yeah. God, wouldn't you believe that?
Starting point is 00:40:25 I don't know. Yeah, well, I mean, you and I go back to, you know, 50 milliseconds being fast. Yeah, 40, 40. Don't go there. I understand. Yes, yes, yes. And 10 was the dream, right? I mean, you know, I remember selling Priam 33 megabyte hard drives.
Starting point is 00:40:45 Oh, gosh. Yeah, they weighed 100 pounds and they had a linear voice coil. I remember setting one up on a desk once, and the linear voice coil made the whole desk shake. Yeah, rotational vibration, big time. It wasn't even rotation, it was linear. Yeah, no, it was like, when it seeked, you could see the desk shake. And that was like a 20 millisecond drive.
Starting point is 00:41:12 And now we're talking about, you know, well, can we get down to 50 microseconds? How do we get below a hundred microseconds? God, they're already there. Yeah.
Starting point is 00:41:23 Oh, no, I mean, I spent a lot of time at the Flash Memory Summit talking to E8 and Mangstor and Apeiron and all of those guys, and, you know, the magic number for all of them was 100. It was like, we deliver 100 microsecond latency at multiple millions of IOPS. And that, of course, means whole new application models. Whole new world from that perspective. Yeah.
Starting point is 00:41:47 You know, the other thing that was kind of surprising, when we talked to Rob Peglar of Symbolic IO, their use of NVDIMMs was kind of an interesting step in this kind of tier zero kind of world. Well, yeah. I mean, we're getting, so we as a storage industry are moving closer and closer to our friends in the semiconductor business. I mean, flash was the beginning of this. But, you know, hopefully this year, and everybody, of course, expected it in 2016, but I'm saying hopefully in 2017, we'll start seeing 3D XPoint. We're seeing 3D
Starting point is 00:42:51 flash coming out everywhere. You know, our friend Jim Handy did an interesting piece on EE Times a week or two ago where he was looking at Micron's financials, and his reading of Micron's financials is that the fact that Micron's bit shipment rate has increased so dramatically means that they're getting a handle on the 3D flash yields. I read stuff occasionally from people who don't understand the semiconductor business who talk about 3D flash, and they go, yes. And, you know, we've got 64 layer 3D flash. All coming, right? Yeah. Yeah. And they interpret that to mean that we'll have 64 times the capacity per chip all in one fell swoop.
Starting point is 00:43:20 And when we talked to Jim, we learned some important things. Like, first of all, yes, there are 64, you know, it is a 64-story apartment building, but the cell size is much larger. So that doesn't mean there's 64 times as many bedrooms. Instead of being four-bedroom houses, these are now one-bedroom apartments. Right. Suites, I like to think of them as. And so, you know, while we do get 64 layers, it doesn't mean 64 times as much, it means four times as much. Yeah, yeah. Well, and there's a lot of process problems that have to
Starting point is 00:44:02 be solved for a technology like 3D NAND, which means that the yields are low to begin with. And so even if we are getting four times as many bits per chip, if the yield is half as much, we're only getting twice as many bits per wafer. And when we start talking about how many bits Micron can ship, yield is a major consideration. And to some extent, the way the semiconductor guys work is they don't ramp the system up to run at full speed until they get the yield up. You know, if in full production a batch is 10,000 wafers, they might run batches of 1,000 wafers until they get the process down. Because they don't want to waste the raw materials and the cost of the machines.
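A rough sketch of the bits-per-wafer arithmetic being walked through here; all of the numbers are made up for illustration, not Micron's actuals:

```python
# 64 layers with much bigger cells nets only about 4x the capacity per die,
# and if early 3D yields are roughly half of mature planar yields, the
# shippable bits per wafer only roughly double.

layers            = 64
cell_area_penalty = 16      # 3D cells are much larger than planar cells (assumption)
capacity_per_die  = layers / cell_area_penalty    # ~4x a planar die

planar_yield   = 0.90       # mature planar process (assumption)
early_3d_yield = 0.45       # immature 3D process, about half (assumption)

bits_per_wafer_gain = capacity_per_die * early_3d_yield / planar_yield

print(f"capacity per good die:    {capacity_per_die:.0f}x planar")
print(f"shippable bits per wafer: {bits_per_wafer_gain:.1f}x planar")
```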
Starting point is 00:44:57 You know, it's kind of like, well, let's make four times as much and lose money on every one. Right. That's making sense. Right, right. And it's not profit, you know, and there's a certain yield level before a semiconductor product becomes profitable. But, you know, getting back to NVDIMMs, and, you know, especially when we say NVDIMM, it's basically RAM with a flash backup. And it's great, you know, that now there's three models of HP ProLiants I can plug that into, and I don't need to be an OEM and do the integration myself. But the really, really interesting part comes when we have not 8 and 16 gig NVDIMMs,
Starting point is 00:45:51 but 256 and 512 gig NVDIMMs that are using a technology like 3D XPoint that's inherently non-volatile. And that leads to a whole new generation of applications. Yeah, I think that, you know, the thing that surprised me about NVDIMMs is that they're a functionally equivalent path to 3D XPoint. So you're developing the software and application functionality today on NVDIMMs that ultimately you can use with 3D XPoint, with just, you know, vastly more storage space. Right. And I, for one, sure hope that the guys at SAP have realized that. Because SAP HANA, we talk about it as an in-memory database, but it's really only an
Starting point is 00:46:38 in-memory database for reads. If you have a sizable HANA installation, you need a big honking all flash array behind it just to absorb the write traffic. At the point where HANA can take advantage of the sizable NVDIMMs, they can eliminate that. That can become, we replicate the data between two HANA nodes and write it to non-volatile memory on both,
Starting point is 00:47:05 and we journal everything to the big honking array in case the system goes down. Hey, it's about time for us. We've talked long enough. Are there any last comments you want to make? Howard, you've been talking a lot this time. My gosh. You know, I tend to do that.
Starting point is 00:47:22 Yeah, yeah. Alright. Well, this has been great. It's been a pleasure to talk with you again, Howard. Always a pleasure, Ray. Next month, we'll talk to another startup storage technology person. Any questions you want to ask, please let us know. That's it for now. Bye, Howard.
Starting point is 00:47:37 Bye, Ray. Until next time. Bye.
