Grey Beards on Systems - 92: Ray talks AI with Mike McNamara, Sr. Manager, AI Solution Mkt., NetApp

Episode Date: November 4, 2019

Sponsored By: NetApp. NetApp's been working in the AI/DL (deep learning) space for a long time now and announced their partnership with NVIDIA around DGX systems back in August of 2018. At NetApp Insight this week, they were showing off their new NVIDIA DGX systems reference architectures. These architectures use NetApp AFF A800 storage.

Transcript
Starting point is 00:00:00 Hey everybody, Ray Lucchesi here. Welcome to another sponsored episode of the Greybeards on Storage podcast. This Greybeards on Storage podcast is brought to you today by NetApp, and it's being recorded at NetApp Insight 2019 on October 30th. We have with us here today Mike McNamara, Senior Manager, Product Solutions Marketing at NetApp. Maybe Mike, start and please tell us a little bit about yourself and what's new at NetApp Insight, especially in the AI space. Hey Ray, great to be here. A little bit about myself: I've been at NetApp over 10 years now. God, we've known each other for at least nine of those. Yep, and, you know, a variety of different roles, as you know, right? But the current role has probably been about maybe 18 months or so,
Starting point is 00:00:52 focusing on AI, and AI at NetApp is one of our key, we call, must-win areas. I can see that. I understand that AI is coming on pretty hot throughout the world, as far as I can tell. Oh, it's impacting all of us, which is kind of cool, because when you just think of things, Ray, like even our phones, right? If you're an Apple user, Siri is AI. Oh, God, yeah, Alexa and all that stuff, and OK, Google. I've got all that stuff at home, quite frankly. I don't have the Siri thing, but I've got the phone. So it's pretty bizarre. Yeah. So what's new with AI at NetApp? I know you guys have got a partnership with
Starting point is 00:01:32 NVIDIA and stuff like that and was at one of the analyst events I think they had a guy there from NVIDIA as well Charlie Brown I think or Boyle Boyle yeah he's the GM for DGX yes oh god Oh, God. So what do you guys do at DGX? Well, I tell you, you know, we've got a really good relationship with NVIDIA. Matter of fact, they're actually a key sponsor here at Insight. I saw them on the floor.
Starting point is 00:01:54 And, you know, Ray, what's important in this AI space is that, not just NVIDIA, but a big, broad ecosystem, family that we test and validate with. But NVIDIA is certainly an extremely key piece because they're that compute piece. And they also have a really nice software toolkit. Their software stack is pretty impressive. Really nice.
Starting point is 00:02:17 So what we've done is we've tested with their DGX platform. So the first platform was the DGX1, and then we've done reference architectures. And then, Ray, last year around the March timeframe, we were the first vendor to test and validate with the DGX2. Matter of fact, in our booth here at Insight, we're showing a demo of that. So what we'll do, Ray,
Starting point is 00:02:46 is we'll put together, I think of it as like a recipe, we call it a reference architecture, but it's to help the customer, you know, set up, configure, if you're looking for this type of performance, this type of... So from an AI application perspective, a performance characteristics of like deep learning, training, or deep learning
Starting point is 00:03:05 inferencing that sort of thing yes exactly and and what we do as well ray is that we've been kind of focusing to morphing more now into certain vertical industries so a key one is and what actually one we can relate to is automotive in that I have a colleague I work with who's got a Tesla, you know, an autonomous pilot option. But that's a big one, you know, autonomous vehicles, autonomous. I was talking to a customer a little while ago, a company called Z Tractor, right? And they're talking about autonomous farm equipment. Oh, God, yeah, yeah, yeah, yeah.
Starting point is 00:03:43 John Deere and all that stuff, they've got them all over the place now. Yeah, and think about like, if it goes awry, it's just going to ruin your crops. But he was telling me, it was interesting when you learn, like getting into the business case, but he was saying, you know,
Starting point is 00:03:55 the average, the hourly rate of a tractor driver is $19. Yeah, it's a good chunk of change. California, $25. But he said, but more important than that, he said, you don't need the steering wheel, the seat, the seatbelt, the windshield, the gauges, the cab. So when you factor all that in, you know what I mean?
Starting point is 00:04:15 Like, it's like a lot of cost savings. So there's a lot of, I think it's, nice thing with AI, you know, I think a lot of people, there is some concern that it's going to take jobs away. Yes, it will, but I think, Ray, it's going to take jobs away, but it's going to create other jobs. New opportunities for other people, yeah. And the intent is to have people doing more value-add work. So let me give you another example.
Starting point is 00:04:40 So at NetApp, I mentioned automotive, so autonomous vehicles, connected cars, a couple of use cases. But another big one we're going after, and this is something we're doing jointly with NVIDIA, by the way, on all these, because they have strong focuses on those verticals as well, is healthcare. Now, when you think of healthcare, right, one of the key ones, use cases for AI is the imaging, medical imaging. Yeah, the imaging, recognition, and diagnosis kinds of things. Without question. And, you know, a big problem, and I was really surprised when I, you know, we started to plan for this and get into this, was that, you know, radiologists,
Starting point is 00:05:20 there's burnout, there's extremely high, as a matter of fact, unfortunately, suicide. From radiology? From radiologists. Because they're seeing all these gruesome images and that sort of thing? Ray, there's not enough. And it just, think about how many X-rays, CT scans, MRIs. I just had an MRI, honestly, a month ago.
Starting point is 00:05:36 Really? I'm having surgery on November 14th. But, sorry. I should ask about that, but we'll ask about that later. Yeah, but so Ray, so what's great with AI is that, and we ended up working on a solution with NVIDIA. They've got a nice toolkit for medical imaging called Clara. And so we're in the throes right now of putting together an end-to-end architecture. But if we can help a radiologist do their job more effectively, quicker, easier, it's a win- it's kind of it's a win-win for
Starting point is 00:06:06 everybody win-win for everyone and it can scale to some extent because lots of those you know radiologists are specialists in that field so they see a lot of stuff from different hospitals and different uh different machines and that sort of stuff you were talking about the automotive space i just bought a new uh new truck actually and uh it's got more sensors than I know what to do with. It's telling me when I'm moving out of the lane. It's telling me how close I am to the guy next door to me and stuff like that. It's telling me when I shouldn't even turn lanes and stuff like that. And it's got voice recognition in it.
Starting point is 00:06:37 It's got everything. I mean it's not quite there yet. It's certainly not autonomous but it's taking that sensor information and doing a lot with it that you wouldn't think you could have done 10 years ago. It's doing it on board the car, you know, it's just amazing what you can do. But all that's involved with all the training and stuff that goes on back at the home office on machines like DGX-1 and DGX-2 and stuff. So what is a DGX-2?
Starting point is 00:07:03 Is it so it's got... It's 16 GPUs. 16 GPUs? Each GPU has got 2,000 cores or something like that? Something like that. It's crazy. Crazy performance. Super, super performance. God, that's impressive.
Starting point is 00:07:17 It is, it is. And they've got their own memory interface or IO interface that they use to get the data to those systems as fast as possible and stuff like that? They do, yeah. And they've got it using even InfiniBand interconnection, just to get that really high speed pipe. And the NetApp solution there is the device that's providing the data images and radiological and stuff like that? So from a NetApp point of view, Ray, to match up with them in our solutions, while our end-to-end NVMe-based solutions, our A-Series,
Starting point is 00:07:50 but you know, Ray, we're not, yes, we have this solution called ONTAP AI, which is AFF and NVIDIA, but we also have FlexPod. Okay. FlexPod AI. So if a customer is, so Cisco has a has a essentially a derogating right in video right with UCS but Cisco now we never offer the flex body I did this come on the existence this came in on gosh I
Starting point is 00:08:19 want to say maybe man a little while ago several quarters go matter of fact the rates here on the show floor. It's on the show floor. I should go see it. So I saw you at the AI booth, but I didn't see the FlexPod guy. So that's interesting. So in that case, it's a UCS with NVIDIA GPU configuration. Yeah, so it's all Cisco, of course, with NetApp storage,
Starting point is 00:08:41 but it's the Cisco UCS. They call it the ML480. It's their Cisco, of course, with NetApp Storage, but it's the Cisco, they call it the ML480. Is there a machine learning solution? Yeah, you got it. So heavy cores, heavy GPUs, heavy IO and stuff like that. You mentioned the all end-to-end NVMe stuff. So it's NVMe over fabric to NVMe SSDs and that sort of stuff? Yeah, exactly.
Starting point is 00:09:03 It's an all-flash solution, right? Yeah, it's NVMe to the host, NVMe to the disk. So the latency is excellent. And, Ray, the big thing with these, you mentioned these DGX servers, all the processors, what we want to ensure is that we are really utilizing that. That DGX needs to be up at 90%, 95% utilization. Yeah, it's tough. Or it's not.
Starting point is 00:09:24 And it's 2,000 cores per GPU, and there's 16 of them in a DGX, too. And it's just keeping those things busy is very difficult, I would say. So we've tested, just to give you an idea of our performance, we've tested, I think it is, up to 11 A800s to one DGX. What? So, yeah, 11. 11 top-end. No, no, I'm sorry.
Starting point is 00:09:47 11 DGX to one 800. Yeah. Okay, okay. We can drive a lot of. That's great. That's great. That would be a fully populated 800 with NVMe SSDs. It would be, yeah.
Starting point is 00:09:59 It would be fully populated. And when you say NVMe, that's NVMe over fabric. Is that fiber channel as well as Ethernet? It's correct. Yeah. So our solution tends to be, it's more over ethernet. Okay. So it's, you know, the, it's more NFS based. Right, right, right. So it's files level and stuff like that. However, but Ray, if we're also testing in the throes right now is our E-series platform with DGX. And you might ask, okay, so what we're- So E-Series has always been kind of high performance, high performance computing kind of solution,
Starting point is 00:10:31 so in an AI space, You hit the nail on the head, so. It's providing rapid access to data again, I guess, right? And the use case sort of there is what we're looking at, customers who require an InfiniBand connection to the storage. E-Series is our only platform that offers that. Okay, okay.
Starting point is 00:10:50 And parallel file systems. So E-Series has a lot of history in that. So when you have those two combinations, you know, the customer wants to do AI, they want InfiniBand, maybe parallel file system, boom. So it's like NFS 4.something or something? What's a parallel file system? Well, it's like a BGFS. Okay.
Starting point is 00:11:07 For example. Something like that. A cluster. Yeah, yeah, yeah. Those guys. It's from the HPC space. Absolutely, yeah. So what's nice, Ray, is that we've got a portfolio approach. And even like, you know, we talk, hey, the solutions I was just referencing are more core-based, but if you want to do the edge, our HCI platform,
Starting point is 00:11:28 the ability to do some inferencing at the edge. So the HCI platform is built on the SolidFire solution, and it's also... Using some NVIDIA cards within that. Yep, so you've got the GPU process. What's the packaging for something like that? How big is that? Is that like a server pizza box? Yeah, exactly, yeah.
Starting point is 00:11:49 So it's great, you know, another option for the edge of, hey, if customers want to run on tap select on a commodity hardware and they want to do some analytics at the edge, we can enable that. If it's in the cloud, you know, core, cloud volumes on tap, cloud volume service. So, you know, for us, it's like, cloud, you know, core, cloud volumes on tap, cloud volume service. So, you know, for us, it's like, hey, Mr. Customer, if you want to do AI in the cloud, check, we got you.
Starting point is 00:12:10 You want to do it in the core, we got you. But, you know, the ability to allow the customer to choose where they want to start with AI, and then we can, we allow them through our data movers to move that data. So that data fabric solution that you guys have been talking about for the last couple of years and stuff like that goes all the way from the edge to the core to the cloud. And it's all pretty much on tap sort of data access and that sort of stuff.
Starting point is 00:12:37 That's pretty impressive. And then even Ray added another portfolio. So this is a common sort of use case. So a customer, say they're a traditional data center doing AI in a data center, and they want to take their cold data and they want to tier it. Storage Grid is a great platform for that. They can use Fabric Pool and automatically tier
Starting point is 00:12:58 that cold data off to Storage Grid. Now, Storage Grid is the object storage solution. There was some news on that as well this week. You guys come out with a new storage grid appliance I guess? Yes, yes, yeah. And it's an all flash solution. All flash object storage which must be tailored to data analytics and AI solutions. Because that's where the need for that sort of data is and there's lots of data in there and stuff like that. Well, so, are you finding, you know,
Starting point is 00:13:28 I've been playing and toying with this AI space for a couple years now, and are you seeing enterprises starting to actually use it? I mean, obviously the big guys, Apple and Google and Amazon are doing big stuff with it, but are normal companies these days starting to take advantage of it? Yes, yes, we are, Ray.
Starting point is 00:13:45 We're seeing a lot of interest in it. And right now, we've got over 50 customers. No kidding. Right now, installed worldwide. And I tell you, Ray, we've got... And this is the GGX architecture? Yeah, so I'll just... I can mention a few names, but without mentioning a few...
Starting point is 00:14:00 Let me just... Vertical. Broadly say, so, of course, I talked to Automotive, so we have a big automotive manufacturer who is using us for autonomous vehicles. That's great. We've got another big telco, can't say the name, but what they're using us for
Starting point is 00:14:15 is for predictive maintenance and chatbots. We've got a biopharma company that's using that NetApp NVIDIA solution for genomic sequencing. What, genomic sequencing? Yeah, so think of DNA, that's process, that's heavy, heavy data, heavy, heavy performance. Yes, yes, and what do they do with the AI? They're trying to ascertain what the DNA does?
Starting point is 00:14:38 They're trying to look at DNA and match and look at streams and streams. And see what the DNA is doing and seeing at you know streams and streams and see what see what the DNA is doing and seeing and seeing how it plays out I'm just gonna go back to one thing you had mentioned there I'm trying to think it was automotive it was telco yeah and it was predictive maintenance so how do they do predictive maintenance with AI and so they're taking all their support data to some extent and trying to say okay okay, given this support situation, this is the type of problem that they were having.
Starting point is 00:15:07 So now they can start training an AI model based on that data and say, hey, you're going to see this in like 20 minutes or something like that. Exactly, and think about it Ray, it's not just telco. Just about anybody. Just about any industry. You guys as well. Airlines, us, right, right.
Starting point is 00:15:21 Well, our active IQ. You want to talk about this active IQ. I've heard more about active IQ. That's using AI within that, right. Well our Active IQ. I want to talk about this Active IQ. I've heard more about Active IQ. That's using AI within that. I heard more about Active IQ this week than I have in the past. Lots of discussion about workloads and using Active IQ to understand customer workloads and what's happening in their environment and trying to optimize, I would say ONTAP or any other solutions to deal with that? Absolutely. A great use case of that, Ray, is that say you're a customer, you know, Active IQ,
Starting point is 00:15:53 it can look at your environment, how you've been using it. It can say, you know, Mr. Customer, if you turned on compression, you're likely to see this percentage improvement. So it's kind of great, right? I. And it's a space-effective. So it's kind of great, right? I mean, it's telling you, you turn it on, this is the benefit you're going to see. You can turn it on or not, but this is what you see. If you want to try it, this is what we predict, that sort of thing. So the same sort of thing from a telco perspective,
Starting point is 00:16:17 if they want to configure so many towers or something like that, this is the type of drop calls you'll see or not drop calls and stuff like that. But to be honest honest though, Ray, like you were asking too, how's it, so AI has just taken over, but you know, I was talking to a few customers recently and what they were saying is, I'll give you one example, Macy's.
Starting point is 00:16:38 And what was Macy's was there's two use cases. One was, think about facial recognition and allowing women to see what different shades of makeup would look like. So they would actually apply different types of shades. They scan your face and say this is what this shade is. However, the woman was saying product manager I was speaking with was saying
Starting point is 00:16:57 it's good but it isn't perfected yet. We've still got some work to do. I was talking to a gentleman from Amazon and again they're using facial do. I was talking to a gentleman from Amazon, and again, they're using facial recognition, but it was for to look at your expression when a product pops up and to make it... To see if there's
Starting point is 00:17:13 a correlation between purchase or your emotion or something like that. But he was saying it's not perfect yet. We're still working, but it's only going to get better, right? But it's kind of cool. You were saying on the floor, there's not perfect yet. We're still working, but it's only going to get better, right? But it's kind of cool. You were saying on the floor there's actually facial recognition that gives you an indication of your emotional state? Yes.
Starting point is 00:17:31 If you walk by our booth, we're showing an implementation of where we're running our ONTAP software on this little NVIDIA card. It's called the Jetson Nano. There's a camera. We're running some camera software, and we're in TensorFlow. So it's pretty cool. Yeah, it will scan your face, and it will tell you your mood, your gender, and your age. I don't want to hear that. I don't want to hear that.
Starting point is 00:17:54 Well, the good news is we're actually purposely going a little bit lower on the age. That's good. That's good. That's good. Well, I was thinking what I would look like with more hair on the top and stuff like that, but that's a different discussion. But let me tell you a cool story, Ray. An actual customer I can mention.
Starting point is 00:18:10 So we have a customer. It's called Cambridge Consultants. They're big guys, right? But they're also a partner of ours. Okay. And they've got a lot of expertise. We partner them more from AI consulting. They're a very, very forward-thinking company.
Starting point is 00:18:22 And they've recently released a press release on this and so they have a solution right it's called Bacillus AI and the problem that they're trying to solve with AI and built off of a NetApp Nvidia solution or ONTAP AI solution is this is that in a lot of emerging countries tuberculosis or TB is a big issue now here in the United States we don't we don't think right right it's not a big issue. Now here in the United States, we don't think, right? TB is not something. Right, it's not as big a deal here. But Ray, like 5,000 people a day die of TB in the world.
Starting point is 00:18:50 Oh shit. Is that crazy? I'm sorry. So what they've done, Ray, is that, so they've created an algorithm and to enable a physician, sort of out in the field, so what physicians have to do is
Starting point is 00:19:05 there when they're counting TB cells they're using a microscope and it's very very tiny like blood cells yeah right it's it's very tedious there's hundreds they're counting takes a long time there's a lot of chance for error so what they've created is the solution was just taking a standard micro a microscope that a physician would use, put a smartphone on top with an app. No kidding. They've got a DL, deep learning algorithm written
Starting point is 00:19:35 that will count those TB cells, count them, put it in their dashboard, come up with some analysis, significantly saving time and effort of physicians in the field. So it's a great example. It's impressive. So that's sort of an application is characterizing the cells, be it TB or not TB, and counting the cells, putting that information into another database, and trying to interpret that information
Starting point is 00:20:01 to see what sort of percentage likelihood of having TB and that sort of thing. All done with a smartphone app and a microscope. Is that great? Is that awesome? Yeah. That's real impressive. You know, my son, we do some robotic stuff and they're using smartphones in our robots, but the smartphone's got more computing power than you can think of.
Starting point is 00:20:20 It's got more sensors there. If you could, you know, taking advantage of something like that in this sort of situation it's just one example right I mean of tuberculosis but they're probably a thousand blood diseases that could be very similarly use that sort of technology to do that sort of thing impressive we have a partner in our booth and you know I mentioned our ecosystem and we talked about in video but also a lot of these independent software vendors so as an example I mentioned genomic sequencing well there's a vendor we've tested with called power bricks that's what they they have the the genome sequencing
Starting point is 00:20:56 software okay that's GPU based and we would partner with them and offer the customer hey look you've got the software you've got the infrastructure from that up in a video you've got your end-to-end solution. But in our booth we have another company called OmniSci. What's kind of cool is what they're showing is they went and took six years of data from the U.S. Coast Guard of all the coastal shipments over six years in U.S. coastal waters. Really? Take a guess how many rows of data that is.
Starting point is 00:21:25 Six years? I got to say, it's thousands a day. So it's, you know, God, it's hundreds of thousands. It's got to be a million-ish. Well, a little lower, it's 11.4 billion rows of data. Is that nuts? So, but Ray, but you could never- This is a hell of a spreadsheet. Yeah, but you can never... This is a hell of a spreadsheet.
Starting point is 00:21:45 Yeah, but you can never do that on a CPU-based solution. So OmniSci has got the software. 11.4 billion. Six years of, this is a shipment per row. Yeah, is that crazy? So, but what they're able to do, Ray, is take 11.4 billion dollars, analyze it using NetApp, you know, video, crunching it, and then just putting together a nice dashboard so the Coast Guard can make, look at it and say, all right, you know what, we've got, you know, issues off the coast of Florida. You know, over the years we've been seeing from October through September, we get some bad, you know what I mean? It just makes them, makes them.
Starting point is 00:22:19 More sensitive to the flows of the traffic and try to understand where they might be able to intervene better. Exactly. And, you know, the Coast Guard, their budget crunched, you know, and they don't have a lot of resources. So this will help them better deploy their resources in the areas that truly they can get the most bang for our tax dollars. That's interesting. That's interesting. And that's all via the DGX solution and that sort of stuff? Yeah, it's running on DGX, running on NetApp.
Starting point is 00:22:44 Yeah. That's amazing. Yeah, so it's almost like in the last half a dozen years or so, this deep learning stuff has come out of the research labs, starting to be deployed in various companies and stuff like that. It's just amazing what they can do with it. I mean, and it's all with the data. It's quantification of the data, understanding what that data means at some level,
Starting point is 00:23:06 and providing that information to a deep learning algorithm and training it to do that automatically for you. It's just impressive how this all stuff works. Yeah, it's just going to become more and more ingrained in our everyday life. But you know, right? It's becoming commonplace almost. Yeah, you know, and everyone, even like at this show, right?
Starting point is 00:23:23 Like folks who, maybe they're a storage admin or a cloud architect, and maybe they're not right now working on an AI, but they know what's coming. And so they're stopping by and saying, I know I'm going to need to learn about this, I know I'm going to probably get some responsibility and have to own some of this. A lot of it's been the data scientists
Starting point is 00:23:44 and the data engineers are key folks in this world. Part of that ecosystem. Yeah, but the line of business owners is saying, hey, whether it's marketing or finance, hey, I need to get better insights. I need to utilize AI to better understand my business. Mr. Data Scientist or Mrs. Data Scientist, Data Engineer, I need you guys to help me create some models.
Starting point is 00:24:03 So you guys do professional services in that space as well? We do, we do as well, correct. Yep, and we also, actually, I mentioned ecosystem just recently. If a customer wants, we just announced last week a partnership with a company called Flexential. And their whole approach is like a try and buy. No kidding. So if you want to try it, you can go ahead and do it. We have another partner called Core Scientific and it's an AI as a service model. No kidding.
Starting point is 00:24:36 And then we have a third one called Scale Matrix and they've got this mobile data center. So there are different options. So if you want to, if a customer wants to get involved with AI, there are these. You're kidding me, I can hire AI as a service kind of company to come out, take a look at my data and say, you know, this is the kind of stuff we can do for you if you're interested. Right, right, and if you don't want to get involved
Starting point is 00:24:56 with the, I don't want to. I don't want to do the actual work. I don't need to get a DGX going. I want someone else to have the infrastructure and I just want to use that infrastructure when I need it. So it's a model that might be applicable to some folks. So what we're getting as an industry more creative to help the customer. Deploy these AI deep learning solutions and that sort of stuff.
Starting point is 00:25:23 Well Mike, this has been great. We're coming to the end. Is there anything else you'd like to say to our listening audience about what NetApp is doing in the AI space? Sure. I would just say, you know, stay tuned. We're continuing to invest in this area. You know, a couple of the key use cases we're focusing on, as I alluded to, are automotive, health care, financial services, retail. But we're continuing to produce content and collateral. So check our website. It's just simple, netapp.com forward slash AI.
Starting point is 00:25:51 Forward slash AI. That's interesting. You mentioned financial services. Just for a second here, it's credit fraud detection? Oh, big time. Big time fraud detection. Matter of fact, Ray, that's the use case we are right now testing on. But think about it, right?
Starting point is 00:26:03 Our credit cards. Our companies, I have discovered, they're always looking at us. That's the use case we are right now testing on. But think about it, right? Our credit cards. Yeah, yeah. Our companies, I have discovered, they're always looking at our, it happened to me, looking at our transactions. And seeing if something's going weird. If something looks awry. You know, you've been spending. Too much time in Vegas. Yeah, yeah, right, right.
Starting point is 00:26:18 All of a sudden, it's like three $2,000 spikes. Whoa, wait a minute. It was a good party, actually. Yeah, actually. That's bizarre. I was driving towards, I was doing a college tour kind of thing with my son, and yeah, I got nailed in the middle of Iowa, and they wouldn't accept my card. And they said, got to call your credit card vendor, because you need to inform them that you've been traveling and stuff. It's very interesting how much stuff has gone. Well, Mike, this has been great. Thank you very much for being on our show at NetApp Insight.
Starting point is 00:26:47 Thanks to NetApp for sponsoring this podcast. Next time, we'll talk with another system storage technology person. Any questions you want us to ask, please let us know. And if you enjoy our podcast, tell your friends about it, and please review us on Apple Podcasts, Google Play, and Spotify, as this may help us get the word out. That's it for now. Bye, Mike.
Starting point is 00:27:04 All right. Thank you, Ray. All right, thank you Ray. All right, until next time, thank you.
