Podcast Archive - StorageReview.com - Podcast #141: Graid Delivers Enterprise RAID Functionality to AI Workloads

Episode Date: October 26, 2025

Gain insight into improving RAID performance for AI workloads with Graid SupremeRAID AE. Storagereview… The post Podcast #141: Graid Delivers Enterprise RAID Functionality to AI Workloads appeared first on StorageReview.com.

Transcript
Starting point is 00:00:00 Hey everyone and welcome. We're live on LinkedIn, which is pretty exciting, working with no net. So Kelly, be on your best behavior, please. I've got Brian Beeler here, proprietor of StorageReview.com, and my friend Kelly Osborne from Graid Technology. And Kelly, you and I have been doing a lot of work together lately; most recently we looked at your AE software and how that works for AI servers. But before we get into all of that, just set the level for who Graid is and what you guys do.
Starting point is 00:00:38 You bet. So Graid Technology is a four-year-old, some-odd startup. We were founded in Santa Clara, California, so typical Silicon Valley company. And we basically founded the company under the premise that there are problems in some old technology that's been around a long time called RAID, redundant array of independent disks. So that's nothing necessarily new. It's Reed-Solomon algorithms, striping, parity, RAID 5, 6, 10, 1, you know, all those kinds of things, and some of the newer things like erasure coding. But essentially, what's happened is NVMe drives, which are directly connected to devices
Starting point is 00:01:20 or to the motherboard, are so fast, especially with PCI-E Gen 4 and now 5 and 6 is coming, that the traditional rate technology severely bottlenecked the performance of those drives. So what we've seen is a lot of organizations end up doing things like Raid Zero Striping, which they're flying without a safety net, and maybe snapshoting or checkpointing and some other things that cause overhead. So the real problem is traditional hardware raid creates physical bottlenecks. So the physical bottleneck, when you wire a drive directly to a card, If that drive needs four PCIE lanes and the card is a 16 by 16 device,
Starting point is 00:02:03 you can really only get the throughput of those of four of those drives. So we're seeing servers with 16 drives, 24 drives, 32 drives. In fact, we recently tested with you guys a server in certain configurations that can have as many as 44 drives. Right. So if you take 44 drives times four lanes and connect that to 16, you can see the obvious giant bottleneck. Well, I mean, with Ray in the old days,
Starting point is 00:02:25 which wasn't, gosh, it doesn't seem like that long ago. I mean, you might have a server with, I guess if you had 10 or 15K spinners in there, I mean, you were looking at 22, 24 drives. I mean, you could get quite a few of them in there, but the challenges with aggregating those drives and the speeds, as you're talking about back then, they were quick back then. It's funny, I just started thinking about short-stroking 15K drives, which people would do to get every ounce of performance out of those things.
Starting point is 00:03:01 But as we got into faster flash media, and then envy me, as you say, that's really the linchpin that pushed us over the raid limits, I guess, or the over the top of the raid cliff that the Silicon just couldn't keep up anymore. So we had to do something. You guys came in with an amazing solution at probably the right inflection. point. Yeah, and the hardware rate's pretty bottlenecks pretty obvious. So when you go to servers with a lot more dense numbers of drives in them, you typically don't use a hardware card. You're going to use a software to find storage environment. And the most commonly used is probably Linux MDADM, which is the rate stack that comes at Linux. And the problem you run in there is CPU overhead. So your CPU becomes bottlenecked and consumed with dealing with infrastructure tasks instead of doing application type tasks. Some of that has to do. do with a CPU in the grand scheme of things is not particularly good at mathematics when compared to a GPU that has lots of parallel cores and
Starting point is 00:04:05 it's designed to do mathematical calculations far more rapidly because that's graphics and now AI and Bitcoin mining and all the other things that you can think of and so our team just figured out that if we create our own rate stack but we offload it from the CPU and run it on a GPU we can free up the CPU and we can do all those mathematical calculations far more quickly. And then couple that with our peer-to-peer DMA technology that allows the data to not have to flow through the card. So we can tell a, when an application asks for data, we can tell the drive where to send the data right across the motherboard. We don't have to read it in and forward it so we eliminate that
Starting point is 00:04:45 by 16 lane bottleneck problem. So we should talk about this because this is a thing that when we talk to end users blows their mind and they can't quite wrap their hands. head around. So we didn't really mention it, but grade runs off of a GPU, and it's a software stack that runs there. And the GPU, when you're thinking of, oh, my gosh, an expensive item in the server for raid management, that's not even close to the case. You guys are using an A2,000, I think, now, which by GPU standards is almost, I don't want to say free, but it's not very expensive, right? Right, right. We're talking A400, A1,000, A2,000, series. Those are the Ampeer series now in Ada.
Starting point is 00:05:28 We used to use touring cards. But these are very inexpensive cards. And they have enough horsepower for us to deliver very close to 100% of the throughput of these drives. Right. But one of the things that we've got now is a new version that... Wait, don't do the new version yet. Go back to the SPY16 thing. So I talked to so many people and they're like, okay, well, the drive can go, you know, maximum 14,000 megabytes a second or whatever, the top reading.
Starting point is 00:05:55 number is. And you and I've worked together before with Kevin and our team on putting the other boxes even more modest than the 40 or 44 drive box you can do with with Power Edge these days. But we've done work with you before with just 2024 drives. We're hitting 200 gigs a second in this box. Or now if we really wanted to juice it, I know with some of your new code and we can get into it, we could probably get 300 gig a second or maybe even a little bit more out of of a single server. So when you think about the limitations of either direct connected NVME drives or NVME drives through a raid card, traditional silicon or MDADM, as you mentioned, or through grade, how do you
Starting point is 00:06:43 communicate your ability to get 300 gig a second in a box when people like, okay, well, that just doesn't make sense because by 16 can't go that fast? Right. It's because if you have 10 drives, you have 40 lanes, so we can deliver all 40 lanes across that motherboard to the application. You're not choking on 16 because the data never flows through the GPU that our software is running on. Think of it as a traffic cop. Think of it as an intersection. We're sitting on PCIE, the memory, the CPU, and the drives are all sitting on PCIE.
Starting point is 00:07:16 So it's a peer-to-peer route architecture or bus, if you will. So we can take a read command and redirect it and tell. the data where to go. You know, the other thing you, the other kind of tradeoffs that we see, when you start getting beyond, say, 24 drives, you maybe have to run the drives at by two. So a two-you machine that holds 44 drives, those are probably going to be running it by two. So that's more of a capacity play and an IOPs play, whereas your true performance play is going
Starting point is 00:07:48 to be probably around 24 drives. If you consume too many lanes, you start sacrificing front and back end, you know, networking connections and things like that. So it's always a balancing act. Yeah, it is, but we really haven't seen in modern server design the challenges around lane allocation, the way we're seeing them now, right? Because as we go to E3S, we get these real narrow drives that are pushing capacities up over 60 terabytes in these, what are the E3S? They're seven and a half millimeter, I think, or are they nine, whatever, they're very, very skinny. And if you can get 40 of those or more into a single to use server, then you're right. We run out of lanes.
Starting point is 00:08:26 I can't get four to everything and also have room left for I.O. or anything else in the back. So it's really an interesting time for a server designer is to say, okay, well, we found a new limitation as we jam all these drives in here. We simply can't address them all at high speed, although we're seeing some versions of Power Edge specifically where they've dialed that back and said, okay, well, let's give you a version with 24 drives, but you get all four lanes,
Starting point is 00:08:54 and that's interesting, too, in that XD model. So there's so much going on from a design perspective that's really fun. One thing, though, too, before I let you get onto the new stuff, is we're talking about bigger environments, but you guys also put out a smaller environment version of this, too, right? Yeah, that's the one that's based on the A400, which is a very inexpensive card. I mean, the whole solution is under, you know, $1,000, if you will,
Starting point is 00:09:22 but that will support up to 12 drives. So that's an ideal for like a 1U server, maybe even high-end desktop, desktop AI type applications where we're starting to see even desktops that will have, you know, four to eight NVME drives. And so that's a very workable solution for that at a good price. We need that because these desktops, to your point, are coming out with the new 6,000 GPUs, the RTX pros. And I mean, the amount of work that can be done in desktops now is wild
Starting point is 00:09:57 or the GB30 desktop tower form factors. But I don't know if the vendors are really thinking about storage as much as they should as part of those designs. I hope you're right. I hope we see a bunch of amazing things. And then you can come into play there too. and help aggregate that storage. And my question there is if you're buying even the newer RTCX 6,000s are Blackwell chips, essentially,
Starting point is 00:10:24 and you're putting it into a high-end graphic workstation that's got lots of memory, CPU, and everything else, and it's got two small SSDs that are, you know, consumer grade, you're never going to get the data, you're never going to get utilization out of the GPU because it's going to be starved. The only way to get that much data into something like that is you need more than four, maybe six, eight, ten drives, because then you're striping and pushing from all those drives data into that for training and other workflows. And so, you know, I think we're going to see those workstations get ever more powerful.
Starting point is 00:10:57 Yeah, there's no doubt. I mean, we're already seeing some come in and working with the vendors on what's next. And it's going to be pretty wild, especially I think as we roll around to CES and in January, where more of those client-y-type things come to fruition. It should be pretty exciting. Okay, so we cover the small and the mainstream, but the project that we worked on with you most recently and with Dell and with Mycrime is around the AE product line.
Starting point is 00:11:26 Why don't you talk about that a little bit? Sure. So if you're building a server that's really a storage-oriented server, maybe you're running a high-performance database on that. We do a lot of work in edge environments where it's high-speed data collection, I think military applications. So you need to be able to write very quickly.
Starting point is 00:11:44 So that's great for our inexpensive, dedicated solution. But let's say you bought a Dell XC90, 680 that has eight H-100s in it. Or like our R770, you put two H-200s in it or something like that. First off, you don't want to have to go get a third GPU to put in there and sacrifice a slot. If you've already got those GPUs, what if we could make a version of our software that would run alongside your applications on one of those GPUs and that's what we did and we call it our supreme rate AI addition and it once again flexible deployment we have a whole line of GPUs all the way from an A2 through the full RTX and then all the way up to Blackwells that we support and we only
Starting point is 00:12:27 take a very small piece and then because it's an AI oriented uh software stack we've added GPU direct storage or GDSIO which is an NVIDIA framework that allows us to move or storage manufacturers in general, not just us, but it allows us to take data from a drive and move it straight to the GPS, bypassing host memory in the CPU, so it reduces latency, hop counts, and improves throughput. Once again, all to keep those beasts running, right? It's called Feed the Beast. That's been the dream for video gamers for forever, right, is how do I get more of my game loaded into that graphics memory without having to go through the whole complex to back down to storage, which is why, you know, in the gaming world, there's all these loading
Starting point is 00:13:12 scenes. You can't do that in the AI world and still be efficient. And I will say that, while Kelly and I are talking here on LinkedIn, if you've got comments, I saw, you know, a thank you show up in the chat. Feel free to contribute and I'll make sure to hold Kelly accountable and get those questions answered. But jump in anytime with your thoughts. We'd love to hear from you. So on AE, what I think it's interesting, and you mentioned it, but I think it's worth repeating, is that as we look at how flexible these servers are, we did it with Power Edge, but obviously this works with Super Micro Prolient, whatever your server of choices, the designs are getting so good that on those mainstream 2U servers with the different riser combos that they offer, as you said, we put in 2H100s, but you'd don't want to use up those other mid slots for things other than really I.O. You want to have your high speed networking there
Starting point is 00:14:12 so you can get more data into the system. You might even be able to take advantage of DPUs or other accelerators, depending on what your workloads are. And so I think it might be a little bit of an undersell to talk about the flexibility that it gives you to use grade, but that's really an important point that you don't lose a slot for this. Yes.
Starting point is 00:14:34 because you're already using a GPU, and we can take a very small piece. These GPUs are kind of ranked on what we call streaming multi-processors, which is a combination of kuda cores and tenser quarters and cache and other things. And an H-100, for example, has 144 SMs. We only consume six.
Starting point is 00:14:53 So we're taking a very small piece. And the other thing we have the ability to do is drop out. So once that GPU has its data, it may be churning away. way on its own and not really asking for anything. Right. We don't need to sit there and wait.
Starting point is 00:15:08 We can get out of the way and free those resources up to improve the training performance. And then when it starts asking for data again, we can come back alive, if you will. It's a time division multiplexing function that allows us to be more optimal in how we utilize that resource. Yeah, I think that's a fair point too, because when we're looking at it in our lab, I mean, we saw just a modest hit on the tokens per second when we were doing our inferencing work. workloads. And we've got that full report. In fact, I'll put the QR code up there. So if you want to check out the report, we have an amazing amount of charts. So if you like data, you're going to enjoy this. And then we've got the full YouTube video link there too. I'll leave that up for a second and people can check that out. But yeah, the hit was relatively light. And as you say, it's rare that you're doing a whole lot of intensive data stuff and a whole lot of intensive AI stuff at the same time, although you could. if that's what your workload calls for. Well, the other thing that I think is interesting, you know, people immediately think of AI
Starting point is 00:16:11 and they think of a large cloud full of these eight-way boxes like DGXs and these XE products and Super Micro has one and a lot of those types of environments sitting in a hermetically sealed data center are probably connecting to very large storage arrays with NFEMI over fabric and things like that. But the vast, the vast majority,
Starting point is 00:16:34 of the workloads that we're starting to get involved in are AI in the field. It's either a smaller company. They don't need an eight-wave machine, but they're going to buy something like this R770 with two hoppers or two Blackwells. And it could be an environment where it's they're doing data collection and then training and then inference all in the same machine and it all has to be self-contained. And then they're going to put a hundred of those out in every one of their field locations or stores or cell towers. You can imagine. all the places this kind of thing could go. So this becomes incredibly important to maintain performance and yet, once again, preserve that slot. Well, or just inferencing, right?
Starting point is 00:17:16 Because there are more and more models just available there where you may not have to do a whole other training. And we're talking before about the small version of grade that you offer. We're seeing, you talk about retail, but edge is a tremendous opportunity where the workload is inferencing, we were actually just scoping a project with getting these little edge boxes at wetlands for preservation research, animal research. And so you can use YOLO 11 for visual inferencing of objects, and they've got these wildlife models that are already trained.
Starting point is 00:17:54 You can just dump that thing on, and unless you're looking for something, you know, very bizarre or highly localized, then maybe you have to train, you know, that bird or that bear. or whatever, but a lot of these models out there now are so well done, and this stuff didn't exist like this even a year ago or two years ago. The stuff was out there, but not at this level to the point where you can pick it up and insert it into your organization or your server, your edge box, or whatever, and be operational pretty quickly. Oh, yeah. The more interesting one that we're also involved in.
Starting point is 00:18:33 revolves around military high performance fiber jets and we're partnering with a defense contractor that's kind of building the next generation of data collection. So you think about an F-22 on a given flight will generate a couple terabytes of data. That's using the old packages. That's using the old avionic systems and the old data collection. Well, now they're able to capture 100% of radio frequency at all times and they want to preserve that and write. It has to write really fast. You have to have data protection. If you have a drive fail while that mission is happening, you still need to be able to maintain right performance because no one's going to crawl back there and swap that drive out. So you have to still be able to protect the data and write it quickly. They'll swap the drive
Starting point is 00:19:17 when the mission's over and they land. And that's in a ruggedized environment. So it's, and that's just one example of another, you know, scenario that we're definitely participating in. Well, I'm in Cincinnati, so I think about things in the consumer package goods version. So we got Kroger, for instance. And every single one of these Kroger's, there's an IT stack, and they're being tasked with doing more inferencing there as well in terms of stock management. Are these little robots that are going around picking the orders for the online services? How are they working? Security, falls, injuries.
Starting point is 00:19:55 I mean, they're doing all that stuff. and they don't have IT staff sitting there. So you talk about the redundancy that Rade offers, and not even yours, just any Rade. I had this debate in the last two weeks with somebody about, is Rade still relevant today? And I said, it's more relevant than it's ever been, because especially with any of these systems that have a GPU in them
Starting point is 00:20:18 that are very expensive, that need to operate to create value, you can't have storage going down and losing either the progress or your ability to run these workloads at the edge. And that's true the main data center as well, but I think it's more painful at the edge with lack of resources. Yeah, and not all dry failures are dry failures. So there's a whole data integrity portion to this, and one of our capabilities is the ability to recover from a transient error.
Starting point is 00:20:49 So you might hear the term URE or NRE, non-recoverable error, unrecoverable read error sometimes things happen to a drive and we can retry the read a couple of times and if it keeps coming back bad something's happened in that cell and it doesn't mean the drive's bad these drives have over over provisioning and so if the cells determine bad we'll write the data somewhere else but sometimes the drive is sending us bad data and what we have to do is go read the striped parity from the other drives recreate that then we'll write it somewhere else Before that, you could have a really bad day where, uh-oh, my system crashed. Now I re-format the drive, and it's fine.
Starting point is 00:21:31 It wasn't actually failed. And so we've really built on a lot of resilient features in terms of our retry counts and that ability to recover on the fly from that kind of a transient error. So data integrity, not just failure protection. So there's one other thing that I want to point out that we found is we were working through this, and that is the value of large namespaces. So that's another thing, as data gets larger, that becomes more important that in other rate methodologies,
Starting point is 00:22:03 depending on the number of drives you can get behind the card or the software, you can run into some challenges, performance at scale with a large namespace. And as I mentioned, with those micron drives at 61 terabytes, the solidime drives at 122 terabytes, and now we're talking about 256, in that E2 long form factor. I mean, so much is going on there.
Starting point is 00:22:27 What are you seeing when you talk to your customers about that particular part of the value chain? Maybe it's too obscure, and I just think it's cool for an impractical reason. But I think when we're looking at massive data sets, that that namespace thing is actually pretty substantial. It is, and we're seeing even the need to support 32, 64, 96, 128 drives.
Starting point is 00:22:54 And that may not be drives all inside the host. It could be NB&B over fabric drives coming in across the network as raw devices. So there are a lot of companies now that build J-Boffs, just a bunch of flash, and they have the ability to present those drives across Rocky or some RDMA across fabric. And so now your server thinks it's got 24 drives.
Starting point is 00:23:19 Well, if you have four of those, it now thinks it's got 96 drives. So the problem is most of those J-Bops, they're not full servers and they're not full storage arrays with built-in control, you know, they'll have hardware redundancy, but not raid protection. Right. So you have to still go back to software raid or some other way to protect the data that you're striping across that huge set. And that's why we're continually working to improve things like how many drives we support, adding new features like your racial coding, so you're not stuck with rigid raid group. per se so we're you know those are all upcoming things that I won't talk more about but just a little snapshot of what you can expect from us going forward well I mean when you start talking about some of these technologies you get a little bit more like storage array adjacent I mean I don't know you probably don't want to talk about the grand vision of of grade but yeah I mean you can if you
Starting point is 00:24:14 get to these jaboffs and give them the ability to connect to a host server you get to decost that element anyway a little bit because you don't need all the x86 cores and all the other stuff inside there you can really condense and make some some pretty neat storage servers there just from a design standpoint me talking about 44 I mean you could you could make some real heavy storage boxes sure and we can couple that with something like a parallel file system like saff or bgFS and luster and those kinds of things that we support where we can add a lot of value or the other thing we we can also do our own envy me over fabric target offload so you could build you could build a storage server with 44 drives and we can then create virtual
Starting point is 00:25:02 this that we off target offload to various hosts maybe you make that drive available to an ingest server and then you move it over here to a training server make it part of a workflow and because it's envying me of her fabric will be far faster than traditional, you know, you know, SMB or NFS or, you know, other types of protocols. Now, the traditional enterprise class array that's going to have a lot of snapshot features and maybe many other capabilities and multi-protocol support, that's not necessarily where we're going, but we do have customers that are going to build very high-performance, low-cost storage backends just for these very unique workloads.
Starting point is 00:25:43 Well, so talk about that a little bit more, too, because we focused in on a lot of AI in these premium emerging workloads. You talked about the military edge use cases, but where else are you seeing success? Because I don't want to pigeonhole you into people thinking, well, gosh, I only need this if I need a bazillion iops. That's not necessarily the case either. Talk a little bit more about some of the mainstream use cases. Sure, large application like high performance database, high frequency trading databases, so you think about in-memory compute like Redis, maybe Aerospace, we've got customers running Oracle. We've got customers running Splunk. So Splunk's taking a huge amount of data from a lot of sensors and you're doing that for log analytics and things. Being able to write that quickly becomes important so you don't drop things or lose things. video streaming, so getting involved in entertainment workflows where you want to be able to stream a large or multiple 8K streams. And so the server guys might not like this, but if we can write 10 streams because we're 10 times faster than the traditional solution,
Starting point is 00:26:59 you might be able to reduce how many servers you need to actually handle that kind of a security environment, for example. Well, it's a real challenge. I mean, because they're not, it's not like it's all streamed out of one place and one data center in the cloud. I mean, these things are set up in local data centers all over the world to get that content closer to those that are consuming it, right? So, you know, efficiency, rack efficiency there. You talked about high frequency trading. I mean, when you're in the data center, like the NYSE, for instance,
Starting point is 00:27:30 you're extremely limited in rack and power. So it's not like you could just throw whatever you want in there and your demands are increasing so that density, and performance per rack you starts to become really interesting too. So we talked about it a moment ago, and Samar is asking a question that we should revisit, is the throughput on data integrity validation tasks limited by the lanes to the GPU?
Starting point is 00:27:56 We talked about lanes a little bit, but just revisit that for a second. So I'm not an engineer, so I want to make sure I get this right. It's not. think where are the lanes to the GPU can come into play as if we're actually doing a rebuild? Because in that in that world, we have to read the striped parity into the GPU, regenerate the data, and then write it back out. So, you know, that can impact things like rebuilding. The data integrity, if a bad URE comes in, that's read from the striped parity,
Starting point is 00:28:32 and obviously we'll add some latency during that time, but it's done real time and the application typically doesn't even notice. And since it's usually only a small chunk of data, it's not a massive problem with lanes and things, right? Or the lanes. Yeah, and I think that gets it. Well, with Gen 5 now, you're even can be quicker there. And you talked at the very beginning about Gen 6.
Starting point is 00:28:59 I can't believe how long it took to get from Gen 3 to 4. But now that we're at 5, we're going to hit 6 really quickly. probably next year. So what do you guys think about from a technology standpoint, whether it's server design, Gen 6, Flash, your engineers have to keep
Starting point is 00:29:18 staying out in front of this. And I guess because your software defined by nature, you're a little more flexible in how you can do that. But talk about the technology path challenges that you have. Ironically,
Starting point is 00:29:33 our solution in a PCI-E-Gen server with 24 drives, if you took that actual GPU in our software out and physically moved it to a Gen 5 server, even though the card we have is a Gen 3 or Gen 4 card, you will see a doubling of performance. We don't improve the performance of NBME. We get out of the way of the NBME to deliver its performance to the application. So if you've got us running on a Gen 3 by 8 card or a Gen 4 by 8 card, and yet that GPU, we're still delivering very close to 100% of the theoretical performance
Starting point is 00:30:06 of those drives at rate zero that proves that we're able to do that so we expect with gen 6 coming i know you tested a gen 6 drive already and got close to 28 gigabytes a second um on reed that's four lanes that's that's incredibly fast you know gen 4 drive it would take four of gen 4 drives to do that so it just goes to show you that flash is already far faster than our than our interconnects So, you know, there's always something bottlenecking your performance somewhere. And the good news is because we're not in the data path, when you go to six, you're going to see a much higher level of performance, just like we saw with four to five. Well, we're close to, well, I think we're probably over our half hour target here,
Starting point is 00:30:54 but I do want to leave people with a path of how do they engage with grade? because I know you're in the Dell catalog now, so Dell customers can order and ask for the product through there, but what's the best way for people to interact with you to get a P-O-C or a demo or see slides, God forbid, they want to do that? I forbid the slides from this particular broadcast, but what's the best way for people to engage with Great? Just come to GrideTech.com, G-R-A-I-D-E-C-H-com,
Starting point is 00:31:28 and you can say, please contact me. We're happy to do that. We have lots of partners. Dell is a great partner. We also have an OEM relationship with Super Micro. We are talking to many other server manufacturers that you might imagine. And so we also are available through the traditional channel. So if you're a gamer at home and you want one of these, you can call CDW and order it today or wherever you buy your IT stuff, we even sell on Amazon.
Starting point is 00:31:55 Yeah, I was about to say, I think you're on Amazon. I heard from Liff today. Yeah, yeah. No, but generally it's a more prescriptive sell than that. We're here to help solve a problem and we have a solution. And, you know, if you're buying a server that's got, you know, a bunch of spinning disk in it and two SSDs, it's probably not our place for us. You're probably still going to need a traditional hardware rate controller that has caching and things like that.
Starting point is 00:32:20 So it's just understanding what you're trying to do, reach out to us. We'll work with you to determine the best. fit and the best approach and how to design it and implement it in your environment. And I'll give the shameless plug again for there's the QR code for our paper that we did that has all the data and then the video where Kevin breaks down some of his favorite things about grade and server design and all that sort of thing. And then also we've got a number of events coming up. Our whole team, most of our whole team will be in St. Louis, I think sunny St. Louis for Supercompute in November. Kelly, I know you guys will be a
Starting point is 00:32:57 as well anywhere else this year people can run you down live and in person because we are Nvidia inception startup partners we're one of 25,000 companies in this program but by invite we now have a booth at GTC in Washington we also had a booth or a kiosk in the inception partner pavilion at GTC in San Jose this one in DC last year was called the AI Summit now it's called GTC Washington it's at the Washington Convention Center the week of the 27th And we have a full 10 by 10 booth. It's not a kiosk.
Starting point is 00:33:30 So, you know, we have a great partnership with NVIDIA and many other companies out there. And we do that because it's the only way to solve customer problems. There isn't any one place to get it all done. So we partner with a lot of people. Well, we've enjoyed the partnership with you guys, ranging from getting great performance out of QLC drives to getting as many gigabytes a second out of a single server to now how to use a little bit of a GPU and a server to accomplish both your AI tasks and data integrity, single namespace, all the great things that you provide there. I'm enthusiastic for what we do next. So let's find another project and see if we
Starting point is 00:34:16 can't break some more records. Yeah, we appreciate the partnership we've had with you. And I think it's kind of fun. Every once in a while, you open up on your, you know, unpackaging unboxing and Kevin's playing with all these toys and you get all these drives in and then the only way you can actually see how fast they are is to use our product because if you plug it into a traditional product you're going to only see the bottleneck of that device you'll never really see the true performance of those devices so it's kind of fun when we see you using us to test other things that's pretty yeah it's absolutely true that was the first line of the video if you want the fastest raid for your NVME storage there is no other option than great so there you have it thanks
Starting point is 00:34:54 Thanks for tuning in, everyone. Hope you appreciated this LinkedIn format. We're gonna keep trying new things. And Kelly, thanks for participating in this re-inogural LinkedIn event. Awesome, appreciate it.
