Storage Developer Conference - #174: Computational Storage Update from the Working Group

Episode Date: August 16, 2022

...

Transcript
Starting point is 00:00:00 Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org slash podcasts. You are listening to SDC Podcast, episode number 174. Hello, everyone. My name is Jason Molgaard, and I'm a Solutions Architect at ARM, and I'm one of the co-chairs of the Computational Storage Technical Working Group. And hi, I'm Scott Shadley. I'm the VP of Marketing at NGD Systems, and I have the pleasure of co-chairing with Jason. And today we're here to present an update on the Computational Storage Technical Working Group and its impacts and what's coming out
Starting point is 00:01:05 as far as content and information that we can share with you guys. So for our agenda today, I'm gonna cover the first couple of topics here on the updates of what's going on with the membership in the market, talk a little bit about the work that's currently out for public review,
Starting point is 00:01:18 and then I'm gonna hand it over to Jason. Who's going to- And I will talk about the status of the architecture and the software API, and then I'll hand it back to Scott. And I will tell you where we're going from here. So sit back and enjoy the next few minutes. We'll keep it short and sweet and we'll enjoy the show. So for the first part of it, we want to talk about the growth and evolution of the TWG, and the best way to do that is to highlight the work that's ongoing within the TWG itself. For this particular one, continuing the growth, we now have 51 member companies. So we've actually gained and lost a few over the course of the last year.
Starting point is 00:01:57 We're up to 261 individuals that are monitoring or participating in the events. And on a weekly average, we have between 20 to 30 people that are very active in the group, as well as several key contributors. You can see here the logos of all of the different organizations that are participating in the group. And one of the beautiful parts about this is it's not all just the vendors of the products, but it's also consumers of the products and even end customers, if you will. We have some very notable new entrants this year with folks like Los Alamos National Labs as a participant, as well as companies like Vicinity that put this product into the marketplace. We've been doing a lot of work with our companion group, the CS-SIG,
Starting point is 00:02:34 or Computational Storage Special Interest Group. For those that aren't totally familiar with SNIA, we have technical working groups that do all the actual architectural design and standards, and we have marketing arms, in this case the special interest group for the computational storage working group. We also have a lot of collaboration going on in the marketplace with other organizations, including NVMe. So while we're working on the high-level architecture that we'll get into a little bit in these slides and in some of the other presentations being shared today, the NVMe group is actually working on an implementation in their particular protocol for deploying these solutions. And of course, there will be a session shared by the co-chairs of that as part of this
Starting point is 00:03:13 event. Since these are all released at the same time, we want to make sure that you know of the different presentations that are available and can participate in those. As far as what's going on in the marketplace, it's always good to see the impacts of what the technical working groups are doing out in the market. Our friends over at Computer Weekly have recently done a 13-part developer series where they talk to not only SNIA and member companies of SNIA, but also potential consumers of computational storage, and put together quite a series of articles that are available on their website related to that. We also have been blessed with the opportunity to join the hype cycles from Gartner as well. Several other analysts have their
Starting point is 00:03:50 unique graphics that represent the growth of this particular technology. Our friends at GigaOm, for example, are about to release their next report on computational storage. One thing that's unique about this is that we have been able to secure what's called a Cool Vendor award for several of the vendors; 2018 was the last time one was presented by Gartner, and when they restarted it in 2021, that Cool Vendor award was given to another vendor in the working group. So as you can see, computational storage is truly taking off. One other aspect about that is, as you can see, there are two separate hype cycles shown in the center of the screen. Generally speaking, a storage product would end up on the storage hype cycle, but because we are a computational storage product and architecture, you see that we actually arrived on the compute and data center infrastructure aspects of the
Starting point is 00:04:35 hype cycle. So this is a unique opportunity to enter a couple of different spaces. As we continue moving forward, we also have a lot of sponsored efforts from both members and consumers of this technology. The EMC3 consortium that's sponsored by Los Alamos National Labs is participating with several vendors in the computational storage space that are members of the TWG, as Los Alamos is as well. And if you have the opportunity, please check out the session from Brad Settlemyer; he's presenting as part of this event as well. There have been publications at other events: VMworld last year, in 2020, presented a computational storage project being worked on with one of the vendors in the organization as well. And then Dell has published several articles, including a blog that's highlighted here, talking about how they plan to utilize this technology as we move forward. So not only is the technical work doing great things in the market, the
Starting point is 00:05:29 marketing arm, the CS SIG, as well as all of the member companies, are doing a great job of promoting and pushing the technology and efforts forward. Now as we look at what the technical working group has been up to, we've been working on quite a bit of publicly available content at this point. So we have in the TWG what we've classified as the Computational Storage Architecture and Programming Model. We released this last year at a 0.5 revision, we've made significant updates to that document, and we've got it to what we classify as a 0.8. We've got a few items left that need to be managed within that particular document, including things like security that need to be addressed as well. But that is out there and is available for public review. If you download a copy of these slides, you can click the hyperlink in the deck and it will take you to the appropriate document.
Starting point is 00:06:12 Now, as we've continued to grow this architecture document, we've also realized that there's a need to work on a computational storage API for certain implementations of the technology. And in doing so, we've released a 0.5 draft document of that as well. Now, one of the key aspects of this is that the TWG members and the members of SNIA do a significant amount of work to put these documents together. However, it's very important that we get feedback from participants of this event and the general public consumers of these technologies to make sure that what we're putting together is less alphabet soup
Starting point is 00:06:45 and more implementation friendly. So there are sessions that are going to be presented by TWG members: both Bill Martin and Oscar Pinto have additional sessions, and we'll highlight those further as we go through this particular presentation. Now, one of the last things I'm going to leave you with before I hand it back over to Jason is the concept of the use cases. We have an annex within the architecture document that talks about different ways that current vendors and consumers are looking at deploying this technology. You can see here a list of a couple of those particular items. If you'd like to go out, read those, and have inputs or thoughts on them, please provide that feedback through the SNIA feedback portal. And if you'd like further conversation on that, that is available
Starting point is 00:07:22 online as well. The Compute, Memory, and Storage Initiative has a SNIA on Storage blog that has recently released several updates related to the computational storage work, including the fun little graphic here that you can find in the latest blog on that particular site. So we're very excited about what's been going on in the market and what's been going on with the work within the group over the last year. And now I'm going to hand it over to Jason and he's going to give you a little bit more details about the technical side of what's been going on. Yeah, great. Thanks, Scott. So yeah, let's dive into those details about what we've done to the CS architecture document. So if we go to the next slide, we can see that part of the reason there's been a lot of change is as a result of all the
Starting point is 00:08:07 members that have been added. As Scott indicated, we've grown quite a bit. Yeah, we lost a few along the way, as with anything, but with a significantly increased number of members, there's been more understanding about what it is that we're trying to create, and new ideas have been brought forward to the group in terms of clarifications that are required and new details that needed to be explained. So, highlighting some of the key changes: there's been a renaming, and the internal computational storage processor is now a computational storage engine. We've introduced the concept of a computational storage engine environment, and so that is now also present in the document. There are new architectural elements of a resource repository
Starting point is 00:08:55 that can contain those computational storage engine environments and the computational storage function. And then the discovery and configuration flow has been documented. And so all of these changes are in the spec, and I definitely recommend that you go get a copy and download it and take a look and get into the details. So if we go to the next slide, this slide gives us a summary of some of the dictionary changes
Starting point is 00:09:23 that have been made. And these are in the SNIA dictionary that's available online, so you can certainly go take a look at that and see what all is available. So we have removed a few terms: the CSS, the FCSS, and the PCSS. Those are gone, so we certainly don't want to spend any time focusing on what they were and why we had them. Instead, we've added in some new terms here, and I kind of already mentioned a few of them. So we've got the computational storage resource, computational storage engine,
Starting point is 00:10:00 computational storage function. These are in addition to the terms that were already there, like computational storage device, processor, drive, and array. And so what we're trying to highlight here in this diagram is how all of these terms interrelate with one another and what components are contained within those particular definitions. The block diagram then shows where those definitions and those blocks are actually used in the architecture document. So this is a snippet from the architecture document itself, and again, if you don't have a copy, then please go download one for yourself.
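To make the relationships between these terms concrete, here is a minimal sketch in C of how the pieces nest. The struct and field names are invented for illustration and are not taken from the architecture document or the SNIA dictionary; they only mirror the containment described above: a computational storage device exposes resources, those resources include engines and a resource repository, the repository holds engine environments and functions, and an engine runs a function inside an activated environment.

```c
/* Purely illustrative sketch of how the terms nest. These names are
 * NOT from the SNIA documents; they only model the containment
 * relationships described in the talk. */

#define MAX_ITEMS 8

/* A Computational Storage Function (CSF): e.g. compression, filtering. */
struct cs_function {
    const char *name;
};

/* A Computational Storage Engine Environment (CSEE): the operating
 * environment (container image, eBPF runtime, FPGA bitstream, ...)
 * that a CSF needs in order to run. */
struct cs_engine_environment {
    const char *name;
    struct cs_function *functions[MAX_ITEMS];  /* CSFs usable in this CSEE */
};

/* A Computational Storage Engine (CSE): the component that is
 * programmed with a CSEE and executes CSFs. */
struct cs_engine {
    struct cs_engine_environment *active_env;  /* currently activated CSEE */
};

/* The resource repository holds CSEEs and CSFs that exist on the
 * device but are not yet activated on an engine. */
struct cs_resource_repository {
    struct cs_engine_environment *environments[MAX_ITEMS];
    struct cs_function *functions[MAX_ITEMS];
};

/* Computational Storage Resources (CSR) inside a device: engines,
 * the repository, and memory for function data. */
struct cs_resources {
    struct cs_engine engines[MAX_ITEMS];
    struct cs_resource_repository repository;
    void *function_data_memory;
};

/* A computational storage device: a processor (CSP), drive (CSD),
 * or array (CSA) that exposes CSRs to the host. */
struct cs_device {
    const char *model;
    struct cs_resources resources;
};
```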
Starting point is 00:10:45 So as I mentioned, one of the next things that we worked on was the discovery process. We tried to really refine that and spell out how we would find all these computational storage resources and devices and so forth. So we've got an example of a diagram here that's in the architecture document; there are others. And Scott and I are not going to get into details today on exactly how all of these work, but there is a presentation by our colleague Bill Martin who is going to get into those details in another session. And so we encourage you to go over and take a look at that session after you've viewed this one, and you can gain a little deeper understanding of all of these new terms, the discovery flow, and how everything has changed in the architecture document.
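As a rough illustration of the ordering described here, the following self-contained C mock walks a host through that sequence against a pretend device: discover the device, enumerate what its resource repository offers, activate an environment on an engine, and then activate a function. None of the names below are real APIs; the actual discovery and configuration flow is defined in the architecture document (and, for NVMe devices, by the NVMe computational storage work).

```c
/* A self-contained mock of the discovery/configuration sequence:
 * find the device, enumerate what the resource repository offers,
 * activate an environment on the engine, then activate a function.
 * Nothing here is a real API; it only illustrates the ordering. */

#include <stdio.h>

struct mock_csx {
    const char *name;
    const char *repo_environments[2];   /* CSEEs sitting in the repository */
    const char *repo_functions[2];      /* CSFs sitting in the repository  */
    const char *active_environment;     /* CSEE activated on the engine    */
    const char *active_function;        /* CSF activated in that CSEE      */
};

static void discover_and_configure(struct mock_csx *dev)
{
    printf("Discovered device: %s\n", dev->name);

    /* Enumerate what the resource repository can offer. */
    for (int i = 0; i < 2; i++)
        printf("  repository CSEE: %s\n", dev->repo_environments[i]);
    for (int i = 0; i < 2; i++)
        printf("  repository CSF:  %s\n", dev->repo_functions[i]);

    /* Activate an environment on the engine, then a function within it. */
    dev->active_environment = dev->repo_environments[0];
    dev->active_function    = dev->repo_functions[0];
    printf("  activated CSEE '%s' running CSF '%s'\n",
           dev->active_environment, dev->active_function);
}

int main(void)
{
    struct mock_csx dev = {
        .name              = "example-csd",
        .repo_environments = { "container-runtime", "ebpf-runtime" },
        .repo_functions    = { "decompress", "filter" },
    };
    discover_and_configure(&dev);
    return 0;
}
```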
Starting point is 00:11:52 So what about the software? Let's take a look at what's happened there and some of the changes. And actually, what have we created? Because a year ago we didn't have this API available for public review, but now we do, and we definitely welcome your feedback and input, as Scott indicated. So essentially this new API document is intended to be a proposed application programming interface for computational storage drives. Again, if you don't have a copy, please go download one, take a look, read it, and provide your feedback. But essentially, if we're all using a standardized API, a standardized way of providing that data, then it's just going to make adoption a lot easier and more straightforward, and give end users the ability to switch from one vendor's product to another, leveraging the same discovery and software and so forth. So again, we're not going to get into the details of that API; there's a lot of detail in the document as to what's being covered. But there's another session that we'll reference you to by Oscar Pinto, who's also our colleague and participates actively in our SNIA discussions about this API. So I encourage you to go view that presentation after this one as well, and you can learn quite a bit of detail about what's going on with the API and how it works.
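To show why a standardized API matters for portability, here is a small, purely hypothetical C sketch of what host code could look like against such an interface. The function names and the /dev/csx0 path are placeholders invented for this example, not identifiers from the SNIA draft API; the point is simply that the same host program could run unchanged on any vendor's drive that implements the standard interface.

```c
/* Illustrative only: host code against a vendor-neutral computational
 * storage API. All names below are placeholders for this sketch, not
 * the identifiers defined in the SNIA draft; see the API document for
 * the real interface. */

#include <stdio.h>
#include <stddef.h>

typedef struct { const char *path; } cs_dev;
typedef struct { const char *name; } cs_func;

/* Placeholder "library", stubbed out so the sketch compiles and runs. */
static cs_dev  cs_open(const char *path)                { return (cs_dev){ path }; }
static cs_func cs_get_function(cs_dev d, const char *n) { (void)d; return (cs_func){ n }; }
static int     cs_execute(cs_func f, size_t lba, size_t nblocks,
                          void *out, size_t out_len)
{
    (void)lba; (void)nblocks; (void)out; (void)out_len;
    printf("device ran '%s' next to the data\n", f.name);
    return 0;
}

/* Host application: identical regardless of which vendor's drive is present. */
int main(void)
{
    char result[4096];

    cs_dev  dev = cs_open("/dev/csx0");               /* hypothetical device path   */
    cs_func fn  = cs_get_function(dev, "decompress"); /* a function the drive offers */

    if (cs_execute(fn, 0, 256, result, sizeof(result)) == 0)
        printf("result copied back into the host buffer\n");
    return 0;
}
```

In this sketch the "library" half is stubbed so the example runs on its own; in practice that layer would be supplied by the device vendor or a common library sitting beneath the standardized API, which is exactly what makes the host side portable.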
Starting point is 00:13:07 And so with that, let me turn it back over to Scott, and he is going to take us to what's happening in the future. Thanks, Jason. And again, it's a great chance to spend a little bit of time in the different sessions that are taking place at the event today, so be sure to follow through them and see what you can get out of it. Hopefully it gives you great insight into what's happening within the market and the efforts. So we do know that we have to get beyond this and get to the next stages. We need to get to a fully released standard. We need to make sure that
Starting point is 00:13:49 we have proper documentation of lots of different ways to use this technology. So I use this graphic; I love this image of a state-of-the-art aircraft breaking the sound barrier, and I see that as kind of the junction where we are with computational storage in the market and with the efforts that we're doing here within SNIA. One of the biggest things we do know is that security has become a very serious topic, and we are doing a lot of collaborative work with the Security TWG within SNIA, which is driving a lot of the different standards within all of the different ways you can use technology across compute, memory, and storage, because we are doing all of those within our devices. So when you create new places to do work, you create new places for potential security concerns. And so that is one of the big focuses of the ongoing work we have right now.
Starting point is 00:14:36 We want to continue to see new ways to use this technology. Again, as vendors and consumers of this and participants in the working group, we can sometimes miss opportunities to get new insights. So we look to you as the viewers, and as members of SNIA and/or your companies, to give us some ways to look at how to deploy these particular products. So there are continued market growth opportunities for this technology and use cases where we're engaged heavily as both members and/or companies to continue to drive the work forward. So one of the next great things within what we can do for you is to have you help us. There's a great opportunity to join the SNIA efforts here today. You can come in as a member company with voting or non-voting rights, do what you like and participate, and monitor and pursue follow-ups on all of this great information that's out there. The NVMe Working Group also does have a
Starting point is 00:15:31 task force or task group that's been putting in a lot of effort. You're going to get a great update within the confines of this event from Kim Malone and Stephen Bates, who are members of both the TWG and the NVMe Working Group, as are Jason and I, on the opportunity to pursue this technology as we move forward. And so what we want to leave you with is the fact that there are a little over 12 sessions, including a keynote that you've probably already listened to, that highlight and reference computational storage and the technology that we're engaged with here today. We won't go through and read them all; they're here on the screen, and of course, they're on your agenda. So again, thanks for listening to our presentation. We look forward to seeing you all in person at the next event, hopefully in 2022.
Starting point is 00:16:15 Thanks for listening. If you have questions about the material presented in this podcast, be sure and join our developers mailing list by sending an email to developers-subscribe at snia.org. Here you can ask questions and discuss this topic further with your peers in the storage developer community. For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.
