Storage Developer Conference - #193: Computational Storage APIs

Episode Date: June 20, 2023

...

Transcript
Starting point is 00:00:00 Hello, this is Bill Martin, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developers Conference. The link to the slides is available in the show notes at snia.org/podcasts. You are listening to SDC Podcast Episode 193. My name is Oscar Pinto. I'm going to talk about computational storage with respect to the APIs and how you can program the device. So in our agenda today, we'll touch on the computational storage APIs.
Starting point is 00:00:57 We'll touch on the programming model, specifically how you can discover computational storage resources, and how you can configure them. And once you have them configured, how you can use them by discovering the specific CSFs. I think we had some questions on this, so we can walk through that, and also how you can execute that from an application perspective. And with that, we'll also touch on a programming example.
Starting point is 00:01:27 And we will cover a little bit on how these APIs and NVMe work together. Now, there will be more in detail that Bill will cover later on, but this is just an example from the API perspective. And lastly, we will conclude this here. So let's start with the CS APIs. Some of this material you may have seen before.
Starting point is 00:01:55 I apologize for that. But I want to say that this is a work in progress. And whatever you see here is subject to change. So from a SNIA perspective, what we thought was we develop a set of common APIs that work across different types of devices. Jason and Scott touched on CSDs, CSPs, and CSAs, so basically different ways to look at a computational storage device. So we are saying we have one set of APIs that work across all these three types of devices. Now, the APIs are abstracted and they're agnostic to the type of hardware.
Starting point is 00:02:38 So they hide the device-specific details. They hide how they're connected, whether they're connected local, remote, or the specific device type. That could be FPGA type, it could be an accelerator, it could be NVMe type storage, CSD, and so forth. So what the APIs try to achieve is don't expose the device specific details, but rather keep it agnostic to a level that the same set of APIs apply to all of them.
Starting point is 00:03:08 Now, that abstraction covers different aspects of the device, basically discovery, how we discover the device, how we go about accessing it. Memory, I think there's a question on memory. So the way we have defined memory is memory could be mapped. That is, it is visible in the host address space, something typical of how FPGAs and GPUs do it. Or it could be not mapped. And that's what we are trying to do on the NVMe side, where there is device memory, but it is not mapped to the host virtual address space. And then we kind of define how you can do near storage access, that is, how data could be moved from storage into that memory that is close to compute, and how we can facilitate data movement within the device
Starting point is 00:04:06 so that it doesn't cross the device boundary, which is where we get one of the benefits of computational storage. We also have a definition for how you could copy data from that device memory. That is, copy the data between host and device or between device memories. We also have definitions of how you could download CSFs or the functions and execute those functions and then so forth. One of the benefits you get with the abstracted interfaces, it completely hides any vendor-specific implementations that you may have. So totally agnostic to a level that you could do
Starting point is 00:04:51 whatever you want within your device and your device stack. The way we do it is we have an extensible interface that is covered by what we call as plugins. And the plugins is basically a mapping layer between the APIs and your specific device stack. So you can think about the APIs as basically a mapping exercise between an abstracted interface to your specific device. And lastly, the APIs are supposed to be OS agnostic. And that is they're not specifically tied down to a specific OS. So you could implement this in whichever flavor of OS you want.
Starting point is 00:05:32 Another thing that I should mention here is the APIs have also been crafted in a way that they could work with the traditional device stack that you see, that you have a classic user space library and a kernel device driver. So you could work in this model. You could work in an all user space usage model where today you have drivers in user space.
Starting point is 00:05:56 Or you could work also in all this working in kernel space. There may be one or two things that you may have to compromise on the interface type, but overall, the APIs have been defined in a way that they could work in all these three configuration models. So what we're seeing on the figure on the right is basically covering the API library, a plug-in interface, maybe a few set of plug-ins, depending on what device you have and the type of features that you had seen in the past that can host one or more CSFs,
Starting point is 00:06:51 computational storage functions, and there are some examples that are shown in this figure. So we have a spec out that is up for public review. It's a.8 spec. Please make a note of that URL, sneer.org, public review, if you're online. Just have a look at it so you can get access to the latest spec. We had a.5 as the last public review,
Starting point is 00:07:21 and since then I'd like to give you an update on what we have done. So we worked on how you can discover the compute resources that the device comes with. And with that, we worked on the type of APIs, how we can query and how we can configure those resources. We also simplified that whole model on how you look at the compute resources by themselves to make it simpler for configuration
Starting point is 00:07:56 so that the practical usage of it is your execute and runtime. And that way we kind of separated how you discover and configure and how you execute them. So a couple of changes since then on the configuration of the resource usage on how you download and configure your CSFs, that is your functions, and also how you can discover these functions once you're connected to the device so you can address that particular function by, let's say, an ID and then run that specific CSF.
Starting point is 00:08:33 So these are some of the changes that have been adopted since the last update. One of the things, as you see in the last line, is the CSF. We've also defined CSFs to have characteristics wherein you could choose, if you have more than one CSF of the same type, by its performance or by its power saving. That is, you may have one implemented, let's say, in a low-power embedded CPU, or you could have one written in probably an ASIC or an FPGA and so forth. So you can have the choice if you have more than one CSF to discover it by its characteristics and then choose that for your execution.
Starting point is 00:09:20 Totally depends on what is the type of environment you want. Okay. So this is a brief overview of the APIs that we have defined. This is not the exhaustive list, but this is just some of the key ones that I touched on earlier and what they basically do. So with that, let's get to the programming model. So I'd like to define the programming model in five steps. The first one being you discover your resources.
Starting point is 00:09:54 So this is basically you're discovering your CSX. What are the different resources that you have? And you walk down that list, and then you figure out what exactly is the type of device you have. The second one is once you have discovered your resources, you want to configure them, and you want to configure them in a specific way. It could be for a particular user,
Starting point is 00:10:17 or it could be more than one user. Maybe you have a multi-tenant environment of some type. So you have the ability to configure your device to that particular usage. Once you have configured your resources, specifically your environment, for your execution usage, then the next thing you would want to do
Starting point is 00:10:39 is configure your CSFs. So CSFs, by by default are not activated. And I think we touched on this in an earlier presentation that you need to activate your CSFs to be usable. So we have a separate step for that as part of configuration. So on the bottom, what you see is these are, we term these as privileged operations. That is, you don't do this every time you want to execute. You do it once, and once it is done, you would go to the next step, which is your normal operation, which is discover the ones that have been activated, and then you go and execute those CSS. I've also shown in the last line that steps one and two may be pre-configured.
Starting point is 00:11:29 That is, the manufacturer would have them built in, that you don't need to make any of these changes. They come pre-configured, and they are in a certain state that you cannot change. They're already configured. So we can say it's a fixed state. In addition, the manufacturer may expose some of the CSFs as always activated, and that is possible, so in which case steps one and two and three
Starting point is 00:11:57 may be pre-configured options. And we may probably see in the early devices that come out in the market that you have a pre-configured device with a fixed functionality, be it for storage services, be it for data analytics and so forth. So this would be the typical programming model that we will touch on. And we will pick each one of these and walk through how we can go through this programming model using the APIs. We have touched on this in detail with Jason, but I just want to cover this from a different point of view.
Starting point is 00:12:38 So I want you to look at this figure, and if you remove those boxes that are highlighted as storage, it becomes your CSP. You add the storage in the device, it becomes your CSD as it is shown. And if you add additional control software and so forth, and maybe some additional storage that may be part of the same device
Starting point is 00:13:03 or it may be distributed, it becomes a CSA, the storage area. I just wanted to distinguish this as this is a starting point on the device that you have at hand and how you discover it. So let's touch on CSFs. So with CSFs, what you have on the top right is the basic definition as per the spec. But CSFs in general, we can have two types. One is they are pre-installed by the manufacturer. That is, they're fixed. You cannot remove them. You cannot unload them. And also,
Starting point is 00:13:57 they could be activated by default, but some manufacturers may give you the ability to activate and deactivate. The reason you have this ability of activating and deactivating is your device is only of a fixed size, and whenever a function is activated, it takes some resources on the device, be it some memory, be it some scratch space, and so forth. So you cannot, let's say you have downloaded multiple CSFs, and it's possible that you cannot execute all of them. So you may have to activate some, or you may have to deactivate some, but you may not be able to activate and use all of them at the same time. And that's the reason you have this activate and deactivate,
Starting point is 00:14:33 so you can use the resources accordingly. And lastly, the CSF on the fixed part, the manufacturer part, they would be fixed. That is, they have a certain requirement on what they can do, and that's all you get. Whereas in the downloaded part, wherein CSFs can be downloaded to the host, when you download, they go into what we call in SNIA terms
Starting point is 00:15:03 as the repository. So once they are in the repository, they are in a state that they cannot be used, but they can only be configured to be used. That is, you have to walk through the configuration steps, and you have to pair it with some other resources, basically the environment, and then you have to activate them, and then you can use them. So these CSS that are downloaded can also be unloaded. And like I said, they could be activated, deactivated.
Starting point is 00:15:31 And depending on the environment that the CSS are running in, you could have multiple copies of this. And for example, let's say you have an embedded CPU environment. You can have more than one copy of this. And that copy totally depends on how you activate it and how you download it. And it depends on the CSF type. Okay, so with that, let's go into the first part, which was discovering resources. So we have an API called query device properties.
Starting point is 00:16:20 Basically what it does is it takes a device handle. And I have not touched on a device handle. So let me talk about how we manage the API. So since the API is abstracted, we also abstract the access and the resources by a handle. So the handle is basically an opaque entity that is provided to the host user, but internally from the API and the implementation, that handle can map to that specific resource and the basic workings of that resource.
Starting point is 00:16:57 So prior to this, you have already accessed the device by basically saying open device. And once you have opened that device, you get a device handle. So using that handle, you can query specific resources that are there on the device. So we have defined resources as shown here, CSX, which is basically the whole device by itself. And within that, you can have different resource types. And we touched on this earlier,
Starting point is 00:17:27 but basically you can have a computational storage engine. You can have one or more of those. You can have computational storage execution environment. This is the environment that engine is running in that the CSF runs within. And then you can have basically the functions, the CSFs, or you could have a vendor-specific implementation that is undefined, but we give that option
Starting point is 00:17:52 that you could do something like that. And the figure on the right kind of represents kind of the hierarchy of how these resources fall through. So you have the CSX, and you could have different engine types. So engine types could be taken as an example I gave earlier. You have an embedded CPU as one, an FPGA as another. You could have an ASIC, a dedicated hardware. Three different engine types, and each of them
Starting point is 00:18:20 have a different working environment, physically working environment, right? So they have their own execution engine. They have their own execution environment. And also their own CSF because the CSF for that execution environment cannot run on another engine. And I think somebody raised that question earlier. And this kind of figure kind of shows that differentiation that the CSC, CSC, and CSF is dependent on the engine type. Now, a vendor may abstract an engine, may not just show an embedded CPU or an FPGA in its raw form, may show an environment that may contain more than one engine type as an abstracted engine. And it could be in that execution environment,
Starting point is 00:19:13 they could use embedded CPU to trigger off an FPGA and maybe some hardware IP of some type, right? So they could abstract that in a way that that resource may come up as one engine. But overall, when you discover it, you will get, when you discover by resources, you will find them in this order. So how do these resources look like? So here is the hierarchy looking at
Starting point is 00:19:37 from the programming level. So each of them wants, let me go back. So as input, you give the type, and as output, you provide a buffer, which is your properties, and it is a union of any of those because you can select only one type, and the length says what's the size of your buffer.
Starting point is 00:20:00 So as output, you can get one of these resources, and they are segregated by properties, and each of those properties could have further details embedded within. Yes? In terms of the discovery, is there another something like a namespace for the discovery? So as you know, at SNEA, what we're trying to do is we are trying to abstract namespace-specific functionality from NVMe side. But also if you have some other namespaces, we want to abstract that to a level that it becomes a resource and within that resource you could, that resource could actually map to that namespace that you, of your choice. So
Starting point is 00:20:53 the mapping would be provided underneath the layers. You don't exactly get to the NVMe namespace as is, but you get it to a resource which actually maps to that NVMe namespace. Does that answer your question? So, application would have to generate all the CSS in a particular namespace
Starting point is 00:21:19 for a particular function? Probably, yes. It depends what you're looking at from the application point of view. So that's a good question, though. So if you have one namespace, how would you, across CSXs, how would you query it? Yeah, so we have another discovery wherein your actual usage comes with CSF. So the way we have structured the APIs like I touched on earlier, you discover all your resources, but once it is configured,
Starting point is 00:21:55 your actual usage comes by, you really need that CSF, the function that you want to execute on a specific CSF for that, let's say, namespace, and there you go. Then you go and execute. And the APIs are structured in such a way that you can find a specific function across all these CSXs in one time and then go and execute.
Starting point is 00:22:16 So that's the five steps that I touched on earlier. Okay, so getting down to the resources, as you can see, each property has embedded information on that specific resource type, and that specific information can be further broken down, like in the case of the computational storage engine, could be broken down to a computational resource, a compute resource. Now, a very good example of this is the CSE info could be your embedded CPUs by themselves. That's the whole execution environment. And within that,
Starting point is 00:22:57 compute resource could be CPU1, CPU2, and so forth. So you can, later on when you configure, you can select, I want to run this program on CPU one, on engine Y and so forth. The blue boxes, I will get to it in the next step. So that's basically your activated instance. That is, once you're configured, they turn blue. Not necessarily blue in this fashion, but I'll walk through those steps. So the next one is once you have discovered all your resources, you will get a list of different resources
Starting point is 00:23:33 that your CSX has. You would next go and configure your environment that your resource wants to run. So as shown here, you would next go and configure your environment that your resource wants to run. So as shown here, the execution environment that you want to configure, it was basically you're pairing your execution engine, CSE, with your environment. So you pick those two together, and then you activate it.
Starting point is 00:24:04 Now, you cannot pick just any of them. You cannot pick a CSE of one type with an execution environment of another type. So the properties and the info that I touched on in the earlier slide, each of these have embedded within them what we call a CSE token. So that token basically says, with this token information, this CSE can run in this type of environment and can execute these types of CSFs. So only the tokens that match
Starting point is 00:24:41 can be configured together. And, yeah, so basically you pair them together, and then you do the configuration. And what you get there as part of configuration is you pick your engine, you pick your execution environment, and each of them have some IDs that uniquely identify your engine with your execution environment. And once you activate them, you get an activated instance, which is your blue box.
Starting point is 00:25:12 And along with that, you get the instance has the activated ID. Now you use that ID to configure your CSS, which is your next step. So you can only use activated instances to execute your CSS. This could very well be, the activated instance could very well be, maybe you have embedded Linux, right? So the notion of activating it is basically you're kicking off probably a VM of some type, and you have made it ready so that you can now run programs in that execution environment.
Starting point is 00:25:54 Like I said, activation takes resources, and you can visualize what basically activation does here. Okay, so next, the third step is let's configure the CSF. Again, the API is the same, but here, in the configuration info, instead of passing a CSE, we pass a CSF. And the CsConfigInfo that is shown there is a union of the different activation types. So in this activation, what we do is we take that activated CSE instance
that we did in the earlier step, and then we take a CSF that is either in the repository or we download it. I'm not showing that step here, but you can download a CSF before this, which lands in the repository. So we pick that up and we activate it. And by activating it, you create an instance of the CSF,
Starting point is 00:26:55 which I'll show you in the next step. We create an instance of the CSF. And along with that CSF that you activated, you can also specify the compute resource that you want that to run with. So earlier I did say that you can attach it to a CPU, maybe one, maybe more than one, as an example of embedded CPU.
Starting point is 00:27:21 So basically that is what activation does. You can tailor your CSFs to execute on a specific computer source, and that activation holds good until you deactivate it. So this is as good as affinitizing a particular running program to a particular CPU. And only activated CSFs can be executed. Sure.
Starting point is 00:27:59 Is that a one-to-one relationship? Can you bind multiple CSFs to a single computer? You could. It totally depends on what the vendor lets you do. It's, again, the execution environment that the vendor provides you. So you could have different types of implementations. And we provide that flexibility on how you want to, what to say, activate it.
Starting point is 00:28:27 And that token that I said earlier, that holds the unique key on what the vendor chooses to do with the device and the resources that are exposed. Okay, so we have discovered the basic resources, the compute resources that came with the device. We have activated our execution environment, and we have activated our, what is it, the CSFs. So once we have the device set up and ready for execution, the next step would be to discover your CSFs. Now, we have two APIs for this,
Starting point is 00:29:08 and the first one is you query across a specific device, and that could be a path, and the path could be your execution path, be it a file system, be it a device, or it could be also specified as a null. That is, I want to scan across all my CSXs that are available. So it gives you that ability. One thing that we have done with the APIs is we have tied it to storage. As you can see, the path is one of the things that we let you discover with. So we have another API to just discover the CSXs,
Starting point is 00:29:50 that is your computational storage device. You can just discover whether you have a computational storage device and a specific path. So it makes it very simple because then you're working with storage. That's the whole notion behind the computational storage. So we are tying it to storage, and the storage can be specified by a path. So here the discovery is, in this path, do you have a specific CSF? Or do
Starting point is 00:30:16 you have CSFs, right? And it kind of tells you whether you have computational storage devices on your system. So null would give you, if you have 20 drives, and maybe five of them are computational storage drives, you may get five back and the names of them and how to address them. Now that is basic functionality on how to discover the devices that have specific functions.
Starting point is 00:30:44 And example would be, I want to know all the computational storage devices that have compression, right? And if that is the case, you can do this query, and your buffer would be filled with the CSXs that are there. And once you have that, you could open the device and then query by that specific name, like compress. And the second API basically does that. You specify that specific CSF that you want in question.
Starting point is 00:31:20 And from that, you would get a structure, which is basically your info that I touched on earlier, which is you get information on what that CSF is about. One is it gives you an ID that you can use for execution. But secondary, it tells you its relative performance or its relative power. And if there is more than one compressed, let's say, you can make the choice whether you want to use this in a lower power environment or a high performance environment and so forth. So we give you that choice. And the other thing, if the vendor chooses to expose, is how many instances of this function can run, right?
Starting point is 00:32:01 So that count will automatically reflect. And this totally depends on how this function was activated. So with this API, you would, these two APIs, you would discover CSS. But the last one is basically what you would use to get that specific ID. And once you get that ID, the next step would be to execute.
Starting point is 00:32:25 So with execute, we have a request, shown on the right. And what you have is the request has that CSF ID we touched on earlier. So you provide the CSF ID and you specify the arguments or the parameters for this function to run. You specify the number of parameters that are there and the set of parameters. Now, what happens is this is a very abstracted form. This may not match your actual implementation of your device. As you know, the APIs are abstracted. You have an in-between layer that maps the abstracted APIs to your device-specific implementation.
Starting point is 00:33:08 So that in-between layer, what we called as plug-in earlier, will do the mapping of this list of arguments into the specific definition of your program or your function that is defined for your device stack. And what this API does is this API could be executed as a synchronous operation. That is, you can specify that I will wait until this operation completes, which means you don't specify
Starting point is 00:33:43 the next two items. That is, you don't specify the callback or you don't specify the event handle you provide as a null, which means the API, you requested the API to block until you complete this function. You can also make it asynchronous, and asynchronous could be the typical callback model or the event-driven model. So we give you this choice of three execution modes. And the last one that you have is a completion value. Now, this is an optimization for those functions that you just want to know
Starting point is 00:34:18 whether they succeeded or not. Maybe you want to know more than that. They probably return. Here we specify a 64-bit value, let's say a checksum of some type, right? So you would get that back in the same request, that your return value comes back as part of execution. But if you have larger, let's say, results
Starting point is 00:34:42 that the execution did, and we'll cover that by an example, then you may need results that the execution did, and we'll cover that by an example. Then you may need to use an additional API, which is basically copy the contents of the results back to the host. Okay, so with that, let's switch to the programming example. What we have here is the figure on the left. So you're going to allocate some device memory. And once you have allocated some device memory,
Starting point is 00:35:09 you want to load storage data in that memory. And once you have done that, you want to run a CSF. And here, an example is a data filter. And lastly, once that CSF has executed, you want to copy the results or the contents of that filter operation back to host memory. So four steps. How do we do it programmatically?
Starting point is 00:35:35 So you allocate some device memory. So you would need two buffers here. One is for your input, and one is for your output. So the first buffer is for load from storage, and second is to copy the results, so you allocate them. CS Alloc Mem is the API. It's pretty straightforward. It takes a device handle.
Starting point is 00:35:57 Again, abstracted handles used here. You specify the size of your request. You specify right now what is called a mem flag. It's suggested as zero, but this is meant for expansion. We plan to use this for different memory types in the future, and we are touching on different memory types, starting with NVMe, but there could be other devices wherein memory may be dispersed somewhere else
Starting point is 00:36:22 in your subsystem and so forth. So that flag is kept for future expansion. And then you specify a storage entity to receive a memory handle. So that is your input memory handle or your results handle. And the last option is for those devices that expose that memory, that is they are mapped into system address space, you could specify storage for a virtual address. So here it is specified as null, which means I don't need a virtual address.
Starting point is 00:36:57 My device doesn't support mapping that memory into host address space. NVMe TB4091 is one such way. But you could have maybe an FPGA-type model wherein the FPGA is mapped to the APIs. You can specify to receive a virtual address, so you can use that virtual address to use file system calls to load data into that memory. So first step, you allocate the device memory.
Starting point is 00:37:28 The second step is you load data into storage. Here is the code, total code that is required to load data into that buffer that you had allocated earlier. What we're showing here is this API can work with block requests and can also work with file-based requests. This example shows a file-based request, which means the API has an abstraction layer that can convert that file information into a block request internally, so you don't have to do that. With some usages, like where memory is not exposed, you have to go through this API, which means you need the support of file system built into your APIs to make that happen. Now,
Starting point is 00:38:16 because we have a pluggable API subsystem wherein you can have plugins that add a new feature, you can make this possible, and it is extensible. Okay, moving on. The next one is you have allocated data, you have loaded data from storage, you want to execute that CSF that you had activated earlier, right? So... you call Q compute request, and this is, it has three arguments here.
Starting point is 00:38:50 Basically, you have the input memory, that is where the data resides, the size of the data to work on, and then you have output buffer, which is your results handle. And this is all, this query,, scan query is the CSF that is running here. That's all it takes. Now you could have a different function here. You could throw in a checksum. You could throw in compression. You could throw any of them here. But as the example
Starting point is 00:39:16 shows, this is how you build with the API. And if your function takes more arguments, you just specify the number of arguments as in the second line, numArgs, and then you build those arguments as part of the request. Okay. Lastly, as the CSF has executed, you would like to copy those results that the scan conducted. So you would do a copy memory request, and basically a copy memory request is you specify how you want the copy to occur. And here what we're showing is copy from device. So basically you're saying copy from the device memory into host memory, and you provide a host virtual address, which is the second option there, and then you specify the device memory that you want to copy from. That would be your handle.
Starting point is 00:40:12 Again, it's opaque to the user, but within that handle, you can say it is at this offset, for this size. And that's what it says: a byte offset of this many bytes, and you do a copy mem. And once you have copied it, in four steps, you have executed it. So let's look at another API I have not touched on earlier, which we call the queue batch request.
Starting point is 00:40:36 So the batch request is an example where, with computational storage, it is guaranteed that you have a minimum of three API calls to execute, right? So you have your input data, you have your compute, and then you have your output data that needs to be copied. And instead of doing this in three steps, you could do this in one step with the queue batch request. There was a question earlier that you may have multiple computes
Starting point is 00:41:04 in the pipeline, and if that is the case, this queue batch request is the right API to do that. You could just add them, and you could batch them, and you can submit them. And since you have created that batch request, you can reuse that batch request just by changing a few parameters here and there. So you don't need to configure that whole request every time. You just reuse it. So with that, I conclude the programming example. I'll quickly touch on the APIs and NVMe. So I think Kim covered this in detail
Starting point is 00:41:40 on what NVMe has and what's the work that is going on. Basically, the compute namespace and the memory namespace. We already have the storage namespace that we are well aware of, which has NVM, ZNS, KV, and so forth. What this figure is trying to show is those terms we had touched on earlier in the last two sessions and how they map to NVMe, right? So we have compute, which is the CSE. That becomes the compute namespace, and so forth. So this is the mapping that SNIA is working on with NVMe to make these two work together.
Starting point is 00:42:23 All right? So with that, I'd like to conclude that we have a rich set of APIs, well abstracted, but good enough to work with different device types. And we have a 0.8 specification that is out there. We have other sessions on computational storage. Please attend to get a better view of how things are working. You can also join us in our standardization efforts, on the SNIA side as well as on the NVMe side.
Starting point is 00:42:56 If you all are interested, please go ahead. Download that document. Provide us feedback. Tell us how we can make this better, so that it works for you. And help us build the ecosystem. Yeah, that's all I had. All right, questions here. Do you have any support in the API for any concept of resource arbitration or priority? Okay, so the question was, is there any arbitration for different resource types for your execution environment?
Starting point is 00:43:54 Is that the right way to put it, Peter? Yeah. So as you know, this is still at the abstraction level. The actual implementation will depend on your device and your execution environment. Right now, what we have done is we have provided the means to configure them and to work with them. That may be the next level of detail that we may have to do
Starting point is 00:44:16 as we get to what you're probably touching on: a multi-tenant environment and so forth, and how these resources are going to affect each other. Yeah, so we do have some of it here. It is 0.8, and we still have work to do, as you know. But yeah, that's a good question. Yeah, sure.
Starting point is 00:44:38 Question to... Are you also looking at supporting SPDK? Yeah, so... Good question. So the question was, are you going to support SPDK? As you saw in the stack, the APIs sit on a high level. Like I said, your drivers could be sitting in user space,
Starting point is 00:45:04 and SPDK is an option that can be easily supported. If I wear my vendor hat, we did try something of that nature back at Samsung, and it is possible, yeah. It's a mapping exercise in the end. Any more questions? Yeah, it should. So, yes, like I said, it is neutral in a way that you don't expose your device-specific connectivity, whether it is local or remote. I think that is a property of your device as it is configured. As you know, with NVMe over Fabrics, you get a virtual device, right, like it is local. So we are trying to use the same attributes that you configure for your device type, and you can use computational storage as if it is local.
Starting point is 00:46:08 Thank you. Thanks for listening. For additional information on the material presented in this podcast, be sure to check out our educational library at snia.org slash library. To learn more about the Storage Developer Conference, visit storagedeveloper.org.
