Storage Developer Conference - #65: Accelerated NVMe over Fabrics Target/Host via SPDK

Episode Date: February 28, 2018

...

Transcript
Starting point is 00:00:00 Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcasts. You are listening to SDC Podcast Episode 65.
Starting point is 00:00:40 So, you might have guessed, I'm Paul Luse, and myself and half of our division is here today to give you this talk. In fact, Brooke even sent us an email a couple days ago and just assumed this was a panel discussion. We said, no, it's actually four of us that are going to go through this. So we're going to talk about the NVMe over Fabrics target in SPDK and the vhost implementation. I'm going to give you the intro on SPDK, and I'm going to kind of assume that a lot of people have at least heard of it. Anybody heard of SPDK? Yeah, it's been pretty popular here. I think there were some talks last year, maybe the year before. I was out here last year
Starting point is 00:01:16 doing an NVML talk, and out here a few years before that doing Swift erasure coding. And then, dialing way back to the first days when the OFA Windows NVM Express driver came out, I was here and did a talk on that, as myself and another small team of engineers put it together. But SPDK... this thing isn't moving. There we go. Okay, so I'm just going to do like three quick slides, since there was a huge show of hands. What it is, why it is, and how it is, is kind of the way this is going to flow. So, what it is: basically a bunch of software building blocks.
Starting point is 00:01:53 It's all software building blocks. And we'll see real quickly on the architecture slide, if you haven't used it before, there are a bunch of different libraries, a bunch of different tools. You can mix and match them to do whatever makes sense for your environment, with the focus all around performance. It's all open source, all BSD licensed.
Starting point is 00:02:12 I've got a slide talking about how our community has grown. Really, over the last six months, a lot of strides in building the community up, taking advantage of the software underneath, and trying to get more people involved. And, of course, the theme is all user-space and polled-mode stuff, to really get the most efficiency out of the drivers, the most efficiency out of the hardware, without using the kernel.
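(To make that user-space, polled-mode idea concrete, here is a minimal sketch, not from the talk; it assumes an NVMe device already unbound from the kernel driver, for example via SPDK's setup script, and API details vary a bit between SPDK releases.)

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        return true;                      /* attach to any controller found */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        g_ctrlr = ctrlr;                  /* remember the controller */
    }

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "polled_mode_sketch";
        spdk_env_init(&opts);             /* hugepages, PCI access, etc. */

        /* Enumerate PCIe NVMe controllers entirely in user space. */
        spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

        struct spdk_nvme_qpair *qp =
            spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

        /* The polled-mode loop: no interrupts, no syscalls. The core spins,
         * reaping completions (and, in a real app, submitting new I/O). */
        for (;;) {
            spdk_nvme_qpair_process_completions(qp, 0 /* reap everything */);
        }
    }

A real application submits reads and writes between polls; the point is only that completion handling is a function call in a loop, not an interrupt.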
Starting point is 00:02:38 All right, so there's everything that SPDK is. We could easily spend a whole talk just going over this, but I want to give time for my colleagues to get into the meat of what we're about here this afternoon. I will point out, just for my own shameless plug, Blobstore: I've got a talk coming on Wednesday at 1:30 if you want to hear more about that, a deep dive into it. Essentially, SPDK is kind of
Starting point is 00:02:57 layered into three sections. We've got our protocol components; we've got our services layer, where we add value and people can add value with different system services. Using our block device abstraction layer, you can add compression or encryption or kind of anything you want to do, in sort of a filter pattern, as I/O goes through the stack. And then we've got our polled-mode device drivers at the bottom.
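(As an illustration of that abstraction from the application side, here is a hedged fragment of my own, not from the talk; it assumes it runs inside an SPDK app-framework callback and that a bdev named "Malloc0" has been configured. The point is that the calls are identical whatever filter modules are stacked underneath.)

    #include "spdk/bdev.h"

    static void read_done(struct spdk_bdev_io *bdev_io, bool success, void *ctx)
    {
        spdk_bdev_free_io(bdev_io);       /* release the completed I/O */
    }

    static void read_through_the_stack(void *buf)
    {
        struct spdk_bdev *bdev = spdk_bdev_get_by_name("Malloc0");
        struct spdk_bdev_desc *desc;

        spdk_bdev_open(bdev, true /* writable */, NULL, NULL, &desc);
        struct spdk_io_channel *ch = spdk_bdev_get_io_channel(desc);

        /* Same call whether "Malloc0" is a raw device or has compression or
         * encryption filter bdevs layered on top of it. */
        spdk_bdev_read(desc, ch, buf, 0 /* offset */, 4096 /* length */,
                       read_done, NULL);
    }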
Starting point is 00:03:28 Various different points of integration have been done, really, all over the last year. I'll talk a little bit about the RocksDB one on Wednesday. Ziye's got a talk on the Ceph one Tuesday, is that right? Yeah, Thursday. Thursday, I don't know, sometime this week. And then we've got some pretty cool application framework stuff that is sort of a helper for putting all this stuff together. I'll talk a little bit about that Wednesday when we go through a Blobstore example. But that's kind of what it is, this collection of stuff. Okay, and then why: everybody raised their hands.
Starting point is 00:04:01 You all know what SPDK is for. It's for performance, right? It's for getting the most out of the hardware that you're using. It's for helping to deal with the software overhead issue that NVMe started shining a spotlight on when it came out, unless you've been living under a rock. That's what this is all for. It's all to remove the software overhead
Starting point is 00:04:20 and really expose the benefit of next-generation media, and of today's and next-generation CPUs. So you can see some of the high-level benchmarks. We'll talk more about that today and also on Wednesday. But, you know, 10x more IOPS with NVMe over Fabrics, 8x more IOPS per core with NVM Express. The per-core piece is really important, and you'll see that in the numbers. It's really about maximizing the use of a core to get real work out of it, instead of having to do a lot of busy work. Lots of better tail latency with RocksDB workloads; we've got some good numbers on that we'll show you on Wednesday. And then, overall, more efficient use of
Starting point is 00:04:56 development resources. You might say, what does that mean? It means we are really building this community. So this started out really as a science project in a lab in Arizona, five years ago or something. It grew slowly and got more sort of official inside of Intel. Eventually it became clear that it made sense to open source this. A lot of it was common core components that you would use in building an optimized storage application, to use the CPU and next-generation drives as efficiently as you can. So it was open sourced, but it was still not really that accessible and not usable by a community.
Starting point is 00:05:32 It was just kind of dumped onto GitHub, and one day master would just change on you because somebody had a huge pull request that got accepted. So what we've been focusing on this year, and we've made some huge strides, is really getting a community in place and turning this into not an Intel project that's being shared with everybody, but an SPDK project that everybody contributes to. So we've got an
Starting point is 00:05:51 IRC channel, we've got a distribution mailing list, and all of the reviews are through GerritHub now. So every single review that's pending, every patch that's pushed, all of the reviews are visible, and everybody is invited to contribute. We've got a maintainer model. We've got Trello, which is what we're using for our backlog. Yeah? [Audience member] I have a question on the previous slide. Okay.
Starting point is 00:06:20 The only other thing I want to mention on this slide was Trello. In addition to having our reviews in the open, the code in the open, and all of our discussions and designs in the open, Trello is where we keep our backlog. So anybody that has an idea, or any large project that's work in progress now, will show up on Trello. And anybody can go up and look at the design documents, contribute to the design discussion, or find meaningful work to do.
Starting point is 00:06:41 So we've got it categorized into low-hanging fruit and big things that you can do. So you can go up there and find: is there a small patch I can do to get involved, help learn how this code works, and make a contribution? You can go out there and find meaningful work really easily. So that's all up there. Okay, I want to take the one quick question on this slide.
Starting point is 00:06:59 No, next one. This one. [Audience member] So your comment about the 2%, is that for I/O determinism? No, this one. The key word here is some RocksDB workloads. So I'm going to defer your question. Maybe we can talk about it afterwards, or we can talk about it on Wednesday, because we've got the performance engineer that took these measurements on Wednesday.
Starting point is 00:07:21 I'm giving the first 80% of the talk on Blobstore itself. The last 20%, he'll be covering RocksDB workloads on top of Blobstore. So you'll get all the nitty-gritty info. Okay, and then one last word about the community. We had a summit in April. I think it was right here in this hotel. I'm pretty sure it was.
Starting point is 00:07:41 And it was sort of a come one, come all. There were 200 people or so here, and it was really presentation style. It was very much like this. We've got a meetup in Phoenix; it's actually full now, November 6th through 8th. We were planning on maybe 15 developers from all different companies around the world.
Starting point is 00:07:57 We're up to 22 now. It's kind of like, oh, my gosh, we've got to get a bigger boat. We had to get a bigger room. But we have 22 developers coming to Chandler, and all we're going to do for three days is do design discussions and write code. And the one rule is there's no PowerPoint allowed. So if anybody is interested, even though we're full,
Starting point is 00:08:15 I might be able to squeeze in a few more. If you're familiar with SPDK and you come with your development environment and you're ready to contribute to good, meaningful design discussions, we'd love to have you. We can easily make some room for another one or two maybe. Okay, so that's the high level: what SPDK is and why we're doing it.
Starting point is 00:08:34 Now I'm going to turn it over to Ziye; are you coming up next? And he'll start getting into the meat of the talk. So, my name is Yang. Let me introduce the NVMe over Fabrics target and the vhost part. Actually, for the... let me see the next page. For the NVMe over Fabrics component, we released our first version in 2016. And we also had some updates in 17.03
Starting point is 00:09:29 with some performance improvements. And also in the next release, 17.10, we will continue the scalability and performance improvement for thousands of connections. We also have SPDK's own NVMe over Fabrics host, or initiator; it was first released last year. And we also have some performance enhancements, an SGL patch to remove the data copy.
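(To give a feel for the host side, here is a hedged sketch of connecting with the SPDK NVMe-oF initiator; it is my illustration, not code from the talk, the address and NQN are made up, and exact signatures vary by release.)

    #include "spdk/nvme.h"

    /* Connect to a remote NVMe-oF subsystem over RDMA. */
    static struct spdk_nvme_ctrlr *connect_remote(void)
    {
        struct spdk_nvme_transport_id trid = {0};

        /* Hypothetical target address and subsystem NQN. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:RDMA adrfam:IPv4 traddr:192.168.1.10 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* The returned controller is driven with the same queue-pair and
         * read/write calls as a local PCIe NVMe controller. */
        return spdk_nvme_connect(&trid, NULL /* default opts */, 0);
    }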
Starting point is 00:10:05 So the performance of both the NVMe over Fabrics target and the host is good. And this page shows the throughput comparison between the SPDK NVMe over Fabrics target and the kernel NVMe over Fabrics target. Let's see the diagram on the left. We can see that in this diagram, both SPDK and the kernel can achieve
Starting point is 00:10:43 the network bandwidth. There are three Mellanox cards with 150 gigabits per second of line rate, and both the kernel and the SPDK NVMe-oF target can achieve that. But for the core utilization, we can see that SPDK only uses three cores, but the kernel uses 30. So the IOPS per core for SPDK
Starting point is 00:11:19 is about 10x better. And for the performance gain, there are three reasons. The first is that we have our own SPDK user-space NVMe driver, and it is already
Starting point is 00:11:38 open sourced and used by many companies. The second is that we use RDMA queue pair polling, so there will be no interrupts. And the third one is also important: the connections from clients are pinned to dedicated CPU cores. It means that there will be no migration of the connections and no resource contention between connections.
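(To illustrate the pinning idea, here is a configuration sketch in the style of that era's INI config files; it is my reconstruction, the exact keys changed from release to release, so treat it loosely. The Core directive is what ties a subsystem's connections to one dedicated core.)

    # nvmf.conf sketch: pin a subsystem's connections to core 1
    [Subsystem1]
      NQN nqn.2016-06.io.spdk:cnode1
      Core 1
      Listen RDMA 192.168.1.10:4420
      Namespace Nvme0n1 1

    # then run the target on three dedicated cores (mask 0x7 = cores 0-2):
    #   ./app/nvmf_tgt/nvmf_tgt -c nvmf.conf -m 0x7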
Starting point is 00:12:03 So these are the reasons for the performance gain. And this diagram shows an experiment where we used two machines, connected directly, and we test the round-trip time from the initiator to the target. The round-trip time means that the initiator sends the I/O,
Starting point is 00:12:49 and the ending time is when the initiator receives the response. And there are two types of comparison: read and write. So let's focus on the left part, the left two bars. It compares the kernel target with the kernel NVMe-oF initiator, and the SPDK NVMe-oF target with the SPDK initiator. We can see the total round-trip time for the kernel case is about 25 microseconds.
Starting point is 00:13:33 But for SPDK, we only have 14 for the read. It means that the latency for the read is reduced by 44 percent. And it is the same case for the write: it reduces latency by 36 percent. And this diagram shows the detailed analysis. If we use the SPDK target with the kernel NVMe-oF initiator, we divide the total round-trip time into three parts.
Starting point is 00:14:18 The first is the fabric arrive time. The second is the device time; it means the I/O processing time on the target side. And the last is the fabric departure time. We can see from the device time, meaning the I/O executed in the SPDK NVMe-oF target, that it starts at 6.05 microseconds
Starting point is 00:14:53 and lasts until 13.05 microseconds. That is about seven microseconds, so the time is very, very small. And if we replace the kernel NVMe-oF initiator with the SPDK NVMe-oF initiator, we can look at the fabric arrive time, meaning the time from when the NVMe-oF initiator sends the request until the server receives it:
Starting point is 00:15:37 we can see that in the previous case, the time is about six microseconds. If we replace it with the SPDK NVMe-oF initiator, it is reduced to two microseconds. And there is also some latency reduction in the fabric departure time. So from the two diagrams, in total it decreases the latency from 20 microseconds to 14 microseconds. And this is my first part, about the NVMe
Starting point is 00:16:23 over Fabrics target and initiator, and the next part is the accelerated vhost target. And before we introduce the SPDK vhost target, there will be a simple introduction to virtio. Actually, virtio is a paravirtualized driver specification. In the guest VM, there are the virtio front-end drivers, and on the hypervisor side, there are
Starting point is 00:16:57 the virtio back-end drivers, and the hypervisor can use either device emulation, or pass-through, or others to implement the functions of the virtio back-end drivers. And there are many types of virtio drivers, for example virtio-net, virtio-scsi, virtio-blk. And for SPDK, we only focus on virtio-blk and virtio-scsi performance improvement.
Starting point is 00:17:36 And the relationship between vhost and the existing virtio is that the vhost solution pulls the virtio back-end drivers out of the hypervisor and into the vhost target. And the vhost target can be implemented in two ways: the first one is vhost kernel, and the second is vhost user space. And in SPDK, we implemented it in user space.
Starting point is 00:18:15 And the benefit of vhost is that the vhost protocol separates out the I/O processing. In the protocol, the guest VM negotiates with the vhost target over the following three pieces of information: the memory regions, the number of virtqueues, and the location of the virtqueues. Yeah? [Audience member] Is there a demo tomorrow being hosted?
Starting point is 00:18:57 Yeah, actually, we have some slides showing that there will be some booths. Yeah. And this page shows the detailed implementation of the SPDK vhost target. Currently, SPDK supports acceleration for both virtio-scsi and virtio-blk in the guest.
Starting point is 00:19:28 And all the code is open sourced. QEMU sets up the vhost target through a Unix domain socket, sharing the guest memory with it. And the guest VM submits I/O to the vhost target through the virtio queues, and during that, there is no QEMU intercept. Also, during the I/O execution, there are no VM exits.
Starting point is 00:20:07 And the completion interrupt is implemented by eventfd. And definitely, for QEMU, we need to have some changes. For example, QEMU needs to allocate huge pages for the guest VM. But for the guest OS, there is definitely no change, so it is transparent to all the applications inside the guest OS.
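(For reference, a QEMU invocation for this looks something along these lines; this is a sketch, the socket path and image name are made up, and option spellings vary across QEMU versions. The hugepage-backed, shared memory object is what lets the vhost target see guest memory directly.)

    qemu-system-x86_64 -enable-kvm -cpu host -smp 2 \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem0 \
        -chardev socket,id=char0,path=/var/tmp/vhost.0 \
        -device vhost-user-scsi-pci,chardev=char0 \
        -drive file=guest.qcow2,format=qcow2,if=virtio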
Starting point is 00:20:53 And this page shows the comparison of SPDK vhost with existing solutions. We compare the QEMU virtio-scsi target, the vhost kernel target, and the SPDK vhost user-space target. The difference is in the protocols. You can see that the first one just uses virtio-scsi-pci in QEMU,
Starting point is 00:21:22 but the second uses vhost-scsi-pci, and for SPDK, we use vhost-user-scsi-pci. And on the target side, they are also different. The kernel vhost target routes the I/O to the Linux LIO target with kernel NVMe drivers, but for SPDK, we just use our SPDK user-space NVMe drivers.
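(And the target side, sketched with RPC method names of roughly that vintage; they were renamed in later releases, so this is illustrative, not canonical.)

    # start the SPDK vhost target on two cores, with sockets under /var/tmp
    ./app/vhost/vhost -S /var/tmp -m 0x3 &

    # back a vhost-scsi controller with a user-space NVMe bdev
    ./scripts/rpc.py construct_nvme_bdev -b Nvme0 -t pcie -a 0000:01:00.0
    ./scripts/rpc.py construct_vhost_scsi_controller vhost.0
    ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1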
Starting point is 00:21:56 And this page shows some performance evaluation. Let's first see the left diagram. It shows the core distribution between the VMs and the I/O processing part. We can see that for the VMs, we allocated the same cores, so there is no difference there. But the cores used to handle the I/Os are quite different. You can see that if we use QEMU virtio,
Starting point is 00:22:32 it uses about 10 CPU cores, while SPDK vhost just uses one CPU core. And the right diagram shows the performance comparison. It shows that with the configuration in the left diagram, SPDK's performance is still much better than QEMU virtio and vhost kernel. If we calculate the per-core performance, I think that SPDK vhost will be much better,
Starting point is 00:23:21 maybe about 10x better than the existing QEMU virtio solution. And this page is about the demo you can see during the SDC conference. The use case is software-accelerated virtual machine storage. We have the Intel Xeon Scalable processor, the latest processor in Intel's product platform.
Starting point is 00:23:59 We deploy 48 virtual machines on 24 direct-attached Intel Optane NVMe SSDs. Each SSD is partitioned into two parts and serves two virtual machines. And this page shows the performance of the 48 virtual machines. We compare the performance between vhost kernel
Starting point is 00:24:33 and vhost SPDK. We can see that across the three types of cases, the 4K read, the 4K write, and the 4K 70% read / 30% write, the average performance gain is about 3.2x. And for the details, I think you can see Intel's booth
Starting point is 00:25:13 around the meeting rooms. So this is all of my part. The next part will be my colleague Roman's part. It is about VTune for SPDK. Okay. Mic. Sorry.
Starting point is 00:25:50 So my name is Roman, and I am a technical lead on the Intel VTune Amplifier team. What I'm going to talk to you about today is how VTune can help in performance debugging of SPDK. Just a show of hands: how many people have heard of Intel VTune at this point? How many of you have used the VTune features to determine how efficiently your code is passing through the core pipeline? Cool, good to see hands. Okay, so what is Intel VTune Amplifier?
Starting point is 00:26:24 It's a performance profiling tool which can help you analyze your code, identify inefficient algorithms and software usage, and give you performance tuning advice. So what is performance debugging of SPDK? On a very high level, it consists of understanding three things: CPU utilization, how the application passes through the framework,
Starting point is 00:26:47 and performance monitoring of the physical interfaces. Sometimes applications can be stalled because they approach the bandwidth limits of some physical link. So let's start from CPU utilization. I don't want to say things that people already know; I'll just mention that SPDK operates in polling mode. What that means is that CPU utilization is high all the time. The thing here is that a modern CPU is still so much faster, even than all those fancy new NVMe drives, but CPU
Starting point is 00:27:13 cycles are precious, and it is always good to have some justification for that high CPU load. And that's not the whole story: SPDK also bypasses the kernel block layer. What that means is that we lose access to all those nice and handy tools like iostat, blktrace, and others, which calculate I/O performance metrics like IOPS, throughput, and so on. So debugging SPDK in terms of I/O may become a real challenge, mostly because the standard tools are either inapplicable, as in the case of the I/O performance metrics, or just give a trivial answer, as in the case of CPU utilization.
Starting point is 00:27:49 And this is where VTune can help. VTune covers all the tooling gaps and can give you a complete picture of I/O performance. And on the next slide, I'll show the supported usage models. So this is a quick slide announcing the SPDK usage models which are already supported in Intel VTune. I fully expect many use cases with different configurations to appear over time. So if you have any, and you would like to have profiling capabilities there, I would
Starting point is 00:28:21 love to hear about that. And one thing which is worth mentioning at this point: there is a special option in VTune to specify what you want to profile, the analysis target. It could be either an executable, a process, or the whole system. So we can either launch your application, attach to an already running process, or just profile the whole system. Now let's take a look at how it shows up. So this is the summary window. It's the overall performance data of your target, and you can use it as a starting point of your analysis.
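(With the command-line front end of that era, amplxe-cl, those three modes look roughly like the sketch below; "hotspots" is just a stand-in analysis type, since the name of the SPDK-aware analysis depends on the VTune version.)

    # launch mode: VTune starts the application itself
    amplxe-cl -collect hotspots -- ./app/nvmf_tgt/nvmf_tgt -c nvmf.conf

    # attach mode: profile an already-running process
    amplxe-cl -collect hotspots -target-pid $(pidof nvmf_tgt)

    # whole-system profiling, for 60 seconds
    amplxe-cl -collect hotspots -analyze-system -duration 60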
Starting point is 00:29:00 Our report consists of five sections, representing application- and system-level statistics. I will briefly walk through all of them and then focus on the specifics. The first one is CPU metrics; it consists of almost 150 various metrics which help estimate overall application execution. Next are SPDK info and SPDK throughput, which represent performance data collected from the framework layer. The bandwidth utilization histogram shows how much time the system bandwidth was utilized by a certain value.
Starting point is 00:29:36 And the last one is collection and platform info. Here you can find information about the target application, operating system, CPU type, and so on. So let's get back to the SPDK info section and take a closer look. Here you can find the overall I/O performance data and the SPDK effective time metric, which I will describe a bit later. Technically, it shows the amount of bytes read and written from the storage device and the number of operations, with a breakdown per access type.
Starting point is 00:30:11 So we can expand each section and see how each device performs individually. And as an initial takeaway here, we can notice that there is some utilization imbalance, and if that's not what is expected, then definitely there is something to do here. Now let's take a look at the SPDK effective time. That metric represents polling efficiency. Technically, it is the fraction of cycles
Starting point is 00:30:39 when the CPU falls into the polling loop and actually finds some completions.
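(Put as a formula, this is my paraphrase of that definition, not one from the slides:)

\[
\text{SPDK effective time} \;\approx\; \frac{\text{cycles spent in poll iterations that reaped at least one completion}}{\text{total cycles spent in the polling loop}}
\]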
Starting point is 00:31:00 So what can we learn here? Consider the case where we do see relatively good SPDK effective time. Does that mean we are done? The answer is maybe; it all depends on I/O performance. If that's okay, then yes, we are done. But if not, then we have probably overloaded the core with devices, and it's better to bring more cores to SPDK and rebalance the application. Another common case is if the devices show relatively good I/O performance. The same question applies here, but the answer is still maybe, since now it all depends on the effective time. And if it's low, then we definitely have the potential
Starting point is 00:31:22 to get even more performance from that core: just bring more devices here and you'll get more performance. Now let's take a look at the SPDK throughput section. That section represents performance data as a throughput utilization histogram. The histogram introduces thresholds to categorize throughput as low, medium, and high. So you can set up reasonable performance targets and see SPDK activity by throughput utilization level.
Starting point is 00:31:58 Now we can switch to the bottom-up view and see where application performance is limited. The bottom-up view is designed to provide the performance picture at a glance. It consists of three blocks: threads, I/O statistics, and PCI bandwidth. Let's take a look at them separately. The first one is threads. It presents a list of the threads which were running on the system while collecting. The dark brown filling in the background represents CPU utilization, and it's nearly 100%, as expected. And the light brown filling in the foreground represents SPDK effective time, and what we can learn here is that approximately half of the
Starting point is 00:32:51 CPU cycles were really used by that workload on that particular system to move data. The next section is I/O statistics. It consists of the standard I/O throughput and IOPS; we just bring back what was missed due to bypassing the kernel block layer. Performance data is given per device, and it can also be seen per read and write, as well as total. And at the bottom, there is the PCI bandwidth.
Starting point is 00:33:23 And this is where I would like to draw your attention. It's a new feature which is available starting with Xeon v5, known as Skylake server. On that platform, we can monitor PCI data not on a per-package basis, but per device. And here I have a two-socket system and four locally attached devices. Three of them are attached to socket one,
Starting point is 00:33:49 and one device is attached to socket zero. All of them are identified by VTune, the models are identified, and also we can see that the traffic correlates pretty well between the data collected at the SPDK layer and the data collected from the PMU for the uncore. Some magic here. Okay, so every time the good question comes: what next? We can start from the basic idea and switch back to the thread block, and since it is colored by the throughput levels which we chose on the summary page, we can, for
Starting point is 00:34:34 example, locate regions and see where throughput is really low. And, for example, at this moment we can figure out that it's not just low, it even dropped for approximately 160 milliseconds. And the next obvious step is to figure out which functions were executed at that moment, prior to that moment, and after, and try to do some corrections. So,
Starting point is 00:35:07 I think that's it for the main talk. Any questions so far? Yeah. [Audience member] Is that direct from the uncore? Yes, each one is. It's applicable for server platforms. [Audience member] Got you. And this is obviously just performance counters on a certain class of Intel platforms? Yes, sure. Any more questions?
Starting point is 00:35:57 So, Ziye, who's gonna take the summary? It's most of your parts, actually. Anyway. You have the mic on. I do, but... Drive it home, baby. So, we did a great and big job here. Let me read it... or you can read it also, and ask me any questions.
Starting point is 00:36:36 [Audience member] Where are you going next with SPDK? What's at the top of the to-do list? Huh, sorry? [Audience member] What are the next things to be done in SPDK? What do you think are the next steps? It's a question for you. Definitely, even if I have the mic. Hold on a second. Okay, so the question was: what are we doing next with SPDK?
Starting point is 00:37:05 We're doing just all sorts of cool shit with SPDK. I mean, what you saw today, there's still a lot of room for improvement and growth around all of our fabric implementations for NVMe-oF, both the initiator side and the target side. We're doing a lot with Blobstore, which I'll be talking about Wednesday. And for those of you that aren't familiar with it at all, it's kind of an object storage-y sort of thingy-ma-bob that we call just a block allocator. But we're putting some more modules on top of that and really being able to present blobs basically as devices.
Starting point is 00:37:37 So it's essentially an LVM-type implementation. So we're kind of taking all of the useful things that you see in different user-space storage stacks or different kernel storage stacks and making kind of the user-space version of them, in addition to continuing to harden and do performance improvements across all these different things. Yes? [Audience question inaudible] I haven't heard any discussions, but it's possible. The way most of those kinds of things go, as we become more of a worldwide community and less of an Intel-driven, focused thing, we really rely on what you go out and see on Trello, like I mentioned earlier, to get an idea of what people are interested in,
Starting point is 00:38:21 where the community wants to take the project. Or you jump on IRC and see what people are talking about. So when we talk about future features, it's no longer the case that it's what Intel wants to do; it's a case of what people are doing in the community and where they're taking it. We at Intel are still, you know, the major contributors, and we're continuing to drive the things that we hear about and the things that are already on that block diagram we showed earlier. But really, anything goes in the future. It's what people have interest in and are willing to resource, quite honestly.
Starting point is 00:38:54 Good question, though. Another one? [Audience member] On the protocol side, have you seen any other demands other than iSCSI? On the target side? [Audience member] On the protocol side. Protocol side. I haven't heard a whole lot of interest there outside of that either. There's a lot of work on the iSCSI side for us to do still, and we're expecting that to be a big topic at our developer meetup in November.
Starting point is 00:39:20 We've had a lot of feedback on new features that people would like to see, and, again, we're trying to encourage more people to contribute features as opposed to requesting features. So there's still a lot of work even on the iSCSI side, but I haven't heard much about new protocols that people want to work on, other than, you know, NVMe-oF. Did you just ask that question on the mailing list? It just came up two days ago. Yeah, and there's a bunch of different viewpoints on how that works. Some people are sort of using it as a library today.
Starting point is 00:39:56 You don't build it as a shared library. And, in fact, Daniel, who's one of the maintainers, had a really good answer for it in an email that I wish I could just recite. But, you know, essentially a lot of the components in SPDK are at differing levels of maturity. Some of them, like the NVMe driver, have been around for quite a while and are pretty mature; the API has stabilized. Other ones, like Blobstore, the API is still developing. And it's difficult in some of those cases to really lock them down and say,
Starting point is 00:40:27 all right, let's build this as a shared library and use it that way. So right now it kind of varies per module, so there isn't really any grandiose plan to build anything that way. [Audience member] Nothing to do with library versioning? Yeah, nothing in the near future. But again, it's a completely open project.
Starting point is 00:40:42 If somebody came in and wanted to take, like, the NVMe module, for example, and build it that way, then certainly nobody would stop them. Okay, anything else? Sure, we've got all night; we're the last talk. [Audience question inaudible] And again, that's going to sort of depend on how we gauge the interest in the community. We have some interest at Intel in looking into that, and we're looking at some different open source options right now for the TCP/IP portion. But I haven't heard a whole lot of rumblings on that either. And, again, we're looking forward to our developer meetup in November.
Starting point is 00:41:33 That and the iSCSI work and some of the vhost work, we're expecting to be hot topics for discussion, where we'll gauge interest and see who wants to participate and contribute what. There's a lot of investigation in some of these areas. But, yeah, that's up there. [Audience member] This is my question: does DPDK have a good kind of portfolio for TCP offload devices from different vendors, or is that part of the support for the whole line of stuff?
Starting point is 00:42:00 Yeah, you know, I'm not that familiar with how TCP offload devices might be utilized by user-space drivers. Yeah, I'm not super familiar with that either, so that's something we could follow up on offline with the DPDK folks. Yep. Well, you know, so there was some history there. I don't think that's on these slides, but somewhere in there, I think, Intel decided not to open source it. It was? Yeah, yeah.
Starting point is 00:42:31 History, yeah. I know a bit about that. Yes, it seems there, yeah. Yeah, I know that previously, we had libuns inside SPDK. It was only for NDA customers. But finally, I mean that libuns in SPDK was abandoned by Intel.
Starting point is 00:42:52 The reason is that it was decided that the DPDK team would be responsible for the user-space TCP/IP stack development. So libuns was just stopped. Maybe it is kind of a temporary stop in SPDK, but we don't know whether that will change in the future, yeah. Does that make sense?
Starting point is 00:43:23 Yeah, libuns is what was experimentally used for a little while, but it didn't make it into SPDK with the same license. So if you find it in any of the old materials or anything, that's basically old news. We don't have a user-space TCP/IP stack for SPDK right now. It'd be nice if we did, but we don't. Okay, I think the bar is open, if that's the last question. Now Stephen's chomping at the bit.
Starting point is 00:43:46 Thanks, everybody. Thanks for listening. If you have questions about the material presented in this podcast, be sure and join our developers mailing list by sending an email to developers-subscribe@snia.org. Here you can ask questions and discuss this topic further with your peers in the storage developer community. For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.
