Storage Developer Conference - #133: NVMe-based Video and Storage Solutions for Edge-based Computational Storage
Episode Date: September 2, 2020...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcasts.
You are listening to SDC Podcast Episode 133.
I'm here to talk today about our solution
for NVMe-based video encoding at scale.
We've thrown a lot of buzzwords into the title, because it's really important to call yourself edge these days and to use the words computational storage.
But let's not get too worried about names.
Some people are going to argue whether we're truly computational storage or not, and that's fine.
We don't have to be because we're addressing the market.
We do see a need for computational storage standardization, and I'll get to why.
And we want to be part of that.
But whether we're truly computational storage or not, you can decide.
I'm going to have three phases to my presentation today.
I'm going to first go over the background about video
and what's happening in video, not only historically but today,
and really to show people the need for a really good solution
to allow video encoding at scale,
to be able to grow very fast, very quickly.
I'm going to go into detail on our implementation of it.
Some of it may be a little deeper than we need to go.
Part of the reason for that is because I want people to understand
the kind of challenges that happen when a company like ours
tries to build a product out to the market quick
and try to leverage a storage interface for that
and also the need for computational storage standardization
and how that can help.
And then I'm going to look at some virtualization solutions and other kinds of options that our customers could potentially use to create higher-scale environments than the initial server-based environment that we're starting with.
So starting with the introduction about video encoding at scale,
I'm going to go back in history because I think what I want to try to do
is establish a trend for you to see.
So let's go all the way back to the 60s.
Now, if you remember the 60s, which you probably don't,
well, you may, I don't,
but in the 60s, there were only three channels of television
for all of the United States.
That was it.
If you watched TV at home,
every single person at home had access to just three channels. That's it.
In the 90s, we saw an explosion of channels, from the three in the 60s all the way to maybe 100 channels. And in the early 2000s, we started seeing some personalization of video with the advent of PVRs, if you remember.
In the early part of the 2010s, we saw video distribution going to Netflix.
So we had video on demand and YouTube where people could record their own video.
But we're seeing a trend now, and it's growing extremely fast, faster than you can imagine,
where people are starting to see the next level of personalization.
And that's like Facebook Live, Periscope, Twitch.
I don't know if you have teenage kids, but every parent that I know that has teenage kids all want to be Twitch superstars.
So everybody's creating their own video content. We're going from a small number of streams with large numbers of viewers out to a large number of streams with a very small number of viewers.
So your Twitch channel might have ten people watching it.
Or your live video might have, your Periscope feed might have five people watching it. But each one of these video feeds needs to be distributed. Not only that,
they need to be distributed in all the different formats that the end user might see. So if I
record my video in 1080p30, it needs to be re-encoded into what they call an encoding ladder. So I might need to generate 1080p, 720p, 460, 280. Each one of those feeds needs to be created live. And so it explodes the amount of video that's needed for the end device.
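To make the idea of an encoding ladder concrete, here is a minimal sketch in C; the particular rungs and bitrates are illustrative assumptions, not the ladder the speaker describes.

    /* Illustrative encoding ladder: one live input stream fans out into
     * several renditions, each of which has to be encoded in real time.
     * The rung list below is an assumption, not a product configuration. */
    #include <stddef.h>
    #include <stdio.h>

    struct rung {
        int width, height, fps;
        int bitrate_kbps;
    };

    static const struct rung ladder[] = {
        {1920, 1080, 30, 6000},
        {1280,  720, 30, 3000},
        { 854,  480, 30, 1500},
        { 640,  360, 30,  800},
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(ladder) / sizeof(ladder[0]); i++)
            printf("rung %zu: %dx%d@%dfps -> %d kbps\n", i,
                   ladder[i].width, ladder[i].height,
                   ladder[i].fps, ladder[i].bitrate_kbps);
        return 0;
    }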
And this explosion of demand on the systems for video is really pushing what have been the traditional solutions for video distribution.
So what we're seeing is that people are starting to think about different alternatives for encoding video in different spots in the network.
So you can encode video in the data center centrally, right?
Or you can actually take that main stream
and then send it to your regional data center
and then do your transcoding to create the encoding ladders.
Or alternatively, you can push it all the way to the edge.
First, the bandwidth, the cost of bandwidth on these connections between the data center and the region, or the region and the edge, can be very, very expensive. So these CDNs, these content distribution networks, are really looking at ways of saving
costs. So they want to push the encoding ladder generation further and further towards the edge.
But if you think about that,
the amount of encoding capability you need grows
because now you may need to create encoding ladders
seven different times for the same video feed
because you have to do it in six different edges
or seven different edges. And so that really pushes the video requirements even further. So if we talk
about, you know, video streaming, we may, you know, generate the content at the data center
and distribute it out for video streaming. We also have video surveillance as another application
where we see the needs for video encoding.
And then there's even interactive video
where you might want to have very, very short latencies.
So encoding at the edge becomes extremely important.
So all these cases are stuff we're seeing in our customers' demand.
But what are the solutions today?
Well, the vast majority of video encoding actually happening right now, for all these live videos, is using software encoding, just using Xeon processors and scaling out. There are people looking at GPU solutions or FPGA solutions.
But by far, the most efficient solution for both power and rack space is an ASIC solution,
where you have a custom hardware that's built specifically just for hardware accelerating video.
And the beauty of that is you can basically take what is at least one rack, an entire rack of CPU processing, and compress it down to one 1U server, and that's it.
So when we were talking yesterday
in the computational storage working group
about applications where you can get a 10x improvement,
this is one of them, right?
We basically can encode at scale. We've chosen
NVMe interface, ASIC-based, very high density. And I'll get into why we took this approach
for NVMe-based and storage-based. So we saw this exploding market,
and we needed to take advantage of it.
But we're a small company,
and our customers need to be able to implement things very fast.
So we needed to find a way to get it to market fast,
we needed to get it to scale quickly,
and we needed to be able to leverage all the existing technology
that already exists.
So what technology is out there that already is built to scale?
What servers are designed to be able to scale one particular technology up?
Well, storage.
Storage servers have many slots in them, many interfaces, many PCIe interfaces.
And so it makes a logical conclusion for us to be able to implement it in a storage form factor.
We get to leverage all this existing infrastructure built around hardware infrastructure,
built around scaling storage, and be able to use that for our video encoding.
But not only that, the NVMe interface provides a bigger opportunity as well.
So the NVMe interface has a lot of existing infrastructure in place.
We could develop our own PCIe driver and maybe potentially even make it more efficient than NVMe.
But we would have to be installing our driver
into all these people's kernels.
We'd have to build their trust
that this is something that is reasonable,
not going to corrupt their systems,
whereas we can take advantage of this.
And NVMe, there's so much going on in terms of investment in NVMe. NVMe over Fabrics, NVMe-MI, all these different things; it would be impossible for a small company like us to put all that infrastructure in place for our customers.
Further, we do have an SSD solution. And so we can combine, and do combine, our SSD with our transcoding if people want that too. So it makes logical sense to combine the two and create one product. So what we've done is basically create a combined product. So we have three products.
One's a pure SSD.
One's a pure transcoder.
And one's an SSD plus transcoder.
So the SSD plus transcoder uses multiple namespaces.
Namespace 0, namespace 1.
Namespace 1 is your standard block device: a high-performance, enterprise-class SSD controller.
The other solution... hey, I've got the same problem as Scott: automatically advancing slides. Yeah, I've got to keep up with my slides.
So the other namespace is an encoder-decoder module, right?
So we have both decoder and encoder, hardware encoder, inside our design.
And it's interfaced with specific transactions.
We have our own user space library that we use to...
Now it jumps ahead even further.
Okay, there.
So we built a stack on top of our device. This stack starts with FFmpeg. I don't know how many people here are familiar with video, probably not a lot, but FFmpeg is an open source video library to help with encoding and decoding.
Within that FFmpeg is a library called libavcodec.
It's part of the open source library.
But underneath that is what we call libxcoder, our codec library. So that is a custom library for our device that basically creates an API between libavcodec and our device, to be able to create the commands to the NVMe driver that then create the video transcoding solution.
So all this is open source. It's public.
Many of our customers use FFmpeg, but we provide user documentation to allow customers to interact directly with libxcoder if they want custom applications for video encoding.
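As a rough sketch of where libavcodec sits in that stack, here is how an application might look up and open a hardware encoder through the standard libavcodec API. The encoder name "h264_hw_example" is a placeholder, not the name libxcoder actually registers.

    /* Minimal sketch: opening a named hardware H.264 encoder via libavcodec.
     * "h264_hw_example" is a placeholder encoder name. */
    #include <libavcodec/avcodec.h>

    int open_hw_encoder(AVCodecContext **out_ctx, int width, int height)
    {
        const AVCodec *codec = avcodec_find_encoder_by_name("h264_hw_example");
        if (!codec)
            return -1;                          /* encoder not registered */

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        if (!ctx)
            return -1;

        ctx->width     = width;
        ctx->height    = height;
        ctx->time_base = (AVRational){1, 30};   /* 30 fps */
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;    /* raw YUV in, bitstream out */

        if (avcodec_open2(ctx, codec, NULL) < 0) {
            avcodec_free_context(&ctx);
            return -1;
        }
        *out_ctx = ctx;
        return 0;
    }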
The way we interact with this device is with what we call vendor-specific commands. We call a kernel function called ioctl, I-O-C-T-L as some people say it, which then sends commands to the NVMe driver.
Some people are taking different approaches, and I'll talk about the other approaches a bit later, but the beauty of the vendor-specific commands is they're simple to architect. You know, it's very intuitive in nature. The ioctl path isn't the most optimized, but if you're talking 10,000, maybe 50,000 IOPS, it's not a big deal. It is not perfectly optimized. It does require administrative privileges, and some people may not like using superuser for everything.
The Windows support for vendor-specific commands is just new. I think it was introduced in May.
And it has its own set of issues.
And currently right now, as far as I'm aware, maybe people in this room know more than me,
but when my engineers tried using NVMe over Fabrics,
we had trouble getting the vendor-specific commands to work properly.
So that's probably a feature coming down the pipe later on.
Give you an example.
This is just an example of the kinds of things
that we do with our library.
So we have an open, a close, a query.
So you can just query the device.
You can do writes and reads.
So writes are basically whatever you want to transfer to the device. Say if you're doing a decode, you transfer encoded data to the device, and then for encode, you would transmit uncompressed data to the device. And then reads would be transferring data back: say if you're doing a decode, it would be transferring the uncompressed data back to the host.
So we implement this.
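To give a feel for the shape of that user-space library, here is a hypothetical header sketch; the names and signatures are illustrative and are not the actual libxcoder API.

    /* Hypothetical sketch of a user-space transcoder library API; the
     * real libxcoder interface may look quite different. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct xc_session xc_session;   /* opaque handle to one encoder/decoder instance */

    /* Open an encoder or decoder instance on the device and return a handle. */
    int xc_open(const char *nvme_dev, int is_encoder, xc_session **out);

    /* Query device state (free instances, load, and so on). */
    int xc_query(xc_session *s, void *info, size_t info_len);

    /* Write: send encoded data (for decode) or raw YUV frames (for encode). */
    int xc_write(xc_session *s, const uint8_t *buf, size_t len);

    /* Read: get raw frames back (for decode) or the encoded bitstream (for encode). */
    int xc_read(xc_session *s, uint8_t *buf, size_t len);

    /* Close the instance and free device resources. */
    int xc_close(xc_session *s);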
I'm just going to give some sample commands
just to give you a feeling.
For those who are familiar with NVMe vendor-specific commands,
there's a bunch of vendor fields that you can use.
Generally, we use command words 10 through 15. So a command would start with an opcode. For open, for example, we just use 61, a pretty arbitrary kind of opcode. And then in command word 10, we'll send the configuration data. And then when it completes, it'll tell you which encoder instance was opened.
For write, we have a separate opcode for that.
Again, you put which decoder ID you're writing to, the instance, format, stream.
All that data gets put into one of the command words.
So this is a very clean solution, I think, in terms of using the vendor-specific commands.
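As a concrete sketch of what sending such a vendor-specific command can look like on Linux, here is a minimal example using the standard NVMe passthrough ioctl. The opcode value, the namespace ID, and the use of command word 10 follow the description above but should be treated as illustrative, not as the vendor's actual command set.

    /* Minimal sketch (not the vendor's code) of issuing a vendor-specific
     * "open" command through the Linux NVMe passthrough ioctl.
     * Requires root/CAP_SYS_ADMIN. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/nvme_ioctl.h>

    int xcoder_open_instance(const char *dev, unsigned int config)
    {
        int fd = open(dev, O_RDWR);
        if (fd < 0) {
            perror("open");
            return -1;
        }

        struct nvme_passthru_cmd cmd;
        memset(&cmd, 0, sizeof(cmd));
        cmd.opcode     = 0x61;      /* "open"; the talk just says "61", exact value illustrative */
        cmd.nsid       = 2;         /* hypothetical encoder/decoder namespace */
        cmd.cdw10      = config;    /* configuration data goes in command word 10 */
        cmd.timeout_ms = 5000;

        if (ioctl(fd, NVME_IOCTL_IO_CMD, &cmd) < 0) {
            perror("NVME_IOCTL_IO_CMD");
            close(fd);
            return -1;
        }
        close(fd);

        /* The completion reports which encoder instance was opened. */
        return (int)cmd.result;
    }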
Oh, go ahead.
I'm sorry, I don't understand the vendor-specific commands. You see a lot of them in block, but they're block-level, right? So if you have a view of a file, what about the translation of where the blocks are? Do you need the blocks?
So this is not really block-based.
So what we actually do is we send the PRPs directly to the device.
So we have a memory buffer in the host system that contains the frame, right?
The encoded frame or unencoded frame.
And then we pass to the device the PRP for that memory space. And then we
do a DMA of that memory space directly to our device. So it's not block-based. We just
use the NVMe interface. For the encoder-decoder part of it, it's not block-based at all. We could take an I/O command approach, and
some people are using this for computational storage. They're transferring data using the
blocks and just doing read-writes. This is much more efficient in the kernel. They've
optimized this path significantly. It's low latency, high priority.
But this is a real kind of hack job.
There are no free command words in there, the command words 10 through 15 that we have.
We can't send that kind of configuration data with it.
So we kind of have to hack the LBAs, maybe use LBA 0 to 15 for one thing and 16 to 25 for another thing.
It's not a very clean solution. And since we didn't need the absolute optimal performance,
we've gone with the vendor-specific command. So one of the things we found is that when you
start using these kinds of libraries outside of what they're meant for, you get a few little surprises, right? When we were trying to optimize our performance, we found that memory movements, memory copies, are obviously the most critical thing for our performance. When you're talking about YUV data, which is the raw uncompressed video, there's a lot of data that flows through. And what we found is that because we're not using block-based transfers, we weren't actually transferring in 512-byte segments. And for whatever reason in ioctl, if you don't send your data 512-byte aligned, it gets copied from user space to kernel space, which is an extra copy, and that was killing our performance. So there are subtle little things like that, right?
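The practical workaround that falls out of that observation is to keep the frame buffers 512-byte aligned so the ioctl path can DMA straight from user memory; here is a minimal sketch, assuming the buffer holds one 8-bit 1080p YUV 4:2:0 frame.

    /* Sketch of allocating a 512-byte-aligned frame buffer to avoid the
     * user-to-kernel bounce copy described above. Sizes and names are
     * illustrative. */
    #define _POSIX_C_SOURCE 200112L
    #include <stdio.h>
    #include <stdlib.h>

    #define FRAME_BYTES (1920 * 1080 * 3 / 2)   /* one 1080p YUV420 8-bit frame */
    #define NVME_ALIGN  512

    int main(void)
    {
        void *frame = NULL;

        /* posix_memalign guarantees the requested alignment, unlike malloc(). */
        if (posix_memalign(&frame, NVME_ALIGN, FRAME_BYTES) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }

        /* ... fill the buffer with YUV data and pass its address and length
         * to the vendor-specific write command (cmd.addr / cmd.data_len) ... */

        free(frame);
        return 0;
    }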
So the next thing is, when we talk about having an SSD and video in one unit, they do compete for resources. Now, we could have over-designed
our hardware so that there was an infinite number of resources and the SSD and the video would never compete.
But realistically, when you're talking an embedded system,
that's very difficult to do.
So we don't actually have enough resources to be able to give full SSD performance
and full video transcoding at exactly the same time.
So we've had to come up with some solutions for that.
We've come up with different categories of transfers, right? So live video, real-time video, is our highest priority. Basically, if you're watching your friend eat their soup, you don't want it to be jittery, right? A jittery live stream of someone eating soup is pretty nasty to see; it's not good. So live video is our number one priority. SSD comes underneath. And we
basically cap the amount of live video that we allow at a threshold where we can still guarantee good quality of service on the SSD side. So as long as you stay within the live video limit, the SSD performance that remains is good. I mean, it's still not quite our pure SSD solution, which is a bit higher performing. For example, our write throughput is one gigabyte per second for this product versus one and a half gigabytes per second for the pure SSD product. So you lose a bit of write throughput, but overall it's still a pretty good performing device.
And then the remaining processing, done on kind of a best-effort basis, is just non-real-time encoding. So we have that as a lower priority task.
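Conceptually, that scheme reads like a three-level admission policy; here is a toy sketch of the idea in C. The class names and the admission logic are illustrative only, not how the device's resource monitor is actually implemented.

    /* Toy sketch of the three traffic classes described above and a
     * possible admission order. Purely illustrative. */
    #include <stdbool.h>

    enum traffic_class {
        TC_LIVE_VIDEO,      /* highest priority: real-time encode/decode  */
        TC_SSD_IO,          /* next: block I/O with a guaranteed share    */
        TC_NON_REALTIME     /* best effort: non-real-time encodes         */
    };

    /* Live video is admitted up to a fixed budget so that the SSD can still
     * be given good quality of service; non-real-time work only gets the
     * capacity that is left over. */
    bool admit(enum traffic_class tc,
               unsigned live_in_flight, unsigned live_budget,
               unsigned units_in_use, unsigned total_units)
    {
        switch (tc) {
        case TC_LIVE_VIDEO:
            return live_in_flight < live_budget;
        case TC_SSD_IO:
            return units_in_use < total_units;
        case TC_NON_REALTIME:
            return units_in_use + 1 < total_units;   /* leave headroom */
        }
        return false;
    }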
Now, when we structured this,
we would have loved to set up our queues like this.
So we have our NVMe host,
and you can just replicate this over the number of cores, ideally,
but we'd have a queue for the non-real-time video,
a queue for the SSD, and a queue for the live streaming video.
And then we have on our device a resource monitor that helps to basically throttle the
traffic between the different options to make sure that the quality of service on both the
live video and SSD are met, and then we can just use whatever remaining video is left.
And we could take advantage of the host queues to be able to prevent head-of-line blocking.
For example, if I'm putting a lot of non-real-time video in one queue and it's not getting through,
it can't fill the queue and block it.
But the default NVMe drivers don't do this, as everybody knows. They do this, which
is a lot messier. I mean, we could use the SPDK or something and maybe come up with our
own custom queues, but right now we're just using the standard NVMe driver queues. And
you'll see that here, every CPU just gets one IO queue. That's it, right?
So we must share that queue with all the different processes.
And so what we have to do is empty all the queues into our device,
create our own internal queues,
and use our libraries essentially to limit the amount of traffic
so that we don't overload our own system.
So it's not ideal, but it's how you work with these things.
And I'll get to how we can improve things.
Now, a question we get a lot is, why don't you just go directly to storage?
Why don't you just take your video and just throw it on storage?
And the real challenge we have is that all our customers' applications, all the infrastructure, are file-based. And how do I get from that to the LBAs on my drive, so my device understands where the codec is supposed to put that data? It's a very complicated process. Further,
a lot of times people don't even want to put the video
on one drive. Maybe they want to use RAID and stripe it across
many drives. In which case, how do I deal with that?
There's another factor here that people don't realize. They think, oh, video, it's super high bandwidth. It's super crazy. Okay. So our
devices do encode eight streams of 1080p30. A very high quality 1080p30 stream might be 16 megabits per second. That's like very, very high quality, and it's 2 megabytes per second.
So that's 16 megabytes per second per card.
And we have a host server that does 10 of them.
So that's 160 megabytes per second of encoded video.
That is really not very hard for a storage system to handle, right? It's really not a lot of bandwidth, so it's not a big priority to try to find a way of lowering that. Now, the uncompressed video, if people wanted to store that, that's what's really important. That's probably more like 200 megabytes per second per stream. So that's where all our bandwidth is. But most people don't want to store uncompressed video. They always store the compressed video. So this solution here,
while elegant and neat, and my marketing guy's like, we can't introduce a product that doesn't
have this. I'm like, our customers are not going to care, right? And they are going to care if we go changing their file system on them, right?
So we don't have this feature.
I don't think it's that important.
But maybe one day we'll try to introduce it.
But why do we need computational storage?
Now, I've gone over some of the quirks and the weird things that happen. I'd just like to see us sort of unify around,
you know, some of the key aspects of computational storage,
basically using a storage interface
for applications other than storage.
I don't like to call it computational storage myself
because I'm using the storage interface
for something that's not even storage at all.
Our transcoder product alone cannot store information.
But there's big advantages to using that storage interface for that.
And I would like to be able to take advantage of that a bit more.
I think we've seen some presentations today really stressing, you know, the value that it can provide.
And I hope going into the detail
of one particular implementation of this
can show, you know, exactly the key aspects.
We could probably do something better with queues.
Does it make sense to have one queue per core
if you're doing computational storage?
Does it make sense to be able to do identification, for me to identify my device to the host system properly? Right now, if I use the default nvme-cli and I do an nvme list, my computational storage device, my transcoder, is going to appear as, I think, a two gigabyte SSD, which clearly it's not.
I'd like to be able to have that not look like a hack approach.
And maybe one day we can provide hooks to be able to allow file-based access
or some sort of maybe key value or some aspect that makes it easy for me
to translate applications into being able to directly store onto my drive.
If we go beyond this, I can look at some options that we've looked at.
Now that we have an NVMe-based solution,
and we can take advantage of all the existing infrastructure,
we can actually propose even more scalable solutions than if we had a custom driver or a custom system.
For example, what I want to call a JBOT: just a bunch of transcoders.
Just have a box.
All it's got is transcoders.
Now, there's two ways you could implement this.
You could use NVMe over fabrics,
or you could use the kind of solution
that Supermicro was presenting the other day
where you just have a PCIe switch
and then have it connected with mini-SAS cables
to the host system.
But the main thing is you want to be able to sort of separate the storage,
the transcoding, and the compute
because our customers may need a different amount of compute
depending on their application.
So a lot of times you may do a straight transcode, like 1080p to 1080p.
So you may come in with 1080p at like 16 megabits per second, and then you may say, I want a lower quality, lower bandwidth version, so I might go down to two megabits per second. That requires a very small amount of compute. Other people may need deinterlacing. It's surprising how much interlaced video is out there right now. And if you want to deinterlace a video and not have it look like really bad quality video,
it's actually quite a bit of CPU.
Not nearly as much as encoding, but when we're talking about how much we cut down the CPU usage,
it's quite high.
And scaling also is a CPU user.
And so that costs us a lot of CPU.
So that variability in compute would mean that it makes sense
to kind of separate the compute and then allow different applications
to be able to sort of optimize their compute
to transcode ratio, rather than putting everything in one host system, where you've got a set amount of compute for a set amount of transcode.
And people rolling this out, these transcoders are rolling out in a very large scale, right?
So we'll have customers that are buying, you know, tens of thousands of units
of these things. So optimizing that is important for them. The other option that we
can provide our customers is to be able to use virtualization. So you can use a hypervisor to be
able to expose the NVMe device to a virtual machine.
And we've tried this out.
It works well.
And we've checked performance.
And actually it performs just as well through a hypervisor as it does with a direct connection. I don't know, I always worry when you add layers that there's the potential for a problem, but no, no, it was fine.
The other option is to use containers.
We've done this as well.
So we've been able to use Docker
and be able to implement our encoder
and actually have it running within a Docker
container. This would be another option
for our customers to be able to roll it out.
Then
finally, another potential
option for the future
is to be able to use SR-IOV and
essentially split our encoder up.
Say we can do eight streams, but maybe one virtual machine only wants to do four.
So we can essentially cut it in half
and be able to provide four to one virtual machine,
four to another using SR-IOV.
So, I mean, basically I've covered, you know,
the real demand in video and how video has been growing over the last, you know, 50 years.
And we're seeing this explosion of live video.
Ultimately, we presented a solution using the storage interface to be able to allow our customers to scale that video very efficiently, very quickly,
and be able to address a very fast-growing market. And finally, you know, we provided some options
for virtualization for our customers to be able to scale out and compartmentalize their
delivery for different opportunities. And that's it. Any questions?
[Audience question, not picked up by the microphone.]
Yeah, I mean, basically, from our perspective,
the vast majority of the hardware is shared between the SSD and the transcoder.
So you've got the same enclosure, you've got the same PCB,
you've got the same devices on there.
So all we have to do is add NAND, and we've got ourselves an SSD on there. So it's a space-saving, essentially a physical area-saving feature.
Plus, we're using up all the slots on the host system for the transcoder.
We want to be able to take advantage of that.
So if you want to put storage on there, now you've got to use one of the slots up for storage.
So if we can allow that
slot to be used for both storage and
transcoder, we can really be efficient
for physical area.
Yeah, I see that this is like a
packaging concept
where you're taking advantage
of all the PCI slots that
are included in the server.
Yeah.
[Audience question, partially inaudible, about fitting 24 devices versus 10 in a server.]
So the 1U, I mean, we're at U.2 form factor right now.
So, I mean, there are servers that do more than 10 in 1U, but the vast majority are 10.
And then 2U, it's like 24, right?
So I can understand the...
But I'm still wondering why... it could have been something other than NVMe and still used the same packaging.
I mean, essentially, the interface is PCIe. So we could have done something different. But like I said, then we don't get to leverage all that exists, all the infrastructure that already exists for NVMe.
[Audience comment, inaudible.]
Pardon me? I guess I'm not following the question.
[The audience member restates the question; most of it is inaudible, but it touches on the packaging, changing the software stack, PRPs, and running over a fabric.]
It seems to me there's not a high amount of... [inaudible].
Yeah, I mean, most customers do some processing on the video. Either the uncompressed video or even the incoming streams,
they can split the streams, they can change the audio,
they can do different things.
So generally you have to come into a host system anyways.
You have to come on to the processor.
You have to take that video on.
And we decompress it, and then we send the uncompressed video back to the host, and the host can do what it wants with it, right? It will add overlays, it will scale it, it will do deinterlacing, all these different things, and then it'll send the video back to us to be compressed and sent back out again. So could you do it on a NIC? I know, I think at FMS I talked to one of the smart NIC companies, and they were like, yeah, we could do video. But I think the scale of video that they're talking about is nowhere near where we are, right? We're talking about 80 1080p streams per one 1U server. That's just out of the picture for a smart NIC to even think about doing, right?
Really, for us, we're focused on the hardware encoder solution, and our interface goes up to the FFmpeg level. Most of our customers have their own proprietary solution for both taking in the video and taking it out.
So it really varies a lot.
And some customers don't even use FFmpeg, right? They use something special, and they have to basically work with our libraries to be able to interact with it, right?
But actually, I would say probably 95% of our customers
use FFmpeg, so it does cover it.
But that's sort of the boundary of what we own, and that's it.
You had a question?
Just working on some of the scatter-gather aspects for computational storage.
Yeah, yeah, I'd like to know more about that. We're using PRPs right now, and I'm sure scatter-gather is probably much more efficient.
Yeah. The traditional model is a host-push type model, where you build the list and you go tell the device. There's also a device-pull model, where you send a DMA descriptor saying, here's where you can get the data out of host memory, and then let the device manage that. Of course, that doesn't necessarily get you to a product that works over a fabric, so it gets more complicated. That's the sort of stuff we're looking at, and having these sorts of real-world, working, customer-deployed products is a great way to validate how we're doing, as far as I'm concerned.
You don't actually want to find out somewhere down the line that the spec's broken. Yeah. Yeah, and...
Oh, go ahead.
Yeah, I've seen a lot of talk on that.
And, yeah, to be honest, we haven't, you know,
we haven't really explored it that much, how it could help us.
I think it's going to be able to do it.
Okay.
Using the weighted and priority queues that are available?
That would need to be with SPDK, right, to be able to do that.
That's the main thing.
Right now, I don't do that.
Yeah. Yeah, so I'm going to, you know, our main product that's actually in production right now
is the transcoder-only product.
Our transcoder plus SSD is announced but not production,
so we are still optimizing that particular product.
Thanks for listening.
If you have questions about the material
presented in this podcast,
be sure and join our
developers mailing list by sending an email to developers-subscribe@snia.org. Here you
can ask questions and discuss
this topic further with your peers in the Storage Developer community.
For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.