Storage Developer Conference - #124: Standardization for a Key-Value Interface underway at SNIA and NVM Express
Episode Date: April 14, 2020...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference.
The link to the slides is available in the show notes at snia.org/podcasts.
You are listening to SDC Podcast
Episode 124.
Good morning and welcome.
Thank you for coming to SDC.
And thank you for coming to hear about what's going on with key value storage in the standards arena,
both at SNIA as well as at NVMe.
I am Bill Martin.
I am from Samsung.
I also happen to be the co-chair of the
SNIA Technical Council and a representative on various standards committees. But I'm here today
to present predominantly from an industry standards point of view on the standardization
efforts that are underway for key value interfaces.
I do have to, as a member of Samsung, provide a disclaimer that nothing I say can be held against me or my company.
So with that said, we will move on to what is key value.
Key value is a mechanism that is different from your traditional block storage
paradigm: you store user data associated with a key. That key can be anything you want it to be.
It could be a URL. It could be simply a name. The key may be any length, at byte granularity.
The value, which is your user data, may be of any length.
It is also of byte granularity.
Now, with a caveat here, that's the concept, not necessarily the practice in terms of length. So this is a definition, but it's not the definition of what's being standardized today,
and we'll talk about a little bit more what we're standardizing, where we see ourselves
going in the future, and how we plan to get there.
So what are the differences between KV and block interfaces? Well, in the block
interface, most of you are probably very familiar with block interface since you are storage
developers. But user data is in multiples of a block length. The block length will be different
on different devices, but nonetheless, it's in multiples of that block length.
Logical block address is a number from zero to a maximum logical block address on your device.
Now, various devices may have thin provisioning where you don't have enough storage for all your logical block addresses, but nonetheless you're limited to
a logical block address that fits within some fixed space. With key value, you have variable
length user data. That can be one byte in theory. Storing one byte may not be the best thing to be doing,
but in theory it could be a byte.
It can be a megabyte.
There is no definition of it in terms of blocks.
It's simply defined in terms of bytes, how many bytes you have. The key which addresses your data is also variable length,
and it is an identifier. And so instead of being stuck to a
logical block address that is of some fixed size, you have a key. Now, it doesn't mean that if you
took a key that is 16 bytes long and went and started at the number zero and went to the number all Fs for that 16 bytes,
that there is storage for all of those keys.
So it is inherently a thinly provisioned device where you have a lot more potential keys that you could define
than what you can actually store on the device.
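As a rough illustration of how thinly provisioned that is: a 16-byte key allows 2^128, or roughly 3.4 x 10^38, distinct keys, vastly more than any real device could back with physical storage.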
So differences between object and KV interface.
This is a little bit more subtle.
There's been lots of things out there about object interfaces,
and why are we doing something that isn't an object interface?
Well, KV is a tool to facilitate object storage,
but it's not a full-scale object storage.
In particular, keys are not required to be ordered
or be able to be retrieved in an ordered fashion. In other words, if you took and did some
alphanumeric ordering of your keys and wanted to find all of your keys, key value doesn't provide
for you to do that in some alphanumeric order. It will give them to you in a repeatable order
as long as you haven't stored something new or erased something
from the device. So if there's no change in state, they will always come back in the same
order, but there's no requirement for being able to retrieve your keys with some fixed
alphanumeric ordering scheme. It's also not searchable based on the contents of the value.
So I can't say to my key value device from an upper layer,
go find me all of the key value pairs that have this particular social security number in them.
It's not intended for that level of object storage or object searching.
So with object storage, the keys can be requested in some form of order.
There is more functionality to search the objects. In other words, you can
ask your object storage for something that has a given value within it.
One of the other key differences between key value and object storage is that object storage generally supports logging or other mechanisms to maintain your database integrity. Key value is an underlying technology to help support object storage,
but the logging to make certain that things are there, to make certain that there is
database integrity, not data integrity per se, but that if you stored a whole bunch of things that you are consistent in the event
of a power failure, those types of mechanisms are in the software above key value that actually
runs your object database.
Now, that doesn't mean that you don't have some level of atomicity. You do with what we're defining have atomicity of the key value
pair. So an individual key value pair, if you store it and you get a power fail in the
middle of it, you either get all your old data or you get all your new data. But there's
no logging to come back and tell what have you
or what haven't you done or a replay mechanism.
So what are the layers of the key value SSD system?
So you've got the application layer.
That application layer is talking to a key value API.
And we'll talk a lot more about that because the example here is the SNIA key
value API, which is half of what this talk is about. Then there is a library that talks to a
key value protocol host interface. And then that talks across a key value wire protocol.
The example I've shown up here is an NVMe KV command set. Again, that's the other
half of the talk. That's where we'll spend the other half of our time. And then at the bottom
end, you have a key value SSD or a key value device. I put SSD because I work for an SSD
company. I don't work for a spinning rust company. You could have a key value piece of spinning rust. So again,
that's an example, but basically you have a device that understands the key value wire protocol
and can store things appropriately. So first I'm going to talk about the SNIA KV API status.
The first half of this talk is going to talk about the KV API.
So the status is that version 1.0 has been approved.
It is publicly available; there is a link on the slide where you can get it from SNIA.
In addition to that, there is open source code available on GitHub at that particular location. The slides are available online for you, so all
of this is there so you can easily get at those links. It allows library calls independent
of the underlying transport. So what that means is if I'm talking key value over NVMe
or key value over SCSI
or, heaven forbid, key value over SATA,
the API will be the same for all of those.
So what does the API define?
First, I want to talk about structure.
So it has a multi-level structure.
It has a key space, which is the equivalent of a namespace in NVMe,
but it is a space for which, in that entire space, your keys are unique.
So if I have two different key spaces, and I have the same key,
they have different values potentially attached to them.
They are not unique across multiple key spaces.
So there is a uniqueness there of your keys within a particular key space.
There is a key group. The key group is unique within a key space.
It is a group of key value pairs, and those key value pairs are enumerated by some portion of your key and a mask associated with that portion of the key.
So you could say all keys that have this group of bits set to some value would be a key group.
So that is a structure that allows you to identify a group of keys within a key space.
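To make the key group idea concrete, here is a minimal sketch in C; the type and function names (kv_key, key_in_group) are illustrative assumptions, not identifiers from the SNIA KV API. A key belongs to a group when its masked prefix matches the group's bit pattern.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical key representation: variable length, byte granularity. */
    typedef struct {
        const uint8_t *bytes;
        size_t         len;
    } kv_key;

    /* A key group is defined by a bit pattern and a mask over a prefix of the
     * key.  A key is in the group when (key & mask) == (pattern & mask) for
     * every masked byte. */
    static bool key_in_group(const kv_key *key,
                             const uint8_t *pattern,
                             const uint8_t *mask,
                             size_t mask_len)
    {
        if (key->len < mask_len)
            return false;
        for (size_t i = 0; i < mask_len; i++) {
            if ((key->bytes[i] & mask[i]) != (pattern[i] & mask[i]))
                return false;
        }
        return true;
    }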
The key value pair is the fundamental building block that we are talking about,
which is a key associated with a value. In addition to that the API
defines a structure about device information which is characteristics of
the entire device which includes all of your key spaces. So this would be how
much storage do you have for all key spaces?
How many key spaces do you support?
From a device point of view, what's your maximum key size? What's your key granularity? What are your optimal value sizes, so that you can attempt to store things optimally on the device?
What is your value granularity?
The value granularity is an interesting characteristic in that when you are storing a key value pair,
you may have a value granularity that is, say, 2k bytes. If I attempt to store a value that is
one and a half k, I may actually utilize 2k worth of my resources because I don't want to fragment
my physical back-end storage to accommodate these little chunks. Part of what this does is avoid some of the write amplification on an SSD, in that you're not trying to do a lot of garbage collection. When you store a key value pair, if you work within that optimal granularity, then when I'm ready to delete it
and write something else, it's a single chunk or multiple individual chunks of my memory
space, so I delete them completely and I don't have to do garbage collection to say, oh,
in this physical area, I have part of this key and part of some other key
that are both in that one physical area. If you follow the optimal granularity sizes,
then you don't have that issue. And then there's key space info, which is the characteristics of a specific key space. That will have your information about how large your key space is,
how many keys are currently allocated within that key space,
how much storage do you have available, et cetera.
So that would be specific to a key space that you have created within the device.
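As a rough sketch of the kind of information being described, assuming illustrative struct and field names rather than the ones the published SNIA KV API actually uses:

    #include <stdint.h>

    /* Illustrative device-wide characteristics (not the spec's actual struct). */
    struct kv_device_info_example {
        uint64_t total_capacity_bytes;  /* storage across all key spaces      */
        uint32_t max_key_spaces;        /* how many key spaces are supported  */
        uint32_t max_key_len;           /* device-wide maximum key size       */
        uint32_t key_granularity;       /* key length granularity (bytes)     */
        uint32_t optimal_value_size;    /* preferred value size for the media */
        uint32_t value_granularity;     /* allocation unit for values (bytes) */
    };

    /* Illustrative per-key-space characteristics. */
    struct kv_keyspace_info_example {
        uint64_t capacity_bytes;        /* size of this key space             */
        uint64_t free_bytes;            /* storage still available            */
        uint64_t key_count;             /* keys currently allocated           */
    };

    /* A value that is not a multiple of the value granularity still consumes
     * a whole number of granules on the media, e.g. a 1.5 KiB value on a
     * 2 KiB-granularity device consumes 2 KiB. */
    static uint64_t consumed_bytes(uint64_t value_len, uint64_t granularity)
    {
        return ((value_len + granularity - 1) / granularity) * granularity;
    }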
So then what does the API define in terms of access?
It defines a store operation, which is fairly simple.
You store a key value pair. It defines a retrieve operation where you retrieve a key value pair
based on a key that you give to it.
It defines a delete operation to delete a single key.
It defines a delete group operation, which goes back to the concept that I mentioned earlier of key groups,
where a key group is keys that are defined by some portion of the key matching
some particular pattern. And it allows you to delete all the key value pairs in that
key group. It has an exist function to say I want to query the device and ask does this
particular key exist in the device? The exist function is
written in a way that you could actually pass it an array of keys. So you may ask if a whole
set of keys happen to exist, and you get a response that indicates on a per-key basis whether those exist.
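A minimal usage sketch of those access operations, with hypothetical function names (kv_store, kv_retrieve, kv_delete, kv_exist) standing in for the real SNIA API calls, which additionally carry device and key space handles, options, and asynchronous completion context:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Prototypes only; a real program links against the KV library. */
    int kv_store(const void *key, uint32_t key_len,
                 const void *value, uint32_t value_len);
    int kv_retrieve(const void *key, uint32_t key_len,
                    void *value_buf, uint32_t buf_len, uint32_t *value_len);
    int kv_delete(const void *key, uint32_t key_len);
    int kv_exist(const void *key, uint32_t key_len, int *exists);

    static void example(void)
    {
        const char *key = "object/0001";
        const char *val = "hello key value";
        uint8_t  buf[4096];
        uint32_t out_len = 0;
        int      exists  = 0;

        kv_store(key, strlen(key), val, strlen(val));            /* store a pair   */
        kv_exist(key, strlen(key), &exists);                     /* does it exist? */
        kv_retrieve(key, strlen(key), buf, sizeof buf, &out_len);/* read it back   */
        kv_delete(key, strlen(key));                             /* remove it      */
        printf("exists=%d retrieved %u bytes\n", exists, out_len);
    }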
It provides a list function.
That list function follows the rules that I said earlier, in that your list will be consistent
from one call to another, assuming you have not done any store or delete operations
in between the two,
but it is not required to be in any sort of alphanumeric ordering.
Now, one of the things about the list operation
is it needs to be consistent from one time to the next
because if I cannot buffer all of the keys when I do a list,
I need to have a mechanism to go out and ask for more keys.
And so there is a parameter passed in which is a key.
So if I do a list of keys and I pass in all zeros,
so I say start at the beginning of whatever your list looks like, whatever the data structure
you store is, I return a set of keys that fill up whatever buffer I've allocated for
that. If I then take and do another list, starting by asking for the last key that I
retrieved in the previous list, it will continue on from there and give me the rest of what's out
there. And that way I can walk through and get a list of everything, independent of whatever buffer size I'm passing in. Then there's an iterator, and we'll move into what the iterator function does.
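First, a rough sketch of the resume-from-the-last-key list walk just described; kv_list and the fixed-size key buffers are illustrative assumptions, not the API's actual signature. The walk is only complete and duplicate-free if no store or delete happens in between calls.

    #include <stdint.h>
    #include <string.h>

    #define MAX_KEY_LEN 16
    #define LIST_BATCH  256

    /* Hypothetical list call: fills 'keys' continuing from 'start_key' (an
     * empty start key means begin at the start of the device's list order),
     * returns how many keys were filled, 0 when there is nothing further. */
    uint32_t kv_list(const uint8_t *start_key, uint32_t start_key_len,
                     uint8_t keys[][MAX_KEY_LEN], uint32_t key_lens[],
                     uint32_t max_keys);

    /* Walk every key on the device a buffer-full at a time, restarting each
     * list call at the last key returned by the previous one. */
    static void walk_all_keys(void)
    {
        uint8_t  start[MAX_KEY_LEN] = {0};  /* illustrative "start at beginning" */
        uint32_t start_len = 0;
        uint8_t  keys[LIST_BATCH][MAX_KEY_LEN];
        uint32_t lens[LIST_BATCH];

        for (;;) {
            uint32_t n = kv_list(start, start_len, keys, lens, LIST_BATCH);
            if (n == 0)
                break;
            /* ... process keys[0..n-1] ... */
            memcpy(start, keys[n - 1], lens[n - 1]);  /* resume from last key */
            start_len = lens[n - 1];
        }
    }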
So the iterator function is a mechanism to enable a device to prepare a key group of keys
for iteration by matching a given bit pattern.
So what this does is, if I know what my key group is,
this function actually allows me to go down to the device and say,
by the way, here's the key group that I want to have.
It allows the device to then say, oh, so that's your key group.
I need to create data structures to support that key group.
So if you've already done a whole bunch of stores, I need to go through and figure out which of the things you've stored on the media fall into that key group, so that when you want to do something with it, I'm not trying to walk my list to find all of those pieces of key value data that happen to match that particular key group.
So it's a preparatory mechanism for the device to prepare the data for whatever you're doing with the key group.
It also is a mechanism to go out to the device and say, as you do future stores, you need to be aware of key groups and continue to update your data structures for commands that are going to come down later and reference those key groups.
So the iterator function is something that first off allows you to prepare the device, and then it allows you to list or delete key value
pairs within that key group.
So the iterator function is what allows you to then say, okay, I want to list my keys,
but I only want to list the keys that fall within this particular key group.
Or I want to delete all of the keys in this key group.
No.
The same ordering requirements that I already specified apply. There's no ordering requirement, but if you look at it two times in a row without doing an intervening store or delete, then it will be consistent. Now, a device could be built that gave you ordering, okay? But that's not part of the API.
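A hypothetical sketch of that iterator flow: tell the device about the group (pattern plus mask) so it can prepare its data structures, then list or delete within that group. The names are illustrative, not the API's real identifiers.

    #include <stdint.h>

    typedef struct kv_iterator kv_iterator;   /* opaque handle, illustrative */

    /* Prepare the device for a key group defined by pattern/mask; the device
     * may need to scan existing keys and will track future stores against
     * the group. */
    kv_iterator *kv_iterator_open(const uint8_t *pattern, const uint8_t *mask,
                                  uint32_t mask_len);

    /* Enumerate (or delete) only the key value pairs that fall in the group. */
    int  kv_iterator_next(kv_iterator *it, uint8_t *key, uint32_t *key_len);
    int  kv_iterator_delete_group(kv_iterator *it);
    void kv_iterator_close(kv_iterator *it);

    static void delete_session_keys(void)
    {
        /* Example group: all keys whose first four bytes are "sess". */
        const uint8_t pattern[4] = { 's', 'e', 's', 's' };
        const uint8_t mask[4]    = { 0xff, 0xff, 0xff, 0xff };

        kv_iterator *it = kv_iterator_open(pattern, mask, sizeof mask);
        if (it) {
            kv_iterator_delete_group(it);   /* drop every pair in the group */
            kv_iterator_close(it);
        }
    }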
So, now I'd like to move on. That's the extent of what we have done currently in the key value API.
The one other thing I want to throw in as one final caveat, which maybe belonged at the beginning: there are a couple of things that, as we have gotten further along the development path, we found are going to need some enhancements or improvements. In particular, the key value API allows for asynchronous commands, and we're missing a couple of parameters that need to be passed for that, so there will be an update to the key value API.
However, I don't expect that to come out until after we get fairly well completed
with the NVMe key value command set.
And at the end, I'll talk about schedules related to that.
So we have a version out of the API.
There are some things that are known already that need to be put into a next version of it.
But overall it is fairly solid.
So within NVMe, we are working on a key value command standard,
and it will be a separate specification from the NVMe base specification.
Right now within the committee, that piece of the standard is pretty close to done,
but it relies on other work in NVMe to provide supporting structure for NVMe-KV.
It happens that that underlying structure is also necessary to facilitate zoned namespaces.
So the structure that's under there is going to support both of those concepts,
and there's two basic things that are being provided there.
One is defining namespace types.
So right now, everything is of one type, which is NVMe or NVM.
So the command set that's currently defined in the NVMe base specification is the NVM command set, which in reality is the logical block command set.
So this will support a logical block command set, i.e., the NVM command set, a zoned namespace type, and a key value namespace type.
So a namespace is of a given type, and therefore it also only supports one command set.
So I cannot have a single namespace that supports both a key value command set and a block command set. I can't take a particular namespace and send it a store command or a write command
and expect it to operate on both of those.
It will only operate on one command set.
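From the host side, "one command set per namespace" might look something like the following sketch; the enum values are placeholders, not the identifiers NVMe assigns.

    #include <stdio.h>

    /* Placeholder command set identifiers; the real values are assigned by
     * the NVMe specifications for namespace types. */
    enum ns_command_set {
        NS_CMDSET_NVM       = 0,   /* traditional logical block command set */
        NS_CMDSET_KEY_VALUE = 1,   /* key value command set (placeholder)   */
        NS_CMDSET_ZONED     = 2    /* zoned namespace command set (placeholder) */
    };

    /* A namespace is of exactly one type, so the host must pick the matching
     * command set: a KV namespace gets Store/Retrieve, a block namespace
     * gets Read/Write, never both on the same namespace. */
    static void issue_io(enum ns_command_set cs)
    {
        switch (cs) {
        case NS_CMDSET_KEY_VALUE:
            printf("use Store/Retrieve/Delete/List/Exist\n");
            break;
        case NS_CMDSET_NVM:
            printf("use Read/Write\n");
            break;
        case NS_CMDSET_ZONED:
            printf("use the zoned namespace commands\n");
            break;
        }
    }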
So the other piece of work that is the more complicated piece of work within the NVMe base specification
is to separate out how we deal with multiple command sets. And that's a matter of figuring out things that were written for an I/O command set but were written to be specific to the NVM command set. A number of those things may actually need to be modified to say, well, these could apply to any I/O command set, whether that's logical block, zoned namespace, or key value. So it's a matter of sorting out all of those things and getting them separated out so that we can talk about things
that apply to a single command set, things that apply to multiple command sets that are not part
of the administrative commands. The administrative commands in NVMe should remain the same. I see
no reason that we will be changing any of the administrative commands
with the exception of how you administer namespaces.
So the expectation is to release for membership review sometime later this year,
all three of those things that I've talked about,
which are namespace types and multiple command sets, the key value command set, and the zoned namespace command set.
I anticipate by the end of this year
we should be moving into membership review on all of those.
So, namespace types and multiple command sets: key points, some of which I've touched on already. One command set per namespace type; I already talked fairly much about that.
The NVM command set is traditional block storage.
The key value command set is currently being defined
within the working group.
So if you are an NVMe member, come participate.
It's interesting work.
And if you're not an NVMe member, come join NVMe.
The zoned namespace command set is also being defined within the NVMe working group.
Same thing.
If you're interested, come participate.
So features of the key value command set.
Okay.
This is where I get into some of the limitations that I mentioned early on.
While we talk about a key being any length, being variable length,
for the first pass at the KV command set, the key is limited to 16 bytes.
Okay, this is a limitation that comes out of the standpoint that you
have a certain command size that has been defined in NVMe. We could go to a larger size,
but then all commands require that larger size. Doesn't sound like the place we want
to go, at least for the first implementation. Most of the customers we've talked to
are okay with a 16-byte key.
So that fits within the existing command structure
without impacting other aspects of that.
So currently the key is limited to 16 bytes.
Another thing, and this is actually an overarching point that probably should have been up in the header material before we even got to the API: a key of four bytes cannot match a key of eight bytes,
even if the eight byte key has padding. So what that means is, if I use this from a numeric point of view and I have a key that is AD8790EF,
that cannot match a key that is 00000000AD8790EF.
Those two are different keys, even though they may numerically look the same, because they're
of different length, okay? So we're trying to make certain that people understand that you can't
just take those two things and match them up and say, oh, but these are the same even though
they're different lengths. Your key length is a part of what determines what the key really is.
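A small sketch of that point about key identity: the length is part of the key, so a 4-byte key never equals an 8-byte key, even when the extra bytes are zero. The helper name is illustrative.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Two keys are the same key only if both the length and every byte match;
     * zero padding does not make a shorter key equal to a longer one. */
    static bool kv_key_equal(const uint8_t *a, uint32_t a_len,
                             const uint8_t *b, uint32_t b_len)
    {
        return a_len == b_len && memcmp(a, b, a_len) == 0;
    }

    /* Example: AD8790EF (4 bytes) vs 00000000AD8790EF (8 bytes): not equal. */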
A key is of byte granularity.
So you cannot have a 9-bit key.
You have an 8-bit key.
You have a 16-bit key.
That is a limitation, and I don't see that changing anywhere as we go forward.
Future work will extend the key length.
There are a few alternatives that have been discussed, and we have a good idea of what direction we'll go with that, but that's future work that will be sometime next year.
One thing that's important to note is that
you can mix and match key lengths.
You don't have to have every key be the same length. You can have an 8-bit
key coexisting with 16-bit keys, etc.,
and that's application defined.
Right.
So thank you, David, very much.
So basically, in a given namespace, you have variable length keys, which
means that the length of the key is defined in the command.
So I can do a store command of a 4-byte key value pair, and in my next command do a store with a 6-byte key length, and so forth.
When I retrieve things like the list,
they're not necessarily going to be ordered by the length of the keys or any other structure.
They could be.
That's a vendor-specific implementation of how they store that information.
So you may get a list that has a 4-byte key, a 6-byte key, an 8-byte key, another 4-byte
key, and so forth in that list structure. There are characteristics of the namespace that the device can provide
that can tell you what are your limits as to what you can do with this particular device.
So you could actually do a device that is strictly a fixed size key. That's a valid
implementation, but all of that information is information passed from the device to the host,
allowing the host to then figure out how it wants to configure that particular namespace.
So features of the key value command set,
they are very comparable to what's in the API.
I wonder why; since I helped to do both of them, I kind of had a little desire to make them look very similar to each other. So we have a store, where you give it a key value pair and it stores that key value pair.
You have a retrieve.
You retrieve the key value pair based on the key that you give it.
You have a delete, which deletes a single key value pair.
One thing you'll note is missing on this particular slide,
I don't have a delete of a group.
That is not being defined in the first pass
at the key value command set.
You have a list which follows all the rules
that I've been over a couple of times
in terms of the ordering of that list.
You have exist. Exist, in the initial implementation, asks: does a specific key exist on the device? It does not support the API version of checking a list of multiple keys.
Now, the thing of note here is the API has something sitting behind it
that is your driver for your particular device.
That driver could actually do some of those enhanced functions in the host
to then turn it into multiple exist commands or multiple list commands.
That driver could actually do all of the work necessary to support key groups.
So all of that can still be done using the existing API, using the first generation of
NVMe KV command set.
It's just that a piece of that work gets done in the host.
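What "a piece of that work gets done in the host" could look like, as a rough sketch: the library satisfies the API's multi-key exist by looping a single-key Exist command. The function names are hypothetical.

    #include <stdint.h>

    /* Hypothetical single-key Exist issued over the NVMe KV command set;
     * returns 1 if the key exists, 0 otherwise. */
    int nvme_kv_exist(const uint8_t *key, uint32_t key_len);

    /* Host-side emulation of the API's multi-key exist: the first-generation
     * command set only checks one key per command, so the driver/library
     * loops over the array. */
    static void kv_exist_multiple(const uint8_t *keys[], const uint32_t key_lens[],
                                  int results[], uint32_t count)
    {
        for (uint32_t i = 0; i < count; i++)
            results[i] = nvme_kv_exist(keys[i], key_lens[i]);
    }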
So future work on NVMe.
I think the number one thing that I'm getting pressure for is an extended key length.
We have customers, we have all sorts of ideas on how big is big enough.
2K?
I see some heads nod no.
I see other heads nod yes.
So what's big enough?
Unknown at the moment, but bigger than 16 bytes, definitely.
An append command.
So right now, all of the store and retrieve operations are a single complete
key value pair. We don't have a way to add data to an existing key value pair.
The next thing here is retrieve index. We don't have a mechanism currently to retrieve
a certain portion of a key value pair. You can only retrieve
the entire thing. So those are a couple of items that have been requested and talked about for
the future. They're not in the current command set. Key group support, as I said earlier. You can do key group support up in the host
for the API purpose, but there's a strong desire to move that grouping function, the
iterator function, the key group function, down into the storage device. So there's a desire to enhance the command set to support that key group function.
I've heard at least one question here about sorting of your index.
There is a desire to indicate whether or not a particular key value device
is capable of providing your list in a sorted order.
And if the device is capable of it, then we need to pass information about that capability
and have some mechanism to indicate what type of sorting do I want of my keys.
So sorted keys is something that we need to consider for future work.
Exist multiple and delete multiple.
Again, these are things that the API has already specified, can be done in the host,
but eventually, as we have more capabilities on the device,
moving those functions down to the device would be very useful.
So applications for key value SSDs. They can run the gamut. What I've tried to
do, I actually stole something from a Samsung presentation almost three years ago. I don't
want to go into a whole lot of detail, but my purpose of stealing this was to show the fact you have different applications
that have different pieces of host
code that are necessary to interface
to the API, to your KV stack, and eventually
to your key value device at the bottom.
So when you get all the way over
at just a NoSQL database,
there's very little necessary
from the API point of view.
When you get over to a full object storage system,
there's a lot more host interfacing
that needs to be done
to talk to the API, to talk to the key value storage.
So really, it's a storage subsystem at the bottom that is designed to help facilitate other object storage or key value storage protocols,
but not to be the end-all that does it all in the device.
So software driver support.
Samsung has provided source code, open source code that is available.
Actually, this slide I missed updating. The line on this slide that says currently proprietary needs to come out, because it's no longer proprietary.
We have provided, at this link, publicly available open source code that is written to the SNIA API.
There is both key value SSD support as well as an NVMe user space driver.
There is also a Ceph object storage design specifically for our key value SSD that you can go pull down, to look at how we have done a Ceph implementation.
So there was something else I wanted to say in here, but I just lost it.
Okay, so that's the extent of the presentation.
Last time I presented this, I found I had more questions than I had time for,
so I tried having more time at this presentation to leave it open for
more questions. So I will open it up to questions
with the caveat that the questions
stand between you and lunch.
I will not be held responsible. Yes?
Yes, so when you do a list command, is the list in logical block or physical block order? What order?
So first off, we don't have logical blocks, which brings me to the other point I wanted to bring up. We don't have logical blocks, so there's no logical block ordering.
It is vendor implementation specific. And so what that means is a vendor may say, okay, I have a hash table that I store my key value pairs in.
Whatever order that hash table happens to be in is how you get those key value pairs.
There's no mechanism for the end user to have some idea of what the order would be.
Yes?
So we... How do you support that over fabric? Thank you.
Yes, I'll repeat the question.
The question was, have we looked at how to support key value drives over fabric?
One of the biggest things we looked at was specifically making certain
that we were not using any NVMe commands
that were not transportable across NVMe-OF.
So in particular, at one point in time, we were looking at using PRPs.
There's some concern that PRPs are not supportable over fabric, so we have avoided using PRPs
and stuck to SGLs.
So we have attempted to make this 100% fabric transparent.
No.
So the question was, can you support a zero-byte key, of which only one key of zero bytes could be possible?
And no, we don't support a key length of less than one byte.
I don't think that, yeah, I don't believe we've really looked at that particular corner case.
It's an interesting one.
I don't know of a reason why we wouldn't support it. It's not difficult.
From a relative performance point of view,
if someone were to abuse this and make the key
a block number and store a 4K block in it, which then
lets you write a block application on it.
It'll be slower, obviously, than a block device.
Do you have any idea?
I don't believe it would be.
Okay, the question was, assuming someone wrote an application
which wanted to use a logical block address as your key,
and the size of your data was 4K bytes and therefore implemented basically
a logical block scheme on top of this, would it be slower?
Do I have any estimates?
My answer to that question is I don't believe it will be slower.
I have customers who have talked about things of that nature.
So it's not something beyond the scope of what I see being done with this.
And there's no reason in the world for it to be any different than a current logical
block addressed system.
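The block-device-on-top-of-KV idea from that question could be sketched roughly like this, reusing the hypothetical store and retrieve calls from earlier: encode the LBA as the key and always use a fixed 4 KiB value.

    #include <stdint.h>

    #define BLOCK_SIZE 4096

    /* Hypothetical KV calls (see the earlier sketches). */
    int kv_store(const void *key, uint32_t key_len,
                 const void *value, uint32_t value_len);
    int kv_retrieve(const void *key, uint32_t key_len,
                    void *value_buf, uint32_t buf_len, uint32_t *value_len);

    /* Emulate a block device: the 8-byte big-endian LBA is the key, the value
     * is always exactly one 4 KiB block. */
    static int block_write(uint64_t lba, const uint8_t block[BLOCK_SIZE])
    {
        uint8_t key[8];
        for (int i = 0; i < 8; i++)
            key[i] = (uint8_t)(lba >> (56 - 8 * i));
        return kv_store(key, sizeof key, block, BLOCK_SIZE);
    }

    static int block_read(uint64_t lba, uint8_t block[BLOCK_SIZE])
    {
        uint8_t  key[8];
        uint32_t len = 0;
        for (int i = 0; i < 8; i++)
            key[i] = (uint8_t)(lba >> (56 - 8 * i));
        return kv_retrieve(key, sizeof key, block, BLOCK_SIZE, &len);
    }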
One last question.
For that funny edge case, do you have any idea what kind of space overhead you're looking
at in an implementation? I don't see any space overhead above what's already there in the current flash translation layer.
So the one thing that brought to mind, and I want to make a point and then I'll come to your question.
One of the things I didn't put down on future directions, it has been talked about whether or not we need to do some sort of data protection.
And the question that has come up with that is, well,
how do you define a data protection model for a variable-length block of user data?
So that is something that we've left to theorize in the future, but a point, if you're thinking
about logical block addressing and the fact that you have protection information potentially
with it, we have kind of talked around that a little bit.
So another question back there, yes.
Two questions.
Okay, so the two questions.
First question, do you need to provide a length for the key in each and every command?
The answer to that is yes. You need to provide a length of the key because you have to know how long the key is to know what you're storing.
Because I can have two keys that
look the same, but they're different lengths. So there is a length associated with each one.
The second question was, are we looking to provide some sort of support for programming in Java or
other languages of that nature? And we currently, from a Samsung point of view, are not looking at that.
We're looking at predominantly providing the underlying support necessary for Linux.
So that's where, basically, what I've said is we're not expecting to store a CRC with the data for the first release,
and we have to determine what we do in the future.
What I would anticipate doing is something along the lines of what SCSI did,
of having a particular length of data that you associate some sort of protection information with,
the CRC, et cetera, and saying, okay, we will put a CRC with each n bytes of data, 4K, 8K, whatever it may be.
But right now, we have no support for that at all.
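Purely as a speculative illustration of that SCSI-like model, attaching a CRC to each fixed-size chunk of a variable-length value; nothing like this is in the current key value command set, and the chunk size and CRC choice here are assumptions.

    #include <stddef.h>
    #include <stdint.h>

    #define PI_CHUNK 4096   /* protect each 4 KiB of value data (illustrative) */

    /* Minimal bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
    static uint32_t crc32_calc(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xffffffffu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    /* One CRC per PI_CHUNK bytes of the value; the tail chunk may be short.
     * Returns how many CRCs were produced. */
    static size_t protect_value(const uint8_t *value, size_t value_len,
                                uint32_t *crcs, size_t max_crcs)
    {
        size_t n = 0;
        for (size_t off = 0; off < value_len && n < max_crcs; off += PI_CHUNK) {
            size_t chunk = value_len - off < PI_CHUNK ? value_len - off : PI_CHUNK;
            crcs[n++] = crc32_calc(value + off, chunk);
        }
        return n;
    }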
Please repeat the difference between list and iterator.
In the API, but not in the command set.
So the question was to repeat the difference between list and iterator.
List is a matter of going down to the device and getting information about all the keys that happen to be on the device.
Iterator is a process to go down to the device and say,
I want to associate all the keys with this particular characteristic
as part of a key group. So in effect it is a command to tell the storage device: be prepared, so that if I query you with a request for a key group, whatever preparation is necessary, which may be time-consuming, you've had a chance to do up front, before commands related to that group arrive.
Is this for phasing them?
Correct.
But is that not based on some mask that matches the key?
Thank you.
So, yes, it is based on a mask of the key.
But depending on how you've stored it, if you've stored all your data in a hash table,
it may not be easy to go get all the keys that match a particular mask in real time for a command.
You don't want command timeouts, so this allows you to prepare and say,
oh, I've been told that I want to have a group that has this particular mask.
Let me go find all the keys that match that mask and store them in a data structure that is easy for me
to provide back the information related to that group.
You're pre-building a list.
Right.
Okay.
So just to let you know, I've got several questions.
I'll try to get to as many as I can.
I've already gotten the five-minute warning.
We'll start at the back over here.
Say that one more time.
So will the list operation be a constant-time operation
with respect to the number of keys that are stored there?
So I believe your question was will the list operation...
Will the exist operation be a fixed time operation?
I don't know that anything can be truly fixed time.
It is expected to be an operation that returns relatively quickly, as quick as, say, a retrieve operation or maybe even quicker than a retrieve operation. But I don't know that we have anything that is
a fixed operation time.
What I want to know is will it depend on the number of keys that are stored there on the device?
That's implementation specific.
I have no idea.
Yes.
What is the granularity of the atomic write?
The granularity of an atomic write is a key value pair.
I think that we need to consider the clock. Okay.
What do you mean by class of service?
Again, implementation specific.
Yes. Does the spec provide an interface to update multiple KV pairs?
One more time.
Does the spec provide an interface to update multiple KV pairs so that you can keep the transaction?
Okay.
So, no. The first-level spec, all it does is provide an interface to store a single key value pair in a single command. We don't even have an append command, which is kind of what I think you're talking about as an update, or I guess what you're talking about.
Okay.
Yeah, no.
We're not looking at that at the moment.
Okay.
I'd like to thank you all for coming.
I have gotten the sign that we're out of time.
Thanks for listening.
If you have questions about the material
presented in this podcast,
be sure and join our developers mailing list by sending
an email to developers-subscribe at snia.org. Here you can ask questions and discuss this topic
further with your peers in the storage developer community. For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.