Storage Developer Conference - #19: Multi-vendor Key Management with KMIP
Episode Date: August 25, 2016...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Chair. Welcome to the SDC
Podcast. Every week, the SDC Podcast presents important technical topics to the developer
community. Each episode is hand-selected by the SNIA Technical Council from the presentations
at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcast.
You are listening to SDC Podcast Episode 19.
Today we hear from Tim Hudson, CTO and Technical Director with Cryptsoft,
as he presents Multi-Vendor Key Management with KMIP from the 2015 Storage Developer Conference.
I'm Tim Hudson, CTO and Technical Director at Cryptsoft,
and here to talk about multi-vendor key management with KMIP.
So for those of you who didn't read the details beyond the actual title of the slide,
that's a copy of the abstract.
And in essence, what I'm going to talk about is the experiences we've had in implementing KMIP
and interoperating with other implementations of KMIP over the last six years.
And it's been an interesting journey for us.
We've certainly had many a surprise in interoperating with other vendors,
and this presentation just covers some of those details.
So, of course, why do we bother with multi-vendor key management?
What's the purpose of it?
Go back 10 years ago, and I happen to have been doing this 10 years ago,
and for every vendor's key management server that you wanted to talk to,
you had to go and get their SDK and integrate it to understand their vocabulary,
handle their bugs, their quirks, talk to their
piece of software and plug it in. And it used to get pretty exciting at times when you were doing
integrations along those lines when many vendors shared common componentry underneath but actually
had different versions. So if you wanted to support multiple vendors often you couldn't because they
had clashing components. And this was all a problem, you know, back in the dark ages of key management
about 10 years ago. So one of the things that vendors said is, if I'm talking to a key management
server, I shouldn't have to use an SDK from the vendor that built the server. I shouldn't have to
use a proprietary protocol. I should just be able to have an open standard, do one integration, one common vocabulary,
and have it work with everybody
so I can go about doing my business, which isn't key management,
and enable my users to choose
whichever flavour of key management server vendor they wish to go with.
And that's effectively the promise
and the impetus behind the OASIS Key Management Interoperability Protocol. So the positives, of course,
as I've just said: one SDK you have to integrate, a common vocabulary,
and you choose who you get it from, or you could actually build it yourself if you'd like.
The negatives? Well, it's a standard.
You have to follow it; if you don't follow it, it won't work.
The vocabulary might not match what you have in your product,
and that's been a challenge for a lot of vendors in the KMIP space.
You've trained your users over 10 years or so for your vocabulary.
Along comes a standard that's slightly different.
Heaven forbid it uses the same word for a completely different meaning.
Then you've got to turn around and educate your users,
or you try and hide it from them.
And then when they get diagnostic messages or they're interacting with the server,
they have issues along those lines.
And the standard may have picked more mandatory items to implement
than what you've got in your product, so you could have a mismatch there.
And some products are very specialised in their usage,
and as a standard we tend to be
general purpose so the scope of what needs to be implemented could be quite a lot wider.
And of course if it's your software talking to somebody else's software you're not in control
of both ends and that leads to a lot of issues in terms of supporting the integrations because it's inherently two parties involved.
And that's certainly been some of the challenges
in terms of doing a single integration and following a standard.
So that's actually the downside of a standard,
but the positives outweigh the negatives.
So who's using KMIP?
So this is a nice little summary.
And from our perspective,
we have arbitrarily broken it up into three different sectors,
storage, infrastructure, and cloud.
Now, infrastructure is basically the security vendors
and the people building infrastructure sold as infrastructure.
And there's a range of different types of products.
And a number of the vendors you see sitting at the bottom here
actually sell products in multiple areas.
Some of them sell multiple types of each of the products,
and some of them, you know,
there are three competing products doing the same thing,
all of which can share a key manager.
So as you can see, if you look down the bottom there,
and you're familiar with the SNIA membership,
you'll see there's a lot of people who are part of SNIA
and in the storage space
and doing interoperable storage solutions
that are also involved in interoperable key management.
So if we had drawn this slide five years ago,
we would have put up about four vendors.
Draw it now, and it's pretty much most of the vendors that are out there.
So it's great to say everybody's doing KMIP,
but what are they actually doing with it?
What types of things are happening?
So what I've done is in those areas you saw on the previous slide,
I've pulled out the ones relevant in a storage environment.
What are people doing with storage?
So simple vaulting of a master key.
Nothing wrong with that as your key management solution
if that's all your product needs.
Perfect use of KMIP.
You could just be using KMIP to share configuration information
that's not actually security related.
It's just, hey, we're all talking to a key manager.
I've got lots of nodes in my cluster.
That's out there at central.
I can share my configuration.
Nothing wrong with that. Not
really crypto, not really security, but because the capability's there, there's nothing wrong
with using it. You might want to turn around and actually do some security policy enforcement.
Don't use this key for more than two gigabytes of encryption. Don't have this key out there and
use it more than 23 times on a Tuesday.
Don't allow more than three people to ever have access to this key.
Only use it once, once it's been used.
Don't let it be used again.
And KMIP supports all of those scenarios in a cross-vendor interoperable manner.
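To make those policy examples a little more concrete, here's a minimal sketch, in Python, of the kinds of attributes a client might set when creating or registering a key. The attribute names echo KMIP's vocabulary (usage mask, usage limits, activation and deactivation dates), but the dictionary layout is purely illustrative and not any particular client library's API.

    # Purely illustrative: policy-style attributes a KMIP client might attach to a key.
    # The attribute names echo the KMIP vocabulary; the dict structure is a sketch,
    # not any vendor's or library's actual API.
    key_policy_attributes = {
        "Cryptographic Usage Mask": ["Encrypt", "Decrypt"],      # what the key may be used for
        "Usage Limits": {"Total": 2 * 1024**3, "Unit": "Byte"},  # e.g. no more than ~2 GB encrypted
        "Activation Date": "2015-09-21T00:00:00Z",
        "Deactivation Date": "2016-03-21T00:00:00Z",             # roughly a six-month lifetime
    }

The point is that the server holds the limits alongside the key, which is what makes the policy enforceable across vendors rather than something each client has to remember.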
For some vendors, it's simply a matter of, well, if I use a key manager and the key manager's gone and got FIPS validation,
I can ignore that pain.
And that's a perfectly good use of KMIP.
In fact, it's a very common use of KMIP.
For other vendors, they're used to working in an environment
where you need to be able to do re-keying.
So, you know, I've got this massive block of storage.
I've got a policy that says
don't use any particular key for more than six months.
I'm not going to decrypt and re-encrypt my storage at every six month period. I want to be able to slowly
swap it over. So re-keying in a sane environment without breaking your storage infrastructure.
And of course without breaking your layered storage solutions for things like deduplication,
data mobility and lots of other fun things that other people at SDC will be talking about.
For some vendors, the key manager is just a place to back stuff up,
back up and restore.
Here's my configuration information.
Here's my set of keys.
I'm not actually even going to break down what those keys are.
Just look after my security stuff for me.
Again, a fairly reasonable use of KMIP.
Not a lot of extra functionality you can
put on top of it, but there's nothing wrong with it. So KMIP lets you do all of these things.
We get to tape libraries. Well, tape libraries' requirements are pretty simple. Give me a key,
and when I ask for it again, give it back to me. That's all a tape library needs from a key
management server. So if you turn around and have a look at the integration or testing of a tape
library, it's very straightforward.
It doesn't need much from a key management server.
Again, if the key management server's FIPS validated,
and most of them are, you get the FIPS benefit by using that.
You could FIPS validate your whole tape library if you want.
You don't need to if you just need to show
that your keys are generated in a FIPS environment.
You might actually validate your tape drive and use them in a FIPS environment and between the two
you cover most of your requirements. Encrypting switches and storage controllers, they do a fairly
interesting series of things from a KMIP perspective, and some of this is simply that a number
of the very early encryption products in the storage space were done at the switch level.
Plug in an encrypting switch and make all of these devices
that don't yet have encryption work with it.
So the sophistication of the applications in that space
is beyond what a couple of the newer generations of technology are using.
And it's simply historic.
It was in there.
It needed lots of features to be able to work, and that's been reflected through into the use of the key
management server. Okay, so we've talked about how a lot of people are doing it. We've talked a little
bit about what they're doing with it. So overall, what does that mean? What's happening from a
KMIP adoption perspective? So as I mentioned, go back about five or six years, very few vendors. So this up here on the blue line at the top is the number of implementations of KMIP in
products that are generally available.
So it's nothing about market penetration, it's the number of products.
So we're not talking about how many units of each have shipped, just that it's out there,
that capability is available.
The green line above it is information in terms of
we know of the implementation, you can't yet buy it.
And as we're a technology supplier,
we get to hear about things a little sooner than when they're deployed.
So in general, what we find is, you know,
we're aware of what's about to come out in the market
over the next 12 to 18 months,
because a lot of the time they're either our customer,
they've licensed our technology,
or they're interoperating with us prior to deployment,
so they've done their own implementation and we get to be aware of it,
and the whole concept behind that is
by the time the product hits the customer, it just works.
So we get to do all of the debugging.
In terms of breaking down into those three market segments earlier,
down at the bottom you can see we've got storage in blue,
security or infrastructure in red, and cloud.
Storage led the adoption of enterprise key management.
And if anybody tells you otherwise,
they haven't actually been looking at the market.
So storage had the encryption key management problem
and drove the adoption of
enterprise key management. The security and infrastructure vendors are coming in behind at
about the same rate, just starting later, and cloud and general purpose applications are following
after that. And we're seeing exactly the same adoption trends just spread out and starting a
little bit later in each of those sectors. And that's a good thing. And with our knowledge in terms of what's happening in the market
and what's going to come out in products over the next 12 to 18 months,
those little dotted bits, they're not guesses.
That's what we know is happening.
So what's in KMIP?
What have we been doing with KMIP as a specification?
So 1.0 was an official standard as of October 2010.
So started development in 2007, official standard in 2010.
We've done a whole pile of work on it, got a new release out a couple of years later,
late 2012 to early 2013, and then another couple of years after that we have another release.
We finished doing 1.3 as of December last year. We've just got to go through the documentation
ratification process. The technical committee is working on 1.4. And this is simply how it goes.
You can have a look at between each of the different documents here. In essence,
the technical committee has been concentrating on things that matter to implementers. Test cases,
test cases, test cases. And the profile
documents you see under KMIP 1.2 contain test cases.
And those test cases have been produced to benefit implementers
and to also enable conformance testing programs like the one that
SNIA in the SSIF runs
as a KMIP conformance testing program.
It's based on those documented test cases,
which any vendor can go, take them and implement them
and submit products for conformance testing
and get independent conformance testing results from SNIA.
Is the increase on those profiles from 16 pages up to 871?
Yeah, test cases.
Those profiles are part of the test cases?
Those profiles got test cases.
So effectively, originally the profiles as of KMIP 1.0 and 1.1 were:
you need these features.
And we learned, uh-uh, that's not good enough
because we'll have vendors claiming to have the features,
but the products don't work together.
So an interoperability standard
where the end user experiences non-interoperability
doesn't work out very well.
So as a technical committee, it's like: test cases.
And under SNIA, there's a relationship between OASIS and SNIA.
SNIA does all the conformance testing,
so it's an independent third party.
You don't have to trust what the vendors say.
You can go and get it conformance tested.
And John down the back is part of the team looking after that
at the tech centre out at Colorado Springs.
Wonderful place to visit.
So what is KMIP?
This is, well, it's going to be about, probably, a five-minute tour of KMIP.
So KMIP is an incredibly simple binary tag type length value encoding.
There are three byte tags, one byte types,
four byte lengths, and then the payload.
That's it.
That's the encoding complexity for KMIP.
So it should be pretty straightforward to get it
and get it right and not make mistakes.
So if we turn around and have a look,
this is a KMIP message, and it's colour-coded,
so you can see the tag type length value,
and the little bit sitting in there in pink,
well, that's actually padding to align on 8-byte boundaries.
So it's pretty regular.
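As a rough illustration of just how simple that encoding is, here's a sketch of a TTLV encoder in Python. The 3-byte tag, 1-byte type, 4-byte big-endian length and zero-padding to 8-byte boundaries follow the encoding rules just described; the helper itself is only an example, not a reference implementation, and the tag and type values shown are placeholders rather than quotations from the spec.

    import struct

    # Sketch of KMIP TTLV item encoding: 3-byte tag, 1-byte type, 4-byte big-endian
    # length of the unpadded value, then the value padded out to an 8-byte boundary.
    def encode_ttlv(tag: int, item_type: int, value: bytes) -> bytes:
        header = tag.to_bytes(3, "big") + bytes([item_type]) + struct.pack(">I", len(value))
        padding = (-len(value)) % 8          # zero bytes up to the next 8-byte boundary
        return header + value + b"\x00" * padding

    # Example: a 4-byte integer value gets 4 bytes of padding, so the whole item is 16 bytes.
    item = encode_ttlv(0x42002A, 0x02, struct.pack(">i", 128))
    assert len(item) == 16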
And if we split it out in the XML representation, so KMIP
as a base defines a binary standard, tag type length value. That's wired into the specification
and mandatory to implement. As of KMIP 1.2, you can also support this format. It's an XML
representation of the same information. And it's been deliberately designed to be entirely round-trippable.
So you can go from TTLV into XML, XML into TTLV.
And because KMIP's got built-in extensibility,
there's also a mechanism for adding an arbitrary tag.
So you can say, I don't know what this tag is.
It's a vendor-specific tag,
but I know how to encode it in both XML and
TTLV. And, oh, we don't have JSON listed there. I obviously dropped that slide out. There's also a
JSON representation. And those are documented in terms of how do you transform between them.
In the specification, as of KMIP 1.2, they have the same level of authority as the spec itself.
They are OASIS standard documents.
And they say how to get from TTLV to XML and back.
So as you can see here, the message is, please go and create me an AES 128-bit symmetric
key.
I plan to use it for encrypt and decrypt.
And what goes over the wire is that.
And that can be sent to any of the KMIP servers
out there on the market,
and they'll come back and say,
yes, sir, here is your unique identifier.
So it's pretty straightforward.
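For a sense of what driving that exchange from code can look like, here's a minimal sketch using the open-source PyKMIP client library. The class and method names are recalled from memory rather than quoted from the talk, so treat them as assumptions and check the PyKMIP documentation; the hostname is obviously a placeholder. The shape of the exchange is the one just described: create an AES 128-bit key, get back a unique identifier.

    # Minimal sketch using the open-source PyKMIP client (API recalled from memory;
    # verify names and signatures against the PyKMIP documentation).
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    client = ProxyKmipClient(hostname="kmip.example.com", port=5696)  # placeholder server
    client.open()
    # Create an AES 128-bit symmetric key; the server answers with a unique identifier.
    uid = client.create(enums.CryptographicAlgorithm.AES, 128)
    key = client.get(uid)   # later, retrieve the managed object by that identifier
    client.close()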
The concepts are fairly simple.
You can see up at the top of the message
there's a protocol version.
That's where you would express saying I'm doing 1.0, 1.1, 1.2, 1.3 or 1.4.
As of KMIP 1.1, you can ask what different versions the server supports
and it comes back with a discovery protocol.
You can query what types of objects.
So fundamentally in KMIP, and this is the stuff that doesn't come through
particularly clearly in the specification,
it's assumed you understand the mental model behind what the original authors had in mind,
which they didn't quite write down.
So in KMIP, every object that KMIP is looking after has a value.
The value is set at creation.
It is immutable.
So if you need to change the value of an object,
you won't be doing that in KMIP.
You'll create another object.
It will have a different value,
and then you can reference the attributes across from it.
You may not actually have the value.
As of KMIP 1.2, the value of a key can be left out.
And you go, why would a key management server look after keys where it doesn't have the key material?
It would do that when the key material
is being managed in a secure device
like a hardware security module.
So if the hardware security module has the key,
the key manager can do the policy management,
the hardware security module can do the securing.
And those sorts of things happen.
And of course, values of keys can be in a variety
of different formats. KMIP lets you
put them in and retrieve them in different formats.
The server can do conversions
for you.
Of course, every object's got to have a type.
We couldn't have typeless objects.
One of the types, of course, is a vendor extension
type, so we do have the
concept of typeless objects. And we
can turn around and support a full range of security types
that you would expect to see.
So it's pretty much everything in there,
certificates, symmetric, asymmetric.
In KMIP's view of the universe,
asymmetric keys are represented as two separate entities,
the public and the private.
Key splits are supported as well,
so the usual range of algorithms, including Shamir.
Templates which are deprecated as of
1.2, I'll talk about that later.
Secret data where you can basically store stuff
you'd like to have protected like passwords.
An opaque object
where you want to store stuff but the
server is not allowed to try and
interpret it. So it's please look
after this but hands off the object.
And a PGP key because a number of members in the technical committee
wanted to be able to put their PGP keys in, that's effectively a PGP
key blob. So objects have got attributes.
Attributes is stuff, stuff about the objects. The stuff
has a name, has a type, can be simple or complex, so it can be integers,
big integers, or it
could be a structure containing numerous fields. Some of the attributes are set on the server,
some on the client, some can be changed, some can't be changed, some are server only, some
are client only, some can be multiple, some can be singleton. Welcome to KMIP. How do
you know which is which? You pull up the specification and look at the rules. So every attribute has a rule to say how to answer
all of those questions. So if we turn around and have a look at KMIP, what can you do with
operations? So on the right hand side here in the grey, those
are the KMIP operations. The bits in blue are just a logical grouping
that doesn't exist within the specification to give you an idea of the sorts of
things you can do.
So normally, establishment,
that's where you do all your creates, all your registers.
Register is you've created the key outside the system,
you want the key manager to look after it.
And then create, derive, certify, create key pair.
And there's a whole pile of stuff down in the middle about re-keying,
so rotating keys, re-key key pair, re-key, re-certify.
Do exactly what you would think given those names.
And as of KMIP 1.2, you see down the bottom, cryptographic operations.
The server can do the cryptographic operations on your behalf for the keys it is managing
if it wishes to and you have the appropriate privileges.
That's what the little 1.2 down the bottom is to indicate.
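As a rough sketch of how those operation groups tend to fit together over a key's life, including the re-keying and tape-library scenarios mentioned earlier, here's an illustrative client-side flow. The operation names are KMIP's; the kmip_request helper is purely hypothetical, standing in for whichever client library actually builds and sends the messages.

    # Illustrative lifecycle using KMIP operation names; kmip_request() is a hypothetical
    # stand-in for a real client library, so here it just records what would be sent.
    def kmip_request(operation: str, **payload) -> dict:
        print(operation, payload)
        return {"UniqueIdentifier": "example-uid"}

    # Establishment: create a key the server will manage.
    uid = kmip_request("Create", object_type="SymmetricKey",
                       algorithm="AES", length=256)["UniqueIdentifier"]

    # Operational use: fetch it back when the device needs it (a tape library's whole workflow).
    key = kmip_request("Get", unique_identifier=uid)

    # Rotation: Re-key produces a replacement key linked to the original.
    new_uid = kmip_request("Re-key", unique_identifier=uid)["UniqueIdentifier"]

    # End of life: revoke and then destroy the old key once nothing depends on it.
    kmip_request("Revoke", unique_identifier=uid)
    kmip_request("Destroy", unique_identifier=uid)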
So it's a simple spec.
You saw tag, type, length, value.
How hard could it be?
So here is a subset of the implementation errors
we have experienced with other vendors.
So padding and encoding.
Putting in the wrong tag value.
Putting the fields in the wrong order when they've
got a defined fixed order, not using the right TLS version, cipher suites, not actually using
TLS, one vendor using SSLV2, not meant to work.
Apparently it works for some servers.
Missing mandatory, very, very common. So this is something that you must support,
it's mandatory, not supported. And then the more annoying one, mandating optional. This
is something the spec says is optional and you require it. So add those
two things in and you get a whole pile of interoperability challenges and invalid sign handling.
Unsigned means no negative numbers.
Apparently that's a higher order challenging concept to follow.
So the more complex things: just missing core concepts within the specification, adding extra vendor capability that makes the behaviour very different,
getting confused about the concepts in KMIP,
which is understandable because some of them are fairly tersely documented,
picking a feature set that isn't particularly useful to anybody.
Now, I implement KMIP, but by the way,
I'm not going to support symmetric keys.
Now, we have vendors who have done that,
and it lets you put KMIP on your data sheet,
but it won't let a KMIP client that wants to do symmetric keys,
won't let a tape library, talk to you.
And, of course, whose fault is it
when the end user experiences non-interoperability?
Oh, it must be the specification.
Now, that KMIP.
Now, if you'd used our vendor proprietary protocol,
you wouldn't have had that problem.
Another slightly more interesting thing
is assuming the message sequence flows and contents.
And that's from vendors who have done a partial implementation
that have kind of said,
if you send me that test case, that's going to work;
you send me anything else, that won't.
So they kind of recognise the test case,
or don't particularly recognise it, and give a canned response.
So why do these errors occur?
So the spec's got a lot of good text in it.
There's examples of all of the encoding.
There's the hexadecimal TTLV packets in the test cases in 1.0,
yet every vendor gets it wrong.
So here's an example.
We talked about padding.
So on the left-hand side here,
outside the red box,
that's cut and paste out of the specification.
So you'd think reading that as a developer,
hmm, item length 32-bit binary integer.
Yeah, I reckon that's like going to be really, really clear
that's four bytes long.
No, not to some implementers.
So the implementation errors, you know, just around padding.
So I've got four of the different items in there on padding.
Padding seems to be a really complicated concept
for some implementers to follow.
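Since the padding rule trips up so many implementers, here's the arithmetic spelled out as a sketch: the item length field counts the unpadded value, as an unsigned 32-bit big-endian integer, and the value bytes are then zero-padded up to the next multiple of eight. The helper names are mine, not the spec's.

    # The two rules implementers keep getting wrong, as code.
    def item_length_field(value: bytes) -> bytes:
        # The length counts the unpadded value and is 4 bytes, big-endian, unsigned.
        return len(value).to_bytes(4, "big", signed=False)

    def padded_value(value: bytes) -> bytes:
        # The value itself is padded with zero bytes out to the next 8-byte boundary.
        return value + b"\x00" * ((-len(value)) % 8)

    assert item_length_field(b"\x00\x00\x00\x80") == b"\x00\x00\x00\x04"  # length 4, not 8
    assert len(padded_value(b"\x00\x00\x00\x80")) == 8                    # 4 value + 4 pad bytes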
So what have we turned around and done to address this?
Accept that, as a technical committee,
you can't put more text in the specification
to fix the problem of somebody not reading the specification.
More text doesn't work.
Adding more examples is hitting the person harder
with a hammer that they're not bothering to pick up and use.
So that doesn't work either.
And documenting test cases,
well, if they're going to ignore the test cases,
there's nothing you can do about that.
So what do you do?
You make sure that there's a way
that the products can be shown to work together.
So the particular hammer that we've applied here as a technical committee to the problem
is we do regular plug fests, we do regular interop events,
and we have a formal conformance testing program under SNIA.
So the only way to make sure that two implementations work together
is to show that they work together.
It's not a specification issue, it's an implementation issue.
So you've just got to test, test, test, test.
And I can tell you, when we were doing our implementation of KMIP,
we put the specification to one side and worked from the test cases
until we got things up and going and matching the test cases,
then went back to the specification
to fill in the details as to what was missing.
And that's just a developer-centric approach.
So what do you do about conceptual confusion?
You know, if a whole pile of vendors
make a similar mistake, you think,
hmm, what are we going to do about that?
So we have one particular thing in KMIP, templates.
Templates is a convenience parameter passing mechanism.
So if I'm going to turn around and create 3,000 keys
that are all the same type with the same length,
I can, in a template, say,
here's the common element I'm going to send in every one of my messages
and just reference it by name.
That was the intent in the specification.
Lots and lots of vendors decided,
hmm, templates, that sounds like policy.
I'm going to put all my policy against it.
So, as a technical committee,
we've deprecated templates.
Fixes the problem.
If it's too confusing,
remove it from the specification.
Now, deprecation in KMIP terms means
we mark that in the next major release
it is intended to be removed
and we discourage its usage. We never rip something out of the specification that
will break existing vendor usage. So it's advance notice and at this rate
it'll probably be four years or so by the time it's actually removed
but it's clearly marked in KMIP 1.2. Templates? Deprecated.
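For what it's worth, the original intent is easy to sketch: a template was just a named bundle of commonly repeated request attributes, referenced by name from each Create, and nothing like a policy object. The structures below are purely illustrative.

    # Purely illustrative: what a template was meant to be, a named bundle of common
    # request attributes registered once on the server and then referenced by name.
    registered_template = {
        "bulk-aes128": {"Algorithm": "AES", "Length": 128,
                        "Usage Mask": ["Encrypt", "Decrypt"]},
    }

    def create_request(template_name: str, **unique_attrs) -> dict:
        # Each Create carries only the template's name plus whatever is unique to this key.
        return {"Operation": "Create", "Template Name": template_name, **unique_attrs}

    # 3,000 near-identical keys without repeating the common attributes in every message.
    requests = [create_request("bulk-aes128", Name=f"key-{i}") for i in range(3000)]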
We've got some vendors that like to create
objects during search.
so you're busy doing a query, tell me what you've got in your system
and the system goes off on the side creating new objects for you
so I wanted to know what you already had and you've just made some fresh ones for me
not quite the behaviour you want
I'm going to send you a username and a password for authentication.
Well, we've got some implementations that ignore the password.
They don't say, please don't send me the password.
They say, anything will do.
Nothing or anything, it's all okay, they all work.
Of course, for other vendors,
it's critical that that password-based authentication works within the protocol.
We've got some vendors that require keys to have names,
and we've got vendors that have a limited subset of
ASCII that's allowed within the names of keys.
Now KMIP, as a specification, we went out there to make sure it's UTF-8, as long
a string as you like, in any character set you like, representing all
the languages and funny strange symbols that are in use.
So it's UTF-8 throughout. But of course some vendors think, well,
no spaces, no punctuation. Nobody will ever want a name greater than
64 bytes, will they?
So as I said, we've turned around and done a whole pile of things in the specification.
Deprecated. For people who want to create objects
while they're busy locating them, we added a special way for the client to indicate that it's
expecting that behaviour. Please do it. So it's not a matter of saying, hey,
that's strange and unusual and fighting it out to get the one true way in the specification.
It's, we accept that you have an unusual use case that we don't think anybody
on the planet needs. Here's a way to get that unusual use case to work.
We've added the concept of an alternate name
simply to solve some of the confusion around name usage,
and we're putting in more test cases and profiles
that can be conformance tested.
So if you're turning around and deploying into the Japanese or Chinese market
where names are not going to be in ASCII,
then we'll have a profile that will enable that to be tested
and the team out at Colorado Springs
will enable conformance testing of that.
So pattern matching, other implementation errors.
This was one of the more difficult ones for us
as a technical committee to decide.
How do you spot implementations that don't really implement a spec but can pass
a test case as long as they know that test case is coming? Well
you make the test cases a little bit more complicated such that in
order to pass a test case, in order to be able to
survive a plug fest, you have to be so
clever in your implementation of pattern matching you might as well have done a real implementation.
And that's the approach that we've taken. If you can't get the
fields right, we'll put them around the other way. And that's something that
we weren't anticipating as a technical committee needing to do, but it was
pretty straightforward. It's like, sit there and have a look, try it, oh, they don't
handle that. Right, let's put that in the implementation. Let's put that in the profile. Let's handle those
sorts of things. Because what customers want is stuff that actually works together, no matter who
built it. Vendor A to Vendor B, it should just work. So that's what we've ended up doing.
A lot more test cases and profiles. And in fact, that's been a lot of the work of the technical committee
over the last four years.
I'm the co-editor of both the test cases and the profile documents,
so it means I get to spend lots of time with implementations
and working through that, lots of times with plugfests.
So what sort of stuff should you be thinking about when you go, oh man, it's a nasty universe
out there: we've got the standard, we've got lots of vendors supporting it, we've got a whole pile
of ways implementers get it wrong, what should I really be caring about? So one of the other things
that you have to be aware of is there's a different viewpoint within the key management server vendor community about what the purpose of key management is.
So what is the one thing a key manager must never do?
It isn't a constant answer.
So for some folks in the key management community, never lose the key.
If you only implement one thing, that's the one thing I want.
For others, it's okay to lose the key,
but never give it to the wrong person.
That's unforgivable.
And when you design your products with those two competing requirements,
they come out quite different.
And there's nothing wrong with those viewpoints, those perspectives.
They're just different.
So if you're trying to never lose a key,
you want to know that you've got a cluster of key management servers,
you want to know before the key management server gives you back your key,
it's replicated to a majority of the nodes in the cluster.
Pretty basic stuff.
But if you're more worried about not giving it to the wrong person,
that wouldn't have been important in your implementation.
So you might happily hand out a key
that if the power goes out, your data centre goes down,
the network connections drop,
you might start encrypting data with a key
that you can't retrieve from the key management server vendor.
And those sorts of things are not covered in the specification.
They're not covered in the conformance testing program because they're
effectively operational parameters. They're things that you need to be aware of.
Being able to continue serving keys. So what happens if you're sitting on an external database
and the database goes down? Do you stop serving keys? What happens if you're in a degraded
mode where you're able to serve keys but you're not able to update any status, so your database
is effectively in read-only mode? Should you serve a key out if you're not able to store
that the key has been served? Does that make sense? These sorts of things are just not
covered in the specification or in the current generation products.
Context.
You've got a key management server.
It's managing the keys on behalf of a client.
What does it need to know about those keys?
It needs to have some context because in the concept of key management,
you need to be able to think about the fact
that generally the person who's driving the device that
uses the key management server and the person who logs into the
management console of the key management server, they're two different people.
They are different roles within an organisation and they need some mechanism
whereby they can communicate. So I'll give you a great example.
Let's say we create our key
and we don't bother attaching any context to it.
We're in a tape library context.
So we've got a key,
we've used it to encrypt the tape cartridge.
Tape cartridge's got a nice barcode label on it.
It went out to Iron Mountain for storage
and on the way to Iron Mountain, it got lost.
Damn, we need to get rid of that key.
How do you get rid of it if the barcode isn't attached as an attribute to the key?
So your security administrator comes along and says, well, when did you write it? Well, I'm not quite
sure. What do you know about it? Well, it's this barcode. Well, I have no knowledge of that within
the system. So that's why we have things like the tape library profile. It's a profile that says you will put your barcode in this
attribute under KMIP 1.0 and 1.1. It will be this custom extension in this
format. Under KMIP 1.2 it goes precisely here in a place we made for it.
And that way the security administrator when talking to the person who lost the tape, has got some common context cross-reference.
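To make the barcode example concrete, here's a hedged sketch of attaching that context when the key is registered. Whether it goes in a custom attribute (the KMIP 1.0/1.1-era approach) or in the purpose-built spot added in KMIP 1.2 depends on the tape library profile version; the attribute names below are illustrative, not quoted from the profile.

    # Illustrative attributes a tape library client might attach so the security
    # administrator can later find "the key for barcode XYZ123L6". The custom
    # attribute name is hypothetical; the real name and location are defined by
    # the tape library profile for the KMIP version in use.
    create_attributes = {
        "Name": "tape-pool-42",
        "x-Barcode": "XYZ123L6",   # custom-attribute style, pre-KMIP 1.2 (illustrative)
    }

The administrator's side of the conversation then becomes a Locate on that attribute, rather than paging through every key by hand.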
Now, that's a really simple example.
I can't identify the key.
But there are other things in terms of which node requires this key, which user put it in, when did it get created, when is it intended to be retired. So KMIP as a specification was designed to attach whatever arbitrary stuff that you need
against the security objects that it's managing so that
that context is available for the two people that get involved
in a conversation to be able to meaningfully communicate.
And not all vendors are at the same level
in terms of making it easy for the security administrator
to turn around and find me a key by barcode.
Some vendors, that's one of the operations in the console.
Other vendors, it's, well, I can list every attribute of every key
and you can page through it one at a time,
but at least the capability's there.
And as a specification, we saw that we don't know what that context is.
It's user and domain specific,
so we've got to be able to handle everything.
So in KMIP terms,
there are actually some customers who have decided
everything sounds kind of good.
Can I put four gigabyte attributes against the key?
And we go, yes, it's not a very sensible idea, but you can.
The spec will let you do it, but please don't.
And, you know, why would you want to turn around and put four gig?
It's probably they're using it for something
that they probably shouldn't be doing,
but technically the specification allows it.
And if we turn around and we find a use case as a technical committee
where somebody needs 4 gigabyte attributes attached,
then we're going to have to extend the protocol
to make it efficient to retrieve parts of those 4 gigabyte attributes.
And in fact, we can have, for most of the multivalued attributes,
you can have an arbitrary number of instances.
So an attribute by a name has an index.
That index is a 32-bit quantity.
So you can have an awful lot of instances.
But at the moment, within the protocol,
you can only say retrieve all instances of an attribute.
You can't say retrieve a particular one.
So if we turn around and find people start
wanting to put tens of thousands
of instances of an
attribute in place, the protocol
will get extended to make that efficient.
But until somebody has a use case for
it, the capability's there,
try it out, see, and then as we see
more use cases that
make sense around that come into the technical committee,
we extend the specification in a compatible manner.
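A short sketch of what multi-instance attributes with a 32-bit index mean in practice: each instance carries an index, but retrieval today is all-or-nothing per attribute name, so a client that stored many instances gets them all back at once. The data layout and attribute name here are illustrative only.

    # Illustrative model only: each instance of an attribute carries a 32-bit index,
    # but the protocol currently only lets you ask for all instances of a name,
    # not "instance number 7". The attribute name below is hypothetical.
    attribute_instances = {
        ("x-Node-Access", 0): "node-a",
        ("x-Node-Access", 1): "node-b",
        ("x-Node-Access", 2): "node-c",
    }

    def get_all_instances(name: str) -> list:
        # What retrieval allows today: every instance of the named attribute.
        return [value for (attr, _idx), value in sorted(attribute_instances.items()) if attr == name]

    print(get_all_instances("x-Node-Access"))   # ['node-a', 'node-b', 'node-c']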
So one of the things that we've sort of always asked is,
what do I need to care about when I'm thinking about
looking at key management vendors?
And the most important thing when you've got something like KMIP in place
is make sure your requirements you have right now are met,
but also think about the types of requirements you might have in the future.
And that's where having an open standard
and the capability that's sitting inside KMIP helps.
And what are you using your products for from a key management perspective?
What's your end user experience going to be?
What will that security administrator be able to do? And if the answer is nothing,
then you've built key management that has no practical value for your end users. So there
needs to be some capability that can be realised in there for it to make sense for users to deploy.
What will the security administrators do? How many keys are you going to put in your system?
For some vendors, the key management server holds one key
because that's all their security solution needs.
For others, we're talking millions,
tens of millions of keys in active use.
For some vendors, it's, you know,
I'm going to do a key per file,
I'm going to rotate them every day,
and they're going to build up over time,
but only in the case of a disaster
am I going to come to the key management server vendor.
So lots of creates, very few retrieves.
Pretty easy to handle.
But if you've designed a system that's looking for a handful of keys
and managing a handful of keys,
and you ram a couple of tens of millions at it,
it generally isn't going to work particularly well.
So those are the sorts of things to be aware that there's a varying set of capabilities
in the industry.
So what do you need to look at? So how do you spot a vendor
that's not quite as adopting
of KMIP or embracing of KMIP as other vendors?
So if the only indication you can find of KMIP support is
on the data sheet and nowhere else,
keep digging or keep looking.
If you've got a whole pile of,
I interoperate with all of these devices,
but it doesn't specify what protocol,
it might be a vendor proprietary protocol.
So what you're really looking for is vendors that are open,
that participate in plug
fests, participate in interoperability events,
go through conformance testing
programs. Those are the sorts of vendors where
you can at least say, hey there's at least
one other vendor they work with. It's
real.
Capabilities
that aren't clearly separated
so I have all of this capability
but I'm not going to tell you under which protocol.
So, you know, FIPS validated,
I can handle this level of transaction rates.
By the way, that's my proprietary protocol.
It's not KMIP.
Or I have clustering, but it doesn't work for KMIP.
Those are the sorts of things to keep an eye out
when you're interacting with vendors.
So, buyer beware. Because key management is such a broad space and because the vendors are targeted at different markets and have different
experiences, you really need to sit down and look at the capability claims in
detail. Don't make assumptions because nine times out of ten,
your assumptions are going to be wrong. Thanks, Nancy. And interoperability,
that's one of those things, the only time a product's interoperable is when it interoperates.
If anybody tries to sell you on any other definition, it's going to cause pain down the track. And the most important
thing is if you want independent verification of conformance, you need to go to an independent,
trusted third party that isn't any of the vendors involved. And that's why it's critically important
to be aware of and support the SNIA SSIF KMIP conformance testing
program. They are not a vendor. They are independent. There is a testing program that runs
where, for the vendors involved, the management of the program itself, outside of the guys doing the
work, doesn't even get to see who the vendor is. So the chair of the SSIF does not know who is testing
until the test results are
public. And that's something that you'll
only get when you're using an independent, trusted
organisation like SNIA.
And that's one of the things for which we're
very much appreciative
of Wayne Adams' support over the years
in terms of helping establish the program.
It's good to see you here today.
And that ends my slides.
Questions?
Yes, John.
Is there an expectation on when 1.3 is going to come out?
So 1.3, the specification work was finished in December.
We did a plug fest just prior to the RSA show.
At the moment, the spec's done, the usage guide's done,
the test cases are done.
It's waiting on the editor of the profiles
to actually sit down and finish documenting them.
And we had a change in OASIS requirements
and the format's just got to be done a little different.
So we anticipate it'll be up for public review within the next month or two, John, and then go through the OASIS back-end
process, which generally takes about six months. And the focus in the technical committee at the
moment's all on 1.4. What are we putting in 1.4? And we had a great meeting around the RSA show,
and we sat down and figured out we've got about four specification versions
worth of ideas to go into 1.4.
They're not all going to make it.
And that will generally be about 18 months away, I think, John.
Is there a FIPS 140 requirement or not?
There's no requirement to have FIPS 140-2.
We do have a profile that represents it.
And that came in at 1.3.
You can query the server for its FIPS 140 validation claims
or common criteria or any other validation scheme.
Most server vendors have a FIPS version available
at varying levels,
either software through to hardware at level 2 and level 3.
Any other questions?
Right, we get to break early then.
Thanks for listening.
If you have questions about the material presented in this podcast,
be sure and join our developers mailing list
by sending an email to developers-subscribe at snia.org.
Here you can ask questions and discuss this topic further with your peers in
the developer community. For additional information about the Storage Developer Conference, visit
storagedeveloper.org.