Storage Developer Conference - #61: Persistent Memory Security
Episode Date: January 15, 2018
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair.
Welcome to the SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage developer community.
Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual Storage
Developer Conference. The link to the slides is available in the show notes at snia.org
slash podcasts. You are listening to SDC Podcast Episode 61. So I'm Mark Carlson, I work for Toshiba, and I'm presenting this on behalf of the NVM Programming TWG,
which is working on this new cool stuff I want to tell you about.
So we'll talk about some of the persistent memory technology briefly
I'm sure you've got a lot of it in this room all week
And briefly cover the NVM programming model
Tom Talpey did a really good job on that yesterday here.
So maybe a slide or two.
I'm not going to talk about NVDIMMs other than how you secure them.
And then we'll get into persistent memory security
and a threat model that we've been working on
and collaborating with the Trusted Computing Group's PM working group.
The main issue is, at this point,
if you have an NVDIMM,
you've got to shred it when it leaves your data center.
That's the model that we're going by.
If we can keep those NVDIMMs from being shredded on
exit from the data center, then there'll start to be, you know, more
confidence in adopting them overall, and we won't have to do
strange things. So persistent memory: it's a type of non-volatile memory.
And the NVM programming model talks about persistent memory
media and distinguishes it from persistent memory access,
or byte access.
So the media itself of persistent memory is a small circle within a larger NVM circle.
And so what we're talking about is NVM, all NVM technologies.
You could have DRAM in front of NAND, and that would be byte-accessible persistent memory, but it's not persistent memory media.
Does that make sense?
So a persistent RAM disk appears as a disk to applications, and it's accessed through traditional
layers as blocks.
You don't have to rewrite your application to use persistent memory if you look at it
like a disk.
And that's how NAND got started, right?
They made it into a disk.
It looks like a disk and all the software just sort of works with it a lot faster.
And so that's going to happen with persistent memory as well.
And then there's memory-like non-volatile memory.
And that you do access in a byte-accessible mode. And the application data is stored directly
in the byte-addressable memory.
There's no I-O or even DMA required.
So we focus on persistent memory with memory kind of access,
byte addressable.
Okay?
That's the most difficult thing to secure.
So the programming model you've heard about this week,
NVM.PM.VOLUME mode is your disk, looks like a disk,
but it's a software abstraction.
And it's really landing in persistent memory.
And things like address ranges and thin provisioning management you want to support.
And then there's the NVM.PM.FILE mode,
and this is a PM-aware file system
that is able to take advantage of the fact that you don't have to do I/O
for files anymore.
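To make that "no I/O for files" idea concrete, here is a minimal sketch, assuming a PM-aware (DAX-style) file system and a hypothetical mount point and file name: the application maps the file and uses plain CPU loads and stores, with no read/write path at all.

```c
/* Minimal sketch: byte-addressable access to a file on a PM-aware
 * file system. The mount point and file name are hypothetical. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/data", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    if (ftruncate(fd, 4096) != 0)
        return 1;

    char *pm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    if (pm == MAP_FAILED)
        return 1;

    /* A plain CPU store -- no read()/write() I/O path involved. */
    strcpy(pm, "hello, persistent memory");

    /* Make it durable (a PM library would flush cache lines here). */
    msync(pm, 4096, MS_SYNC);

    munmap(pm, 4096);
    close(fd);
    return 0;
}
```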
So for the persistent memory security work in the NVM programming model, we want to discover
any gaps in the existing technologies related to PM security, and the shredding of NVDIMMs is a big gap. But there are others. And so we're
working on this threat model, and we want to suggest some requirements that could
resolve some of those gaps, if we did things differently, right? If we had
self-encrypting NVDIMMs, maybe you wouldn't have to shred them.
So we've established an alliance with the Trusted Computing Group,
and it's a relationship between the two groups
where SNIA provides the application user-level roles,
behaviors, and threat models,
and the TCG provides the actual solution.
And then beyond the TCG,
we're also approaching JEDEC.
John, you're still working on that, right?
Okay.
So JEDEC provides those NVDIMM specifications.
If there's some changes that we need to do
to implement some of these security features,
the interface might need to change
as well. How do you manage NVDIMMs today, right? How are you going to get a secure
key down into an NVDIMM, right? Or how do you tell it to encrypt all the memory
accesses that are coming in? So those are the kinds of things that we need to think
about. Questions so far? Okay. So many aspects of security are
actually unchanged by persistent memory, right? Administrative security isn't
changed. Key management external to the device isn't changed. Memory protection
isn't changed. Memory mapping isn't changed, right? So the first-order requirement that I just said
is, can we encrypt the data at rest in an NVDIMM or other persistent memory device?
So when you authenticate or re-authenticate, you want to be able to trigger some operation inside.
Real-time encryption: is that possible, or does it just make
these things too expensive? And then there's this whole continuity of
principal identity, right? If I'm the one that stuck the data into this NVDIMM,
and that NVDIMM ends up in a different system, how does the new user of it authenticate to my data?
I mean, you really need, if you're going to do this,
you have to sort of think about those kind of things
because the whole idea of a removable persistent memory component
is that if the rest of the system dies,
your data is still there.
It's portable now. I can take it and put it in a new system, and it should be able to come back with all of the features that I have. Okay? So the protection granularity is important too, especially at the file
and volume layers, right? So if my persistent memory is showing up as a
virtual disk or partition or device, that's the thing that you want to
encrypt at rest, right, at that level of granularity.
But if you have a persistent memory-aware file system, now you've got memory map files,
and that is the granularity at which you may want to do some of these security features.
And then achieving isolation analogous to external storage. So maybe you have a limited access enablement window
where it's available over, you know,
you have to make it available to the operating system
via a secure mechanism.
And then how do you rapidly transition your privileges
or escalate permissions?
The employee that put their data on the NVDIMM
has left the company.
You don't know how it was secured,
but you need his data.
So one of the things that we sort of focused on is that this isn't really going to be a very big market
unless, you know, the cloud guys adopt it, right? Hyperscalers adopt it, they start
charging you for a virtual machine that has persistent memory under it, right?
And then charge you more. Yeah, but you're going to pay it because it's so much faster, right? So how do you establish
trust in a cloud environment and in a multi-tenant environment
with these NVDIMMs, with persistent memory?
I want a machine with a gigabyte of persistent memory.
What is the cloud provider
going to do with my data when I'm no longer running that
machine? Is it still in the persistent memory? Did he stage it off to an SSD or what, right? Or is it
sitting out in a hard drive? So we want to speak to this multi-tenancy hardware support, what features in the NVDIMM would be required
for multi-tenancy cloud use, right?
Or even internal private cloud use
still in a virtualized environment.
So both of these hopefully can be addressed
by the encryption at rest solution you come up with
and think about the issues that we discussed on the prior two slides. So here we have a cloud customer, and we
have a cloud data center infrastructure, right? There's a little key down here; it
says this cloud infrastructure is insecure, right, because you've got multiple people running in it.
There's some security mechanism
that secures this insecure environment for this customer,
and then there's the customer-secured pieces as well.
So that's the thing.
So the cloud data centers
are not necessarily trusted by the customers, right?
Their competitors could have their stuff running in that same cloud, right?
So the customer establishes an account.
He becomes a tenant.
He's running an application in an isolated container or VM.
And now that application securely mounts some piece of storage,
persistent memory that is isolated, hopefully,
from other clients in that cloud, right?
And then the customer manages and uses keys
to secure his own use of all of that.
And typically that would be an external key manager. And here the
customer's keys are in a piece of customer-managed infrastructure, so he
trusts it, right? There are other scenarios where the cloud provider is providing
the key management, but that has its own issues. We
didn't go there. So the persistent memory clone use case: this is,
you want to boot a machine, but you want to boot it from a golden Linux image,
right? So you have this boot image here that you want to make actually immutable. You
don't want anybody going in and changing the golden Linux image for every customer, right?
So basically, what you want to do to instantiate your app is actually a copy-on-write
to the machine that's running there.
And at that point, it can change because it's running now.
But this never changes.
So the PM boot image is this trusted gold standard.
It's immutable.
And then the tenant runs in clones of that boot image, and writes
only happen over here; they don't happen over here. And then you can add
additional security features such as digital signatures. You sign this image,
right? A lot of them do. And before you trust it to run in your cloud, you
verify that signature.
Immutability is ensured by the cloud provider, enforced by features of the OS,
the operating system,
and the memory controller,
and perhaps maybe even the NVDIMM.
The storage access is authorized
based on the customer-provided keys
that we talked about on the last slide.
So it's mounted after the image creations
and then becomes part of that tenant environment.
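As a rough illustration of that clone-and-write-elsewhere idea (not any provider's actual mechanism), a private, copy-on-write mapping of a read-only image gives exactly this behavior; the image path here is hypothetical.

```c
/* A rough illustration: map a read-only golden image copy-on-write,
 * so tenant writes land in private page copies and the image itself
 * is never modified. The path is hypothetical. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/golden-image", O_RDONLY);
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) != 0)
        return 1;

    /* MAP_PRIVATE = copy-on-write: stores go to private pages. */
    char *clone = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
    if (clone == MAP_FAILED)
        return 1;

    clone[0] ^= 1;    /* tenant write: hits the copy, not the image */

    munmap(clone, st.st_size);
    close(fd);
    return 0;
}
```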
Questions so far?
Okay.
So what are the features
of the multi-tenant infrastructure?
You have this isolated execution environment
for customer applications.
The customer provides a key to enable the execution, and you may have many execution
environments per customer, right? The whole idea is you can scale these things
out to address the needs of your own customers, right? And the access
to cloud storage is already secured. The customer provides a key to access those files and objects.
Typically, it's like an X.509 certificate
in most of these interfaces.
And again, where you store your data in the cloud storage
also has many different locations or buckets,
some of these interfaces call it.
And then per tenant, you have a storage volume or partition,
and you want to be able to enable the secure erase of the
deleted data.
So we're talking about something
on the order of 10 keys per drive, maybe.
I don't know.
It depends on how big the drive is.
So both persistent and non-persistent storage usage has to be taken into account.
And then the storage persistence partitions themselves are not necessarily attaining cloud scale.
You don't need access by thousands of machines usually.
So if you think about what's on the disk,
there are two classes of tenants.
One is a tenant who relies on the security
and the granularity of the cloud provider
to secure each little piece of data.
The other is tenants who achieve data security
using the provider-supported hardware secure erase
features, though that ends up in a partition-to-tenant mapping here that then gets laid out on
different pieces of storage or
persistent memory.
So as far as the storage is concerned, these are all shared amongst the different tenants,
and it's only other infrastructure above the actual storage or
persistent memory thing that
actually gives you security
and the granularity. Now, if you're
trusting the NVDIMM to provide
the actual security features in hardware,
then
it's a different partition mapping.
Questions on this? Okay. So key management is necessary. There are secure key management techniques that apply, including the use of
key encryption keys, which is a keychain kind of technology. And then any retention of the unencrypted data that's in the process of being encrypted,
or scheduled for the same, must be guaranteed to be unrecoverable after any event that can
compromise security, such as power loss, reset, or component removal.
But this is the same thing we've been doing for disk drives for years, right?
SSDs, right?
But it's a different interface now.
You're talking about byte addressable on the memory bus.
So there's standards out there already.
OASIS KMIP, that's the Key Management Interoperability Protocol, I believe.
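As a sketch of the key-encryption-key (keychain) idea mentioned above, not anything prescribed by the TWG: the media/data key is stored only wrapped under a KEK, here using RFC 3394 AES key wrap via OpenSSL's long-standing AES_wrap_key. In practice the KEK would come from the external key manager, e.g. fetched over KMIP.

```c
/* A minimal sketch of key wrapping under a KEK. Both keys are
 * generated locally purely for illustration. */
#include <openssl/aes.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
    unsigned char kek[32], data_key[32], wrapped[40];
    AES_KEY wctx;

    if (RAND_bytes(kek, sizeof kek) != 1 ||
        RAND_bytes(data_key, sizeof data_key) != 1)
        return 1;

    AES_set_encrypt_key(kek, 256, &wctx);

    /* Wrap the 32-byte data key; output is input length + 8 bytes. */
    if (AES_wrap_key(&wctx, NULL, wrapped, data_key, sizeof data_key) <= 0)
        return 1;

    printf("data key wrapped to %zu bytes\n", sizeof wrapped);
    return 0;
}
```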
Have you seen a single memory bus that thinks it's byte addressable?
Or once I know where it's at?
When it actually gets out to the device,
I have to use Andy's flush command.
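For context, the flush being alluded to looks roughly like this on x86: after a store to mapped persistent memory, each cache line covering the range is flushed and the flushes are fenced. This is a simplified sketch using the baseline CLFLUSH; real PM libraries (e.g. PMDK) pick CLWB or CLFLUSHOPT at runtime when available.

```c
/* Simplified x86 persist barrier: flush the cache lines covering a
 * range, then fence so the flushes are ordered before later stores. */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

static void pm_persist(const void *addr, size_t len)
{
    const char *p = (const char *)((uintptr_t)addr & ~(uintptr_t)63);
    const char *end = (const char *)addr + len;

    for (; p < end; p += 64)          /* 64-byte x86 cache lines */
        _mm_clflush(p);
    _mm_sfence();
}
```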
The other thing is auditing.
It's great to say this, but unless you provide the right kinds of events, security events,
there's no place to log them.
So part of this is actually logging the security operations that are done on the actual NVDIMM itself,
and providing the hooks so that you can find out who accessed what part of the NVDIMM when,
and who gave them permission, and those kinds of things.
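Nothing like this is standardized anywhere; but the kind of record those hooks would need to produce might look like this purely hypothetical structure, capturing exactly the who/what/when/under-whose-authority fields just described.

```c
/* A purely hypothetical audit record; every field here is invented
 * for illustration. */
#include <stdint.h>
#include <time.h>

struct pm_audit_event {
    time_t   when;            /* when the operation happened        */
    uint32_t actor_id;        /* who accessed the NVDIMM            */
    uint32_t grantor_id;      /* who gave them permission           */
    uint64_t offset;          /* which part of the device...        */
    uint64_t length;          /* ...and how much of it              */
    uint16_t op;              /* map, unmap, key load, erase, ...   */
};
```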
Other considerations include code origin and delivery protection.
I mentioned the signing of the images,
but maybe you also sign any other piece of thing that you
want to trust, so obviously executables. But there's also non-repudiation, which
means you know it definitely came from your boss, and it's not a spoofed email
trying to get you to download malware or something. So that's non-repudiation.
But it also uses encryption.
Now, on the subject of memory protection,
memory protection is primarily an OS process-centric idea.
One can view virtual machines as processes running on hypervisors.
Containers...
They are what?
They are... Yeah.
All applications run in one of those processes.
Memory management units enforce memory protection
using both virtual address-based mapping
and physical address protection.
And details of both of those
are dependent upon which MMU you're running.
So what are the security implications of that as well?
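As a minimal illustration of the MMU-enforced, page-granularity protection being described, here is a plain POSIX sketch; nothing in it is PM-specific, which is exactly the point the talk is making about what persistent memory does and doesn't change.

```c
/* Minimal sketch of MMU-enforced memory protection via mprotect(). */
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pg = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, pg, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    p[0] = 42;                        /* writable: allowed */

    /* Revoke write access: a later store would now fault (SIGSEGV). */
    if (mprotect(p, pg, PROT_READ) != 0)
        return 1;

    munmap(p, pg);
    return 0;
}
```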
So you can see we're sort of just exploring
all these different use cases,
understanding what the requirements are
so that we can work with the TCG group
and the JEDEC group to come up with
some solutions here pretty soon.
And it's all part of the NVM programming model, for sure.
So we have a threat model,
and to introduce you to that,
we've come up with different roles
to use in describing the threat model.
So we have a term called customer.
This is the security principal or data owner, person, or organization.
So that's the customer.
There's a developer, which might be a storage application developer, DevOps, whatever.
He's using the libraries and so forth.
And then there's a security officer, right? Not only assigning rights, but running the audits
and engaging external auditing companies and so forth.
And then there's an administrator,
the guy that does system configuration management.
And the reason we say that's insecure
is because he doesn't own the data.
He's responsible for managing the infrastructure,
but he shouldn't have access to the data.
And then there's deliverers and repairers
and factory channel support and supply chain,
and I want to be able to send these NVDIMMs
back to the manufacturer if they fail.
I want to be able to sell them on eBay.
Okay?
So the threat model is a bit of an eye chart,
but we have kind of a category of attack,
and we'll show those.
Within, let's say, cross-tenant,
there's aspects like privacy and confidentiality,
integrity, availability,
denial of service attacks, for example.
So then we say, who's the attacker?
And remember, the previous slide had the roles.
So in the cross-tenant privacy and confidentiality threat, the
attacker would be a tenant, administrator, or repairer. And the approach that we take,
sorry, this is all white text, is the applicable existing approach and the new
issues with persistent memory, right? So if we say none here, we haven't identified any holes,
if you will, that persistent memory has opened up.
But we want to still document those,
so at least it shows that we considered it.
So traditional authorization, authentication,
encryption at rest, separation of roles,
and memory protection are all in this cross-tenant privacy and confidentiality.
The OS, the hypervisor, takes pretty much care of those already.
Persistent memory doesn't create any new issues in those.
Integrity, cross-tenant integrity: developer, tenant administrators. The existing approach is
traditional authorization and authentication, the separation of roles,
and memory protection. Now there is, in the persistent memory case, an
increased scope of damage due to mismanaged pointers, memory resources, and
so forth. Say, in my persistent memory, I save a pointer.
How long is that pointer good for?
Not very long, but it's persistent now.
It doesn't go away when the machine dies.
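A small sketch of why that's dangerous, and the usual mitigation: a raw virtual address persisted into PM is only valid for the mapping that created it, so PM-resident structures store offsets from the mapping base and convert on each use. This is the convention PM libraries adopt, simplified here.

```c
/* Persisted raw pointers outlive their mapping; persist offsets. */
#include <stddef.h>
#include <stdint.h>

struct pm_record {
    /* BAD: a persisted virtual address outlives the mapping.    */
    /* struct pm_record *next;                                   */

    /* BETTER: an offset from the start of the PM region.        */
    uint64_t next_off;        /* 0 means "no next record"        */
};

static inline struct pm_record *pm_deref(void *base, uint64_t off)
{
    return off ? (struct pm_record *)((char *)base + off) : NULL;
}
```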
Availability, denial of service,
both to the tenant and the developer.
In order to get availability, you sign a contract with the cloud provider, perhaps.
But the per-tenant quality of service that exists might be disrupted; there's a potential for rapid disruption with a limited detection window in the persistent memory case. In the cross-tenant row, I think that's a bug. So the
applicable existing approaches are secure deletion, physical or cryptographic,
and more rapid free-space recycling in memory
rather than on disk, right?
Because you're going to move stuff in and out
of persistent memory quite often,
especially as you initially put these things in;
it's going to be a small piece of persistent memory
that accelerates the rest of the stuff.
It's very expensive per byte.
So then there's a category called insider,
should be obvious.
Local hardware attacks, like from a DMA engine perhaps.
And for memory protection and per-tenant quality of service
applied to I/O, we're starting to think about this. We haven't said there are no
issues, but we haven't identified any specific ones.
RDMA: is RDMA a threat to persistent memory, right? If you're RDMA-ing into here, how is that secure?
And there are ways to secure RDMA,
but it's not a matter of protecting the data that's going in so much as who put it in there.
How do they have permission to put it in there, right?
So if you're doing persistent memory over fabrics,
what's the security threat model around that?
So we're thinking about that as well.
And then there's malware.
Insider delivers malware into the system.
Digital signing, virus protection are already there.
I don't think persistent memory adds anything
or opens up any holes.
And then access by admin and support:
getting in there and seeing the tenants' data.
How do you actually assure the customers
that your administrators can't see their data?
All right, so that's where we've gotten so far.
We're working on a white paper, and it will detail all of this.
We did an earlier draft that just had that threat model in it and sent it over to TCG, so we'll be updating that.
We have a new version. We didn't get it out publicly in
time for this conference yet. But in the next week or two, we'll probably have it out there.
So how do you control the security features of persistent memories and NVDIMMs?
For example, maybe there's an area of that NVDIMM that's reserved for control.
Now, this wouldn't require JEDEC to do anything, right?
We just say this little piece of memory is the sort of hypervisor/supervisor area of memory.
The tenants don't get to write to it.
But you go in and write to this reserved area.
That gives you the ability to, you know,
assign this granularity, map that granularity, this tenant has access to that, and this other
tenant doesn't. So that's one possible thing that we might work on with TCG.
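To be clear, no such layout exists yet anywhere; but the reserved-control-area idea might look something like this invented-for-illustration structure sitting at a fixed, supervisor-only offset in the NVDIMM.

```c
/* A purely hypothetical control-area layout; every name and field
 * here is invented for illustration, nothing is standardized. */
#include <stdint.h>

#define PM_CTRL_MAGIC  0x504d4354u      /* "PMCT" (made up)          */
#define PM_MAX_TENANTS 16

struct pm_tenant_map {
    uint64_t base;          /* start of the tenant's range (bytes) */
    uint64_t length;        /* size of the tenant's range (bytes)  */
    uint32_t tenant_id;     /* the only tenant allowed to map it   */
    uint32_t flags;         /* e.g. encrypted-at-rest, read-only   */
};

/* Supervisor-only region: tenants cannot write here, only the
 * hypervisor/supervisor can. */
struct pm_control_area {
    uint32_t magic;
    uint32_t ntenants;
    struct pm_tenant_map map[PM_MAX_TENANTS];
};
```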
What about ioctl support for a root of trust? You'd have to reestablish that root of trust on a power event,
reset, hot plug, and heartbeat loss.
And then another idea is shadowing of a volatile area, DRAM,
with persistent memory backing store.
So maybe in your DRAM, the app's writing in the clear, right?
But when you send it off to the NAND in the DIMM,
it becomes encrypted.
By the way, on AMD,
there's actually a really nice memory encryption scheme now
that they support for virtual machines.
That's the CPU.
You have to support it in the CPU.
Yeah, yeah, and then you have to put it in the memory controller.
And if you can do that at memory speeds, that's pretty good.
Thank you.
Yeah.
Is that the Ryzen?
Well, the whole family.
Yeah.
Yeah.
No, we just have to standardize the memory controller.
All right.
You're going to have to do it. So, yes, we're still working on this.
It's not done yet.
We're looking for input.
If you have feedback, send us the feedback.
But we really want to rally the industry around this SNIA view of persistent memory.
People are starting to use it in NVMe now with some of the TPARs that are coming forward and mentioning that.
So we have application
centric, vendor neutral, it's achievable today,
and it goes beyond just storage. Applications, memory, networking, processors
all need to think about how they are affected by persistent memory.
Like we had this panel this morning, right?
How are the fabrics going to be affected?
How are the SSDs going to be affected by persistent memory?
NVMe just approved, I think, persistent memory regions, and that'll see some
more features over the next few months. And actually, it's not up
there yet, but that's where it will land when we finally publicize it, the
programming model. The latest programming model version is 1.2; I encourage you to go download it, read it, understand it.
Go get the slides from Andy's talk as well.
All right.
Ooh, we're doing good for the pool.
So you guys aren't very interactive.
Anybody have any questions or comments?
Let's get on to the drinks, then.
Can you just...
Can you go back to the table?
Sure.
There's three slides of the table.
Most of them have no issue with DIMMs, right? But what's the security model for memory? It's volatile, but there are features when memory becomes persistent
that now, I mean, nobody shreds their DRAM chips
when they leave the data center, do they?
They're forgettable.
I'm just saying that a lot of the things
that you're talking about here
with persistence really apply
to the existing DRAM as well.
If that was truly the case, we'd have none down all the columns.
Well, it's not all of them.
I'm just saying.
Yeah, yeah, yeah.
You've listed some. Go to the next slide and the next.
Yeah, if it's blank, it is blank.
That means we can't agree that it's none.
Maybe we just haven't found one yet.
This is an exercise left to the reader, I see.
No, it's between the groups that are working on this stuff, right? We send this over the wall to TCG;
they say, no way, you forgot about this. Yeah?
I don't know much about it, but knowing a lot about SED and FDE, do we have an equivalent
for the solid state devices?
You know, they just protect a limited amount.
It's only for device portability that it's any good.
Yeah, you can secure erase a single namespace.
I don't know.
Is it supported in Linux?
Secure erase of a single namespace?
Well, it's supported if the device does it, right?
I mean, if you look at it, you need...
Well, we've got the newer Sanitize,
which is always the whole subsystem,
but we've also got the secure...
What do you call it? Delete?
What's the name?
Well, anyway, if you do the Format NVM,
you can do the secure erase settings,
and that's actually called secure erase,
but it really depends on your device.
You've got the Format NVM in NVMe;
it's this really weird command
where there's a couple of bits in Identify
that tell you,
is it going to erase a single namespace,
or is it going to erase all of it
if you specify the namespace?
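For reference, issuing that command from Linux user space looks roughly like the hedged sketch below: NVMe admin opcode 0x80 is Format NVM, and the Secure Erase Settings field (CDW10 bits 11:9) set to 2 requests a cryptographic erase. Whether it scopes to one namespace or the whole subsystem is, as just discussed, reported by the device in Identify.

```c
/* Hedged sketch: NVMe Format NVM with crypto erase via the Linux
 * admin-command passthrough ioctl. */
#include <fcntl.h>
#include <linux/nvme_ioctl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int nvme_crypto_erase(const char *dev, unsigned int nsid)
{
    int fd = open(dev, O_RDWR);     /* e.g. "/dev/nvme0" */
    if (fd < 0)
        return -1;

    struct nvme_admin_cmd cmd;
    memset(&cmd, 0, sizeof cmd);
    cmd.opcode = 0x80;              /* Format NVM */
    cmd.nsid   = nsid;              /* or 0xffffffff for all */
    cmd.cdw10  = 2u << 9;           /* SES = 2: crypto erase, LBA format 0 */

    int rc = ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
    close(fd);
    return rc;                      /* 0 on success */
}
```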
I think it's really whacked. He's right over there.
It gets fundamentally odd when you're dealing with NAND and its erase structures and moving
things in the background, because we don't really control what's used.
No, no, no, that's not the problem. That's very understandable. The problem with NVMe
is... No, I understand that. I'm getting there.
And when you have configurable namespaces and you have different areas of the device,
sometimes you can build architectures which separate that, but it's more
typical that you don't, because then you get the best wear leveling, you can get the best
performance; there are trade-offs. So it's easier:
if we sanitize the whole device, you know what you're getting from your sanitize; we can guarantee it.
No, no, it absolutely is. I mean, the point is, if you look
at Format NVM, the one option,
the most common one, isn't even
an erase. It's a crypto erase.
You just throw away a key, not the
data. But the point is,
the interface that's designed in NVMe
is the worst possible one. Any sane
person would say there's a format
namespace that's an optional command,
and a format subsystem that's an optional command.
There's not one command that always takes a namespace; sometimes it's even more than that.
Well, it's generally acknowledged that the higher you go in the
hierarchy, the better off you are. So if you can move from the device level
that FDE provided, for example, you can move up to the subsystem layer
and do it in the storage controller; you're a lot better off, like the example with
the Storwize software base.
If you can move up to the OS level,
like the z/OS encryption facility, you're
even better off because now you're protected in flight.
If you move up to source data creation
at the top of the chart, you're actually at the best level.
But there's other issues involved.
None of these are freebies, by the way.
There's a price to pay at each level, obviously.
But when you're talking about data breaches and leaving stuff behind, which is what happens
with the crypto scramble, anybody, everybody
with a quantum computer can break that stuff.
Sometimes you actually do want to do block erases. You want to do overwrites
and stuff like that.
And to do that well, you need to do it all,
or you don't really know that you've done it.
So if I have a FIPS 140-2 certified
operating-system-level encryption capability
built into z/OS, are you saying that's easily broken,
that anyone can crack that?
There's a reason that there's a post-quantum NIST process in place right now
to figure out how to do encryption
once you know about quantum computing.
Be careful.
There were a lot of people asleep tonight.
Sorry.
So it goes.
By the way,
Trump has nothing to do with it.
Alright, so
Eden is talking.
Yeah,
no, no,
SNIA already has a really good handle
on the security ciphers to use.
We have a TLS spec,
which actually scopes TLS down
to only a handful of cipher suites
that are used in that. So I imagine that we would recommend
similar kinds of cipher suites for this as well.
Okay, so...
You know, TLS has specified all these AES-256 suites, you know...
Yeah, I understand.
For example, the common one for NVMe or SSDs is AES-XTS.
Yeah, that's the best part.
But if you want to do a byte, you can.
Well, not really.
Who says the byte or the block or the cache line
is even the encryption block size?
Because if you look at OS memory management,
well, it's mostly page-size based.
It would be a really nice round number
to use 4K as your encryption block.
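A sketch of what per-4K-unit AES-XTS looks like with OpenSSL's EVP interface; deriving the tweak from the unit number (standing in for the block or page address) is illustrative, and real deployments follow IEEE 1619.

```c
/* Hedged sketch: one 4 KiB encryption unit under AES-256-XTS. */
#include <openssl/evp.h>
#include <stdint.h>
#include <string.h>

int encrypt_unit(const unsigned char key[64],   /* XTS = 2 x 256-bit keys */
                 uint64_t unit_no,
                 const unsigned char in[4096],
                 unsigned char out[4096])
{
    unsigned char tweak[16] = {0};
    memcpy(tweak, &unit_no, sizeof unit_no);    /* unit number as tweak */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int outl = 0;
    int ok = ctx != NULL
          && EVP_EncryptInit_ex(ctx, EVP_aes_256_xts(), NULL, key, tweak)
          && EVP_EncryptUpdate(ctx, out, &outl, in, 4096);
    EVP_CIPHER_CTX_free(ctx);
    return ok ? 0 : -1;
}
```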
Another fun side of that is, if you're going to have some inline encryption,
you don't want to have six million gates to deal with it.
You have a limitation on how big you make it.
I like that. I like the requirement that we have 4K encryption units.
It covers a lot of classes of devices.
It's a matter of verification.
The other thing about
cache line sizes is cache line sizes
aren't the same everywhere.
There are POWER systems out there that have 128-byte
cache lines.
There are Arm systems that have
32-byte cache lines, even if they're unusual
for most people.
So the cache line size is not a standardized value, right?
So you need to build...
So your crypto needs to allow for the idea
that your media might be moving between systems
with different cache line sizes.
But if they're all multiples of 128 bits,
you're kind of OK.
So one thing that's different
is when you move something in the block space,
stuff sort of moves, and you expect it to move, and it goes from
either one nice collection of 0 to n,
or several nice collections of 0 to n. But in the
NVDIMM memory mode, it's just different. And that's where the at-rest stuff is kind of interesting, because
when you're moving an NVDIMM, you probably don't want to move the contents, period.
Well, I mean, I don't see why everyone is so focused on NVDIMMs.
To be honest, a storage-like semantic on a parallel memory bus that is synchronous is pretty stupid.
So initially, we'll see things like NVDIMMs, because that avoids PCIe.
Eventually, we'll move to asynchronous operations on memory bus.
JEDEC is already working on that.
Eventually, not too soon, we'll move to serial memory connections.
So we've seen persistent memory media show up in block devices, right?
That's where Optane lives, or 3D XPoint, right?
The point is, if you put it in an NVMe device, all those issues go away. It's an NVMe device, right?
No, but I could build a cache controller right now
that does loads and stores from an NVMe device.
It's all about the cache controller, right?
Yeah.
So it doesn't really matter how your bus connection
is looking like.
It's a programming model.
Well, it's actually...
You could, but fundamentally,
memory generally doesn't have a driver.
No, but it does with persistent memory.
Look at how much driver code it takes to drive NVMe.
Those things are over.
Look at Gen Z.
Yeah, we'll end up hopefully removing a lot of that stuff.
Guys, we just moved it up to the OS level.
The whole discussion is irrelevant.
Right?
You're still shredding NVDIMMs,
and perhaps NVMe devices with persistent media in them,
before they leave the...
The other advantage we get is, you know,
there's a lot of interest in key shredding,
and we have customers that have done it thousands of times and continue to do it dozens of times every day.
And the reason that's of such interest is, number one, you don't want to leave stuff behind.
Number two, if you have an intrusion detection, you've got to shut the door real quick,
and it's a lot quicker to shred the keys than to just throw away devices, right?
So one of the advantages of moving up the hierarchy
and getting to the OS level is I can shred the keys
in data centers.
Okay, if I can shred the master keys at that level, man,
I'm doing well. This is an interesting discussion,
don't get me wrong, okay, but it doesn't matter after that.
Right, so that's really the point. With the crypto scramble that we get,
fortunately, it takes microseconds
if it's in the device.
Live with it.
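In principle, the device-side crypto scramble being discussed reduces to something this simple: if the media is only ever written encrypted, destroying the last copy of the key destroys the data in microseconds, with no media overwrite. The struct is invented for illustration; real devices zeroize key registers in hardware.

```c
/* Minimal sketch of crypto erase / key shredding. */
#include <string.h>

struct pm_region_crypto {
    unsigned char media_key[64];   /* e.g. an AES-XTS key */
};

void crypto_erase(struct pm_region_crypto *c)
{
    /* explicit_bzero (glibc/BSD) cannot be optimized away the way a
     * plain memset of a dying object can. */
    explicit_bzero(c->media_key, sizeof c->media_key);
}
```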
Okay, any other discussion, questions?
Sorry for the folks on the video podcast
when it comes out. We just had a long interactive discussion without
microphones.
Thanks guys.
Thank you.
Thanks for listening. If you have questions about the material presented in this podcast,
be sure and join our developers mailing list by sending an email to developers-subscribe at snia.org.
Here you can ask questions and discuss this topic further with your peers in the storage developer
community. For additional information about the Storage Developer Conference, visit www.storagedeveloper.org.