Storage Developer Conference - #15: Storage Class Memory Support in the Windows Operating System
Episode Date: August 8, 2016...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Chair. Welcome to the SDC
Podcast. Every week, the SDC Podcast presents important technical topics to the developer
community. Each episode is hand-selected by the SNIA Technical Council from the presentations
at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org/podcast.
You are listening to SDC Podcast Episode 15.
Today we hear from Neil Christensen, Principal Development Lead with Microsoft,
as he presents Storage Class Memory Support in the Windows Operating System
from the 2015 Storage Developer Conference.
My name is Neil Christensen. I'm in the storage and file systems team at Microsoft,
and we have been working on storage class memory support in the operating system and in the file
systems themselves. And I'm just going to talk a little bit today about the work we're currently doing. The quick question is, what
is storage class memory? It's kind of exciting. If you look, there's a lot of non-volatile
memory tracks over the next couple of days. And it's kind of nice I get to talk about
this in advance of all of those. But there's some great talks, and I recommend them. It's a big paradigm shift, storage class memory.
It's non-volatile storage with RAM-like performance,
with low latency and high bandwidth.
This is a big change.
The storage resides on the memory bus,
and there's a lot of different terms for them.
You've probably heard all these terms.
I was just listing some of them.
They're not necessarily 100% equivalent,
but you'll hear all these terms essentially referring to the same thing.
This is just an example of one of them.
It's an NVDIMM.
This one's actually created by Viking.
Basically, it is RAM that, to protect against power loss,
is backed with flash in a power-fail-safe mode.
That's what that giant capacitor is for.
The capacitor holds enough charge so it can dump the RAM to the flash on power loss,
and then when you power it back on, it'll dump the other way.
So basically, you're getting non-volatile access at RAM speeds.
And I'm sure many of you have heard about the 3D XPoint announcement by Intel and Micron.
Again, a different technology.
In the same realm, they've talked about, I guess they just announced, their Optane DIMMs,
and we'll have to see how all that goes.
But this is an exciting area where there's a lot going on.
This is just a quick look at the standardization work.
At Microsoft, we've been very involved with standardization
because we wanted this to be common.
In the past, different vendors were doing their own proprietary implementations,
and so we've been engaged over the last several years
in a lot of standardization work.
I won't go into any of this stuff,
but it's been a high priority for us, and we've been very involved with it.
So one of the things we are in active development of is new storage class memory drivers.
And in fact, in the Windows environment, there's a new bus driver and a new disk driver.
I won't go into the details of these because that isn't the goal here, but this abstraction is necessary
to make storage class memory available to file systems and the rest of the operating system.
Now, let's talk a little bit about what the goals of storage class memory are for Windows.
Specifically, one of our primary goals is to grant zero copy access to the persistent memory to applications.
We still need to have existing applications still run without modification.
There will be some apps that do want to change to take advantage of characteristics,
but we also need the ability to maintain our huge existing ecosystem. We do want to provide an option
to support 100% backward compatibility
with existing storage semantics.
But even with that,
realize that the storage class memory
introduces new types of failure modes.
Failures will behave a little bit differently.
And then also, again to maintain some backwards compatibility,
we do want to have sector-granular failure modes
for the app compatibility scenarios,
because there are a lot of applications, believe it or not,
that make a lot of assumptions about the underlying storage,
specifically the concept of sectors,
and all of this is actually exposed all the way up to applications.
One of the things that Intel has been working on
in this idea of block-level semantics on storage class memory
is this library called BTT, or the Block Translation Table.
Basically,
the idea around this is it's some software.
It's not a hardware thing at this point. It's software that basically gives
you sector concepts
on non-volatile
memory, on byte-addressable storage.
And so,
for example, there are no sub-sector torn writes.
On power loss, you either see the old sector or the new sector,
which helps maintain compatibility with existing applications that have built-in assumptions around storage.
And a simple example is the NTFS file system for Windows.
It has code for detecting torn writes across sectors.
It does not work very well
for detecting sub-sector torn writes.
And so BTT is a
concept that
you can add. It's just
software libraries that you can add to your system
to give you that semantic to maintain compatibility.
Our drivers
will support BTT,
but it will be an option to
be able to enable or disable that.
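To make that concrete, here is a much-simplified sketch of the block translation table idea. This is illustrative C with hypothetical names, not Intel's actual BTT implementation (the real design adds arenas, a free-block log, and error tracking): each logical sector maps through an indirection table, and a write lands in a spare physical block before the map entry is switched.

```c
/* Simplified sketch of the BTT idea; hypothetical names, not Intel's
 * actual implementation. Each logical sector maps to a physical
 * sector through an indirection table. A write goes to a free
 * physical sector first; only after the data is durable is the map
 * entry retargeted, so a power loss leaves a reader with either the
 * complete old sector or the complete new one, never a torn mix. */
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512

typedef struct {
    uint8_t  *media;      /* byte-addressable persistent memory   */
    uint32_t *map;        /* logical sector -> physical sector    */
    uint32_t  free_block; /* spare physical sector for new writes */
} btt_t;

/* Stand-in for a real persistence barrier (e.g. cache flush + fence). */
static void persist(const void *addr, size_t len)
{
    (void)addr; (void)len;
}

void btt_write(btt_t *btt, uint32_t lba, const void *buf)
{
    uint32_t target = btt->free_block;

    /* 1. Copy the new sector into the free block and make it durable. */
    memcpy(btt->media + (size_t)target * SECTOR_SIZE, buf, SECTOR_SIZE);
    persist(btt->media + (size_t)target * SECTOR_SIZE, SECTOR_SIZE);

    /* 2. Retarget the map entry; this single update is the commit
     * point. A crash before it leaves the old mapping intact. */
    uint32_t old = btt->map[lba];
    btt->map[lba] = target;
    persist(&btt->map[lba], sizeof btt->map[lba]);
    btt->free_block = old; /* the old block becomes the new spare */
}
```

The map update is the commit point: before it, readers see the old sector; after it, the new one.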
And just to reiterate this, storage class memory is a disruptive technology.
Customers want the fastest performance, and it's interesting, because the system software
that we provide, our I/O stacks, everything, gets in the way of that fastest performance.
But our customers also want application compatibility, and these collide with each other.
They want to be as fast as possible, but hey, I don't want to have to rewrite everything.
And so we've been working on a model that for those that want the fastest access as possible,
they can have that,
but if they want to maintain compatibility, we can do that.
It's kind of what I'm going to be talking about.
So what we're doing is we're modifying,
we're making changes to the memory manager
and cache manager and file systems inside of Windows,
and we're going to actually have volumes
that can run in one of two modes,
either block mode, which is our compatibility mode (it will still be a screaming fast device because it's running on this direct-access, RAM-bus storage), or what we
call our DAS mode, which stands for direct access storage mode. And I'll talk a little bit about the advantages of DAS mode over block mode.
The decision point as to what mode a volume is in is a format-time decision. So you choose up front
whether you want it to be in this mode or that mode. And if you want to change the mode, you have to
reformat the volume. So let's talk just briefly about block mode volumes. It
maintains existing storage semantics.
In other words, all IO operations traverse through the storage stack
down to our storage class memory storage driver, which I mentioned earlier.
We have optimized the path length in there.
We've removed some layers in the storage stack just to make it as efficient as possible,
and you'll see improvements with that over time.
This model is fully compatible with existing applications.
It'll be supportable by all the Windows file systems
and works with existing file system and storage filters.
So it's a pretty compatible model.
You still have incredibly fast storage.
You should see a big win, but you maintain that app compatibility.
And if a customer wants this, then we will have it and make it available for them. On the other hand, we have
this new concept, which we call our DAS mode volumes. It introduces new storage concepts,
specifically that memory-mapped files provide applications with zero-copy access. In other words, we map the direct access storage
directly into the application's address space at a file-granular level, and I'll talk a little
bit more about how we do this. There is some loss of compatibility functionality because of
this paradigm change. You don't get all of the features that you have today, and I'll talk about why. We will be supporting both NTFS and ReFS in this new mode.
This is just a quick picture which talks about block mode versus direct access mode.
You can see with memory map files, we memory map it directly into the application,
and then they're just kind of going straight to the hardware.
There you have to go through the driver.
We still have modes for going through the driver
even on direct access and I'll talk about that in a minute.
And if someone has a question, you're more than welcome to ask.
So the big win is through memory mapped IO.
We made a conscious decision
to not change any of the existing memory mapped APIs.
We wanted an existing application to be able to run without change on this new type of hardware
and get the benefits of it without any modification.
And so basically how this works, I'm just going to briefly tell you how this works,
is that when an application creates a memory mapped file,
the memory manager actually asks the file system
if the section should be created in DAS mode
or if it should be in compatibility mode.
And if the answer is, hey, we're on the right hardware
and it's been formatted properly, then yes,
we tell the memory manager, yes,
we want to do this new mode of hardware.
And then what happens is
our memory manager actually asks the file system
to take a given offset and length into the file
and, going through the layers of the storage stack,
translate those to physical addresses that are returned back to the memory manager.
And then the memory manager just creates its mappings,
and thus they have direct access to the storage.
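To illustrate that unchanged path, here is a minimal sketch using the standard Win32 memory-mapping APIs. The file path and sizes are made up for the example; the point is that this exact code, unmodified, gets zero-copy access when the file lives on a DAS mode volume.

```c
/* Minimal sketch: standard Win32 memory-mapped file usage. On a DAS
 * mode volume this same code gets a direct mapping of the storage;
 * the path and sizes here are illustrative only. */
#include <windows.h>
#include <string.h>

int main(void)
{
    HANDLE file = CreateFileW(L"C:\\scm-volume\\data.bin",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_ALWAYS,
                              FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* This is the point where the memory manager asks the file system
     * whether the section can be created in DAS mode; the API is
     * identical either way. */
    HANDLE section = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                        0, 4096, NULL);
    if (section == NULL) { CloseHandle(file); return 1; }

    char *view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
    if (view != NULL) {
        /* In DAS mode these stores go straight to the storage media. */
        memcpy(view, "hello, persistent memory", 25);
        FlushViewOfFile(view, 25); /* ask for durability */
        UnmapViewOfFile(view);
    }
    CloseHandle(section);
    CloseHandle(file);
    return 0;
}
```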
There are interesting challenges with that model.
And I'll get into that.
This is what we call our zero copy access.
It's a term that we have been using.
It gives an application direct access to the storage.
This block translation table that I mentioned, BTT, is not at all used.
If an application wants to run in this mode,
it has to deal with different failure patterns than it might see with traditional memory-mapped files.
And so while an application should be able to run just fine in this scenario,
in a power loss scenario, the failure patterns are most likely different
than what it has seen in the past.
And so they might have to make some tweaks.
We'll have to see over time.
The important thing that you lose is there's no paging reads or paging writes anymore
coming in and out of the file system when you set up this mode.
Again, direct access to the underlying storage.
Now, for compatibility,
there are three different I/O patterns
that are generally used.
You have memory-mapped I/O,
you have cached I/O,
and you have non-cached I/O.
So this is talking about cached I/O.
When cached I/O is requested
for a file in DAS mode,
the way our cache manager works
is that it also creates memory-mapped sections.
It's tightly integrated with the memory manager.
And so what we do is the cache manager creates a direct access section for caching.
And therefore, even if you do cached I/O,
it's coming into the file system, goes into the cache manager,
and then talks directly to storage.
So it's not as fast
as memory mapping, but it should be a lot faster because we're not going down the storage stack.
We're not having that overhead and latency of going up and down the storage stack.
And this is basically one-copy access to storage. You don't get... I think that's on the next slide.
Obviously, cached I/O is completely coherent with memory-mapped I/O.
Again, no block translation table is in use.
Applications can see different failure patterns.
And as in the memory-mapped case, there are no paging reads or paging writes for cached I/O.
And again, I'll talk about what the impact of that loss is.
And then the last type of I/O that is very common is what we call non-cached I/O.
Now, we had many discussions about how non-cached I/O should be handled,
and we have closed on a model, because there are
too many applications that do non-cached I/O and expect a certain behavior,
especially around sector-granular failure modes.
So today, we are planning
that non-cached I/O will actually traverse the storage stack.
The storage driver can use the block translation table,
and it maintains a lot of the existing storage semantics it does today.
An example is a SQL database.
It's typically going to use non-cached I/O for its load.
If they want to take advantage of the new models,
they may have to be modified to do that.
Again, this makes it compatible.
You can run it on this DAS mode volume.
You can run it on a block mode volume.
It'll basically behave the same.
But it gives you that level of compatibility.
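For a flavor of the pattern being preserved, here is a minimal sketch of non-cached I/O from an application. The path is illustrative, and the 512-byte sector size is an assumption; real code should query the volume's actual sector size.

```c
/* Minimal sketch of non-cached I/O: FILE_FLAG_NO_BUFFERING sends the
 * write down the storage stack (where BTT can provide sector-atomic
 * behavior) instead of through a direct-mapped section. The path is
 * illustrative and the sector size is assumed. */
#include <windows.h>
#include <malloc.h>
#include <string.h>

#define SECTOR 512 /* assumed; query the volume in real code */

int main(void)
{
    HANDLE file = CreateFileW(L"C:\\scm-volume\\db.dat",
                              GENERIC_WRITE, 0, NULL, OPEN_ALWAYS,
                              FILE_FLAG_NO_BUFFERING |
                              FILE_FLAG_WRITE_THROUGH, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Non-cached I/O requires sector-aligned buffers and lengths. */
    void *buf = _aligned_malloc(SECTOR, SECTOR);
    if (buf != NULL) {
        memset(buf, 0xAB, SECTOR);
        DWORD written = 0;
        WriteFile(file, buf, SECTOR, &written, NULL);
        _aligned_free(buf);
    }
    CloseHandle(file);
    return 0;
}
```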
Our approach is that if you really want the performance,
you're going to have to switch your coding model
to use memory map files.
So if they want to create a database that has memory map files,
then they can do that,
and they'll have the quickest access that they can.
File system metadata is interesting. To maintain compatibility with our logging models,
at least for today, we are sending all file system metadata down through the storage stack so we can
maintain our proper write-ahead logging semantics.
We have ideas on how we can make that a lot more efficient in the future,
do some changes there,
but we're going to start out just by sending all of them down because, A, there are not that many metadata I/Os compared to data I/Os,
and it helps us to bring it up quicker.
Okay, so this is where I talk about
some file system functionality
that actually disappears
when you have a direct access mode volume.
It's interesting, if you think about it,
with the loss of paging reads and paging writes,
basically the software loses hook points
to where it can do manipulation of the user's data.
This is how encryption works, as an example:
you encrypt on the non-cached I/Os, you encrypt on the
paging I/Os, and when you bring them back in, you decrypt.
Well, guess what?
When someone has the ability to directly
talk to the storage themselves without
any software intervention by the
operating system,
you lose these hook points. And so some of the features that we lose are encryption and compression. NTFS has this feature called TxF, which provides transactional semantics. For now,
we're disabling that. The ReFS file system has some features that you lose: integrity stream support, which is a concept of checksummed user data.
It's basically built on copy-on-write semantics.
Well, when you direct map, you don't have the ability to copy-on-write because there is no write.
All you see is direct access to the storage.
JR later today is going to be talking about a couple of these other features in two hours here in this same room, I believe.
There's no volume-level encryption support from BitLocker.
We lose the ability to create volume snapshots,
again, because there's no hook point to know when data is changing.
And also with Storage Spaces, which is our volume manager,
you kind of lose the ability to mirror data,
and you lose the ability to do parity storage as well.
Again, because you don't have a hook point.
You don't have a point where you can update a parity stripe
because you don't know that it's actually changed.
So there is some definite impact with this new mode of a volume.
That's why we have block mode.
So if people want to run with RAID 6 on their storage, they still can, even with this type of hardware.
But we want to make the paradigm shift.
We want to start providing new functionalities to applications and to users that they just flat out can't get today.
We want to give them the best performance
they can possibly have.
So one of the interesting points
that at least for now
we're not going to support is sparse files.
The reason for that is
just the work of passing the information
to the memory manager and setting it up.
We actually have a handshake we can do with the memory manager.
It's just we're not going to implement it in the first version.
So sparse files in the Windows environment
will be disabled for now,
but support will come in the future
because we know how to do it.
An interesting challenge that we had
is, you know, when you have a memory map file,
how do you know when to update the time of the file, the modified time of a file? We had many discussions with the owner of the memory
manager about, hey, can you just tell us when one page has changed in this file, in this
view of a file? And he said, no, that's too much overhead. I can't do it.
And so basically what we've decided to do is make these updates when a writable section is created.
So if someone creates a writable section,
we're just going to say, hey, the file's modified.
We also have a thing called the USN Journal.
It's basically a change journal
that lets you track things at a high level.
It's a fairly granular model where you can track that, yeah, a file's been created.
It's been deleted.
It's been modified.
It's been extended.
We're also going to update the USN journal at creation of a writable map section that it's changed.
Because the truth is we don't know if they're going to actually change it,
but we're under the assumption that if someone opens a writable section,
they're probably going to modify it.
And then also we have directory change notification,
which is, again, another way of communicating to an application
that something's going on in a given directory.
As an example, the Windows shell uses this to know that a file's been created in a directory
or modified or things like that.
Again, triggering at create section time.
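To show what an application on the other side of this sees, here is a minimal sketch of a directory watcher built on the standard API (the directory path is illustrative). On a DAS mode volume, a watcher like this gets the notification for a memory-mapped file when the writable section is created rather than on each individual store.

```c
/* Minimal directory-watcher sketch using the standard Win32 API; the
 * watched path is illustrative. On a DAS mode volume the last-write
 * notification for a memory-mapped file fires at writable-section
 * creation time. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE dir = CreateFileW(L"C:\\scm-volume\\watched",
                             FILE_LIST_DIRECTORY,
                             FILE_SHARE_READ | FILE_SHARE_WRITE |
                             FILE_SHARE_DELETE,
                             NULL, OPEN_EXISTING,
                             FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (dir == INVALID_HANDLE_VALUE) return 1;

    DWORD buffer[1024]; /* DWORD-aligned, as the API requires */
    DWORD bytes = 0;
    /* Blocks until something in the directory changes. */
    if (ReadDirectoryChangesW(dir, buffer, sizeof buffer, FALSE,
                              FILE_NOTIFY_CHANGE_LAST_WRITE |
                              FILE_NOTIFY_CHANGE_FILE_NAME,
                              &bytes, NULL, NULL)) {
        FILE_NOTIFY_INFORMATION *info = (FILE_NOTIFY_INFORMATION *)buffer;
        wprintf(L"changed: %.*s\n",
                (int)(info->FileNameLength / sizeof(WCHAR)),
                info->FileName);
    }
    CloseHandle(dir);
    return 0;
}
```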
So different concepts.
Before I get to that, there are other areas that are interesting, that are problematic.
There was this concept I didn't even realize existed called dangling MDLs.
An MDL inside of Windows, for those that are not familiar with the Windows environment,
is simply a descriptor (a memory descriptor list)
that translates logical addresses
to physical addresses, and they're used in file
systems because at the bottom,
the storage drivers need to DMA to the actual
physical addresses. So it's just a descriptor.
Well, it turns out you can have these
things lying around
after a file's been closed
or deleted, whatever.
Up to this point, the memory manager
kind of hid this concept from us and said, eh, it doesn't matter because it's going to RAM and no
one's using it and it can hang there forever. Well, one of the things we discovered when we
started this project is those things can hang around and when it's directly mapped,
guess what? You better not put another file in that place. If someone deletes that file and you allocate it to a new file,
you need to be careful of that because all of a sudden you have this entity
that is in the operating system that still has a pointer to that storage
that's not tied to anything.
It's not tied to a file anymore or anything,
but it can then essentially corrupt a file after the fact.
And so we've had to build some infrastructure
to track these dangling MDLs.
And our model is basically: do not reallocate that space.
Someone got it when the file existed.
If they want to clobber the free space after the file is deleted
or the old contents, they can do that.
But we can't give it to a new file, so we actually have to track this state,
which is something we've never had to do before.
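To make the MDL concept concrete, here is a tiny kernel-mode sketch; the function name and context are hypothetical, and this is only meant to show what an MDL is, not the file systems' actual dangling-MDL tracking.

```c
/* Hypothetical kernel-mode sketch of what an MDL is: a descriptor
 * that pins a virtual buffer and exposes its physical pages. This is
 * illustrative only, not the dangling-MDL tracking code itself. */
#include <ntddk.h>

NTSTATUS DescribeBuffer(PVOID buffer, ULONG length)
{
    PMDL mdl = IoAllocateMdl(buffer, length, FALSE, FALSE, NULL);
    if (mdl == NULL) return STATUS_INSUFFICIENT_RESOURCES;

    __try {
        /* Pin the pages. After this, the MDL holds the physical page
         * numbers a storage driver could DMA to. On a DAS volume, if
         * such an MDL outlives the file that backed it, it still
         * points at the raw storage, which is why the file system
         * must not hand that space to a new file. */
        MmProbeAndLockPages(mdl, KernelMode, IoWriteAccess);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        IoFreeMdl(mdl);
        return GetExceptionCode();
    }

    MmUnlockPages(mdl);
    IoFreeMdl(mdl);
    return STATUS_SUCCESS;
}
```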
In the Windows ecosystem,
we have a thing called file system filters.
And basically, a file system filter is a driver
that layers on top of the file system
and can monitor all the operations that come in
and come out of the file system.
And basically, these are used to augment file system functionality.
Here are some different classes of filters.
Antivirus is a very common usage of these file system filters, but there's many more.
And lots of different products for the Windows environment that do these.
Our direct access mode on volumes then becomes problematic for these file system filters.
And so what we've had to do to adapt our ecosystem, again, is say,
you know, these filters don't know all the concepts of this new hardware.
So we're going to actually implement an opt-in model such that existing filters
will simply not be notified about these new types of volumes unless, at registration time,
they say, oh, I understand this concept, and then we'll let them opt in. A minimal sketch of that registration follows.
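This is roughly what that opt-in looks like at registration time. The FLTFL_REGISTRATION_SUPPORT_DAX_VOLUME flag is the form the opt-in eventually took in shipping Windows; at the time of this talk the mechanism was still in development, so treat this skeleton as illustrative.

```c
/* Sketch of minifilter registration with the DAS/DAX volume opt-in.
 * FLTFL_REGISTRATION_SUPPORT_DAX_VOLUME is the flag as it later
 * shipped; without it, the filter is simply not attached to these
 * volumes. */
#include <fltKernel.h>

static PFLT_FILTER gFilterHandle;

static NTSTATUS FilterUnload(FLT_FILTER_UNLOAD_FLAGS Flags)
{
    UNREFERENCED_PARAMETER(Flags);
    FltUnregisterFilter(gFilterHandle);
    return STATUS_SUCCESS;
}

static const FLT_REGISTRATION FilterRegistration = {
    sizeof(FLT_REGISTRATION),              /* Size    */
    FLT_REGISTRATION_VERSION,              /* Version */
    FLTFL_REGISTRATION_SUPPORT_DAX_VOLUME, /* opt in to direct access volumes */
    NULL,                                  /* ContextRegistration   */
    NULL,                                  /* OperationRegistration */
    FilterUnload,                          /* FilterUnloadCallback  */
    /* remaining callbacks default to NULL */
};

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    NTSTATUS status = FltRegisterFilter(DriverObject,
                                        &FilterRegistration,
                                        &gFilterHandle);
    if (NT_SUCCESS(status)) {
        status = FltStartFiltering(gFilterHandle);
        if (!NT_SUCCESS(status))
            FltUnregisterFilter(gFilterHandle);
    }
    return status;
}
```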
Antivirus filters are what I talked about right here. They're minimally impacted, but there is one area that's problematic.
They actually watch for write operations to know that a file has been changed.
Generally, an antivirus filter scans when the file is opened,
watches to see if the file has changed. If it is, they may scan on close
or they may scan on the next open. It's up to them. But they want to track that.
Well, guess what? You now have a model where
they don't see paging writes anymore, which was again their hook point:
they could see regular writes, and they could see paging writes.
Well, they lost their hook point by having direct access sections created.
And so this impacts them.
And so they're going to have to tweak their algorithms.
I don't think it's a big change, but they're going to have to tweak their detection of modified files because of this change. Data transformation filters are actually kind of challenging. These are,
as examples, encryption and compression filters. For the same reason that BitLocker is no longer
supported on these classes of volumes, software-level BitLocker, and why we don't have the built-in
encryption that's natively in
NTFS, as an example, is because there is no hook point to transform the data. And so these filters
are just broken. It is expected that at some point there will be hardware-level encryption
introduced. We'll have to wait for that, because it'd be crazy not to. And so the answer to encryption is hardware support, and then components like BitLocker will be
opted into that hardware support and will do the right thing in these environments.
But when that happens, I don't know. We'll have to see how that goes.
there is one last area that we're working on.
There is this thing called the NVML library.
It's a non-volatile memory library that's been developed by Intel.
There's actually a general session talk tomorrow by Andy Rudoff from Intel.
I'm pretty sure he's talking about this thing,
but it's a general session, so you all get to listen to it.
It defines a set of application APIs for directly manipulating files
in this environment on SCM hardware.
And it provides some abstractions to use to do certain concepts.
And it's available today on GitHub for Linux.
There are definitely people using it.
And we are actually working with Intel
at this point to do a port to
the Windows environment.
Exactly what form that'll take when
that's done is still to be determined.
But it is something we're working
on. One of the reasons for that is
they provide some level of application
portability between Linux and Windows,
which is a good thing.
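For a flavor of the programming model, here is a minimal sketch against libpmem's documented C API as it exists for Linux today (the API has evolved since this talk, and the path is illustrative): map a file on persistent memory, store to it directly, and flush to the persistence domain.

```c
/* Minimal libpmem sketch (NVML, as available on Linux); the path and
 * size are illustrative. */
#include <stdio.h>
#include <string.h>
#include <libpmem.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Create (if needed) and map a 4 KB file on persistent memory. */
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Ordinary stores: this is the zero-copy access path. */
    strcpy(addr, "hello, persistent memory");

    /* Flush to the persistence domain; fall back to msync when the
     * mapping turned out not to be real persistent memory. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```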
And that is
the bulk of my presentation.
Any questions out there
about what we're doing and
why?
Go ahead in the back.
I'm sorry, what?
What are your plans on execute in place?
Our plans on execute in place have not solidified yet,
but it is something that we're discussing.
And we do realize that it is an important thing at the correct point in the future.
Execute in place is very important,
especially when you get into small footprint devices that are using this type of thing.
Definitely considering the options that we want to proceed in that area.
Go ahead.
I was going to ask the same thing.
Oh, okay.
Yes.
Do these memories generally reside on one NUMA node,
or is it just like the memory?
So, first of all, it can be on multiple NUMA nodes;
a different NUMA node can have its own storage class memory.
And Landy, who is the guy who owns the memory manager,
has been talking about how he wants to manage storage class memory in a NUMA environment.
Typically, our file systems and the cache manager have been generally agnostic to NUMA environments.
We'll see what this does to it. Don't worry about it.
Yeah, okay.
You can give the better word, David.
And so NUMA is actually an important area.
At this point, we're just getting it running
and getting it fully functional and available.
But it's a good question.
Definitely NUMA is an important area.
Another important area is how does Hyper-V,
which is our virtualization technology,
incorporate storage class memory into it?
We've had those discussions,
but we're not ready to talk about exactly
what we're doing in that area at this time.
But it is definitely being thought about and considered.
Any other questions? Go ahead.
I guess this is Skylake and above on the Intel side of things.
Is that not correct?
I am no expert on the Linux side and the names that they use.
Say it again.
Skylark?
I don't know what Skylark is.
That's the new memory management unit that Intel makes compatible with the PC.
Oh, okay.
I'm sorry.
Again, I don't know.
Is that a software piece in the Linux environment,
or is that hardware?
That's hardware.
I'm no expert in this area.
You'd have to talk to guys that are.
When you talked about memory management,
you didn't say anything.
Sorry.
The memory manager,
which is the software memory manager inside of Windows,
is what I was referring to.
Not any hardware level.
We should be fairly agnostic to it, whatever the hardware is doing.
One of the areas that is interesting in dealing with storage class memory
is the different failure modes that come up.
They can fail in new and creative ways,
especially when you think about NVDIMM with a capacitor,
because now you have the concept of,
hey, your storage is working fine,
but it's not persistent anymore.
And how do you deal with failure modes like that?
It gets a little creative.
Yeah, it's oops.
And so one of the areas that we're actively working on is,
A, how do we expose these new failure modes to the system?
And these are besides the concepts of torn writes at a sub-sector level and everything else,
but the hardware itself can fail in new and creative ways, which we always love.
And then how do you map that to existing semantics
so that it's understandable to applications, and how do you deal with it?
Any other questions?
Go ahead.
Have you described the rollout plan?
That is an excellent question.
No, I have purposely not described the rollout plan,
but it is an excellent question.
We do have a rollout plan.
I'm just not free to divulge it at this time,
but it's an excellent question.
I was waiting for that one.
But we do
have a plan and
when we're ready to talk about it more
we will.
Go ahead.
Along with the other hooks and stuff that
people aren't getting for the antivirus, the other filters
and such are, are you talking with people
or are people talking with you about
data forensics?
You know, the question was:
with all the change in the environment,
have we been talking to the data forensics people?
We do have a data forensics group,
and no, we have not talked to them yet
about what this is going to do to them.
But it is an excellent question.
Again, right now we're trying to get it to work.
But they have a lot of interesting tools and stuff that they have developed over the years,
and it'll be interesting to have those discussions
and see how impactful this is to them.
Just so you know,
ReFS is impactful to the data forensics guys too.
They're not ready for that file system as it rolls out.
So they have some work to do in multiple areas.
Go ahead.
On a percentage basis, how far are you to complete?
Boy.
That's a great question.
When will you roll it out?
Yeah, that's when will you roll it out.
In another form, I like the translation that you've done.
I'll just say that it is running and works today.
Is it complete? No.
But I can't give you an actual percentage,
but we have it functional.
But we're not done yet.
Anything else?
Oh, go ahead. Are we also targeting Windows... what?
Oh, oh.
That's an interesting question.
I don't know what I did with my display.
Oh, there we go.
to begin with we're targeting server,
but obviously client is an important area for this going forward.
Long-term, client is a big area for this.
So the work that we're doing is not client or server specific.
It's more of a business decision as to where it is available first
versus a technical or anything else decision.
It works on both.
Yes?
When will we know the effect on performance?
I believe that at the time that we are ready
to announce the release schedule,
we should also announce the performance gains.
I mean, it's storage class memory.
If you just compare it to anything out there today, it's kick-butt faster.
It performs great.
One of the areas that we need to get data on, much better data on,
is comparing on storage class memory block mode to direct access mode.
That's the area to see.
If it's 5% different, is it worth the number of people switching?
If it's 50% different, that changes those numbers.
There will always be some apps, our database buddies,
that will always want the fastest possible thing.
It doesn't matter.
And then there's everything else, right?
Anything else?
Go ahead.
Okay, so there's the SanDisk ULLtraDIMM,
which is another kind of NVDIMM that's got block access.
There's no reason to use this software with that.
You just use standard Windows, right?
Yeah, this is just using standard Windows.
There's nothing magic.
There are DIMMs that have only block mode support,
and those can run in our block mode,
but they won't work in this direct access mode
because we need byte-addressable access.
But they don't need what you're developing.
No.
They really can't use it.
Because this is just natively in the system.
It's available to all the applications.
Yes, go ahead.
Are you going to update the Windows ecosystem?
All provided there?
You know, that is such a great question, and I don't know.
I don't have a good answer to that.
I would hope that we would, but that's not an area that I spend a lot of time dealing with.
So I don't have an answer today.
Go ahead.
Are you going to catch up any of the existing
fielded server OSes?
The age-old
question.
Will we backport?
The model Windows
has had for a while now
is that we don't backport.
I don't know what will happen with this.
We'll have to see what our
customers and everyone else tell us.
But generally,
our upper management says we're moving forward;
let the world move forward
with us.
How many systems will be down-level?
It's a fair question.
What JR said is: how many systems will be running down-level
that are running on this type of hardware?
And so it's an excellent question.
I promise you that that question has been asked many times
and will be asked many more
and there are people that do want it to run down level
they want to run it on Windows 7.
They want to run it
whatever. Yes, sir?
You mentioned that you're working
on NVML
or something like that. Yes.
What are the target languages for bindings?
Just
C or are you
going to use a lot of other languages?
Basically, I think it's what languages the open source project is targeting today.
We would just support whatever that...
It shouldn't be limited to C.
I don't believe it's limited to C.
But I don't have the answer to exactly what languages that thing supports.
This is a great question for Andy Rudolph for tomorrow.
At his group session in the morning
because whatever it's doing
we will just follow.
Do you plan to cover that Andy?
I do plan to cover that.
Oh excellent. Oh there's Andy. I didn't see him
over there.
See I've been talking about you. I didn't even know where you were.
So there you go. You'll get an answer
tomorrow.
I'll tell you when he's going to roll this.
Thank you, Andy.
We appreciate that.
Well, I think our time is, well, I guess we actually have more time.
So I guess I went a little fast.
So I'll go ahead and you can reclaim a few minutes of time if anyone has any.
Oh, there's one question in the back, and then we'll –
if you have questions, feel free to come forward.
Yeah.
Can we have constant, consistent performance improvement,
or in this case, can it be slow?
If you have a cache that you never even have to warm up,
is it ever not DRAM fast?
No, in this case, it's always DRAM fast.
There's no warming up because the memory map points directly to the non-volatile storage, which...
Well, I guess it depends on which storage class memory technology you're using.
If you use an NVDIMM today, that literally runs at RAM speeds.
What 3D XPoint will run at, we'll have to see when Intel finally rolls it out,
exactly what those performance characteristics are.
When are they going to roll those out?
Well, Andy, why don't you tell us that?
Yeah, I don't know when they're rolling it out.
The latest news that I heard about Optane is
sometime next year, but that's
just what they announced publicly. I don't know.
I'm no expert in this area.
So it does depend on the underlying technology.
But I do understand what you're saying.
Right.
Yeah, well, how...
I don't know.
We've talked about whether we expose
exactly what the underlying hardware is.
Those are still open discussions we have
if we want to expose that to apps or not,
or they just say,
hey, it's storage class memory and it's all equivalent.
Oh, yeah, yeah.
As JR pointed out, it's
complicated by virtualization, whether it's in
the storage stack through volume
management type concepts or
virtualization in Hyper-V.
It's all complicated.
Also, when you come to RAID, you said that
it supports RAID.
No, it does not support RAID
because, again, you don't know when the data
has been modified to update your RAID stripes.
Yeah, there's definitely some functionality loss in those areas.
Well, thank you very much for your time and patience.
And if you have any questions, feel free to come forward.
Thanks for listening. If you have questions about the material presented in this podcast, be sure and join our developers mailing list
by sending an email to developers-subscribe@snia.org.
Here you can ask questions and discuss this topic further
with your peers in the developer community.
For additional information about the Storage Developer Conference,
visit storagedeveloper.org.