Storage Developer Conference - #101: Introduction to Persistent Memory Configuration and Analysis Tools
Episode Date: July 8, 2019...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Co-Chair. Welcome to the
SDC Podcast. Every week, the SDC Podcast presents important technical topics to the storage
developer community. Each episode is hand-selected by the SNIA Technical Council from the presentations at our annual
Storage Developer Conference.
The link to the slides is available in the show notes
at snia.org/podcasts.
You are listening to SDC Podcast
Episode 101.
All right, so Storage Developer Conference
will come to an end today.
And after a great series of lectures, you're going to your office tomorrow.
You see this email in your inbox saying,
hey, the server that you've been long waiting for,
fully loaded with persistent memory, has arrived,
and it's been racked in the data center lab.
Please go ahead and configure a server
so application developers can start doing their work.
By the way, they also asked you to run some performance benchmarking
on this persistent memory aware server
so they can baseline their performance when they do their application.
So you go into the data center lab, you power up your server,
you're in the operating system.
Now you're wondering, have the DIMMs come provisioned?
What are the tools available to provision?
Are they part of the operating system?
Do I really need a vendor-specific provisioning tool?
By the way, what are those benchmarking tools that are persistent memory-aware already?
And so on, right?
A lot more questions.
So as a system administrator or an application developer,
this is the presentation that will help you get started
on persistent memory, what are the tools available,
and also how to use them.
So I'm Usha Upadhyayula from Intel Data Center Group,
and I have my co-presenter, Steve.
So we'll be going through...
Where's my pointer?
Sorry.
We'll be going through a list of persistent memory tools,
configuration tools, benchmarking tools, and analysis tools.
And then Steve will walk you through some of the tools
and give a demo of how to use them.
But first, what is persistent memory? If you have not had a chance to attend Andy's talk this
morning, I just want to put a slide up there so that we're all on the same page. So persistent
memory is byte addressable like DRAM, but it retains its content, like storage, and it's
cache coherent. You can do load/store access. But why does it really matter now? There have
been NVDIMMs in the market for some time. But with DIMMs that are powered by 3D XPoint
media, we have much larger capacity than DRAM, higher endurance, and lower latency than SSDs.
And that's creating a new tier in the industry,
basically between the DRAM and the block devices.
It also has the ability to do in-place updates,
which basically eliminates page caching,
so you don't have to go back and forth into the kernel.
And it also has the ability to do DMA and RDMA.
So here is the list of tools that we put together.
From configuration and provisioning tools perspective,
there are a lot of vendor-specific and vendor-agnostic tools.
At the pre-boot level, we have a tool that was created by Intel,
and it is specifically written to manage Intel 3D XPoint-based DIMMs.
It's called ipmctl.
This is an open-source tool, but it's specifically written for Intel persistent memory DIMMs.
It's cross-OS: it works in Linux, it works
in Windows, and also in ESXi environments.
There is also a Linux
tool called ndctl.
This is a vendor-agnostic tool,
and it works with any NVDIMMs. I'm not as familiar with Windows as Linux, but ipmctl works on Windows
as well, and there are also certain PowerShell commands that were created to provision the persistent memory DIMMs
from the Windows environment.
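As a concrete sketch, here is roughly what first contact with those two tools looks like on a Linux box. This assumes Intel Optane DC persistent memory DIMMs are installed and ipmctl/ndctl are available; exact options and output vary by version.

```shell
# Intel-specific view of the installed DIMMs and current provisioning
sudo ipmctl show -topology
sudo ipmctl show -memoryresources   # raw vs. provisioned capacity

# Vendor-agnostic view through the Linux NVDIMM subsystem
ndctl list --dimms --regions --namespaces
```

On a freshly racked server, the ndctl listing will typically be empty until regions and namespaces have been created.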
From the benchmarking perspective,
I'm not sure how many of you are already aware
of the Intel Memory Latency Checker,
which has been enabled to work with persistent memory.
We have another slide coming up;
we'll go into a little more detail on that.
FIO: we've added a few I/O engines
that understand persistent memory
and can generate I/O to persistent memory.
pmembench is a developer benchmarking tool.
If you're writing an application based on PMDK,
or if you're updating or modifying the PMDK code,
this is the tool that helps you look and see
if you have introduced any regressions.
From the analysis perspective,
VTune Amplifier has been modified
to enable it for persistent memory.
There's a new tool called Intel Persistence Inspector.
Basically, what this helps with is
identifying persistent memory programming errors. What that means is that, you know, Andy and others
have talked about how flushing is important to make sure that the data is durable and has reached
the power-fail-safe domain. So the tool basically helps you identify
if there are missing flushes,
if there are redundant flushes,
or if there has been some reordering.
So you've actually done a store,
you haven't really flushed it all the way to the durable region,
but you have done another store on top of it.
This is the tool that helps you identify those cases
and provides some recommendations.
Intel VTune Platform Profiler,
this is another tool that helps you understand if there are imbalances.
Are you using your hardware properly?
Is there anything you can do with your application to make sure you take advantage of the performance of your hardware?
pmempool and pmemcheck,
these are basically for applications that are written on top of, or using, PMDK.
So pmempool is an offline analysis utility that will help you look at
the pools. Again, we're going to talk about what these persistent memory pools are,
but it'll help you look at your persistent memory pool, and also check for any data
consistency errors. You can dump the user data and then look at how you can repair
the data as well, and much more. pmemcheck is very similar to Persistence Inspector,
but it is based on the Valgrind framework. So again, it helps you with detecting persistent
memory-related programming errors, similar to what I just talked about,
but based on Valgrind.
It was originally written to validate PMDK,
but it can be used if you write a program outside of PMDK as well.
And we had to make changes to Valgrind too,
so it's persistent memory aware.
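As a sketch, running an application under pmemcheck looks like an ordinary Valgrind invocation. This assumes the pmem.io fork of Valgrind, which ships the pmemcheck tool; `my_pmem_app` is a hypothetical binary.

```shell
# Run the application under the pmem-aware Valgrind fork;
# pmemcheck reports missing or redundant flushes and stores
# that were never made durable.
valgrind --tool=pmemcheck ./my_pmem_app
```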
So before we move on, just want to make sure that
the tools that are written and the other things
that are around persistent memory programming
are all based on industry standards,
and here are the standards that we use
when creating the tools.
Okay, so before going into what are these tools and how to really use them,
there are certain concepts that we need to understand.
So when you have persistent memory raw capacity,
you need to basically divide them
into something called regions.
So regions can be created in an interleaved manner.
So they can be one-way interleaved or they can be N-way interleaved.
So here we have an example of four DIMMs.
And here we are showing that in the first diagram, on the left-most side,
we show that the region can be created one per DIMM
or it can be interleaved across the four DIMMs.
So when you create a region,
all that it's doing is creating contiguous memory.
So when reads and writes are done,
they are striped across that contiguous memory.
But creating a region doesn't really expose it to the application.
You need to do another step,
just like on an SSD, that creates something called namespaces. Namespaces generate
the logical device, so applications can do their I/O. These are the fundamental
concepts on which persistent memory is provisioned and then exposed to the application.
Now, I talked about ipmctl and ndctl.
There are a lot of common supported functions between ipmctl and ndctl for provisioning.
So you're able to create regions and namespaces;
you can enable, disable, destroy, all of that.
But you really need ipmctl to create the initial regions
on the raw capacity of Intel Optane DC persistent memory.
And again, it's available at the user space
as well as at the pre-boot level.
And it's available across different operating systems.
In addition to provisioning commands,
the tools also provide monitoring and maintenance support.
So you can look at the health of the DIMMs.
You can inject errors.
And ipmctl also provides you a way to set security policies. When you create a region,
you can actually set a passphrase so it's much more secure, and things like that.
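For illustration, the monitoring side looks roughly like this. Option names vary slightly between ipmctl versions, so treat these as a sketch and check `ipmctl help` on your system.

```shell
sudo ipmctl show -dimm       # inventory, including per-DIMM health state
sudo ipmctl show -sensor     # temperature, spare capacity, and other sensors
sudo ipmctl show -firmware   # firmware version on each DIMM
```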
So now we've talked about how I have persistent memory raw capacity,
I've divided it into regions and namespaces,
and the operating system has created logical devices that applications can talk to.
But how can we make application developers' lives easier when talking to these logical devices?
So the Persistent Memory Development Kit
is built on top of the SNIA programming model.
We have created a bunch of libraries,
some for low-level support,
some with transactional support,
and we have several language bindings.
It's basically a set of user-level libraries
that allow you to allocate memory from persistent regions,
and then, as the SNIA programming model says, it's a memory-mapped region.
And then you can be abstracted from all the details of what you need to do if you
want to update a data structure that is larger than eight bytes. So we provide transactional
support, and at the low level, you don't have to worry about the nitty-gritty details of,
okay, is this flush command available on the platform? All those things are abstracted
out for you to make it easy to adopt persistent memory programming.
And it has C/C++ support that is production quality. There are a few others coming in the pipeline. There is Java support at different levels: we have low-level support
for Java and the collections support, but those are all right now labeled as experimental.
Now, the dark green box that I show
is labeled as volatile memory.
But I've been talking all along about persistent memory.
What is that doing over there?
So we've added support for persistent memory as a kind for libmemkind.
So you can basically use libmemkind to allocate areas on persistent memory that look volatile.
Meaning, if you have certain transient data structures
that you really don't want to persist,
you can use libmemkind to create a volatile region
that will go away across reboots,
which the application doesn't need to persist.
So that's something that's not part of PMDK,
but it is enabled in libmemkind,
and it's available.
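A minimal sketch of that volatile use of persistent memory. This assumes libmemkind is installed and a DAX file system is mounted at /mnt/pmem0 (the path is hypothetical); link with -lmemkind.

```c
/* Sketch: volatile allocation backed by persistent memory via libmemkind. */
#include <memkind.h>
#include <stdio.h>

int main(void)
{
    struct memkind *pmem_kind;
    /* max_size 0 means: grow as needed, bounded only by the file system */
    int err = memkind_create_pmem("/mnt/pmem0", 0, &pmem_kind);
    if (err) {
        fprintf(stderr, "memkind_create_pmem failed: %d\n", err);
        return 1;
    }

    /* This buffer lives on pmem but is treated as volatile: nothing here
     * survives a reboot, and no flushing is required. */
    char *buf = memkind_malloc(pmem_kind, 4096);
    snprintf(buf, 4096, "transient scratch data");

    memkind_free(pmem_kind, buf);
    memkind_destroy_kind(pmem_kind);
    return 0;
}
```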
So again, just to summarize, we've started from
at the bottom-most level of raw memory capacity.
We build them into regions, into namespaces,
and there are two different ways applications can access persistent memory.
One is through file system DAX,
and that's where PMDK comes into the picture,
through persistent memory pools,
to abstract it even more for application developers
so they don't really have to worry
about the memory mapping of the files
or the transactional things and so on.
The other way for an application
to access persistent memory is through device DAX.
Here there is no file system involvement;
you get direct access to the device
that is created in device DAX mode.
So what are persistent memory pools?
We talked about memory mapping of files
and how PMDK exposes it.
Persistent memory pools are a way of exposing
a single memory-mapped file, or a set of them,
as a pool to the application.
So an application can create different memory-mapped files,
and those show up as pools.
And it's only intended to work with DAX file systems.
So basically, how do you identify, you know,
if your application crashes and you come back up,
how do you get back to the data structure
that you were working with?
You give the pool ID,
and from the pool ID there are different ways
of getting to the root of the pool,
and then you can get back to the data structures that you've been working with.
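A minimal libpmemobj sketch of that reattach-by-root idea. The pool path, layout name, and the contents of the root struct are all hypothetical; this assumes PMDK is installed (link with -lpmemobj) and a pool was previously created with that layout.

```c
/* Sketch: finding your data again after a restart via the pool's root object. */
#include <libpmemobj.h>
#include <stdio.h>

struct my_root {
    uint64_t counter;   /* example application state */
};

int main(void)
{
    PMEMobjpool *pop = pmemobj_open("/mnt/pmem0/pool.obj", "my_layout");
    if (pop == NULL) {
        perror("pmemobj_open");
        return 1;
    }

    /* The root object is allocated (zeroed) on first use and located again
     * by every subsequent open -- this is how you get back to your data. */
    PMEMoid root = pmemobj_root(pop, sizeof(struct my_root));
    struct my_root *rp = pmemobj_direct(root);
    printf("counter = %lu\n", (unsigned long)rp->counter);

    pmemobj_close(pop);
    return 0;
}
```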
So that's the end of how you provision your DIMMs and how you really expose your persistent memory to your application.
So from the benchmarking tools perspective: the Memory Latency Checker is something
you can run without any parameters.
When you run it, it looks at the topology of your platform
and generates latency and bandwidth information.
You can change the load on your system,
and it shows how latency and bandwidth
change with the change in the load on the system.
We've added support for persistent memory
in the sense that you can provide the DAX mount point
or a memory-mapped file to MLC,
and it then measures the latency and bandwidth
to that memory region
and generates the numbers that way.
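Roughly, that looks like the following. MLC option spellings vary by version, so treat these as illustrative and check `mlc --help`; /mnt/pmem0 is a hypothetical DAX mount point.

```shell
./mlc                                  # default run: topology, latency and bandwidth matrices
./mlc --idle_latency -J/mnt/pmem0      # measure against files on the DAX mount
./mlc --loaded_latency -J/mnt/pmem0    # latency vs. bandwidth as load increases
```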
From the FIO perspective, there are three
different I/O engines that have been added, called libpmem,
dev-dax, and pmemblk.
What this basically does is, when you start
FIO with one of these I/O engines, it uses the libpmem API
that is part of PMDK to do I/O to the memory-mapped region.
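A sketch of an fio job file using one of those engines. This assumes an fio build with PMDK support and a DAX file system mounted at /mnt/pmem0; the paths and sizes are illustrative.

```ini
[global]
ioengine=libpmem
directory=/mnt/pmem0
size=1G
bs=256k
thread=1

[pmem-seq-write]
rw=write

[pmem-rand-read]
rw=randread
```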
Again, pmembench is more for developers, for when you're really contributing to the PMDK code.
I'll let Steve take over from here.
He'll do the analysis tools and then do a walkthrough of the tools
and do some demonstration.
Thanks, Usha.
So I'm going to look at the analysis tools,
of which we have quite a few.
So we already mentioned these persistent memory pools.
We can create them using the APIs that are available.
But pmempool was really designed
for offline administration of these things.
So we can print information about them.
We can create them.
We can do validation of our pool metadata. We can dump out your user data.
It's not really meant as a backup tool per se, but we can do some forensics on that data
to figure out where things may have gone wrong. And we can remove them and convert them. Convert
means we're going to upgrade to the next version of our pool metadata. So this information is available.
Source code is open source.
And again, it's available on anything that supports PMDK,
so whether it be Linux or Windows.
pmemcheck is not the same as pmempool check.
pmempool check checks the metadata of the pool itself.
pmemcheck, as Usha said earlier, actually checks your application.
Are you doing flushes in the right order?
Are you doing them often enough?
Are you doing them in the correct order?
So again, we support Valgrind's Helgrind and Memcheck as well,
so you can run your application
under these virtual machine engines.
It's not going to give you the best performance
for your application,
but we're here for consistency checking, right?
We want to make sure that your data gets down to the media in the right way.
So we can log all the persistent memory operations for post-analysis.
And we give you a nice little summary saying,
this store isn't made persistent;
you might want to go back and put a flush in here, or whatever.
But again, the idea of PMDK is that all of this is done for you, right?
So you don't have to worry too much about it.
And we do it similar to database transactions.
That's a recurring theme with PMDK: most of our libraries
are transaction-based, right?
We're going to get you down to a consistent state
using transactional, ACID-style operations.
So, again, we have the VTune suite, which many of you may be familiar with.
And within that, we have Amplifier.
So the Amplifier takes your application and instruments it
and gives you some really nice results about what you're doing with it,
where your hot data structures are,
where you can improve your code.
I mean, it's not going to tell you exactly line for line,
but you can get down to the code if you compile your code with symbols,
and it'll tell you roughly where your optimizations could be
within your application for persistence.
The platform profiler is a more holistic view
of the physical hardware.
So it'll tell you what your CPUs are doing,
what the back-end persistent DIMMs are doing,
what your DDR DIMMs are doing,
and you can see whether your workloads are working,
or will work, with persistent memory.
The Persistence Inspector is similar to pmemcheck,
inasmuch as it'll analyze a running application
and give you really useful information,
again about where your hot data is:
whether you should be moving that hot data to DRAM,
or whether you can move it off to persistence
because it's slightly cooler and you don't care about it too much.
And then the Advisor is an interesting tool.
It was really designed for vectorization,
but there's been some additions here
where we can do memory access profiling.
So you can really see now,
almost down to an address level,
where your hot areas are.
And again, all of these tools together
will give you a really good understanding
of what your applications are doing and where you can make improvements.
All of this stuff is available for download; you can go to the URL at the bottom.
We have tutorials, videos, loads of really cool information about it. I'm not
necessarily using all of this stuff yet, because that's coming.
But the Studio 2019 is in beta right now. So we encourage you to go download it, play
with it, and ask us questions.
All right, so provisioning. This is kind of the meat of what I wanted to get to. So we've
introduced the concepts, the terminologies, your namespaces, your regions.
What does that really mean when you're at the terminal? You need to type this stuff
in. You might be using Ansible for remote replication or remote distribution of your
systems. I'm going to walk through what you would type at the keyboard to get this stuff
working with your applications. I'm going to walk through a couple of example configurations,
the common configurations that we see.
It's going to be Linux-based, so apologies for the Windows guys,
but it is going to be pretty similar.
And this works whether or not you have physical DIMMs in your box
or you're emulating this stuff using DRAM or some fast SSD if you're using some kind of virtualization topology.
So we start here at the bottom.
This is just a single case, and all my examples are per socket.
So whether or not you've got a one socket, two socket, four or eight, it doesn't really matter.
So we use our ipmctl create command
to create what we call a goal,
which gives us our region.
This is no different than DRAM, effectively.
Layered on top of that,
we just create a single namespace of the same size
or as close to the same size as possible for our region.
Here we just use the ndctl create-namespace command.
Nice and easy.
So that gives us a device called pmem0 under our dev directory on Linux. And then on top of that
we can just use either the ext4 or the xfs makefs commands to create our file system.
Now, the important bit here is that when you mount it, you mount it with the -o dax flag.
So that's the only difference here.
And that tells the kernel: hey, look, this is a DAX-enabled file system.
I'm going to bypass all my page caches and go straight down to the media.
Now, because this is a DAX file system, we are going to create our PMEM pool on top of this file system.
And this is what the applications will memory map.
This is a nice, simple configuration.
Nothing too hard here.
All our DIMMs are interleaved at the bottom,
so we get a really nice namespace.
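The simple one-region, one-namespace case just described can be sketched end to end like this. Device names and mount points will differ on your system, and the goal step requires a reboot before the region appears.

```shell
sudo ipmctl create -goal PersistentMemoryType=AppDirect   # then reboot
sudo ndctl create-namespace --mode=fsdax                  # yields e.g. /dev/pmem0
sudo mkfs.ext4 /dev/pmem0                                 # or mkfs.xfs
sudo mkdir -p /mnt/pmem0
sudo mount -o dax /dev/pmem0 /mnt/pmem0                   # -o dax is the key bit
```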
So the purpose of this one is that we can now split
these interleaved DIMMs that have a single region, and we can
split it, in this case, in half. So now we use the same commands as before, but now we
just specify the size when we're creating our namespace.
There is a minimum size, though most people are not going to hit it. And the naming convention here is that the first number of the namespace
matches the region on which you created it,
and then we just increment the minor number
to say this is 0, this is 1, this could be 2, 3, 4.
It doesn't really matter.
And each one of these gets their own dedicated device
in our device file system
and again, on top of this, we can just create a regular file system,
ext4 or XFS,
again mounting it with the -o dax flag (that's the important bit),
and then we can create a single persistent memory pool
on each one of these,
so this might be useful for some applications that don't want
384 gigs worth of persistent memory
in an application.
You might only want 192.
And there's really no limit to the number of namespaces
that you can create here.
So if you want smaller namespaces, that's fine.
Just create as many as you need.
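Splitting one region into fixed-size namespaces, as described above, might look like this (region name and sizes are illustrative):

```shell
# Two namespaces carved out of the same region; note the naming convention:
sudo ndctl create-namespace --mode=fsdax --region=region0 --size=192G  # -> /dev/pmem0
sudo ndctl create-namespace --mode=fsdax --region=region0 --size=192G  # -> /dev/pmem0.1
```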
Yeah?
So those all correspond,
meaning there can only be one namespace, one PM1,
that you can't have two DAX file systems on top of?
You can.
Oh, you can. Okay, so it can sort of branch out.
Ta-da.
There you go. Great question.
So, yes, and this is where I'm going with these.
And there's one more after this.
So here, we can now take the namespaces from the previous
example. We've still got our two
devices. We now use
standard partitioning tools,
such as parted, fdisk, gparted,
whatever is your favorite, and we can further partition
these again.
And on top of these, we can create more
persistent memory pools.
And again, there's no reason why you would need to create a persistent memory pool that's the same size as
your DAX file system. You can specify the size here and create smaller persistent memory
pools per file system. So we've talked about FS DAX; we've talked about device DAX.
Here we're just showing that you can do it on the same region.
So it's another mix for optimal configurations.
And the difference here is that when you create your namespace,
the default is to create a file system DAX namespace.
But if you specify the mode as being devdax,
you actually get your device DAX device
exposed further up.
So there's nothing too different there.
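A sketch of that mode switch at namespace-creation time (region and size are illustrative):

```shell
# Device DAX instead of the fsdax default; exposes a character device
# such as /dev/dax0.0 -- no file system is involved.
sudo ndctl create-namespace --mode=devdax --region=region0 --size=64G
```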
And then here, we get rid of interleaving on our DIMMs.
So we don't care too much about interleaving,
but we can now, per DIMM, we get an individual region.
And the only difference here is that now we say
that the app direct is non-interleaved.
So our regions are no longer interleaved using the DIMMs.
Everything else is the same.
We can create our namespaces: two file system DAX,
one device DAX.
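The non-interleaved variant only changes the goal step; a sketch:

```shell
# Per-DIMM (non-interleaved) regions instead of one interleaved region:
sudo ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved
# After a reboot, ndctl list -R should show one region per DIMM.
```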
So once you've kind of got to persistent memory pools,
how do we create those?
Well, I've given you a couple of examples here of how to create them.
Yeah?
What would your reasons be for separate DIMMs?
Why would you not interleave them?
Well, in a virtualization world,
do you really want to lose all of your stuff?
I mean, if one of these DIMMs should fail,
in the very unlikely event that it ever will,
you would effectively lose
all of your 384 gigs worth of data, right?
And you could affect multiple guests on your box.
Whereas if you're happier to live without the performance of interleaving,
then if one of these DIMMs was to fail,
you would only affect the guests using that DIMM.
Or you could just go in and replace it.
You'd have to rebuild this, obviously,
but you'd have to rebuild your guests anyway.
So here's a real-world example of creating a 4-gig pool using a layout. The layouts are, as Usha mentioned earlier, object pool, block pool, and log pool, and those
map to our PMDK libraries.
So depending on which features you want,
you would create a different layout for that.
Now, the name, my layout,
is really the tag that we talked about previously.
So this is how you can individually identify
which pool belongs to you
and make sure that when you open that pool
that you know which data structure points to your data.
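The commands being described look roughly like this. The file path and layout name are illustrative, and flag spellings should be checked against pmempool(1) on your system.

```shell
# Create a 4 GiB object pool tagged with a layout name, then inspect it:
pmempool create --layout=my_layout --size=4G obj /mnt/pmem0/myapp.obj
pmempool info /mnt/pmem0/myapp.obj        # dump the pool's metadata
pmempool info -s /mnt/pmem0/myapp.obj     # include usage statistics
```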
So once we've created this pool,
we can just do pmempool info
and dump out some semi-useful information,
depending on what you want to look at.
But this is the metadata that describes this pool.
Now, I just created it,
so there's no real data in there yet,
but this is the type of information that you can get out of this thing.
So we get the UUIDs,
and because it's an object pool,
there's some object-related header information down here
that PMDK knows about and knows how to access the data from.
Now if there was data in here, you could use the -s flag
and get more statistics, right? How full is your pool? How much space is available?
And a few other interesting bits that might be useful to application developers, right?
So when you're allocating data, this might be useful to you. Now, I haven't really introduced it
much yet, but there's a thing called pool sets. So going back to some of these examples where we have multiple
pools on different namespaces in different regions: now you can say, well, I want to
create a persistent memory pool on region 0, but I might also want to replicate that data to a pool on region 1.
So how do you do that?
Well, that's where persistent memory pool sets come in.
So it's just a very small configuration file,
and you can say I want to replicate my data locally to a different region,
or I might want to replicate it to a different host.
This isn't the same as RDMA.
It uses a slightly different mechanism,
but there is the option of data replication using PMDK.
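A poolset file is just plain text; a sketch with a local replica might look like this (paths and sizes are illustrative):

```
PMEMPOOLSET
16G /mnt/pmem0/myapp.part0

REPLICA
16G /mnt/pmem1/myapp.rep0
```

You then pass the poolset file, rather than a single file path, when creating and opening the pool.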
So, real demo time.
Cool.
So, in the interest of live demo failures,
I recorded this earlier.
Yeah.
I should have said I didn't want to wait for the reboot.
So this is me working with these DIMMs on my lab box back at Intel.
So we're going to work through a couple of those examples.
Now, this stuff's really fast, so we're going to go quickly through it in the interest of time.
I'm going to hopefully kind of keep up with myself.
Let's go back one.
Previous.
All right.
So let's keep up.
So here's my DIMMs:
ipmctl show -topology.
I've got three DCPMM DIMMs in there and some regular stuff.
I don't have a goal yet.
So I'm just going to create an AppDirect goal,
which says, okay, you've got three DIMMs in there.
I'm just going to go off and create two regions.
It knows because two are on socket zero, one's on socket one.
So after a quick reboot, we can show the memory resources,
showing we've got 749 gig of capacity
and our two regions.
Let me slow this down just a little bit.
All right.
All right.
So after a reboot, we get our resources,
750-ish gig of available capacity and two regions.
Again, one on socket zero, one on socket one.
One's interleaved and one's not interleaved.
All right, so ndctl.
This is the Linux utility that,
with -D, shows you the DIMMs.
We call them nmems.
But you can see the UIDs for each of our DIMMs
that matches what IPMCTL was trying to show us.
Now we can list the regions using the -R flag.
So again, this is what our goal created for us.
We've got two regions based off of the configuration.
Let's go forward a bit. Now, -N shows us the namespaces.
So you can see we're kind of working up the layers here
with our diagram.
So in the previous example, we didn't have any namespaces.
When you do IPMCTL, create goal, you get your regions,
and you get the dims, but you don't get a free namespace.
That's up to you to create.
Everything else is done at the hardware level.
This is the software level.
The -i flag says: show me stuff that is either disabled, by me or by the hardware, or that doesn't yet exist. So we see two namespaces here. They're
kind of placeholders, seeds as we call them, for creating two namespaces. So if we just run ndctl create-namespace, we get back a namespace.
Now this one just so happened because I didn't specify which one of these two namespaces or regions to use.
We just get back the first one that it decided to use, which was namespace 1, using region 1.
So this is my 500 gig namespace.
And if we skip forward.
All right.
So now here I create, I purposefully create,
a namespace on region 0.
I could have done this to begin with.
I was just too lazy.
So.
And here we go.
So this is my 256 gig namespace.
Now the default mode for this is fsdax, as I mentioned before.
Again, you can specify which mode you want when creating these namespaces.
And now that we've created namespaces, I can list my namespaces. So here we have
namespace 1 and namespace 0 with their respective modes and sizes.
All right. So what happens if we created our FSDax namespace
and now we said, oh, that was a goof.
Let's change it to a DeviceDax namespace.
Well, you can destroy it and recreate it.
That's fine.
But we offer this force option, and -e is edit.
So we can change our mode from fsdax to devdax.
And it just goes off in the background
and effectively changes
all the metadata, destroys it, recreates
it, and you end up with
a device dax
namespace. And
once it's done, it prints out this useful
information saying, hey look, yep, I worked and here's
all the information related to it.
So if we just list the slash dev and look for PMEM and DAX, we end up with what we expect,
right? We've got dax0 using namespace 0 and pmem1 using namespace 1. And this is what the application can then use, right? So we can use that with PMDK,
and then all we do is create a file system
on top of our devpmem1
so this one is just an ext4 file system
we create a mount point directory for it
we mount it, again using the
-o dax flag. That's the really important bit
we need to keep re-emphasizing.
And there she is, at the bottom of the list of a regular DF output.
So now we can create our persistent memory pool on top of our DAX file system. And here
I'm just creating a 10 gigabyte persistent memory pool. Nothing too special here, no pool sets.
And then I just print out the pmempool info that we saw earlier.
So again, this is just a blank template.
It shows you all the metadata that gets created initially.
So the purpose of our talk here was to introduce you to the tools,
introduce you to what to do when you get your hands on the keyboard.
What we did was we just kind of overlaid the tools on top of the SNIA programming model
to show that we have really good coverage of all of this stuff.
We're not trying to replace any of the existing tools that you guys are used to.
What we're trying to do is introduce new tools into your toolbox
that you can go off day one today
and go and play with this stuff.
Like I say, you don't even need physical hardware.
You can go off and emulate this stuff.
And I'll talk about that,
of how you can do that in a minute.
So resources.
pmem.io.
That's our homepage for the Persistent Memory Development Kit.
The source code is all available, available at these links.
We have a really active Google group,
so if you have questions about PMDK, about persistent memory in general,
come join us, go ask your question here.
This is all linked from the homepage of pmem.io.
We have an IRC channel as well that's fully monitored,
mostly during Americas hours,
because that's when Andy and I are online,
but it's all good.
The Intel Developer Zone is a really good resource.
We have some colleagues here that are writing content
on a daily basis of how to get you guys started
with all sorts of really interesting stuff.
So we mentioned LLPL and PCJ,
the Low-Level Persistence Library and Persistent Collections for Java.
There's tutorials on there.
There's tutorials on pmem.io in the blog section.
We introduce you to pmem pools.
We show you how to use pmem pool sets.
There's tutorials about programming this stuff.
Tons and tons of information, which is really good.
And there's always more on the way for things we haven't shipped yet.
So ndctl, that lives on pmem.io,
so you can go and have a look there.
ipmctl lives in its own little repository
on the Intel GitHub site.
LibMemkind has its own website.
You can go and have a look there if you are interested in using the Memkind library to do volatile accesses.
We have a link to the SNIA NVM programming model, where all this stuff came from. But
I would probably encourage you to go to docs.pmem.io. I mean, that's something that we've been working
on, I've been working on for quite some time. It's got some pretty good getting started guides.
So everything that I've shown you here
is documented probably slightly better over there.
And there are tabs, so if you're a Windows person,
you can click through for the Windows information.
There's various flavors of Linux,
depending on whether you're Ubuntu or SUSE or Red Hat or whatever.
You can go and get all the commands in the right order.
There's NDCTL user guides,
and that's just an evolving site for documentation.
So if there's anything that you see that's missing,
reach out, let us know.
We'll try and document it.
There's some stuff that we haven't documented yet
that I'd really like to document,
which is really around the cloud
and how to get this stuff working in the cloud.
But until the cloud service providers
get their boxes and expose it,
it's kind of difficult to write the documentation.
So what we want to do,
this is actually to you guys now,
is we want to drive the excitement for this technology.
I mean, it's pretty highly disruptive. Applications have to change.
People have to think differently when using this type of technology.
Whether you're a system administrator, whether you're an application developer,
architect, whoever you are, this stuff's going to require
different ways of thinking. The tools that we presented today
exist today.
Roll them into your toolbox.
Roll them into your development
processes, your server
provisioning processes.
Try it out and
feed back to us because
we have colleagues from all the tools
here. We'd love to see how
you guys are using this stuff.
If there's any enhancements you want, let us know.
The previous slide had some places to get helpful information from.
Reach out to us. We're pretty active. If we're not traveling,
we're trying to respond as quickly as possible to emails and IMs
and everything else. We do work with customers directly.
We do have several programs available
where we can probably get you early access
to some of this stuff,
either with servers that we ship to you
or development clouds.
We have programs for academia,
that type of stuff.
So this is where we would like you guys
to get the ball rolling,
get your applications up and running on the technology.
If you need help, reach out.
But we'd really want to see what you guys build.
I mean, there's a ton of applications that you can use,
databases are the predominant one right now,
but let's see what the community as a whole can build
going forward from here.
So with that, I'll invite Usha
back. We can take some questions
if anybody has any.
You're first. Go ahead, sir.
I know the DPDK
folks do a lot of meetups and things.
Yes.
Do you guys do that as well?
We would love to.
Most
of us, well, Usha's local
here. The rest of us are pretty much based in Colorado.
We would love to get into some meetups.
Look for something in the early part of 2020.
All right.
Thank you.
Thanks for listening.
If you have questions about the material presented in this podcast,
be sure to join our developers mailing
list by sending an email to developers-subscribe@snia.org. Here you can ask questions and
discuss this topic further with your peers in the storage developer community. For additional information about the Storage Developer Conference,
visit