Grey Beards on Systems - 0102 GreyBeards talk big memory data with Charles Fan, CEO & Co-founder, MemVerge
Episode Date: May 27, 2020. It’s been a couple of months since we last talked with a startup, so the GreyBeards thought it was time. We reached out to Charles Fan (@CharlesFan14), CEO and Co-Founder of MemVerge, to find out about their big memory solution or, as Charles likes to call it, “software defined (big) memory”. Although neither Matt or …
Transcript
Hey everybody, Ray Lucchesi here with Matt Leib.
Welcome to the next episode of the GreyBeards on Storage podcast, the show where we get
GreyBeards storage bloggers to talk with system vendors and other experts to discuss upcoming
products, technologies, and trends affecting the data center today. This GreyBeards on Storage episode was
recorded May 20th, 2020. We have with us here today Charles Fan, CEO of MemVerge. So Charles,
why don't you tell us a little bit about yourself and your company? Great. Thanks, Ray. My name is
Charles Fan and I'm a co-founder and CEO of MemVerge. Before starting MemVerge about three years ago,
I was the SVP GM of the storage business unit at VMware.
And I was leading the team that developed the vSAN product.
So what we do is we develop software
on top of persistent memory.
And we call our software Big Memory Software
because we believe persistent memory changes the game,
both the game of storage and the game of computing.
The new persistent memory hardware,
pioneered by Intel with their Optane Persistent Memory product
that came out last year,
really delivers the best of both worlds
between memory and storage.
And essentially, you can use it as memory
or you can use it as storage.
And what we are trying to do
is bring both the capacity,
the low cost,
and the persistence of this new media
to the application of both today and tomorrow
so that you don't have to change your application.
And then you can take advantage of both byte addressability
and persistence of this new media immediately.
So, I mean, when we had the discussions last year with Intel
about the data center persistent memory,
there were a couple of different, I'll call it modes of operation for persistent memory. Do you want to talk a little
bit about which mode of operation you're using or are you using them all? Yeah, yeah. And I think
that's a great question because that goes to the heart of what we do. If you're just getting the Optane persistent memory, there are two modes
at the top of this taxonomy of modes. They're called memory mode or app direct mode. When you're
using it as memory mode, it's a compatibility mode where the operating system sees it just like DRAM,
except it is bigger and it is a little bit slower.
And in memory mode, it's just like any other volatile DRAM.
Right. It is not persistent. It's volatile. It's compatible. You do not need to change your app
and it works right away. The downside is you don't get to enjoy the persistence capability
of this memory. And also there's a fixed ratio with the 2LM algorithm
where you tier between the DRAM and PMEM
to deliver this memory service,
and it's a ratio of 8 to 1.
So there's a various set of restrictions,
but the benefit you get is a transparent,
compatible experience with your existing apps.
But just a larger storage, larger memory footprint, I guess, right?
Right. It gives you a larger memory footprint.
So if you are constrained in the memory capacity,
this gives you a good solution.
Now, on the AppDirect mode,
there are actually two sub-modes as well.
There is a mode called AppDirect Storage Mode.
And what this does is similar to memory mode,
but on the opposite side.
So this delivers a 4K block device
to your operating system.
So that just looks like an SSD or a hard drive.
So it's also compatibility mode that you do not have to
change your application and you can just use it as a fast SSD. So that's the benefit is you don't
have to change anything. Now the downside, you don't get to use a byte-addressable memory API,
and you don't really get the full performance from this underlying media. The low latency, at a couple hundred nanoseconds, is not what you will see from the storage mode; it basically accesses it in 4K blocks.
Do you have the ability to carve up your existing storage, I mean, even within a device, to use some segments of it as memory and another segment of it as storage? Or is that an all-or-none kind of thing?
Yeah, you can. So that's called mixed mode, where you can have both memory mode and storage mode configured on the same
device. The catch is that these configurations are done in the BIOS at the boot up time.
So once you configure it, it's fixed. And to change the configuration, you need to reboot
your machine. No on-the-fly changes. Right. No on-the-fly dynamic changes. And now, in addition to these two modes,
we think the most powerful and most interesting is the third mode, or the second sub-mode of
AppDirect. That's a real AppDirect mode. So in this mode, essentially, you can have native access to the persistent memory and enjoy both the persistence and byte addressability at the same time.
Now, the catch with this third mode is you need to be a developer and you need to write your new app to use this mode.
So this is not a mode that's compatible either with memory
or storage. This is a new animal. It has persistent pointers. It's a new way of reasoning
about how you keep your data. And so this is a powerful and the most interesting mode, but it does require
a rewrite of your application. It's a new API, new programming model. So that's what you have
from hardware. Okay. It seemed to me that those various modes could be shared across CPUs or chips or something like that? Yes.
So if you configured a memory in a certain mode,
both sockets in the server can access that memory,
although there is a performance difference.
The CPU that's in the right socket
that's directly connected to this memory will get to access it faster,
where the other one needs to go through the UPI link, and that will be a little bit slower.
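To make the "real AppDirect" mode described above a little more concrete: to a developer it means mapping persistent memory into the address space and reaching it with ordinary loads and stores. A minimal C sketch of that idea follows, assuming a DAX-capable filesystem at the hypothetical path /mnt/pmem0; it is illustrative only, not MemVerge's or Intel's API, and a production application would use a persistent-memory library with explicit cache-line flushes rather than msync().

```c
/* Minimal sketch of byte-addressable persistent memory access (illustrative
 * only; not MemVerge's API). Assumes a DAX-capable filesystem mounted at the
 * hypothetical path /mnt/pmem0. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t len = 1UL << 20;                       /* 1 MiB region            */
    int fd = open("/mnt/pmem0/example.dat", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, (off_t)len) != 0) return 1;

    /* Map the region: ordinary loads and stores now touch persistent media. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) return 1;

    strcpy(pmem, "state that survives a restart");  /* byte-addressable write */
    msync(pmem, len, MS_SYNC);                      /* force it to persist    */

    printf("wrote: %s\n", pmem);
    munmap(pmem, len);
    close(fd);
    return 0;
}
```

The point of the new programming model is exactly this: data structures live directly in the mapped region and are reached through pointers, with no read/write system calls in the data path.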
Okay. So you're saying that you're using application direct mode,
and you're using the byte-addressable persistent storage version of this. Is that?
Yeah, so let me get into that.
What I just introduced is without MemVerge: if you just get the Intel Optane, you have these modes to choose from. What the MemVerge software does is act as a virtualization layer, like a persistent memory virtualization layer. We manage the persistent memory in AppDirect mode, the last mode I introduced, the most powerful and interesting, but the one that requires a new programming model. Because we control our layer of software, we can use the AppDirect method, and then we present various APIs to the applications above us. It will
support a volatile memory API that's compatible with existing memory API, similar to the Intel
memory mode. But we can do it with more flexibility and with dynamic reconfiguration as well.
And then the most important feature of our software-defined memory is that it has data services.
And these data services take native advantage of the persistence capability of the memory in order to deliver themselves.
So I'll give you a good example.
One of the most important data services when you are talking about persisting data, which memory can now actually do with persistent memory, is the snapshot.
Every self-respecting storage system has snapshots.
And that often becomes a differentiating capability between different
storage systems.
A snapshot assumes that there's something like a volume or a file structure
that you can be doing a snapshot on.
So does MemVerge present like a volume type of interface?
So not necessarily.
So when you are placing snapshot in the context of a file storage,
yes, it needs to take a snapshot of that file directory structure
or the metadata along with the data. But in a more general form, what snapshot does
is it gives you an instant image of your state, whether it's application state or data state,
and that allows you to roll back or recover from this image of state at any time.
This could be a file directory, and in our case it is the entire application.
So what we have invented here is the world's first zero-I/O in-memory snapshot: we can capture the entire state of an application, including all the state that is in memory that is supposed to persist, as well as the state even in the CPU cache, and we can capture it at that moment in time. And the trick is we can do that instantly. Now before, on memory, there was really no real snapshot.
There was checkpointing.
When you do the checkpointing, you capture the application state.
And then you move that state to a storage device to persist it.
And that movement actually can take minutes if you have a few hundred gigabytes of memory state to capture.
Now, with persistent memory, we can do in-place checkpointing.
Okay, but for an application, let's say it's using persistent memory,
there's both DRAM state as well as PMEM state.
Are you taking both of those and moving those to a persistent memory?
So essentially, we are managing both of those.
And we are making sure they get to move to the persistent memory first.
And then we can do an instant in-place snapshot.
So when we actually perform the snapshot,
there is no I/O incurred.
That's why we call it zero-I/O, because the underlying memory
media, in this case, persistent memory,
is persistent in themselves.
So while we are presenting a volatile memory API
to the application, whenever the application does a snapshot
or when the administrator invokes a snapshot, we would instantly capture the entire memory state and persist them without
disrupting the running application. And we can do this repeatedly as frequently as once a minute.
So you can capture a sequence of memory state and being able to roll back or recover or clone from any of those
previous memory states. Charles, what's the use case for doing a snapshot like this every minute?
Sure. So that's a great question. So there are two main use cases that we have worked with customers so far.
The first use case is a crash recovery use case.
And I'll maybe spend the next two minutes kind of describing how this workflow works.
So this is most typically for in-memory databases or in-memory applications.
The examples of applications we work with include, there's a time-series in-memory database called KDB.
We are also starting to work with some animation studios, where their rendering software is running in memory.
So what's in common between these in-memory databases and in-memory applications is they are all in memory because of the demand for performance, whether it's the stream of stock trades coming in at 100,000 trades per second that never stops while the market is open.
I've seen KDB used in STAC benchmarks and stuff like that
for security trading and that sort of stuff.
Yeah, yeah, it's been very active.
Yeah, I think KDB is not widely known,
but it is very widely used in trading and market data infrastructure.
Incredibly high transactional databases, incredibly volatile data.
Exactly.
Among the leading financial institutions, it has a market share of approximately 100%.
Maybe that's a slight exaggeration.
They are a good partner of ours, so I'm doing some free advertisements with them.
But we are also working with quite a few other in-memory databases.
Redis is another one we support well.
There's Hazelcast.
So there are quite a number of in-memory databases that are also popular with other environments.
And we support these all.
And what's common with them is while they are running, the database states are entirely kept in memory.
And only the logs get to persist onto storage, onto SSDs.
So in the case of a crash,
when this database goes down,
the recovery process involves taking the last copy of the database it persisted,
which is typically a day old, from the end of market on the previous day.
So you load that into your memory and then you replay your log to catch up to the crash state.
And that replay can take a couple of hours because there are billions of trades that have been logged.
And so the crash recovery process can take hours to complete.
And that is a big pain point for these customers,
even though it's not a frequent event.
But when it happens, it's a big pain
for these customers to recover from these crashes.
Now, with our snapshot,
if they run on our software-defined memory service,
we capture the application state every few minutes,
as frequently as every minute,
and we can do this in a non-disruptive and instant way.
So it doesn't impact the actual performance of the database while it's running.
And then when you crash in this case, you just recover from the last snapshot,
which is a few minutes ago or one minute ago.
And that recovery can be instant.
And then you just have to replay the log for a few seconds
to capture, you know, to recover the database back to the crash point.
So in that case, would the logs also be on persistent memory,
or would they be on...
Yeah, the logs can be on persistent memory. The logs can also be on SSD if you like. Either way, for the log replay you don't have to replay from last night, from the opening of the market. You just have to replay from the last snapshot. And that's like one minute's worth of trades, and you can replay that in maybe 20 seconds.
So the whole recovery process can be reduced from a couple hours maybe to one minute. So that's two orders of magnitude improvement on the crash recovery use case.
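The arithmetic behind that "couple of hours down to about a minute" claim can be checked back-of-the-envelope, using the 100,000 trades-per-second figure from earlier and an assumed replay throughput of roughly 300,000 trades per second (a hypothetical number, chosen so that one minute of trades replays in the roughly 20 seconds quoted above):

```c
/* Back-of-the-envelope recovery-time comparison using the figures quoted in
 * the conversation; the replay throughput is an assumption for illustration. */
#include <stdio.h>

int main(void) {
    double trades_per_sec = 100000.0;        /* ingest rate while market is open   */
    double replay_per_sec = 300000.0;        /* hypothetical log-replay throughput */

    double day_secs      = 6.5 * 3600.0;     /* a full trading day of log          */
    double snapshot_secs = 60.0;             /* one minute since the last snapshot */

    double old_replay = day_secs      * trades_per_sec / replay_per_sec;
    double new_replay = snapshot_secs * trades_per_sec / replay_per_sec;

    printf("replay whole day : ~%.0f s (~%.1f h)\n", old_replay, old_replay / 3600.0);
    printf("replay one minute: ~%.0f s\n", new_replay);
    return 0;
}
```

With those assumptions the full-day replay works out to a couple of hours, and the since-last-snapshot replay to about 20 seconds, which is the two-orders-of-magnitude difference Charles describes.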
So you mentioned earlier that, and I'm not exactly certain I caught it properly,
but you said you're mimicking a volatile memory through Application Direct?
Yes, yes, exactly.
So if you look at what we do, we do literally three things.
So this is taking a step back.
The first thing is compatibility. This is where we translate the AppDirect method into backward compatible APIs.
And we add other features through data services, but we do all this without application change, so that we make persistent memory compatible with existing applications while they can enjoy all the benefits this new media can provide.
So the first one is compatibility.
And I think that was your question.
The second one is availability.
And this touches on the, for example,
the crash recoveries through snapshot.
In fact, I think for data services,
the biggest value add from data services
is to increase the availability of
this persistence system.
And I'll dive more into that a bit later as I describe the feature of our product.
The third functionality of our big memory software is scalability.
And this is where you can scale the memory an application needs beyond what's provided by a single
server. So today, persistent memory is very good. It provides a bigger memory capacity than DRAM
could. For a typical two-socket Intel server, it can support six terabytes of Optane persistent memory,
Gen 1, the first generation of it.
With Gen 2 that's coming out later this year,
it's going to support eight terabytes per server,
and it's expected to continue to increase with the Gen 3 that will come later.
However, if you are moving all the application data into memory, even 8 terabytes might not be enough.
And our software allows the persistent memory and DRAM to be pooled together across a pool of servers for them to become a single memory pool to serve an application.
So the memory can be tiered to the memory on a neighboring node,
going beyond the limit of the single server.
What's the, I'll call it,
intercluster
interface between
server A that has two sockets
and six terabytes and server B
that has another two sockets
and another six terabytes
of persistent memory?
So on the physical side, we recommend RDMA
and we support both RoCE, RDMA over Converged Ethernet,
as well as InfiniBand.
And we also support selected low-latency UDP networks
such as SolarFlare.
So essentially these are all network
with a single digit microsecond latency
down to about two microseconds between servers.
And we run our software on those interconnects.
And in the future, when CXL comes out
and when PCIe switching comes out,
we will be supporting those interconnects,
which are going to drive the latency down to hundreds of nanoseconds, on the same order of magnitude
as the persistent memory itself. And on top of it, we virtualize these memories through this RDMA
protocol and present them really in a single pool. So for the application, what they see is just like DRAM.
They don't see anything different.
It's a big pool of DRAM.
So if I were the developer,
I'm trying to program in this memory,
I basically can have up to 100 terabytes
of memory on my fingertips.
And I can design my data structure
such that there's literally infinite memory
at my disposal.
And I can essentially have newer interesting algorithms for real-time data processing.
So you're talking, you know, setting up 20-something, you know, 16 servers, dual sockets, 6 terabytes each,
and you've got 100 terabytes of complete memory, and you've got one address space with 100 terabytes of memory?
Up to that amount.
Of course, there could be bottlenecks elsewhere
in the operating system,
in the size of memory you can address.
But on the overall system,
the GA version that comes out later this year
will support 32 nodes of memory
being pooled together into a single pool.
And so later this year,
Intel's 8-terabyte configuration will be supported.
So essentially, you have 32 times 8, 256 terabytes of memory available.
In this configuration, though, I can envision servers that are memory
slash persistent memory heavy as part of the pool where other more
commodity processor related servers become a part of that physical cluster. Does that make sense?
Yeah, I think that will be a configuration that can be supported, but the optimal configuration, at least with the initial
deployment, is typically that every server is pretty heavy on Optane memory. The reason is that
even with this very fast network, remote accesses are still slower than local memory. So with applications running directly
on these servers, in supplying the memory service to them, we prioritize local memory
because it is faster. Therefore, it is desirable for every server in the cluster where the
application runs to have enough memory in itself, so most of the memory need can be satisfied locally.
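A minimal sketch of the placement preference being described, favor local DRAM, then local PMEM, and spill to a remote node's PMEM only when the local tiers are full, is shown below. It is a toy illustration of the idea, not MemVerge's actual tiering algorithm; all names and capacities are made up.

```c
/* Toy local-first tier selection (illustrative only, not MemVerge's code). */
#include <stdio.h>

typedef enum { LOCAL_DRAM, LOCAL_PMEM, REMOTE_PMEM, NO_SPACE } tier;

typedef struct {
    long long dram_free;     /* bytes free in local DRAM        */
    long long pmem_free;     /* bytes free in local PMem        */
    long long remote_free;   /* bytes free across the RDMA pool */
} capacity;

/* Pick the fastest tier that can hold an allocation of `size` bytes. */
static tier place(const capacity *c, long long size) {
    if (c->dram_free   >= size) return LOCAL_DRAM;   /* fastest             */
    if (c->pmem_free   >= size) return LOCAL_PMEM;   /* slower, still local */
    if (c->remote_free >= size) return REMOTE_PMEM;  /* pay the RDMA hop    */
    return NO_SPACE;
}

int main(void) {
    capacity c = { .dram_free = 1LL << 30, .pmem_free = 64LL << 30, .remote_free = 512LL << 30 };
    const char *names[] = { "local DRAM", "local PMem", "remote PMem", "no space" };
    printf("8 GiB allocation goes to: %s\n", names[place(&c, 8LL << 30)]);
    return 0;
}
```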
These sockets, you know, 32 cores each,
you know, we're talking times 32.
It's a lot of cores. Yeah, you essentially have a little supercomputer.
I can see the movement from machine learning and AI
is completely exploding. And this truly goes beyond just being faster, because it's really operating on the memory
bus rather than PCIe, at least to a greater extent.
Yeah, yeah.
I think you are hitting the nail on the head there.
In fact, there are two big initial use cases that we have identified. Number one is the financial
environment, with the low-latency trading and market data use cases. The second one,
and perhaps a bigger one, is the AI/ML use case like you just described.
Those are dealing with data that are both large in capacity as well as fast in terms of latency,
especially when it comes to inference, when it needs to make decisions on the fly as the new data come in.
So that's where there is a common problem in a good set of AI/ML problems.
They call it DGM. So what's a DGM problem? It stands for "is data greater than memory?"
When the data doesn't fit in memory and you need to involve a disk or SSD to place
some of that data, which could be a feature library, your embeddings, or the
model itself, then the performance can drop 100 or 1,000 times just because of the
difference between those two media, between storage and memory.
And you're talking primarily the inferencing side of the AI, not the learning side, or
both?
So this would apply to both the training and the inferencing side, but this has a bigger
impact on the inferencing side because of demand on latency.
With training, it's a throughput game,
how fast you can move data.
And in our work with the AI/ML customers,
we can improve training, like a 4X, 5X,
by providing a big memory solution to them.
Now, on the inferencing side,
we can improve inferencing speed by 200x, 500x. So
much more dramatic improvement on the performance on the inferencing side, especially.
It's very interesting. Another thing that occurs to me is the sort of opening up of your database configuration structures.
It used to mean you'd have to shard off portions of the application
to different server sets so that you could run
sort of various portions simultaneously.
And this essentially eliminates that.
Right.
So this will be an alternative to database sharding,
which is popular because there is no better method than that.
And sharding can be difficult for certain kinds of databases.
If your data sets are completely independent of each other, then you can shard them, if you don't have a lot of joins and so on between them. But then there are some databases that are very interrelated. One good example is graph
databases, where all the data is kind of entangled together. It's almost impossible to shard.
And this basically gives you a big memory solution that you can fit the entire graph database,
even if it is a few terabytes or tens of terabytes, you can provide a single
solution where it all fits in memory and has amazing speed.
And therefore your application operates a lot more efficiently and your development
with that application becomes more straightforward because you don't have to account
for those various shards of that database.
Yes, exactly.
And talking about sharding, let me introduce what
I think is the coolest feature we have so far from our big memory software.
By the way, the name of the product is Memory Machine.
It sounds like hardware, but it's actually a purely software product.
We got the inspiration, number one, from machine learning, and number two, from virtual machines.
Essentially, I think, we are trying to do for memory what VMware did for CPUs, by providing this virtualization layer and building the bridge to the applications without requiring application change.
And within the memory machine software, as I mentioned,
we have the snapshot capability.
And besides the crash recovery use case,
the second use case is what we call
the app cloning use case.
And this is an alternative
to the sharding use case you mentioned.
So sometimes when people do sharding, it's not only when memory doesn't fit,
it's also to increase the number of CPUs you can put on a job,
basically to allow you to scale out your processing capabilities.
And how does our app clone work?
With these in-memory databases, many of them are single-threaded, like Redis or KDB.
These are single-threaded in-memory applications that are super fast, but they can only use one core. So when they are overloaded, you need to create another instance,
another replica or clone of the database,
to alleviate the load on the primary instance.
And before, you would shard it or replicate it
to create more instances,
and that process takes a long time
and it also takes more resources.
What we can do is we call thin app cloning.
So app cloning can be thick or thin.
And when we do the thin app cloning, we can create two virtual memory spaces for the two instances of the application.
So you basically create an instant application clone without replicating the actual memory resource.
So you mentioned, we didn't really talk about how your snapshot is implemented, and you talked about thin versus thick clones.
So, I mean, thin seems to imply that you're providing, you know,
like a modification sort of level of snapshotting where anytime a write occurs to either the snapshot or the
primary data, you replicate that block, or byte I guess in this case, that's being written, and
manipulate some pointers, I guess, that say how to constitute this space.
Yeah, I think at a certain point, I need to stop talking because this is the core technology
that we are developing.
I think you're zooming right into it.
So how we are doing this snapshotting is, in some way on the logical level, similar to
how a storage snapshot takes place.
When you have writable snapshots, you do all this copy-on-write or other methods of keeping
the fork of different virtual images of that data space.
And that's similar here in our case, but we are applying it to a medium that's a thousand
times faster, the memory media, and that makes it more difficult to do.
So as we are doing this, we have filed a number of patents to protect our algorithms, which essentially
allow us to provide multiple virtual copies of the memory while maintaining the same physical root and keeping track of all
the changes in the memory through software.
Snapshots can be done a number of ways.
Obviously, it's crucial in this case to be able to do it quickly and efficiently.
And snapshots can take a lot of space or a little space, depending on how you implement
them.
You mentioned thick clones versus thin clones.
Obviously, thick clones will take roughly the same amount of space
as the original root of the address space.
Yeah.
So we have some of the best engineers in this area
who have created the world's best snapshotting system before.
And we spent the last three years perfecting this.
And they believe they have done the best work in their life with this snapshotting capability.
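For readers who want the flavor of thin cloning without the proprietary details: the closest everyday analogy is the operating system's copy-on-write behavior at fork(), where two processes share the same physical pages until one of them writes. The sketch below demonstrates only that analogy; it says nothing about how Memory Machine actually implements its in-memory snapshots and clones.

```c
/* Copy-on-write "thin clone" analogy using fork() (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *state = malloc(64);
    if (!state) return 1;
    strcpy(state, "balance=100");

    pid_t pid = fork();            /* clone the address space lazily */
    if (pid == 0) {
        /* Child is the "thin clone": it sees the state as of the fork,
         * even though no bulk copy of memory was made.                 */
        sleep(1);
        printf("clone sees  : %s\n", state);   /* still balance=100 */
        _exit(0);
    }

    strcpy(state, "balance=250");  /* primary keeps mutating; only the touched
                                      page is copied on write, the clone's
                                      view stays at the fork-time content    */
    waitpid(pid, NULL, 0);
    printf("primary sees: %s\n", state);
    free(state);
    return 0;
}
```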
Someday I'd like to take a peek underneath the covers and see what this thing does.
But that's a subject for another discussion.
You mentioned snapshots and you mentioned app cloning.
Are there other data services that MemVerge supplies?
Sure. So there are three key data services that we focused on first; snapshot is the first one.
As I mentioned, you can do crash recovery, you can do cloning, you can also simply roll back
your application if you like, the classic standard memory snapshot use case. Besides that, the other two key data services
that we are supporting are, first, replication,
where we can replicate our memory state.
We actually have a very good first use case for it,
which is a real-time pub/sub system
for those market data trading platforms,
that we can have a single-digit microsecond latency
to quickly replicate the trading data to hundreds of processes
across multiple servers, and with a high level of consistency
and low level of latency and jitter.
So there's a replication capability
we are building to our memory machine.
That's the second one.
The third is tiering,
that we allow the DRAM and persistent memory,
both local PMEM and remote PMEM to be tiered together
to create one large memory pool.
And with intelligent placement, caching,
and movement capabilities underneath.
So snapshot, replication, tiering
are the first three data services that we are providing.
Talk to me a little bit about how replication,
I mean, what's the interface between replicas?
Are you doing, I guess, a synchronous replication?
So every byte written to processor A, let's say, is replicated to its twin someplace else?
So memory replication is done a little differently from storage replication.
But conceptually, we do support both synchronous and asynchronous replication.
It's a configurable option as you do this replication.
And we are still working on various use cases of replication. So far we have made available
the first use case, which is this PubSub use case. On our roadmap, we will essentially build other use cases on top of our memory-to-memory replication module, but that's up and coming.
But conceptually, you can see that it's a configurable module supporting both synchronous and asynchronous memory-to-memory replication.
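As a toy illustration of the synchronous versus asynchronous trade-off, the sketch below either pushes a write to the replica before returning (synchronous) or leaves it for a later background push (asynchronous). It is purely illustrative; the buffers and the send_to_replica stand-in are invented and are not MemVerge's RDMA replication module.

```c
/* Toy contrast of synchronous vs asynchronous memory-to-memory replication
 * (illustration only). */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define REGION 64

static char primary[REGION];
static char replica[REGION];

/* Stand-in for pushing bytes over RDMA to the remote copy. */
static void send_to_replica(const char *src) {
    memcpy(replica, src, REGION);
}

/* Write to the primary copy; if `sync` is set, the call does not return
 * until the replica holds the same bytes (lower risk, higher latency).
 * Otherwise the update would be shipped later by a background path.     */
static void replicated_write(const char *data, bool sync) {
    strncpy(primary, data, REGION - 1);
    if (sync)
        send_to_replica(primary);   /* wait for the remote update */
    /* async: in a real system, queue the update for a background thread */
}

int main(void) {
    replicated_write("trade #1: AAPL 100 @ 315.25", true);   /* synchronous  */
    replicated_write("trade #2: MSFT 50 @ 182.10", false);   /* asynchronous */
    printf("primary: %s\nreplica: %s\n", primary, replica);  /* replica lags */
    return 0;
}
```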
And the tiering solution is sort of the way you provide the scale-out memory?
Yeah, that's one. And even locally on a single node, there are some differences between our
Memory Machine software tiering and Intel's memory mode, in that we are more flexible in
terms of the ratio between DRAM and PMEM. And also it can be dynamically reconfigured.
And in certain cases, our performance is also better than the hardware.
So tell me how dynamic reconfiguration works.
So from an Intel perspective, you could say this much is memory mode
and this much is AppDirect mode, and you set that up in the BIOS, and until you reboot, you can't change it.
In your case, I guess the whole persistent memory is, could be, well,
it depends on the BIOS, I guess, you know,
it could be a portion of it that's memory mode,
the rest of it's application direct.
And within that application direct segment, let's say,
how does your reconfiguration work?
And what are we reconfiguring, I guess?
Sure, sure.
So with our Memory Machine, the entire persistent memory
is in AppDirect mode.
So we don't need it to be in memory mode or storage mode.
It's all in AppDirect mode.
And then we're presenting
it to the different applications.
There could be multiple applications
running on this server
and there could even be
multiple virtual
machines running on this
physical server. So you're effectively
presenting multiple address
spaces, I guess? Right.
We basically provide multiple address spaces.
We can configure it at a per application level.
You can have different configurations of your memory space.
And reconfiguration would say, okay,
this application was using a terabyte and this other application was using
five.
So I need to change it to three and three or something like that.
Is that how this would work?
Yeah.
It can, dynamically.
And also, the actual composition of that one or three or five terabytes can change.
You can say, I want this memory space to be one part DRAM, eight parts PMEM.
For another one you can say one part to four parts, and for another you can say have it all be PMEM.
And then at a certain time, if you think you need a little bit more or less
DRAM for this memory space, you can reconfigure that on the fly.
Let me try to understand
what you've just said there. The whole persistent memory
is effectively AppDirect and is managed
by the memory machine.
But I could say a certain portion of, let's say, the six terabytes,
one terabyte could be memory mode accessed,
and the other could be App Direct accessed.
And you're effectively mimicking that access through the memory machine?
Yeah, okay.
So the short answer is yes,
but let me give you a more complete picture in an example.
Let's say if you have a total of six terabytes
of persistent memory,
they are all configured in the hardware
in the AppDirect mode.
And now you run memory machine,
you let us manage this six terabytes persistent memory
and maybe another 512 gigabytes of
DRAM, let's say.
And then above us are, let's say, five different applications running.
And let's say application one, they just want to use a memory service.
And we basically provide them a memory service, transparent memory service to them.
And for this I can use one terabyte of PMEM plus 100 gigabytes of DRAM,
put that together, presenting 1.1 terabytes of memory service to it. And let's say a
second application says, I need a lot of memory, but I don't care as much about speed. I need three terabytes of memory service, but all on PMEM,
and I want to be able to do a one-snapshot-per-minute configuration on those three terabytes.
I'll configure a three-terabyte memory space for that application number two, also with our memory mode,
and turn on the snapshot capability.
The third application,
maybe they just want to use AppDirect.
We can pass through AppDirect to them as well.
And the fourth application,
maybe it's a new developer,
wants to use a new method of using persistent memory.
And we actually deliver an SDK.
In addition to the transparent memory service,
as in the first two applications,
I can provide the SDK to the developers
where their control of our memory service
can be even more granular and more powerful.
So in the fifth application,
maybe they just want to use memory again
and we can just provide a memory service.
So between them, we used up most of the PMEM
and most of the DRAM.
And in the middle, if any of them want to change,
we can change the configuration for them on the fly.
And all these services are enabled from our software.
So software-defined memory service,
really with different service levels
to different applications.
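To recap the five-application example above in a compact form, here is a hypothetical configuration table expressed as a C structure. The field names and values are invented for illustration and do not represent Memory Machine's real configuration schema.

```c
/* Hypothetical per-application memory-space configuration (illustration only;
 * not Memory Machine's real schema). Sizes in GiB. */
#include <stdio.h>

typedef struct {
    const char *app;
    int dram_gib;          /* DRAM contribution to the space        */
    int pmem_gib;          /* persistent-memory contribution        */
    int snapshots_per_min; /* 0 = snapshot service disabled         */
    int appdirect_sdk;     /* 1 = app uses native AppDirect/the SDK */
} mem_space;

int main(void) {
    mem_space spaces[] = {
        { "app1-transparent", 100, 1024, 0, 0 },  /* 1.1 TB volatile-style memory   */
        { "app2-big-slow",      0, 3072, 1, 0 },  /* 3 TB all-PMem, snapshot/minute */
        { "app3-appdirect",     0, 1024, 0, 1 },  /* raw AppDirect passed through   */
        { "app4-sdk",           0,  512, 1, 1 },  /* developer using the SDK        */
        { "app5-memory",      256,  256, 0, 0 },  /* plain memory service           */
    };

    int total_dram = 0, total_pmem = 0;
    for (size_t i = 0; i < sizeof spaces / sizeof spaces[0]; i++) {
        total_dram += spaces[i].dram_gib;
        total_pmem += spaces[i].pmem_gib;
    }
    printf("allocated: %d GiB DRAM, %d GiB PMem across 5 memory spaces\n",
           total_dram, total_pmem);
    return 0;
}
```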
And the operating environment is controlled through an administrative panel available
as a web service, or is there a management server or something like that that controls
this?
So both.
We have a REST API that connects to our graphical user interface to essentially give you a dashboard.
You get a view of all the applications
and what memory you're configuring for them.
And you can configure different data services
for each of those applications from your panel,
from your management panel, GUI.
We also supply a command line,
which allows you to automate a lot of these tasks
through scripting. Or, since a lot of administrators
prefer to go into the command line directly,
you can just do anything you want there. So
both GUI and command line are used
to manage our
memory virtualization platform.
Well, listen, this has been extremely
impressive, Charles.
I didn't realize you had all this capability.
So, Matt,
are there any last questions for Charles before
we close? Well, my mind is spinning a bit, Ray.
I think that as I try to digest, I'll probably come up with more.
But at the moment, I'd have to say no.
All right.
Charles, anything you'd like to say to our listening audience before we close?
Yeah.
So a couple of things.
First of all, you haven't missed much because all this,
we essentially just announced yesterday. So this is all fresh off the press. And we have been
heads down developing this over the last three years. So it's our pleasure to share this with you.
And we believe what we created is going to be a transformative technology.
This is not another storage technology.
This is a different and a new animal.
This is what we call big memory.
This is basically turning memory to become much bigger, persistent, and highly available.
And the effect, I believe, if we do everything well, and if the hardware
side cooperates, over the next 10 years, we'll see more and more mainstream applications moving to
in-memory processing mode. And the performance tier of storage will become less necessary as
memory starts answering all the needs that an application has.
So this means tens of billions of dollars of market
shifting from a storage system to an in-memory system.
We think it's a big happening in the industry,
and we are happy to be the first mover in this space.
And our EAP early access program is available now.
So come to our website if any of the listeners are interested in kicking the tires.
I think there are a number of use cases where you can see benefits of 100x, 200x from this big memory processing.
Well, this has been great.
Thank you very much, Charles, for being on our show today.
My pleasure.
Next time, we'll talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast, tell your friends about it.
And please review us on iTunes and Google Play and Spotify as this will help us get the word out.
That's it for now.
Bye, Matt.
Well, bye, Ray.
And bye, Charles.
Okay, bye-bye.
Until next time.