Grey Beards on Systems - 115-GreyBeards talk database acceleration with Moshe Twitto, CTO&Co-founder, Pliops
Episode Date: March 22, 2021. We seem to be on a computational tangent this year. So we thought it best to talk with Moshe Twitto, CTO and Co-Founder at Pliops (@pliopsltd). We had first seen them at SFD21 (see videos of their sessions here) and their talk on how they could speed up database IO was pretty impressive.
Transcript
Hey everybody, Ray Lucchesi here with Keith Townsend.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards bloggers together with storage and system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
This Greybeards on Storage episode was recorded on March 16th, 2021.
We have with us here today Moshe Twitto, CTO and co-founder of Pliops.
So Moshe, why don't you tell us a little bit about yourself and what Pliops is up to?
Hi Ray, it's great to be here.
I am the CTO and co-founder of Pliops, as you said.
Before that, I worked for around eight years for Samsung
in the Advanced Flash Solution Lab here in Israel.
We developed various technologies for Samsung's flash-based portfolio,
including error correction codes, compression algorithms,
FTL algorithms, et cetera.
This is my storage-related background.
I founded Pliops about three and a half years ago with Uri Beitler,
who also worked with me at Samsung.
Our product is a storage controller.
Basically, we started from our key technology, a highly efficient key value store accelerated by hardware, which has very good
performance characteristics across all the important metrics.
That is read amplification, write amplification, space amplification, and DRAM consumption.
All of those metrics are very, very good in our implementation.
And then we evolve further.
So you mentioned that it's a key value store solution.
So how does that play out in the storage space?
I mean, because, I mean, storage, we've got, you know,
blocks or file kinds of solutions.
We've got SSDs, obviously, and disk drives and stuff,
but they don't speak key value kinds of storage protocols.
If a key value protocol exists.
It's a great question.
And indeed, beside the core technology in our product,
the key value engine,
as I said, in order to be able to have easy integration
into existing systems
with no need to change anything in the application, we expose, beside
the native key value API that we have, the conventional block API as well.
And under the hood, we use the key value engine. The reason that we can leverage the key value engine also for the block API is
because we do transparent compression, and after compression you get variable-size LBAs.
So if you can handle those very efficiently, you can get high-performance storage with compression.
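To make that concrete, here is a minimal sketch, not Pliops' actual code and with hypothetical compressor stubs, of a block API layered over a key-value engine: the LBA becomes the key and the compressed, variable-size block becomes the value, so packing variable-size data onto the SSD is the engine's problem rather than the host's.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Stand-in for the hardware-accelerated key-value engine.
using KVEngine = std::map<uint64_t, std::string>;

// Hypothetical compressor stubs; a real device would compress in hardware.
std::string compress(const std::vector<char>& block) {
    return std::string(block.begin(), block.end());   // identity, for the sketch
}
std::vector<char> decompress(const std::string& value, size_t block_size) {
    std::vector<char> out(value.begin(), value.end());
    out.resize(block_size);                            // identity, for the sketch
    return out;
}

constexpr size_t kBlockSize = 4096;  // 4 KiB logical block

// Block write: the LBA is the key, the compressed (variable-size) block is the value.
void write_block(KVEngine& kv, uint64_t lba, const std::vector<char>& block) {
    kv[lba] = compress(block);
}

// Block read: look up the key, decompress back to a fixed-size logical block.
std::vector<char> read_block(const KVEngine& kv, uint64_t lba) {
    return decompress(kv.at(lba), kBlockSize);
}
```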
And besides that, we also do integration into various applications at the storage engine level.
We can elaborate on different aspects as you wish.
So you mentioned compression.
So you're doing compression on this Pliops card. So the Pliops card is effectively
a storage controller board that plugs into the PCIe interface. Is that how this works?
Yes, exactly.
And so it talks to storage behind it or it talks to storage that might be on the PCIe bus as well?
It's dual mode. Mode one is called inline, where we expose the PCIe lanes towards the SSDs, NVMe SSDs in
that case, and control them directly.
And we have the other mode, accelerator mode, where we are a sidecar accelerator, and we
have a software component in the host, some sort of filter driver,
that orchestrates the entire operation.
And so the actual access to the SSDs
is done from the host side,
but our card provides acceleration services,
mainly the indexing itself, the key value index that I mentioned,
the compression and erasure coding, and similar stuff.
Oh, so this is pretty interesting.
I get the technology of accelerating a key value store. I get the ability of kind of an off-processor card that can do that.
But I'm missing the application of it.
Where would we see the benefits of key value store acceleration, encryption, compression, et cetera, in application performance or specific use cases?
So are you talking about our key value API part or the block
API? Okay. So
we have our native key value API,
which is almost compatible with all the functionality of
the RocksDB API, which is a very popular storage engine. So each database, each application that
already uses RocksDB as a backend storage engine can be integrated
with our system transparently,
or very close to transparently.
You just need to recompile with our library
instead of the conventional RocksDB library.
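For reference, a plain RocksDB application looks like the snippet below (standard RocksDB API; the path and keys are just placeholders). Under the claim above, code like this would stay as-is, and only the library it links against would change.

```cpp
// Standard RocksDB usage; per the claim above, the application code is
// unchanged and only the linked library is swapped.
#include <cassert>
#include <string>
#include <rocksdb/db.h>

int main() {
    rocksdb::DB* db = nullptr;
    rocksdb::Options options;
    options.create_if_missing = true;

    // Path and keys are placeholders for this sketch.
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/example_db", &db);
    assert(s.ok());

    s = db->Put(rocksdb::WriteOptions(), "user:42", "some value");
    assert(s.ok());

    std::string value;
    s = db->Get(rocksdb::ReadOptions(), "user:42", &value);
    assert(s.ok() && value == "some value");

    delete db;
    return 0;
}
```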
And for other applications,
we are working on doing this type of integration ourselves.
One example is Redis, Redis on Flash. Redis is an in-memory database, but it also has a Redis on Flash version.
It's a proprietary version by Redis Labs, and that version uses RocksDB as its storage engine. It was a matter of about two
hours of work for us to integrate our solution, our key value storage engine, into Redis on Flash,
and we demonstrated a huge gain over conventional Redis on Flash.
Basically, we can show DRAM-like performance
for a wide range of hit ratio numbers with our solution,
while conventional Redis on Flash,
when combined with conventional SSDs with RocksDB,
is very sensitive to the hit ratio. For higher hit ratios, like 95 percent or 90 percent, it operates well and provides adequate performance, but as the
hit ratio starts to deteriorate toward 80 percent and below, performance also starts to drop sharply. With our solution, we can keep flat performance
down to a 10% hit ratio.
So this is one example of integration.
We've done similar integrations to more applications.
Yeah, yeah.
So when you say hit ratio, you're talking about host caching in that case. So when Redis on Flash natively has a hit rate that drops below 80%, its performance degrades pretty seriously.
But with your system in between the two and using, you know, the key value store capability that you bring to the table, you're saying you can take the host hit ratio down to 10%
and still perform very well?
Yes, yes.
And we have it, we already demonstrated that.
And there are several hyperscalers
that have a huge Redis deployment.
And they're interested in that capability
because we basically offer them to reduce
the footprint of the Redis servers.
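A back-of-envelope way to see why the hit ratio matters so much for a DRAM cache in front of a flash path: the effective latency is the hit-weighted average of the two paths. The latency figures below are illustrative assumptions, not measurements from the episode.

```cpp
// Effective access time of a DRAM cache in front of a flash path, for a few
// hit ratios. Both latency numbers are illustrative assumptions only.
#include <cstdio>

int main() {
    const double t_mem_us = 1.0;          // assumed DRAM-path access time
    const double t_flash_path_us = 800.0; // assumed flash path incl. storage-engine overhead

    for (double hit : {0.95, 0.90, 0.80, 0.50, 0.10}) {
        double t_eff = hit * t_mem_us + (1.0 - hit) * t_flash_path_us;
        std::printf("hit ratio %.0f%% -> effective latency %.0f us\n",
                    hit * 100.0, t_eff);
    }
    return 0;
}
```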
So this is, you know, I would say key value activity
is metadata intensive.
So, I mean, you're looking for, you know,
keys and the blocks that are associated with those keys, and you have a key store and,
you know, a value store, I guess. Is all that kind of laid out on the SSD by RocksDB already?
And you're just, so I'm trying to think how you play this technology advantage here. Yeah, because there's
a couple of mismatches for me mentally as I go to connect
the dots. There's the hit ratio for in-memory databases, so my ability to say, hey, you
know what, this table that I want to look up is already in memory, and bam,
I get all the advantages of an SAP HANA or Redis in-memory database. But to explain to kind of the
layperson who's not familiar with how in-memory databases work, if I have to go to disk,
that's called a miss, I didn't hit the in-memory database. And with this solution in
between, whether I have to go to SSD or spinning disk really doesn't
matter: if I don't hit what's in memory, my database performance
will drop tremendously. So what I'm hearing is that this improves
that even when I have to go to SSD,
I'm getting the performance
that I expect out of my in-memory database,
even when that hit ratio drops down to 10%,
which sounds rather amazing.
Yes, indeed.
And there is a relatively clear explanation for that.
If you look at just the SSDs for block performance,
if you look at, let's say, four NVMe SSDs,
they can provide huge performance in terms of IOPS and latencies are not so bad either. You can
get around 100 microseconds, something like that. If you increase the queue depth, you can
go up to 200 microseconds, something like that, for a very high QD. And the requirements for systems like Redis, for example,
is to be under one millisecond latency,
including the network latency and everything.
So if we just look at the disk itself
and assume that for each operation of get or put,
for example, you just need to have one simple disk access, then
everything should work out fine. The issue is with this middleware, the storage engine
that lies in between. That is where everything changes, because those storage engines generate very high
read and write amplifications. So what you're saying is for every read that the host does or every write that the host does,
the storage engine does multiple writes and multiple reads.
Exactly.
For RocksDB, for example, you can get numbers
between 20 to 40 of write amplification.
Of course, everything depends;
you have hundreds of tunable parameters,
so I'm talking about the common settings.
You can get a read amplification of a factor
between four to ten, something like that.
So this layer in between consumes a lot of the available bandwidth,
let's say the pure effective bandwidth of the SSD.
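A rough arithmetic sketch of that point, using the amplification ranges quoted above and assumed raw IOPS figures for four NVMe SSDs:

```cpp
// How storage-engine amplification shrinks the IOPS the application actually
// sees. Raw SSD IOPS are assumptions; the amplification factors are the
// ranges quoted above for RocksDB under common settings.
#include <cstdio>

int main() {
    const double raw_write_iops = 4 * 200000.0; // assume 4 NVMe SSDs, ~200K write IOPS each
    const double raw_read_iops  = 4 * 700000.0; // assume ~700K read IOPS each

    for (double wa : {20.0, 40.0})
        std::printf("write amp %2.0fx -> ~%.0f host write IOPS\n",
                    wa, raw_write_iops / wa);
    for (double ra : {4.0, 10.0})
        std::printf("read  amp %2.0fx -> ~%.0f host read IOPS\n",
                    ra, raw_read_iops / ra);
    return 0;
}
```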
So in our case, as I said at the beginning of the talk,
we have excellent trade-offs between read, write, and space amplification.
So for key value, we basically behave
as a conventional disk behaves for blocks.
We have the rich functionality of key value,
similar to RocksDB,
but we are able to deliver performance
that is similar to conventional block, which is high.
This is where the gain comes from,
and this is why we can have very good performance.
And we did the measurements, for Redis,
for example, by the book.
So we don't just try to show off with hero numbers at huge latency.
We drove the system until we got to one millisecond latency,
and we compared that to a conventional system without our solution,
so both are fully functional Redis servers.
And in that specific example, with under one millisecond latency,
in our case, we were able to extend the hit ratio down to 10%.
And the brute-force way around this challenge is to go with stuff like Intel's PMEM, or PMEM from Micron or whomever, to just increase the physical amount of memory that's in a given server or cluster with persistent memory, higher latency than DRAM but faster than storage.
That still has limitations.
If you have petabytes of storage that you want to access via Redis or SAP database,
you can only go so far.
So this seems like a pretty good shim to put in between those types of solutions and your storage subsystem because essentially you're creating a more direct or block level interface to that storage to get you similar performance.
Obviously, you can't beat the speed of light or the laws of physics, but you're optimizing
what's available in your storage subsystem.
Yes, except I would say it's not block, it's key value, right? It's a key value access capability that's moved out to this
board that's talking to the drives directly, right?
Yes. And this is a key factor here, because there are a lot
of manipulations of the data layout and organization
and sorting and things like that that a storage
engine typically does in order to be able to persist data on disk.
And we actually leverage our solution to make those operations
much lighter, and this is why we can gain. By the way, you talked about
PMEM. On our board we integrate some sort of NVDIMM. It's a combination of DDR, supercaps, and an SSD that we have on board.
And we use that for various reasons. Maybe we can talk about some of the use cases later, but one of them is that, similar to the PMR (persistent memory region) concept in NVMe,
we expose several gigabytes as a DRAM disk, actually.
And by simple integration with transaction logs, for example,
and write buffers, we can remove maybe
the number one performance bottleneck for writes in many database systems, which is the
transaction log part. This is generally the only part of writes that is done in the foreground and directly affects the latency seen by the user.
And we can improve that significantly
by exposing this DRAM-based drive.
Memory drive kind of thing.
Yeah.
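As one illustration of that kind of integration, RocksDB already lets you place the write-ahead log on a separate device through its standard wal_dir option; the mount point below is hypothetical.

```cpp
// Sketch: pointing the RocksDB write-ahead log at a fast, persistent
// DRAM-backed device. wal_dir is a standard RocksDB option; the mount
// point is a hypothetical example.
#include <string>
#include <rocksdb/db.h>

rocksdb::Status open_with_fast_wal(rocksdb::DB** db) {
    rocksdb::Options options;
    options.create_if_missing = true;
    // Main data files stay on the regular volume...
    const std::string data_path = "/data/mydb";
    // ...while the foreground-latency-critical WAL goes to the DRAM disk.
    options.wal_dir = "/mnt/nvram_log";   // hypothetical mount point
    return rocksdb::DB::Open(options, data_path, db);
}
```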
So you mentioned the board.
I mean, FPGAs, ARM processors, you know, ASICs, what sort of technology is sitting on this board? I mean, you mentioned the NVDIMMs. I got that. Is there a lot of DRAM?
The NVDIMM we actually manufacture ourselves from the combination of components that I mentioned before, the DDR, supercaps, and SSD that we have on board.
Besides that, we also have an FPGA. And these days, we are developing the ASIC version of the product, so the next generation will be ASIC based.
Okay, so it's not like an ARM multi-core processor
sitting out there doing all this stuff.
It's all FPGA at the moment moving to ASIC at some point.
Yes. The angle that we took for computational storage
is, again, that we're trying
to make our technology as accessible to users as we can.
This is why, by the way, even though we build upon the key value engine,
our first penetration into the market was with our block API,
which is very easy to integrate.
And the early customers that we have
have experimented with, tested,
and are using the block API.
Now we're in the second phase of key value POCs.
And if you look at the computational storage, which is a great
trend and initiative, the thing that we found missing there is that
you basically rely on the user to utilize the computational resources
that you provide along with the drive.
But we took the more gradual approach
where we provide a well-defined value proposition
that can be consumed relatively easily.
For the block API, it's seamless.
For key value, and for more databases, it's not seamless but it's relatively easy, and we provide a specific value
proposition to our customers, while later on they can also utilize programmable
capabilities. So our ASIC will include ARM cores inside that will be accessible by the user as well, either through the new initiative of NVMe for computational storage or otherwise.
But we want our product to be accessible from day one and provide value, specific value from day one.
Okay, so back to the specific value from day one.
So you mentioned compression, you mentioned encryption,
you mentioned erasure coding.
Are those the sorts of things that you're bringing to the table
for pure block-oriented host access?
Yes, this is part of the thing that we provide.
We also provide, as I said, the DRAM-based disk.
We provide write atomicity.
We provide smart caching, for example.
We automatically detect journal parts, let's say the journal portions of writes,
which are generally characterized by a very high write rate
if you compare them to conventional LBAs.
So we detect this automatically, and those writes rarely go to the disk.
For example, if you look at the double write of InnoDB, or the journaling of certain file systems like XFS or ext4 and things like that, those writes are caught by our caching algorithm and enjoy higher performance because they never go down to
the disk. They're absorbed inside our onboard NVDIMM, so they are
persistent but never go to the disk itself, which improves performance.
You gotta hold on here Moshe. I mean, so at some point, those writes have to be destaged, right?
I mean, you can't just sit there.
No, no, not at all.
I'll give you an example.
I'll give you an example.
Take the double write of InnoDB, for example.
It's typically a 20 megabyte LBA region.
And for each LBA that is written
to the main storage, to the tablespace,
the same LBA is also written cyclically
to this double-write buffer.
So effectively, you get a huge amount of invalidation
for this region. So it's basically out of...
I understand.
So the same range of LBAs are written over and over again.
Yeah.
Take, for example, one gigabyte of writes to the main storage.
Right. You get one gigabyte of writes also into those 20 megabytes of the double-write buffer.
Yeah, yeah.
So just the last 20 megabytes out of the one gigabyte is written to the disk.
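A toy simulation of that doublewrite example: one gigabyte of writes cycles through a 20 MB LBA region, and because each slot is overwritten many times before destage, only the final 20 MB ever needs to reach flash. The block size and data structure are assumptions made purely for illustration.

```cpp
// Toy model of the doublewrite-buffer example: 1 GiB of writes cycle through
// a 20 MiB LBA region; each slot is overwritten many times while absorbed in
// the NVDIMM, so only the final 20 MiB needs destaging to the SSD.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

int main() {
    const uint64_t block = 16 * 1024;                              // 16 KiB pages (InnoDB-like assumption)
    const uint64_t region_blocks = 20ull * 1024 * 1024 / block;    // 20 MiB buffer region
    const uint64_t total_blocks  = 1024ull * 1024 * 1024 / block;  // 1 GiB of host writes

    std::unordered_map<uint64_t, uint64_t> dirty;  // LBA slot -> latest version
    for (uint64_t i = 0; i < total_blocks; ++i)
        dirty[i % region_blocks] = i;              // cyclic overwrite, no destage yet

    std::printf("host wrote %llu blocks, destage needs only %zu blocks\n",
                (unsigned long long)total_blocks, dirty.size());
    return 0;
}
```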
So if I'm understanding correctly,
is that the controller is absorbing the overhead?
Yes, because it is overwritten.
This is the reason.
Yeah, okay.
I get the overwritten stuff,
but at some point that 20 megabytes of data has to be written back to the SSDs, right?
Of course.
Yeah, so what I'm hearing is that we're just not doing the inefficient part,
the inefficiency that causes the latency.
And we've talked about this with other
super high performance block storage:
when you're looking at it from an OS perspective
and you're doing writes, the OS has cruft, or overhead, when doing
writes, and that creates latency. And then the disk protocol itself has overhead,
which causes latency. You guys are cutting out that latency at the disk layer. So,
and correct me if I'm wrong, I still have the overhead from my
OS file system when I'm using the block API.
Yes, yes, you're absolutely correct about the file system. Of course, so again, taking the example of the double write just for the sake of the discussion, we have two modes for that.
One is the automatic removal.
In that case, indeed, as you mentioned,
the file system part is still doing
the same amount of writes exactly as before.
But I cut down by a huge factor the traffic to the disk.
We have also, I mentioned that we have also write atomicity support,
which allows us to eliminate the double write altogether.
Double write is a mechanism that was intended to protect against torn blocks
in case of a sudden power down,
when part of the block is updated
and part is not updated.
But if we provide write atomicity,
we can make the double write totally redundant.
The point is that you need to get support for that.
From protocol level.
Yes, exactly.
And so we've done measurements for conventional operation,
operation with no double write,
operation with write atomicity,
and the one with automatic removal.
Of course, the best performance we get is from double write
elimination, but we come very close to that
also with the automatic detection of the...
Transient writes, transient data, kinds of things,
this temporarily overwritten data, left and right,
and stuff like that.
So I agree with you that you don't get the entire gain,
but you get very close to it.
And if I wanted the ultimate performance,
I'd just bypass the file system and use the API,
like the RocksDB example,
to get the ultimate performance.
Yeah, if you want a key value API oriented solution.
And I can see this easily being more than just
a Redis type of application.
Anything, any application that I can write a driver for
to access the key value store
to get and put data
would also have the same performance advantages.
Of course, of course.
Yeah, yeah.
And the nice thing about, I guess,
RocksDB is that it's a fairly widely used solution
at that level.
And because they're already there,
it's relatively easy and painless to plug that
into this device. So back to the block stuff again. So you're de-staging slowly the data
that's transiently written over and over again. That's easy. You're also compressing the data. So what's being written to the physical
SSDs are variable length blocks. Is that what's going on? Or are you packing those variable length
blocks into physical blocks? And so you must have some sort of a, almost a flash translation layer
inside the Pliops board, right? Because you're getting an LBA write or read,
but what's actually written on the SSDs
is variable length segments, right?
So of course, for the SSD itself,
we write complete LBAs,
because this is the way you need to write to an SSD.
More than that, we buffer a large amount of writes,
because again, we have this memory-backed NVRAM,
and in the destaging phase,
we write to the SSDs in very large batches.
And what you said is absolutely correct, we have our own kind of
FTL-like structure, which is actually the key value interface, because eventually the LBA is the key and the block is the value. So if you have this highly versatile
and powerful key value engine, it's some sort of FTL, if you think about it.
It just has many more capabilities.
So you can use compression very easily
with almost no extra work needed.
This is for block.
Yeah, I mean, but the challenge is
you've got to write the data into the physical SSD
in a fashion that it accepts, which is a physical block.
And, you know, for some of these devices, they would like real large physical blocks.
I mean, QLC, I'm thinking megabytes or something like that, right?
We're writing gigabytes. We write in gigabytes. And even that,
if you look only at that part,
it's not enough,
because the erase units on the SSDs,
on certain models of NAND, for example,
are huge.
The erase unit itself
is around a gigabyte.
So if you work over four SSDs, for example,
and you stripe the data across the four SSDs,
each of them gets about 500 megabytes.
If you just look at that,
if I write random 500 megabyte blocks to the SSD,
you will still get write amplification inside the SSD.
So I would make life harder for myself.
But the point is that as part of the layout algorithm
that we have, we use some sort of, how to call it,
some sort of bin packing algorithm
where we give priority,
we try to make large sequences
of LBA ranges of blocks get invalidated at the same time.
So if I chose a certain 500 megabyte region,
and I'm talking about one SSD here,
when I'm working over four SSDs, just for example. So if I chose a certain 500 megabyte region
for overwrite, that is invalidation. If you look from the SSD's perspective, internally,
there is invalidation of 500 megabytes. Then I tend to choose an adjacent 500 megabytes
for the next part. And so I'm balancing between my write amplification and the SSD's
internal amplification to get optimal results. This is what this algorithm aims to do.
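A highly simplified reading of that heuristic, with made-up thresholds: prefer overwriting the region adjacent to the last one chosen, as long as it is not much worse than the plain greedy garbage-collection choice, so invalidations stay contiguous and line up with the SSD's large erase units.

```cpp
#include <cstdio>
#include <vector>

struct Region { double garbage_fraction; };  // share of the region already invalidated

// Pick the next ~500 MB region to overwrite. Plain greedy GC would take the
// region with the most garbage; this sketch prefers the neighbour of the last
// pick when it is "close enough", to keep invalidation contiguous.
size_t pick_next_region(const std::vector<Region>& regions, size_t last_picked) {
    size_t best = 0;
    for (size_t i = 1; i < regions.size(); ++i)
        if (regions[i].garbage_fraction > regions[best].garbage_fraction) best = i;

    size_t neighbour = (last_picked + 1) % regions.size();
    const double slack = 0.10;  // assumed tolerance, purely illustrative
    if (regions[neighbour].garbage_fraction + slack >= regions[best].garbage_fraction)
        return neighbour;  // contiguity wins when the cost is small
    return best;           // otherwise fall back to the greedy choice
}

int main() {
    std::vector<Region> regions = {{0.60}, {0.85}, {0.90}, {0.20}};
    std::printf("next region: %zu\n", pick_next_region(regions, 0));  // prints 1
    return 0;
}
```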
So overall, in our measurements, either for random workloads, of course,
random, which is very easy, and also for skewed,
hot-cold workloads, we get write amplification
from the disk in the range of 1 to 1.14.
This is the level of internal write amplification
that we measure from the disk itself.
So this algorithm allows us to work over conventional SSDs and get very low internal amplification with no substantial increase in our own
write amplification. Actually, we started by using a CNEX Labs open channel SSD, because at the beginning of our journey
we wanted to control the exact placement to guarantee no write amplification inside the SSD.
Back then there was huge hype around open channel SSDs, and the hope was that they were going to conquer
the world and become prevalent, but eventually that wasn't the case.
So we anticipated that relatively early and moved from open channel SSDs to conventional
SSDs.
Then we put this algorithm in place, and we managed to get the same performance even from conventional SSDs. By the way, the ZNS SSDs are
excellent candidates for us for the same reason.
So by reducing the write amplification at the SSD, you can, A, increase the endurance, right? And B, actually increase maybe physical capacity. I mean,
obviously compression increases the logical capacity of the SSD. But yeah, a lot of SSDs
have a certain amount of buffer that they allocate for, you know, writes and stuff like that. I don't know
if it's on the order of 20%. In some cases, it might even be larger. So they might say that they
have one terabyte of SSD capacity, but in reality, they have 1.2 terabytes of physical NAND capacity,
but they're only showing the one terabyte. Are you somehow reducing that as well, or is that
just the way that conventional SSDs work?
Great question. Actually, this was one of the reasons
that we wanted to work with open channel SSDs, and now with ZNS SSDs,
because you get the entire capacity for yourself, and you don't rely on
the internal garbage
collection mechanism. And if the SSD is write optimized, for example, and has this 20%
internal over-provisioning, of course, there is nothing that we can do about that part.
It's wasted capacity from our perspective. This is why we recommend our customers to use the cheapest SSD
that they can get their hands on.
Typically, even with enterprise-grade SSDs,
and of course data center-grade SSDs,
you can get a lot of those with 7% OP,
which is much lower than the 20% that you mentioned.
So we recommend customers to use the lowest-cost SSDs
that they can get,
because we don't really need this internal OP.
We don't need it, and we can compensate for it.
We had several discussions with the vendors.
It's theoretically possible,
but again, we want to be able to sell actual products,
so we don't have support for such SSDs
with less OP than specified.
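Quick arithmetic on what over-provisioning costs: the same raw NAND sold at 7% versus 20% OP. The raw capacity below is an illustrative figure, not a vendor spec.

```cpp
// Usable capacity of the same raw NAND at two over-provisioning levels.
// The raw capacity is an illustrative assumption.
#include <cstdio>

int main() {
    const double raw_tb = 2.048;  // assumed raw NAND in the drive, in TB
    for (double op : {0.07, 0.20}) {
        double usable_tb = raw_tb / (1.0 + op);
        std::printf("OP %.0f%% -> usable capacity %.2f TB of %.3f TB raw\n",
                    op * 100.0, usable_tb, raw_tb);
    }
    return 0;
}
```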
So you mentioned encryption as well.
So you provide, where are the keys
for something like that maintained?
If you're encrypting the data on the SSDs,
would it be encrypted with the same key across all the SSDs that you control?
We have one secret key. This is something that is not novel or innovative, it's
conventional: we have one internal root-of-trust key that is used to encrypt the user keys.
And for the user keys, we expose multiple namespaces, or in the key value API we call them instances.
So we can get a different encryption key from the user for each database.
We'll have one single master key to encrypt those keys.
Yes.
Yeah, I understand that.
Is the key metadata sitting on the card only, or is it sitting on
the back end as well? Or is this something that's paged in and out, or something that
you try to retain complete access to on the board?
Okay. One of the benefits of our indexing scheme is its extremely low memory footprint.
Take, for example, a conventional SSD.
A conventional SSD, as you know, requires around a 0.1% DRAM-to-SSD ratio
in order to store its simple fixed-size FTL mapping.
So you need about 4 bytes per LBA.
This is typically the number.
In our case, we need half this number.
We need around two bytes per object on average,
while supporting variable-size blocks.
So actually it's basically key value.
So this is why we can enjoy
caching the entire metadata in DRAM
with no need, you know,
for paging, as I mentioned,
either during writes or to do index reads during reads.
And we can get very high and consistent performance,
low latencies, and low tail latencies.
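Using the figures just quoted, roughly 4 bytes per 4 KiB LBA for a conventional FTL versus about 2 bytes per object here, the DRAM needed to keep the whole index resident works out as below; the capacity behind the card is just an example.

```cpp
// DRAM needed to keep the full index in memory, using the per-entry figures
// quoted above. Total capacity behind the card is an example only.
#include <cstdio>

int main() {
    const double capacity_bytes = 32.0 * 1e12;          // example: 32 TB behind the card
    const double lba_size = 4096.0;                      // 4 KiB logical blocks
    const double entries = capacity_bytes / lba_size;    // assume ~one object per 4 KiB

    std::printf("conventional FTL (~4 B/LBA):   %.1f GB DRAM\n", entries * 4.0 / 1e9);
    std::printf("key-value index (~2 B/object): %.1f GB DRAM\n", entries * 2.0 / 1e9);
    return 0;
}
```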
So let me just try to understand.
Summarizing that, number one,
you accelerate any key value database
that would use RocksDB
and could accelerate any key value database that doesn't,
as long as they're willing to utilize your API.
And that's correct, right?
And even if you decide not to use a key value database API approach,
even if you're just using normal SSD block access,
you also offer performance and capacity and security.
Yes, exactly.
At the last FMS, for example,
we demonstrated some of our gains for MongoDB
with the block API, with the simple block API.
MongoDB, MySQL, and Redis.
What about high availability and stuff like that?
So a lot of these, you know, some of these databases, I guess, because they're in memory,
aren't necessarily high availability solutions.
But some of the more enterprise sorts of activity and stuff requires high availability.
Do you provide multiple Pliops boards in that case?
So currently not.
It's a good question. As you said, we generally get those requests from more enterprise kinds of customers.
Because you can look at our solution also as some sort of backend for storage, even for external storage services or products.
Because we just take care of the entire backend part,
we do seamless compression, erasure coding, all this stuff.
So in those collaborations that we engage in,
the issue of high availability was raised.
This is something that, with our current version,
we don't provide. There is no problem providing it in terms of our architecture, but we haven't had the chance
to productize it, so we rely on other parts of the system to do the availability. For example,
if you look at MongoDB, we have clustering in MongoDB; I don't need to do anything, I don't have any additional support internally for that.
I see. There's another question.
What was it? Erasure coding. So are you using something like RAID 6, two drive failures, or RAID 5,
one drive failure, kinds of things? Is that what you're calling erasure coding?
Yes, we're using RAID 6
with the ability to reconstruct the data from half the stripe. In case of a single disk failure, we can
reconstruct the data from half of the stripe, which improves performance in degraded
mode. I think that the main thing in erasure coding, or RAID, for that matter,
is the performance in degraded mode. Because if you look at conventional RAID solutions,
RAID 5 or RAID 6 solutions, there is a penalty in normal operation performance, but this is not the main problem with those solutions.
The main problem is what happens to the performance when you are in degraded mode,
when you have this- Rebuilding the data, yeah.
Exactly. In that case, you see a huge decrease in performance, and
for modern data centers, your server is as good as dead.
Because it's not responsive at all for extended periods of time.
This is why many customers just abandon the RAID option and rely on other levels of replication and clustering and things like that.
This is why, if you look at the recommended stacks for Cassandra and MongoDB, they all recommend you use RAID 0 or just a JBOF mode,
exactly because of the performance. And in our case,
our erasure coding excels in that. Even during rebuild, and we also demonstrated that at the last FMS,
we have around 10%,
less than 10%, degradation in application performance
during rebuild.
And so it's not just providing similar
RAID 5, RAID 6 capabilities.
It's about the performance of our solution.
And this is where we really make a difference between conventional solutions and our solution.
So I guess as we start to wrap up and we bring this up a level,
a lot of geeky stuff, a lot of stuff I didn't understand, I'll be honest with you. Some of it,
I think, the overall concept I get. I think one of the big questions,
especially as the movement shifts from bare metal to consuming even in-memory databases via
cloud services, and I know you mentioned cloud providers earlier, is how are you guys going to
market with this solution as bare metal becomes a little less appealing to organizations
other than hyperscalers.
Are you talking about virtualization?
No, not just virtualization.
So if I'm vacating my data center and I just don't have access to the bare metal to throw
this into a system.
Are you guys partnering with cloud providers?
How can I consume?
So are you going after the hyperscaler market, for instance, as a supplier to that market?
Because a lot of the hardware is moving out of the data center into the cloud.
Yes.
So hyperscalers certainly is our first engagement.
Also what we call the superscalers, the companies that have
the internal capabilities to support such a solution.
If you ask whether we are approaching traditional enterprise companies, that is in our plans,
but our main success is with hyperscalers and superscalers.
I got you.
So you're not going at the data center per se
as much as you're going at these hyperscalers
and superscalers from a market perspective.
And I'm glad to hear that
because that makes a lot of sense to me
for consuming this type of environment.
I see customers moving away, individual enterprises moving away from adding specialty hardware.
But if this specialty hardware is sold to and maintained by the hyperscalers, it makes it much more appealing.
Yeah, yeah, yeah.
Well, this has been great.
Thank you very much.
So Keith, any last questions for Moshe?
No, I think this is a really neat solution
to a problem that's really difficult to crack,
especially as we look at not having access
to bare metal hardware anymore.
And there are practical problems, like the Linux kernel
isn't changing anytime soon, or at any kind of scale,
to solve the cruft that we have with file systems and writing to storage.
So I'm happy to see that they've embraced RocksDB and Block as well. It's a really,
really interesting solution.
Moshe, anything you'd like to say to our listening audience before we close?
No, just thank you very much
for the opportunity to chat with you.
I enjoyed it very much.
Okay.
Thank you very much, Moshe,
for being on our show today.
And that's it for now.
Bye, Keith.
Bye, Ray.
And bye, Moshe.
Until next time.
Yeah.
Bye-bye.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast, tell your friends about it.
Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out.