Grey Beards on Systems - 105: Greybeards talk new datacenter architecture with Pradeep Sindhu, CEO & Co-founder, Fungible
Episode Date: August 18, 2020. Neither Ray nor Keith has met Pradeep before, but Ray was very interested in Fungible’s technology. Turns out Pradeep Sindhu, CEO and Co-founder of Fungible, has had a long and varied career in the industry, starting at Xerox PARC, then co-founding and becoming chief scientist at Juniper, and now rearchitecting the data center with Fungible.
Transcript
Hey everybody, Ray Lucchesi here with Keith Townsend.
Welcome to the next episode of the Greybeards on Storage podcast, a show where we get Greybeard
storage bloggers to talk with system vendors and other experts to discuss upcoming products,
technologies, and trends affecting the data center today. This Greybeards on Storage
episode was recorded on August 12th, 2020. We have with us here today Pradeep Sindhu,
CEO and co-founder of Fungible. So Pradeep, why don't you tell us a little bit about yourself
and your company's product? Hi, Ray and Keith. Thank you very much for having me on. I'm the
CEO and co-founder of Fungible, which is a Silicon Valley startup.
We were founded in 2015 with the explicit goal to revolutionize the performance, economics,
reliability, and security of scale-out data centers. And the key technology that we have
is a new category of microprocessor that we call the Fungible DPU, along with the associated software
and systems that we are building using the Fungible DPU. What does the DPU do? I mean, I read some of the stuff that was
on your website. It's not quite composable infrastructure, but it is composable infrastructure.
I mean, how would you compare it to some of the more standard composable infrastructure that's out there today? Because the DPU applies not just to storage, let's first talk about what the DPU was
designed to do. Storage is a very, very important use case for the DPU, but the DPU was designed to
solve a broader class of problems. We see two fundamental problems facing data centers today.
One is that the power and space footprint
is way too large. And this is because of two issues. One is that a certain very important
class of computations that we call data-centric are not executed efficiently because they're
executed on general purpose CPUs. And the second one is that we are unable to pool resources because the industry does not have a scalable fabric.
So those are the two fundamental problems that our DPU was designed to solve.
And we have solved them in a way in which the DPU is fully programmable.
And so this is why we refer to the DPU as a new category of microprocessor. In fact, some people have referred to the fungible DPU
as a third socket inside data centers
after x86 and GPUs.
Pradeep, this is a very aggressive solution here.
I mean, you're taking the compute architecture
that has existed for, God, I'll say 50 years now,
and you're sort of turning it on its head to some extent. You're providing the DPU as yet another, the CPU almost becomes an accelerator to the DPU
to some extent. Is that how it's playing out? No, I wouldn't think of it that way. I would
think of it this way. So if you look at the sweep of information technology infrastructure,
starting all the way back when the concept of general purpose computing was invented in the 1950s by John von Neumann.
You know, we had many, many different CPU types.
It went to a single type of machine, a single type of microprocessor called the x86.
And then scale-out was invented where people put many, many, many x86s on a network.
I think that particular architecture where you have homogeneous x86 general purpose
servers connected to a network, that architecture has reached the end of its useful life.
God, that's a hell of a statement.
It's a hell of a statement, but I have rarely been completely wrong about industry trends.
So I think that we are entering what we call a data-centric era. And in this data-centric era, you will have an architecture that is somewhat different than the past architecture.
It will still be scale-out, but it won't be scale-out of a uniform set of x86-based servers.
It'll be a scale-out of multiple different types of servers.
And so let me give you examples. There'll be a
handful of different server types. GPUs and FPGAs and storage, all these would be servers under this
environment? Exactly. As an example, you would have general purpose x86 server, just like you
have today. You would have GPU servers, which are disaggregated from x86 servers. That is not the
case today. You would have high
performance storage servers containing SSDs, and then you would have hard drive servers if,
for example, hard drives survive. And that is anybody's bet in the long term. And each of
these server types would be powered by a DPU. In the compute servers, which is the GPU servers and the x86 servers, the DPU is entirely complementary to
the x86 and the GPU. It's not trying to do the same work that these engines are doing. The GPUs
and the CPUs would be running applications. What the DPU would be doing would be running
infrastructure workloads. For example, the network stack, the storage stack, the security stack,
and the virtualization stack. Now, the virtualization stack in days of yore all ran on
the CPU. The network stack can be in different places. The storage stack can be in different places. But
the virtualization stack has always been totally tied to the CPU, to my knowledge. I'm digesting this because I didn't read the white paper.
And it sounds like a variation of the attempt by HPE to drive the memory-driven architecture of The Machine.
So the ideal is the entire data center is a computer, and it needs a control plane.
And it sounds like to me that the DPU is the foundation of that control plane. So
data center services such as networking, virtualization, storage, et cetera,
is run by the DPU. The IO is controlled by the DPU. And then the CPU, GPU, et cetera, are just
different subsystems of this overall architecture.
That is very close to being accurate, except what I would say is that what the DPU does
in this picture is it actually does two things. One is that it offloads data-centric computations
from the CPU and the GPU. So it's in the data path. It's not just the control plane. It is vehemently
in the data path. So I guess before this is really, I think you're making a critical
differentiation here. So the DPU is still the core of the idea. So your APIs, your overall
operating system for the data center is running on the DPU. But to kind of revisit the concept that x86 has always been the driver for virtualization,
you're saying that you're abstracting that layer of virtualization.
You're saying if you need x86 virtualization, yeah, we'll offload that to x86 CPUs.
But the API into the data center and the control plane and data plane is the DPU.
So the DPU is the basis for high performance disaggregated scale out. I think it's important
for me to just underline and define actually what is a data centric computation. So I think
we'll all agree that most modern applications are written as microservices.
I think we can also agree that modern applications want to touch very, very large amounts of data,
so large that they don't fit in a single server or even dozens of servers. You need to shard the
data across hundreds, maybe thousands of servers. And so when you have that computing environment, there is a category of computation, basic computation, which we call data-centric. And the percentage of the data center power that has been devoted to data-centric computations has been going up steadily for the last 30 years.
Yeah, it's not something you would have noticed, quite frankly, although it is implied by what's going on in the industry. But it's not a trend that people have been tracking numerically, I'll say. Well, the key point is that we recognized this
back when I was at Juniper in 2014, and that was the basis of the company. So let me just define
what a data-centric computation is for your listeners. A data-centric computation is a
computation where all work arrives at a server in the form of packets on a wire.
These could be PCIe packets. They could be Ethernet packets. It doesn't matter.
The key point is packet switching. Second is that these packets are coming from hundreds,
maybe thousands of different contexts, and they're intermingled with each other.
The third aspect is that the workload involves modifying a significant amount
of state, state which cannot be kept on chip. It has to be kept on an external memory. And the last
piece is that the workload's I/O dominates arithmetic and logic. So you're talking about
the context switching that happens in the CPU when work comes in from various different
applications that are running or different pieces of data that
show up? What I'm saying is that general purpose CPUs as well as GPUs were never designed to
context switch every packet. Because today on a 100 gigabit per second network, a 64 byte packet
arrives every 5.12 nanoseconds. CPUs were not designed to context switch that fast.
Now, the average packet size is much bigger, of course, thank goodness. It's only 400 to 800 bytes.
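A quick back-of-the-envelope sketch of those per-packet time budgets (illustrative only, not part of the conversation; the 64-byte figure matches the 5.12 ns quoted above, and Ethernet preamble and inter-frame gap are ignored):

```c
#include <stdio.h>

/* Illustrative arithmetic only: per-packet time budget = packet bits / line rate. */
int main(void) {
    const double line_rate_bps = 100e9;          /* 100 Gb/s */
    const int pkt_bytes[] = { 64, 400, 800 };    /* sizes mentioned above */

    for (int i = 0; i < 3; i++) {
        double ns = pkt_bytes[i] * 8 / line_rate_bps * 1e9;
        printf("%3d-byte packet at 100 Gb/s: one every %.2f ns\n",
               pkt_bytes[i], ns);
    }
    return 0;
}
```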
Typically, that sort of packet would be processed by a NIC and placed in memory someplace for some
computation running on the CPU. And the CPU might get interrupted or, you know,
the packet might be put on a queue that the CPU goes out and processes. It doesn't necessarily have
to context switch unless the packet is associated with a different application. Not true, because
there's some work that needs to be done to gather the packet and stuff like that. So yeah, I guess
there is context switching. This problem is very well understood by people. And basically, the way I would put it, I think you would agree with me that
this kind of data center computation is much more prevalent today in the data center compared to
a decade ago. Okay, that's clear. And getting more so with microservices, containers, and all that
stuff that's going on. And so because this is much more prevalent, I would hazard a guess that
today, probably a third of the power consumption in the data center is because of these data-centric
computations in the modern data center. And because Moore's law is slowing down and will
flatten completely in the next couple of generations of technology, you better have another engine
which executes these workloads much more efficiently.
And the main point I would like to make
is that our DPU executes data-centric workloads
somewhere between 10 and 20 times more efficiently
than general-purpose x86, general-purpose ARM, or GPUs.
Because of the ASICs and FPGAs that you're running?
Absolutely. It is because of the architectural innovations in the DPU.
So like I have Ubuntu and I've got a network stack. You can extract a network stack out of
Ubuntu and run it somewhere in hardware in a NIC or something like that. And you can probably do
the same thing with services, clients, and/or servers. But what are you running on the DPU? Is it a Linux operating
system? Is it your own home-grown software? So first of all, let's go back to computer science
fundamentals, right? When you have operating systems on a machine, the purpose of operating
systems is to allow multiple contexts to share the resources of that machine.
OK, nowhere does it say that the operating system software should be in the data path of storage, in the data path of networking and in the data path of the memory.
So, for example, when an application does reads and writes to memory, do you think
the operating system gets in the way? Of course not. It cannot afford to. Well, who told you that
the network stack needs to be run in software? What I'm saying is that when you had networks
which were 100 megabit, 1 gigabit, or 10 gigabit per second, you could afford to do that.
All I'm saying is that today, when you have networks which are 100 gigabit per second, when you have storage devices whose latencies are in the 10 microsecond
to 100 microsecond range, you cannot afford to do these things in software running on general purpose
processors. And sooner or later, the world is going to come to realize that. And this is where
the DPU comes in. Let's talk about how we've worked around that in the past 10, 15 years. We're getting to the
point where we're saying, okay, obviously the CPU was not designed for that. I think all of
this smart NIC movement is an acknowledgement of that: that if we put a processor closer to the wire, we lighten the load on the CPU,
and there's going to be, I think, potentially a whole birth of an alternative movement
where we create an operating system and an orchestration system around these intelligent NICs. How does that not scale to solve the problem
versus going to a completely new processor architecture?
So let me first characterize these so-called smart NICs.
You know, smart NICs are essentially
a bunch of general purpose CPU cores,
mostly ARM, and one or two accelerators, which are kind of ships in
the night. So it's an integration of those two kinds of things. Just to be clear, if a computation
runs inefficiently on a general purpose x86 CPU, it's not going to run more efficiently on an ARM
CPU. Okay, this is the mistake that people are making.
SmartNIC vendors have not made any architectural improvements.
The DPU makes fundamental architectural improvements.
I mean, you're splitting off work that the x86 CPU would have had to do anyways and putting
it on the SmartNIC, which has ARM cores.
So it's not more efficient from
a power perspective. It may be more effective from a processing perspective. The SmartNIC concept
was introduced by Annapurna Labs maybe five years ago. And Annapurna Labs worked very nicely for AWS
for one reason, which is that they were trying to arbitrage the difference in cost between x86 cores and ARM cores.
And that worked out very nicely for AWS. But, you know, now the delta in cost between x86 cores
and ARM cores has narrowed, and you're not going to make use of that arbitrage. So one has to make
architectural improvements to solve this basic problem.
How do you execute data-centric computations more efficiently?
And by the way, that's only one piece of what the DPU does.
The second piece is there's a huge unsolved problem in the data center, which is how do I build a data center fabric,
which provides full cross-sectional bandwidth, which provides a flat
network, which provides low latency and low jitter, which enables disaggregation? So there are two
things that the fungible DPU does and does better than anything else. And they're both architectural
innovations. I don't call taking a bunch of ARM cores, putting them on a chip and integrating it
with a few accelerators. This is not innovation in my book. Architectural innovation is you are trying to do something
maybe 10, 20, 30 times more efficiently than is possible today. That's what the fungible
DPU does. And it does it in a programmable way. Yeah. This is not just ambitious or aggressive.
These are some big, big industry moving changes. So let's talk about how does this
get adopted in a brownfield? So when you're talking about data centers, data centers of this
scale, what is the target audience for injecting this overall design? Because I'm not taking my
existing data center and retrofitting it with DPUs.
This seems like a ground-up approach to building data centers.
It's a fundamentals-based approach, but one thing which is very important for me to say
is that the DPU is designed to be inserted brownfield into existing scale-out data centers.
Let me describe now how that is possible.
So first of all, the DPU has fully industry standard interfaces, PCI Express on one side
and standard Ethernet N by 25 gig on the other side. And the network technology that we depend on is standard IP over Ethernet. In other words, our TrueFabric
innovation is built on top of fully open standards. So in other words, my TrueFabric protocol will run
alongside standard TCP IP over Ethernet, and it will still provide all the benefits that
TrueFabric provides with respect to low latency, full cross-sectional bandwidth, and the ability to hyper-disaggregate resources, which basically
means the ability to disaggregate most resources in the data center. In fact, all resources other
than DRAM and perhaps Optane, which is at 300 nanoseconds. You're saying the PMEM there, you're not talking Optane SSDs, but the PMEM version of Optane.
The PMEM version of Optane, precisely the one that connects to the DRAM pins of a CPU.
The DRAM bus.
Yes, yes.
So in that sort of situation, I plug a DPU into my server that's connected to the network
and it starts doing its thing from a networking
perspective, but to really gain the advantage of having the DPU in place, don't I have to
re-architect the servers so they have like a GPU server or a storage server, you know,
a CPU server kind of thing?
I'm going to answer that question right now, but then let's also focus on, since you guys
have a storage podcast, what are the
benefits of the DPU for storage? And we'll get there in a second, but you're exactly right.
What people are doing already with disaggregation and pooling is in all the hyperscale data centers,
there is a movement to try to disaggregate storage servers from compute servers, AI servers from vanilla servers.
So that has already happened. The problem is that this disaggregation that hyperscalers are doing
is not very efficient because they have not solved the two fundamental problems.
I mean, they're all x86 servers.
Exactly. This is the issue. This is the issue. You've touched
exactly on the issue. Look, the x86 is very powerful. It's the best available engine to
run applications. All I'm saying is it's not ideally suited to run data-centric computations.
It can do it. Of course it can do it. It's Turing complete, but it will not do it efficiently.
It didn't matter five years ago, 10 years ago. It matters now.
And as we go forward, it'll matter more and more.
And people will understand this eventually.
So let's go to storage.
One question that you might ask is,
hey, what are the key problems
that the DPU solves in the storage space?
First and foremost, what the DPU allows us to do
is it allows us to cleanly separate the control plane
and the data plane of a storage service.
I think you'll agree with me that most storage implementations mix control plane and data plane
in a thing called a controller. And in networking, the separation between control plane and data
plane was made some 25 years ago by a company called Juniper Networks. And routing and switching was never the same after that
because equipment became much more reliable
and much more agile.
Everybody builds routers and switches that way today.
Nobody questions it.
In storage, that step has yet to be taken.
Fungible is the first company to do it.
The advantages that you have
when you separate control plane
and data plane and you use x86 for running the control plane and the DPU for running the data
plane is you end up with higher performance, better scale, better economics, better durability,
and better agility. So that's first and foremost. But that factoring of control plane and data plane
storage services is not a given yet.
We did talk last month with Nebulon, who's done this sort of thing themselves as well, but with what they call, I think, an SPU, a storage processing unit, different than yours.
But that aside, if you're going to try to introduce a DPU into a storage environment, what does the back end look like?
What does the front end look like?
How does the control plane get partitioned to the x86 and not to the DPU? Let me describe that,
right? So first of all, let's imagine the highest level that I'm providing a storage service over
the fabric. You know, when people talk about NVMe over fabric, you ask them a question,
hey, what's the fabric? And the
answer to that is blah, blah, blah. There's no concrete answer. Maybe it's this, maybe it's that.
Well, for us, it's not blah, blah, blah. It's TrueFabric. It's clear that for providing storage
services over the network, you need some very specific functionality. You need a network that
provides full cross-sectional bandwidth. You need a network that does not drop
packets willy-nilly. It provides low latency and low jitter. And that problem remains unsolved
today. Let's not dance around that problem. The solutions that are provided by InfiniBand,
by Fibre Channel, and by RoCE v2, they do not scale because they use hop-by-hop congestion control.
That's a fundamental issue.
The only end-to-end protocol known to mankind is TCP.
TCP is slow.
Basically, what we've done with TrueFabric is we've taken the fundamental principles
of TCP and implemented them in a way where you can get hundreds of gigabits per second per engine,
800, 400 gigabits per second, and also provide very low latency and very low jitter.
Now, in order to use our FCP, you need DPUs at both ends, both at the client end and the target
end. But that obviously is a greenfield scenario. So let me now describe how a cluster of DPU-powered storage servers could be inserted into a brownfield environment.
You can imagine that a client, an x86 client with a vanilla NIC can talk to a fungible DPU-powered storage server using NVMe over TCP.
Now, if you're not allowed to put the DPU at both ends,
the only things that I have available to me are either RoCE v2
or NVMe over RDMA or NVMe over TCP.
I don't have anything else. Now, the client,
you know, maybe can make use of several hundred IOPS,
but now when you get to a DPU-powered storage server,
the collection of DPU-powered storage servers
acts as a single entity that provides storage as a service over the fabric.
Well, what does it provide? It provides block store as a service. It provides object store as a service.
And it provides file system storage as a service over the fabric.
So effectively, if it's a file system, you're going to do open, close, read, write.
And the open and close commands are control plane commands, to be clear.
Read and write are data path commands.
And so what I'm suggesting is that if you think of the storage service as living as a service on the network, I want to separate this implementation of the service into two parts.
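A hypothetical sketch of that split, not Fungible's actual API and with all names made up for illustration: open and close are control plane calls (run on x86), while reads and writes go down the data path (run to completion on the DPU).

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch, not Fungible's API. Open/close are control plane
 * operations (infrequent, metadata-heavy, run on x86); read/write are data
 * path operations (per-I/O hot path, run to completion on the DPU). */

/* Control plane (x86). */
static int ctrl_open(const char *path) {
    printf("[control plane / x86] open %s, allocate handle\n", path);
    return 7;                                /* made-up file handle */
}
static void ctrl_close(int handle) {
    printf("[control plane / x86] close handle %d\n", handle);
}

/* Data path (DPU). */
static long dp_read(int handle, void *buf, size_t len, uint64_t offset) {
    printf("[data path / DPU] read %zu bytes at offset %llu on handle %d\n",
           len, (unsigned long long)offset, handle);
    memset(buf, 0, len);                     /* stand-in for real data movement */
    return (long)len;
}

int main(void) {
    char buf[4096];
    int h = ctrl_open("/vol0/file-42");      /* control plane call */
    dp_read(h, buf, sizeof buf, 0);          /* data path call */
    ctrl_close(h);                           /* control plane call */
    return 0;
}
```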
These sorts of things, a file service, an object storage service, and a block storage service, typically take organizations multiple years to produce and to produce well. And they require, you know,
data integrity and, you know, locking and serialization and ensuring that no two clients
can overwrite the same data and the data has to be protected, RAID protected, erasure coded.
There's lots of software involved in the data path and the control path of a storage server.
Are you re-implementing all this on a DPU?
You are absolutely correct that there's a lot of software.
And we are not just a silicon.
While we've been talking about DPUs, you know, the Fungible DPU is an enabler.
Our company is writing software for an NVMe-oF block store,
which is the first storage service that we will provide.
And we're well aware that these problems are hard.
But what we're saying is that once you introduce the DPU
as a legitimate target side storage server, and you use the x86 to run the control plane
of that same storage service, you end up with a much better implementation than has existed in the past.
So let me go through what those benefits might be.
So I think that today, especially in enterprise data centers,
the storage ecosystem is extraordinarily complicated.
And it's complicated because we have a bunch of silos of storage. And these silos each have their own management complexity. And when data is siloed, it is less useful. So the second
piece is that the DPU will fundamentally improve the durability of data by fully exploiting the network
to do erasure coding pervasively across the network.
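For readers unfamiliar with the term, here is the simplest possible illustration of erasure coding, a single XOR parity shard over several data shards; production systems use schemes such as Reed-Solomon that tolerate multiple losses, and nothing here represents Fungible's implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: one XOR parity shard computed over N data shards.
 * If any single data shard is lost, it can be rebuilt by XOR-ing the
 * parity with the surviving shards. */

#define NSHARDS 4
#define SHARD_BYTES 8

int main(void) {
    uint8_t data[NSHARDS][SHARD_BYTES] = {
        "shard-0", "shard-1", "shard-2", "shard-3"
    };
    uint8_t parity[SHARD_BYTES] = {0};

    /* Encode: parity = d0 ^ d1 ^ d2 ^ d3. */
    for (int s = 0; s < NSHARDS; s++)
        for (int i = 0; i < SHARD_BYTES; i++)
            parity[i] ^= data[s][i];

    /* Simulate losing shard 2, then rebuild it from parity + survivors. */
    uint8_t rebuilt[SHARD_BYTES];
    memcpy(rebuilt, parity, SHARD_BYTES);
    for (int s = 0; s < NSHARDS; s++)
        if (s != 2)
            for (int i = 0; i < SHARD_BYTES; i++)
                rebuilt[i] ^= data[s][i];

    printf("rebuilt shard 2: %s\n", (char *)rebuilt);
    return 0;
}
```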
We will also dramatically improve the performance of storage.
Essentially, we promise to provide storage at the speed of the devices.
I mean, you will acknowledge that SSDs today,
the bandwidth is some 25 to 50 times the bandwidth
of a spinning disk.
And the latency is somewhere between 1/50th and 1/500th
of the latency of hard drives.
So what we wanna do is we wanna provide
this exquisitely large improvement
in performance directly to applications. And what I'm suggesting to you is that the network
paradigms available today do not permit that, while TrueFabric and the DPU fully enable that.
Lastly, actually, next to last,
we plan to improve the security of data,
again, dramatically,
by doing pervasive encryption
both at rest and in motion.
And finally, to provide computation
right next to where the SSDs sit.
So, for example,
today, if you try to do
very large joins of two large databases,
we can do that probably 50 to 100 times faster
than is possible today
because we enable very high performance
processing of data over the network.
And this DPU-enabled storage server,
is there an x86 CPU as well?
Inside the storage server,
there is no need for an x86 CPU
because the entire computation is handled by the DPU,
because all the DPU is doing is the data path functions.
So let's be very clear about separation between control plane and data plane. Let's take NVMe
over Fabric as the service. When I'm creating a volume, when I'm deleting a volume, or when I'm
assigning attributes to a volume, those are control plane functions.
When I'm reading a block and writing a block,
those are data path functions.
And the control plane functions of creating a volume,
deleting volume, assigning attributes,
are done by the control plane in our architecture,
which is implemented by x86, which is the right engine for
the right job. When we're talking about reading a block, writing a block, the DPU is the right
engine. That factoring that you're going through with respect to the control plane and the data
plane, I know in theory that can be done, but today it's very unusual to see that. And there's mountains of software involved
in doing that sort of thing across those two different computational architectures.
The other question I have is the DPU. It's not ARM. It's not x86. What is its programmability
environment? Does it accept Python? Does it accept C?
What is it? Does it run an operating system? Probably not.
Well, if you will permit me, the DPU runs standard Linux on it as the control plane inside the DPU.
Now, Linux, as you know, is an interrupt-driven environment,
and we are talking about the DPU executing primarily data path functions. It is quite
well understood that data path functions need to be run in a run-to-completion manner,
not an interrupt-driven manner. This is the reason Intel has defined things like DPDK and SPDK.
And despite all of the attempts to run to completion on standard CPUs,
the experiment has not turned out great, okay?
Because you're still limited by architectural limitations.
We're talking about, this is the storage world that we exist in. There is literally
exabytes of data behind storage servers that are running x86, run to completion types of code
in this environment. More than likely, it's all running on Linux as well. Yes. Fundamentally, I don't have a problem with the vision. I think it is reasonable to say that
we've had this architecture for 30 years driving the data center. I think I looked at a stat from
a few years ago. We were expected to have 44 zettabytes of data sitting on hard disks somewhere.
We need to process that data.
x86 by itself won't get us there.
So we need a solution to that.
But the problem with that is that we have 30 years of technical debt, which is what Ray is alluding to. And overcoming that
technical debt is hard. I think one of the challenges for Fungible is intellectually getting past
the technical debt, 30 years of it. You guys may have the solution, but you still have to get past the
mentality of those 30 years of technical debt.
Well, let me maybe say two things to that.
What I would say is that, first of all, back in 1996, people told me the same thing about
networking, which is, oh, you're trying to do what?
You're trying to separate control plane and data plane? Are you nuts? And today, nobody questions that.
The point I'm simply trying to make is that architectures that might look grand and glorious
eventually fade away because they don't scale anymore. I'm just trying to tell you, in my
opinion, storage architectures in the future will be done
via separation of control and data.
Now you're telling me that there's hundreds of millions
of lines of code in storage.
I agree with that.
I will just tell you that when you decompose the problem
into data path and control plane,
that the amount of data path software is de minimis. It's probably one to two
percent of the overall complexity. So the bulk of the complexity stays exactly where it should stay,
which is on x86. But that sort of factoring that seems easy to do in theory is not necessarily
easy to do in practice. And I've been through it to some extent with networking,
so you know the challenges that are involved here.
But you had to reimplement all the networking hardware involved, right?
Yes.
But what I'm trying to tell you is that we are,
while I don't want to make any announcements on this podcast,
what I will say is please stay tuned for systems announcements
from Fungible over the next few months. And I think we will surprise people. And we understand
fully that, you know, doing a storage product from scratch, it's hard. We understand that fully, but we are well on our way to doing it.
And we, you know, when you have the architecture fundamentally right, we're not fighting mother
nature. That's what I'm trying to say. So let me, you know, you had asked a question about
the programming environment of the DPU. So please allow me to answer that question.
Inside the DPU, you have essentially two different pieces.
One which is a control plane which is embedded in the DPU.
That control plane executes on standard Linux of your choice.
You want to run Ubuntu?
Please run Ubuntu.
You want to do Python, PHP, whatever?
You do it, okay?
But the main action on the DPU
is a run to completion data path,
which is a combination of general purpose CPUs
and very tightly coupled accelerators.
And you program this thing in C, in ANSI C.
You don't program it in microcode.
And this is the correct way to write programs in the data path.
It's run to completion.
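A generic sketch of what a run-to-completion loop looks like, with an in-memory array standing in for a receive ring; this is illustrative only and not Fungible's code, but it shows the key property being described: each packet is picked up by polling and finished entirely before the next one is touched, with no interrupts or context switches.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only, not Fungible's code: a run-to-completion data path
 * polls for work and finishes each packet before touching the next one.
 * Accelerator calls would be made inline at the point marked below. */

struct pkt { uint32_t flow_id; uint16_t len; bool needs_crypto; };

/* A small in-memory array stands in for a hardware receive ring. */
static struct pkt ring[] = {
    { .flow_id = 1, .len = 512, .needs_crypto = true  },
    { .flow_id = 2, .len = 128, .needs_crypto = false },
    { .flow_id = 1, .len = 256, .needs_crypto = true  },
};
static size_t ring_head;

static struct pkt *poll_rx(void) {           /* busy-poll, never interrupt */
    if (ring_head >= sizeof ring / sizeof ring[0])
        return NULL;
    return &ring[ring_head++];
}

int main(void) {
    struct pkt *p;
    while ((p = poll_rx()) != NULL) {
        if (p->needs_crypto)                 /* inline accelerator call here */
            printf("flow %u: inline crypto on %u bytes\n",
                   (unsigned)p->flow_id, (unsigned)p->len);
        printf("flow %u: packet fully processed (%u bytes)\n",
               (unsigned)p->flow_id, (unsigned)p->len);
    }
    return 0;
}
```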
And this is not theoretical stuff.
We are essentially done with implementing
the entire data path of storage.
So I'm not speaking that this is an experiment
we will do sometime in the future.
What I can tell you is compared to a dual socket x86 server with SSDs plugged in, we are about 20 times faster.
Measured in terms of IOPS.
Need I say more?
That's a great place for us to stop at this point, Pradeep.
And we haven't really talked about how you're going to get this thing to market and all that stuff.
And, you know, there's probably, I've got at least a thousand other questions on this
subject. You know, I would love to continue this conversation because we didn't get into
the real details. And by the way, as for the problems that you guys point out about
architectures that have been in place for a long time, it sometimes takes a long time to break through them.
But, you know, I have always been attracted to problems
that were fundamentally hard industry problems
which would move the lever by a large amount.
And this is one such problem.
This is why I had no reason to leave
Juniper Networks and start another company. I did this only because I saw the opportunity
to make a difference. That's certainly the case. I can see that.
So Keith, any last questions for Pradeep before we close?
No, not that it wouldn't take another hour or two. This has been a really interesting conversation.
Guys, I would love to continue this conversation.
You know, it's really kind of you to invite me to your podcast.
And I hope we do continue the conversation on the topic,
specifically on the topic of storage.
And perhaps you can do another podcast in a couple of months.
Okay.
Pradeep, anything else you would like to say to our listening audience before we close?
Well, the only thing I'd like to say is
please stay tuned for really interesting announcements
from Fungible on products that are based on the Fungible DPU.
Well, this has been great.
Thank you very much, Pradeep, for being on our show today.
Thank you, Ray, and thank you, Keith.
You've been very kind.
Okay, next time
we'll talk to another system storage technology person. Any questions you want us to ask, please
let us know. If you enjoy our podcast, tell your friends about it, and please review us on iTunes
and Google Play as this will help get the word out. That's it for now. Bye, Keith. Bye, Ray.
And bye, Pradeep. Bye-bye. Until next time.