Grey Beards on Systems - 140: Greybeards talk data orchestration with Matt Leib, Product Marketing Manager for IBM Spectrum Fusion
Episode Date: December 12, 2022
As our listeners should know, Matt Leib (@MBleib) was a GreyBeards co-host. But since then, Matt has joined IBM to become Product Marketing Manager on IBM Spectrum Fusion, a data orchestration solution for Red Hat OpenShift environments. Matt's been in and around the storage and data management industry for many years.
Transcript
Hey everybody, Ray Lucchesi here with Keith Townsend.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards bloggers together with storage and system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
And now it's my great pleasure to introduce Matt Leib,
former Greybeards on Storage co-host, old friend,
and now product marketing manager for IBM Spectrum Fusion.
So Matt, why don't you tell us a little bit about what you've been doing
since your days with the Greybeards and what IBM Spectrum Fusion is all about?
Well, thank you, Ray. It's really good to be here.
So when I left the Greybeards, which was just about a year ago or so, I joined IBM
as a product marketing manager, and I was assigned a product called Spectrum Fusion. Spectrum Fusion is an amorphous name that
a lot of people don't tie to IBM, but it really is aligned to our entire storage portfolio in a number of ways. Essentially, what it is is an orchestration element for the storage in a Kubernetes OpenShift
environment.
As you know, the hardest part in moving from a POC to a full-blown production system
is that you don't have orchestration for the storage elements.
And we are bringing that sort of agility to the storage side
that OpenShift and Kubernetes and, for that matter, Ansible
bring to the application side in an attempt to reduce latencies and essentially take a centralized point of management over the entire platform.
So that's what Spectrum Fusion is in a nutshell.
When I look at the website for IBM Spectrum Fusion, it talks about HCI and software
defined storage. It really doesn't talk about orchestration much at all, Matt.
You're right.
And that is in part my fault. So we are in the midst of a website update. There will be some content changes. Most of that website has been
dedicated to the initial offering, which was our HCI product, which is a purpose-built,
I'm not sure that HCI is the ideal naming convention for it, because it's really a
converged architecture with storage nodes, storage-rich nodes, and application-centric
processor-based nodes, built-in switching, etc. And it's a really wonderful architecture. But what we've added to it
are a couple of iterations, including a software-defined storage element, which initially
installed onto a VMware cluster. That was our first SDS implementation. And that's grown.
So we now have iterations that sit on AWS and Azure, as well as, of course, IBM Cloud.
And those are growing.
We can offer you software-defined iterations sitting on IBM Z, with other platforms coming soon.
So I guess to start out, what would help me frame kind of what is IBM Fusion is to say,
what's the primary use case? Well, think about it like this, right?
As a storage administrator, you've always wanted all things sort of encompassed in one user interface.
And that includes provisioning new storage, replicating out to different S3 elements
or even internally into NFS-based elements.
And most importantly, your backup infrastructure.
So replication takes care of some of that, but it doesn't take care of the, you know,
the true elements of what backup really brings to the table. You know, recovery, of course,
is a big piece of it. And when you recover from, say, one S3 site on AWS or a blob site on Azure
into, say, your home environment, wherever that might be, you're not necessarily guaranteed with multiple replications out in the world that you're getting the most current data.
And so that can be, you know, a bad element to a restoration.
You want everything as current as possible, as well as, quite frankly, a site-to-site failover capacity
for DR. And those components are built into Spectrum Fusion. And there are sort of variables
depending on what platform you're coming from and going to in terms of a failover
scenario. But if you're doing it on purely knowledge of the metadata across the entire platform
and taking a snapshot out of blob and bringing it into your single point of reference,
we can do that for you.
Matt, I am really confused here.
So you started off talking about HCI for OpenShift,
and you mentioned that it's a VMware cluster that's running the solution,
and now you're talking about the backup DR kinds of capabilities.
And, of course, we started talking about orchestration as the main play here.
What the heck is this damn thing, man?
So it is an orchestration element. In its sort of base configuration, what you've got is the ability
to take this storage associated with the OpenShift applications and place it where you want it,
when you want it, recover it from where it is to where you want it to be, et cetera, et cetera, et cetera. The goal here is to bring the same agility to the data
that OpenShift brings to the application.
So the key here is the storage.
And by the way, the two iterations you discussed were an OpenShift cluster based on bare metal.
That's the HCI model, as well as a different iteration where it's deployed within, at least initially, your on-premise VMware infrastructure.
Oh, so it does support OpenShift bare metal.
It's not running a VMware cluster under OpenShift.
No, no, no, no.
In fact, the initial offering for the SDS iteration was a hypervisor-based VMware cluster with OpenShift loaded as close to ring zero as possible.
As it moved on, we can now deploy the whole data platform management environment within AWS
or within Azure, or for that matter, on IBM Cloud.
So this is basically a data platform with data orchestration
encompassed in one solution?
Exactly, right?
It is an entire data fabric, yes.
You're confused, right?
Talk to me about Kubernetes. So is this a data fabric for Kubernetes, or is it data...? I'm still kind of confused here, man.
So it is an attempt to make Kubernetes-based data as accessible to the application as it possibly can be, right? We have methodologies,
for example, for reducing the latency between where the data resides and the application resides.
So if you're, say you've decided to place your app and your OpenShift environment within AWS, just for example. But your data exists primarily
in an on-premises NFS environment, and a replication for that sits in Azure.
These are really geographically diverse environments, right?
So another thing that Spectrum Fusion brings to the equation is an intelligent caching methodology.
And this I liken to sort of 10 years ago when we put sort of hot data, cool data, and cold data within a single frame of storage.
A hybrid storage solution. Yeah, yeah, yeah. And truly hybrid. In that manner, we support anything that's S3 compatible. We support
anything, and I mean literally anything, including competitor storage, that is NFS compatible.
Block is coming soon. I'm sure you heard the announcement that IBM is now doing the management
level product management, product marketing side of Ceph and ODF as well.
So those are part and parcel to us and this solution as well.
So I guess bring it up to the OpenShift platform team.
This team is responsible for basically the underlay and delivering OpenShift to a developer community,
making most of this as invisible as possible.
What skill set should this platform team need to deploy IBM Fusion
and integrate it with their existing OpenShift platform? So as far as implementing it goes, it's just an application that gets loaded into OpenShift, right?
Spectrum Fusion is a container-based app that gets installed in the same way any other app gets installed into OpenShift. You take your operator, you initiate your process,
and 30 seconds later, you have an app installed.
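Matt's "take your operator, initiate your process" description maps to the standard Operator Lifecycle Manager flow on OpenShift. Below is a minimal sketch of the Subscription object such an install typically creates; the operator package name, catalog source, and namespace are hypothetical placeholders, not IBM's published values.

```python
import json

# Sketch of an OLM Subscription, the object that kicks off an operator-based
# install on OpenShift. All names below are hypothetical placeholders.
subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "spectrum-fusion", "namespace": "ibm-spectrum-fusion-ns"},
    "spec": {
        "channel": "stable",                     # update channel to track
        "name": "ibm-spectrum-fusion-operator",  # hypothetical package name
        "source": "certified-operators",         # catalog source in the cluster
        "sourceNamespace": "openshift-marketplace",
    },
}

# Applying this manifest (e.g. via `oc apply`) would have OLM pull the
# operator and run the install; here we just render it.
print(json.dumps(subscription, indent=2))
```

Once OLM reconciles the Subscription, the operator's custom resources become available, which is what makes the "30 seconds later, you have an app" experience possible.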
Yeah, but now we're talking storage orchestration,
data movement orchestration,
all this other stuff that comes along with this guy, right?
Absolutely.
And within that same interface,
you can deploy additional storage on a cloud provider as a target for either replication or primary source of data.
You can grow or shrink that as desired.
You can back up and recover all through that same Spectrum Fusion interface.
And within the HCI environment, if you've got, say you've got shared data centers with a primary A to B sort of architecture using the HCI infrastructure, you can fail over, initially it's going to be a 50-mile radius with site-to-site failover A to B, B to A.
That's all done through that same user interface.
Really what we tried to do, and we did incorporate some of the existing IBM sort of tools, by that I mean Spectrum Scale, Spectrum
Discover, et cetera, into this solution.
Because once you've got a wheel invented, why reinvent it?
But this overall view kind of sets us apart from sort of the rest of the world for storage within an OpenShift environment.
Well, it does the same thing for anything with Kubernetes storage.
I mean, nobody I know talks about Kubernetes storage in a hybrid cloud environment,
saying that I can run an application on one K8s cluster and have the data sitting in another cluster someplace else.
I mean, it's a 50-mile thing.
Is that because you're doing synchronous replication?
There is replication taking place.
All that is is designed through the interface.
And the interface.
So, I mean, Kubernetes, all this stuff typically is done through DevOps and, you know, things like YAML files and, you know, stuff like that.
Not a lot of GUI kinds of stuff.
You know, I guess CLI is available in Kubernetes.
So, I mean, when you're talking about the Fusion interface, what are we talking about?
It's a GUI.
Obviously, you know, it can be controlled via command lines.
Everything is API capable or API enabled.
And the S3, for example, API is absolutely the same one that AWS uses. We have a goal of making everything as standardized
and easy to function as possible.
But if it's an API, you can control it through REST.
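The transcript doesn't name Fusion's actual REST endpoints, so here is a hypothetical sketch of what driving an API-enabled action over REST generally looks like; the host, path, payload, and token are all made-up placeholders.

```python
import json
import urllib.request

# Hypothetical illustration only: the endpoint and payload below are
# placeholders showing the general "everything is API-enabled" pattern,
# not a documented Spectrum Fusion API.
BASE = "https://fusion.example.com/api"  # placeholder host

payload = json.dumps({"policy": "daily", "target": "s3://backup-bucket"}).encode()
req = urllib.request.Request(
    f"{BASE}/backup-jobs",  # hypothetical resource path
    data=payload,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # placeholder credential
    },
)

# The request is only constructed here, not sent; urllib.request.urlopen(req)
# would submit it against a live endpoint.
print(req.get_method(), req.full_url)
```

Anything the GUI does could in principle be scripted this way, which is the point Matt is making about API enablement.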
So let's talk about the type of applications
people would build and the types of data
that would host on there.
We're talking about cloud-native type of things,
I'm using CockroachDB for my database or whatever.
What type of, like, is this for unstructured data,
targeted unstructured data, databases?
What are we hosting on this?
We're hosting anything that's OpenShift compatible, right?
Any database, and pick it, Mongo, et cetera.
Any application, I mean, obviously, you know, if an application is somewhat stateless, the benefit, pardon me, to that data orchestration layer is less,
right? But if the application is stateful, meaning it's got any significant data attached to it,
and typically that today that means databases, then it's a great platform for that.
Yeah, but what you're talking, the back end of something like that
would be like a Rocks database or a Block file or a file system
or object storage.
So where does Fusion fit into that list of, I mean,
you mentioned early on that Block was coming, so Block's not there.
Well, Block is there. What we have done up until this point is relied on an interpreter to support
block storage. We don't need that anymore. And that was the reason that
Ceph was so significant to this.
Ceph has justified everything, though, right?
Exactly.
So what applications would we host?
Any application that can be containerized?
So I think what I'm missing is the portability. So I'm addressing my storage
via CNI or, uh, what is it, CSI, and namespaces in general. So I'm saying, hey, store this new data
to this namespace, which Kubernetes has not been good at in the past.
Let's say that this storage has been presented
as a volume attached to the host.
For system volumes, yeah.
Or system volumes.
That's not very portable
because it's attached at that cluster level
and there's no orchestration to move that to another cluster, etc.
There's no abstraction when we're talking about physically mapped disks.
But the question there, Keith, is whether you can bring a blob namespace, an object store, and an AWS namespace into this same namespace, which you can; then all of that data is accessible by the application through the container native storage.
So this becomes that container native storage, and you take care of the underlay for me.
Exactly right. You just point to your storage element or elements as they exist, and then they become available to the application.
I'm kind of familiar with persistent volumes and persistent volume claims and things of that nature. So when I'm talking to Fusion, does the persistent volume claim end up being on a
Fusion device, per se? I mean, is that how this works?
No, it's whatever device. Well, it's made aware of within Fusion, but it isn't a Fusion device.
It's an NFS location, an S3 location, and mixing and matching is totally acceptable.
No need for interpreters, as I say, and it's just visible through Spectrum Fusion to the application, regardless of where it's located.
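The claim flow Matt describes can be pictured as an ordinary Kubernetes PersistentVolumeClaim: the application binds to the claim, and only the storage class hints at the orchestrator-managed backing, wherever the data actually lives. The storage class name below is a hypothetical placeholder.

```python
# Sketch of a PersistentVolumeClaim as described above. The claim is ordinary
# Kubernetes; only storageClassName (a hypothetical placeholder here) points
# at whatever class the Fusion-managed storage exposes, whether the bits
# physically live on NFS, S3, or elsewhere.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data", "namespace": "shop"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "50Gi"}},
        "storageClassName": "fusion-managed",  # hypothetical class name
    },
}

# The application only ever sees the claim; where the volume physically
# resides is the orchestrator's concern.
print(pvc["spec"]["storageClassName"])
```

This is why mixing and matching NFS and S3 locations stays invisible to the application: the claim abstraction never changes.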
So it sounds to me that Spectrum Fusion is a combination of things.
Or I suspect the history of this is several different products. So the HCI solution to provide the, you know, that traditional block
level storage or NFS based storage from a SDS mechanism. Then there's this software layer part,
this orchestration piece, which we've been focused on that can front end either that HCI storage or NFS storage from another
storage provider, or even block EBS or S3 storage from AWS.
Basically, two components.
Where the storage will end up being,
there's one set of products that used to be Fusion IO or IBM Fusion,
not Fusion IO, but IBM Fusion. And then,
the IBM Fusion orchestrator, storage orchestrator,
which can basically abstract any type of storage
for your OpenShift environment.
And I'll take it a step further. Can I take it one step further?
Go for it, Matt.
All right. Because it really is an intriguing product. We have a caching element, which I
alluded to earlier. But that caching element means that if you have your application in a number of locations still visible through Spectrum Fusion up through OpenShift,
then you could potentially be accessing that in other iterations, let's say not an IBM Spectrum Fusion iteration,
you could be accessing old data by the application, right? But Spectrum Fusion has
that intelligent cache, meaning that it copies an element of that storage, say 25 megs of necessary data gets flagged as hot and it gets moved, as opposed to migrating the entire data set of, for example, large database, so that that existing application point can access the most current data.
This just copies a small amount of required data as it's needed.
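The copy-only-what's-hot behavior Matt describes can be illustrated with a toy chunk-granular cache: fetch only the chunks an application actually touches from the remote copy, rather than migrating the whole dataset. The 25 MB chunk size echoes the figure in the conversation; everything else is illustrative, not Fusion's implementation.

```python
# Toy illustration of chunk-granular caching: pull only the chunks a read
# actually spans from the remote copy. Chunk size and names are illustrative.
CHUNK_SIZE = 25 * 1024 * 1024  # 25 MB, the figure used in the conversation

class ChunkCache:
    def __init__(self, fetch_chunk):
        self.fetch_chunk = fetch_chunk  # callable: chunk index -> bytes
        self.local = {}                 # chunk index -> cached bytes

    def read(self, offset, length):
        """Serve a byte range, copying only the chunks it spans."""
        first = offset // CHUNK_SIZE
        last = (offset + length - 1) // CHUNK_SIZE
        data = bytearray()
        for idx in range(first, last + 1):
            if idx not in self.local:       # cache miss: copy just this chunk
                self.local[idx] = self.fetch_chunk(idx)
            data += self.local[idx]
        start = offset - first * CHUNK_SIZE
        return bytes(data[start:start + length])

# Simulated remote dataset: each chunk is filled with its own index byte,
# and we record which chunks were actually transferred.
fetched = []
def remote(idx):
    fetched.append(idx)
    return bytes([idx % 256]) * CHUNK_SIZE

cache = ChunkCache(remote)
cache.read(0, 10)          # touches only chunk 0
cache.read(CHUNK_SIZE, 5)  # touches only chunk 1
print(fetched)             # only the two touched chunks were copied
```

The design point is the same one Matt makes: the application sees current data at its access point while only a small flagged region moves, not the entire database.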
So it's essentially coming down as sort of a hybrid cloud solution or a multi-cloud solution for
Kubernetes. That sounds familiar. Yeah. I'm just trying to zero in on what we're talking about
here. I'm still sort of struggling because, I mean, again, you mentioned OpenShift and you
mentioned VMware in the same breath.
And I'm thinking, I can't, I don't see how these two work together.
I mean, if you had talked about Tanzu, okay, I could see this,
but we're not, we're talking OpenShift.
So this is a, this will be a great one for Lightboard,
but the way that I'm envisioning it.
And we're going to be doing those. We've got Lightboard videos coming up.
But quite frankly, the software defined as it sits on VMware versus the hardware
iteration, while the same software manages the storage, the differentiation there is simply this is a different delivery
iteration
it's a different methodology
for delivering that
as I'm picturing my varying
options
I need software defined storage
period
Let's take away kind of Kubernetes, OpenShift, VMware, whatever. I
need a storage platform. That storage platform needs an OS. It needs clustering software. It needs
an underlay. IBM Fusion gives me the option of being able to do this in EC2, VMware vSphere, OpenShift, Bare Metal, etc.
I have a few different options to build my SDS software platform.
So I can then take the orchestration software and present that software-defined storage to OpenShift. However, I'm not limited to that HCI type of architecture.
That orchestrator can present NFS from Dell EMC or NetApp or et cetera.
So there's two distinct parts of it.
There's the software-defined part: creating the physical storage pool with what we labeled HCI in the
past, which is now just software-defined storage. And then there's the orchestration of that
software layer. So two distinct functions that help to provide the overall solution. It sounds to me, and correct me if I'm wrong, Matt: I don't have to do the SDS stuff. I can just say, okay, I can use EBS or NFS or whatever.
And then the part that actually interfaces with OpenShift,
and I think this is where it can get confusing, Ray.
I can kind of have OpenShift providing storage to OpenShift.
Well, you can with ODF, right?
And that is, you know, it's a Red Hat product.
Well, not just with ODF.
I can take, if I'm hearing you right, I can take bare metal OpenShift.
I can say, okay, I need some bare metal servers to form my SDS.
I can use Kubernetes or OpenShift to be that SDS managed by IBM Fusion, pooling your global data into essentially a large pool of structured, unstructured, anything
sitting on whatever storage is defined.
The beauty of the HCI product is it's really designed for performance, right?
It's all NVMe storage.
It's all high-speed switch gear.
It's the fully fleshed-out physical architecture
to support what is ultimately an AI platform.
Of course, we're not talking about AI.
We've got special products within IBM that really are purpose-built for AI. And we support the compute layer all the way down to running
at ring one in a VMware environment, right?
What we like about the HCI environment is that it reduces the overhead on the compute element by taking away that VMware, for lack of a better word,
that performance tax, as well as the physical cost.
So if I'm a OpenShift platform engineer architect, I can cater to my performance layers. If I want the highest level performance for a
database, I can build it on, I can build this on bare metal, go fast HCI that's managed and presented by IBM Fusion,
kind of the HCI solution.
If I don't really care, if I have a performance layer
that performance isn't necessarily the key metric,
I can present this via some cheap and deep NAS that presents storage as
NFS.
And then this front end by IBM fusion orchestrator.
But at the end of the day,
as a platform engineer,
I'm creating my storage classes.
I have the option to get as geeky and have all the knobs that I want up to I really don't care about storage.
I just need to present it to clusters. And if my application developers ever or my DevOps team
ever tells me they need more speed, I can, since it's abstracted to begin with, I can go back and redeploy this on HCI without really impacting my application architecture or namespace, etc.
I'm changing it all on the back end.
Exactly.
And the beauty there is it's transparent to the application user.
It's all managed through, again, we're reducing silos.
We don't have to go to a storage manager.
We do this through the OpenShift administrator.
It's all through that same interface.
Yeah, so if I'm in a public cloud or private cloud and I have access to EC2 or whatever bare metal or VM that IBM Fusion HCI supports. I can build a SDS solution,
sans my storage and compute team, or I can go to my VMware team and say, hey, give me these
eight VMs with this capability and I can build it there, or I can even give the platform over
to my vSphere team and say, hey, build me a dedicated SDS cluster. I can do a Burger King: have it my way.
Well, that's absolutely right. And we have many customers right now that are running, you know, multiple HCI racks,
as well as officially software defined sitting on VMware. We have, that's an option. And again,
that storage is presented through all of those unique nodes into one storage pool with orchestration elements built in.
I'll take it a step further
because we're really making massive developments in this.
We're taking that HCI model into newer iterations.
The hyper-converged will allow, moving forward, any server element
that's been certified as viable for OpenShift. So if you want to use, you don't want to use
Lenovo servers, which are the de facto standard for the HCI, but you want to stand it up on a bunch of spare Dell equipment,
as long as it's certified for OpenShift, it works. That's the bring your own hardware model.
We're also doing smaller iterations where currently it's a number of storage-rich nodes and a number of compute-rich nodes,
down to something like a three-node cluster,
which will allow for edge use cases,
which I think are going to be very significant,
both for smaller customers and for, you know,
if you've orchestrated or created your entire application platform onto container methodology, you could have your remote offices each run one of these three node clusters, and then all feed that data back into one centralized point of reference with replications and scheduling built in.
We've been talking about the storage aspect of this, but you mentioned backup and recovery as
part of the solution? Yeah. We see that as table stakes, right? We see backup and recovery,
as well as site-to-site failover as what it should be, which is part
of the storage environment, right? I would say not every vendor believes that, Matt.
No, I understand that. And I think that the reason that we, the biggest reason that we do believe
that is because we can do it, right? We'd probably be out there saying, well, you can't, because how can you do that?
But the answer is you can. You mentioned discover as discovery kinds of things as well in here.
So it's like Spectrum Protect, Spectrum Discover, Spectrum Scale, and Spectrum... I have no idea what
else you might be putting in this thing. Is that what's sort of packaged into this thing or what?
So we took elements that we already had.
Spectrum Discover is a perfect example.
It was a technology that was a standalone, and that's essentially metadata tracking.
And we said, well, we need that in Spectrum Fusion. Can we containerize it? Yes.
Can we package it up inside Spectrum Fusion so that any application can be aware of what data
is most current, regardless of where the application resides and where the data resides? And the answer is, yes, we can.
And we have.
We don't like to talk about these as separate components.
It's all part of Spectrum Fusion.
So I'm going to push that analogy one point forward.
Yeah, one place forward.
I'm coming out of re:Invent last week. And I got briefed by the IBM cloud team over the past few weeks, and IBM is pushing this idea of Cloud Paks. So I should be able to go to AWS and just buy this from the marketplace without really giving this a lot of thought. Have you reached that level of maturity yet?
So Marketplace has a whole lot of governance over it.
Currently, if I want to deploy this on AWS,
I deploy an OpenShift cluster via Marketplace,
but then I buy Spectrum Fusion from IBM
and install it as an application into that existing OpenShift cluster.
As time moves on, we do anticipate that governance going through and the customer's ability to purchase Spectrum Fusion and OpenShift for one marketplace line item. Swipe a credit card,
install your cluster. Where does Ceph fit into this sort of thing? I mean, up until this point,
I don't know, it's been probably a month and a half or two where Ceph has sort of been moved
into IBM storage program management. But I mean, it's a non-trivial solution. I mean, obviously,
it's its own storage, its own object storage, file system, block. It's got everything, right?
Is Fusion going to tie into Ceph? Or is Ceph going to be available behind Fusion? So, again, we are storage agnostic.
The question that you asked was, would Ceph be a part of that agnosticity?
And the answer is yes.
So separate kind of program project, but just like if I was consuming NFS from another storage vendor. I'm consuming and presenting Ceph from IBM Ceph platforms.
Ceph's got object too.
I mean, I guess we talked about S3.
Yeah, Ceph is unique in the industry in that it's really an overarching,
you know, the word unified is used a lot in this industry.
But we believe that the Ceph offering brings that unified storage up to Spectrum Fusion and it becomes, again, invisible to the user.
Yeah, I would question the validity of that use case.
I don't know if I want to look at the complexity of Ceph.
They just don't seem like compatible use cases like Fusion, IBM Fusion.
I want to simplify and provide a consistent level of integration to my underlay for my developers and platform team.
Let's do that by presenting by front-ending stuff.
I just don't.
I don't personally buy that.
So you've got your choice
there. What will make it easier
is when we come out
with an appliance
that is sort of plug-and-play
but it's dedicated to
Ceph storage.
That would make it easier.
I don't want to build my own, build
and optimize my own Ceph placement.
I understand that. But anyway, the development that Red Hat has placed into Ceph
over the course of the last year or two has been substantial.
It's not just overarching huge pieces of spinning disk.
It supports newer architectures and newer disk formats.
I mean, there are other appliances available for Ceph as well, if you want to go down that
route and stuff. Yeah, I just don't understand. I don't understand kind of the...
Adding the complexity of Ceph to the simplicity. Yeah, I don't understand what am I getting in return for adopting Ceph.
Because if I remember correctly, Ceph was kind of aimed at solving this problem for non-Kubernetes environments for like the open stack days.
And Ceph was, you know, this cool platform that you can bring different types of storage, blah, blah, and present different types of storage.
And IBM Fusion just seems like a more modern way to solve that problem.
And I can leave that complexity behind and just choose simpler platforms, either NFS, S3, block, and call it a day.
I'm not 100 percent.
I think that's a great conversation for another day of like,
where would you select Ceph in a modern architecture
versus having to have to just support a legacy deployment of Ceph.
It's amazing that we're getting to the days
where we're talking about legacy deployments of Ceph.
Yeah.
I'm trying to understand what's the software-defined storage solution underneath Fusion.
I mean, we talked about, you know, it provides storage agnosticity, and I understand all that, how that would work and would be beneficial.
But, you know, a lot of people that are going to be deploying Fusion might be deploying
it in a software defined or an HCI side.
So, I mean.
Yeah.
So, I think, Ray, I think where we're losing our friend, Matt, who's now going over to
the dark side of the vendor life.
And it took me a little bit to decipher kind of Matt's marketing speak versus Matt, the independent analyst.
OK, go ahead.
There's several products here that have been umbrella'd into IBM Fusion.
There was a IBM Fusion HCI solution, which was their software defined solution.
I don't think Matt wants to call it that
anymore. I think Matt wants to call it part of the IBM Fusion platform. And part of that platform is
the old product or older product or an older name product called IBM Fusion HCI that provided that
software-defined storage layer. And that has a bunch of capability. There
was IBM Fusion data protection that had a bunch of capability. There was IBM Fusion, whatever.
The marketing has moved on to just call it all IBM Fusion with feature capabilities. And what
you and I have been confusing the whole conversation is kind of like, wait, is it an SDS solution?
Is it an orchestrator?
Is it a backup solution?
And the answer is yes.
Well, I mean, so IBM has their own HCI.
I mean, they've had VersaStack out there for, God, a long time.
It's a VMware-focused solution.
It doesn't talk OpenShift as far as I understand,
but you've got that, and you've had container storage
through CSI solutions for your Flash systems
and your other various storage systems.
So those all certainly existed as well.
I was just trying to, you know, it's a combination
of OpenShift, Red Hat, and storage that kind of have enabled sort of a requirement for this.
I think I'm buying, you know, disclaimer, IBM is a client, so take it with a grain of salt.
So I think I'm buying the overall story. Now, execution is another thing. If I'm an OpenShift platform group and I want
OpenShift-compatible storage, whatever that may be, whether it's data protection,
software-defined, you know, bring-my-own storage nodes, front-ending EBS or some
other thing I wanted to present from a software perspective. If I'm specifically an OpenShift platform group and I want the best OpenShift storage platform, that's what IBM is trying to position. Or you can go into IBM's storage portfolio and pick individual pieces of this.
But specifically, if you want to know what IBM's Kubernetes OpenShift story is, it's this.
It is this. Absolutely.
Wow. Listen, this has been great. So, Keith, any last questions for Matt before we close?
You know what?
I want to stay friends with Matt, so I think we're going to end it there.
Matt is one of my best friends in the industry, and I don't want to get him in trouble at IBM.
All right.
That's good.
I think we tried really hard to penetrate that marketing shell that Matt has built up.
I have not.
We didn't quite get there.
All right, Matt.
Anything you'd like to say in your own defense?
In my own defense.
I am not completely a dark-side-of-the-marketecture sort of conversation guy.
What I will say is I really think it's a, the biggest reason I came on into this role was my belief in what we are aiming to achieve.
You know, we can go back to when I did that Kubernetes paper for you, Keith.
The biggest shortfall in moving from POC to production in a Kubernetes environment is the very thing that Spectrum Fusion is attempting to address.
And I personally find it exceptionally exciting. And I can tell you from knowing the roadmap, I think that it is moving in the direction of putting all of that data into one sort of visible fabric
with oversight and visibility such that the environment can, for all intents and purposes,
take care of itself. Thank you, Matt. This has been great. I really appreciate you being back on our show today.
And even though you're not a Greybeards on Storage co-host, we still like you.
Thank you.
And that's it for now.
Bye, Matt.
Bye, Ray.
Bye, Keith.
And bye, Keith.
Until next time.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast, tell your friends about it.
Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out. Thank you.