Utilizing Tech - 4x21: Why Would System Architects Utilize CXL?
Episode Date: March 27, 2023

We're wrapping up this season of Utilizing Tech by asking the key question: why would a system architect choose to utilize CXL in their designs? This episode of Utilizing CXL features Stephen Foskett, Nathan Bennett, and Craig Rodgers discussing the practical prospects and benefits of CXL.

Hosts:
Stephen Foskett: https://www.twitter.com/SFoskett
Craig Rodgers: https://www.twitter.com/CraigRodgersms
Nathan Bennett: https://www.twitter.com/vNathanBennett

Follow Gestalt IT and Utilizing Tech:
Website: https://www.UtilizingTech.com/
Website: https://www.GestaltIT.com/
Twitter: https://www.twitter.com/GestaltIT
LinkedIn: https://www.linkedin.com/company/1789

Tags: #UtilizingCXL #CXLFabric #CXLMemoryExpansion @UtilizingTech
Transcript
Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT.
This season of Utilizing Tech focuses on Compute Express Link, or CXL,
a new technology that promises to revolutionize enterprise computing.
I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT.
Joining me today for this special season wrap-up episode are my co-hosts Nathan Bennett and Craig Rodgers.
Good to be here, Stephen. I'm Nathan Bennett.
I'm a cloud guy around the interwebs, and I love creating new content around cloud and fun stuff.
Hi, I'm Craig Rodgers. I'm product manager for infrastructure as a service at a cloud company.
And yeah, also doing stuff online and talking about tech.
And I'm Stephen Foskett,
also been talking about tech for a long time
on a variety of podcasts and writing blog posts
and appearing in videos and so on.
So I'm going to take over this podcast this week, Stephen.
I think we've had some great conversations
with some great leaders in the industry. We've had Intel, we've had AMD, and we've had so many different vendors come to us and talk
about CXL. But here we are at the end of the season. Let's really stop for a second and ask
the so-what question: why would a system architect utilize CXL? We've heard some great stuff. And I think I've
never spoken so much about memory before in my life, which is different and enjoyable.
I think it's great for all of us to get out of our comfort zone once in a while. But let's start
with you, Craig. And let's just ask the question straightforward and go from there. Why should an architect care or utilize CXL? What do you think,
Craig? We know that memory is wasted. And we saw this with storage. Storage was wasted.
We pooled storage together. Memory is currently wasted. And we're looking at pooling that
together. And short-term use case, we can add more memory. There are some workloads that need that.
Near-term, we'll be able to pool.
And it is increasing efficiency.
At a cost, we've opened up more attack vectors for security,
but it is going to increase efficiency and reduce waste.
And memory is a very expensive resource.
It's potentially half the cost
of any modern server.
So it makes sense to optimize it.
But, you know, the more hops we have,
the more latency we're adding,
we're getting tiers.
So architects are going to have to make the decision of,
you know, this CXL equipment isn't going to be free.
The software isn't going to be free.
None of it is going to be free.
So architects are going to have to take an honest look
and see what the difference is between simply adding more servers
to get the RAM they need,
or reducing the number of servers
to use the memory more efficiently,
adding on the cost of the CXL layer.
So, you know, to make an informed
architectural decision, they're going to have to look at both options.
Now, I like, Craig, that you brought up storage as an introduction to memory. Stephen,
you're a storage guy. You helped me so much in my career understand how these bits and boops
make the storage things go when they spin around and around.
And to your perspective, when we talk about CXL, we talk about storage, we talk about memory. How
do these things really apply to the architect in terms of what they're actually trying to achieve
within the data center? Well, so I guess I'm the more visionary. That doesn't sound right. I'm not
trying to give myself a pat on the back.
I'm actually trying to kind of cut myself off at the knees
because when I look at this stuff,
I see the longer term prospects
and the transformative prospects of it,
where I think, you know,
Craig, I think is looking at it in terms of like,
how does this achieve some benefit
when I'm designing a system?
What is this good for?
I'm like, whoa, this changes everything. And I think that, to me, is the answer to this
architecture question. Yeah, my background is in storage. I've studied, you know, cloud and
networking and all other aspects as well. Thanks to Gestalt IT and Tech Field Day. I mean, that's
why we started Gestalt IT was to learn about all the different components
that make up a system.
And to me, the takeaway on CXL and the prospects for CXL
are really transformative
in terms of what the architecture looks like.
I mean, down to the basics of computing,
the von Neumann architecture that says essentially
that the CPU and the memory
and originally the storage were closely coupled. We broke that coupling with storage years ago.
We basically allowed storage to be loosely coupled, to be outside the box, to be a separate
device that you're talking to. We did the same in many cases with networking and coprocessors, GPUs, DPUs,
IPUs, all that kind of stuff. Essentially offloading from the CPU became sort of a
fabric of processors that work together and can access memory together. And
memory is just sitting there. It's just sitting there on the system board, on the
system bus, tightly coupled.
You know, it literally has pins running from the processor to the memory modules. And it just is
so obvious to me that that needs to be broken if we're going to get anywhere with system architecture.
And so, you know, that doesn't answer the practical question of what is a system architect going to do with CXL?
But, you know, that's kind of not your question.
Your question was basically, you know, where does this go?
What is this all about?
To me, that's what it's all about.
But, of course, at the end of the day, nobody's going to buy it for visions and visionary stuff alone.
And there's been a lot of visionary tech out there.
What does it really
do? I guess that's the question. Yeah, I think that's a good point to bring up, Stephen, because
at the end of the day, with all of the visionaries that we've brought onto the podcast, the question
keeps on coming back: is this just more nerd knobs that we're creating? Is this just more
great stuff that we can do things with, and we don't really have a problem that we're solving?
It's just more cool stuff that we can add and more
capabilities and that all these vendors are doing. Or are we actually trying to solve a problem?
I'm going to go back to you, Stephen, on that one and get your perspective on like,
what is the problem that we're actually trying to solve? Because I think you touched on it just a
little bit. And then we'll go over to Craig and see what your take is on it.
Yeah, and I think that that's,
again, I'm going to totally take off my Stephen mask
and I'm going to put on like a normal person mask here.
What is the problem that's being solved by CXL?
Right now, this year, today,
the problem that's being solved by CXL
is inflexible configuration of system memory.
Essentially, if you have a system, it needs memory.
How much memory do you put?
Well, that should be dependent on what application you're running and what the demands of that application are, whether it's capacity or performance.
I need so much memory bandwidth.
I need so much memory capacity.
That's what the conversation should be.
The problem is that given the reality of systems right now, the conversation all too often is,
how do I deal with the fact that I have this many memory channels?
I can only put one or two DIMMs per channel
and DIMMs only come in these binary sizes.
And how do I deal with the fact that my system
almost by definition
is gonna have the wrong amount of memory?
And that's especially a big problem
because memory in big systems
is the single most expensive component.
So if something costs 50% of the cost of the system,
and it's completely inflexible,
and you only have like three choices you can make,
like I can have too little, too much,
or I don't know, throw up my hands, what am I going to do?
CXL, I think right now, that's the question
that it's answering. It is essentially saying, okay, put in too little, add a CXL memory expansion
module that gets you to the right amount and, you know, call it a day. And so to me, I think that's
what the technology is about. And that's actually why a system architect would utilize CXL today: they want to right-size memory.
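As a rough sketch of the right-sizing arithmetic Stephen is describing, here's a toy calculation. All of the numbers here (channel count, DIMM sizes, expander granularity) are hypothetical, purely to illustrate how a CXL expansion module could close the gap between coarse DIMM-only configurations and what an application actually needs:

```python
# Toy illustration of the right-sizing problem: with a fixed number of memory
# channels and DIMMs only in power-of-two sizes, achievable capacities are
# coarse-grained. All figures are hypothetical, for illustration only.

DIMM_SIZES_GB = [16, 32, 64, 128]   # typical power-of-two module sizes
CHANNELS = 8                         # one DIMM per channel, fully populated

def dimm_only_options(channels=CHANNELS, sizes=DIMM_SIZES_GB):
    """Capacities reachable by populating every channel with one DIMM size."""
    return sorted(size * channels for size in sizes)

def right_size(target_gb, expander_granularity_gb=64):
    """Pick the largest DIMM-only config not exceeding the target, then top
    up the shortfall with a (hypothetical) CXL memory expansion module."""
    options = dimm_only_options()
    base = max((c for c in options if c <= target_gb), default=options[0])
    shortfall = max(0, target_gb - base)
    # round the expander capacity up to its granularity
    expander = -(-shortfall // expander_granularity_gb) * expander_granularity_gb
    return base, expander

# An application wanting 768 GB falls between the 512 GB and 1024 GB
# DIMM-only configs; a 256 GB CXL expander closes the gap.
base, expander = right_size(768)
print(base, expander)
```

The point of the sketch is the shape of the problem, not the numbers: without the expander, the architect's only choices are too little (512 GB) or too much (1024 GB).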
Yeah, I absolutely agree.
All we can do is expand that memory today.
Near term, the next two years, pooling is likely going to be available.
But all we're doing here with CXL 1 and 2 is laying a foundation for what we all really want from CXL, which is rack-level composability.
And architects will want that because that's going to give them much more control over how they build out their compute solutions.
That's the angle here.
What we're doing is laying a foundation, getting familiar with the technology.
We're not going straight into CXL3.
And now you can compose racks.
Enterprises are going...
We're building a foundation here of evidence that CXL works,
that it's stable, that it doesn't pull down clusters of hosts.
We have to let it prove itself
to a certain extent before we get to the exciting stuff.
So right now, if nobody adopts it and it doesn't get proven, it's never going to, we're not
going to get to the fun stuff, you know, as I see it.
Yeah, I think, to both of your perspectives, we already see this right now
with something like HCI solutions, right? So for instance, VMware Cloud on AWS is a really good example of,
hey, let's bring everything together in this conglomerated little blob. And then how do you
scale it? You add more blobs on top of that blob. And that's really great. But until someone says,
well, I just want more storage.
And it's like, great, put another blob on it.
And they say, well, wait a second.
I don't want more compute and more RAM.
That makes it way more expensive.
I just want more storage.
And they didn't really have an answer to it.
They do now.
But that's kind of the example of the problem
or the use case that CXL is directly attacking.
Now, when we talk about HCI,
it's a different discussion
because they have a software layer on top of it.
So let's kind of change this discussion around software.
And this is where
everyone should put their visionary hat on now,
and let's take a look at what this would look like
in the future, right?
So what would we hope to see?
What do we want? And then what should architects expect in the next upcoming three or four years? And yeah, I'll just toss that
out to either one of you, whichever one wants to go first. Sure. I'll jump in first, just because
I like to cut Craig off. What I think architects should expect to see
in terms of software, well, number one, the most important thing is just basic functionality for
CXL-attached memory. In other words, can the system address that memory? And then number two,
how does the system deal with the fact that that memory has a different profile, performance profile,
access methods, that kind of thing from regular onboard system memory. Luckily, that path was
paved by Optane, by HBM, and to some extent by caching and NUMA. In other words, a lot of this
stuff has already been thought of and the first steps are taken to make different kinds of memory
available in a system. And so essentially that's getting rolled out in the Linux kernel,
it's going to be in VMware vSphere, it's going to be in Windows. And that's kind of all you need,
right? I mean, you need basic support for this stuff. So I'll say, let's just take that off the table that, you know, you kind of need the
software that makes it go. But Craig, do you want to talk a little bit more about like kind of which
is the software that comes next? Yeah, that's exactly where I was going to go. You know,
again, drawing on our storage knowledge and history, you know, things like prefetch. Differentiators between storage vendors were how
well they moved data from spinning rust up to
SSDs to keep workloads performant. We're going to have the same
challenges around different tiers
of memory. And as you alluded to there,
think of the number of CPU pins
that are taken up by memory; we could use a third of those pins,
as we learned recently, to have more PCI Express lanes.
There are solutions coming now
for CXL that we haven't even thought of yet. We aren't even aware
of the problems yet. So, you know,
we're going to hit problems. We have unknown unknowns as much as I hate that term,
but we're going to have solutions for those. So it'll be interesting to see these solutions
come and how they get addressed and how collaborative they are, how open they are, versus we're seeing a lot of proprietary solutions
using the CXL standard.
But are we going to see open source solutions
around these software layers within the CXL stack?
Yeah, that's an interesting question because, you know, like I said, like drivers, sure.
You know, basic operating system plumbing.
Yeah, right on.
But what about some of this advanced stuff?
So, I mean, you know, we've talked to,
you know, companies like Memverge
that are doing some really amazing stuff.
Basically, I mean, to a storage guy,
I look at that and I'm like, whoa, they're using memory like we use storage. That's cool. Snapshots and moving stuff around and everything. But we have to ask ourselves, like you're saying, Craig, about the practicality, the support for that. Like, will we actually
be leveraging these kind of features? Or are we just going to basically make it work? I guess
that's the problem with any technology, right? I mean, you look at things like
Bluetooth, USB, even PCI Express itself. Are we really using that technology for what it could do,
or are we only scratching the surface of it?
And that's what I worry about with CXL.
This is coming back around to a question we had early on
in the season of, is CXL going to be adopted?
We don't know.
You know, we talked about this before, where they might just need more RAM and memory pooling,
and that's it. I don't need a server with 96 GPUs.
I don't need a server with 400 petabytes
of storage or whatever gets attached.
We honestly don't know.
But I think it would be good for the industry to find out.
I think it would be good for the industry to find out.
And I agree with your big vision,
you know, come CXL3.
So in answer to your question,
then in three, four years,
we're going to be on the cusp of that exciting CXL3 stuff
being around the corner within a year, 18 months.
I would imagine around that timeline, we'll be looking at,
we'll be finding out about products coming,
about how composable we can actually make a rack.
And there's going to be a huge variance there between solutions
and the companies building those.
So in three, four years, I think it's going to be a lot more exciting.
But as we are now,
it's talking a whole lot about memory, a whole lot about memory.
So a whole lot about memory. I'm going to go back to something that you were discussing earlier,
Stephen, when we talked about disaggregating storage and it becoming a separate entity, not connected to the actual compute.
The idea around it was that storage is kind of this third external thing on the machine. And so
it made sense to take it away from the compute modules. But with RAM, there's a locality that kind
of needs to be there, at least in the way we're currently seeing it in terms of how a CPU pulls from it. I mean, I like to always
lean on Apple's system-on-a-chip CPUs, right? Because they have the RAM directly attached
right next to the CPU. They are doubling down on keeping the RAM as close to the CPU as possible.
And here we are saying, you know what? It can be kind of outside the box.
So two questions, and these are kind of hot takes.
One is, is RAM the new storage?
Hot take.
And two is, will we ever get to the point
where we see a fully modular,
your storage is here, your RAM is here,
and your compute is just this big box of CPUs
that generates an enormous amount of heat.
We'll go to Stephen, the storage guy, on that question first.
So is RAM the new storage?
I would say, surprisingly, kind of yes. In fact, I think storage is the new RAM, and RAM is the new storage; it's all shades of gray. So you mentioned, you know, Apple.
Well, Intel's Xeon Max has the high bandwidth memory right there on the chip.
They also can access memory over the memory channels.
And they also can access memory over CXL.
So right there, we have Intel with a processor that already has three tiers of memory
right off the bat. And then you look at it and you're like, well, Optane is persistent,
like storage. NAND flash can be addressed similarly in a memory type way. And we're
starting to see rumors of products like that coming out. And frankly, if you look at the modern, the way that storage is
being accessed now over NVMe, which is PCI Express as well, with disaggregation, with, you know,
intelligent distributed storage platforms, it really smells a lot like where CXL is going.
And we've talked about this throughout the season that a lot of
the technologies from storage in terms of fabrics and consistency and coherency of caches and so on,
all those things are being implemented in CXL as a way to enable CXL fabrics. I'm going to say
memory is not just the new storage.
Memory and storage are the new one thing.
I think that going forward,
we're going to see a lot of blurred lines between memory and storage.
And I think that's a good thing.
I agree.
And I was going a similar route in my thought process there.
You know, certain workloads just want RAM. SAP HANA,
HPC workloads, there are a number of workloads that really benefit from
having lots of fast RAM. Like you said, tiering has come along; now it's a question of how fast you need that RAM to be. HBM directly on the chip,
leading into your Apple question there, HBM on the chip is good for certain workloads
and for Apple's primary workloads, and certainly on normal devices it makes sense to have
it there. And, you know, I'm sure there'll be other devices where it'll be a good fit, but it's not needed for all.
You know, it always comes down to the workload.
You have to know the application, know the workload,
and come up with an appropriate solution.
Not everybody needs HBM.
Will it generally speed things up?
Yes, I'm sure.
You know, but it might speed up certain workloads 3%.
It might speed up other workloads 300%.
You know, there's going to be a huge variance there.
But the ability to do both is what's great about this.
If you don't need super-fast HBM with memory on the chip,
if you don't even need RAM speeds,
you have the option of pushing it further away.
If you need it, go get it on the chip.
But we now have the options.
You know, we've never had this number of tiers of RAM; we now probably have more tiers
of RAM than we do of storage. In the coming years, we're pushing it
further and further away. Right now it's still in the same physical box. In two years, it won't be, you know?
So it depends. It's the usual IT answer: it depends.
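The tier-picking trade-off Craig describes (use on-chip HBM only where the workload needs it, push latency-tolerant data further away) can be sketched as a toy placement policy. The tier list, latency figures, and relative costs below are illustrative assumptions, not measured numbers:

```python
# Toy model of the memory-tier decision: place each workload in the
# cheapest tier that still meets its latency requirement.
# Tier names, latencies, and costs are illustrative assumptions only.

TIERS = [
    # (name, approximate load-to-use latency in ns, relative cost per GB)
    ("HBM on-package",       40, 8.0),
    ("Local DDR DIMMs",      90, 4.0),
    ("CXL-attached",        180, 2.5),
    ("Pooled CXL (2 hops)", 350, 1.5),
]

def place(workload_name, max_latency_ns):
    """Choose the cheapest tier whose latency fits the workload's budget."""
    candidates = [t for t in TIERS if t[1] <= max_latency_ns]
    if not candidates:
        return None  # no tier is fast enough for this workload
    return min(candidates, key=lambda t: t[2])[0]

print(place("SAP HANA hot tables", 100))  # tight latency budget
print(place("analytics scratch", 400))    # latency-tolerant
```

The "it depends" in the dialogue falls out of the model directly: the same tier list yields different placements for different latency budgets, which is exactly the decision architects will be making per workload.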
So let's touch on one other area. And we're kind of, I feel like we're kind of tiptoeing around it,
dancing around it and skipping, doing a little skippy dance. But at the same time, let's,
let's talk hyperscalers. Let's talk about the big
guys in the room. Because when we talk about cloud, me being a cloud guy, I always view it as
a mass of resources behind a self-service portal, an area where you can go click a couple of buttons
and get whatever you want, right? At the end of the day, there needs to be infrastructure that is able to supply and
provide those types of solutions. With CXL, it seems like the hyperscalers are the key people
that are looking at what they can do with this. Are we actually going to see those type of services
provided externally to customers in terms of like, for instance, we see Graviton,
where they're using those particular type of CPUs. Do you think they're going to start utilizing
that type of disaggregated data center for those solutions?
I think absolutely 100% the hyperscalers will adopt CXL and early, simply due to the scale and our aforementioned cost of RAM.
We know they will have wasted RAM at massive scale worldwide, in data centers all over
the world.
It absolutely makes sense for them to be able to take better control of that.
How long it'll take to get to smaller cloud service providers, I don't know.
I work for one, I look after infrastructure for one, not at hyperscaler scale, but I can certainly see the benefit.
When you're talking thousands of hosts versus millions, there's an absolute benefit there to being able to make better use of RAM, even at that
scale.
So it's going to be the trickle-down effect.
I think we'll see it.
Hyperscalers will definitely adopt it, and it'll go down to enterprise, mid-size, et cetera.
It's just like all other recent technologies.
Yeah.
I'd say that, especially given the crossover between OCP and CXL. For those of you who aren't kind of in the industry, I mean,
OCP is basically an organization that's bringing hyperscaler server architecture and server
approaches to everything, every part of the data center, every part of enterprise tech.
And the things that are coming out of the OCP projects, the various OCP working groups and projects
are having just a tremendous impact
all across the server architecture world.
And it's coming to a data center server near you,
basically everything they work on.
And one of the things they're working on is CXL.
I mean, there's a huge crossover here.
We all, all three of us went to OCP Summit.
We went to the CXL Forum, which was an
all-day presentation marathon at the OCP Summit. Many of the people who are working on the CXL
spec are also on OCP working groups. There's just no question that hyperscalers are driving
this or demanding it. But that's one of my areas of concern,
because I'm kind of like, whoa, whoa, whoa, you're not Google. I mean, that's the old saying,
right? Unless you're Google, you're not Google. And so will the hyperscalers
drive this technology toward where they need it to go, and will that not be where we, the non-hyperscalers,
the non-Googles, need it to go? And I worry about that because I feel like absolutely they're going to
adopt it and absolutely they're embracing it. Because let me tell you, in terms of right-sizing
memory, as we heard from Microsoft and Meta, that's like a billion dollar question.
If they can get the right amount of memory in their servers, it makes a huge amount of sense.
And I can see them kind of being like, yeah, pooled memory, that's a great thing.
Like flexible memory is a great thing.
But what I need to hear is I need to hear like what's Dell and HPE and Lenovo and so on?
What are they going to do with this?
What is NetApp going to do with this?
What is Juniper Networks or Cisco going to do with this?
Because those are the ones that are going to help make the decision of what kind of
impact this is going to have outside hyperscalers.
And I don't know.
I don't know how it's going to work beyond hyperscalers. And I think that there's a reasonable chance that all
of this could be driven toward the needs of Meta and Microsoft and
Amazon and so on, and not driven toward the needs of, you know, everybody else. I don't know.
I think we could see interesting things
coming from the hyperscaler adoption.
Look at the number of companies that exist now
because somebody high up with a lot of pull
at a hyperscaler left to develop their own solution,
their own company. So I think someone will
learn or know how to do it at that hyperscaler level and think of a way to commercialize that
using the likes of Dell, HPE, Cisco, Lenovo, et cetera, you know, on the wider general server market.
I think somebody will see opportunity there
and it will get leveraged.
It's just right now, we don't know.
We don't see anything for mid-size.
We just don't know.
And I think that leads to a good opportunity
for us to wrap up this particular conversation, but also wrap up the season with our thoughts.
I mean, like I think at the end of the day, we need to kind of step up and kind of realize that we are these people in the industry.
We do have these ideas around what these things are.
And I'll kick us off because I was brought in here as kind of the guy that was new to CXL.
I was not, I'm not a memory guy.
I'm not a hardware guy.
I'm a developer advocate at heart.
And I believe in things like code and GUIs and stuff like that.
I don't do bleeding under my fingernails because I pushed a RAM module in too hard.
That's not what I do,
right? My fingers are very delicate. But at the end of the day, in my opinion, what I saw
throughout the season was just so much interesting technology around memory. And yes, we did talk
about memory a lot. And there is a good reason for that because that's kind of the forging forward
solution within CXL. There's so many other things within the Compute Express link that we can look
even outside of memory. I think we talked about networking a couple of times in terms of how
that was being used. And that leads to so many other discussions around just the data center
itself. So we keep saying, we're not sure where we're going to go within the next couple of years. But what we are sure is this is extremely powerful technology that can be used for multiple different people.
For the hyperscalers, probably first.
But maybe it will come to a lowly guy like me that says, hey, I want a desktop CPU plugged into my laptop.
And then maybe it would work and actually work valuably within that capability standard.
So this is something that we can all kind of see, but it has to start somewhere.
And I think starting with the hyperscalers is probably the most strategic place for it to start.
Craig, what were your thoughts on the season?
I don't think you're going to get your external GPU on your laptop next week.
I think it might be a few more weeks yet.
I do think we will start
seeing a lot
more products hit the market.
I'm fairly confident
memory expansion
is going to be adopted.
It is going to be adopted because it's
an expensive thing.
But it's anybody's guess as to where it goes after that,
whether adoption continues.
We'll see if it proves itself.
Yeah, that's right there.
It has to prove itself.
And I think that this is called utilizing tech,
which means the whole goal of this show, when we did our seasons on AI, when we decided to do CXL, the whole goal was
to figure out what is the practical application? What is the realistic workload for this technology?
And I think that we've seen, well, we did three seasons on AI, and then AI exploded
after those three seasons.
And so we've been kind of sitting back and saying, whoa.
And I think that it's the same with CXL.
We've done now a couple dozen episodes.
We've talked to basically every company in the industry right now.
And now we're going to sit back and give it some time and say, where does this go?
And I think we'll see. Like Craig mentioned, memory expansion is just too much of a slam dunk benefit to too many people in terms of right-sizing system memory, in terms of bringing some flexibility.
I think that that's just definitely, that's 2023, 2024, like bread and butter. Where does it go beyond that? Well,
I guess we'll see. I think that there still is a lot of demand. I think there's demand for pooling.
Frankly, I am excited about disaggregation and about composable systems and about rack scale
architecture, but I'm not sure. Sometimes technology is funny that way. Sometimes
what you think it's going to
be for doesn't end up what it's for. And I think that there's actually a decent chance. You know,
you mentioned like having an external CPU on your laptop. Well, an external GPU, you can buy one for
your MacBook over Thunderbolt or your PC over Thunderbolt right now. I mean, that already
exists. Who would have thought that that would have been one of the big benefits of
Thunderbolt and, you know, kind of USB? Nobody would have maybe guessed that, you know, the
whole one cable that gives you power and Ethernet and memory expansion and all sorts of things over,
you know, it's your monitor. All that stuff is kind of cool, right? Well, maybe CXL technology is going to take a right at Albuquerque
and maybe it's going to be useful for memory expansion in workstations and maybe laptop.
I don't know. But I think that at the end of the day, technologies get adopted where they make the
most, I was going to say sense, but really where they make the most financial impact, where they make the most performance impact, where it makes architectural sense to do.
And that, I think, is going to be ultimately where CXL goes.
It's going to follow the money.
It's going to follow the demands.
And if there's a demand for disaggregated rack scale architecture and composable systems, then we're going to have them because this thing can
make it happen. And if there's not a demand for that, and if the demand ends up being something
different, like some kind of new, you know, I don't know, maybe it's some kind of new blade
server or something. Well, maybe, maybe they will have that, but it's going to go where the,
where the demand is. And right now the demand needs memory expansion. We're going to get that. It's hard to say where it goes beyond
that. One final thing I'll say, again, as the storage guy, I want to see storage and networking
companies adopt CXL. And I want to see what they can do with this technology. I want to see what
a network switch or a router or a storage array with CXL looks like. And I'm waiting. Lay it on me. I want to see it.
I think that goes for all of us that when it comes to CXL, we want to see it. We want to see
where this goes. And you hear it every time that we talk. We hear the passion behind these voices
and these faces around what this technology could do to the market. And
we are ready for it. And we're so excited about it. And that's one of the reasons why we did this
show. So this has been Utilizing Tech on CXL. And I'm Nathan Bennett. You can find me and follow me
around on the Twitterverse as vNathanBennett. I do some YouTubing at vNathanBennett as well,
where I talk about cloudy things that are very misty for me and maybe very
clear for others.
Craig,
where can people find you and learn more about you and what you do?
You can find me on LinkedIn,
just under Craig Rodgers.
I have a blog as well.
I'm also on Twitter, at craigrodgersms.
And you may find me on other podcasts.
As a guest, I might even show up on Nathan's one day.
You never know.
But no, it's been great talking about the potential for disruption with CXL.
And with that, I guess I'll give your podcast back to you, Stephen,
Mr. Stephen Foskett. Thank you very much, Mr. Nathan Bennett. And thank you very much,
Mr. Craig Rodgers, for joining us as co-hosts here. That's kind of how we roll with Utilizing Tech. And it's been a lot of fun having you bring your backgrounds, your expertise, your perspective
to talk about CXL technology. As I said, this is
season four of Utilizing Tech. If you go to utilizingtech.com, you'll see the previous seasons.
And I am happy to say there will be a season five. We are actually in the planning process.
We're going to be starting to record. Because this is a weekly podcast, we like to have a few
episodes ready before we kick off the season so that we can meet your expectation to have a new podcast
published every week. And that's what we're going to do. So tune back in in just a few weeks for
episodes focusing on edge computing. Utilizing Edge is coming up next. Also, we recently had
our first Edge Field Day event, and we'll have another Edge
Field Day event coming up in July. And I look forward to a new season, a new topic. I hope that
those of you who are listening will enjoy thinking about how maybe AI and CXL apply at the Edge,
or maybe some other technologies work their way in there. So thank you so much for listening to this season of Utilizing CXL, part of the Utilizing Tech podcast series.
If you enjoyed this episode, please do subscribe.
Again, there will be another season coming real soon.
Please leave us a rating or review or comment.
We would love to hear from you.
This podcast is brought to you by GestaltIT.com, your home for IT coverage from across the enterprise. For show notes and to tune
in for the entire season of Utilizing CXL, as well as the previous seasons and the future seasons
focused on AI and edge, just go to utilizingtech.com or find us on social media,
Twitter and Mastodon at Utilizing Tech. Thanks for listening and we'll see you next time.