Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 4x09: Looking Forward to CXL in Production in 2023
Episode Date: January 2, 2023

As 2023 arrives, so do two server platforms that support CXL, along with associated chipsets, software, and memory expansion cards. This special episode of Utilizing Tech features the three hosts of this season, Stephen Foskett, Craig Rodgers, and Nathan Bennett, discussing the prospects for CXL in 2023. AMD recently introduced Genoa Epyc, featuring CXL and PCIe 5, and it is widely expected that Intel will introduce Sapphire Rapids Xeon very soon with similar support. We expect these new server platforms to be adopted quickly by hyperscalers and to reach the enterprise datacenter throughout the year. We wonder what CXL memory expansion might bring, from other types of DRAM to persistent memory and possibly even Optane. The prospect for shared and pooled memory is perhaps a little further off, but we have already heard that this capability might come to CXL 1.1 via device-specific features. What could go wrong? Enterprise server vendors might not embrace composability for various reasons, and this could derail CXL in the datacenter. Another concern is security, especially for shared memory and devices. The big differentiator will be software that enables systems to take advantage of features from memory expansion to pooling to disaggregation to composability.

Hosts:
Stephen Foskett: https://www.twitter.com/SFoskett
Craig Rodgers: https://www.twitter.com/CraigRodgersms
Nathan Bennett: https://www.twitter.com/vNathanBennett

Follow Gestalt IT and Utilizing Tech
Website: https://www.UtilizingTech.com/
Website: https://www.GestaltIT.com/
Twitter: https://www.twitter.com/GestaltIT
LinkedIn: https://www.linkedin.com/company/1789

Tags: #UtilizingCXL #Genoa #Epyc #SapphireRapids #DRAM #CXL
Transcript
Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT.
This season of Utilizing Tech focuses on CXL, a new technology that promises to revolutionize enterprise computing.
I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT.
Joining me today on a special episode are my co-hosts, Craig Rodgers and Nathan Bennett,
to talk about what we're
going to be looking forward to in 2023 with regard to CXL. Welcome to the podcast, Nathan.
Thanks, Stephen. Always happy to take part in this podcast. I'm Nathan Bennett. I'm a
cloud architect and love talking about new tech and all these fun things where we get
to look forward into the future.
And Craig?
Hi, I'm Craig Rodgers.
I'm product manager for infrastructure as a service at 11:11 Systems.
And it's great to be here.
So the three of us got together to do this season of the Utilizing Tech podcast as season four of our Utilizing series.
So we did three seasons of utilizing AI,
and then we took a look around and said, you know, the CXL technology looks pretty cool.
Of course, it's all new. It's not something that we've seen really implemented yet. In fact,
it couldn't even be implemented until the release of a server platform that supported it. Thank you,
AMD, which has now happened.
So even in the time that we've been publishing this,
which is literally from October 2022 to December 2022,
the world has changed because AMD has released a platform that supports this.
But since then, we've spoken to a bunch of people.
We've participated in the CXL forum.
I was in the one in New York,
and we were at the one at OCP Summit in San Jose. We've talked to all sorts of companies.
I mean, it's really quite amazing what's going on, considering that this is all new technology.
I guess just to kick things off, Craig, what do you think about the state of the world in just the time since October to December?
The pace with which CXL products are going to be able to hit the market here is striking,
considering how recently AMD released fourth-gen Epyc,
and I'm sure Intel will be right around the corner with their offering on CXL.
But the products that were already developed, you know, the breadth and scale,
the software needed to do it: they've obviously been working on this for years,
conceptualizing these designs, putting them into actual production, and building the hardware.
It's great to see. We always keep coming back around to the sheer scale
and scope of the CXL Consortium membership,
but it's been really impressive to see
just how many products will be able to hit the market rapidly
as soon as these servers are available.
That has blown me away.
Yeah, I think just to build on that: I'm personally a bit late to the game,
learning all of these different products and the different vendors
that are jumping into the market on CXL,
and learning the different solutions. And
I'm really excited about where it's going
and what the solutions are going
to bring to market. But just to double-click on that rapidness that needs to happen, I'm ready to
see that next evolution, that next step: we went from the concept of needing a platform to now
having a platform. Okay, let's see what happens. Do people gather around the platform
and just start rapidly developing for it and manufacturing around it, or do they start
developing a different platform? And do we get into that competitive nature where,
you know the customer tends to win when there's more competition in the market than when there's
just a bunch of people working around a single platform. But that's where I want to see that development in 2023 and that growth,
because that's where it's going to be very exciting for the enterprise market as well
as hyperscalers. I think we all see the value for hyperscalers to adopt this type of methodology.
But with the competition, bringing it to the
enterprise marketplace as well, this is where we definitely will see, you know, cost savings,
if it continues to be competitive in that area. And I think that, you know, we were just
on time here in terms of introducing this technology, introducing this topic, because it really is taking off.
I mean, just to kind of level set everyone,
AMD recently introduced their Genoa,
which is their next generation Epyc server platform,
which we actually discussed here on the podcast in December.
And the Genoa platform was the first server platform that has support
for not just CXL, but also PCIe 5 and DDR5. And importantly, it really kind of shakes up the design
of the server, because it moves toward a one-DIMM-per-channel architecture.
You've got 12 memory channels, and if you want to go beyond what you can fit in those memory channels,
you really are going to be using CXL.
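As a concrete aside that wasn't part of the conversation: on Linux, a CXL memory expander is typically surfaced as a memory-only, CPU-less NUMA node once the platform and kernel support it. Here is a minimal sketch, assuming a Linux host that exposes its NUMA topology under /sys/devices/system/node, for listing the nodes and flagging the CPU-less ones where expander memory would land:

```python
# Minimal sketch (assumes a Linux host exposing NUMA topology in sysfs).
# CXL-attached memory expanders usually show up as memory-only (CPU-less)
# NUMA nodes, so listing nodes without CPUs is a quick way to spot them.
from pathlib import Path

def list_numa_nodes():
    nodes = []
    for node_dir in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpulist = (node_dir / "cpulist").read_text().strip()  # empty for CPU-less nodes
        mem_kb = 0
        for line in (node_dir / "meminfo").read_text().splitlines():
            if "MemTotal" in line:
                mem_kb = int(line.split()[-2])  # "Node N MemTotal: <value> kB"
        nodes.append((node_dir.name, cpulist, mem_kb))
    return nodes

if __name__ == "__main__":
    for name, cpus, mem_kb in list_numa_nodes():
        kind = "CPU-less (candidate CXL/expander memory)" if cpus == "" else "CPU-attached"
        print(f"{name}: cpus=[{cpus or 'none'}] mem={mem_kb // 1024} MiB  {kind}")
```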
So I think one of the things that's most interesting to me
about the AMD announcement is that it really,
I don't want to say it requires CXL, but I'd say
that it's right there as a first class component of the server. And then here we are in January,
everybody, it's I guess the worst kept secret in the industry that Intel is going to make an
announcement next week talking about a new server platform of their own. Hmm, do you think it could
be Sapphire Rapids?
They haven't said, but everybody knows that it's going to be Sapphire Rapids.
Everybody knows it's going to be the fourth generation Xeon scalable.
And everybody expects, and once they know, everybody expects that it's going to include
PCIe 5 and CXL support.
Because if it doesn't, I think everybody's going to kind of lose their minds.
But, you know, let's assume that it does.
Let's assume that Intel introduces the fourth generation Xeon scalable.
Let's assume that it includes decent CXL support, just like AMD already has.
That puts us here at the beginning of 2023 with basically two server platforms from the leading vendors that support this technology. But like I
said, the important thing too is not just that it supports this technology, but that it kind of
needs this technology. You can only fit so many DIMM sockets on a motherboard. You know, how many
can you put in a row? And when does it become sort of counterproductive from a system motherboard
layout and cooling and all that kind of stuff?
Well, you know, why not put it on an expansion card?
Why not put it at an external chassis?
I think that that's the sort of things that Intel and AMD are going to be leaning into with their fourth generation server platforms.
And I guess, you know, Nathan, you know, you've got your eye on the hyperscalers and the cloud and what's being developed there.
If you had to take a guess, how quickly do you think Epyc and Sapphire Rapids are going to be coming to the hyperscalers?
I guess it would be very rapidly.
I truly expect them to start picking this stuff up as quickly as they can, considering the capabilities that the new
platform, especially with AMD, is bringing to market.
I just don't see how they could not.
I mean, AWS really likes to talk about their silicons and their platforms like Graviton
and stuff like that.
But being able to bring everything that the Genoa brings to the market, they just have
to start adapting to it and
adopting it as quickly as possible.
It wouldn't surprise me if there are already some POCs or some proofed-out areas that are
already starting to utilize solutions like this.
And they're starting to get the framework for solutions for whatever the next platform may be, hint, hint, in other areas as
well, because that's just what they would do as good future looking hyperscalers would do. I mean,
AWS is a monster at this point with all the different platforms that they want to actually
start utilizing, but they need to start utilizing this particularly for that CXL modularity that comes with it, right?
The next steps that we would see definitely from AWS and other hyperscalers is how this really kind of starts bringing in the extra components into that hyperscaling market.
It's funny.
As I've said, the hyperscalers will be very much early adopters.
I think that's a fairly safe assumption,
especially those hyperscalers that are members of the consortium.
And I'm sure people at home or in businesses or in managed service providers
are thinking, you know, I'm not going to be able to get into that too quickly. I'm not going to be exposed to that.
But there's going to be a strong chance that almost any new server coming out within the hyperscalers
is going to be backed by CXL.
People are going to be exposed to it and not even realize.
It's going to be like a background service again
as part of that whole cloud service mentality where you're getting it as a service. And I
think there's a lot of potential wins here, you know, cheaper tiers of RAM. And
you touched on Epyc 4: going up to 96 cores, at that number of cores
you can actually be RAM constrained, you know, and RAM bandwidth constrained. So CXL
is opening up additional performance as well as making more efficient use of those resources.
So it'll be interesting to see.
Yeah, that's one of those things that's kind of interesting, is that,
as we talked about with AMD, they expect that CXL memory won't actually be that much slower
than regular system RAM. And that, initially, memory expansion on CXL,
you know, contrary to maybe what some of us thought, might actually just be another memory
channel. I was really shocked to hear that. And I'm actually really excited to hear that because
what that means is that we could have a lot more flexibility here in terms of memory capacity.
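To make the "just another memory channel" idea concrete, here is a hedged sketch of how an application could steer an allocation onto a specific NUMA node, for example a CXL-backed node found with the listing above, using libnuma through Python's ctypes. It assumes libnuma is installed, and the node number 1 is purely illustrative:

```python
# Hedged sketch: place an allocation on a specific NUMA node (e.g. a
# CXL-backed, CPU-less node) via libnuma. Assumes libnuma.so.1 is present;
# node 1 is a hypothetical example, not something stated in the episode.
import ctypes

libnuma = ctypes.CDLL("libnuma.so.1", use_errno=True)
libnuma.numa_alloc_onnode.restype = ctypes.c_void_p
libnuma.numa_alloc_onnode.argtypes = [ctypes.c_size_t, ctypes.c_int]
libnuma.numa_free.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

def alloc_on_node(size_bytes: int, node: int) -> int:
    if libnuma.numa_available() < 0:
        raise RuntimeError("NUMA is not available on this system")
    ptr = libnuma.numa_alloc_onnode(size_bytes, node)
    if not ptr:
        raise MemoryError(f"allocation of {size_bytes} bytes on node {node} failed")
    return ptr

if __name__ == "__main__":
    size = 64 * 1024 * 1024            # 64 MiB
    buf = alloc_on_node(size, node=1)  # hypothetical CXL-backed node
    ctypes.memset(buf, 0, size)        # touch the pages so they are actually placed
    libnuma.numa_free(buf, size)
```

Note that numa_alloc_onnode only records a placement preference; the pages land on the chosen node when they are first touched, which is why the sketch writes to the buffer.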
But also, another thing that we've been talking about so far on a lot of these episodes is this idea of mixing and matching different types of memory.
So, for example, as I mentioned, we know that AMD is using DDR5.
Of course, they support other types
of memory as well. But, you know, it would be interesting to see if DDR5 ends up going in the
system memory slots, and maybe DDR4 or even DDR3 goes in the CXL memory slots. You know,
I mean, I think that would be sort of a slam dunk. But I guess the next question is,
what about other memory technologies? What about persistent memory? What about using Flash as RAM
or some kind of hybrid device that has both DRAM and Flash on it or something like that?
It seems pretty likely that we'll be seeing that too. And of course, I got to say it,
what about Optane over CXL? I mean, why not, right?
You think that'll happen? Do you think we'll see Optane on CXL?
It may not be branded Optane, but I'm relatively certain that we will see persistent memory
over the CXL bus. Yeah, we did just have Intel introduce another generation of Optane at the
end of 2022, at least on SSDs. I personally expect them to release
another generation of Optane persistent memory modules as well. And I would not be really all
that surprised if we saw Optane persistent memory over CXL. And of course,
that would, you know, I also wouldn't be surprised if that was supported on the AMD platform as well
as the Intel platform, which, you know, I know that people like to cry and say, oh, Optane's dead.
Well, maybe it's not quite dead. I guess we'll see. But as you say, there are other memory technologies too.
I've heard some cagey mentions that there are other memory technologies out there that might
appear on CXL at some point. I guess we'll just have to see. How about some of these other things?
The other thing that we've heard quite a lot about is sharing and pooling of memory between hosts.
Do you think we're going to see that in 2023, or do you think that's further down the line?
I think that'll be further down the line.
That'll be CXL 2.0; I think that's when we really hit that.
One thing that was interesting about AMD's Epyc launch: they met the CXL 1.1 standard, but they added additional features on top that allowed some great things, 1.1-plus.
You know, it's like layer 2-plus switches versus layer 3; it's 1.1-plus in the CPU.
It'll be interesting to see what Intel comes up with and whether there's any kind of feature
disparity between the two platforms.
Yeah, because that's one of the
things that AMD mentioned was in terms of supporting, you know, some protocol components beyond 1.1 in Genoa.
I wonder, I guess we don't know yet what Intel will support, but I wouldn't be surprised if Intel also supports some.
And specifically, like we heard from AMD, that in some cases, some of these features are going to be implemented by the device itself. And if it's completely supported by the device, then it kind of doesn't matter what rev
of CXL the server is expecting. All it needs to do is know how to talk to the device and then the
device can do stuff. Right. So, yeah, but we'll see about the memory pooling and sharing and all
that kind of stuff. Yeah. I don't know. Nathan, do you think that's a sooner or a later thing?
I would say that's definitely later. Just to chime in on all this discussion
around, you know, all these different technologies within these platforms:
I think in terms of AWS and looking at what they're doing,
having just been at re:Invent a couple months back,
it really does paint the picture of where the modularity of all these things really kind of
flow and how to take a virtual instance of these areas and kind of like share resources in these
areas. And that's why CXL is so cool is because instead of using a virtual hypervisor
or however you want to utilize moving those resources from one place to another,
we're talking about like physical connections and we're talking about data pipes
and we're talking about all these different buses.
And to me, that's one of the coolest things about CXL.
But at the end of the day, you know, speaking to what you
were talking about, Stephen, in terms of like, you have certain workloads using DDR3, certain
workloads using DDR4, DDR5. You know, having that capability, the ability to
stipulate via hardware what actually works for which workload, that's really interesting,
not only to like a hyperscaler,
but also to an enterprise. You know, there are still people, surprisingly, that can't get to
cloud. They have workloads that will not work in cloud and they have to work at home. And so
for the on-premises solutions that are out there, how do they create that same type of modularity
without having to lean on, you know, another type of virtualization that they may or may not be on? And
for those that want to go bare metal, how do they create that type of solution? So, you know, when we talk about like
memory pooling and all of those things that flow with it, these are already things that we are aware of,
but we're aware of them from a virtual standpoint, not from, you know, the hardware standpoint that we're talking about.
So it's really exciting to see where that's going to come from, but seeing the capabilities that,
you know, we're hearing announced is really awesome. And, you know, being the skeptic
on this podcast, I want to see it. I'm looking forward to seeing enterprises being able
to touch it, being able to start seeing their workloads actually running on it and being able
to see the red team and the blue team start hopefully battling it out so that we can see
what the next steps will be. Well, that leads me to what was going to be my next question,
which is when do you think this is coming to the Enterprise Data Center? So, I mean, typically it takes a little while for these new processor platforms to ramp up
and to start being rolled out. You know, put on your skeptic hat, put on your Enterprise Data
Center architect hat, and tell me when might you, well, maybe not you, when might people like you
be willing to embrace and adopt the next generation
of Xeon or Epyc processors, but also this really novel technology? Nathan, what do you think? Give
me your guess. Yeah, again, I'm the skeptic. So I'm going to say, you know, it's a pretty far-out
distance. I would say probably about five years from when there's the first
actual view of it being out there, right? You know, when people are actually
utilizing it, that's when a customer is probably like, okay, I'll utilize it five years
after that. Right. So understand from my perspective, I'm not saying five years from
today, I'm saying five years from when it's actually something physically available and
something that they can actually touch and see.
That's what I'm saying.
Because at the end of the day, I've never seen an enterprise that wants to just immediately jump into something like this.
Even with something like, you know, vSphere, when they were like, hey, you can get like an 80% reduction in cost,
they still said, I don't want to give up my physical hardware because I love it.
And so the rule of thumb that I tend to lean back on is about five years from when they're able to actually touch it
and play around with it, at least in terms of, like, a...
Well, Craig, we got an extreme skeptic here.
Yeah, extreme skeptic.
Absolutely.
Because at the end of the day, I've been the customer that's running on, you know, stuff that's really far back. And, oh, just to be super clear, I'm not saying five years to adopt something like Genoa. I'm saying five years from being able to adopt what we see as the full CXL, with modularity and all those different types of things.
Normally technology comes up and, you know, starts off small and goes bigger and bigger and bigger, and in some cases that five-year timeline is certainly proven.
But I think five years is a very long time for CXL adoption at the enterprise level,
you know, certainly with the hyperscalers going first. And, you know, I think it would be less.
I will agree to disagree on five.
Give me a date, Craig.
What do you think?
And I guess maybe we got to be a little more specific here.
Disaggregation is one thing.
Memory pooling is another thing. Memory expansion over the CXL bus, I guess, is the nearest thing, right?
Because we've already got product. We've already got software. We've already got server support from AMD.
I guess that could happen real quick.
But, of course, you've got to think about it, too, enterprises aren't like out there building their own servers or something. You know, it needs to have been supported by Lenovo or HPE or Dell or
somebody before they're going to be running it because it's not like they're just going to go
run out and like, you know, hey, I'm going to get the newest motherboard and the newest memory
expansion card and build it myself. No, no, no, that's not how enterprises work. So how about
this? Since
Nathan started with disaggregation, when do you think disaggregation will be used in the enterprise,
Craig? Capability-wise, I think we'll have it in three. And server manufacturers will push it
heavily. We're going through an element of cloud repatriation for a lot of workloads at the minute, and CXL is going to
provide a lot of flexibility and efficiency that would traditionally have been lost moving from the
cloud to our current on-premises type infrastructures. So I think that'll help with
that and maybe even let more workloads be repatriated. But it's like
anything else in IT: it could go either way. It could stop with memory expansion cards,
nobody does anything with switches, nobody does anything with AI. We just don't know.
It'll be interesting; it's going to have to prove itself as a technology. But I'd say at three years we'll be sharing memory between hosts.
At five years, I think enterprises would be managing all sorts of different workloads and
accelerators through CXL. Wow. So I guess you guys aren't that far off. And actually, I think that
you're both pretty accurate, to be honest with you. Like, I don't expect to see Epyc servers with memory
expansion in the enterprise data center until, at the earliest, the middle of 2023, simply
because it's going to take a while for these vendors to get these things out there and qualified
and for them to pick their OEM partner and pick the supported devices and blah, blah, blah. I mean,
you know, it's just going to
take a little while. And also for them to qualify these devices in terms of support and reliability and get
spares in and all those things that enterprises expect from their vendors, right? That's going to happen,
I think, at the end of the year in the enterprise. I think it's definitely going to come quicker to
the cloud. But I agree with you that disaggregation is going to take longer.
I'm actually going to go out on a limb here and say, I think that there's a decent chance that disaggregation will come quicker. But I also think there's a decent chance that disaggregation
doesn't come to the enterprise. So this is one of my biggest fears here, is that somebody like
a Dell or somebody looks at this and says, wait a second, I'm not sure this is in our best interest. I'm not sure I really want
our customers to have this capability. And, you know, whether from legitimate concerns about
supportability or just overall concerns about kind of losing a grip on that customer relationship.
And I think there's actually a reasonable chance that some of those companies may not
really be on board with disaggregation and composability and building, you know,
composable systems. So I'm going to be, that's the thing that I'm going to be really keeping
an eye on. Because once I see Dell and HPE and Lenovo and the rest kind of announcing
that they're going to support this stuff and announcing their own products and embracing it,
then I'm going to sort of calm down a little bit and say, okay, yeah, this is coming to the
enterprise. Still not for a few years, but that it's going to come. I don't know. What do you
think? Is there a chance that this could get blocked by the OEMs? I'm going to jump out here and say absolutely.
Modularity, customization, all of these things are great terms, love to talk about in the tech world.
But at the end of the day, there's going to be someone in the enterprise that says, okay, but what do I do when it breaks?
If it's modular and I have all my memory in one big old bucket and that bucket breaks, what do I do?
If it's modular and it's in multiple buckets, how do I make sure that a bucket, when it breaks, has the ability to immediately fail over to the other bucket? As a cloud architect, I have to understand that high availability, redundancy, and durability are my main focus, my main goals, right? And if we're talking about having the
capabilities of the cloud on hardware, we're not hitting those topics yet. And those are the
topics that have to be hit, and not only just hit, but capitalized on, documented, you know,
have a community base around it, and then have people show the validation. I think the vMotion moment will be when they say, here is my modular CXL memory
module, and then they just unplug it.
And then you see everything continue to run just naturally.
If they're able to show those capabilities to the enterprise with an enterprise
solution, I'm not saying anything that's currently out right now. I'm saying something specific to
customers. Then they'll be able to start saying, okay, I get it. I see that capability and I see
the redundancy and durability. And that's, in my opinion, where it'll be a game changer because
modularity and customization are great, but you've got to hit those three and you've got to hit them very hard in order to get into that enterprise customer space.
Enterprises have spent the last five, ten years refactoring their monolithic architectures to move to microservices architectures,
and now to repatriate that workload they'll have to refactor again and take CXL into account.
They'll probably still use Kubernetes based workloads but there's more refactoring and we
know that's painful, takes time, costs money. So there'll have to be a clear and
present ROI on repatriation to move forward. Yeah, that's really, you know, these are really
great points and absolutely true. I think that, you know, us with our backgrounds in enterprise
tech have to bring some skepticism here. And actually, this is one reason that I think that it was smart that CXL started with memory expansion,
because as we heard here again and again from various companies,
you don't actually need software to do CXL memory expansion necessarily.
You can just throw a card in a server,
and as long as the basic functionality, the basic drivers are there,
it just works. And I think that that's going to be a lot less controversial for people to embrace
and adopt. My fear is that if that becomes popular, that may be all that CXL ever is.
We may never get beyond memory expansion because that's the sort of low-hanging
fruit. That's the big win in terms of giving flexibility to your server platforms and
right-sizing memory for cloud and for enterprise applications. And that may be all people want from
this technology. But that being said, there could be more. So VMware, Linux, hopefully Windows.
We're going to try to get Microsoft on the Utilizing CXL show here pretty quick.
Hopefully Windows will support this.
And then we've also talked with MemVerge.
They're doing a lot of really cool things with software.
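For a sense of what baseline operating-system support looks like today, and this is an assumption about the Linux CXL driver stack rather than anything MemVerge described on the show: kernels built with CXL support register memory devices on a cxl bus in sysfs, which can be enumerated without any vendor software. A rough sketch:

```python
# Rough sketch, assuming a recent Linux kernel with the CXL driver stack
# enabled: list whatever devices the kernel has registered on the cxl bus.
# Attribute names can vary by kernel version, so only print the ones found.
from pathlib import Path

CXL_BUS = Path("/sys/bus/cxl/devices")

def list_cxl_devices():
    if not CXL_BUS.exists():
        print("No CXL bus in sysfs; kernel may lack CXL support or no devices are present")
        return
    for dev in sorted(CXL_BUS.iterdir()):
        print(f"{dev.name}:")
        for attr in ("ram/size", "pmem/size", "serial"):
            path = dev / attr
            if path.exists():
                print(f"  {attr} = {path.read_text().strip()}")

if __name__ == "__main__":
    list_cxl_devices()
```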
What do you guys think of
this, the software aspect here? Is this going to make the difference between whether it just is
memory expansion or whether it's more? I think software is a huge play in this.
It just has to be because at the end of the day, the hardware is still kind of, you know, literally the nuts and
bolts of the solution. In order for a human to really appreciate it more, there needs to be a
softening of that hardware that allows that utilization to be much more valid, much more
configurable, customizable, and transferable from human to human. And that's where I think this software is going to need to be a pretty key component to this.
And so seeing something via Linux or Windows or Microsoft or VMware,
showing that virtualization, that customization behind it,
that's going to be a pretty key component.
What do you think, Craig?
I completely agree. Software is going to be huge. CXL is going to have to be monitored,
secured, even something as simple as allocating RAM to a server. Where's that logged? Where's that monitored? What anomalies are being detected
on the backend? How is that being controlled over an API across multiple data centers controlling
these resources? Software is going to be a massive element. Let's look at VMware as a
software solution. Software can be extremely powerful, extremely powerful.
And there will be a lot of people and companies that will view the opportunity of having this hardware capability.
And they'll see a software solution that will matter and work.
And we might not even know who those companies are now.
They might not even exist.
And they could be the market leader in five years of something huge. So I absolutely agree on the software side. And another thing that you
mentioned there, Craig, that's super important is security. One of the things that I brought up with
AMD and will continue to bring up is this question of making sure that any kind of memory pooling or sharing is secure, because that could derail the whole thing too.
Imagine if word got out that there was an exploit that allowed someone to kind of plug into a server
and exfiltrate data directly out of the main memory or monitor the applications that are running on the CPU. This is not far-fetched
given what this capability of CXL is and how it's implemented. That's why I was excited to hear
vendors like AMD but also others that we've spoken to talking about security features like,
well, basic ones like basic end-to-end encryption,
but also more advanced things like figuring out, you know, not doing memory sharing until you have
software that can securely allocate who's allowed to see system memory. Because, man, could you
imagine if somebody could like plug something into your VMware server and snoop on all the memory
all the time? That would be a no-go.
People would literally unplug these servers.
And so security is just a huge, huge need.
That's one of the things I really liked about the Epyc solution,
where they have instruction sets for securing memory access.
They use the same instruction set for securing CXL memory as they
do for normal system RAM, so it doesn't need to be treated differently from a development standpoint.
You know, existing software that does anything with memory is just calling that same instruction,
and it'll just be referencing a different address. So I like the fact that they baked that hardware security in using the
same mechanisms as traditional RAM, you know, mechanisms that people are already familiar with and trust
and have coded for. So yeah, security is paramount. We don't know what Intel is going to do, but I am
going to guess that they know this. So I guess we'll find out. I'd be surprised. Yeah, I would be surprised if they
hadn't thought about things. But also, you know, things like root of trust to keep somebody from
plugging in an untrusted device and so on. You know, I think that this stuff is going to be
there. And software is going to make a huge difference there, too. You know, Nathan, what
do you think in the cloud space? You know, are there novel applications that you're looking forward to?
Any weird things that we might see come up?
Well, just more utilization of things like HPC within the cloud.
I think that's one thing that a lot of people look into and like, oh, well, you're still running on a virtualization.
You're not really running on bare metal.
And if you want to, you just go get some bare metal, provision some bare metal, and then run it on that, right?
There's something to just throwing iron at problems and trying to figure out how to do those types of workloads.
But I think bringing this to hyperscalers will allow more solutions to start, maybe, taking that deep grab of resources that HPC needs,
but not necessarily having that high dollar cost of being provisioned a full server, right?
That's what I'm looking forward to in the cloud space where people can start still accepting
and getting the reduction in terms of
cost of what they want to purchase and what they want to run for their workloads, but still have
that modularity in terms of not having to be fully provisioned, you know, a server that you can just
get anywhere else. I'm looking forward to that being in the cloud space and seeing how people start utilizing that for AI, ML, and other HPC workload solutions.
So what all are you going to be looking forward to?
So it's 2023.
We're watching this technology roll out.
Be positive.
What are you looking forward to this year?
What are you looking forward to seeing appearing throughout the year?
Craig?
I'm looking forward to seeing Intel's next generation come out. I think it'll be healthy competition between Intel and AMD.
It'll be really interesting to see
how they're competing against each
other with their own individual edges, but also
I'm looking forward to seeing
what solutions are built and derived
from having this new baseline of hardware capability.
I would have to say,
since we're on the cusp of Intel,
of course that is top of mind.
I've brought up competition several times in this podcast.
I really hope that if they don't come out
with something that ties in CXL,
they have something pretty quick after it, because that's really what we expect.
We make fun of Team Red and Team Blue, but we kind of need them in the same space. We need
them around, right? And seeing where that comes from is going to be one area. Seeing more adoption and more manufacturing around persistent memory and these modules is where I'm kind of looking forward to as well, seeing what other manufacturers are going to market and starting to create these modules that people can start using.
The more adoption we see from the manufacturers, the more I'll feel like, okay, we're moving forward here.
Whereas instead of it being this thing that's really cool in someone's lab, it's this thing
that people are starting to actually see things hit the market and be like, oh, I can actually
pick this up and I can actually start utilizing it. And that's what excites me about CXL in 2023.
Yeah. And just to make some call outs here, the things that I'm going to be looking
forward to. So we mentioned Intel. Yeah, I will be shocked if they don't do CXL in their next
generation platform. But Arm also, I'm really excited to see where Arm goes with CXL support
on their cores, which they've already talked about. And hopefully we'll have them on the
podcast real soon here as well to talk about where they're going. Some other companies I'm
going to shout out to. Can't wait to see Micron jump in here with some CXL memory support. So,
so far we've got the other two biggest memory vendors here. I expect to see Micron come
out. I expect to see some storage support, specifically companies
like Solidigm or Western Digital coming out with some CXL storage, NVMe storage,
something. I don't know what it's going to look like. I think that's a little further off because
I don't think anybody's sure yet what that's going to look like. But I wouldn't be surprised
if we see some Samsung persistent memory based on flash. I wouldn't be surprised to see some Micron or Solidigm or
somebody like that in there as well. You know, there's a bunch of companies that we still haven't
quite heard from. Obviously the server vendors we've mentioned, HPE, Dell, Lenovo, they can't
say anything until, you know, until this is announced.
Though HPE did actually talk about Sapphire Rapids recently,
which I was pretty surprised to see.
But they, I'm sure, are champing at the bit to talk about what they're going to be doing
with these server platforms.
VMware, can't wait to see their announcements
at VMware Explore this year,
because, again, I really expect them to lean into these next generation server platforms and their features in VMware.
And we know that they're working on CXL support as well.
But the big thing for me that I'm going to be looking for is more talk of things that aren't memory. So storage is one thing, but peripheral sharing, sharing accelerators, sharing GPUs,
doing all sorts of other things with composability. So that leads me to think about companies like Liquid who are out there doing some pretty cool stuff already in composability,
and I know are interested in working on CXL as well. So I'm going to be keeping an eye on them.
And some of these other companies that
are developing support chips. So Marvell, who we hope to have on the podcast here real soon,
IntelliProp as well. We know that they're working on some cool stuff that kind of brings some Gen-Z
fabric things into the CXL space. So just a lot of stuff coming. And basically, I guess, stay here
for the Utilizing Tech podcast as we see
these products announced and as we, you know, sort of question them and kick the tires on the
announcements that companies are making. Another thing I'm looking forward to actually is coming
in March, and that's that we're going to be having a Tech Field Day event. It's actually on my
birthday. I know, personally identifiable information there.
Sorry, everybody.
Well, sorry, me.
Anyway, it's on my birthday.
But in March, you're going to see Tech Field Day event, and we'll probably see some of
these CXL companies presenting their solutions there, because by then, I think everybody
expects there to be two big server platforms that support it.
And so I can't wait to see what companies do with Tech Field Day.
So keep an eye on that.
Also, keep an eye on the podcast, of course, as we talk to these companies every single week.
Before we hop, is there anything else you all want to talk about, point other people to,
where else they can connect with you?
Craig?
You can reach me on Twitter at CraigRodgersms, my blog is craigrodgers.co.uk, and I'm also searchable on LinkedIn as Craig Rodgers.
I'm on Twitter at vNathanBennett, I'm searchable on LinkedIn as Nathan Bennett, and I blog at nerdyate.life. I'm also one of those Mastodon folks starting out.
You can find me at vnathanbennett at awscommunity.social.
And as for me, you can find me at SFoskett on most social media platforms. And you mentioned Mastodon: sfoskett at techfieldday.net
is me on the Mastodons. You can also find me on the Twitters using that SFoskett. And of course, hosting the podcast, hosting the
weekly Gestalt IT rundown, the Gestalt IT on-premise podcast, all sorts of things like that.
Well, thank you guys both for joining us for this special Looking Forward to 2023 episode of Utilizing CXL.
We'll also be returning every week with more episodes.
So please do check this podcast out on your favorite podcast platform. And also please give us a subscription, a comment, a rating, a review.
We love to hear that.
This podcast is brought to you by GestaltIT.com,
your home for IT coverage from across the enterprise.
For show notes and more episodes, though,
go to utilizingtech.com or find us on Twitter,
or Mastodon at Utilizing Tech.
Thanks for listening, and we'll see you next time.