Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 4x20: The History and Future of CXL with Jim Pappas
Episode Date: March 20, 2023Perhaps no one can tell the story of Compute Express Link (CXL) better than Jim Pappas, who was involved in the development of nearly every related technology, from PCI to UCIe. This episode wraps up ...the season of Utilizing Tech with Stephen Foskett and Craig Rodgers discussing the evolution of CXL with Jim Pappas, Director of Technology Initiatives at Intel and Chairman of the CXL Consortium. No matter how good the technology is, it needs widespread industry support, backwards and forwards compatibility, and open cooperation, and that's what made technologies like PCI, PCI Express, USB, and now CXL successful.  Hosts: Stephen Foskett: https://www.twitter.com/SFoskett Craig Rodgers: https://www.twitter.com/CraigRodgersms  Guest: Jim Pappas, Director of Technology Initiatives, Intel and Chairman of the CXL Consortium: https://www.linkedin.com/in/jim-pappas-3624442/  Follow Gestalt IT and Utilizing Tech Website: https://www.UtilizingTech.com/ Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/1789 Tags: #CXL #CXLConsortium #UtilizingCXL @Intel @UtilizingTech
Transcript
Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT.
This season of Utilizing Tech focuses on Compute Express Link, or CXL,
a new technology that promises to revolutionize enterprise computing.
I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT.
Joining me today is my co-host, Craig Rodgers.
Hi, Stephen. How are you?
I am very good. It's been an exciting season, hasn't it?
We've had 24 episodes, and we've talked to almost everybody in the CXL space.
But I think you've certainly kept a good one for last. I'm excited to see what more we can learn about the origins of CXL.
Absolutely.
So this is one that we've been kind of keeping, I don't know, keeping in our pockets, hoping for.
Because, frankly, when you talk to people in the CXL industry, when you talk to people in the CXL
consortium, and if you said to basically anyone, who should we have on the podcast, who would be
a great way to learn about the past, the present, and the future of CXL, everyone says Jim Pappas.
And so guess what? That's who we've got here on our episode today.
We've got Jim Pappas. Jim is Director of Technology Initiatives at Intel.
But more importantly, for the purposes of this podcast, Jim is Chairman of the CXL Consortium and has been part of this literally since the very, very beginning,
has seen everything that's happened, not just with CXL,
but everything leading up to CXL. So Jim, welcome to the program, and we're very glad to have you
here. Nice to be here, Stephen and Craig. So give us a little bit of background. You've been,
like I said, you've been here since the very beginning. When I say the beginning,
CXL is built on so many technologies.
The key technology that it's built on, though, or at least that it leverages, is PCI Express.
And you were involved in that.
So take us in the way back machine, way back to where did all this come from?
Yes.
PCI started with five companies around the table.
I was representing my company back then.
I was not yet at Intel. That was my first technology initiative.
I was at a company that doesn't exist anymore, Digital Equipment.
And the five companies put together the PCI initiative, created the PCI SIG.
And that was the start of my second career. That was in June of 92.
So I'm on my 31st year into my second career here, completely just focused on driving technology
initiatives across the industry, and then working with the entire industry to ramp these and make them ubiquitous
There's been a number of product versions since PCI. I remember it coming, you know, moving from EISA to PCI, white slots appearing on the motherboard. Obviously, we went through newer
versions of PCI, with AGP off to the side, but then 64-bit PCI, and then obviously PCI Express, which has been transformative to the entire industry.
PCI Express has really driven huge innovation, huge innovation across the industry.
What was it like to be involved at that early point?
Well, all three that you mentioned, you know,
PCI was a parallel bus,
AGP was specific to graphics,
and then we did a big change with PCI
from parallel PCI to PCI Express
that was taking a parallel bus and making it bit serial.
You know, I wanna say something really sincere about the PCI Consortium.
It's been an absolutely remarkable journey that that organization has had. And I
think that there are two critical things that they've done. Number one, they really focused on forward and backward compatibility.
It's the most remarkable effort.
And this is so important for being able to get the industry to invest in your products, in your technology. On backward and forward compatibility, PCI-SIG is the North Star.
Whenever we look at what we should do next, I look at that as the role model.
So, a remarkable job. You could literally take a PCI Express Gen 1 card and plug it into a PCI Express Gen 5 system and expect it to work.
Vice versa, you could take a Gen 5 card and plug it into a PCI Express Gen 1 system and expect it to work.
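As an aside for readers, that interoperability claim falls out of link training: both ends of the link negotiate down to the highest transfer rate they both support. Here is a minimal sketch of that idea in Python; the per-lane GT/s figures are PCIe's published rates, but negotiate_link() is an invented simplification, not the real training state machine.

```python
# Illustrative sketch only (not a real driver): backward and forward
# compatibility falls out of link training, where both ends settle on the
# highest transfer rate they both support.

PCIE_RATES_GT_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def negotiate_link(card_max_gen: int, slot_max_gen: int):
    """Return the (generation, GT/s per lane) a card and slot settle on."""
    gen = min(card_max_gen, slot_max_gen)  # highest common generation
    return gen, PCIE_RATES_GT_S[gen]

# A Gen 1 card in a Gen 5 system links up at Gen 1 speed, and a Gen 5 card
# in a Gen 1 system does the same; either way, the link comes up.
print(negotiate_link(card_max_gen=1, slot_max_gen=5))  # (1, 2.5)
print(negotiate_link(card_max_gen=5, slot_max_gen=1))  # (1, 2.5)
```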
In CXL, we role modeled this behavior as well.
You know, we actually baked it into our bylaws. It takes
an incredible vote. So it would take a very, very strong reason to break backward compatibility.
In the 31 years of PCI, they made one large transition. That was from PCI to PCI Express.
And, you know, obviously that can't be plug compatible.
For one, they're very different buses. But it was a necessary change.
That's why they did it. But even when they did that from an application standpoint, it was still compatible.
So they, like I said, I use this term, you know, very religiously here.
They're the North Star of this, and it's probably one of the single biggest things that they did
for their success. The other one, the number two, was a focus on compliance and interoperability.
You know, many times in these standards organizations,
the engineers or architects believe that the specs are, you know, the be-all and end-all,
and then all the rest of the work is just, you know, grunt work. And nothing could be further
from the truth. Actually, good ideas are a dime a dozen. It's the organizations that stick to it and make sure
the industry is working, running a good compliance program. Those are the key things that really drive
these technologies forward. Yeah, I completely agree. And as somebody who's been, well, I haven't
been in the industry quite as long, but as somebody who's watched this evolution, literally on the consumer side, on the user side, on the enterprise IT side, watching the emergence of PCI Express, the evolution of PCI Express, I have to agree with you that the backwards and forwards compatibility and the intense focus on interoperability is probably
one of the best technology success stories that I have ever seen. Because as you say,
in order for a technology to be taken seriously, and in order for a technology to really take hold,
you have to have that. And we say this
with Tech Field Day all the time. It doesn't matter necessarily how great the technology is,
or as you say, how great the idea is. It has to be good enough. But if it's not coupled with
the right execution, if it's not coupled with the right people, the right environment, the right
ecosystem, it's not going to take over the world. I was reading a wonderful article by one of the
designers of the Commodore Amiga, of all things. And he said, we were designing the next generation
Amiga and we saw PCI, the announcement of PCI, and we said, oh, wow, we should have used that.
And I think that, to me, is one of the greatest endorsements of that technology, that other people designing the hardware in the industry, great hardware in the industry, immediately saw the potential of what you were going to do.
But that being said, there were competing standards, and they didn't take off. And the reason they didn't take off is, I think, those
other things you mentioned. Guaranteed compatibility, guaranteed interoperability,
widespread industry adoption. How do you do that? I mean, Jim, talk to us a little bit about that.
And then let's kind of lead into the earliest breath of CXL. How did you go from an idea technology like PCI or PCI Express into
basically adoption on really every device?
Well, you know, after we got PCI going,
that's when Intel called me and asked me if I'd come join Intel.
The argument was we've just modernized the inside of the box.
We made it modern.
Now, this was 30 years ago, and it was the PCI bus.
And it's still being used.
Variants of that are still being used today.
But now we need to figure out a way to
attach devices to the machine. So I joined Intel. We put together the team, there were a few
working on some concepts already, and we put it together and we invented USB. And that was my
second technology initiative. So, you know, once again, you look at these two and they really paralleled each other in how we drove them.
And by the way, CXL is very similar as well.
With USB, we had a massive amount of companies.
I think, you know, within a year or a year and a half or so, we had like 1100 companies. But they were building all kinds of, you know, little gizmos
and gadgets that, you know, that you've seen plugged into the machine. That was an exercise
of just scale. It was how do you manage that many companies? That was a real busy time of my life as
well. But you know, let's move forward now to CXL, because you asked about this.
Well, there were actually three other technologies that for several years had been
trying to establish themselves. One was Gen Z, one was CCIX, one was OpenCAPI.
And I have nothing bad to say about any of those technologies.
Any one of those could have done the job and move forward.
But the industry was stuck.
All of these were like stuck in the mud for a few years.
And the industry really just didn't know which way to go.
Given that we have this,
and so many different smart people were working on solutions,
that was validation that the need was there
for this type of technology, bringing coherent interfaces
and allowing things like memory, et cetera, to be expanded.
So the need was there, but the industry was just confused.
And meanwhile, Intel had been working on its own technology.
It was the predecessor to CXL.
It was called Intel Accelerator Link.
And we decided, many companies were asking us to make this open.
And we responded to that. And we decided to bring this in. And, you know, fundamentally,
it was very similar to those other three technologies. So if the industry's confused with three technologies out there, it's not really
obvious why bringing a fourth one in would make things better. But in fact, what happened was
the way that we brought it in and the support that we had really brought clarity to the industry.
And all of a sudden CXL just took off like a rocket.
It was, and it's been a wild ride ever since then.
So I would say that the first thing we did
is we really pulled together a rockstar group of companies to be the initial promoters.
And when we put that slide up of the promoters, it was kind of a jaw-dropping experience for the
industry. You know, companies said, we need to be part of this. Then we worked really
hard to incorporate. We incorporated in, I think it was September of 2020.
Yeah, it was.
And the interesting thing was, if you looked at those other three consortia, we expanded the board of CXL beyond their original promoters.
And the president of Gen Z was Kurtis Bowman.
He was at Dell at the time.
He joined our board.
Mr. OpenCAPI, Steve Fields, he got a seat on the board.
And Mr. CCIX, Gaurav Singh, the president of CCIX, also got a board seat. So all of a sudden we had, you know,
grown from three to four consortia,
but the presidents or chairman of each of these organizations were now part of CXL.
And from there, the industry just focused on CXL. It was really clear that that was going to
happen. And it was, you know, as I said earlier, the growth has been phenomenal. I think we have
240 something members, you know, today, which for a relatively new technology, which just barely is putting products into the market now, is a very, very good indicator.
On that note, I want to ask you to clarify something.
So why did these people join the CXL Consortium?
What was it about CXL that brought them in instead of, you know, why didn't they like lock up and say, no, no, no,
I'm going to support only my thing. You know, what was it about CXL that made it something that they
wanted to be part of? Well, for three years, they had kind of been supporting their own thing and
progress wasn't being made. You know, the customers really wanted to see this happen.
And like I said, it was pretty clear that the momentum was here.
And they, you know, as I said, all of these technologies were fundamentally good.
And, you know, we, for an example, we immediately formed a liaison agreement with the Gen Z
Consortium.
Gen Z was moving out into fabrics. Now at the time,
CXL was not defined for fabrics, but we were working on CXL2, but we were also talking about
what we're going to do for CXL3. So, you know, CXL3 was announced last August. And it is a fabric-based architecture.
And Gen Z, you know, was in those meetings
and seeing where things were going to go.
They saw the momentum.
And I believe that they felt,
and I give them a lot of credit for this,
that we could get what we want
if we focus on CXL and move the technology we're working on or the ideas of the technology into this.
And they did it in a very mature way.
They didn't come in and say, you have to do exactly what we did.
But they came in with the usage models.
And they said, these usage models are important.
We've been working on this.
We know that they're going to be important, and we think we should add those usage models to CXL.
And it was decided while we were doing CXL2 that that's probably where we're going for CXL3,
which is what caused them to decide to fold and join. They actually donated all of their assets.
You know, there are three types of assets.
There was their financial assets, their IP, and their copyrights.
And they moved them all over into CXL,
and we made deals that made it easy for, like,
these members of the other consortia
to join into CXL.
We gave them credit for the membership dues they had paid in the other
organization, things like that, to make it easy. And, you know, basically,
the amount of gravity for CXL just increased as these types of actions
happened. It was really just, just an amazing thing to be in the middle of and feel and see happening.
I feel the consortium working means multiple companies, you know, multiple companies working
together, collaborating together to agree on something that suits everybody's
needs.
It's very hard for anybody to compete against that when you're one
company with a potentially proprietary system; you're taking on all of the
workload there, and from a very narrow focus compared to the ever-widening, you know, focus
of all the consortium members, and they're obviously still all getting what they need out of the standard.
So I would attribute a lot of the recent pace
and success of CXL adoption to that.
And it was fantastic to see the likes of Gen Z
throwing in IP, throwing in financial resources.
You know, it absolutely would have helped to get things going.
With 1.1, right now, we have memory expansion.
2.0, it starts getting a wee bit more exciting
around pooling of memory across multiple hosts.
3, it starts getting really interesting.
But in the short to near term,
what do you see the impact of CXL 2.0-level
devices being in the market? Craig, as you said, CXL 1.1 is
absolutely only a processor and a device connected together with wires, nothing between them.
It's a direct connection.
It's very similar to what every computer architecture has done,
where they could take two CPUs or multiple CPUs
and attach them with proprietary links and, you know,
be able to access each other's memory, be able to cache each other's
memory, have the caches remain coherent. Most computer architectures had that capability.
CXL brought this forward in an industry standard way for the first time. So that was what CXL did. This device could be a memory expander or a memory controller or a memory bridge.
I think those are all synonymous terms that are being used today.
And as you said, it just supports memory expansion for the most part. Now, CXL 2.0 introduced switching, which kind of
brings it more up to speed with where PCI already was, because PCI supported switching. So CXL 2.0 kind of
allows you to form the types of topologies that you would have with PCI. So now, with CXL 2.0, we could have multiple CPUs.
It's a handful of CPUs, let's call it.
I think it supports maybe up to 16.
And, through a switching architecture,
connect to multiple devices.
As you said, this could be used in the memory area. You could put a larger pool
of memory there and pool it amongst devices. Now, CXL2 supports memory pooling, but not sharing.
And let me define the difference there. Pooling will be, you take segments of memory and you assign it to one coherency domain.
You don't have two different processors in two different coherency domains accessing the same
single segment of memory. So you could segment the pool of memory and then assign those segments to various devices.
It doesn't support sharing.
By the way, sharing does come in CXL 3.0.
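To make the pooling-versus-sharing distinction concrete, here is a small illustrative model in Python. It is not a real CXL API; the MemoryPool class, host names, and segment counts are invented for the example. It simply encodes the rule described above: each pooled segment belongs to exactly one host's coherency domain at a time, and letting two hosts use the same segment is the sharing that arrives with CXL 3.0.

```python
# Conceptual model only, not a real CXL API: a CXL 2.0-style memory pool
# behind a switch. Pooling means each segment is owned by exactly one host
# (one coherency domain) at a time; cross-host sharing of the same segment
# is a CXL 3.0 capability, so this model rejects it.

class MemoryPool:
    def __init__(self, segments: int):
        # Map of segment index -> owning host (None means unassigned).
        self.owner = {seg: None for seg in range(segments)}

    def assign(self, seg: int, host: str) -> None:
        """Place a segment into a single host's coherency domain (pooling)."""
        if self.owner[seg] is not None and self.owner[seg] != host:
            raise ValueError(
                f"segment {seg} is owned by {self.owner[seg]}; "
                "sharing it across hosts needs CXL 3.0-style coherency")
        self.owner[seg] = host

    def release(self, seg: int) -> None:
        """Return a segment to the pool so it can be re-assigned later."""
        self.owner[seg] = None

pool = MemoryPool(segments=8)
pool.assign(0, "host-A")
pool.assign(1, "host-B")
pool.release(0)
pool.assign(0, "host-B")  # fine after release: segments can move between hosts
```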
And I think that the application for that will be really important, especially in areas
where the CPU was built for a particular application. So the cloud service providers,
you know, take Facebook. Their data centers run the Facebook app, essentially. You know,
it's a collection of apps, but you understand what I mean here.
And they could build hardware that's very specific to their needs.
That's an example.
HPC is another example where, you know, the National Labs, as an example, have this particular application they want to run.
And they build a machine specifically to do that. Memory sharing could
come in in a big way for those types of applications and allow us to get things done
that really have been out of reach up until now. But that's more of a CXL3 thing. So CXL3,
the big thing about CXL3 is we really unbound how many different processors and devices you could have.
You know, where I said it was like maybe I think it's 16 processor nodes with CXL2.
With CXL3, we support 4000 nodes.
So, you know, for sure we will have, you know, rack-level computing, where load/store types of operations are easily done within the 4,000 nodes.
And more than likely, with some investment from top-of-rack or middle-of-the-rack switches,
you'll probably see pod-level computing, where some number of racks, four, six, eight, ten, I don't know.
You will be able to build little clusters of computers at that time. We call them pods.
And those will be within reach of what's available to do with CXL3. So now you're going to see changes
in actually the architecture of the data center
because these pods become very, very powerful components.
Yeah, on that note, I want to bring in,
well, actually a couple of things.
First off, as a storage guy,
I'm pleased to hear you being very careful about saying devices and not just memory, memory, memory.
Because to me, I think CXL is about a lot more than memory.
And I can't wait to see where it goes next.
Another point that you make, though, is, you know, technologies, especially Gen Z, were already pretty advanced in terms of fabric technology, but all that's part of the CXL
consortium now, which again, pat on the back, especially to Gen Z and OpenCAPI for coming in
and saying, let's all work together. It's great. Will there be a CXL4? Because I know that not all
the Gen Z technology is in there yet. I mean, is that what happens next or does it just add on to CXL3?
Well, right now we're closing up on CXL3.1.
You know, probably in the third quarter, 3.1 will be announced.
And more than likely it'll be a public spec.
We've, you know, to date, every time we've come out with what we call a final specification,
that's when all the IP protections go into place for this.
So it goes out for a wide consortium level review, and then we make it into a final specification.
And then, as soon as we've made it a final specification, the very next
vote has been to make it a public specification. I expect that's the case. I mean, once again, back
to, you know, I think, Craig, you were saying about how you've
taken this technology and how do you make it available to the whole industry, or take the input?
When we donated the original, when Intel donated the original spec into the consortium, we did this knowing that now we no longer own it.
The consortium owns that specification.
And we get exactly one vote. So this is not just letting other people
come and play with our technology. We're donating it and it takes a new life of its own that is
driven by the entire industry. Any company can join the consortium. Any company can join at the contributor level,
which allows them to join any of the technical working groups. And then they have a say over
the direction that this technology is going to go. So this isn't Intel technology anymore.
Intel invented the basics of it. We donated it, and it's been collaborated on by the entire industry. And these
other consortia have joined in. So
what you're seeing now is a true industry effort.
Well, one of the big contributing
factors to adoption, I think, here is that
PCI Express, and PCI before it, always solved
real problems. You know, hardware manufacturers wanted, you know,
standardized slots with defined throughputs and a level playing field
for their products. PCI Express took that much further, and for a very long time
maintained that backwards and forwards compatibility, you know, which made
it hugely widespread; it's the global way of doing it. CXL is
now solving yet another industry problem, so it appears, certainly, to
be relatively easy to get 200 titans in the tech industry all to agree to do it a certain way.
So the fact that it might not solve 100% of their problem, it might only solve 98%,
but if everybody's happy with 98%, you know, you have only 2% to sort out after.
So the standardization, I think, has been hugely contributing.
This alternate protocol mode is something that the PCI-SIG
put into place. And essentially, as the device is booted and being configured,
it can report that it supports an alternate protocol called CXL.
At that point, instead of becoming a PCI device, it takes the PCI protocol and puts it aside,
and now you support the CXL protocol instead.
The CXL protocol provides capabilities that PCI doesn't have, such as this coherent
traffic, all of the cache coherency, all the other capabilities that we have with CXL come into play.
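A rough sketch of that idea, purely illustrative and not the actual Flex Bus training flow: both ends advertise which protocols they support during configuration, and the link comes up as CXL only when both sides list it, otherwise falling back to plain PCIe. The function and the sets passed to it below are invented for the example.

```python
# Hedged sketch of the alternate-protocol idea only, not the real hardware
# negotiation: each side advertises the protocols it supports, and the link
# comes up as CXL only if both sides list it; otherwise it stays plain PCIe.

def select_protocol(host_supports: set, device_supports: set) -> str:
    """Pick the richest protocol both ends advertise."""
    common = host_supports & device_supports
    # Prefer CXL (which adds coherent CXL.cache / CXL.mem traffic) when both
    # sides support it; otherwise keep the PCIe protocol the link already speaks.
    return "CXL" if "CXL" in common else "PCIe"

print(select_protocol({"PCIe", "CXL"}, {"PCIe", "CXL"}))  # CXL
print(select_protocol({"PCIe", "CXL"}, {"PCIe"}))         # PCIe
```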
But, you know, what's the advantage here for the CXL consortium? Well, the PCI-SIG's got 30 years of history,
31 now, actually,
of doing form factors,
creating a ubiquitous standard
that's essential in every computer.
And there's connectors,
there's retimers, repeaters,
there's just very deep industry know-how to make the electricals work.
And we were able to leverage all of that.
And we did this thing in full transparency with the PCI-SIG.
It had been a long time since I had been to a PCI-SIG meeting, but I went, you know, as chairman of CXL and we described what
we were doing. And, you know, we've been working cooperatively with the PCI SIG ever since.
So PCI SIG had a big part of us being able to be successful.
So let's, I guess, shift gears here for a moment as we're getting to the end of this discussion.
Let's talk about where CXL goes next. We talked about the adoption, the fact that CXL 1.1 is here. We've got hosts,
we've got devices. CXL 2.0 is honestly pretty much here too. I'm hearing that there's a lot of
work being done to bring that to market real soon now as well. Obviously, we talked to companies that are looking at 3.0 devices as well and further.
What should we expect to see from CXL this year, next year, the next few years?
And then let's take a moment and think about where it goes in the future.
Well, I can only go so far out. But, first of all, my team works with a lot of third parties who are building silicon around our first product with CXL on it.
For the last couple of years,
nearly 100% of the companies that are building products have been using our pre-production systems
to get up and running.
This is something that, you know,
my company has done and my team specifically
has been doing for many, many years at Intel.
So, you know, right now, what's going on? Memory is a big deal. And you thanked me for talking about
more than memory. But, you know, memory was really out of scope for any other technology.
With the PCI bus, we could easily do storage.
We could easily do networking.
And we could do accelerators.
Now, all of these could get much better with CXL and the capabilities that CXL brings.
And they will get better. But up until now, memory has been off the table
and CXL brings memory to the table.
And also those other elements that we talked about,
there have been, you know, attempts and products
that put together composable systems.
And you've seen those come to date,
but the thing that's been hardest to do, and nearly impossible, is composable memory. And it also happens to be the most
valuable because that's where the largest dollar spend is. So CXL brings in the last
really hard element, the one that's been out of reach. So you may see composable systems, and composable architectures, really start to take off.
So I think that that's a big deal.
And this starts coming into play even with CXL2.
CXL3, even larger levels of composability.
Realistically, right now, the industry is mostly doing proofs of concept and early products.
You know, any large data center operator who's going to put in memory is going to want to test this thing out to make sure it's rock solid before they load up the data center with this and turn on the switch. So I would say that the CXL1
phase is really, there'll be volume that's shipped, but it's going to be used to prove out the
fundamental architecture. CXL 2.0 will be the beginning of the hockey stick curve that really turns on the high volume.
CXL 3 and 3.1, which is what will be released later this year, is going to be very dramatic.
That's really where data center architecture
will evolve. And what's in CXL 4? You asked earlier if there will be one. We really don't know yet.
Our focus is completing CXL 3.1. And what we've done in each stage here, we start off with: what are the use cases that we want to cover?
And that's where we start.
And as soon as we finish 3.1, which will be later this next quarter, with maybe another quarter to make it public,
we will start focusing on what use cases we want to cover. We won't
be doing spec development. We'll be working on the use cases. We'll be getting the input
from the industry, from our membership. We'll be testing this. We'll be talking to the industry, what do they want to see next? And then we'll start developing the specifications
that address those use cases. So I'm not trying to avoid the question. We really don't know what's
going to be in 4.0 yet, but we are very confident that there will be a 4.0.
From a product perspective, that makes sense. You know, you're already so far ahead on spec, with 3.0 and 3.1 coming. It completely
makes sense to see how the market reacts and adopts what's already available to them. And they'll have to go through that,
learn, want to adapt, change.
And at that point, they're able to feedback
and say, we would like to do X, Y, and Z.
And that's going to be something
they may be looking at for four.
Plus, hopefully, Silicon Photonics
and all this other exciting tech.
Yeah, on that note,
I think we should really talk about UCIe briefly here at the end too
in terms of where this goes.
So that's another technology
that we've kind of danced around a little bit
on the podcast,
but essentially this chiplet standard
that I know that you've worked on as well, Jim. It has a lot of similarity.
It rhymes with CXL. You want to talk about that for a moment?
On UCIe, I'm not currently a member of the board. However, I was elected as an advisor to the initial board for UCIe.
And my job specifically was to get UCIe formed, up and running, incorporated,
and on, you know, two feet and going forward. So at the end of 2022, I resigned my position as an advisor.
And they have a great board as well.
Now, from a technology standpoint, UCIe is a chiplet architecture that goes way beyond just the wires.
It is very much an entire architecture
of how you do chiplets, including protocol, backing up all the way to the application
level. So initially there are three protocols that are supported, two of them being the ones we've
talked about, the PCI bus and CXL. So those are both native protocols. The third protocol is
what's called UCIe raw, which is an unstructured protocol. You could use it however you want to
use it. Now, I'm very confident the number of these protocols is going to continue to rise.
So these aren't the only structured protocols that will ever be supported.
We really expect that these are going to be two of the very early uses of UCIe.
And it's, once again, utilizing the ubiquitous infrastructure that PCIE already enjoys. And in the timeframe of UCIE, we do expect that CXL will have that same type of
ubiquity that PCI has today. So once again, just as CXL took advantage of the PCI
learnings, UCIE is taking advantage of CXL and PCI.
And it's pretty exciting to think that the same kind of interoperability, compatibility,
extensibility that we're talking about here between various components of the system that
will lead us to disaggregation and composability and pods and so on, is also being extended inside
the processor with UCIe. And to me, it is very much analogous to sort of the other side of the
mirror, if you would, of let's go inside the processor instead of just going outside the
processor and let's have that kind of interoperability and compatibility there. Now, it's not like people are going to be building their own
processors, but companies will. And I think that that's going to be really exciting to see where
that goes. So it's really been an amazing conversation. I hate to say it, but we have
kind of hit the limit here. I think we've gone longer with this one than any other episode.
But like I said, this is the cap
for our whole season of Utilizing CXL.
And I can't imagine somebody
who would do it more justice than you, Jim.
I really appreciate having you join us in this conversation.
Thank you so much for joining us today.
Thank you, Stephen and Craig.
Likewise, Jim. Thank you.
So Jim, if people want to continue this conversation, if they want to ask you questions or engage in some way, now, obviously, if they have the capability, they should probably look
at joining the CXL Consortium. They should probably look at contributing to these standards
and getting involved in that sort of standardization effort. But where can they continue the conversation with
you? Thank you, Stephen. Certainly, I believe that the very best way to get involved is to
join the consortium. And it's very easy to do. It's open to any company. And even to join at the contributor level, it's one of the least expensive in the computer industry.
It's $10,000 annually, and you can join and be part of any of the working groups.
You know, most organizations cost more money than that to be a contributing member of.
And then we have the adopter level.
We did something that is really unique:
we made that level free.
So that was, you know,
we had such a powerful group of companies,
we were a little afraid that, you know,
we were going to get a reputation
as being just for the tech elite. And that's not a reputation that you want to have. We really want all aspects
of the industry to be able to come and participate, so we created this adopter level, which is free.
And primarily, people who want to build this should at least be an adopter, because you get all of the IP protection that comes in a consortium like this.
So that's the big reason to join. But those are, you know, two ways that companies can join.
If you want to reach out to me, LinkedIn is probably best.
You know, Jim Pappas, if you search for Jim Pappas, Intel, throw CXL in there,
probably pretty easy to find.
Great.
Thank you so much.
And definitely looking forward to seeing more.
And I think we'll probably see you at some of the industry events.
I think you mentioned that you might be at Flash Memory Summit.
And yeah.
Yeah, Flash Memory Summit, I'm running a track called the System Architecture Track and CXL will be part of that.
Last year at Flash Memory Summit, almost every top presentation everywhere, no matter what it was, mentioned CXL.
So it's definitely a hot topic at Flash Memory Summit. And the other is Intel Innovation; you know, I'll be there as well.
And if you see me, let's have a chat.
Great.
Same with us.
If you see us at industry events, come on, swing by, say hello.
Check out the Tech Field Day events.
We just had a Tech Field Day event where we talked to some of the companies from the CXL universe, including the CXL Consortium itself.
And you'll find those videos on YouTube.
Just look for Tech Field Day and CXL and you'll find those presentations.
Also, you can tune in for our weekly Gestalt IT Rundown news program where we talk about,
well, we cover any CXL-related announcements.
Craig, where can we find you?
Where can we continue this with you?
I'm available on Twitter at CraigRodgersMS,
and you'll also find me on LinkedIn.
That's Craig Rodgers.
Thanks.
And as for me,
you'll find me again back here for Utilizing Tech.
We're going to do a wrap-up episode next week,
and we'll be kicking off our next season of Utilizing Tech a few weeks after that.
So stay tuned.
Thank you as well for listening to Utilizing CXL,
part of the Utilizing Tech podcast series.
If you enjoyed this episode, please do subscribe.
As I said, we're going to have another season coming soon. You'll find us in your favorite podcast application or on YouTube. Just go to YouTube slash Gestalt IT video. This podcast is brought to you by GestaltIT.com, your home for IT coverage from across the enterprise. For show notes and more episodes, go to our website, utilizingtech.com or find us on Twitter or Mastodon at Utilizing Tech.
Thanks for listening, and we'll see you next week with our final episode of Utilizing CXL.