Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 4x2: Using CXL in Software with MemVerge
Episode Date: October 31, 2022. Data throughput has grown in leaps and bounds with the advent of AI. But as COVID-era digital transformation left existing systems stressed out, CXL arrived on the heels of that. The newest memory solution that has everybody talking, CXL is full of promise for AI computing. With the release of v3.0, CXL has started to gather more steam. More companies are now dipping their toes in the CXL waters, bringing their own brand of CXL products to market and making the technology reachable for enterprises. In this episode of Utilizing CXL, Stephen Foskett and Craig Rodgers sit down with guest Yue Li, Co-Founder and CTO at MemVerge, and hold an illuminating discussion on the current CXL product market and what MemVerge is doing on the software side. Hosts: Stephen Foskett: https://www.twitter.com/SFoskett Craig Rodgers: https://www.twitter.com/CraigRodgersms Guest: Yue Li, Co-Founder and CTO at MemVerge. Connect on LinkedIn: https://www.linkedin.com/in/theyueli/ Follow Gestalt IT and Utilizing Tech Website: https://www.UtilizingTech.com/ Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/GestaltIT LinkedIn: https://www.linkedin.com/company/1789
Transcript
Welcome to Utilizing Tech, the podcast from Gestalt IT.
This season on Utilizing Tech, we're focusing on CXL,
an exciting technology that has the potential to transform the IT industry.
I'm your host, Stephen Foskett from Gestalt IT,
and joining me today as my co-host is Craig Rodgers.
I am a solutions architect,
and I'm very interested to continue learning about CXL.
And also joining us today on this episode, as always, we have a guest from the industry who's actually working to make this technology a reality.
MemVerge is a company that you may have heard about on our Tech Field Day events or on Gestalt
IT coverage generally.
And I'm happy to have Yue Li here from MemVerge
joining us for this discussion.
Hi, my name is Yue Li, co-founder, CTO of MemVerge.
I'm very glad to join the discussion.
So we, as I said, have had MemVerge
present at Tech Field Day before.
And essentially MemVerge is, in my understanding,
software that enables you to do cool things with memory on servers, hierarchical or tiered memory,
as well as some kind of magical virtualization-y, snapshot-y kind of cool stuff, too. First, let's start off with the current state of affairs with CXL. So as our
listeners know, CXL is an emerging technology based on PCI Express that lets servers basically
break a lot of the boundaries that we've previously had for systems architecture. But right now, CXL is not all of that yet. Right now, CXL is something very new.
It's just emerging. And the product landscape consists of memory expansion cards from Samsung
and SK Hynix, of support coming soon from AMD and Intel's server platforms, and basically software from you, right?
Yes.
So tell us a little bit more,
as someone who's working in this space,
what's real now in, I guess, memory expansion over CXL?
Well, I think right now CXL is still at the starting point. What we see is that the CPU vendors, Intel and AMD, are still sampling the next-generation CPUs that provide CXL support.
And the memory vendors are also pushing out early engineering samples of memory expansion cards. There are also CXL switch vendors trying to create early prototypes of those.
And as for us, we are a software company working on the next generation of memory management software to work together with this hardware.
So you fit nicely into this world, because the first CXL products are memory products.
Now admittedly, CXL will not be memory forever.
It's not just about memory.
Right. But to start with,
it is just about memory.
At least that's where the products, the initial products are.
And that just happens to be your area.
Yes, as a matter of fact, in the early days CXL was invented not only for memory but also for accelerators, such as GPUs and FPGAs, to allow those processors to communicate with each other using the memory on those boards, on those cards. I think the memory expansion card is a very natural derivation of those developments. So it actually came later, but it became a very important and interesting use case for CXL.
So Yue Li, tell me more about what your products are actually doing with memory.
Sure.
So, our product today is called Memory Machine.
So Memory Machine basically right now has two major features.
The first feature is called Memory Tiering.
Memory Tiering is essentially software that allows you to pool different kinds of memory together.
So, for example, in the old days that would be the local DRAM, which is faster, and also, let's say, Intel Optane persistent memory, which is slower.
But in the future, basically, it will be the DRAM
and other type of memory, such as the CXL memory,
that can be placed locally in the chassis
or remotely connected through some switches.
So it sounds as though you have already solved
a lot of the engineering challenges
around that memory tiering that would be applicable
whenever CXL is more widely available.
Yes, as a matter of fact, the Intel Optane persistent memory technology actually helped shape our product, because Optane memory and CXL share essentially the same software abstractions in the Linux OS.
So for example, our software was not designed only for Optane; it was designed to work with something called a DAX device. The DAX device is what the Linux operating system provides to represent kinds of memory that are different from DRAM. In the future, CXL memory will also be represented as a DAX device. So if our software works with DAX devices, it will naturally work with CXL.
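To make the DAX abstraction concrete, here is a minimal Python sketch, not MemVerge's code, of how software might enumerate DAX devices through the standard Linux sysfs layout; the device names are examples.

```python
# Minimal sketch: enumerate Linux DAX devices, which can represent either
# Optane PMEM or CXL-attached memory. Assumes the standard sysfs layout.
import os

def list_dax_devices(sysfs_root="/sys/bus/dax/devices"):
    """Return DAX device names (e.g. 'dax0.0'), or [] if none exist."""
    if not os.path.isdir(sysfs_root):
        return []
    return sorted(os.listdir(sysfs_root))

for dev in list_dax_devices():
    # Each entry corresponds to a character device under /dev,
    # regardless of what memory media actually backs it.
    print(dev, "->", f"/dev/{dev}")
```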
Also, CXL memory will be slightly slower than DRAM. Intel Optane is actually much slower than DRAM, but either way you still need some algorithm to place the right data in the right tier to balance out the performance. A similar algorithm applies to CXL, which helps our software naturally work well with CXL.
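As a toy illustration of the tiering idea, and not MemVerge's actual algorithm, a placement policy can rank pages by access frequency and keep only the hottest in the fast tier:

```python
# Toy tiering policy: hottest pages stay in DRAM, the rest spill to the
# slower tier (CXL or PMEM). Capacities and page IDs are made up.
from collections import Counter

FAST_TIER_PAGES = 4          # how many pages fit in local DRAM (toy number)
access_counts = Counter()

def record_access(page_id):
    access_counts[page_id] += 1

def plan_placement():
    ranked = [p for p, _ in access_counts.most_common()]
    hot = set(ranked[:FAST_TIER_PAGES])
    return {p: ("DRAM" if p in hot else "CXL") for p in ranked}

for p in [1, 2, 1, 3, 1, 4, 5, 2, 6, 1]:
    record_access(p)
print(plan_placement())      # e.g. {1: 'DRAM', 2: 'DRAM', ..., 6: 'CXL'}
```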
So whilst we're all aware of the current roadmap
regarding Optane, you actually think it's an improvement
to be using CXL-based RAM versus existing Optane?
I think so.
So from what we've observed, the DDR-based CXL memory available right now provides much higher throughput and much lower latency than Intel Optane persistent memory. One of the most important reasons is simply that it uses DRAM, not persistent memory media. It also uses PCIe Gen 5 and Gen 6, which provide very high throughput and lower latency as well.
So memory tiering is one aspect of your solution.
What other features are MemVerge providing?
Yes.
So the other set of features we provide is called in-memory data services. One such example is what we call in-memory snapshots.
The motivation there is to protect applications that have a large memory footprint. Naturally, if you have CXL or persistent memory, your memory footprint will grow, because the typical use case will be big memory applications; otherwise, you wouldn't need that kind of memory, right?
If your application has a large memory footprint, it is very difficult to protect the data if there's a crash. For example, think about big in-memory databases: if there's a crash or a failure, you have to reload all the data from persistent storage back into memory to recover the state, which takes a very long time.
So what we offer is essentially in-memory data snapshot technology, just like the snapshot technologies used to protect storage systems. We can take a snapshot of the in-memory state of those big memory applications, and the snapshot can help recover them instantaneously without using slow storage.
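One well-known way to get a crash-consistent snapshot of a live process without stopping it is copy-on-write via fork(); the following is a rough sketch of that general technique, not a description of MemVerge's actual implementation:

```python
# Sketch of copy-on-write snapshotting via fork(): the child process sees a
# frozen image of the parent's memory and serializes it while the parent
# keeps running. Illustrates the general technique only.
import os
import pickle

state = {"rows": list(range(100_000))}   # stand-in for a big in-memory dataset

def snapshot(path):
    pid = os.fork()
    if pid == 0:
        # Child: `state` here is a copy-on-write view, frozen at fork time.
        with open(path, "wb") as f:
            pickle.dump(state, f)
        os._exit(0)
    os.waitpid(pid, 0)   # a real system would let the parent keep working

snapshot("/tmp/state.snap")
```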
I've seen some things online around your technology, particularly around AWS and spot instances. How are your customers leveraging that capability, and what advantages do they get?
So on the cloud, basically, we have an interesting use
case with spot instance.
So the spot instance is basically one kind of VM instance provided by most of the cloud vendors to help you reduce cost. It's very cheap to use, and users can enjoy the same kinds of configurations they would want from an on-demand instance. But there's always a trade-off. The trade-off is that the cloud vendor can reclaim the resources anytime and give you just 30 seconds to two minutes to react. So if they reclaim the resources, then basically everything you are running will be gone immediately.
So what we provide is an in-memory snapshot capability. We built management software that helps users run their workloads on spot instances. Meanwhile, we take periodic snapshots of their in-memory state and back them up to some shared persistent storage, such as S3, so that once the cloud vendor starts reclaiming those instances, we can automatically ship those snapshots to a new spot instance and quickly recover the workload from the previous spot instance on the new one. The workload can continue running without losing its past state, so you don't have to start your compute from scratch. As the end user, you still enjoy the same kind of spot instance and the same low price, and you can transparently make sure your workload finishes on time without too many interruptions.
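As a rough sketch of that flow: the metadata endpoint and the roughly two-minute warning are AWS behavior, but the bucket and snapshot path are invented, and IMDSv2 would additionally require a session token.

```python
# Sketch: poll EC2 instance metadata for a spot interruption notice, then
# ship the latest in-memory snapshot to S3 before the VM is reclaimed.
import time
import urllib.error
import urllib.request

import boto3

NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending():
    try:
        urllib.request.urlopen(NOTICE_URL, timeout=1)
        return True                      # 200 => reclaim is scheduled
    except urllib.error.URLError:
        return False                     # 404/unreachable => still safe

s3 = boto3.client("s3")
while not interruption_pending():
    time.sleep(5)                        # periodic snapshots would happen here
# Roughly two minutes remain: evacuate state to shared persistent storage.
s3.upload_file("/tmp/state.snap", "example-snapshot-bucket", "state.snap")
```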
Sounds like a great new layer of protection.
And yeah, great capability in the cloud.
So getting back, though, to how the memory is used. As you mentioned, there are certain similarities between Optane persistent memory and CXL memory,
especially in terms of how it is presented in Linux
and the sort of software that is needed
to enable those things.
And you happen to have created that software
that works with that. How is this memory presented to the application? Does the application see
it as different, or does the application just see more memory?
Yes, so that's an interesting question. I think right now, as far as I know, there
are at least two ways, or two important ways that the application will see this.
So the first way is probably more transparent, which is basically just a very natural memory expansion.
For some CPUs, if you set the BIOS correctly, that memory will automatically be counted as system memory. So when you boot up the system, you will see a large pool of memory you can use without doing any system setup, and of course some of it comes from DRAM and some from the CXL memory.
So that's the first one that's most transparent.
The second way is that the CXL memory will be represented as a device. If it's represented as a device, when you boot up the system you won't see that memory show up as system memory; it is all captured by the device. Then you need some software, such as ours, to virtualize the device into system memory so that you can use it transparently.
Or you might want to use that device for other purposes,
such as maybe you can turn it into, let's say,
a shared memory storage system or shared memory file system. So in that case, you can also use it for storage applications as well.
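In the transparent case, CXL expansion memory typically shows up as a CPU-less NUMA node. Here is a small sketch of spotting such nodes via the standard Linux sysfs layout:

```python
# Sketch: find memory-only NUMA nodes, which is how transparently attached
# CXL memory typically appears. Uses the standard Linux sysfs layout.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    kind = "CPU-less (possibly CXL expansion)" if not cpus else "has CPUs"
    print(os.path.basename(node), kind)
```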
Yeah, and I think ultimately,
do you think the goal of a big memory and memory expansion is going to be to
have transparent access to a huge pool of memory?
Or do you think that applications are going to want to have a discrete access to
different types of memory?
Yes, I think both will come. But personally, I think the first one, transparently accessing the memory, will come first. Because it's a new technology, you don't expect the old applications to be significantly rewritten to leverage those discrete, different types of memory devices.
I know it has happened; Oracle and SAP HANA modified their software, but it took lots of effort. So most of the existing applications, VMware, KVM, containers, will not be modified. They will prefer to use the memory transparently first, just to start tasting the sweetness of CXL memory expansion.
Then there will be application developers who say, oh, this is so nice, I think I can do better. In that case, they will ask for APIs and SDKs. Companies like Samsung are pushing out the SMDK and others, and we have our own development toolkit as well. With those SDKs, you can try to best leverage these devices, precisely allocating memory and managing the different tiers explicitly.
And as I mentioned, this already happened with SAP HANA and Oracle in the era of Optane persistent memory.
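For a flavor of what explicit tier management looks like beneath such SDKs, here is a sketch using ordinary devdax mapping rather than any particular vendor's toolkit; the device path is hypothetical.

```python
# Sketch: map a DAX character device directly, so the application decides
# which allocations land on the expansion tier instead of ordinary DRAM.
import mmap
import os

DEV = "/dev/dax0.0"            # hypothetical devdax exposing CXL/PMEM memory
SIZE = 2 * 1024 * 1024         # devdax mappings are typically 2 MiB aligned

fd = os.open(DEV, os.O_RDWR)
buf = mmap.mmap(fd, SIZE)      # this buffer lives on the device, not DRAM
buf[:5] = b"hello"             # writes go straight to the mapped tier
buf.close()
os.close(fd)
```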
Well, it's important that you mention SAP HANA and Oracle
and big data applications that have been created to use Optane PMEM. It sounds like those would be in a good position to also leverage CXL-based memory, because they've already been modified to handle different classes of memory. This is another different class of memory, but it sounds like those would be pretty adaptable
to the CXL future.
Yes, in general I think their existing modifications should partially work with CXL memory. Why do I say partially? Because one of the most important factors they changed their software for is persistence. If you look at Intel Optane, Oracle and SAP HANA both consider persistence in their new designs. They actually call CPU cache flush instructions to persist the data they write into the persistent memory. Now we are entering the CXL memory era, and the persistent memory is no longer there. So they need to modify that code, or maybe some of that code no longer has the right assumptions, because CXL memory is not persistent. They may have to turn those paths back into a cache mode, because you cannot hold data there anymore.
And I guess the other aspect is that Optane was bigger and cheaper than DRAM.
CXL-based memory, I hate to say it, but it's not going to be as big and cheap as Optane
promised to be because it's still DRAM and it's still
going to cost a lot of money. Yes. So from an architecture standpoint, there's also the aspect that people may have optimized an application for, or hoped for, a bigger pool of persistent memory based on Optane than they will be able to get with CXL.
Yes. Yes. I think in the CXL era, because of the use of DRAM,
it gets more challenging to reason about the cost.
So you have to pay for the CXL card.
In the future, maybe you have to pay for the CXL switches.
And the ways to reduce cost will also be different, but they will sound very familiar, in the sense that they will be similar to how you reason about storage cost reduction.
In this case, imagine that multiple compute servers share a large memory pool. You may start introducing things like thin provisioning, similar to storage systems, where you basically pretend that the compute servers have a large amount of memory to use, but the actual memory you have in the box is much less than that. So that's another way to reduce the cost.
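Here is a toy model of that idea, with all numbers invented, just to show the accounting:

```python
# Toy thin-provisioned memory pool: servers are promised more memory than
# physically exists, and pages are only committed when actually touched.
class ThinMemoryPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.promised_gb = 0      # sum of what servers were told they have
        self.committed_gb = 0     # what is actually backed right now

    def provision(self, gb):
        self.promised_gb += gb    # over-commit is allowed, as in storage

    def touch(self, gb):
        if self.committed_gb + gb > self.physical_gb:
            raise MemoryError("pool exhausted: add capacity or migrate")
        self.committed_gb += gb

pool = ThinMemoryPool(physical_gb=256)
for _ in range(4):
    pool.provision(128)           # four servers each believe they have 128 GB
pool.touch(64)                    # only real usage consumes physical memory
print(pool.promised_gb, pool.committed_gb)   # 512 64
```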
And the other thing I think will come is that CXL seems to be starting with DDR5 chips, but I think it will naturally start using DDR4 chips as well, which are cheaper than DDR5.
And also, personally, I strongly believe
that the persistent memory technology
might come back from other vendors in the future
because I do see it's a promising technology with lots of valuable use cases as well. Yeah, and in fact, some companies, specifically Samsung, have a flash-based persistent memory offering, and I bet that's going to be on CXL as well. Yeah, I think so.
It filled a nice gap between NVMe and RAM in terms of performance and latency.
You know, it was an order of magnitude better.
And I think we need to talk about the elephant in the room, which is Optane. Yeah.
So a lot of people are looking at CXL as sort of the replacement for Optane.
And that's somewhat true
because, as you said, as we've talked about here,
it kind of uses similar interfaces
and it fills similar gaps and so on.
But we've also talked about the fact that they're different.
You know, Intel, we know,
has ended development of Optane.
That doesn't mean that Optane is off the market.
And in fact, because CXL really needs the next generation
of Intel or AMD CPUs to work, the current generation
is very Optane dependent.
And in fact, Intel has said, or at least previously said, that they do plan to develop a third generation of Optane, the PMem 300 series.
They're still shipping the 200s.
By all accounts,
they still have plenty of Optane in the market.
So Optane is not quite as dead as people say,
but it is a dead end.
And CXL is the next wide open road beyond it.
That's my opinion. What do you guys think? Actually, Craig, let me ask you: what do you think of Optane's demise?
The announcement was a shock and surprise to many people in the industry.
the benefits of Optane were seen and recognized,
and I think they will continue to be
with persistent memory via CXL.
There were a lot of workloads and use cases
where they benefited from it.
A few of the solutions that you mentioned
scale up very well in terms of memory,
so CXL is plugging a hole there.
I think we have to get the servers out.
We need Intel and AMD to get the servers out.
That'll enable companies like yourselves to have actual kit
that you can develop your products further on,
add more features and services,
and that'll be the same way across the entire industry.
It's a logical progression for CXL
because we know where it's leading,
but even in the near term, there's immediate use cases
for being able to add huge quantities of RAM
to a single server.
So I think it'll succeed initially based on that alone,
but in five, 10 years time it'll be much more functional,
but straight away it's solving a problem. Anything else to add about Optane?
Yeah, I think CXL is, naturally, you call it an extension; I would actually call it a natural generalization of PMem. So first of all, there is definitely a plan for PMem to move to the CXL bus, right?
And you can see that during the development of PMem, lots of operating system software, such as DAX device support, tiering support, and all the rest, became more and more mature. All of this prepares the ground well for CXL.
And as a matter of fact, if you look at the kernel developers who worked on PMem, today they have all become developers supporting CXL.
And if you look at the end customers, there is certainly a strong need for big memory. PMem was one solution for that, and CXL will definitely continue to exist and will solve this problem in a better way.
And just to connect this with the previous seasons of Utilizing Tech, where we focused on AI and ML, it seems to me that machine learning is one area that was moving toward big memory, and a promising adopter of it.
I think that that's still going to happen.
And there are actually a lot of interesting use cases around CXL for AI and machine learning. You mentioned memory pooling before. In the future, CXL will enable basically a shelf of memory to be shared among multiple servers sitting above or below it, with reasonable latency and reasonable throughput.
And there will be software like yours that can enable those servers to share that data.
So if somebody's working on a machine learning training set, for example, you could do a
snapshot halfway through and then deploy that training set to other systems in order to accelerate that.
You could recover if something happened during training.
You could even have a situation where a set of data
is actively accessed by multiple systems at the same time
and parallelize training, which is pretty
exciting as well. So I think that just adding CXL to machine learning actually makes a lot of sense.
And I hope that those of our listeners who stuck with us through three seasons of Utilizing AI see that Optane was a promise, but CXL is actually delivering on the promise of big memory for machine learning applications.
And that's exciting, and that's cool and relevant.
The other thing that I want to bring home here is kind of turning to the future, where
CXL technology goes next.
CXL wasn't just developed to enable big memory.
CXL was also designed to enable sharing of accelerators and resources.
And again, connecting this to utilizing AI,
the most common accelerator that will be shared over CXL is GPU.
And so we can see a future where there may be a shelf of memory that's shared among
multiple servers. There would probably also be a shelf of GPU accelerators that would be shared
dynamically with the servers that need them. That's a very exciting prospect to people with
big memory and machine learning applications. Can you help us, Yue, since you are very plugged into the CXL market?
Give us a reality check: in six months, what's really going to be on the market? I know you can't say for certain, but what do you see as the product categories that will be on the market to utilize CXL?
Yes.
So I think most importantly, on the hardware side, I'm really looking forward to CXL switch samples, early samples or early reference implementations, that allow you to connect multiple compute servers to a server chassis where you're plugging in many CXL memory expander cards or accelerators such as GPUs and FPGAs.
Besides the switches, I'm also looking forward to more mature CXL memory expansion cards from different vendors, with better performance and with more different types of media beyond DDR5.
And I'm also looking forward to seeing CXL support from the accelerator vendors, such as AMD and Intel, for their GPUs and FPGAs.
So what do you think, Craig?
You've been at OCP Summit.
You've been studying the products, the initial products,
and what the companies are coming out with.
Is this realistic? What do you think is the realistic time frame?
It is realistic. At the summit, we were able to see memory being shared between servers on engineering samples, not final products. It wouldn't be ASIC-backed; it would be FPGA, so obviously not a final product, but fantastic to see that it's doable. It's currently in play, and as with everything else these days, it's API driven, which is going to give programmers, developers, and DevOps more control over how their workloads are run and the resources available to those workloads. So it'll open up a whole raft of options in how we approach workloads in the future, because we've gained more agility, more capability, and more RAM.
Yeah.
I was very interested at the summit.
It was fantastic to see so many big names all there
showing their latest and greatest.
We saw Samsung, we saw Marvell, we saw big brands that sell a lot of equipment worldwide, all of which appeared to be heavily invested in CXL technology.
So it'll be interesting to see these products
come into market within the next six months, hopefully.
Yeah, absolutely.
And so at OCP Summit, just to let people know,
we worked with MemVerge to host a CXL Forum
where we spent an entire day with presentations
from dozens of companies in this industry,
whether they're companies that are developing hardware that will support CXL or companies that are developing software that will support it,
or, and I think this is probably the most exciting thing for me,
companies that are using it.
So, for example, Uber, hyperscalers, Microsoft, Google, they are also looking at this technology as ways to solve real important business challenges that they have in terms of big memory and flexibility.
And they're all committed to this as well. And as Craig points out, too, if you look at the CXL Consortium website, if you look
at the CXL forum presenter list, it's hard to see a company that's not in this space.
In fact, unlike previous attempts at this technology, like Gen-Z or CCIX or the rest,
CXL has basically every name associated with it.
And it's exciting to see a technology like that.
I mean, Yue, again, you guys are in the midst of this.
What do you think are the interesting software and business challenges that are going to be solved by some of these companies?
I mentioned Uber, for example.
What are they doing with CXL?
I don't know exactly what they are working on, but for us, I think one of the most important challenges is to provide very easy-to-use management software for CXL memory expansion cards or pooled memory. The goal is always to serve big memory, to provide the end user with big memory when necessary. The "when necessary" part is the most important: you have to use software to make the memory show up at the right time, when it's needed. And you also have to carefully return that memory back to the pool when applications release it.
Being able to schedule the memory to show up across different servers, certainly with different SLA requirements or priorities, is already a very challenging task, and it has to be easy to use.
I think currently that functionality of returning the memory actually requires a reboot of the server. However, it will become hot remove.
Yes, I'm aware of that. I think the Linux kernel developers are busy working on this feature to make it hot pluggable. Yes, a difficult challenge.
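The kernel does already expose the underlying mechanism for taking memory away from a running system: memory blocks can be offlined through sysfs. A sketch follows; it requires root, and blocks holding unmovable pages will refuse to offline.

```python
# Sketch: ask Linux to offline memory blocks via sysfs, the same mechanism
# that hot-removing pooled memory would ultimately rely on. Needs root.
import glob

def try_offline(block_path):
    try:
        with open(block_path + "/state", "w") as f:
            f.write("offline")
        return True
    except OSError:
        return False              # kernel refused (e.g. unmovable pages)

blocks = sorted(glob.glob("/sys/devices/system/memory/memory[0-9]*"))
offlined = sum(try_offline(b) for b in blocks[-4:])   # try the last few blocks
print(f"offlined {offlined} memory block(s)")
```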
And it's going to be fun though when we have that,
because that's the road that leads us to this magic wonderland, still admittedly quite far off, of composability,
of being able to dynamically reconfigure a server,
add accelerators, add memory, add CPU,
add storage, add IO, whatever it is,
on the fly and basically have a server
that is the size you need it to be now.
And that would, I think, be a wonderful future.
Yes.
So I want to ask one more question at the end here of our guest.
And I'm surprising you with this question.
I apologize for that.
I'll give you a moment to think if you need it.
Set aside your experience, set aside what's really happening, and go a few years into the future.
What surprising way could CXL transform the industry, any part of the industry? What
surprising side effect will CXL have? It's a very good question. I actually
think about this very frequently because when I have time I do think about these problems.
So one of the things I keep on dreaming about
is a completely CXL-based, software-defined data center. I think CXL itself is paradigm changing. Think about this. Previously, look at HPC or enterprise compute. You want to run an EDA workload. What you do is submit the EDA job, or some other compute-intensive job, to your scheduler. There's a job scheduler, LSF or Slurm, and you have to specify in the job file how many CPUs you want, how many GPUs you want, how much memory you want. Then the scheduler says, okay, let me wait for a while, let me look for resources that are available. Okay, now you have them; let's run it there. And if there are none, then you have to wait. Sometimes you have to queue even for days for this precious resource to show up. Now, with CXL, all these compute accelerators,
memory, storage, now are all basically shareable,
and they can all scale independently.
So now, if the software is all ready, you will see that I want to submit a job that requires this number of GPUs, this number of CPUs.
What the scheduler will do is say, okay, let me call the API of the switch and make a virtual server for you. I will provision, say, two GPUs from this pool, another 10 gigabytes of memory from that pool, and so on and so forth, and create that for you so that you can run the job maybe entirely in the CXL chassis without involving the host CPU at all.
So there could be a chance, if you think about those cards: today the accelerators could be GPU cards or FPGA cards, but think about cards, maybe related to computational storage, that just carry ARM cores, and those are CPUs. Essentially, you can use ARM cores on the CXL chassis, and you can put your data in CXL memory cards in the same chassis. So all the compute happens on CXL devices, and there's no longer anything to do with the host CPU.
And that's something I think is a bit crazy,
but I think it might be coming in the future.
And I think it might be a warning to AMD and Intel.
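As a purely hypothetical sketch of that vision, with every name invented for illustration, a scheduler composing a virtual server from pooled devices might look something like this:

```python
# Entirely hypothetical: a scheduler composes a "virtual server" by asking a
# CXL switch's management API for devices from shared pools. No standard API
# like this exists today; all names here are invented.
from dataclasses import dataclass

@dataclass
class JobSpec:
    cpus: int
    gpus: int
    memory_gb: int

class CxlSwitchAPI:               # stand-in for a switch's management API
    def allocate(self, kind, count):
        print(f"attaching {count} x {kind} from the shared pool")
        return [f"{kind}-{i}" for i in range(count)]

def compose_virtual_server(switch, job):
    """Instead of queueing for a physical box, assemble one from the pools."""
    return {
        "cpus": switch.allocate("arm-core", job.cpus),
        "gpus": switch.allocate("gpu", job.gpus),
        "memory": switch.allocate("memory-gb", job.memory_gb),
    }

server = compose_virtual_server(CxlSwitchAPI(), JobSpec(cpus=8, gpus=2, memory_gb=10))
```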
So you think it's going to be transformative
to batch job processing, that approach?
Yes, I think so.
Because now you're adding a whole new dimension of capabilities?
Not only to those batch jobs,
but to all the modern schedulers,
such as the Kubernetes scheduler,
or other container frameworks such as Singularity,
and so on and so forth.
AI modeling and inferencing?
Yes, yes.
Or even maybe for VMware, you know, the vMotion, right?
They can, you know, they can move the VM to any place.
They just create a new virtual server and then move the VMs there.
And it's funny because that sounds kind of amazing and far off, but what you're describing actually is very similar or related to what's currently happening, for example, with DPUs in VMware.
Yes.
And what's been proposed with computational storage.
Yes.
And even HPE's The Machine.
All of these things are ideas that I think have been around in the industry for a while,
and CXL might make those things possible.
So I love this visionary idea,
and I hope that it works.
I do.
I think all the major clouds
are going to move in that direction.
That's great.
Well, thank you so much for joining us
for this episode of Utilizing CXL.
It's great to have a company that is actually right there
making this happen today. Again, this whole podcast, Utilizing Tech, is focused on actually
making practical use of this technology, not just dreaming about where it goes. It's fun to dream
about where it goes, but in terms of practical use of the technology, memory expanders, next-generation AMD and Intel chips, and MemVerge software are making it work.
And that's really using it.
And I appreciate that.
So thank you so much for joining us.
Before we go, let's cover where to connect with us.
Craig, anything you want to pitch? What's going on with you?
You can contact me on LinkedIn, Craig Rodgers.
I'm at CraigRodgersms on Twitter, and my blog is CraigRodgers.co.uk.
And as for me, you can find me on
most social media at SFoskett.
I also host a weekly
IT news show called the Gestalt IT
Rundown, as well as
a weekly podcast, the
On-Premise IT Podcast, where we
get folks like me and Craig
together to talk about various
industry topics, and you
can find those in your favorite podcast application.
Thank you for listening to Utilizing CXL,
part of the Utilizing Tech podcast series.
If you enjoyed this discussion, please do share it
and give us maybe a rating in your favorite podcast application,
because that does help.
This podcast was brought to you by Gestalt IT,
your home for IT coverage from across the enterprise.
For show notes and more episodes, go to utilizingtech.com or find us on Twitter
at Utilizing Tech. Thank you for joining and listening, and we'll see you next week.