Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 4x3: How CXL Can Right-Size Server Memory with Ryan Baxter of Micron

Episode Date: November 7, 2022

Although x86 servers have some configuration and expansion options, they are increasingly monolithic, putting customers at the whim of the system vendor. CXL promises to change this, allowing customers to configure, and reconfigure, servers according to their needs. On this episode of Utilizing CXL, Ryan Baxter of Micron Technology joins Stephen Foskett and Nathan Bennett to talk about the options that CXL brings to the table. One of the first benefits of CXL technology is memory expansion, allowing servers to be created with exactly the right amount of memory instead of over-populating memory to fill channels and meet needs. But it will soon allow truly modular servers with exactly the right combination of CPU, memory, storage, and I/O.

Hosts:
Stephen Foskett: https://www.twitter.com/SFoskett
Nathan Bennett: https://www.twitter.com/vNathanBennett

Guest:
Ryan Baxter, Senior Director of Marketing, Micron Technology. Connect with Ryan on LinkedIn: https://www.linkedin.com/in/ryan-baxter-a6a24a3/

Follow Gestalt IT and Utilizing Tech:
Website: https://www.UtilizingTech.com/
Website: https://www.GestaltIT.com/
Twitter: https://www.twitter.com/GestaltIT
LinkedIn: https://www.linkedin.com/company/1789

Transcript
Starting point is 00:00:00 Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech focuses on CXL, a new technology that promises to revolutionize enterprise computing. I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. Joining me today as my co-host is Nathan Bennett. Hey, Stephen. So Nathan, you and I have spent a lot of time at Tech Field Day events and Cloud Field Day events and so on. And we talk about server architecture, we talk about servers, we talk about sort of the shape of things to come. And I think one of the things that kind of kills me is the fact that
Starting point is 00:00:43 increasingly servers, especially commercial servers, are pretty monolithic. Basically, you buy from, I don't know, a three or four letter vendor and it comes in and that's basically what you've got for the rest of time. You know what I mean? I know exactly what you mean. It's fun being the Kubernetes guy and hear monolithic being used in a different term than application. But the analogy still fits. Something that is solid. That's one main thing. If you want another of those things, you add another one of those things. If you want to modularize it or add another piece of it, you basically get a shaking head and they stay, well, probably not today. And then you have
Starting point is 00:01:23 to go and buy a whole other thing. And even though you do have some options and some flexibility, it's sometimes those options aren't really a lot. There's a few different choices. And you can put an add-in card in PCI Express, but that's basically what it is. And for me, that's one of the things that I'm most excited about with CXL. Number one, we're going to have PCIe switches and lots and lots of expansion slots and lots of compatibility, hopefully. And that's why we decided to invite on somebody from the panel that we did at OCP Summit as part of the CXL forum, Ryan Baxter from Micron. Welcome, Ryan. Hello, guys. Thanks for allowing me to join the party here. Just a bit about myself. I head up
Starting point is 00:02:07 the data center segment at Micron Technology, also heavily involved in driving the strategy for CXL for the company. We are pretty excited about this new interface. as Stephen had alluded to, really a, you know, first of its kind, an interface that really the industry has been really, you know, dying to have for several years now, perhaps decades. So, you know, we're pretty enthusiastic about where it's going to go, the optionality it brings to customers. So happy to join. Thank you very much. So, Ryan, a lot of people probably have heard of Micron. You guys are inside a lot of servers, including the one that's sitting right over there. I've got some memory and so on. But I think one of the things that kind of strikes me is that Micron has a lot of different technology, but you're kind of at the whim of the system vendors, right? I mean, the system vendor is the one who's going to be deciding
Starting point is 00:03:11 whether to include you or not. And CXL in a way, I mean, it kind of is a can opener, right? It breaks open the servers and gives the customer the ability to make more choices, right? Yeah, it really does. It's one of those things where it's still standard, right? We're not, you know, talking about completely getting away and making something so proprietary and, you know, extremely high performance for one use case. It's still a very much a standard front side interface, but where the optionality comes into play is what you do behind that interface. And that's really, really interesting to us and we're finding to our customers because it allows a company like Micron to co-innovate with its customers where you're dialing in certain aspects of this technology and this other technology and bringing them together to really drive a just right type of solution. Nothing more, nothing less. And, you know, contrast that to very deterministic types of interfaces, you know, like memory interfaces, which have, you know, served us very, very well for some time now.
Starting point is 00:04:22 But it really, you know, kind of, you know, halts the innovation that you can do because it's because of the determinism associated with interface. You have to be, you know, very much, you know, in the box that's previously been prescribed, whereas CXL, you know, it allows you to really think outside that box in a pretty significant way. I really like that because at the end of the day, we, you know, describing the box, this monolithic thing that, that we tend to be kind of locked into becomes this problem that we, nobody really likes. I, we, we deal with it. We kind of assume, this is just the way things are. Whereas CXL is breaking the mold. Now,
Starting point is 00:05:04 now we get to see things differently in terms of where technology or where a bus has really led us to. I find the great way that you phrased it there. Are you seeing from Micron the different use cases and solutions where this can really fit in in terms of outside of just the compute module? Or are we seeing more in this stabilized area? Yeah, we certainly are taking a pretty hard look at, again, because of the flexibility interfaces of Fords, we're taking a look at differences in things like power consumption, where I'd really love a particular capacity,
Starting point is 00:05:42 but I want a better power profile. Or we're looking at things like larger capacity or even non-binary capacities. You know, especially recently, cloud hyperscale customers have really dialed in their workloads and use cases. And frankly, we're down to the, you know, tenths of gigabytes per second or tenths of gigabytes in capacity in terms of the requirements. And, you know, that's certainly not something you can easily do in the standard memory interface. With CXL, you can do that, right? You can dial in, you know, exactly what customers are needing, whether it's capacity, whether it's bandwidth, and whether it's, you know, more than just memory expansion. You can actually start to think about how you can use compute
Starting point is 00:06:26 off of, say, a Type 1 or a Type 2 device in a CXL interface. So, again, it really kind of blows the lid off of our discussions. It's very interesting to see where these things are going and how much innovation can be unlocked just by virtue of the fact that this interface is now open and non-deterministic. Yeah, and I think that that's actually really important is that it's not necessarily even about reconfiguration and composability and rack scale and all these things that we would love to see happen as much as it is right sizing. And so if you're building out a server, you have to fill all
Starting point is 00:07:05 four memory channels or else you have wasted performance. But how big do you put in there? And as you said, non-binary, that word means something different in system architecture than it does in other realms. In system architecture, I think what you're talking about, and I'm pretty sure what you're talking about, is basically, do I put 32 gig, gig 128 gig 256 like these are the dim these are the dim sizes and if i need like if i've decided that my server needs like i don't know 728 gigs of ram it's really hard to hit that number without going way over and blowing up the bill of materials, you know? Right. Or way under, right? So the sort of chunks of granularity that you have to work with are much less usable in the way of standard DRAMs because as as you said, the 32, 64, there is some flexibility
Starting point is 00:08:08 because there are 96 gigabyte DIMMs coming in DDR5 timeframe, but still, CXL, it doesn't matter if it's 32 or 36 or 41 gigabytes. As long as the system architecture that's connected to it can use that amount of memory, it's possible. And it's just simply not possible with the standard DRAM footprint. Yeah. And the funny thing is, people would say, well, that seems like a corner case, but it's not a corner case when you think that a terabyte of memory costs as much as like the rest of the server, you know, and so suddenly people are basically spending I mean, the again and again, we hear about this Microsoft Azure study where they showed that basically they were spending like 50% too much on their servers, if I remember correctly, because they had
Starting point is 00:09:00 to hit this water high watermark or low watermark, but they ended up having to equip servers with this much more, just because that was the only way to get there. And it reminds me as well as some of the presentations we've had at Tech Field Day with some of the bigger enterprises and hyperscalers where they've talked about the really fine graining that they do when configuring servers. And they match exactly how much CPU in terms of cores and gigahertz and, you know, as well as, you know, how much memory, how much storage, how much network bandwidth, all of these things, they try to match it and right size it. And I remember one of the companies telling us that they decided not to go with, I think at the time, it was actually even 10 gig ethernet, because they found that
Starting point is 00:09:45 it would just blow up the entire bill of materials for the rest of the server in order to actually use all those channels. And, you know, these are the kind of questions that enterprise architects are making and cloud architects are making every day. And CXL, I think that this is the sort of short term power that it brings is that you can, it gives you the ability to more carefully size systems, right? That's right. That's right. And, you know, what, what maybe some of us call a corner case at a particular, you know, customer that operates at significant scale, you know,
Starting point is 00:10:18 could mean, you know, the difference of, you know, several hundred million dollars worth of cost savings to them when they deploy, you know deploy across their entire fleet of potentially millions of servers around the world. what the non-CXL-enabled server can provide. Up till now, it's just sort of been the situation where, well, we'll just sort of overbuild, or we'll build right up to the point where the use case or workload needs, and then we'll over-provision it just a little bit because we have to build for the search cases. With CXL, it really starts to change the equation at both the server scale as well as the rack scale.
Starting point is 00:11:08 And eventually, we believe the data center scale, where it starts to really become a pressure release valve, for instance, around what you can do when it comes to dealing with system constraints and data center constraints. So yeah, we're pretty excited about where it's going. I like this discussion as a cloud person, because we always talk about the decoupling of an application or a framework. It's really about moving things from this box into this magical land of APIs where things are just connected and happy. And this is really bringing us back into the box, but it's bringing us back into a box
Starting point is 00:11:49 where it's still modular. And going a little bit back to the point where you discuss power supplies, anybody that builds PCs understands that you can get two different types of power supplies. You can get a power supply that is non-modular or a power supply that is modular. And it comes with those specific areas and the specific cables that you need in order to plug in specifically what you need, rather than having this big blob of spaghetti that you have to figure out what goes where and try to figure out all these different things. CXL gives us that wonderful modularity and the use cases that you mentioned
Starting point is 00:12:25 are just to an extreme point, right? Because we live in this world of, I need this amount of CPU, but I need this amount of memory. And sometimes they don't really match. And so you have to play a strange game of Tetris that nobody wants to play. But you mentioned use cases. So where are you seeing the use cases for CXL and where are we actually growing from this into the future? Yeah. Great question, Nathan. Really, the first couple of use cases that we're seeing have to do, and of course, everything with CXL has to at least hang together with some sort of a total cost of ownership argument. You wouldn't just go to CXL because it's available. Customers typically leverage CXL because there's some sort of a total cost of ownership argument. You wouldn't just go to CXL because it's available.
Starting point is 00:13:11 The customers typically leverage CXL because there's some sort of cost optimization at the system platform or data center level to be able to realize. So one of those first use cases is really looking at very, very high capacity modules. And in DRAM, whenever you have to stack a module, which is typically indicative of the higher capacity modules today, that's going to be your 128 gigabyte module density and above. And a lot of customers do purchase those types of solutions. You get into a situation where you're extremely nonlinear when it comes to cost per bit, and therefore very nonlinear in terms of the price you have to pay for that particular module. We believe CXL can, despite some of the performance deltas that you will see as a result of having to signal through a controller, there are some latency deltas there,
Starting point is 00:14:03 but we believe that despite that, there's real cost savings associated with moving your memory footprint away from that 3DS or TSV-based solution, the very expensive stack solution that's on the main memory footprint, into some of that memory footprint residing off of CXL. And so what you're left with is a main memory footprint that's cost-effective, and the CXL bus essentially handling the spillover, if you will, or the footprint that you would otherwise have to drive in that very highly expensive stack configuration.
Starting point is 00:14:42 We call that TSV mitigation or through silicon via stack mitigation. So you're pushing your memory footprint to more cost optimized pieces or areas of the platform. So that's just one use case. Another use case coming up pretty quickly actually is when you run into a situation where line rates, you can't move any faster or you go from, say, a DDR5 speed grade to the next speed grade. Well, in order to do that, you have to reduce the number of RDMs that you have in a single channel because of signal integrity or loading issues. And so all those being equal overnight, you have to double the capacity of your modules.
Starting point is 00:15:26 And again, you're in that very high capacity, channel situation, you can lean on CXL to drive that memory capacity in conjunction with your main memory footprint so that you're not breaking the bank on the server investment or the platform investment you're making when it comes to memory. So those are two use cases. There are a number of others. Stephen mentioned pooling as an interesting use case, and it very much is top of mind at a number of our customers where they're looking at leveraging a smaller footprint as a baseline footprint for a memory subsystem in a server. But where that server in a surge capacity situation borrows some memory from the quote unquote CXL pool and returns it when it's done. And so effectively what you're doing is you're increasing utilization of your memory and therefore increasing the ROI of the investment you made in your memory subsystem. So those are
Starting point is 00:16:40 three use cases we're tracking very, very closely. And, and again, they're not use cases that are, you know, out in 2035 sometime, it's, it's, it's coming, you know, really right around the corner. And these are, these are very, what we believe are existential problems for some customers. CXL is, is, is a really, really nice way to deal with it. Yeah, it surprised me to see that this stuff isn't like science fiction, like memory pooling. And I think that that's kind of the next thing, next direction, maybe we should take this is, you know, so we talked about right sizing memory. But what about right sizing if you only have temporary needs? And I think that that's sort
Starting point is 00:17:19 of the next question here. So let's say you've got a system that needs to do some kind of big, big job, end of quarter, end of week, maybe you're doing ML training, and maybe you're doing, you know, some kind of big bot, you know, kind of more batch processing tasks. And you know that this server is going to need a lot of memory. Or maybe you've got some servers that need a bunch of memory certain times and could give that memory up at other times. So that's another exciting area, like maybe, you know, VDI systems, where, you know, the memory goes unused all weekend, or in the evenings or something. And it would be very cool to be able to kind of return that to the library, as it were, and let somebody else check out that book. And that, I think, is another really cool thing. I thought that that would be more far fetched. But
Starting point is 00:18:04 after, you know, talking to companies at OCP Summit and the CXL Forum, there are a bunch of companies that are working to make that happen sooner rather than later. And so I think that very surprisingly soon, we're going to see, and again, we don't know the exact timeline, but surprisingly soon, given the fact that there are multiple companies working on chips that would enable this, we're going to see a rack of memory that is shared by all the servers in the rack. And then any of those servers that need some memory can kind of check out a huge scoop of memory, use that for whatever it needs to use it for for a while, and then return it for the others to use. And I think that that's a really exciting thing, not just because it allows
Starting point is 00:18:45 companies to be more, I don't know, careful, you know, right sizing memory and not buying as much, but actually quite the opposite. I think that it means that suddenly having this pool of memory will open up new compute possibilities. And people will say, well, you know, if I'm not, you know, spending, you know, crazy money money because I can share this memory with other applications, what can I do with that memory? What would enable to be able to have just a huge amount of memory temporarily for this task? And I think that's exciting.
Starting point is 00:19:17 And maybe that's what Micron is looking. Because I mean, I imagine you guys are probably not hoping that people will spend less money on memory. No, well, pooling is, on paper, it's fascinating. It's very interesting from a memory utilization perspective. But it's like every other engineering problem. There's a trade-off associated with it. And you need to make sure that because you're now connecting potentially multiple nodes to a particular memory pool, you need to make sure that because you're now connecting potentially multiple nodes to a particular memory pool, you need to make sure that that memory pool is robust, perhaps more robust than
Starting point is 00:19:52 your standard memory footprint. Because as time moves on, scaling challenges are impacting everybody. There are memory errors that will likely come come up and we need to be able to deal with those in a rational manner and not have, you know, not take a, you know, a handful of notes down with it. So thinking about how the, what we call blast radius of a particular memory fail or a particular, you know, issue in the memory subsystem, how it's handled, how it's, you know, making sure that there are backups to, you know, to the way it plays out if and when it does. Because, you know, the business we're in is really difficult.
Starting point is 00:20:40 We're being asked to scale year on year on year. And, you know, we're running right up into the sort of limits of physics, if you will. In some cases, it's those, you know, atomic phenomena that are, you know, outside of our control that end up, you know, driving some unexpected outcomes in the memory subsystem. So, you know, it's up to companies like Micron to make sure that if and when that pooling application does start to, you know, really, you know, it's up to companies like Micron to make sure that if and when that pooling application does start to, you know, really, you know, see a significant pull for demand that we make even more robust memory subsystems that can, you know, that can support that particular use case. We believe CXL does absolutely drive innovation in what we're calling business outcomes. It really does provide a better system configuration at a lower price point, we believe. is it starts to really interplay with the elasticity associated with new businesses that can only be done at a particular price point or a particular cost point.
Starting point is 00:21:53 If CXL provides that cost point at a much more attractive and tenable cost point, then I think it does start to drive business processes and new business opportunities that just didn't exist unless you had access to tens of millions of dollars worth of investment to buy a very, very large system. So in a lot of ways, I think CXL drives a lot of very interesting outcomes when it comes to business in general. And it's driven by the fact that technology supports these outcomes. You know, again, these memory subsystems as currently
Starting point is 00:22:33 designed just wouldn't ever be able to do that. And the way CXL provides that release valve is, you know, that's what's interesting about it. we live in this world where things tend to be a little bit delicate they need to stay where they are connected and you don't really want to jostle them move them unplug them do you ever see because you because you mentioned blast radius and i unfortunately was one of those guys that was in the data center one time and tripped on something that i thought was just something on the ground, but ended up being a wire or something. And then, you know, bad things happen. Do you ever see this where like, we could be something that could be easily, like, I dread to use the term hot swappable, but I don't know of another term that
Starting point is 00:23:24 I can use, but something that could be so highly available that you could literally unplug and plug something else in. Do you see it moving towards that direction? You know, it's up a situation where you may not necessarily need a CPU to be involved in the way it's been involved up until now. And so you can think about ways that you'd be able to provide failover through non-CPU types of intervention. And so can a DRAM-based CXL module be physically hot-swappable? Perhaps.
Starting point is 00:24:16 What's probably more likely is that you can take out or you can remove a CXL, a DRAM-based CXL module, and that module or that information in that module is also stored somewhere else. So the failover has been established previously through that peer-to-peer sharing or, you know, it may not be peer-to-peer, but through the possibility that CXL provides so that you, you know, in the event of, of, you know, a, a sort of a system failure or some, some sort of an issue with a particular device that, that you have a copy exact, you know, aspect of that somewhere else in your data center, possibly somewhere else in the world that, that you can kind of continue where you left off and, and, and establish that sort of, sort of failover from that perspective,
Starting point is 00:25:06 rather than just having a single point of failure in the device itself. So I absolutely think it's possible. And it's interesting where these ideas are going. This is exactly the kinds of conversations we're having with our customers today. Yeah, I think that that's really the most interesting thing here is that by increasing the flexibility of system design, the thing that I'm really looking forward to, and I've mentioned this before on the podcast, is the sort of worked its way kind of into a corner in terms of, you know, this is how many cores are practical on a processor. This is how many processors are practical on a system. This is how much memory is practical on a system. This is how much expansion is practical. And, you know, we kind of get to this point. And then the whole world says, Oh, well, then what we're going to do is we're going to put a higher level of software
Starting point is 00:26:08 on that we're going to extract, you know, extrapolate, sort of a software defined system. And then we're going to use a bunch of these things together. But what if we could change the fundamental things? What if we could really kind of re rethink. And from CXL, it may seem like it's just enabling us to add, I don't know, some more memory or some more IO or something like that. But it really does act sort of as a can opener. It kind of allows us to think about, well, maybe the system doesn't have to look this way at all. Maybe the system can be very radically different.
Starting point is 00:26:43 And actually, Ryan, let me ask you, what do let me ask you, kind of take off your practical hat. What do you think kind of best case, where could this go in an unexpected and interesting way? How could the things change based on this technology? Yeah. Wow. Five, six, seven years down the road, you know, perhaps as much as not thinking about modularity, I need to add a little bit more compute or a little bit more, you know, capability to my server rack. The fact that the, you know the cpu or the server itself doesn't become the the sort of
Starting point is 00:27:26 you know order of granularity anymore you know it's it's i want to i want to add a little bit more compute and i don't necessarily need to add another server i can i can add a card i can i i need more memory i don't have to add a dim i can i can add a cx module, or I can connect a peer-to-peer, you know, sharing loop with this other server that perhaps already has some memory in it. Or, you know, so I think the the unit of upgradability, I think, completely changes when it comes to adding more capability to the data center. And that's just at the hardware level. At the software level, as you alluded to, that becomes even more interesting when you start to think about provisioning in certain spaces, memory defined or software defined memory, if you will,
Starting point is 00:28:20 which is really what a lot of customers are thinking about as the Holy Grail, if you will. So I think it really starts to change the way we think about how to bolt on performance over time. And it's not just a memory module or a CPU anymore. It's something in between. Perhaps a CXL-enabled module that has computer memory on board for both aspects. So I think the future is going to be very, very different when it comes to upgrading a particular capability. Yeah, and I think that that's one of the things that we heard about as well
Starting point is 00:28:59 at the CXL Forum was the idea that this isn't just for CPU. This is also for GPU or DPU or XPU, whatever you want to call it, because a lot of those devices are going to be able to share memory and that the shared memory can enable parallelism like we haven't seen before and access to resources like we haven't seen before. And one more thing that I'd like to ask you in particular, though, is that Micron is also involved in the world of storage. How does CXL impact storage? Yeah, so of course CXL wants to, it's a high performance, low latency interface, so that sort of naturally engages your mind toward thinking, okay, something like DRAM, but it need
Starting point is 00:29:48 not be the case. There are potential use cases out there that can, as long as the cost optimization is there, can leverage media that's not necessarily DRAM that can be kind of more storage-like potentially. Again, back to the peer-to-peer aspects of what's coming in certain aspects of specifications for CXL, a node that wants to interface with something that would typically be a memory, but it needs to talk in the form of an object. And so objects are typically more storage oriented, but with CXL, an object can be memory and sort of vice versa. Memory can, or something like a NAND could act like a
Starting point is 00:30:46 potentially byte storable, you know, type of interface. So again, it's really the flexibility and, you know, the tunability, I guess, of the media that you put behind it. It just sort of, whether it's storage or whether it's memory, I think the lines are going to get blurred there quite a bit. Because the presumption there is that if you're willing to trade off a little bit of performance by moving to more storage-like media, the possibilities are kind of endless because you no longer have to sort of abide by those deterministic rules anymore. So I think there could be some interesting use cases that are not just kind of DRAM high performance use cases, but could leverage potentially different media that offers some aspect of cost optimization, but doing so at a more attractive cost point. Yeah, I think everything that you're saying, Ryan, makes me really excited because we deal with these things in cloud. We deal with these things in virtualization, the idea of decoupling, the idea of taking something that has a bunch of resources and then divvying them out, like discussing this pooling strategy.
Starting point is 00:31:57 You know, VMware has been doing this for a long time because it takes this pool of stuff and then it shares it between a whole bunch of other things. You know, I love that I can use stuff in things in multiple different contexts. But at the end of the day, that's what CXL is really doing. It's changing the specifics of what the stuff and things are and saying, you know what, we can do stuff and things differently and do it in a hardware space and do it in a way where we don't have to deal with, quote unquote, the box anymore. Now it could be different perspectives. And so I really love that perspective, and I'm really excited from where we're going.
Starting point is 00:32:36 Well, thank you so much for joining us, Ryan. It was great to have you on the panel session that we did at the CXL Forum at OCP Summit. And, you know, it's great to kind of get your perspective on this as a company, you know, a person that's in the memory and storage business, as a company that's in the memory and storage business. You know, that's the low-hanging fruit for CXL. Where can we continue this conversation, connect with you? And, you know, where can people go to kind of learn more
Starting point is 00:33:04 about what you think about this? Yeah. I was actually on that panel with a number of colleagues who I highly respect. I would direct your focus to that particular Tech Field Day panel discussion where you have the opinions of other companies in the ecosystem as part of the OCP CXL forum. I think there's a lot to learn from other aspects, other opinions of ecosystem players. So I would encourage you to consult that panel as well as go to micron.com. We're always adding new and exciting items on our landing page and more to come for CXL. How about you, Nathan? Anything new?
Starting point is 00:33:52 I'm working on some things. Nothing has been released. I just released a blog on the DevOps mindset because I'm tired of people saying, hey, we use Terraform, we're DevOps, and it not being the case at all. So yeah, I've got a couple of things out there. You can find them at nerdynate.life if you feel like reading up on that. And as for me, I actually did a keynote at the CXL Forum in New York, and we're going to be publishing that as a Gestalt IT checksum editorial. So that'll probably be online by the time we release this episode. So just go to gestaltit.com for that. And check out the show notes for this
Starting point is 00:34:29 and we'll include links to all of these things. And we would love your feedback as well. So please do reach out to me, Stephen Foskett at S Foskett on the social medias. I would love to hear from you. Also, please do join Utilizing Tech, Utilizing CXL. If you enjoyed this show, head over to your favorite podcast application,
Starting point is 00:34:50 search for Utilizing Tech or Utilizing CXL and give us a subscription. We're available in most of them. You can also find us on YouTube. Just go to youtube.com slash Gisdalt IT video for the video version of this discussion. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise. For show notes, as I mentioned,
Starting point is 00:35:10 and more episodes, go to utilizingtech.com or find us on Twitter at Utilizing Tech. Thanks for listening, and we'll talk to you again next week. Thank you.
