Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x02: Private AI is a Reality with Chris Wolf of VMware by Broadcom

Episode Date: February 26, 2024

AI is all about data, so it is no surprise that enterprises are deploying their own private AI inside the firewall. This episode of Utilizing Tech brings Chris Wolf, Global Head of AI and Advanced Services at VMware by Broadcom, to discuss private AI with Frederic Van Haren and Stephen Foskett. Companies looking to deploy AI are finding that it doesn't require nearly as much hardware as expected and that software is widely available, yet they remain reluctant to trust sensitive data to a service provider. VMware by Broadcom is deploying its own private AI code assist, keeping proprietary software and standards inside the firewall. The solution also helps the AI team be more agile and responsive to the needs of the business and customers. One of the first use cases they found for private AI was customer support, which is tightly integrated with internal documentation and sources to ensure valid responses. The biggest challenge is integrating unstructured data, which can be spread across many locations, and this is actively being investigated by companies like Broadcom as well as projects like LlamaIndex. VMware has contributed back to the open source community, notably with the Ray workload scheduler, open source models, and related projects. It's important to build community and long-term engagement and support for open source as well, and this is in keeping with overall trends in the AI community. Organizations looking to get started with private AI should consider the VMware by Broadcom reference architecture, which incorporates best practices at a smaller scale, and should pick a use case that provides immediate value.

Hosts:
Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/
Frederic Van Haren, CTO and Founder of HighFens, Inc.: https://www.linkedin.com/in/fredericvharen/
Chris Wolf, Global Head of AI and Advanced Services at VMware by Broadcom: https://www.linkedin.com/in/cswolf/

Follow Gestalt IT and Utilizing Tech
Website: https://www.GestaltIT.com/
Utilizing Tech: https://www.UtilizingTech.com/
X/Twitter: https://www.twitter.com/GestaltIT
X/Twitter: https://www.twitter.com/UtilizingTech
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

Tags: #UtilizingAI, #AI, #PrivateAI, @VMware, @Broadcom, @UtilizingTech, @GestaltIT, @SFoskett, @FredericVHaren

Transcript
Starting point is 00:00:00 AI is all about data, so it's no surprise that enterprises are deploying their own private AI inside the firewall. This episode of Utilizing Tech brings Chris Wolf, Global Head of AI and Advanced Services at VMware by Broadcom, to discuss private AI with myself and Frederic Van Haren. Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT, part of the Futurum Group. This season of Utilizing Tech is returning to the topic of artificial intelligence, where we will explore the practical applications and impact of AI on technological innovations in enterprise IT. I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. Joining me on this session as my co-host is Frederic Van Haren.
Starting point is 00:00:47 Frederic, welcome back. Thanks for having me. So you and I both attended AI Field Day, of course, and one of the things that's happening in this whole "AI is getting real" space is companies are looking at repatriating or maybe patriating AI, essentially bringing it inside the firewall, making their own private AI. This is not a surprise to those of us in the industry, right? Yeah, you're right, Stephen. So a lot of organizations nowadays understand that collecting data is important for AI. And as a result of this, they have been looking at ways to get value out of that data.
Starting point is 00:01:26 Acquiring the hardware, buying GPUs and CPUs is not difficult anymore. It's readily available and really cost effective. The only problem is that they're difficult to get, right? There's a waiting line for the GPUs.
Starting point is 00:01:41 However, enterprises with that data can acquire the hardware, and luckily for them, the algorithms are open source. So it's really easy for them to be self-sufficient, going all the way from training to inference and deploying those models internally. Yeah. And I think that you're right. In fact, you and I and a lot of the people out here who are experimenting with AI aren't just doing it in the public cloud. I mean, I've deployed AI applications locally, and increasingly we're seeing CPUs and accelerators and, as you say, GPUs available with AI acceleration features that you can deploy right locally. You know, I've got one over there in this very room. It's pretty cool to see what you can do. Yeah, indeed. I think we are now in an era where
Starting point is 00:02:35 organizations by themselves can try all kinds of stuff around AI. It used to be that you needed like a PhD in order to do something. Nowadays, it's a lot easier. And certainly with the availability of data. I mean, there are still challenges. There's no doubt about that. But there is no lack of information. There's a lot of models out there available to people.
Starting point is 00:02:58 There are websites like Hug and Face who deliver and provide links and source codes to build your own. They even have full-blown communities where if you get stuck, you can ask questions. So I do think it's a great time to be and working in an AI. It wasn't the case five, 10 years ago, but nowadays it's relatively easy to do. Yeah, absolutely. So today we're going to be bringing in a special guest to discuss the deployment of private AI. And that is Chris Wolfe, who is Global Head of AI and Advanced Services at VMware by Broadcom. Welcome to the show, Chris.
Starting point is 00:03:38 Yeah, Stephen, thanks. Happy to be here. So you aren't just evangelizing AI. You're actually helping Broadcom and VMware deploy AI services internally, right? Yeah, I think we're a pretty unique charter. And I would expect maybe some other companies, if they had a chance to go empty canvas, might take a similar approach. So our organization is responsible for AI and advanced service development that goes into a lot of the products that the company sells. We're really focused on bringing AI services to the VCF division. But what sets us apart is our charter also includes operating the internal AI services for the company. And that has been really good for us because it's given us a lot of insight into what works, what doesn't work, how can you scale, what are the management challenges that
Starting point is 00:04:29 customers might run into and so forth. So it's a really good blend for us and it's proven to be really effective in terms of helping to shape what we do on the product side. So I really am super eager to hear sort of how you're doing this and the nuts and bolts of it. But first, I'd actually love to hear you tell us a little bit more about why deploy private AI. Why would VMware do this internally instead of externally in the public cloud or in a specialized provider or through software as a service or something? What are the driving factors behind this? Yeah, there's a bunch. I mean, when we first started looking at, you know, really the art of the possible, we had to do some of our own internal myth busting. You know, like you need
Starting point is 00:05:15 hundreds to thousands of GPUs if you're going to do anything with AI or generative AI, right? It was kind of the prevailing thought if you go back the last year or two. And one of the first use cases we had was we were looking at code assist or software development and really helping our software engineers to be more productive in their day jobs. So we looked at some public cloud services and our legal team had a really firm opinion on that, which was just no. There was too much that was unknown. There was too much risk. There was concern that VMware or software IP would be used to train models that could benefit competitors, as an example. We were concerned around IP leakage, right? These are genuine concerns we hear from a lot of organizations. So that led us to look at the StarCoder open source model on Hugging Face and say, well, let's deploy that.
Starting point is 00:06:08 Let's see how that works out for us. And we didn't pick like even like a real easy use case. We were going after C programming and ESXi kernel development, which is some of our more opinionated software engineers, which is great. Let's really put this thing through its paces. We did have to tune the model. So we needed to take some of the code commits from our really good software engineers, and that allowed the model to not learn just our coding style, but our commenting style, you know, how we like to do documentation for what we're building
Starting point is 00:06:51 around ESXi. But what happened was pretty interesting, I think, was really the outcome of this. So the model tuning, we used a couple of NVIDIA A100s. It took us half a day to refine the model with a relatively small data set. And the result was we had roughly from our survey, I believe the number was 92% of our software engineers that use the tool plan to continue using it, which these were two aha moments for me. So one was, you know, we had to use less compute resources than we thought, than we were anticipating. And that's been a theme as I've talked to other like CIOs and CDOs and CTOs and our customer segment. But
Starting point is 00:07:32 then the other one was like, I can't get 25% of VMware software engineers sometimes to agree on something. So when you have more than 90 saying, yes, this is great, that's, we really knew we were onto something at that point. So I wanted to give you an example, but, you know, PrivateEye started with really some requirements from legal. And then we started to see that we have all of these other use cases where we have this internal documentation that sits behind our firewalls. Customers have the same issues with sales contracts, legal contracts, all of these kind of things. And we started to see that we can apply this to lots of different use cases that aren't just use cases we have, but our customers have the same ones as well. Right. So how did you find those use cases? I mean, did people come to you or did you kind of found the use cases and build on top of them? Yeah, it was, I guess it was a little bit
Starting point is 00:08:20 of both. So there was some discovery that happened. You know, some of the things like in the early days, if you want to call it the early days, when like chat GPT was really taking off, there was a lot of thoughts internally, even in the engineering organizations that, well, I can hook this product into chat GPT and build like a customer chat interface. And it's like, yeah, you can, but you're not going to be, we're not going to let you. Because there's customers, it's the customer's data. They own the data. We can't arbitrarily say we're going to just start shipping it to a cloud and to an AI model, right? That is just, you know, absolutely not. So things around, you know, how can we localize AI, which is actually something in my organization we've been working on for the last three years. Concepts like federated machine learning has been an area working on for the last three years. Concepts like
Starting point is 00:09:05 federated machine learning has been an area of exploration for us for some time. But there were use cases that were kind of like just on the surface. And we were able to validate this with a lot of customers very early on. And the top one being like customer support. It doesn't matter if you're healthcare or you're IT or manufacturing or retail, everybody has a customer support use case. And to be able to help even just a support agent, to be able to get information quicker and the right information quicker is a use case that everybody has. So we have this internal retrieval augmented generation solution that is trained on all of the VMware internal documentation. We do updates every 24 hours. So this content is always fresh. And I can ask the
Starting point is 00:09:53 model anything. I can ask it for VRA scripts to do cloud integration, to configuration options, to troubleshooting details. You name it, you can ask it, and the model is not just gonna give you a response, but it's also gonna give you links to the sources of where it derived the data from, which is also, we found to be important. Because I can tell you our early days
Starting point is 00:10:19 of running even open source language models on premises are just in general, and this isn't just open source, this is a lot of them. When AI model is wrong, it's confidently wrong, right? Like you would ask it, who's the CEO of VMware? And it would like make up a name that sounded like CEO-ish, but it was never a CEO in the company's history. It's like, where did you even get this from? But that's also the difference between just like a base foundation model. And when you use something like RAG or retrieval augmented generation, you can back the foundation model with current sources that can really help to improve the accuracy of the model as well. So how do you validate the data? I mean,
Starting point is 00:11:02 you have your own data. How do you validate that the data is the right data and doesn't create hallucinations? Yeah, of course, you know, there's a lot of testing that has to go folks are really focused on the inference times, like how quickly can the model provide an answer? And you'll see like, well, the 7 billion parameter model provided an answer in 200 milliseconds. Well, that means nothing if the answer is wrong. Right. So this is where you have to find even that balance between accuracy and speed. And for us internally, we've had really good success with a lot of the 30 billion parameter models where you're not really compromising accuracy. You can get strong results
Starting point is 00:11:58 and you can still get that speed in terms of a turnaround in a couple of hundred milliseconds at reasonable scale. But there's a lot that has to go in. And then also what we've tried to do across the company is pull in other teams because we don't want to just test within our own organization. We've asked other parts of the Broadcom business to use the same services. You know, this is since VMware had closed the deal with Broadcom. So we have users in lots of different parts of the company, too, that's able to onboard and give us feedback.
Starting point is 00:12:29 And that's also helping. And I'd say the last thing I would mention, too, is because we're also a team that's operating services, we have a pretty unique opportunity where we get to talk to a lot of our customers, data science teams, and teams that are running their AI services as well. And what we do is we get together and compare notes. So like what open source models or commercial models are you using? What are you, you know, how are you, you know, tuning these models? What is your tuning data set? Like, how do you go about this? Like, all that information has been really helpful for us to grow and learn. And I think for our customers as well, we try to just keep an open exchange. Yeah, I think that it's easy to deploy a very, very high-performance Magic 8-Ball that will just make up an answer to whatever question you give it.
Starting point is 00:13:17 It's much better to have one that, even if it's not as high-performance, even if it doesn't have as many parameters, that actually searches the right data and recovers the right document. Because truly, that kind of data retrieval, I think there's a lot of applications for that beyond software development and customer support, but those are certainly great examples of that. Did you find it challenging to integrate it with an unstructured and varied data set like support information? Or was it something that you could easily feed into the model? How do you connect to all that data? Yeah, no, that is the biggest and hardest challenge today to the point that we've been developing some of our own services that we're looking to integrate into the product side
Starting point is 00:14:02 because that data scraping piece is difficult because you have unstructured data in lots of different places. Sometimes if you're scraping even like web data, you know, the way the pages are formatted or metadata on the pages might not actually be to proper standards. And then you're just having trouble
Starting point is 00:14:21 even scraping the data off the page. There is a good community in the Llama index community if you look at a lot of different data collectors that are being developed there that we've also been looking at as an effective way to complement really what we've done on our own. So our stack was completely homegrown, but we've recognized this as a really key customer need. You see a lot of talk right now around data preparation services and really helping to be able to feed data properly into a vector database to be used for generative AI. So yes, we've run into the problems. We have the scars. And I'd say the contribution we're looking to make is to take what we've learned and produce our own technology that can help customers to be able to collect data in a more automated fashion, do so in a way that's meeting their policy constraints as well, is aligned to their access policies. I mean, these are things where AI can go horribly wrong. Like if I use
Starting point is 00:15:25 elevated permissions to collect data, and then I have, you know, lower level permissions being used to query against the model, now you've created a backdoor, right? And that's a real issue for organizations that you have to be really careful with. So you talked a little bit about internal services and talking about sharing the models with customers. Do you also do data sharing with your customers? I mean, both ways, or are there privacy concerns around that? No, no, we've been pretty much open to the point that even we've had a couple of cases where we've even had our legal teams get together to talk about best practices, because even how you do a contract now around an AI service is fairly unsettled, I guess, from a law practice is concerned. There's not a lot of legal precedent around these things.
Starting point is 00:16:17 So we've tried to really take a community-based approach and just share what we've learned, listen to others. And I'll tell you, that's really how we've been doing things for years, not just even in AI, but in other areas of innovation, because it's allowed us to have a much higher success rate in what we do. I look around the industry and there's lots of places that have kind of given up on organic innovation. It's just too hard to get right. But it's not as hard if you really take the time to listen and ask the right questions and not lead witnesses. Right. You can really get to some good insights and find yourself to be successful. But, you know, for us, everything from like, hey, you know, what models are you using to to, you know, other aspects of the use cases, you know, especially comparing things around open
Starting point is 00:17:06 source for, you know, what projects are you using for model serving? Are you using commercial solutions? You know, these are all things that help in terms of, because I think we're all in kind of learn mode here. You don't know what you don't know. So you want to hear from others. And I would say on average, we're probably doing one to two data exchanges per week with different organizations, just learning from them and they're learning from us. So we're just talking essentially peer to peer. So we talked a little bit about data sharing. How about co-sharing? Do you contribute back to some of the projects or do you actually have your own projects that you're sharing with the community?
Starting point is 00:17:46 Yeah, we have both. So you look at, there's a VMware private AI GitHub repo that has a lot of projects that we've been contributing back. I think one of the funnest things we've done is contributions back to the Ray open source project. Ray does a lot of things. I mean, at the highest level, Ray is providing a way to
Starting point is 00:18:07 orchestrate and across and scale AI clusters. We've liked Ray though strategically because it also has a great ecosystem of software, AI software companies that integrate with it. So when you want to connect an AI software stack to infrastructure, you can do so using open source Ray. And we like it because I don't have to, I don't, an ISV or open source project doesn't have to know or care about VMware APIs to run on a VMware stack. They can just talk Ray. So we upstreamed all of our integration work into the Ray community. We've created a plugin to vCenter that can allow that really seamless turnkey integration with us as well. And that's been, I'd say, one of our early successes.
Starting point is 00:18:51 We joined the UXCell Foundation as a steering member. That's something that was initially founded by Intel and several partners around one API. So that's been another area of contribution. And we also, a lot of the models that we've worked on, we've open sourced. And you'll see a VMware AI Labs repo on Hugging Face. So you can check that out as well. And that's just the start. There's a lot more that we're looking to contribute to in other projects in the open source space that are really important to us.
Starting point is 00:19:21 And again, we want to do our role in the ecosystem. And for us, it's really about connecting AI to infrastructure in a really low cost and scalable way. Because the other thing that people are struggling with today is often when I get GPUs, it might take me three or six months or longer to get more. So you have to be really smart about maximizing the capacity of the GPUs you have. And this is where technologies like virtualization play a huge role.
Starting point is 00:19:53 Because when you get into inferencing, where I'm just now applying the model to an application or a use case, right, I'm oftentimes needing a fraction of a GPU. So I can start to slice up GPUs more effectively. And that's where you really start to get to some good cost savings if you're an enterprise organization. Yeah. So that's quite interesting from a contribution perspective. Do you expect people to rely on the VMware AI stack or are your contributions more generic where anybody can benefit from it? Yeah, a lot of our contributions are entirely generic. Of course, we want people to run on our stack because we think we have the best platform and choice for AI services in the ecosystem. The things we do around being able to virtualize GPUs, use technologies like DRS to do more intelligent placement of AI workloads, being able to provide a common set
Starting point is 00:20:53 of management operation tools for your AI and non-AI workloads. I think those are all good reasons to run on VMware. But if you look at what we've been doing historically over the past several years, we want applications and customers to want to run on our platform because they want to, not because they have to. And that starts with even having an upstream Kubernetes API service above our stack. So I can integrate with the Kubernetes API. That's what the IT ops teams can provide. And there is no stickiness there. You want to move to another Kubernetes-based infrastructure, go for it. We're not going to stand in your way. We want to do things the right way.
Starting point is 00:21:28 That's really important to our philosophy. That's been our philosophy around open source as well. And we want to be good members of the community. So you're going to continue to see a lot of good upstream work. I want to say, just to give you an idea, I just went in and approved these. I mean, just in my organization alone, we're contributing upstream, I want to say, to 60 or 70 different projects right now. Yeah, I think that's important because, you know, in all these spaces, what we've seen is that there's a lot of takers,
Starting point is 00:22:00 but the companies that are more open and contributing are the ones that are going to see a lot more success. And the aspect of virtualization as well, I think that a lot of people, when they hear the word VMware, they think virtual machines and that's kind of where it begins and ends. But that's really not what you're talking about here. I mean, for the most part, apart from the idea of sharing GPUs, a lot of the stuff you've been describing is not virtual machine bound at all. In the interests of kind of the customer deployment aspect, though, I'm curious, how did the conversation go? And how do you have those talks about contributing to open source, you know, working on projects that aren't proprietary IP, you know, stepping back from, you know, what VMware is offering to its customers. How do people in this space make that case and have that conversation internally
Starting point is 00:22:57 to be part of the community? Yeah, that's a really good question. I think we could do a podcast on this alone, you know, if you think about it. You know, for us, like, if you really break things down and simplify, you can start by asking yourself, well, if I'm the customer, how would I want to run this software stack? And what does it look like? Right? And that's led us to, you know, other open source projects I haven't mentioned, like Kubeflow, which we've done a lot of work in that community as well, you start to think about things a little differently. And, you know, obviously there's parts of our stack that the company monetizes. You know, the biggest thing is on-prem infrastructure is hard, right? It's really hard to do and get right and to scale. There's lots of very large companies that have tried and failed at it. It just happens to be something that VMware, we feel we're the best in the industry at in terms of providing a comparable on-prem IaaS. So that's where we're going to continue to focus. We want to stay in our lane there. And then our open source efforts really come down to, well, how can we enable communities and lots of other
Starting point is 00:24:05 apps and services to be successful on our platform? So that's also been, I'd say, you know, a part of our thinking is it, but you start with the customer problem statement and say, well, how would they want to really operate on this? And then the answer has become a little more clear, whether, you know, like we're doing work in PyTorch because it's such a popular open source AI framework. So there's that, you know, contributing to UXL because we want to promote customer choice. It's trying to make all the right moves. But at the end of the day, my firm belief is you win people's trust by not doing unnatural things to make them have to stay on your platform. You want them to stay because they want to, because they get the right amount of value.
Starting point is 00:24:50 So that's, I mean, at the high level, that's what's really driven our thinking. And then we do look at on a project by project basis, like what is the strategy behind this? Right. So I think that you want to be mindful of what you're doing in open source. I think what we've really tried to work hard to avoid is we don't, even with what we open source ourselves, we don't want to just like have somebody go and open source something and declare victory. You know, putting some binaries on your GitHub page to die is not a successful open source strategy, right? You should be thinking about, well, if is not a successful open source strategy, right? You should be thinking about, well, if we're going to open source something, what is our strategy to build community around it, right? How do we really engage? What are the opportunities for our
Starting point is 00:25:35 community members, right? And how should they be thinking about this? You know, one that I haven't mentioned, which I think is some of our more exciting work recently, is in confidential computing. We built a certifier framework that we've open sourced. It's with the Confidential Computing Consortium now, part of the Linux Foundation. And what that technology provides is an interop layer. So I can code an application once to take advantage of confidential computing constructs, and I can traverse AMD or Intel or other hardware-based TPMs, and I don't have to rewrite the application, which is huge, right, from a software developer and customer value point of view. But our thought on this was when we built a technology, when you wear your customer hat, you say, well, if this was
Starting point is 00:26:26 proprietary for VMware, am I just trading, you know, one, one point of stickiness for another? And the answer is, yeah, you are. So to us, the right thing to do was to open source it. And now what you've seen is there's a huge community of contributors outside of VMware that are, you know are deeply involved in this project because it's the right thing to do. We'll enable confidential computing interop on our platform for sure, but we don't think that that should be a proprietary control point that VMware owns. We think it should be in the Linux foundation where it can have neutral governance. So one of the questions I get a lot is how to get started, right? So how do you,
Starting point is 00:27:06 do you have any tips and tricks for organizations that have basic AI knowledge, but want to participate in private AI? Yeah, yeah, that's a great question. So that is the hard part, right? It's like, well, I don't even know what to do. So maybe I'll just like play around with some of these other services. But, you know, one place to get started is we've published a reference architecture. That's right now, I want to say it's 60 or 70 pages. And what's exciting about that is it's everything from, well, here's how you can take the Hugging Face star code or model and start doing your own internal code assist use case. We have that in there. Here's how you can think about sizing your network based on the size of the model that you're looking to run.
Starting point is 00:27:46 And you can do a lot of this with Ethernet networking with a small number of GPUs, just enough CPUs to run your apps or whatever, and you're off and running. The investment is typically less than you think in terms of those cases. The other thing I would suggest is to pick a use case that really resonates with your business where you can get the quick win to just be able to prove the value. So like a chat interface for us is something that's really just fundamental, I think, to most organizations. Having an internal search that's smarter, that can find answers more quickly, right? These are the low-hanging fruit. And VMware's plans is to take what we've already done with our internal apps.
Starting point is 00:28:27 In the coming months, you're going to see us open source these applications that we kind of call starter apps right now that's going to really help our customers in the industry where you're looking to kick the tires and kind of see what's possible and do it very efficiently. You know, to also make it simple for the industry, our joint solution with NVIDIA is going to be available in the coming months. So take a look at that. It's really the best of VMware, the best of NVIDIA technologies to give you a simple turnkey appliance to be able to just get started and get value from AI right away. So we were all involved in the AI Field Day event that
Starting point is 00:29:01 happened a couple of weeks ago. If you're interested in this conversation, you want to learn more, I would recommend starting by going to techfieldday.com or looking on YouTube for the videos of the VMware presentation at AI Field Day, which includes Chris and his team talking about a lot of these topics. Before we go, Chris, where can folks contact you and continue this conversation? Yeah, there's a few places. So you can go to viavia.vmware.com.ai. That'll take you to our AI landing page. And you can see from there AI labs, you can see our reference architecture, all the different things we're working on, our joint solution with Nvidia. It's all up there. The Tech Field Day content is going to be really where you want to spend some time because we have a ton of demos that we've presented
Starting point is 00:29:49 there as well across a variety of different platforms, including demoing our own internal use cases that we've been operating for quite some time now. So a lot of knowledge out there. Those would be some great places to start. Thanks so much. Frederick, thanks for coming to AI Field Day and co-hosting this podcast. What else are you into? Well, it's still the usual thing, trying to help customers, helping them understand what AI is, and provide them with a roadmap to successful AI. And as for me, we're going to be looking forward to another AI Field Day event, hopefully later this year.
Starting point is 00:30:24 We're also hosting a lot of other Field Day events. And of course, we're going to be continuing Utilizing AI, published every Monday. So thank you very much for listening. Utilizing AI is part of the Utilizing Tech podcast series. If you enjoyed this discussion, please do subscribe. You'll find us in all of your favorite podcast applications. Also, we would love it if you left us a rating and a nice review. Thank you. For show notes and more episodes, though, head over to our dedicated website, UtilizingTech.com, or you can find us on X Twitter and Mastodon at Utilizing Tech. Thanks for listening, and we will see you next Monday.
