Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x00: Looking Forward to the Year of AI with Frederic Van Haren and Mark Beccue

Episode Date: February 12, 2024

In the two years since we focused on AI on this podcast, OpenAI added a simple conversational interface to their deep-learning model and AI has exploded on society. This season of Utilizing Tech focuses on practical applications for artificial intelligence and features co-hosts Frederic Van Haren and Mark Beccue along with Stephen Foskett. In this episode we return to AI with a look back at the incredible impact that generative AI has had on society. Humans traditionally interfaced with machines using keyboard and mouse, touch and gesture, but ChatGPT changed all that by enabling people to communicate with computers verbally. But this is just one of many potential AI model components that can be used to build business applications. The true power of generative AI will be realized when these other components appear, and when they are able to integrate custom data. We will also see innovation in the AI infrastructure stack, from GPUs to NPUs to CPUs, storage and data platforms, and even client devices.

Follow Gestalt IT and Utilizing Tech
Website: https://www.GestaltIT.com/
Utilizing Tech: https://www.UtilizingTech.com/
X/Twitter: https://www.twitter.com/GestaltIT
X/Twitter: https://www.twitter.com/UtilizingTech
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

Tags: @UtilizingTech, @GestaltIT, @TheFuturumGroup, @SFoskett, @FredericVHaren, @FutureBec, #UtilizingAI, #AI, #GenerativeAI, #AINetworking

Transcript
Starting point is 00:00:00 In the two years since we focused on AI here on this podcast, OpenAI added a simple conversational interface to their deep learning model, and AI has absolutely exploded on society. This season of Utilizing Tech will focus on practical applications for artificial intelligence, featuring co-hosts Frederic Van Haren and Mark Beccue, along with myself, Stephen Foskett. In this episode, we're going to return to AI with a look back at the impact of generative AI and a look forward to the things that we're looking forward to learning here in 2024, the year of AI. Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech is returning to the topic of artificial intelligence. We discussed AI on the first three seasons of Utilizing Tech, and you probably enjoyed some of those conversations. But one of the things that happened back then is, well, we hadn't yet had the AI revolution that we're all living in today.
Starting point is 00:01:02 In fact, ChatGPT 3.5, which is the one that revolutionized everything, came out just after we published the last episode of Utilizing AI. So we really didn't know what was coming. Well, let's dive right back in. So I'm your host, Stephen Foskett. As I said, I organize Tech Field Day and publish Gestalt IT. Joining me again as co-host is Frederic Van Haren of HighFens, which is now a consulting and services company active in HPC and AI. You can find him on HighFens.com or on LinkedIn as Frederic Van Haren. And Mark, it's great to have you here. I have been listening to The AI Moment, and I'm glad to have you joining us here on Utilizing Tech.
Starting point is 00:02:06 Thanks, Stephen. It's great to be with such honorable people, such educated people as yourself and Frederic. Again, my name is Mark Beccue. I'm research director for AI at The Futurum Group. I didn't have a gray beard before last year, so that all came with the AI stuff that happened this year. You can find me on LinkedIn under Mark Beccue, B-E-C-C-U-E, or at our website, futurumgroup.com. Absolutely. And, you know, Mark, I think it's given a lot of people gray hair, to be honest with you. I used to joke that it was my kids that caused my gray hair. But frankly, all of us in the industry are looking at what's been going on with AI. And, you know, this is not a let's-make-fun-of-and-talk-about-what's-wrong-with-AI show.
Starting point is 00:02:57 In fact, I think that we all are proponents of the technology of deep learning, of machine learning. Hey, I'm a proponent of expert systems. You hear that? But frankly, I think there's a lot of misuse of the technology. I think there's a lot of potential, frankly, blind use, in my opinion. So maybe we can just start by kind of looking back. Now, Mark, you're the analyst. So I'm going to put you on the spot here a little bit. Can you talk to us a little bit about where we've come in the last couple of years when it comes to AI? Right. That's a good point, Stephen. A couple of things. I'd say that the real difference, and Frederic, maybe you can nod your head if you agree with this, was that we've had deep learning models.
Starting point is 00:03:43 We've had AI models have been around for a long time. Matter of fact, the language models have been around for a while. interface in front of as a user interface to these models, which meant that you could use natural language to manipulate them, to make them do something. And I'm simplifying this a bit, but essentially what you had was a free chat GPT. You basically had to have a data scientist or somebody with some expertise to work with the model to make sure that it could do something. And since that time, we've democratized this idea in a sense. Now, what you said before, I'm not discounting all the mess that we've made with it, but what changed was, you know, every person could go in there and talk to a model and ask it to do something,
Starting point is 00:04:47 which is very unusual and different. So that would be one part, I would say. The second part I'd say is what we've experienced. And we said this, I said this at the beginning of last year: we knew what we didn't know. We did not know what was going to happen. I said at the beginning of 2023, I said, we have no idea what's going to happen this year. And the reason is because in tech disruption, when you have something come in that's that disruptive, it sparks innovation and it sparks a lot of mess, right? As you just mentioned, Stephen, it's chaos. There's a lot of things going on. So in that innovation period where you're going to have a lot of people experimenting with the technology, you have a lot of mistakes getting made, you have
Starting point is 00:05:39 a lot of bets being made. So it's classic tech disruption. If you looked at other periods when we've gone through that, this is the same idea. It's just happening a lot faster. So I think those are the two themes I'd talk about real quick as far as where we've been. Yeah, I think from an acceleration standpoint, I mean, we all know that the hardware is getting faster and that the algorithm is getting more and more complex. But I do think that the generative AI concept and usage, first of all, made it a lot easier, as you mentioned, where users could easily or more easily utilize AI and consume it and get something very innovative out of it. But I think, Stephen, you and I, maybe you remember the session we had about transfer learning. What generative AI really is doing for the market is traditionally people would consider their data their IP. So they would do their training, they would own the model, and they would also productize that model. So it's more like both under the same roof.
Starting point is 00:06:48 With generative AI, it's a lot different in the sense that transfer learning means that people are using somebody else's trained model. And it's a model that is so big and so complex that only a handful of organizations in the world can actually produce that language model. So from an acceleration standpoint, we can imagine that instead of people building and spending a lot of time reinventing the wheel, because building a large language model eventually becomes kind of reinventing the wheel, this brings a major acceleration factor to the market, because now you can start with something that took 10,000 or 12,000 GPU hours. And I think that's not
Starting point is 00:07:36 only important from an acceleration, but also from a usage standpoint, because it's basically telling to people, look, we understand data is IP, but at the same time, by providing a pre-built model, we're kind of sharing that data. Now, that brings us probably to another topic, which is the abuse and the usage, right? Where the news agencies are saying, hey, that's great. You shared that data in our large language model and you share that with everybody. But in the end, the data slash the news, it's still ours. Yeah. And I think that there's a whole world of things that we can go into here. And certainly these are the things that we're going to be talking about all season long here with utilizing AI. And I'll just
Starting point is 00:08:20 promise the listeners, we're going to try to get into this as deeply as we can. And we're going to try to be as fair as we can here as well. Because like I said, we're not tech cheerleaders. We're also not naysayers and Luddites. We're here because we're interested in this technology. We're interested in where it goes. And frankly, because we know a thing or two about it. And that gives us a different perspective than sort of what you're going to get from generalist media coverage, which tends to be a lot of the sky is falling.
Starting point is 00:08:50 So let's kind of zoom in there a little bit. As both Mark and Frederick talked about, the revolution is centered on large language models and generative AI and how large language models can drive generative AI. In other words, you know, the challenge with this, it reminds me a little bit of the Macintosh. Computers were difficult. Computers were forbidding. You know, not many people use them. And then, you know, Xerox PARC showed Steve Jobs and Steve Wozniak and the rest of the crew the desktop metaphor. And Apple ran with it and they introduced the Lisa and the Macintosh. And suddenly computing drastically changed because suddenly it was accessible to people. It was friendly to people and everyone could use it.
Starting point is 00:09:39 That to me is ChatGPT. The reason that AI is everywhere, the reason that everybody's talking about gen AI, as if that's a word, the reason that people are talking about bringing this technology everywhere is because of the fact that suddenly people can use it. Because as both of you have said here already in your introduction, this stuff was hard to use. It was hard to get into. It was hard to figure out. Well, now basically everything from speech to text to artificial intelligence is everywhere. It's going to be everywhere. So I guess let's start with that. You know, let's start. What do you think of this link between large language models and opening the door to AI? Yeah, I think it's the ability. It's the first step, right? I think while the large language
Starting point is 00:10:29 models are really interesting, it's because the interaction at our level, meaning words and providing meaningful results are really key. But I still think it's a building block, right? It's not an end result. It's the way we talk. It's our interface, right? Normally, we interface with two machines with keyboards and mouse. But I think that is a very important factor is that it makes it much easier for us to communicate with computers. I mean, if you go back in time, Google search, you had to really pick your words in order to get decent results. And so for us who have been using Google for a long time, it is a completely different experience for the new generation because they use full blown sentences. As a matter
Starting point is 00:11:14 of fact, they might not even use full-blown sentences. They might be writing words with bad grammar. But I think it's the beginning, because if you use text as a way to communicate, then you can integrate and use that in other technologies and other AI methodologies. There's so many things, like you said, Stephen, that we could talk about here. What came to mind for me immediately was the idea that for an enterprise particularly, there was a lot of, I hope, sandboxing. So let me set that aside for a second and also say, what got introduced, and we're not going to pick on anybody, but it starts with O and ends with I, was this idea, this first thing, was kind of software released into the wild without a whole lot of testing. So the testing was as we went, right? And so obviously that company that starts with O and ends with I worked on this as they went, and they were gathering information about how it would work and what it could do. But if you think about where we've come so far, to Frederic's point,
Starting point is 00:12:31 that was not enterprise grade. So you had all these companies going out and messing around with this, and they could do this because it didn't really have a lot of cost involved with it. We're finding out now about the compute costs: when you start to run your business model, your business case, you say, well, what is it going to take for me to do all this stuff? Then you start to pull back a lot of things. So to Frederic's point, I think this experimentation was, can we use a language model as a front end to other things?
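Mark's framing, a language model as a front end to other things, looks roughly like this in practice. A minimal sketch using the OpenAI Python client; the model name and prompts are placeholders, and it assumes an OPENAI_API_KEY in the environment.

# Sketch: a language model as a conversational front end.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You turn plain-English requests into helpful answers."},
        {"role": "user",
         "content": "Summarize the main risks of rolling out AI this year."},
    ],
)
print(response.choices[0].message.content)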
Starting point is 00:13:05 And what's interesting to me is that what Frederick brought up was very important is that, you know, everybody's kind of starting to find out that, yeah, their data is the thing that's really important. So it's like, OK, well, can I attach these links? And I'm saying language model, not large language model, because that's a new thing we're getting into as well. So this idea that can we meld these things together and move forward, as Frederick said, from the front end using the language model as the portal, I guess, into where we want to go. Yeah, and I think that that's absolutely the right way to look at it. And certainly those of us, you know, the three of us who are watching this industry realize that, you know, to Frederick's point, this is a module.
Starting point is 00:13:49 This is but one module of an artificial intelligence solution. It's an incredibly important one. But again, to bring back my Macintosh metaphor, ChatGPT and similar, you know, verbal text processing systems, it's like MacPaint or MacWrite, right? You know, suddenly you've got a tool that's so fun to play with that people are just playing with it. And just like back in the 80s, you know, suddenly there was all this explosion of newsletters with all sorts of crazy fonts and, you know, drawings and everything all over them that all looked like garbage. Eventually it gets real, and we realize that this is a tool and that tools have their uses. And then you go to the movies today, or you look at what actual digital artists can do here, you know, 30, 40 years later.
Starting point is 00:14:34 And it's like, well, you know, this tool, this simple module has enabled some great things, but just messing around with a tool is not what makes it great. And I think that we're getting to that point now here in 2024, where people are realizing that messing around with chat GPT is really not an end in itself. I think that using that to feed into a business process, that's an end. That's useful. Yeah. So the way I look at it is if you look at the traditional programming style today, when you program, you choose your language to program in, and then you look at the libraries you can attach, right? So you have libraries for web interface and so on. I think the future
Starting point is 00:15:18 of the AI application is where the libraries are really being replaced by pre-trained models. And where your programming language is just a language that understands how to include those pre-trained language models. And you can have an assembly of, like, if you want to build a robot, you can integrate a large language model. You can introduce vision. You can introduce pre-trained dynamic modelers and so on. And so where AI applications are really going to be pick and choose just the way we did it with libraries today.
Starting point is 00:15:52 That's really what I see in generative AI. I mean, somebody had to do it and somebody had to show that building a large language model economically doesn't have to be a success as long as people use it. You can generate revenue at the inference side, right? At the time you consume the pre-training model. Well, and that's a really good point because I thought about that as well as originally when we started looking at this, our focus is always from front to back. So you say, what are the use cases, right?
Starting point is 00:16:23 And if we said, what are really the use cases that are going to unleash the power of generative AI? And I think that that's really still undetermined. There's some things that are starting to surface that make sense, but there's a lot that don't. So, you know, to your point, Frederick, there seems to be a lot of promise around shortening the code development process. There's a lot of promise in that idea. If you think about how useful that is and what the ROI is, that tends to be one that I think a lot of people would agree makes sense. But there's others that's not so much. One other I'll say that I think has really shown a lot of promise is what I'll call company search.
Starting point is 00:17:06 So if you said you're the ability for this to extract from a company's data, what they could find in that. And it could be anything from, you know, tell me about all the owners manuals we have or whatever. You know, it's it's that kind of idea. There's iterations to that, but it's, it's, it's something interesting. The ones, and I'll give you the cons where I think that we're, we're, we'll probably talk about this a little bit, but I'm not a big fan of text generation in the sense that it's, there seems to be a lot of people looking at this to shorten, to make business process automation a little better. When they're talking about writing sales emails or marketing material and
Starting point is 00:17:55 stuff like that, I could talk for days about that. I'll just say that I don't think the models are built. They're built to be parroting. And you talk about sarcastic parroting and things like that. They're not very personal. It's not you. You know, we could be creating a lot of junk, which we've already seen. So I think that one has challenges. I'll give you one more.
Starting point is 00:18:21 I think it's really cool. And there was a company that doesn't use a language model in the States, but it is a foundation model. That's Adobe's Firefly, which is a really well done generative AI use case where it's basically this model, these things that they've had, which are Photoshop, making that immensely better for those for the kind of folks that are using that kind of thing. Yeah, I completely agree with you there, Mark. I feel like, back to my metaphor, I feel like the text generation is just, oh, look at me, look what I can do, without actually stopping and thinking, what's the point of that? Because no, it's not convincing. Here's a note to all of y'all who are using, you know, chat GPT to write business plans and marketing materials. No, it's not convincing. We can see what's happening and it's not good. That's not what it's great for.
Starting point is 00:19:16 What it's great for is, like you said, you know, needles and haystacks. And that's what we were talking about for, you know, season after season on utilizing AI as well. I'm very excited about companies that are figuring out ways to, as you said, to bring corporate data into a generative AI world in order to find, you know, help me find information, help me draw correlations, help me locate what I'm, what I'm needing. I'm not an expert with search terms. I'm not an expert programmer. I just need to figure out this thing, you know, and there's a lot of companies, especially companies in the data space, I think that are figuring out ways of cleverly being, bringing business application data into the world of large language models and into the world of generative AI. And that I think is going to be very, very powerful and very, very useful in a way that frankly, you know, hey, ChatGPT, write me an article about
Starting point is 00:20:17 something you don't know about, is less useful. Yeah, I think ChatGPT today is more an assistant than a creator, right? So I use ChatGPT not to create content, but because English is my third language and I don't want to look like an idiot when I post something, right? But I think, with the early stage of transfer learning and generative AI, I do think that the number one use case, as Mark, you brought up, is what I call the bring-your-own-data concept. Somebody has some data in a catalog or anything else, and they really want to have something like that, because in their world, it seems really feasible.
Starting point is 00:21:02 They think, well, you already have search capabilities and the ability to get content out of a system. How bad is it to add my data? And I think there's a lot out there, but it's probably going to take a year or maybe two years before this gets integrated and new modules come out and new updated large language models or generative AI models in general.
Starting point is 00:21:29 But look, I'm very excited about this. One thing I think generative AI did for AI is to make AI easier to understand. I mean, the technology behind it is still crazy complex, but people had difficulties understanding what AI could do for a self-driving car. With generative AI, there is a use case of AI that is simple to understand, simple to use, and simple to think about what you can do with it. And I think that is one of the key components. I mean, certainly when we talk to enterprises, that's the first thing we hear, right? It's people saying, oh, now we understand AI.
Starting point is 00:22:11 Well, you probably don't, but at least you have a better understanding what you can do with AI. So one of the things, I guess, practically speaking, that we need to think about in addition to these high-level AI platform questions is the AI infrastructure question. And that's, I think, something that impacts those of us who are in the tech community. And that, frankly, is a lot of our audience. People who listened to Utilizing Edge and Utilizing CXL, they have a lot of questions, infrastructure questions. And I don't just mean bits and bytes and nuts and bolts. Sort of the whole infrastructure platform that underlays AI. One of the things I want to bring up is I think that we are very much in our infancy when we're
Starting point is 00:22:50 approaching this because we are still focused on massive clusters of GPUs training large language models. That's not, in my opinion, that's not going to be what the future of AI looks like in the enterprise. It's not going to be, I need, you know, 50,000 GPUs to train my own large language model. I think it's going to be a very different thing. Then we have to think about, you know, how we're applying those models, how we're integrating those models, and then also how we're using those models on the desktop. And so that gets us to a whole world of different things. So we have to think about, you know, in addition to GPUs, we have, you know, special purpose, you know, AI acceleration engines, we've got AI instructions coming into CPUs, we've got the AI PC, I think
Starting point is 00:23:36 we're definitely going to be talking about the AI PC this season. All of these things are part of the AI puzzle, in addition to communications, data storage, data platforms, applications, and so on. So what of this area, you know, Mark, you're talking to companies here all the time in this space. What do you think are the key infrastructure components that we're going to see interest in this year? Yeah. So I think you're going to see a theme. First of all, if you guys nod your heads, if you agree, but I haven't seen things move this quickly in my long time with the gray beard in tech. It's amazing. So there's two in this.
Starting point is 00:24:21 One was, you're absolutely right. If we think about the compute side, what happens for enterprises to really for us to really lean in and for market adoption of any kind of AI to really accelerate? You have to have economies of scale. You have to have price. You know, price cost has to work right and what the big worry was is these even if you set aside training which is a massive uh compute ai compute workload inference in itself is still big and it's going to be big going forward because it's ongoing right so you have all of this so the the estimates of this uh cost for ai compute were astronomical and it it's like, well, that won't economically work, right? So it has to change. Part of that equation is if you're NVIDIA and you invented GPUs to do computer graphics,
Starting point is 00:25:19 all of a sudden it works pretty good for this because it's parallel processing and all those things. Wow, we're in a great spot, right? What did we just notice is now we're seeing really rapidly a couple of other players coming in and saying, well, now, you know, we have purpose built for AI workloads. Think of it that way. So the compute starts with those processors, those accelerators. So you start there and you're saying, well, that has to, you have to push the cost down. You have to get more efficient at running any AI workload. So we're seeing that.
Starting point is 00:25:55 And what's going to be interesting in these, particularly data center accelerators and chips coming into the market for one, which would be that there was a there was a you know compression where we didn't have enough of this out there so the cost stays high right now we'll see you know possibly these newer chips coming on from intel and amd and others where uh you know maybe that we have more out there which should drive that cost down. We're going to see a lot of that this year, for one thing, just to start there. Then I'm going to leave this for Frederick and you to talk about for a sec, but say that's you've got the difference between the cloud compute and the on-prem compute. Right. So all of these things are different.
Starting point is 00:26:43 There's a scale to those. You've got enterprises making decisions about that, depending on where their data is or what they want to do with it. So lots of fun stuff going forward as we look at that. When we talk about AI infrastructure, I mean, we do have to split it between training and inference. I think in the past, when we talked about infrastructure, it was heavily focused on training, just because inference is a different beast.
Starting point is 00:27:06 I do think that the future of the AI infrastructure and the complexity and the offering will be on the inference side, meaning more on the application side. So for training, it really makes sense to do all the work in a public cloud. For inference, it does make a lot more sense to do this in the public cloud. And I do think there will be more companies that will specialize
Starting point is 00:27:32 on the inference side, right? Think about edge devices, distributed devices, which is the opposite of training, right? In training, you want to consolidate, while with inference, it's more consumer-driven, so it's at the edge and so on. And then secondly, we talk about infrastructure, but it's the software that makes the hardware work. So we do see a lot of interest in AI software companies that deliver a higher ROI, faster integration of large language models or other models. There's a whole software ecosystem that is actually growing faster and is in higher demand than the compute, than the GPUs.
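Frederic's split, consolidate training but push inference out to distributed devices, often shows up mechanically as exporting the trained model into a portable artifact. A minimal sketch with PyTorch and ONNX Runtime; the tiny two-layer model stands in for whatever was actually trained.

# Sketch: train centrally, then ship a portable artifact so inference
# can run on small, distributed devices. Assumes `torch` and
# `onnxruntime`; the two-layer model is purely illustrative.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()  # pretend this was trained on big, consolidated compute

# In the data center: export the trained weights once.
torch.onnx.export(model, torch.randn(1, 8), "edge_model.onnx")

# On the edge device: no training stack, just a small inference runtime.
session = ort.InferenceSession("edge_model.onnx")
input_name = session.get_inputs()[0].name
result = session.run(None, {input_name: torch.randn(1, 8).numpy()})
print(result[0])  # raw scores, computed locally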
Starting point is 00:28:06 There's a whole software ecosystem that is actually growing faster and has a higher demand for compute than for GPUs. Yeah, and I think that that's the real interesting thing. As I said, we're still at early phases here. And to say, oh, well, NVIDIA has this thing locked up with their GPUs is just not the case. Yes, they're absolutely killing it in that market right now, but soon it's going to be everywhere. You know, you look at, you know, we talk to, you know, the chip design companies, they're all putting neural engines in their chips. They're all adopting, you know, matrix processing instructions and things like that in their chips. And it's all around inferencing. And I completely agree with you, Frederick, that
Starting point is 00:28:50 inferencing is really the next hammer to hit the industry. I disagree in that I think that inferencing may be more local and I think training may be more centralized, but I guess we'll all see where that goes because, you know, I mean, long-term, you know, you never bet against software taking up all the hardware or all the hardware resources available to it. So yeah, we'll see where it comes, but I definitely agree with both of you. And I think we're all three in agreement that this, the how does it get real and get used and get rolled out? That's going to be the question that's going to be the most interesting to watch from a technical perspective. So before we go, I guess, any final words? What are you looking forward to in this season of utilizing
Starting point is 00:29:38 AI, Frederick? Well, I think what I see in the market is really enterprises looking to bridge their needs for AI and what to do with AI. Think roadmaps and what the infrastructure options are for them. And unfortunately, there are too many options. If there was only one or two options, it would make it a lot easier. But there's so many options. But that makes it exciting, right? That makes AI very interesting. So that's what I'm looking forward to, right?
Starting point is 00:30:09 Talking to more enterprises, understand the market, and see the technology evolve as it has been going in the last couple of years. For me, it's two things. I'm interested that these models are getting narrower and more nimble and taking less compute in themselves. So that plays to what you're talking about, Stephen, where you're moving to the edge. Really interesting things going on, all very much smaller than we've seen. Second thing I'd say, I believe 2024 is going to be the year of the data abstraction, data management piece, where we're going to see all of these companies that we're going to be talking about that are bubbling up the enterprise's data to the right place. So, you know, there's this bring the AI to the data idea and how that gets applied.
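One concrete example of Mark's narrower-and-more-nimble trend is quantization, which shrinks a model's weights so inference takes less compute. A minimal PyTorch sketch of dynamic quantization; it is one technique among several (distillation and pruning are others), and the model here is illustrative.

# Sketch: dynamic quantization, one way models get smaller and cheaper
# to run at the edge. Assumes `torch`; the model is illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Store the Linear weights as int8 and dequantize on the fly at inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(model(x).shape, quantized(x).shape)  # same interface, less compute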
Starting point is 00:31:01 It's going to be a lot to talk about this year. Yeah, I agree. And, you know, there's so much we could say and so much I am excited to see where we go. I don't want to give short shrift to the many companies that are doing really remarkable things already with AI. I think that you could say, for example, that one of the reasons HPE wanted to buy Juniper was for AI-based capabilities. I think that you could say the same about some of these other companies that we're looking at in the space, and they're getting a lot of practical uses of the technology. But for me, the one thing that I'm really eager and keen on diving into is the data side of things. I really want to learn how can we integrate enterprise data into AI? How can we make that one thing that sort of helped me find my answer in our proprietary data? I don't know how that works, and I'm really excited to see where products go in that direction.
Starting point is 00:32:02 And of course, being a nerd, I want to see the latest and greatest fast matrix processors and stuff like that too. But, you know, let's make it practical. Well, thanks so much. This is going to be a weekly podcast. Join us every Monday for Utilizing Tech focused on AI starting now. We're also going to be hosting AI Field Day.
Starting point is 00:32:23 So go to techfielday.com to learn more about that. Before we go, tell us a little bit more. Where can we find the two of you in addition to co-hosting here on Utilizing AI? Let's start with you, Mark. So, again, I'm on LinkedIn, Mark Beck-Yu. I'm also on the former Twitter as Future Beck, at Future Beck. But also my podcast is called The AI Moment, and it's roughly every week. It's available on YouTube.
Starting point is 00:32:58 And me, you can find me on LinkedIn as Frederick V. Heron. And if you want to see technology-wise what we're doing with our customers, you can find us on hyphens.com. And as for me, as I said, I'm going to be hosting AI Field Day, and I'm very excited about that. That's going to be a great event. So go to techfieldday.com to learn more about that. You'll also find me hosting our Tuesday podcast, the On-Premise IT Podcast, as well as our Wednesday Gestalt IT News Rundown. You can find me on LinkedIn, Twitter, Mastodon as well. And of course, I will be listening in to the fantastic AI moment. Thank you, Mark. If you enjoyed this discussion, please do give us a subscription. You'll find us in every podcast application and you'll find us on YouTube as well. And we would love to hear from you. This podcast is brought to you by gestaltit.com,
Starting point is 00:33:46 your home for IT coverage from across the enterprise, as well as the Futurum Group. For show notes and more episodes, head over to our dedicated website, utilizingtech.com, or find us on Twitter and Mastodon at Utilizing Tech. Thanks for listening, and we will see you next Monday.
