Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 09x06: AI Agents are Coming to the Real World with Olivier Blanchard of the Futurum Group

Episode Date: November 3, 2025

We are surrounded by intelligent devices, and these increasingly use various AI interfaces and processing capabilities to support the needs of users. This episode of Utilizing Tech focuses on edge AI and AI assistants and agents with Olivier Blanchard of The Futurum Group along with Frederic Van Haren and Stephen Foskett. We have all used so-called intelligent assistants, starting with Apple Siri and Amazon Alexa and continuing with Google, Microsoft, and many others. These voice interfaces are increasingly functioning as agents, connecting with various data sources and tools to perform tasks on behalf of the user. AI assistants have arrived at the same overall paradigm as AI agents, and there is an inevitable crossover between these technologies as the best-of-breed components are adopted. The needs of AI assistants have also led to an increase in the availability of specialized processing on PCs and even personal devices, and this helps offload the tremendous power and space requirements of AI applications.

Guest: Olivier Blanchard, Research Director at The Futurum Group

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Frederic Van Haren, Founder and CTO of HighFens, Inc.; Guy Currier, Chief Analyst at Visible Impact, The Futurum Group.

For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.

Transcript
Starting point is 00:00:00 We are surrounded by intelligent devices, and these increasingly use various AI interfaces and processing capabilities to support the needs of users. This episode of Utilizing Tech focuses on edge AI and AI assistants and agents with Olivier Blanchard from The Futurum Group. Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of The Futurum Group. This brand new season focuses on practical applications of agentic AI and other related innovations in artificial intelligence.
Starting point is 00:00:30 I'm your host, Stephen Foskett, organizer of Tech Field Day events for Futurum. And joining me this week as my co-host is Mr. Frederic Van Haren. Frederic, welcome to the show, as always. Thanks, glad to be here. So, I'm Frederic Van Haren. I'm the founder and CTO of HighFens, an HPC and AI consulting and services company. You know, Frederic, you've been involved in AI for a long, long time.
Starting point is 00:00:56 You worked with a lot of, well, I guess, what people thought were AI before chat GPT came out, those AI assistants, like the S word and the A word that I'm not going to say, because I don't want to activate everybody's assistance here. You know, those are also AI agents, right? Yeah, to a certain degree, there are agents. I mean, they are kind of the basic elements of agentic AI. So agentic AI consists of agents. You know, there's multimodal, one of speech. There are other components to it. But speech is the way. we interact with humans and that's that's a very important interface and increasingly um well actually
Starting point is 00:01:37 even today and increasingly in the future those um types of applications are interacting with other elements on the back end so for example if you ask the s lady for a sports score she may actually tell you the sports score and if you ask the a lady to turn on and off your lights she may do it for you, if those things are connected. I could see a crossover between these technologies. Could you? Yeah, definitely. I mean, it's such an important piece to see those technologies advance and getting kind of,
Starting point is 00:02:19 let's call them, smarter or maybe be able to reflect. And that's really where AI is going, right? It's the ability to be a little bit smarter than the previous. previous generation. And I think agentic AI is a tool that can help us to get there. So when we think of these AI agents, of course, we think of smart PCs and mobile devices and all these other AI power devices in our lives. And so that's why I am thrilled to welcome on this show today. Somebody from the Futurum Group who focuses on that exact area. Olivia Blanchard, welcome to the show. Hey, thanks for having me.
Starting point is 00:02:58 Yes, I should probably introduce myself real quickly. So I'm Olivia Blanchard, first time on the show. I'm a research director with Futurum. And my focus is intelligent devices. So basically from the hardware, full stack, all the way to the user experience. And so to your point, just a moment ago, that includes everything that's on the edge. So wearables like smart watches, XR, hearables, like smart buds, the smart rings, mobile devices, tablets, PCs, smart TV, smart speakers, the whole smart home thing, all the way out to
Starting point is 00:03:34 smart vehicles, and that's on connectivity, ADAS, and all of the intelligence solutions packed into it, and all of the IoT and IoT. So part of my focus at Futurum is helping answer questions around the edge AI and how assistance and agentic AI work at the edge, as part of of a bigger ecosystem of sort of like cloud to edge AI and Gentic. And so I sort of owned that piece of it, which is still sort of a work in progress, not for me, but for the industry. Absolutely it is. And I think that a lot of people got a little frustrated with those agents early on. But frankly, I have to admit, I never did. I remained enthusiastic for the Apple S. lady since my very first, you know, since my very first try, simply because it is so remarkable
Starting point is 00:04:29 to be able to interact with a computing device and with online services in such a natural way by just talking to it. Now, sometimes it doesn't understand what I'm saying and sometimes it can't do those things. But it's pretty impressive what's happened behind the scenes. I think people don't really give Apple and especially Google and Microsoft credit for the development that they've done on those assistants to really make them intelligent. Is that what you've seen over the years? Are you asking me? Yeah, yeah, I agree.
Starting point is 00:05:02 I would add Alexa to this. Amazon has done just a fantastic job of creating a natural language first assistant, an agenic platform, as opposed to some of the other ones that are sort of like either, you know, typed or tapped on a screen or language. And so I think it was a little bit harder. for Amazon to do what it does. And also it created this ecosystem that's much more contained
Starting point is 00:05:28 where Gemini, Siri, and others are sort of like out pretty much anywhere. Whatever the device is, on your phone, on your PC, Amazon decided to do something that's more tied into the home or tied into a vehicle cabin. And the use cases for that
Starting point is 00:05:48 seem like there would be a lot of overlap, but they're actually very different. and very unique. And so I feel like we already have some differentiation and some different sort of trajectories for Agentic and AI assistance in the ecosystem that's visible and that's already in existence today. So what do you see being agentic?
Starting point is 00:06:10 What does that mean to you? I mean, we're talking about intelligence and at the inch. What does that mean for you, agentic AI? For me, it means essentially having the ability, whether it's in the cloud or on the device, and hypothetically, it shouldn't matter to the user where the processing does or happens. But it's the ability to be able to assign a task
Starting point is 00:06:37 to a specific app or through a specific app or not at all, just basically just by creating a prompt, calling up an agent or an assistant and saying, hey, this is what I'm trying to do. You know, make reservations for three at restaurant X for this day, this time. Or, hey, I'm looking for a gift for my cousin and they really like horses and chihuahuas. What should I shop for? Or, you know, setting up an appointment or moving an appointment on my calendar, reading out an email because I'm multitasking and I need to be hands
Starting point is 00:07:13 free. It's all of these different tasks that if you had a human assistant holding your devices next to you, they would be able to do this for you. but having it sort of ubiquitous wherever you happen to be and just be picked up by a microphone or a device and taken care of either in the cloud or on the device or a combination of it. So I think that some people might, I don't know, take, you know, issue and say, wait a second, wait a second.
Starting point is 00:07:41 You know, Siri is not agentic AI. But here's the thing. So we've been talking a lot here on this season about what is in. and isn't agentic AI. And I think you could make a case that essentially these, let's call them, um, AI assistants are coming at the problem of agentic AI just like enterprise applications
Starting point is 00:08:09 are coming at the problem of agentic AI and just like some of the more academic examples that we've seen. And it seems to me that they are all colliding on the same set of, frameworks, essentially, that you have a variety of sources that input information, they trigger actions, they communicate with different data services and different platforms in a standardized way. I mean, sure, Alexa may not have used MCP before, but really what was happening underneath, you know, with Apple and HomeKit and things like that, was kind of, addressing the same need in much the same way that we're hearing about agentic AI in sort of more
Starting point is 00:09:00 IT circles. Does that sound to you like the direction that things are happening? And also, do you see a real collision of, you know, agent-to-agent protocol and a model, you know, context protocol as supporting these AI assistance as well? I think, yes, there's going to be some friction just because you know you have some walled gardens let's let's talk about apple for instance um and and and april's entire stack is is fairly walled in and and built up in a very specific way uh where android and so jemini is very different and then amazon devices uh or amazon uh Alexa with devices and licensing to sort of Alexa capable devices is a third model altogether.
Starting point is 00:09:52 I think that what we need to think about, because we can have conversations about, is something an assistant or an agent. And it's an interesting conversation to have, and it's fun to have that conversation and think about what the differences and similarities are, where the overlap ends and begins. But I think that ultimately, it's a conversation by hierarchies. I think, I don't think of assistance versus agents in terms of, where we are today, where we're still trying to figure out how to make this all work with all these different sort of competing but overlapping ecosystems. I think about where we want to
Starting point is 00:10:27 end up, which is sort of like ubiquitous agents and assistance where we don't have those conversations anymore. It's all part of the same thing. And the way that I look at it is ultimately what we may end up with is a scenario in which we have one overarching assistance. And that could be the S lady, as you call her. It could be Alexa. It could be something else. depending on your environment, right? Alexa might be home, Siri might be your phone or Gemini might be your phone because your Android in your car. It might be something totally different. It might be co-pilot on your PC. Who knows? You might be moving around between assistants based on the overall use case or your immediate environment. But underneath those assistants that are sort of like the entryway, like the the butler-in-chief of all of these different agents, the rest of your digital staff, you have these agents that are increasingly specialized. So you might have agents that are only working for you specifically at your job. There might be some agents that are assigned to your entire business unit that everybody has access to in the business unit.
Starting point is 00:11:32 There might be other agents in your companies that you don't have access to or you might only have temporary or limited access to. And in your home or in your day-to-day, you might have agents that are specifically designed for certain types of tasks. So I imagine an environment where you have an overarching butler or assistance, and you might even be able to personalize them by giving them your whatever name you want. So it's no longer Gemini or Siri. It's, you know, it's Martha or Billy Bob. And you're assigning your name and a personality to them. You can customize them any way you want. And then underneath that, you have agents that specifically or perform specific tasks.
Starting point is 00:12:11 So you might have an agent for shopping. You might have an agent for calendar management for the whole family. you, the spouse and the kids, you might have an agent that only takes care of your travel because you love to travel. You might have an agent that only focuses on your stock portfolio and your financial decisions and your banking. Healthcare, all of that. And so you create this ecosystem around yourself of assistants and agents that might be sort of like very general oversight type roles and some that might play very specific specialized roles that are very limited in scope. And a mix of that might be in the cloud for sort of general use assistance and agents. And you might have some that are hyper-local and behind a firewall because they're so hyper-personalized. And they focus on tasks that are so sensitive in terms of your data and your privacy that they're not necessarily safer. You don't feel comfortable having them in the cloud. They're going to be housed in like an AI hub where they're going to be on your device locked down so that,
Starting point is 00:13:16 Nobody can harvest your data. Nobody can intercept your data or those interactions that you have with those agents. Yeah, so when I think about agentic AI, I kind of see two tracks. So the first track is what you mentioned, you know, Apple and Google, who are kind of improving their systems behind the scenes. And then the second track I see is the fact that with agentic AI, there has been an effort to standardize on the agents. In other words, it's the ability to reuse agents across the board. And it opens up a market for end users to use agentic AI without too much knowledge about what's happening, building up agentic AI. Do you see some use cases that are useful for end users with agentic AI?
Starting point is 00:14:08 Oh, absolutely. I think another way to think about it is right now we still live in an app ecosystem, right? everything's an app. And so you have to open the app. You have to tap on your screen or do whatever you're doing, whatever the device is. And you have to sort of learn how to navigate an app, whether it's a banking app or work or Slack, whatever it is, even Gmail. You have to find things. You have to know how to you have to know your way around. And generally, it's not hands free. I think of an agent as sort of like the next interface with apps, where apps have an agent or an agentic layer where you don't necessarily have to open the app, but it's going to understand
Starting point is 00:14:48 the complete chain of events of tasks that it needs to perform in order to arrive at the task that you want. So instead of opening the app, finding the right screen on the menu, tapping on the thing, sliding the sliders, getting to where you want to go, it's basically like, hey, transfer $500 from account A at this bank to account B at this other bank, on. on September the 13th or, you know, pay my mortgage or, you know, do some of those things that normally would take more actions from you. And so I think that how we're going to start using the first, we're seeing the first use cases of agentic is through this, this natural
Starting point is 00:15:29 language, conversational sort of a final task being given to the agents. And the agent being essentially sort of like the automated task layer of step one, step two, step three, step four, this sort of binary, is this the right, is this the right thing or is this the wrong thing? And finding its way to the outcome to deliver the order, the task order that was given by the user. I already see stuff like this with, you know, Gemini or Open AI, for instance, whether you're creating an image through generative AI or having your, whatever, your assistant of choice, give you a summary of a movie or making a book recommendation or finding a movie for you on your, you know, Alexa enabled, you know, fire TV. I already see all of those things happening. Like, every morning I wake up and the first thing I do when I'm making my coffee is I speak to my, I, I can't say the name because it might activate half the speakers in my office, but I speak to my A-Lady device and ask for a news summary except excluding sports. I'm weird that way.
Starting point is 00:16:54 I don't care about sports or not the sports that I would be getting scores about. And it does that. And I don't have to touch anything. It's really simple. And I think that those are the types of use cases that people are already comfortable with now, that they're already experienced. I think they're going to get not just marginally, but significantly better in the next year. And we're going to start seeing that being implemented in more sort of regular productivity apps, whether it's Google Docs or Microsoft Word or Microsoft Excel, being able to speak commands about this is what I,
Starting point is 00:17:32 this is what I want to end up with. Make that happen. and these agents and assistants getting better and better at understanding where they need to get, what the outcome needs to be, and how to deliver it in the best way possible, where it's not something that's, you know, 30% baked, and then the user has to do a lot of work on the back end to make it, to edit it, make it right, but to get to as close to a perfect outcome as possible. And we're going to see some significant improvements in that user experience in the next 12 to 18 months. And when it comes to truly agentic applications, I know that people have already done this. I was talking to somebody who was telling me that they had, they used the Alexa Plus AI SDK, and they built a basically a skill
Starting point is 00:18:29 to connect to MCP and they were able to do basically anything in the universe with Alexa through that skill. I'm also involved, for example, in the home assistant community, they are 100%
Starting point is 00:18:45 focused on basically making their voice assistant be a true agentic platform for home automation and IoT. and they're not joking around. I mean, it's not just, you know, we're using that term wrong. They're 100% trying to end up in the same place that enterprise applications need to be.
Starting point is 00:19:07 And that's why, you know, I'm most excited about what's happening here with Agentic AI because one of the coolest aspects of it and one of the coolest kind of, I guess, side effects of MCP is that we have a standard way to specify services and capabilities and data and a standard way for AI agents to interact, but not just with other AI agents. They can interact with all sorts of things as long as it can be specified that way. Again, you're right that Amazon, in my mind,
Starting point is 00:19:39 was kind of the pioneer of this by allowing people to build their own Alexa skills and connect Alexa to all sorts of amazing things. That's definitely where Google is going. That's definitely where Microsoft is going as well. And it's pretty exciting to see where this could end up. But it's also a little bit worrying to see where this could end up. Do I really want to say, like, okay, Google, delete my production database?
Starting point is 00:20:07 I'm not sure I want that. What do we say about protection of these systems? Yeah, well, obviously, you need to have some guardrails in place. And I think that's, it's the same if you give anybody in your organization access to the database. If they could just delete it at the stroke of a finger, that would be bad. So having the same permissions and the same guardrails for AI is probably a good thing. Just like even in, you know, consumer facing applications having parental controls and having different, you know, like sliders, is, hey, you can do this, this, this, and that,
Starting point is 00:20:50 but you don't have the ability to do this without some kind of prompt that says, hey, are you sure you want to do this? Because this is final. So I think we're used to having this level of sort of natural protection against bad decisions or malicious actions. I don't think they change radically with the use of agents. All we're doing really, if you really think about it, is switching from doing things with our fingertips to doing things
Starting point is 00:21:18 with voice. But where I think things might get really interesting is as the interface can change, because we're seeing definitely an acceleration in voice interfaces. And they can be relatively private, even if you're at a public place, by using smart earbuds. I think that we're going to see smart glasses become much more of a common interface for this next phase of assistant the Nagentic experiences where I don't see the phone disappearing anytime soon. We're not getting rid of screens. But I think that the ability to speak and hear, speak with an assistant or an agent and hear prompts back or responses back in a way that doesn't require us to talk into a watch or carry a microphone or do something with our phone so that we can do it anywhere,
Starting point is 00:22:09 any time is going to be very high value. I think the utility of smart glasses specifically to be able to interact with agents verbally is, it feels natural to me. And I think that's going to accelerate adoption of that particular type of device. Yeah, we rely a lot more on conversational agents today than ever. I mean, one of the risks is voice clothing, right? I mean, you're already seeing it where voice cloning used to be, like, so difficult art. Nowadays, it's a lot easier.
Starting point is 00:22:46 So that would be one of my concerns regarding to security and so on. They have been long enough in speech. At some point, you know, decades ago, we asked ourselves the following question, at what time will speech technology be so good that there is no need for a keyboard? Do you feel that agentic AI is getting there? I mean, it's all conversational. The keyboard is really not necessarily item anymore. But do you feel that agentic AI is getting us closer?
Starting point is 00:23:21 Yes and no. Yes, the quality of speech recognition and speech to text is already outstanding, and it's only going to get better. It's not quite there all the time, but it's so close, I think, within, you know, 16 to 18 months, I think we'll be as close as we need to get with it. The issue isn't so much to me about replacing keyboards. I think it replaces keyboards in some instances. I mean, if you're trying to record your thoughts or record a conversation and just, you know, work,
Starting point is 00:23:56 like essentially have it work as a transcriptionist, transcription works great. It's fantastic. But if you're writing something, whether you're a creative, You're writing a script or you're writing a report or you're a student and you need to write an essay on something, you're writing poetry, whatever it is. I feel like whether it's a keyboard or even a tablet where you're writing things by hand and then it turns your handwritten notes into text that can then be exported to a Google Doc or whatever. I think the act of writing and the pace of putting your thoughts onto a device, whether it's a piece of paper or a screen, is still important. And I don't think that agentic capabilities and conversational sort of speech to text capability is replace that. So my take on it, and it might be a generational thing, but I don't really see a time when keyboards disappear completely just because we have.
Starting point is 00:25:00 We have voice to tax capabilities. We tend to think differently when we speak than when we're actually writing something. And I don't see that going away anytime soon. I think that's a really good point. And for me, at least, I don't remember who it was. One of the people that was saying it. I think it was John Gruber was saying that writing is thinking. And I completely agree with that.
Starting point is 00:25:27 It's hard for me to truly comprehend. something unless I have written it, unless I've written about it. And I think that that's one of the challenges of using text or speech to text in for me is that I don't feel like it's, it's thinking for me yet. I need to really kind of absorb it. But there's another aspect of this that I think is important as well. One of the big criticisms that generative AI has faced is the environmental impact, the incredible power and cooling and footprint of these systems, well, when we think about AI assistance, they run on a much lighter weight platform. They are designed to run, as you say, on mobile devices, on smart glasses, on earbuds,
Starting point is 00:26:20 and as you also wisely pointed out, transparently between your device and the cloud. So you don't need to know or care whether it's running on your earbud, on your phone, or on the cloud. It just works for you in a way that's transparent. I think there's a lot of hope there that, you know, potentially a lot of these concerns about, you know, power and cooling and floor space and everything could be wiped away as we get smaller models that kind of team up in an agentic fashion to. to meet our needs instead of trying to build sort of this super intelligent super brain. So I guess do you see that crossover between mobile devices and AI assistance maybe kind of solving some of the problems of AI generally? I do actually.
Starting point is 00:27:14 And I want to draw a distinction between generative AI of, you know, like the amount of processing power that it takes to, from a prompt, create entire videos, right, that are, are very GPU intensive and doing that at scale and having an agent's perform a task on the device, right? Those are very different things. So I think that that generative AI and huge large-scale use cases are probably driving this environmental, this potentially negative impact on the environment with the data centers. But as the processors, the systems on chip, become much more high performance.
Starting point is 00:27:57 generation over generation, and the models themselves, the software, becomes also more efficient generation over generation. You're starting to see models that were only, that you could only run in the cloud a year ago, be compatible. You can run them on pocket devices today. That's going to continue to evolve. I think that definitely, absolutely, yes, moving a lot of the inference, especially. Some of the training, but mostly inference from the cloud to devices, and including having maybe an AI hub at home, something with a GB10 or even a GB300 from Nvidia, just running on a desktop that's essentially doing, performing a lot of those processes, whether it's the training of models, training of agents, or the inference of doing things locally for added speed and added
Starting point is 00:28:52 security, those things can take a lot of that load out of the cloud and move it locally, whether it's, you know, in your pocket, on the desktop, in your car, in different, like, devices around you. I feel like there comes a point two years from now, maybe three years from now, where the orchestration on device, where essentially there's the processor on device decides, okay, is this something that I could run locally, or is this a request that needs to go to the cloud, or are there portions that I could run locally, and portions that need to be run in the cloud will get a lot better. I think it'll be a lot more mature, and we will start seeing this sort
Starting point is 00:29:32 of orchestrated workload handoff of anything that can be run locally for speed and security will be run locally, and everything else can be pushed to the cloud. So hopefully there's a different conversation there about data center buildouts and how the efficiency of device chip sets of models becoming more efficient being able to run more efficiently on smaller chips and the ability once the software ecosystem catches up to run a lot of this stuff locally might actually deflate the need for as much cloud and data center build out for AI that we're talking about today And I think it will be good for the environment. I think it'll be good for user experiences as well.
Starting point is 00:30:23 But it might end up sort of capping off demands for the big GPUs and the big CPUs for the data centers. So what does a Gentic AI mean for the AI PC? Can you talk a little bit about that? Yeah, GenTech AI for PCs specifically, to me, there are sort of like three sort of major value properties. positions there. One is security. I think that running a lot of sensitive workloads, whether it's the prompts themselves or the data collection or even what you're doing, the finished product, if you will, being able to run locally or behind a firewall can protect an organization and a user from snooping or from that data ending up in the cloud somewhere.
Starting point is 00:31:13 So I think that if you're a CIO, you want some of those workloads to be, to be kept inside of your organization, whether it's on a co-pilot plus PC or if it's on a more advanced professional level like, you know, GB300 powered high, high-end AIPC for professionals. So there's that element of it. The other one is productivity. If you can run models locally, if you can do most of the things that normally you would need a cloud-based agent to do on your device, it doesn't matter if you have bandwidth or not. It doesn't matter if you're at 35,000 feet and the Wi-Fi isn't working. You can work from anywhere at all times. You can connect to the internet. You decide you don't want to connect to the internet. Your agentic capabilities are on
Starting point is 00:32:02 device. And the third thing is just speed. There is lag between putting out a prompt and waiting for a response or the workloads happening in the cloud to come back to your device. And whether it's a bandwidth issue or it's another issue. It might be a bandwidth for you in terms of the network. It might be a bandwidth issue with the servers and the cloud, whether it's sort of like semi-local or it's a public cloud somewhere. Having the ability to perform a genetic task directly on device, it's going to be a lot faster. Even your interactions with the agents that they're verbal are going to be a lot snappier and they're going to feel more natural than if you have to wait for a response to like,
Starting point is 00:32:47 basically your prompt to go out, a response to come back. All that stuff can happen in semi real time in a way that feels organic. And so you don't waste time waiting for the processing to move back and forth and all the data packets to go out and come back in. Yeah, so it's security, it's productivity, and its bandwidth.
Starting point is 00:33:12 So thank you so much for joining us. It was great to talk to you about this. It's great to finally have you on one of the shows. I look forward to getting you to an AI Field Day event at some point. Before we go, where can people continue this conversation with you, Olivia? Well, anywhere they want and anywhere they can, but mostly, you can Google me, just know that there is another Olivia Blanchard, who is an economist. I am not he. We don't look alike, but we get confused a lot because we've both written books. But mainly, obviously, future ongroup.com, you can read all of my stuff there.
Starting point is 00:33:46 You can also find me on LinkedIn and on X at OA Blanchard. And that should be a good start. Yeah, you can find me on LinkedIn in our website, highfence.com. And later this week, you can also see me live as a delegate at AI Field Day. I was going to just mention that. tune in techfielday.com or Techstrong.tv.TV for a live video from AI Field Day. So thank you both for joining us, and thank you everyone for listening to this episode of Utilizing Tech. You can find this podcast in your favorite podcast applications as well as on YouTube.
Starting point is 00:34:22 If you enjoyed the discussion, please consider leaving us a rating or a nice review. This podcast was brought to you by Tech Field Day, which is part of the Futurum Group. For show notes and more episodes, though, head over to our dedicated website, which is UtilizingTech.com. or find us on X-Twitter, Blue Sky and Mastodon at Utilizing Tech. Thanks for listening, and we will catch you next week.
