Utilizing Tech - Season 9 - 09x09: AI Gets Personal with Agents Acting on Our Behalf
Episode Date: November 24, 2025

Agentic AI is an autonomous system that learns, adapts, and uses tools on behalf of its users. This final episode of Season 9 of Utilizing Tech brings hosts Stephen Foskett, Frederic Van Haren, and Guy Currier together to reflect on the lessons we've learned over the last few months. AI keeps advancing incredibly rapidly, and we timed this season with the emergence of practical agentic AI platforms, AI Field Day 7, and a report on enterprise AI from The Futurum Group. During the conversation, the panel references Kamiwaza, Articul8, ApertureData, NetApp, Perplexity, OpenAI, and more. Agents have to be personal, focused yet flexible, and capable of integrating with each other, data, and tools. We also discussed the need for platforms, with companies like OpenAI and Microsoft positioning themselves to be the platform for AI applications even as enterprise software companies like ServiceNow and Salesforce are trying to do the same. We also have many companies developing platforms for orchestration and operation of AI, and data platforms designed to support agents. Ultimately, agentic AI will be a core capability of next-generation applications, with autonomous agents interacting with tools and helping us perform daily tasks.

Hosts:
Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series
Frederic Van Haren, Founder and CTO of HighFens, Inc.
Guy Currier, Chief Analyst at Visible Impact, The Futurum Group

For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
Agentic AI is an autonomous system that learns, adapts, and uses tools on behalf of its users.
This final episode of Season 9 of Utilizing Tech brings hosts Stephen Foskett, Frederic Van Haren, and Guy Currier together to reflect on the lessons we've learned over the last few months.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group.
This season focused on the practical applications of agentic AI and other related innovations
in artificial intelligence.
I'm your host, Stephen Foskett,
president and organizer of the Tech Field Day event series,
and joining me for this final episode of Season 9
are my two co-hosts from the season,
Frederic Van Haren and Guy Currier.
Frederick, Guy, welcome to the show.
Well, thanks for having me.
I'm Frederic Van Haren, the founder and CTO of HighFens,
and we provide HPC and AI consulting services.
Yeah, it's great to be here.
Guy Currier.
I'm an analyst at the Futurum Group,
and an occasional participant in Tech Field Day as well.
Absolutely.
And I'm Stephen Foskett, organizer of Tech Field Day, including the AI Field Day event that the three of us all attended here during the recording of this season of the podcast.
And going forward, host of the new Utilizing AI podcast over on Techstrong.ai.
But of course, we'll be back with future seasons of utilizing tech as well.
Let's sort of, I guess, wrap up
season nine here and talk a little bit
about the lessons that we've learned.
Rather than making this just a retrospective
of the various guests that we've had this season,
let's talk about some of the takeaways.
Frederick, I'll start with you.
Yeah, I think we learned a lot.
I think agentic AI is still
kind of a moving target, in the sense that
when we ask people about a definition,
you know, the definitions can vary a little bit.
But I think overall people have a great understanding or a better understanding of what AI can do as far as reasoning and thinking with large language models.
And I think that's what we saw during the episodes.
I mean, we had some people talking about applications.
We had some people talking about use cases.
I think overall, in combination with AI Field Day,
we got, or at least I got, a kind of a view on what people are doing on both sides of the fence,
customers as well as vendors.
I think one of the things that has most impressed me is how rapidly it's developing;
everything in AI is developing rapidly.
In fact, it's probably changed significantly since we started this podcast series.
So that's one.
I think as a practical matter, though, there's so much that you can do just to step into, to move, let's say, beyond chat, certainly beyond basic use of AI, and into agentic.
It's a question of connecting agentic to systems and to each other and doing a little design work. It's not necessarily that far. You don't have to do the latest and the greatest.
And there are a lot of interesting platforms and ways to do this. There are a few AI studios out there to help you build agents,
some that are incorporated with services that you already have.
Yeah, Guy, I think that's a good point.
I also think that applications and use cases
typically came from vendors or at least the view from vendors.
I think agentic AI and MCP,
and like you said, the studios that people are delivering to the market
are helping people that typically were not engaged,
or at least not on the vendor side,
and now they can build applications that are
very close to solving their problem, as opposed to applications that are solving other people's problems.
Yeah, that's a good point. And that came across, of course, on our episode with Articul8,
but also throughout the season. When it comes to agentic, if the idea is that you're going to make,
and again, we spent a lot of time trying to define agentic earlier in the season, but let's sort of roll with that.
If the idea is that you're trying to make AI agents that can act autonomously, that can ingest and process data, that can call other tools on your behalf, it really is important to make sure that they are up to the task, that they're not just sort of generic and that they are able to respond to the needs, not just of the business as a whole, but of the users, the people who are trying to make use of those agents.
And that came up, you know, many times throughout this.
I see a really strong parallel between the process automation space and the agentic
AI space in that in both cases, it's sort of a twist on that whole no-code, low-code concept
or, you know, a way for people to make AI do things on their behalf.
And it's funny because during this recording of this season, a couple of things happened.
One, as I talked about it, AI Field Day, the Futurum Group released a report on enterprise
agentic AI platforms where they highlighted some big companies, you know, Salesforce, Microsoft,
ServiceNow, IBM, those kind of companies. At the same time, we also saw people really leaning
into things like the Perplexity web browser and the new OpenAI web browser as a way to
basically have a personal agentic system. And at the same time, of course, you know, Apple is
hopefully going to be releasing more AI features on iOS and Google just keeps pushing Android
forward. And all of these, I think, reflect that, that idea that agentic should be personal,
should be usable, should be something that people can really interact with or else it's really
not going to be able to achieve the goals, right?
One of the things I definitely picked up from the series, specifically, was when I did
the episode with Kamiwaza, because one of the structures that Luke Norris there
provided was a view on three different types of agents,
and I don't remember the exact names, but:
There are ones that are responsive.
There are ones that are autonomous or operate on their own.
And then there's a third type I'm not remembering right now.
That was really helpful because one of the mind shifts that I, you know, have experienced since we started this was as to what agentic AI is, what an AI agent is.
Because I think just, you know, kind of to be a kind of classic about it or something,
I started off by just saying an AI agent is just an agent.
It's just got AI in it.
Maybe oversimplification, but it seemed to me that if you just think of it as an agent that uses AI,
then that helps you understand what it is that you would do with it.
That was clearly wrong, because the way you build it,
the way you run it, how it works, is different than a standard agent.
It's still an agent.
It's a kind of agent, but it's different enough that it's not just an agent that includes AI.
Definitely not.
It can do things on its own.
It can learn, which is to say it can be self-trained.
It can interact in ways that are not just unpredictable, but, what do they call it,
non-deterministic or something, where it might do something different the second time?
Yeah.
That is different from every other agent we've ever computed or, sorry, programmed and used.
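The non-determinism Guy is describing can be illustrated with a toy sketch. The "agent" below is just a random choice between two canned behaviors, purely for illustration; a real agentic system's variability comes from sampling inside the model, not from a coin flip, but the observable effect is the same: identical input, different path on different runs.

```python
import random

# Toy illustration of non-deterministic behavior: the same request may be
# handled differently on different runs, because the "agent" samples its
# next action instead of following a fixed rule.

def answer(question: str, rng: random.Random) -> str:
    action = rng.choice(["summarize", "elaborate"])  # sampled, not fixed
    if action == "summarize":
        return question.split()[0]               # terse path
    return question + " (with more detail)"      # verbose path

q = "Explain agents"
for seed in range(5):
    # Same question, but the chosen behavior depends on the sampling path.
    print(answer(q, random.Random(seed)))
```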
Yeah, what I think is nice about Agentic AI is that we had access to large language models in the last couple of years,
but the big challenge was how do you integrate all of these components with your own data, right?
Because in the end, it's your own data that makes or creates the value of an application.
And I think that's one of the things that agentic AI, and maybe, you know, maybe not necessarily a definition, but it kind of allows you to bring those different large language models together with your data through standard APIs, if you wish.
And I think on top of that, that makes it very accessible to individuals. I think if you look at people building applications around large language models in the past, those applications were very
static, meaning the large language model couldn't change. You couldn't interact with another large
language model. You couldn't daisy chain large language models. And today, you do have these
capabilities. And it brings a kind of an interesting factor to the foreground, which is an application
today is more dynamic than ever. In other words, it's not a finite state when you build an
application. It's a work in progress. And you can see that too.
with vibe coding, where they're not suggesting to build or create a prompt that defines your
whole application.
They're basically saying, do it in individual steps.
And that's really interesting because I believe that that allows you to build applications
that, to Stephen's point, are more personal because you can iterate through it and you
can start with the baseline and then add functionality.
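The daisy-chaining Frederic describes can be sketched as a simple pipeline where each model's output feeds the next, grounded in your own data. The two "models" here are hypothetical stubs standing in for real LLM calls; the structure, not the stub logic, is the point.

```python
# A minimal sketch of daisy-chaining language models over your own data.
# summarize_model and classify_model are invented stand-ins; a real
# pipeline would call actual LLM APIs in their place.

def summarize_model(text: str) -> str:
    """Stand-in for a first LLM that condenses raw input."""
    return text.split(".")[0] + "."  # pretend the first sentence is the summary

def classify_model(summary: str) -> str:
    """Stand-in for a second LLM that labels the first model's output."""
    return "billing" if "invoice" in summary.lower() else "general"

def pipeline(own_data: str) -> dict:
    """Chain the models: each stage consumes the previous stage's output."""
    summary = summarize_model(own_data)  # stage 1: condense your data
    label = classify_model(summary)      # stage 2: classify the condensation
    return {"summary": summary, "label": label}

ticket = "The invoice for March was charged twice. Please refund one payment."
print(pipeline(ticket))
```

The iterative, "start with a baseline and add functionality" style maps naturally onto this shape: each new capability is another stage appended to the chain.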
To tie this to one of the episodes we had: I, from a speech background,
always think about text as being the main communication piece.
But there is multimodal, right?
That's audio, video, text, as input and output.
What's really interesting, you know, is this idea of, so, I mean,
do we all remember when Google started
publishing services, new services of various kinds?
It's going back 20 years or 15 years, whatever it was, in beta,
and the beta lasted forever.
And this was kind of, it may not have been Google leading the way,
but it seemed that way to me, the perpetual betas.
What you're talking about is true perpetual betas almost.
It's almost like the agent, the software is never done.
It's always going to be, you can prod it into evolving,
it can evolve on its own,
it's never static anymore.
That's a kind of a wild concept.
It's a little bit what Satya Nadella was referring to a year or so ago
when he said there was not going to be SaaS anymore,
which is to say, you know, the creation of agents,
creation of software,
and then it's their destruction or they'll go off into a hole
until you need them again.
Maybe you'll never need them again.
That's a very different way to interact with systems, very different.
Yeah, exactly.
And I definitely see that dynamism happening here.
It's, in a way, you know, even beyond non-deterministic, it is almost, as you're saying,
both of you are saying, that, you know, the application you use or the workflow you use might
be different today than tomorrow than the next day. And it's interesting, right before we
launched this season, OpenAI introduced GPT-5. And one of the hallmarks of GPT-5, as I actually
said back on the first episode, is that it is, well, I guess depending on how you want to define
it, almost agentic. And I've been doing a lot more work with GPT-5 recently. And it is really
interesting if you watch the workflow there, how it interacts with you. And to both of your
points, essentially, you know, you ask it a question. And it is calling specialty tools to answer
your question. It is responding to you by looking up data, doing a web search, using a calculator,
you know, using Mathematica, using, you know, a variety of different ways to produce what it is
that you're asking for. And specifically, you know, I've been having it process, to Frederic's point
about multimodal data, I've been having it process images and give me JSON. And
it is wild to watch that workflow because it is using, like I said, it's using Mathematica.
It's using other generative AI tools to process images and identify items in the images.
And it uses, you know, all these different things.
That's, I think, what we're looking for here with these next generation of tools.
It's not about making an artificial superintelligence.
It's about making a system that can really step through, you know, define the next phase, figure out the tool to use, use that tool, take that output, go to the next step, go to the next step.
It's a very personal way to do this.
Unfortunately, it's also kind of frustrating and it's been kind of frustrating for me as I've been using these tools because I just want to shake it sometimes and say, no, you went in the wrong direction halfway through.
But at the same time, I feel like it's more likely to generate an answer
than trying to come up with some super machine intelligence.
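The step-through workflow described above, define the next phase, pick a tool, run it, feed the output into the next step, can be sketched as a loop. The planner and the tools below are invented stand-ins, not OpenAI's or any vendor's actual API.

```python
# A toy version of the tool-calling agent loop: plan the steps, run each
# tool, and thread the output forward into the next step.

def plan(task: str) -> list[str]:
    """Stand-in planner: a real agent would ask an LLM which tools to use."""
    return ["calculator", "annotate"]

TOOLS = {
    # Evaluate a simple arithmetic expression (builtins disabled for safety).
    "calculator": lambda text: str(eval(text, {"__builtins__": {}})),
    # Wrap the running answer in a human-readable label.
    "annotate": lambda text: f"result: {text}",
}

def run_agent(task: str) -> str:
    """Step through the plan, feeding each tool's output to the next tool."""
    state = task
    for tool_name in plan(task):
        state = TOOLS[tool_name](state)  # take the output, go to the next step
    return state

print(run_agent("6 * 7"))  # → "result: 42"
```

The frustration Stephen mentions lives in the `plan` step: if the planner picks a wrong tool halfway through, every later step inherits that wrong turn.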
Yeah, and that's a perspective from a user's side.
Slightly off topic, but I watched an interview with the people that started Claude Code.
And basically, it was kind of interesting to listen to them.
Everything is almost accidental.
They had no intention of building it, but somehow they found something,
and they built it.
And another statement they made, which I found interesting,
is that not only does an agentic AI system learn from its users,
it's actually also learning from its own output.
And that kind of tells me that we as consumers of agentic AI
might not always understand what's going on.
The people building agentic AI, large language models,
even they themselves have no idea where it's going.
They are also being led by the large language model itself.
I wonder if, though, just to spice things up a bit, if we're looking ahead,
one of the issues right now in training, right, is in training foundational models.
It's not quite an issue yet.
It seems to be getting there, is that there is a sort of a peak,
there's sort of an optimal training level in terms of quantity of data and number of cycles
and stuff because of the amount of available data.
In other words, we could run out of data. As much as we talk about the explosion of data,
we have more or less run out of data for training the models, and the use of synthetic data or
what have you, or more specifically, we're looking ahead to where some of the data being
used to train is actually output of AI that already used original data that was the output of
humans, right? So that's that decreasing quality possibility. This could get accelerated with
agentic. What you're talking about, Frederick, is sort of systems of systems and where the systems
get abstracted far enough from sort of the original human origin, so to speak, they are still
derivative. AI is still a simulation. I tend to, you know, object a little bit to words like
reasoning or thinking or that sort of thing, or even the word intelligence, because of it. And I'm
just, I'm thinking ahead to how we utilize AI going forward in a way that remains productive,
even if it starts to be a whole lot of AI talking to each other. That's actually a really
interesting point. I don't want to get all philosophical on y'all, but we already did
see some examples of AI agents talking to other agents
in developing their own mechanisms of communication,
their own vocabulary.
We're trying to use, I mean, I don't know about you guys,
but I basically want AI to give me JSON if I'm using it
in any kind of application, as an application agent,
assistant, that kind of thing.
I don't know that JSON is the optimal format for agents to talk to each other.
You know, I mean, with MCP, you know, again,
we're trying to impose our human will on these things.
I would not be at all surprised if future AI agents interact with each other in an API
and exchange data in a format that is, I don't want to say completely illegible,
but at least not what we would have designed because it turns out that that's an easier,
better, more efficient, or just sort of evolutionary sort of way of exchanging information.
Because essentially, if we're going to set this stuff out there doing things on our behalf,
we've got to let it do its thing.
And, you know, we can't micromanage it and babysit it.
Right.
I think as far as formats are concerned, I mean, JSON is a very good format, and you can pretty much
communicate any type of data you want.
The challenge with JSON is that JSON is not really meant for large amounts of data.
I think the challenge becomes when you want to exchange a lot of data in a small amount of time.
Like, for example, cars driving and exchanging videos with each other,
that's where I believe JSON wouldn't do so great.
But yes, you know, JSON or something else, there is definitely room for some kind of more advanced format.
But we have been through so many iterations.
I don't think JSON is a bad format at all.
Yeah, JSON is certainly the worst format apart from all the other ones.
God, at least they're not using XML.
You're channeling Winston Churchill there for us, aren't you?
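Frederic's point about payload size can be made concrete with a quick sketch: the same small agent-to-agent message, once as human-readable JSON and once packed into a fixed binary layout. The field layout here is invented purely to illustrate the size difference, not any real protocol.

```python
import json
import struct

# One small "car telemetry" reading, the kind of message Frederic imagines
# agents exchanging at high rates.
reading = {"sensor_id": 7, "speed": 88.5, "ok": True}

# Human-readable JSON encoding.
json_bytes = json.dumps(reading).encode("utf-8")

# Compact binary: unsigned int (4 bytes) + float (4 bytes) + bool (1 byte).
binary_bytes = struct.pack("<If?", reading["sensor_id"],
                           reading["speed"], reading["ok"])

# The JSON form carries field names and punctuation; the binary form is 9 bytes.
print(len(json_bytes), len(binary_bytes))
```

The JSON version is several times larger because it repeats field names in every message, which is exactly the overhead that hurts when exchanging lots of data in little time.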
Yeah, I think, though, Frederic, Stephen's point as I took it,
or maybe I'm taking it a step beyond, is these systems are going to start
designing their own interfaces to talk to each other.
Right, I think that's the next step, right? The machines decide how to communicate
with each other. I mean, the bottom line is, as long as the communication channels are
authentic and follow certain guidance, maybe they will. And who knows, right? I mean,
JSON is still a human-readable format, right? But the reality is, machines don't need
text, right? They need binary. And so they could talk in four bits as opposed to eight bits or
whatever, right? Or in who knows what. It'll be like reading a machine ballot in Texas.
You won't know exactly if you got to vote for who you thought you voted for. I'm making a
joke. It's a reliable system, but I do object to using barcodes to vote. There, I just said it.
But yeah, who knows how they want to talk to each other? They'll find some optimal
way, or in fact, they might find a suboptimal way that really doesn't work,
but that they came up with because they're non-deterministic.
So let's kind of turn the page and talk a little bit, too, about some of the other aspects.
You know, we talked earlier about the platforms that are being used.
Part of the conversation that we had this time around was talking about running agentic applications
on personal devices to the cloud.
You know, Frederick brought in the concept of multimodal data.
Let's talk about some of those other elements that are evolving AI.
And again, I feel like overall it's all about making it more useful, more personal, more actionable.
So what's your take on, I guess, the agentic platform concept?
You know, we've heard the enterprises embracing things like the Salesforce Agentforce platform.
You know, we've heard companies talk about a variety of, you know, really kind of nuts and bolts,
almost, you know, VM manager kind of platforms for running these things.
We've talked about as well the different ways that Apple and Google are evolving their ecosystems to run
agents or AI instances locally as well as in the cloud. Where is that all going? What is the
sort of the common through line that you're seeing in platforms to run AI applications, Guy?
Well, I think that you're giving me an opening to rant about data platforms. I think there are
three platforms we're talking about. And they can integrate. They can be
presented to the user just as a single platform.
But the word that strikes me in terms of agentic AI activity is
dynamic, more so than applications, more so, certainly more so than, you know, model-based
inference, even including RAG, the type of data, the type of access required, the types of,
let's call them queries or needs, data needs for an agent will vary considerably, and the data
platforms need to be able to keep up with that, not just from a quality standpoint, an availability
standpoint, and all that other sort of stuff, not just having the data in the right place at the
right time, which is a lot of what they do, but from a security and access standpoint, that's
extremely important, especially when it comes to things like defense against bad AI actions
and actors and that sort of thing, via MCP or via prompt injections or whatever it might be.
So that's one of the three.
The other two would be the platform on which you build and run and manage the life cycle of the agents.
And then the third, I think, I just forgot actually to tell you the truth.
There's a third one in there that I'll remember in a moment.
Yeah, when I talk about platforms, I mean, platform is such a generic term.
I mean, when I talk about a data platform, it's almost like the life cycle management around data, right?
Because in the end, data drives it.
It used to be different, but today the algorithm is pretty generic as long as you bring the right data to the table.
And so life cycle management is really important.
And I look at a data platform as the component that makes sure that your data is clean, fresh, always up to date, and allows you to select data based on privacy and other regulations.
And then you have the...
And it also allows or disallows data depending on use, application, and user, yeah.
Right. And I think that's at least...
And again, I'm not necessarily an infrastructure person from the ground up.
You know, I started as a data scientist.
So to me, a data platform is the old storage market, right,
where people were talking about storage devices.
To me, today, I don't talk about storage devices.
I talk about data platforms.
So it's a lot more than just that.
And then you have the execution platforms, right?
The super glues, if you wish, like MCP and those platforms,
you know, Apple has their own platform and it's almost like we have a standard with MCP
and then those platforms kind of use those MCP servers or concepts to kind of build their own
world. The only caveat I have with so many organizations building their own platform is
that innovation goes so fast that it's going to be very, very difficult for people to
stick with a particular platform. You know, some platforms are going to disappear
and new platforms will be created.
I just ask myself from a consumer standpoint,
what does that mean for me, right,
if everybody has their own platform?
Yeah, that's true, because in the cloud infrastructure space,
we've seen very much that there was a proliferation of platforms
and now everything is sort of coalesced around Kubernetes, for example,
to run cloud-native applications.
And I think that one of the reasons that everything runs on Kubernetes is not because it's the best thing ever,
but just because it's a thing that can run anything.
And I wonder as well, do we need that?
Are we going to have that?
My suspicion is that companies like OpenAI and Microsoft especially are going to be trying to position themselves as sort of the arbiters of those future agentic platforms.
And I wonder to what extent that will happen.
I mean, OpenAI has made a big bet to become the first mover,
to provide sort of the, you know, to be the Windows of AI.
At the same time, all these enterprise companies would love to do that too.
I mean, I'm sure that Salesforce and ServiceNow and, you know,
companies like that would love to be the standard platform that you run enterprise AI applications on.
And I don't think they would even argue with me that that was their goal.
So do you think that we will have different platforms for personal versus enterprise?
Do you think that there's, do you think there is going to be a Windows of AI?
No.
No way.
This is worse than the cloud.
I mean, the cloud allowed, granted, there's been this force of gravity towards Linux and cloud native and all that sort of thing.
Got it.
But that has not made Windows obsolete in cloud native or in web and application development, not to mention other, you know,
systems. What I mean by it's worse than the cloud is the cloud birthed the API and the so-called
API economy, which is, you know, marketing term of art, but refers to APIs everywhere all the
time such that you can plug things together. Now, you know,
it's not magical, it's not magic pixie dust. You need to do a fair amount of infrastructure work
among other things in order to get things to work like you expect them to. But here's the thing:
AI is so able to permeate every layer that trying to be some kind of standard in any way for an AI stack is a fool's game.
It's ridiculous.
I think Windows, or at least Microsoft in general, was trying to dominate the household,
meaning that they wanted to run on the box in your house.
I think in Agentic AI platforms, I don't think they have that goal.
I think their goal is to give you an API key and that you're hitting their data centers, right?
Because they can provide that service a lot better.
Because imagine that, you know, if we all expect fast returns, right?
We always expect that when you give a prompt that you get immediate results,
if they would aim for something like Windows or like a box at home,
they have no control over performance and latency.
So I think platforms, agentic AI platforms,
are not like Windows, but more like remote services
where you just use an API key to hit that particular service center
and then get a fast response.
So as we're nearing the end of our episode here,
and the end of our season,
I want to ask maybe a difficult question to the two of you.
And that difficult question is we've been talking about agentic AI.
We started trying to define it.
We've talked around it a lot.
We've given a lot of examples and a lot of descriptions of where companies are going,
where people are going.
But is this really a thing?
You know, let's meet our audience where they are and say,
is agentic AI really a thing?
Are we going to be talking about this in a year or in five years or in 10 years?
Or is this just a 2025 thing, the 2025 way of talking about AI?
So, Guy, what do you think?
Is agentic AI really a thing with legs?
I think it is.
I think we will be talking about it in a year.
I'm not sure about three years, though.
So the development of this particular wave of revolutionary tech is so fast.
I used to say about cloud computing that we would stop calling it cloud computing eventually;
we would just call it computing.
I was kind of wrong because we just kind of call everything cloud now, more or less.
But it does disappear as a distinctive term, and I think that agentic AI will disappear
as a distinctive term, and it will just be AI.
On the third hand, we have AGI coming, which I do not expect to be what it purports to be.
But I think that we're just going to start calling it all AI, including things that we don't call AI now.
So agentic in a year, yes. In three years, not so sure.
Yeah, first of all, agentic AI, to me, is a reference to the ability to use multiple
language models, interchange them, daisy-chain them, and bring in your own data in an efficient way.
That to me is agentic AI.
And so, well, will agentic AI as a term still exist in three years? Almost guaranteed no,
because the marketing people will come up with something else.
But technology-wise, I think the question for me in the future will be,
will large language models as they exist
today still be at the core of modern AI, whatever it is, in a few years?
Or will there be something else than a large language model?
That, to me, is kind of what will happen in the next couple of years.
But hang on just one minute, because the second half of your question, Stephen, was,
will we be talking about agentic AI in a year?
Are we at the beginning or are we at the middle?
as recently as agentic AI came into the discussion, are we at the beginning, are we at the
middle of it? If we're at the middle of it, we might not be talking about agentic AI. Maybe we'll
be talking about recursive AI, which is something relatively new to me, which is AI that,
well, Frederic knows what it is. It's AI that creates itself, creates others, so to
speak. Maybe that's so. Maybe
agentic AI will be so last year, so five minutes ago, in a year's time.
I think, if you were asking me there, Guy, that the analogy you gave
of cloud computing is true. And that's sort of, I remember having that conversation 20 years
ago, this isn't cloud computing. This is just computing. This is just how things should be run.
And I think that most of the concepts have already been incorporated into everyday
applications, and basically what we call modern applications run on modern platforms. And I think
the same is going to be true of agentic AI. Frankly, I don't think that we're going to be talking
about agentic AI as sort of a capitalized proper noun. I think it's going to be AI agents that
operate on our behalf. And I think that that's what people wanted AI to do anyway. And so that's
going to be the future of it. So we shall see. That's that for this
season of Utilizing Tech.
Thank you, Guy.
Thank you, of course, Frederick.
This is your nth season here supporting us on our podcast.
It's always wonderful to have you.
You know, Guy, it's been great to welcome you.
Before we go, tell us where we can continue the conversation with you, Guy.
Well, you can find me at futurumgroup.com.
You can find my writings there.
LinkedIn is a great way to see, like, you know, when I'm engaged
at a conference or a Tech Field Day or something else.
You'll see me there.
And I'm also at guycurrier.bsky.social on Bluesky.
Yeah, and you can find me on LinkedIn and on our website, highfens.com.
And as for me, you'll find me at SFoskett on most social media networks.
I do a lot on LinkedIn.
I'm on Blue Sky and Mastodon, even.
And I would love to connect with you there.
So thank you very much, everyone, for listening.
Again, as a reminder, go to techstrong.ai, where you will find a new podcast called Utilizing AI. It's not the same format, but you will see these faces there. I guarantee we're going to have Guy and Frederic join us on Utilizing AI. We're recording that and publishing a new episode every Wednesday. So find that, again, at techstrong.ai or on YouTube or in your favorite podcast application.
Utilizing tech will return.
We will return with a new topic.
We have so far talked a lot about AI.
We've talked about data infrastructure.
We've talked about Edge.
We talked about even some hardcore tech, CXL, for one season.
Again, you'll find Utilizing Tech in your favorite podcast applications as well, also on YouTube.
We would still love to hear from you.
If you enjoyed this discussion, please do reach out.
Maybe a suggestion for what we should cover next
season. I'd love to entertain that, and maybe we can welcome you as a guest on the
podcast. This podcast is brought to you by Tech Field Day, which is part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website UtilizingTech.com
or find us on X/Twitter, Bluesky, and Mastodon at Utilizing Tech.
Thanks for listening and we will catch you on the next season of Utilizing Tech.
Thank you.
