Utilizing Tech - Season 9: Utilizing Agentic AI - 09x01: Utilizing Agentic AI with Frederic Van Haren, Guy Currier, and Stephen Foskett
Episode Date: September 29, 2025. AI is the hottest topic in tech right now, evolving dramatically over the previous eight seasons of this podcast. We are kicking off Utilizing Tech season nine with a discussion of the state of the art of agentic AI with Frederic Van Haren of HighFens, Guy Currier of Visible Impact, and Stephen Foskett of Tech Field Day. Generative AI augments our capabilities and is being used every day by millions of people. Agentic AI combines reasoning with actions, enabling AI to perform actions on our behalf. Although AI does not reason like us, the way it manipulates data resembles intelligence, and iterative analysis can result in a chain of thought that strongly resembles reasoning. Agents can then receive context and take actions based on it using a framework like Model Context Protocol (MCP). These techniques help move generative AI from concept to production, building real applications rather than simply processing text. This season of Utilizing Tech will help our listeners understand emerging agentic AI and how this technology can make end users more productive and build profitable businesses using AI technology. Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Frederic Van Haren, Founder and CTO of HighFens, Inc.; Guy Currier, Chief Analyst at Visible Impact, The Futurum Group. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
AI is the hottest topic in tech right now, evolving dramatically over the previous eight
seasons of this podcast. We're kicking off season nine of Utilizing Tech with a discussion
of the state of the art of agentic AI with Frederic Van Haren, Guy Currier, and myself, Stephen
Foskett. Learn about agentic AI and about season nine of Utilizing Tech in this episode.
Welcome to Utilizing Tech, the podcast about emerging technology,
from Tech Field Day, part of the Futurum Group.
This brand new season focuses on practical applications
for AI and specifically agentic AI and related technologies.
I'm your host, Stephen Foskett,
organizer of the Tech Field Day event series,
including our AI Field Day event.
And joining me for this season is a familiar face
and a new one.
Before we begin, let's go ahead and meet them.
Well, thanks for having me.
I'm Frederic Van Haren, the founder of HighFens.
We are a consultancy and services organization helping customers accelerate their AI journey.
And you can find me on LinkedIn as Frederic Van Haren.
Yeah, I'm Guy Currier.
I'm an analyst at Futurum Group.
I'm also the chief analyst for another Futurum Group subsidiary, Visible Impact.
And we help vendors articulate and bring to market their offerings, including AI offerings.
But I also have a background in market research and product management, product marketing, including AI back before it was AI.
And I'm Stephen Foskett, as I mentioned.
This is, in fact, the ninth season of Utilizing Tech.
Of the eight seasons previous to this one, by my count, six focused on AI of various sorts.
Last season, we talked about AI at the edge.
Before that, we talked about AI data infrastructure.
And as Guy said, we actually started, Frederic, you and I, talking about AI before ChatGPT was released.
In fact, we finished our first three seasons before ChatGPT, before AI became the topic that it is today.
I mean, it's safe to say that as far as technology goes, AI is the most important thing in the world.
That sounded like one of those movie openings, right?
AI is the most important thing in the world.
Do you concur, is AI the biggest topic?
I don't want to say the most important, but the biggest topic, Frederic?
I think so.
I think a lot of the innovation that is happening today is heavily focused on AI.
Maybe a little bit too much sometimes.
That's why people sometimes are kind of worried that AI is maybe a little bit too much hype as opposed to practical.
But I definitely believe that a lot of the funding and a lot of the innovation today is going towards that direction.
And if you, you know, you and I, we have talked so long about AI.
We have seen the traditional AI.
We have seen the generative AI.
And now it's agentic AI.
I mean, to a certain degree, what's in a word, right?
We can define a little bit about agentic AI.
But I definitely believe that AI is really going to stay a hot topic.
The problem, of course, is AI is a generic word, right?
So we kind of, as podcasters, it's kind of our responsibility to kind of narrow down and define things.
Yeah, I think that actually your first choice of words, Stephen, was the right one.
Important in the sense, maybe not of market size or of current impact, although everyone seems to have encountered it
at this point. I use the word everyone loosely. But in terms of its ability to transform for the good
and for the bad, to make things better, to make things worse and to do that in either case
extremely rapidly, I don't think we've ever seen anything like it. So I think important is
actually the right word, even if we rightly should put it in its place, explain what it is
what it isn't. There's a lot of misconceptions about what it is and all of that. I don't think
there's any more effective conversation to have right now just pretty much across the technology
and business landscape than the AI conversation. So if that's not important, I don't know what
is. You know, my litmus test for the importance of things is, and no offense to grandmothers here,
but, you know, have the grandmothers of the world heard about it?
And that is certainly the case.
Well, my grandmother is not with us anymore,
but my mother-in-law asked me about AI and ChatGPT the other day.
My father has said, sounds like this AI thing is going to be replacing jobs.
You know, everyone I talk to, if they find out that I'm in tech,
you know, they want to know what this really
means. And I'm always sounding a cautious note for them. You know, I don't think that AI is not
important. Far from it. But I feel like at this point, we are still in the new toy phase of
AI rather than the let's-get-some-work-done-here phase. You know, Frederic, what do you
think? What would you say if a non-tech person came to you and said, you know, what is this
agentic AI? Maybe they've heard of agentic. What does that mean?
Right. So first of all, I guess there are two questions there. When
people ask me about AI in general, you know, I try to explain it as something that augments our
capabilities. I mean, you brought up your grandmother. My mother is 90 years old and she
uses ChatGPT. And I didn't teach her ChatGPT. She's using it for translation and for writing,
you know, documents. So I think we're entering a phase where, when I explain AI to somebody,
there's the low-hanging fruit, right? It's the grammar, the translation, looking up things.
You know, instead of Googling nowadays, it's ChatGPT. So that's the generic term. As far as agentic
AI, the way I explain it to people is, first of all, if they're familiar with ChatGPT,
then I will refer to it, you know, this is a type of generative AI. And I will say that
agentic AI does two things. The first thing it does is it introduces the concept of
an agent. And an agent is like translation or sending an email. And then the second component
that makes agentic AI agentic AI is the reasoning. And so the way I explain that
to people is that agentic AI is similar to kind of thinking before saying something,
right? The traditional large language models of generative AI spit out the first thing that
comes to mind and send it to the users. Agentic AI is where there's a little bit more reasoning,
just like us humans. So in a nutshell, agentic AI is the concept of agents, or plugins if you wish,
and then the second piece is the fact that there is more reasoning going on than in traditional generative AI.
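Frederic's two ingredients, agents plus iterative reasoning, can be sketched in a few lines of Python. This is an illustrative toy, not any real framework: the "model" is a stub standing in for an LLM's reasoning step, and the agent names (translate, send_email) and addresses are invented for the example.

```python
# Minimal sketch of agents (tools the model can invoke) plus reasoning
# (an iterative loop that decides the next step before answering).

def translate(text: str) -> str:
    """Toy agent: pretend translation by tagging the text."""
    return f"[translated] {text}"

def send_email(to: str, body: str) -> str:
    """Toy agent: pretend to send an email, return a receipt."""
    return f"sent to {to}: {body}"

AGENTS = {"translate": translate, "send_email": send_email}

def stub_model(goal: str, history: list) -> dict:
    """Stand-in for an LLM's reasoning step: pick the next action.

    First pass: translate the goal. Second pass: email the result.
    Then declare the task done.
    """
    if not history:
        return {"action": "translate", "args": {"text": goal}}
    if len(history) == 1:
        return {"action": "send_email",
                "args": {"to": "user@example.com", "body": history[-1]}}
    return {"action": "done", "args": {}}

def run_agent(goal: str) -> list:
    """Iterate: reason about the next step, act, feed the result back."""
    history = []
    while True:
        step = stub_model(goal, history)
        if step["action"] == "done":
            return history
        result = AGENTS[step["action"]](**step["args"])
        history.append(result)

trace = run_agent("Bonjour tout le monde")
print(trace)
# The trace records two actions: a translation, then an email built from it.
```

The loop, not the individual tools, is what makes this "agentic" in the sense discussed here: each action's result is fed back into the next reasoning step instead of emitting the first answer.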
Reasoning, it's an interesting choice of words.
The usual word that I stress when people ask me about AI and how to use it is simulation.
AI is not intelligent, despite the name.
It's a simulation of intelligence.
Generative AI in particular, and I do mention that mostly I'm talking about generative
AI, since that's anywhere between 90 and 99% of the attention right now.
Generative AI in particular is designed to simulate reasoning or simulate speech,
simulate, well, it can simulate a lot of strings.
It can add a lot of strings to a lot of other strings, so it can simulate a DNA strand,
for example, based on input.
And I think that's a really
important distinction to make. It's the source of hallucinations. It's the source of AI's general
stupidity. But what AI does do in terms of simulation is extremely useful and helpful, especially as long
as you keep that in mind. So I don't know if I'd use the word reasoning for agentic AI. I mean,
the idea of agency is just that, like you said, it's something that can go do things. And an AI agent that
can go and do things without having specific algorithms or sets of instructions that can,
more or less, with permission, prompt itself to send that email based on certain conditions.
And then, really importantly, do what amounts to learning or retraining as it goes so it can do it
better. I think that's where I land in terms of agentic AI.
Yeah, definitely. I mean, we can call it whatever we want, right? Reasoning,
or other terminology.
The bottom line is that there is data,
there is background information and historical data,
and that historical data is being manipulated by math, right?
The reason why in a lot of the technical industry,
the word reasoning is being used,
it's because it's referring to the fact that the first answer
the system comes up with is not necessarily
the right answer, right? So you can call it a simulation or an iterative approach if you want.
The idea is that just like with us humans is that the first answer is not necessarily the right answer. It could be,
but it doesn't necessarily mean it's the right thing. From a technology standpoint, it basically means there is a lot more going on
when you ask the system, an agentic AI system, a question while you could ask a generative
AI system exactly the same question. The agentic AI will do a lot more in the background
than a generative AI. And it's very difficult to explain to people, right? Because people
don't necessarily even understand the word reasoning or simulation, right? In the end, it's still a
machine. It's not a human, right? And nobody's trying
to say that generative AI, or I should say agentic AI, is a replacement for a human, right?
Yeah.
And I think that's the key there. Well, I don't want to get too philosophical here on episode one.
But I do think an argument could be made that at some point it doesn't matter
whether it's thinking or not
if the result is the result
that a thinking machine would come up
with. I think that also
it is very, very true to say that
it is not thinking
the way that we would consider thinking.
It is statistical, but that
the combination of
iteration, as
Frederic said, and
selective use of
data can result in
something that is the same effect
as an intelligent
system, even if it's not one of us.
Yeah, and agentic does take us a little bit out of the bind that generative leads us
into in terms of that simulation idea.
The way I usually put it is that the design point for generative AI is a simulation.
It's not truth.
This is commonly well known within AI, and I think a lot of the general public
is picking up on this: the results produced by AI can be just dead wrong. The problem is
that because simulation is the design point, it appears true. It appears equally true. And to Frederic's
point, and to your point, when you're thinking about agentic AI, you're thinking about a process,
some automated or semi-automated process, even if it looks like the same
chatbot with the generative AI behind it. That process is designed. It's designed by humans. It can be
adapted to some degree by the agent itself. And so the result is that you're missing some of that
simulative character. So I don't think that's philosophizing at all. I think it's a useful
corrective for us to understand what's going on right now with AI and what its promises are.
I just worry that even the creators of these agents are fooling themselves about what they're capable of and what they're doing.
Well, hopefully that won't be happening here.
I think that we've got some folks here who really do understand, you know, what's really going on.
But, you know, Frederic mentioned another aspect of agentic AI that's, I think, equally important.
And that is the ability, in a way, to perform work.
And, of course, it has to have context.
It has to have a chain of thought, sort of an iterative reasoning process, to analyze that data and decide (not to say decide, I am anthropomorphizing) to output an action.
And then it has to have the ability to take that action.
And that has led to a need for standard frameworks to allow these AI agents to interact with each other.
And one of the ones that we're hearing a lot about, and I think we're going to hear a lot about this season, is what's called MCP or model context protocol.
Which of you would like to explain what MCP is?
Well, Frederic's been on the firing line so far.
So I'll go first this time, then he will correct me, if that's okay.
Because I think of it as being relatively simple.
MCP is a way for AI-based applications or AI agents to seek context, to request context and to receive it.
Largely, it can be from other AIs or AI engines, and it can do this using a relatively standard API-like interface.
So it can be programmed in or it can be, it can discover these resources, you know, in its system.
And that is what allows these systems of AIs, including Agenic AI, to be more effective and to work together.
Yeah, exactly. You know, agentic AI, as I mentioned, is about agents. You have different agents. A lot of organizations are deploying and delivering agents
that consumers can pull together
and an MCP server
has the ability to pull all these agents
together and generate the content.
I mean, it's important to note
that there's a few versions of the MCP server.
Some are task-driven, others have a different approach.
But in the end, you can look at it as a way to standardize,
right?
You have a bunch of agents that are very capable.
You have the ability to daisy-chain
those agents, right? And so you can build, and when I say you, I mean you as a non-technical person
or consumer can build, a reasonable, workable application with MCP servers. It has to be said that
with MCP there are probably like two or three different server types today. It's evolving really quickly,
but you can see how many organizations are jumping on board and delivering capabilities, right? So,
for example, Docker Desktop is an application you can install on your desktop, which comes with MCP servers ready to go.
And, Stephen, can I add a little context to this, the importance of MCP?
MCP is an open standard.
So, you know, AIs, to just give a general label to all this stuff, have interacted with each other programmatically before, but it was less than a year ago
that the first MCP specification was published.
I mean, this thing has grown super rapidly.
But here's what I want to say.
The import of something like MCP cannot be overstated.
If you think of just a regular generative AI model,
it's taking a string of things and outputting a string of things
that should follow that string of things.
Usually the string of things is words, and it follows with more words.
So when you are doing good prompt engineering,
you're adding all this context and all this stuff to make
that input of words and attachments, it's all, you know, a string of things to generate more
things. The more you provide, the more complete, the more on point it all is, the better your
output. That's generative AI. Now, imagine that the AI did not have to rely on whatever you
happen to put in, but could go out and seek other contexts. That's what MCP allows. That is
critical for an AI to be agentic and not just generative.
Yeah. And, you know, ultimately, like, as Frederick was saying, I think the thing about
MCP that is exciting is, to me, the way that it encapsulates this context in a way that
is standardized. In fact, I could see MCP being leveraged by non-Gen AI technologies as well,
because it is very much, it just makes a lot of sense.
Those of us who've been using, for example,
process automation technologies for years now,
or the sort of, if this, then that type technologies
have encountered the problems of basically AI rot or API rot,
making sure that as things are upgraded,
that they continue to work,
figuring out how to pass data from, I hate to use the word agent, but from agent to agent,
from component to component.
And MCP actually takes a lot of that work and in the context of generative AI moves it forward
into an extensible framework.
Now, that's exciting beyond AI, but in the context of AI, it's especially exciting
because what it means is that you can basically give a package, a payload to the next worker in the chain,
the next AI, you know, agent in the chain, and say, here, do something with this.
And unlike conventional APIs that are somewhat brittle and fragile, it can be a lot more robust because it uses generative AI.
At least that's how it seems to me.
What do you think of that, Frederic? Yeah, it's exactly right. I mean, I know you don't like the word agent,
but the agent in an agentic AI environment doesn't have to be AI-driven. It can be something very,
very simple. And to your point, you can have agents that are non-AI-driven, but by enabling them with
an MCP server, you end up with an application that can do a lot more than
the components individually by themselves.
I'm not sure if we actually defined MCP.
You know, it stands for Model Context Protocol, in case people want to look it up.
But you're absolutely right.
I mean, I think what agentic does for the community and people out
there is it enables people to do a lot more.
And we see that, right?
People that were asking for basic applications in the past are now asking for
similar applications, or at least similar functionality, but then driven by an MCP server.
And it's fascinating how easy it is to set it up, right? And we have said it before,
but the speed of innovation is incredible, certainly combined with vibe coding.
I mean, who needs an engineer to build a prototype? I'm not talking about production,
but prototype-wise. It's an incredible time to be around.
I do think that we need engineers.
And I know you're not being an absolutist about this, Frederic, not at all.
But it's that whole idea that when you're using generative AI, for example,
it really helps to have expertise in the area that you are working on so that you can utilize
what comes out for good and recognize the part that might be problematic.
And in the same way, not for nothing, I think, you know,
if there's such a thing as elegant code, there's probably such a thing as elegance in vibe code.
Well, sure, I'm with you on that.
I actually am concerned that as people are vibe coding more, they may mistake vibes for quality and think that they actually have developed not a prototype, but a finished product.
That's right.
That doesn't sound great. But that being said, I hope that won't happen.
And I am actually, you know, fairly optimistic about a lot of the work that's been happening.
I mean, if you look at what MCP does, it constrains the context that the next link in the chain can work with.
You look at some of the other guardrail type things that are being put up around AI systems.
I think that that's all good because of a lot of the problems that we've been having with AI, you know. I mean, certainly.
My biggest problem with using AI is that it's non-deterministic.
You know, I can throw a set of data at Gemini and get this output,
and then I can throw it the same set of data at Gemini and get a completely different output.
And that's challenging for me as a developer.
I think that there's many ways in which we can kind of address that
with additional guardrails and boundaries and context setting that can hopefully
help kind of constrain some of that randomness.
But at the same time, I do think that it's exciting where this stuff can go.
Again, one of the words that I used before was brittle.
I have found agentic systems prior to AI to be extremely brittle,
to the point that I became very frustrated in a lot of these process automation technologies
because essentially those links in the chain would be changed
without notice somewhere.
And so even though it was deterministic,
it always gave the same output,
it sure didn't once they changed the API on me.
And, you know, actually
these days
I'm using generative AI as sort of an API
super glue already where I'll throw it
some JSON from something that I know sometimes
is a little bit iffy
and say, give me a JSON output
from this JSON input.
And the result is usually a lot more sturdy and reliable than anything else.
And that's what I'm hoping that we'll see with agentic paths, agent-to-agent and MCP.
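Stephen's "API super glue" pattern, handing iffy JSON to a model with a request for output in a fixed schema and then validating the result before the next link in the chain sees it, can be sketched like this. The model call is a stub standing in for an LLM API, and the schema and field names are invented for the example.

```python
# Sketch: reshape messy JSON into a fixed schema (stubbed model step),
# then validate the result before passing it downstream.
import json

EXPECTED_FIELDS = {"name": str, "temperature_c": float}

def stub_llm_reshape(messy_json: str) -> str:
    """Stand-in for an LLM prompted to emit {"name", "temperature_c"}."""
    data = json.loads(messy_json)
    return json.dumps({
        "name": data.get("Name") or data.get("name", "unknown"),
        "temperature_c": float(data.get("temp",
                                        data.get("temperature_c", 0))),
    })

def validate(payload: str) -> dict:
    """Guardrail: reject output that doesn't match the expected schema,
    so a non-deterministic step can't leak a surprise shape downstream."""
    data = json.loads(payload)
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"bad or missing field: {field}")
    return data

messy = '{"Name": "Boston", "temp": "21.5", "extra": "ignored"}'
clean = validate(stub_llm_reshape(messy))
print(clean)  # {'name': 'Boston', 'temperature_c': 21.5}
```

The validation step is what restores the determinism Stephen misses: however the model reshapes its input, only payloads matching the agreed schema ever reach the next agent in the chain.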
Yeah, so we talked about two different sides of agentic AI.
One is the developer side, which is an interesting piece by itself.
But I have to reiterate what I always say: AI in general, whatever it is, is to augment our capabilities, not to replace them.
So you will never hear me say that vibe coding replaces a developer.
When I use vibe coding, if I can even call it that,
it's the equivalent of me buying a book and looking up a reference,
you know, an API call.
Now I go to vibe coding, and it will provide me with some reasonable guidance around API calls.
And then there's the flip side of people consuming agentic AI, right?
I mean, another thing I have a problem with is people kind of assuming that whatever
agentic AI spits out has to be exactly what you expect.
I mean, it's having different opinions.
That's not bad, right?
It's the same data, different opinions.
That's what we all do, right?
That's why we have this conversation.
We all have the same data or similar data, but we might have a different opinion.
I think it's important to note that agentic AI by itself
might give you different answers and evolves, right?
I mean, another thing which we haven't talked about,
but RAG is really important in agentic AI.
So RAG is retrieval-augmented generation,
which is the ability to inject information almost in real time
and change the behavior of an agentic AI system, right?
So the expectation is that the system should behave differently
if you ask the same question over and over
and over. Yeah, I'm looking forward to learning how practitioners and, you know, I suppose
vendors as well are putting borders around, or, I guess, identifying scenarios is really what I'm
thinking of: here's a good scenario for this type of AI work, here's a good scenario to
avoid. I guess the reason I initially avoided the word scenarios is I'm talking
about sort of, not, oh, this is really great for computer vision. This is really, not that kind
of scenario. I mean scenarios where the type of work, the type of environment, the type of decision
making required, some will be obvious regardless of the application for agentic or for generative
or for both. And some will be obvious ones to avoid. I think that kind of everyone's throwing
AI at everything all the time right now in a certain sense. And that makes sense when no one is
really sure exactly where it's going to be productive and where it's not. But I wonder if there
are practitioners out there right now who have enough experience at this point to be able to say,
no, we don't have the right personnel, or this is not useful for this particular type of
work. I'd like to find that out. So as we look forward to the next
you know eight episodes of this season
I want to take a moment here before the end
to ask each of you
you know what would be your ideal
outcome for this?
What would you like to learn
and who would you like to talk to
over the next coming weeks
to learn that, to reach that?
Guy, you want to kick us off?
What would you like to learn this season?
I'd like to learn, twofold, if there are productivity measures that are lighting people on fire.
I've been maintaining from the beginning that productivity is a secondary benefit of AI,
that it's just, it helps you or humans or certain types of work to be more reliable
because you can just get started instantly, no writer's block.
So it's more reliable, and you can fit more review cycles in, so you can
come up with higher-quality work, and productivity flows from that. But I do think that
productivity is why everyone's in it. And I want to understand what people are seeing. And I think
there's less productivity out there than advertised, but people are still pursuing it. So why
are they doing that? I think that there are lots of benefits that don't necessarily come down
to dollars and cents or ROI or that sort of thing. And I would love to help understand and
shape the discussion around that.
How about you, Frederic?
Yeah, I think, I mean,
the engineer in me says, I want to learn
about innovation, right? What don't
I know and what's around the corner?
And I think that's
the first thing is always to learn
something from other people.
The second thing is
the fact that
systems are becoming more and more
complex. It's to
the point where the people who provide the models and provide agents don't even know how
their final product will be used. So it's governance and security I really would like to find
out about, and this is the holy grail, right? How do you diagnose or analyze a given agentic
AI system for governance, security, and bias? Even today it's a problem, right? We're
given a system, and we have no good metric to validate those components.
And as technology goes faster and faster, there is a tendency for, you know, leaving that
behind or treating it as an afterthought, which, you know, Stephen, you and I have been talking about
for a long time. You know, governance and security is a big concern.
I think it's getting worse.
And then my final statement is I look at AI as an assistant.
So I would like AI to be a better assistant to me in my work, in my private life.
Really good points, especially the security. You know, let's not use a fancy word.
Let's just say, sort of, you know, human control, if you like.
I think that's a big worry.
And maybe that's an area where we need to mature a bit.
Just saying, like you did, Stephen, that it's non-deterministic, that's a fancy
way of saying, I don't know what it's going to do. And, you know, that can be a problem. But
that's true of the humans we interact with. So, you know, better get used to it. That's certainly
true. You know, I didn't expect you to say that, but no, it's certainly true. And I would add, you know,
one more thing I'd like to see is I'd like to hear about productive uses of this technology.
I really want to know what are people doing with this that they couldn't do before.
And that to me is the hallmark of any kind of successful technology.
I think right now we've got a really cool thing going,
but we need to make sure that this isn't just a parlor trick,
that this isn't just a toy.
It needs to be something that's useful.
And so again, back to the title of this podcast, way back eight seasons ago,
when we said Utilizing AI, why did we call it that?
That means to make productive use of a technology.
And so let's figure out how we can actually utilize AI.
Now that we've got technology that works,
now that we have a context protocol,
now that we have the ability to connect AI
with external data sources,
we've got infrastructure.
How are we actually using this stuff?
And that's actually one of the things we're going to talk about
on the very next episode.
So on the first regular episode of this season,
we're going to be talking to a great leader and thinker on this
about how his company is building AI models and agentic systems
that are specific to industry verticals.
So they're not just putting a chatbot on the side of the website to say,
how can I help you?
They're building applications that do things in
specific verticals. And so you'll learn a lot more about that. And over the season, we're going
to be inviting more people like that, whether it is companies that are designing and building
products or thinkers, doers, who are out there creating this or thinking about it and advising
on it. And hopefully, when November comes around and this season is done, you will have learned
a thing or two, because I'm pretty sure that I will have, along with Frederick and Guy. So thank you
very much for listening. It's great to have you join us for this season of Utilizing Tech.
You will find this podcast in your favorite podcast application. You can also find videos of
it on YouTube, and you can find it streaming in the Techstrong app on Roku and Apple TV and other
places like that. If you enjoyed this discussion, please leave us a rating, leave us a nice
review. This podcast is brought to you by Tech Field Day, which is part of the Futurum Group.
For more episodes, head over to our dedicated website, UtilizingTech.com, or follow us on
X/Twitter, Bluesky, and Mastodon at Utilizing Tech. Thanks for listening, and we will see you
next week.