Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 09x08: A Realistic Approach to Agentic AI with Nick Patience of The Futurum Group
Episode Date: November 17, 2025

As customers try to figure out how to present data to Agentic AI applications, many of them are realizing that it's time for the storage infrastructure team to step up and take a seat at the table.

In this episode of Utilizing Tech, recorded live at NetApp Insight in Las Vegas, hosts Stephen Foskett and Guy Currier from The Futurum Group sit down with Ingo Fuchs, Chief Technologist for AI at NetApp, to explore the critical role of data infrastructure in supporting enterprise AI and agentic AI applications. As organizations move AI workloads into production, traditional infrastructures—especially storage teams—must take a more active role in enabling performance, efficiency, and governance. Ingo emphasizes the emerging needs for data quality, control, compliance, and currency, particularly as AI agents begin making decisions and interacting with sensitive enterprise data. The conversation highlights how NetApp's capabilities, such as AI Data Engine and native infrastructure integrations, enable real-time data pipeline management, enforce guardrails, and ensure consistent and secure data delivery. This shift represents a transformative intersection of storage, infrastructure, and AI operations, paving the way for scalable and reliable enterprise AI solutions.

Guest: Nick Patience, VP and Practice Lead for AI at The Futurum Group

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Frederic Van Haren, Founder and CTO of HighFens, Inc.; Guy Currier, Chief Analyst at Visible Impact, The Futurum Group.

For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
Although companies are just starting to deploy generative AI, industry attention is already turning to AI agents.
This episode of Utilizing Tech brings a realistic perspective on the Agentic AI timeline with Nick Patience, VP and Practice Lead for AI at the Futurum Group.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group.
This season focuses on practical applications of Agentic AI and other related innovations in artificial intelligence.
I'm your host, Stephen Foskett, organizer of the Tech Field Day event series, including AI Field Day.
And joining me this week in the co-hosting seat is Mr. Frederic Van Haren.
Welcome to the show, Frederic.
Thank you. Glad to be here.
So my name is Frederic Van Haren.
I'm the founder and CTO of HighFens, and we provide HPC and AI consulting services.
You know, Frederic, we've been talking quite a lot about agentic AI this season.
I guess that's the topic, so it's no surprise.
But I guess, you know, we're still in early phases of rolling this stuff out.
I think that people forget how quickly this field has moved.
Yeah, I totally agree.
I think people are still digesting what generative AI is,
and guess what?
Now we're talking about agentic AI and agents.
I think it just proves how fast all of this is going in the AI world.
The question really is, how can consumers kind of follow
and learn about all these new technologies as they come out?
I agree.
And I think that's especially difficult for enterprise buyers who fear that, you know,
there's so much news about this.
They fear that they're being left behind maybe, but they're really not.
This is really early stages of the development of this technology.
And that's why this week we decided to bring in one of the folks here from the Futurum
Group who really focuses on this, Nick Patience, who is able
to maybe provide a little perspective and a little realism: where are we really when it comes
to agentic AI? So, Nick, welcome to the show. Thanks, Stephen, thanks for having me.
So, as Stephen said, my name's Nick Patience. I am the vice president and AI platforms
practice lead at Futurum Research, another part of the Futurum Group. So I'm the kind of
principal AI analyst here. Everybody's an AI analyst to a certain extent, but I really focus on
the things that are fundamental to AI, and obviously
agentic AI is part of that.
My history, my background,
I've been looking at AI for over 25 years.
I started another analyst company called 451 Research back in 2000,
and I was early on focused on machine learning and text analytics and all those things,
and I've really just stayed focused on that.
And obviously, the whole kind of interest level in this space has exploded
since late 2022 when ChatGPT was launched.
So let's start off there.
You know, you've been watching this for a long time, as have we.
And I think that sometimes we as well get sort of pulled into all the news and the announcements and the hype and forget that a lot of this is still off in the future.
What is your perspective on the timeline, especially around Agentic AI?
Yeah, you're right.
We're incredibly early with Agentic.
When you kind of think about it, we alluded to it just at the top of the podcast,
the kind of compression of time that's gone from machine learning to other kinds of predictive
AI, and now, you know, we're coming right up against the third anniversary of the launch of
ChatGPT in November of 2022. And then, you know, just less than three years later,
we're also now trying to ask enterprises to embrace agentic AI when they're only really beginning
to understand how to operationalize generative AI in the form of, you know, chatbots and people
writing prompts into them, and then all the kind of interesting stuff and the scary stuff that
ensued from that. So I think if you're kind of thinking about an S-curve, you know,
we're very much on the flat bit at the bottom. But there's always a
pressure on enterprises. Obviously, this used to be really exclusively the domain of the tech
industry itself and financial services companies that had, you know, larger software development
teams, but really every company is embracing technology these days. So they're all under
this kind of pressure; there's FOMO, the fear of missing out.
Meanwhile, on every day, they've got to run a business.
And so this is, this is what they're up against.
And there's obviously, you know, also the kind of timeline between what seems, you know,
magical to them, becoming normal to them, becoming boring; that used to take decades.
And now it sometimes feels that it takes days.
So, you know, new models coming out almost daily.
And then tools on top of those models.
And so it's incredibly difficult to keep up.
That's why they engage analysts like us at Futurum, to help them, you know, get the big picture,
but also some, you know, some specific guidance on it.
No doubt that it's early stage.
What do you see as agentic AI applications that are being delivered today?
And again, we all understand it's early, but do you see kind of a trend, or the
kinds of applications that are making a breakthrough?
I think it's similar to every kind of AI trend we've seen from back in the predictive days.
You start with the things that are horizontal, so they're not vertical-specific, and every
company has some sort of customer service challenge ahead of them.
That doesn't matter whether they're B2C, B2B, or any combination thereof.
So usually that is the first kind of opportunity.
So, you know, when we were talking about predictive models, we were
talking about classification of tickets and things like that.
Now we've moved way beyond that.
And now the ability to, quote, understand natural language, you know,
with generative AI, has opened up a whole slew of opportunities for people to kind of
at least semi-automate customer service at a scale that they never
could. So if they have only a handful of customer service people, but they have thousands of
software agents, you know, there's a clear opportunity there to be able to deal with people,
enable them to interact with natural language, and then, you know, hopefully, you know,
resolve their issues or, if not, then escalate them to humans. And so we see, you know,
a lot of that. I guess the other things
we're working towards
is, you know, we're working towards some sort of
workflow automation, but that's, that is, that gets
very, very specific to each company. And so
that's, that's a, yeah, that's, that's more, that's more
challenging. I guess some of the more novel use cases we've had
since generative AI, you know, came along. And let's be clear,
obviously, agenic doesn't work without genitive AI, is the
ability to analyze unstructured data
at scale and then search for hidden patterns, turn those patterns into some sort of actionable
insight, and do that over and over again, like having, you know, thousands of interns.
And so I think, you know, then we've seen the kind of rise of copilot like things, whether
it's the original ones like from Microsoft or other similar tools that just sit alongside
us at work and do, you know, very simple tasks such as, you know, suggesting times for
meetings and things like that, and who might want to be in it, and summarizing meetings, which
has now almost become standard. Again, think about how quickly that's gone from, you know, that can't
be done, to more or less every meeting is recorded and summarized. That's, you know, a matter of
a couple of years. And so, you know, all those kinds of use cases are there, where
every company of any size has got that problem. And what I think will happen, similar
to what happened with predictive AI, is eventually it will get very
verticalized. So, you know, if you're a car maker or you're a bank, you have quite
different problems once you get down beyond those initial horizontal use cases that
everybody has. And that's because AI is dependent on data, and if you're the car company, your
data set is completely different than if you're the financial services company, more or less.
Obviously there's finance involved in cars and things like that, but you get what I mean.
And so then the application becomes, you know, vertical-specific
and then almost, you know, company-specific.
But I think we're a very, very long way from that situation with agentic AI.
And, yeah, we're really, really early.
And we've looked into some really narrow domains of, let's say, customer support, customer help.
So some of those kind of, you know, software companies and, you know, websites always have, you know, something.com slash help.
And we did actually a project where we looked at how many of those help services are agentic,
or how many of them are not agentic in the sense that they rely on humans
prompting them to do things all the way through.
And it's amazing how few true agents, autonomous agents,
exist in those kinds of environments.
And bear in mind, that is the tech industry itself,
and that is the help within their own domain.
So this is not trying to solve a massive problem.
This is trying to understand what's going on with their own applications
and things like that.
So I'm not denigrating anybody for that situation.
It's just that we are, you know, the hype always is obviously going to be well ahead of
the reality.
And, you know, part of my job is to try and keep our feet on the ground while also
seeing where we're going to go in the next, you know, three to five years.
I think it's really interesting.
I love that phrase, predictive AI, as a better way to describe what we
currently have, because in many ways, that's really what most of our LLMs are used for, or at least
what they're doing.
They're predicting what the interaction should result in, whether that's generation of code
or generation of support answers, as opposed to Agentic, which theoretically would include
some sort of tool use, some sort of external references and calculation.
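To make that distinction concrete, here is a minimal sketch of an agentic loop in Python. The `call_llm` stub and the tool names are hypothetical placeholders, not any specific vendor API; the point is only that, unlike a single predictive completion, the loop lets the model request non-generative steps (a calculation, an external lookup) and feeds the results back before it answers.

```python
# Minimal sketch of an agentic tool-use loop (hypothetical stubs, not a real vendor API).
# A plain "predictive" call just returns text; this loop lets the model request tools
# (calculation, external lookups) and folds the results back in before answering.
import json

def call_llm(messages):
    """Stand-in for a chat-completion API (swap in a real provider).
    This fake requests one calculator call, then answers using the tool result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expr": "19 * 37"}}
    return {"content": f"The answer is {messages[-1]['content']}."}

TOOLS = {
    "calculator": lambda args: str(eval(args["expr"], {"__builtins__": {}})),  # toy example only
    "order_lookup": lambda args: json.dumps({"order_id": args["order_id"], "status": "shipped"}),
}

def run_agent(user_message, max_steps=5):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" not in reply:                            # model produced a final answer
            return reply["content"]
        result = TOOLS[reply["tool"]](reply["args"])       # the non-generative step
        messages.append({"role": "tool", "content": result})
    return "Escalating to a human agent."                  # guardrail if the loop stalls

print(run_agent("What is 19 times 37?"))   # -> "The answer is 703."
```

The escalation fallback mirrors the customer-service pattern described earlier: resolve what you can, hand the rest to a human.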
I know that you, one of the things I'd love to hear from you about is non-generative steps in agentic AI tool chains.
You know, but it occurs to me as you were speaking there, I think one of the challenges that we've got here sort of in terms of vernacular is those predictive AI chatbots that are in the, you know, company.com slash help URL.
Those are called agents.
In fact, they're usually called agents.
I interacted unsuccessfully with the United Airlines agent yesterday using their terrible predictive AI.
And, you know, I wonder if we have sort of a semantic challenge here.
Is that part of your role as well, trying to help clarify what we mean by all these terms?
Yeah, it's definitely in part.
I mean, I think there will be more clarification.
The reason, I guess, obviously, these things were called agents is it goes back to that customer service focus.
And as you say, your kind of experience is not atypical.
And so I think gradually over time when we get to the point where the software is working behind the scenes,
so the agentic software is, you know, taking action without a human necessarily knowing
or a human having to have any interaction with it.
That's when, you know, maybe the agent word might be more appropriate. I think at
the moment, an agent obviously comes from customer service, a person; that's
the nomenclature, where that originates. But what if it's executing, you know, 1,500 workflow
steps without you knowing, and something gets done in the background? Then I think, yeah,
that's the kind of goal we're trying to get to. And also, you mentioned some of the
kind of generative AI and, you know,
maybe the probabilistic and the deterministic aspects.
You didn't use those words, but that kind of difference.
I think one thing I'd just like to point out, that we're looking at at the moment, is this:
the early generative AI use cases were humans typing things in
and getting results back,
and that's probabilistic.
That's using a large language model, which has, you know, a model
scraped from the web, and that's really useful for doing creative tasks.
And creativity doesn't have to be, you know, literally artistic, but, you know,
that obviously is great in those situations.
But creativity, obviously: suggesting, you know, ideation, you know, give me some ideas
of what I should be talking, writing about here, or, you know, we've got a meeting about
this, you know, can you produce an agenda on it?
That's all very creative stuff.
And, yeah, they're pretty good at that.
And obviously, there's a load of, you know, there's hallucinations.
We know about quite a lot of them.
and there's some things that are just completely incorrect,
and you have to work with them.
But that's great.
But there's also a need for agentic automation
of deterministic processes.
So the classic example is, you know,
payroll runs on the 15th of the month.
I don't want a creative suggestion that says,
why don't you delay that to the 19th,
and then for that to cause an action
that delays everybody getting paid.
That's not useful at all.
And so those kind of things,
there's an element there of agentic automation.
I think, you know, this is what you're trying to get to.
This is what the agents are going to be doing.
They're going to be automating these processes.
And on the side, there's this fantastic generative aspect of it
where humans can interact with natural language.
But we kind of need to, organizations need to understand
that there's a place for some of that
and there's a place for the straight-up deterministic automation.
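As an illustration of that separation, here is a minimal sketch built around the payroll example. The date logic is deterministic and never delegated to a model; the generative part (stubbed here) is confined to drafting the human-facing text and only receives the already-decided date. All names are hypothetical.

```python
# Sketch: keep the deterministic rule (payroll runs on the 15th) out of the model's
# hands, and use the generative side only for the "creative" part, the wording.
from datetime import date

PAYROLL_DAY = 15  # fixed business rule, never delegated to the model

def next_payroll_date(today: date) -> date:
    """Deterministic: the same input always yields the same answer."""
    if today.day <= PAYROLL_DAY:
        return today.replace(day=PAYROLL_DAY)
    month, year = (1, today.year + 1) if today.month == 12 else (today.month + 1, today.year)
    return date(year, month, PAYROLL_DAY)

def draft_notification(run_date: date) -> str:
    # Probabilistic/creative step: an LLM could phrase this more nicely, but it only
    # receives the already-decided date; it cannot move it. (Stubbed as a template.)
    return f"Reminder: payroll will run on {run_date.isoformat()} as scheduled."

if __name__ == "__main__":
    print(draft_notification(next_payroll_date(date.today())))
```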
So if you think back, you know, I always like to think the history of the software industry
is a history of automating repetitive human processes, starting back in the 50s with mainframes
in accounting in finance, you know, and they were literally number crunching.
And then we've moved along all the way through. We've always been focused, usually,
on structured data in relational databases and then data warehouses and then data lakes
and things like that.
And we built up all these kind of software tools on top, analytics tools.
Business intelligence, all those kinds of things, and then huge application
suites. And they were basically following rules and executing, you know, processes, but those
rules had to be written by humans, they had to be overseen by humans, and so on and so
forth. What we're going to move towards is, you know, the more agentic future where, you know,
there are some completely probabilistic situations, but the
software is in some cases managing itself and executing
on our behalf. And I think one thing, one kind of rule of
thumb, a heuristic, I guess, for organizations to think about is: if
your problem involves a load of structured data, such as your
customer records, your employee records, and things like that,
and that's where the automation is coming from, then that's probably
going to end up in a fair amount of deterministic processes.
If the problem you're trying to solve involves a load of
unstructured data, say we've got thousands of PDFs and we're trying to
extract tables from them and then turn each table into something useful that we can then
use, then you're going to end up with more probabilistic kinds of challenges. So it's just a way
of framing things. But I think in the agentic space, in terms of, you know, the software
that's out there, we're only really just starting to think about that. I know this
sounds silly, but, you know, here we are in November, and we're only starting to think about that in the last
few weeks.
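A minimal sketch of that framing heuristic, using hypothetical task and data-source names: work grounded purely in structured records routes to deterministic automation, while work grounded in unstructured documents (the thousands-of-PDFs case) routes to a probabilistic, model-backed pipeline with review.

```python
# Sketch of the routing heuristic: structured-data problems lean deterministic,
# unstructured-data problems (e.g. extracting tables from PDFs) lean probabilistic.
from dataclasses import dataclass

STRUCTURED = {"crm_db", "erp_db", "hr_records"}                    # hypothetical source names
UNSTRUCTURED = {"pdf_archive", "email_inbox", "call_transcripts"}

@dataclass
class Task:
    name: str
    data_sources: list[str]

def route(task: Task) -> str:
    if all(src in STRUCTURED for src in task.data_sources):
        return "deterministic_workflow"      # rules/SQL, auditable, repeatable
    if any(src in UNSTRUCTURED for src in task.data_sources):
        return "probabilistic_pipeline"      # LLM extraction plus human review
    return "needs_triage"

print(route(Task("monthly payroll", ["hr_records"])))             # deterministic_workflow
print(route(Task("extract contract terms", ["pdf_archive"])))     # probabilistic_pipeline
```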
So this stuff is moving so incredibly quickly.
I go to a lot of technology vendor conferences.
You know, this time, you know, this week is another one.
It's Microsoft Ignite.
But I've been to many others this year,
and will do again, you know, next year.
And I've been doing that, obviously, for a long time.
So you see kind of, you see these kind of trends.
But, you know, these days, stuff is moving almost daily.
And that's very hard for organizations to keep up with.
It's also hard for analysts to do, but it is our sole focus.
So at least we don't have an excuse of having to do a whole bunch of other things.
Yeah, it's difficult enough to deal with and learn the new technology,
let alone understand the technology behind it.
I mean, there's no doubt that agentic AI can help with automation.
The problem, I think, is that the technology behind AI is becoming so complex as time goes by,
that more and more people use AI,
but fewer and fewer people understand the technology behind it.
In some cases, I believe people use agentic AI
and believe that it's more trustworthy than a human.
Do we have a trust problem with agentic AI?
Oh, yeah.
I mean, I think there will be, yeah.
I mean, because obviously it's so much more powerful.
Because if you had to rely on a human writing prompts in to get things done all the time,
that's useful, but only up to a point.
But if you get to the point where software is executing software, then you know, you can see
how that could scale very quickly and become incredibly powerful and potentially, you know,
dangerous.
So yes, there's definitely a trust problem to be solved.
I think we are still not really thinking about that at a deep level
because a lot of pilots that are in enterprises now are just very much that.
They're pilots within sandboxes.
They're not really dealing with anything of enormous scale where it could cause problems.
But I think, yeah, I think there's definitely a trust issue.
There's an old joke, such as there are jokes in AI,
that, you know, AI is anything that doesn't work yet.
In other words, this is impossible, I'll use some AI.
And then once it works with AI, people go, that's not AI.
That's just the way things work.
Well, it is still AI.
It's just solved the problem and it's moved on to another one.
So I think we're going to have a similar set of issues with agentic AI that we had
with kind of traditional machine learning.
Well, you know, your point there about software executing software, I think is an interesting one because you're right that that's where the trust factor is needed most, but it's not just executing software. It's software writing software and then executing that software and acting autonomously. And I think that that's really where not just the risk and the threat, the trust comes in, but also where the promise comes in. I mean, if you look at what these companies that are developing this technology,
are saying, they're saying that that's basically the promised land. So, you know, AI will become
truly AI. In fact, I've heard a big backlash against that whole phrase AI. People don't want
to use artificial intelligence to describe anything that's not verifiably and independently intelligent.
And they're saying, basically, that we will get there and we'll know we've got there when it is
truly autonomous, when it is taking action on
its own, when it is creating its own motivations, when it's writing its own software, when
it's executing things completely on its own, that's pretty concerning when it is such a black
box, as you also pointed out. I mean, you know, people don't understand how it works. Even people
very close to it don't understand how it works. And people seem to be enamored of it and already
taking it as intelligent when we really haven't gotten to that point yet. What's the prognosis
here for when this will be truly intelligent?
If you mean artificial general intelligence, the ability to do everything a human can do?
I don't want to necessarily push you into that corner, but when it's truly able to act on its own.
Well, it depends what it is, doesn't it? It depends on the domain and depends on the problem you're
trying to solve. You know, you're talking about code generation there. I mean, that's obviously
been, you know, probably the biggest; apart from the kind of customer service stuff,
it's had the biggest effect on organizations' ability to, you know, automate something
in the last couple of years. And that's, that's taken off hugely. And that's, I think,
you know, it's, it might be challenging if you're an entry level, you know, a graduate has just
graduated with a computer science degree looking for a coding job. I'm sure that is definitely
an issue. But when you think of all the legacy code from the people who are
no longer with us, who wrote all the COBOL and the LISP and all these other
kinds of languages that are still relatively important, but aging, you know, there's an enormous
opportunity there to automate the, you know, the maintenance and regeneration of that
code. But I must admit, on the kind of AI safety
spectrum, I'm not
particularly that concerned
that we are going to head to some sort
of AGI oblivion
anytime soon.
I kind of think some of the people that
pushed that
were doing that for a reason.
And that could be a kind of
you know, I can't think of the way
to put it politely.
But there's a reason why people might want to say
you know, I told you so if something bad
happens. But there are so many things that
have got to happen, you know, in order for software to have, you know,
major real-world effects. Obviously, it does. And, you know, our airplanes use software and our
trains do and our cars increasingly do, obviously. But I think, you know,
I don't believe we're one model away from Armageddon at any one point. I think there's
so many controls that will be in place. The fact that some of the people building the models don't
fully understand how they work is real, and that's genuine, and I think that is a challenge.
But they're working on it, and, you know, I think it's one of those things where, once this
space matures, as it does in all forms of software, you will have governance tools and
trust tools in place. And I think you have to have that. I mean, that's
going to be a major opportunity for the software companies to build those things, but it's obviously a
challenge for the enterprises
that want to buy them
and use
agentic software.
But I think, you know,
there's definitely a kind of
platform shift happening.
And when platform shifts like this happen,
some things become kind of features,
you know,
within the incumbent set of software, and some things turn into companies.
And, yeah, from my point of view as an analyst,
what's happened
in the last five years has been incredible,
where you've actually now got pure play AI companies
of massive scale, like OpenAI and Anthropic and the others,
whereas you never had that before.
You always had, you know, the same, more or less the same names,
adding the features to what they had already.
And I think that's where life is going to get quite interesting
for everybody, both in terms of the vendors themselves,
the investors in this industry,
but obviously mainly for the enterprises.
and we'll be looking very closely at how that shakes out.
So, in other words, how do the software-as-a-service vendors,
that are used to selling packaged applications and selling licenses on a subscription basis
and things like that, on a number of seats used, adapt?
We're already seeing the overhaul of the pricing, for instance, of software
by the promise of agentic AI.
That is already happening,
and we're now seeing flexibility being offered by the software companies that they always resisted doing before.
So flexible pricing models and things like that.
So they're already looking at what they think might happen in two or three years' time and having to adjust.
So I think there's definitely always a need for people to be somewhat cautious as to what they're doing.
But, you know, when you think of some of the, you know, the kind of customer service issues that companies have when they're using agentic AI, typically, you know, people are usually not going to get hurt as a result of those things.
But obviously, when we come to much more critical domains, such as transportation or weaponry and things like that, then, you know, that's where you have to have some sort of, you know, governance structure in place.
I don't mean just software.
I mean, you know, legislation.
And I think that's, you know, it's already happened in some parts of the world.
Sometimes you could see it's a bit too top down and a bit crude.
But it will happen.
AI will be a regulated industry.
There's no two ways about that because it is very powerful.
And I think that will happen.
So, you know, it's as much to do with trusting, you know, voting in the right people, I guess, to get the legislation right as anything else.
Yeah, dealing with software is not easy.
It's very difficult to validate software.
I mean, at some point, coding or writing software,
you needed a lot of knowledge from a hardware and a software perspective
and expertise in order to build clean and useful applications
that people understood.
Nowadays, the software is being generated by software.
I have to ask, you know, what's worse:
an agentic AI agent writing software and running it,
or a human vibe coding, generating code and deploying it?
Well, which is worse?
I don't think one's worse than the other.
I guess they're different.
I mean, the vibe coding thing is interesting because, you know,
it's opened up, you know, software development to so many different, you know,
people with completely different skill sets.
You know, and then, you know, obviously, you know, generative AI writing code is obviously, you know, doing something similar.
You know, it's going to affect people whose job is software development.
There's absolutely no two ways about that.
And it already has done.
But I think there's, you know, so much code that needs to be written, because it's an increasingly complex world and we can't manage everything manually, that you're going to need software to manage things that software
currently doesn't manage.
So I think you are going to need both the ability for, you know, software to write its own
code, but also the vibe coding stuff is interesting because obviously the potential for that
is you're getting the domain experts directly into the process.
Now, as you know, in software development teams there has always been this kind
of challenge to get the, you know, the line-of-business people involved at the right time.
That's why we went from, you know, waterfall to agile and all that kind of
stuff. But if you had the actual domain expert being able to write
their own little apps, then, you know, within a reasonable framework, a software development
lifecycle framework that has, you know, testing, QA and governance in place, then I think that's
quite an exciting notion. I think there's a lot of people who would like to be able to, you know,
build their own apps to some extent. Yeah, not everybody. But
yeah, it's an interesting, you know, development, another one that's very recent.
Before we go, one more thing. I wanted to hit on something you brought up right at the very beginning, which was the fact that the world of agentic AI and processes and tools and so on will not just be chatbots talking to chatbots, that we will be looking at additional types of tools in these tool chains.
Some of them may be deterministic conventional software platforms.
Some of them may be, you know, data platforms and different ways of querying structured
and unstructured and even multimodal data.
Others may be, you know, generative AI processes.
Do you see an entirely new type of software industry emerging here to support agentic tool chains?
I don't think an entirely new industry, no.
I suspect you're going to get, you know, some startups that are, you know, covering part of the process.
Yeah, part of the software development process, the governance process, the kind of agentic ops process.
So we had this before. If you go back, there's a category called application performance management that cropped up, and that's got nothing to do with AI at all, really.
I mean, that's just about how our applications are managed.
And, you know, that cropped up.
And then, you know, when predictive AI, you know,
when classic machine learning started to come around, you had MLOps,
machine learning operationalization.
And then, you know, then you have slightly different problems.
So when you've got a model, if you had a rules-based model that only does the thing it
is programmed to do, that's fine.
And obviously, if the thing falls over, you can restart it and stuff like that.
The fundamental difference between that kind of software and AI is obviously that the model
adapts. It learns. It decays and it drifts. It does all these things. It's almost, you know,
it's not, but it's almost organic in nature. And so that is the, you know,
the challenge that MLOps is kind of supposed to address. And then we had a load of
specialist vendors, and a lot of them are still around, that cropped up to do that. I think we're
going to get the same thing with agentic ops, if that's what the phrase
is going to be.
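For a sense of what that operational tooling does in practice, here is a minimal sketch of the kind of drift check MLOps platforms automate, and that an agentic-ops layer would presumably inherit: compare recent model outputs against a reference window and raise an alert when the distributions diverge. The PSI threshold below is an illustrative value, not a standard.

```python
# Sketch: a crude drift check of the kind MLOps tooling automates, using the
# population stability index (PSI) between a reference window and a recent window.
import math

def psi(reference, recent, bins=10):
    """Population Stability Index over equal-width bins; larger values mean more drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            idx = min(max(int((x - lo) / width), 0), bins - 1)   # clamp outliers to edge bins
            counts[idx] += 1
        return [max(c / len(xs), 1e-6) for c in counts]          # avoid log(0)
    ref, cur = hist(reference), hist(recent)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

ALERT_THRESHOLD = 0.2   # illustrative; real thresholds are tuned per model

def check_drift(reference_scores, recent_scores):
    score = psi(reference_scores, recent_scores)
    return {"psi": round(score, 3), "drift_alert": score > ALERT_THRESHOLD}
```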
But some of those will get bought.
Some of those will survive and some of those will go by the wayside.
But you're also going to get all the application vendors of any scale.
So, you know, Salesforce, Oracle, Microsoft, Workday, ServiceNow.
Now, all these companies are obviously building out their own agentic tools, platforms, applications.
They all want to be, everybody wants to be the platform, but not everybody can be.
And then you've got obviously the hyperscalers doing their thing and then all sorts of other
companies offering, you know, agentic tools.
We are seeing a little bit of this bifurcation between the companies that are going
after the, as we said earlier, the creative, the probabilistic creative opportunities.
So those aiming at marketing departments have had quite a lot of recent, strong traction
because they're solving a problem that really could not be solved until generative AI,
until LLMs, came around.
It was just basically impossible.
And then there are those that are dealing with more
deterministic things sitting on top of relational databases, so CRM, ERP, and all those kinds of things.
So we're seeing a little bit of bifurcation there, but you're going to get some winners out of that, I suspect, that will either survive or have a good exit, a financial exit or something like that.
So I don't think so. But then again, as I mentioned earlier, this is the first time where we've had pure-play AI companies of any scale.
And OpenAI is certainly of scale,
and it has ambitions from the device to applications and everything in between.
Obviously, it's known as a model provider, but it's obviously trying to build chips.
It's trying to build devices.
However, how well it gets on with that, we don't know.
And, you know, they're obviously now influencing where data centers get built, how much energy is used.
This is an incredible, you know, development in the space of a decade from scratch.
And so you are going to get those kind of companies.
You wouldn't, you know, necessarily say that's peculiar to generative AI, but it is peculiar to generative AI, which agentic AI is based on.
And so I think it's going to be really interesting to see how, you know, that company and one or two others of massive scale, you know, influence the way the rest of the software industry goes.
And I think, yeah, we're going to have standards.
We've got MCP servers and we've got A2A protocols and there's going to be more.
There has to be more.
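In the spirit of those protocols (this is not the exact MCP or A2A wire format, just an illustration), the common idea is that a tool or agent advertises itself with a name, a human- and model-readable description, and a schema for its inputs, so any compliant client can discover and call it:

```python
# Sketch of a protocol-style tool description: a name, a description the model can
# read, and a JSON Schema for arguments, handled by ordinary deterministic code.
import json

TOOL_MANIFEST = {
    "name": "get_order_status",
    "description": "Look up the shipping status of a customer order.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def handle_call(name, arguments):
    # Server-side dispatch: the protocol layer delivers (name, arguments);
    # what runs behind it is plain application code.
    if name == "get_order_status":
        return {"order_id": arguments["order_id"], "status": "shipped"}
    raise ValueError(f"unknown tool: {name}")

print(json.dumps(TOOL_MANIFEST, indent=2))
print(handle_call("get_order_status", {"order_id": "A-1001"}))
```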
Then it's the question of who becomes, you know, who's the platform, who's an enabling layer
within it that makes it work properly, who's just trying to sell apps and will
use anybody's agentic platform and tools and stuff like that, and then obviously
the chips underneath. And so, you know, companies, in the
sense of enterprises that buy this stuff, are always looking for options and looking for diversity,
and, yeah, whether that be down at the silicon level, where at the moment there isn't much
diversity, or right up at the application level, where there obviously are. So I think it's going to be
a fascinating few years in the agentic AI space. Yeah, absolutely. And I feel like, like you said,
that these massive AI companies, along with a lot of the traditional vendors, you know,
companies like Oracle, Amazon, Google, are angling to build an AI platform, really. And so we'll be
definitely watching that.
We do have to wrap, unfortunately.
I think we could talk to you for many, many hours.
But unfortunately, the time frame for this episode is done.
So thank you so much for joining us.
Before we go, if everybody else wants to continue speaking with you,
where can they find you and where can they find your coverage?
We're at futurumgroup.com.
And you can also find me on X/Twitter at Nick Patience,
and on LinkedIn.
And yeah, I'd love to hear from anybody.
Absolutely.
And, of course, we will be continuing on as well
with the Utilizing AI podcast series for the Futurum Group.
So folks should look for that in their favorite podcast applications.
Frederic, as well, where can we continue this conversation?
Yeah, you can find me on LinkedIn or on our website,
highfens.com.
And as for me, you'll find me, as I said,
on Utilizing AI, on the Techstrong Gang,
on many other platforms here within the Futurum Group,
along with social media as S. Foskett.
Thank you for listening to this episode of Utilizing Tech.
You'll find this podcast in your favorite podcast application
as well as on YouTube, if you want to see what we look like.
If you enjoyed this discussion,
please do leave us a comment, a rating, a review.
We'd love to hear from you.
The podcast was brought to you by Tech Field Day,
which is part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website, which is utilizingtech.com
or find us on X/Twitter, Bluesky, and Mastodon at Utilizing Tech.
Thanks for listening, and we will catch you next week.
Thank you.
