In The Arena by TechArena - AI in 2025: From Hype to Practical Business Impact
Episode Date: July 18, 2025
Join Intel’s Lynn Comp for an up-close TechArena Fireside Chat as she unpacks the reality of enterprise AI adoption, industry transformation, and the practical steps IT leaders must take to stay ahead.
Transcript
Welcome to the Tech Arena featuring authentic discussions between tech's leading innovators
and our host, Allison Klein.
Now let's step into the arena.
Welcome to the Tech Arena Fireside Chat.
I'm Allison Klein.
And for this edition of our Fireside Chat, I've got Lynn Comp with me.
She heads up the AI Center of Excellence (AI COE) at Intel, and I'm so glad to have her here.
Lynn, why don't you go ahead and introduce yourself, say a little bit about your background
and a brief intro on exactly what that AI COE is responsible for doing.
Great. Lynn Comp, and delighted to be back with Tech Arena.
My experience and background have been largely in product management and product strategy
in technology, taken through many waves of transformation.
And what does the AI COE do within Intel? We are the people that are hand in hand
coming alongside our sales teams with customers,
understanding what are the biggest challenges
and opportunities that customers are facing,
looking at this next AI transformation.
What do they have to think about
to be able to successfully pull off
their own transformation and get more value than they get downsides from implementing
that change?
So we do a lot of seller and customer engagement, but also we do a lot of this work, which is
making sure that we are evangelizing where Intel's values stand and where our mission
is related to this AI transformation.
So then 2025 is expected to be the year that enterprises start adopting generative AI and
we're going to see a hockey stick of growth.
You know, it makes me think about so many technology transitions that we've worked on
together in the past, where there's a
moment where technology stops becoming hyped and starts becoming really practical. And I guess one
of the questions that I have for you is, what are you seeing in the market on this front? And are we
at the point that practical enterprise use of generative AI is going to start making sense
to the bottom line?
There's a lot to unpack in that question.
So first of all, I do see in many cases that there are a lot of practical applications
that enterprises are deploying.
So there's a glass half full, glass half empty way to look at it.
Matt Garman at AWS had made a comment that somewhere between 100 and 200 POCs, proofs of
concept, are done before you find a handful of generative AI use cases that are productive
and deployable at scale.
You can look at that and say, wow, those are really low percentages, or you can look at
that and say that's five use cases that are deploying that are going to have high value at scale.
So whether that's a tipping point, I don't know that those are hockey stick numbers,
but I do see that there is an ongoing interest in finding where can this technology really
help my business in its unique way.
And the slowdown is most likely because, to get the most out of AI, especially things where
you're looking at leveraging deep learning or machine learning with your own models or
your own data, it just takes time to absorb the technology and then to brainstorm where you might
apply it.
There are a number of IT companies; I talked with a few last week in
media and entertainment. Their IT group has Greenfield, and they have Keep the Lights On.
The Greenfield team are the ones looking for the opportunities, new revenue streams they haven't
had before.
They're looking for opportunities in new supply chain approaches that they haven't tried before.
The Keep the Lights On side allows the business to keep running with all the safeguards and
everything else necessary.
But at the same time, it frees them up with a strike team that can go find more and more
of those use cases.
Because AI in particular is so unique to a specific data set, a specific business type,
and where you would apply it. Not everybody has manufacturing lines, so not everybody needs
cameras.
I think that's really what we're seeing as the quote-unquote holdup.
And then figuring out the economics.
Model economics are going at an accelerated pace so that they're a lot less expensive
and there's more choice.
Part of the reason that DeepSeek was so interesting to many is that its pricing at the time was 14 cents per million tokens,
while OpenAI was over $7 for the same number of tokens at the time.
And so that increased competition is really helpful
for finding those scale opportunities
where you are doing those AI calls
if you have to do them into the cloud
as opposed to doing them
with your own on-prem model and data.
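As a back-of-the-envelope illustration of that price gap, here is a small Python sketch using the two per-million-token prices quoted above; the monthly token volume is a hypothetical figure for illustration, not something from the conversation.

```python
# Per-million-token prices quoted in the conversation (at the time).
DEEPSEEK_PER_M = 0.14  # DeepSeek: $0.14 per 1M tokens
OPENAI_PER_M = 7.00    # OpenAI: ~$7 per 1M tokens

def monthly_cost(tokens_per_month: int, price_per_million: float) -> float:
    """Dollar cost of a given monthly token volume at a per-million price."""
    return tokens_per_month / 1_000_000 * price_per_million

# Hypothetical workload: 500M tokens per month of AI calls into the cloud.
tokens = 500_000_000
print(f"DeepSeek: ${monthly_cost(tokens, DEEPSEEK_PER_M):,.2f}")  # $70.00
print(f"OpenAI:   ${monthly_cost(tokens, OPENAI_PER_M):,.2f}")    # $3,500.00
```

At scale, that roughly 50x spread is exactly why increased model competition matters for finding deployable use cases.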
I just wanna go back for a second.
I know you've spent a lot of time
in the industry managing through transformations.
What do you think is the moment that I'm talking about?
And I know that you know this,
where technology stops being hype
and actually starts becoming viable.
And at that point, people start questioning its validity.
Can you talk a little bit about that?
It's very interesting that you say that because just yesterday I was in one of those extended
meetings that we all know and love.
When you look at the spending data, the energy around marketing, the push to incorporate something,
and the spending isn't aligning with the revenue, or the revenue reporting is a bit
muted because it's mixed with other revenue types,
that's when I start wondering: are we pre-broad deployment, still looking for what
those key applications are?
Going back to the comment that AWS made recently, 200 POCs to find three to five use cases really
tells me that we are still looking for what is that aha moment of it is so clear.
It's very much like when VMware was doing VMMs
and that infrastructure, this was 15, 20 years ago,
it was an interesting technology until they came out
with vSphere, which allowed IT to have a completely
different way of doing maintenance lifecycle updates,
upgrades, which at the time was one
of those areas where you would try and find access to resources you needed and you discover
that IT had shut it down because they were upgrading maintenance and doing an OS update.
That really clear value prop, if it was there, you would have 10 POCs that turn into 10 deployments.
So there's a few signs that I look for on the consumption side,
which is, are people turning the features on?
Are those becoming a distinct part of a P&L
that's reported by the vendors that are offering it?
Or is it AI is really big and here's our overall earnings?
So they're really muted signals, but that's where I start really looking for, okay, we're
going to move from it can do everything including curing cancer to this is a use case that every
business can benefit from.
It's the next digital transformation and automation standard.
I think it's interesting that you brought up curing cancer because obviously
traditional AI visualization, image recognition, natural language processing,
recommendations engines have been around for years and they're actually providing
practical value to the point that nobody even thinks about them anymore.
But there are things in the world of AI that are very mature compared to
generative AI, and it's like somewhere we've lost the plot: yes, those use cases are thriving.
Yes. And it's fascinating. I've come across some analyst data recently where, if you look at
consumption in businesses, even in 2028 or 2030, the expectation is that the generative AI contribution is about one-third of the
total dollar value or total dollar spend.
Whereas traditional natural language processing, machine learning, deep learning, which goes
into recommendation engines, and computer vision, those are two-thirds.
And so it becomes invisible, like you mentioned, the recommendation engines.
Do we have any idea what is behind the Amazon shopping recommendations?
No, but it's AI, and it's some of those preceding technologies.
So the economic value that they're driving, because they've been out longer and their
use cases are clearer, despite them being embedded or invisible, is
going to be higher.
So the spending, however, on generative AI, that's completely in the news.
The earnings from generative AI, they have yet to catch up.
I think that it's really interesting, and despite everything, we are existing in a world
where nation states are now battling out for AI
supremacy.
There are new models that are coming out almost every day.
And I guess one question that I have for you, because I know that you talk to a lot of IT
leaders that are navigating this space, how can enterprises manage that daily maelstrom
and make smart decisions for practical applications?
Last week I was at a customer event, and this particular customer was a managed service provider.
Speaking with a lot of my peers that were in the high-tech birds-of-a-feather
group, I found a couple of interesting practices.
One of them is they usually have a Greenfield team and what they call a KTLO, Keep the Lights On, team.
The KTLO team continues running business process
and continues focusing on what are the known requirements,
the known SLAs.
The Greenfield team is out looking for opportunities
in the overall process, in the overall supply
chain to figure out are there areas where AI could really distinctly and uniquely solve
problems or find opportunities or new revenue that nothing else could find.
The other interesting practice that I found was this practice of whitelisting. And so that is where it really gets into the question around the international entities
and this generative AI space race.
What was fascinating about DeepSeek in particular is, first of all, Hugging Face took advantage
of the fact that a lot of DeepSeek was open source, a lot of the techniques were open source. They created a Hugging Face version of that model and research so that it was an alternative
to just risking all your data running through servers people at the time were uncertain about.
Is it going to servers in China?
Are they using US data?
A lot of governments shut down use of DeepSeek.
So open source really does help mitigate a lot of those concerns.
At the same time, we shouldn't be too over-indexed at the fact that we happen to live in the
US, we have a Silicon Valley we know and love.
The rest of the world, if this is such a huge revolution, if this is going to change the
way we live and work, the rest of the world is going to see that as something that they need to have
a part of.
They need to be able to invest in and benefit from.
And so there's been a couple of examples in Europe where the ChatGPT revolution came out and
essentially it was all English.
And so it's creating a barrier for people who are
non-English speakers and they started doing local language based GPT options so that people could
interact with it in their normal language. And that's really important because if you're using
a GPT, there's a lot of tips and tricks around prompt engineering that are quite frankly getting the language
right.
So you're creating a double barrier for somebody who would be benefiting from a GPT by asking
them to speak in a second language and then figure out all of the ways to get the machine
to do what you bloody well have asked it to do.
It just didn't understand.
Yeah, one of the things that I think about is
we've never had to consider which truth we're choosing
for technology, it's not something that's really come up.
We haven't really considered the risk of data
leaving an organization when using a model.
Trust and safety are really important
in terms of capabilities of any IT organization.
So I guess one question that I have for you is when you're looking at customer-facing tasks
in particular, what are you guiding customers to do in order to maintain that level of safety
and security as they ramp adoption and choose models? Yeah, that's such a great question.
There was just a blog published,
I believe it was last week by one of our fellows
and he's our lead security architect
for confidential AI at Intel.
And it talks about that in reference to agentic.
And there's two kinds of agentic AI,
one is autonomous, one is not.
The good thing about agentic AI in general is that it allows you to parse a problem out
to different functions that have models that might be more optimized.
So do you need ChatGPT to do math?
Or can you just use a calculator, for example?
So the good news is you're getting into more of a heterogeneous managed or orchestrated AI,
and that allows you to use combinations of your own internal models trained on your own data,
plus benefit from the big models hosted in the cloud. The place where it gets a little
bit more precise or needs to be more precise is related to zero trust, as well as when you're talking about autonomy.
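The idea of parsing a problem out to the best-fit function, a plain calculator for arithmetic rather than a large model, can be sketched as a tiny router. This is a minimal illustration, not any particular framework's API; the handler names and the placeholder model call are assumptions.

```python
import re

def calculator(expr: str) -> str:
    # Handle the narrow "math" task locally instead of calling a large model.
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not a pure arithmetic expression")
    return str(eval(expr))  # input restricted to arithmetic by the regex above

def call_llm(prompt: str) -> str:
    # Placeholder for a call to a cloud-hosted or on-prem model.
    return f"[model response to: {prompt!r}]"

def route(task_type: str, payload: str) -> str:
    # Dispatch each task to the most optimized handler; default to the model.
    handlers = {"math": calculator, "chat": call_llm}
    return handlers.get(task_type, call_llm)(payload)

print(route("math", "2 + 2"))         # handled locally, prints "4"
print(route("chat", "Summarize Q3"))  # falls through to the model call
```

The same dispatch shape lets you mix whitelisted internal models trained on your own data with big cloud-hosted ones.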
A lot of the chat bots, and even some of the recent ones from the largest companies, their
responses to customers were not necessarily very well optimized.
And so there is a level at which interacting with customer service does need to be under much tighter control.
And so having your own models on-prem, where you're injecting your corporate values, your corporate
mission, what you stand for, is safer than risking the brand with someone else's model
when you're not sure how it will respond in the moment.
That's I think really a practice that a lot of
IT teams are having to come up with back to that whitelisting.
They're going to whitelist models and they're going to use many different models.
Models are becoming middleware.
That's a really interesting parallel.
So far, we haven't seen any major data breaches with
a corporate AI tool, though I did predict in my predictions blog for 2025 that we
would see one. Can you add some color on how this might play out and the risk to companies?
Boy, that is, I think, one of the biggest questions right now. Going back to the customer
event last week, the team that is whitelisting AI tools for this transformation technology team,
it's a legal team. And the consequences to this company for their brand, their media and
entertainment brand, as well as protection of their IP, are serious: they have extremely popular properties with a rabid fan base, so they can't risk losing
their IP.
So it's a legal team and they're oriented and moving fast, but that legal team is trained
to think about indemnification.
They're trained to think about protecting their IP.
They're trained to think about the need to be able to get underwriting, which is one of the reasons that many companies paid for
licenses to Linux as opposed to just downloading open source projects themselves. We are just at
the very initial stages of companies really having to think through that because the mindset so far
has been if you're not here now, you're late and you're
going to lose out, your competitors are going to eat your lunch.
So most companies will go for the emotional, I need to protect that business, I need to
have a business to protect.
We're going to get into the finer grain nuances of what does it mean to protect your business
with AI being implemented
from anything that could go wrong?
I don't think the practices are there yet.
You know, it's interesting when you talk about it,
you're thinking about it
from a centralized IT perspective.
I think back to the early days of cloud computing,
when IT organizations realized
that lines of business all over the company
were adopting AWS instances
and running different applications that were not under IT control.
And of course they did because it was so easy and it solved a business need.
So are we maybe in a similar situation?
You know, when I think about who's going to use Gen AI, I think about the marketing teams.
I think about customer service organizations.
Are they going to think, oh, I need to actually talk to my IT department about how I feed data
into this?
Or are they just going to, you know, it's a little wild west, I think is the thought.
I agree.
I know a lot of IT teams have basically put this on autopilot.
If they see you going to a specific website address, you get a little disclaimer warning
that pops up: be really careful what you share.
There are a lot of people that will on the weekend experiment using their gaming computers
and they will do work on it on their own, or they'll outsource it to firms that are
using those tools so they benefit from the fact that those tools exist
and work can get done a lot faster,
but they're not violating any confidentiality.
Because when you're dealing with a product launch,
before the launch, everything is on lockdown.
So how do you generate those marketing materials
and take advantage of those tools?
I think there's a lot of teams that understand
you have to walk that balance very carefully. I think that this is going to be so interesting
to watch play out. And I think that we're going to be going back into, I don't know when that was,
2016, 2017, when IT organizations started doing audits of how many cloud instances were running without IT control. Similar modeling of, hey, are you actually using this?
And I love your parallel with hiring external agencies to do that because I think that does
happen.
But beyond that, what other risks are you thinking about as we navigate from this vision
of potential to broad proliferation around practical use cases?
I do think that there are some really interesting studies around how using AI to develop software
changes how people are programming.
So there's a lack of traceability potentially that gets injected, depending on how senior
the software developers are. The allure of anybody being able to be their own coder is brilliant when you're looking at things
like Perplexity or Claude and those models that are more optimized for that.
What's been fascinating though is there was recently a founder who had their cloud costs
20x what they expected because they had used AI to create a code
fragment to develop something, an application that was calling cloud-hosted LLM models.
There was a memory leak, and what they thought was going to be a $2,000
bill was a $200,000 bill.
And this was a sole proprietor.
And we know, yeah, Allison, how many times did we hear stories of IT going, oh my gosh,
what happened to make my cloud bill spike this month?
Why is it so expensive?
So for auditing those cloud-hosted models, I think they're just going to end up using
the same practices, and mistakes will be made, and agreements will be signed that take into account the
accidental overspend but also bring you closer to that cloud service provider's business,
using the models as the way in.
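One simple guardrail against that kind of accidental overspend is a hard budget cap checked before every metered model call. This is a hypothetical sketch; the class name, cap, and pricing figures are illustrative assumptions, not any provider's API.

```python
class BudgetGuard:
    """Reject metered LLM API calls once a monthly dollar cap would be exceeded."""

    def __init__(self, monthly_cap_usd: float, price_per_million: float):
        self.cap = monthly_cap_usd
        self.price = price_per_million
        self.spent = 0.0

    def charge(self, tokens: int) -> float:
        # Check the projected spend *before* recording it, so a runaway
        # loop (e.g. a leak retrying calls) fails fast instead of billing.
        cost = tokens / 1_000_000 * self.price
        if self.spent + cost > self.cap:
            raise RuntimeError(
                f"budget exceeded: ${self.spent + cost:,.2f} > ${self.cap:,.2f}"
            )
        self.spent += cost
        return cost

# Illustrative numbers: a $2,000/month cap at $7 per million tokens.
guard = BudgetGuard(monthly_cap_usd=2_000.0, price_per_million=7.0)
guard.charge(100_000_000)  # 100M tokens -> $700, allowed
```

A third 100M-token call in the same month would push spend past the cap and raise instead of silently running up the bill.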
One of the things that I'm wondering is as we navigate through this, what's the role
of the industry to help enterprises through this and what would you like to see from the broader industry here?
There seems to be right now an extreme bifurcation between what is affordable for big business
and what is affordable for sole proprietorships and privately held companies. I think it was 62% of all jobs in the US that
are at companies with fewer than 1,000 people. And so when you look at that employment base, especially for customer-facing businesses
like laundromats or mailbox companies, are they going to spend the $200 a month for the pro version of AI
to get all those features or are they gonna have to choose something else? So I do
think that if it is a revolution that benefits everybody, there needs to be a
way that it benefits everybody as the investments are made to advance the capabilities.
I know that we've probably piqued folks' interest about our topic today.
It is a little bit of the zeitgeist of the industry right now.
If folks want to keep talking to you about this, Lynn, and engaging with your team,
where should they go for more information?
So I have a very active LinkedIn feed.
So following me on LinkedIn would be one quick and easy way.
There's also a community blog at intel.com,
so community.intel.com, where you can take a look
at the most recent blog on agentic AI
and confidential computing.
Intel's constantly staying ahead of that.
The other thing that I think is really fun,
our IT department isn't just implementing inside
and then saying nothing.
There's five to 10 different use cases and case studies,
white papers that are posted on intel.com
for Intel IT's journey in deploying different kinds of AI
from computer vision,
all the way through natural language processing,
in our manufacturing facilities.
And so all of that could be really helpful as well.
Awesome. Thank you so much for your time today.
I always learn something when I talk to you, Lynn, and today was no exception.
Can't wait to have you back.
Thanks, Allison. I appreciate it. It's been fun.
Thanks for joining the Tech Arena.
Subscribe and engage at our website, thetecharena.net.
All content is copyrighted by the Tech Arena.