Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 09x07: Achieving Business Outcomes using Agentic AI with Luke Norris of Kamiwaza AI
Episode Date: November 10, 2025
Although there's a lot of skepticism around generative AI, companies are finding incredible practical uses for these new capabilities. This episode of Utilizing Tech brings Luke Norris of Kamiwaza... to discuss business outcomes that can be achieved by AI applications with hosts Guy Currier and Stephen Foskett. Agentic AI is an autonomous system that has a schedule, an action, or an interface to interact with it. Apps should represent the data rather than vice versa, so a dashboard should reconfigure and recompute to serve the needs of a user. Kamiwaza recently announced a capability to make applications accessible to people with disabilities, and this represents the kind of real-world benefit that can be delivered by AI. They previously worked with NOAA to make historic data more accessible to modern users.
Luke Norris, CEO and Founder of Kamiwaza AI
Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Frederic Van Haren, Founder and CTO of HighFens, Inc.; Guy Currier, Chief Analyst at Visible Impact, The Futurum Group.
For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
Although there's lots of skepticism around generative AI, companies are finding incredible practical uses for these new capabilities.
This episode of Utilizing Tech brings Luke Norris from Kamiwaza AI to discuss business outcomes that can be achieved with AI, along with hosts Guy Currier and myself, Stephen Foskett.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group.
This brand new season focuses on practical applications for agentic AI and other related innovations in practical artificial intelligence.
I'm your host, Stephen Foskett, organizer of the Tech Field Day events series.
And joining me this week as a co-host is my friend Guy Currier.
Guy, welcome to the show.
Thanks, Stephen.
It's great to be here.
And Guy, you know, you and I, we work together within the Futurum Group.
And one of the things I think that we both share in terms of sort of philosophy is
I guess in a way, a skepticism of gen AI, but an enthusiasm for the applications that we can
foresee for a lot of the technology that's coming out of it.
Yeah, it's really funny, right?
I think, and I've continued to think for two and a half, three years now, that how generative
AI in particular works is very commonly misunderstood, even amongst the engineers building it.
And I don't mean that in a condescending way.
I think its development is literally to a design point or an engineering point to appear truthful, to appear correct, to appear, you know, to give you the outcome you're looking for rather than actually providing you the outcome you're looking for.
It's just inherent to it.
And at the same time, it is just extraordinarily useful.
and so obviously useful and helpful that I have that combination.
I don't know can you combine skepticism and enthusiasm?
I think we all are using it constantly now.
We all see so much potential in it.
Agentic AI is a further acceleration of multiple modes and forms of AI,
not all of which have that same drawback,
but now accelerating not just the capability,
of organizations and people using AI agents,
but also the potential challenges, issues, and so forth.
So it's really quite, let's say, stimulating.
Well, I think that the thing that we need to keep in mind is that it's not so much,
I mean, it's sort of new toy syndrome.
Like, we've got something cool.
It does something cool.
It's fun to play with.
Let's play with it.
But guess what?
Eventually, you know, we have to actually find a use for that new
tool, unless it's like my garage, in which case you can just get new tools. But eventually,
it would be good to get new uses for these tools and to find practical applications for them.
And so that's why I'm really excited to welcome to the podcast this week, somebody who is
equally focused on what is the outcome? What are we going to do with this? Luke Norris from
Kamiwaza AI. Luke has presented at AI Field Day
a couple of times. We've talked to you a few times on podcasts and so on. It's always a pleasure
to have you on here. Welcome to the show. Thanks for having me, and I'm hoping to turn some of
that skepticism into some reality on our little chat today. Yeah, and well, I think that the thing
that I love about the way that you approach AI, and this has come through loud and clear in the
AI field day sessions that you've done, is that you are focused as a company, and I guess probably
as a person, too, considering it came from your mind. I mean, you're really focused on what is the
outcome. What is the business goal? What is the result of this? Not just how can we get this
thing running, but how can we actually make practical use of it, might I say, utilizing this tech? What's your
approach? Do I have it right? You completely have it right. We actually, from the first day,
sort of tried to coin a concept of what we call outcome support.
As long as the enterprise is using us for inference and security, and we're connected to the data
and orchestration engine, our enterprises can actually submit an outcome support ticket for a business
outcome that they're trying to achieve with generative AI.
And we built that into our product because we really wanted to move this from shelfware
to actual production as fast as possible in the Fortune 500.
So I'm going to push you a little bit more.
How do you actually do that?
That sounds like a lot of words.
What are you really doing?
So we have a predefined framework on development.
Let's just put it that way.
We've built out and stubbed out our own back-end process
and our own front-end capabilities.
Further, the orchestration engine,
and I'm waving my hands because most people just say orchestration,
et cetera, is the ability to connect to all the data in the enterprise
and all of its formats.
This is literal scans of PDFs with chicken scratch on it
all the way to connectors to SAP.
Once you then deploy a model,
and typically a multimodal model on that stack,
where it now could connect to all of the data securely
using the enterprise security, identity access, and all that.
Now we can work with lines of business owners.
Literally, we ask them for one-page detail
on an outcome they're trying to achieve.
We then convert that one page into effectively a PRD
and effectively a set of system prompts
that because we have that pre-stubbed sort of architecture,
we can then have an app published within about five to ten minutes,
which is just a conglomerate of agents with a GUI put on it.
And now we can actually work with that line of business owner through that outcome,
literally doing QA testing, effectively livestreamed.
Within about two days of that,
we typically have an app that's ready to be QA'd in a production environment.
This would be an operational outcome.
Yeah.
So something along the lines of, you know, automate this part of quote-to-cash for this portfolio
or something along those lines.
Yeah, I think at the simple level, for sure.
No, it's not.
It's not double my revenue next month.
That's not that kind of business outcome.
Yeah, that might be a little hard at this moment, but not far off.
But yeah, case in point, we worked with a global organization just this morning that had a
separate ERP system and data structure in Europe, a separate ERP system in America.
The data sets fall under GDPR and HIPAA regs, so they couldn't combine them.
So they have two separate ERPs.
Now we have a single agent running in both of the countries.
It can actually connect into the ERP systems.
Then it sends the summaries back to another agent that built a dashboard in real time of those action flows.
And that was something that literally we accomplished almost on the phone call, and it should be in production in the next week.
That's really cool.
It's what I meant by operational, but I suppose like, you know, semantically speaking, maybe that's not the right word.
But you are, I mean, every business has to do this.
What are they trying to accomplish?
And then how are they going to get there?
And a great deal of that, maybe every piece of the how are they going to get there is available to something like an agentic AI system.
And Kamiwaza is taking that as practically as an SLA.
That's interesting.
So to really accomplish this and to demystify it, the fact that we connect to all the data, we actually build a full entity and ontology scheme of the entire enterprise's data structure.
That means all of its structured data and unstructured data.
That entity and ontology is always running as well.
So this does take large GPU farms,
typically three servers of at least eight GPUs,
and it's scanning all changes, adds, and moves of the data
for once again all your structured systems
and all your unstructured data.
So now the agent actually understands
the lingua franca of the enterprise
via each entity.
So every role you have in the enterprise,
every system you have in the enterprise,
and every execution you're trying to achieve in the enterprise.
And once you have that foundation laid, net new agents, net new connectivity structures, net new tools are very easy for the AI to understand because it now has the literal lingua franca of the enterprise.
So when you say, I need to pull a report for sales, it knows to use the MCP call for the ERP system to actually pull that in.
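The entity-to-tool routing idea Luke describes can be sketched in a few lines. Everything below (the class names, the sample tools, the overlap scoring) is illustrative on my part, not Kamiwaza's actual API:

```python
# Hypothetical sketch: route a natural-language request to the right
# enterprise tool via an entity registry built from the ontology scan.

from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    system: str                                  # e.g. "ERP", "HRIS"
    entities: set = field(default_factory=set)   # entities it can serve

@dataclass
class EntityRegistry:
    """Maps the enterprise's lingua franca (roles, systems, tasks) to tools."""
    tools: list = field(default_factory=list)

    def register(self, tool: Tool) -> None:
        self.tools.append(tool)

    def route(self, request_entities: set):
        # Pick the tool whose known entities best cover the request.
        best, best_score = None, 0
        for tool in self.tools:
            score = len(tool.entities & request_entities)
            if score > best_score:
                best, best_score = tool, score
        return best

registry = EntityRegistry()
registry.register(Tool("erp_mcp_call", "ERP", {"sales", "orders", "invoices"}))
registry.register(Tool("hr_api", "HRIS", {"employees", "payroll"}))

# "Pull a report for sales" -> extracted entities -> the ERP's MCP tool
tool = registry.route({"sales", "report"})
print(tool.name)  # erp_mcp_call
```

The point of the sketch is only that once every request is expressed in shared entities, tool selection becomes a lookup rather than bespoke integration work.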
It's interesting because of the way that you're achieving it. Now, one of the challenges with AI applications is basically, once you've built the model, how do you give it data, how do you give it access to
that data, and how do you give it context, and how do you chain that context together from agent to agent?
What you're describing sounds a little bit like where I'm hearing people want to go with
like model context protocol and stuff like that. Are you using that sort of technology at this
point? So MCP would typically be a function we would use from a tool-calling perspective more
than anything. But that same tool call could be an API. It could be a computer use agent that actually
turns on and goes to a legacy system that doesn't even have an API and actually literally
is inputting or scraping the data and pulling that back in. So MCP is literally a tool in the tool bag.
We actually call it a tool garden in our system. We have our app gardens where we publish
repeatable agents and we have a tool garden where we publish repeatable tools, which are typically
the schema or the connectivity capability for your off-the-shelf enterprise systems, SAP, so on
and so forth. So kind of like a Zapier kind of thing,
where you basically have a whole bunch of different external applications that you can call and work with, you know, in the garden.
I like the tool garden.
That's cute.
I'm not sure tools grow in gardens, but I wish they did.
Toolshed, I apologize.
A garden toolshed.
But actually, Zapier.
I like tool garden because I like the idea of having a tool garden for garden tools that grows garden tools.
Well, they could go into garden shed.
Yeah, exactly.
Actually, Zapier and, like, Boomi would be a tool in the actual tool garden that the agent would realize it can actually call out to to make SaaS-level connectivity queries.
Because everything from our perspective is from within the enterprise, and then those would be tools that you would use to connect outside the enterprise.
So if you'll both forgive me, sometimes I feel like we are so into and, you know, exposed to
the lingo and the developments and everything, that there could be difficulty for, you know,
viewers, listeners, to get oriented. And I actually think that agentic itself is maybe a little
difficult to get oriented around even, you know, when you've been listening to it, talking about
and reading about it. I like to say snarkily, you know, that an agentic AI is just an agent
that uses AI as one of its tools.
You know, it's an agent.
An agent, we've had agents forever.
But I'm trying to wonder if there's a better way to look at it
based on what you're talking about, Luke,
because, I mean, an agent can look like a chatbot also.
Like, you know, that's your interface.
And I'm reminded of the gauntlet that Satya Nadella laid down a bunch of months ago when he was saying there's not going to be software anymore.
He was repeating what he was hearing.
You know, I'm not saying he originated that idea.
You're sort of talking about the same thing, Luke, and it suddenly occurs to me,
hey, maybe an agent is what you use now or in the future instead of SaaS, instead of software or what have you.
Like you were talking about the death of spreadsheets, or we were talking about the death of spreadsheets.
The death of spreadsheets is you're using the spreadsheet to accomplish something.
If you just have an agent instead, it's going to use spreadsheets, it's going to use whatever it's going to use to accomplish the same thing.
I keep a spreadsheet that summarizes costs for certain projects. And I can just use the
agent and say, oh, here's a new cost. Here's another cost. Now, hey, report back to me on this
particular project for its costs. Is that maybe a better, a simpler way, even though it took
me forever to explain, to say what agentic AI is? It's the replacement for software.
I think you nailed it if you summarize it there at the end. It's almost a death of software. And
what it is in our mind is an autonomous system that typically has three levels of
execution. The first is like a cron, so something that is running at some particular time
that kicks it off. The second is an action that starts it. And the third is literally the chat
interface or some sort of human interface, whether it's speech or text or chat. And if you look at
it from that paradigm, you can limitlessly pretty much do anything in the workforce with those
agents.
Do you always need the cron? What's the cron for?
So cron's great for an event that kicks off every morning at 8 o'clock and repopulates
all of the services and structures, makes new dashboards available. So when the CEO sits down,
he has an updated dashboard of all of his key metrics, all of the stuff that went on in the
company yesterday. And that'd be a great Cron job that you would set to kick off at a particular
time. But it could also theoretically be something that is kicked off programmatically, right?
I mean, you know, and I think that when people think of agentic AI, they often think more programmatically than based on, you know, calendar time.
It's some action that happened, whether it's an autonomous action from an application, you know, or whether it's a user's interaction.
And that's what I meant by it.
So once again, typically like an impulse, and then an action, and then a user interaction.
Those are the three things that we need.
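The three trigger types the conversation keeps returning to (scheduled, event-driven, and conversational) can be sketched as a minimal pattern. The names here are my own illustration, not the product's API:

```python
# Minimal sketch of an agent with the three entry points described:
# a cron-style schedule, an action/event trigger, and a human interface.

import sched
import time
from typing import Callable

class Agent:
    def __init__(self, name: str, task: Callable[[str], str]):
        self.name, self.task = name, task

    # 1. Cron-style trigger: run on a schedule.
    def run_scheduled(self, scheduler: sched.scheduler, delay_s: float) -> None:
        scheduler.enter(delay_s, 1, self.task, ("scheduled run",))

    # 2. Action trigger: some upstream event kicks the agent off.
    def on_event(self, event: str) -> str:
        return self.task(f"event: {event}")

    # 3. Human interface: chat, speech, or text.
    def chat(self, message: str) -> str:
        return self.task(f"user said: {message}")

# A dashboard-refresh agent, as in the CEO-dashboard example.
refresh = Agent("dashboard", lambda ctx: f"dashboard rebuilt ({ctx})")

print(refresh.on_event("new ERP summary"))  # dashboard rebuilt (event: new ERP summary)
print(refresh.chat("show Q3 costs"))        # dashboard rebuilt (user said: show Q3 costs)
```

The same task body serves all three entry points; which trigger an agent actually uses is a deployment choice, which is why, as the discussion notes, one of the three may go unused at first.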
Yeah, you've built sort of an idealized, fully generalized architecture for an AI agent,
which is to say, think about these three parts, and maybe for now, one of the parts
is not even going to be used. But, you know, it might be needed later. I mean, I really like
the idea of, you know, thinking about dashboards and rebuilding and that sort of thing,
because now the agent is acting like a human agent would to some degree.
I mean, it's going to act how it's designed to act.
But if part of what it's doing is, well, there's routine things I need to do every day,
now I'm personifying the agent, sorry, Stephen.
But, you know, yeah, that's really good insight.
So we see the apps are now so fungible that the app should represent the data, not the other way around.
For the last 25 years, the enterprise has literally morphed data to be represented in an app.
And now it's the other way around.
The apps can literally morph themselves near instantly to the way the data is or the data needs to be presented.
So a dashboard is also really interesting because you can actually then measure how they're interacting with the dashboard and have it re-dashboard itself, recompute itself, represent itself, almost in real time.
So you're getting the most activity out of it as well.
And once you start getting, like I said, the enterprise to think about that fungibility, the fact that the data itself can just be processed in any way, shape, or form, the outcomes just start to flow.
And it takes, you know, one or two months to get that first one going.
Then the third month, you get another one.
And next thing you know, you're at 40 or 50, and they're just self-generating them.
It's amazing to be a part of.
Yeah, that's pretty cool.
And anyone who's used Salesforce dreams of the day that the dashboard is ever up to date.
Luke, let's talk a little bit about sort of the result of this.
Now, one of the cool things about Kamiwaza, too, is that you guys are actually out there with clients building real applications that address real use cases.
And actually, in the news was an announcement that y'all worked with a partner, HPE, to deliver an application that could effectively make websites
and business applications compliant with the needs of people with disabilities.
That's something that I really kind of keyed into
because that's a community that I work with closely outside of work,
and I care very much about their needs.
And I pointed out actually on Tech Strong Gang just last week
that that's important because even though making things accessible to people with disabilities
may seem sort of like a side thing,
that's the entire ballgame for those people.
Like, they literally cannot do anything if it's not accessible to them.
And so it seems like a small thing until you need it.
Absolutely.
It's a really cool notion, number one, because you're doing good, but number two, because
you're using this technology for something that actually benefits people instead of just
sort of waving your hands and saying there's going to be some use.
So talk to me more about some of these cool use cases you're starting to see.
So every quarter, we actually try to accomplish an AI for good use case.
If you actually go back to the first quarter of this year,
we worked with Department of Homeland Security,
and we unlocked 90 years of data that was in all generic formats
that literally couldn't be processed anymore.
We went to all of the siloed data centers of colleges that had held this.
It was in GEMPAK format, which is a very legacy format.
We reprocessed all of it, put it into Parquet,
and then we actually added it to DHS's event management and crisis management planning.
So now they can get forecasts from NOAA that are future predictive,
but they can look back over the 90 years of everything that's happened with that data from
pressure systems, et cetera, and do a cross correlation on how that's actually going to affect
that area.
So now they could say this zip code, if there's going to be this level of pressure and this
weather, here's how it actually affected it the last six times it hit it over the 90 years.
And now they can put a real crisis management plan, all of it automated, forth to our crisis planners out there. Then in the second quarter, we worked with HPE for 508
compliance. As you said, it's sort of a major issue for all states, municipalities, colleges,
anybody that also receives federal funds is going to be compelled to get their websites 508
compliant. And if you think about it, a website has thousands of pictures. It has thousands of PDFs.
And those pictures and PDFs, if you have a visual impairment, you can't actually interact with
them. You don't know what's going on on the particular website.
So we wrote a visual agent and a processing agent that scans the entire website.
Every time it sees one of those pictures, it downloads it, puts it through a visual model.
Then it applies the metadata structure into the actual PDF, JPEG, etc.
And then re-uploads it with human in the loop back to the website.
So now when somebody with visual impairment is actually utilizing the website with their tool sets,
it actually will now explain what is in those pictures, what is in those PDFs.
It explains the actual HTML code, because we also re-embed all the HTML code.
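The scan-describe-reattach loop Luke walks through could be sketched roughly like this. The `describe_image` function and the class names are stand-ins I've invented for illustration, not the actual product code:

```python
# Hedged sketch of the accessibility pipeline: crawl site assets, send
# each untagged image through a vision model, attach the generated
# description as alt text, and gate re-upload on human review.

from dataclasses import dataclass

@dataclass
class PageAsset:
    url: str
    alt_text: str = ""
    approved: bool = False

def describe_image(url: str) -> str:
    """Placeholder for a vision-language model (VLM) call."""
    return f"description of {url}"

def make_accessible(assets, review):
    for asset in assets:
        if not asset.alt_text:                 # only untagged media
            asset.alt_text = describe_image(asset.url)
            asset.approved = review(asset)     # human in the loop
    return [a for a in assets if a.approved]

assets = [PageAsset("map.jpg"), PageAsset("form.pdf")]
done = make_accessible(assets, review=lambda a: True)
print(len(done))  # 2
```

Because the loop skips assets that already carry alt text, re-running it weekly or monthly, as described, only touches new or changed media, which is what makes ongoing maintenance cheap.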
That effort for a very large municipality
is weeks and weeks of time, per page.
Yeah, I was going to say weeks and weeks.
I think you're being optimistic there, my friend.
Yes, and like City of Los Angeles,
a couple million pages, actually,
when you go through how in depth each one of those regulations are,
county code systems, et cetera.
Our AI agents literally run on private hardware,
that's why HPE's there,
because we're talking billions, if not trillions of token generation
to actually do this with the visual models, et cetera. And we can scan that typically in a week or two and have all of it
ADA compliant and then upload it and more importantly maintain it, because now you can
have that agent go back over it and back over it every week or every month when you have
the cycles, and it can reprocess that and remaintain the website. So that was our AI for Good
project. That once again gets put into our app garden, and that's now available for
anybody that buys the Kamiwaza orchestration engine.
Which is just incredible because, again, this is something where, yes, you're helping the businesses, but you're also helping real people who have real needs.
And that is just incredible.
I imagine that it's not all sunshine and rainbows.
I mean, I imagine that there are lots of other exciting businessy benefits that you can do here.
What other kind of real world applications can you talk about?
Because, again, too many people think that this is just chatbots and give me a recipe or, you know, tell me, you know, write my term paper
for me. Let's talk about real uses.
Yeah, I mean, there are public use cases on our website that we're easily able to talk about
and go through. One of them I really liked is we worked with a company called
HealthBus. And as large corporations look to move to sort of single-payer, private payer,
versus using insurance firms, to become their own, it's a huge process of uploading a lot
of the documents about all the different insurance programs they have, 401K, et cetera.
Then that would literally be viewed by a human. They would go through, make sure all the data's
there, and then they would do their back-end process to actually quote that out and put that forward.
Now imagine you can upload a document in any format.
It could either be a PDF that has metadata, it can just be pictures, and they can immediately
scan that with an agent, understand that they've received all the data they need.
If not, they in real time let the user know that that was the wrong document or they didn't
upload enough of the information that was there, condensing the actual workflow from literally
months and months to now minutes and hours.
And once they get all that data, they can immediately
then say we have enough data, here's the pre-process, and here's an early quote on what that would look like, and they can then follow it back up.
We talked to literally, I believe, in that particular use case, you're looking at many years of effort for the amount of people that would actually be involved in that, condensed to minutes of actual agents running.
Thinking about the model again, can you take that last example? That doesn't sound like one, well, can you take that last example and sort of map its functionality to the three?
And it'll be a nice reminder of what the three layers are,
since I forgot one of them already.
Yeah.
So as somebody uploads, that's a programmatic kickoff of an agent
because there's now an actual action that sort of kicks off.
And now you have a visual agent and an actual learning agent,
typically a coding agent, that now parses through the documents
and the various formats and services they've uploaded.
And if it's a picture or has pictures within it,
then the VLM, the visual language model,
can actually do the processing of it.
Further, it then moves to typically a user interaction
piece because you have the user that's going back and forth with it. So now the user can also
interact with the agent, saying it got the right documents or it didn't receive the right documents.
Last, there's typically a flow in there that has that cron job tech piece. And it's typically
the next day it kicks off to the salesperson for HealthBus, saying this user had uploaded it.
We got them an instant quote, now follow up with them. And we actually worked that whole workflow
right through there. What kind of management, maintenance? I mean, I'm applying my software brain to
this agentic world.
So I'm thinking, you know, support, reliability, availability, all that stuff.
Is that just stupid at this point?
Or is that worth thinking about?
It's been fully thought through.
So we start with a cluster of three servers.
We have our own database structure on the back end.
That spans those three servers.
And that's highly available.
Then the apps, when they're actually deployed (once again, an app is just an agent
with a GUI), are also deployed right into that cluster.
You have the GPUs that are used for the agents,
and then you have the actual apps that use the CPU and the standard memory.
And you can just scale that cluster ad infinitum.
We use Ray on the back end,
which is a near-infinitely scalable sort of clustering capability.
And all of this is something that basically you are, you know,
delivering for customers to help them really, you know,
put this stuff into production because that's another thing that we're hearing.
You know, there was a recent study that actually the Futurum analysts did that showed just how many AI projects are not actually coming to fruition.
But you're actually going to make sure that your customers are realizing these as well, right?
Yeah. So once again, once our orchestration engine is fully connected to the data, the security, and the processing, we provide that outcome-based support, typically one per month per enterprise.
That's about as much as they could digest up front, to be honest.
And it is to move that from sort of POC to production.
To be frank, it's our number one goal with all of our customers.
We embed a virtual forward-deployed engineer.
We don't actually send them out there.
But we get them tied in with a forward-deployed engineer,
whose sole job is to get that first app into production.
I'd seen all those reports about the level of failure.
I'll tell you we're batting 100%.
And I'm not saying that because of the technology.
I'm saying that because we put the effort in with the enterprises to achieve that.
And once you've achieved that,
those enterprises move to what we call that fifth industrial revolution,
where they can get 25 to 30 percent of their entire corporation automated.
Just to be real clear here.
So we have a platform.
Kamiwasa has a platform.
And your customers can also use it themselves for their own development,
but you have an outcome-based support model,
and they could just roll with that and have one outcome per month, so to speak,
while maybe also figuring out how to add their own outcome-based work.
Yeah, we have hundreds of those agents in our app garden,
and you can literally just deploy them right onto our platform,
and you're 90% of the way there.
Yeah, and keep in mind, the customers can fully program this.
They can get in and do everything.
We have lightweight programming.
We have no code agents as well for even business line of people
to actually start the journey.
Typically in the enterprise, they have their own developers. Once they get that first outcome, that first production use case, their developers get
unleashed. And then we're really the backstop on the support for them. And they just start rolling
them out faster and faster and faster. It is interesting to talk to people who are doing this
stuff productively. That's kind of what we're trying to do this season. And I really appreciate
the, yeah, this sort of, I don't know, real world product-y kind of, you know, let's get this thing done,
roll up our sleeves approach, instead of, like I said, a lot of the hand-waving that we're hearing
about chatbots and everything, I guess to finish up sort of, do you have a preview of what
you're doing next? What can we expect from Kamiwaza's future benefiting humanity with
AI? So we do have some big announcements that will be happening in November at HPE Discover,
that's their Barcelona one. We'll be rolling out some products. For us, products are just agents
that run on our platform. We're also, I'm doing a speech in December at Fort Lewis. It's a big
college here at the Four Corners in Colorado. And we're going to be working with the tribal nations
to actually help them build sovereign AI. I have a personal goal of helping them sort of continue
their language. And I believe AI is going to be really sort of instrumental in that because it's
not about just the language. It's not about the sound words. It's the full lingua franca, the ability
to have an understanding of the connectivity of it. And AI is at the point where it can really help them
with that. And then also just build out, you know, what I believe is effectively a smart nation,
their ability to have their own sovereignty, but have all of this capability running on their
local land. It's something that we're passionate about. And we're hoping to start to achieve that
with them in Q4 as one of our AI for good projects. I love that idea. That's so cool. And it's
one of those things that AI is uniquely suited to. I mean, you know, languages, you know, localization,
as we talked about, it's very cool. Well, thank you so much. Before,
we go, where can people who are interested in this learn more about Kamiwaza and where can
they connect with you personally?
You can connect with me personally on LinkedIn. I'm very active on it. Of course, you can go to
our website, kamiwaza.ai. And then like I said, we will be at HPE Discover in
Barcelona and pretty much every other major technical conference going on early next year.
I imagine so. How about you, Guy, where are we going to see you next?
Well, let's see, QCon is coming up. I always go there, and the Supercomputing Conference as well in St. Louis. So you'll be able to see me on LinkedIn. I usually post there as well as on Bluesky, just my thoughts, my quick reactions. And at futurumgroup.com, you might see some published reports about that. Also, you can look at my engagement at Cloud Field Day just about a week ago.
Well, thank you so much for joining us, both of you.
And those of you listening, thank you for listening to the Utilizing Tech podcast.
You can find this in your favorite podcast application.
Just look for Utilizing Tech as well as on YouTube.
If you enjoyed this discussion, please give us a rating and a nice review and maybe drop us a line.
We'd love to hear from you.
This podcast was brought to you by Tech Field Day, which is part of the Futurum Group.
For more show notes and more episodes, head over to our dedicated website, which is UtilizingTech.com.
or find us on X/Twitter, Bluesky, and Mastodon at Utilizing Tech.
Thanks for listening, and we will catch you next week.
