In The Arena by TechArena - From AI Infra: AI Strategy, Security, & Human Impact with Daniel Wu
Episode Date: October 29, 2025. Direct from AI Infra 2025, AI Expert & Author Daniel Wu shares how organizations build trustworthy systems—bridging academia and industry with governance and security for lasting impact....
Transcript
Welcome to Tech Arena, featuring authentic discussions between tech's leading innovators and our host, Allison Klein.
Now, let's step into the arena.
Welcome to the arena. My name's Allison Klein, and we are recording at AI Infra. This is a Data Insights episode, and that means Janice Storoski is here with me. Welcome, Janice.
Thank you, Allison. It's great to be here.
So, Janice, all week we're going to be doing recordings, and they're going to be about AI and what's happening in innovation in AI. We're kicking off with a fantastic episode. Tell me who is with us.
I am very excited to meet with one of the industry experts.
We have Daniel Wu with us.
Welcome to the show, Daniel.
Thank you for having me.
Yes. Daniel is a lot of things in AI, but really focused on strategy and strategic AI leadership at Stanford University.
Do you want to comment a little bit more on some of the other things you're working on?
Sure.
Thank you again for inviting me to this event.
I'm an AI executive, and also an educator and a book author.
I'm on the front lines of technology, and for the last 10 years I've focused specifically on enterprise AI strategy, not just adopting but also scaling AI.
I'm also very passionate about bridging the gap between academia and industry, so I've been on the core staff for Stanford's AI Professional Program since 2019.
I've also co-authored books on the latest AI topics, agentic AI and generative AI security, to help practitioners navigate this ever-evolving area.
And you were speaking at the AI Infra Summit this week. You've been on Tech Arena before. I had such an awesome first conversation with you. Tell me a little bit about why you've chosen to focus on artificial intelligence so deeply.
Yeah.
So my background in tech actually led me to focus on AI over a decade ago, and I truly believe in the transformative power of AI. As a technologist and engineer by training, I'm fascinated by all the awesome capabilities that have been enabled by this technology.
But as a leader, leading enterprise technology organizations, I have another focus. And that is specifically around the impact brought about by AI technology.
And I'm really passionate about channeling this powerful technology to better humanity. I'm especially concerned about the lack of understanding of what this technology can do and all the possible impact it can bring upon humanity, because of its power and scale.
So I've dedicated myself to, and am particularly passionate about, building robust frameworks and helping move from just building powerful solutions to building trustworthy systems.
And that's why I'm here.
So from your vantage point, how has data management advanced from a three-tiered hierarchy to an AI-fueled distributed data pipeline?
That's an awesome question, because it basically touches on the transformative aspect of AI technology.
When we look at traditional data storage, like the three-tier storage model where you have hot, warm, and cold, I like to think about it as a library. You have this frequently accessed data, like the materials in your reference room; these usually incur high costs, but they are frequently used and usually the latest information you have.
Whereas in the library, you also have the main stacks, where you have less-accessed information and books.
And finally, you might store certain books that are rarely touched in archives that are even off-site; these are cheaper, but farther away.
So this kind of system works well in the world of structured data, where the goal of the three-tier structure is really to optimize the cost of data storage and access.
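As an editor's aside: the hot/warm/cold policy Daniel describes can be sketched roughly in Python. The tier names come from the conversation; the thresholds, field names, and classification rule are illustrative assumptions, not any production policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataObject:
    name: str
    last_access: datetime
    accesses_per_day: float

def assign_tier(obj: DataObject, now: datetime) -> str:
    """Classify data into hot/warm/cold storage tiers.

    Thresholds are illustrative; real systems tune these to their
    own cost and latency targets.
    """
    age = now - obj.last_access
    if age < timedelta(days=7) and obj.accesses_per_day >= 1:
        return "hot"    # fast, expensive storage (the 'reference room')
    if age < timedelta(days=90):
        return "warm"   # the 'main stacks'
    return "cold"       # cheap archival storage (the 'off-site archive')

now = datetime(2025, 10, 1)
report = DataObject("q3_report", now - timedelta(days=2), 5.0)
print(assign_tier(report, now))  # hot
```

Note the policy is static: an object only changes tier when some batch process re-runs `assign_tier`, which is exactly the manual tier migration Daniel contrasts with a more dynamic design.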
But it is also very slow and siloed, and it does not meet today's AI requirements for scalability and speed.
I think it needs to move, and we're already seeing it start shifting, from that very clearly defined three-tier approach to a system that looks more like a biological nervous system, I'd say.
So instead of having this library system,
you now have this very dynamic, intelligent system.
What we care about is really not where the data rests and how much it costs to store it. Rather, the focus has shifted to how the data moves, what it can do in motion, and how you connect all these different data sources together.
So that's more like a nervous system, where information flows freely and you get whatever information you need, rather than classifying data into static three tiers and putting it in those places.
A benefit is also that in the three-tier approach, moving from one tier to another is usually a very manual process, not automatic. Shifting to more of a biological nervous system design allows your data to flow freely and lets you leverage the strengths of all the different types of data together. So I think to empower today's AI, we really need to rethink data storage as well as data movement.
I love those analogies. They really brought it to life for me, and I talk about this a lot, but that just gave me some new insight. I guess
the next question would be, you've painted a wonderful picture of the historic state and the future
state, how are organizations grappling with the migration of thinking about data from that
pyramid standpoint into this new model? And when you think about it from the standpoint of data organization, is it different when they're considering model training or fine-tuning of different AI applications?
Yeah, that's a very good question. I think today's organizations are
struggling because they are trying to figure out how to leverage the legacy data assets they have, locked away in different sources and fragmented across different applications, and turn all of that into building AI solutions, especially for model training and fine-tuning. I think the key thing here is really to start thinking about
a more scalable data framework in this new age of AI.
And it's specifically about leveraging what you have, not necessarily trying to move everything to one place, because for certain organizations that is literally impossible to do.
But today, there's a new sort of idea: you could leave your data where it is, but build a layer on top of it.
And I think, specifically, agentic AI has great potential to play here.
Earlier today, I gave a talk about agentic AI applications in the finance space, and finance comes with tons of data. The data comes in different shapes and forms, and it's stored in different places. But with an agentic solution, you could have data retrieval agents that are built and tailored to different types of data sources, and they work together. That is akin to having these agents specialize in certain types of data, but at the end you standardize how they work together and are able to unite all the different data assets you have to build your applications.
That's awesome.
I'll have to watch that episode. Agentic AI is definitely a hot topic these days.
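As a rough editorial illustration of the specialized-retrieval-agents pattern Daniel describes, here is a minimal Python sketch. The class names, the common interface, and the stubbed data sources are all illustrative assumptions, not his implementation; the point is only that each agent specializes in one source while the orchestrator standardizes how they work together.

```python
from abc import ABC, abstractmethod

class RetrievalAgent(ABC):
    """Common interface: each agent specializes in one data source
    but returns results in a standardized shape."""

    @abstractmethod
    def retrieve(self, query: str) -> list[dict]:
        ...

class SqlAgent(RetrievalAgent):
    def retrieve(self, query: str) -> list[dict]:
        # In practice this would translate the query to SQL and hit
        # a warehouse; stubbed here for illustration.
        return [{"source": "warehouse", "content": f"rows matching {query!r}"}]

class DocumentAgent(RetrievalAgent):
    def retrieve(self, query: str) -> list[dict]:
        # In practice: embed the query and search a vector store.
        return [{"source": "documents", "content": f"passages about {query!r}"}]

class Orchestrator:
    """Fans a query out to specialist agents and merges their answers,
    leaving each dataset where it lives."""

    def __init__(self, agents: list[RetrievalAgent]):
        self.agents = agents

    def answer(self, query: str) -> list[dict]:
        results = []
        for agent in self.agents:
            results.extend(agent.retrieve(query))
        return results

orchestrator = Orchestrator([SqlAgent(), DocumentAgent()])
for hit in orchestrator.answer("Q3 revenue"):
    print(hit["source"], "->", hit["content"])
```

Because every agent returns the same shape, adding a new data source means writing one new agent, not rebuilding the pipeline.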
Daniel, you've been really focused on education, obviously being at Stanford, and your journey has really been around AI. Can you tell us a little bit more about your opinion as to what skills or areas enterprise leaders need to develop most urgently in today's society?
Yeah, as you know, education is one of my passions, and I love this question. The short answer is that the skills most urgently required today for leaders versus practitioners are different but complementary, and I'll elaborate. So I think for leaders, their job is not to know
how to build the engine of the car, if you take that as an analogy. However, they need to know how to read a map, how to read the gauges on the dashboard, and be able to make decisions and safely drive the car. So translating that into the skills they need, I think the key thing here is that they need to move beyond buzzwords. Still, through my conversations with other leaders, I feel like the superficial understanding of what AI technology is, is at a dangerous level, meaning that they are the decision makers for the organization, but they don't truly understand what AI can do. And so sometimes that decision making is driven by what I call fear of missing out: because their competitors are using it, they jump in and use it. But they don't have a clear idea of what this technology can do, and on the flip side, the limitations and the risks of adopting it. So I think the number one skill is to develop that AI intuition. It's super important for leaders. And the second piece, I think, tied to that, is to
enhance the literacy around risk and governance. For AI, this is not an optional box you check off. Leaders need to really understand the flip side of the picture and what governance entails; specifically, the fast advancement of AI technology and the things it enables have drastically changed the risk landscape for any enterprise. And ignorance about that is a very dangerous thing for a leader.
And I think the third thing, the leader's responsibility, is to design an AI-native kind of organization. You can't simply plug an AI team into an existing IT infrastructure, workflow, and organization and expect everything to work successfully.
So leaders need to start thinking about what they need to change,
especially around encouraging quick experimentation
and also fostering an innovative culture within their organization
to empower their AI team.
And then finally, also make sure that overall, not just the AI team but the broader organization, has the right level of AI literacy, so they can collaborate flawlessly and achieve that synergy.
Now, flip the picture to the side of the practitioners on the ground.
These are the data scientists, ML engineers that are actually doing the work.
Now, for these people, I would encourage them to think beyond the technical work of building a model or training the model, and think more about how to scale out their impact and connect with the business they're contributing to.
I think the key thing here is really not to just focus on being the creator of a model, but on becoming an engineer of a holistic solution that works for the business. And the key is really to develop business acumen and understand the business impact so deeply that they're able to proactively propose better solutions and cut out unnecessary complexity, right, because they understand the business goals so deeply. So on the practitioner's side, go beyond the technical know-how, like the latest technology in agentic AI and generative AI, and also dabble in the business side.
Now, I was having a conversation with someone earlier today about how we've gone from the birth of ChatGPT just a few years ago to a stage where some IT organizations feel like they're living through death by 1,000 POCs, with many projects launched but maybe struggling a little bit to get to broad deployment. I know that you talk to a number of practitioners from across different industries at Stanford. How do you see organizations navigating through that period into successful broad proliferation?
Yeah, that is a great question. I think that is the secret sauce every organization is struggling to find. Being in the position of leading AI/ML teams and developing them, as well as teaching in the educational program at Stanford, I have a couple of recommendations. I think it's important to understand that everybody is trying to figure things out. There's really no set recipe out there for every organization to simply copy. Now, when you're talking about more classical AI and machine learning, I think a lot of organizations, or the mature enterprises, have succeeded in finding what that playbook looks like. And so the question for these people is simply moving from whether we can do this to how we scale it faster. Now, going back to the question about generative AI and even this year's agentic AI, this is so new that I think the enterprise world collectively jumped into this discovery effort together like 18 to 24 months ago.
And I think we're still in the thick of it. I'd say no single organization can claim that they have found the secret recipe to success there.
But there are certain things I think will be helpful.
So the number one thing I think is important is for the organization to figure out what real problem they're trying to solve. Again, avoid that FOMO mentality of jumping into something just because others are doing it. Truly know what pain points the organization has, and these differ from organization to organization. So clearly identify the strategic priorities, be able to focus on those, pick the right focus, and start small with a vision of building out the whole solution, knowing exactly what type of impact it's going to bring.
And I think the second thing is that, for the whole organization to go through this, it doesn't work for every single project team to hack out their own path through the jungle, so to speak. So try to develop some paved success route or path, and especially standardize the very difficult parts of the journey within the organization, so everybody can follow that and you can actually accelerate the path to the other side.
And some of the major areas to standardize include things like data access, and things like compliance and controls, to make sure that you cover all the risks. What you want to do is make sure that the compliant path is that proven path, so nobody needs to go around and figure out how to do this on their own.
Yes, for sure.
I guess with that, I'll just jump into it. What surprises you most about how quickly AI tools are advancing and how they're kind of shaping work across various industries?
Yeah, so I think what surprised me the most is not just the speed of adoption and innovation.
I think with that adoption, I'm surprised by three things.
And the first thing is, if we go back a couple of years, before the availability of LLMs and GenAI, being able to develop a solution that understands complex documents and generates coherent content literally required a whole village.
But look at where we are today; it's surprising. One single developer can have access to world-class AI through a simple API and build innovative applications around it. This is astonishing, where we are today.
And I think that literally took down the entry barrier for AI innovation, because you no longer need these top, huge labs to do the work. Now, anyone with access to these models can build something fantastic.
So that's one thing that surprised me quite a bit; it happened so quickly.
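Daniel's point about one developer reaching world-class AI through a simple API can be sketched as follows. The request shape follows the widely used OpenAI-style chat-completions convention; the endpoint URL, model name, and API key here are placeholders, not any specific vendor's values.

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask_model(prompt: str, api_key: str,
              url: str = "https://api.example.com/v1/chat/completions",
              model: str = "example-model") -> str:
    """Call a hosted LLM over HTTPS and return the generated text.

    The URL and model name are placeholders; swap in a real provider's
    endpoint and credentials to run this against an actual service.
    """
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A few dozen lines of standard-library code is the entire "entry barrier" on the client side; the model itself lives behind the API.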
Now, with that, naturally, we also see the next phase of this cascading effect: how fast professionals' roles have evolved. I know there was a lot of narrative about the job displacement caused by AI adoption, but I think we also need to look at the more optimistic picture. We have seen a lot of professional roles evolve where people just become more productive because they start embracing AI technology. We see financial analysts now using AI to sift through hundreds of reports; the data analysis used to take 80% of their time, and now they use agentic AI to do it really quickly. So their role has been upgraded to one focused on a really strategic perspective in the financial planning space. And software developers are another example. We see these copilot solutions out there, so software engineers have moved from coding, literally writing code line by line, to directing a copilot to write the code, and they evaluate the results produced by these copilot solutions. So they literally upgrade their job to become an architect, a software architect, rather than just a simple coder. So these are examples of how quickly AI, specifically generative AI, has empowered professionals to expand their roles and upgrade their skills.
And finally, the third thing I'm surprised by, and I think this is a pleasant surprise, is how quickly the whole industry has rallied around the need to develop the right frameworks and infrastructure to support this. We see vector databases, we see LLM operations platforms and solutions, we see curated data access layers. All these things came together so quickly, and the speed of innovation in that space is also pleasantly surprising to me.
So I think all three things, in essence, go beyond just the speed; it's that cascading effect we're seeing that surprises me the most.
That's amazing. Now, we are at the AI Infra Summit, so I'm going to ask you, moving away from operationalizing models inside of organizations: why is infrastructure so important? Why is it becoming more important every day? And how do you see this part of the industry responding to the demands of the large cloud providers, the neocloud providers, or even on-prem implementations?
Yeah, that is a very good question, especially today, about operationalizing AI solutions, because without the infrastructure, you don't have AI. Infrastructure advancements, specifically in compute and cloud technology, were literally the key enablers for today's AI. That's why we're not in a third AI winter; we are booming now. So infrastructure is a foundational capability. And I'm happy to see the community
come together to think a lot about innovation in this space and not just in developing more
powerful model, but also how to actually practically deploy the solution in an enterprise
setting, right, in production. And that requires a lot of new ways of thinking. In the past, the infrastructure and the model were kind of separate. I can give you an example; even today, a lot of organizations still function this way. You have data scientists and engineers developing their models, and when they're done, they toss them over the wall to the infrastructure and MLOps team and say, deploy this. And only then do they discover that they don't have access to the kind of infrastructure needed to run the model at this scale. So unfortunately these are built in silos, and people try to force-fit them together. Now, in today's world, you can't operate that way. You need to have that end-to-end view.
While you're designing the model and the entire solution, you need to think about the actual production environment and the infrastructure available where it's going to run, and have the model have what I call deployment-environment awareness. And if you build a model that way to begin with, then you'll be more successful in getting it into the infrastructure.
The other part, I think, on the cloud technology and infrastructure side, is to push even more toward that dynamic aspect. Today's LLM solutions require large infrastructure and are extremely expensive. You can't have a static, pre-configured, always-on cluster running; that's just not feasible. So we need to move toward a more flexible, dynamic sort of on-demand provisioning.
So you can imagine the infrastructure would provision just enough for the specific workload it's expecting and run that workload. It could be training the model, fine-tuning the model, or running batch inference; it doesn't matter. But after that job is done, it automatically vanishes, right? That is the most efficient infrastructure solution for AI deployment, especially for today's AI. I know we're not there yet, but I think a lot of cloud providers are moving toward that direction, which means they move beyond being a hardware landlord, so to speak, to providing managed, as-a-service kinds of offerings on demand for these teams to deploy.
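The provision-run-vanish lifecycle Daniel describes maps naturally onto a context manager. This is a hedged editorial sketch: the function names are invented, and the provisioning calls are stubbed where a real system would hit a cloud or cluster API.

```python
from contextlib import contextmanager

@contextmanager
def provisioned_cluster(gpus: int, log: list[str]):
    """Spin up just enough compute for one workload, then tear it down.

    Provisioning is stubbed with log entries; a real implementation
    would call a cloud or Kubernetes API here.
    """
    log.append(f"provision {gpus} GPUs")
    try:
        yield f"cluster-{gpus}gpu"
    finally:
        log.append("teardown")  # the cluster vanishes when the job is done

def run_job(kind: str, gpus: int, log: list[str]) -> None:
    # Each workload gets exactly the capacity it needs, for exactly
    # as long as it runs; nothing stays always-on between jobs.
    with provisioned_cluster(gpus, log):
        log.append(f"run {kind}")

log: list[str] = []
run_job("fine-tuning", 8, log)
run_job("batch-inference", 2, log)
print(log)
```

The `finally` block is the key design choice: teardown happens even if the workload fails, so there is no orphaned, always-on cluster accruing cost.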
Thanks.
So looking ahead, say a few years from now, where do you see the biggest opportunities
for enterprises to kind of unlock the value of AI?
I think there's probably three areas I can think of.
The first area is continuing the path of using AI to improve internal workflows and business processes, the existing internal operations. I think most organizations start their AI journey in this area, so that's no surprise, but there's a long way to go, especially since the technology has advanced so much. There's a lot more flexibility in adopting AI, and there are many more choices among different types of solutions today. So I think that will continue to be the table stakes for a lot of AI teams within enterprises, and I won't be surprised if it continues to spread beyond the tech organization into other functions, like legal, finance, and operations.
So that's one area.
The second area: I think it's clear that AI is going to continue to delight customers. So it's applying AI to enhance existing products and services. And specifically, I would expect agentic AI to be even more common in this space, especially around not just hyper-personalization, which agentic AI is a perfect fit for, but also playing that copilot role with your users and being able to guide them. So imagine a world where you no longer need that learning curve of figuring out a new application or new user interface; there is some intelligence built right into the product that leads you and guides you every step of the way. It literally is a personalized experience. So I expect that area to really take off and be very profitable for enterprises as well.
The third area I think is important is accelerating the strategic functions within the organization with AI. And here, agentic AI is a clear sort of direction to go; think about R&D. Today, agentic AI specifically is already revolutionizing scientific discovery, like new drug development and all that. And so I expect that to continue to grow.
That's awesome.
Daniel, every time I talk to you, I learn something new. So thank you so much for your time today.
It's been a real pleasure.
I loved hearing your insights about how organizations are adopting AI and where we're at, but also about the infrastructure and how you see it evolving. I'm still blown away by your new data pipeline analogy and the migration. I love that story.
I'm sure listeners are going to want to engage with you more.
Where can we send them to engage and continue the dialogue?
Thank you for having me.
The easiest way to follow my work is to find me on LinkedIn. Simply search for Daniel Wu. I usually post my work and my perspectives there, like, for example, the two books that I co-authored on agentic AI and generative AI security. And I also talk about my panels and keynotes.
Well, Janice, that's another episode of Data Insights.
Thank you so much for the great conversation to both of you.
Yes.
Thank you, Alison.
Thank you, Daniel.
Thank you.
Thanks for joining Tech Arena. Subscribe and engage at our website, techarena.ai. All content is copyright by TechArena.
