No Priors: Artificial Intelligence | Technology | Startups - The Impact of AI, from Business Models to Cybersecurity, with Palo Alto Networks CEO Nikesh Arora
Episode Date: October 2, 2025Between the future of search, the biggest threats in cybersecurity, and the jobs and platforms of tomorrow, Nikesh Arora sees one common thread connecting and transforming them all—AI. Sarah Guo and... Elad Gil sit down with Nikesh Arora, CEO of cybersecurity giant Palo Alto Networks, to talk about a wide array of topics from agentic AI to leadership. Nikesh dives into the future of search, the disruptive potential of AI agents for existing business models, and how AI has both compressed the timeline for cyberattacks as well as fundamentally shifted defense strategies in cybersecurity. Plus, Nikesh shares his leadership philosophy, and why he’s so optimistic about AI. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @nikesharora | @PaloAltoNtwks Chapters: 00:00 – Nikesh Arora Introduction 00:39 – Nikesh on the Future of Search 04:46 – Shifting to an Agentic Model of Search 08:12 – AI-as-a-Service 16:55 – State of Enterprise Adoption 20:15 – Gen AI and Cybersecurity 27:35 – New Problems in Cybersecurity in the AI Age 29:53 – Deepfakes, Spearfishing, and Other Attacks 32:56 – Expanding Products at Palo Alto 35:49 – AI Agents and Human Replaceability 44:28 – Nikesh’s Thoughts on Growth at Scale 46:52 – Nikesh’s Leadership Tips 51:14 – Nikesh on Ambition 54:18 – Nikesh’s Thoughts on AI 58:21 – Conclusion
Transcript
Discussion (0)
Hi, listeners. Welcome back to No Pryors.
Today we're here with Nikesh Aurora, the CEO of Palo Alto Networks.
He joined Palo Alto in 2018 when it was the next-gen firewall player
and has since grown in it to six to seven times the size as a leader as a platform security company.
Previously, he was the SVP and CBO of Google during its massive growth phase from 2004 to 2014.
Welcome, Nikesh.
Nikesh, thanks so much for being with us.
My pleasure.
I don't know where to start
because I want to talk about AI,
I want to talk about security,
I want to talk about leadership.
I do think given your history
growing Google as chief business officer,
like, we have to ask you,
what do you think is the future of search
and how threatened is it?
Nothing like a slow little lowball
to welcome me to your show.
Got to work me up a little bit.
This guy uses at Google too at that point,
but I talked to him too much.
And what does you think?
I think we should have further
question to you. Oh, look at that. You don't know how to put his own mouth.
Blathing back. How you do all hard work. I think the idea when search came about, I still remember
going out there and trying to sell search to people and it was the, oh my God, do you mean I can
just go to the internet, type something and I can get the answer. And we spent two decades trying
to get all the information out there on the internet, so it was easily accessible to people.
And I think you saw the benefits. You saw the benefits of democratization of information.
Armors in India could get stuff and people get information. I think now,
But at an age, people are saying, great, don't give me all this stuff to sift through myself.
Try and make sense of all of it for me because it's too much.
And that's what you're seeing in today's generative AI models.
So I sort of, in my own words, I call that democratization of intelligence.
All of us will have the basic intelligence, which every other person next to us has,
because we can kind of go figure it out.
I don't have to hire the same people to solve the same problem for me,
the 10,000's time and pay the money because it's already been solved 9,000,
99 and the outcomes on the internet somewhere.
So I didn't do the extent that Google has sharpened its sort of scales on putting all
that information together, being able to synthesize it, understand it, being able to interpret
my intention as an end user, and try and present me the most likely outcome, I think that
should translate well to the notion of generative AI being able to summarize the same thing
than a much more enhanced order way for them.
So I think from that perspective, will they have the ability to transition the current search product into a future product, which is basically, you know, call it what you want to call it, you know, ask me anything or, I think it's so funny, like, you know, when you worked to Google 15 years ago, Larry had that vision. He used to talk about getting to a point where you answer my question, answer my intent as opposed to answer what I type. So he had the foresight to talk about that. He used to talk about AI. So I think from a product perspective, they are in a good position to.
be able to transition the product to what the end users need.
And you've seen that, with Gemini, you see that, with Chad GPD, you see that with a lot
of models which are getting the same place.
Let's not underestimate the distribution power.
There are two or three companies in the world which have distribution of the billions,
and whether it's Facebook with all their properties, with Apple, with their properties,
Google the property.
So they have the distribution.
They had the product chops.
They have the AI chops.
I think the bigger question is, how does the business model transform from what it has
being with 10 blue links and ads
being represented against them
and what's the new monetization
sort of that will come as a result
to be agents or something else
in terms of starting to take action?
Yeah, we can come there too,
but I think because yeah, as a search question,
search is an advertising revenue question.
So yes, the question is,
how does the advertising revenue morph
into some version of consumption or transaction metric
which is what, that's the intent
when I go say, what are the best blue pants in the world?
It's not just not doing it for academic interests
I'm actually going to transact.
So, yes, maybe an agent could do this
or the fastest flight to get to Rome.
So the agent could do that.
We'll talk about agents in a second.
I think that's much more disruptive than generative AI.
And to that extent, I think how the transition
the business model is going to be interesting.
I don't think anyone knows what the business model is.
But all I can say is that having been there
and admired what they do,
they do spend time getting the product adopted first.
And eventually, even there is tremendous amounts
of distribution for the product,
you find a model emerges.
I remember how the longest of years,
nobody quite knew how YouTube was going to make money.
And everybody was working at Netflix
as the one who were making money in streaming,
YouTube wasn't.
I think YouTube's a big-ass business now
compared to most of the streaming players in the world.
So I think they'll figure out
how that model transforms.
I do think the agentic challenge
is a much bigger challenge
than the generative AI challenge.
What sort of challenge do you think that is?
Or how do you think that's going to substantiate?
Look, the idea that if you step
back, you know, we spent 30 or 40 years building products where we focused in UI, right?
Product managed that effect and we glorified UI managers.
So we're trying to make sure that us common folks can interact with data and some sort
of engineering algorithm behind it because they're not smart enough to talk to the engineering
algorithm ourselves.
So you go to Expedia, it's a bunch of boxes we're supposed to fill in as humans and it gives
the answer to us because it's not having us to write in search query, SQL, XQ, or whatever
you want, to go find the information.
Now, I think generative AI has made that easier.
We can talk to the UI and natural language to some degree.
It can generate outcomes on the fly.
So I think to that extent, I think all the product development is going to change with
generative AI and the natural language capability.
Now, if you take that next step further and say, actually, don't need to come interact
to the UI, my agent can go do the task for you.
And step back and think of that.
Like, you know, 50% of the application in the world are some sort of transaction fulfillment
applications.
If there are five, I'm picking number, I think it's more than five million, but if there are
five million apps on the iPhone or the Android phone, half of them are trying to get you and I
to interact with the engineering algorithm and give them data.
If all those become agenetic actions of an Uber agent, then the question becomes, who sits
atop?
Yeah.
Who's the Uber?
And that's also a lot of Google's traditional revenue, at least on the advertising side,
is direct response ads.
It's not the branding ad.
So it's part of that business model in some sense.
Yes, it is.
Look, most direct response is lead gen, right, in marketing speak.
It's lead gen which eventually results in a transaction or fulfillment of information request.
So at some level, it is a precursor to our transaction.
People pay a lot more for the conservative transaction than for the lead.
So maybe the business model transition is, stop giving me leads, give me conservative transactions through agents.
Maybe I'll get paid more to buy you the airline ticket directly than have you be able to find an airline ticket provider, which is advertising versus transactions.
I think the opportunity is there from a business model perspective,
but I think before we go back to that stable world
where these business models have transformed,
we're going to go to a very disruptive phase,
but a lot of these apps will be rewritten.
And some of these apps will have to question,
are they direct consumer apps,
or are they APIs that, or perhaps our MCP Climb Server interaction,
we don't call them APIs anymore,
which will actually consummate that transaction.
How do you think it's most vulnerable?
Wow, you guys don't ask, like, simple questions.
You guys go for the jugular on every one of them.
That's why you're such a good investor.
Now, we're looking forward to your insights on this stuff.
No, look, I think the most vulnerable people are
where there is poor loyalty to the UI.
If it's just to sit a thin front end
to effectively a transaction processing system in the back,
do you care who you get your ticket to India?
My wife is coming back.
So you care who get your ticket to India from, which airline or which travel entity or which travel UI, perhaps?
You just want a ticket that allows you get on a plane and get to the other side.
Do you care who books your reservation for a restaurant?
So at some level where you're at the end, you're effectively a transaction processing interface.
You build some degree of brand loyalty through a whole bunch of sort of non-product experiences.
I think those become vulnerable.
There are, like, several more controversial ideas that Open AI is trying to prove out.
One now is the scalability of consumer subscription.
And another, I think is the...
How does that work?
Well, it's just like, can you...
I think it's actually quite surprising how many people are paying subs for this intelligence today.
Yes, yes.
I think the other is actually, and I want to talk about the B2B side, is that you should get paid for thinking harder and solving
harder tasks in like a scalable way.
This is what they want.
They want to sell like work, right, to businesses.
What do you react to work to businesses?
I think there's a view that the traditional way that you sell most enterprise software or
products is like it's a seat unit.
Yes.
Or it's some sort of like volume unit, like an appliance or something or a coverage unit, right?
Throughput based, yes.
And here, throughput traffic, et cetera, here the view would be like, well, if I solve a really
hard problem for you. I have a unit of work. It's essentially translating to a unit of compute
or sort of charging for value. And so I think there's a strong belief in some of the labs that they
should be able to charge for that. How do you react to either of those business model ideas?
Because you're now at Palo Alto in the business of selling direct value versus ads.
Yeah. Okay. So are we pivoting from consumer to business now? Yeah. Yeah. Let's talk about it.
Because we go away from the subscription, because you went off on the whole subscription idea of the
people should pay for subscription. Look, I think that is a bigger lead. The,
bigger leap is, in the consumer world, we are much more tolerant of inaccurate answers
sometimes or not perfect answers.
How many times you go to a search, even today, we're looking for something.
You don't find the right answer.
And he said, well, let me look again.
Oh, I must have asked the question wrong.
Let me ask the question.
You'll probably do that in your prompts in sort of chat GPT or Gemini or pick your favorite,
you know, LLM.
But in the enterprise world, there is not that tolerant.
for an inaccurate outcome,
especially if you get into the agentic world.
Oops, sorry, I made a mistake.
I meant you to turn off that server, not this one.
I blew up a whole bunch of my enterprise
because you gave me the wrong answer,
dear friend LLM.
So I don't think we're going to,
I don't think we're there yet, to be fair.
None of us are giving autonomy
to any form of LLMs to create any agentic task
or do any work for me.
We're all using them with human and humans
in the loop for suggestions,
and we're still sort of using the use
is where we are okay with multiple answers
where we're actually having the humans be,
it's almost like a glorified assistant
or a better assistant or somebody who's sort of a knowledge worker
who's summarizing a whole bunch of information
that I may not be able to have from the outside.
I don't think we're getting into precision tasks
and we're getting into precision actions yet.
So look, if you can generate precision tasks with accuracy,
in precision actions with accuracy, yeah, sure,
maybe you can charge me for a unit of work.
But I almost feel in the enterprise we're going to go back
to some version of redefined, for lack of a better word.
Let's call them AI as a service instead of SaaS, perhaps AIAS.
It's a racist term, yeah, yeah.
SAS wasn't a rich term, but sure.
Yeah, yeah, sounds worse.
But anyway, it's like, imagine that,
because you literally have to design the enterprise workflow end to end
and see how I can do that from an AI-first perspective, right?
Perhaps like the likes of cursor or vibe coding apps today,
they're trying to at least look at a part of the developer's workflow and say,
here's what you do, here's a bunch of tools that can help you through that journey,
you're human, and I'm going to see, based on how you use it over time,
I'm going to get smarter and smarter,
and be able to take over more and more of what you need to get done.
So I think it's kind of like these are AI apps under training.
When they grow up, they're going to take over more and more of our tasks
and allow the repetitive tasks to go away so you can apply yourself to new unique problems,
but I think there's a long time coming.
If you look at a lot of the platforms shifts that have happened in the past,
So, for example, with Microsoft OS, they eventually bundled on top the office suite, right?
Those were applications that were running on top.
They ended up rebuilding or buying them bundling and cross-selling.
If you look at Google as a platform, it forward integrated into the biggest areas of vertical search.
Yes.
Right?
Travel and local and a variety of other areas.
But they've been in a workspace too, right?
You're beginning to see Gemini show up.
Doing a workspace.
Doing show up.
Yeah.
Yeah, exactly.
So actually, coding is a great example, right?
Opening, I tried to buy windsurf.
Anthropic has clod.
People are trying to forward integrate there.
Anthropic also mentioned
they want to forward to integrate
into financial tooling right now.
Do you view this world
is basically the platform,
the big foundation lab companies
are likely to try and move
into the biggest business verticals
directly over time?
That's an interesting debate.
Let's go back ZY.
Look, the reason you're seeing
that developers is the first protocol
because they're the most likely
to experiment and to be able to work
with half-baked outcomes.
Like you give me 75% of the code
that I need.
I can parse through it.
and figure out the remaining 25, because that's what I do for a living.
It's much harder to do another professions where we're not fully sort of in tune with all the
guts of what we're doing.
So I think that's an interesting place to start.
And over time, that workflow will go out into testing and other parts of the software
development lifecycle.
So I think that's kind of interesting.
I've had this conversation specifically with some of the people who are driving some of the
larger LM businesses.
And I remember the very first phone call I made to Thomas and we talked about small models
and large models.
and he gave me good advice.
Like, don't, don't chase a cybersecurity model
or don't cheat a small model
because these large models become so much smarter
that the smaller models will not be able
to be as smart as these things.
So I do believe...
Have a sage.
Yeah, that was very good.
And so we decided not to know,
there was a member, like the very early days,
one of our competitors announced,
they were going to work on a cybersecurity alone.
Now if I go back and look, I'm pretty sure
that sort of, that was a great announcement.
I don't think anything's come out of it.
So we decided not to chase that
because it didn't seem to make sense.
That was pretty sage.
I do believe that after a point in time,
and I think you and I talked about this right before doing this,
you were giving me great insight that the models are converging,
their reasoning capabilities are converging.
So if you believe all these models are going to be extremely smart,
but somewhat similar in capability.
And I always say this to people, like, look, in the enterprise world,
you know, getting the smartest model in the world
is like hiring the smartest PhD from the best university you can find in the world.
For that PhD to be useful,
at Palo Alto, we still have to teach them our ways, right? Because they're not going to be useful
when they walk in the door. They have to understand our context. They have to understand how we do
things. They have to understand how to take our problem and solve the problem. Whether you call that,
you know, pharmaceuticals, you call that genetics, you have to make the model get smart about
genetics, very smart model, but you have to give a lot more training data in a particular
domain for it to become really smart in that domain. So I think the interesting opportunity
will be how can we take these models and apply them to domains where we have proprietary data.
Now, developer use case is a generic use case. There's enough code out there in the public domain
and you can actually get smart, 90% smart encoding. There's not enough genetic data out there in
public domain or pharmaceutical drug discovery data out of the public domain or cybersecurity
proprietary out of the public domain. So the question becomes, how do you take these models,
how do we take that brain, apply that to a domain, make it really smart? I think the challenge
right now is that if you're building a wrapper effectively as an AI as a service company
and all your wrapper does is enhance the capabilities of a model to put some guardrails around
it, then your biggest risk is the model slowly expands into those capabilities and you're
no longer in business. Now, the difference that those companies are going to have from others
who might survive are if you look at every SaaS company, it's actually the packaging of a
workflow the system of record, right?
My HR system is a connection to the workflow.
Every employee knows how to use that workflow.
And then every time it locks down a certain table, say, this is your system record.
This is what Negesh gets paid.
This is when he took holidays.
This is what his equity looks like.
This is when he vests.
It's less about the app.
It's more about that data and that system of record.
So eventually, if these apps have to live in the long term, they have to marry the
capabilities of AI with the fact that it becomes an enterprise system of
record, right? A model is not going to become my system of record. It will still be some
proprietary-locked database somewhere or some data tables pick your fig. And maybe the interaction
mechanism is no longer a workflow is some sort of agent or some sort of AI interface, which
allows that system of record to be maintained and creative, but there still has to be some rules.
So I think that's kind of quite the opportunity is as compared to just putting a wrapper
on our genetic processing today, I'm going to help you analyze legal contracts. Well, guess what?
I ran my legal contracts with chat GP, it works just fine.
Where are we actually given your visibility into enterprises and actual adoption or value in use cases?
So I think the use cases where there are two current major use cases, right?
One, let's call it, what you call it generalized or perhaps cross-enterprice consistent activities, right, generalized.
So do you have a legal team aside do you have legal terms?
Every enterprise has a legal team?
Do they have any particular proprietary knowledge compared to, you know, particular appellate?
likely. It's more, I need them for legal advice, not for Palo
advice. So in that use case, yes, could we use a, you know,
Harvey, equivalent, or whatever those are? Sure, it enhances their
productivity. They get their future faster. Could I possibly in the future
use some sort of AI-based interpretative app or interpreted
application, which helps me process my accounts to see if accounts payable
faster, codify them? Sure. So I could. So there's a whole bunch of
repetitive, generic tasks across enterprise, which I'm pretty sure,
could be done by some version of an AI wrapper
around LLM with some particular context
or my data. Sure. So to that
extent, I think we're all experimenting with those things, but my caution to my team is
don't try to build them. Somebody's going to build them for all of us.
We're much cheaper to rent them by some
perhaps metric of work. AIS, yes.
AIS metric or per seat, maybe, agentic seat. I don't know,
but there'll be some mechanism that they'll charge us on.
But we don't have to build it because it's going to cost me a lot more
to build my own, you know, accounts to be able,
accounts to be able to a smart AI system
compared to what I can buy off the show.
Is that what your customers believe now?
Like all the enterprises, the largest ones?
I think many of them do because this is not an easy problem to solve.
Yeah.
Like, first of all, finding the skill set,
finding people who understand this,
you know, living in this world of constantly evolving models
where you keep, and by the, you know, this better than need.
Like, there's no, like one model out
and take the next word and stick it in.
It works just the same way.
This is like getting a new PhD and training all over again,
saying, let me explain how we work here.
So from that perspective,
most rational players in the enterprise space, as in customers,
would want somebody who's the expert to build it
and for us to have some version of adaptability or adaptation to it,
make sure it's secure.
Like, none of us, no enterprise customer wants their data
to be floating in a multi-tenant environment saying,
oh, my God, my data is training other people's data.
Now, to the extent it's the council people, have a good time, right?
You know, you can understand how I codify stuff, have a good time.
But to the extent, it's proprietary data when I'm doing FDA trials.
I don't want my FDA trial data training somebody else's data.
But I think they will earn the side of caution and say,
I want my instance to be secured.
So I think we spend half our time before we look at any of these packaged AIS apps
talking to them understanding the security.
I don't want my source code to be training somebody else's coding app.
So to that extent, there's a whole bunch of conversation.
Is it ringed best?
Is my data mine?
Are you using it to train your model?
Are you using your system now?
And then we spent a lot of time testing it to make sure that they're not doing it.
So a lot of time and effort is spent there.
Not every enterprise is as discerning.
We have to be, because they're in the security business.
But I think you'll see some version of stability and acceptance there that people
will take these generic sort of AI as a service system or record,
sorry, AR service apps which are helping humans get better.
How do you think about that in the context of applications that you think
that may make the most sense for cybersecurity?
So if I look at founder activity, there's more and more activity around SOC.
Yeah.
There's a lot of activity around pen testing.
Yes.
There's activity around a lot of areas that are very human and tested, in some cases, repetitive tasks, which make a lot of sense for this form of generative AI to take over.
And then there's people incorporating AI into existing products like socket for sort of like a SNIC-like competitor or other aspects of code security.
I'm sort of curious from your vantage point.
What do you think are the most interesting areas of cyber AI?
So I think if you step back and think about cybersecurity, right?
There's a world of cybersecurity which operates and says,
that's the known bad, I found it, let me stop it.
Sure, that's a good thing.
I found a bad actual, let me stop it.
I found malware.
Let me stop it.
Now, to be able to stop bad things that are getting into your network,
you have to be deployed every sensor.
So the first thing in cybersecurity companies said,
look, I can't stop what I don't see.
So I had to be present every edge, every endpoint, every sensor of your organization.
Five years ago, we made a conscious choice.
Our strategy should be to get to be in as many sensor places or control points as we can.
So we did that.
We have a sassy product, an endpoint product.
So that's good.
I think sensor business will have to stay because if you're not there, you can't find anything.
It doesn't matter AI or an AI.
I've got to be able to be there to find it.
And then senses are pretty good at stopping the known bad.
Is it known bad?
I stop it. Well, most cybersecurity breaches happen because the unknown bad, because we stopped
all the known bad. And every time you build a new company, saying, let me go for it. So everything
you're talking about is trying to find the unknown bad or a vulnerability, which has left
the door open so bad guys can get it, which is a socket or a sneak like competitor. That's what
trying to do. Now, in cybersecurity, there are companies which are the sensor. Now, sensor allows
you to do two things on, stop the known bad, but also collect valuable data to analyze the data
to understand what they are non-bad may be.
So we get benefit from both.
We're at the sensor stopping the non-bad,
and we also collect a lot of data.
Traditionally, in cybersecurity,
people have been, I'll call it,
feature integrated from end to end.
Let me sit at the endpoint,
I'm going to trap all the data
around a particular topic.
Now, take that data into the cloud.
I'm going to analyze the data
and then tell you, oh, my God,
I found something suspicious.
Here are five suspicious things you should investigate.
But the problem is because I don't have context
when the data passed my sensor, it's gone.
So take a simple example,
I send you an email, a phishing link,
where you click and, you know,
when you go to the website, I steal your credentials.
Now, I'm an email security company,
so I stopped, I'm at the bed,
I see a bad email, I know how to stop it.
I don't know it's a bad email,
and then Elad clicks on it.
Now you've gone from an email product
somewhere else in the company,
gone to the internet through a firewall.
I have no idea what you did.
So all I can say is,
that link looked suspicious.
I clicked on it.
Maybe you want to investigate.
Now, no customer wants a list of 5,000 things they have to investigate.
So people are busy building these agents, say,
let me build an agent to help you investigate,
investigate that.
But I think the better answer is if I had all the context of the enterprise,
I can go mine that data to find what actually happened.
So we're taking the different pack.
We're saying, let's see if you consolidate all the data in the enterprise.
If we can, we can run a whole bunch of machine-loading algorithms
and do all these activities on top of that.
So the current startups you mentioned are like startups that are trying to do what AI wrappers
are trying to do on LLMs, and over time, we're going to get better and better, we're going
to squeeze their capability over time that you wouldn't need them.
Yeah, yeah, and there's a few different sets of them because there's also the ones that are
just trying to automate human services that are associated with the security world today.
And so pen testing would be one example where you often hire external consultants to help with that.
Sock may be another one where effectively you have this, you know,
you're basically looking at the operating center across all the different events
that are happening and trying to automate that.
So it's a little bit more of a human laborer.
The reason you're trying to automate the stock from the outside in
is because you're not running 10,000 machines and running bottles at the ingestion point.
Why should I first collect all data to analyze it after?
I should analyze the data and cross-corated ingestion to make sure I understand the bad things
and I put them up there and then say I am going to and then build agents to investigate
the bad thing.
Yeah, yeah.
Well, this is my question of, no, the environment in a large enterprise today is still
very fragmented, right?
They're just doing this, you know, past the hat trick that you're describing with all
these different tools.
Hopefully they're very dominated by yours.
Yes.
But as a lot of mentioned.
I mean, chosen by customers.
Yes.
Dominant is a bad word.
Chosen.
You've worked once in a large company.
Gently chosen.
Gently chosen in a highly competitive environment.
That's right, yes.
Of course.
But, like, you know, I was talking about a friend who.
run security at a large financial.
And he's like, we have a hundred and eight.
He's definitely got some cyber security. He's definitely got some power off.
He wants more.
But he's got 118 identity tools, right?
Wow.
I didn't think there were a hundred and eighteen tools.
There's an identity crisis.
There we go.
There we got to try.
So he bought an identity company.
He's told that.
Maybe the answer is just make it all cyber arc.
But I think his view is like, it's like just untenable for me to consolidate all of
this in the near term.
Yes.
And so I'm going to attach something after, which does the human consolidation, like the Slok Automation thing, right?
The interesting part is, we're the youngest industry in technology, because cybersecurity did not exist before connectivity came about.
Nobody had to, like, you know, you were in a mainframe, you sat in the office, you connected the mainframe to a pipe into the back end and you're off to the race.
So nobody actually was going to come intercept your traffic.
It's not scary before the web.
That's right.
So now the web and applications out there made a series.
So the 25-year-old industry.
So every time a bad problem showed up,
somebody built a point solution to solve that problem.
So it is a fragmented architecture.
It's a fragmented environment.
There's a lot of data being used
and analyzed multiple times or multiple vendors
because they were needed for that point in time.
But like any industry,
that capability commoditizes over time.
There's no genius in building a firewall.
Every firewall within plus or minus 10%
does the same thing.
Some things do it better.
The question is, what do you add onto
what other capabilities can you build
beyond that. So if you believe that, sure, in a cash, 80% of the capabilities are going to be
capabilities that even most companies still don't have, and they can be delivered in one
platform. On the other 20% we can put add-ons, but over time, you know, we will eventually come
and gobble those up because there will be a new 20% bill. Until one year ago, nobody was
talking about AI security, right? Now there are probably more companies than you can count
who all show up at Eli's house, please give me, and you. So please give me some money. I'm
starting a cybersecurity company for AI.
I do prompt injection.
I do malware checking on models.
I do model data poisoning.
Egentic attacks.
None of the same thing existed.
So, of course, we'll find people building features
to protect against those because we're too busy
still fixing the problems the last five years
where the mass market problems are.
So I think over time, what you'll see is
the platform approach is going to sort of eat
into these feature approach that's happening.
And there are companies of COBOL code out there as well.
So 180-9 inventors sounds like a lot.
I didn't know 218 vendors, to be honest, I think.
He might have been cybersecurity vendors, which I can believe is just also a lot.
What new problems from AI do you actually pay attention to?
You say, like, this is going to be a mass market problem.
Look, I think if you back up and read our own dog who would believe our own rhetoric,
if you believe your own rhetoric, then I, as a bad actor, should be able to unleash agents against an enterprise,
against every aspect of it
and figure out where the
breachable parts are or where the holes
are in a quick, a matter
of minutes or less
than an hour. And I should be able
to point my attack towards that vector.
I should be able to run simulations and how
should I attack this thing. And I should be getting
an exfolitated data. Now,
when I started seven years ago,
the average time to
identify a target, get through it,
and exfiltrate data was in a three to four-day
timeframe. The
fastest we see in that right now is 23 minutes. So if you're, if the bad actor can get in
an hour and exfiltrate data or shut down your endpoints with ransomware in under an hour,
then by physics, your response time has to be less than an hour. The average response time
is still in days. So from that perspective, the biggest threat that AI brings is that continues
to compress the timelines to be able to come, you know, either shut down your business,
cause a compromise, cause ransomware, cause economic disruption.
If that's what it is, I think the pressure just went up higher on our customers to get
their infrastructure and order.
So that's the risk and the opportunity.
Yeah, I think Alad mentioned pen testing, which hasn't traditionally been like a very strategic
part of the security landscape.
Yeah, pentesting is just to knock down on every part of your defense.
I think how the companies don't do pen testing because they're scared of what they'll find.
Yeah.
They're doing some minimum compliance level.
But I think from a technology perspective, to your point, like, what is pen testing
is trying to attack the area?
There are companies, like, run Sibble now that do this.
They can do it continuously in the 23 minutes you describe, and I'm like, that's exactly
what an attack can do.
We run 7 by 24 by 365 at parolta.
We don't have, we're going to hire third-party people.
So what you're talking about is a company, we run that as a default, because that's our
existence.
We get compromised.
We get breached.
We have a problem.
Yeah.
You mentioned email.
I think it's like a well-known.
issue that a lot of the breaches, they happen because of social engineering, because of email,
because, you know.
Credential take, 89% of the attacks happen because of credential theft.
Right.
Somebody becomes you or me.
Okay.
So people like us are not getting any smarter, and now you have models.
Come on.
Ah, like, you try it, one percent, one percent a day, right?
Tom and cats.
But, but you, you know, now, how concern are you about, like, deep fakes and generated spearfishing
and, you know, voice attacks and all that stuff?
So to the extent they enable the act of social engineering, yes, those are concerning because
I think most forms of two-factor authentication are going to be out of the window. I still
won't say which bankers, I call it, but I call him and they say, oh, can you please confirm
your identity? And they ask me three arcane questions, which I'm pretty sure, chat GPT or Gemini
will be able to answer in sub-seconds because they're only scarring the web to find public
information about me and has me questions. So I think all those forms of
authenticating who you are
are getting easier and easier
to compromise. So the question
becomes, so the problem
you have to figure out is
you can solve it their way or our
way, as in the way they're looking at it
or the way we're looking at it.
At the end of the day, every one of these social
engineering attacks, credential takeovers,
eventually initiates
some bad activity in enterprise.
And the bad activity
in the enterprise often takes on the form
what I will call anomalous behavior, right?
Suddenly, Sarah decided to exfiltrate all the data in Eli's company,
even though she used to do email with him every day.
Today, suddenly she's logged in and she's downloading everything onto her laptop.
This actually happened last week.
Thank you.
Great deal flow.
Is there nothing suspicious than that?
That sounds pretty suspicious.
So I can spend my time trying to make sure that nobody can take over Sarah's identity,
or I can make sure that if Sarah doesn't act anomalously.
And if she does, then I throw it in a block at that point in time.
So I'm looking at life a different way.
That's why we're buying this identity company.
Because identity companies today say, oh, I checked you in the door, you're fine.
And now you can roam anywhere in my entire enterprise and do whatever you want because
you were checking into the door.
Well, that doesn't work anymore.
I have to make sure that I, now today compute and data and AI is going to allow us to
analyze all the anomalous patterns.
And if I could say that, that's pretty weird.
She'd never done this in the last seven years.
Why is she doing this now?
Does she have the rights to do it?
And I could do it just in time rights.
I can stop you from accessing this stuff.
So you have to chain the name of the game.
You can't do it the same way.
So the whole idea of us buying an identity company
is to think about life saying,
stop giving people persistent rights,
give them just in time rights,
give them rights which are analyzing
for anomalous behavior.
In fact, what I want to do is I'm like,
actually don't need you to give me a second factor authentication.
I can see the way you type.
And if your identity starts typing differently, I'll block you.
So one of the things you've done incredibly well at Palo Alto over time
is really starting with a core set of products and then expanding
and really providing both the platform and the add-ons that you mentioned.
Was that something that you came into the company knowing that you wanted to do?
Is that something that you ended up adopting over time?
And a little bit curious how you thought about that puzzle.
Well, you know, you and I've worked together before.
And I remember, you know, you had Google, a smart young man and that hasn't changed.
I came to Palo Alto and I just analyzed the business problem.
you know, the product.
So, you know, I came with two things.
Well, I understand business, too.
Larry told me many years ago that if a technology company loses sight of the product,
it decimates over time.
It's been true across technology.
You can take your pick across the tons of companies which haven't made it.
The business problem in enterprises eventually, if you look at a enterprise company less than a billion dollars in revenue,
50 to 65 percent of cost is cost of sales, marketing, and customer support.
which leaves no room for margin.
If you look at the largest enterprise companies,
that number goes to 30%.
So actually, it's all about taking that 70%
bringing down to 30.
Because R&D, GNA, don't change a lot
past a billion to 10 billion or 100 billion.
You still maintain 12 to 16% of R&D cost,
and you still maintain, if you're like efficient,
like some of the large players in the market,
or 4% GNA, or you have 6% or 8%.
So your maximum leverage comes from sales and marketing,
customer support, and marketing.
And you realize, well, what is that?
That is the ability to convince one customer that you're really good at what you do
and be able to expand in that customer's environment with that trust that you can do more
and more things for them.
So that's the insight I came in with saying, why is it that Sarah's friend has 118 vendors
who's in their interest?
Because each of them gets vetted, BOC, this is security.
It's not like buying some like random thing on the side.
There's a big procurement process.
And there's a testing process, right?
Because you could be a bad security actor and it could take my whole enterprise down.
So if you go through that entire process of validation, verification, and trust,
why shouldn't I take that right that the customer has given me,
earn their trust, and expand my capabilities across the platform?
So that's the insight we came with and realized, you know, we walk in,
we have to sell firewalls.
Hey, you want a firewall?
No, I'm sorry, I'm good for the next five years.
Oops, I had a sales guy who was focused on the account.
That was his target.
He spent three months getting a meeting, going to the process, now he doesn't have anything to sell them.
So, okay, you don't want a firewall, you want this?
You don't want this, you want this?
Eventually, enterprise salespeople have.
to have enough of an offering set that allows them
to be able to present something to the customer.
The good news is in cybersecurity,
there's always some transformation going on
because some technology is coming to its end-of-life form
and the company has an innovator
and you're off to the next set of companies.
So that's the insight.
Yeah, it's a great insight.
And if you actually look at the margin structure
for some of these things that you mentioned,
sales, customer success, et cetera, or support,
a lot of those things now have AI apps
that are making people dramatically more efficient.
Do you view the future of those sort of functions being very AI-enabled and AI-heavy?
Or how do you think about that transformation across companies like Sierra, Decagon, Rocks, and others?
Yeah, I think I'll answer the first out of the question.
The second half is your domain and Sarah's domain.
I'm going to leave you to answer that question for him because I don't know the answer to the other companies.
But I think, look, if you fundamentally look at it and look at organizational efficiency, perhaps the best word people are scared of AI-based efficiency,
I seriously doubt that an AI agent will convince the CIA or see so fast
than my human that goes and hangs out with him and shows him the product.
So I think my sales teams are very happy in their existence.
They don't believe they're imminently threatened by AI.
So that's a good thing.
Interestingly, on the product development side, I think people will be...
Just to come back to that, sorry to interrupt you,
but there's other forms of sales enablement, you know,
a deck per customer that you customize or...
Yes, but those are...
Sort of SDR or sort of...
Yeah, those are marginal.
That's process efficiency, which will come through it.
But, you know, guess what?
Before we get there,
I'm probably going to need, you know,
15 AI-savvy people to build the workflow
because an AI model is not just going to spit out
a Pallato Sassy proposal by itself.
It's going to have to be trained.
Do you think that's customer segment specific
in terms of leverage you can get?
So a lot of people think, for example,
the SDR-level seller,
where you're sending a bunch of emails out
or you're sort of cold-calling or doing,
things like that, where it's a lower ACV customer.
Yeah, but I think that's only one side of it, right?
Like, if you stick to the SDR, we'll go back to the other problem in second.
But stick to SDR, what point in time does my email start getting read by an agent,
which is called the block all SDR, block all people trying to sell me shit?
Because I'm overwhelmed by the number of people who think they should write to the CEO
because they have some company which has five people who are automating something.
I think there's a...
That's interesting.
Does that actually decrease the effectiveness of those sales teams over time, then?
In other words, if effectively
have agents screening things,
does that create a block
for certain types of sales leads?
Well, I think the question,
the sales lead is the
isn't the issue, right?
The question is generally,
either I have a need,
I know it,
in which case,
hopefully my blocking agent
knows my needs
and eventually says,
ah, you know what?
Actually, we'd be looking for this thing
and here's an email
which satisfies our thing.
So maybe that'll do it.
Or as we do in marketing, you never need a new watch
and you never need a car,
but you just buy it because somebody put in front of you.
I don't think it happens that way in enterprise.
So we can't generate demand for something
we don't know we have a need,
but sometimes you can in cybersecurity.
So look, I think those will be marginal efficiency outcomes
because how many STRs do you have eventually
you do the selling?
We do lead generation.
We're in a large enterprise business.
We go to a large rigorous testing process.
So yeah, I can get a phone call,
but if my product doesn't show up,
then it doesn't matter how good the lead was.
It's interesting because I haven't thought of it as just becoming, you know, if we have agents doing this low-level communication first, it just could become a more efficient market for finding the problem match, right?
Yeah, maybe.
So I think if you go back to the organizational efficiency question, I was talking about CF as well, I'm saying, when are you going to take a whole bunch of stuff that you're doing with human beings and replace them in some form of, you know, at least 50% efficient agentic analytics or apps?
So I think we're going to get there.
I think the max
replaceability will be in, let's call it,
administrative areas.
I have 200 people doing documentation.
Do I really need 200 people doing documentation?
If models can do 90% of the work,
I probably need AI-savvy people
to write the guardrails around these models
and templates and they can print them.
So we'll find some efficiency.
You know, pick your number,
two, 300, 400, 400,
basis of having some efficiency.
The larger you are, the better you are
because you're going to save more money.
I think sales doesn't change as much.
I think product innovation is the better companies will innovate faster.
I have technical debt.
I'd love to get rid of a whole bunch of Salesforce code
or a bunch of SAP code that I have and get them more efficient.
I can't find good people to go to it because nobody wants to work on it.
They're all working on AI.
So can I get some version of, you know, agentic AI apps
that are going to get rid of, you know, a few hundred of those people?
Great.
So I can get inefficient product development ideas out of there.
But I'm not going to let go of any good product person because I'm going to get them to move faster because that's my competitive ed.
So I think we won't see as much attrition in the bi-coding outcomes or the product development outcomes.
We'll just see faster innovation.
So basically the statements of AI displacing all of human labor, that's very overstated from your perspective relative.
I think on those two segments is, I think customer support will have to go through the revolution.
And what I mean by that is I always joke internally and tell my people, customer support,
this because we build bad products. If you have great products, why would you have to have
customer support? It's complicated. It's hard to onboard. It's got too many dials. And that's why
it takes me so much time to make it. And then eventually it's not efficient. So we have most
recently moved customer support closer to our product teams. I told my product teams,
the day you have a bug, the day you have a customer issue because the product should fix
that before we build a new feature. So I think in concept, from a North Star perspective, we should be
able to take 80, 90% of customer support out in the next two to five years across the landscape.
That's what every good company should aspire to do. And when you take it out, it means your
product quality should get better. If I'm really using a good vibe coding agent, why shouldn't
it write better code? If it's writing better code, why shouldn't it find flaws in my code much sooner?
Why shouldn't run simulation? So I'm expecting product quality gets better. I'm expecting the fixes
come faster. I'm expecting that, you know, we can diagnose the customer's problem with data as opposed
to humans calling asking all those questions.
I think there's a lot of anxiety
from engineering leaders that the product
quality is not likely to get better.
There's just going to be way more of it.
More bad quality product or more product?
More just more product, right?
That doesn't mean it has to be bad product.
That's fair, yeah.
I think the combination of generated code
that people are not fully understanding
from an architectural point of view,
fully reading and reviewing,
and that volume overwhelming,
like the processes that we have today
in software development is an anxiety.
And let's let's let's let's let's let's let's barst that for a second. It is the worst right now that it's ever going to be exactly. It's only going to get better. So you're the most optimistic security person I know that's great. Is that by restricted to be optimistic in security? Well, it's not like a super it's not a super optimistic group. I'm just saying.
Look, I think at the end of the day, we have to do best we have to do a best job where we can and eventually life takes over. So it's fine. But I think from a from a from a quality.
perspective, anecdotally, I have seen examples which have huge promise. I've seen
the vibe coding agent find a vulnerability and security code at Palo Alto, which we wouldn't
have found unless it was out in the wild, which is a good thing for us. I've seen it take
500 lines of code and come back with 75 lines of code, which would have been much more efficient
in doing the task that 500 lines of code is in. I've seen it come back with code explainability,
which was written 15 years ago. We can't find the person who wrote it.
There's some amazing example of what the art of the possible is.
Now, if that's just what we get today, I think one, two, three or some now,
that stuff's just going to get better and better.
So from that perspective, do I believe product quality gets better?
100% believes that product quality gets better.
I don't think there's any debate in that topic, right?
Now, it doesn't, look, there is no solution for stupidity in the world.
So if you don't have good people reviewing this stuff, then yes, you can end up with bad
because AI is bad.
It's because you didn't set up the right card rails, the right process to do it.
I think we're going to get good quality outcomes.
I think we're going to see better and better capability.
I think the most vulnerable area in any enterprise are large amounts of humans doing repetitive tasks,
which are either generic, which is the easiest to replace because everybody is doing it.
We don't need to them.
Or even if they're specific and high critical mass, if I have 2,000 people on customers board,
I should be able to optimize a lot of that.
And hopefully deploy those people in much more meaningful things.
I mean, imagine.
And I have a child, you have children.
Do you really want them to grow up and become customer support people?
And say, every time, what do you do?
I just pick up the phone and listen to somebody grumbling at the other end
because something they bought is not working.
That sounds like a really horrible job.
So I think those jobs should go away.
What's wrong with that?
And they don't have the power to fix it.
Yes, that's worse.
It's like, you know, it's getting punched in the face and not being able to punch back.
So somebody's taught them customers always right.
Can we talk a little bit about leadership?
Sure.
You're very unique leader.
So this is a time of like at least a handful of,
of companies growing very quickly because they create some trades in billions of dollars.
Yeah, yeah.
Well, and like, you know, an optimist myself, I'd say like AI wrappers are not, they're creating
a lot of value and consumers and enterprise are buying very quickly.
Yeah.
Google from 2004 to 2014, Palo Alto, seven years in, you've added like the size of a Palo Alto
originally every single year since in terms of enterprise value.
That's wild.
Like, what makes you a great leader?
in terms of growth at scale?
Like, what advice do you have
for some of these people who are like?
Guinness Book of World Records
in terms of growth the first few years.
Look, if you step back, it's interesting.
Every business that you've identified
or you've looked at, we've talked about,
has a much larger time
than any of these companies is able to touch.
The markets are growing.
You have the opportunity to take share
and grow in that market.
So I'm a huge fan of growth businesses.
I hate the idea of going restructuring something
which is on a declining curve.
It'd be sort of scare me.
So it's good to find the right market
from a growth perspective.
I think I always have this principle
but nobody wakes up the morning
and goes to work to screw up.
Oh, you wake up saying, oh shit,
I went on my worst job possible.
No chance in hell.
You can find people, you can get a group of people together.
They can be innovative as hell
and go put a rocket into space faster than NASA.
These are all humans.
They're all people out there.
There's no difference in many of them
then people who work at Palo Alto, the people who work at Google or elsewhere.
So what creates the difference between great companies and companies that are not
as good?
Because I'd say within reason, the people, you can find those people in every company.
I think it boils down to understanding the market, setting the right notes,
or getting enough buy-in, talking about the why, not just the what you need to get done,
and getting people really excited and bought into it.
And then making sure they have the...
the resources to get their task done. If you do that, then my job is just like, set the strategy,
set the north start, put the right people in place, and then basically act as their shield
and keep blocking bad things or friction from slowing you down. So if you can do all of those
things in a way, you know, there is a high probability in create good outcome. Never guaranteed.
Are there any unique structures or approaches or tactics that you do that go against the grain?
We've talked to Jensen a few times, and he's pointed out, for example, he has like 40 direct
reports. He doesn't do one-on-ones. I actually read this. I actually read
that I actually tried
I actually expanded my staff meeting
from 8 to 25
after I read that.
It's interesting
and it solves a different problem
at least from my case.
I've discovered that
I'm not always sure
that these people
communicate the why
to their teams
that at least eliminates
one level of confusion
and why does he want this?
Oh, actually I heard him directly
this is what he wants to get done
because sometimes you have this notion
of you have to be player,
you have to be coach,
sometimes you have to be directive,
sometimes you have to be encouraging.
It's like, we're going to climb that mountain.
We aren't going to climb that mountain.
And it's like, if you go back,
so we're going to climb a mountain from
as people end up on that one.
So it's important for people
to understand the communication parts
and I've discovered
that communication actually is underrated
in organizations.
And usually the way I do my sort of,
call it 360 degree test
is I meet 50 employees every two weeks
and ask them questions.
Then I discover, oh my God,
these people are asking questions
which I thought was abundantly
clear why we're doing this, what we're doing this for, and discover, eventually that by the time
you get to the person, four or five levels removed from you, they actually don't understand
exactly why we're doing certain things. They have fundamental questions around what we're doing.
And that causes them to do it differently or not do it. So that becomes sort of an issue
from communicating. You asked me about, you know, what do we do, what have we done differently
compared other people? I think communication, talking to people, makes sure they're all bought in.
But I think from a business strategy perspective, we've taken a very different approach to M&A.
We've bought 27 companies so far.
We're about to buy our largest one if we get approval, get it done.
And I don't call it MNA.
I call it product development and research in a highly innovative market where all of you guys are kind enough to support innovation.
It's distributed R&D.
Exactly right.
Distributed R&D.
We're in this service for you.
Well, you guys do reasonably well for that.
So you can say thank you and I can say thank you.
And I can say, if you don't win and I don't win, if you don't win, I don't win, it's fine.
So we're happy to be in a situation where you get to a certain stage and we get it past that stage and take it to scale.
Because I don't believe we have all the smarts in the world.
There are lots of smart people out there who are trying to solve different problems.
And I don't believe we can solve all the problems simultaneously.
You take AI.
You know, we started off saying, oh, my God, when AI gets deployed and a lot,
is going to want to make sure that their AI instances are clustered and controlled and managed
because they don't want data leaking, they don't want, you know, external inputs, so we've built
a factory what we called an AI firewall.
It's great.
If we discovered this company, you know what?
People actually trying to figure out these models that they have are hackable, have malware,
and then, so we weren't doing that.
We were just protecting them once they were in there.
So the company called Protect.coma, which was actually assessing models.
They were doing persistent redteaming against the AI models, specifically, not just the enterprise,
because, you know, going back to your comment on red teaming, models morph, their responses
are non-predictable, non-deterministic.
The same model could answer the same question one way today and could answer it differently
one week later because it learned.
Now, if you start getting non-predictive responses, you have to inspect all the responses
to make sure none of them is malware.
Yeah.
Right?
So from that perspective, we do persistent red teaming of models.
We do scanning a model.
So they said, oh, shit, we haven't done it.
We found this company.
And we said, thank you very much.
the VC community for delivering that from an R&D perspective,
we put them part of our platform.
So we do rely on, like you said, R&D as a service from the VC community.
And that's been very helpful.
But we always go for number one or number two.
We never believe that you can take three or four in spit and shine
and make it look like one or two.
Because one or two don't go away.
They trade as one or two for a reason.
We more often than not make the leaders of the acquired companies,
the leaders of our business because we believe
they've acted faster than us in a much more
resource-co-strain environment, shown a tremendous amount
of resource-volves and hustle.
They would know the outcomes and innovation.
So I think from that perspective,
we probably have a higher hit rate
in the industry than most of the M&A
that has happened in the enterprise space.
So that thing, we do that differently.
Yeah.
You, at the beginning with Palo Alto and continued today,
like have been perhaps more ambitious
than any other security company.
I think that's right.
How do you convince an organization to be more ambitious?
Because, you know, my understanding of the cyber industry before is you had like endpoint businesses and firewall businesses, very sort of domain specific, right?
I don't think you have to convince humans to be more ambitious.
I think we are natively and naturally ambitious.
Like, you meet somebody saying, can you do more?
I've never heard someone say, I think I'm done.
Like everybody says, I want more.
I can do more.
Like, we live in a consumer society.
We're all taught to aspire for more.
So I don't think it's hard to make people that power
to feel that we have the right to play
at a bigger table on a constant basis.
And people actually like the idea of ambition,
aspiration, and winning.
Like, you know, trust me,
if our stock wasn't up six or seven times
in the last seven years,
a lot more people internally would have questions
on our strategies than they have now.
So I think it's a self-fulfilling prophecy.
It's a good thing across the board.
But I think more fundamentally,
if you step back, our industry is not fully formed.
It has 118 vendors, it's fragmented.
You know, you take a look at the CRM industry,
look at the ERP industry, look at the HR industry.
These things operate on single platforms, right?
Nobody has two sales forces deployed in an enterprise.
Nobody has two workdays deployed in enterprise.
Nobody has two SAPs deployed in enterprise.
Why?
Because you need end-to-end visibility,
a singular workflow, a singular set of,
analytics to solve the problem.
Our industry has started off because, oh, my God, we have a threat block to threat.
You know, don't work out.
So, like, we're playing back a mold.
So the point is, as I said, over time, as these requirements normalize, they somewhat sort of,
you know, what do you say, converge in the capabilities of vendors, then how does it
matter if you take from one versus the other and what becomes more important?
All these platform companies I talked about, it's not like they have unique features on a feature
by feature basis compared to the competitors over time.
those have normalized. But what they do have, they have an end-to-in visibility and capability
that integrates the functionality. That's why they're there. So if cybersecurity has to survive
in the long-term as a mature industry, we also have to become sort of a singular enterprise
platform. If you believe that, we're nowhere there, right? We've taken, we used to have 24
products when I came. We had four when I came. We took it to 24. We had 44 magic cordon's top
right mentions. We've turned that into three platforms. I'll say, you know, if you're
going in a journey with us is going to take you two to three years to get a platform deployed.
We're three right now because, had we said one, oh my God, my God, your friend can't go from
118 to one. It's like a boggles mind. So let's take you three first, or 103. And hopefully
get him to the next one, we get on the third one. So I think the idea, from our perspective,
is if we can become the platform of choice in the industry, that's a very big ambition,
very big North Star. But it's like, you don't get there if you don't start.
Maybe when you look forward both, okay, Palo Alto.
So, cybersecurity, AI.
Three questions.
What keeps you up in night?
Do you think most about?
I think most about AI.
Not, I'm glad we think about something you do.
I think more about AI from the vantage point that if our view of the world,
of how this is going to evolve, is not within the guardrails where it's going to be,
you may end up taking part of a lot in a different direction.
Because remember, we exist to help you secure technological advancement in a certain direction
before you fully deployed.
I want to give, like, you know, today our conversation with some of the big cloud providers
are how is everybody thinking about a genetic care?
I'm supposed to secure agents.
The problem is, I can't get one person to agree with the other person's definition of an agent.
I'm like, what's an agent?
Well, are you going to use MCP protocols to deploy?
Well, no, we just have connectors.
What's a connector in an LLM?
A connector is effectively API call,
microservices call.
Why are you using API calls?
There's been called API calls in the past.
Why aren't you using an CP server client and clients?
Well, we're going to get there.
Right?
Well, are you going to do an inspection of identity?
We're going to register the identity somewhere else.
Like, what's an agent?
How are you going to do an identity?
It's going to be delegated?
So there's like so many questions from an executioner perspective
from how the industry evolves that this kind of quite keeps me up in mind.
We talk about this every day.
we have a team of people getting together every day for two hours and we read everything.
We didn't talk about saying, what do you think about this?
What do you think about that?
Because we don't have an expert, then hopefully the collective wisdom of six or seven smart
people, those people who I bring together every week, every two, three times is probably
better.
So we're constantly trying to paint a picture of how the world of AI is going to evolve.
So we're building our opinion.
And based on that, we have to design a product.
So that's extremely bleeding edge.
Right.
And if you want to be the cybersecurity partner choice, we have to be.
be able to go with the bleeding his capability and tell our customer, look, you have a problem.
We're solving it faster than anybody else, and we can help you deploy AI securely.
So that's kind of what is kind of exciting, a little bothersome because, you know, they've got
to keep understanding.
You've got to think, oh, shit, we're just here.
We'll do that now.
Yeah.
As you've been thinking deeply about AI, have you thought about it from a broader societal
perspective impact of the world?
Like, are there specific threads that you think about most or worry about most or optimistic?
about most. I'm going to stick with Sarah's characterization of me as one of the more optimistic
people about these things. I'm going to expand that beyond cybersecurity. Like, I think it's exciting
technology. You know, you've been in Silicon Valley for a long time. You've been a Silicon Valley
where you've been investing for so long. The intensity, the excitement is palpable, right? You can't
turn around. It's really fun again, yeah. It's really fun again, right? You're like, you can have
dinners, you can debate all kinds of arcane topics, and I'm pretty sure you can find lots of people
different opinions, and there's so many different directions you can go to. You can talk
about policy implications, you talk about rappers, you can talk about LMs, you get about infrastructure.
It's like a whole new technological way, which has so many implications and majorly disrupted
implications. From that perspective, I think, exciting. Does every technology come as a double
head sword? Of course it does. Every revolution in history has come as a double ed sword. So why not
this one? He's not to believe in the power. There's more good people in the world than bad people.
The good people will hopefully continue to make sure that the bad.
things get controlled.
Now, is it going to,
is some bad things going to happen?
Most likely.
Are we going to find a way around it?
Most likely, hopefully.
We're going to be powered through a pandemic,
for crying out of life.
Awesome.
Thank you, Nakash.
Yeah, thank you so much.
Thank you for your time.
Appreciate it.
Find us on Twitter at No Pryor's Pod.
Subscribe to our YouTube channel.
If you want to see our faces,
follow the show on Apple Podcasts, Spotify,
or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find
transcripts for every episode at no-dashfires.com.
