No Priors: Artificial Intelligence | Technology | Startups - The Best of 2025 (So Far) with Sarah Guo and Elad Gil
Episode Date: October 31, 2025. 2025 has thus far been a year of great leaps and advances in AI technology. And Sarah and Elad have spoken with some of the most enterprising founders and scientific minds in the field of AI today. So... we’re revisiting a few of our favorite conversations on No Priors so far in 2025 – Winston Weinberg (Harvey), Dr. Fei-Fei Li (World Labs), Brendan Foody (Mercor), Dan Hendrycks (Center for AI Safety), Noubar Afeyan (Flagship Pioneering), Brandon McKinzie and Eric Mitchell (OpenAI o3), Isa Fulford (OpenAI), Arvind Jain (Glean), and Dr. Shiv Rao (Abridge). Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Chapters: 00:00 – Episode Introduction 00:21 – Winston Weinberg on Leaning into New Capabilities 02:01 – Dr. Fei-Fei Li on Spatial Intelligence 04:13 – Brendan Foody on AI Disruption in the Workforce 06:10 – Dan Hendrycks on the Geopolitics of Superintelligence 08:06 – Noubar Afeyan on Entrepreneurship 10:38 – Brandon McKinzie and Eric Mitchell on Reasoning Models 12:41 – Isa Fulford on Training Deep Research 13:49 – Arvind Jain on Innovating Enterprise Search 16:21 – Dr. Shiv Rao on AI’s Human Impact 18:58 – Conclusion
Transcript
2025 has been another remarkable year in AI.
This week on No Priors, we're sharing our favorite moments from the podcast from the year so far.
We've talked to visionary leaders at Harvey, OpenAI, Glean, Abridge, and more.
We also talked to legends of science like Dr. Fei-Fei Li and Noubar Afeyan.
But first, let's start with a moment that captures the magic of leaning into new capabilities at the right time.
Harvey CEO Winston Weinberg discovered an extraordinary opportunity hidden in plain sight.
Gabe and I actually had met a couple years before, and I definitely didn't know anything about the
startup world and didn't have a plan of doing a startup. And what had happened was he showed me
GPT-3, which at the time was public. And I was, first of all, just incredibly surprised that no one
was talking about GPT-3 and no one was using it in any way, shape, or form. And he showed me that.
and I showed him kind of my legal workflows.
And the kind of aha moment was we went on r/legaladvice,
which is basically a subreddit where people ask a bunch of legal questions.
And almost every single answer is, so who do I sue?
Almost every single time.
And we took about 100 landlord-tenant questions.
And we came up with kind of some chain of thought prompts.
And this is before, you know, anyone was talking about chain of thought or anything like that.
and we applied it to those landlord-tenant questions, and we gave it to three landlord-tenant
attorneys, and we said nothing about AI. We just said, here is a question that a potential
client asked, and here is an answer. Would you send this answer without any edits to that client?
Would you be fine with that? You know, is that ethical? Is it a good enough answer to send? And
86 out of 100 was yes. And actually, we cold emailed the General Counsel of OpenAI, and we sent him
these results. And his response basically was, oh, I had no idea the models were this good
at legal. And we met with the C-suite of OpenAI a couple weeks after. Now, from legal reasoning
to spatial intelligence, the legendary Dr. Fei-Fei Li opened our eyes to an entirely different
dimension of AI capability. I think, from a neural and cognitive science point of view, that
spatial intelligence is a really hard problem that evolution has to solve for animals. And what's
really interesting is, I think animals have solved it to an extent, but not fully solved it.
It's one of the hardest problems because what is the problem an animal has to solve?
Animals have to evolve the capability of collecting lights in something, which we call eyes, mostly.
And then with that collection of eyes, it has to reconstruct a 3D world in their
mind somehow so that they can navigate and they can do things. And of course, they can interact.
For humans, we're the most capable animal in terms of manipulation. We can do a lot of things.
And all this is spatial intelligence. To me, that's just rooted in our intelligence. What is
interesting is it's not a fully solved problem, even in animals. We, for example,
for humans, right?
If I ask you to close your eyes right now and draw out or build a 3D model of the
environment around you, it's not that easy.
We don't have that much capability to generate extremely complicated 3D models until we get trained.
You know, there are some of us, whether they're architects or designers or just people with a lot of training and a lot of talent.
And that's a hard thing to do.
And imagine you could do it at your fingertips much more easily
and allow much more fluid interactivity and editability.
That would just be a whole different world for people, no pun intended.
Data is the beast feeding the AI train.
And thus, Mercor CEO Brendan Foody is working with major AI labs on how to build what's next.
He gives a clear prediction about what's coming for the workforce.
I think displacement in a lot of roles is going to happen very quickly, and it's going to be very painful and a large political problem.
Like, I think we're going to have a big populist movement around this and all the displacement that's going to happen.
But one of the most important problems in the economy is figuring out how to respond to that, right?
Like, how do we figure out what everyone who's working in customer support or recruiting should be doing in a few years?
How do we reallocate wealth once we approach superintelligence, especially if the value and gains of that are more of a power law distribution?
And so I spend a lot of time thinking about, like, how that's going to play out.
And I think it's really at the heart of it.
What do you think happens eventually?
X percent of people get displaced from, like, white-collar work.
What do you think they do?
I think there's going to be a lot more of the physical world.
I think that there's also going to be a lot of niche skills.
What does the physical world mean?
Well, it could be everything ranging from people that are creating robotics data
to people that are waiters at restaurants or are just like therapists
because people want like human interaction, like whatever that looks like.
I think that automation in the physical world is going to happen a lot slower than what's happening in the digital world just because of so many of the self-reinforcing gains and a lot of self-improvement that can happen in the virtual world, but not physical one.
Which brings us to one of the biggest questions of our time. How do we navigate the geopolitical implications of superintelligence?
Dan Hendrycks, the director of the Center for AI Safety, has an answer.
Let's think of what happened in nuclear strategy.
Basically, a lot of states deterred each other from doing a first strike because they could
then retaliate.
They had a shared vulnerability.
So we're not going to do this really aggressive action of trying to make a bid to wipe you
out because that will end up causing us to be damaged.
And we have a somewhat similar situation later on when AI is more salient.
when it is viewed as pivotal to the future of a nation.
When people are on the verge of making a superintelligence, or
when they can, say, automate, you know,
pretty much all AI research,
I think states would try to deter each other
from trying to leverage that to develop it
into something like a super weapon that would allow the other countries to be crushed
or use those AIs to do some really rapid automated AI research
and development loop that could,
have it bootstrap from its current levels to something that's a superintelligence vastly more
capable than any other system out there. I think that later on, it becomes so destabilizing
that China just says we're going to do something preemptive like do a cyber attack on your data
center. And the U.S. might do that to China. And Russia, coming out of Ukraine, will, you know,
reassess the situation, get situationally aware, think, oh, what's going on with the U.S.
and China? Oh, my goodness. They're so ahead on AI.
AI is looking like a big deal. Let's say it's later in the year when, you know, a big chunk of
software engineering is starting to be impacted by AI. Oh, wow, this is looking pretty relevant.
Hey, if you try and use this to crush us, we will prevent that by doing a cyber attack on you.
And we will keep tabs on your projects because it's pretty easy for them to do that espionage.
Noubar Afeyan has been thinking about how biotech gets built and how to change the game for three
decades. His breakthroughs have impacted global health. He's the founder and CEO of Flagship Pioneering
and the co-founder of Moderna.
He wants to make entrepreneurship a scientific effort, not a random one, and he thinks AI can help.
The motivation for Flagship stems from what I was doing before, which was that I started a company in 1987,
when 24-year-old immigrants didn't start companies in this country.
But instead, it was kind of like former Merck senior executives or IBM senior executives
were the only ones who were entrusted with the massive amounts of venture capital,
namely the two, three million dollars per round.
So this was very early days.
And I had the kind of chance, opportunity to start a company right out of my graduate school
and ended up raising quite a bit of venture money and eventually kind of went down a path
of entrepreneurship.
Along the way, one of the things that interested me was why it is that kind of the
entrepreneurial process was supposed to be random, improvisational,
kind of idiosyncratic, almost emotional, gamey, all of those things that I kind of thought were
a bit of a put-off when it comes to actually doing things in a serious professional way.
And I kind of used to go around in the very early 90s saying, why isn't entrepreneurship a
profession?
And if it was going to be a profession, how could it be a profession?
What do you mean by gaming?
Because it's like supposed to fail most of the time.
And once in a while you win and then you celebrate the win.
And what I mean is like it...
It's random.
But not only random, but there's like winners and losers and keeping score.
I don't know.
It's maybe the wrong word, but I just mean like people even call gamification in the software space.
There is a version of this.
Like, I don't mind being playful because if you're overly serious, sometimes you miss things.
But it can't just all be play.
We take hard-earned money.
We deploy it to do things that are damn near impossible.
Once in a while, we reduce them to practice.
so they become not only possible but valuable.
And yet, people treat it like, oh, well, you know, it didn't work.
There's 20 different things we tried.
One of them worked.
That, I don't know, as an engineer by background, as a scientist, I just thought that what we do,
especially listen, in health care, especially in climate, especially in kind of like agriculture,
food security.
You can't just think of this as, you know, shots on goal.
You've got to kind of say, hey, we can get better at this.
Reasoning is the biggest paradigm shift in AI architecture since the transformer.
Brandon McKinzie and Eric Mitchell from OpenAI explained a crucial insight about reasoning models.
I can give maybe very concrete cases for like the visual reasoning side of things.
There's a lot of cases where, and back to also the model being able to estimate its own uncertainty,
you'll give it some kind of question about an image and the model will very transparently
tell you, in a way I thought was interesting, like, I don't know, I can't really see the thing you're talking about very well.
It almost knows, like, that its vision is not very good. And what I would say
is kind of magical is when you give it access to a tool, it's like, okay, well,
I got to figure something out. Let's see if I can like manipulate the image or crop around here
or something like this. And what that means is that it's, like, a much more productive
use of tokens as it's doing that. And so your test-time scaling slope, you know,
goes from something like this to, you know, something much steeper. And we've seen exactly
that, like the test time scaling slopes for without tool use and with tool use for
visual reasoning specifically are very noticeably different. Yeah, I was going to say,
like, for writing code for something, there are a lot of things that an LLM could try to
figure out on its own but would require a lot of attempts and self-verification that you could
write a very simple program to do in, like, a verifiable and, you know, much faster way. So, you know,
if I say, do some research on this company and, like, use this type of, you know, valuation model to tell me,
you know, what the valuation should be, like, you could have the model, like, try to crank through that
and, like, fit those coefficients or whatever in its context, or you could literally just have it,
like, write the code to just do it the right way and just know what the actual answer is.
And so, yeah, I think, like, part of this is you can just allocate compute a lot more efficiently,
because you can defer stuff that the model doesn't have a comparative advantage in doing
to a tool that is, like, really well-suited to doing that thing.
Sometimes the most profound moments in AI development aren't the grand theoretical breakthroughs.
They're based on taste, data generation, and grinding work.
The visceral experience of watching something you hoped would work actually come to life.
Isa Fulford from OpenAI captures that moment perfectly.
Here, she's describing the training that went into Deep Research.
It really was one of those things where we thought that, you know, training on browsing tasks would work.
You know, felt like we had good conviction in it.
But actually, the first time you train a model on a new dataset,
using this algorithm and seeing it actually working and playing with the model was pretty
incredible, even though we thought it would work. So honestly, just that it worked so well was
pretty surprising, even though we thought it would, if that makes sense. Yeah. Yeah. It's a
real experience of, like, oh, the path is paved with strawberries or whatever. Exactly. But then sometimes
some of the things that it fails at are also surprising. Like, sometimes it will do such smart things
and then make a mistake
where I'm just thinking, why are you doing that?
Stop.
So I think there's definitely a lot of room for improvement.
But yeah, we've been impressed with the model so far.
One of the biggest surprises of AI
and a core principle for us here at Conviction
is how it can make bad markets suddenly good ones.
The right technology can meet the right moment in unexpected ways.
Arvind Jain built Glean in what everyone said was a graveyard market:
enterprise search.
It was like a graveyard, like, you know, of all these companies
that tried to solve the problem, and they didn't.
Part of it was just that, I think search is a hard problem.
In an enterprise, like, even getting access to all the data that you want to search,
it was such a big problem.
In the pre-SaaS world, there was no way to sort of go into those data centers,
figure out where the servers were, where the storage systems were,
trying to connect with information in them.
It was a big challenge.
The SaaS actually solved that issue.
So, like, search products, like most of them, most of the companies started in the pre-SaaS world,
and they failed because you couldn't
build a turnkey product. But SaaS actually allowed you to actually build something,
you know. My insight was that, like, look, you know, the enterprise world has changed.
We have these SaaS systems now, and SaaS systems don't have versions.
Like everybody, all customers have the same version, you know, they are open, they're interoperable,
you can actually hit them with APIs and get all the content.
I felt that the biggest problem was actually solved, which was that I could actually
easily go and bring all the enterprise information and data in one place.
and build this unified search system on top.
So that was actually a big unlock.
And by the way, the origins of Glean is,
so at Rubrik, you know, we had this problem.
Like, you know, we grew fast.
We had a lot of information across 300 different SaaS systems
and nobody could find anything in the company.
And people were complaining about it in our Pulse surveys.
And I was, you know, I always run IT in my startups.
And so there's a complaint that, you know, it came to me.
Like, I had to solve it.
So I tried to buy a search product and I realized there's nothing to buy.
I mean, that's really the origins of how Glean got started as a company.
So that was like, you know, one big issue. Like, you know, SaaS made it easy for us
to actually connect, you know, your enterprise data and knowledge to a search system. So that actually
made it possible for us to, for the very first time, build a turnkey product. But there are a lot
of other advances as well. You know, one is, you know, like, look, you know, businesses have so much
information and data. One interesting, you know, fact is one of our largest customers. They have
more than one billion documents inside their company. Now, here's the thing, you know: when
Alar and I, you know, when we were working on search at Google, you know, in 2004, the entire
internet was actually one billion documents. You know, there's a massive explosion of content
like inside businesses. So you have to build scalable systems and you couldn't build like a system
like that before in the pre-cloud era. Perhaps no story captures the human impact of this AI moment
and its potential better than what's happening in healthcare. Here's Shiv Rao, CEO and founder
of Abridge. It's pretty heroic in general for a doctor to give you feedback like, hey, this sucked,
and you've got to do better.
Like, you didn't recognize the way I said this medication, or I'm a gastroenterologist,
and I would never, you know, sequence my problems in my assessment and plan section of my note
this way.
It doesn't serve me well and makes me look terrible as a doctor or whatever.
We get that feedback.
We love it.
It's oxygen.
But then we also get the feedback that's like, hey, this is amazing.
And I'm not going to retire anymore.
And I've got like years, decades left in my career now, thanks to this technology.
But in this channel, love stories, all of that feedback, that positive feedback, we just
get it, like, programmatically funneled, so any one of our people inside of the company can always go
into that channel. And it's, like, purpose, you know, it's, like, fulfillment, immediately. Like, you immediately
understand why we're all working so hard and why it makes sense. Because, like, being on this
very telephone-pole-like journey these last couple years is obviously, like, it's new for so many of us,
and we're all kind of building new muscles, but it's, it's a lot of pressure. But this is my favorite
bit of feedback. So this love story comes from a doctor at Tanner Health, which is a rural health
system. And she wrote to us. She wrote, I was sitting at dinner last week, and my son asked me,
Mommy, why aren't you working right now? I literally took my phone out and explained to him that
Abridge is a new tool that lets Mommy come home early and eat dinner with her family. I started
to tear up and looked over at my husband, who then said, Mommy's going to be able to eat dinner
with us every night now. And we get feedback like that, like every day, you know? And so, like,
There's dopamine hits, you know, in hypergrowth.
And, like, those are awesome.
But I think that they get us through, like, sprints.
But I think it's the oxytocin hits like this.
It's the purpose.
It's the fulfillment.
It's, like, that's, I think, what we're really after in this company.
And so, like, everybody's mission driven out there.
But I think this mission, like, it hits me at least a little bit different.
These conversations remind us that we're living through a hinge moment in history.
Stay tuned as we have more conversations with the builders and thinkers leading the way for the rest of the year.
If you like what we're doing, leave us a review on Apple Podcasts or Spotify, comment on YouTube, or let us know who we should have on as a guest. Thanks for listening.
Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.
