No Priors: Artificial Intelligence | Technology | Startups - How Agentic AI is Transforming The Startup Landscape with Andrew Ng

Episode Date: August 21, 2025

Andrew Ng has always been at the bleeding edge of fast-evolving AI technologies, founding companies and projects like Google Brain, AI Fund, and DeepLearning.AI. So he knows better than anyone that founders who operate the same way in 2025 as they did in 2022 are doing it wrong. Sarah Guo and Elad Gil sit down with Andrew Ng, the godfather of the AI revolution, to discuss the rise of agentic AI, and how the technology has changed everything from what makes a successful founder to the value of small teams. They talk about where future capability growth may come from, the potential for models to bootstrap themselves, and why Andrew doesn't like the term "vibe coding." Also, Andrew makes the case for why everybody in an organization—not just the engineers—should learn to code.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AndrewYNg

Chapters:
00:00 – Andrew Ng Introduction
00:32 – The Next Frontier for Capability Growth
01:29 – Andrew's Definition of Agentic AI
02:44 – Obstacles to Building True Agents
06:09 – The Bleeding Edge of Agentic AI
08:12 – Will Models Bootstrap Themselves?
09:05 – Vibe Coding vs. AI Assisted Coding
09:56 – Is Vibe Coding Changing the Nature of Startups?
11:35 – Speeding Up Project Management
12:55 – The Evolution of the Successful Founder Profile
19:23 – Finding Great Product People
21:14 – Building for One User Profile vs. Many
22:47 – Requisites for Leaders and Teams in the AI Age
28:21 – The Value of Keeping Teams Small
32:13 – The Next Industry Transformations
34:04 – Future of Automation in Investing Firms and Incubators
37:39 – Technical People as First Time Founders
41:08 – Broad Impact of AI Over the Next 5 Years
41:49 – Conclusion

Transcript
Starting point is 00:00:00 Hi, listeners. Welcome back to No Priors. Today, Elad and I are here with Andrew Ng. Andrew is one of the godfathers of the AI revolution. He was the co-founder of Google Brain, Coursera, and the venture studio AI Fund. More recently, he coined the term agentic AI and joined the board of Amazon. Also, he was one of the very first people a decade ago to convince me that deep learning was the future. Welcome, Andrew. Andrew, thank you so much for being with us.
Starting point is 00:00:30 Always great to see you. I'm not sure where we should begin because you have such a broad view of these topics, but I feel like we should start with the biggest question, which is, you know, if you look forward at capability growth from here, where does it come from? Does it come from more scale? Does it come from data work?
Starting point is 00:00:46 Multiple vectors of progress. So I think there is probably a little bit more juice to be squeezed out of scaling, so hopefully we'll continue to see progress there, but it's getting really, really difficult. Society's perception of AI has been very skewed by the PR machinery of a handful of companies with amazing PR capabilities, and because that small number of companies drove the scaling narrative, people think of scale as the first vector of progress. But I think, you know, agentic workflows, the way we build multimodal models,
Starting point is 00:01:13 we have a lot of work to do to build concrete applications. I think there are multiple vectors of progress, as well as wild cards, like brand new technologies: can diffusion models, which are used to generate images for the most part, also work for generating text? I think that's exciting. So I think there'll be multiple ways for AI to make progress. You actually came up with the term agentic AI.
Starting point is 00:01:31 What did you mean then? So when I decided to start talking about agentic AI, which wasn't a thing when I started to use the term, my team was slightly annoyed at me. One of my team members, who I won't name here, said, Andrew, the world does not need you to make up another term. But I decided to do it anyway, and fortunately it's stuck.
Starting point is 00:01:48 And the reason I started to talk about agentic AI was because a couple years ago, I saw people would spend a lot of time debating: this is an agent, this is not an agent, what is an agent? And I felt there was a lot of good work, and there was a spectrum of degrees of agency, where there are highly autonomous agents that could plan, take multiple steps, reason,
Starting point is 00:02:06 do a lot of stuff by themselves, and then things that had lower degrees of agency, where you prompt an LLM, and people were debating whether this is an agent or not. And I felt like, rather than debating whether this is an agent or not, let's just acknowledge the degrees of agency and say it's all agentic,
Starting point is 00:02:19 so people can spend their time actually building things. So I started to push the term agentic AI. What I did not expect was that several months later, a bunch of marketers would get a hold of this term and use it as a sticker to stick on everything in sight. And so I think the term agentic AI really took off. I feel like the marketing hype has gone up insanely fast, but the real business progress has also been rapidly growing, just maybe not as fast as the marketing. What do you think are the biggest obstacles right now to true agents actually being implemented as AI applications? Because to your point, I think we've been talking about it for a little while now,
Starting point is 00:02:52 there are certain things that were missing initially that are now in place, in terms of everything from certain forms of inference-time compute through to forms of memory and other things that allow you to maintain some sort of state against what you're doing. What do you view as the things that are still missing or need to get built, or that will sort of foment progress on that end? At the technology component level, there's stuff that I hope will improve. For example, computer use, you know, kind of works, often doesn't work. And then guardrails, and evals, are a huge problem.
Starting point is 00:03:16 How do we quickly evaluate these things and drive evals? So I think at the component level, there's room for improvement. But what I see as the single biggest barrier to getting more agentic AI workflows implemented is actually talent. So when I look at the way many teams build agents, the single biggest differentiator that I see in the market is, does the team know how to drive a systematic error analysis process with evals? So you're building the agents by analyzing, at any moment in time, what's working, what's not working, what to improve, as opposed to less experienced teams that kind of try things in a more random way, and it just takes a long time. And when I look at a huge range of businesses, small and large, it feels like there's so much work that can be automated through agentic workflows, but, you know, the talent and the skills, and maybe the software tooling, I don't know, just isn't there to drive that disciplined engineering process to get this stuff built.
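To make that error-analysis loop concrete, here is a minimal sketch; `run_agent`, the field names, and the tiny hand-labeled eval set are illustrative assumptions rather than anything from the episode. The point is simply tallying failures per field so a team knows which step of the workflow to improve next.

```python
from collections import Counter

# Hypothetical entry point for whatever agent is being built; a real system
# would call the agentic workflow here instead of this stub.
def run_agent(case: dict) -> dict:
    return {"invoice_date": case["expected"]["invoice_date"], "routed_to": "finance"}

# Tiny hand-labeled eval set: inputs plus the outputs a human judged correct.
EVAL_CASES = [
    {"input": "invoice_001.pdf", "expected": {"invoice_date": "2025-03-14", "routed_to": "finance"}},
    {"input": "invoice_002.pdf", "expected": {"invoice_date": "2025-04-02", "routed_to": "legal"}},
]

def evaluate(cases: list) -> Counter:
    failures = Counter()
    for case in cases:
        output = run_agent(case)
        # Score each field separately so errors land in named buckets,
        # which is what tells you what to fix next (extraction vs. routing, etc.).
        for field, want in case["expected"].items():
            if output.get(field) != want:
                failures[field] += 1
    return failures

if __name__ == "__main__":
    print(evaluate(EVAL_CASES))  # e.g. Counter({'routed_to': 1}) -> routing is the weak step
```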
Starting point is 00:04:01 How much of that engineering process could you imagine being automated with AI? You know, it turns out that a lot of this process of building agentic workflows requires ingesting external knowledge, which is often locked up in the heads of people. So until and unless we build AI avatars that can interview employees doing the work, and visual AI that can look at the computer monitor, I think maybe eventually, you know,
Starting point is 00:04:32 but I think at least right now, for the next year or two, I think there's a lot of work for human engineers to do to build more agentic workflows. So that's more the kind of collection of data, feedback, et cetera, for certain loops that people are doing. Or are there other things? I'm sort of curious what that translates into tangibly versus... Yeah, so one example.
Starting point is 00:04:52 So I see a lot of workflows like, you know, maybe a customer emails you a document, you're going to convert the document to text, then maybe do a web search for some compliance reason to see if you're working with a vendor you're not supposed to, and then look at a database record, check the pricing is right, save it somewhere else, and so on. These are multi-step agentic workflows,
Starting point is 00:05:08 kind of next-gen robotic process automation. So we implement this and it doesn't work, you know, is it a problem? If you've got the invoice date wrong, is that a problem or not? Or if you routed a message to the wrong person for verification? So when you implement these things, you know, almost always it doesn't work the first time, but then you need to know what's important for your business process.
Starting point is 00:05:29 And is it okay that, I don't know, I bothered the CEO of the company too many times, or does the CEO not mind verifying some invoices? So all that external contextual knowledge, often, at least right now, I see thoughtful human product managers or human engineers having to just think through this and make these decisions. So can an AI agent do that someday? I don't know. Seems pretty difficult right now, maybe someday. But it's not in the Internet pre-training data set, and it's not in a manual that we can automatically extract. I feel like for a lot of the work to be done building agentic workflows, that data is proprietary. It's not general knowledge on the Internet. So figuring that out is still exciting work to do.
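As an illustration of the kind of multi-step workflow being described, here is a minimal sketch; every function, constant, and threshold in it (the extraction step, the blocked-vendor list, the price book, the CEO review rule) is a hypothetical placeholder standing in for exactly the proprietary business context discussed above, not a real API or a real policy.

```python
# Toy pipeline: ingest a document, run a compliance check, verify pricing
# against a price book, then decide who (if anyone) needs to verify it.

BLOCKED_VENDORS = {"Acme Shell Co"}   # assumption: the compliance list lives somewhere like this
PRICE_BOOK = {"widget": 4.99}         # assumption: stands in for a database lookup
CEO_REVIEW_THRESHOLD = 10_000         # a business rule a human had to decide

def extract_fields(document_text: str) -> dict:
    # Placeholder for an OCR / LLM extraction step over the emailed document.
    vendor, item, qty, total = document_text.split(",")
    return {"vendor": vendor, "item": item, "qty": int(qty), "total": float(total)}

def process_invoice(document_text: str) -> str:
    fields = extract_fields(document_text)
    if fields["vendor"] in BLOCKED_VENDORS:              # compliance check
        return "reject: blocked vendor"
    expected = PRICE_BOOK[fields["item"]] * fields["qty"]
    if abs(expected - fields["total"]) > 0.01:            # pricing check against the record
        return "route to finance: price mismatch"
    if fields["total"] > CEO_REVIEW_THRESHOLD:            # who verifies? exactly the contextual call above
        return "route to CEO for sign-off"
    return "approved: saved to records"

print(process_invoice("Globex,widget,3,14.97"))  # -> approved: saved to records
```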
Starting point is 00:05:51 If you just look at the spectrum of agentic AI, what's the strongest example of agency you've seen?
Starting point is 00:06:14 I feel like at the leading edge of agentic AI, I've been really impressed by some of the AI coding agents. So I think in terms of economic value, I feel like there are two very clear and very apparent buckets. One is answering people's questions. Probably, you know, OpenAI's ChatGPT seems to be the market leader of that, with real takeoff, lift-off velocity. The second massive bucket of economic value is coding agents, where, for coding agents, my personal favorite right now is Claude Code. Maybe it will change at some point, but I just use it, love it.
Starting point is 00:06:48 Highly autonomous in terms of planning out, you know, what to do to build the software, building a checklist, going through it one item at a time. So the ability to plan a multi-step thing and execute the multiple steps of a plan, that's one of the most highly autonomous agents out there being used that actually works. There's other stuff that I think doesn't work, like some of the computer-use stuff, like, you know, go shop for something for me and browse online. Some of those things are really nice demos, but not yet production.
Starting point is 00:07:22 I think that's because of looser criteria in terms of what needs to be done and more variability around actions, or do you think there's a better training set or sort of set of outputs for coding? I'm sort of curious, like, why does one work so well, or almost feel magical at times, and others are, you know, really struggling as use cases so far? I think, you know, engineers are really good at getting all sorts of stuff to work, but the economic value of coding is just clear and apparent and massive. So I think the sheer amount of resources dedicated to this has led to a lot of smart people, for whom they themselves are the user, so they also have good instincts on product, building really amazing coding agents.
Starting point is 00:07:59 And then I think, I don't know. You don't think it's a fundamental research challenge. You just think it's capitalism at work and domain knowledge in the lab. Oh, I think capitalism is great at solving fundamental research problems. At what point do you think models will effectively be bootstrapping themselves, in terms of, you know, 99% of the code of a model being written by agentic coding agents? Or the error analysis. I really suspect we're slowly getting there. So some of the leading foundation model companies are clearly, well, they've said publicly, they're using AI to write a lot of the code.
Starting point is 00:08:33 One thing I find exciting is AI models using agentic workflows to generate data for the next generation of models. So I think the Llama researchers talked about this, where an older version of Llama would be used to think for a long time to generate puzzles, and then you train the next generation of the model to try to solve them really quickly, without needing to think as long. So I find that exciting too. Yeah, multiple vectors of progress. It feels like, you know, AI is not just one way to make progress. There's so many smart people pushing forward in so many different ways.
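A toy sketch of that general pattern, where a slower, existing model generates puzzle-and-answer pairs and the next-generation model is fine-tuned to answer them directly: `slow_model_generate_puzzle`, the arithmetic puzzles, and the JSONL format are illustrative assumptions for the sake of the example, not the Llama team's actual pipeline.

```python
import json

# Placeholder for an existing model given a long thinking budget; in practice
# this would be an inference call, not a hard-coded function.
def slow_model_generate_puzzle(seed: int) -> dict:
    question = f"What is {seed} * {seed + 1}?"
    reasoning = f"{seed} * {seed + 1} = {seed * (seed + 1)}"  # the long "thinking" trace
    return {"question": question, "reasoning": reasoning, "answer": str(seed * (seed + 1))}

# Build a fine-tuning set: the next-generation model is trained to jump from
# question to answer directly, without reproducing the long reasoning trace.
def build_training_set(n: int, path: str) -> None:
    with open(path, "w") as f:
        for seed in range(n):
            example = slow_model_generate_puzzle(seed)
            record = {"prompt": example["question"], "completion": example["answer"]}
            f.write(json.dumps(record) + "\n")

build_training_set(100, "distilled_puzzles.jsonl")
```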
Starting point is 00:09:04 I think you have rejected the term vibe coding in favor of AI-assisted coding? Like, what's the difference? You know, I'm assuming you do the latter. You're not vibing. Yeah, vibe coding leads people to think, you know, like, I'm just going to go with the vibes and accept all the changes that Cursor suggests or whatever.
Starting point is 00:09:21 and accept all the changes that curse to suggest or whatever. And it's fine that sometimes you could do that and it works, but I wish it was that easy. So when I'm coding for a day or for an afternoon, I'm not like going with the vibes. It's like a deeply intellectual exercise. And I think the term vibe coding makes people think it's easier than it is.
Starting point is 00:09:40 So frankly, after a day of using AI-assisted coding, like, I'm exhausted mentally, right? So I think of it as rapid engineering, where AI is letting us build serious systems, build products much faster than ever before. But it is, you know, engineering, just done really rapidly. Do you think that's changing the nature of startups, how many people you need, how you build things, how you approach things, or do you think it's still the same old kind of approach, but people just get more leverage because they have these tools now? So, at AI Fund, we build startups, and it's really exciting to see how rapid engineering, AI-assisted coding, is changing the way we build startups.
Starting point is 00:10:15 So there are so many things that, you know, would have taken a team of six engineers like three months to build that now, today, one of my friends, right, can build in a weekend. And the fascinating thing I'm seeing is, if we think about building a startup, the core loop of what we do, right, we want to build a product that users love. So the core iteration loop is: write software, you know, the software engineering work, and then the product managers maybe go do user testing, look at it, go by gut, whatever, to decide how to improve the product. So when we look at this loop, the speed of coding is accelerating, the cost is falling. And so increasingly, the bottleneck is actually product management.
Starting point is 00:10:52 So there's a product management bottleneck: we can now build what we want much faster, but the bottleneck is deciding what we actually want to build. So previously, if it took you, say, three weeks to build a prototype, and you need a week to get user feedback, it's fine. But if you can now build a prototype in a day, then, boy, if you have to wait a week for user feedback, that's really painful. So I find my teams, frankly, increasingly relying on gut, because we go and collect a lot of data that informs our very human mental model, our brain's mental model of what the user wants, and then we often have to have deep customer empathy and just make product decisions like that, really, really fast, in order to drive progress.
Starting point is 00:11:34 Have you seen anything that actually automates some aspects of that? I know that there have been some versions of things where people, for example, are trying to generate market research by having a series of bots kind of react in real time and that almost forms your market or your user base as a simulated environment of users. Have you seen any tool like that work or take off
Starting point is 00:11:51 Yeah, so there's a bunch of tools to try to speed up product management. I feel like, well, the recent Figma IPO is one great example of design and AI, you know, Dylan did a great job. Then there are these tools that are trying to use AI to help interview prospective users. And as you say, we looked at some of the scientific papers on using a flock of AI agents to simulate, you know, a group of users, and how to calibrate that. It all feels promising and early and hopefully wildly exciting in the future. I don't think those tools are accelerating product managers nearly as much as coding tools are accelerating software engineers.
Starting point is 00:12:23 So this does shift more of the bottleneck onto the product management side. That makes sense to me. My partner, Mike, has this idea that I think is broadly applicable in a couple different ways: that computers
Starting point is 00:12:40 can now interrogate humans at scale. And so there's companies like LISN labs working on this for consumer research type tasks, right? But you could also use it to understand tasks for training or for the data collection piece that you described. When you think about your teams that are in this iteration loop, has the founder profile that makes sense changed over time?
Starting point is 00:13:06 To me, there are so many things that the world used to do in 2022 that just do not work in 2025. So in fact, often I ask myself, is there anything we're doing today that we were also doing in 2022? And if so, let's take a look and see if it still makes sense today, because a lot of stuff, a lot of workflows from 2022, don't make sense today. So I think today, the technology is moving so fast, founders that are on top of Gen AI technology, these, you know, tech-oriented product leaders, I think are much more likely to succeed than someone that maybe is more business-oriented, more business-savvy, but doesn't have a good
Starting point is 00:13:39 feel for where AI is going. I think unless you have a good feel for what the technology can and cannot do, it's really difficult to think about strategy or where to lead the company. We believe this too. Yeah, cool. Yeah, yeah, yeah. I think that's like old-school Silicon Valley even. Like if you look at Gates or Steve Jobs slash Wozniak or a lot of the really early
Starting point is 00:13:58 pioneers of the semiconductor, computer, early internet era, they were all highly technical. And so I almost feel like we kind of lost that for a little bit of time. And now it's very clear that you need technical leaders for technology companies. I think we used to think, oh, you know, they've had one exit before, or two exits even, so let's just back that founder again. But I think if that founder has stayed on top of AI, then that's fantastic. And I think part of it is, in moments of technological disruption, AI rapidly changing, that's the rarer knowledge. So actually, take mobile technology.
Starting point is 00:14:32 You know, like, everyone kind of knows what a mobile phone can and cannot do, right? What a mobile app is, there's GPS, all that. Everyone kind of knows that. So you don't need to be very technical to have a gut for what you can build with a mobile app. But AI is changing so rapidly: what you can do with a voice app, what agentic workflows can do, how to wrap the foundation models, what a reasoning model is. So having that knowledge is a much bigger differentiator than knowing what a mobile app can do was for building a mobile app.
Starting point is 00:14:57 It's an interesting point, because when I look at the biggest mobile apps, they were all started by engineers. So WhatsApp was started by an engineer, Instagram was started by an engineer. I think Travis at Uber was technical-ish. Technically adjacent. Technical-adjacent. Apoorva at Instacart was an engineer at Amazon. Yeah, and Travis, right, had the insight that GPS enabled a new thing.
Starting point is 00:15:17 But so you have to be one of the people that saw GPS on mobile coming early to go and do that. Yeah. You have to be like really aware of the capabilities. Yeah, you have to know the technology. Yeah, it's super interesting. What other characteristics do you think are common? I mean, I know people have been talking about, for example, it almost felt like there was an era where being hardworking was kind of poo-pooed. Or do you think founders have to work hard?
Starting point is 00:15:39 Do you think people who succeed, I'm just sort of curious, like aggression, hours worked, like what else may correlate or not correlate in your mind? You know, I work very hard, there are periods in my life where I've worked very hard. You know, I encourage others that want to have a great career or an impact to work hard, but even now I feel a little bit nervous saying that, because in some parts of society, it's considered not politically correct to say, well, working hard probably correlates with your personal success. I think it's just a reality. I know that not everyone, at every point in their life, is in a time where they can work hard. When my kids were first born, that week, I did not work very hard. That was fine, right? So acknowledging that not everyone is in circumstances to work hard,
Starting point is 00:16:21 just a factual reality is that people that work hard accomplish a lot more. But of course, we need to respect people that aren't in that phase of life. Yeah, I'd say something maybe a little less politically correct, which is, like, I think there was an era where people thought, there's a statement that startups are for everyone. And like, I do not believe that's true. Right.
Starting point is 00:16:52 I think like, you know, you're trying to do a very unreasonable thing, which is like create a lot of value impacting people very quickly. And when you're trying to do an unreasonable thing, you probably have to work pretty hard, right? And so I think that got lost, the sort of work ethic required to like move the needle in the world very quickly disappeared. Yeah, there's a quote, I wish I remembered who said this, but isn't it the only people that change the world are the ones crazy enough to think they can?
Starting point is 00:17:12 I think it does take someone with the boldness, the decisiveness, to go and say, you know what, I'm going to take a shot at changing the world. And it's only people with that conviction that I think can do this. It strikes me as being true in any endeavor. I used to work as a biologist. And I think it's true in biology. I think it's true in technology. I think it's true in almost every field that I've seen: it's the people who work really hard who do very well. And then in startups, at least, the thing I tended to forget for a while was just how important competitiveness was, how much people who really wanted to compete and win mattered. And sometimes people come across as really low key, but they still have
Starting point is 00:17:43 have that drive and that urge and they want to be the ones who are the winners. And so I think that matters. And similarly, that was kind of put aside for a little bit, at least from a societal perspective relative to companies. Actually, I feel I've seen two types. One is they really want their business to win. That's fine. Some do great. Others really want their customers to win. And they're so obsessed with serving the customer that that works out. I used to say in the early days at Coursera, yes, I knew about competition, blah, blah, blah. But I was really obsessed with learners, with the customers, and that drove a lot of my behaviors. That's a really good framework. And when I say competition, I don't mean necessarily with other companies,
Starting point is 00:18:22 but it's almost like with whatever metric you set for yourself or whatever thing you want to win at or be the best at. One thing I found is, in a startup environment, you're just going to make so many decisions every day. You just have to go by gut a lot of the time, right? I feel like, you know, building a startup feels more like playing tennis than solving calculus problems. Like, you just don't have time to think it all through. You're just going to make a decision. And I feel like, so this is why people that obsess day and night about the customer, about the company, think really deeply and have that conceptual knowledge, so that when someone says, do I ship product feature A or feature B, you just know a lot of the time, not always.
Starting point is 00:18:57 And it turns out there are so many, to use Jeff Bezos's term, two-way doors in startups, because frankly, you have very little to lose. So just make a decision, and if it is wrong, change it a week later, it's fine. But to be really decisive and move really fast, you need to have obsessed, usually about the customer, maybe the technology, to have that state of knowledge to make really rapid decisions and still be right most of the time.
Starting point is 00:19:18 How do you think about that bottleneck in terms of product management that you mentioned, or people who have good product instincts? Because I was talking to one of the best-known sort of tech public company CEOs, and his view was that in
Starting point is 00:19:34 all of Silicon Valley, or all of tech kind of globally, there's probably a few hundred at most great product people. Do you think that's true, or do you think there's a broader swath of people who are very capable at it? And then how do you find those people? Because I think that's actually a very rare skill set: just like there's a 10x engineer, there are 10x product people, it feels like. Boy, that's a great question. I feel it's got to be more than a few hundred great product people. Maybe just as I think there are way more than a few hundred great AI people. I think there are, but I think one thing I find is very difficult is that user empathy
Starting point is 00:20:08 or that customer empathy, because, you know, to form a model of the user or the customer, there are so many sources of data: you run surveys, you talk to a handful of people, you read market reports, you look at people's behavior on other parallel or competing apps or whatever. But with so many sources of data, to take all this data and then, you know, get out of your own head to form a mental model of your, right, maybe ideal customer profile, or some user you want to serve, how they think and act, so you can very quickly make decisions to serve them better. That human empathy, one of my failures, one of the things I did not do well in an early
Starting point is 00:20:43 phase of my career, for some dumb reason, I tried to make a bunch of engineers product managers. I gave them product manager training, and I found that I just foolishly made a bunch of really good engineers feel bad for not being good product managers, right? But I found that one correlate for whether someone will have good product instincts is that very high human empathy, where you can synthesize lots of signals to really put yourself in the other person's shoes to then very rapidly make product decisions on how to serve them. You know, going back to coding assistants, it's really interesting. I think it is like reasonably well known that the Cursor team, like they make their decisions actually very instinctively versus
Starting point is 00:21:31 spending a lot of time talking to users. And I think that makes sense if you are the user and then your mental model of like yourself and what you want is actually applicable to a lot of people. And similarly, like I think, you know, these things change all the time. But I don't think Claude Code incorporates, despite, you know, its scale of usage, feedback data today, from like a training loop perspective. And I think that surprises people, because it is really just, what do we think the product should be at this stage.
Starting point is 00:21:56 So it turns out one advantage that startups have is, while you are early, you can serve kind of one user profile. Today, if you're, I don't know, like Google, right, Google serves such a diverse set of user personas. You really have to think about a lot of different user personas, and that adds complexity as the product changes. But when you're a startup trying to get your initial wedge in the market, you know, if you pick even one human that is representative enough of a broad set of users, and you just build a product for that one user, you know, you have one ideal customer profile, one hypothetical person, then you can actually go quite far. And I think that for some of these businesses, be it Cursor or Claude Code or something, if they have internally a mental picture of a user that's close enough to a very large set of prospective users, you can actually go really far that way.
Starting point is 00:22:37 The other thing that I've observed, and I'm curious if you guys see this in some of our companies, is just like the floor is lava, right? The ground is changing in terms of capability all the time. And the competition is also very fierce in the categories that are already obviously important and have multiple players. So leaders who were really effective in companies a generation ago are not necessarily that effective when recruited to these companies as they're scaling, because of the velocity of operation or the pace of change. It's interesting to hear you say, like, I'm looking at what I'm doing today versus in 2022 and asking, is that still right. Versus if, you know, you're an engineering leader or a go-to-
Starting point is 00:23:25 market leader and you've, like, built your career being really great at how that's done, that may not be applicable anymore. I think it's a challenge for a lot of people. I know many great leaders in lots of different functions, still doing things the way they were in 2022. And I think it's just got to change. When new technology comes, I mean, you know, once upon a time, there was no such thing as web search. Today,
Starting point is 00:23:49 would you hire anyone for any role that doesn't know how to search the web? And I think we're well past the point where, for a lot of job roles, if you can't use LLMs in an effective way, you're just much less effective than someone that can. And it turns out everyone on my team at AI Fund knows how to code. Everyone has a GitHub account. And I see for a lot of my team members, you know, when my, I don't know, assistant general counsel or my CFO or my front desk operator, when they learn how to code, they're not software engineers, but they do their job function better, because by learning the language of
Starting point is 00:24:22 computers, they can now tell a computer more precisely what they want it to do for them, and the computer will do it for them, and this makes them more effective in their job function. I think the rapid pace of change is disconcerting to a lot of people. But I guess, you know, I feel like when the world is moving at this pace, we just have to change at the pace of the world. Yeah, to your point, this shows up in hires, particularly around product, or product and design. So one sort of later-stage AI company I'm involved with, they were doing a search for somebody to run product and somebody to run design. And in both cases, they selected for people who really understood how to use some of the vibe coding slash AI-assisted coding tools.
Starting point is 00:25:03 Because, to your point, you can prototype something so rapidly. And if you can't even just mock it up really quickly to show what it could look like or feel like or do in a very simple way, you're wasting an enormous amount of time talking and writing up the product requirements document and everything else. And so I do think there's a shift in terms of how you even think about what process you use to develop a product, or even pitch it. Right? What should you show up with to a meeting when you're talking about a product?
Starting point is 00:25:25 The whole thing, apparently. Yeah, no, you should have a prototype in some cases. Actually, just to give an example: we were recruiting engineers for a role, and interviewed someone with about 10 years of experience, full stack, very good resume, and also interviewed a fresh college grad. But the difference was the person with 10 years of experience
Starting point is 00:25:42 had not used AI tools much at all. The fresh college grad had. And my assessment was the fresh college grad, who knew AI, would be much more productive, and I decided to hire them instead. It was a great decision. Now, the flip side of this is the best engineers I work with today are not fresh college grads. They're people with, you know, 10, 15 or more years of experience, but who are also really on top of AI tools, and those engineers are just completely in a class of their own. So I feel like, I actually think software engineering
Starting point is 00:26:10 is a harbinger of what will happen in other disciplines, because the tools are most advanced in software engineering. It's interesting. One company that I guess both of us are involved with is called Harvey. And I led their Series B. And when I did that, I called a bunch of their customers. And the thing that was most interesting to me about some of those customer calls was, because legal is notorious for being a tough profession
Starting point is 00:26:30 for adopting new technology, right? There aren't a dozen great legal software companies. Those customers that I called, which were big law firms or people who were quite far along in terms of adopting Harvey, they all thought this was the future. They all thought that AI was really going to matter for their vertical. And the main thing they would raise is questions like,
Starting point is 00:26:48 In a world where this is ubiquitous, suddenly instead of hiring 100 associates, I only hire 10. And how do I think about future partners and who to promote if I don't have a big pool? And so I thought that mindset shift was really interesting. And to your point, I feel like it's percolating into all these markets or industries. And it's sort of slowly happening. But industry by industry, people are starting to rethink aspects of their business in really interesting ways. And it'll take a decade, two decades for this transformation to happen. But it's compelling to see how the earliest-adopting verticals are the ones where people are thinking deepest about it.
Starting point is 00:27:18 It should be really interesting. I think, yeah, we have a legal startup, Catalysts AI, that AI Fund helped out. It's doing very well as well. I think the nature of work in the future will be very interesting. So I feel like a lot of teams wound up, you know, outsourcing a lot of work, right, partly because of cost. But with AI and AI assistants, part of me wonders: is a really small, really skilled team with lots of AI tools
Starting point is 00:27:45 going to outperform a much larger, you know, and maybe lower-cost team? It may or may not. And they have less coordination cost. Yeah. So actually, some of the most productive teams, you know, that I'm a part of now, are some of the smallest teams, very small teams of really good engineers with lots of AI enablement, and very low coordination cost compared to a larger group of people. So we'll see how the world evolves.
Starting point is 00:28:14 Too early to make a call, but you can see where I think the world may or may not be headed. I work with several teams now, one of which is called Open Evidence and has pretty good penetration, like 50% of doctors in the U.S. now, where it's an explicit objective in the company to try to stay as small as possible as they grow impact. And, you know, we'll see where these companies land, because, you know, there are lots of functions that need to grow in a company over time. But that certainly wasn't an objective five years ago. I've heard that objective a lot. Actually, I heard that objective a lot in the 2010s, and there's a bunch of companies that I actually think underhired pretty dramatically, or
Starting point is 00:28:53 stayed profitable, and would brag about being profitable, but growth wasn't as strong as it could be. So I actually feel like that's a trap. How would you calibrate, then? Yeah. It's basically, are you being lackadaisical or too accepting of the progress that your company's making because it's going just fine? It could be going much better, but it's still going great on a relative basis. And so you're like, oh, I'll keep the team small. I'll be super lean. I won't spend any money, look at me, how profitable I am. And sometimes it's amazing, right? Capital efficiency is great. But sometimes you're actually missing the opportunity or not going as fast as you can. And usually I think what happens is, in the early stage of a startup's life, you're competing
Starting point is 00:29:28 with other startups. And if you're way ahead, it feels great. But eventually, if there are incumbents in your market, they come in. And the faster you capture the market and move upmarket, the less time you give them to sort of realize what's going on and catch on. And so often five, six, seven years into the life of a startup, you're suddenly actually competing with incumbents. And they just kill you with distribution or other things. And so I think people really miss the mark. And you could argue that was kind of Slack versus Teams. There's a few companies I won't name, but I feel like they were so proud of their profitability and they kind of blew up. I guess on the design side, that was Sketch, right? Remember?
Starting point is 00:29:59 Bohemian Coding, yeah. You know, they were based in the Netherlands. They were super happy. They were profitable. They were doing great. And then the Figma wave kind of came. Do you think your companies stay this small? Do you think your teams stay this small? Do I think my teams stay this small? What do you mean? In terms of just efficiency, like, can you actually get to, you know, affect millions and billions of people with 10-, 50-, 100-person teams? I think teams can definitely be smaller now than they used to be, but are we over-indexing on that being the best thing? And then also, I think to your point, the analysis of market dynamics, right?
Starting point is 00:30:33 If it's a winner-take-all market, then the incentives, just... Got to go. Yeah, it's got to go. Minecraft, I think, when it sold to Microsoft, had how many people, like, five people or something? And it sold for a few billion dollars and it was massively used. I think people forget all these examples, right? It's just this, oh, suddenly you can do things really lean.
Starting point is 00:30:50 You could always do things lean before. The real question is, how much leverage did you have in headcount? How did you distribute? What did you actually need to invest money behind? And then I would almost argue that one of the reasons small teams are so efficient with AI is because small teams are efficient in general. They didn't hire 30 extra people who get in the way. And I think often people do that.
Starting point is 00:31:09 If you look at the big tech companies, for example, right now, many, not all of them, but many of them could probably shrink by 70% and be more effective, right? And so I do think people also forget the fact that, A, there's AI efficiency; B, there's sort of high-value human capital being arbitraged into markets that normally wouldn't have it. Legal is a good example. Great engineers didn't want to work in legal. Now they do because of things like Harvey. Or health care.
Starting point is 00:31:30 Or health care, which again, suddenly you have these great people showing up. But I think also the other part of it is just that small teams tend to be more effective, and AI gives you yet another reason to keep teams small and performant, which I think is kind of under-discussed. I feel like that's one of the reasons why that AI instinct is so important. I remember one week I had two conversations with two different team members. One person came to me to say, hey, Andrew, I'm going to do this, can you give me some more headcount to do this? I said no.
Starting point is 00:31:57 Later that week, I think independently, someone else said something very similar: hey, Andrew, can you give me some budget to hire AI to do this? I said yes. And so that realization, that you're hiring AI, not, you know, a lot more humans, for this. You've just got to have those instincts. Yeah, that's interesting. If you think of what's happening in software engineering as the harbinger for like the next industry transformations, and you spend a lot of time investing at the application level or like building things there, what do you think is next? Or what do you want to be next? I feel like there's a lot at the tooling level. I'd actually prefer a ranked list, you know, we're all investing in this stuff.
Starting point is 00:32:35 You know, there's actually one thing I find really interesting, which is where economists are doing all the studies on what are the jobs, you know, at highest risk of AI disruption. I think you're skeptical. I actually look at them sometimes for inspiration for, you know, where we should find ideas to build projects. One of my friends, Erik Brynjolfsson, right, he and his company Workhelix, which is involved in this, is very insightful on the nature of- Yeah, I like him. Yeah, good.
Starting point is 00:32:58 I mean, good. So I find talking to them sometimes useful. Although, actually, one of the lessons I've learned, though, is to be wary of pure top-down market analysis, the "AI will transform this great industry" view. There are so many ideas that no one's worked on yet, because the tech is so new. So one thing I've learned is, at AI Fund, we have an obsession with speed. All my life I've always had an obsession with speed, but now we have tools to go even faster than we could. And so one of the lessons I've learned is we really like concrete ideas.
Starting point is 00:33:26 So someone says, I did a market analysis, AI will transform health care. It's true, but I don't know what to do with that. But if someone, a subject matter expert or an engineer, comes and says, I have an idea: look at this part of healthcare operations and drive efficiency and all this. They go, okay, great, that's a concrete idea. I don't know if it's a good idea or a bad idea, but this is concrete. At least we can, you know, very efficiently figure out: do customers want this, is it technically feasible, and get going.
Starting point is 00:33:48 So I find that at AI Fund, when we try to decide what to build, we screen a long list of ideas to try to select a small number that we want to go forward on. We don't like looking at ideas that are not concrete. What do you think investing firms or incubation studios like yours will not do two years from now, like not do manually, sorry. I think there's a lot that could be automated, but the question is, what are the tasks we should be automating? So, for example, you know, we don't make follow-on decisions that often, right, because our portfolio is only some dozens of companies. So do we need to fully automate that? Probably not, and it would be very hard to automate.
Starting point is 00:34:21 I feel like doing deep research on individual companies and competitive research, that seems ripe for automation. So I personally use, whether it's OpenAI's deep research or other deep research tools, a lot, just to do at least cursory market research. LP reporting, that is a massive amount of paperwork that maybe you could simplify.
Starting point is 00:34:52 Yeah, I'm taking the strategy of general avoidance, besides, you know, basic compliance. You know, one of my partners, Bella, she worked at Bridgewater before where they had like an internal effort to take a chunk of capital and then try to disrupt what Bridgewater was doing with AI. And it's like, you know, macro investing. It's a very different style. But I think it probably gives us some indications where the human judgment piece of our business, I think is not obvious. Like, does an entrepreneur have the qualities that we're looking for when, you know, your resume on paper or your GitHub or. or, you know, what minor work history you have when you're a new grad, it's not very indicative. And so people have other ideas of doing this.
Starting point is 00:35:36 or, you know, what minor work history you have when you're a new grad, it's not very indicative. And so people have other ideas of doing this. Like, I know investors that are, like, you know, looking at recordings of meetings with entrepreneurs and seeing if they can get some signal out of, like, communication style, for example. But I think that part is very hard. I do think you can be, like, programmatic about looking at materials, for example, and, like, ranking, you know, quality of teams overall. There's actually one thing: I feel like our AI models are getting really intelligent, but there's a set of places where humans still have a huge advantage over the AI.
Starting point is 00:36:09 It is often if the human has additional context that for whatever reason the AI model can't get at, and it could be things like meeting the founder and sussing out, you know, just how they are as a person and the leadership qualities, the communication, or whatever. And those things, maybe reviewing video, maybe eventually we can get that context to the AI model. But I find that in all these things, as humans, you know, we do a background reference check and someone makes an offhand comment that we catch that affects a decision.
Starting point is 00:36:39 Then how does an AI model get this information, especially when, you know, a friend will talk to me, but they don't really talk to my AI model? So I find that there are a lot of these tasks where humans have a huge information advantage still, because we've not figured out the plumbing or whatever is needed to get that information to the model. The other thing I think is very durable is things that rely on a relationship advantage, right? If I'm convincing somebody to work at one of my companies
Starting point is 00:37:05 and they worked at a previous company and they trust me because of it or whatever reason, like, you know, all the information in the world about why this is a good opportunity isn't the same thing as me being like, Sally, you got to do this, it's going to work. It remains to be seen whether or not company building is actually that correlated with investment returns,
Starting point is 00:37:21 Yeah, yeah, no, I think trust, because people know you, and people do trust you, I trust you, right? You can say so many things, and it's very easy to lose trust, you know, so that makes sense.
Starting point is 00:37:39 Yeah. Actually, one thing I'm interested in your take on is, you know, we increasingly see highly technical people trying to be first-time founders. You know, how do you set up the processes to set up first-time founders to learn all the hard lessons and all the craziness needed, right, to be a successful founder?
Starting point is 00:37:57 I spent a lot of time thinking through that, how to set up founders for success when they have, you know, 80% of the skills needed to be really great, but there's just a little bit more that we can help them with. That's a very manual process. I don't sweat it. You don't sweat it? I just view it as like a mix of peer groups.
Starting point is 00:38:14 Like, can you surround people with other people who are either similar or one or two steps ahead of them on the founding journey? And then the second thing is complementary hires. I think in general, one of my big learnings is, I feel like early in careers people try to complement,
Starting point is 00:38:28 or try to build out, the skills that they don't have, and late in careers they lean into what they're really good at, and then they hire people to do the rest. And so if the company's working,
Starting point is 00:38:36 I think you just hire people. Like, Bill Gates would notoriously talk about how his COO was always the person he'd learn the most from, and then once he hit a certain level of scale, he'd hire his next COO. I see, yeah. And so I almost view it through that lens
Starting point is 00:38:46 for founders. Yeah, that can always make sense. But I think the best way to learn something is to do it. And so, therefore, just go do it, you know, you'll screw it up. It's fine as long as it's not existential to the business, who cares? So I tend to be very lackadaisical. I probably think too many things are existential for companies. Yeah, it's something.
Starting point is 00:39:03 It's like, do you have customers and are you building product? Most of the way, yeah. Are you building a product that users love, right? And then, of course, go-to-market is important and all that is important, but you start with the product first. Then usually, sometimes, you can figure out the rest, too. I agree with that most of the time, but not always.
Starting point is 00:39:21 Yeah, I think there's lots of, there's some counter examples, but yeah, I generally agree with you. Sometimes you can build a sucky product and have a sales channel you can force it through, but I'd rather not, that's not my default. I don't love that either. It does work, though. There's a lot of really bad technology that became big companies right now. Okay, if you have these, you know, first-time, very technical founders with gaps in their knowledge or skill set,
Starting point is 00:39:46 being like the core profile of folks you're backing, like, do you augment them somehow? Like, what helps them when they begin? I think a lot of things. That's actually one thing I realize: you know, at venture firms, venture studios, we do so many reps that we just see a lot that even repeat founders have only done like once or twice in their life. So I find that when my firm sits alongside the founders and shares our instincts on, you know, how do we get to customers faster,
Starting point is 00:40:31 Are you really on top of the latest technology trends? How do you just speed things up? How do you fundraise? Most people don't fundraise that much in their lives, right? Most founders just do it a handful of times. That helps even very good founders with things that, because of what we do, we've had more reps at. And then I think hiring others around them, peer group, I know these are things that you guys do. I think there's a lot we could do. It turns out even the best founders need help. So hopefully, you know, VCs, venture studios can provide that to great founders.
Starting point is 00:41:17 Elad's wiser about this than I am. I mean, I can't help myself, but, like, I want to specifically try to upskill founders on a few things they have to be able to do, like recruiting, right? But I would agree that the higher-leverage path is absolutely, like, you can put people around yourself to do this and to learn it on the job. Last question for you: what do you believe about the broad impact of AI over the next five years that you think most people don't? I think many people will be much more empowered and much more capable in a few years than they are today. And the capability of individuals, of those that embrace AI, will probably be far greater than most people realize. Two years ago, who would have realized that software engineers would be as productive as they are today when they embrace AI? I think in the future, people in all sorts of job functions, and also for personal tasks, I think people and enterprises will just be so much more powerful and so much more capable than they previously could imagine. Awesome. Thanks, Andrew.
Starting point is 00:41:51 Thanks for doing this. Thanks. Thank you. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
Starting point is 00:42:05 And sign up for emails or find transcripts for every episode at no-priors.com.
