No Priors: Artificial Intelligence | Technology | Startups - AI Consolidation, Biotech Opportunities, and World Models with Sarah and Elad

Episode Date: May 29, 2025

In this episode of No Priors, Sarah and Elad unpack the current state of the AI market - whether it's consolidating, what's enabling or blocking key mergers, and where the most promising untapped opportunities lie, particularly in biotech. They also explore the rise of world models and how AI's novel methods for understanding complex systems may ultimately reshape how humans approach discovery and problem-solving.

Show Notes:
0:00 Is the AI market consolidating into clear winners? + the physics of the current landscape
7:01 Why more companies don't merge (even when it makes sense)
10:09 Exploring biotech's biggest commercial opportunities and the challenges founders face
17:14 Building world models
21:34 How AI is expanding the way humans reason, design, and evolve systems

Transcript
Discussion
Starting point is 00:00:00 Elad, what's going on? How you doing, Sarah? I'm good. I can't tell if it is a very stable time in the market - like it's crystallizing into known businesses and models - or if it's as fluid as ever. What's your take? You know, it's interesting. AI is the one market in my career where I've sort of consistently said, the more I learn, the less I know.
Starting point is 00:00:26 Every other market, you kind of learn more, you know, the more you keep advancing. And I actually feel like that's shifted in the last couple months, where I feel for a subset of areas, despite the rapid pace of innovation, all the really exciting new models and research findings and everything else, I actually feel like a bunch of markets have sort of consolidated. And it's kind of clear now who are the likely players or winners in like two or three big areas. And that may change, right? In three years, another new startup may launch and displace everybody, or an incumbent may make a bold move, or whatever it may be. But I feel like in the foundation model market, at least for LLMs, there's a clear view of sort of what's important and what isn't. At the application level, I think it's kind of clear who the winners are going to be in sort of at least the first set of healthcare-related services, things like medical scribing or other workflows. In coding, it seems like it's consolidated into two or three players.
Starting point is 00:01:18 You know, maybe that's cursor, codium, cognition, and then Microsoft's like copilot, right? but there aren't probably like two dozen companies that are all still competing there. In customer success, it seems like things are kind of consolidating against Sierra and Decagon. So you kind of go through market by market and you're like, okay, there's a bunch of markets where it's kind of clear who we think some of the winners may end up being, or at least the ones who are going to be important for the next two, three years. And then I think there's a set of markets where it's still wide open, right? So you look at sales productivity tooling. There's going to be something really important there. There's going to be some financial analyst thing that's going to be really important.
Starting point is 00:01:57 There's going to be an accounting company that's really important. And the question is, has that not consolidated yet because nobody is yet doing the exact right product approach? Is it because the models aren't good enough and the capabilities have to get better? So it feels like there's a bunch of stuff that is still unknown, but it's way clearer than I think it was a year ago. I feel like for the first time in like two years or something, you know, when I first started investing in generative AI, you just backed the things where the people seemed really good and the market seemed interesting, because there wasn't a lot of competition, right? So that's when I led the seed round for Perplexity or invested in Character or Harvey or some of these other things. That was, you know, pre-ChatGPT
Starting point is 00:02:33 GPT or pre-Midgering. Oh, the good old days, yeah. The good old days when nobody cared. When GPT3 was out and everybody's like, this is kind of crappy. But the scaling law was clear, right? So I thought a handful of people, you know, you being included kind of, I think we collectively saw that this stuff was going to be important. But then there was like a period of like, uncertainty for two years or something like that, maybe three years, where there was just so much innovation and so much change and so much rapid growth. And I think now finally we're hitting a period with least a subset of things are consolidating back down. And again, these may not be the winners five years from now, but they definitely seem to be emerging as the winners for the next two
Starting point is 00:03:11 years. So I think it's kind of a nice breather in terms of uncertainty and kind of having a bit more clarity and what's going to happen. I don't know. What do you think? I feel a little bit like I understand some temporary physics of the market a little bit better, right? It's like a race to find the verticals of relevance and then get something to work in a way that you just actually want. Maybe you have to go get proprietary data sources that you can retrieve against and like get distribution. And then ideally have users that can create or derive or extend knowledge from that.
Starting point is 00:03:41 Like the companies you just named and I don't think you explicitly said about to put like a bridge and open evidence. I think they fit into that shape. And then one thing you and I have talked about is, I'm actually quite unsure about sales. I like don't know how to think about how something wins there. Like you could go at it from a data perspective or adoption perspective, but it's been very fragmented market to date. But I agree with you on finance and accounting. I'd add pharma to that.
Starting point is 00:04:10 Like there are some industries that are really document driven where you can see something just becoming really important. Yeah, you can see something coming there. Yeah, there's companies like blue networking and pharma, for example, that are kind of doing interesting things. Yeah. Yeah. And so I think to some extent it's been clear what markets will be interesting or at least a subset of them. It just wasn't clear like who would win and how. And, you know, coding is an interesting analog where there's probably four different approaches to coding that everybody was taking simultaneously. And I think some of those approaches will consolidate over time, but also the entry points now seem way more clear in terms of how do you actually
Starting point is 00:04:40 win in that market. When before two years ago, there was a dozen different ways you could imagine somebody. And I wonder if the analog there is for the sales stuff you're talking about where it seems a little bit less certain right now, but maybe in two years we'll be like, of course it was whatever that workflow was. We had a debate internally at my firm as to like what it would take for another new entry point to work. And I think it would take a lot. I'm open-minded to it, but you know, what is still changing? Like you, you know, you increasingly have like open models and little models that like can do real things with code, right? Sure. Code stroll and this. And I think you'll see more there.
Starting point is 00:05:20 Microsoft, open-sourced co-pilot, we'll see what the impact of that is, but it's like they finally decide they need to fight cursor for meeting its lunch with its own open source, VS code, fork. There's some chance
Starting point is 00:05:32 that like making specific workflows from engineering work that don't work at sufficient quality today can create enough distribution. That's interesting. And then it's not clear, like you have the sync like IDE workflow and the async,
Starting point is 00:05:47 right and one question is how quickly does the quality of these like asynchronous code agents increase right open AI with codex they made a bet on async like cloud-based software engineering agent and then they bought the IDE with windsurf right so it's true it's true you can believe both yeah I think I think a lot of these things will just consolidate over time and so my view is that the market's going to see two types of consolidation there's going to be product consolidation and there'll be actual buys and the codium slash windsurf acquisition by Open AI is the first sort of step in that. But if I was a number one or number two in a market and I was a startup, I'd consider merging with the other party if there were the two main startup players because the real threat will be fighting the incumbents. And so I would kind of get ahead of it and say, okay, let's stop the startup to startup war and let's just focus on winning against the three or four incumbents that we have to go up against.
Starting point is 00:06:38 And so, you know, or you could just keep fighting and getting distracted by the other party, which is kind of what Uber lifted for a while. or, you know, there's other precedents, the ones that did merge are things like PayPal, right? There was X.com, which Musk was running, and then PayPal, which Peter Thiel was running, and they decided to merge because they were like, why are we competing with each other when there's so much in company competition? I think both paths will happen, but it may be something people should consider as well. What do you think prevents companies from thinking through that or doing that? Well, it's two things.
Starting point is 00:07:05 One is there's ego. Who's going to run it or do they want to subsume myself? Sure, I'm the number two, but blah, blah, blah, I'll still beat them. Or what role would I play or whatever? And to some extent, it's like, put that. that aside and just go win, you know? Like, who cares? Second is people worry too much about integration. What's the culture and what's the this and what's the that? And often it's just like, just merge it. And if it doesn't work, shut down parts of it and move on with life, whatever
Starting point is 00:07:29 parts, either in the buyer or the seller, it doesn't matter. But just, it's again, kind of a who cares pragmatically. Like, you can fix it all sorts of ways. Either the culture meshes or they don't. And if they don't mesh, you don't have to keep everybody, honestly, because everybody's going to do very well off the acquisition. You can do all sorts of like thank you packages and move on with life. And then third is sometimes there's dynamics around how you value the things relative to each other for private to private companies. And sometimes the easiest way to do that is you just choose some metric and say it's divisible by that metric. So for example, um, years ago when I was at Twitter, um, I drove an attempt to buy a major social network
Starting point is 00:08:07 that was up and coming. And the way we constructed that offer is we just took their users and our users added them up and then divided the ratio and made that offer as a portion of Twitter that we offered for the company. I think you can do that. Take your revenue plus my revenue, we add it up. And then what's the ratio? Or maybe it's users. It's whatever the right metric is for your business.
Starting point is 00:08:27 But I actually think you can do really simple things like that and just say, look, fair enough. Like plus or minus X percent isn't going to matter if we just all win. So people tend to overthink those things. They overthink role slash I'm giving up or ego or whatever. culture slash what is the surviving thing look like together and then what's the value or what's the relative value of the two pieces pragmatically it's like do you want to fight it out for the next five years or do you want to go in or you know and then your battleground shifts to the incumbents versus another startup yeah i've seen the the simple uh relative like relative
Starting point is 00:09:00 metric um also work i also think that this is both like founders and board members or investors like they're just unwilling to put something like inside the overton window I think people will feel like it is, it is capitulating, but it's capitulating in, you know, in service of winning. And so I think that's a big reason. People just like don't want to look like they're unwilling to go to war. Yeah, the pie basically gets bigger if you do that because you're focused on just winning the market versus competing with each other, but also your pricing dynamics shift. You're not competing on every deal with another startup, you know, like a lot of things kind of shift. And so I think there's all sorts of positive characteristics. Again, people will
Starting point is 00:09:41 win in these markets without it. And also, um, some of these markets are really big. And there is room for a number one and a number two and maybe a number three or maybe incumbents or, you know, payments was that way, right? We have adjun. We have stripe. We have PayPal. We have, you know, a dozen other payment processors. It's a very big fragmented market. And so some markets also can sustain multiple players. And that's fine too. I'm just saying like sometimes, you, you just want to say, hey, let's, let's put aside our differences and go in together. Okay. Some part of the market is consolidated, some could be better consolidated in terms of startups winning. There are areas that you and I have talked about where they feel like obvious commercial opportunities, but people
Starting point is 00:10:22 are not chasing them sufficiently, I think. And, you know, we've talked about engineering as one that I think AI will absolutely change. Like, you have a bunch of ideas in biotech. What's missing? Yeah, I mean, the biotech stuff I'm interested in, honestly, isn't AI related, although there's obviously really cool things happening in terms of models. There's a there's a whole separate threat of stuff I just think is neat. I'm not an active biotech investor. I'm the wrong person to pitch on things, et cetera. I mainly do software, AI, you know, et cetera, type of investing as well as the companies have started have largely been, you know, software-driven companies. I just think there's some really cool stuff now that the science and biotech is far enough along, or in basic science,
Starting point is 00:10:59 and that nobody or very few people are working on, right? And I'll give you maybe two or three examples. One is there's some really good data now for fertility out of Japan where you can basically take a cell, you can reprogram it to turn into either a sperm or egg, and they've made mice now with two fathers, for example. You know, you can differentiate one father cells into sperm, one father cells into eggs, and then you can have viable offspring. And that really opens up the capability for any adult to have kids with any other adult. So if a woman is over a certain age, she can suddenly produce either sperm or egg. You know, you can do it for different types of couples.
Starting point is 00:11:42 So there's stuff like that where you're like, why are so few people working on this? An even simpler version is just, you know, girls are born with one to two million oocytes, which are egg cells. By puberty, they end up with about 300,000. And then there aren't good technologies to basically mature those eggs. So if you're a woman, you should be able to mature your oocytes at different points in your life. And you should be able to harvest tons and tons of eggs if you ever want to have lots of kids, right? And so there's a lot of stuff like that that just nobody's doing.
Starting point is 00:12:08 Is the outcome of that, like, people choose differently, the inputs to them having kids? Like, the, like, for example, sperm or egg donor market is very different. Like, we're all just having kids with Elon, like you and me both. The crazy thing about that, honestly, is say that you meet Elon Musk or LeBron James or Taylor Swift or whoever it is somewhere, and you manage to swap some cells off of them, you shake their hand or whatever. Oh, no. You could potentially reproduce with them. Yeah, yeah, no, seriously. So some of the ramifications of this stuff is pretty crazy if you think about it, right?
Starting point is 00:12:44 But also societally, it's so impactful in terms of what you could do with that. And to your point, suddenly anybody could become an egg or sperm donor in any capacity. But it just seems like it has such big implications, even if you just say we're going to limit it to women over a certain age. Like, you know, something we can, or people who just aren't reproductively viable otherwise, right? it's pretty big deal in my opinion. But again, the science is there. They've worked through a lot of the pathways to get there. And now it's like, okay, I know one company doing it. It's driven by a very good founder, but one company, that's it. Another area would be you look at Botox, right? People are injecting a bacterial toxin into their skin to look younger, like literally a toxin. And that was a
Starting point is 00:13:31 $40 billion company, you know, one and a half billion dollars a year in revenue just for for cosmetic applications. Why isn't anybody doing actual real drugs and treatments for aging? There's all sorts of science around it. There's all sorts of biology. So there's nobody working on skin aging, balding, gray hair, all that kind of stuff. And then there's the stuff that's really impactful in terms of neurosensory, right? Like the muscle that holds the lens of your eye gets weaker with time. And so why don't you rejuvenate that? That's why everybody ends up with reading glasses in their 40s or hearing loss. You know, there's pathways for that. Or tooth regrowth. Like, you have a cavity, why don't you just grow a new tooth? And there's pathways for it. Again,
Starting point is 00:14:07 there's a lot of the biology worked out. Maybe there's more that needs to be done from basic science perspective. In many cases, for example, for dental stuff, there's genes like USAG1, which allow for two, three growth in certain animal models. So why don't we do that in people? What's your, like a hypothesis for why that, why there are areas of, to me, what seemed like clear demand if the science you suggest exists, like why isn't it being funded? Yeah, it's massive markets, I think there's three reasons. Number one, the biotech or biopharmaceutical market for founders is very different from the tech market. And the overall market structure is radically different. So basically, if you look at biotech, the last time a $50 billion plus biotech company was
Starting point is 00:14:52 started from scratch, excluding Moderna, which was kind of an accident of COVID, was in the 80s. I think it was regeneroned. And so it's been almost 40 years since we've had a de novo, like, tens of billions of dollar company created. And so all these companies are 50, 100 years old. And so imagine if tech was basically IBM versus HP right now. And you didn't have any young, founder-driven, aggressive companies. We wouldn't have the iPhone. We wouldn't have the internet.
Starting point is 00:15:19 We'd just be logging into IBM mainframes off of HP laptops. Do you know what I mean? There'd be no, like, progress or very little progress. So that's one issue. The funding models also are ones that a lot of, biotech money is either very early stage or very late stage. And a lot of the companies are started as incubations by biotech vCs. So they load up a company with $40 million.
Starting point is 00:15:41 They buy 40% of it up front, whatever it is. And then they kind of have to make it far enough that they can get almost public market money effectively. You know, a lot of the crossover funds then kick in. And the way that these funds are set up because they have so much ownership, they're really built to flip these companies into the arms of pharma. And that means you build against pharma pipelines. So if there's six or seven areas that all the pharma companies care about are biopharma, it's cancer and it's cardiovascular disease and neuroscience, you only build companies in those domains because your goal isn't to build a big standalone thing. Your goal is to sell it to a pharma company.
Starting point is 00:16:17 And so a lot of the dynamics are driven by that. And then there's a big regulatory capture that also prevents a lot of innovation. You know, the FDA will ask for, they'll push hard on endpoints for certain things that may not exist or, you know, so there's a bunch of, there's, those are. kind of the three main factors that make it kind of hard to do anything else. But there's all the science just kind of sitting there. I guess the last piece, the fourth is for some of these things, the people who are scientists who would work on them are a little bit. They don't want to work on something that's too commercial. It's kind of the purity of science. Because it's low status. It's low status. So how dare you work on fixing wrinkles? As a scientist, you need to be doing something
Starting point is 00:17:01 that's much more pure, et cetera, et cetera. So there's also a little bit of that kind of, I have to call it, prudishness around commerciality that exists. I guess back to our regular programming. I had a question for you on the AI side. And in particular, I know you've been thinking a bit about world models and RL and sort of how these things are overall relevant to capability and scaling, or the scaling of capabilities.
Starting point is 00:17:28 Do you want to explain a little bit about what you mean? by world model because I think, you know, we're people who are in the AI world get all this stuff. It'd be great for a more general purpose audience, just kind of walk through your thinking and what do you think is interesting what's going on there? Yeah, I don't know that this is actually a great, well, I think it's very important as an overall area in that, if you zoom all the way out, I actually think this is a time of more open research questions than ever, right? So scaling up model size and training data for big LMs has given us this like, really powerful foundation of knowledge and pattern recognition. But, you know, everybody talks about
Starting point is 00:18:06 agents, like what people want to do from here, like the way people think about AGI is not just predicting text, right? They want to go to broader intelligence into taking actions. And, like, practically, that could be actions. I feel like it's really important to describe what we mean when we say, like, reasoning or actions or something more concretely, because I don't know that everybody has a great mental model for these things. But it could be like planning or reading documents and drawing conclusions, using tools, receiving feedback, like going down different reasoning paths, evaluating your own work. It's like taking a series of actions in pursuit of a goal beyond just sequential text generation. And my understanding is that the labs have spent
Starting point is 00:18:52 some labs more than others, right, have spent a lot of money collecting traces of humans doing sophisticated tasks. Like this is how Alad looks at Japanese stem cell differentiation research, right? He does these tasks and he calls these people and then they try to do like behavior cloning. Monkey C. Monkey agent do. But for software engineering or investment research or whatever. But it tends to be really brittle when you go off the path with the cloning techniques. Model Elad monkey presses some button that like Alad the human never touched. It lands an out of distribution territory and then has no idea what's next and it fails right get stuck then people are trying reinforcement learning brought like a new generation reinforcement learning which is broadly like
Starting point is 00:19:41 trial and error training i think a lot of people who are paying attention in AI they have seen like agents play games famously chess and go or like more complicated ones with human interaction you're like taking actions in an environment and getting feedback in the form of a reward or a penalty and then you like play until you're better at the game, right? And for games, that's easy, right? Because you have clear rules and so you know very easily how to either reward or penalize an action, right? And that's very different, I think, from real world tasks in some cases.
Starting point is 00:20:11 And so this is like exactly like the problem or the expense of using RL more broadly. Like, what is the task if it's not just winning in chess or go? And then like how do you make the environment in like you're trying to make a copy of the universe, right, or at least some little piece of it that's rich enough to teach useful problem solving, but like cheap enough to run. I don't know how we get to the matrix. So it's very hard to design rewards and then you have a gap from reality, right? And then you need diversity or you're just memorizing a path through your game, even if that game is like the game of a lot doing research work or the game of a software engineering project and you're overfitting
Starting point is 00:20:52 instead of adapting. And so, you know, I don't actually have a ton of conclusions here, but I've spent a little bit of time trying to understand it. And there's an interesting set of researchers now who, you know, they're working on creating like, you know, more universal environments, world models, or just trying to get better trace data. And so I think this is, I actually don't know that I believe that any of the sort of more immediate term commercial applications of these models are interesting. People are like, oh, we can generate games or we'll have like gaming assets or we'll use the data for robotics training or some other thing. But I do think it is like really interesting as a conceptual path to more AI.
Starting point is 00:21:33 One thing that I think is kind of intriguing in what you said. And it's one of the points that I'll over extrapolate. If you look at the way AI has done certain things, for example, and go, because there is a utility function but no other constraints, it came up with all sorts of crazy moves that a human wouldn't have come up with, or at least hasn't come up with to date, and then humans started studying and copying these moves. They were completely out of the box, but they ended up with a superior outcome. And so I always wonder what that looks like for other areas of human endeavors. If coding shifted from, hey, let's copy how people write code into let's just solve this problem. How different is a type of code that's written? Or what sort
Starting point is 00:22:16 of traditional approaches are just broken that we can then learn from? Because, you've created a utility function with a unconstrained approach to actually figuring it out. And that happens sometimes in biology, right? You'll do these molecular evolution experiments where you'll, like, evolve a molecule to do something. And sometimes it'll do things in a really weird way that you just completely don't expect. You know, you suddenly have this catalyst that works really weird or this binding protein that doesn't do it the way you'd have expected at all. And it's because it's not designed, it's evolved. And so I think this whole notion of evolved systems or self-selecting systems,
Starting point is 00:22:51 can yield really weird insights. And so I'm really excited to see that kind of stuff in terms of the outcomes of that. Me too. One way I visualize this is just like, you know, model is looking in a part of the search space that like humans have not like traditionally been taught by the Go rule book or the prior games or whatever. And it could be in, you know, shape of protein or any other problem. Have you seen the TV show, Pantheon? No. What is that?
Starting point is 00:23:18 It's a TV show about AI and mind uploading. it's a kind of niche animation TV show. You should watch it. Everybody should watch it. But I think it's really interesting because the uploaded beings at some point like becoming your full self or at least for us would be humans learning to think differently.
Starting point is 00:23:38 It is breaking through your constraint of how you might traditionally solve the problem or see yourself. And so I do think that like thematically is one of the more inspiring things about AI. That's interesting. Yeah. I feel like there's a lot of sci-fi books
Starting point is 00:23:49 where eventually you have, like, upload, you know, your brain is uploaded into the cloud or whatever. And then there are all sorts of controls you suddenly have access to you that you didn't have before. So, for example, you should be able to fine tune your emotions or your emotional state and dial it up and down, literally with dials. I think there's always these really interesting meta questions of like, if a human upload were to occur, what does a transhuman species look like? And what are the capabilities set that aren't like a priori obvious that you suddenly exposed. I mean, obviously you could also spawn instances of yourself and have those things go do shit for you and then merge back in and maybe some of them don't want to merge back in
Starting point is 00:24:25 and then who's the real identity and, you know, all that stuff. So it's kind of fun. I'm told that the modulation of emotions and attention actually doesn't require upload. Like Fred and some professors we know would say it's just ultrasound devices coming soon to, you know, a consumer shelf near you. But we can talk about that on next year's episode. Yeah, sounds good. Okay, man. Been good to hang out. Mark, it is crystallized. See you guys next week. It's all crystal clear, except for most things. Find us on Twitter at No Pryor's Pod. Subscribe to our YouTube channel if you want to see our faces.
Starting point is 00:25:00 Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no dash priors.com.
