a16z Podcast - Faster Science, Better Drugs
Episode Date: September 15, 2025

Can we make science as fast as software? In this episode, Erik Torenberg talks with Patrick Hsu (cofounder of Arc Institute) and a16z general partner Jorge Conde about Arc's "virtual cells" moonshot, which uses foundation models to simulate biology and guide experiments. They discuss why research is slow, what an AlphaFold-style moment for cell biology could look like, and how AI might improve drug discovery. The conversation also covers hype versus substance in AI for biology, clinical bottlenecks, capital intensity, and how breakthroughs like GLP-1s show the path from science to major business and health impact.

Resources:
Find Patrick on X: https://x.com/pdhsu
Find Jorge on X: https://x.com/JorgeCondeBio

Stay Updated:
Find a16z on X
Find a16z on LinkedIn
Listen to the a16z Podcast on Spotify
Listen to the a16z Podcast on Apple Podcasts
Follow our host: https://twitter.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
I want to make science faster.
Our moonshot is really to make virtual cells at ARC
and simulate human biology with foundation models.
Why are we so worried about modeling entire bodies over time
when we can't do it for an individual cell?
We can figure out how to model the fundamental unit of biology, the cell.
Then from that, we should be able to build.
My goal is to really try to figure out ways
that we can improve the human experience in our lifetime.
There are a few things that if we get them right in our lifetime,
will fundamentally change the world.
Today we're talking about making science move faster.
My guests are Patrick Hsu, co-founder of the Arc Institute,
and a16z general partner Jorge Conde.
We get into virtual cells and foundation models for biology,
why science gets stuck in incentive knots,
what an AlphaFold-level moment for cell biology could look like,
and how breakthroughs translate into actual drugs and business outcomes.
Let's get into it.
Patrick, welcome to the podcast. Thanks for joining. Thanks for having me on.
I've been trying to have you on for years, but finally, I could get your time.
Here I am. I'm excited to do it. It's going to be great.
For some of the audience who aren't familiar with you and your work at Arc and Beyond,
how do you describe your moonshot? What is it you're trying to do?
I want to make science faster, right? You know, we can frame this in high-level,
philosophical goals like accelerating scientific progress. Maybe that's not so
tangible for people. I think the most important thing is science happens in the real world.
Unless it's AI research, which moves as quickly as you can iterate on GPUs, right,
you have to actually move things around: atoms, clear liquids from tube to tube, to actually make
life-changing medicines. And these are things that take place in real time. You have to actually
grow cells, tissues, and animals. And I think the promise of what we're doing today with
machine learning in biology is that we could actually accelerate and massively parallelize
this. And so our moonshot is really to make virtual cells at Arc and simulate human biology
with foundation models. And, you know, we'd like to figure out something that feels useful for
experimentalists, people who are skeptical about technology and just want to see the data
and see the results, so that it's actually the default tool that they go to use when they want
to do something with cell biology. Okay, well, hold on. Let's back up. Why is science so slow in the
first place? Like, whose fault is that? Whose fault is that? Now, that is a long one. We should get
into it. We should get into it. It's really multifactorial.
Okay. It's this weird Gordian knot that ultimately comes down to incentives, right?
It comes down to, you know, people talk a lot about science funding and how science funding can be
better. But it's also about how, you know, the training system works, right? How we incentivize
long-term career growth, how we, you know, try to separate, you know, basic science work from,
you know, commercially viable work. And generally the space of problems
that people are able to work on today.
I think things are increasingly multidisciplinary.
It's very hard for individual research groups
or individual companies to be good at more than two things, right?
You might be able to do, you know, computational biology and genomics, right?
Or, you know, like chemical biology and molecular glues.
But, you know, how do you do five things at once?
It's increasingly hard.
And we really built Arc as an organizational experiment
to try to see what happens when you bring together
neuroscience and immunology and machine learning and chemical biology and genomics
all under one physical roof, right?
If you increase the collision frequency across these five distinct domains,
there would hopefully be a huge space of problems that you could work on that you wouldn't
be able to otherwise.
Now, obviously, in any university or any kind of geographical region, you have all of these
individual fields represented at large, right, across these different campuses.
But, you know, people are distributed,
and you want everyone together.
Okay, but if I may,
so I would have thought a university
was an attempt to bring in multiple disciplines
under one roof.
You're saying it's not.
It's too diffuse.
It's across an entire campus.
Okay, so the physical, like literally
the physical distance creates an inefficiency.
That's part of it.
And I think the other part is folks
have their own incentive structures, right?
They need to publish their own papers.
They need to do their own thing
and, you know, make their own discovery.
And you're not really incentivized
to work together,
I think in many ways, in the current academic system. And a lot of what we've done is to try
to have people work on bigger flagship projects that require much more than any individual
person or group or idea. That's cool. So like sort of the original hypothesis for the
Arc Institute is if you can bring multiple disciplines together to increase the collision
frequency, as you said, and if one could remove some of the cross incentives that may
exist in sort of traditional structures, the combination of those two things will make science faster.
Yeah, these are absolutely part of it, right? We have two flagship projects, one trying to find
Alzheimer's disease drug targets, the other to make these virtual cells. And, I think it's not
just the people and the infrastructure, but also the models that will hopefully literally make science
faster, in that you could, you know, do experiments at the speed of forward passes of a neural network
if these models could become accurate and useful. Yeah. So that will be one thing that solves
the length of discovery.
You compress the time discovery takes naturally
by just throwing technology at the problem
at the risk of oversimplifying.
Well, we're techno-optimists here, no?
We are.
Yeah.
Why has AI progressed so much faster
in image generation and language models
than biology?
And if we could wave a wand,
like where are we excited to speed certain things up?
To be honest, it's a lot easier.
Yeah.
Maybe that's a hot take, right?
But technology is easier than biology.
Natural language and video modeling
is easier than modeling biology.
Correct.
And to some degree,
like, if you
understand and learn machine learning
and how to train these models,
you have already learned how to speak.
You already know how to look at pictures.
And so your ability to evaluate the generations
or predictions of these models is very native, right?
We don't speak the language of biology, right?
You know, at very best with an incredibly thick
accent, right? So you're training these DNA foundation models. I don't speak DNA natively.
So I only have a sense of the types of tokens that I'm feeding into the model and what's actually
coming out. Similarly, these virtual cell models, you know, I think a lot of the goal is to figure out
ways that you can actually interpret the weird fuzzy outputs that the model is giving you. And I think
that's what slows down the iteration cycle is you have to do these lab-in-the-loop things where you have
to run actual experiments to actually test with experimental ground truth. And, you know, I think
increasing the speed and dimensionality of that is going to be really important. How much of this
is the fact that like, you know, you talk about, you know, we speak biology poorly or with a very
thick accent. How much of this is like if you're training on an image, we can see the image.
And so we can see how, you know, how good the output is. What about all the things in biology that we can't see
or don't even know exist yet?
Like how can we create a virtual cell?
And maybe we should come back to what a virtual cell model is, by the way, for the lay audience.
But like, how can we create a virtual cell model?
We're not even sure if we understand all of the components that are in a cell and how they function.
People talked a lot about this in NLP as well.
There's this long academic tradition in natural language processing, right?
And then it was just weird and non-intuitive and intensely controversial that you could just feed all
this unstructured data into a transformer and it would just work. Now, we're not saying this will
just work in all the other domains, including in biology, but I think there is this, you know,
controversy around what does it mean to be an accurate biological simulator? What does it mean
to be a virtual cell? It's true. We can't measure everything, right? We can't measure, I think,
things like metabolites in really high throughput with spatial resolution. And there are going to be
different phases of capability where initially they model individual cells.
Then they model pairs of cells.
Then they model cells in a tissue.
And then in a broader, physiologically intact animal environment.
And those are length scales and kind of layers of complexity that will aggregate and, you know, improve upon over time.
And I think the other kind of non-intuitive thing in many ways are the scaling laws that you get in data and in modeling.
I'll give you an example, right?
There's a lot of discussion in molecular biology about how, you know, RNA levels don't always reflect protein levels and protein function,
right. And so, well, we don't have, you know, proteomic measurement technologies that are nearly
as scalable as transcriptomic measurement technologies today, at least at single-cell resolution,
certainly. But we're getting there. And you can layer on certain nodes of protein information
that you can add on top of the RNA information. But in many ways, the RNA representation is a mirror,
right? It might be a lower resolution mirror for what's happening at the protein layer. But
eventually what is happening in protein signaling will get reflected in a transcriptional state, right?
And so for an individual cell, this may not be very accurate. But when you imagine the massive data
scale that we're generating in genomics and functional genomics, right, you start to gather
tremendous amounts of RNA data that will read in kind of like what's happening at the protein level
at some sort of mirror echo, right? And then that can, you know, be the case for
metabolic information as well and so on.
So it's a low pixel image,
but if we can get sort of zoomed out far enough,
we'll get a sense of what's going on.
You have to bet on what you can scale today, right?
We're able to scale single cell
and transcriptional information today.
We're able to add on, you know,
protein level information over time.
We'll need spatial information,
spatial tokens, and we'll need temporal dynamics as well.
And, you know, I kind of bucket things into three tiers.
There's invention, engineering, and scaling.
And there are certain things today
biotechnologically that are scale-ready.
And then there are things that we still need to invent, right?
And that's part of why we felt like we needed a research institute
to be able to tackle these types of problems,
that we weren't just going to be an engineering shop
that's just trying to scale single-cell perturbation screens, right?
That, you know, would be interesting,
but in three years would feel very dated, I think, right?
And so there's a lot of novel technology investment
that we're making that we think will bear fruit over time.
Can we flesh out the virtual cell concept,
why that's the ambition we've landed on,
what it's going to take to get there,
or what comes next?
I would say the most kind of famous success
of ML in biology is AlphaFold, right?
And this solved the protein folding problem
of, you know, when you take a sequence
of amino acids, what does the protein look like, right?
And, you know, it's pretty good.
It's not perfect.
It certainly doesn't simulate the biophysics
and the molecular dynamics,
but it gives you a sense of what the end state is
with 90% plus accuracy, right?
And that's the AlphaFold moment that people talk about, right, where anytime you want to, you know, work with a protein, if you don't have an experimentally solved structure, you're just going to fold it with this algorithm.
And we kind of want to get to that point with virtual cells as well. And the way that at ARC we're operationalizing this is to do perturbation prediction, right?
Where the idea is you have some manifold of cell types and cell states, right? That can be a heart cell, a blood cell, a lung cell, and so
on, and you know that you can kind of move cells across this manifold, right? Sometimes they
become inflamed. Sometimes they become apoptotic. Sometimes they become cell cycle arrested. They
become stressed. They're metabolically starved. They're hungry in some way. And so if you have this
sort of this representation of universal sort of cell space, right, can you figure out what are the
perturbations that you need to move cells around this manifold? And this is fundamentally what we do in
making drugs, right? Whether we have small molecules, which started out as natural products from
boiling leaves or antibodies when we injected proteins into cows and rabbits and sheep and took
their blood to get those antibodies, where we were basically trying to get to more and more specific
probes, right? And we had experimental ways to kind of cook these up. Now we have computational ways
to zero shot these binders. But ultimately what you're trying to do with these binders is to
inhibit something, and then by doing so kind of click and drag it from a kind of toxic, gain-of-function,
disease-causing state to a more quiescent, homeostatic, healthy one, right? And the thing that
is very clear in complex diseases, right, where you don't have a single cause of that disease, is
there's some complex set of changes, a combination of perturbations, if you will, that you want
to make to be able to move things around. Now, you know, people talk about this
classically as things like polypharmacology, right?
But, you know, I think we're moving from a, oh, this thing happens to have, you know,
a whole bunch of different targets kind of by accident to we have the ability to manipulate
these things combinatorially in a purposeful way, right?
That to go from cell state A to cell state B, there are these three changes I need to make
first, then these two changes, and then these six changes, right, over time, right?
And we kind of want models to be able to suggest this.
And the reason why we scoped virtual cell this way is because we felt it was just experimentally very practical.
You want something that's going to be a co-pilot for a wet lab biologist to decide, what am I going to do in the lab, right?
We're not trying to do something that's like a theory paper that's really interesting to read where, you know, the numbers go up on an ML benchmark.
But, you know, you practically can decide what are the 12 things that you're going to do in the lab in 12 different conditions.
and actually just test them, right?
And then that's how we kind of enter the lab-in-the-loop aspect of going from model predictions
to experimental measurements to, you know, kind of improved or RL'd or whatever model predictions again.
And the goal is to be able to do in silico target ID, where you can basically figure out new drug targets,
figure out then the compositions, the drug compositions you would need to actually make those changes.
I think if we could do that, we could make a new AI, like vertically integrated AI-enabled pharma company, right?
Which, you know, I think is obviously a very exciting idea today, but I think in many ways the kind of pitch and the framing of these companies precedes the fundamental research capability breakthroughs.
And that's what we're really invested in at Arc, kind of just making that happen along with many other amazing colleagues in the field who want to make this possible.
Yeah, the community.
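To make the perturbation-prediction framing above concrete, here is a minimal sketch of one lab-in-the-loop step, assuming a hypothetical `VirtualCellModel` class and a toy expression-profile representation. None of these names are Arc's actual API, and the scoring is purely illustrative.

```python
# A minimal sketch of perturbation prediction as a ranking problem (hypothetical
# names; not Arc's actual API). A "cell state" is a toy gene -> expression map.

from itertools import combinations
from typing import Dict, List, Tuple

CellState = Dict[str, float]


class VirtualCellModel:
    """Stand-in for a trained perturbation-prediction model."""

    def predict_state(self, state: CellState, perturbations: List[str]) -> CellState:
        # A real model would run a forward pass; this placeholder just echoes the input.
        return dict(state)


def distance(a: CellState, b: CellState) -> float:
    """Euclidean distance between two expression profiles over the union of genes."""
    genes = set(a) | set(b)
    return sum((a.get(g, 0.0) - b.get(g, 0.0)) ** 2 for g in genes) ** 0.5


def rank_perturbation_sets(
    model: VirtualCellModel,
    start: CellState,
    target: CellState,
    candidate_genes: List[str],
    set_size: int = 2,
) -> List[Tuple[Tuple[str, ...], float]]:
    """Score combinations of perturbations by how close the predicted
    post-perturbation state lands to the target state (smaller is better)."""
    scored = []
    for combo in combinations(candidate_genes, set_size):
        predicted = model.predict_state(start, list(combo))
        scored.append((combo, distance(predicted, target)))
    return sorted(scored, key=lambda x: x[1])


if __name__ == "__main__":
    model = VirtualCellModel()
    fibroblast = {"COL1A1": 5.0, "POU5F1": 0.1}   # toy "start" state
    stem_like = {"COL1A1": 0.2, "POU5F1": 4.5}    # toy "target" state
    candidates = ["POU5F1", "SOX2", "KLF4", "MYC", "COL1A1"]
    # The top-ranked combos are what a wet-lab biologist would actually test next.
    print(rank_perturbation_sets(model, fibroblast, stem_like, candidates)[:3])
```

The top-ranked combinations are the handful of conditions that then go into the lab, and the experimental results feed back into the model, which is the lab-in-the-loop cycle described above.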
So if the goal is, and I'm oversimplifying for you,
like if we wanted to get to the AlphaFold moment where, you know,
it kind of gives you a useful structure,
folded structure 90% of the time to use your data point,
if we wanted to take that comparison to the virtual cell model
and we said, okay, 90% of the time,
if I ask the model, I want to shift the cell from cell state A to cell state B,
and it's going to give me a list of perturbations.
And let's say that 90% of the time, those perturbations, in fact, result in the shifting
experimentally, in the shifting from cell state A to cell state B.
How far away are we from that alpha-fold moment for virtual cells?
I find it helpful to frame these in terms of, like, GPT-1, 2, 3, 4, 5 capabilities, right?
And I think most people would agree we're somewhere between GPT-1 and 2, right?
A lot of the excitement was that we could achieve GPT-1 in the first place, that you could see a
path with scaling laws of some kind to kind of make successive generations where capabilities
would improve. But, you know, these are, you know, with like our Evo kind of DNA foundation
models that we developed at Arc with Brian Hie, right? One of the things that we've seen is that,
you know, these are really kind of these genome generations are like, quote unquote,
blurry pictures of life, right? We don't think if you synthesize these novel genomes, they would be
alive, but, you know, we don't think that's actually also impossibly far away. We'll just have to
kind of follow these capabilities. We're generating, we're taking a very integrated approach to
attack this problem, right, where you need to curate public data, you need to generate massive amounts
of internal and private data, build the benchmarks, train new models, and build new
sorts of architectures, kind of doing these things full stack. And we'll just kind of attack this
hill climb over time.
What's the GPT, I'll say GPT-3, moment going to look like? And by that, I mean sort of a public release that alters the public's conception of just what's possible here from a capabilities perspective and also inspires a whole new generation of talent to rush into biology.
Well, the good thing with biology is we have a lot of ground truth, right? There are entire textbooks, right, that describe cell signaling and cell biology and how these things work. And so, you know, even without a virtual cell model at all, right, if you went into ChatGPT or Claude and
you basically, you know, you asked it some question about, you know, like receptor tyrosine
kinase signaling, it would have an opinion on how that works, right? And so I think you would
want the model to be able to predict perturbations that are kind of famous canonical examples
of biological discovery. So I'll give you an example. If you've loaded into the model an iPSC,
kind of an induced pluripotent stem cell state or human embryonic stem cell state, and a fibroblast
cell state, right? Could it predict that the four Yamanaka factors would reprogram the fibroblast
into a stem-like state, right? And then essentially rediscover from the model something that won the
Nobel Prize in 2012, right? That would be sort of one really kind of classic example. And then you could
go do the inverse. If you have a stem cell, can it discover Neurogenin-2, ASCL1, MyoD? Can it find
differentiation factors that will turn that into a neuron or into a muscle cell
or so on? And, you know, these are kind of classic examples in developmental biology, but you could also
use this to try to discover or kind of recapitulate the mechanism of action of FDA-approved
drugs, right? And so you could say, for example, you know, if you kind of inhibit HER2 in, you know,
breast cancer, you know, cell states, right, you would get this type of response.
Or it could predict, you know, certain clones that, you know, will be able to kind of
be more metastatic, or, you know, they'll be more resistant and they'll lead to minimal
residual disease. There are, I think, lots of kind of biological evals that you can kind of add
onto these models over time that are really tangible textbook examples as opposed to, I think,
what the kind of early generation of models do today, which is, you know, very quantitative
things like mean absolute error over, like, you know, the differentially expressed genes and stuff like
that. You know, those are ML benchmarks. And we want to increase the sophistication
into something that you could explain to an old professor who has, you know, never touched
a terminal in their life. By the way, you talk about textbooks as ground truth. Do you think we're
going to find that a lot of the textbooks are wrong? I would say textbooks are compressed, right?
So, for example, when you look at these kind of classic cell signaling diagrams of A
signals to B, which inhibits C, right? That's a very kind of two-dimensional representation of our
understanding of a complex system. Right, right, right. I mean, yes, textbooks are what they are.
They represent the corpus of reliable knowledge, but everyone knows that there are an incredible
number of exceptions. And part of what discovery is, is to find new exceptions, right?
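As a concrete illustration of the "textbook" biological evals Patrick described a moment ago, such as rediscovering the Yamanaka reprogramming factors, here is a minimal recall-at-k sketch. The `rank_reprogramming_factors` callable is a hypothetical model wrapper, and the dummy model is there only to show the plumbing.

```python
# Minimal sketch of a "textbook" biological eval: can a model rediscover the
# Yamanaka reprogramming factors? (Hypothetical interface, illustrative only.)

from typing import Callable, List

YAMANAKA_FACTORS = {"POU5F1", "SOX2", "KLF4", "MYC"}  # OCT4 is encoded by POU5F1


def recall_at_k(ranked_genes: List[str], truth: set, k: int = 10) -> float:
    """Fraction of the known factors recovered in the model's top-k list."""
    hits = truth & set(ranked_genes[:k])
    return len(hits) / len(truth)


def evaluate(rank_reprogramming_factors: Callable[[str, str], List[str]]) -> float:
    # Ask the model which perturbations move a fibroblast toward a stem-like state.
    ranked = rank_reprogramming_factors("fibroblast", "iPSC")
    return recall_at_k(ranked, YAMANAKA_FACTORS, k=10)


if __name__ == "__main__":
    # A dummy model that happens to rank the right genes highly.
    dummy = lambda src, dst: ["SOX2", "POU5F1", "KLF4", "MYC", "GATA1"]
    print(evaluate(dummy))  # 1.0
```

The same pattern extends to the other examples mentioned, like recovering differentiation factors or the mechanism of action of approved drugs, by swapping in a different ground-truth set.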
Why don't you talk about the difference between the simulation of biology and the actual understanding of it?
And what would it take to actually be able to model the extremely complex human body?
You know, some people don't like the phrase virtual cells because it sounds too media friendly.
It's not rigorous enough, right?
But I've always found it funny that, you know, many people are okay with, like, digital twins and digital avatars, which, you know, talk about modeling biology at a way higher level of abstraction.
You know, I think virtual cells, if anything, is actually way more scoped and rigorous than
modeling a digital twin or avatar.
But, you know, I think these are useful words
because they describe the goal and the ambition, right?
That, no, in the long run,
we don't care about predicting the, you know,
kind of perturbation responses of an individual cell at all, actually, right?
Obviously, we want to be able to predict drug toxicity.
We want to be able to predict aging.
We want to be able to predict why a liver cell becomes cirrhotic
when you repeatedly challenge it with ethanol molecules or whatever, right?
And, you know, these sort of chemical or environmental perturbations should be predictable.
I think you just kind of have to layer on the complexity, right?
Like, why are we so worried about modeling entire bodies over time when we can't do it for an individual cell, right?
Where we sort of, you know, accept or broadly believe that this is a kind of, you know, fundamental unit of biological
computation if you will, right? And let's just kind of start there, right? Just like you kind of have
to start with, you know, things like math and code and language modeling, right? And things that
are just sort of easier to check. You can build to super intelligence over time. Yeah, I think that
makes sense, right? That's a very sort of laudable and ambitious goal. We can figure out how to model
the fundamental unit of biology, the cell. Then from that we should be able to build.
Like in early AI, we just started with, like, language
translation. There are, you know, basic NLP tasks, right? This is long before, you know,
the tremendous ambitious scope that we have today. And I think we hopefully can mirror that
type of trajectory, if we're lucky. It seems that biotech and pharma have been shrinking
in interest, or at least in the rate of growth. What's it going to take for these innovations in the science
to reflect themselves in business models and in growth for the industry? A lot of these
biotech startups would try to initially sell software to pharma companies and then they would kind of
realize, oh wow, we're, like, competing for SaaS budgets, which aren't very large. And then, you know, now
they're realizing, oh, we have to compete for R&D budgets, right? And I think, you know, there's this
narrative from the current generation of these companies that, oh, our biological agents will compete for
R&D budgets and replace headcount or something like that, right? Just like we're seeing in, you know,
agents across different verticals, right? Whether or not that will, I think, pan out, I think depends on
just whether or not these things meaningfully allow us to, you know, build drugs more effectively
in the pharma context, right? And I think that's just sort of the most important thing in this
industry. And so I think we believe in virtual cells, not just because we think it will be
a fountain of fundamental mechanistic insights for discovery, but also because if in the case
of success that could be industrially really useful, right? But, you know, we'll, we'll have to
see over time, right? If we have 90% of drugs failing clinical trials, right, that kind of means
two things, and you're not sure what percent of which, right? One is we're targeting the wrong
target in the first place. The second is the composition, the drug matter that we're using,
doesn't do the job, right? It's not clear for each individual failure, which one it is, or if it's
both or what proportion of each, and we'll have to kind of sort that out over time.
Like, you can imagine, even in the case of success where we had 90% accurate virtual cells,
you'll probably end up with suggestions like, okay, now you need to target, you know,
this GPCR only in heart, but not in literally any other tissue, right?
We don't have the drug matter that can do that today.
And so that's also why, again, you probably need research to figure out novel chemical
biology matter that allows you to drug
pleiotropic targets in a tissue or cell type
specific way. And so, you know, I think
part of why biology is slow is because there's just
this Russian nesting doll of complexity
in terms of understanding, in terms of
perturbation, in terms of safety.
And, you know, the crazy
thing is the progress in just the short
time that I've been doing this is insane, right?
Like, I did my, you know, PhD at the Broad Institute in the heyday of developing single-cell
genomics, human genetics, CRISPR gene editing, you know, and, you know, so many other things.
And I think the kind of early 2010's papers on single-cell sequencing would have like 20 cells
or 40 cells, right?
And at Arc in the next, you know, like, I don't know, relatively short amount of time,
we're going to generate a billion perturbed single cells, right?
That's, I mean, how's that for a Moore's law?
Yeah, that's remarkable.
Yeah.
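A quick back-of-the-envelope on that Moore's law comparison, using approximate endpoints from the conversation: roughly 20 cells in an early-2010s single-cell paper versus a target of a billion perturbed cells. The exact years and counts below are assumptions for illustration only.

```python
# Back-of-the-envelope check on the "Moore's law" comparison. The endpoints
# (20 cells around 2012, a billion perturbed cells ~14 years later) are
# approximations taken loosely from the conversation, not exact figures.

import math

cells_early, cells_target, years = 20, 1_000_000_000, 14
doublings = math.log2(cells_target / cells_early)   # ~25.6 doublings
per_year = doublings / years                        # ~1.8 doublings per year
print(round(doublings, 1), round(per_year, 2))
# Moore's law is roughly one doubling every two years, so this is on the
# order of 3-4x faster growth, if the endpoints hold.
```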
Yeah. Jorge, I want to hear your answers to a couple of these questions, too, as the lead of our
bio practice, both on the GPT-3 moment, what that could look like, and also, like, I'm
curious if you think it's GLP-1s or sort of building off that or if it's going to be
something different, and also, what's it going to take for the science to kind of reflect
itself in the business for the industry to grow?
Yeah, so I'll take the second one first, if I could.
So I think, you know, in terms of where the industry is right now, I think one of the big challenges we have is, as Patrick describes very nicely, like, you know, discovery's hard and it takes time. And, you know, the fail modes are exactly as you describe. Oftentimes when drugs fail, which they do 90% of the time in clinical trials, it's because we're going after the wrong thing or we made the wrong thing to go after the right thing, right? Like those are the two fail modes and that happens all too often. And so I think a lot of the stuff that Patrick is described is going to basically improve
our hit rate or our batting average on figuring out what to go after and then making the right
thing to go after said thing. The challenge we have, I think, in the industry is that the bottlenecks
still are the bottlenecks. And the biggest bottleneck we have, which is, you know, a necessary one is
we have to prove that whatever we make, that we have the right thing to go after the right thing,
so to speak, and that when we have it, that it's going to be as, you know, de-risked as possible
before you put it into humans. And we have to be good at making that.
And we've got to make them too.
Yeah, exactly.
And so that bottleneck is a necessarily important one.
That bottleneck should exist.
I'm not suggesting we've got to remove it.
But are there ways to reduce the cost and time associated with getting through the bottleneck
of human clinical trials?
And, you know, it's interesting because, you know, we talk about, you know, all of the
various stakeholders when you're making a drug.
There are the companies.
There's, of course, the science that supported the company that's trying to commercialize
a product and they're the regulatory agencies, you know, and everyone is trying to ensure again
that what's, you know, first and foremost is the ability to discover and commercialize
drugs that are safe and effective for humans. That middle part of actually getting through
that bottleneck is hard to speed up in a very obvious way. Like, you can increase the rate
at which you enroll clinical trials. You can use better technology to change the way we design
these clinical trials so maybe they can be faster or shorter, et cetera. But some of them just
have a natural timeline. You have to go through. Like if you want to demonstrate that a cancer
drug promotes survival, guess what? It's going to take some time to demonstrate a survival
benefit. Or if, you know, you want to do a longevity drug, that by definition is a lifetime,
you know, of a trial in terms of length. So a lot of these bottlenecks are really hard
to get through. So what helps the industry? I think there are a couple of things that help the
industry. One is capital intensity will hopefully at some point go down over time as technology
gets better. Capital intensity is something that our industry faces. In some ways, it looks a little bit
like AI now, right, in terms of the cost of training these models. But the capital intensity is very,
very high. That has not come down. So we've got to get success rates up to impact capital intensity,
to get it down. The second thing is where can we compress time? So good models can help us
compress early discovery time. We still haven't seen, and I think it's coming, but it hasn't
happened yet. We haven't seen artificial intelligence or other technologies massively compress the
amount of time it takes to do the clinical development, the clinical trials, the enrollment of
patients, all those things. We're seeing some interesting things coming. We haven't seen sort of the payoff
there yet. And the third thing is if we can make better drugs going after better things,
the effect size should be higher, so therefore the answer should be obvious sooner. If we can get those
three things right, reduce capital intensity, compress timelines, and effectively increase
effect size in some very tough, sort of intractable diseases, that is what I think fixes the
industry. And from where we sit at the early stage, in terms of being
early stage investors, the reason why that helps us is if the capital intensity goes down and the
value creation goes up, it becomes easier to invest in these companies in the early days
because you get rewarded for coming in early.
The problem we have right now
is that with most companies,
you're not seeing rewards happening
when there's value inflection.
So you come in early,
you bear the brunt of the capital intensity,
and even if a company's successful,
that success isn't reflected in the valuation.
So we're not seeing the step-ups
that you see in other parts of the industry.
And that's just really,
really hard from an investing standpoint.
So I think we need to see those various factors addressed
for this space to really get fixed,
to use your word.
Yeah, that was great.
I have a lot to add on to this.
Please, add away.
You know, just, you know, one, a few simple observations, right?
The first is the amount of market cap added to Lilly and Novo based on the, you know,
development of GLP-1s is like over a trillion dollars, which is more, you know, I mean, Novo's stock has
decreased a lot. So, you know, a trillion dollars, let's say, is more than the market cap of all biotech companies
that have been started over the last 40 years, combined, right?
And I think that, you know, one of the kind of interesting corollaries of this is that, you know, when we have a 10% kind of clinical trial success rate for preclinical drug matter, right, you tend to circle the wagons a bit and try to manage your risk, right?
And so the way that you do this is you try to go after really well-established disease mechanisms where, if I develop new drugs that go after well-understood biology, it should work the way that I hope
it will in the trial, which is really, really expensive and costs a lot more in many ways
than the preclinical research, right? The problem with this is you go after very well-validated
disease mechanisms, but with really small patient populations, right? So then the expected value of
this actually is relatively low. One of the kind of things that we've seen with GLP-1s is
just the kind of value that you can create when you go after
really large patient populations.
And I think that has culturally really net increased the ambition of the industry,
both from the investor and from the drug developer side.
And I think that's something that we should keep our foot on the gas for.
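A purely hypothetical expected-value sketch of the asymmetry Patrick is pointing at, comparing a well-validated small-population program with a riskier large-population one. Every number below is made up for illustration and does not model any real drug program.

```python
# Illustrative expected-value arithmetic with entirely hypothetical numbers:
# a well-validated rare-disease target versus a riskier large-population target.

def expected_value(p_success, peak_annual_sales, years_on_market, total_cost):
    return p_success * peak_annual_sales * years_on_market - total_cost

# Rare disease: higher probability of success, small market (hypothetical).
rare = expected_value(p_success=0.25, peak_annual_sales=0.4e9,
                      years_on_market=10, total_cost=0.8e9)
# Large population, GLP-1-like: lower probability, huge market (hypothetical).
large = expected_value(p_success=0.10, peak_annual_sales=20e9,
                       years_on_market=10, total_cost=2.5e9)
print(f"rare: ${rare/1e9:.1f}B  large: ${large/1e9:.1f}B")
# rare: $0.2B  large: $17.5B -- the asymmetry in going after large populations.
```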
Yeah. And look, I think the trend on that is positive.
I would argue the trend on that is positive.
You're absolutely right.
Like the demonstration of the value that has been created with the increasing use of
GLP-1s and the value transfer that's gone to companies like Lilly and Novo, who I would argue is, like,
very merited, right? Because they've cracked an endemic social problem, um, in terms of managing diabetes
and eventually helping manage obesity. And so I think that's remarkable, and there's a lot of value
that goes to that because they tackled, they cracked a very, very, um, challenging problem for society,
beyond just science. So that's great. And I agree with you, like, the prize, the juice needs to be
worth the squeeze, right? You're right. A lot of biotech has been around, like, go after the low-hanging
fruit because it's low-risk and we got to eat today, right? So you go get it, you know, and you try
to push off the big, the big ambitious indication, the large population, or the really
tough to crack disease. But, you know, I do think we're seeing more and more of that. And by the way,
like, we can get into some of these genetic medicines, but some of these genetic medicines
are going after some of the hardest problems, the things that you quite literally couldn't
address but for editing, you know, DNA. And, you know, I think that's incredibly, you know,
remarkable and laudable and frankly inspiring.
But the fundamental elements of the industry have to work.
So the capital formation is there to support those kinds of things.
And right now it's hard, right, because of the issues we talked about before.
15 years from now, we're back in this room.
We've barely escaped being part of the permanent underclass.
And we're reflecting on the, on sort of the GPT-3 moment, or maybe the legacy of GLP-1s,
sort of beyond where they are now.
What do you think it could be, or I'm curious to get your take on what do you think is going to be the technological breakthrough that we're going to point back to and say, oh, this is really what started it all, or do you think it's going to be sort of a multi-factor combination?
Yeah, look, I think it's going to go back to sort of where we started this combination, conversation, excuse me. GLP-1s as a drug are, you know, four decades in the making or something like that.
You know, these are not overnight successes.
but I do think what we are going to see more of
and our hope is that when you combine the fact
that we're getting better at understanding what to target,
getting better at designing medicines to hit those targets,
by the way, in a whole array of new creative ways.
So we have small molecules, the natural products
that we got from boiling leaves, as you said earlier,
those have gotten, you know,
we're getting really good at designing smarter
and better small molecules that do new things,
that function in ways that they didn't before.
We've gotten quite good at designing biologics or proteins
with a lot of help from things like alpha fold
that helps understand how proteins fold.
We're going to get a lot better at designing
some of the more complex modalities
like the gene therapies of the world
or the gene editors of the world.
And when you can do that
and combine that with our ability to hopefully use things
like virtual cell models to really understand
what to go after,
like we're going to have drugs.
I would hope and I would expect
that the industry will continue to bring forward drugs.
that have very large effect size for very difficult diseases
that hopefully affect a lot of patients.
If that's true, then we'll start to see
some of these really, really difficult diseases
that affect all of society get tackled.
Hopefully, you know, one by one by one by one.
And so we have obesity, we have metabolic disorder,
we're dealing with cardiometabolic disease.
We're starting to see interesting, promising things happening
in neurodegenerative diseases.
You know, if we can, you know, tackle cancer
or at least several cancers that now have begun to be treated more like a chronic condition
than the death sentence that they were in the past, the more we see of that,
like I think that value to society will accrete over time.
And I think this should be an industry that is extraordinarily valued by society
and candidly by the markets.
We have to deliver.
If we play this out, right, and let's say these AI models work, right?
And you can make a trillion binders in silico that will, you know, be exquisite drug matter, right?
We still need to make these things physically and test them in animals and hopefully predictive models and then actually in people, right?
And I think, you know, that will increasingly be the bottleneck in many ways, right?
And, you know, my friend Dan Wang recently released a book called Breakneck, which talks about,
you know, kind of like the U.S. and China and the difference between the two countries and their
philosophy, the way they approach markets.
We're a country of lawyers or a country of engineers.
Exactly.
That's right, right.
China is an engineering state, right?
Its political, you know, folks have engineering degrees.
You know, you need to build bridges and roads and buildings.
And these are the ways that we solve our problems.
Whereas I think, you know, of the first 13 American presidents, 10 of them practiced law.
And from 1980 to 2020, all Democratic presidential candidates,
both VP and president, went to law school.
And so you kind of see the echoes of that in the FDA
and the regulatory regime and all the kind of the bottlenecks
that people talk about developing drugs stateside.
And increasingly you see folks thinking about how we can run phase ones overseas,
right, build data packages that we can, you know, bring back domestically for phase two efficacy trials.
I think that's interesting directionally, but it's not enough, right?
And, you know, I think we need to kind of figure out these two bottlenecks, the making and the testing.
Even if we can solve the designing part.
Oh, I agree.
Yeah, yeah.
That's the bottleneck.
Yeah.
You know, we joke about it.
What you have to do is you have to get a molecule that can go, you know, first in mice and then in mutts and then in monkeys
and then in man. Like, there's, you know, that takes a long time and it's so hard to compress
that. And so when you do, you should make the journey worth, you know, make the journey worth
it, right? So when you fail on the other end of that, like, that's obviously horrible. And so
finding ways to make sure that when you walk that path, that it'll be a successful journey
as often as possible is what this industry desperately needs.
AlphaFold solved the protein folding problem, but it hasn't solved
drug discovery. Or more broadly, what would it take to get to AI drug discovery?
What is sort of the bottleneck on the tech side, at least?
On the tech side?
Yeah, maybe another way to ask the question, because I ask the founders'
version of this question, like to the AI ones that are like, oh, we're going to do AI
for life sciences, for drug discovery.
So my question that I always like to ask founders is give me examples where you think
AI is hyped, potentially overly hyped, where there's real hope, like, sort of,
what do we expect, what's next,
and where we already see real heft?
So, like, if I asked you, like in AI,
where is there hype, where is there hope,
and where are we seeing heft today?
I would say there's hype in toxicity prediction models.
Okay.
So that's the idea that we will say,
I'm going to show you a molecule
and you're going to tell me,
the model's going to tell me if it's going to be toxic or not.
That's right, right?
There's heft in anything to do with proteins, right?
Obviously, protein
binding, but increasingly in protein design, right?
I think there's real heft there.
And then, you know, where there's hype is in multimodal biological models, whatever that means.
Right.
And I think, you know, pick your favorite layers.
It could be, you know, molecular layers.
It could be spatial layers.
It could be, you know.
I mean, actually, I would say there's also heft in the pathology AI prediction models, you know, like, you know,
automating the work of pathologists and radiologists. That's, that's very interesting.
Yeah, that's a powerful use case, for sure. Yeah. And there's a lot of stuff where you don't have
to train, you know, weird biology foundation models and you can write, you know, regulatory
filings and reports and things like that. That's impactful and important. So now I go back to
Eric's question. Why don't, why hasn't AI turned out drugs yet? I think that was your question,
right? You know, AI for drugs is one of these weird things where,
Everyone who works in the industry is trying to claim that their drug is like the first
AI-designed molecule, right?
I feel like in, you know, I mean, increasingly in just a few years, this will just be a
native part of the stack, right?
Just like we use, you know, the internet and we use phones, we're going to have AI and all
parts of the stack, right?
And so it's just going to become a native part of everything that we do.
And so, you know, like, why hasn't it worked yet?
It's this long multifactorial process that we've been talking about today.
There's designing, there's the making, there's the testing, there's the approvals side of it.
And, you know, I think the, I do think safety and efficacy as the kind of two pillars in the industry are the two things that we need to get right, right?
We need to be able to figure out faster ways that we can predict whether or not a molecule will work
and if it's going to be safe or not.
There are like ways that AI can operationalize this.
If you designed a small molecule, right,
you could now computationally dock it to every protein in the proteome
and see if it's likely to bind to off-target molecules.
You can use this to tune binding, selectivity, and affinity
that might be ways to predict, you know, safety and efficacy, right?
And, you know, how will that work?
Well, that's a feedback loop that we'll have to actually test in the lab.
And that's part of what's slow is the testing, you know, takes real hours, days, months,
right, years.
And, you know, that's really why we've picked, at Arc, the virtual cell models as our initial wedge,
because we think it can integrate a lot of these different pieces.
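A minimal sketch of the proteome-wide off-target check described above, assuming a hypothetical `dock_score` function standing in for a real docking engine. The names, scores, and selectivity threshold are illustrative only.

```python
# Minimal sketch of a proteome-wide off-target docking screen. `dock_score` is
# a hypothetical stand-in for a real docking engine (lower = tighter binding).

from typing import Callable, Dict, List


def offtarget_screen(
    molecule: str,
    intended_target: str,
    proteome: List[str],
    dock_score: Callable[[str, str], float],
    selectivity_margin: float = 2.0,
) -> Dict[str, float]:
    """Return proteins predicted to bind nearly as well as the intended target."""
    on_target = dock_score(molecule, intended_target)
    liabilities = {}
    for protein in proteome:
        if protein == intended_target:
            continue
        score = dock_score(molecule, protein)
        # Flag anything within the selectivity margin of the on-target score.
        if score <= on_target + selectivity_margin:
            liabilities[protein] = score
    return liabilities


if __name__ == "__main__":
    # Toy scoring table standing in for a real docking engine.
    toy_scores = {("mol-1", "EGFR"): -9.5, ("mol-1", "HER2"): -9.0, ("mol-1", "ALK"): -5.0}
    dock = lambda m, p: toy_scores.get((m, p), 0.0)
    print(offtarget_screen("mol-1", "EGFR", ["EGFR", "HER2", "ALK"], dock))
    # {'HER2': -9.0} -- a predicted off-target liability worth testing in the lab.
```

The flagged proteins are exactly the feedback loop mentioned above: predictions that still have to be checked experimentally, which is where the real hours, days, and months go.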
In Dario Amodei's essay, Machines of Loving Grace, he predicts, among other things,
the prevention of many infectious diseases and the doubling of lifespans, perhaps, in as soon as
the next decade.
What's your reaction to his essay, his bullishness in some of his predictions?
I think the core intuition that Dario had was the idea that important scientific discoveries are independent, right?
Or they're largely independent.
And if they are, you know, statistically independent, then it would stand to reason that we could multi-parallelize.
And so if we had models that were sufficiently predictive and useful,
you could have not just a hundred of them, but millions, billions of these discovery agents
or processes running at a time, which should compress the timeline to new discoveries
and turn it into a computation problem. I think that is a very futuristic framing for something
that is actually very tangible today. And if we can have virtual cell models at work,
for example, that can start to do these kinds of things that we've been talking about.
Help us, you know, we can have, you know, molecular design models, we can have docking models.
We can then have, you know, when you bind to this thing in this cell versus all the other
off-target proteins, will a cell kind of be corrected in the right way, right?
These kind of layers of abstraction and complexity start to get to things that feel very
tangible through drug discovery. If you could actually traverse these steps reliably and in sequence,
you could start to see how you can get the compression, right? And so I think in the long run of time,
this should be possible. One of the core suppositions in building a good virtual cell model
is that we are feeding it all the relevant data. The right data, yeah. The right data. And so whether,
you know, it's gene expression data or it's DNA data or, you know, any number of factors.
Protein and protein interactions, all the things you describe.
What if we're missing a core element?
Like, what if we just haven't discovered the quark or whatever?
Like, we just don't know what we don't know.
And therefore, what we're feeding the model is fundamentally or importantly incomplete.
I think that's almost certainly true, right?
Like, it seems almost obvious that we're not measuring many of the most important
things in biology, right? And you can of course find many important exceptions for any of these
measurement technologies. Like in biology, we ultimately have two ways to study it in high throughput,
it's imaging and sequencing, right? But there are so many other types of things that you would
care about that those things aren't necessarily going to do at scale, right? And that's really why I think
the stuff that we're talking about of the RNA layer as a mirror for other layers of biology
is one that we've spent a lot of time thinking about.
And there's a difference between a mechanistic model
and a meteorological simulation type of model.
So, for example, if you want to predict the weather, right,
you can build AI models that will predict
whether or not it will rain next Tuesday.
It won't explain physically or geologically or whatever,
why and how that happens.
But as long as it knows if it's going to rain next Tuesday,
you're probably happy, right?
And I would say similarly with a virtual cell model,
it may not tell me literally why.
Just like AlphaFold doesn't tell me literally
why the protein folded this way and how,
but it just told me the end state
and it was reasonably accurate.
I think that would already be very important.
Shifting gears a little bit.
We've been talking about science and biotech,
but in addition, you're an elite AI investor more broadly.
So I want to talk about how you're,
I want to talk about where your investment focus is right now,
just as it relates to AI more broadly, where are you excited? Where are you spending time?
Where are you, you know, looking forward to? Oh, yeah. My goal is to really try to figure out
ways that we can improve the human experience in our lifetime. I kind of think of, like,
if I think about the future that we're going to leave to our children, right? There are a few
things that if we get them right in our lifetime will fundamentally change the world, right? And,
you know, how we live in it. I think synthetic biology is obviously one, right?
you know, think, you know,
GLP-1s, right?
Things that improve sleep, right?
Things that can, you know, improve longevity, right?
These are, these are all things that are kind of, you know,
easy to get excited about.
I think brain computer interfaces is another area
where we're going to see really important breakthroughs
over the decades to come.
And then I think the third is in robotics,
both industrial and consumer robotics, right?
that allow us to basically scale physical labor in interesting ways.
And you can kind of see how each of these three things,
even in the sort of medium cases of success,
really kind of change the world.
And so I'm very interested in helping make these kinds of things possible.
And so there's sort of, you know,
in the kind of techno-optimist sort of vision of the world, right?
There's a few different types of scarcity, right?
There's, you know, it's very easy when you do research to come up with important ideas.
The hard thing is to tackle them in the right time frame.
It's like, you know, writing futuristic, sci-fi things is not that hard.
Being able to actually execute on it in the next five years or eight years, much, much harder, right?
And I would say, you know, academic discovery is littered with plenty of ideas that are interesting and important, but, you know, kind of
long before their time. And in many ways, the story of technology development is, you know,
trying to use new technologies to solve old problems, right? Like, most of our tools are, you know,
for productivity, right, in many ways, whether that's the industrial revolution or the
computing revolution or the current AI revolution. We're trying to kind of do the same stuff.
And, you know, and so, you know, I think there's a relatively small set of very powerful ideas.
New technologies give us new opportunities to attack them.
And there's a set of people and teams that are going to be positioned to be able to do that.
They need to have technical innovation and then an intuition about product and business in a way
that, you know, in the kind of RPG dice roll of the skills that you get in
these three domains, people start at different base levels, right?
And, you know, you might have an incredibly technical founder who doesn't know how to think
commercially, or someone who's just natively a very commercial thinker who, you know, doesn't
have very strong product sense, right, even though they could sell the crap out of it, right? And so I think
these sort of, this sort of three broad categories of capabilities, you need to kind of bring
together in a way that you can allocate capital to in the right times in order to make these
ideas possible in a really differentiated way. Like, this thing literally wouldn't happen if we
didn't get these people together and funded at the right time in the right way. And that's really
what motivates me. And these are kinds of the things that I've been excited about, you know, backing,
you know, longevity companies like NewLimit, right? BCI companies like Nudge, right? Robotics companies
like The Bot Company, right? You know, these are some of the examples of kind of, you know,
things that I think must happen in the world and, man, therefore, should happen. And, you know,
how do we actually find the right people at the right time to actually kind of go on the Fellowship
of the Ring hunt? Yeah. Yeah.
If not too difficult, I want to ask Jorge's question adapted to these additional spaces, robotics, sort of BCI, and longevity, if appropriate, in terms of the three questions, I believe: what's overhyped, where do you see an opportunity or path, and what's got heft already?
I think the cool thing about agents generally is that they do real work, right?
compared to
SaaS companies that came before,
agents replace real productivity.
And I think they have a lot of errors today.
And I would say
the computer use agents
will probably trail the coding agents
by maybe a year.
But it's coming and we'll follow the trajectory
as these go from doing minutes of work
without error to hours to days.
And I think you're going to get
a completely different product
shape as we march through that across legal, BPO, you know, medicine, health care, whatever, right?
And we'll kind of follow that as an industry. And that's going to be really exciting. And I think
that's where we're going to see real heft, because most of the economy is services spend.
It's not software spend. And, you know, the reason why we're all excited about this stuff is that it can
attack, you know, the services economy. And I would say, like, you know, where is there hype?
There's a tremendous amount, right, that's no doubt. The hype is in the model
capabilities, right? And, you know, we're working with an architecture that, you know, dates
back to 2017, right? And if you look at the history of deep learning, it's like kind of every eight
years there's something really different, right? Um, and it feels like in 2025 we're really
overdue for some net new architecture. And I think there are lots of really interesting research ideas
that are bubbling up that could do that thing.
And in many ways, there's a set of really interesting academic ideas,
especially in the golden age of machine learning research from, I don't know, like 2009 to
2015, right?
There's so many interesting ideas, little arXiv papers that have like 30 citations or less.
And as the marginal cost of compute goes down year on year, I think you're going to be able
to take all of these ideas and actually scale them up, right?
where you don't see the scaling laws when you're training them at 100 million or 650 million parameters like back then.
But if you can scale them up to 1B, 7B, 35B, 70B, you start to see whether or not these ideas will pop, right?
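A minimal sketch of that check: re-train an old idea at a few parameter counts and see whether the eval loss traces a clean power law. The loss numbers below are made up purely to show the fit; only numpy is used.

```python
# Minimal sketch of checking whether an idea "pops" with scale: does eval loss
# follow a power law L(N) ~ a * N^(-alpha)? Losses below are hypothetical.

import numpy as np

params = np.array([1e9, 7e9, 35e9, 70e9])   # 1B, 7B, 35B, 70B
losses = np.array([3.1, 2.6, 2.3, 2.2])     # made-up eval losses for illustration

# Fit log L = log a - alpha * log N (a straight line in log-log space).
slope, log_a = np.polyfit(np.log(params), np.log(losses), 1)
alpha = -slope
print(f"alpha ~ {alpha:.2f}")
# A clean straight line (a stable alpha across runs) suggests the idea scales;
# a flat or noisy fit suggests it does not.
```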
And I think that's very exciting because, you know, there's just going to be a lot of opportunity for new superintelligence labs to do things, you know, beyond what the kind of, you know, established foundation model companies are doing today.
Right, as they kind of, you know, in addition to these research teams, right, you know, these are in many
ways becoming applied AI companies, right? They need to build product shape in, you know, all kinds
of different enterprises and do RL for businesses and make money, right? Or build coding
agents and make API revenue, and that's important, and, I think, you know, a timely race to survive
today. But I'm just, you know, very bullish on the research of, say, like a Sakana AI, right?
Which was founded by one of the authors of Attention Is All You Need, right? Llion Jones.
And they're doing incredibly interesting stuff on model merging and how you can have kind of sort of like evolutionary selection of, you know, kind of different, you know, models in an MoE.
And I think
there are sort of opportunities here
in the long run to move beyond
just, like, RL gyms, for example,
and also to kind of figure out new ways to learn
and find, like, kind of, reward signal,
which is going to be really exciting.
It's a great place to wrap.
Gearing towards closing,
anything upcoming for Arc
that you'd like us to know about, anything you want to tease?
For people who want to learn more,
what should they know about?
So AlphaFold in many ways came out of
a protein folding competition called CASP, right?
The Critical Assessment of protein Structure Prediction.
And, you know, we created our own Virtual Cell Challenge at virtualcellchallenge.org,
where we have, you know, $100,000 prizes sponsored by NVIDIA and 10x Genomics and Ultima and others.
And it's an open competition that anyone can enter where you can train perturbation prediction models.
And we can openly and transparently assess these model capabilities, both today
and in subsequent years, and follow them to get to that ChatGPT moment, right?
And so I'm extremely excited about this.
You know, we'd like more people to, you know, train models and apply, both bio-ML experts and engineers from any other domain.
And, you know, I'm, you know, I just, I want this thing to exist in the world.
You know, hopefully we're important parts of making that happen.
But I'd just be happy that someone does it.
Yeah. That's an inspiring note to wrap on.
Patrick, Jorge, thanks so much for the conversation.
Thanks so much, guys.
Thanks for having me.
Thanks for listening to the A16Z podcast.
If you enjoyed the episode,
let us know by leaving a review
at ratethispodcast.com/a16z.
We've got more great conversations coming your way.
See you next time.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or
investment advice or be used to evaluate any investment or security and is not directed at any
investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may
also maintain investments in the companies discussed in this podcast. For more details, including
a link to our investments, please see a16z.com/disclosures.