No Priors: Artificial Intelligence | Technology | Startups - Listener Q&A: AI Investment Hype, Foundation Models, Regulation, Opportunity Areas, and More
Episode Date: April 27, 2023

This week on No Priors, Sarah and Elad answer listener questions about tech and AI. Topics covered include the evolution of open-source models, Elon AI, regulating AI, areas of opportunity, and AI hype in the investing environment. Sarah and Elad also delve into the impact of AI on drug development and healthcare, and the balance between regulation and innovation. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes: [0:00:06] - The March of Progress for Open Source Foundation Models [0:06:00] - Should AI Be Regulated? [0:13:49] - Investing in AI and Exploring the AI Opportunity Landscape [0:23:28] - The Impact of Regulation on Innovation [0:31:55] - AI in Healthcare and Biotech
Transcript
Hey, everyone, welcome to No Priors.
Today we're going to switch things up a bit and just hang out and answer listener questions about tech and AI.
The topics people want us to talk about include everything from the evolution of open source models to the balkanization of AI,
Elon AI, which I think will be super interesting to cover, regulating AI, and AI hype in the investing environment.
Let's start with the march of progress for open source models.
I guess, Sarah, what have you been paying attention to, and what are some of the more interesting things that you've seen happening right now?
Yeah, so there's nothing out there today in open source that is at GPT-4, GPT-3.5, or Anthropic Claude quality, right?
So there's one player out in front, and that's OpenAI, but I think the landscape
has changed a lot over the last couple months.
Like Facebook's LLaMA is quite good.
Many startups are just using it despite its licensing issues, assuming Mark won't come
after them.
And then you have a number of other releases that have happened, right?
Together just released a pre-training dataset, which seems quite good.
Stability just released Stable Diffusion XL in the image gen space. And so I think the larger dynamic is that there's been an
increasing number of people and teams that now know how to train large models. The cost of a flop
is only going to go down. There's a lot of investment in like distilling models. And a lot of
researchers would claim, and you and I know, that it's going to be 5x cheaper to train the same
size model the second time around, once you've made your mistakes and know what you're doing.
And then you have these other accelerants, like you can use these models to annotate your
datasets and increasingly do advanced self-supervision.
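To make the annotation accelerant concrete, here is a minimal sketch of using a strong model to label raw examples so a smaller model can train on them. The `llm` function is a hypothetical stand-in for whatever completion endpoint you use, not any particular provider's API.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted completion endpoint."""
    raise NotImplementedError("wire up your model provider here")

LABELS = ["positive", "negative", "neutral"]

def annotate(texts: list[str]) -> list[dict]:
    """Label raw examples with a strong model for downstream training."""
    dataset = []
    for text in texts:
        prompt = (
            "Classify the sentiment of the following text as one of "
            f"{LABELS}. Reply with the label only.\n\nText: {text}"
        )
        label = llm(prompt).strip().lower()
        if label in LABELS:  # discard malformed answers rather than guessing
            dataset.append({"text": text, "label": label})
    return dataset
```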
So if VCs are going to continue to fund foundation model efforts, including open-source
foundation model efforts, then, if I were a betting woman, and I am, I bet there's a 3.5-level
model in the open-source ecosystem within a year.
And I didn't personally believe that would be true, like a few months ago.
I guess that puts it about two to three years behind when GPT-3.5 came out, though. And so do you think that's going to be the ongoing trend,
that there'll be a handful of companies that are ahead of open source by, you know,
one or two generations? Yeah, I think that's like the status quo. So if we just straight line
project, I imagine that will continue to happen. And the real question is like,
can you stay in the lead if you are OpenAI, and get paid for that?
Or is that even the objective of the organization anyway? Like, I think, you know,
if you have a great leader and a lot of resources and a lot of really talented people,
that's not something I want to bet against.
Is there anything you think is coming in terms of other big shifts in the model world,
either on the open source side or more generally?
Yeah, I mean, we should also talk about just like stuff that you're interested in investing in
and generally paying attention to.
But I think the big idea that's been very popular over the last few weeks is autonomous agents,
right? And, I want to hear what you think about this too, but I don't think that's necessarily
an architectural change. For our listeners, the basic idea is to orchestrate LLMs in this
iterative loop towards some high-level goal, where they're doing planning and memory and
prioritization and reflection. And so you're not necessarily changing the architecture of the
LLM itself, but this orchestration allows you to, like, do many new things, possibly, the
classic example being, like, make money on the internet for me. And there's a good number
of hackers trying to figure out how to make agents that, for example, like, analyze demand,
find a supplier, set up a drop ship Shopify store, generate ads, then promote that store on
social, right? The whole loop being like one call to an agent with this high-level goal of
make money on the internet for me. Do you think this stuff is interesting around autonomous agents?
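For listeners who want to see the shape of this in code, here is a minimal sketch of the loop Sarah describes: plan, act, reflect, and repeat toward a high-level goal. The `llm` and `run_tool` functions are hypothetical stand-ins, not any specific framework's API.

```python
def llm(prompt: str) -> str:
    """Hypothetical completion endpoint; not a real library API."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Hypothetical tool executor (web search, API call, storefront setup, etc.)."""
    raise NotImplementedError

def agent(goal: str, max_steps: int = 10) -> list[str]:
    """Iterate plan -> act -> reflect toward a single high-level goal."""
    memory: list[str] = []  # running context carried across iterations
    for _ in range(max_steps):
        # Planning: ask the model for the next action given the goal and history.
        plan = llm(
            f"Goal: {goal}\nHistory so far: {memory}\n"
            "Propose the single next action, or reply DONE if the goal is met."
        )
        if plan.strip() == "DONE":
            break
        # Acting: execute the planned step and capture the result.
        result = run_tool(plan)
        # Reflection: summarize what was learned and fold it back into memory.
        note = llm(f"Action: {plan}\nResult: {result}\nSummarize what we learned.")
        memory.append(f"{plan} -> {note}")
    return memory
```

In principle, one call like `agent("make money on the internet for me")` would then drive the whole demand-analysis-to-storefront loop described above.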
I think it's super interesting. And there's an old saying that the future is already here, it's
just not evenly distributed. And I feel like that's one of those things that people in the
AI community have been talking about for a while, and there have been very clear ways to do it.
And then I think there's one or two people that went and implemented interesting things there
in terms of Auto-GPT or other things. And then everybody's like, oh, my gosh, this can happen.
And I think a lot of people in the community are like, this is really cool. But at the same
time, yeah, of course it can happen, you know, because effectively you have some form of context
as an AI agent is acting. And then you use that context to inform the next action and sort of update,
you know, the prompt or what the model's going to do.
I think there's other forms of memory that people have been talking about that are super
interesting.
Like, how do you make that a bit more of a cohesive part of how an LLM or AI agent functions?
Because right now, effectively, every time you start a new instance of ChatGPT, a new chat,
you've lost the context on all the other sessions you've had.
And so a lot of what people are thinking about is how do I create ongoing context so that
whatever chatbot or whatever API I'm using remembers everything else I've done with it over time
or perhaps everything it's done with every other user over time. And then that becomes really powerful
because you're effectively crowdsourcing an understanding of the world and then integrating it
into an AI system and agent. And so suddenly you have global context. Like imagine if you as a person
understood the life of every other person who's lived and then you had all the context around what
that means in terms of just how you operate in the world, right? And so I think...
I wouldn't operate anymore, Elad.
We'd be hive mind.
Yeah, exactly.
It's just the hive mind.
So I think that's where we're all heading.
So you heard it here first.
Okay, fine.
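As a concrete illustration of the ongoing-context idea Elad describes, here is a minimal sketch of cross-session memory: persist each exchange, then prepend recalled context to the next prompt so a new session doesn't start from zero. The on-disk store and the `llm` stub are illustrative assumptions, not any particular product's API.

```python
import json
from pathlib import Path

STORE = Path("memory.jsonl")  # illustrative store: one JSON record per line

def llm(prompt: str) -> str:
    """Hypothetical completion endpoint; not a real library API."""
    raise NotImplementedError

def remember(user: str, text: str) -> None:
    """Append one exchange to the user's persistent history."""
    with STORE.open("a") as f:
        f.write(json.dumps({"user": user, "text": text}) + "\n")

def recall(user: str, limit: int = 20) -> list[str]:
    """Return the user's most recent exchanges across all past sessions."""
    if not STORE.exists():
        return []
    with STORE.open() as f:
        rows = [json.loads(line) for line in f]
    return [r["text"] for r in rows if r["user"] == user][-limit:]

def chat(user: str, message: str) -> str:
    # Prepend recalled context so a fresh session keeps prior state.
    context = "\n".join(recall(user))
    reply = llm(f"Prior context:\n{context}\n\nUser: {message}")
    remember(user, f"User: {message}\nAssistant: {reply}")
    return reply
```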
While we're on this topic of directionally AGI,
there's been a lot of call for regulation of AI from, you know,
Sam Altman to Satya Nadella to Elon Musk.
Do you think AI should be regulated?
You know, I think the first question is why do people want to regulate to begin with?
And I think there's, you know, two or three reasons.
One is, if you're an incumbent, it actually really benefits you to get lock-in.
And one of the best ways you can get lock-in is to have regulators get involved
because they start blocking innovation and creativity and new efforts.
And there's that famous chart of where prices have gone up by industry
and where they've gone down.
And they've largely gone down in areas that have been unregulated traditionally.
That's things like software or certain types of food products or other things.
And then there's areas where prices have gone up dramatically, and that's education, it's health care, it's housing, it's the most
regulated industries. So regulation tends to lock in incumbents. It means you have fewer people
making drugs and you have fewer people doing all sorts of things that could actually be quite
useful. So that's one thing. The second is, I think, some people are just scared. And in some cases,
you could say, well, there's reasons to be scared, right? Like, what if the AI is used to unleash a virus,
or what if the AI is used to cause war? And if you look at the history of the 20th century,
humans have done that pretty well on their own already, right? It's not a new concept that bad things will happen, and often they're driven by other people versus technology, right? And of course, technology can have accidents or can be misused, but fundamentally, usually people have driven a lot of the really bad things that have happened over time. And there's a really long history of doomers who are wrong, right? And I should say, by the way, on AI, I'm a short-term optimist, long-term doomer, right? I actually think eventually there may be an existential threat from AI, but I think in the next ten years, you know, everything will be okay.
There may be accidents or maybe terrible things that happen,
but fundamentally, it won't be different from any other period.
But if you look at the doomerism in the past,
it's things like public intellectuals worried about swine flu and nothing happened.
A lot of people worried in the 70s about overpopulation:
we're going to have too many people, the world will starve,
and we're going to have global famine, and that didn't happen.
And so we have a lot of examples of people in the past kind of predicting doom
when nothing happened.
And we also had that during COVID,
where a lot of people said COVID is the worst thing that ever happened to the world,
and then they would be hosting dinner parties unmasked inside with large groups,
you know, later that evening.
And so I just think you kind of have to look at people's actions versus their words.
And fundamentally, you know, my view would be, let's not regulate right now, at least most things.
I think the things that maybe should be regulated are things related to export controls.
So there may be advanced chip technology that we don't want to get out of the country,
and we already have those sorts of export controls on other capabilities.
We may want to limit the use of AI for certain defense applications.
Do we really want a really smart,
hyper-intelligent AI agent driving swarms of offensive drones or weaponry?
And so there may be some need to do some sort of global regulation for things like that,
or at least something like what we've done for chemical weapons,
and then I think in the long run we may want to think twice about advanced robotics
and their implications as AI becomes more of an existential threat to humanity. But overall, like, if I had to choose right now, I'd say don't regulate in the
short run, except for those areas that I mentioned. And then I think that the big pivot point
for regulation may actually come during the 2024 election. Because I think that's the moment
that people will show examples of AI being used to influence the election or influence
voting behavior, just like ads influence voting behavior, right? But AI could write better
ad copy or do other things. I worry a bit about that becoming the reason that people claim that
they should now regulate things, just like they got really aggressive about social networking.
So I don't know. What do you think? I largely agree with that. I feel like it's worth like
describing what I think are the two more rational cases. By the way, I think it's too
early to regulate; I just want to make sure that's very clear. But I think the two rational cases I've heard,
because I keep asking smart people that I don't think are taking cynical actions, why they're afraid, or, you know, short-term afraid, or why they think this makes sense.
And the two things I've heard are, one, you know, this is unlike the past because of the speed of progression, like this hard takeoff idea. And, you know, I see you nodding and smiling, but when very, very smart people who are working at the state of the art tell me that they're concerned within a 10-year band for humanity, because of the ability of this current generation of models to be used to train the next generation of models,
and we're all very bad at thinking about compounding.
I'm like, okay, that's not a, like, completely unreasonable point of view.
I think the other is more of a, it's more of a tactical thing for the industry,
which is, as you said, for, you know, whether it be the election or some other trigger,
like, there's a version of the reaction to this from, you know, people who are afraid
or from, you know, political opportunists to go in two directions.
One is, like, mass surveillance, right?
Or one is, like, complete lockdown, right? So I think the tactical thing is to, like, try to
create a democratic process that gets ahead of it with something that's a reasonable path
forward. But largely, I feel it's very early to be figuring that out. And then you also have the problem
of, like, if you're talking about the more existential risks or sort of the AGI risk, like,
alignment research is very tied to capability research, right? And so it's sort of impossible to be
like, we're going to stop making any progress on research, but figure out how to control this
stuff. Yeah, absolutely. And I think related to that, you know, I think it's really important to
your point to separate out almost what I consider technology risk from species risk, right? Technology
risk is there are some bad things that can happen due to technology being abused, right? And that
could be a nuclear disaster or that could be an AI being used to shut down a pipeline or to crash a
flight or to do something really bad. And those sorts of things already happen. But, you know,
you could imagine AI could accelerate it.
In that case, you could literally turn off a bunch of servers, right?
You could turn off every machine on the planet if you really needed to and humanity would
keep going and it'd be a reset, but we'd reset fine.
Separate from that, there's species level risk.
Like, is there an existential threat to humanity and that's like an asteroid hits the planet
and kills everybody?
And I think a lot of the people who talk about these things mix those two things.
And I think the true doomer view is that AGI eventually becomes a species, we compete
with it, and then it wipes out all humans.
And in order for an AI to rationally want to kill everybody,
you'd need some replacement for the physical world
because eventually all the hard drives would burn out and the AI would die, right,
if it existed as a species or a life form.
So you need physical form for the AI in order for it to truly be an existential threat.
And that's why if I were to focus on an area, it'd probably be robotics or something like that.
Because that's where you suddenly give physical form to something.
And if you're like, oh, isn't it great if AI can build my house,
and AI can now build a data center,
and now build a solar farm,
and eventually now build a factory,
then you've basically created an external system
that no longer needs people.
And that's when I think there's real risk.
And that's why on the 10-year time horizon,
I'm not that worried because robotics and atoms
in the real world takes a lot of time.
So even if you have this hyper-intelligent thing running,
the reality is if you really needed to,
you could turn off every server on the planet.
Yeah, I agree with the embodiment being like a key piece
in this theory that the AIs are going to kill us.
And we're pretty far away from that.
Okay, so one question we got from listeners, and one that I'm sure you get all the time: there's a ton of hype in the AI investing and startup world right now.
What do you think of it? Is it justified? Is it appropriate?
Yeah, I think we've both lived through a couple different hype cycles now, right? There were hype cycles around social and mobile, and then the cloud, and then, you know, multiple crypto hype cycles.
And the reality is out of all those hype waves, interesting things emerged, right? And maybe in the standard hype cycle, 95 or 99%
of things fail. But there's still like the 1% that work, or maybe it's 5% work and 1% end up being
spectacular. And I think the hard part usually is to know what's actually going to work because
so many things seem so overlapping and similar. And so I remember when the mobile wave happened
or mobile and social at the same time, a bunch of different people I know started mobile photo
apps. And each one of those things took off. And so you'd suddenly see something go from zero
to a million users in a week. Just literally, it just spread virally. And none of them stuck,
right? They all burnt out,
and the only one that really stuck was Instagram,
and in part that's because Instagram emphasized filters
which things like Camera Plus already had
and then in part it emphasized the network.
It's like let's have a follow model like Twitter
and that's the thing that really worked.
And so it feels to me like
if you'd gotten excited about the overall cycle,
you were right, but if you got involved
with a wrong set of photo apps
or you built the wrong thing,
then you were wrong in some sense.
I guess you were right about the trend
but wrong about the specific substantiation of it.
And it seems like the same thing here.
And so I think often it's that question
of, you know, Peter Thiel has a good saying, which is you don't want to be the first to market,
you want to be the last standing. And so I think it's a similar thing here. How do you end up
being the last person standing or last company? And it may be the same thing as being the first
mover, right? It's Amazon and books or things like that. But sometimes it means you actually
do something a bit smarter and you come later in the cycle and it's fine. So, I mean, are there
specific areas you're most excited about in this wave or cycle or opportunities that you think
are these things that are obviously going to happen or important to happen?
Absolutely. I mean, well, and I want your ideas. Some of them are shared ideas, to be fair. But I would agree. Actually, as a data point, I was just over at OpenAI yesterday, and they're biased, perhaps in a way that I'm also definitely biased. But a friend was saying they actually think that investors are being somewhat wary at the application level right now because they can't figure out what's going to be left standing. Right. It's a very different competitive dynamic.
But the market is extreme for researcher-led foundation model companies because everybody is pretty sure OpenAI is going to be around, right? And I agree the applications are going to be non-obvious. But as one example, any investor that claims they knew image generation from text was a killer use case a year or two ago, besides you, is just empirically wrong, given David's completely investor-free cap table and amazing business, right? So,
David, in case you're listening, I still love Midjourney and want to invest.
But that's why this podcast exists.
But, you know, in terms of, like, specific things that I'm interested in now, I'd say, like,
I think there are a lot of things on the application side that are exciting.
So to start with some of those, I think, you know, voice synthesis and dubbing are going to be
just a huge unlock for, like, content providers and publishers.
Like, I'd like to back something in that space.
I was just talking to some people at a very large financial institution, and they said the biggest potential
cost savings, you know, on order of tens of millions of dollars a year for us, is in turning every
line of code we have into explanations for a regulator.
And that's at once, like, pretty specific to them, but also not, right?
I think of the areas of audit, tax, compliance, accounting, reconciliation:
there's a lot of natural language there that could be better served by semantic understanding.
And so I think that's an obvious area.
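As a sketch of the code-to-explanation use case just described: walk a repository and ask a model to produce a plain-English explanation of each file for an audit trail. The `llm` function is the same hypothetical stub assumed in the earlier sketches.

```python
from pathlib import Path

def llm(prompt: str) -> str:
    """Hypothetical completion endpoint; not a real library API."""
    raise NotImplementedError

def explain_repo(root: str, out: str = "explanations.md") -> None:
    """Write a regulator-facing summary of every Python file under root."""
    with open(out, "w") as report:
        for path in sorted(Path(root).rglob("*.py")):
            source = path.read_text()
            summary = llm(
                "Explain, for a financial regulator, what this code does and "
                f"what controls it implements:\n\n{source}"
            )
            report.write(f"## {path}\n\n{summary}\n\n")
```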
I think annotation is changing again, right?
And, this is like a very specific idea, but we can use LLMs to do much more here.
We talked about agents.
And then this isn't necessarily a specific company idea, but I think, architecturally,
retrieval is a field of active research, but the idea of personalizing LLMs with enterprise
data is an important, but very tricky one. You have to do data management. You have issues
in scalability, sync, access control. You likely want to apply traditional IR. If you own
both retrieval and the model, you can do very magical things. And so I think, like, the
ChatGPT retrieval plug-in is super cool, but it just doesn't serve a whole host of use cases.
And I think this entire, like, half of the stack is still missing.
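Here is a minimal sketch of the enterprise-retrieval pattern Sarah describes, assuming hypothetical `embed` and `llm` endpoints: embed documents once, enforce access control at query time, and put the top matches into the prompt. A production system would add the sync, scaling, and traditional IR machinery she mentions.

```python
import math

def llm(prompt: str) -> str:
    """Hypothetical completion endpoint; not a real library API."""
    raise NotImplementedError

def embed(text: str) -> list[float]:
    """Hypothetical embedding endpoint."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class DocStore:
    """Toy vector store with per-document access control."""

    def __init__(self) -> None:
        self.docs: list[dict] = []

    def add(self, text: str, allowed_groups: set[str]) -> None:
        self.docs.append({"text": text, "groups": allowed_groups, "vec": embed(text)})

    def search(self, query: str, user_groups: set[str], k: int = 3) -> list[str]:
        qv = embed(query)
        # Access control is enforced at retrieval time, before anything
        # reaches the prompt.
        visible = [d for d in self.docs if d["groups"] & user_groups]
        visible.sort(key=lambda d: cosine(qv, d["vec"]), reverse=True)
        return [d["text"] for d in visible[:k]]

def answer(store: DocStore, user_groups: set[str], question: str) -> str:
    context = "\n---\n".join(store.search(question, user_groups))
    return llm(f"Context:\n{context}\n\nQuestion: {question}")
```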
So those are like a couple of the things that we're sort of explicitly, like, hunting around for.
But what are you paying attention to?
Yeah, I mean, I think we have a lot of overlap, as you know, so I'm super interested in sort
of voice synthesis, dubbing, and related, both in terms of infrastructure, but then also in
terms of application areas.
And so I think that's going to be a really big sea change that perhaps people aren't
paying enough attention to.
I'm actually quite long on compliance in general.
Like, I've done a bunch of things like AgentSync and Medallion and other compliance-related
companies in sort of the old world.
And so I think that's just an area where there's always going to be opportunity:
converting spreadsheets and offline processes and, you know, random checks and docs into code
is, you know, really powerful.
I think there's a lot to do on the app side.
I'm actually maybe on the other side of people who think that it's impossible to tell what's good
and, you know, nothing's defensible and everything's just a wrapper.
on GPT or whatever. And I actually think there's tons and tons to do there. I mean, Harvey AI,
which we're both involved with, I think it's a great example on the legal side. But I think
there's, there's, you know, two dozen things like that to build over time. And it probably
takes five years for all those things to get discovered and built and substantiated. So I don't
think it's like this year there'll be 12 of them, but I think, like, every year there'll be
a couple of really interesting ones. And then there's probably a lot to do on the tooling side, right?
Obviously, LangChain is sort of a hot one in the area, but there's everything from, you know,
exploring vector DBs like Chroma on through to other forms of infrastructure.
And so, you know, LlamaIndex and other things.
So I just think there's a lot to be done at every level of the stack.
It would be interesting to ask like what happens on the foundation model side,
because to some extent the question is whether we've locked in a few of the leaders or there's more to come.
And I think the Elon Musk startup that's rumored to exist is sort of an interesting example of a new entrant.
And back to regulation, you know, Musk was asking for a six-month moratorium on progress,
which seems to me very self-serving if you're simultaneously starting an LLM company, you know.
Just hold off until I catch up, right?
Yeah.
And if I was in that position, I'd do the same thing.
Don't get me wrong, you know, so it's not meant as a dis.
It's just meant as a, you know, remember people's incentives.
But I do think there may be some interesting things to do on the foundation side.
And I do think some people are doing that in a vertical specific way.
They're saying, hey, we're going to build a healthcare-specific model.
and, you know, Bloomberg did their BloombergGPT, or whatever it was called, on the financial side.
And so I think you can clearly see these verticals emerge, and a lot of people obviously
are debating: will a general-purpose model just cover all those use cases, or are you going to have
bespoke sort of vertical models? And what part of the actual logic and synthesis and sort of
magic of these AI models comes from the fact that you've trained on a massive amount of data
in language, and then you're applying it to a specific area with potentially unique data sets
overlaid? Or is it something that can be dealt with in a vertical-specific way, where you don't
need that broad-based understanding of the world? So I think that's a
really interesting area of like exploration. And it's, I have no idea what to predict there. I don't
know if you have any thoughts on that. Well, I would agree. I think there is a real
opportunity for vertical-specific models, where you can imagine that control, for either compliance or
safety or just performance, like reliability of input data, makes sense, right, as well as
like if there are architectural differences because, for example, you have multimodal data in
healthcare and pharma, right? If you are looking at protein structures and radiology and
healthcare records, it's not clear that you would want to train that in exactly the same
way as a general web text model, right? So I think that makes sense on the broader foundation model
question. We were talking about open source at the beginning. I think that OpenAI will continue
to be a leader; Anthropic is very dangerous here, like a really talented team. But the number of people
who know how to train large models goes up, and the cost of a flop goes down, right? And so I think there's, like,
just a lot of incentive in the ecosystem for additional players to compete.
What do you think is the opportunity for incumbents?
Or how do you, how should they react to all of this?
Yeah, I think, you know, obviously with every technology wave, there's a differential
split in terms of where market cap, revenue, employees, innovation, et cetera, goes in terms
of incumbents versus startups.
And, you know, every wave is a little bit different, right?
The internet wave was almost, you know, it's probably 80% startups in terms of value and 20% incumbents.
And then mobile was sort of the other way around.
It was 80% incumbents and 20% startups, right?
The big platforms for mobile were Google and Apple.
But then you had a lot of interesting apps like Instagram and Uber and others emerge.
Crypto was like 100% startup value, right?
And it feels like in this wave, it's probably 80-20 again, right?
Google will probably become a player, right?
OpenAI is closely aligned with Microsoft.
And then, you know, Salesforce with AI is probably Salesforce, right? It probably isn't a new
company. It might be. I actually think certain companies are vulnerable for the first time because
of these capabilities. And that includes everything from ERP providers where there's, like, a
defensive moat through integrations. And obviously, this could make integrating your data into
multiple things really easy and fast. Instead of six months to roll out SAP, maybe you could have
a next gen approach where it takes, you know, a day or two on a new product, right, to do all
the integrations that you would have spent six months on consulting fees for.
And so there may be certain types of companies that are vulnerable, but the reality is, I think,
in most cases, you know, if an incumbent is already doing something and they're quick to integrate
it, then it works great. The one area that may be really interesting is almost like there's
probably room for a new private equity approach, where if you think about how private equity
companies bid on things, they basically look at cash flows and costs and all the rest of it.
And if you can radically decrease cost for people-heavy businesses by using LLMs as like a replacement for certain types of work or at least in augmentation, then you can differentially bid on companies as a private equity shop.
And so I think like people who do buyouts could have this as a strategy.
I don't know that any of them well because most of them tend not to be very technology savvy, but I think there's really interesting alternative things to do at scale there that tend to be kind of under-discussed.
The healthcare side that you mentioned earlier, I think is kind of fascinating because if you look at the cost of developing a drug, for example,
say it's a billion or $2 billion to develop a drug, whatever it is.
Most of that early stage development is in the tens of millions of dollars at most.
And so I think a lot of the default focus of people who don't understand healthcare very well
is to say, I want to use this for drug development.
And it may help with certain aspects of drug development later,
but usually I think the places in healthcare where this will really get applied fast
is on the more operational or services intensive related side.
It's health care delivery.
It's lowering the cost of a doctor visit or telemedicine.
making payments easier and more streamlined
if you're dealing with insurance reimbursement.
And so I think there's really exciting things
to be done there. Like Color, a company
I co-founded, is, for example, thinking about
different application areas. And I just think that that's
like a real wealth of
fruitful areas for people to explore.
If they're healthcare savvy, and of course, with healthcare, the
technology isn't the issue.
Usually the go-to-market is a hard thing, right?
So I think market access is really hard there.
Yeah, I push back on that a little bit.
I'd start with saying I agree on just the
operational friction in health care that we can take down, right? There's so many processes,
like if you look at prior authorization, it's a battle on two sides to fill forms and, like,
compare, like, EHR data and clinical recommendations against a policy, right? And so there's a
piece of that you can't get rid of because, you know, the insurance company has an incentive not to pay
and, hopefully, the provider is trying to provide the best care. But there is a piece you can get rid
of, right? Like, we have models that can, you know, read data, try to understand it, fill out forms. And so I think that there are lots of interesting applications there. The minor pushback, and, you know, you know much more about health care and pharma than I ever will, but, you know, VC is the job of having opinions anyway: I think if this wave of AI can change the cost curve in drug development, it's because, you know, you're not actually impacting the $10 or $20 million up front on what's
considered like research, you're increasing the probability that you're right, right? And so like
all of the cost of, you know, expensive recruiting and clinical trials, it is more efficient because
you're right more often. You just understand more about that. I think the hard part is that a lot of
drug development ends up being, hey, this works great in mice, and let's try it in people now.
And to your point, there may be things that you can learn heuristically in terms of when do things
translate versus not. But I think one piece of it is just basic biological differences.
And then the second piece of it is, this is back to the point on regulatory capture.
To some extent, the incumbents have an incentive to drive up the cost of drug development
so no new startups can actually ever enter in terms of actually making it all the way to a launched drug.
Oh, yeah, but it's interesting. It really is this weird regulatory capture. And so if you look at
the last time a biotech company hit, I don't remember what it is, $30, $40, $50 billion in market cap,
something like that, outside of Moderna, which I think is an exception because of COVID:
the last year such a thing was founded was in the late 80s.
So it's been at this point, what is that?
35, 40 years without a new major biotech company started.
In terms of biopharma, actually developing drugs,
that's shocking, right?
In tech, during that same time period,
there's dozens of companies.
And if you actually look at the aggregate market cap of the entire biopharma industry
(and as a reminder, healthcare is 20% of GDP, and pharma is about 10 or 20% of that):
if you add up the top four or five tech companies, their market cap equals the entire
biopharma industry. And that includes Pfizer and Eli Lilly and Genentech and all these companies, as well as all
the small startups and all the mid-cap companies and everything else.
And so then you ask, why is that?
And these are very profitable companies, right?
They have software-like margins in some cases.
And so as you start digging into the regulatory side, you realize, wow, there are strong reasons for
incumbency to remain as incumbents.
And there is this regulatory process that really delays things quite a bit, in some cases
rightfully, in some cases wrongfully.
And if you look, for example, at the COVID era, we were able to develop multiple vaccines
and do clinical trials and multiple drugs really, really fast.
Part of that was we had a lot of patients, but part of that was we removed all the regulatory constraints.
And we didn't have mass scale adverse events and bad things happening to people.
We just moved really fast.
This actually also happened during World War II.
Winston Churchill wanted a way to treat soldiers in the field for gonorrhea.
And so they rediscovered and developed penicillin in nine months.
They again removed all the regulatory constraints and boom,
nine months later they had a drug that worked really well that was safe.
And so I think it's something to really think about deeply, in terms of what are the incentives
that we're driving against and how we're thinking about cost-benefit societally. But also, the
second you start adding a lot of regulation, things slow way down, and innovation goes way down,
and costs go way up. And that's the reason that, you know, per the earlier conversation, I think
regulation of AI for most things, you know, export controls make sense, a few other things make
sense, but for most things it's probably a really bad idea right now.
I would agree with that. I do think that there is...
That was my rant, by the way.
No, no, stay on the soapbox. Learned something about gonorrhea today. But I think, like, you know, if you think about
the power of government, and I'm strongly on the, like, reduced regulation and encourage
innovation side, you also have these wartime examples of production of airplanes in World War II
going from a few hundred planes to 6,000 in less than a year, right? And here we're talking about
atoms, not bits, right? You have to build plants
and, like, work through all these engineering processes.
And so, you know, I think that there are ways in which, like, from an industrial policy,
national security perspective, like, if we wanted to be winning in AI in a really durable way,
like, I think the paths are pretty clear, actually.
Like, people need compute.
And, like, we have to make it a priority in the United States.
But I would also say, in the field of pharma, I remember, like, asking you, like, I don't know,
seven, eight years ago, like, hey, Elad, like, I know you're interested in aging and, like,
weight loss and the intersection of areas where, like, the demand is very consumer-driven,
right? You might break out there, given demand and also, like, the ability to access different
solutions that are on the edge of, like, consumer purchase, right? Especially as we have more,
like, web-diagnosed, doctor-network-diagnosed prescriptions, right? Do you think this is
interesting? And I'd send you a company or two. And you gave me the same extremely consistent
view, which was like, hey, despite the PhDs, like, you know, the data-driven person, the investor
inside me, says, like, don't do this. Just do tech companies. So no change.
You know, I think that the healthcare services and operation side is super interesting right now due to LLMs.
And so, you know, that's an area where I think there's lots and lots of room to do interesting things.
And I have invested in some software-related companies in the past like Benchling or Medallion in these areas.
But I think it's really about what's the healthcare infrastructure that can be served through software.
And then how can LLMs accelerate it?
I think drug development can be extremely useful to society
and really important and impactful,
and obviously there can be really great outcomes for people
as well as financially it could be a really great thing
but it just comes back to like
why hasn't anybody built a generational company
in a really long time in the area
and there's all sorts of reasons behind that
I mean, we tried that when I co-founded Color, right?
the whole focus was trying to make healthcare more accessible
to people and I still really believe in that mission
so it's more just what are the obstacles to getting there
for different types of companies, and do you want to take on those
obstacles. And if nobody takes them on, then society really suffers. And so it's almost like,
how can you make sure that you remove as many obstacles as possible while still safeguarding the
public, right, so that people don't get hurt by the stuff. But at the same time, perhaps these
things have gotten too extreme. And that really, you know, strangles the ability for the industry
to innovate in ways that it could otherwise. So it's a really interesting area. Are there any other
topics that we should cover from the audience? I'm good. What do you think, Elad? I think we got it all.
Thanks to everyone who submitted their questions.