a16z Podcast - Grand Challenges in Healthcare AI
Episode Date: September 7, 2024

Vijay Pande, founding general partner, and Julie Yoo, general partner at a16z Bio + Health, come together to discuss the grand challenges facing healthcare AI today. They talk through the implications of AI integration in healthcare workflows, AI as a potential catalyst for value-based care, and the opportunity for innovation in clinical trials. They also talk about the AI startup they each wish would walk through the door.

Resources:
Find Vijay on Twitter: https://x.com/vijaypande
Find Julie on Twitter: https://x.com/julesyoo
Listen to more episodes from Raising Health: https://a16z.com/podcasts/raising-health/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
The beauty of technology is that it can unlock not just incremental improvement, but completely
change the world we live in today.
Just think, 100 years ago, we didn't have the internet, commercial flight, or anything that resembled self-driving cars.
But equally in healthcare, 100 years ago, we were still decades away from the double
helix discovery, antibiotics, IVF, and so much more.
And perhaps there's no better industry where we can make these truly monumental shifts than
in healthcare. And in today's episode, we explore these audacious grand challenges over the next
hundred years, or hopefully even less, in healthcare AI. The immediate part is something that
is so good, like not 10% better than what you have now, but like 10x better than what you have
now, that the adoption becomes natural. From always-on clinical trials. A million clinical trials
are just organically running in my population every day, and I have no idea how to harness it.
Spot market pricing. On any given day, at any given hour of the day, you might have a very different dynamic of supply.
More effective scheduling.
Doctors design their schedule in a very protective, like almost a defensive way,
because they felt wronged by the system.
Continuous monitoring.
How do you augment that with more continuous information, whether it be self-reported,
whether it be remote patient monitoring data?
And even the Holy Grail of all Holy Grails, which is AI doctor.
And how far off that might be.
Joining us today are a16z Bio + Health general partners Vijay Pande and Julie Yoo.
This episode also originally came from our sister podcast, Raising Health.
So if you do like this, don't forget to catch more just like it by searching Raising Health,
wherever you get your podcasts, or by clicking the link in our show notes.
All right, let's get started.
So Vijay, you and I talk about the fact all the time that health care is an industry that used to be intractable with respect to
the adoption of technology, but we are also super optimistic that healthcare is now potentially
one of the biggest beneficiaries of technology in the form of AI. And so we wanted to have a
conversation here today about basically what you and I do on a daily basis, which is riff
about some of the grand challenges that we see for builders in the healthcare AI space. And so
let's actually start just with your sort of high-level thoughts on where do you think AI will make
the biggest difference in health care immediately? Yeah, the "immediately" is the hard part of the question, right? Because, like, if we talk about the 20-year arc,
I think a lot can happen.
So the immediate part is an issue of both technology and also people, right?
What will people accept?
What will people adopt?
And in many ways, I think when you look at the history of technology in health care, it comes down to who will buy it, who will incorporate it, how it will work into doctors' workflows.
So I think the immediate part is something that is so good, like not 10% better than what
you have now, but like 10x better than what you have now, that the adoption becomes natural
or so easy to adopt that even 10% better could work.
And so when you think about what could be 10x better,
it has to be something where maybe it's making decisions
or it's helping doctors as a co-pilot
and something that's like a superpower they didn't have.
Or if it's 10% better but easier to adopt,
maybe it doesn't even look like software.
Maybe it looks like staffing
or maybe it looks like you're texting something,
and that's easy to incorporate.
And even if that does a little bit,
that could still be important
because healthcare works at such great scale.
Right. Yeah.
And I think it's timely in the sense
that obviously one of the number one crises
that our healthcare industry is facing right now is a labor crisis
and that we have a shortage of labor to do these kind of highly specialized jobs
that we have, whether it be clinical or whether it be administrative,
but also those individuals who are in those jobs today
are extremely burnt out because of, ironically, the technology burden that we put on them,
whether it be in the form of revenue cycle tasks or EHR workflows and things like that.
So that's certainly something that we hear all the time.
I think the other thing that you touched upon is it's sort of the damn humans
that have gotten in the way in the past, not the technology per se,
but one of the hardest things in health care is behavior change
and whether that be on the part of the patient
to adopt some sort of new behavior
that helps them get better
or in the case of a clinician,
something that sort of changes the way that they do their job.
And I think that's, to me,
one of the biggest opportunities is how do you take things
that constitute behavior change
that have been proven in very niche populations
and productize them, package them in a way
that can all of a sudden be sort of globally applied
to the broader population that should benefit from it?
And so, I mean, we have a number of examples in our portfolio that I think we'll touch on along those lines.
But those are very good and big points to think about right now.
So given what you just described, and let's assume that we all fundamentally believe in this optimistic view of what AI can do,
what are the use cases for which AI can actually have utility in the near term?
And we put forth a sort of a two-by-two, like good consultants.
And that's what we're supposed to do, right?
Yeah, exactly.
And we said, okay, on the one axis, you have B-to-B use cases.
So historically, a lot of technology first gets adopted by the people on the inside, but then on the other side of the axis, you have consumers
or patients. And then the other axis is things that are administrative in nature, so maybe more
back office versus clinical in nature, where you're actually delivering a clinical service
to an end patient. And you've talked about this. The stakes are very different in each of those
quadrants. The area that has had the lowest-hanging fruit so far has been really the administrative B2B side of that equation. How do you think about sort of the health care administration and internal-facing set of use cases?
And let's talk about what we've seen out there
that we think has worked.
Well, that one seems like a no-brainer, right?
Because I think what everyone's worried about
is like being a doctor is really hard.
And if you're having an AI do clinical recommendations
or something like that,
we haven't even figured out where the human goes in the loop
and all these things.
And we will figure it out.
And that's work to do.
If we're thinking about today, thinking about the back office, the thing is that we've got computers there already.
We've got algorithms already.
We also have tons of people.
And you can ask yourself, like, why do you have people doing RCM or other tasks? Are these tasks that actually could be automatable? And that actually could really make a huge impact on sort of the cost.
But it also maybe changes how we think about this as we think of the back office as a data
problem instead of a staffing problem.
Yeah.
That's actually really interesting.
You said algorithm and what that makes me think of is the claim.
So if you think about, like, the way that payments flow in healthcare, 90% of payments in healthcare are reimbursed revenue, where the provider has to submit literally a claim to the payer that is effectively an algorithm in many ways. You could think of the claim as a unit of a piece of logic that needs to be interpreted. I don't know if anyone's ever told me that.
That's actually kind of interesting. Yeah. But then right now, the way that it's processed is like
a very serialized workflow where first you have to interpret, okay, what kind of claim is this?
Is it a professional claim? Is it an inpatient or outpatient claim? And then which payer product is
it? There's like thousands of payer products in any given market. So which one do we bump that up
against? And then within that plan product, you have a whole bunch of rules about, you know, under what circumstances this kind of claim should be a valid one, etc. So anyways, you have this whole value chain of decision making. Oftentimes you have to bring a nurse into that because there might be some clinical judgment; that's where, like, prior authorization comes into play. So you can almost imagine a world in which, like, what if we were able to eliminate the claim and basically say, because we have all this data, as you alluded to, as well as the automation of the set of decisions that need to be made around that atom of data, that you could actually just eliminate that entire end-to-end process and just have real-time payments? So that to me is, basically, you could eliminate 30% of the waste in our system if you were able to do that. It's a holy grail problem.
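To make the "claim as a unit of logic" idea concrete, here is a minimal sketch in Python of that serialized adjudication workflow; the claim fields, the payer product, and the rules are hypothetical placeholders, not any real payer's schema.

```python
# Hypothetical sketch of the serialized claims-adjudication workflow described above:
# classify the claim, match it to a payer product, then evaluate that product's rules.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_type: str      # e.g. "professional", "inpatient", "outpatient"
    payer_product: str   # which of the thousands of plan products this falls under
    procedure_code: str
    billed_amount: float
    prior_auth_on_file: bool

# Toy rule set keyed by payer product; real contracts encode thousands of these.
RULES = {
    "acme_gold_ppo": [
        lambda c: c.claim_type in {"professional", "outpatient"},
        lambda c: c.billed_amount <= 5_000 or c.prior_auth_on_file,
    ],
}

def adjudicate(claim: Claim) -> str:
    rules = RULES.get(claim.payer_product)
    if rules is None:
        return "pend: unknown payer product, route to human review"
    if all(rule(claim) for rule in rules):
        return "pay"
    return "deny or pend: escalate for clinical review (prior authorization)"

print(adjudicate(Claim("outpatient", "acme_gold_ppo", "99213", 180.0, False)))  # -> pay
```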
What's holding that back?
I mean, one, it's just so entrenched. Two, there are companies that are out there that are doing
this, but you do have to digitize it. A lot of this is sitting in PDF documents.
Which ironically, PDFs are not digitized in some sense.
Yeah, I mean, it's not structured data. It's unstructured.
And, yeah, we have like a company called Turquoise that is basically doing this for contracts.
And, I mean, the average payer-provider contract, which, by the way, could represent like hundreds of billions of dollars of revenue for any given pairing of entities is typically like a 200-page PDF document that is completely monolithic.
But any one line in that contract might have huge implications for both the revenue to that provider as well as the cost structure for that payer.
And yet those things don't get litigated but for every two years, when they come up for renegotiation, and no one's looking at that one line. They're looking at the whole kind of aggregate
thing. So, you know, what if you were to digitize structured data within that contract and be
able to run scenarios on it and say, like, what if the price for these 10 services was this versus
that? What would that implication be on the payments flow through that system? You can maybe even redo these things faster than every two years, too. Yeah, exactly.
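As a rough illustration of running scenarios on a digitized contract, here is a small sketch that assumes the contract has already been parsed into per-service rates and that historical utilization counts are available; the service names and numbers are invented.

```python
# Hypothetical what-if analysis on a digitized payer-provider contract:
# reprice a handful of services and see the impact on total payments.
current_rates = {"office_visit": 120.0, "mri": 900.0, "er_visit": 1500.0}   # $ per service
proposed_rates = {"office_visit": 110.0, "mri": 950.0, "er_visit": 1500.0}
annual_volume = {"office_visit": 40_000, "mri": 3_000, "er_visit": 5_000}   # utilization

def total_payments(rates: dict[str, float]) -> float:
    return sum(rates[svc] * annual_volume[svc] for svc in annual_volume)

baseline = total_payments(current_rates)
scenario = total_payments(proposed_rates)
print(f"baseline ${baseline:,.0f} -> scenario ${scenario:,.0f} "
      f"(delta ${scenario - baseline:,.0f})")
```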
So the other really interesting thing about this concept of newly digitized streams of information and more longitudinal information
about patients is what if we could have an always-on clinical trial infrastructure in our country
such that on-demand you can slice and dice the population for exactly the characteristics of humans
you want and produce a data analysis either retrospectively or prospectively.
What do you think of that idea and what are the barriers for us to achieve something like that?
Yeah, the funny thing about that idea is, I mean, it's a very exciting idea because we could gain
so much knowledge and we can improve healthcare so much.
But, like, it's a ridiculous thing to imagine doing, like, without something like AI.
Without AI, I don't even know how you do that, how you could pay for that.
What do you mean by that?
Like, what parts of that do you think AI enables?
So really, in many ways, this becomes a data problem in terms of slicing and dicing
and then keeping track of, like, this person got this drug, and at this moment,
had this response, and sort of understanding the causal nature.
Right.
So what we love about clinical trials, and I'm going to get a little wonky, is that, like,
you have to have some sense of causality.
Like, I took this drug, this happened.
And people say correlation doesn't mean causation.
But that doesn't mean we can't do causation.
That's what trials are about.
Trials are all about causation.
So we have to understand the causal pathway.
But with all this data, AI is great,
especially certain types of AI, like Bayesian statistics,
are really good at causality.
And so we could actually have causal understanding.
And it could even be complicated.
Like, you know, you're not supposed to take, like, grapefruit juice with your birth control pill because the P450s will nullify it.
I don't know how someone figured that out,
but who knows what else is out there, you know,
that we just didn't find yet.
And if you could have an app that knows your diet, and knows the basic things, knows your drugs, and has that information, that's kind of mind-blowing, how much we could do with no new drugs, no new treatments, and just optimize. We're in such an unoptimized state because that optimization is almost impossible without AI.
But then on top of that, once you've built out this infrastructure, now new drugs go into that
infrastructure and can optimize.
And finally, we don't just optimize for health, but we can jointly optimize for health and decreasing cost. Like, this drug is, like, 10x better, 10x more expensive than the other drug, but maybe the outcome for me is going to be no different, so maybe I should get the other one. Exactly. You know, and how to think about that, that's such a complex data problem and logistics problem, which is also another part of AI that I think we could actually really finally tackle. So, as you can tell, very excited.
Yeah, yeah. No, no, I remember I had a conversation with, like, the CIO of the VA many years ago, where, you know, one of the ways he looked at his population, he's like, Julie,
Like a million clinical trials are just organically running in my population every day.
And I have no idea how to harness it, you know?
And so, I mean, even with just EHR data alone, you can imagine the possibilities, let alone if you were also to layer on top of that, just your daily behavioral data and all that kind of stuff.
And that's where the LLMs could come in is just a conversational way to capture the day-to-day journal of what you're doing, what you're eating, who you're interacting with, all that.
And it's doctor behavior.
It's like the whole system we can finally debug.
And the funny thing, coming from a tech background, like, you know, if an RCT seems unfamiliar, this is basically a giant A/B test.
Exactly.
You know, and so this is deeply, deeply entrenched in tech; like, even every pixel is A/B-tested.
I wish healthcare could be optimized the way pixels are in webpages.
And you'd have like an A, B, C, D, on and on in terms of multivariate nature.
That degree of optimization is something that would be just fantastic.
Yeah.
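Since the always-on trial idea is, as noted, basically a giant A/B test, here is a minimal sketch of the kind of comparison involved, using a two-proportion z-test on made-up outcome counts; real causal analysis of observational EHR data would also need careful adjustment for confounding.

```python
# Minimal A/B-style comparison: did outcome rates differ between two arms?
# Counts are invented for illustration; observational EHR data would also
# require confounding adjustment before any causal reading.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> tuple[float, float]:
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Arm A: new care pathway; Arm B: status quo (hypothetical numbers).
z, p = two_proportion_z(success_a=420, n_a=1_000, success_b=380, n_b=1_000)
print(f"effect: {420/1000 - 380/1000:+.1%}, z={z:.2f}, p={p:.3f}")
```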
And that gets to the notion.
The other sort of holy grail problem that you hear people talk about is, could you actually ever have almost a spot market for pricing in healthcare? And on any given day, at any given
hour of the day, you might have a very different dynamic of supply that's available for a given
service. Why could you not price differently for it the same way we do in other industries?
So why can't that happen? I mean, today it's probably deemed illegal, honestly, by many of the contracts, because you're sort of bound by these, again, these monolithic agreements that are highly specified. And given the fact that you have this claim system, there's really no notion of a real-time adjudication of the actual price that needs to be paid for that service. And maybe this
doesn't require the fanciest type of AI per se, but certainly the notion of being able to run
machine learning on these things and say, how many of these rules are just useless? Because they
don't actually move the needle on cost or price. But which are the ones that are most consequential
that we actually should keep and therefore have some kind of automated and systematic way to
adjudicate them. So huge opportunity. And that again represents a very significant portion of what gets wasted in our system today. So that's a fun one to think about. And then relatedly, I mean,
I mentioned, at least in my career in health care, like one of the sort of general themes of the
problems that I'd like to go after are where there is like a fundamental mismatch between supply
and demand. And, you know, I think a lot of the companies in our portfolio represent that
problem space. And I had built a company that was in the scheduling space, which, you know,
when you think about the phenomenon by which I'm sure you've experienced this, you know, you as a
patient are told to wait weeks for a doctor's appointment. And you assume that that's because
every doctor is booked out solid, but it actually turns out that a lot of the capacity in our
system goes completely wasted. And if you could just simply get better visibility into the
underlying data streams, then potentially you could really mitigate the wait times and the
experience for the consumer, while also helping the providers kind of best use their time.
Are there any examples that you've seen that you think are an interesting representation
of that? I know we have a lot of companies that are trying to both increase transparency of supply, and companies that are using coaching or provider groups and then bringing that to bear in care models.
Well, here I think you're describing something
which is even like maybe before we even get to AI,
which is like we've got to get like off the whiteboards
and onto some more modern sort of computer approaches for things.
Systems of record.
And that actually, you know, we often talk about, like, technology versus people: where's the weak spot? Where's the problem?
Maybe the most heretical thing one could say
is that maybe the way of doing medicine has to change.
Tell me more.
What do you mean by that?
Yeah, well, and so for instance, like, just the workflow of a doctor. If you think about how a doctor goes through their day, how can we support the doctor to do what we all want them to do and what they want to do, which is maximize patient welfare, and maybe view it as something that takes it off of them? So, like, I think about, like, a Devoted and their infrastructure, or an Oko; like, they address this system internally, and to the extent that they're not optimized, that's something that even they know and they can see.
Right. Yeah, and that's an example in the case of companies like Devoted
who actually did start with a clean sheet and built their own scheduling system
to basically accommodate the exact care model that they were going after.
Yeah, I remember from some of the work that we did at my old company,
you would see the way that doctors designed their schedule.
And very much to your point, they would design the templates of their schedules
specifically in a very protective, like almost a defensive way,
because they felt wronged by the way that the system sent patients to them.
It's a learned behavior because they want to protect their time.
Yeah, and they've gotten screwed in the past, perhaps.
Exactly.
We understand what this feels like.
And, you know, let's say that we have 10 pitches with new entrepreneurs that we've never met in a row in a given day.
And you're just on, you know, for, like, hours straight.
And you know how, like, physically and mentally exhausting that is.
It's the same thing for doctors when they say, if you give me, you know, four new patients back to back in the morning,
that is far more taxing to me than if you intersperse repeat patients or other tasks and whatnot.
And after seeing the pitches, then we have to go on Epic and write about the pitches.
Exactly. And that's even worse. My God, if we had to do that.
So you could see how the unfortunate side effect of the way that the systems traditionally have been designed
has now caused this sort of ripple effect and defensive behavior.
But if you were to just kind of start from a clean sheet as companies like Devoted are able to do,
could you actually design a much more logical system that actually learns from historical data, right?
And there could be almost like a reinforcement learning component to that,
where the doctors could provide feedback and you can learn over time.
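A clean-sheet scheduler that learns from historical data could start far simpler than full reinforcement learning; here is a small sketch that scores candidate slots using historical show rates and a back-to-back-new-patient penalty, with all weights and data invented for illustration.

```python
# Hypothetical slot-scoring heuristic: prefer slots that historically show up
# and avoid stacking too many new patients back to back.
from dataclasses import dataclass

@dataclass
class Slot:
    hour: int                  # 24h clock
    new_patients_before: int   # consecutive new patients already booked earlier

# Toy historical show rates by hour, e.g. learned from past scheduling data.
SHOW_RATE = {8: 0.80, 9: 0.88, 13: 0.75, 16: 0.70}
FATIGUE_PENALTY = 0.05         # per consecutive new patient already scheduled

def score(slot: Slot, is_new_patient: bool) -> float:
    s = SHOW_RATE.get(slot.hour, 0.75)
    if is_new_patient:
        s -= FATIGUE_PENALTY * slot.new_patients_before
    return s

candidates = [Slot(8, 3), Slot(9, 1), Slot(13, 0), Slot(16, 0)]
best = max(candidates, key=lambda s: score(s, is_new_patient=True))
print(f"offer the {best.hour}:00 slot")  # doctor feedback could re-weight this over time
```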
So we talked about one idea, which is a clean sheet of paper idea.
Is that the only way?
What I'm hoping also happens as part of this wave of AI is that it's really a forcing function for people to actually take advantage of the data that we have now digitized, right? So we're actually only, what, 10, 11 years post-meaningful use in healthcare, which was the act that incentivized doctors to actually adopt electronic health records to begin with. And it's
kind of remarkable to think that like even five years ago, less than 70% of doctors had an
electronic health record. So we're actually relatively new into the era of even having digitized
forms of information about patients over a longitudinal period of time. And so in many ways, we haven't at all scratched the surface of exploiting those data sets. And there really hasn't been that much incentive to, historically, I would argue, nor necessarily the technology capabilities at kind of the middleware layer of the stack to be able to do anything meaningful and useful with that data. But I think that's where the advent of the tremendous tectonic shift that we've seen
on the AI side and what you can do with that information, how you can synthesize it, how you can
present it to someone in a way that's actually usable and friendly, that could be the tipping point
that actually gets people to unleash the data.
We're also, by the way, in a period of time when provider organizations and hospitals are struggling financially,
and so many of them are looking at,
okay, how can I actually monetize my data assets, right?
Like, what do all of our companies want?
They want to eat data.
And one of the ways by which they can do that
is actually partnering with these provider organizations
who have these systems of record
that they haven't been able to exploit
and actually pay them and give them rev share or equity
or whatever it is to get access to proprietary data sets
to train their models.
It's interesting to ask,
like of the various crises, like the staffing crisis versus the issue hospitals are dealing
with, like, which crises are going to be catalysts and which will be impediments?
I think we kind of feel like the staffing crisis really is like tailwinds for AI.
Strangely, yes.
Yes, yes, yeah, yeah.
But maybe the... And COVID, I think, is a tailwind for AI because we're so used to virtual.
Yes.
But like maybe not all these crises will be helpful.
And I think that will be particularly interesting to see how that plays out.
Yeah, I think that's a great point.
I think certainly there are many who would view it not as a tailwind, but I think it's a good forcing function because now people are at a breaking point where the status quo way of solving that problem, which again is how do we produce more doctors? How do we produce more nurses? We just can't do that physically. And so that is driving, I think, a lot of this adoption. We were remarking as a team that at this last JP Morgan conference, like 100% of the incumbent payers and providers got up on stage and talked about not just what they want to do with AI, but how they're actually deploying AI
in practice because they found no other way to be able to solve those kind of more fundamental
problems. I think the other tailwind that some might call an impediment, but certainly builders
in our universe call a tailwind is the business model change in health care, right? So movement towards
value-based care fundamentally breaks the kind of the schema of like how health care has worked for
decades. And I think incumbents are more likely to be on their heels with that dynamic versus the upstarts, who themselves can build their entire care model and operating model on the basis of those new payment domains.
I actually don't envy organizations who have to have one foot in each world, right?
Because having half of your shop in a fee-for-service model,
and then the other half in a value-based model is very, very challenging to do.
Well, it's fun to think that in a fee-for-service world, AI is nice,
but maybe it actually goes against what you want.
And in a value-based care world, actually AI is the catalyst, right?
Because if you can do things better, you can see the value to it.
Yeah, it's actually funny, the example that that reminds me of is how the AIs are getting sued right now.
So there's a bunch of major national payers
who are using AI algorithms to automate prior authorization.
And so all they're doing is taking the rules that humans wrote,
that humans were executing slowly and doing it faster.
And so now all of a sudden everyone is complaining, oh, the denial rate is going up. But actually, the rate of denials is not going up. It's the speed with which the denials are happening that's going up.
Don't blame the technology. Blame the humans who actually wrote the rules.
And you're just seeing kind of an exacerbated version of it.
We're going to deal with a lot of that kind of finger pointing at the technology, where it's actually just implementing the broken system underneath it.
And that's why this kind of move to like new business models gives you the opportunity
to clean that up and start with logical ways to control spend.
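The distinction being drawn here, that automation changed the speed of denials rather than the rate, is easy to check if both metrics are logged; a tiny sketch with invented numbers:

```python
# Hypothetical before/after comparison: automation should change turnaround
# time, not the underlying denial rate, if it only executes the same rules faster.
before = {"claims": 10_000, "denials": 1_200, "avg_days_to_decision": 14.0}
after  = {"claims": 10_000, "denials": 1_210, "avg_days_to_decision": 0.5}

for label, d in (("before", before), ("after", after)):
    rate = d["denials"] / d["claims"]
    print(f"{label}: denial rate {rate:.1%}, decision time {d['avg_days_to_decision']} days")
# Roughly the same rate, dramatically faster decisions: the rules, not the AI, set the rate.
```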
Okay.
So we talked about health care administration.
We talked about scheduling opportunities.
Let's actually talk about the EHR itself.
So LLM as an EHR, what do you think about that?
Yeah.
Well, so I think the thing that's really underappreciated about the LLM is, like, people think of it as this oracle or something like that.
But I think it's maybe at least for us, I think of it as a UI.
Yep.
Right.
And it's kind of funny, because we started with, like, command-line interfaces, you know, for those who ever dealt with that. And, like, then we had GUIs because that's better than the command line. But now we're back to text and typing things in, except instead of, like, some weird command that you have to memorize, you just, like, just tell me what you want.
Right.
You just speak English.
Yeah, just speak English.
And we're so optimized for speaking English to each other.
I mean, that's like easy.
It doesn't require training in the same ways.
And so I think as a UI now, that makes sense.
And now, when you're saying an EHR as an LLM, then I guess you're kind of meaning that the data is in there and it can be queried like this and maybe synthesized.
That's all very natural.
I think obviously you want to be very clear about partitioning things.
And so maybe you're doing it with, like, RAG or something like that, where it's getting information coming back.
But maybe the question to turn around is, like, again, the technology sounds very plausible. You can imagine, you know, a hackathon that would put some pieces together and get that done. But I think you need more than just, like, connecting to GPT-4 or Gemini or something like that. You need something medically specific.
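Here is a minimal sketch of the LLM-as-UI-over-partitioned-EHR-data idea using retrieval-augmented generation; the note store, the naive retriever, and the call_llm stub are placeholders for whatever medically tuned model and vector index would actually be used.

```python
# Hypothetical RAG loop over a single patient's chart: retrieve only that
# patient's notes, then let a (medically tuned) LLM answer over the excerpts.
def retrieve(patient_id: str, query: str, note_store: dict[str, list[str]], k: int = 3) -> list[str]:
    # Placeholder retriever: naive keyword overlap instead of a real vector index.
    notes = note_store.get(patient_id, [])
    scored = sorted(notes, key=lambda n: -sum(w in n.lower() for w in query.lower().split()))
    return scored[:k]

def answer(patient_id: str, query: str, note_store: dict[str, list[str]]) -> str:
    context = "\n".join(retrieve(patient_id, query, note_store))
    prompt = f"Using only this patient's records:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)  # assumed wrapper around a domain-specific model

def call_llm(prompt: str) -> str:
    return "stub answer"  # stand-in so the sketch runs without any model access

notes = {"pt-42": ["2023-11: started metformin 500 mg", "2024-02: a1c improved to 6.9"]}
print(answer("pt-42", "What diabetes medications is this patient on?", notes))
```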
Yeah, yeah, absolutely. This is, I think, one of the prime examples where we certainly believe that a specialist, you know, model is necessary to understand the specific nuances of how to interpret medical information versus general internet information.
And, I mean, that's certainly a big area of development for builders as we see it,
companies that are building tools that can do everything from summarizing existing medical record data to telling the story of Vijay Pande before he walks in the door, so that you understand his journey and not just look at a bunch of numerical records and sort of sporadic information about different visits and whatnot, but really truly the story of him, including things like his social determinants and what happens in the home and outside of the clinical setting.
We see a lot of companies kind of building that.
The other obvious use case is the scribing use case, where you have a conversation with your doctor, actually look them in the eye, rather than them sitting at a keyboard during your entire visit. And that also gets written as a story. It's a story that then gets added to your
medical record and it can create that flywheel effect of continuing to add to the narrative of your
journey. One way to sort of think about AI in healthcare is to take the existing jobs and then see which ones can go. And that makes a lot of sense. I'm also curious if you slice and dice it a different way: with AI instead of just people, how do we do things differently? Because people are assigned specific jobs because of the way humans work. But, like, maybe when we're finally said and done, when AI can do everything, maybe the resident isn't the role that it would take.
Yeah.
But if you were to say like unbundle the job of the doctor, what components could you
re-bundle into a different thing?
Yeah, there was actually a time where this concept of like a dataist was sort of popularized
where like a main, like a baseline component of basically what every job in healthcare
is doing is some degree of data interpretation.
And so if you were to unbundle that component and create almost like a horizontal job
that was just doing data interpretation and this kind of thing. And maybe that's actually the better analog to what I described earlier.
It's like, what if there was a dataist role that effectively is an LLM that is synthesizing all
this information? I think the thing that's, you know, missing right now to make this a
reality is that today's information architecture is very sporadic, right? So if you're a pretty healthy person, you probably see your doctor maybe once or twice a year. And so how do you augment that with more continuous information, whether it be self-reported, whether it be remote patient monitoring data, whether it be just other information sources, to create that
more holistic picture. But I like that notion of flipping the jobs on their head and thinking
about the components a different way. Well, the fun thing about the dataist is like, I think
the UI is pretty important, right? Because we're talking about a team of people. And medicine often
is done by a team. There might be a nurse or a PA or a doctor or a specialist and all these
people. And where the AI comes in, one idea is a co-pilot, which is like each one of the team
members has a co-pilot. But what's interesting about the dataist is, like, the AI is a peer, you know, a contributor, you know, part of the team. And it has its role that actually everyone feels pretty good about. And you think about it, like, you don't put a person to do the dataist job.
I mean, in principle, with a calculator and a lot of time,
maybe it could do all that's necessary.
But you'd never have a human being do that.
And that might be a very easy first entry where it's like, they're doing what they're good at.
Yeah, that actually reminds me of a company that we saw that said, what if every nurse in the inpatient ward had this? Because the inpatient setting is very chaotic, very active, surprises happen all the time. And a lot of nursing teams have sort of either
live like walkie talkie type devices just on their shoulder or they'll have some real time
communication mechanism with the rest of their care team. And this company was saying, why not
put an LLM into the same walkie talkie signal and actually literally just have it be like almost
a Jiminy Cricket sitting on everyone's shoulder being like, hey, I'm sensing X pattern by virtue
of listening into your conversations. Let's all remember that this is happening with this patient
over here and I think there could be a safety issue over there.
It's almost like the way that I talk about Baymax all the time.
So like could everyone just have kind of a Baymax companion, you know, hang out in their
care team and be sort of the steward of all the information flow, synthesize it and
read it back when they find something that probably warrants an alert within that group.
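A very rough sketch of that listener-on-the-care-team-channel idea: scan the stream of utterances for patterns worth reading back to the group. The keyword patterns and suggested actions are invented; a real system would rely on a model and clinical oversight, not hard-coded rules.

```python
# Hypothetical care-team channel listener: watch the transcript stream for
# simple risk patterns and read an alert back to the group when one recurs.
from collections import Counter

WATCH_PATTERNS = {"fall risk": "flag bed alarm / mobility check",
                  "missed dose": "confirm medication administration record"}

def listen(utterances: list[str], threshold: int = 2) -> list[str]:
    counts = Counter()
    for text in utterances:
        for pattern in WATCH_PATTERNS:
            if pattern in text.lower():
                counts[pattern] += 1
    return [f"Heads up: '{p}' mentioned {n}x - {WATCH_PATTERNS[p]}"
            for p, n in counts.items() if n >= threshold]

channel = ["Room 12 is a fall risk, bed alarm off",
           "Pharmacy called about a missed dose in 14",
           "Reminder: 12 still a fall risk after transfer"]
print(listen(channel))
```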
I don't know if you've spent much time in the ED. I, like, break this and cut that.
No, I have not had that privilege.
I have lots of scars and stitches and so on.
And like, so I remember it was a few years ago.
Actually, right around here, I cut myself with a chef's knife.
Awesome.
I was probably showing off to the kids.
Oh, geez.
And so it wasn't looking good.
I was like, oh, I should get stitches.
And I go to the ED.
I'm there for like two hours.
Wow.
And I look around.
Yeah, just waiting.
And I look around and I'm like, this could be another four hours.
And like, I'm doing my math.
And maybe for the first hour I'm bugging the nurse and the inbound.
But, like, after a while, I just leave. And, like, there's various situations where I just want to talk to somebody. But you can't have everybody talk to somebody, because they're going to be overloaded.
If I could just be texting somebody.
Yeah.
I just want to know where things are and if it's busy, that's fine.
Yeah.
Or maybe I don't even need to be there.
Right.
You know, but like that triaging too could be huge.
Absolutely.
Yeah.
And this gets to...
Okay, so going back to the concept of unbundling the role of a clinician,
there's one part, which is actually the treatment part.
So that's a part where maybe we can't necessarily today build an LLM that will
stitch your finger.
Yeah.
But the notion of triage, getting you to the right site of care.
So should I stay home? Should I go to an urgent care clinic? Should I go immediately to the ED, or am I okay just going to my PCP? That is actually a critical role. We wrote a piece about this where we said, my version of that today is I call my doctor cousin, as all of my family members do. The poor lady, she's a
cardiologist, but she gets every single call about every single specialty under the sun and she'll
tell me literally like, should you take your son to the urgent care or does this need to go
immediately to the ED? That is like one of the roles that LLM construct could play, which actually
would also do a huge service to doctors that they don't have to be the ones who are fielding those
questions.
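To illustrate the triage role, here is a toy sketch that maps a symptom description to a site of care; the red-flag lists and routing are invented for illustration and are not clinical guidance.

```python
# Hypothetical triage router: decide among home care, PCP, urgent care, or ED.
# Keyword rules stand in for what an LLM (plus clinical oversight) would do.
ED_FLAGS = {"chest pain", "trouble breathing", "uncontrolled bleeding"}
URGENT_FLAGS = {"deep cut", "possible fracture", "high fever"}

def triage(symptoms: str) -> str:
    text = symptoms.lower()
    if any(flag in text for flag in ED_FLAGS):
        return "go to the ED now"
    if any(flag in text for flag in URGENT_FLAGS):
        return "urgent care today"
    if "worsening" in text or "several days" in text:
        return "book a PCP visit"
    return "self-care at home, recheck tomorrow"

print(triage("Deep cut on my finger from a chef's knife"))  # -> urgent care today
```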
Well, especially the alternative is Dr. Google, right?
Correct.
Or WebMD or whatever.
And then the patient.
Which will tell you you have cancer.
Yeah.
The amateur is deciding, like, you know, and especially with my friends, you've probably been through this, like when your kid is sick. And you're like, I probably don't need to go in, but, like, it's my kid. So if it's, like, even a 5% chance, maybe I'll go.
Yep.
And that's just a drain on everybody, doctors and patients.
Yeah, yeah.
So we talked about EHRs and the notion of the patient story. You know, we're now getting into this notion of, okay, if you were to
take almost like the front door experience to healthcare and what are the big opportunities for
AI to make an impact there. One is simply, instead of going to Google, you know, going to a
specialized tool or whatever it might be that's trained in this way. One of the questions that always comes up is, what are, like, the regulatory rails on this? So, like, at what point do you sort of cross the line into actually clinical decision making, and how should I think about this as a builder? I know you've obviously
done a ton of thinking on this and a ton of work
including like talking to the regulators and understanding
what they think. What kind of advice would you
give to entrepreneurs who are trying to
figure out where that line is and whether
or not they should cross it? Yeah. So, a couple of things. I think some of the lines are more clear than others. But in the cases where there is any gray zone, I think the regulators are eager to chat with startups, especially maybe on the software side, you know, and so on, to try to figure out where you are. And, you know, we've seen a lot of successful founders who've done that type of collaboration.
And I think generally that's a pretty strong approach because then there's no surprises on either
side.
The tricky part is when nobody knows, you know.
And so I think it's not just about consultation, but it's also leading and sort of taking
the framework and the philosophy for how we regulate things right now and really understanding
is this software a device? Is that the right framework?
And really sort of being a leader in terms of how we should be thinking about this.
And I think there's actually welcoming of that as well, because it's new for everybody.
As you're alluding to, we are actually an industry that has a regulatory framework when
it comes to AI specifically.
And so in many ways, these are like some of the rare cases where healthcare is actually ahead
of the curve as far as technology goes, what's your sense of, you know, does generative
AI specifically, do LLMs constitute enough of a sea change relative to historical waves of
AI that it should warrant an entirely different regulatory framework?
Or do we think we should try to make, you know, the current system work for those new
technologies? Our space already has a ton of regulation. And so in that case, you have to now ask
what's the specific use case where more regulation is helpful for patients? I don't see people
talking about that. Yeah, the broader point of don't necessarily focus on regulating the technology,
but rather the thing for which the technology will be used. Yeah, the use. Yeah. Okay, let's go to
the Holy Grail of all Holy Grails, which is AI doctor. Yeah. How far off in the horizon do you think
that concept is where you could fully embody the full-stack role of a clinician making diagnostic
decisions and treatment decisions and whatnot. And what needs to be true do you think in just
the broader ecosystem for that to be the case? Yeah. So I think one of the two-by-twos I like,
you know, is sort of trying to understand which decisions are complex and which are simple
and then which answers are robust to mistakes and which are not robust to mistakes. And so
things that are simple and actually robust to mistakes,
those can already be done by machine learning and so on.
Things that are simple but actually have major consequences with mistakes,
like driving a car.
Like everyone can drive a car,
but if you do it wrong, you can kill people.
So that one's actually tricky,
but you see people working on that with self-driving cars
and there's a lot of work.
I think where medicine is hard is that it's something that's complex
and mistakes can have huge impacts.
And so maybe what we could do is we should work our way up there.
And it's not even a question of, like, should we; we kind of have to, if you think about some of these crises that are coming.
And so maybe you start with nursing, and we've seen this with Hippocratic, and that makes sense.
You're not doing diagnosing, you know, you're doing no harm, you know.
Literally, yes.
And so that's, I think, very clever.
Then maybe you can work your way into PA, you know, physician assistant, maybe you could work your way from there into GP.
And I think the general practitioner, concierge doctor, that tier is kind of a really interesting tier, because largely you're triaging and sending off to specialists. So the AI doesn't have to be
an oncologist and a cardiologist and all these things. And so that tier actually alone is kind
of really interesting since so much of medicine is done at that tier. And so much of issues of
access are access to that tier too. If everybody had the AI concierge doctor in their pocket,
I think that would actually be dramatic in terms of the impact on health. So even if we just get to that tier, I'll be pretty excited. And once we're at that tier, then you can imagine
sort of going to specialist world.
But that latter part might be a bit off.
Yeah.
And I mean, to your point, it's an inevitability that we'll have to figure out a way
to create leverage on the supply side of this portion of our labor base.
I guess what are your thoughts on co-pilots for doctors?
And is that a more near-term, tractable version of this that you think could have an impact in the near term?
I think so. The whole problem with co-pilots is, can you work it in a way that goes into the doctor's workflow, where they view it as a benefit, not a nuisance?
Yeah.
You know, it's not some alert or whatever.
It's like something where they are going to it.
If you can do that, and maybe we've seen this with like scribes and so on,
like something where doctors are like, hell yeah, I want this.
This is great.
If we can create that and maybe that's the sort of the challenge and the call to arms for founders,
like, create something, some product that people are clamoring for.
And that's obviously knowing that space really well and knowing your customers
and knowing people who are going to use it.
I think if you can get into the workflow, then I think it could go really well.
Bayesian did kind of the ultimate, which is, like, let's just embed it into the EHR workflow so that it inevitably is just there when they open it up, and there's not really any need for individual physician buy-in.
Yeah, yeah, totally.
Given what we just talked about and all these grand challenges, what are some of the types
of startups that we wish that would walk through the door that we just haven't seen yet?
Yeah, so one area that I've been waiting for, and I think it's maybe a little early, or maybe just right at the right time, is something where clinical trials can be addressed with AI. And this is where it's a confluence of a couple of things. One is, like, clinical trials are obviously so important. We talked about real-world and, like, ongoing clinical trials as a part of healthcare. But then finally, clinical trials, because so much money flows through them, you could improve them 5% or 10%. It's not like you have to do something heroic; you don't have to 10x it or 100x it. I remember, like, a decade or so ago, I had an acquaintance who was working for Google, and they were optimizing various filters and this and that. They made
one of the ad filters, like, 5% better.
And basically that 5% was like $100 million.
Yeah, exactly.
A 5% was $100 million.
Yeah. And so it made me really jealous.
Like I'm working on like something in drug design or whatever to make big leaps and bounds.
And small things for big cash flows can have a huge impact.
So something for clinical trials could be huge, or even just picking, like, the rank ordering of clinical trials to sort of do a better job there.
Anything in that space I think would have a huge impact and we haven't seen very much.
Part of it is, like, if you're outside of that space, it's maybe not where you'd think to go.
I think that would be my pick.
I don't know what's your pick?
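On the rank-ordering point, here is a minimal sketch that scores candidate trials by a crude expected value per dollar of trial cost; the trials, probabilities, and values are entirely made up.

```python
# Hypothetical trial prioritization: rank candidate trials by a crude
# expected value per dollar of trial cost. All numbers are invented.
trials = [
    {"name": "trial_a", "p_success": 0.25, "value_if_success": 800e6, "cost": 60e6},
    {"name": "trial_b", "p_success": 0.10, "value_if_success": 3e9,   "cost": 150e6},
    {"name": "trial_c", "p_success": 0.40, "value_if_success": 300e6, "cost": 40e6},
]

def ev_per_dollar(t: dict) -> float:
    return (t["p_success"] * t["value_if_success"]) / t["cost"]

for t in sorted(trials, key=ev_per_dollar, reverse=True):
    print(f'{t["name"]}: {ev_per_dollar(t):.2f} expected value per $ of trial cost')
```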
Mine would be, I mean, kind of comparable in the sense of the nature of the opportunity, which is if you were to design an AI native health plan from scratch
and basically be the way by which health care payments flow, like all the problems that we talked about earlier,
what are the components of a health plan? It's a payments and claims mechanism; it is an underwriting chassis in terms of how you score risk within a population.
And then it's a network of who are the providers that you would actually steer patients to on the basis of understanding, you know, what kinds of services they need.
And the way that those are built today, you know, you see huge opportunities to both leverage data and AI in the sense of exactly what you just talked about, where a 1% impact on the cost structure of a health plan or the way that you underwrite risk in a certain health plan could literally mean hundreds of millions of dollars of either cost savings or better economics to the providers who are part of those networks.
So that to me, kind of this notion of a full stack AI native health plan that takes full risk on populations and exploits all of these data sets that we're talking about to really understand almost at an individual level.
You know, you can almost sort of imagine, like, an individualized health plan that is purpose-built for you on the basis of your behaviors and your medical history and things like that, that is priced entirely differently than all of your employee peers who are in the same group plan, versus what it is today, where it's so least-common-denominator, like sort of everyone loses because you're trying to design for everyone in the same sort of brute-force fashion. So that would be mine. That'd be fun. Yeah. Well, those are some
very big audacious grand challenges that we hope many builders go off and pursue. And we'd
obviously love to talk to anyone who's working on problems of this ilk. Yeah, absolutely.
me, Chris Tatiosian, and me, Olivia Webb.
With the help of the bio and health team at A16Z, the show is edited by Phil Hegseth.
If you want to suggest topics for future shows, you can reach us at RaisingHealth at A16Z.com.
Finally, please rate and subscribe to our show.
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z
fund. Please note that A16Z and its affiliates may maintain investments in the companies discussed
in this podcast. For more details, including a link to our investments, please see A16Z.com
slash disclosures.