a16z Podcast - a16z Podcast: Putting AI in Medicine, in Practice
Episode Date: November 3, 2017, with Brandon Ballinger (@bballinger), Mintu Turakhia (@leftbundle), Vijay Pande (@vijaypande), and Hanne Tidnam (@omnivorousread). There's been a lot of talk about technology -- and AI, deep learning, and machine learning specifically -- finally reaching the healthcare sector. But AI in medicine isn't actually new; it's been there since the 1960s. And yet we didn't see it effect a true change, or even become a real part of our doctors' offices -- let alone routine healthcare services. So: what's different now? And what does AI in medicine look like, practically speaking, whether it's ensuring the best data, versioning software for healthcare, or other aspects? In this episode of the a16z Podcast, Brandon Ballinger, CEO of Cardiogram; Mintu Turakhia, cardiologist at Stanford and Director of the Center for Digital Health; and general partner and head of the a16z bio fund Vijay Pande, in conversation with Hanne Tidnam, discuss everything from where we will start to see AI in healthcare first -- diagnosis, treatment, or system management -- to what it will take for it to succeed. Will we perhaps see a "levels" of AI framework for doctors, as we have for autonomous cars?
Transcript
Hi, and welcome to the a16z Podcast. I'm Hanne, and today we're talking about AI in medicine,
but we want to talk about it in a really practical way, what it means to use it in practice and
in a medical practice, what it means to build medical tools with it, but also what creates
the conditions for AI to really succeed in medicine and how we design for those conditions,
both from the medical side and from the software side. The guests joining for this conversation
in the order in which you will hear their voices are Mintu Turakhia, a cardiologist at Stanford and director of the Center for Digital Health; Brandon Ballinger, CEO and founder of Cardiogram, a company that uses heart rate data to predict and prevent heart disease; and Vijay Pande, a general partner here at a16z and head of our bio fund. So let's maybe just do a quick
breakdown of what we're actually talking about when we talk about introducing AI to medicine. What does
that actually mean? How will we actually start to see AI intervene in medicine and in hospitals and in
waiting rooms? AI is not new to medicine. Automated systems in healthcare have been described
since the 1960s. And they went through various iterations of expert systems and neural networks and
called many different things. In what way would those show up in the 60s and 70s?
So at that time, there was no high-resolution data. There weren't too many sensors. And it was about a synthetic brain that could take what a patient describes as the inputs and what a doctor finds on the exam as the inputs. Using verbal descriptions? Yeah, basically words. People created, you know, what are called ontologies and classification structures, but you put in the 10 things you felt and a computer would spit out the top 10 diagnoses in order of probability. And even back then, they were outperforming the sort of average physician. So this is not a new concept. So basically doing
what hypochondriacs do with Google today. But verbally, yeah. So Google is, in some ways, an AI expression of that, where it actually uses ongoing inputs and classification to do that over time, a much more robust neural network, so to speak.
So an interesting case study is the MYCIN system, which is from 1978, I believe.
And so this was an expert system trained at Stanford.
It would take inputs that were just typed in manually, and then it would essentially try to
predict what a pathologist would show.
And it was put to the test against five pathologists, and it beat all five of them.
And it was already outperforming. It was already outperforming doctors, but when you go to the hospital, they don't use MYCIN or anything similar.
And I think this illustrates that sometimes the challenge isn't just the technical aspects
or the accuracy.
It's the deployment path.
And so some of the issues around there are: okay, is there a convenient way to deploy this to actual physicians? Who takes the risk?
What's the financial model for reimbursement?
And so if you look at the way the financial incentives work, there are some things that are backwards, right? For example, if you think about kind of a hospital from the CFO's perspective, misdiagnosis actually earns them more money. Because when you misdiagnose, you do follow-up tests, right? And our billing system is fee-for-service. So every little test that's done is billed for. But nobody wants to be giving out wrong diagnoses. So where's the
incentive? The incentive is just in the system, the money that results from it. No one wants to give
an incorrect diagnosis. On the other hand, there's no budget to invest in better diagnosis. And so
that's, I think that's been part of the problem. And so things like fee for value are interesting
because now you're paying people for, say, an accurate diagnosis or for a reduction in hospitalizations
depending on the exact system. And so I think that's the case where actually accuracy is rewarded
with greater payment, which sets up the incentive so that AI can actually win in this circumstance.
Where I think AI has come back at us with force is that it came to health care as a hammer
looking for a nail.
What we're trying to figure out is where you can implement it easily and safely with not
too much friction and with not a lot of physicians going crazy and where it's going to be
very, very hard.
And that, I think, is the challenge in terms of both building, developing these technologies,
commercializing them and seeing how they scale.
And so the use cases really vary across that spectrum.
Yeah, I think about there being a couple of different cases where AI can intervene. One is to substitute for what doctors already do, and so people use radiology as an example. The other area that I think is maybe more interesting is that AI can complement doctors by doing what they can't do already.
So it would be possible for a doctor to, say, read an ECG and tell you whether you're in an abnormal
heart rhythm.
No doctor right now can read your Fitbit data and tell you whether you have a condition
like sleep apnea.
I mean, if you look at your own data, you can kind of see restful sleep has well-structured REM cycles, so you can see some patterns there.
That said, the gold standard that a sleep doctor would use is a sleep study where they
wire you up with six different sensors and tell you to sleep naturally.
There's a big difference here between kind of the very noisy consumer sensors that may
be less interpretable and what a doctor is used to seeing.
Or it could be that the data is on the device, but the analysis can't be done yet. Maybe the analysis needs a gold-standard data set to compare to. There are a lot of missing parts beyond just gathering the data from the patient in the first place.
I think there's some inherent challenges in the nature of the beast.
Health care is unpredictable.
It's stochastic.
You can predict a cumulative probability, like a probability of getting condition X or
diagnosis X over a time horizon of five or 10 years.
But we are nowhere near saying, you know, you're going to have a heart attack in the next three
days. Prediction is very, very, very difficult. And so where prediction might have a place
is where you're getting high-fidelity data, whether it's from a wearable or a sensor. One, it's so dense that a human can't possibly review it; a doctor is not going to look at it. And two, it's relatively noisy: inaccurate, poorly classified, with missing periods where you don't actually have this continuous data that you really want for prediction. In fact, the biggest predictor of someone
getting ill with a lot of wearable studies is missing data because they were too sick to wear
the sensor. Oh, that's... So the very absence of the data is a big indicator. Yes, exactly. That's so interesting. I don't feel well enough to put on my what-have-you, and that means something's
not right. Possibly. Or you're on vacation. And so that's the problem. That's the other challenge of
AI is context. And so what are some of the simpler problems, where you have clean data structures, you have less noise, you have very clear training for these algorithms?
And I think that's where we've seen AI really pick up in imaging-like studies.
It's a closed-loop diagnosis.
You know, there is a nodule on an x-ray that is, you know, cancer, based on a biopsy proven later in the training dataset, or there isn't.
In the case of an EKG, well, we already have expert systems that can give us a provisional
diagnosis on an EKG.
They're not really learning.
And so that's a great problem because most arrhythmias don't need context.
You can look at it and make the diagnosis.
We don't need them to learn.
So that's why it's good to apply this technology immediately.
You don't need everything.
You don't need to mine the EMR to get all this other stuff. You can look at the image and say, does it have a diagnosis or does it not?
And so imaging of the retina, imaging of skin lesions, x-rays, MRIs, echocardiograms,
EKGs, that's where we're really seeing AI pick up.
I sort of define the problems into inputs and outputs,
and we talked a little bit about some of the inputs
that have newly become available, like EMR and imaging data.
I think it's also interesting to think about
what the outputs of an AI algorithm would be.
And these examples are kind of self-contained,
well-defined outputs that fit into the existing medical system.
But I think it's also interesting to imagine
what could happen if you were to reinvent the entire medical
system with this assumption that we have a lot of data, intelligence is artificial and therefore cheap, so we can do continuous monitoring. So one of the things I think about is: what are the gaps
of people who do not have access to EKGs, right? Most people, like, I've actually never had an EKG done aside from the ones I do myself. And for most people in the US, you get your first EKG when you turn 65 as part of your Medicare checkup, and they won't reimburse for it before then. So late. You know,
people like my parents. So, like, my dad's an excavator. So he digs foundations for houses. And he
hasn't seen a doctor in 20 years. And if he leaves a job site, the entire job site is shut down.
So it's hard for some people, I think, to go into the doctor's office between the hours of nine to
five. And if you look at that in aggregate, only about half of people in the U.S. have a primary care physician at all, which seems astonishingly low. But that's kind of the fact. There's kind of a gap, right? About a third of people with diabetes don't realize they have it. About a fifth of the people with hypertension. For AFib, it's 30 or 40 percent.
For sleep apnea, it's like 80%.
I think it's one thing just finding out but not being able to do anything about it; the actionable aspect, I think, really is a huge game changer. It means that you can have both better outcomes for patients and, in principle, lower costs for payers.
Right, and these are areas where there are clear ways of addressing
these specific conditions.
I will take a little bit of a different view here,
which is that I don't know if AI, artificial intelligence, is needed for earlier and better
detection and treatment. To me, that may be a data collection issue. How is that different from
what we're saying about finding it early? How can that not be good? Because that may have to do
with getting sensors out of hospitals and getting them to patients. And that's not inherently an AI
problem. It could be a last-mile AI problem: if you want to scale, you need the ability to handle all this stuff.
Okay, so let's say we get to a point where our bathroom tiles have built-in EKG sensors
and scales, and the data is just collected while we brush our teeth.
It's the sensing technology that may detect things discreetly, like an arrhythmia.
You may not necessarily need intelligence, but who's going to look at the data?
And so that's a scaling issue.
Well, but the idea is, yeah, so AI could look at the data.
And the other thing is that if you're using this as screening, you want to make the accuracy
as high as possible to avoid false positives.
And AI would have a very natural role there too.
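To make the screening arithmetic concrete: at low prevalence, even a fairly accurate test produces mostly false positives, which is why accuracy matters so much here. A minimal sketch (all numbers hypothetical):

```python
# Hypothetical numbers: why screening accuracy matters at low prevalence.
# Bayes' rule: the positive predictive value (PPV) is the chance a positive
# result is a true positive.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 95%-sensitive, 95%-specific test for a condition 1% of people have:
ppv = positive_predictive_value(0.95, 0.95, 0.01)
print(f"PPV = {ppv:.1%}")  # ~16%: most positives are false alarms
```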
But it's interesting that you're saying
it's not necessarily about the analysis,
it's about where the data comes from and when.
I think there are two different problems.
There may be a point where it truly outperforms
the cognitive abilities of physicians.
And we have seen that with imaging so far.
And some of the most promising aspects of the imaging studies
and the EKG studies are that the confusion matrices,
the way humans misclassify things
is recapitulated by the convolutional neural networks.
Can you actually break that down for a second?
So what are those confusion matrices?
So a confusion matrix is a way to graph the errors
and which directions they go.
And so for rhythms on an EKG: a rhythm that's truly atrial fibrillation could get classified as normal sinus rhythm or atrial tachycardia or supraventricular tachycardia; the names are not important. What's important is that the algorithms are making the same types of mistakes that humans are making.
It's not that it's making a mistake that's necessarily more lethal or just nonsensical, so to speak.
It recapitulates humans, and to me that's the core thesis of AI in medicine,
because if you can show that you're recapitulating human error,
you're not going to make it perfect, but that tells you that, held in check and with controls, you can allow this to scale safely, since it's liable to do what humans do.
And so now you're automating tasks that, you know, I'm a cardiologist, I'm an electrophysiologist,
but I don't enjoy reading 400 ECGs when it's my week to read them.
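As a minimal sketch of what comparing confusion matrices could look like in code (the rhythm labels and reads below are invented for illustration; a real study would use adjudicated ground truth):

```python
# Sketch: compare an arrhythmia model's confusion matrix against a
# cardiologist's. All labels here are fabricated for illustration.
from sklearn.metrics import confusion_matrix

RHYTHMS = ["sinus", "afib", "atrial_tach", "svt"]

truth        = ["afib", "afib", "sinus", "svt", "afib", "sinus"]
model_preds  = ["afib", "sinus", "sinus", "atrial_tach", "afib", "sinus"]
doctor_reads = ["afib", "sinus", "sinus", "atrial_tach", "afib", "sinus"]

# Rows = true rhythm, columns = predicted rhythm; off-diagonal cells show
# which rhythms get mistaken for which, and in which direction.
cm_model  = confusion_matrix(truth, model_preds, labels=RHYTHMS)
cm_doctor = confusion_matrix(truth, doctor_reads, labels=RHYTHMS)

print(cm_model)
print(cm_doctor)
# If the off-diagonal error patterns match, the model is "confused" in the
# same ways humans are -- the recapitulation property described above.
```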
So you're saying it doesn't have to be better.
It just has to be making the same kinds of mistakes for you to feel that you can trust the decision-making.
And you dip your toe in the water by having it be assistive.
And then at some point, we as a society will decide if it can go fully auto, right?
Fully autonomous without a doctor in the loop.
That's a societal issue.
That's not a technical hurdle at this point.
Right.
Well, and you can imagine, just as for, let's say, self-driving cars, you have different levels of autonomy.
It's not nothing versus everything.
You know, you can imagine level one, level two, level four, level five as in self-driving cars.
And so I think that would be the most natural way because we wouldn't want to go from nothing to everything.
Exactly. And just like a self-driving car, we as a society have to define who's taking that risk on, right? You can't really sue a convolutional neural network, but you might be able to make a claim against the physician, the group, the hospital that implements it. And how does that shake out?
To figure out literally, like, how to insure against these kinds of errors.
I think the way you think about some of these error cases kind of depends on whether the AI is substituting for part of what a doctor does today or if the AI is doing something that's truly novel.
I think in the novel cases, you might not actually care whether it makes mistakes that would be similar to a human's.
Oh, that's an interesting point because it's doing something that we couldn't achieve.
What kinds of novel cases like that can you imagine?
Wearables are an interesting case because they'll generate about two trillion data points this year.
So there's no cardiologist or team of cardiologists who could even possibly look at those. That's a case where you can actually invert maybe the way the medical system works
today. Rather than being reactive to symptoms, you can actually be proactive. And the AI can be
essentially something that tells you to go to the doctor rather than something that the doctor
uses when you're already there. Just to take radiology as an example, you could have one level where it's as good as a common doctor. Another level where it's as good as the consensus of doctors. Another level is that it's not just using the labels a radiologist would put on the image; it's using a higher-level gold standard, like predicting what the biopsy would say. And so now you're doing something that... Which would be in your kind of novel category, plus...
Yeah, it's something that no human being can do. And it can do that because, in principle, it
could fuse the data from the image and maybe blood work and other things that are easier to get
and much less risk-inducing than removing, you know, tissue in a biopsy.
So pulling those multiple streams of information into one, sort of synthesizing them, is another area that's very difficult for a human being to do and very natural for a computer.
It is very natural, but I think we need a couple things to get there.
We need really dense, high-quality data to train.
And the more data you put in a model... I mean, machine learning by definition is statistical overfitting. And sometimes...
Well, actually, I think that's wrong.
I mean, that's machine learning done poorly; it's like saying driving is driving a car off a cliff.
Like, you know, poor driving is poor driving,
but machine learning tries to avoid statistical overfitting.
It does.
My point is that one of the unknowns with any model,
it doesn't matter if it's machine learning or regression or a risk score,
is calibration.
And as you start including fuzzy and noisy data elements in there,
first of all, often the validation data sets don't perform as well as the training data set,
no matter what math you use.
And why is that?
Well, that's a sign of overfitting, and usually it's because there wasn't sufficient
regularization during the training process.
So overfitting is a concept in statistics to effectively indicate that your model has been so highly tuned and specified to the data you see in front of it that it may not apply to other data; it can't generalize. So if you had to use a model to identify a bunch of kids in a classroom and pick the kid who's the fastest, an overfitted model might say it's the redheaded kid wearing Nikes. Because in that class, that was the case; that was the one child. But that has no biological or other plausibility; you can't use that. And so if you take that to a place where the prevalence of Nike shoes or redheads is low, you might miss the fastest person.
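As a toy version of that classroom example (all data fabricated), here's a model latching onto spurious features that don't generalize to the next classroom:

```python
# Toy overfitting sketch: a decision tree memorizes spurious features
# (hair color, shoe brand) that happen to pick out the fastest kid in one
# classroom, then fails in a new classroom. Data is fabricated.
from sklearn.tree import DecisionTreeClassifier

# Features per kid: [is_redhead, wears_nikes]; label: 1 = fastest kid
classroom_a_X = [[1, 1], [0, 0], [0, 1], [0, 0], [1, 0]]
classroom_a_y = [1, 0, 0, 0, 0]  # the fast kid happens to be a redhead in Nikes

model = DecisionTreeClassifier().fit(classroom_a_X, classroom_a_y)

# New classroom: the fastest kid has neither trait, so the memorized rule
# misses them entirely.
classroom_b_X = [[0, 0], [0, 1], [1, 0]]
print(model.predict(classroom_b_X))  # predicts no fast kid at all
```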
And so these are some of the issues: the underlying shifts in population; in natural language processing that's embedded in AI, the lexicon that people use. How doctors and clinicians write what it is that they're seeing with their patient is different not even from specialty to specialty, but from hospital to hospital. Sort of mini nomenclatures?
It's going to be different at Stanford
than it was at UCSF,
which is going to be different at Cleveland Clinic.
I think that's actually a nice thing
about wearable data
is that Fitbits are the same all over the world. This label problem, though, is interesting because, you know, in our context,
each label represents a human life at risk, right? It's a person who came into the hospital
with an arrhythmia. And so you're not going to get a million labels the way you might for a
computer vision application. I mean, it'd be kind of unconscionable to ask for a million labels in
this case. So I think one of the interesting challenges is training deep learning-based models, which tend to be very data-hungry, with limited labeled data. The kitchen-sink approach of taking every single data element, even if you're looking at an image, can lead to these problems of overfitting. And what Brandon and Vijay are both alluding to is: you limit the labels to really high-quality labeling and see if you can go there. And so don't
complicate your models unnecessarily. And don't build models that are overly complicated
for the amount of data you have. Right. Because if you have the case where you're doing so much better on the training set than the test set, that's proof that you're, you know, that you're overfitting and you're doing the ML wrong. Modern ML practitioners have a whole set of techniques to deal with overfitting. So I think that problem is very solvable with well-trained practitioners.
One thing you alluded to, which is the interpretability aspect. So let's say you train on a population
that's very high in diabetes, but then you're testing on a different population, which has a higher
or lower prevalence. That is kind of interesting. So, identifying shifts in the underlying data.
What would that mean?
So let's say we train on people in San Francisco and, you know, everyone runs to work and eats quinoa all day.
But then we go to a different part of the country where, you know, maybe obesity is higher,
or you could be somewhere in the stroke belt where the rate of stroke is higher.
It may be that the statistics you trained on don't match the statistics that you're now
testing on.
That's fundamentally a data quality problem. If you collect data from all over the world, you can address this, but it's something you have to be careful of.
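One simple way to catch that kind of shift before trusting a model's calibration, sketched here with hypothetical numbers, is to statistically compare a feature's distribution in the training population against the deployment population:

```python
# Sketch: flag dataset shift by comparing a feature's distribution in the
# training vs. deployment populations. Data and thresholds are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_bmi  = rng.normal(24, 3, 5000)  # trained on a leaner population
deploy_bmi = rng.normal(29, 4, 5000)  # deployed where obesity is more common

stat, p_value = ks_2samp(train_bmi, deploy_bmi)
if p_value < 0.01:
    print(f"Shift detected (KS statistic {stat:.2f}): recalibrate before use.")
```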
But it will take a while for that to happen as we start gathering the data in different ways.
How does that actually even happen?
How are these streams of data funneled in, examined, and fed into a useful system?
So it used to be the way you'd run a clinical trial is that you would have, you know, one hospital,
you'd recruit patients from that hospital, that'd be it.
If you got a couple of hundred patients, that might actually be quite difficult to attain.
I think with ResearchKit, HealthKit, Google Fit, all of these things,
you can now get 40 or 50,000 people into a study from all over the world,
which is great, except the challenge that the first five ResearchKit apps had
is that they got 40,000 people, and then they lost 90% of them in the first 90 days.
So everybody just drops out?
Everyone just drops out because the initial versions of the apps weren't engaging.
So it adds an interesting new dimension: as a medical researcher, you might not think about building an engaging, well-designed app, but actually you now have to bring in mobile design as a discipline that you're good at.
So there has to be some incentive to continue to engage.
Yeah, exactly.
And I mean, you need to measure cohorts the same way Instagram or Facebook or Snapchat does.
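Measuring cohorts that way boils down to retention curves: of the participants who enrolled, what fraction are still contributing data N days later? A minimal sketch with made-up enrollment data:

```python
# Minimal retention sketch: fraction of participants still active N days
# after enrolling. All dates are made up.
from datetime import date

# participant -> (enroll_date, last_active_date)
participants = {
    "p1": (date(2017, 1, 2), date(2017, 4, 1)),
    "p2": (date(2017, 1, 2), date(2017, 1, 20)),
    "p3": (date(2017, 1, 9), date(2017, 1, 12)),
}

def retained(enroll, last_active, days):
    """Was this participant still active at least `days` days after enrolling?"""
    return (last_active - enroll).days >= days

for horizon in (7, 30, 90):
    kept = sum(retained(e, a, horizon) for e, a in participants.values())
    print(f"day-{horizon} retention: {kept}/{len(participants)}")
```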
So I think the teams that are starting to succeed here tend to be very interdisciplinary.
They bring in the clinical aspect because you need that to choose the right problem to solve,
but also design the study well
so that you have convincing evidence.
You need the AI aspect,
but you also often need mobile design.
If it's a mobile study,
you may need domain expertise in other areas
if your data is coming from somewhere else.
And then it all has to be like gamified and fun to do.
Yeah, yeah.
I mean, gamification is sort of extrinsic motivation,
but you can also give people intrinsic motivation,
giving them insights into their own health, for instance.
It's a pretty good way to hook people.
What's the system's incentive?
I mean, of course, doctors want it if it makes them more accurate or helps them scale better; patients want it if you can predict whether or not you're going to have a problem. How do we incentivize the
system as a whole? I believe fundamentally it is going to come down to cost and scale.
And what willingness does a health care entity have, whoever that may be, whether it's employer-based programs, insurer-based programs, accountable care organizations? Are they going to be willing to take on risk to see the rewards of cost and scale? And so the early adopters will be ones who've taken on a little more risk.
Yeah, I think, you know, the challenge is where the hope is in terms of value and in terms of better outcomes, but one has to prove it out, and hospitals will want to see that. The regulatory risk thing is being largely addressed by this new Office of Digital Health at the FDA, and they seem much more forward-thinking about it.
But there are going to be challenges that we have to solve, and I'll give you one just to get the group's input here: should you be versioning AI?
Or do you just let it learn on the fly?
And so normally, when we have firmware, hardware, or software updates in regulated, FDA-approved products, they're static. They don't learn on the fly.
If you keep them static,
you're sort of losing the benefit of learning as you go.
On the other hand,
bad data could heavily bias the system and cause harm, right?
So if you start learning from bad inputs
that come into the system for whatever reason,
you could intentionally or unintentionally, you know, cause harm.
And so how do we deal with versioning in deep learning?
I mean, to just freeze the parameterizations, versioning from a computer science point of view is trivial. There's the deeper statistical question of when you version: you could version every day, every week, every month, and just freeze the parameters.
What you'd want to do, to the point we were talking about earlier, is bring in new validation sets, things that it has never seen before. Because you don't want to just test each version on the same validation set; now you're intrinsically sort of overfitting to it. And so what you always want to be doing is holding out bits of data such that you can test each version separately, because you want to make sure, with very strict confidence, that this is doing no harm and this is helping.
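One plausible shape for that discipline, sketched below with hypothetical names and thresholds: train in batch, freeze the parameters, tag each version by its training-data date, and gate each release on a fresh held-out validation set the model has never seen:

```python
# Hypothetical versioning sketch: batch-trained, frozen, date-tagged models,
# each gated on a never-before-seen validation set before deployment.
import json
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: parameters cannot change after release
class ModelVersion:
    tag: str        # e.g. "2017-11-03", the training-data cutoff date
    weights: bytes  # serialized, static parameters

def release(candidate: ModelVersion, fresh_metrics: dict, minimums: dict) -> bool:
    """Deploy only if the candidate clears safety thresholds on held-out data
    it was never trained or tuned on."""
    approved = all(fresh_metrics[k] >= v for k, v in minimums.items())
    print(json.dumps({"version": candidate.tag, "approved": approved}))
    return approved

# Usage: metrics computed on a fresh holdout, gates chosen by the clinical team
release(ModelVersion("2017-11-03", b"..."),
        {"sensitivity": 0.97, "specificity": 0.94},
        {"sensitivity": 0.95, "specificity": 0.90})
```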
Right. It's like we're actually introducing this whole new data set of a different kind of thing, and that's when you make new considerations. Data's coming in all the time. And so, you know, you just version on what came in today, and that's that. I mean, it's pretty straightforward as you're training it. This is the way speech recognition works on Android phones: obviously, data is coming in continuously, every time someone says "Okay Google" or "Hey Siri," coming into either Google or Apple. But you train a model in batch, and then you test it very carefully, and then you deploy it, right? And the versions are indeed tagged by the date of the training data; it's already embedded in the system. Who are the decision makers that are kind of green-lighting when, like, okay, we're going to try this new algorithm, we're going to start applying it to these, you know, radiology images? What are the decision points? So with EKGs, the early companies used expert systems to just ease the pain
points of me having to write out and code out every single diagnosis. The super low-hanging
fruit. Yeah. Can you improve the accuracy of physicians with this? Can you increase their
volume and bandwidth? Can you actually use it to see which physicians are maybe going off course? What if you start having a couple of physicians whose error rates are going up? Right now, with quality, the QI process isn't really based on random sampling.
There's actually no standardized metrics for QI in any of this.
When people read EKGs and sign them off, they just sign them.
There's nothing telling anyone that this guy has a high error rate.
And so that is a great use case of this,
where you're not making diagnoses, but you're helping anchor and show that,
well, if you believe this algorithm is good and broadly generalizable across data,
you're sort of restating the calibration problem now.
It's not that the algorithm has necessarily gotten worse, because in fact, with seven of the eight doctors, it's right on par with them. But with this other doctor, if that doctor is not agreeing with the algorithm, which is agreeing with the other seven, then that doctor is actually not agreeing with the other seven.
And so now you have an opportunity to train and relearn.
Those are the use cases to go after. And train and relearn, the person? The person. Address their reading errors, coding errors, see what's going on. And that qualitative look, I think, is very, very valuable.
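As a sketch of that QI idea (the reads and the outlier rule here are hypothetical), you'd monitor each reader's disagreement rate with an algorithm the other readers largely agree with, and flag outliers for review:

```python
# Hypothetical QI sketch: flag readers whose disagreement with a broadly
# validated algorithm is an outlier relative to their peers.
reads = {  # reader -> list of (reader_label, algorithm_label) pairs
    "dr_a": [("afib", "afib"), ("sinus", "sinus"), ("afib", "afib")],
    "dr_b": [("afib", "afib"), ("sinus", "afib"), ("afib", "afib")],
    "dr_c": [("sinus", "afib"), ("afib", "sinus"), ("sinus", "afib")],
}

def disagreement_rate(pairs):
    return sum(1 for reader, algo in pairs if reader != algo) / len(pairs)

rates = {doc: disagreement_rate(p) for doc, p in reads.items()}
mean_rate = sum(rates.values()) / len(rates)
for doc, rate in rates.items():
    if rate > 2 * mean_rate:  # crude rule; a real system would use better stats
        print(f"{doc}: disagreement {rate:.0%}, review reads and retrain")
```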
So what are the ways we're actually going to start seeing it in the clinical setting, you know, the tools that we might see our doctor actually use with us, or not? I think it's going to be these adjacencies around
treatment and management. There are a lot of things that happen in the hospital that seem really
primitive and arcane and no one wants to do them. And I'll give you a simple one, which is OR scheduling.
So is it actually the way it looks like it is in Grey's Anatomy? Is it just a whiteboard and somebody at the phone at the OR front desk? That's unbelievable. There's a back end of scheduling that happens for the outpatients, but you have add-ons, you have emergencies, you have finite resources. I mean, it seems like even an Excel sheet would be better than a whiteboard. The way OR scheduling works now is primitive,
and it also involves gaming. It involves convincing staff X, Y, and Z to stay to get the case done
or do it tomorrow. So there's so much behind-the-scenes human negotiation. For example, when I do catheter ablations, we have many different moving parts: equipment, the support personnel of the
equipment manufacturer, anesthesia, fellows, nurses, whatever. And everyone has little pieces of that
scheduling. And it all comes together, but it comes together in the art of human negotiation. And very
simple things like, well, this is your block time. And if you want to go outside your block time,
you know, you need to write a Hallmark card to person X. And so it's a very simple problem where there are huge returns in efficiency if you can have AI do that, right? And the
AI inputs over time could be like, well, you can really truly know which physicians are
quick and speedy, which ones tend to go over a lot of the time, which patient cases might
be high risk, which ones may need more backup, which should be done during daytime hours.
But you could add their Fitbit data and then you could tell who's drowsy at any given moment.
Oh, that's fascinating. Yeah. Whether or not they want to do it. How stressed are they feeling? And so people stay at times when they're really needed. And
that kind of elasticity can come with automation where we fail right now. And so this is a great
place where you are not making diagnoses. There's nothing you're being committed to from a kind of, you know, basic regulatory framework. You're just optimizing scheduling. So who actually
says... so say that that technology is available, how do you actually get it in? You know, where's the sort of confluence of the regulation and the actual rollout, and how does it actually make its way into a hospital, into a waiting room? There's an alternative model I've seen,
which is startups acting as full-stack healthcare providers. So Omada Health or Virta Health would be examples of this, where if you have pre-diabetes or diabetes, respectively, the physician
can actually refer the patient to one of these services. They have on-staff physicians. They're
registered as providers with national provider IDs. They bill insurance just like a doctor would, and they're essentially acting as a provider who addresses the whole condition
end to end. I think that case actually simplifies decision-making because you don't necessarily
have to convince both Stanford and United Healthcare to adopt this thing. You can actually convince
just a self-insured employer that they want to include one of these startups as part of their
health plan. And so I think that simplifies the decision-making process and ensures that the
physicians and the AI folks are under the same roof. I think that's a model that we're going to
see probably get the quickest adoption, at least in the payer world. There are many models.
And which is the best model will depend on how you're helping and the indication and on the
accuracy and what you're competing against, and so on. This is a case where we'll probably see the healthcare industry maybe reconstitute itself by vertical, with AI-based diagnostics or therapeutics. Because if you think about it, right now providers are geographically structured, but with AI,
every data point makes the system more accurate.
Presumably in an AI-based world,
providers will be more oriented around a particular vertical.
So you might have the best data network in radiology,
the best data network in pathology, and so on.
Oh, that's interesting. Yeah.
Thank you so much for joining us on the A16Z podcast.
Great. Thank you.
Thank you.
Thanks for having us.