The Dose - The Missing Ingredient in Health Care AI? Community Voices
Episode Date: March 27, 2026
Seventy billion dollars is flowing into health care AI, but the people building it and the patients who need it most are rarely in the same room. On this episode, host Dr. Joel Bervell talks with pediatrician, researcher, and tech optimist Dr. Ivor Horn about what responsible AI innovation in health care requires. Drawing on her work building open-access datasets and equity frameworks for machine learning, Horn says that rigorous research, community partnership, and critical thinking are not obstacles to the work of building powerful tools — they are the work. "If you build for the most vulnerable patients," Horn says, "you will build a better product for everyone."
Transcript
The Dose is a production of the Commonwealth Fund, a foundation dedicated to health care for everyone.
Tech Optimist Dr. Ivor Horn is my guest on this episode of The Dose.
Dr. Horn is a pediatrician, professor, tech executive, advisor, and board member
to some of the most high-profile health care and corporate entities in the world.
But the title she claims most proudly, tech optimist.
And in today's age, saying you're a tech optimist really
means something, because we're living through the hype, the hope, and the unintended consequences
of rapid technological innovation every single day.
And when it comes to health and healthcare, those consequences become personal.
I see it in residency and the pulse oximeter readings we rely on, the monitors that never
stop beeping, the electronic medical records that shape how we think about patients.
Technology doesn't just sit in the background.
It shapes the decisions we make and how trust is built in everyday health outcomes.
At a time when health care options feel like they're shrinking for many Americans
and the promises and threats of tech are expanding,
the real question becomes,
what kind of future do we actually want to build?
Dr. Horn, thank you so much for joining me on The Dose.
Thank you for having me.
And I was just saying the last time we spoke to each other
was in person years ago at South by Southwest in Austin, Texas.
So it's great to be reconnected with you.
Likewise.
So you sit on the board right now of Boston Children's Hospital.
And you recently served as the chief health equity advisor at Google.
So your insights across the healthcare and technology spaces are broader than most people's.
So I want to start the conversation with some insights into what you are thinking about in this moment where AI's presence feels omnipresent.
In boardrooms, in startups, in clinical workflows.
From where you sit, what are your top-of-mind concerns for patient care right now?
Yeah, no, thank you so much for that question. I think
what I'm thinking about now are some broad categories that then come into patient care and
thinking about patients. I really think about the importance of critical thinking skills,
and that means critical thinking skills both for us as providers, but for patients and families who are
caring for their loved ones and the decisions that we make as clinicians and providers in the
healthcare system. But how do people make those decisions outside the four walls of health care as
well? The other one for me, I'm a researcher. So I spent over two decades in academic medicine.
So for me, the significance of rigorous research in these moments is so critically important,
and for us to understand the importance of conducting rigorous research.
And this one might surprise you, but the other thing for me is the role of community.
The role of community in being there for people, in people being there for each other,
in those moments when they have questions.
And I think about it for us as people who are doing technology work, people who are doing
the research, people who are using technology, how are we
utilizing the knowledge and wisdom of community to help us in that decision-making process.
For me as a primary care provider, when I was seeing patients, like the community really played
an important role in how I understood my patients in context. How do we translate that into technology?
So those are some of the big things that I think about under the big umbrella of responsible
innovation in health.
Absolutely. I love each of those. And like you said, I'm a little surprised. I maybe would not have thought of community in the first place when you think about
technology. We often see them as separate, but I love the way you frame that as needing that
community is still there in order to inform everything that we're thinking about. And right now,
we're seeing that about $70 billion is flowing into healthcare tech. A lot of that is AI-centered
or supported. I think there's something like 350,000 new applications, myriad projects.
From your perspective, is the capital flowing to the right problems and are the right
partners involved? You know, is it flowing to the right problem? Is it flowing to the right partners? It's
really a resource issue. The people who are thinking about it are the people who can afford it,
who can afford to develop the technology, apply the technology, monitor the technology,
assess it, do the analytics. Those things are really important in sort of who's minding
the innovation. And oftentimes, I'd say it starts with those people who see how it financially benefits them. So you think about large technology
companies who have the financial capability to build large models, to do large projects and move fast.
You think about pharma and life sciences. They have the financial resources to do the work.
And that's not new in AI. That's innovation in general, right? We see that in health care.
We see innovation. The MRI did not start in the community. Those who are also thinking about it and who are really concerned about it are those
who are supporting communities and who are working with the most vulnerable populations. So you're
thinking about state and local public health organizations and departments who are thinking about
how can we be able to utilize this technology when and where we can with the financial resources that
we have to impact and improve outcomes and reduce risk. But what are the costs of adopting
that technology and the inequities that can occur there? I kind of want to pick up on that last
point you're making about clinicians, especially at underserved community programs, for example,
and whether these individuals, clinicians and patients, are meaningfully shaping what's being built,
or are they being asked to kind of adapt to it after the fact? Oftentimes you see this work
happening at large academic health centers and academic medical centers. And I think that's the
nature of the resources that are available to folks. So my hope is that we figure out a way to change
that in creating that opportunity. I would have to say, I would love to say they're there at the
table. They're doing their best to get to the table and be a part of the table and building
their own tables. And that's where community comes in. They're building in community to say,
how can we work together to pool our resources to actually do some of this work? But I don't
fully see it yet. You also said something that this isn't new what we're seeing,
that there was the MRI before, right, and how it was used. What lessons should we be carrying
forward from earlier digital health missteps as we move deeper into this AI movement?
Yeah. No, that's such a great question. And this takes me back
to when I actually started at the intersection of health and technology. And the reason I got into this
space is because I was serving my patients in Anacostia and I saw them with their flip phones. And they were
coming to me saying, Dr. Horn, I know my kid looks fine now, but here's what the cough sounded like.
I recorded it on my phone. Or here's what that rash looked like last night. And for me,
it was like, can you please build for my patients? If you build for my patients, you will build a better product for everyone. If you build for those who are most vulnerable,
you are going to meet those needs. The opportunities that we have around adoption in health
care and in health systems, it really accelerated during the pandemic. So we have adoption in
hospitals and in health systems. We have things like ambient listening, ways that we are
making it better for clinicians so that they're able to spend more time
with their patients. And my hope is that we can make those things more affordable and create those
opportunities where those tools are available to clinicians who don't have the same level of
financial resources. An example of that is Doximity GPT. So Doximity has created this tool and made it available to clinicians. Taking a resource and making it more broadly available and making it
more equitably accessible is critically important for us to change the paradigm
from the way that we did things when we launched digital health and the adoption of digital health.
I love that you give that specific example, but it leads me to wonder, how are we assessing these tools right now?
Is there an equivalent to peer review, some type of standardized framework that ensures quality, safety, and clinical competency?
Or are we relying primarily on kind of market validation to see what hospital systems or individuals are adopting themselves?
Yeah.
I think there is a lot of work being done from the research perspective. I think clinicians are
engaged in the product development process in ways that had not necessarily been true 10 years ago.
So you have clinicians who are sitting at the table having those conversations about what they build
and understanding what they build throughout the product development life cycle. And I think that's really
an important inflection point for us. So for me, that's a good thing. In addition to that, we also
have researchers who are thinking about the rigor of the research. And I separate that out from research that is really influenced by industry, research coming out of industry that comes out fast and says, here's a benchmark. Take this benchmark. For me, it's like,
okay, let's step in and let's do some peer-reviewed research and start having those conversations.
And I think people are recognizing the importance of doing that as well.
As examples, I look at what Leo Anthony Celi and his team and his lab are doing at Harvard and MIT, because I know they're asking really critical, rigorous questions about the adoption and implementation of this research. They're looking at how it impacts performance. Those are really important examples of how we begin to think about that rigorous research.
The folks at Stanford, Nigam Shah and that group absolutely are doing the work.
The models that they're building within their systems, I think, are very important.
Obviously, the work that we do at Boston Children's, John Brownstein and his team have been doing work a long time before the AI uptake.
They were doing work in digital health and very much early adopters of collaborating with the clinicians.
and the researchers at Harvard as part of that work.
So there are pockets of folks who are doing that work.
For me, what I think is a gap that I would love to see is I would love to see more
engagement with community to help give more context for models that we're building.
And on the idea of community, I guess once a tool reaches the market and is out in the community, is there an accountability process?
You've mentioned some of these kind of different silos of individuals that are looking at it, but is there an actual accountability process if something
goes wrong per se? Yeah, I think we are at risk for things going wrong. I think when we look at
the governance, when we look at the implementation of technology within systems, I think systems are
working really hard to create checks and balances. And organizations that have the capacity to do that
are doing that work. Those are some things that we have learned. I think understanding the risks is still a part of the work that we have to do from a governance perspective.
How are we asking: what are the risks, what are the protections, what are the safety and quality protections? What's the process before we adopt a tool?
And how do we monitor that tool?
When the baseline foundational AI model gets updated, how does that impact performance?
What happens with drift?
And every organization doesn't necessarily have the capacity to be able to answer those complex questions.
Yeah.
When we think about academia specifically, is it a meaningful part of the developmental process of the next generation of health care tools?
Or is academia too restrained by its own systems or procedural requirements to fully participate?
We think about Silicon Valley and the move fast and break things approach versus academia being slower and research-backed, making sure that there's peer review.
So very curious about what you think about academia's role in a system that often can move slow.
Yeah.
You know, I was in academia and I left academia, and I was in big tech doing research in big tech.
And one of the things that for me was really important is we had to think about peer review and the rigor of peer review. If something is published without peer review, for me, I'm still going to read that article with the same critical eye that I would if I were a peer reviewer for a publication.
And I think there is an importance of doing that.
Do I think academic research and the research review process needs to find ways to pick up
the pace?
Absolutely.
And I think that's one of the benefits actually of AI.
We have the ability to create more efficiencies in the health services research process by using the technology. I think there's a need for education and training of the workforce, both for the researchers who are doing the research and for the folks
who are doing the peer review. We've all seen the things that kind of slipped through. We've all
heard the stories of the folks who put content in white text that wasn't meant to be read, so that when the peer reviewer runs the paper through a model, it makes the review more positive. So we still have to make sure that we have humans in the loop as part of the process. And at the
same time, I do think it is important for us to make sure that as we're maintaining relevance from an
academic perspective, that we are finding ways to create efficiency and move things forward faster.
And as AI moves from the diagnostic realm into more clinical applications, what are you seeing
in some of the biggest areas for potential? We're already seeing some of them. One, we're seeing
the work to reduce the workload for clinicians.
The numbers are small, but clinicians seem to say they feel it.
And that really matters.
I think the ability to consolidate information and data in the clinical process is very important.
I think a big piece of the work is in research and development.
Just the research and development and drug discovery process,
I think is going to be really important.
One of the things that I really am excited about is the clinicians that are involved in the
implementation process.
I think we need to have more of the care team involved, but the fact that clinicians are
there and part of developing the workflows, I think, is a very important part of the process, one that wasn't as engaged in the past as it is now.
I think I can second that feeling of the pressure lifting as a busy medical resident having to keep track of a lot of information, whether it's just being able to make charting a little bit easier, or, I think the biggest thing I've seen, finding information, right?
Like knowing, okay, let's say I have a patient that comes in.
They have adrenal insufficiency, but they also have all these other conditions.
At one point I would have to manually find that research study that talks about how to think about
all these other comorbidities in the context of what to do, but now I can just use an AI chatbot
or something that pulls it up in an easier way. So absolutely, I think I agree in terms of the
clinicians needing to be at the forefront and feeling the exact same way about having so much more time
to actually then focus on patient care. I'm curious about the challenges you've seen on the flip
side of as we move into this clinical application.
Yeah, the challenges are two things that we've sort of talked about.
The financial resources for less resourced care teams, whether it's a community hospital
or federally qualified health center or rural communities or a primary care provider,
you know, trying to remain in solo practice, the ability for them to adopt this technology
in a way that is useful.
You ask a physician in private practice,
they're like, I'm just trying to keep up with the things that are required of me.
How can you ask me to, one, afford and then to adopt this new tool?
Because many of those are like small businesses.
And I think it is so important because oftentimes the community provider,
can you imagine the ability for them to create a model where they can add the context of
what they understand about the community that they serve in that model and the social drivers of
health as they're providing care. That's really powerful. And it's more than just purchasing a tool
from a vendor. It's really about who's going to help me do the analytics? Who's going to help me
create that model? Who's going to help me sustain that model and update that model? That's the thing
that concerns me. I'm a primary care provider. I'm a primary care community-based pediatrician at
heart, even though I haven't seen patients in a while. So for me, I'm always thinking about
the massive amount of information that comes to a primary care pediatrician that could benefit from technology in that way.
And as innovations are coming about, and I think you've hit on this a little bit, are these innovations addressing the most urgent gaps in our healthcare system?
We think about primary care access or maternal health. Or are we seeing more of a proliferation of tools that feel duplicative, disconnected from the areas of greatest need, or just incremental?
Yeah, there are easier places and spaces where technology can be developed.
and developing at the front line, or, as we call it, the last mile, is really hard.
And when your goal is to go fast and build new things, and to be honest, compete with others
who are building things so that you can build before they do or get some way ahead,
like choosing those hard problems to solve on the front line that require you to engage with partners, engage with community, hear things that
don't quite fit neatly in your four by four box or in what you want to build. So they're building
the things that they can build with the least amount of friction.
Which isn't necessarily the things that affect the most people.
They're the things that we know we can get done. And like you said, it's often the hardest things that require the most collaboration between people, which technology like this doesn't often lend itself to. It can create silos or competition that keep us from looking at other solutions. I really appreciate that. I'm curious, who is at the
table right now when it comes to all these conversations that we're having? Depends on what table
you're talking about. I think, I mean, let's start. I'd say the development of AI and using new
innovations in clinical environments? I would have to say the folks that are at the table are more than
they used to be, but less than I would like it to be. So we definitely have big technology companies,
Googles, Microsofts, OpenAIs, Anthropics, they're all there. They're all building things.
We have academic health centers, academic medical centers. We have the Stanfords. We have the Harvards. And we have other academic medical centers who are building as well and doing really
great work. I think the folks who are really working to try to be at the table are community
health centers, the National Association of Community Health Centers, federally qualified
health centers to really begin to work and collaborate with others to be at the table to broaden
that perspective. I think what's really interesting that I didn't think about when I was
in academia or when I was like seeing patients every day was the entrepreneurs and the innovators
who are building in those spaces that are not really meant for big tech to build. And that's
okay because we need a whole ecosystem of folks building. There are entrepreneurs who are finding needs that are not being met in other places. And so they're
building there. And I think that part is really important. There's an organization called Health
Tech for Medicaid. And Health Tech for Medicaid is focusing on entrepreneurs who are building for the
Medicaid population, thinking about marginalized populations, and helping them to understand the
complexities of what it means to build for those communities and to meet all of those needs. And I
think having those folks at the table is really important as part of this process.
Absolutely. In your career, you've already been bringing different players to the table together.
You were involved in the development of SCIN, the Skin Condition Image Network.
And that was Google's open access dermatology image data set that you helped create in partnership
with Stanford Medicine empowered by AI. It's a project that meant a lot to me as a medical student
and now as an early physician, especially given the gaps in representation and dermatology training.
And I'm hoping if you can talk about how that project even first came together.
This was one of the things for me.
I appreciated the really smart people that I got a chance to work with every day.
And as a researcher, I get to ask questions.
And so we had a project where we had done something called DermAssist.
It was an initial effort to look at skin conditions.
And as part of that study, once they presented it at Google I/O, researchers came back and dermatologists came back and said, wait a minute, you don't have enough representation across skin types and skin tones to actually do this work.
That was really shortly after I came to Google. And I said, this is something we should be able to work on and solve.
And what was also really powerful about that is we had a whole other group of folks who were looking at skin tone and creating this framework around skin tone related to marketing and images for something completely different.
The Stanford people were the ones that raised their hand and said, yeah, we need to work on this.
So it became a really easy collaboration for the researchers at Google and the researchers at Stanford to say, let's see if we can do this.
Let's see if we can get people to volunteer and share this information.
And so when you talk about community, when people sort of say, we want to be part of the solution and building something, that was really important.
The other thing that we did was we did a research study and we got skin conditions from all over the world as part of a collaboration with a group out of Seattle.
And that's where we created the HEAL framework.
It stands for Health Equity Assessment of machine Learning performance. And we used skin conditions as part of that to create a performance evaluation for: is the machine learning model that you're building creating more equity or worsening disparities?
And so just that little someone raising their hand and saying, oh, no, no, no, you guys didn't quite get that right was really important.
But it was a matter of stepping into the void and saying, we didn't get this right. Let's not just do nothing. Let's lean in and fix it.
And I think the ability to do that at that moment in that time was really important.
And I think now the ability to say, we didn't quite get that right. Hold on a minute. Let's
lean in. I don't know how easy that is now. Thank you for sharing that. I actually remember when
that pushback was happening. I think I may have added to the discourse by making a video about it
and how there weren't enough images, but then also seeing the changes that were made over time
from the project that you talked about with the image equity project that was happening with the Google
Pixel phone. I remember that happening. And so all of those kind of connections, there was so much,
like you said, more that came out of it downstream, where there was Real Tone, right, which then became kind of a marketable thing as well, but it made sure that the new Google Pixel phone was able to better see darker skin tones, which would then actually improve, when someone took a picture of something, the ability to look at it and analyze it with AI algorithms.
And that helps not just one company, but all companies when we look at that new technology
that sees people differently.
So thank you for, I mean, being there at that time for being willing to say, hey, let's dive
deeper because like you said, the downstream effects are so vast when that happens.
I'm curious, what are future similar projects that are existing or where do we need to
widen the lens?
You've given such a good example already.
I think as we talk about AI, we need to widen the lens on who's a part of the evaluation process.
When we did the HEAL framework study, first of all, it was great because a bunch of people raised their hands and volunteered to do that work.
But for me, it was very critical that we sent it for peer review publication.
And they did and it made our work better.
And I think that was really important, the rigor of that because then it could be applied in ways that, you know, make products better.
I think where we need to begin to expand on this as we begin to think about AI and then making
sure that we are including community beyond the internal researchers and the organizations
that can afford it, but also partnerships and collaborations, whether it's with minority-serving medical institutions or HBCUs or Latino undergraduate organizations. All of those groups
need to be brought in.
We need to think about how are we engaging rural communities?
How are we thinking across the spectrum?
Because people who experience disparities don't look a certain way.
It's your neighbor.
It's your friend.
It's your cousin.
It's your auntie.
It's your grandparent.
This is something that you do so well, that sort of education and information work. And we think everyone has access to information and understands things.
And it's not true.
And it's hard for some folks to know whether something is real or it's AI.
And that information and misinformation and disinformation can be dangerous when it comes
to health care.
And if you are a family and your child is sick and you're like,
is that cough bad enough to go to the emergency room? Because if I go to the emergency room, I can't pay my
light bill. And I go and I open up ChatGPT and I ask ChatGPT a question. And ChatGPT says, oh, yeah, that's fine. You have 24 to 48 hours. And the reality is that that's not the case. The impact is
very significant. And so the importance of us working with these models and making them better and
having accountability and educating the community is all of those things are critically important now
as part of this work. There's so many gems in what you said. I think we used to joke that we would
get these spam calls and that our parents would fall for them. But now everyone's falling for
things like AI misinformation, disinformation, especially when it comes to health care. And it shows that
we're also susceptible to it.
And I kind of want to ask about iterative thinking.
We hear a lot about iterative thinking in technology.
But healthcare systems are deeply entrenched with legacy infrastructure, existing hierarchies,
there's concentrated decision-making power, right?
And in that reality, is there resistance to truly rethinking systems,
especially when it comes to kind of confronting and eliminating these biases that we're talking about right now?
One of the things that I am quite honestly excited about is I'm excited about the new generation of physicians taking over.
You know, folks need to get out of the way.
Like there are people who are tech optimists like me who are like, let's do it.
Like let's figure this out.
Like let's figure out how to build agents and let's figure out how to build agents better.
Yet at the same time, like we have digital natives who are coming into healthcare.
And I do think it is really important for this generation.
of caregivers, of clinicians, to not wait for someone to give you permission to say, we can do
and we can build things. And that's why working with entrepreneurs for me is such an
uplifting thing. I'm like, let me tell you the old school stuff so that you don't do that.
Someone recently said to me, I'm trying to do this project and I'm going into work with health
systems because I'm trying to fill a gap that even they know needs to be filled. But I keep running
against this bureaucracy, you know, and they're asking me, Ivor, is that real? And I'm like, yeah,
it's real. Let's have a conversation about that. And let's figure out how we can effectively
disrupt ourselves in a way that is positive, that is driving health outcomes, that's driving
improvements everywhere, because we don't have the time to just keep sort of saying, well,
this is just the way that it was done before. We don't need another pandemic to force us to move faster.
We should do it in a better way.
And I want to end with this question.
You call yourself a tech optimist.
And in a moment where trust really does feel pretty fragile,
when we think about institutions, data, in medicine especially,
what does responsible optimism actually look like to you?
And what would have to be true, five or 10 years from now,
for you to say, we got it right?
Responsible optimism is about not being afraid to ask the hard questions and the unpopular questions.
And to say the unpopular thing of that's not right.
Or we can do that better.
Let's do that better.
I think that is the optimism.
And I say this from a technology perspective,
really because we're so busy in a race to beat the next person that we're putting on rose-colored
glasses when we should be using our critical thinking skills to ask ourselves the hard questions
and say when we're not quite sure about things.
Absolutely.
Well, Dr. Horn, I want to say thank you so much for this incredible conversation.
It's been a long time coming.
And I want to say thank you for your optimism too.
I think especially in a moment that demands both imagination and accountability.
You've already been doing the work.
I know you're going to continue doing the work.
And thank you for inspiring people like myself,
who hopefully will take up the mantle and continue all the work you've already started.
Thank you so much.
This episode of The Dose was produced by Jody Becker,
Jesus Alvarado, and Naomi Libowitz.
Special thanks to Barry Scholl for editorial support, Matthew Simonson for recording assistance,
Jen Wilson and Rose Wong for art and design, and Paul Frame for web support.
Our theme music is Arizona Moon by Blue Dot Sessions.
If you want to check us out online, visit Thedose.com.
There, you'll be able to learn more about today's episode and explore other resources.
That's it for The Dose.
I'm Dr. Joel Bervell, and thank you for listening.
