Microsoft Research Podcast - Coauthor roundtable: Reflecting on healthcare economics, biomedical research, and medical education
Episode Date: August 21, 2025

For the series finale, Peter Lee, Carey Goldberg, and Dr. Zak Kohane compare their predictions to insights from the series' most recent guests, including experts on AI's economic and societal impact, leaders in AI-driven medicine, and doctors in training.
Transcript
As a society, indeed as a species, we have a choice to make.
Do we constrain or even kill artificial intelligence out of fear of its risks and obvious ability
to create new harms?
Do we submit ourselves to AI and allow it to freely replace us, make us less useful and less needed?
Or do we start, today, shaping our AI future together with the aspiration to accomplish things that
humans alone and AI alone can't do, but that humans plus AI can?
The choice is in our hands.
This is the AI Revolution in Medicine Revisited.
I'm your host, Peter Lee.
Shortly after OpenAI's GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I wrote The AI Revolution in Medicine to help educate the world of health care and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate.
Now, two years later, what did we get right and what did we get wrong?
In this series, we'll talk to clinicians, patients, hospital administrators, and others
to understand the reality of AI in the field and where we go from here.
The book passage I read at the top is from the epilogue,
and I think it's a truly fitting closing sentiment for the conclusion of this podcast series
because it calls back to the very beginning.
As I mentioned before, Carey, Zak, and I wrote The AI Revolution in Medicine
as a guide to help answer these big questions,
particularly as they pertain to medicine.
You know, we wrote the book to empower people
to make a choice about AI's development and use.
Well, have they? Have we?
Perhaps we'll need more time to tell.
But over the course of this podcast series,
I've had the honor of speaking with folks
from across the healthcare ecosystem.
And my takeaway: they're all committed to shaping AI into a tool that can improve the industry for practitioners and patients alike.
In this final episode, I'm thrilled to welcome back my co-authors, Carey Goldberg and Dr. Zak Kohane.
We'll examine the insights from the second half of the season.
Carey, Zak, it's really great to have you here again.
Hey, Peter.
Hi, Peter.
So this is the second roundtable. And just to recap, you know, we had several early episodes of the podcast where we talked to some doctors, some technology developers, some people who think about regulation and public policy, patient advocates, a venture capitalist who invests in consumer- and patient-facing medical ventures, and some bioethicists. And I think we had a great conversation there. I think, you know, it felt mostly
validating. A lot of the things that we predicted might happen, happened, and then we learned
a lot of new things. But now we have five more episodes, and the mix of kinds of people that
we talk to here is different than the original. And so I thought it would be great for us to have a
conversation and recap what we think we heard from all of them. So let's just start at the
top. So in this first episode in the second half of this podcast series, we talked to economists Azeem Azhar and Ethan Mollick. And I thought those conversations were really interesting. Maybe there were kind of two things, two main topics. One was just the broader impact on the economy, on the cost of health care, on overall workforce issues.
One of the things that I thought was really interesting
was something that Ethan Mollick brought up.
And maybe just to refresh our memories,
let's play this little clip from Ethan.
So we're in this really interesting period
where there's incredible amounts of individual innovation
and productivity and performance improvements in this field,
like very high levels of it.
We're seeing that in non-medical problems,
the same kind of thing, which is, you know, we've got research showing 20-40 percent performance
improvements, but then the organization doesn't capture it. The system doesn't capture it because
the individuals are doing their own work and the systems don't have the ability to kind
of learn or adapt as a result. So let me start with you, Zak. Does that make sense to you?
Are you seeing something similar? I thought it was incredibly insightful, because we discussed on our earlier podcast how a chief AI officer in one of the health care systems was highly regulating the use of AI, but yet in her own practice, on her smartphone, was using all these AI technologies. And so it's insightful that, on the one hand, she is increasing her personal productivity, and perhaps she's increasing the quality of her care. But it's very hard for the health care system to actually realize any gains. It's unlikely, let's put it this way, it would be for her a defeat if they said, now you should see more patients. Now, I'm not saying that won't happen. It could happen. But gains of productivity are really at the individual level of the doctors. And that's why they're adopting it.
That's why the ambient dictation tools are so successful,
but really turning it into things that matter
in terms of productivity for health care,
namely making sure that patients are getting healthy
requires that every piece of the puzzle works well together.
It's well-tread ground to talk about how patients get very expensive procedures
like a cardiac transplant and then go home
and they're not put on blood thinners,
and then they get a stroke.
You know, the chain is as strong as the weakest link.
And just having AI, one part of it, is not going to do it.
And so hospitals, I think, are doubly burdened by the fact that, A,
they tend to not like innovation because they are high-revenue, low-margin companies. But if they want to implement it effectively,
they have to do it across the entire processes of health care,
which are vast and not completely under their control.
Yeah, yep.
You know, that was Sara Murray, who's the chief health AI officer at UC San Francisco.
And then, you know, Carey, remember we were puzzled by Chris Longhurst's finding in a controlled study that, you know, having an AI respond to patient emails didn't seem to lead to any, I guess you would call it,
productivity benefits. I remember we were both kind of puzzled by that. I wonder if that's
related to what Ethan is saying here. I mean, possibly, but I think we've seen since then that
there have been multiple studies showing that in fact using AI can be extremely effective or
helpful, even, for example, for diagnosis. And so I find just from the patient point of view,
it kind of drives me crazy that you have individual physicians using AI because they know that
it will improve the care that they're offering, and yet you don't have their institutions
kind of stepping up and saying, okay, these are the new norms. By the way, Ethan Mollick is
a national treasure, right? Like, he is the classic example of someone who just stepped up at
this moment when we saw this extraordinary technological advance. And he's not only stepping up for
himself, he's spreading the word to the masses that this is what these things can do. And so it's
frustrating to see the institutions not stepping up and instead the individual doctor is having to do
it. But he made another very interesting point, which was that the reason that he could be so
informative to not only the public, but practitioners of AI, is these things would emerge
out of the shop, and they would not be aged too long, like a fine wine, before they were just
released to the public. And so he was getting exposure to these models just weeks after some of the progenitors had first seen them. And therefore, because he's actually a really creative person in terms of how he exercises these models, he sees uses and problems very early on.
But the point is, think about how much institutions are disadvantaged. They're not Ethan Mollick. They're not the progenitors, so they're even further behind. So it's very hard. If you talk to most of the C-suite of hospitals, they'd be delighted to know as much about the impact as Ethan Mollick does. By the way, you know, I picked out this quote because within Microsoft, and I suspect
every other software company, we're seeing something very similar, where individual programmers
are 20 to 30 percent more productive, just in the number of lines of code they write per day
or the number of pull requests per week, any way you measure it. It's very consistent. And yet,
by the time you get to, say, a 25-person software engineering team, that whole team isn't 25% more productive. Now, that is starting to change because we're starting
to figure out that, well, maybe we should reshape how the team operates. And there's more
of an orientation towards having, you know, smaller teams of full-stack developers. And then
you start to see the gains. But if you just keep the team organized in the usual way, there seems to be a loss.
So there's something about what Ethan was saying that resonated very strongly with me.
But I would argue that it's not just productivity we're talking about.
There's a moral imperative to improve the care.
And if you have tools that will do that, you should be using them or trying harder to.
I think, yes, first of all, absolutely. Unfortunately, most of the short-term productivity measures will not measure improvements in the quality of care because it takes a long time to die
even with bad care. And so that doesn't show up right away. But I think what Peter just said
actually came across in several of the podcasts, which is that it's very tricky trying to shoehorn
these things into making what we're already doing more productive. Yeah, existing structures.
Yeah, and I know, Carey, that you've raised this issue many times, but it really called
into question, what should we be doing with our time with doctors?
And they are a scarce resource, and what is the most efficient way to use them?
I remember we published a paper of someone who was able to use AI to increase the
throughput of their emergency room by actually more appropriately having the truly sick people
in the sick queue, in the triage queue for urgent care. And so I think we're going to have
to think about that way more broadly: we don't have to look at every patient as an unknown with maybe a few pointers on diagnosis. We can have a fairly extensive profile. And I know that colleagues at Clalit in Israel, for example, are using the overall
trajectory of the patient and some considerations about utilities to actually figure out who to see
next week. Yeah. You know, what you said brings up another maybe connection to one thing that
we see also in software development. And it relates to also what we were discussing earlier
about the last thing a doctor wants is to have a tool that allows them to see even yet more
patients per day. So in software development, there's always this tension, like how many lines of
code can you write per day? That's one productivity measure. But sometimes we're taught,
well, don't write more lines of code per day, but make sure that your code is well structured,
take the time to document it, make sure it's fully commented, take the time to talk to your
fellow software engineering team members to make sure that it's well coordinated. And in the long
run, even if you're writing half the number of lines of code per day, the software process will be
far more efficient. And so I've wondered whether there's a similar thing, where doctors could see
20% fewer patients in a day. But if they take the time, and also have AI help, to coordinate,
maybe a patient's journey might be half as long. And therefore, the health system would be able to
see twice as many patients in a year's period, or something like that.
So I think you've nerd-sniped me, which is all too easy.
But I think there's a central issue here.
And I think the stumbling block in what Ethan's telling us about, between the individual productivity and the larger productivity, is the team's productivity.
And there is actually a good analogy in computer science.
That's Brooks's The Mythical Man-Month, where he shows how you can have more and more resources,
But when the coordination starts failing because you have so many individuals in a team,
you start falling apart.
And so even if the individual doctors get that much better, yeah, they take better care of patients and make fewer stupid mistakes. But in terms of getting you into the emergency room and getting you out of the hospital as fast as possible, as safely as possible, as effectively as possible, that's the problem. That's teamwork, and we don't do it, and we're not really optimizing our tools for that.
And just to throw in a little reality check, I'm not aware of any indication yet that
AI is in any way shortening medical journeys or making physicians more efficient.
Yeah, at least.
So I think, you know, with respect to our book, critiquing our book, I think it's fair to say
we were fairly focused or maybe even fixated on the individual doctor or a nurse or a patient.
And we didn't really, at least I never had a time where I stepped back to think about the whole care coordination team or the whole health system.
And I think that's right.
It's because, first of all, you don't think about it.
It's not what we're taught in medical school.
We're not talking about team communication excellence.
And I think it's absolutely essential.
There was an early effort by Winograd, where he was trying to capture what are the different kinds of actions related to pronouncements that you could expect, and how AI could use that. And that was beginning to get at it.
But I actually think this is dark matter of human organizational technology
that is not well understood, and our products don't do well.
You know, we can talk about all the groupware things that are out there, but they all don't
quite get to that thing. And I can imagine an AI serving as a team leader, a really active
team leader, and a real quarterback of a care team. Well, in fact, you know, we have been trying
to experiment with this. My colleague Matt Lungren, who was also one of the interviewees early on, has been working with Stanford Medicine on a tumor board AI agent, something that would
facilitate tumor board meetings. And the early experiences are pretty interesting. Whether it
relates to efficiency or productivity, I think remains to be seen. But it does seem pretty
interesting. But let's move on. Well, actually, Peter, if you're willing to not quite move on yet, this kind of segues into one of, I think, the most provocative questions that arose in the course of these episodes, and that I'd love to have you
answer, which was, remember, it was a question at a gathering that you were at, and you were
asked, well, you're focusing a lot on potential AI effects on individual patient and physician
experiences, but what about the revolution, right? What about, like, can you, can you be more
big picture and envision how generative AI could actually kind of overturn or fix the broken
system, right?
I'm sure you've thought about that a lot.
Like, what's your answer?
You know, I think ultimately it will have to, for it to really make a difference. I think that the normal processes, our normal concept of how health care is delivered, how new medical discoveries are made and brought into practice,
I think those things are going to have to change a lot.
You know, one of the things I think about a lot right at the moment is, you know, we tend to think about, let's say, medical diagnosis as a problem-solving exercise.
And I think, at least at the Kaiser Permanente School of Medicine, the instruction really treats it as a kind of detective thing based on a lot of knowledge about
biology and biomedicine and human condition and so on.
But there's another way to think about it, given AI, which is when you see a patient
and you develop some data, maybe through a physical exam, labs, and so on, you can just simply ask: what did the 500 other people who are most similar to this patient experience?
How are they diagnosed?
How were they treated?
What were their outcomes?
What were their experiences?
And that's really a fundamentally different paradigm.
And it just seems like at least the technical means will be there.
And by the way, that also then relates to: what was most efficacious cost-wise?
What was most efficient in terms of the total length of the patient journey?
How does this relate to my quality scores so I can get more money from Medicare and Medicaid?
All of those things, I think, you know, we're starting to confront.
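As a concrete illustration, the look-at-the-500-most-similar-patients paradigm Peter describes is, at its core, a nearest-neighbor query followed by a tally of outcomes. Here is a minimal sketch under stated assumptions: the patient feature vectors, the record fields, and the function name are all hypothetical, not any real clinical system.

```python
import numpy as np
from collections import Counter

def similar_patient_summary(query, features, records, k=500):
    """Find the k patients most similar to `query` (cosine similarity
    over feature vectors) and tally how they were diagnosed and treated."""
    # Normalize rows so that a dot product equals cosine similarity.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    q = query / max(np.linalg.norm(query), 1e-12)
    # Rank all patients by similarity to the query and keep the top k.
    top = np.argsort(-(unit @ q))[:k]
    diagnoses = Counter(records[i]["diagnosis"] for i in top)
    treatments = Counter(records[i]["treatment"] for i in top)
    return diagnoses, treatments
```

In practice, of course, everything hard lives in the feature representation and in how outcomes are measured; the retrieval step itself is the easy part.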
One of the other episodes that we're going to talk about was my interview with two medical students. Actually, thinking of Morgan Cheatham as just a medical student or medical resident is a little strange. But one of the things he talks about is the importance that he placed, in his medical training, on adopting AI. So, Zak, I assume
You see this also with some students at Harvard Medical School.
And the other medical student we interviewed, Daniel Chen, seemed to indicate this, too,
where it seems like it's the students who are bringing AI into the medical education, ahead of the faculty.
Does that resonate with you?
It absolutely resonates with me.
There are students I run into who, honestly, my first thought when I'm talking to them is, why am I teaching you, and why are you not starting a big AI-in-medicine company now and really changing health care, instead of going through the rest of the rigmarole?
And I think broadly higher education has a problem there, which is we have not embraced, again, going back to Ethan, a lot of the tools that can be used.
And it's because we don't know necessarily the right way to teach them.
And so far, the only lasting heuristic seems to be: use them, and use them often.
And so it's an awkward thing where the person who knows how to use the AI tools now in the first-year medical school can teach themselves better and faster than anybody else in their class who is just relying on the medical school curriculum.
Now, the reason I brought up Morgan now after our discussion with Ethan Mollick is Morgan also talked about AI collapsing medical specialties.
And so let's hear this snippet from him.
AI collapses medical specialties onto themselves, right?
You have the canonical example of the cardiologist, you know, arguing that we should diurese, and maybe the nephrologist arguing that we should, you know, protect the kidneys.
And how do two disciplines disagree on what is right for the patient when, in theory, there
is an objective best answer given that patient's clinical status?
So I'm interested in this question of whether medical specialties themselves need to evolve.
And if we look back in the history of medical technology, there are many times where a new
technology forced a medical specialty to evolve.
So on the specific question about specialties, Zak, do you have a point of view?
And let me admit, first of all, for all three of us, we didn't have any clue about this
in our book, I don't think.
Not much.
Not much of a clue.
So I'm reminded of a New Yorker cartoon where you see a bunch of surgeons around a patient, and someone says, is that a spleen? I don't know. I slept through the spleen lecture. Or, I didn't take the spleen course.
And yet, when we measure things, we measure much more than we think we are measuring.
So, for example, we just published a paper
where echocardiograms were being done.
And it turns out those ultrasound waves
just happen to also permeate the liver.
And you can actually diagnose, along the way, with AI, all the liver disease, the treatable liver disease, that's in those patients. But if you're a cardiologist: the liver? I don't know, I slept through the liver lecture.
And so I do think that, A, the natural, often guild slash dollar-driven silos in medicine
are less obvious to AI, despite the fact that they do exist in departments and often in chapters.
But Morgan's absolutely right.
I can tell you as an endocrinologist,
if I have a child in the ICU, the endocrinologist, the nephrologist, and the neurosurgeon will argue about the right thing to do.
And so, in my mind,
the truly revolutionary thing to do
is to go back to 1994, to Pete Szolovits and the Guardian Angel project.
What I think you need is a process, and the process is the quarterback, and the quarterback has only one job: take care of the patient. And it should be thinking all the time about the patient. What's the right thing? And it can be as schoolmarmish or not as it likes: Zak, you're eating this or that, or exercise, or sleep. But also: hey, surgeons and colleagues, you're talking about my host, Zak; this is the right way, because of this problem and this problem, and our best evidence is this is the right way to get rid of the fluid. The other ways will kill him. And I think you need an authoritative quarterback that has the view of the others, but then makes the calls.
Is that quarterback going to be AI or human?
Well, for the very lucky people, it'll be a human augmented by AI, a super concierge. But I think we're running out of doctors. And so, realistically, it's going to be an AI that will have to be certified in very different ways, essentially trial by fire: the same way we put residents into clinics, we're going to be putting AIs into clinics.
But what's worse, by the way, than the three doctors arguing about care in front of the patient is what happens so frequently: then you see them as an outpatient, and each one of them gives you a different set of decisions to make, decisions that sometimes interact pathologically, unhealthily, with each other. And only the very smart nurses or primary care physicians will actually notice that and call, quote, a family meeting, or bring everybody into the same room to align them.
Yeah.
I think this idea of quarterback is really very, very topical right now because there's so much intensity in the AI space around agents.
And in fact, you know, the Microsoft AI team under Mustafa Suleyman, with Dominic King, Harsha Nori, and their team, just recently posted a paper on something called sequential diagnosis, which is basically an AI quarterback that is supposed to smartly consult with other AI specialists.
And interestingly, one of the AI agents is sort of the devil's advocate
that's always criticizing and questioning things.
And at least on very, very hard, rare cases, it can develop some impressive results.
There's something to this that I think is emerging.
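The orchestration pattern Peter describes, a quarterback agent consulting specialists while a devil's-advocate agent challenges the leading answer, can be sketched roughly as follows. This is a toy illustration only, not the actual sequential diagnosis system; the function names, the majority-vote rule, and the convergence condition are all assumptions.

```python
from collections import Counter

def run_panel(case, specialists, critic, max_rounds=5):
    """Toy quarterback loop: each round, specialist agents propose a
    diagnosis, the leading (majority) proposal is shown to a
    devil's-advocate critic, and the loop stops once the critic has
    no objection and a strict majority of specialists agree."""
    history = []
    leading = None
    for _ in range(max_rounds):
        # Each specialist sees the case plus the debate so far.
        proposals = [s(case, history) for s in specialists]
        leading, votes = Counter(proposals).most_common(1)[0]
        # The critic either objects (a string) or accepts (None).
        objection = critic(case, leading, history)
        history.append({"proposals": proposals,
                        "leading": leading,
                        "objection": objection})
        if objection is None and votes * 2 > len(specialists):
            break
    return leading, history
```

Here `case` would be the evolving clinical record, and each specialist or critic could be backed by a separate model call; the interesting design question is what the critic is allowed to veto before the quarterback commits.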
And Peter, Morgan said something that blew me away even more, which was: well, why do we even need specialists? The reason for specialists is that there's so much medical knowledge that no single physician can know all of it, and therefore we create specialists. But that limitation does not exist for AI. And so there he was, kind of undermining this whole elaborate structure that has grown up because of human limitations and may not ultimately need to be there. Right. So now that gives me a good segue to get back to our economists and get to something that Azeem Azhar said. And so there's a clip here from Azeem.
We didn't talk about AI in its ability to potentially do this, which is to extend the clinician's
presence throughout the week. You know, the idea that maybe some part of what the clinician would do, if you could talk to them on Wednesday, Thursday, and Friday, could be delivered through an app or a chatbot, just as a way of encouraging compliance,
which is often, especially with older patients, one reason why conditions linger on for longer.
And, you know, in the same conversation, he also talked about his own management of asthma
and the fact that he's been managing this for several decades
and knows more than any other human being,
no matter how well medically trained, could possibly know.
And it's also very highly personalized.
And it's not a big leap to imagine AI having that sort of lifelong understanding.
So, in fact, I would give credit back to our book, since you insulted us. You challenged us. You doubted us. We do have, at the end of the book, an AI which is helping this woman manage her way through life, quarterbacking all these different services for her. So there.
You're right.
Yes.
In fact, it's very much, I think, along the lines of the vision that Azeem laid out in our conversation.
Yeah.
It also reminded me of the piece Zak wrote about his mother at one point, when she was managing congestive heart failure and she needed to watch her weight very carefully to track her fluid status. And absolutely, I see no reason whatsoever why that couldn't be done with
AI right now. Actually, although back then, Zak, you were writing that it takes much more than
an AI to manage such a thing. You need an AI that you can trust. Now, my mother was born in
1927, and she'd learned through the School of Hard Knocks that you can't trust too many people,
maybe even not your son, MD PhD. But what I've been surprised is how, for example,
how many people are willing to trust and actually see effective use of AI as mental health
counselors, for example. Yeah. So it may, in fact, be that there's a generational thing going on,
and at least there will be some very large subset of patients, which will be completely comfortable
in ways that my mother would have never tolerated.
Yeah, now I think we're starting to veer into some of the core AI issues.
And so I think maybe one of the most fun conversations I had
was in the episode with both Sébastien Bubeck, my former colleague at Microsoft Research who's now at OpenAI, and Bill Gates.
And there was so much that was, I thought, interesting there.
And there was one point Sébastien made, I think, that sort of touches tangentially on what we were just conversing about.
So let's hear this snippet.
And one example that I really like is a study that recently appeared where they were comparing doctors without and with ChatGPT. So this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. But then the kicker is that doctors with ChatGPT were 80%.
intelligence alone is not enough.
It's also how it's presented, how you interact with it.
And ChatGPT is an amazing tool.
Obviously, I absolutely love it.
But it's not, you don't want the doctor to have to type in, you know, prompts and use it
that way.
It should be, as Bill was saying, kind of running continuously in the background, sending
you notifications.
So I thought Sébastien was saying something really profound, but I haven't been able to
quite decide or settle in my mind what it is. What do you make of what Seb just said?
I think it's context. I think that it requires an enormous amount of energy, brain energy,
to actually correctly provide the context that you want this thing to work on. And it's only
going to really feel like we're in a different playing field when it's listening all the time
and it just steps right in.
There is an advantage that, for example, a good programmer can have in prompting Cursor or any of these tools, but it takes effort.
And I think being in the conversation all the time
so that you understand the context
in the widest possible way is incredibly important.
I think that's what Seb is getting at, which is, if we spoon-feed these machines, yes, 90%. But then talking to a human being, who then has to interact and gets distracted from whatever flow they're in, maybe even makes them feel like an early bicycle rider who suddenly realizes, I'm balancing on two wheels, oh no, and they fall over. You know, there's that interaction, which is negatively synergistic. And so I do think it's a very hard
human-computer engineering problem: how do we make these two agents, human and computational, work in an ongoing way, in the flow? I don't think I'm seeing anything that's
particularly new. And the things that you're beginning to hint at, Peter, in terms of agentic coordination, I think we'll get to some of that. Yeah. Carey, does this give you any pause? They're puzzling results. I mean, the idea of doctors with AI, at least in this one test, and it's just one test, but, you know, it's odd that they do worse than the AI alone.
Yes, I would want to understand more about the actual conditions of that study.
From what Bill Gates said, I was most struck by the question of resource-poor environments: even though this was absolutely one of the most promising, brightest prospects that we highlighted in the book, we still don't seem to be seeing a lot of use among the one half of humanity that lacks decent access to health care. I mean, there are access problems everywhere,
including here in the United States. And it is one of the most potentially promising uses of
AI. And I thought, if anyone would know about it, he would, with the work that the Gates Foundation
does. You know, I think both you and Bill, I felt, are really simpatico. You both expressed genuine surprise that more isn't happening yet. And it really echoed, in fact, maybe even used some of the exact same words that you've used. And so, two years on, you've repeatedly said you expected to have seen more out in the field by now. And I thought Bill was saying
something in our conversation very similar. For me, I see it both ways. I see the world of medicine
really moving fast and confronting the reality of AI in such a serious way. But at the same time,
it's also hard to escape the feeling that somehow we should be seeing even more. So it's
an odd thing, a little bit paradoxical. I think one thing that we hardly focused on at all in the book, but that we are seeing, is these companies rising up, stepping up to the challenge: Abridge and OpenEvidence, and what Morgan describes as a new stack, right?
So there is that on the flip side.
Now, I want to get back to this thing that Seb was saying,
and, you know, I had to bring up the issue of sycophancy, which we discussed at our last roundtable also. It was particularly relevant because at the time that Seb, Bill, and I had our conversation, OpenAI had just gone through having to retract a fresh update of GPT-4o because it had become too sycophantic.
So I can't escape the feeling that some of these human-computer interaction issues are related to this tension between wanting AI to follow your directions and be faithful to you, but at the same time not agree with you so often that it becomes a fault.
I think so. We're asking the AI to enter into a fundamental human conundrum, which is that there are extreme versions of doublethink, and there are everyday aspects of doublethink, which is how to be an effective citizen. Even if you're thinking, I'm thinking this, I'm just not going to say it, because that would be rude or counterproductive. Or some of the official doublethink, where you're actually told you must say this, even if you think something else. And I think we're giving these things a very tough mission:
Be nice to the user and be useful.
And in education, we know that that is not always one and the same. Sometimes you have to give a little tough love to educate someone, and doing that well is both an art and very difficult. And so, you know, I'm willing to believe that the latest frontier models released in the last month are very high performing, but they're also all highlighting that tension. Yes, that tension between behaving like a good citizen and being helpful.
And this gets back to what are the fundamental values that we hope these things are following.
It's not, you know, are these things going to turn us into the paperclip factory?
It's more, which of our values are going to be elevated and which ones will be suppressed.
Well, since I criticized our book before, let me
pat ourselves on the back this time,
because I think pervasively
throughout our book, we
were touching on some of these issues.
In fact, we started the book, you know,
with GPT-4 scolding
me for wanting it
to impersonate Zak.
And there was the whole
example of
asking it to rewrite
a poem in a certain way, and it
kind of silently, you know,
without me knowing,
tried to slide by without following through on the whole thing. And so that early version of GPT-4 was
definitely not sycophantic at all. In fact, it was just as prone to call you an idiot if it thought
you were wrong. I had some very testy conversations around my endocrine diagnosis with it.
Well, and Peter, I would ask you, I mean, last time I asked you about, well, hallucinations,
aren't those solvable? And this time I would ask you, well, sycophancy, isn't that kind of like
a dial you can turn? Is that not solvable?
I think there are several interlocking problems, but even if we assume superintelligence, medicine is such an inexact science that there will always be situations that call for guesses, that take into account other factors of a person's life, other value judgments, exactly as Zak had pointed out in our
previous roundtable conversation. And so I think there's always going to be an opening for
either differences of opinion or agreeing with you too much. And there are dangers in both
cases. And I think they'll always be present. I don't know that at least in something as inexact
as medical science, I don't know that'll ever be completely eliminated. And that's interesting
because I was trying to think what's the right balance,
but there are patients who want to be told,
this is what you do.
Whereas there's other patients
who want to go through every detail of the reasoning.
And it's not a matter of education.
It's really a temperamental personality issue.
And so we're going to have to,
I think, develop personalities
that are most effective
for those different kinds of individuals
And so I think that is going to be the real frontier.
Having human values and behaving in ways that are recognizable
and yet effective for certain groups of patients.
And lots of deep questions, including how paternalistic do we want to be?
All right.
So we're getting into medical science and hallucination.
So that gives me a great segue to the conversations in the episode
on biomedical research.
And one of the people that I interviewed was Noubar Afeyan from Moderna and Flagship Pioneering.
So let's listen to this snippet.
We, some hundred or so times a year, ask what-if questions that lead us to totally weird places of thought.
We then try to iterate, iterate, iterate to come up with something that's testable.
Then we go into a lab and we test it.
So in that world, sitting there going, like, how do I know this transformer's going to work?
The answer is, for what?
Like, it's going to work to make something up.
Well, guess what?
We knew early on with LLMs that hallucination was a feature,
not a bug, for what we wanted to do.
So I think that really touches on just the fact that there are so many unknowns,
such a lack of precision and exactness, in our understanding of human biology and of medicine.
Carey, what do you think?
I mean, I just have this emotional reaction, which is that I love the idea of AI marching into biomedical science, everything from getting to the virtual cell eventually to, Zak, I think it was a colleague of yours who recently published about a new medication that had been sort of discovered by AI and was actually testing out up to the phase two level or something, right?
All this is, this is Marinka's work.
Yeah, Marinka, yeah.
Marinka Zitnik.
And, yeah, so, so, I mean, I think it avoids a lot of the sort of dilemmas that are
involved with safety and so on, with AI coming into medicine.
And it's just the discovery process, which we all want to advance as quickly as possible.
And it seems like it actually has a great deal of potential that's already starting to be
realized.
Oh, absolutely.
Absolutely. Actually, I love this topic. First of all, I think Bill and Seb had interesting things to say on that very topic, that rationale, which I had not really considered, for why, in fact, things might progress faster in the discovery space than in the clinical delivery space: just because we don't know in clinical medicine what we're trying to maximize precisely, whereas for a drug effect, we do know
what we're trying to maximize.
Well, in fact, I happened to save that snippet from Bill Gates saying that.
So let's cue that up.
I think it's very much within the realm of possibility that the AI is not only accelerating
health care discovery, but substituting for a lot of the roles of, you know,
I'm an organic chemist or I run various types of assays.
I can see those which are, you know, testable output.
type jobs with still very high value. I can see, you know, some replacement in those areas
before the doctor. So, Zak, isn't that Bill saying exactly what you're saying?
That is my point. I have to say that this is another great bet that either we're all going to be
surprised or displeased or a large group of people will be surprised or disappointed. There's still a lot of
people in the sort of medicinal chemist, trialist space who are still extremely skeptical that this is going to
work. And we haven't quite shown them yet that it is. Why have we not shown them? Because we
haven't gone all the way to a phase three study, which showed that the drug behaves as it was expected
to, is effective, and basically doesn't hurt people. That turns out to require a lot of knowledge.
I actually think we're getting there, but I understand the skepticism. Carey, what are your thoughts?
Yeah, I mean, there will be no way around going through full-on clinical trials for anything to
ever reach the market, but at the same time, you know, it's clearly very promising. And just to
throw out something for the pure fun of it, Peter, I saw one of my favorite tweets recently was
somebody saying, you know, isn't it funny how computer science is actually becoming a lot more
like biology in that it's just becoming empirical. It's like you just throw stuff at the AI and see
what it does. And I was like, oh yeah, that's what Peter was doing when we wrote the book. He understood
its inner workings as well as anybody can, but at the same time, it was a totally empirical exercise in
seeing what this thing would do when you threw things at it. Right. So that's the new biology.
Yeah. So I think we talked in our book about accelerating biomedical knowledge and medical
science. And that actually seems to be happening. And I really had fun talking to Daphne Koller
about some of the accomplishments that she's made.
And so here's a little snippet from Daphne.
This will impact not only the early stages of which hypotheses we interrogate,
which molecules we move forward,
but also hopefully at the end of the day,
which molecule we prescribe to which patient.
And I think there's been obviously so much narrative over the years
about precision medicine, personalized medicine,
and very little of that has come to fruition
with the exception of certain islands in oncology, primarily
on genetically driven cancers.
So, Zak, when I was listening to that, I was reminded of one of the very first examples
that you had where, you know, you had a very rare case of a patient and you're having to
narrow down some pretty complex and very rare genetic conditions.
This thing that Daphne says, that seems to be the logical conclusion that everyone who's
thinking hard about AI and biology is coming to. Does it seem more real now, two years on?
It absolutely seems more real. Here's some sad facts. If you are at a cancer center,
you will get targeted therapies if you qualify for it. Outside cancer centers, you won't.
And it's not that the therapies aren't available. It's just that you don't have people thinking
about it in that way. And especially if you have some of the rare and more aggressive cancers,
if you're outside one of those cancer centers, you're at a significant disadvantage for survival for that reason.
And so anything that provides just the simple, in quotes, dogged investigation of the targeted therapies for patients, it's a home run.
So my late graduate student, Atul Butte, died recently at
UCSF, where he was both a professor and the leader of the Bakar Institute, and he was the Priscilla Chan and Mark Zuckerberg Distinguished Professor of Pediatrics.
He was diagnosed with a rare tumor two years ago.
His wife is a PhD biologist, and when he was first diagnosed, she sent me the diagnosis and the mutations.
And I don't know if you know this, Peter, but this was still when we were writing the book and people didn't know about GPT-4.
I put those mutations and the diagnosis into GPT-4.
And I said, I'd like to help treat my friend.
What's the right treatment?
And GPT-4, to paraphrase, said, before we start talking about treatment,
are you sure this is the right diagnosis?
Those mutations are not characteristic for that tumor.
Right.
And he had been misdiagnosed.
And then they changed the diagnosis, the therapy, and some personnel.
I don't have to hallucinate this.
It's already happened.
And we're going to need this.
And so I think targeted therapy for cancers is the most obvious use.
And God forbid, one of you has a family member who has a cancer, it's moral malpractice,
not to look at the genetics and run it past GPT-4 and say, what are the available therapies?
I really deeply believe that.
Carey, I think one thing you've always said is that you're surprised that we don't hear more stories along these lines.
And I think you threw a quote from Mustafa Suleyman back at me.
Do you want to share that?
Yes.
Recently, I believe it was a Big Technology interview, and the reporter asked Mustafa Suleyman.
So you guys are seeing 50 million queries, medical queries a day.
You know, how's that going?
And I think I am a bit surprised that we're not seeing more stories of all types.
Both here's how it helped me and also here was maybe a suggestion that was not optimal.
Yeah.
I do think in our book we did predict both positive and negative outcomes of this.
And it is odd.
Atul was very open with his story,
and, of course, he was such a prominent leader in the world of medicine.
But I think I share your surprise, Carey.
I expected by now that a lot more public stories would be out.
Maybe there is someone writing a book collecting these things.
I don't know.
Maybe someone called Carey Goldberg should write that book.
I'd write a book, maybe.
I mean, we have Patients Use AI, which is a wonderful blog by Dave deBronkart,
the patient advocate.
But I wonder if it's also something structural, like who would be or what would be the institution
that would be gathering these stories, I don't know.
That's a problem.
You see, this is back to the same problem that Morgan was talking about.
Individual doctors are using them.
The hospital as a whole is not doing that.
So it's not judging, as part of its quality metrics, how well the AI is performing
and what new has happened.
And the other audience, namely the patients, have no mechanism.
There is no mechanism to go to a Better Business Bureau and say, they screwed up, or, this is great.
So now I want to get a little more futuristic.
And this gets into whether AI is really going to get almost to the ab initio understanding of human biology.
And so Eric Topol, who is one of the guests, spoke to this a bit.
So let's hear this.
So you talk about a virtual cell.
Is that achievable within 10 years,
or is that still too far out?
No, I think within 10 years for sure.
You know, the group that got assembled
that Steve Quake pulled together,
I think it's 42 authors of a paper in Cell.
The fact that he could get these 42 experts,
some in life science and some in computer science,
to come together and all agree that not only is this a worthy
goal, but that it's actually going to be realized. That was impressive. I have to say Eric's optimism
took me aback. Just speaking as a techie, I think I started off being optimistic: as soon as we can
figure out molecular dynamics, biology can be solved. And then you start to learn more about
biochemistry, about the human cell, and then you realize, oh my God, this is just so vast and
unknowable. And now you have Eric Topol saying, well, in less than 10 years.
So what's delightful about this period is that those of us who are cautious
were so incredibly wrong about AI two years ago. That's a true joy. I mean,
absolute joy. It's great to have your futurism made much more positive. But
I think that we're going from, you know, for example,
AlphaFold has had tremendous impact.
But remember, that was built on years of acquisition
of crystallography data that was annotated.
And of course, the annotation process becomes less relevant
as you go down the pike, but it started from that.
And there's lots of parts of the cells.
So when people talk about virtual cells,
I don't mean to get too technical, but
mostly people are talking about perturbation of gene expression.
They're not talking about, oh, this is how the lysosome
and the centrosome interact,
and this is how the Golgi bodies bump into each other.
There's a whole bunch of other levels of abstraction
that we had known nothing about.
This is a complex factory,
and right now we're sort of at the level of loading code into memory.
We're not talking about how the rest of the robots work in that cell,
and all the rest of those robots in the cell
turn out to be pretty important to its
functioning. So I'd love
to be wrong again, and
in 10 years hear, oh yeah,
not only that, you know,
our first-in-human study will be you,
Dr. Zak; we're going to give you
the drug because we've been able to fully
simulate you. That'd be great.
And, by the way, there probably was
a lot of animal research that could be
done in silico, and that,
for various political reasons, is now
happening. That's a good thing. But I think that sometimes it takes a lot of hubris to
get where we need to get, but my horizon is not the same as his. So I guess I have to take this
time to brag. Just recently, out of our AI for Science team, we did publish in Science a biological
emulator that does pretty long time spans,
very, very precise and very efficient molecular dynamics,
biomolecular dynamics, emulation.
We call it emulation because it's not simulating
every single time step, but giving you the final state.
That's an amazing result.
But that is an amazing result,
and you're doing it in some very important interactions,
but there's so much more to do.
I know, and it's single molecules,
not even two molecules,
and there's so much more to go for here.
But on the other hand, Eric is right.
You know, 42 experts writing for Cell, you know, that's not a small matter.
So I think sometimes you really need to drink your own hallucinogens to actually succeed.
Because remember, when the human genome project was launched, we didn't know how to sequence at scale.
We said maybe we would get there.
And then, in order to get the right funding and excitement and focus,
we predicted that by the early 2000s, it would be transforming medicine.
Has not happened yet.
Things have happened, but at a much slower pace.
And we're 25 years out.
In fact, we're 35 years out from the launch.
But again, things are getting faster and faster.
Maybe the singularity is going to make a whole bunch of things easier,
and GPT-6 will just say, Zak, you know, you are such a pessimist.
Let me show you how it's done.
It really is pessimism versus optimism.
I mean, biology is such a bitch, right?
Can we actually get there?
At the same time, everyone was surprised and blown away by the, you know,
the quantum leap of GPT-4.
Who knows when enough data gets in there if we might not have a similar leap.
Yeah.
All right.
So let's get back to health care delivery.
Besides Morgan Cheatham, we talked to a more junior medical student who's at the Kaiser Permanente School of Medicine, Daniel Chen.
And, you know, I asked him about this question of patients who come in armed with a lot of their own information.
Let's hear what he said about this.
But for those that come in with a list, I sometimes sit down with them and we'll have a discussion, honestly.
I don't think you have meningitis because, you know, you're not
having a fever, and some of the physical exam maneuvers we did were also negative. So I don't think
you have anything too worried about that. So I think it's having that very candid conversation
with the patient that helps build that initial trust. So Zak, as far as I can tell, Daniel and
Morgan are figuring this out on their own as medical students. I don't think this is part of the
curriculum. Does it need to be? It's missing the bigger point. The incentives and economic forces
are such that even if you were Daniel,
and things have not changed
in terms of incentives,
and it's 2030,
he still has to see this many patients
in an hour,
and sitting down
going over that
with a patient, let's say.
In fact, I think computer scientists
are enriched for this sort of neurotic
explain-to-me-why-this-works,
when often the answer is,
I have no idea; empirically,
it does. And
patients in some sense deserve
that conversation, and we're taught
about joint decision making,
but in practice, there's a lot of skills that are
deployed to actually deflect,
so that you can get through
the appointment
and see enough patients per hour.
And that's why
I think that another
central task for AI
is how to engage with patients,
to actually explain to them
why their doctor is doing what he's doing,
and perhaps ask the one or two questions
that you should be asking the doctor
in order to reassure you
that they're doing the right thing.
It's just, right now,
we are going to have less doctor time,
not more doctor time.
And so I've always been struck
by the divide between medicine
that we're taught as it should be practiced
as a gentleperson's vocation or sport, as opposed to assembly line, heads down,
you've got to see those patients by the end of the day,
because otherwise you haven't seen all the patients at the end of the day.
Yeah.
Carey, I've been dying to ask you this, and I have not asked you this before.
When you go see a doctor, are you coming in armed with ChatGPT information?
I haven't needed to yet, but I certainly would.
And also my reaction to the medical student description was, I think we need to distinguish between the last 20 years, when patients would come in armed with Google, and what they're coming in with now, because, at least in the experiences that I've witnessed, it is miles better to have gone back and forth with GPT-4 than with, you know, dredging what you can from Google.
And so I think we should make that distinction.
And I also, the other thing that most interested me was this question for medical students of whether they should not use AI for a while so that they can learn how to think.
And similarly, maybe don't use the automated scribes for a while so they can learn how to do a note.
And at what point should they then start being able to use AI?
And I suspect it's fairly early on that, in fact, they're going to be using it so consistently that there's not that much they need to learn before they start using the tools.
These two students were incredibly impressive.
And so I have wondered, you know, if we got a skewed view of things.
I mean, Morgan is, of course, a very, very impressive person.
And Daniel was handpicked, you know, by the dean of the medical school to be a subject of this interview.
But, you know, we filter our students by and large.
I mean, there's exceptions, but students in medical school are so starry-eyed.
And they got into medical school.
I mean, some of them may have faked it, but a lot of them really wanted to do good.
And they really wanted to help.
And so this is very consonant with them.
And it's only when they're in the machine past medical school that they realize,
oh, my God, this is a very, very different story.
And I can tell you, because I teach a course in computationally enabled medicine.
So I get a lot of these nerd medical students.
And I'm telling them, you're going to experience this, and, you know, they say, I'm not going to be able to change medicine until I get enough cred 10, 15 years from now, whereas I could start my own company and immediately change medicine.
And increasingly, I'm getting calls from them in residency saying, Zak, help me.
How do I get out of this?
And so I think there's a real disillusionment
there, between what we're asking of people coming to medical school.
We're looking for a phenotype, and then we're disappointing them, massively.
Not everywhere, but massively.
And for me, it's very sad, because these are among our best and brightest.
And then, because of economics and expectations and the nature of the beast,
they're not getting to enjoy the most precious
part of being a doctor, which is that real human connection and longitudinality.
You know, the connection between the same doctor at visit after visit is more and more of a
luxury.
Well, maybe this gets us to the last episode, you know, where I talked to a former state
director of public health, Umair Shah, and with Gianrico Farrugia, who's the CEO of the Mayo Clinic.
And I think if there's one theme that I took away from those conversations,
that we're not thinking broadly enough nor big enough.
And so here's a little quote of an exchange with Umair Shah,
who was the former head of public health in the state of Washington,
and prior to that in Harris County, Texas.
And we had a conversation about what techies tend to focus on
when they're thinking about AI and medicine.
I think one of the real challenges is that when even
tech companies, and you can name all of them, when they look at what they're doing in the
AI space, they gravitate towards health care delivery. Yes. And in fact, it's not even
delivery. I think techies, I did this too, tend to gravitate specifically to diagnosis.
I have been definitely guilty. I think Umair, of course, was speaking as a former frustrated
public health official and just thinking about all the other things that are important to maintain a
healthy population. Is there some lesson that we should take away? I think our book also focused
a lot on things like diagnosis. Yeah. Well, first of all, I think we just have to have
humility. And I think it's a really important ingredient. I found myself staring at the increase in
lifespan in human beings over the last two centuries and looking for bumps that were attributable.
I was in medical school, I'd already made this major commitment: what are the bumps that are
attributable to medicine? And there was one bump that was due to vaccines, small bump, another small
bump that was due to antibiotics. And the rest of it is nutrition, sanitation. Yeah, nutrition
and sanitation. And so I think doctors can be incredibly valuable, but not
all the time. And we're spending now one-sixth of our GDP on it. The majority of it is not
effectively prolonging life. And so the humility has to be the right medicine at the right time.
But that runs, A, against a bunch of business models, it runs against the primacy of doctors
in health care. It was one thing when there were no textbooks, there was no PubMed,
you know, when the doctor was the repository of all the knowledge that we have.
But I think your guests were right.
We have to think more broadly in the public health way.
How do we make knowledge pervasive like sanitation?
Although I would add that, since what we're talking about is AI,
it's harder to see how it helps if what you're talking about is public health.
I mean, it was certainly very important to have good data during the pandemic, for example.
But most of the ways to improve public health, like getting people to stop smoking and eat better and sleep better and exercise more, are not things that AI can help with that much.
Whereas diagnosis or trying to improve treatment are places that it could tackle.
And in fact, Peter, I wanted to put you, oh, wait, Zak's going to say something, but Peter, I wanted to put you on the spot.
I mean, if you had a medical issue now and you went to a physician, would you be okay with them not using generative AI?
I think if it's a complex or a mysterious case, I would want them to use generative AI.
I would want that second opinion on things.
And I would personally be using it.
If for no other reason than just to understand what the chart is saying.
I don't see, you know, how or why one wouldn't do that now.
It's such a cheap second opinion.
And people are making mistakes.
And even if there are mistakes on the part of AI,
if there's a collision, a discrepancy, that's worth having a discussion.
And again, this is something that we used to do more of
when we had more time with patients.
We'd have clinic conferences.
And we don't have that now.
So I do think that there is a role for AI.
But I think, again, it's much more of a continual presence,
being part of a continued conversation rather than an oracle.
And I think that's when you'll start seeing,
when the AI is truly a colleague and saying,
you know, Zak, that's the second time we made that mistake.
You know, that's not obesity.
That's the effect of your drugs that you're getting here.
You better back off of it.
And that's what we need to see happen.
Well, and for the business of healthcare,
that also relates directly to quality scores,
which translates into money for health care providers.
So the last person that we interviewed was Gianrico Farrugia.
And, you know, I was sort of wondering,
I was expecting to get a story from a CEO saying,
oh, my God, this has been so disruptive,
incredibly important, meaningful, but wow, what a headache.
At least Gianrico didn't express any of that.
Here's one of the snippets to give you a sense.
When generative AI came, for us, it's like, I wouldn't say we told you so,
but it's like, ah, there you go, here's another tool.
This is what we've been talking about.
Now we can do it even better.
Now we can move even faster.
Now we can do more for our patients.
It truly never was disruptive.
It truly immediately became enabling, which is strange, right?
because something as disruptive as that instantly became enabling at Mayo Clinic.
So I tried pretty hard in that interview to get Gianrico to admit
that there was a period of headache and disruption here.
And he never, ever gave me that.
And so I take him at his word.
Zak, maybe I should ask you, what about Harvard and the whole Harvard medical ecosystem?
I would be surprised if there are system-wide measurable gains in health quality right now from AI.
And I do have to say that Mayo is one of the most marvelous organizations in terms of team behavior.
So if there's someone who's gotten the team part of it right, they've come the closest, which relates to our prior conversation.
They have the quarterback idea pretty well down
compared to others. Nonetheless, I take him at his word that it hasn't disrupted them,
but also, I have yet to see the evidence that there's been a quantum leap in quality
or efficacy. And I do believe that it's possible to have a quantum leap in efficacy
in the right system. So if they haven't been disrupted, I would venture that
they've absorbed it, but they haven't used it to its fullest potential.
And the way I could be proven wrong is next year, all sorts of metrics showing that over
the last year, they've had decreased readmissions, decreased complications, decreased errors,
and all that.
And if so, God bless them.
And we should all be more like Mayo.
So I thought a little bit about two other quotes from the interviews that sort of
maybe would send us off with some more inspirational kind of view of the future.
And so there's one from Bill Gates and one from Gianrico Farrugia.
So what I'd like to do is to play both of those.
And then maybe we can have our last comments.
You know, I've gone so far as to tell politicians with national health systems that
if they deploy AI appropriately, that the quality of care, the overload of the doctors,
the improvement in the economics will be enough that their voters will be stunned because
they just don't expect this.
And they could be reelected just on this one thing of fixing what is a very overloaded
and economically challenged health system in these rich countries.
And now here's Gianrico.
And we seemed to be on a linear path, which is, let's try and reduce administrative burden.
Let's try and truly be a companion to a physician or other provider.
And then in the next step, we keep going until we get to, now, what we can call agentic,
whatever you want to call it.
And my view was no, is that let's start with that aim, the last aim,
because the others will come automatically if you're working on that harder problem.
Because one, to get to that harder problem, you'll find all the other solutions.
All right.
I think these are both kind of calls to be more assertive about this and more forward-leaning.
I think two years into the GPT-4 era, those are pretty significant and pretty optimistic calls to action.
So maybe just to give you both one last word, what would be one hope that you would have for the world of health care and medicine two years from now?
I would hope for businesses, whoever actually owns them at some holding-company level, but regardless of who owns them, that are truly patient-focused companies, companies where the whole AI is about improving your care,
and it's only trying to maximize your care,
and it doesn't care about resource limitations.
And I was listening to Bill,
and the problem with what you're saying about saving dollars for governments
is that, for many things,
we have some very expensive things that work.
And if the AI says, this is the best thing,
you're going to break your bank.
And instead, because of resource limitations,
we play a human-based fancy footwork to get out of it.
That's a hard game to play,
and I leave it to the politicians and the public health officials.
We have to do those trades of utilities.
In my role as doctor and patient,
I'd like to see very informed authoritative agents,
acting only on our behalf so that when we go and we seek to have our maladies addressed,
the only issue is, what's the best and right
thing for me now? And I think that is both technically realizable, and even in our weird system,
there are business plans that will work that can achieve that. That's my hope for two years from now.
Yeah, fantastic. I second that so enthusiastically. And I think, you know, we have this very
glass half full, glass half empty phenomenon two years after the book came out. And it's sort of
very nice to see, you know, new approaches to administrative complexity and to prior authorization
and all kinds of ways to make physicians' lives easier. But really what we all care about is our
own health and that we would like to be able to optimize the use of this truly glorious
technological achievement to be able to live longer and better lives. And I think what Zach just
described is the most logical way to do that. Yeah, I think for me, two years from now,
I would like to see all of this digital data, that's been so painful, such a burden
on every doctor and nurse to record, actually amount to something meaningful in the care of
patients. And I think it's possible. All right. So it's been quite a journey. We were
joking before, we're still on speaking terms after having written a book. And then I think listeners
might enjoy knowing that we debated amongst ourselves what to do about a second edition,
which seemed too painful to me. And so I suggested the podcast, which seemed too painful to the two
of you. And in the end, I don't know what would have been easier writing a book or doing this
podcast series, but I do think that we learned a lot. Now, last bit of business here,
to avoid having the three of us try to write a book again and do this podcast, I leaned on the
production team in Microsoft Research and the Microsoft Research podcast, and I thought it would
be good to give an explicit acknowledgement to all the people who've contributed to this.
So it's a long list of names. I'm going to read through them all, and then I suggest that
we all give a round of applause to them. And so here we go. There's Neltsa Berger,
Tatiana Bohemskine, David Silice Garcia, Matt Corwin, Jeremy Crawford, Christina Dodge, Chris Durie, Ben Erickson, Kate Forster, Katie Halliday, Alyssa Hughes, Jake Knapp, Weishung-Lew, Matt McGinley, Jeremy Mashburn, Amanda Melfie, Will Morrell, Joe Plummer, Brenda Potts, Lindsay Shanhan,
Sarah Sobolovsky, David Sullivan, Stephen Sullivan, Amber Tingle, Caitlin Traynor, Craig Tushoff, Sarah Wang, and Katie Zoller.
Really a great team effort, and they made it super easy for us.
Thank you, thank you.
Thank you.
A big thank you again to all of our guests for the work they do.
and the time and expertise they shared with us.
And last but not least, to our listeners.
Thank you for joining us.
We hope you enjoyed it and learned as much as we did.
If you want to go back and catch up on any episodes you may have missed or to listen to any again,
you can visit aka.ms/AIrevolutionPodcast.
Until next time.
Thank you.