a16z Podcast - Why We Shouldn’t Fear AI in Healthcare
Episode Date: July 7, 2020
"Why We Shouldn’t Fear the ‘Black Box’ of AI (in Healthcare and Everywhere)" by Vijay Pande. First published in the New York Times, January 2018. You can also find and share this article at a16z.com/aidoctor
Transcript
Why we shouldn't fear the black box of AI in healthcare and everywhere, by Vijay Pande.
This article first appeared in the New York Times in January 2018.
You can also find and share it at a16z.com/aidoctor.
Artificial intelligence's black box is nothing to fear.
So alongside the excitement and hype about our growing reliance on artificial intelligence,
there's a fear about the way the technology works.
A recent MIT Tech Review article titled The Dark Secret at the Heart of AI warned that no one really knows how the most advanced algorithms do what they do, and that could be a problem.
Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare, and education shouldn't even use such technology.
Given these types of concerns, the unseeable space between where the data goes in and the answers come out is often referred to as a black box, seemingly a reference to the hardy (and actually orange, not black) data recorders mandated on aircraft and often examined after accidents.
In the context of AI, the term broadly suggests an image of being in the dark about how the technology works: we provide data, models, and architectures, and computers provide us answers while continuing to learn on their own, in a way that's seemingly impossible, and certainly too complicated, for us to understand. There's a particular concern about this in healthcare, where AI is used to
classify which skin lesions are cancerous, to identify very early stage cancer from blood, to predict
heart disease, to determine which compounds in people and animals could extend healthy lifespans
and more. But these fears about the implications of the black box are misplaced. AI is no less transparent than the way in which doctors have always worked, and in many cases it represents an improvement, augmenting what hospitals can do for patients and the entire healthcare
system. After all, the black box in AI isn't a new problem due to new tech. Human intelligence
itself is, and always has been, a black box. Let's take the example of a human doctor making a
diagnosis. Afterward, a patient might ask that doctor how she made that diagnosis, and she would
probably share some of the data she used to draw her conclusion. But could she really explain how and why she made that decision? What specific data from which studies she drew on, what observations from her training or mentors influenced her, what tacit knowledge she gleaned from her own and from her colleagues' shared experiences, and how all of this combined into that precise insight?
Sure, she could probably give a few indicators of what pointed her in a certain direction, but there would be an element of guessing.
And even if there weren't, we wouldn't know that there weren't other factors involved of which she wasn't even consciously aware.
If the same diagnosis had been made with AI, we could draw from all of the available information on that particular patient, as well as from data anonymously aggregated across time
and from countless other relevant patients everywhere to make the strongest evidence-based
decision possible. It would be a diagnosis with a direct connection to the data, rather than
a human intuition based on limited data and derivative summaries of anecdotal experiences with a
relatively small number of local patients. But we make decisions every day in areas that we don't fully understand, often successfully, from the predicted economic impacts of policies to weather
forecasts, to the ways in which we approach much of science in the first place. We either oversimplify
things or accept that they're just too complex for us to break down linearly, let alone explain
fully. It's just like the black box of AI. Human intelligence can reason and make arguments
for a given conclusion, but it can't explain the complex underlying basis for how we arrived at
that particular conclusion. Think of what happens when a couple get divorced because of one stated
cause, say infidelity, while in reality there's an entire unseen universe of intertwined causes,
forces, and events that contribute to that outcome. Why did they choose to split up when another
couple in a similar situation didn't? Even those in a relationship can't fully explain it. It's a black
box. The irony is that, compared with human intelligence, AI is actually the more transparent of the two intelligences. Unlike the human mind, AI can and should be interrogated and interpreted.
There are many technologies that could help interpret artificial intelligence, even in a way
that we can't interpret the human brain, like the ability to audit and refine models,
to expose knowledge gaps in deep neural nets, debugging tools that will inevitably be built,
and even the potential ability to augment human intelligence via brain-computer interfaces.
In the process, we may even learn more about how human intelligence itself works.
Perhaps the real source of critics' concerns isn't that we can't see AI's reasoning, but that as AI gets more powerful, the human mind becomes the limiting factor. In that future,
we actually need AI to understand AI. In healthcare, as well as in other fields, this means that
we will soon see the creation of a category of human professionals who don't have to make
the moment-to-moment decisions themselves, but instead manage a team of AI workers, just as commercial airline pilots engage autopilots to land in poor conditions.
Doctors will no longer drive the primary diagnosis.
Instead, they'll ensure that the diagnosis is relevant and meaningful for a patient and
oversee when and how to offer more clarification and more narrative explanation.
The doctor's office of the future will very likely include computer assistance, on both the
doctor's side and the patient's side, as well as data inputs that could come from far beyond
the office walls.
When that happens, it will become clear that the so-called black box of artificial intelligence
is more of a feature than a bug, because it's more possible to capture and explain what's going
on there than it is in the human mind.
None of this dismisses or ignores the need for oversight of AI.
It's just that instead of worrying about the black box, we should focus on the opportunity
and therefore better address a future where AI not only augments human intelligence and
intuition, but perhaps even sheds light on, and redefines, what it means to be human in the first
place.