a16z Podcast - Expert AI as a Healthcare Superpower
Episode Date: January 10, 2023

In this episode, Marc Andreessen and Vijay Pande discuss expert AI and its role in healthcare, bio, and more.

Watch on YouTube: https://youtu.be/c7ScUDYSRYo
Subscribe to Bio Eats World: https://podcas...ts.apple.com/us/podcast/bio-eats-world/id1529318900

Stay Updated:
Find us on Twitter: https://twitter.com/a16z
Find us on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. For more details please see a16z.com/disclosures.
Transcript
2020 was a year where many of us got to experience firsthand how AI is transforming the creative spheres, from writing to image generation.
But those are far from the only arenas where AI is advancing.
In this episode, a16z Bio + Health founding general partner Vijay Pande sits down with a16z co-founder Marc Andreessen, who famously wrote his essay, Software Is Eating the World, over a decade ago.
In this conversation, Marc and Vijay discuss what AI can and cannot do today, the tension of where it's advancing, and what might stop it. They also explore the rapidly advancing technology with a frame of what it can augment instead of replace, touching on the potential future of doctors, teachers, therapists, and more. I hope you enjoy this fascinating conversation.
Hello, and welcome to Bio Eats World, a podcast at the intersection of bio, health care, and tech.
I'm Olivia Webb, the editorial lead for bio and health at a16z.
I'm very excited to share this episode because it features bio and health founding partner Vijay Pande in conversation with a16z co-founder Marc Andreessen on the topic of expert AI.
You can also check out a video version of this podcast on A16Z's YouTube channel, and we'll
link it in the show notes.
In this episode, Marc and Vijay have a lively discussion about the future of expert AI with regard to health care, but they also get into self-driving cars, screenplays, music, and the nature of consciousness itself.
It's one of our longest episodes to date, but we could have gone longer.
As Vijay and Marc discuss, AI has the potential to change many industries.
So let's get started.
Hey, I'm Vijay Pande. I'm the founding general partner of the a16z Bio + Health fund.
And I'm Marc Andreessen, a co-founder of a16z.
Mark, thank you so much for joining us.
Yeah, it's great to be here.
Yeah.
So, you know, you famously wrote about software eating the world.
And that was basically, what, 10 plus years ago?
Yeah.
And actually, that very much seems to have come to fruition.
If you look at all these other industries that software really wasn't such a part of,
software has actually become a dominant part.
But actually, this year's been kind of an amazing year for another type of software: AI.
And I'm curious to sort of talk about the arc of what we think is going to happen in the future,
based on what we've seen in the past
and really how this new technology
is going to change everything,
much like we've seen software change the last 10 years.
I'm curious what you think for just like this year.
It's been kind of an amazing year.
It always seems like not much happens in any given year,
but 2022 seems to have been an amazing year for AI.
Well, so Vladimir Lenin once said,
there are decades in which nothing happens,
and then there are weeks in which decades happen.
Yes.
And let's hope that doesn't happen politically anymore,
but it does happen in science and technology.
There are sort of moments where things
kind of hit critical mass,
and, you know, this sort of AI, machine learning revolution seems
like that's what's happening right now.
You know, it's been interesting to watch. It feels to me, at least, like there was a breakthrough moment in 2012, right, that had to do with images. And then there was a lot of work subsequently that led to things like the creation of self-driving cars based on that. And then it feels like there was some natural language breakthrough maybe three years ago.
Yeah.
And now that's really kind of developed into this, you know, whole thing that we see happening around GPT and text generation.
Yeah.
And then, you know, even other applications, transcription, you know, is getting much better.
All of a sudden, speech synthesis is getting much better.
And then now you've got this artistic revolution happening with image, you know, image creation.
And now video creation is right next, you know, coming up now really fast.
Yes.
And so it seems like one of those catalytic moments.
And then, you know, it's like every week now it seems like there are fundamental breakthroughs, there's research papers, there's product releases coming out.
So it seems like a cascading thing.
The way I think about it as a software person, you know, sort of a lifelong programmer, is that, basically, in the fullness of time it will appear, I think, that there were kind of two different ways to write software. There was the old paradigm of writing software, which is sort of the classic von Neumann machine, you know, deterministic way. And the whole problem with writing software in the old model is that computers are hyper-literal.
Yes.
And so they do exactly what I tell them.
Unfortunately.
Right.
Every time they do something wrong, it's because you have instructed them improperly. It's a very humbling experience to learn as a young programmer that everything is your fault, and the machine will just sit and wait for you to fix the problem. Like, it's not going to do that on its own.
But then there's this other way to write software, and this has to do with, you know, having these AI systems, and then having training data, training the systems, tweaking the systems. And the way I describe the capability to kind of normies is that it sort of unlocks the ability for computers to more and more interact with the real world, and with the messiness of the real world, right, and the probabilistic nature of the real world.
Yeah, well, it seems almost less like writing software, almost like training something.
It's like, when I think about the machine learning and image recognition you talked about, it felt like it was almost like training a dog, right? Like reinforcement learning: we'll give treats as it gets better. But there's something different now. It feels like, I don't know, we've gone from training a dog to recognize a bird versus, like, hot dog or not hot dog, to something that feels closer to training a person. I don't know how you feel.
When we talk about learning and training and data, what are we training? Where do you think we are in that arc of getting to, like, eventually HAL 9000 and so on?
Well, so while this has been happening, you know, you have kids, and I have a young child, a seven-year-old now. So as this stuff has all been happening, I've been simultaneously observing and training the seven-year-old.
Yes.
Yeah.
You know, anybody who's had kids will recognize what, you know, kind of what I'm about to say.
But, you know, it is really interesting watching little kids. The way I think about it, or at least with the little kid I have, who's great, is that for the first few years, every single thing he did was like a little applied physics experiment: let's see what happens if I drop this, let's see what happens if I eat this, let's see what happens if I do this to daddy, right, and see what the response is. They just run experiments, and you can see it. You can see it very clearly when they're learning how to walk, because they're running all these experiments about how to stand up and what to hold on to, and they keep falling over. And then at some point, the little neural network actually figures it out. It does learn, and off they go, right.
Yeah.
And so, you know, clearly you can see a similar kind of thing happening, which is a little bit eerie. Having said that, you know, the human brain, as it's developing, clearly has consciousness, achieves higher levels of consciousness, achieves higher levels of sort of self-knowledge, reaches the Descartes, you know, kind of stage where there is self-awareness, and is clearly very creative from an early age. I'm a little less convinced that the software technologies we have now are on some linear path towards, quote, AGI.
Or, quote, consciousness.
Like, it's hard for me to believe that consciousness is just simply emergent from, like, larger-scale neural networks. To me, that seems like a hand wave.
Yeah.
Having said that, I have a lot of our friends who are pretty sure that that's what's going to happen.
So, yeah, actually, I feel that way as well. So I want to get to AGI in a bit. I mean, and we can also debate whether consciousness is an illusion. But where we are now is kind of amazing.
Like, people can take, like, GPT-3, you can give it SAT exams, and it can do okay.
Actually, it can do quite well.
Yeah, the one I saw scored, like, 1,200.
Yeah, something like that.
Yeah, so it's not bad, right? It can do that work. That would get you into, you know, a lot of...
Yeah, I actually gave it, like, random questions: to explain the derivation of the Schwarzschild radius, you know, the black hole radius; to write code for, let's say, eight-by-eight tic-tac-toe. Random things that it should never be able to do just by memorizing. It's generalizing, and it's getting them right.
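The tic-tac-toe generalization being described is, in fact, compact to write down: the win check is the same function whether the board is 3-by-3, 8-by-8, or 101-by-101. A minimal sketch, assuming the win condition is a full row, column, or diagonal (the function name and that rule are illustrative, not from the conversation):

```python
def winner(board):
    """Return 'X', 'O', or None for an N-by-N tic-tac-toe board.

    `board` is a list of N lists of N cells, each 'X', 'O', or ' '.
    A player wins by filling an entire row, column, or main diagonal.
    """
    n = len(board)
    lines = []
    lines.extend(board)                                                # rows
    lines.extend([[board[r][c] for r in range(n)] for c in range(n)])  # columns
    lines.append([board[i][i] for i in range(n)])                      # main diagonal
    lines.append([board[i][n - 1 - i] for i in range(n)])              # anti-diagonal
    for line in lines:
        if line[0] != ' ' and all(cell == line[0] for cell in line):
            return line[0]
    return None

# The same code handles 3x3, 8x8, or 101x101 boards without modification.
b = [['X', 'O', ' '],
     ['O', 'X', ' '],
     [' ', 'O', 'X']]
print(winner(b))  # prints X (the main diagonal)
```

Nothing about the function is specific to N = 3, which is one way to see why a model that had absorbed a description like this could answer for 56-by-56 or 101-by-101 without having memorized those cases individually.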
But then it also seems to have some sort of weird hiccups.
And actually, one thing it really does not seem to get is humor, you know.
So I'm kind of curious where you think it's going to go, because before we get to AGI, there are things that an average human can do pretty well that GPT-3 can't. But then there's also what experts can do. And what I'm very curious about is that we may actually get to some of the expert stuff first, before it can do even something like humor. The irony is that something like humor, which we take for granted, might actually be really hard, and other areas might be easier.
Well, the ultimate example of the things it can't do: like, it can't, like, pack your suitcase.
Yeah, yeah.
Like, there's no robot that will pack your suitcase.
Yeah, yeah.
And if you try to get it to, like, do your laundry, it will shred your clothes, you know.
Yeah.
So it can drive your car, but it can't pack your suitcase. It can't do your laundry.
Yeah.
So there are these interesting kinds of twists.
So I would describe it a little bit as follows, which is, I think that this generation of AI that we have, as impressive as it is, is a little bit of a sleight of hand.
Yes.
Which maybe we'll talk about. But I also think, actually, to your point, human consciousness, or human intelligence, is also a little bit of a sleight of hand.
Yeah, yeah.
They'd be slightly different sleights of hand. So the sleight of hand that you see when you're using GPT, or, you know, one of these image generation things, is that it's not literally creating new information. Like, it has no opinion.
Yes.
It has no, like, point of view. It's not sitting there, like, thinking on its own, coming up with some new thing. What it's doing is, basically, ideally, training on the sum total of all existing human knowledge.
Yeah. So for text generation, it's training on all existing human text, right? And so it plays back at you, basically, projections from the sort of, you know, assembled composite.
Yes, the text. And so when you ask it to do the 8 by 8, like, yeah, probably somebody on the internet at some point wrote some paper.
I think it's a little more than that, though, because I asked for 56 by 56, or 101 by 101. It has some sense of generalization.
Yeah. But I'll bet, and we could just check this, I'll bet if we Google long enough, we could find a paper that described a general-purpose algorithm for, you know, N-by-N tic-tac-toe.
Oh, that may be, right? Yeah, somebody probably did. I've done the same thing. I've had it write Seinfeld scripts. And sometimes they're really funny. And sometimes it's just, like, yeah, it makes no sense. I went for Curb too, but it's the same idea. It's not there yet.
But, like, look, there are a lot of jokes on the internet, right? And so you could kind of go back and say, okay, it probably, like, plucked these jokes. Or, by the way, maybe there was a paper somewhere where they articulated a general theory of humor, right? Because humor has been studied as a thing, and maybe there's, like, a general theory, like humor is subverted expectations or whatever, and so it generalizes.
Well, it could be, too, that, like, all sitcoms might be the same sitcom.
Well, at some level, right?
Well, so here's an example. I've also had it do, like, dramatic screenplays, and it's actually good at those. Like, you can say, write a three-act screenplay, and it will do it. And it will have the proper, like, setup and resolution and so forth.
But, yeah, there are systems for, like, screenwriting in Hollywood where they have, like, three acts.
Yes, yes, it's all Rocky or it's all Star Wars.
Yeah.
Yeah, well, so actually it's really interesting that maybe what we think is magical when humans do it isn't actually all that magical either.
So that's what I was going to say. So then the human sleight of hand is, like, you know, is there actually free will? Is there actually creativity happening upstairs?
Yes. And if there is, is it in everybody?
Is there really a thousand types of movies, or is there, like, one latent space of the movie? And basically, what's happening, I think the theory, you know, I'm kind of making this up, but I think the theory would be The Hero with a Thousand Faces, or the idea of the Jungian hero's journey.
Yes, that's just sort of the basis for all of these plots, Star Wars and Harry Potter and everything else.
Yes. You know, somebody with your background might say that it's basically sort of an algorithm for surfing human neurochemistry.
Yes, yes, right? And it's generating different, like, neurochemical responses to, you know, fear and anxiety and love and all these other things.
I've always been fascinated. There's this thing in psychology called core affect theory.
Oh, that one I don't know.
Oh, yeah, this is great. So, okay, so we think humans have all these, love and despair, like, really different emotions.
It's all great.
Core affect theory says, no, we don't.
Oh, just yes or no, good or bad?
Good or bad.
Yeah, and then high or low.
Yeah.
And so we either have like a positive, like,
we either have like a positive neural response
or a negative neural response,
and then it's either high intensity or low intensity,
and then you just basically,
and so it's like, wistfulness is like, you know,
just slightly negative,
but like, you know, despair is like extremely negative.
Yeah.
So it's all two by two.
It's a two by two.
And yes, it's, and we're more basic organisms than we think.
And then we just, we retro, you know,
And we're very, one of the things that's very known is humans are very good at creating a story to justify whatever happens, right?
And so we create these stories, these scripts are of this idea of an emotion, but it's basically just justifying the neural response.
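The two-by-two being described, valence (good/bad) crossed with intensity (high/low), can be written out directly. A toy sketch; the label placements here are illustrative assumptions, apart from wistfulness and despair, which follow the examples in the conversation:

```python
# Core affect as a 2x2 grid: (valence, intensity) -> an example emotion label.
# Labels are illustrative placements on the grid, not clinical definitions.
core_affect = {
    ('positive', 'high'): 'excitement',
    ('positive', 'low'):  'contentment',
    ('negative', 'high'): 'despair',      # extremely negative, high intensity
    ('negative', 'low'):  'wistfulness',  # slightly negative, low intensity
}

def label(valence, intensity):
    """Map a raw (good/bad, high/low) neural response to its story-label."""
    return core_affect[(valence, intensity)]

print(label('negative', 'low'))
print(label('negative', 'high'))
```

The point of the theory, on this telling, is that the four-entry table is the whole underlying state space; the rich vocabulary of emotions is the narrative layered on top of it.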
And so the cynical view would be that, like, having an ice cream cone on a hot day and falling in love are basically the same thing.
Well, neurochemically, maybe they are.
Well, this comes into play in like, you know, drug abuse, right?
Yeah.
Which is, you know, things that generated an opioid response.
Yeah.
Like, some people get an opioid response from alcohol.
Yeah.
And they're more prone to alcoholism; other people don't get that response.
So it's literally a neurochemical thing.
So, yeah, look, maybe we're bundles of neurochemistry to a much deeper extent
or much simpler extent than we want to believe.
Having said that, you know, again, that takes me to the other thing on AI, which is, you know, one of the ways that people are testing AI is with the so-called Turing test.
Yes, and the simplified form of the Turing test is you're chatting with somebody that may be a human or may be a bot, and you chat for 20 minutes, and can you guess, better than random, whether it's a human or a bot.
You know, my take on that
is the Turing, you know, Alan Turing was a genius,
but the Turing test is malformed.
Yes, humans are too easy to trick.
Yeah, yeah, yeah.
But that's too low of a bar
because tricking a person is not that hard,
and does not prove anything other than
that you've tricked the person.
Yes.
Like, I think, and this is relevant
because I think, you know,
things like GPT are about to pass the Turing test.
Yes, yes, if it hasn't already, probably, right.
It's probably right.
Yeah, yeah.
And so I think it's going to turn out
that that was too lightweight of a test.
Yes.
Well, here's my favorite example for why I know GPT is not self-aware. If you ask it if it's self-aware, and you ask it to elaborate on how it became self-aware, it will happily tell you. Yes. And by the way, if you ask it about turning it off, it's going to tell you, please don't. Yeah, yeah. And if you ask it to explain to you why it's not self-aware, it will very happily do that too. It does not have a differential opinion about those two outcomes. Whereas every conscious, even every non-conscious living organism, has a very different response to those two outcomes.
Yes. It's been amazing, because in some ways I feel like it's been as interesting to study the AI as it is to use the AI as a reflection to study ourselves. You know, I think we are sort of seeing that the magician has certain tricks, whether it's an AI magician or a human magician, as we go through this education process. I'm curious, though: it feels like, you know, GPT can get into high school, get into college, let's say. But what would it take for it to get its PhD? I think that's where the sort of dramatic stuff is to come.
Yeah, yeah. Well, so again, exactly to your point, I would ask the question the other way, which is, well, okay, what does it take to get a PhD? Like, how are the universities doing?
Yes, yes.
How are they doing on quality control of their own?
Yes, yes.
So how many people are getting PhDs today that we would say are, like, actually valid, you know, actual accomplishments?
Yeah, yeah.
By the way, people who were, you know, professors a hundred years ago, how would they score the PhDs that are being granted today? Would they say the bar is lower?
I think they would say the bar is dramatically lower.
Yes. Right? And so, you know, the answer might be, we have lowered the bar. And the same thing for college admissions. Like, you know, what does it take to get into college? What does it take to finish college?
Yeah. And, you know, on the education side, this is coming up a lot right now, because it's like, okay, GPT can auto-generate, like, you know, essays, right? Student essays. And so it's like, okay, the grading method of assign an essay and grade the result is probably not going to work anymore. But was that ever actually education, just because we thought that was education? Was that actually teaching anybody anything?
Like, actually, I'm sure someone's going to take that and apply it to college applications.
Oh, yeah, yeah, yeah, absolutely. Yeah, college applications are basically the same thing.
Yeah, at least to the extent that you believe college applications ever evaluated anybody in a legitimate way in the first place.
Yes, like, that's now in question.
I'd be more skeptical that they were ever useful in the first place.
Yeah, right.
Yeah.
Well, so on the PhD, let's talk about at least that old-school mentality of a PhD as some advanced learning where you become an expert in something.
Right. You know, I think that's the thing where...
What do you mean by expert?
Let's say the ability to be in the top 0.1% of humanity at, let's say, designing a drug, or building something, doing something.
Yeah, yeah, yeah, yeah.
Is that what they teach?
Yeah, you know, that's...
That's the goal.
I wasn't aware of that.
I wasn't aware that that's what they teach.
I think it is. Sometimes.
Or at least that's what you have to do eventually when you get out.
Right.
Yeah, you know, and then you have to apply it. And I think, one of the things about being an expert, in my mind, is that the difference between bad, good, and great can be really close. Like, I could probably write a piece of music, but no one would think it's all that great, you know. And then you could have someone who's a good musician, but not a great one. And then you have, like, a genius, like a Mozart or Led Zeppelin or whatever, in a particular genre, you know. And I think where we aren't there yet is when the difference between good and great is so close. Or, like, as I remember from Spinal Tap, there's a fine line between brilliant and stupid.
You know, I think that is where it hasn't really hit yet. If you look at the jokes, the jokes are just kind of okay. The screenplays it makes are not, like, brilliant screenplays. I think it could get into college, but could it win best screenplay, you know? And so that's the part where we're not there yet. But I think we're getting there.
So name a great music composer generated by a music PhD program.
Yeah, yeah, in the last however many years.
Yeah, name one.
Yeah, I'm thinking more of the scientific side of things. But, yeah, the PhD programs in that space are probably not intended to generate music.
Okay.
Yeah, yeah.
Or name one great screenplay written by a PhD in drama.
Yeah, yeah.
So, so that's an interesting point. But I think what I'm getting at is still, like, the ability to do something. And the education part, we can talk about how they learn.
Okay.
Because I think in the case of the screenplay, or the music you're talking about, they still have to learn something, right? Or do you think they just innately sort of knew how to write a screenplay?
I don't know. Yeah. I assume there's a process where they write a screenplay, it's kind of mediocre.
Oh, yeah, okay, yeah, yeah, yeah.
And then they get critiqued, or they critique themselves, and then it improves and improves and improves.
Well, the screenplay, okay, so divorce this from the education. The test for screenplays is the market itself.
Yeah, yeah, so screenplays are subject to market discipline.
Yeah, yeah, right?
So question number one for a screenplay is, does it sell? Does the studio buy it?
Yeah, and then test number two is, when the movie comes out or the TV show comes out, if people watch it, do they like it?
Yes, do they finish it?
Yeah, yeah.
Yeah, one of the fun things is that Netflix will now tell people who make film and TV, for the first time, whether anybody's actually finishing.
Yeah, yeah, yeah, all those stats are kind of mind-boggling.
Right, you know.
Yeah, a lot of movies... you know, people go to the theater and they feel, you know, invested, and they don't want to leave in the middle. But at home, yeah, it's very easy to punch out. And it turns out a lot of screenplays sag. You know, this is something that professional screenwriters will tell you: it can't ever sag.
Yeah.
Yeah, just as one example, because people will stop watching.
So, yeah, screenwriting is subject to a market test. Popular music, also a market test.
But classical music, which I'm a huge fan of, is not subject to market tests.
Right.
It's thoroughly subsidized.
Yes, that's interesting.
Right. It's not in the free market anymore.
Yeah.
Or maybe the equivalent is movie music, you know.
Yeah, and so movie music is subject to market tests.
Right.
And it's probably the modern classical.
It is the modern classical, yeah, for that reason. So, yeah, like, the market test is real. But let me grant your point. Let's build on it. Let's use the term taste.
Yeah, yeah. Or just the ability to do something hard.
Well, okay, so the ability to do something hard, and let's say create something. Create something complicated.
And then also the ability to judge.
Yeah, right. And critically, to start with, judging your own work. Yeah, and probably therefore the ability to improve.
So, yeah, I think there is something about taste.
Yeah.
Like, I tend to think this stuff all has, like, aesthetic properties.
Yes.
A properly constructed math formula or software program has aesthetic properties.
Oh, 100%.
Right. Molecule design has aesthetic properties. Physics, you know, also. All of it.
Yeah.
Yeah.
So there's something about taste that's, like, some combination of quantitative and qualitative.
Yeah. Like, what separates a great startup from a mediocre one is taste.
Right, yeah, exactly. And, like, there are certain methods and certain signals. It's not necessarily reducible to an algorithm. It's more of, like, a composite. You know, it's sort of foundational knowledge combined with some scope of experience, combined with some kind of ineffable characteristic of judgment.
Well, we associate an aesthetic with it, but I wonder whether that's also just our emotional connection to it. You know, because I think we have this sense of right or wrong, or more right or more wrong, like a gradient. Like, yeah, that's the right direction. But a lot of it is also whether something is elegant versus just a hack. You can tell whether these great things are simple and powerful, rather than, like, some complicated machine to do something that you know is eventually going to fall apart. And that's true in physics, or in go-to-market, or in music. It has both that sort of complexity and simplicity at the same time.
But so I'm curious, like, when does it get there? And I guess at that point... which I think is a when, not an if.
Okay. Yeah, yeah, yeah.
Or why would you say, why wouldn't it get there? Because, like, do we even understand how it works in people?
Well, maybe we don't have to.
So this is where... this is like the AGI thing; this is what I call the hand wave.
Yeah.
It's sort of the same thing. The embedded assumption that's baked in is that it will be an emergent process that will sort of unlock as a consequence of greater and greater levels of scale.
Yeah.
Maybe.
Yeah.
Yeah.
One way of looking at it is, yes, that is what's going to happen.
That's what's going to happen. Yeah.
The other way of looking at it is, it's just a massive hand wave.
It's a hand wave, and I think it's a cope.
And the cope would be...
Okay, so here, let me ask you a question.
Yeah, yeah.
What is the sub-specialty of human biology and medicine that most understands the nature of human consciousness today?
Oh, I don't think there's one, right?
There is one.
Anesthesiology.
Okay.
Which is poorly understood.
But they know how to turn it off.
Yeah.
And they know how to turn it back on.
Yes.
They've got the on-off.
That's all we got.
That's all we have.
Like, we collectively have been studying this question of human consciousness for a very long time. We have very advanced technologies today, functional MRI, like, all this stuff.
Well, that speaks to... there's a field I would love to see created, which is molecular psychology.
Yeah.
Okay.
Yeah, where you can start to probe this a little more than on and off.
Okay.
And molecular...
So, is that literal or metaphorical, when you say that?
Quite literal. Like, it's a play on molecular biology, which was this big thing in the 80s, bringing the chemistry of small molecules to biology; or chemical biology as well. And if we could use, like, small molecules to maybe perturb more than just on-off, but, like, perturb things, we can start to understand the brain a little bit. Because reading is one thing, but, like, poking and sort of perturbing and then seeing the result is usually how we do any sort of experiment.
Yeah. Would you view that as chemical experimentation, or would that be electrical?
It could be either one. It could be any of that, but probably some combination of those things.
And is it, in theory, on a track to enable some of this?
Yeah. So, look, okay, so here would be the counterargument: we just don't know how human consciousness works. I didn't go into the field. That was actually going to be what I was going to study in school 30 years ago, but I looked at the field at the time, and I was like, they don't have a clue. I'm not going to spend my entire career on that.
So you wanted to go into consciousness?
At the time, cognitive science was, yeah, the hot thing. You know, building off of it.
Yeah, yeah, yeah.
But that was, like, the expert systems era.
Expert systems.
Well, the early neural networks. And then a lot of it got into brain chemistry, and, like, we're going to figure this stuff out, and we're going to learn how to build, you know... And it's just, like, they didn't know then. As far as I know, they don't know now.
And so the counterargument would be, this is all just, like, massive cope for the fact that we actually don't understand it. We don't understand how to do it. And so all we can do is hand wave and kind of just say, well, it's just going to be emergent. And it's like, no, it's not. And we're going to be sitting here 30 years from now, and we're still not going to have any more knowledge, you know, barring other scientific breakthroughs of the kind that you're talking about.
Yeah, what's interesting is, if you think about that time, we had neural nets, but they were all single-layer, basically. And they couldn't even do XOR. You know, you couldn't even do some simple things, because you needed deeper networks to get at them. And you couldn't have deep networks then, because we didn't have the computational power. And so the space was pretty dormant for a while, you know, AI, until we started having, basically, the computational power from GPUs, a lot of the things that let us go deep. And then you could feed the data through.
So it is possible that we sort of have a point
where we sort of saturate the compute that we have now.
We get to as much as we can get to.
And that may get close to AGI, you know, maybe not.
And then it takes another like 30 years
to get to the next sort of breakthroughs to get there.
But, okay, so I would pull back from there.
So, AGI is the fun thing.
There is a sort of step back, which is to pick a domain.
And you know the domains I think a lot about, like life sciences, like diagnostics and drugs, or health care: pick a diagnosis, can you suggest a drug?
In those areas, now we're talking about much more limited domain.
So we're not talking about, we don't need to go all the way to consciousness for that necessarily.
You can have something that's more limited.
In that limited domain, right now, it seems like it's generally not quite far enough yet. Like, yeah, I don't see the examples quite yet.
Yeah.
Yeah.
Well, we'll see.
I mean, so what's the counter?
And I know you, especially think about health care a lot.
Yeah.
Yeah.
Well, so the first thing is, well, let's talk about medical diagnosis, which is kind of the low-hanging-fruit question, because everybody experiences it.
So to start up front, you have to ask a question up front, which is like, is the goal, what's the threshold?
Is the threshold perfect or is the threshold better than human?
Yeah, that's a great point.
Right?
Yeah.
And by the way, this is a topic that comes up all the time with self-driving cars, right?
Which is, is it perfect?
It will never make a mistake, or is it just going to be better than human?
And the way the self-driving cars score this is accidents per thousand miles driven.
And self-driving cars are already lower than human drivers.
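The scoring metric mentioned here is simple to state. A sketch with made-up numbers, purely for illustration, not real fleet statistics:

```python
def accident_rate(accidents: int, miles_driven: float) -> float:
    """Accidents per thousand miles driven, the scoring metric described above."""
    return accidents / (miles_driven / 1_000)

# Hypothetical figures for illustration only:
human = accident_rate(accidents=4, miles_driven=1_000_000)
robot = accident_rate(accidents=1, miles_driven=1_000_000)
print(human, robot)   # 0.004 0.001
print(robot < human)  # True: under this metric, lower is better
```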
And humans may actually be getting worse.
With texting.
Oh, with texting.
Yeah.
Yeah, by the way, you know, and increased forms of certain kinds of drug abuse.
Right.
And then, of course, the machines have the characteristic.
They get better universally, right?
So a car has one mishap in one location.
Every other car gets trained on how to deal with that in the future where, you know,
the learning happens across the entire system.
And so, like, I think you can make a serious argument that, like, basically, self-driving cars
are already better than people on a relative basis.
And therefore, like, morally, you could even go so far as to say human drivers should
be outlawed today.
Right.
Like, if you have the alternative, if you can have the self-driving car, then, yeah.
Like, the utilitarian argument would be you should obviously ban human drivers today
because the machine-driven stuff is already better.
Probably, by the way, the same is true for airplanes.
Right.
Now, we're not actually going to do that,
and there are other considerations involved and so forth,
but, like, you know, logically speaking,
you should at least think about that as a possibility.
And I think you should think about that as a possibility,
I think, for a medical diagnosis, which is, you know,
and here the test is very simple, which is,
well, here, let me express two tests.
Test number one is the absolute test,
which is, if I feed in a set of symptoms,
it generates the correct diagnosis,
100% of the time deterministically guaranteed.
That's a high bar.
The other is, I do that with the algorithm, and then I go to 100
human doctors, and I get back 100 different responses, and then let's compare.
Right, and then let's track it over time, and see how you compare.
Yeah. Right. And, like, how good is the median doctor at doing the diagnosis?
And, like, I don't know what your experience has been, but the median doctor
may be smart, but also may be overloaded, may be exhausted, may have, like, 12 other
patients. Fifteen-minute appointments. Yeah, I know, a lot of the experience is 15 minutes.
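The "algorithm versus 100 doctors" test can be sketched as a toy simulation. All numbers here are hypothetical placeholders, not measured accuracies:

```python
import random
import statistics

random.seed(0)

# Hypothetical: 100 doctors whose diagnostic accuracy varies with
# overload, fatigue, time pressure, and so on.
doctor_accuracies = [random.uniform(0.55, 0.85) for _ in range(100)]

model_accuracy = 0.80  # hypothetical model accuracy on the same cases

# The "relative" threshold: not perfection, just better than the median doctor.
median_doctor = statistics.median(doctor_accuracies)
print(round(median_doctor, 3), model_accuracy > median_doctor)
```

The point of the sketch is the comparison structure, not the numbers: the bar is the distribution of human performance, not 100% accuracy.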
Here's the thing: experts in these areas tend to either, like, be doctors themselves, or they know a lot of doctors, or they, you know, work in the industry.
They have money.
They have a concierge doctor who spends a lot of time with them and does house calls.
The median health care experience is 15 minutes in somebody's, you know, harried schedule, with a doctor that may or may not ever see you again and has very limited data.
And there's one little algorithm, which is that they come up with their diagnosis, they come up with the treatment.
You go with that.
That doesn't work.
You repeat.
And while still sick and not dead, you just repeat.
And then I think many of us have been through that.
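The loop described here, diagnose, prescribe, repeat while still sick, is almost pseudocode already. A sketch with a stubbed-out visit; everything here is hypothetical:

```python
def visit(patient: dict) -> None:
    """Hypothetical stand-in for one appointment: diagnose, prescribe,
    and (in this toy model) reduce symptoms by one notch."""
    patient["symptoms"] = max(0, patient["symptoms"] - 1)

patient = {"symptoms": 3, "alive": True}
visits = 0
# "While still sick and not dead, you just repeat."
while patient["symptoms"] > 0 and patient["alive"]:
    visit(patient)
    visits += 1
print(visits)  # 3 rounds of trial and error before the loop exits
```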
Well, and then there's all the other sort of thing.
So then there's like drug interaction.
You know, is any one doctor tracking all the interactions of your drugs?
Then there's this other issue, which is, okay, they give the prescription.
Is there actually compliance for taking the prescription?
Does the doctor actually know whether you're taking the prescription?
Oh, compliance is one of the biggest disasters.
Right.
But that means, like, the ability for a median doctor to even evaluate the success of a treatment:
they actually may not be able to do it, because they may not have the data on compliance.
And so, like, you look at the existing system by which this all happens, and it's very similar to looking at the existing system by which people actually drive cars, which is like, oh my God, this is not good. Like, this is really not good. And we kind of fool ourselves into believing that it's good, because it kind of feels good and we don't really want to look behind the curtain. But you look behind the curtain, and it's pretty horrifying. Yeah. And so, from that standpoint, if you follow that logic, then it says, okay, if the machine could do a better job, you know, if the machine was twice as good at just, like, listening to symptoms, giving you the response, giving the prescription,
doing the follow-up.
Yeah.
I mean, how far...
I don't know if you've done this,
but you plug in a list of symptoms.
I've been playing with it, too.
Yeah, yeah, yeah.
I mean...
Because it does have access.
I mean, it has access to the collective medical knowledge.
Yeah, and that's just what it does now.
You know, it could be fed all the EMRs,
all the medical records and so on,
and then it could sort of learn from that as well.
Well, then the other question, I'm sure you thought about,
but like, okay, so the medical field moves.
Yeah.
In the existing system, the median doctor
has to, like, read all the papers.
Yeah, yeah, which never happens.
No way, no one has time for that. Right. And then there's continuing education, but
still, it's not the same. Well, here's an example. Do you like your GP? Would you
rather have a young GP or an old GP? Oh, probably an old one. Yeah. Presumably the
old GP has more experience, you know, so they have more pattern matching over time. Yeah.
But the young GP is probably more up on the current science. Yeah, okay.
And then it's like, okay, do you really want to have to make that trade-off? Or can the
machine actually have both? Exactly. Well, that's the thing. Like, you talked about how, um,
Can it beat, let's say, how does it do compared to a hundred doctors?
When the hundred doctors collaborate, presumably that's the ideal situation, right?
I mean, well, that sounds horrifying.
No, no, no, no.
I mean, that's the wisdom of the crowd.
No, that's perfect.
Well, I guess it could go either way.
Usually.
That's the Soviet method.
Usually, when you actually pool it, you do better. Or at least, maybe it depends on how you collaborate.
Have you really found human beings to make better decisions in groups than they do as individuals?
That's a good question.
Yeah.
In your entire life?
Yeah.
Oh, yeah, yeah, the full, the serious answer is: the wisdom of crowds versus the madness of crowds.
Yeah, yeah, yeah, yeah, yeah.
When are you harnessing the wisdom, and when are you descending into madness, or even just, you know, mediocrity?
Yeah, and on very specific tasks,
groups can do well, but otherwise it's like one big group project from high school.
Yeah, which is like, well, so generally what happens with people in groups is the social conformance kicks in.
Yeah, and so people want, it's a well-known, you know, kind of thing.
There's a lot of, like, group polarization, which is, you
take a group of people
who might be inclined slightly to one side of a spectrum,
you put them together and let them talk for three hours,
and they all come out much more radical.
Yes, yes, yes.
Right.
Does it self-reinforce?
Yes.
Yes, yes.
Right.
Well, so maybe that's a really interesting thing,
because you can imagine training AIs to have these different aspects.
And its collaboration with other versions of it would be very different.
Yeah.
It could be very different.
I mean, yeah, maybe it should do this, effectively, as a Monte Carlo.
Yeah, right. Right.
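The "effectively a Monte Carlo" idea, one model sampled many times and the answers pooled, can be sketched with a stub in place of the model. The candidate diagnoses and weights here are invented for illustration:

```python
import random
from collections import Counter

def sample_diagnosis(symptoms: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one stochastic model sample,
    e.g. the same model queried at a nonzero temperature."""
    return rng.choices(["flu", "cold", "allergy"], weights=[6, 3, 1])[0]

def monte_carlo_diagnosis(symptoms: str, n: int = 25, seed: int = 0) -> str:
    """Majority vote over n independent samples: one model playing
    many collaborating versions of itself."""
    rng = random.Random(seed)
    votes = Counter(sample_diagnosis(symptoms, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

print(monte_carlo_diagnosis("fever, cough"))
```

Unlike a room of human doctors, the samples are independent by construction, so the social-conformance and polarization effects discussed above don't apply.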
Well, okay.
So either we'll never get there, or we're already there now.
But I think in 10 years, it does seem especially, maybe we hit like another winter,
but it seems like things are accelerating so much.
This seems pretty real.
It seems pretty real.
What do you think society needs to do to change?
Because there's like all these things we were talking about.
And this seems bigger than like just the revolution of software over the last 20 years or
internet from last 20 years.
Because we're talking about how it changes government, how it changes regulatory, how it changes
education.
I mean, I don't even know where you want to start with that.
But I think that's something where it may take us 10 years just culturally to be able to get ready for this thing that may arrive in 10 years or may already be here.
Yeah, I don't know where you want to start.
Yeah, so where I would start is we've already fallen into, I think, we have deliberately kind of fallen into a trap already, which is we've only been using a single kind of example.
And we've used it both in our discussions on medicine and also in education, which is basically a something is done today.
People are doing something today and then maybe the machine can do it instead.
That's an important thing and that's it's worth thinking about.
But the way the technological impact actually plays out in human society is not just that.
The way it plays out is, let's basically revisit more fundamental assumptions.
Yeah.
Or what's not being done today.
Well, what's not being done today
That all of a sudden becomes possible.
And this comes, this, this always comes up in any sort of discussion about employment.
Yeah.
People doing jobs versus machine doing jobs.
People get worried about technological displacement of jobs.
But technological displacement of jobs, like technology never actually creates unemployment.
Technology only ever creates jobs, net.
Yeah.
And the reason for that is technology makes possible things that were not possible.
before.
Yes, yes, so at least, which is what...
Yes, growth.
And so specifically, for example, the role of the doctor, you know, it's like, okay,
the doctor of the future is probably not going to be doing the same thing.
Right.
We have a term in IT: break-fix.
That's kind of what doctors, you know, the core motion of a lot of doctors.
As you said, diagnose, prescribe, diagnose, prescribe.
And doctors debugging.
Yeah, it's debugging.
Yeah, exactly.
Doctors of the future, like the technologically empowered doctor ten years from now, are
highly unlikely to be spending their day doing that.
They are probably going to be spending their day doing things that are actually much more important than that.
Yes.
Right.
And so, for example, maybe they have more time, right, with patients because the machine is a time-saving device.
Maybe they have more data to draw on, you know, to be able to make their decisions.
You know, they've got the machine as a partner in making decisions.
Maybe they're able to spend more time in their conversation with the patient talking about psychological issues
as compared to just physical issues.
And, you know, a lot of medical conditions involve, you know, behavioral issues.
Well, as you know, like, a lot of primary medical issues today are a consequence of behaviors, yes.
And maybe doctors should be spending more time on behaviors.
Yes.
And it speaks to compliance as well as other issues.
Awesome.
Yeah, well, I mean, compliance is a behavioral issue.
Like, why don't people do this or that?
Right.
But then also there's all the behavioral health issues, which is probably one of the biggest catastrophes that we have coming out of COVID.
Yeah, exactly.
Right.
Yeah, exactly.
Maybe doctors should be, you know, maybe the doctor in the future will be more of a life coach, of which there will be a pharmacological, you know, sort of a biological or pharmacological component.
Right? But maybe it's more, you know, sort of the dream of holistic medicine. And so, you know, maybe the doctor of the future is actually a much more important and, you know, sort of fundamental figure in your life than he or she is today.
Yeah, that sounds fantastic. So if I'm a doctor, that's where I would want to be heading. Right. And that's probably a bigger and more important market, and the size of that industry will probably expand, you know, kind of correspondingly. I think the same thing is true in education.
Like, you know, I hope the teacher 10 or 20 years from now is not doing the same
things the teacher is doing today.
I hope they're doing much better things, right?
So, for example, one-to-one tutoring. There's basically the classic
education example: in the last, like, 50 years, there's basically only one known
education intervention at scale that actually improves outcomes after, you know, thousands
of experiments, and it's one-to-one tutoring.
Yes.
Which is very ancient, actually.
Which is very ancient, right, which is the original form of education.
Yeah, it's literally how people used to get educated.
And so maybe the education system
we have today is an artifact of the industrial age.
If the industrial-age components of it become automated,
the teacher becomes freed up to actually work more one-to-one with students,
and the result might actually be a significant breakthrough in how education works.
Although, the way you're describing it, you can imagine also, like, the AI doing the one-on-one.
Well, yeah, that would be part of it.
But also, yeah, and maybe the AI is the one-on-one,
and maybe in that case the teacher is supervising the AI.
Right, and maybe the teacher is making sure that the AI is, like, on the right track
and doing the right things and is able to kind of sit at the control panel and watch all of that.
Well, that speaks to something really interesting, because I think we're probably a little
nervous, at least short term, to just unleash this and, like, not pay attention to it. And so you'll
have the doctor using this as a tool but keeping an eye on it. You'll have the teacher maybe scaling
dramatically for all this one-on-one, but keeping an eye on it. Do you think that's actually the way it's
going to go? I mean, this is kind of how all technologies work. Yeah, yeah, yeah. So it's sort of,
another way to think about it is, you could imagine two acronyms for AI. One is artificial intelligence,
which kind of implies replacement.
Yeah.
The one I actually like much better is augmented intelligence.
Yes.
Which is like the old Doug Engelbart idea.
Yes.
And augmented intelligence is, you know,
it's another example,
the term would be Steve Jobs,
a bicycle for your mind.
Right, right.
Or, you know, a bullet train for your life.
He's like, yeah.
Right.
And so the augmentation, right?
And so, if you just look at the history
of new technologies, the way it plays out
is, everyone frets it's going to be a replacement,
and it turns out it's an augmentation.
Yes.
So you take a human being and you give them
the technological tools.
They, therefore, are much more productive.
Yeah.
Like a factory.
versus like an artisan with their tools.
Yeah, exactly.
Or like, you know, the dream of, like, an exoskeleton.
Yeah, the dream, you know, the dream about, you know, any of these things.
Yeah.
I mean, look, artists are much more productive today with digital tools than they were with just, you know, painting canvas.
Yeah, yeah.
And by the way, even artists that still work on painted canvas are much more productive today,
because they can promote their work to a much larger audience online.
Or, like, my favorite thing for art is, like, you know, photography comes online,
and that dramatically changes art because being photorealistic isn't that interesting anymore.
Right.
But so that creates modern art.
Yeah,
which actually is maybe even more expressive
than just taking a picture.
And so now I can make pictures with AI all the time.
So where does that push art?
Maybe to a more interesting place.
And the artists of history,
the artists were not happy about the introduction of photography.
Yes.
And it originally was a threat.
Of course, yeah.
But it transformed the field.
Yeah, it turned out.
The market for art is much larger today than it was
before the introduction of photography.
That's interesting.
I mean, we call it different things.
We call it things like TV shows and so forth.
But like the market for creative expression is much, much larger than it used to be.
By the way, music, same thing.
Right.
I mean, you know, recorded music was originally.
a threat. It used to be, a musician would compose and perform, right? And then, you know, to have
music in your home, you'd have to hire a musician to come into your home. You know, phonographs
were a threat to that. But phonographs made the music industry much, much larger. So people who
were good at making music all of a sudden had a much bigger market. Yeah. So I think
AI is going to play out in a very similar way. Like, there are people who will argue, you know,
AI is different, because it just keeps climbing the ladder, it will replace everything. I actually think
it's going to be, basically, the ultimate superpower. It's the ultimate pairing. We were
talking about crazy screenplays and scripts. A good example: if I'm
a Hollywood screenwriter today, like, GPT is my best friend. And I'm just sitting there all day long,
and I'm just, you know, playing it out. It's like, okay, I reached this plot point.
Dot, dot, dot. Give me a list of like 10 ideas for what to do. It's like, oh, okay, that's an interesting
one. I'll give you an example of how this could work. So Mad Men, it's one of my favorite shows.
Matthew Weiner, you know, ran that show. And he was always praised. It's like, wow,
that show was so unpredictable. Like, you know, you never knew where it was going. And he said,
yeah, well, the technique we had in the writers' room was, any given time we had to figure out what
happened next in the plot, we would brainstorm. We would come up with the five sort of
obvious things.
The five obvious things.
And rule all those out.
So GPT would give you the obvious things,
and you rule those out.
Yeah, exactly.
So it pushes creativity.
All of a sudden,
every individual screenwriter could do that
without having to have a whole writers' room
to brainstorm with. You just plug that in.
It gives it back to you in two seconds.
You're just like, okay, not those things.
I'm going to do something else,
and now I am more creative than I was before.
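The writers'-room move, list the obvious continuations first, then rule them out, is mechanical enough to sketch. Here brainstorm() is a stand-in for the model call, and all names are hypothetical:

```python
def brainstorm(plot_point: str, n: int = 10) -> list[str]:
    """Hypothetical stand-in for 'give me a list of 10 ideas for what to do'."""
    return [f"idea {i} for: {plot_point}" for i in range(1, n + 1)]

def rule_out_obvious(plot_point: str, n_obvious: int = 5) -> list[str]:
    ideas = brainstorm(plot_point)
    obvious = set(ideas[:n_obvious])  # the five obvious things...
    return [i for i in ideas if i not in obvious]  # ...ruled out

remaining = rule_out_obvious("hero reaches the plot point")
print(len(remaining))  # 5 non-obvious directions left to choose from
```

The value is in the filtering structure: the machine enumerates the obvious candidates cheaply, and the human picks from what's left.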
Wait, your comment about music is really interesting,
because now we've got Spotify,
so we've got everything in your pocket.
You can imagine, like, the AI Spotify,
which is like the doctor,
the personal trainer, the educator,
like all those different things in my pocket,
available right now for whatever I need to do.
Yeah, that's right.
Yeah, and with the human escalation path, right?
Yeah, it's like, yeah, the AI therapist or whatever,
but with the thing of, like, okay,
especially if it gets really serious, it escalates immediately.
Yeah, that's right.
Yeah.
Okay, so what's, what's going to hold us back?
What do we need to change?
So I think it's mostly fear.
So this is where maybe I'm a radical on it.
Yeah, you know, this is where people start talking about, like, regulation.
Yeah.
I think it's, like, we have these, we have these fear-driven reactions.
I always think about, there's this deep-seated myth in,
in human societies, the Prometheus myth, right?
Yeah, yeah.
And the Prometheus myth is all about new technology, right?
And the Prometheus myth is like, basically this new technology of fire.
Right.
And, you know, fire is one of these classic technologies where, like, it can be used for good.
Or it can burn you alive.
Or it can be used very badly, right?
And yeah, it can destroy your whole world.
And so, you know, Prometheus famously goes and retrieves, you know, fire from the gods.
And his punishment for it is to be chained to a rock.
And have his liver pecked out every day for the rest of eternity.
So, embedded in there is, like,
the anxiety about the new technology, and then the arrival of the technology. And the fear, right, is that it's that bad, and the person who brings it should be punished.
And so I always find that myth kind of plays out over and over again in all these discussions about regulation of this stuff. And, you know, especially, it's the gods who punish, right, the existing gods.
Yes, yes.
Well, on behalf of the existing order.
But, yeah.
Yeah, so, yeah, I think generally it's this, it's just you get these fears.
If you look at the history of, you know, we talk about some of this, if you look at the history of new technologies.
You generally have these fears every step along the way.
Every technology has been greeted with some prediction
that it's going to upend the social order and cause the...
Well, it does upend to some degree.
It will do that, but generally speaking, in a positive way, on balance.
Yeah.
I mean, technology is why we live much better lives today.
Certainly, people now would not want what people had 50 years ago.
Nobody would make that trade, yes.
Right.
And you could go back in time indefinitely.
And nobody would ever make the trade, yes.
Nobody would ever make the trade to go back in time.
It never happens.
Yeah.
And right, that's literally because
you would not want to lose the technologies that you have today.
So I think that's true.
And so I actually think, like, fear may be, to riff off of FDR, fear may be the actual biggest threat.
Yeah.
Fear leads to the kind of, you know, reach for regulation.
Yes.
I'm a skeptic.
I don't, it's like, I don't know, regulating math.
Because we really need to regulate math.
Well, but it's not going to look like regulating math, right?
It's going to look like regulating this superpower.
That's what they're going to say.
Yeah, right.
Yeah.
Right.
But then the actual implementation, yes, is regulating.
Yes.
Yeah, regulating algebra.
Yeah, regulating algebra, regulating linear algebra.
Yeah, you're really going to regulate linear algebra, matrix multiplication?
Yeah, really. Seriously.
Yeah.
And then even if we do, are we going to possibly do it in a way that makes any sense?
Yeah.
Well, okay, but it won't, it obviously won't look like that.
It will be saying, well, we can't have computers drive cars.
Right.
Or, like, what's the, what's, how do you give the computer a test?
Yeah.
Or how do you know? Like, okay, I'll be the cynic.
So, okay, you make this claim that the computer AI is
better than human. Like, how do I know that?
Well, yes, it turns out, because the cars are driving.
Yeah. So, yeah.
So there was, okay, so here's how that played out in
self-driving cars. Yes. There was one category of company that said, we're going to basically
wait until it's perfect. We're going to work with the regulators while we're building the stuff. Yes.
And they're not driving. They're not on the road. They're still not on the road.
There's another category of company that said, you know what, let's evolve out of, basically,
cruise control. And, you know, it goes from cruise control to radar cruise control, and you get
humans driving with it, and you label data, exactly, and you don't expect the car to drive itself.
From the very beginning, the car is like an autopilot kind of thing; the expectation is you pay attention.
Like, you know, Tesla is the company I'm alluding to. And if you turn on Full Self-Driving on a
Tesla, you're still, you know, you're still told, like, you're not supposed to be watching a movie, you're
supposed to be actually paying attention, and the car will, like, alert you when it's time to pay
attention. But, you know, notwithstanding that, Tesla has been climbing the ladder on self-driving
functionality and capability. They do new software releases, pushed live to the car at night, anytime they want
those new releases are not being tested by
any federal regulator, you know.
It's, whatever, it's not,
there's no actual test happening.
And that has, that, and that has led to
incredible progress, including,
as you said, clearly in the data, this is now safe.
Because you can't make it work
just magically, right? It has to
happen gradually. Right, because it's
actually much like medicine. It's
entering into a complex system, yeah, a lot of variables
in the real world. Like medicine, too,
it's like life or death. You know, it's just as
serious. But, yeah.
Yeah, and then we go back to how we started the conversation.
The wait-for-permission thing, the binary zero-or-one, wait-for-permission, wait-for-perfection thing, versus the incremental, let's-get-better-and-better-and-better thing.
And the threshold is, is it better than humans?
Is it an incremental improvement?
I mean, clearly in self-driving cars, that's that kind of approach is the approach that's working.
You just, you know, observe.
And do you think you get to the tipping point where, look, let's look at the statistics?
We have, because we have all this happening right now.
We have the statistics, and it's like, it's so much better than humans, why wouldn't we do it?
Yeah, exactly. Right. And then at some point, the morality tips where it's like, well, obviously, we have to go in this direction because it's just obviously better.
Yeah. I suspect we're going to get there in medicine pretty quick.
Yeah, yeah, yeah. I'm an optimist on that. And again, I'm not an optimist because I think the AI is going to be perfect. I'm an optimist because I think the status quo is not that good.
Yeah, yeah. Well, it might be that you start empowering doctors. You give them tools. They start using them. And you start empowering patients. Patients start using them.
And actually, here, I think it's even different than a car, because, you know, a car is on a road.
This is your body, or whatever.
And actually, patients are driving their own health care more than ever.
I think COVID was another sort of tailwind there.
So maybe you start, maybe it's just about developing the tools and giving them out.
Well, here would be an example.
So let's use our screenwriting example, applied to medicine, which is, you know, for a given set of conditions, there may be many possible diagnoses.
An experience I've had is there's a set of symptoms.
Yes.
One doctor comes up with one diagnosis, another comes up with a different diagnosis.
You read the literature, and it's like, actually, both of those diagnoses
in theory fit. But, like, for some reason, the one guy only thought of the one, the other
thought of the other. Yeah.
So a way for doctors to start using this technology today would be, plug in the symptoms,
give me five possible diagnoses.
Yes.
Okay.
Oh, I didn't even realize, right, that, you know, because maybe this is a new thing since,
you know, I went to medical school or something.
I didn't realize diagnosis number three is an option.
Let me go look at that.
Yes.
Yes.
Right.
And so the doctor is still doing the diagnosis.
So it's your screenplay example.
Yeah.
Your augment is it.
As a doctor, you're augmented.
Yeah, in that case it's alerting you to things that you should know, but don't.
Yeah.
Yeah.
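The "plug in the symptoms, give me five possible diagnoses" workflow is mostly prompt construction. A hedged sketch; the wording and the function name are invented, and the model call itself is left out:

```python
def build_differential_prompt(symptoms: list[str], n: int = 5) -> str:
    """Format a differential-diagnosis query; the clinician still decides."""
    return (
        f"Given these symptoms: {', '.join(symptoms)}.\n"
        f"List {n} possible diagnoses for a clinician to review, "
        "most likely first, with a one-line rationale each."
    )

prompt = build_differential_prompt(["fever", "joint pain", "rash"])
print(prompt)
```

The key design choice, matching the conversation, is that the output is a candidate list for the doctor to review, not a verdict.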
Yeah. I mean, that's interesting.
It's almost like having a mentor or just someone to riff with.
Yeah, that's right. Yeah, yeah, yeah.
And, right, the great thing is, it is a machine.
It will riff with you as much as you want.
Yes. It will, like, you know, sit there with you at three in the morning. Yeah, it has all the time for you.
It's happy to. It doesn't get bored. It doesn't get tired.
Yes. By the way, and then it also has the advantage. It has all the up-to-date information.
Yes. Yes. Right. And all the outcomes. And when it makes a mistake, it actually can learn from it,
rather than being, like, devastated by it or emotionally reacting to it.
Right. Right. And like, self-driving cars.
If some other doctor in some other state had a patient last week and made a mistake and they fixed the mistake, it will not make that mistake.
It will not make that mistake with your patient. Yeah. So, I mean, so you think, and so that is a very different regulatory play than we have seen in the history of health care.
Well, I think that's just, well, you tell me, I think that's just going to happen. So here's what everybody knows. I'll give you a couple of things. Yeah. Everybody knows that patients should not be on Google.
Yes. Everybody knows every patient now does that. They call it Dr. Google.
That's literally what it's called in the field. Right. And there's no way, like,
practically speaking, that you can regulate that out of existence.
Yeah, yeah, yeah, that's going to happen.
I think doctors using these new tools as augment is something that they can just do.
It doesn't require approval.
So the ship has already sailed.
I think so.
And by the way, patients using GPT, if it hasn't started, it's going to start imminently.
Yeah, yeah, probably.
So the patients are going to show up with the results of GPT queries and the doctors are going to have to respond to that.
And so they're going to end up being in this world whether they want to be or not.
But that's actually really interesting, because as a patient, and I probably know just enough about medicine to be dangerous to myself, but, like, I show up with the doctor and I have all of
that thought out. Basically, that might equalize the patients, you know, such that they can actually
come in much more educated, come in much more thoughtful, and they become much more engaged in the process
as well. Yeah. Double-edged sword, though. I mean, you, as a doctor, do you want
your patient, yeah, do you want a patient more educated or less educated? They may just be humoring me,
but I think they want them more educated. Maybe with you, that's, yeah. With me, they might look a little bit
more sideways. Actually, but if it was really helpful, I think they would. I think it's just about
how good it is, right? Okay, so what goes wrong? Like,
I mean, look, the big thing, I think two things go wrong.
So one is just the expectation of perfection, right?
And look, it's very, you know, it's very easy to generate the negative headline.
It's very easy to set off the scare, the moral panic, basically, right?
A single incident goes wrong, and it gets extrapolated.
You know, we talk a lot about thalidomide. Like, you know, it'd be very easy to have that kind of moment.
Or like the person on a bike that got hit by a Tesla or something like that.
I think he was biking across a freeway.
Right, right.
Exactly.
And so, like, a human probably would have hit him, too.
Yeah, that's right.
Yeah.
Oh, well, that's a good point.
Yeah, yeah, yeah.
The trolley problem.
Yeah.
You know, the trolley problem's been in the press a little more recently, as it turns out that Sam Bankman-Fried was an expert in the trolley problem.
Okay.
Which shows you that, actually, that's not the route to virtue, ultimately, that it's been marketed as.
But, yeah, the trolley problem always gets mooted about self-driving cars,
which is, you know, you have a choice between killing, I don't know, five grandmas or one little kid,
all these different scenarios where you have to pull a lever.
But, like, human drivers don't make that choice.
No, no, no. Human drivers never make that choice.
No, they have gas or brake, right?
And they have: I'm going to hit the car in front of me, or I'm not going to hit the car in front of me.
It's never this elaborate thing. It's always a very simple thing.
And so it's not a question of whether the machine can ideally solve this sort of idealized complex problem.
It's: can it hit the brakes faster, right, when it's about to crash into the car directly in front of it?
And so properly, logically containing the expectations here to the actual real world, and not having this spin off into these basically fantasy narratives that you can then criticize. So that's the expectations part.
And then, yeah, look, I think there's just the generalized fear, right? And what I always have to remind myself is, you know, I'm a software developer by background.
Can I tell you every aspect of how these algorithms work? No.
But do I understand how they work, do I understand the basic foundations, the basic math? Yes.
This is why I make the comment about regulating math. Yes.
To somebody who's not a coder, right, all of this is weird. It's like weird math. Yeah, right.
And so, yeah, I have to remind myself to be patient
and tolerant of people who don't understand the mechanics of what's happening.
That said, I think the people who are going to be writing the rules, they also have to make an effort to understand the mechanics, and there's always slippage there.
Yeah, so what's the antidote to fear? Is it optimism?
Is it education?
I mean, ideally, it's a cultural orientation towards new technology, and then ideally it's education, people learning, the C.P. Snow two cultures that we have to get across, coming together and kind of educating each other.
Honestly, a big part of it also, I think, is when things become a fait accompli.
Yeah, I mean, this is what Tesla has done with self-driving cars.
Yeah.
Like, if it's just happening.
Yeah, yeah, right?
Because who would want to go back?
Like, who wants to go back?
Like, the system adapts.
Right.
And so there's this famous example: Uber fought all these regulatory wars in all the cities that they were in,
because it was not technically allowed under the taxi medallion rules in the beginning.
So one of the things they did early on was they just made sure that there were always lots of Uber cars available around state houses and city halls.
And so you'd literally have somebody who's, you know, sort of giving this roaring speech in City Hall about shutting down Uber.
And then they would come out and they'd have to get home really fast.
And the Uber would show up 20 seconds later.
Right.
It's like, at some point, it just was taken for granted. At that point, if you literally said, are we going to take Uber away, people would have said, no, we can't. It's over. And that's what happened. And then what happened later is they changed the laws to accommodate that behavior.
And so I actually think part of it here is just having these tools. Okay, here's a good-news thing. These tools are
becoming widely available up front, right? So like 50 years ago, a new technology like this
would have been like deployed in the government first and then into companies and then years
later in the form of something individual people could use. The model today is like it's just
online. Yeah. Like GPT is online right now. Yes.
Well, the future you paint is really intriguing, because from an engineer's point of view,
it's an engineer's dream: if we make it good enough, such that it gets to a point
where people just love it, and it's helpful, and it does what it needs to do, the rest will take
care of itself.
Yeah.
I mean, I kind of think that's mostly how things work out.
I mean, yeah.
No, that's a beautiful future.
Yeah.
Yeah.
So now, look, having said that, healthcare is very sophisticated, right?
There's lots of regulations.
There's lots of payment, right?
All these things.
So I saw this thing on Twitter the other day.
Yeah, it blew my mind, right?
Because this whole time I've been thinking in terms of, like, you know, diagnosis.
So this doctor posted a video.
I think I saw that. We all saw this.
So he posted a video, and he said, look, the problem isn't the diagnosing.
So, like, I do the diagnosis, I do the prescription.
Then it's a question of whether or not I can get the insurance company to reimburse, to pay for the thing.
To do that for anything even slightly out of the ordinary, the doctor has to write a letter to the insurance company.
And that letter needs to be in a specific format, and it needs to make the case, right?
Make the case.
And it needs to have the scientific citation.
Yeah. And if I do the letter really well, it's going to get paid for.
And if I don't do the letter really well, it's not going to get paid for.
That's going to matter, you know, possibly to the life of the patient.
Yes.
And so he's like, it turns out GPT is really good at writing this letter.
With the references.
With the references, yes, with the scientific references, like full on, right?
Yeah.
And so you've got this.
So that's another way I think about it: you've got this bureaucratic process, which is legitimate
and required and has to exist.
And that data needs to be submitted.
And honestly, it does not matter to that process whether that document is written by human or machine.
Yeah, yeah, yeah.
But all of a sudden, if every doctor in the world is really good at writing these letters, then the thing gets paid for.
And all of a sudden, that doctor now has another, you know, whatever,
four hours a week to take care of patients.
Yeah.
Like, that's the kind of thing that I think is going to happen more and more.
And that, what's interesting about that example is you can imagine that example
having a big impact on the efficiency of the health care system today.
Yes.
Without any regulatory changes.
Yes.
Well, within the system.
Within the system.
Actually within the system.
Yeah.
And that was the one where it's just like, oh, in retrospect, that's obvious. I just hadn't thought about it.
Yeah, I hadn't thought about it either.
All the other doctors start to do that.
The whole system upgrades in a step function, you know, one time.
Yeah.
I think that kind of thing, I think, is a real possibility.
Yeah, and because someone's working within the system, you can have the transformation immediately.
But then eventually someone has to read all those letters.
Someone has to validate them.
It's probably, you know, some sort of NLP on the other side.
Well, that's right.
There's a corresponding thing.
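The prior-authorization workflow described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the episode: the function names and fields are invented, and the injected `call_llm` client stands in for whatever model API a real tool would use (a physician would still review the output).

```python
# Hypothetical sketch: drafting a prior-authorization letter with an LLM.
# All names here are illustrative; the podcast describes the use case,
# not a specific implementation.

def build_prior_auth_prompt(diagnosis, treatment, citations):
    """Assemble a prompt that makes the medical-necessity case in the
    payer's expected format, with supporting references."""
    refs = "\n".join(f"- {c}" for c in citations)
    return (
        "Write a prior-authorization letter to an insurance company.\n"
        f"Diagnosis: {diagnosis}\n"
        f"Requested treatment: {treatment}\n"
        "Make the medical-necessity case and cite these references:\n"
        f"{refs}\n"
        "Use a formal letter format with a clear justification section."
    )

def draft_letter(diagnosis, treatment, citations, call_llm):
    """call_llm is whichever model client the clinic uses, injected so the
    sketch stays self-contained and testable without a network call."""
    return call_llm(build_prior_auth_prompt(diagnosis, treatment, citations))
```

The point of the example is the shape of the system, not the prompt wording: the bureaucratic requirement (format, case, citations) becomes a function input, and the letter becomes cheap to produce.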
So we have this company called Do Not Pay.
Yeah.
It's an app that sort of acts like a bot on people's behalf.
Yeah, no, I've used the app.
It started out getting you out of, basically, BS traffic tickets. And then he did this thing a while ago where it will unsubscribe you from all these consumer subscription services, like Comcast or whatever. They all make it hard to ever turn off the subscription, and so he has this bot that will do it for you. And now he's started using AI in the bot. The way a lot of consumer subscription companies work is you can't actually unsubscribe online. You have to
call an 800 number and argue with a person. And there's actually this thing in
these companies called save teams, where people are paid specifically to prevent you from
unsubscribing. They'll try to cut special deals with you and try to talk you out of it.
And so he has this thing wired up where he has AI-generated text with
text-to-speech. It just talks to the customer service person
on the line, basically with infinite patience. And so it will just sit there and
argue: no, I actually am going to unsubscribe. No, I don't want
the special offer.
No, no, no, no, no.
Right.
Yeah, exactly.
Until finally the other guy, finally gives up and says, okay, fine, I'll stop charging
you.
And so it's like, okay, you know, it was a precondition of the system that that worked the
way that it did.
It was a burden on people to have to deal with that.
AI can now step in and equalize the power imbalance between the customer and the company.
And presumably that will change the system.
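The "infinite patience" behavior described here reduces to a very simple policy. A toy sketch, with the speech layer and any real LLM omitted; the function name and phrasing are invented for illustration:

```python
# Toy model of the infinite-patience cancellation bot discussed above.
# A real system pairs a language model with text-to-speech; here the
# bot's policy is reduced to its essence: refuse every retention offer
# until the agent confirms the cancellation.

def cancellation_bot(agent_lines):
    """Return the bot's reply to each agent line, stopping once the
    cancellation is confirmed."""
    replies = []
    for line in agent_lines:
        if "cancellation confirmed" in line.lower():
            replies.append("Thank you. Goodbye.")
            break
        # Any counter-offer, discount, or delay gets the same calm answer.
        replies.append("No thank you. Please cancel my subscription.")
    return replies
```

The asymmetry the conversation points at is visible even in the toy: the save team's time is expensive and the bot's is free, so the bot always outlasts the agent.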
Yeah, well, one would think.
Well, one would hope, right.
And to your point, step one for changing
the system might be retaliation, which is all of a sudden the save teams will be bots.
And so maybe the bots will be arguing with the bots.
But at least it gets you out of this kind of Kafka-esque thing you're in today,
where when you deal with these big companies, you're dealing with this giant bureaucracy,
at least it like equalizes the power.
Well, that's at least mainly good.
And that will be the spark for changing things.
Because once you're in that sort of system, it's like, we've got to fix this.
Yeah, this is crazy.
Yeah.
And bots arguing with each other all day long, it's just clearly stupid.
Yeah, yeah, yeah.
And especially with bots on both sides, now we can finally say, well, let's just do an API on both sides.
Let's do something smart on both sides.
Yeah, exactly.
Yeah, yeah.
Well, Marc, I mean, that's such a beautiful, optimistic view of how this could go, right?
Because the future we're talking about is actually much more engineer-driven: if an engineer can build this, and it really, really works,
it really helps patients,
it really changes things,
it will get adopted.
As it gets adopted, culture will form around it, and people will love it and will not want to go back.
And then the future will just be right in front of us.
Yeah, patients are going to get a vote.
Yes, doctors are going to get a vote.
Great.
Yeah.
And, you know, it's an industry made of people, a world made of people. People will get a vote.
Yeah.
Beautiful.
Thank you so much for joining.
Thank you for joining BioEats World.
BioEats World is hosted and produced by me, Olivia Webb, with the help of the bio and health team at A16Z and edited by Phil Heggseth.
BioEats World is part of the A16Z podcast network.
If you have questions about the episode or want to suggest topics for a future episode,
please email bioeatsworld at a16z.com.
Last but not least, if you're enjoying BioEats World,
please leave us a rating and review wherever you listen to podcasts.
Please note that the content here is for informational purposes only,
should not be taken as legal, business, tax, or investment advice,
or be used to evaluate any investment or security
and is not directed at any investors or potential investors in any A16Z fund.
For more details, please see a16z.com/disclosures.