Theories of Everything with Curt Jaimungal - How AI Healthcare Will Change the World
Episode Date: April 7, 2025. As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe
What if you had a thousand doctors working for you 24/7, at virtually no cost? In this episode of Theories of Everything, a panel of leading AI and medical experts explores how “medical swarms” of intelligent agents could revolutionize healthcare, making personalized, concierge-level treatment accessible to all. This isn’t science fiction, it’s the near future and it will change everyone’s life.
Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://tinyurl.com/SpotifyTOE
Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join
Links Mentioned:
• Ekkolapto: https://www.ekkolapto.org/polymath
• Ekkolapto’s Longevity Hackathon: https://www.youtube.com/playlist?list=PLy5dPSW_KkniuHpoLwlzkYcxhxn50Mn0T
• William Hahn’s lab: https://mpcrlab.com/
• Michael Levin’s presentation at ekkolapto: https://www.youtube.com/watch?v=Exdz2HKP7u0
• Gil Blander’s InsideTracker (website): https://blog.insidetracker.com/
• Dan Elton’s website: https://www.moreisdifferent.com/
• FAU’s Sandbox: https://www.fau.edu/sandbox/
• Will Hahn on TOE: https://www.youtube.com/watch?v=xr4R7eh5f_M&t=1s
• Will Hahn’s in-person interview on TOE: https://www.youtube.com/watch?v=3fkg0uTA3qU
• Michael Levin on TOE: https://www.youtube.com/watch?v=c8iFtaltX-s
• Stephen Wolfram on TOE: https://www.youtube.com/watch?v=0YRlQQw0d-4
• Neil Turok’s lecture on TOE: https://www.youtube.com/watch?v=-gwhqmPqRl4&list=PLZ7ikzmc6zlOYgTu7P4nfjYkv3mkikyBa&index=13
• Robin Hanson on TOE: https://www.youtube.com/watch?v=LEomfUU4PDs
• Tyler Goldstein (YouTube): http://www.youtube.com/@theoryofeveryone GO TO THIS MAN'S YOUTUBE CHANNEL. HE HELPED WITH THE CAMERA WORK IMPROMPTU AND ALSO HAS A FANTASTIC CHANNEL ANALYZING THEORIES. THANK YOU, TYLER!
• Joscha Bach on TOE: https://www.youtube.com/watch?v=3MNBxfrmfmI
• Manolis Kellis on TOE: https://www.youtube.com/watch?v=g56lxZwnaqg
• Geoffrey Hinton on TOE: https://www.youtube.com/watch?v=b_DUft-BdIE
Timestamps:
00:00 Introduction
4:43 A New Approach to Healthcare
5:33 AI in Medical Imaging
7:40 Cognitive Models
11:09 Education in Medicine
23:02 Exploring the Boundaries of AI
32:04 The Future of AI in Medicine
37:20 Swarming Agents
41:49 The Ethics of AI in Healthcare
45:17 AI into Clinical Practice
55:58 Preparing for an AI-Driven Future
1:15:03 The Human Element in Medicine
1:17:19 Emotional Intelligence in AI
1:20:11 Unified Theory in Medicine
1:21:31 Conclusion
Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs
#science
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
If you imagine a VIP in the government and something happens to them, literally hundreds of doctors are
going to get put on call. None of us can afford that, but if we look at this
technology, it'll be very reasonable to think that you have the brain power of a
thousand physicians. What is this going to look like when you have the whole
ChatGPT to yourself? It's only a matter of time before you can afford to have a thousand agents
all working for you.
What if every person on earth could have access to a personal team of medical specialists, not just one doctor, but hundreds working
in concert, available 24/7, and at virtually no cost?
In this episode, we explore how AI is revolutionizing healthcare through what experts call medical
swarms: armies of specialized agents that could make concierge medicine available to everyone.
Joining us is a distinguished panel of experts: AI
professor William Hahn, who is the founder of FAU's Machine Perception and Cognitive
Robotics Lab and a returning guest of this channel. Here he is discussing Wolfram and consciousness.
Link in the description. There's also Dr. Gil Blander, an MIT aging-research veteran
and founder of InsideTracker. There's also Dr. Dan Elton, an NIH scientist developing AI for medical imaging, and AI professor
Elan Barenholtz, co-founder of Florida Atlantic University's Machine Perception and Cognitive
Robotics Lab.
My name's Curt Jaimungal, and I'm honored to have been invited to moderate today's
Polymath Medical Salon at FAU's Gruber AI
Sandbox, keynoted by pioneering biologist Mike Levin. Here's his presentation from Polymath,
link in the description. The salon was curated by Adi Shah of Ekkolapto, who held the first longevity
hackathon at MIT with Augmentation Lab, featuring guests like Stephen Wolfram, Joscha Bach, and
Manolis Kellis. This event was also curated by academic and medical philanthropist Ruben Gruber.
Together, they paint a picture of a healthcare future that's not about replacing doctors.
No, it's about democratizing access to medical expertise at an unprecedented scale and sophistication.
Subscribe if you like the exploration of cutting-edge research in AI, theoretical physics,
consciousness studies, biology, and mathematics.
Good evening everybody. We're really excited that you guys are here for this kickoff event.
We're here in the machine perception and cognitive robotics laboratory,
something I started with Elan here about ten years ago. And then we expanded into this beautiful space
we call the AI sandbox.
So first and foremost, he's not gonna like it,
but we have to thank Ruben Gruber back here.
Give it up for Ruben, he made this all possible.
It's not just for building us this beautiful space
that enables events like this,
but in this particular event for serving as a catalyst for what we think is a very exciting space that's combining the improvements we've seen with artificial intelligence with some of the killer or rather life-saving applications for artificial
intelligence.
So, I'm very excited here to share.
We're going to have a nice dialogue with the panel, but we want your input.
The whole point is to create a conversation factory and to have a loop that's connecting
the practitioners, the physicians, the people who need care, the people who give care, and all of the amazing young people here
that are going to go out and use these tools
to make those things happen.
Give it up for Dr. Hahn.
Thank you.
You may have seen me running around
practically like a chicken without a head
for basically the whole event.
My name is Adi Shah.
I run this thing called Ekkolapto,
which is Greek for incubate,
if I got Google Translate right. And I basically run a research institute. And this
research institute is somewhat of a sloppy excuse for me to be able to better understand
why we're here on this planet. What is purpose? Why are we here? What are the unknown unknowns
of biology, math, physics, right? All these types of things. But if I had to sum up Ekkolapto in one
word, beyond just doing hackathons, research hackathons, and salons like these, and all sorts of things,
it's that I want to be able to better enhance the intelligence capability
of the brain and the rest of the body to better interact with and understand reality.
You know, AI is just a kind of intelligence. It's very capable. What else is there?
That's what I want to solve.
Thank you.
Hello.
Um, my name is Gil Blander, and, uh, I came up with the idea for the company that I wanted
to start. The company's name is InsideTracker.
And what we're trying to do is help
humans live better, longer, based on what's happening inside the body.
So we are looking at a lot of variables: around 50 blood biomarkers, DNA,
data from wearables. We even added food recognition, so we are basically extracting
what you eat and what you supplement, combining
all of it together and, as I discussed with Will, trying to build, in a way, a digital twin
of you, and based on that, providing the best intervention for you, like a laser-focused
intervention of what food to eat, what supplements to take, what exercise to do, what
lifestyle changes to perform.
Hi everyone, Dan Elton here.
Like many of the people in my class, I got into machine learning and AI.
Eventually, I got into doing research on AI for medical imaging at NIH.
So I'll just jump into some of the applications that I've been working on and that I'm excited
about.
One of them is basically this idea of unlocking additional information in medical images.
And for each of these different things, you can get very precise measurements, which can
be used for risk prediction.
So I've done some work on cardiovascular disease risk prediction
using these biomarker measurements.
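For readers who want to see what "imaging-derived measurements feeding a risk model" can look like in practice, here is a purely illustrative Python sketch. The measurement names, the toy data, and the choice of logistic regression are assumptions made for the example, not Dr. Elton's actual NIH pipeline.

```python
# Illustrative only: imaging-derived biomarkers feeding a downstream risk model.
# The measurement names, toy values, and logistic regression are assumptions,
# not the actual research pipeline described in the conversation.
from dataclasses import dataclass
import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class ImagingBiomarkers:
    coronary_calcium: float    # e.g. a calcium score segmented from a CT scan
    liver_fat_fraction: float  # e.g. estimated hepatic fat percentage
    muscle_area_cm2: float     # e.g. cross-sectional muscle area

def to_features(cases: list[ImagingBiomarkers]) -> np.ndarray:
    # Stack each case's measurements into one feature row.
    return np.array([[c.coronary_calcium, c.liver_fat_fraction, c.muscle_area_cm2]
                     for c in cases])

# Hypothetical training data: features extracted from scans, labels = later cardiovascular event.
X_train = to_features([ImagingBiomarkers(120.0, 8.5, 140.0),
                       ImagingBiomarkers(5.0, 3.0, 180.0)])
y_train = np.array([1, 0])

risk_model = LogisticRegression().fit(X_train, y_train)
new_case = to_features([ImagingBiomarkers(60.0, 6.0, 150.0)])
print("estimated risk:", risk_model.predict_proba(new_case)[:, 1])
```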
The application I want to talk to you about today,
this idea of using AI to help
treat patients with chronic complex conditions.
So one of these is long COVID.
I actually had long COVID for over a year.
We were just talking about that with somebody. And when I had long COVID, I started reading
a lot about chronic fatigue syndrome. It actually really blew my mind that actually about 2%
of people worldwide have chronic fatigue syndrome, which is extremely debilitating and really
reduces quality of life. But the thing is that the medical system is very poorly equipped to handle these sort of
persistent complex conditions. What we could do is before
a patient goes to see a doctor, they could actually talk to an AI first.
Basically, the point is that doctors really suck at handling these conditions.
They only look at the biology, they're not equipped to handle the psychology and the
sociology, and they're really overworked, right? So they're only going to spend 40 minutes
tops and that's not nearly enough time to understand a complex condition. So I think
AI can really help fill those gaps. It could
be AI deployed in the hospital or it could be something you just access from home.
Hey everybody, Elan Barenholtz. I am a co-director of the MPCR lab with William. So I will state
my position first, which is that I think, with large language models, the underlying math of what the AI is doing is actually
the same thing our brain is doing. And that's sort of my starting point for what I'm thinking
about endlessly, day and night. What we have now in these models is not just a really cool
piece of technology that we can do
important things with. It is certainly that, and we are going to do very
important things with it. I think we should be very optimistic and hopeful
about the extraordinary science that's going to come from large language
models in AI. But that's not what I'm personally most excited about. What I'm
personally most excited about is that we have a brain in the jar. That I don't know
if I have the floor right now to sit and try to convince you, but my
theory is that we've modeled something very fundamental about cognition itself, certainly
language.
And we've captured something so essential that we can now successfully model a human
brain, not all of it, not the sensory pieces,
not a lot of sort of the subcortical kind of activity,
but the thinking cognitive brain is something we can now model directly
in a system that's not us, that we can manipulate arbitrarily.
We can do experiments with it
that presumably, and again this is my conjecture, but will have implications for ourselves.
And so one of the things, it's in my poster in any case: I believe that there are core insights into the nature of memory
that are distinct from the way neuroscience has thought about it since its inception.
The insight comes from these models.
That these models seem to actually operate in
a way that is probably synonymous with the way
our brain functions in a certain sense.
I'm sorry if that's a little abstract.
But what we can do now that we could never have done as a species before is say we're
going to take some analog to the human brain that can learn the kinds of things that we
can do.
We can test it the way we test ourselves and we can start to experiment with it.
William and I were talking last night about a curriculum, right?
You can test.
There's a big debate.
Math.
Is math important?
How many people think, and raise your hands, how many people think that it's really important
that we continue to teach children basic mathematics, arithmetic, long division, how to deal with
fractions, factor, all of that?
Raise your hands.
Okay, how many people think maybe that's not important?
Okay, I'm not here to debate this.
Here's what I'm going to say.
We can later though.
We can debate that.
Here's my proposal.
We can test that.
We can take a fresh brand new LLM.
We can train it on the entire curriculum that you would receive without the math.
And then you test it on other stuff, right?
Because we're not, presumably you didn't raise your hands
because, you know, when you're calculating the tip,
you still need to be able to do a little bit of decimals.
That's not what you meant.
You meant there are broader implications for mathematical training
that are going to spill over to other kinds of,
well, now we can test that.
And the same for music, foreign language, and so on.
Right.
Now for something less controversial, is this guy right here who came all the way from Canada,
which may or may not become part of the U.S. at some point.
But for right now, it's its own thing.
And why don't you tell people what you do and why you're here?
Hi, I'm Curt Jaimungal and I have this channel called Theories of Everything on YouTube.
You can search it.
It's a place where I interview professors and researchers on the latest theories.
So for instance, Mike Levin on limb regeneration and his anthrobots and xenobots and Stephen
Wolfram on his physics project.
It's essentially what are the largest questions that there are in reality about nature, about
meaning, about life, and then trying to answer them rigorously and speak to people who have
theories about them, especially those that are new, but rigorous and technical.
So I want to know what is the background of the audience here?
Who here is in neuroscience? Raise your hand.
And who here is in math? All right. What about physics? All right. Well, I'm going to just
ask a set of questions to these people and then I'm going to open the floor to you
all, because the quality of the questions to Levin was so high.
So what can we learn, Elon, from LLMs about brain disease?
So funny you should ask.
So that's what I was about to talk about.
And you're not allowed to use the word autoregressive.
Okay.
So let me, can I first define autoregressive and then refer to it?
No, I'm kidding.
Anyway, so as I said, I think that it's the same math, the same computation, that's happening.
We won't name it, but that's happening in large language models.
Basically what they're doing is they're taking in an input, right?
It's a sequence.
You ask them a question and then they just
guess the very next word that they're supposed to say based on that question. And then they
take that word and they feed it back into the input sequence. So you have the question
plus that word and then they say, oh, what's the next one? And so this is fundamentally
what they're doing. They're just guessing the next token. My theory is that that is at least what human language is.
I have lots of reasons that I believe that, but I don't need to elaborate on them right now.
But if that's the case, then when somebody is generating language, what they have to
do is retain sort of everything they've said up till now, and then generate the next token.
That's what I'm doing when I'm talking right now.
I'm actually just generating the next word,
but I'm doing it premised on everything I've said until now.
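To make the loop being described concrete, here is a minimal Python sketch of autoregressive next-token generation. The small GPT-2 checkpoint and greedy decoding are stand-ins chosen to keep the example short; they are assumptions for illustration, not what any particular production model uses.

```python
# Minimal sketch of the autoregressive loop described above: predict one token,
# append it to the input, and feed the whole sequence back in.
# "gpt2" is just a small stand-in checkpoint; greedy decoding keeps the example short.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("What if every person had a team of doctors?", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]        # scores for the next token only
    next_id = torch.argmax(logits).view(1, 1)    # guess the single most likely token
    ids = torch.cat([ids, next_id], dim=1)       # the output becomes part of the input
print(tokenizer.decode(ids[0]))
```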
So what you're saying is you're an LLM.
So what I'm saying is at least linguistically,
I think we're LLMs, not just me, you guys too, right?
So I think it probably extends beyond linguistics.
However, linguistics is at least important enough to say,
can we model something that happens linguistically in people?
What happens in dementia, of course,
it's typically referred to as,
actually early on in dementia,
you will see short-term memory loss.
People won't be able to repeat back a sequence that you say to them in perfect order.
So that's often referred to as a kind of memory loss. I understand it differently. I understand it
as that there have to be some representations, some activation. In order to guess that next token,
you have to have some memory of what you've said before,
but it's not meant for retrieval; it's meant for next-token generation. So this model makes a
completely different interpretation of what dementia is. Instead of thinking of dementia as
missing information you can't retrieve, it's that your brain is no longer retaining the activity that's necessary to generate the next token.
What I'm doing experimentally is I'm building LLMs that have a shorter context window.
So instead of, you know, when you talk to ChatGPT or Claude, you can feed it a document.
Everything that it produces during the course of your conversation goes back into its input.
It churns through on every single cycle.
Every single word it outputs, it goes through the whole thing.
In humans, we probably don't do that.
If I ask you what I said three sentences ago, can anybody repeat any of my sentences?
Probably not.
There isn't actually, the information isn't necessary
in order, isn't sufficient there to actually repeat it.
However, if you note, you are able to follow
what I'm saying, right?
I'm talking about this idea of a context window,
how big it is, manipulating it in the case of LLMs,
it's as a model of, right, you've got all of that.
So what we've got here is something like an LLM,
but it has a very different feature
to it. It has this kind of decay of activity, of the activation. It's not the entire prompt
that's going in every time. So this model of dementia is just to squeeze that window
and to say, okay, LLM, you're not going to actually have the entire conversation we've had until now.
You're going to have some much more reduced context that will allow you to do that next token generation.
And I'm doing this by just a simple mathematical manipulation.
You just say, here's what you've got to work with.
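A minimal sketch of the manipulation described here, assuming a Hugging Face-style causal language model: the generation loop is the same as in the earlier sketch, except the model is only ever shown the last few tokens of the history. The checkpoint and window size are placeholders, not the lab's actual experimental setup.

```python
# Minimal sketch of "squeezing the context window": generate autoregressively, but
# only let the model see the last `window` tokens instead of the whole history.
# The checkpoint and window size are placeholders, not the actual experiment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_with_truncated_context(prompt: str, new_tokens: int = 50, window: int = 16) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids[0]
    for _ in range(new_tokens):
        context = ids[-window:].unsqueeze(0)       # the "squeezed" context window
        with torch.no_grad():
            logits = model(context).logits[0, -1]  # distribution over the next token
        next_id = torch.argmax(logits)             # greedy choice for simplicity
        ids = torch.cat([ids, next_id.view(1)])    # append, as in normal autoregression
    return tokenizer.decode(ids)

print(generate_with_truncated_context("The doctor asked how I slept, and I said"))
```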
But if you talk to these things, they get confused in just the way a dementia patient
would.
And they try to make excuses for it.
They're trying to interpret why is it that they'll kind of confabulate during conversations
with them.
So what that tells me is, oh, I think this might actually be a proper model that's going
to actually explain what's going on in dementia
such that we can now think about, well, what are the interventions one can do?
What does this new, completely new interpretation of what memory is tell us about how to potentially
improve memory?
So, I'm sure you've heard of the diffusion LLMs.
I didn't say what autoregression was. Yeah, great, great, great.
Oh, that's so interesting.
You should mention that.
So at first I was dismayed.
I don't know what to think about these.
At this point, you can talk about what autoregression is and what the implications are for diffusion
LLMs to your theory.
Okay.
So as I mentioned, what LLMs are doing is, they're a trained
model, they're a network.
A network takes an input, it produces an output.
The input and output that they are trained on is: here's a sequence of words, and then
all they're trained to do is output the next word. It's called a token, maybe
not exactly a word, but we can just say next word.
That's all they're doing.
And then they take that word, stick it back onto the sequence, feed it through again.
Same model, right?
It's just got a new input.
And now with the new input with a new word, it then produces yet another output.
And it just does this sequentially.
And that's autoregression.
Autoregression just means that you're doing this
sequential input output generation,
then the output becomes part of sequence in some way.
Okay, so now what was the question again?
I got too excited about talking about autoregression.
Ah, diffusion, thank you.
So I have to have it.
By the way, any questions at all
that have come up for Dr. Barenholtz so far?
Anyone in neuroscience at all?
One right here.
Thank you.
But I feel like I'm interrupting a very interesting explanation of autoregression and diffusion.
I want to-
I do want to answer that.
Yeah, so-
We'll come back to it.
Okay.
Well, thank you, Adi, for interrupting a lot for me.
Anytime.
So, I thought this idea that you have of essentially simulating, you're talking about the
mind, maybe you're alluding to consciousness, but really what I think you're talking about is
personality. Because when we think about dementia and we think about it through the model of a
linguistic lens, you don't have access to what is happening in the brain, what is happening in
the rest of one's physiology when you're thinking about it just linguistically,
right? So you are able to test interventions on those parts of a person's mind that are
visible. Those are the behavioral parts. And at least in my understanding, when I think
about linguistics, I think about those behaviors, I think about how we see patterns over time
represented as one's personality.
And people's personalities tend to predict behaviors.
They also can be modified by external factors.
You can change a pattern.
And yeah, what do you think about that interpretation?
First of all, I think we only have evidence so far for language as being this autoregressive.
My grander theory, which I don't have enough of a leg to stand on yet,
is that it's all that.
So personality is also just an autogenerative process.
There are many of them. There are visual ones,
there are multi-sensory ones.
They do different kinds of work.
Olfactory ones?
Olfactory language?
Yeah.
I don't know about olfaction.
I don't think we can think in smell.
I have something to say about that.
We don't do the... anything you think, anything you can think in is sort of is autoregressive,
right?
Because it takes time.
That's the hallmark of it.
The reason it takes... when you recognize somebody, you recognize them instantly.
When you have to think through a problem,
you have to take time to do it.
What is that time? It's just the autoregressive process.
Actually, I have a really interesting thing on the thinking and smell.
If you look at what language is capable of,
and making an assumption,
which is probably a bad assumption,
that thought is linguistic in nature,
then you'd expect that a corpus of text that's able to train an LLM,
would be able to handle those other senses in the same way that it can,
say, describe something that it hasn't seen before
visually and explain how to paint it,
and it can explain how something sounds.
But what it can't do is come up with a cocktail recipe,
because that's smell and taste.
And for some reason, that's just not present latently in the corpus
in the same way that sight and sound are,
which might indicate that you can't think in it.
It indicates that people are very visual and auditory,
and language was created by people.
If it was dogs, the smell would be well-represented, probably.
But would that be thought?
Maybe.
I'm not convinced that the LLMs can't think in terms of flavors and things like that.
Just this weekend, I was showing my brother what you can do with ChatGPT. Coincidentally,
he had sent me a picture. He just went to a new market and he had gotten a whole bunch
of different ingredients. He said, oh, I should ask ChatGPT.
I said, just take a picture.
And he literally took a picture of the ingredients
and asked what to make, and it gave him a very elaborate recipe.
And at the end, he said it was 10 out of 10.
He thought it was really fantastic.
And it was a random set of ingredients,
and he was able to compile them.
So I think if we fed them enough cookbooks, you know, there is, and again, are they just kind of vacuuming up the human experience?
Does it really ever understand what these cocktails are going to taste like?
In the case of vision, you can see, right? You can feed them YouTube videos and they can think visually because they can replicate that kind of dynamics.
But there's no data set for them to smell.
Okay. I want to ask Gil a question. So Gil, you left academia for industry.
Now, many people who stay in academia do so because there's research in academia and they
see industry as that's, well, that's where you make money, but it's not where you perform
research.
But it's my understanding that you did both.
So can you please talk about that?
Why is it that you left the academy?
What is it that the universities do well and where do you see them lacking?
That's a tough question.
I left academia because I felt like I wanted to have a big impact.
And again, no complaints, nothing bad about anyone in academia.
But what I saw at that time, and it was long ago, is that basically you publish a paper
that maybe five or 10 or 20 or 50 people are reading, and that's it.
And the impact is pretty small.
And I wanted to translate more to everyone.
And actually, that's what I'm trying to do. My mission is to translate the information for everyone.
And that's why I decided to move to the industry.
And I think that doing research in the industry is not less exciting or less important than
in the academia.
For example, we have now a data set of hundreds of thousands of people with blood, DNA, fitness
trackers, and some food recognition, and some biological age, and a lot of the things that we
discussed before about All of Us. All of Us, for whoever doesn't know about it, is something
that the US government is trying to do, to reach one million people and all of that.
But it takes time, it takes money.
Let me ask you a quick practical question.
What are the markers?
Because you test for markers, blood markers, maybe others.
What are three markers that people here at home should look at as indicators of their
health that people are talking about?
Yeah.
So I can talk about blood markers,
but there are other markers that are not blood.
For example, VO2 max is a very important marker,
and it's not blood.
Even your Apple Watch can tell you that.
It's not very accurate.
So that's one marker that's important.
But if you are talking about blood biomarkers, I would say that glucose or A1C is very important
because it's showing whether you are going down the diabetic route.
You have A1... sorry, you have ApoB, which is a marker more for cardiovascular diseases,
and hs-CRP, which is a marker of inflammation.
But there are a lot of markers.
I wouldn't say that those are the most important.
I think that it depends what are you trying to treat.
My belief is everyone is a unique person and you need to define what are the issues that
you have and then the right marker for the problem that
you have.
Let's define it and let's then try to see how can you attack it and improve yourself.
I am Arif Dalvi, I direct a Parkinson's disease center just 15 minutes down the road.
But I had a question for Elan about, you know, the word confabulation.
As a neurologist, we see that in a
syndrome called Wernicke-Korsakoff syndrome. So people who are alcoholics,
they damage an area of the brain called the mammillothalamic tract. And if you show
them, you just hold your hands like this and you say, do you see the string? Not
only will they see the string that is not there, but they will describe it in
great detail. So while, as with everything in the brain, we don't know exactly what
the mammillothalamic tract does, it's sort of like an error-checking part of
the brain.
So in terms of LLMs and confabulation, is that confabulation coming because of a
lack of error checking, and can that be computationally solved?
It is exactly the right kind of question you can now ask.
You can ask this question computationally.
Where is the deficit?
Assuming again that language generation is modeled by this system,
we can break it in different ways and see if you get something characteristic of exactly what you're referring to.
I have some ideas how you could induce exactly that.
And you would see very much this kind of confabulation.
First of all, they're always confabulating.
We have to get it into our minds that they're not really reading the prompt.
They're sort of guessing, right, sort of what is the appropriate response given this prompt.
And in some sense, what's happening in this case is it's over-guessing. It's going sort of beyond
the data in a certain sense. And the question is, you know, how do you model that in this kind of
system? But yes, it's exactly, and it's similar to what Dr. Levin was talking about, that we need
to think about these things, I would say computational,
which is not a…
His word is better, I think, psychologically.
But I guess in some sense, I think that when we're dealing with this particular kind of
modeling, we have to be careful about overinterpreting.
We have to really think about it computationally.
I think it is the breakdown in, say, the over-weighting of
recent context, where you're like, I know what I'm talking about now, you know, because
I remember what I was talking about three seconds ago. And you could just go off in
a crazy, dizzying direction. William, I've talked about this sometimes with dreams. I have a theory of
dreams now. Right. The theory is that it's a short context window. You don't remember
your activation
system because you're actually generating, right? You're just generating. It's not the
world. It's your mind creating the visual imagery. And it's premising it on a much shorter
window than you usually have. Everything is sort of internally consistent. The
early days of deepfakes, and deepfake videos in particular, were really fun to watch because they looked very dream-like,
because you'd have somebody riding on a motorcycle and suddenly they fly into the air on a rocket
ship and if you look at any three or four frames it makes perfect sense, right? And
that's what dreams are. So this kind of, you know, the confabulation you're referring to
could be thought of as representing exactly that.
And now we can get into the guts of the system and say, well, let's modify that.
So is it a scale function?
Like before LLMs, we had auto predict, auto correct on our iPhones.
But now with LLMs, the scale has become so huge.
Is that a function of just the scale?
It is.
So it's two things.
It's the transformer model, which parallelizes: instead of having to churn through the entire sequence, you just do it, boom, all at once.
So it's parallelization and then GPU scale.
So yes, that basic recipe plus scaling seems to be, until the scaling laws break, seems to be the solution.
Now, if nobody in this audience can ask the question, I'm ready for it, which is like, come on, this is not a model of the brain.
We don't need all that data.
That's not how we learn.
We don't cram.
We don't churn through billions of pieces of text by the time we learn to speak.
So I think that right now, industry-wide, there's just a race to beat the benchmarks.
And what they know works is scaling.
And it does work, and scaling is fantastic.
But just because that solution is working right now doesn't mean you need all that scale.
And this is probably a tremendous amount of waste, both in terms of how much compute you
need, and also in terms of the curriculum, as I call it.
They train these things, just churn through all this data.
If we taught them children's stories first and then gravitated towards nuclear physics
later on in their education, maybe they would get there sooner or something like that.
So yeah.
That's something that would be kind of interesting to see, right?
How many of you have tried to use AI?
There's different ones out there, but which ones of you, slight pivot, have tried to diagnose
some specific medical thing with you or someone you care about?
Or, okay, we just, I've done this, we've all done this quite recently, I'm sure.
Now how many of you certified if what they said was actually correct?
And it was correct?
Okay.
Most of the time it wasn't correct.
Yeah, okay, most of the time it wasn't correct. And I think that they were talking about the LLM and the training set and all of that. I think that what is missing in health, wellness, and biology is the training set.
Because the internet is great for knowing the literature and the language and all of that,
but the data on health, wellness, and performance is not there, or what is there is a lot of the time skewed by gurus that wrote a blog or something
like that.
So it's not really medical.
And I think that that's the next frontier.
How do you build it?
I call it a model on top of the foundation model.
So you need to build your own model
on top of the foundation model, and then train it,
and then use the LLM in a medical setting,
because you cannot make a mistake. Or you can,
but the mistakes should be very rare,
because it's life and death.
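One possible reading of "a model on top of the foundation model" is a curated, vetted medical knowledge layer that the general-purpose LLM is forced to answer from. The sketch below is only that reading; the data, the retrieval scheme, and the guardrail wording are illustrative assumptions, not InsideTracker's actual architecture.

```python
# One possible reading of "a model on top of the foundation model": keep a curated,
# vetted medical dataset outside the general model and force answers to cite it.
# Everything here (the data, retrieval scheme, guardrail wording) is an illustrative
# assumption, not any company's actual system.
from dataclasses import dataclass

@dataclass
class VettedFinding:
    marker: str
    summary: str
    source: str  # e.g. a peer-reviewed reference, not a blog post

CURATED = [
    VettedFinding("ApoB", "ApoB tracks atherogenic particle count.", "Example et al. 2023"),
    VettedFinding("hs-CRP", "hs-CRP is a marker of systemic inflammation.", "Example et al. 2021"),
]

def build_prompt(question: str) -> str:
    # Naive keyword retrieval; a real system would use embeddings plus clinician review.
    hits = [f for f in CURATED if f.marker.lower() in question.lower()]
    context = "\n".join(f"- {f.marker}: {f.summary} ({f.source})" for f in hits)
    return (
        "Answer using ONLY the vetted findings below. If they are insufficient, "
        "say so and recommend seeing a clinician.\n"
        f"Vetted findings:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt would then be sent to whatever foundation model you use.
print(build_prompt("Why does my ApoB matter?"))
```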
And before we, one last thing,
before we get to the questions,
something else that I think about, since in a lot of ways you're kind of in this middle ground between direct medical practice, what people would do, and sort of the research side, as you know from InsideTracker, right?
And one of the things that I think about a lot is the rise of the so-called Google doctor, which we've all definitely done at some point or another in our life and may even still do. Maybe now, instead of the Google doctor, it's the ChatGPT doctor, you know?
And it's interesting because those things have sparked into, you could say part of things
like the biohacking and longevity field were almost inspired by what the internet did,
but you could just type in a question.
Oh, but then not just that.
Now you have forums, have all these different forums and websites and channels.
Those approaches would have been fundamentally impossible without the internet.
Exactly.
So now we want to think about what's impossible now that will be possible next year or the
year after because of these new technologies.
How are we going to change the way we think about health and longevity?
As the microphone is traveling, can you talk about your swarm of medical AI views?
Yeah, so the thing I've been working on recently that I'm very excited about is getting an entire collection of these AI agents.
And so imagine when you open up a tab for chat GPT and you have a conversation and you're accessing this model that is incredibly capable, as I'm
sure you're aware of now. But this is like going to the deli and asking for one
slice of baloney. Because while you're answering that question in the tab,
everybody else, basically on the planet, is doing the same thing. And so as
impressive as these agents are, we need to remember you're kind of sharing. You're
only getting one slice of the baloney.
What is this going to look like
when you have the whole ChatGPT to yourself?
It's only a matter of time economically to where it gets to the point where you could
afford to have the whole thing.
And then it'll get to the point where you'll have two of them and you'll have three of
them and then you'll have a thousand of them.
And if you have a thousand agents all working for you, how do you prompt them? Do you set it up like a company?
Do you tell the CEO of your swarm, here's what I want you to do?
And they'll have a management team.
Because we imagine if we look at computers, the cost of memory,
having thousands of bytes was a big deal.
Then millions of bytes is a big deal.
Now, trillions is not a big deal.
We're going to be when they talk about tokens, you're going to be talking about mega tokens.
How many millions of tokens a second can the models output?
And then you're going to have a million agents that can each output a million tokens per
second.
How do we anticipate and plan for that reality, which I think we're going to see relatively soon?
And in the area of medicine, if you imagine if there's like kind of a VIP in the government
and something happens to them, literally hundreds of doctors are going to get put on call to help with that situation, right? The entire hospital floor will be-
We've seen it with Trump and COVID a few years ago, yeah?
Exactly. That's what I mean, right? The entire floor, the entire hospital is like,
we're helping this one person.
None of us can afford that.
We can't afford it right now.
But if we look at this technology,
it'll be very reasonable to think that you have the brain
power of a thousand physicians and one's a radiologist
and one's an internal medicine,
and one is talking about your psychology
and one is looking at what you ate for breakfast and so on.
And they'll all have this dialogue.
And imagine kind of like a medical conference,
like we have the Society for Neuroscience,
I think it's 30,000 people show up to that annual meeting.
Imagine having that kind of horsepower for you.
For one little thing, maybe not even a life-threatening situation,
you say, I just want to feel better. Yeah, one comment on that.
I think that that's, in my opinion, that's the real deal:
the agents and the swarm of agents.
I'll tell you why: because getting to a diagnosis is not the problem.
We know, everyone knows, that we shouldn't eat packaged food,
and we should exercise, and we should sleep at least seven to nine hours
and we should do a lot of things that we know, but 95% of us are not doing it.
One of the reasons is it's hard.
If you have those agents and they, for example, fill in your calendar that now it's time to
exercise, and now it's the time for you to eat the banana, and maybe they order the banana from Whole Foods,
and you won't have the chocolate. Because we are lazy,
I won't go to the grocery shop and buy the chocolate.
I think that that's, in my opinion,
that will be the revolution.
It's not the knowledge, it's how to implement it.
And I think that the swarm of agents,
that's the revolution in my opinion.
100%, yeah.
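As a rough illustration of the swarm idea discussed above, here is a toy coordinator that fans one case out to several specialist agents in parallel and asks a final agent to synthesize their answers. The `ask_llm` stub, the specialty list, and the prompts are placeholders, not any production system.

```python
# Toy sketch of the "swarm" idea: a coordinator fans one question out to many
# specialist agents and then synthesizes their answers. `ask_llm` is a stand-in
# for whatever model API you use; the specialties and prompts are placeholders.
from concurrent.futures import ThreadPoolExecutor

SPECIALTIES = ["radiology", "internal medicine", "psychology", "nutrition"]

def ask_llm(prompt: str) -> str:
    # Stand-in: replace with a real model call. Here it just echoes the prompt.
    return f"[model reply to: {prompt[:60]}...]"

def specialist(specialty: str, case: str) -> str:
    return ask_llm(f"You are a {specialty} specialist. Review this case and give one key recommendation:\n{case}")

def swarm_consult(case: str) -> str:
    with ThreadPoolExecutor() as pool:                        # agents work in parallel
        opinions = list(pool.map(lambda s: specialist(s, case), SPECIALTIES))
    briefing = "\n".join(f"- {s}: {o}" for s, o in zip(SPECIALTIES, opinions))
    # A "CEO" agent reconciles the panel into one plan for the patient.
    return ask_llm(f"Synthesize these specialist opinions into a single plan:\n{briefing}")

print(swarm_consult("Persistent fatigue, elevated ApoB, poor sleep."))
```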
So AI is very good at taking a whole lot of data,
putting it together and coming out with an output.
But AI can't do anything with the subtleties of humanity.
We have a Parkinson's neurologist.
So can AI pick up a masked face, a change in voice?
AI can't do physical examinations.
So AI could just spit back data that was put into it.
So that's a great point.
And it comes to this sort of different eras of AI.
And so now we're in this era
where we have these language models
and we can talk to them in English.
And I would argue, we could debate this,
that they're approaching artificial general intelligence
if they're not there already.
But what we gotta remember is while that model
can't examine a face or listen to the voice changes,
there are other, what used to be called narrow AI models
that have been developed for the last 10, 15 years,
some longer, that can do that.
Now they're very cumbersome pieces of code, they're not easy to work with, and you cannot
talk to them in English.
But in the next couple of years, those two kinds of fields are going to be merging: these traditional
deep-learning-type AI models and the language models.
And then I think they're absolutely going to be able to look for facial droop and voice
changes and shuffling-gait analysis and things like that.
And how about understanding the subtleties of emotion?
Many times a patient will come into your office
and basically you're gonna have to pull information
out of them and you've got to understand emotionally
how that patient is to ask the proper questions.
There may be proper questions AI could ask
based on a sequence, but it cannot pick up emotion.
Subtleties of humanity.
And that's the other thing that I wanted to get to
when you're talking about memory.
Memory has a big emotional input.
I think it's debatable whether or not
they can analyze emotion.
I would argue that they can.
There's a new one that just came out.
I'm sure some of you have seen it.
We wanted to run it later.
It's this sesame.
And it's incredibly good at not just recognizing
your emotion, but predicting emotion.
We were just talking to it yesterday.
We were saying, well, how is it?
Because it claimed it didn't have emotions.
But it emotes, right?
It's very expressive.
So in general, I think the safest strategy is even if you think it can't do that thing today, XYZ, assume it will be able to in the next 6 to 12 months.
As long as we can get the data for it, which we have.
In response to the doctor here, I don't think AI is going to replace doctors right away, even though in areas like radiology and other
areas it's getting better than the average doctor. Because what's going to happen is that
the doctor is going to take on, is mostly going to spend a lot more time with patients
one-on-one interpreting the AI outputs for the patients and providing that personalized
human connection.
And so, yeah, some doctors probably will be replaced outright, but I think there will
be a role for some time.
Right now we are building some benchmarks.
And one benchmark shows that the latest AI actually has better EQ than 62% of physicians, and it will
probably get to about 90%. There is a cultural bias, so in Japan,
where they trust machines a little bit more than they do in America, it's
actually somewhat higher than it is here, but it is actually very competitive now with physicians.
I'd like to hear more about the diffusion model.
Autoregressive is very old-timey at this point.
It's like months old now.
But if we can talk about diffusion,
I think that's going to unleash a lot of capability.
The reason why I was excited about that.
So diffusion is you can do it in parallel.
So the thing about autoregression is it's slow.
It's inherently slow because what you have to do
is you have to produce the output, then rerun it.
It's inherently sequential.
And that's bad in terms of runtime.
Now, diffusion models can do this sort of thing in parallel.
Diffusion models just try to figure out
the entire sequence all at once.
So there's an interesting...
So let me just back up real quick.
I think language is inherently autoregressive, like language itself.
And the only way to solve it is to do it autoregressively.
I think there's a hard...
So my prediction is that you're not going to be able to do diffusion.
Now, people may have read that diffusion models are doing a pretty good job,
but it's on code.
It's on code.
And code is a human artifact that's not built autoregressively.
It's not how it functions.
The syntax is not...
It's not actually built for that.
It's not... That's not how it's generated.
In the case of language,
I don't think we're ever going to get,
so my prediction is you're not going to be able to do it in parallel.
It has to be done.
The language itself contains within it
this autoregressive predictability,
that's how you have to do it.
I think that in some ways we're stuck doing the serial;
we're not going to be able to do diffusion.
But prove me wrong. This is sort of an empirical
claim, which is what makes this all so cool.
We also have different modalities with which we experience the world. This sort
of classic left versus right brain paradigm. And one part of us sees the
world in this sequential serial kind of fashion. And the other part of our mind
sees things all at once,
like recognizing a face.
We don't really decompose that as sort of just there.
And I suspect that maybe these different mechanisms
that we're discovering in technology,
autoregression and diffusion,
maybe these are both valid models.
We just need to be looking in different aspects
of the brain to find them.
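Since both generation styles came up, here is a purely schematic contrast between serial autoregressive decoding and diffusion-style parallel refinement. Neither function calls a real model; `predict_next` and `denoise_all` are random stand-ins that only mimic the shape of each procedure.

```python
# Purely schematic contrast of the two generation styles discussed above.
# `predict_next` and `denoise_all` are random stand-ins for a causal-LM step
# and a diffusion/refinement step respectively; no real model is involved.
import random

VOCAB = ["the", "patient", "slept", "well", "today", "."]

def predict_next(sequence: list[str]) -> str:
    return random.choice(VOCAB)                    # stand-in for a trained next-token model

def denoise_all(sequence: list[str]) -> list[str]:
    # stand-in for a model that re-predicts every masked position at once
    return [tok if tok != "_" and random.random() < 0.7 else random.choice(VOCAB)
            for tok in sequence]

def autoregressive(length: int) -> list[str]:
    seq: list[str] = []
    for _ in range(length):                        # inherently serial: one token per step
        seq.append(predict_next(seq))
    return seq

def diffusion_style(length: int, steps: int = 4) -> list[str]:
    seq = ["_"] * length                           # start from an all-masked sequence
    for _ in range(steps):                         # a few parallel refinement passes
        seq = denoise_all(seq)
    return seq

print("autoregressive:", autoregressive(6))
print("diffusion-style:", diffusion_style(6))
```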
You know, three things.
First is, healthcare has very unstructured data,
unlabeled data, in terms of having your model
actually turn from detection to diagnosis.
There's a thin line of difference, right?
So when you say diagnosis, you're talking about CPT codes,
you're talking about FDA approvals.
So my question is very, very simple.
Number one is, how can AI, LLMs, and all the models structure and label data for the next generation of researchers?
My second question to you is what's the workflow process in terms of taking AI to actually clinics and healthcare in terms of making a workflow, getting FDA approval, putting it into a device trial, having FDA come and see what you did, and getting it through.
That's how you can put it in healthcare administration.
My third question is, what challenges do you see? Because convincing doctors, as he said, is very, very correct, right?
Thin line of difference between diagnosis and detection. We can claim that it can detect, but claiming diagnosis is a
very big thing. What is the insurance process? That's a different industry we're talking
about, right?
Yeah, I can talk about that because I actually worked at Mass General Brigham for several
years and I was actually working on deploying AI into the radiology clinic. So I can talk about the FDA briefly. They've approved over a thousand AI tools.
Most of them are not commercially successful because hospitals don't have any money to
spend on this sort of thing, and they're basically beholden to the insurance companies to
pay all the bills.
You mentioned the CPT codes. There are actually, I think, two CPT codes for AI, but generally insurance doesn't cover
any kind of use of AI.
So it's very challenging, actually, for hospitals to find money for AI.
And that's one of the reasons it's going to be a lot slower.
The percolation of deployment of the technology is going to be a lot slower. You'll probably have AI doctors
at home that you're using before they're actually used in the hospital. But the other thing is the FDA.
Another reason it's going to be slow getting into the hospital is because the FDA
has not released any guidance on more general AI.
I actually think that people are going to be using these AI doctors, general AI doctors at home,
unfortunately, long before they're actually used in the clinic.
One comment I had about what the doctor just said. You know, as a physician I am
worried about losing my job, and I thought that, you know, it was
going to be the pixel-based people who would lose their jobs, radiologists,
pathologists, and that as a neurologist, with so much touchy-feely diagnosis
going on, I wouldn't be at risk. And then I consulted with a startup somewhere in Illinois.
They were sending me videos of patients
with essential tremor and Parkinson's,
and I was diagnosing based on the video.
And they were actually looking at facial recognition
and voice of the patient, and based on that, diagnosing them.
And we had something like a 90% concordance.
But I also wanted to say to Dan that, you know,
on the one hand, with LLMs you're getting very efficient at reading through
unstructured medical records, but there's a lot of untapped medical data.
For example, I go to the OR even as a neurologist to map the brain for deep
brain stimulation.
Just a second, one comment on what you said about replacement of physician.
Think about the self-driving car.
We are talking about it for the last, I don't know, 20, maybe 50 years and it's still not
here.
So, I think that what a clinician is doing, and I have high respect for clinicians, is
a bit more than driving.
So I don't think that it will happen in our generation.
That's one.
The second is, as some other people said, the human touch is very important and the
people want the human touch.
InsideTracker was built to be completely automated and scalable.
And we've seen that at the end of the day, the customer coming to us and asking, okay,
what should I do?
They see everything on the screen. They have like five recommendations, five things that
they should do and they don't understand it. They need someone to help them. So I don't
think that the clinician will be replaced in the-
Thank you for giving me hope.
I just wanted to make one comment, if I may, as a physician, actually to my physician colleague
here. I think the talk about, uh, sort of physical examination
being something AI can't do,
that's very much part of the current, or even the sort of historic, paradigm of
medicine. But it's really a proxy for what's going on inside, and what
imaging is showing us is another diagnostic technique.
So while physical examination remains important, I think it's going to, the new paradigm of
medicine will actually be based on large data sets, et cetera, combined with imaging.
And I think it's probably, in my mind, it's probably wrong to focus on the lack of physical
examination thinking AI won't be able to take over from what doctors do because I think
it will just become less important as other more accurate methodologies take over that.
I was going to say, the current way we spend, the economy of medicine, is dependent
on this sort of diagnostic-code infrastructure, which may be the wrong model for these models,
because now what you're going to want to do is, things should just take in your data and make recommendations, not give you human-nameable diagnostic categories.
That's how we had to do it because that's how humans could practice medicine.
But they can think more subtly and they can make predictions that would say with this
genetic profile and this cardiovascular measures and your current lifestyle and all
of that stuff that frankly human doctors are not equipped to deal with as a data stream.
They can't really think those things.
It will be able to and it will be able to say, you know what, you should reduce sugar
a little bit and you should exercise 15 minutes more a week.
Why?
And maybe the insurance company wants to know why, right?
And it'll say because the data says that.
But it's more than that.
I think that today, if you look at what clinicians are doing, and we have some clinicians here,
so correct me if I'm wrong, but basically they sit at the EMR and they're entering a
lot of information, the 15 minutes are gone, and that's it.
The next patient
is coming.
And I think that what will happen now or in the future is you'll have all the information
in your computer or tablet or whatever, and you have time to understand the user and also
to provide the intervention that will work for him.
There are some people, we know that now, we saw that some people are, for example, basically lazy, not that they're sick, and now they have GLP-1 for that.
So we are seeing that they will start to divide the population into more and more buckets, and then maybe they will help us know what bucket this patient is in, and then the job of the clinician will be to communicate it and to help him
to implement the intervention.
In terms of chronic disease prevention, 80% of chronic diseases can be prevented, but
how well can AI predict and model, I guess, the dynamic interactions happening with
multifactorial chronic disease?
Like, for example, someone who has diabetes
who may develop cardiovascular disease or chronic kidney disease: how well
can AI differentiate between the biomarkers from one particular disease versus the other,
looking at it all together instead of just one thing at a time, with the multi-agent approach?
Yeah, I think that, let's start with what's happening right now.
Our healthcare system is basically sick care.
The clinician won't treat you, or won't look at you, or will kick you out of the office, if you're
not sick.
And we need to move to a prevention system.
Basically, starting to look at... I went to my clinician and told her that
my ApoB is a bit high.
She told me, what is ApoB?
Okay.
So, and she, I'm in Boston, so it's not like, I don't know, in Alabama.
So we have a problem with a clinician that were trained or learned like 30, 40 years
ago and they know what they know.
They are very busy.
They are poor people.
I really don't want to replace any clinician.
They are working hard and they are doing their best, but they don't have time to go to PubMed
and read a new paper.
So I think that what the AI can do is send them a summary, or even look at you when you
are coming to the practice and immediately... the AI is great at summarizing
data. Look at all the data available, all the history that you have and all the medical
data that is available, and give him one sentence or one paragraph that he can read to you and
basically explain to you where you are, and then provide some interventions that you
can do.
The industry and employment for the younger generation is going to be totally skewed and
flipped as time progresses.
So what can we, the younger generation, do to prepare for all of this?
Yeah, fantastic question.
Historically, the answer was to specialize and to find a particular niche and go into
that, whether it's in medicine or in general,
to just be a world expert in a very, very narrow thing.
That, I don't think is going to be competitive anymore
compared to having a broad landscape view
of what's actually happening.
So I would imagine, I would encourage you to run
into these tools as fast as you can, try them.
The thing we were talking about earlier with Vibe coding,
the ability to create software is going to explode.
Right now, we don't realize it,
but software is extraordinarily expensive.
And there's lots of apps and tools and things
that we would love to have as individuals,
as companies in your practice.
And you have to rely on the software industry
to create them because a single phone app,
you know, costs a half a million bucks to get out the door.
That's going to completely change.
So I don't think we're going to need fewer software developers.
I think we're gonna need more software developers
than ever before, and it's gonna change.
I have a stack of punch cards in my office
because they were here at FAU.
We had an early IBM computer here.
At the time, at one point,
that's what it meant to be a computer programmer.
You were literally down in the ones and zeros.
And now we can think in English.
It's the dream of computer science from the 1950s: that we just talk in plain language and we get software out of it. And I think, in some sense, the killer app of LLMs is that they can write code.
They can create working software.
And historically that took teams
of really trained, talented people to do.
And now we're getting this kind of literacy.
Just real quick: if we go back to the Middle Ages, there was no word, no concept of being illiterate. There was no expectation that most people would be literate. The idea was that you had this professional class of scribes, and they took care of that for you. Nowadays, we don't have a word for the health equivalent of being illiterate, because there's a specialized class of physicians, and it's your job to tell me what it is, and I don't bother with that.
I don't think we're going to be as comfortable outsourcing that.
We're going to take responsibility for our own medicine, and we're going to need the AI tools to do that.
So we're going to be able to create these custom apps and we're going to be able to
create this customized, personalized medicine for everybody.
Yeah, a couple of comments about that. I think that's a very good question. As the founder of a company, I can tell you that you have to prioritize a lot of what the team will do, and you're maybe doing 1%, maybe even less, of what you want to do. So I completely agree with Will: we'll need more software developers. What will change is that each software developer will be much more efficient and will do more.
For example, if you are testing a model and trying to find the best model for a specific question, instead of looking at two different approaches, we'll look at 20 different approaches, and we'll do it in half the time.
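A minimal sketch of that "try 20 approaches instead of two" workflow: screen several candidate models on the same question with cross-validation. The dataset here is synthetic and the model list is arbitrary; it only illustrates the pattern, not the company's actual pipeline.

```python
# Minimal sketch of screening many candidate models on one question with
# cross-validation, instead of hand-testing one or two. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for "a specific question": a binary outcome with 20 features.
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "svm_rbf": SVC(),
    "knn": KNeighborsClassifier(),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```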
Another analogy is the industrial revolution that happened, I don't know, 100 years ago. A lot of farmers basically lost their jobs because one combine can do the work of, I don't know, 1,000 people.
But if you think about code, code is not limited. Land is limited; code is not limited. So basically, in my opinion, we'll need more coders, and basically everyone here, even the biologists, can be a coder right now. Maybe not as good as someone who can write an algorithm, but everyone can be a coder. I'm not a coder, but I can be a coder right now. So I think that's a great achievement. And if you, as a high school student, ask me what to do: learn to code, learn as much as you can, be the best that you can, and be curious.
All right. I just have two quick comments. Sorry, I have to interject.
About the software engineering thing, I think I would not do a degree in computer science, just because computer science is mostly going to teach you the theory of algorithms and all this other stuff that's not very useful, in my opinion.
Unless you're really interested in that and you want to try to be a professor, I wouldn't
do that.
And I think a lot of software engineering jobs will be replaced.
So I would definitely consider a different career.
But I wanted to mention this thing about EQ.
People are mentioning, oh, AI has way better emotional intelligence. I think the study people were referring to looked specifically at responding to messages in a patient portal. Because the doctors are really overworked, they tend to give very terse responses to those messages, whereas the AI gives more nuanced, empathetic responses. So in that context, yes, the AI is much more pleasant to talk to, more aware of emotions,
but I don't think it negates what I was talking about earlier that people, they still want
that human connection.
And there's actually an economist called Robin Hanson.
He'll be speaking at FAU tomorrow.
Oh yeah, that's right. He's a real polymath, actually. They've done studies where they found that applying more healthcare resources... people seem to consume healthcare resources at a much higher rate than you would expect, and there are diminishing returns. They showed that doubling the amount of healthcare you utilize doesn't improve your outcomes. So there's this puzzle: why are people consuming so much healthcare? And
he argues it's because people have this emotional need to feel like they're being
cared for.
So really what they're doing is they're fulfilling that emotional need.
I don't know if AI can fulfill that; maybe people will get that from AI, but...
I want to ask a question.
Recently health plans and large entities such as pharmaceutical companies, et cetera,
and also very large practices which are owned by hospitals, et cetera, are putting in their
contracts provisions that prevent the distribution of patient data.
In other words, who owns the data?
So this whole evening is built around data.
The issue is not so much that data is getting consumed. It's who owns that data, and is it going to be released?
Is it primary data? Is it secondary data? Is it peer-reviewed data?
Yeah. Well, under HIPAA, patients do have a right to get their data.
But what I observed when I was at Mass General Brigham and other hospitals is that it can be very challenging, especially with radiology images. If anyone has ever tried to get one of their radiology images, it can take a long time, and they have to give you something like a CD-ROM or a DVD. It's very challenging, and I actually think the hospitals are making it even harder because they're realizing the value of their data. They're also more reluctant to share data with researchers, I think, because they're realizing that the data has enormous value. I think big companies like Microsoft and Google are working with hospitals on agreements where they can basically bring in an LLM or foundation model and train it on all of the data. So I think that's how it will be done. But if a patient wants to get all their data and upload it to some sort of cloud service like ChatGPT, unfortunately, I think it's way harder than it should be.
Just real quick, while I think it's amazing going back and looking at all these data sets,
it's essentially kind of saying, you've got all these great leftovers in the fridge.
We collected this data for some other purpose and let's go harvest and mine it.
And that's fantastic.
But I think now that we understand the value of these tools,
we're going to be collecting data at a scale
that we've never seen before.
Whereas before we took a few data points.
Now, one of the things I've been thinking about is this sort of Star Trek replicator, or not the replicator, the tricorder: you sort of swing this smartphone-style device in front of someone and it's capturing all kinds of data.
So I think we need to be thinking about how do we harvest the value of the data we've
already spent the money to collect, but maybe more importantly, how do we revolutionize
the healthcare system so that it's actually capturing the data, structuring it, and putting
it in a way that will help...
I think we got an important data collection question right over here.
Hi, everyone.
Hope you're enjoying today's episode.
If you're hungry for deeper dives into physics,
AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my Substack. Subscribers get first access to new episodes, new posts as well, behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrims. By joining, you'll directly be supporting my work and helping keep these conversations at...
When someone, for example, gets sick, and he mentioned who owns the data,
there are already massive amounts of, for example, CAT scans available for people who may have passed away from certain illnesses, or survived and went into remission as they progressed through the disease. So what is that data doing right now? Is it just sitting dormant somewhere, and can it actually be utilized?
Well, like with the medical imaging data,
the hospital does own it in the sense that they can use it for
commercial purposes and they can use it for research.
They can use it for a lot of things.
Like I said before, there are big companies like Google,
Microsoft that are working out contracts with hospitals
to essentially train GPT-4 on all of the radiology images and all of the text reports.
And I think if you train something like GPT-4 on all of the images in a large healthcare
system and all of the reports, I actually think you probably have something at the level of a radiologist.
I saw this South Korean company named Vuno that's very adept in AI and radiology. So I wanted to know: what are the roadblocks here in the US that are stopping us from doing the same thing?
The FDA is one of the strictest regulatory bodies in the world.
So, actually, it's almost more exciting to think about some of the emerging countries
where things can be deployed more readily and there's actually more need.
So I'm not surprised that you're seeing AI deployed in other countries first, just because of the FDA.
To the physician's point about social determinants of health: they actually contribute 30% to a patient's health outcomes. 30%. And a person can't change their zip code overnight, where they stay. Where the patient lives, the zip code, determines to some extent how long the patient will live.
It is unfortunate, but it is the truth.
From that perspective, the stress of living in that particular zip code affects the person
at a molecular level and a cellular level.
So what are we going to do about those things?
Move to the right zip code.
It is not possible.
At least, you know, has anyone here tried to get a therapist?
It's very hard.
My point at the beginning of this was that doctors are only trained in anatomy, physiology, and the biological component. They're not very good at handling how to change behavior and psychological stuff, but it seems AI can provide that sort of counseling. I mean, we do know how to change behavior.
That's one of the things in behavioral therapy, right?
It just takes a lot of time talking.
Most people I know that are sick, they do want to get better.
People with chronic fatigue, like I was talking about the 2 to 4% of people with chronic fatigue,
they desperately want to get better.
When they go to the traditional healthcare system,
the doctors have no idea how to treat it because they have such a complex condition.
Okay, let's hear from Dan Van Zandt.
Yeah. So I think we have two perspectives on AI.
We have the skeptical perspective of like,
oh, well, AI won't solve this problem or it won't solve that problem.
We have the hype perspective that this is going to change everything.
But I think practically my view is,
it's not going to solve all the problems
in medicine and society,
but it's going to make incremental progress
on some of those.
I'm curious what each of the speakers on the panel sees as the best target in the immediate future.
I'm talking the next three months,
maybe if, you know, a company was sufficiently motivated
where AI could make incremental progress
on a specific problem in medicine, not solving all of the issues with society and zip codes and everything else,
but a very specific thing that you see AI as being really helpful towards.
Okay, Elon, I kind of want to split my answer. I think it's this sort of AI-first approach, maybe; I don't know if this is going to happen in the next three months, but it's where people are able to intelligently pre-diagnose and make their preliminary medical decisions, like should I go to the doctor or not, in a more informed and intelligent way, potentially.
I don't see which industry and which company is harnessing that exactly.
So it's hard to see the profit motive, which means it doesn't ever happen.
Right?
With ChatGPT, I guess, OpenAI themselves could see this as a utility and then sell it as a utility.
But I do think that there is a very strong possibility in the very near future, I'm hopeful,
that we're going to just see straight up AI breakthroughs in terms of actual hypotheses.
You might know a little bit about this.
Deep Research and these other engines, I haven't gotten access to them yet, but Coscientist, which is a Google product, is currently in a small beta.
I signed up for it.
I got emails back.
I haven't seen it yet. But these may actually be really powerful tools to drive science forward.
We talked about the data and data ownership, it's very possible that companies that are
actually sitting on silos of data, they don't want to share them, they need the profit motive,
but they can then use these tools maybe to actually develop novel drug interventions and things like that.
So I think that soon we'll probably will see some actual effective practice of science using these.
Okay, now the three months, the next three months in AI, what's a specific problem that can be
solved? Don't paint the whole picture of the world changing in 35 years. What's the next three months
looking like? Well, I think it's just the ability for everybody to get their own physician, essentially.
This kind of concierge medicine that only the very wealthy can afford.
In some sense, we look at how much healthcare costs, you know, these trillion-dollar price tags. As a culture, we can't afford it. We have to change something about that. And so I think the opportunity is to be able to talk to these expert systems, as we used to call them, at a very low cost.
And I just want to mention this sort of polymath approach
that's come up a few times as a theme, that what's so spectacular about,
maybe not even the current version, but the promise of these systems,
it's not just that they will know internal medicine and cardiology,
but they will be therapists.
They will become your friend.
They will know how to talk to you.
They'll be your dietician.
They'll be your gym coach and this whole thing.
And so they're gonna be talking to you
about your blood pressure,
but talking about it in the context of,
hey, you know you could save some money this week
on your grocery bill if we make these recipes.
It will be so kind of holistic because it's polymath.
It's not just a physician, it's all of these things at once.
And I think that's something you can do today.
You can go into the prompt and say: can you please simulate being the following ten things and talk to me about what I should do today?
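A minimal sketch of that kind of multi-persona prompt. The roles and wording are just illustrative; the printed string can be pasted into any chat window or sent as the system or user message of an API call.

```python
# Minimal sketch of a multi-persona "polymath" prompt. The roles listed are only
# examples of the idea described above, not a recommended clinical configuration.
ROLES = [
    "internal medicine physician", "cardiologist", "dietician", "gym coach",
    "sleep specialist", "therapist", "pharmacist", "physical therapist",
    "health insurance navigator", "budget-conscious meal planner",
]

def build_polymath_prompt(situation: str) -> str:
    role_list = "; ".join(ROLES)
    return (
        f"Please simulate the following ten advisors at once: {role_list}. "
        "Given my situation below, give me one coordinated plan for today, note where "
        "the advisors would disagree, and flag anything I should verify with a real "
        "clinician rather than acting on alone.\n\n"
        f"My situation: {situation}"
    )

print(build_polymath_prompt(
    "Blood pressure trending high this week, tight grocery budget, poor sleep."
))
```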
Gil, yeah, I will start with a joke and then I will...
So a joke about AI.
So a CEO of a company wrote a few bullet points that explain the vision of the company. And then one of the executives said, no, no, that's too short, please extend it. So he used AI and wrote ten pages. Then one of the employees received the ten pages and asked the AI, okay, summarize it into five bullet points, and got back the same thing. I think that's the power of AI: basically, take big data, summarize it, and get to the point. So I think that we can, maybe in the next three months, and it's available today, take all of PubMed and all the information and come to a patient with five bullet points.
That's what you need to do today.
Yeah, and a similar thing, kind of what I was talking about before, is that the AI could interview the patient before they see the doctor and then summarize the most important points, instead of just doing an intake form where the data is very compressed into something like a five-point scale, like how's your sleep, zero to five. The AI can spend more time exploring that, capture the salient points, and review all the medical history as well. So yeah, I think chart summarization is a very low-hanging-fruit application we're already seeing.
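A minimal sketch of the "pull the relevant literature and boil it down to a few bullet points" idea: fetch a handful of recent PubMed abstracts through NCBI's public E-utilities endpoints, which could then be handed to an LLM summarization call like the one sketched earlier. The query string is illustrative only, and this is not any panelist's actual pipeline.

```python
# Minimal sketch: pull a few PubMed abstracts for a clinical question via NCBI's
# public E-utilities, ready to be condensed into bullet points by an LLM.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def fetch_abstracts(query: str, max_results: int = 5) -> str:
    # Step 1: search PubMed for matching article IDs.
    search = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed", "term": query, "retmax": max_results, "retmode": "json",
    }).json()
    ids = search["esearchresult"]["idlist"]
    if not ids:
        return ""
    # Step 2: fetch the plain-text abstracts for those IDs.
    return requests.get(f"{EUTILS}/efetch.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "rettype": "abstract", "retmode": "text",
    }).text

# Illustrative query; the returned text would then go to a summarization prompt.
print(fetch_abstracts("apolipoprotein B cardiovascular risk intervention")[:2000])
```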
I did want to say something.
So this is a medicine and AI event. We have lots of different types of people here. They all have incredible skill sets and areas of expertise. But I think something important to do here today is to talk to all the physicians in the audience, people like Dr. Dalvi, who spent years of their lives, arguably the last years of their youth, giving up a part of themselves so that people like us could live healthier today and even be in this position.
Some of us in this room may even reach centenarian status, or, if we hit longevity escape velocity, hopefully more. But I want to dedicate a little bit of time today to all of the physicians in the audience,
the people that gave up their lives to save ours.
What do you think about AI?
How would you use AI?
Are you against it?
I'll give it to Dr. Dalvi.
Thank you, firstly, for inviting me, and thanks to the panel for a phenomenal discussion.
So I've used ChatGPT in a practical sense. For example, one of the problems with neurology, one of the high-burnout professions, is that there's so much documentation involved. I've created custom GPTs where I will type in a paragraph's worth of what I've seen in the clinic and talked to the patient about, and it will convert it into a level-five note that is billable, and I get my money for it. The advantage is I don't take the computer into the patient's room. I don't use the EMR in the room. I have contact with the patient. So they are very happy, I'm very happy, and my hospital billing system is happy. That's a practical use.
It's also a phenomenal research tool. If I'm writing a paper, instead of reviewing PubMed article after article, I'll use it to pull out the five or seven that are really relevant to me and get started.
For me personally, it's also a good teacher. I've suddenly become interested in philosophy in my old age. So I asked Claude to imagine Dostoevsky, Tolstoy, and Turgenev, my favorite Russian authors, discussing the myth of Sisyphus. And it came up with a little short story that explained it to me like no professor would have. So phenomenal uses, practical as well as intellectual.
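A minimal sketch of the "dictated paragraph in, structured note out" workflow described above. This is not Dr. Dalvi's actual custom GPT; the section template, the model name, and the JSON-mode setting are assumptions, and any generated note would still need physician review before billing.

```python
# Minimal sketch, not the actual custom GPT described above: turn a dictated
# clinical paragraph into a structured draft note for a physician to review.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You convert a clinician's dictated paragraph into a draft visit note. "
    "Respond in JSON with the keys: chief_complaint, history_of_present_illness, "
    "exam, assessment, plan. Use only facts present in the dictation; put anything "
    "ambiguous under an additional key named needs_clinician_review."
)

def draft_note(dictation: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        response_format={"type": "json_object"},  # ask for a JSON object back
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": dictation},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example (synthetic dictation):
# note = draft_note("Follow-up for essential tremor, improved on current dose, ...")
# print(json.dumps(note, indent=2))
```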
Awesome.
Thank you, Dr. Dalvi.
Well, I love AI.
I think it's wonderful.
I think it's definitely going to change a lot of things. I use it nearly every day. As a matter of fact, with the gentleman who was here with me tonight, I'm working on a new company that's going to be using AI in an area of healthcare that has not been addressed at all, which is the long-term care arena.
We have an aging population that has not been addressed very well by the government, and we need to actually make some great strides in that area. So we've got to be able to review and go through all of the diagnoses that these people have, what their overall needs are, and how we're going to move them forward
from where they are right now.
So I think it's wonderful.
I commend all of you guys for doing it.
Please keep it up, okay?
I'm not afraid of losing my job.
I'm gonna be dead anyway by the time that all comes out.
But I'm not afraid of losing my job,
and I would tell anybody, you know,
100 years from now, they're not going to lose their jobs, because you still need the human mind. You need the interaction. You need the touch. You need
everything else that's going to go along with it. Right? Unless you're going to have a Mr. Data
from Star Trek, okay? It's going to be hard to kind of eliminate that. So I appreciate all you guys.
And by the way, we actually have an MD candidate here who came all the way from Boston just for this event. What would you say to people right now who are the next generation of physicians? What would you say to them?
Well, Godspeed to you. I mean, I was just talking to one of my colleagues, the gentleman over there, the other neurologist, and we just see medical education going in a different direction. It's all about passing a test now, okay? And I have other friends who are teaching right here at the university, at the medical school, and they have complained that the students are not focusing on the clinical aspect of it: the auscultation, the percussion, all of the different things that we did.
And that's what you need to do, because that's going to make you a good doctor.
It's going to make you a good clinician, good diagnostician.
Okay, passing a test is one thing.
Yeah, that's great.
Okay, and you will.
You will, I guarantee you will.
But you've got to go the distance, right?
Because that's a person, right?
That's a person.
That's a person. All uniquely different.
And you have to treat them as such. Future doctor right here.
I'm a medical student in Boston. I go to Boston University School of Medicine, in their accelerated BA/MD program. In terms of AI, I'm definitely interested in the way it influences our medical education right now, because a lot of students are using AI to teach them the material, because it can teach them better than some of their professors, unfortunately. And also just due to the time constraints, like you're saying, passing exams is very hard. So that's another area to think about in terms of clinical AI: medical education.
What are your concerns with AI? AI in medicine?
In the clinic, or for patients at home who will ask their questions to GPT?
You know how we were talking about the EQ thing?
I think it's important to think about when you're stuck on the phone with one of those robotic voices
and they're not listening to you and you're trying really hard to explain something,
AI has come a long way.
It is very good at reaching states that are close to those,
which I would say are most intimate with our phenomenology.
But to be very blunt, I don't know if AI can experience ecstasy
the same way that I can.
How would I describe it?
I think it could try.
I don't think it could experience an orgasm per se.
So how would you teach sex education in the same way?
Or addiction. It's a real thing.
These homeless shelters that we go into and we volunteer with, you're talking to people about these very intimate feelings.
So that's where I have some concern, but yeah.
Thank you.
Yeah.
So I would say I'm very bullish on AI, um, from two perspectives, really.
One is offering really the promise of universal healthcare, which currently
isn't available to many people in this country or around the world because of cost and access and availability.
And I think to have, as one of the panel members mentioned, to bring that cost of consultation
down to near zero is going to be an incredible thing.
And many people suffer because they don't have access to pretty basic diagnosis and
treatment and I think that's an incredible opportunity.
I think the other great opportunity is just using these very large data sets,
getting more insight into complex diseases and drug development and those sorts of
things, and also longevity, this sort of emerging science of both lifestyle
intervention and drug interventions for longevity.
And I think a lot of these are very complex problems that AI is very well suited to tackling.
Even just plain trial and error.
Do you think that we could use AI not only to solve unusual medical diseases, but to come up with very unusual solutions to them that we never thought about before?
Well, I think that's absolutely right.
One anecdote I'd give you: I remember, as a medical student, and you'll know this, you're taught how to take a history and examine a patient. It's a very structured approach to getting information from the patient, both reported information and physical findings. And I remember that in England we call them consultants, attending doctors here,
and they would hear the first one or two lines of that story and immediately jump to the answer. That's what we call experience. But that's also what AI will be brilliant at, because when it has seen millions and millions of examples of somebody reporting their symptoms, together with what they turned out to be suffering from, it will quickly pick up on those patterns just like an experienced physician will. It will give better answers more of the time, and people will receive better care, in my opinion.
Thank you.
Why don't we have a
theory of everything for medicine or biology? This is something that's been chased after forever
in mathematics and physics. Why isn't it there in biology yet? And could having something like that also let us not have to die the way we do anymore, not lose the people we care about?
Yeah, just real quick, I want to thank Curt for traveling. It always makes the discussion quite interesting. And I want to thank Addy,
who did a tremendous amount of work organizing our speakers.
I've received several messages, emails, and comments from professors saying that they recommend Theories of Everything to their students, and that's fantastic.
If you're a professor or a lecturer and there's a particular standout episode that your students
can benefit from, please do share.
And as always, feel free to contact me.
New update!
Started a Substack.
Writings on there are currently about language and ill-defined concepts as well as some other
mathematical details.
Much more being written there.
This is content that isn't anywhere else.
It's not on theories of everything.
It's not on Patreon.
Also, full transcripts will be placed there at some point in the future.
Several people ask me, hey Kurt, you've spoken to so many people in the fields of theoretical
physics, philosophy, and consciousness.
What are your thoughts?
While I remain impartial in interviews, this substack is a way to peer into my present
deliberations on these topics.
Also, thank you to our partner, The Economist.
Firstly, thank you for watching, thank you for listening.
If you haven't subscribed or clicked that like button, now is the time to do so.
Why? Because each subscribe, each like helps YouTube push this content to more people like yourself.
Plus, it helps out Kurt directly, aka me.
I also found out last year that external links count plenty toward the algorithm, which means that whenever you share on Twitter, say on Facebook or even on Reddit, etc., it shows
YouTube, hey, people are talking about this content outside of YouTube, which in turn
greatly aids the distribution on YouTube.
Thirdly, you should know this podcast is on iTunes, it's on Spotify, it's on all of the
audio platforms.
All you have to do is
type in theories of everything and you'll find it. Personally, I gain from rewatching
lectures and podcasts. I also read in the comments that, hey, TOE listeners also gain
from replaying. So how about instead you re-listen on those platforms like iTunes, Spotify, Google
Podcasts, whichever podcast catcher you use. And finally, if you'd like to support more conversations like this, more content like
this, then do consider visiting patreon.com slash Curt Jaimungal and donating with whatever
you like.
There's also PayPal, there's also crypto, there's also just joining on YouTube.
Again, keep in mind, it's support from the sponsors and you that allows me to work on TOE full time.
You also get early access to ad free episodes, whether it's audio or video, it's audio in
the case of Patreon, video in the case of YouTube.
For instance, this episode that you're listening to right now was released a few days earlier.
Every dollar helps far more than you think.
Either way, your viewership is generosity enough.
Thank you so much.