Huberman Lab - Mark Zuckerberg & Dr. Priscilla Chan: Curing All Human Diseases & the Future of Health & Technology
Episode Date: October 23, 2023

In this episode, my guests are Mark Zuckerberg, CEO of Meta (formerly Facebook, Inc.), and his wife, Dr. Priscilla Chan, M.D., co-founder and co-CEO of the Chan Zuckerberg Initiative (CZI). We discuss... how CZI plans to cure all human diseases by the end of this century by funding transformative projects and technologies at the intersection of biology, engineering, and artificial intelligence (AI). They describe their funding and development of CZI Biohubs and the progress already underway to accelerate the understanding of cell function, pathways, and disease. Then, Mark discusses social media, its impact on mental health, and new tools for online experiences. We also discuss Meta's virtual reality (VR), augmented and mixed reality tech, and how AI will soon completely transform our online and physical life experiences. This episode ought to interest anyone curious about biology, medicine, mental health, AI, and the future of technology and humanity.

For the full show notes, including the episode transcript (available exclusively to Huberman Lab Premium members), please visit hubermanlab.com.

The Brain Body Contract Tickets: https://www.hubermanlab.com/events
Pre-sale password: HUBERMAN

Thank you to our sponsors:
AG1: https://drinkag1.com/huberman
Eight Sleep: https://eightsleep.com/huberman
LMNT: https://drinklmnt.com/huberman
InsideTracker: https://insidetracker.com/huberman
Momentous: https://livemomentous.com/huberman

Timestamps
(00:00:00) Mark Zuckerberg & Dr. Priscilla Chan
(00:02:15) Sponsors: Eight Sleep & LMNT; The Brain Body Contract
(00:05:35) Chan Zuckerberg Initiative (CZI) & Human Disease Research
(00:08:51) Innovation & Discovery, Science & Engineering
(00:12:53) Funding, Building Tools & Imaging
(00:17:57) Healthy vs. Diseased Cells, Human Cell Atlas & AI, Virtual Cells
(00:21:59) Single Cell Methods & Disease; CELLxGENE Tool
(00:28:22) Sponsor: AG1
(00:29:53) AI & Hypothesis Generation; Long-term Projects & Collaboration
(00:35:14) Large Language Models (LLMs), In Silico Experiments
(00:42:11) CZI Biohubs, Chicago, New York
(00:50:52) Universities & Biohubs; Therapeutics & Rare Diseases
(00:57:23) Optimism; Children & Families
(01:06:21) Sponsor: InsideTracker
(01:07:25) Technology & Health, Positive & Negative Interactions
(01:13:17) Algorithms, Clickbait News, Individual Experience
(01:19:17) Parental Controls, Meta Social Media Tools & Tailoring Experience
(01:24:51) Time, Usage & Technology, Parental Tools
(01:28:55) Virtual Reality (VR), Mixed Reality Experiences & Smart Glasses
(01:36:09) Physical Exercise & Virtual Product Development
(01:44:19) Virtual Futures for Creativity & Social Interactions
(01:49:31) Ray-Ban Meta Smart Glasses: Potential, Privacy & Risks
(02:00:20) Visual System & Smart Glasses, Augmented Reality
(02:06:42) AI Assistants & Creators, Identity Protection
(02:13:26) Zero-Cost Support, Spotify & Apple Reviews, Sponsors, YouTube Feedback, Momentous, Social Media, Neural Network Newsletter

Title Card Photo Credit: Mike Blabac

Disclaimer
Transcript
Welcome to the Huberman Lab podcast where we discuss science and science-based tools for everyday life.
I'm Andrew Huberman, and I'm a professor of neurobiology and ophthalmology at Stanford School of Medicine. My guests today are Mark Zuckerberg and Dr. Priscilla Chan.
Mark Zuckerberg, as everybody knows, founded the company Facebook. He is now the CEO of Meta,
which includes Facebook, Instagram, WhatsApp,
and other technology platforms.
Dr. Priscilla Chan graduated from Harvard
and went on to do her medical degree
at the University of California, San Francisco.
Mark Zuckerberg and Dr. Priscilla Chan
are married and the co-founders of the CZI
or Chan Zuckerberg Initiative,
a philanthropic organization whose stated goal is to cure all human diseases.
The Chan Zuckerberg Initiative is accomplishing that by providing critical funding not available elsewhere, as well as a novel framework for discovery of the basic functioning of cells, cataloging all the different human cell types, as well as providing AI, or artificial intelligence, platforms to mine all of that data to discover new pathways and cures for all human diseases.
The first hour of today's discussion is held with both Dr. Priscilla Chan and Mark Zuckerberg
during which we discuss the CZI and what it really means to try and cure all human diseases.
We talk about the motivational backbone for the CZI that extends well into each of their
personal histories.
Indeed, you'll learn quite a lot about Dr. Priscilla Chan, who has an absolutely incredible family
story leading up to her role as a physician and her motivations for the CZI and beyond.
And you'll learn from Mark how he's bringing an engineering and AI perspective to the discovery
of new cures for human disease.
The second half of today's discussion is just between Mark Zuckerberg and me,
during which we discuss various meta-platforms, including, of course,
social media platforms, and their effects on mental health in children and adults.
We also discuss VR, virtual reality, as well as augmented and mixed reality,
and we discuss AI, artificial intelligence,
and how it stands to transform not just our online experiences
with social media and other technologies,
but how it stands to potentially transform every aspect
of everyday life.
Before we begin, I'd like to emphasize that this podcast
is separate from my teaching and research roles at Stanford.
It is, however, part of my desire and effort to bring zero-cost-to-consumer information about science and science-related tools to the general public. In keeping
with that theme, I'd like to thank the sponsors of today's podcast. Our first sponsor is
Eight Sleep. Eight Sleep makes smart mattress covers with cooling, heating, and sleep-tracking capacity.
I've spoken many times before on this podcast about the fact that getting a great night's sleep
really is the foundation of mental health, physical health, and performance.
One of the key things to getting a great night's sleep is to make sure that the temperature
of your sleeping environment is correct.
And that's because in order to fall and stay deeply asleep, your body temperature actually
has to drop by about one to three degrees.
And in order to wake up feeling refreshed and energized, your body temperature actually
has to increase by about one to three degrees.
With Eight Sleep, you can program the temperature of your sleeping environment in the beginning,
middle, and end of your night.
It has a number of other features like tracking the amount of rapid eye movement and slow wave
sleep that you get, things that are essential to really dialing in the perfect night's sleep
for you.
I've been sleeping on an Eight Sleep mattress cover for well over two years now, and it has greatly improved my sleep. I fall asleep far more quickly, I wake up far less often in the middle of the night, and I wake up feeling far more refreshed than I ever did prior to using an Eight Sleep mattress cover. If you'd like to try Eight Sleep, you can go to eightsleep.com slash Huberman to save $150 off their Pod 3 cover. Eight Sleep currently ships to the USA, Canada, UK, select countries in the EU, and Australia. Again, that's eightsleep.com slash Huberman.
Today's episode is also brought to us by Element.
Element is an electrolyte drink that has everything you need and nothing you don't.
That means plenty of electrolytes, sodium, magnesium, and potassium, and no sugar.
The electrolytes are absolutely essential for the functioning of every cell in your body
and your neurons, your nerve cells, rely on sodium, magnesium, and potassium in order to communicate with one another electrically and chemically.
Element contains the optimal ratio of electrolytes for the functioning of neurons and the other
cells of your body.
Every morning, I drink a packet of Element dissolved in about 32 ounces of water.
I do that just for general hydration and to make sure that I have adequate electrolytes
for any activities that day.
I'll often also have an Element packet or even two packets in 32 to 60 ounces of water
if I'm exercising very hard and certainly if I'm sweating a lot in order to make sure that I
replace those electrolytes. If you'd like to try Element, you can go to drinklmnt.com slash Huberman to get a free sample pack with your purchase. Again, that's drinklmnt.com slash Huberman.
I'm pleased to announce that we will be hosting four live events in Australia,
each of which is entitled the Brain Body Contract, during which I will share science and science-related
tools for mental health, physical health, and performance. There will also be a live
question and answer session. We have limited tickets still available for the event in Melbourne on February 10th, as well as the event in Brisbane on February 24th. Our event in Sydney at the
Sydney Opera House sold out very quickly. So as a consequence, we've now scheduled a second
event in Sydney at the Aware Super Theater on February 18th. To access tickets to any of these
events, you can go to HubermanLab.com slash events and
use the code Huberman at checkout. I hope to see you there. And as always, thank you for
your interest in science. And now for my discussion with Mark Zuckerberg and Dr. Priscilla Chan.
Priscilla, Mark, so great to meet you, and thank you for having me here in your home.
Oh, thanks for having us on the podcast.
Yeah.
I'd like to talk about the CZI, the Chan Zuckerberg Initiative.
I learned about this a few years ago,
when my lab was, and still is, at Stanford,
as a very exciting philanthropic effort
that has a truly big mission.
I can't imagine a bigger mission.
So maybe you could tell us what that big mission is,
and then we can get into some of the mechanics
of how that big mission can become a reality.
So, like you're mentioning, in 2015, we launched the Chan Zuckerberg Initiative, and what we were hoping to do at CZI was think about how do we build a better future for everyone, and look for ways where we can contribute the resources that we have to bring philanthropically,
and the experiences that Mark and I have had: for me as a physician and educator, for Mark as an engineer. And then our ability to bring teams together, to be builders.
You know, Mark has been a builder throughout his career, and what could we do if we actually put together a team
to build tools, do great science?
And so within our science portfolio,
we've really been focused on what some people think
is either an incredibly audacious goal or an inevitable goal,
but I think about it as something that will happen
if we sort of continue focusing on it, which is to be able to cure, prevent, or manage all disease by the end of the century.
All disease.
All disease.
So that's important, right?
A lot of times people ask, like, which disease?
And the whole point is that there is not one disease.
And it's really about taking a step back to where I always found the most hope as a physician, which
is new discoveries and new opportunities and new ways of understanding how to keep people
well come from basic science.
So our strategy at CZI is really to build tools, fund science, change the way basic scientists can see the world and how they can move quickly in their
discoveries. And so that's what we launched in 2015. We do work in three ways. We fund
great scientists. We build tools right now, software tools to help move science along and make it easier for
scientists to do their work. And we do science. You mentioned Stanford being an
important pillar for our science work. We've built what we call biohubs,
institutes where teams can take on grand challenges to do work that wouldn't
be possible in a single lab
or within a single discipline.
And our first biohub was launched in San Francisco, a collaboration between Stanford, UC Berkeley, and UCSF.
Amazing.
Curing all diseases implies that there will be a ton of knowledge gleaned from this effort, which I'm certain there will be, and there already has been; we can talk about some of those early successes in a moment.
But it also sort of implies that if we can understand some basic operations of diseases
and cells that transcend autism, Huntington's, Parkinson's, cancer, and any other disease, that perhaps there are some core principles that would make the big mission a reality, so to speak.
What I'm basically saying is, how are you attacking this?
My belief is that the cell sits at the center of all discussion about disease, given that
our body is made up of cells and different types of cells. So maybe you could just illuminate for us a little bit
of what the cell is in your mind as it relates to disease and how one goes about understanding
disease in the context of cells because ultimately that's what we're made up of.
Yeah, well let's get to the cell thing in a moment,
but just to even take a step back from that,
we don't think that it's CZI that's going to cure, prevent, or manage all diseases.
The goal is to basically give the scientific community
and scientists around the world the tools
to accelerate the pace of science.
And then we spent a lot of time when we were getting started
with this, looking at the history of science and trying to understand the trends and how they've played out over time.
And if you look over this very long-term arc, most large-scale discoveries are preceded
by the invention of a new tool or a new way to see something.
And it's not just in biology, right?
It's like having a telescope came before a lot of discoveries in astronomy and astrophysics.
But similarly, the microscope and just different ways to observe things or different platforms,
like the ability to do vaccines, preceded the ability to kind of cure a lot of different things.
So this is sort of the engineering part that you were talking about about building tools.
We view our goal as trying to bring together
some scientific and engineering knowledge
to build tools that empower the whole field.
And that's sort of the big arc
and a lot of the things that we're focused on
including the work in single cell and cell understanding,
which you can jump in and get into that if you want.
But yeah, I think we generally agree with the premise
that if you want to understand this stuff
from first principles, people study organs a lot,
right, they study how things present across the body,
but there's not a very widespread understanding
of how each cell operates.
And this is sort of a big part of some of the initial work
that we tried to do on the human cell atlas and understanding what are the different cells.
And there's a bunch more work that we want to do to carry that forward. But overall, when we think about the next 10 years here of this long arc to try to empower the community to be able to cure, prevent, or manage all diseases, we think that the next 10 years should really be primarily about being able to measure and observe more things in human biology.
There are a lot of limits today.
You want to look at something through a microscope.
You can't usually see living tissue
because it's hard to see through skin or things like that.
So there are a lot of different techniques that will help us observe different things. And this is sort of where the engineering background comes in a bit.
Because when I think about this from the perspective of how you'd write code or something,
the idea of trying to debug or fix a code base, but not be able to step through the code line by line,
it's not going to happen.
At the beginning of any big project that we do at Meta, we like to spend a bunch of the
time up front just trying to instrument things and understand what are we going to look at
and how we're going to measure things before so we know we're making progress and know
what to optimize.
This is such a long-term journey that we think that it actually makes sense to take the
next 10 years to build those kind of tools for biology and understanding just how the human body works in action
and a big part of that is cells. I don't know. Do you want to jump in and talk about some of the
efforts? Could I just interrupt briefly and just ask about the different interventions, so to speak, that CZI is in a unique position to bring to the quest to cure all diseases.
So I can think of, I mean, I know as a scientist that money is necessary, but not sufficient,
right? Like when you have money, you can hire more people, you can try different things.
So that's critical, but a lot of philanthropy includes money.
The other component is, you know, you want to be able to see things as you pointed out.
So you want to know the normal disease process.
Like what is a healthy cell?
What's a diseased cell?
Are the cells constantly being bombarded with challenges and then repairing those, and then what we call cancer is just kind of a runaway train of those challenges not being met by the cell itself, or something like that?
So better imaging tools.
And then it sounds like there's not just a hardware component, but a software component.
This is where AI comes in.
So maybe we can, at some point, we can break this up into three different avenues.
One is understanding disease processes and healthy processes.
We'll lump those together.
Then there's hardware.
So microscopes, lenses, digital deconvolution, ways of seeing things in bolder relief and more precision,
and then there's how to manage all the data. And then I love the idea that maybe AI could do what human brains can't do alone, and manage understanding of the data. Because it's one thing to organize data. It's another to say, you know, as you pointed out in the analogy with code, that this particular gene and that particular gene are potentially interesting, whereas a human being would never make that potential connection.
So, you know, the tools that CZI can bring to the table, we fund science, like you're
talking about, and we try to, there's lots of ways to fund science and just to be clear, you know, what we fund
is a tiny fraction of what the NIH funds, for instance.
So, you guys have been generous enough that it definitely holds weight next to the NIH's contribution.
Yeah, but I think every funder has its own role in the ecosystem.
And for us, it's really, how do we incentivize new points of view?
How do we incentivize collaboration?
How do we incentivize open science?
And so a lot of our grants include inviting people to look at different fields.
Our first neuroscience RFA was aimed towards incentivizing people from different backgrounds,
immunologists, microbiologists, to come and look at how our nervous system works and how to keep it healthy.
Or we asked that our grantees participate in the pre-print movement to accelerate the rate of sharing knowledge
and actually others being able to build upon science.
So that's the funding that we do. In terms of building,
we build software and hardware, like you mentioned. We put together teams that can build tools that
are more durable and scalable than someone in a single lab might be incentivized to do. There's
a ton of great ideas and nowadays most scientists
can tinker and build something useful for their lab, but it's really hard for them to be able to
share that tool sometimes beyond their own laptop or forget the next lab over or across the globe.
So we partner with scientists to see what is useful, what kinds of tools. In imaging, there's napari, a useful image annotation tool that was born from an open-source community, and how can we contribute to that? Or CELLxGENE, which works on single-cell datasets, and how can we help build a useful tool so that scientists can share data sets,
analyze their own, and contribute to a larger
corpus of information. So we have software teams that are building,
collaborating with scientists to make sure that we're building easy to use,
durable, translatable tools across the scientific community in the areas that we work in.
We also have institutes.
This is where the imaging work comes in, where, you know, we are proud owners of an electron microscope right now. It's going to be installed at our imaging institute, and that will really contribute to a way where we can see things differently. But more hardware does need to be developed.
We're partnering with a fantastic scientist in the Biohub network to build a mini phase plate to align the electrons through the electron microscope, to be able to increase the resolution so we can see in sharper detail.
So there's a lot of innovative work within the network that's happening.
And these institutes have grand challenges that they're working on.
Back to your question about cells.
Cells are just the smallest unit that is alive. And your body, all of our bodies, have many, many, many cells.
There's some estimate of like 37 trillion cells, different cells in your body.
And what are they all doing?
And what do they look like when they're healthy and you're healthy? What do they look like when you're sick?
And where we're at right now with our understanding
of cells and what happens when you get sick is basically
we've gotten pretty good, from the Human Genome Project, at looking at how different mutations in your genetic code
lead for you to be more susceptible to get sick
or directly cause you to get sick.
So we go from a mutation in your DNA to,
wow, you now have Huntington's disease, for instance.
And there's a lot that happens in the middle.
And that's one of the questions
that we're going after at CZI is what actually happens.
So an analogy that I like to use to share with my friends
is right now, say we have a recipe for a cake.
We know there's a typo in the recipe.
And then the cake is awful.
That's all we know.
We don't know how the chef interprets the typo.
We don't know what happens in the oven.
And we don't actually know how it's exactly connected to how the cake didn't turn out how you had expected. A lot of that is unknown, but we can actually
systematically try to break this down. And one segment of that journey that we're looking
at is how that mutation gets translated and acted upon in your cells. And all of your cells have what's called mRNA.
mRNA are the actual instructions
that are taken from the DNA.
And what our work in single cell is looking at is how every cell in your body is actually interpreting your DNA slightly differently, and what happens when healthy cells are interpreting the DNA instructions and when sick cells are interpreting those directions.
And that is a ton of data.
I just told you there's 37 trillion cells.
There's different large sets of mRNA in each cell.
But the work that we've been funding is looking at, first of all, gathering that information.
We've been incredibly lucky to be part of a very fast moving
field where we've gone from in 2017
funding some methods work to now having really not
complete, but nearly complete, atlases of how the human body works,
how flies work, how mice work at the single cell level,
and being able to then try to piece together like,
how does that all come together when you're healthy
and when you're sick?
And the neat thing about the sort of inflection point where we're at in AI is that I can't look at this data and make sense of it.
There's just too much of it.
And biology's complex, human bodies are complex.
We need this much information.
But the use of large language models
can help us actually look at that data and gain insights,
look at what trends are consistent
with health and what trends are unsuspected.
And eventually our hope through the use of these data sets that we've helped curate
and the application of large language models is to be able to formulate a virtual cell, a cell that's completely
built off of the data sets of what we know about the human body, but allows us to manipulate
and learn faster and try new things to help move science and then medicine along.
Do you think we've catalogued the total number of different cell types? Every week I look at great journals like Cell, Nature and Science, and for instance, I saw
recently that using single cell sequencing, they've categorized 18 plus different types of fat
cells. We always think of like a fat cell versus a muscle cell. So now you've got 18 types.
Each one is going to express many, many different genes
in RNAs, mRNAs.
And perhaps one of them is responsible for what we see
in advanced type 2 diabetes or in other forms of obesity
or where people can't lay down fat cells, which turns out
to be just as detrimental in those extreme cases.
So now you've got all these lists of genes,
but I always thought of single cell sequencing
as necessary but not sufficient.
You need the information, but it doesn't resolve the problem. And I think of it more as a hypothesis-generating experiment.
Like, okay, so you have all these genes,
and you could say, wow, this gene is particularly elevated
in the diabetic cell type of, let's say, one of these fat cells, or muscle cells for that matter, whereas it's not in non-diabetics.
So then of the millions of different cells, maybe only five of them differ dramatically.
So then you generate a hypothesis, oh, it's the ones that differ dramatically that are important, but maybe one of those genes, when it's only 50% changed, has a huge effect because
of some network biology effect.
And so I guess what I'm trying to get to here is, how does one meet that challenge, and can AI help resolve that challenge, by essentially placing those lists of genes into, you know, 10,000 hypotheses? Because I'll tell you that the graduate students and postdocs in my lab get a chance to test one hypothesis at a time. And that's really the challenge, let alone one lab. And so for those that are listening to this, hopefully it's not getting outside the scope of kind of like standard understanding or the understanding we've generated here. But basically, you have to pick at some point. More data always sounds great, but then how do you decide what to test?
So no, we don't know all the cell types.
I think one thing that was really exciting when we first launched this work was, you know, cystic fibrosis. Cystic fibrosis is caused by a mutation in CFTR.
That's pretty well known.
It affects a certain channel that makes it hard for mucus to be cleared. That's the basics of cystic fibrosis.
When I went to medical school, it was taught as fact.
So their lungs filled up with fluid. These people are carrying around sacks of fluid filling up. I've known people like that, I've worked with people like that; they have to literally dump the fluid out. They can't run or do intense exercise.
Life is shorter.
Life is shorter.
And when we applied single cell methodologies to the lungs, they discovered an entirely new cell type that actually is affected by the CF mutation, the cystic fibrosis mutation, and that actually changes the paradigm of how we think about cystic fibrosis.
It's amazing.
We just don't know.
So I don't think we know all the cell types.
I think we'll continue to discover them
and we'll continue to discover new relationships
between cell and disease,
which leads me to the second example I wanna bring up: this large data set that the entire scientific community has built around single cell is starting to allow us to ask, this mutation, where is it expressed, what types of cells is it expressed in?
And we actually have built a tool at CZI called CELLxGENE, where you can put in the mutation that you're interested in, and it gives you a heat map across cell types, of which cell types are expressing the gene that you're interested in.
And so then you can start looking at, okay, if I look at gene X
and I know it's related to heart disease,
but if you look at the heat map, it's also spiking in the pancreas.
That allows you to generate a hypothesis, why?
And what happens, when this gene is mutated, to the function of your pancreas?
Really exciting way to look and ask questions differently.
And you can also imagine a world where
if you're trying to develop a therapy, a drug,
and the goal is to treat the function of the heart, but you know that it's also really active in the pancreas, again.
So is there going to be an unexpected side effect
that you should think about as you're bringing
this drug to clinical trials?
So it's an incredibly exciting tool
and one that's only gonna get better
as we get more and more sophisticated ways
to analyze the data.
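To make the kind of query being described here concrete, below is a minimal Python sketch of the "gene by cell type" heat map idea. Everything in it is made up for illustration: the gene panel, the cell-type labels, and the simulated counts are all hypothetical, and the real CELLxGENE tool runs on curated single-cell atlases, not a toy matrix like this.

```python
# Toy sketch of a "which cell types express my gene?" query.
# All data here is simulated; illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
genes = ["GENE_X", "CFTR", "INS"]  # hypothetical gene panel
cell_types = ["cardiomyocyte", "beta cell", "ionocyte", "fibroblast"]

# Simulate 1,000 cells: each row is one cell's expression counts per gene.
labels = rng.choice(cell_types, size=1000)
expr = pd.DataFrame(rng.poisson(lam=2.0, size=(1000, len(genes))), columns=genes)
expr["cell_type"] = labels

# The "heat map" is just mean expression per (cell type, gene) pair.
heatmap = expr.groupby("cell_type").mean()
print(heatmap)

# Hypothesis generation: where else is my heart-disease gene most active?
print(heatmap["GENE_X"].sort_values(ascending=False))
```

The point of the sketch is the shape of the question, not the statistics: group an expression matrix by cell type, and any gene whose expression spikes in an unexpected tissue becomes a new hypothesis to test.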
I love that, because if I look at the advances in neuroscience over the last 15 years, most of them didn't necessarily come from looking at the nervous system; they came from the understanding that the immune system impacts the brain. Everyone, prior to that, talked about the brain as an immune-privileged organ.
What you just said also bridges the divide between single cells, organs, and systems, right?
Because ultimately, cells make up organs, organs make up systems, and they're all talking
to one another.
And everyone nowadays is familiar with like the gut-brain axis or the microbiome being so important, but rarely is the communication between organs discussed, so to speak.
So I think it's wonderful.
So that that tool was generated by CZI.I. or C.C.I.
Funded that tool so how does we built that we built it so is it built by meta is as better
And it's its own engineers got it. Yeah, they're completely different organizations
incredible and and so a graduate student or postdoc who's interested in a particular
Mutation could put this mutation into this database
So a graduate student or postdoc who's interested in a particular mutation could put this mutation into this database.
That graduate student or postdoc might be in a laboratory known for working on heart, but
suddenly find that they're collaborating with other scientists that work on the pancreas,
which also is wonderful because it bridges the divide between these fields.
Fields are so siloed in science, not just in different buildings, but the people rarely talk,
unless things like this are happening.
I mean, the graduate student is someone that we want to empower,
because one, they're the future of science, as you know.
And within CELLxGENE, if you put in the gene you're interested in
and it shows you the heat map, we also will pull up
like the most relevant papers to that gene.
And so it's like, read these things.
Fantastic.
As we all know, quality nutrition influences, of course, our physical health,
but also our mental health and our cognitive functioning, our memory, our ability to learn new things
and to focus. And we know that one of the most important features of high quality nutrition
is making sure that we get enough vitamins and minerals from high quality, unprocessed or
minimally processed sources, as well as enough probiotics and prebiotics and fiber to support basically
all the cellular functions in our body, including the gut microbiome.
Now I, like most everybody, try to get optimal nutrition from whole foods, ideally mostly
from minimally processed or non-processed foods.
However, one of the challenges that I and so many other people face is getting enough
servings of high quality fruits and vegetables per day, as well as fiber and probiotics that often accompany those fruits and vegetables.
That's why way back in 2012, long before I ever had a podcast, I started drinking AG1.
And so I'm delighted that AG1 is sponsoring the Huberman Lab podcast. The reason I started taking
AG1 and the reason I still drink AG1 once or twice a day is that it provides all of my foundational
nutritional needs.
That is, it provides insurance that I get the proper amounts of those vitamins, minerals,
probiotics, and fiber to ensure optimal mental health, physical health, and performance.
If you'd like to try AG1, you can go to drinkag1.com slash Huberman to claim a special offer.
They're giving away five free travel packs plus a year's supply of vitamin D3+K2. Again, that's drinkag1.com slash Huberman to claim
that special offer.
I just think, going back to your question from before, I mean, are there going to be more cell types that we could discover? I mean, I assume so, right? I mean, no catalog of this stuff is ever, you know, it doesn't seem like we're ever done, right? We keep on finding more.
But I think that gets to one of the things that I think are the strengths of modern LLMs, which is the ability to kind of imagine different states
that things can be in.
So from all the work that we've done
and funded on the Human Cell Atlas,
there is a large corpus of data
that you can now train
a large-scale model on.
One of the things that we're doing at CZI, which I think is pretty exciting, is building what we think is one of the largest nonprofit life sciences AI clusters, right? It's, you know, on the order of 1,000 GPUs, and it's larger than what most people have access to in academia, that you can do serious engineering work on. And, you know, by basically training a model with all of the Human Cell Atlas data and a bunch of other inputs as well, we think you'll be able to basically imagine
all of the different types of cells
and all of the different states that they can be in and when they're healthy and diseased
and how they'll, you know, interact with each other, interact with different potential drugs.
But I mean, I think with the state of LLMs, this is where it's helpful to have a good understanding and be grounded in like the modern state of AI.
I mean, these things are not foolproof, right?
I mean, one of the flaws of modern LLMs is they hallucinate, right?
So the question is, how do you make it so that that can be an advantage rather than a disadvantage?
And I think the way that it ends up being an advantage is when they help you imagine a bunch of states
that someone could be in, but then you, you know, as the scientist or engineer go and validate that those are true, whether they're,
you know, solutions to how a protein can be folded or possible states that a cell could
be in when it's interacting with other things.
But, you know, we're not yet at the state with AI that you can just take the outputs of
these things as, like, gospel and run from there.
But they are very good, I think as you said, hypothesis
generators or possible solution generators that then you can go validate.
So I think that's a very powerful thing, that we can basically, you know, building on the first five years of science work around the Human Cell Atlas and all the data that's
been built out, carry that forward into something that I think is going to be a very novel
tool going forward. And that's the type of thing that I think we're set up to do well. I mean, you had this exchange
a little while back about funding levels and how CZI is just sort of a drop in the bucket compared
to NIH. But the thing that I think we can do that's different is funding some of these longer-term, bigger projects that it is otherwise hard to galvanize and pull together the energy to do. A lot of what most science funding is, is like relatively small projects that are exploring things over relatively short time horizons. And one of the things that we try to do is build these tools over, you know, 5-, 10-, 15-year periods.
They're often projects that require hundreds of millions of dollars of funding
and world-class engineering teams and infrastructure to do.
And that, I think, is a pretty cool contribution to the field; there aren't as many other folks who are doing that kind of thing.
But that's one of the reasons why I'm personally excited
about the virtual cell stuff,
because it's like this perfect intersection of all the stuff that we've done in single cell, the previous collaborations that we've done with the field, and bringing together the industry and AI expertise around this.
Yeah, I completely agree that the model of science that you've put together with CZI isn't just unique, it's extremely important.
The independent investigator model is what's driven the progression of science in this country
and to some extent in Northern Europe for the last hundred years.
And it's wonderful on the one hand because it allows for that image we have of a scientist
kind of tinkering away with the people in their lab, and then the eurekas.
And that hopefully translates to better human health.
But in my opinion, we've moved past that model as the most effective model, or the
only model that should be explored.
Yeah, I just think it's a balance.
It's a balance. You want that, but you want to empower those people; I think that's what these tools are for.
Sure. And there are mechanisms to do that, like NIH. But
it's hard to do collaborative science. It's sort of interesting that we're sitting here, because I grew up right near here as well, not far from the garage model of tech, right? The Hewlett-Packard model, and not far from here at all. And the idea was, you know, the tinkerer in the garage, the inventor, and then people often forget that to implement all the technologies they discovered took enormous factories and warehouses. So, you know, there's a similarity there to Facebook, Meta, etc. But I think in science, we imagine the scientist alone in their laboratory and those eureka moments. But I think nowadays the big questions really require extensive
collaboration and certainly tool development. And one of the tools that you keep coming back to is
these LLMs, these large language models. And maybe you could just elaborate for those that aren't familiar. You know, what is a large language model, for the uninformed? What is it? What does it allow us to do that different types of AI don't allow? More importantly, perhaps, what does it allow us to do that a bunch of really smart people, highly informed in a given area of science, staring at the data, can't do? What can it do that they can't do?
Sure.
So, I think a lot of the progression of machine learning
has been about building systems, neural networks
or otherwise that can basically make sense
and find patterns in larger and larger amounts of data.
And there was a breakthrough a number of years back
that some folks at Google actually made
called this transformer model architecture.
And it was this huge breakthrough
because before then there was somewhat of a cap
where if you fed more data into a neural network
past some point, it didn't really glean more insights
from it, whereas transformers just keep scaling, and we haven't seen the end of how big that can scale to yet. I mean, I think that there's a chance that we run into some ceiling, but it's never...
So it never asymptotes?
We haven't observed it yet, but we just haven't built big enough systems yet. So I would guess that,
I don't know, I think this is actually one of the big questions in the AI field today, which is basically: are transformers and the current model architectures sufficient?
And if you just build larger and larger clusters, do you eventually get something that's like
human intelligence or super intelligence?
Or is there some kind of fundamental limit to this architecture that we just haven't
reached yet?
And once we kind of get a little bit further in building them out, then we'll reach that, and then we'll need a few more leaps
before we get to the level of AI that,
I think, will unlock a ton of really futuristic
and amazing things.
But there's no doubt that even just being able to process
the amount of data that we can now
with this model architecture has unlocked
a lot of new use cases.
And the reason why they're called large language models
is because one of the first uses of them
is people basically feed in all of the language
from basically the World Wide Web.
And you can think about them as basically prediction machines.
So you put in a prompt, and it can basically predict a version of what should come next. So you type in a headline for a news story, and it can predict what it thinks the story should be.
Or you could train it so that it can be a chatbot, right?
Okay, if you're prompted with this question, you can get this response.
But one of the interesting things is, it turns out that there's actually nothing specific to using human language in it. So if, instead of feeding it human language, you use that model architecture for a network and instead feed it all of the Human Cell Atlas data, then if you prompt it with a state of a cell, it can spit out different versions of how that cell can interact, or different states that the cell could be in next when it interacts with different things.
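For readers who want to see the "prediction machine" idea in code, here is a toy sketch of one scaled dot-product self-attention step, the core operation inside a transformer, followed by a next-token prediction. The weights are random and untrained, and the token ids are arbitrary; this illustrates the mechanism only, not Meta's models or the virtual cell work described above.

```python
# Toy "prediction machine": one self-attention step plus a next-token softmax.
# Real LLMs stack many such layers and train the weights on enormous corpora.
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 50                        # embedding size, vocabulary size
tokens = rng.integers(0, vocab, size=8)  # a "prompt" of 8 arbitrary token ids
E = rng.normal(size=(vocab, d))          # (untrained) embedding table
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
W_out = rng.normal(size=(d, vocab))      # projection back onto the vocabulary

x = E[tokens]                            # (8, d): the prompt as embeddings
Q, K, V = x @ Wq, x @ Wk, x @ Wv
att = Q @ K.T / np.sqrt(d)               # how much each token attends to others
att = np.exp(att - att.max(axis=-1, keepdims=True))
att /= att.sum(axis=-1, keepdims=True)   # rows are attention distributions
h = att @ V                              # context-mixed token representations

logits = h[-1] @ W_out                   # score every possible next token
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("most likely next token id:", probs.argmax())
```

Nothing in the attention step cares whether the token ids stand for words, amino acids, or cell states, which is exactly why the same architecture can be retargeted from language to Human Cell Atlas data.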
Does it have to take a genetics class? So for instance, if you give it a bunch of genetics data, do you have to say, hey, by the way, and then give it a genetics class, so it understands that, you know, you've got DNA, RNA, and proteins?
I think that the basic nature of all these machine learning techniques is they're basically pattern recognition systems.
So, they're these very deep statistical machines
that are very efficient at finding patterns.
So, it's not actually,
I mean, you don't need to teach a language model that's trying to speak a language a lot of specific things about that language either. You just feed it a bunch of examples.
And then let's say you teach it about something in English,
but then you also give it a bunch of examples
of people speaking Italian.
It'll actually be able to explain the thing
that it learned in English in Italian, right?
The crossover, and just the pattern recognition, is the thing that is pretty profound and powerful about this.
But it really does apply to a lot of different things.
Another example in the scientific community
has been the work on AlphaFold, you know, that basically the folks at DeepMind have done on protein folding.
It's just basically a lot of the same model architecture,
but instead of language there,
they kind of fed in all of the protein data,
and you can give it a state and it can
spit out solutions to how those proteins get folded.
It's very powerful.
I don't think we know yet as an industry,
what the natural limits of it are
and that's one of the things that's pretty exciting about the current state. But it certainly
allows you to solve problems that just weren't solved with the generation of machine learning
that came before it.
Sounds like CZI is moving a lot of work that was just done in vitro, in dishes, and in vivo, in living organisms, model organisms, or humans, to in silico, as we say. So, do you foresee a future where a lot of biomedical research, certainly the work of CZI included, is done by machines? I mean, obviously it's much lower cost,
and you can run millions of experiments,
which of course is not to say that humans
are not going to be involved,
but I love the idea that we can run experiments in silico en masse.
I think the in silico experiments are going to be incredibly helpful to test things quickly and cheaply, and to just unleash a lot of creativity. I do think you need to be very careful about making sure it still translates and matches humans.
You know, one thing that's funny in basic science is, we've basically cured every single disease in mice. Like, in mice, we know what's going on when they have a number of diseases, because they're used as a model organism.
But they are not humans, and a lot of times that research is relevant, but not directly
one-to-one translatable to humans.
So you just have to be really careful about making sure
that it actually works for humans.
Sounds like what CZI is doing is actually creating a new field. Because as I'm hearing all of this, I'm thinking, okay, this transcends the immunology department, you know, cardiothoracic surgery, I mean, neuroscience. I mean, the idea of a new field where you certainly embrace the realities of universities and laboratories, because that's where most of the work that you're funding is done, right? So maybe we need to think about what it means to do science differently.
And I think that's one of the things that's most exciting.
Along those lines, it seems that bringing together a lot of different types of people at
different major institutions is going to be especially important. So I know that the initial CZI Biohub gratefully included Stanford, we'll put that first in the list, but also UCSF, forgive me, my many friends at UCSF, and also Berkeley. But there are now some additional institutions involved.
So maybe you could talk about that and what motivated the decision to branch outside the Bay Area
and why you selected those particular additional institutions to be included.
Well, I mean, I'll just say a big part of why we wanted to create additional biohubs
is we were just so impressed by the work that the folks
who were running the first biohub did.
Yeah.
And I also think, and you should walk through the work
of the Chicago biohub and the New York biohub
that we just announced.
But I think it's actually an interesting set of examples
that balance the limits of what you want to do
with physical material engineering and where things are
purely biological, because the Chicago team is really building more sensors to be able
to understand what's going on in your body.
But that's more of a physical engineering challenge, whereas the New York team, we basically
talk about this as a cellular endoscope, of being able to have like an immune cell or something that can go and understand, you know, what's the thing that's going on in your body. But it's not like a physical piece of hardware; it's a cell that you can basically, you know, have just go report out on different things that are happening inside the body.
So the cell is the microscope.
Totally. Yeah. And then eventually actually being able to act on it.
But you should go into more detail on all this.
So a core principle of how we think about biohubs is that it has to be, when we invited
proposals, it has to be at least three institutions.
So really breaking down the barrier of a single university, oftentimes asking for the people
designing the research aim to come from all different backgrounds, and to explain why the problem that they want to solve requires interdisciplinary, inter-institution collaboration to actually make happen.
We just put that request for proposals out there with our San Francisco Biohub as an example,
where they've done incredible work in single cell biology and infectious disease.
And we got, I want to say like 57 proposals from over 150 institutions.
A lot of ideas came together, and we are so, so excited that we've been able to launch
Chicago and New York.
Chicago is a collaboration between UIUC, the University of Illinois Urbana-Champaign, the University of Chicago, and Northwestern.
And obviously these universities are multifaceted, but if I were to describe them by their like stereotypical strengths, Northwestern has an incredible medical
system and hospital system.
University of Chicago brings to the table incredible basic science strengths.
University of Illinois is a computing powerhouse. And so they came together and proposed that they were going to start thinking about cells and tissue.
So that's one of the layers that you just alluded to. How do the cells that we know behave and act differently when they come together as a tissue? And one of the first tissues that they're starting with is skin. So they've already been able, as a collaboration under the leadership of Shana Kelley, to design a sort of engineered skin tissue. The architecture looks the same as what's in you and I. And what they've done is built these super, super thin sensors,
and they embed these sensors throughout the layers of this engineered tissue,
and they read out the data.
They want to see how these cells,
what these cells are secreting,
how these cells talk to each other,
and what happens when these cells get inflamed.
Inflammation is an incredibly important process
that drives 50% of all deaths.
And so this is another sort of disease agnostic approach.
We want to understand inflammation.
And they're going to get a ton of information
out from these sensors that tell you what happens
when something goes awry.
Because right now, we can say, when you have an allergic reaction,
your skin gets red and puffy.
But what is the earliest signal of that?
And these sensors can look at the behaviors of these cells
over time, and then you can apply a large language model
to look at the earliest statistically significant changes
that can allow you to intervene as early as possible.
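As a minimal illustration of "earliest statistically significant change" detection, here is a Python sketch that flags the first sample where a simulated sensor trace deviates from its healthy baseline, using a plain z-score against a calibration window. The signal, threshold, and window are all hypothetical; the Biohub teams' actual analysis methods are not described in this conversation.

```python
# Sketch: flag the earliest significant shift in a simulated sensor trace.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(loc=1.0, scale=0.1, size=500)  # healthy-tissue readings
inflamed = rng.normal(loc=1.4, scale=0.1, size=100)  # signal rises at onset
signal = np.concatenate([baseline, inflamed])

mu, sigma = baseline[:200].mean(), baseline[:200].std()  # calibration window
z = (signal - mu) / sigma
hits = np.abs(z) > 4                  # conservative threshold vs. baseline noise
alarm = int(np.argmax(hits)) if hits.any() else None
print("earliest significant change at sample:", alarm)  # near 500, the onset
```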
So that's what Chicago's doing.
They're starting in the skin cells.
They're also looking at the neuromuscular junction,
which is the connection between where a neuron attaches
to a muscle and tells the muscle how to behave.
Super important in things like ALS,
but also in aging.
The slowed transmission of information
across that neuromuscular junction
is what causes old people to fall.
Their brain cannot trigger their muscles to react fast enough. And so we want to be able to
embed these sensors to understand how these different interconnected systems within our bodies
work together. In New York, they're doing a related but equally exciting project where they're
engineering individual cells to be able to go in and identify
changes in a human body.
So what they'll do is, what they're calling it, it's wild. I mean, I love it.
I mean, I don't want to go on a tangent, but for those that
want to look it up, adaptive optics: you know, there's a lot of distortion and interference when you try and look at something really small and really far away, and really smart physicists figured out, well, use the interference as part of the microscope. Make those actually the lenses of the microscope.
We should talk about imaging separately.
So it's extremely clever along those lines.
It's not intuitive, but then when you hear it,
it makes so much sense.
It's not immediately intuitive.
Make the cells that already can navigate to tissues, or embed themselves in tissues, be the microscope within that tissue.
Totally.
I love it.
The way that I explain this to my friends and my family
is, this is Fantastic Voyage, but real life.
Like, we are going into the human body,
and we're using the immune cells,
which, you know, are privileged
and already working to keep your body healthy,
and being able to target them to examine certain things.
So, like, you can engineer an immune cell
to go in your body and look inside your coronary arteries
and say, are these arteries healthy or are there plaques?
Because plaques lead to blockage, which lead to heart attacks.
And the cell can then record that information and report it back out.
That's the first half of what the New York Biohub is going to do.
Fantastic.
The second half is, can you then engineer the cells to go do something about it?
Can I then tell a different cell, an immune cell, that is able to transport in your body
to go in and clean that up in a targeted way?
And so it's incredibly exciting.
They're going to study things that are sort of immune privileged, that your immune system normally doesn't have access to, things like ovarian and pancreatic cancer. They'll also look at a number of neurodegenerative diseases, since the immune system doesn't presently have a ton of access into the nervous system.
But it's both mind-blowing and it feels like sci-fi, but science is actually in a place where, if you really push a group of incredibly qualified scientists, like, could you do this if given the chance, the answer is like, probably; give us enough time, the right team, and resources, and it's doable.
Yeah, I mean, it's a 10 to 15-year project.
Yeah.
But it's, it's awesome.
The engineering itself, yeah.
I love the optimism.
And the moment you said, make the cell the microscope, it's like, yes, yes, and yes. It just makes so much sense.
What motivated the decision to do the work of CZI in the context of existing universities, as opposed to, you know, there's still some real estate up in Redwood City where there's a bunch of space to put biotech companies, and just hiring people from all backgrounds and saying, hey, you know, have at it, and doing this stuff from scratch?
I mean, it's a very interesting decision to do this in the context of an existing framework of, like, graduate students that need to do the thesis and get a first-author paper. Because there's a whole set of structures within academia that I think both facilitate but also limit the progression of science. You know, that independent investigator model that we talked about a little bit earlier, it's so core to the way science has been done. This is very different, and frankly sounds far more efficient, if I'm to be completely honest. And, you know, we'll see if I renew my NIH funding after saying that. But I think we all want the same thing.
We all want to as scientists and as humans,
we want to understand the way we work
and we want healthy people to continue to be healthy
and we want sick people to get healthy.
I mean, that's really ultimately the goal.
It's not super complicated, it's just hard to do.
So the teams at the Biohub are actually independent of the universities. So each Biohub will probably have in total maybe 50 people working on sort of deep efforts.
However, it's an acknowledgement that not all of the best scientists who can contribute to this area are actually going to, one, want to leave a university, or two, want to take on the full-time scope of this project. So the ability to partner with universities, and to have the faculty at all the universities be able to contribute to the overall project, is how the Biohubs have been structured.
Got it. But a lot of the way that we're approaching CZI is this long-term iterative project to figure out, try a bunch of different things,
figure out which things produce the most interesting results, and then double down on those in the next five-year push.
So we just went through this period where we kind of wrapped up the first five years of the science program.
We tried a lot of different models, all kinds of different things. And it's not that we think the Biohub model is like the best or only model, but we found
that it was sort of a really interesting way to unlock a bunch of collaboration and bring
some technical resources that allow for this longer term development. And it's not something that
is widely being pursued across the rest of the
field. So we figured, okay, this is like an interesting thing that we can, that we can
all push on. But I mean, yeah, we do believe in the collaboration. But I also think that
we come at this with, you know, we don't think that the way that we're pursuing this is
like the only way to do this or the way that everyone should do it. We're pretty aware of what is the rest of the ecosystem and how we can play a unique role in it.
It feels very synergistic with the way science is already done, and also fills an incredibly important niche that frankly wasn't filled before. Along the lines of implementation: so let's say your large language models, combined with imaging tools,
reveal that a particular set of genes acting in a cluster, I don't know, sets up an organ crash. Let's say the pancreas crashes at a particular stage of pancreatic cancer, I mean, still one of the deadliest of the cancers. And there are others that you certainly wouldn't want to get, but that's among the ones you wouldn't want to get the most.
So you discover that.
And then the idea is that, OK, then AI reveals some potential drug targets, and those then bear out in vitro, in a dish, and in a mouse model. How does the actual implementation into drug discovery work? Or maybe this target is druggable, maybe it's not; maybe it requires some other approach, you know, a laser ablation approach or something.
We don't know.
But ultimately, is CZI going to be involved
in the implementation of new therapeutics?
Is that the idea?
Less so.
That's, you know, this is where it's important to work
in an ecosystem and to know your own limitations.
Like there are groups and startups and companies that take that and bring it to translation very effectively.
I would say the place where we have a small window into that world is actually our work with rare disease groups. Through our Rare As One portfolio, we have funded patient advocates to create rare disease organizations where patients come together and actually pool their collective experience. They build bioregistries, registries of their natural history. And they both partner with researchers to do the research about their disease, and with drug developers, to incentivize drug developers to focus on what they may need for their disease.
And one thing that's important to point out is that rare diseases aren't rare. There are over 7,000 rare diseases, and collectively they impact many, many individuals. And from a basic science perspective, an incredibly fascinating thing about rare diseases is that they're actually windows into how the body normally should work. There are often genes that, when mutated, cause very specific diseases, but that tell you how the normal biology works as well.
Got it.
So you discussed, basically, the major goals and issues for CZI over the next five to ten years. And then, beyond that, the targets will be explored by biotech companies; they'll grab those targets, test them, and implement them.
There have also, I think, been a couple of teams from the initial Biohub that were interested in spinning out ideas into startups.
So even though that's not a thing that we're going to pursue, because we're a philanthropy, we want to enable the work that gets done to be turned into companies and things that other people can take and run with toward building, ultimately, therapeutics. So that's another zone, but that's just not a thing that we're going to do.
Yeah, I gather you're both optimists.
Yeah.
Is that part of what brought you together? Forgive me for switching to a personal question, but I love the optimism that seems to sit at the root of CZI.
I will say that we are incredibly hopeful people, but it manifests in different ways between
the two of us.
Yeah.
How would you describe your optimism versus mine?
It's not a loaded question.
I don't know.
I mean, I think I'm probably more technologically optimistic about what can be built. And I think you, because of your focus as an actual doctor, kind of have more of a sense of how that's going to affect actual people in their lives. Whereas for me, a lot of my work touches a lot of people around the world, and the scale is sort of immense. And I think for you, it's being able to improve the lives of individuals, whether it's students at any of the schools that you've started, or any of the stuff that we've supported through the education work, which isn't the goal here, or just being able to improve people's lives in that way. I think that's the thing that I've seen you be super passionate about.
Do you agree with that characterization? I'm trying to be fair.
Yeah, I agree with that. I think that's very fair.
And I'm sort of giggling to myself, because in day-to-day life, as life partners, our relative optimism comes through as: Mark is just overly optimistic about his time management and will get engrossed in interesting ideas.
I'm late.
And he's late. And physicians are very punctual. And because he's late, I have to channel Mark-as-an-optimist whenever I'm waiting for him.
That's a nice way to put it. OK, I'll start using that.
That's what I think when I'm in the driveway with the kids waiting for you. I'm like, Mark is an optimist, and so his optimism translates to some tardiness.
Whereas I'm sort of like, how is this going to happen? I'm going to open a spreadsheet, I'm going to start putting together a plan, pulling together all the pieces, calling people, to bring something to life.
But it's one of my favorite quotes: optimists tend to be successful, and pessimists tend to be right.
And yeah, I mean, I think it's true in a lot of different aspects of life.
We know who said that.
Did you say that?
Mark said it.
No, I did not.
Absolutely not.
No, no, no.
I like it.
I did not invent it.
We'll give it to you.
No, no, no, no, no, no, no.
Just get it down.
But I do think that there's really something to it, right? I mean, if you're discussing any idea, there are all these reasons why it might not work. And those reasons are probably true; the people stating them probably have some validity to them. But the question is, is that the most productive way to view the world? And I think, across the board, the people who tend to be the most productive and get the most done kind of need to be optimistic, because if you don't believe that something can get done, then why would you go work on it?
The reason I ask the question is that these days we hear a lot about the future looking so dark in these various ways. And you have children, so you are a family, excuse me, and you also have families, independently, that are now merged. But I love the optimism behind CZI, because behind all this there's sort of a set of big statements on the wall.
One: the future can be better than the present in terms of treating disease, maybe even, as you said, eliminating diseases, all diseases. Love that optimism. And that there's a tractable path to do it: we're going to put, literally, money and time and energy and people and technology and AI behind that. And so I have to ask: was having children a significant modifier in terms of your view of the future? Like, you hear all this doom and gloom, what's the future going to be like for them? Did you sit back and think, what would it look like if there were a future with no diseases? Is that the future we want our children in? I mean, I'm voting a big yes, so we're not going to debate that at all. But was having children sort of an inspiration for CZI in some way?
Yeah.
So for my answer to that, I would dial backwards, and I'll just tell a very brief story about my family.
I'm the daughter of Chinese-Vietnamese refugees. My parents and grandparents were boat people. If you remember, people left Vietnam during the war in these small boats, out into the South China Sea. And there were stories about how these boats would sink with whole families on them. And so my grandparents, both sets of grandparents, who knew each other, decided that there was a better future out there, and they were willing to take a risk for it.
But they were afraid of losing all of their kids.
My dad is one of six, my mom is one of 10.
And so they decided that there was something out there in this bleak time, and they paired up their kids, one from each family, and sent them out on these little boats, before the internet, before cell phones, and just said, we'll see you on the other side. And the kids were between the ages of, like, you know, 10 and 25. So young kids; my mom was a teenager, an early teen, when this happened. And everyone made it. And I get to sit here and talk to you. So how could I not believe that better is possible? And, like, I hope that that's in my epigenetics somewhere, and that I carry that on.
That is a spectacular story.
Isn't that wild?
It is spectacular.
How can I be a pessimist with that?
I love it.
And I so appreciate that you became a physician
because you're now bringing that optimism
and that epigenetic understanding
and cognitive understanding and emotional understanding
to the field of medicine.
So I'm grateful to the people that made that decision.
Yeah. And, you know, I've always known that story, but you don't understand how wild it feels until you have your own child. And you're like, well, I won't even let her use anything but glass bottles, or something like that. And you're like, oh my god, the risk, and the willingness of my grandparents to believe in something bigger and better, is just astounding. And our own children sort of give it a sense of urgency.
It's a spectacular story, and you're sending knowledge out into the fields of science and bringing knowledge into the fields of science. And I love this: we'll see you on the other side.
Yeah.
I'm confident that it will all come back.
Well, thank you so much for that.
And Mark, I'll give you the opportunity to talk about it: did having kids change your worldview? It's really tough to beat that story.
It is tough to beat that story.
And they are also your children, so in this case you get two for the price of one, so to speak.
So having children definitely changes your time horizon; that's one thing. There were all these things that I think we had talked about, for as long as we've known each other, that we eventually wanted to go do, but then it's like, oh, we're having kids, we need to get on this, right?
That was actually on the checklist, the baby checklist, before the first. It was like: the baby's coming, you have to start CZI.
Truly.
And, like, sitting in the hospital delivery room, finishing editing the letter that we were going to publish to announce the work that we were doing. Some people think that's an exaggeration. It was not. We really were editing the final drafts.
So CZI came first, right before you had the child.
Well, it's an incredible initiative. I've been following it since its inception, and it's already been tremendously successful. Everyone in the fields of science, and I have a lot of communication with those folks, feels the same way. And the future is even brighter for it.
It's clear, and thank you for expanding to the Midwest
and New York, and we're all very excited to see
where all of this goes.
I share in your optimism.
And thank you for your time today.
Yeah, really.
Thank you.
Thank you.
A lot more to do.
I'd like to take a quick break and thank our sponsor, InsideTracker. InsideTracker is a personalized nutrition platform that analyzes data from your blood and DNA to help you better understand your body and help you reach your health goals. I've long been a believer in getting regular blood work done, for the simple reason that many of the factors that impact your immediate and long-term health can only be analyzed from a quality blood test. Now, a major problem with a lot of blood tests out there is that you get information back about metabolic factors, lipids, hormones, and so forth, but you don't know what to do with that information. With InsideTracker, they make it very easy, because they have a personalized platform that allows you to see the levels of all those things, metabolic factors, lipids, hormones, etc., but it also gives you specific directives that you can follow, related to nutrition, behavioral modification, supplements, etc., that can help you bring those numbers into the ranges that are optimal for you. If you'd like to try InsideTracker, you can go to insidetracker.com/huberman to get 20% off any of InsideTracker's plans. Again, that's insidetracker.com/huberman.
And now for my discussion with Mark Zuckerberg.
Slight shift of topic here. You're extremely well known for your role in technology development, but by virtue of your personal interests, and also where Meta technology interfaces with mental health and physical health, you're starting to become synonymous with health, whether you realize it or not. Part of that is because there's posted footage of you rolling jiu-jitsu; you won a jiu-jitsu competition recently. You're doing other forms of martial arts, water sports including surfing, and on and on.
So you're doing it yourself. But maybe we could just start off with technology and get this issue out of the way first, which is that I think many people assume that technology, especially technology that involves a screen of any kind, is going to be detrimental to our health. But that doesn't necessarily have to be the case. So could you explain how you see technology meshing with, inhibiting, or maybe even promoting physical and mental health?
Sure. I mean, I think this is a really important topic.
You know, the research that we've done suggests that it's not all good or all bad.
I think how you're using the technology has a big impact on whether it is basically a
positive experience for you.
Even within technology, even within social media, there's not one type of thing that
people do.
I think at its best, you're forming meaningful connections with other people. And there's a lot of research that suggests that it's the relationships we have and the friendships that bring the most happiness in our lives, and at some level they even end up correlating with living a longer and healthier life, because that kind of grounding you have in community ends up being important for that. So I think that aspect of social media, which is the ability to connect with people, to understand what's going on in their lives, to have empathy for them, to communicate what's going on with your life and express that, is generally positive. There are ways it can be negative, in terms of bad interactions, things like bullying, which we can talk about, because there's a lot that we've done to basically make sure that people can be safe from that, and give people tools, and give kids the right parental controls so that their parents can oversee things.
But that's the interacting-with-people side. There's another side of all this, which I think of as just passive consumption, which at its best is entertainment, right? And entertainment is an important human thing too, but I don't think it has quite the same association with long-term well-being and health benefits as helping people connect with other people does. And at its worst, some of the stuff that we see online, I mean, these days a lot of the news is just so relentlessly negative that it's hard to come away from an experience where you're looking at the news for half an hour and feel better about the world.
So I think there's a mix on this. The more that social media is about connecting with people, and the more that, when you're consuming the media part of social media, it's to learn about things that enrich you and can provide inspiration or education, as opposed to things that just leave you with a more toxic feeling, that's the balance that we try to get right across our products. And I think we're pretty aligned with the community, because at the end of the day, people don't want to use a product and come away feeling bad.
People talk about and evaluate a lot of these products in terms of information and utility, but I think it's as important, when you're designing a product, to think about what kind of feeling you're creating in the people who use it, whether that's an aesthetic sense when you're designing hardware, or just what you make people feel. And generally, people don't want to feel bad, right? Now, that doesn't mean we want to shelter people from bad things that are happening in the world, but I don't really think people want us to just be showing all the super negative stuff all day long.
So we work hard on all these different problems: making sure that we're helping connect people as best as possible; making sure we give people good tools to block people who might be bullying or harassing them; and, especially for younger folks, anyone under the age of 16 defaults into an experience that is private. We have all these parental tools so that parents can understand what their children are up to and strike a good balance. And then, on the other side, we try to give people tools to understand how they're spending their time. And we try to give people tools so that if you're a teen and you're stuck in some loop of just looking at one type of content, we'll nudge you and say, hey, you've been looking at content of this type for a while, how about something else? And here's a bunch of other examples.
So I think there are things you can do to push this in a positive direction, but it starts with having a more nuanced view: this is not all good or all bad, and the more that you can make it a positive thing, the better this will be for all the people who use our products.
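To make that nudge mechanism concrete, here is a minimal sketch of the kind of loop detection Mark describes, assuming a simple sliding window over recently viewed topics. The window size, threshold, topic labels, and message are illustrative assumptions, not Meta's actual implementation.

```python
from collections import deque

WINDOW = 20        # recent views to consider (illustrative)
DOMINANCE = 0.8    # fraction of one topic that triggers a nudge (illustrative)

class NudgeTracker:
    """Detects when a viewer is stuck in a loop of one content type."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def record_view(self, topic: str):
        """Log a viewed item's topic; return a nudge message if a loop is detected."""
        self.recent.append(topic)
        if len(self.recent) < WINDOW:
            return None  # not enough history yet
        top = max(set(self.recent), key=self.recent.count)
        if self.recent.count(top) / WINDOW >= DOMINANCE:
            return f"You've been looking at '{top}' content for a while. How about something else?"
        return None
```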
And that makes really good sense.
In terms of the negative experience, I agree. I don't think anyone wants a negative experience in the moment. I think where some people get concerned, perhaps, and I think about my own interactions with, say, Instagram, which I use all the time, for getting information out but also for consuming information. I happen to love it; it's where I essentially launched the non-podcast segment of my podcast, and continue to. But I can think of experiences that are a little bit like highly processed food: it tastes good at the time, it's highly engrossing, but it's not necessarily nutritious, and you don't feel very good afterwards. For me, that would be the little collage of default options to click on in Instagram. Occasionally I notice, and this just reflects my failure, not Instagram's, that there are a lot of, like, street fight things, of people beating people up on the street. And I have to say, these have a very strong gravitational pull. I'm not somebody who enjoys seeing violence per se, but I find myself clicking on one of these, like, what happened? And I'll see someone get hit and there's a little melee on the street or something. Those seem to be offered to me a lot lately, and again, this is my fault; it reflects my prior search behavior. But I notice that it has a bit of a gravitational pull, where I didn't learn anything; it's not teaching me any kind of useful street self-defense skills.
And at the same time, I also really enjoy some of the cute animal stuff, and so I get a lot of those also. So there's this polarized collage that's offered to me that reflects my prior search behavior. You could argue that the cute animal stuff is just entertainment, but actually it fills me with a feeling that, in some cases, truly delights me. I delight in animals, and we're not just talking about kittens. I mean animals I've never seen before, interactions between animals I've never seen before, that truly delight me. They energize me in a positive way, such that when I leave Instagram, I do think I'm better off. So I'm grateful for the algorithm in that sense. But I guess the direct question is: is the algorithm just reflective of what one has been looking at a lot prior to the moment they log on? Or is it also trying to do exactly what you described, which is trying to give people a good-feeling experience that leads to more good feelings?
Yeah, I mean, I think we try to do this in a long-term way. One simple example: we had this issue a number of years back with clickbait news. So, articles that would have a headline that grabbed your attention, that made you feel like, oh, I need to click on this; then you click on it, and the article is actually about something somewhat tangential to it, but people clicked on it. The naive version of this stuff, the version from ten years ago, was like, oh, people seem to be clicking on this, maybe that's good. But it's actually a pretty straightforward exercise to instrument the system to realize that, hey, people click on this, and then they don't really spend a lot of time reading the news after clicking on it, and after they do this a few times, it doesn't really correlate with them saying that they're having a good experience.
Some of how we measure this is just by looking at how people use the services, but I think it's also important to balance that by having real people come in and tell us. We show them: here are the stories that we could have shown you; which of these are most meaningful to you, or would give you the best experience? And we map the algorithm and what we do to that ground truth of what people say they want. So I think that, through a set of things like that, we really have made large steps to minimize things like clickbait over time. It's not gone from the internet, but I think we've done a good job of minimizing it on our services.
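The instrumentation Mark describes, clicks that aren't followed by real reading, can be sketched as a simple heuristic. Everything here, the field names and the 60-second "engaged read" baseline, is an assumption for illustration, not Meta's ranking system.

```python
def clickbait_score(impressions: int, clicks: int, total_dwell_seconds: float) -> float:
    """High click-through combined with low average dwell time yields a high score."""
    if clicks == 0:
        return 0.0
    ctr = clicks / max(impressions, 1)
    avg_dwell = total_dwell_seconds / clicks
    engagement = min(avg_dwell / 60.0, 1.0)  # normalize against an assumed 60s engaged read
    return ctr * (1.0 - engagement)

# A headline clicked often but abandoned after ~5 seconds scores near its CTR and can
# be down-ranked; an article people actually read scores near zero. Survey answers
# ("which of these stories is most meaningful to you?") would serve as the ground
# truth a heuristic like this gets validated against.
```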
Within that though, I do think that we need to be pretty careful about not being paternalistic
about what makes different people feel good.
Right?
So I don't know that everyone feels good
about cute animals.
I mean, I can't imagine that people would feel really bad about it,
but maybe they don't have as profound
of a positive reaction to it as you just expressed.
And I don't know, maybe people who are more into fighting
would look at the, you know, the street fighting videos
assuming that they're within our community standards.
I think that there's a level of violence
that we just don't wanna be showing at all,
but that's a separate question.
But if they are within the standards, I mean, I'm pretty into MMA.
I don't get a lot of street fighting videos,
but if I did, maybe I'd feel like
I was learning something from that.
I think at various times in the company's history,
we've been a little bit too paternalistic
about saying, this is good content, this is bad,
you should like this, this is unhealthy for you.
And I think that we want to look at the long-term effect. You don't want to get stuck in a short-term loop: just because you did something today doesn't mean it's what you aspire to for yourself over time.
But I think as long as you look at the long term of what people both say they want and what they do,
giving people a fair amount of latitude to like the things that they like, I just think feels
like the right set of values to bring to this. Now, of course, that doesn't go for everything. There are things that are truly off limits, things like bullying, for example, or things that are really inciting violence. I mean, we have the whole community standards around this. But except for those things, and I would hope that most people can agree that bullying is bad, I'd hope 100% of people agree with that, maybe not 100, but 99%, except for the things that feel pretty extreme and bad like that, I think you want to give people space to like what they want to like.
Yesterday I had the very good experience of learning from the Meta team about safety protections that are in place for kids who are using Meta platforms. And frankly, I was really positively surprised at the huge number of filter-based tools, and just the ability to customize the experience so that it stands the best chance of not just remaining neutral, but enriching their mental health status.
One thing that came about in that conversation, however, was I realized there are all these tools, but do people really know that these tools exist? And I think about my own experience with Instagram: I love watching Adam Mosseri's Friday Q&As, because he explains a lot of the tools that I didn't know existed. If people haven't seen those, I highly recommend they watch; I think he takes questions on Thursdays and answers them most every Friday. If I'm not aware of the tools that exist, even for adults, without watching that, how does Meta look at the challenge of making sure that people know all these tools exist?
I mean, dozens and dozens of very useful tools. But I think most of us just know the hashtag, the tag, the clicks, stories versus feed. We now know that I also post to Threads. So we know the major channels and tools, but this is like owning a vehicle that has incredible features one doesn't realize can take you off-road or allow your vehicle to fly. There's a lot there. So what do you think could be done to get that information out? Maybe this conversation could cue people to their existence.
I mean, that's part of the reason why I wanted to talk to you about this. I think most of the narrative around social media is not about all of the different tools that people have to control their experience; it's the narrative of, is this just negative for teens, or something like that. And I think, again, a lot of this comes down to: how is the experience being tuned, and are people actually using it to connect in positive ways? If so, I think it's really positive.
So, yeah, part of this is we probably just need to get out and talk to people more about it. And then there's an in-product aspect, which is, if you're a teen and you sign up, we take you through a pretty extensive experience that tries to outline some of this. But there are limits to that, right? Because when you sign up for a new thing, if you're bombarded with, like, here's a list of features, you're like, OK, I just signed up for this, I don't really understand much about what the service is. Let me go find some people to follow, who are my friends on here, before I learn about controls to prevent people from harassing me or something.
That's why I think it's really important to also show a bunch of these tools in context. So if you're looking at comments, and you go to delete a comment or you go to edit something, try to give people prompts in line: OK, did you know that you can manage things in these ways? Or when you're in the inbox and you're filtering something, remind people in line. So, just because of the number of people who use the products and the level of nuance around each of the controls, I think the vast majority of that education needs to happen in the product. But I do think that, through conversations like this and others that we need to be doing, we can create a broader awareness that those things exist. That way people are primed, so that when those things pop up in the product, people are like, oh yeah, I knew that there was this control, and here's how I would use it.
Yeah, I find the restrict function to be very useful.
Yeah.
More than the block function. I do sometimes have to block people, but the restrict function is really useful in that you can filter specific comments. You might recognize that someone has a tendency to be a little aggressive. And I should point out that I actually don't really mind what people say to me, but I try to maintain what I call classroom rules in my comment section: I don't like people attacking other people, because I would never tolerate that in the university classroom, and I'm not going to tolerate it in the comment section.
Yeah, and I think the example that you just used, restrict versus block, gets to something about product design that's important, which is that block is this very powerful tool: if someone is giving you a hard time and you just want them to disappear from the experience, you can do that. But the design trade-off is that, in order to make it so the person is just gone from the experience, you don't show up to them and they don't show up to you, and inherent to that is that they will have a sense that you blocked them. And that's why I think some things like restrict, or just filtering, like, I just don't want to see as much stuff about this topic, matter. People like using different tools for very subtle reasons. Maybe you want the content to not show up, but you don't want the person who's posting the content to know that you don't want it to show up. Maybe you don't want to get the messages in your main inbox, but you don't want to tell the person that you're not friends, or something like that. You actually need to give people different tools, with different levels of power and nuance around how the social dynamics of using them play out, in order to really allow people to tailor the experience in the ways that they want.
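A toy model of the block-versus-restrict distinction being drawn here: block removes the pair from each other's experience entirely, and is therefore detectable, while restrict quietly shows the restricted person's comments only to themselves. The data model and field names are hypothetical, not Meta's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    name: str
    blocked: set = field(default_factory=set)
    restricted: set = field(default_factory=set)

def comment_visible(author: str, post_owner: Account, viewer: str) -> bool:
    """Should `viewer` see `author`'s comment on `post_owner`'s post?"""
    if author in post_owner.blocked:
        return False             # hidden from everyone; the author can infer the block
    if author in post_owner.restricted:
        return viewer == author  # only the author still sees it, so nothing seems amiss
    return True

# owner = Account("host", restricted={"aggressive_user"})
# comment_visible("aggressive_user", owner, "aggressive_user")  -> True
# comment_visible("aggressive_user", owner, "someone_else")     -> False
```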
In terms of trying to limit total time on social media, I couldn't find really good data on this. How much time is too much? I think it's going to depend on what one is looking at, the age of the user, et cetera. But I agree. I know that you have tools that cue the user to how long they've been on a given platform. Are there tools to self-regulate? I'm thinking about the Greek myth of the sirens, and sailors tying themselves to the mast and plugging their ears so that they're not drawn in. Aside from deleting the app temporarily, and then reinstalling it every time you want to use it again, is there a true self-lockout function, where one can lock themselves out of access to the app?
Well, I think we give people tools that let them manage this. There are the tools that you get to use, and there are the tools that parents get to use to basically see how the usage works. For now, we've mostly focused on helping people understand this, and then giving people reminders and things like that.
It's tough, though, to answer the question that we were talking about before, is there an amount of time which is too much, because it really does depend on what you're doing. If you fast forward beyond just the apps that we have today to a social experience in the future with the augmented reality glasses or something that we're building, a lot of this is going to be interacting with people the way that you would physically, as if you were hanging out with friends or working with people, but now they can show up as holograms, and you can feel like you're present right there with them, no matter where they actually are. And the question is: is there too much time to spend interacting with people like that? Well, at the limit, if we can get that experience to be as rich, and give you as good a sense of presence as you would have if you were physically there with someone, then I don't see why you would want to restrict the amount that people use that technology to any less than the amount of time you'd be comfortable interacting with people physically. Which, obviously, is not going to be 24 hours a day. You have to do other stuff. You have work. You need to sleep.
But I think it really gets to how you're using these things.
Whereas if what you're primarily using the services for is getting stuck in loops, reading news or something that is really getting you into a negative mental state, then, I don't know, I think there's probably a relatively short period of time for which that's a good thing to be doing. But even then, it's not zero, right? Just because news might make you unhappy doesn't mean the answer is to be unaware of negative things that are happening in the world. Different people have different tolerances for what they can take on that, and generally, having some awareness is probably good, as long as it's not more than you're constitutionally able to take. So we try not to be too paternalistic about this; that's our approach. We want to empower people by giving them the tools, both you and, if you're a teen, your parents, to understand what you're experiencing and how you're using these things, and then go from there.
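A minimal sketch of what a self-imposed time budget with an optional hard lockout could look like, in the spirit of the tie-yourself-to-the-mast question above. This is purely illustrative, not how Meta's actual time-management or supervision tools are built; the daily reset is assumed to happen externally.

```python
import time

class DailyLimit:
    """Self-imposed daily budget: a reminder by default, a lockout if requested."""

    def __init__(self, budget_minutes: float, hard_lock: bool = False):
        self.budget = budget_minutes * 60.0
        self.hard_lock = hard_lock
        self.used = 0.0          # seconds used today; assume an external daily reset
        self.session_start = None

    def open_app(self) -> bool:
        if self.hard_lock and self.used >= self.budget:
            return False         # the "tied to the mast" case: entry refused
        self.session_start = time.time()
        return True

    def close_app(self) -> None:
        if self.session_start is not None:
            self.used += time.time() - self.session_start
            self.session_start = None

    def reminder(self):
        if self.used >= self.budget:
            return "You've hit the daily limit you set for yourself."
        return None
```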
Yeah, I think it requires of all of us some degree of self-regulation. I like this idea of not being too paternalistic; that seems like the right way to go. I find myself occasionally having to make sure that I'm not just passively scrolling, that I'm learning. I like foraging for, organizing, and dispersing information; that's been my life's career. So I have learned so much from social media. I find great papers, great ideas. I think comments are a great source of feedback. And I'm not just saying that because you're sitting here. I mean, Instagram in particular, but other Meta platforms too, have been tremendously helpful for me to get science and health information out.
One of the things that I'm really excited about, which I only had the chance to try for the first time today, is your new VR platform, the newest Oculus, and then we can talk about the glasses, the Ray-Bans.
Sure.
Those two experiences are still kind of blowing my mind, especially the Ray-Ban glasses. And I have so many questions about this, so I'll resist until we get into that.
OK. Well, yeah, I have some experience with VR. My lab has used VR. Jeremy Bailenson's lab at Stanford is one of the pioneering labs of VR and mixed reality. I guess some of these are augmented reality, but now mixed reality.
I think what's so striking about the VR that you guys had me try today is how well it interfaces with the real room, let's call it the physical room. I could still see people, I could see where the furniture was so I didn't bump into anything, I could see people's smiles, I could see my water on the table, all while I was doing what felt like a real martial arts experience, except I wasn't getting hit, virtually. It's extremely engaging. And yet, on the good side of things, it really bypasses a lot of the early concerns that Bailenson's lab, again, Jeremy's lab, was early to raise: that there's a limit to how much VR one can or should use each day, even for the adult brain, because it can really disrupt your vestibular system, your sense of balance. All of that seems to have been dealt with in this new iteration of the hardware. We didn't come out of it feeling dizzy at all. I didn't feel like I was re-entering the room in a way that was really jarring. Going into it is obviously, whoa, this is a different world, but you can look to your left and say, oh, someone just came in the door, hey, how's it going? I'm playing this game just as when I was a kid playing Nintendo and someone walked in. It's fully engrossing, but you'd be like, hold on, and you see they're there.
So first of all, Bravo, incredible.
And then the next question is,
what do we even call this experience?
Because it is truly mixed.
It's a truly mixed reality experience.
Yeah, I mean, mixed reality is sort of the umbrella term that refers to the combined experience of virtual and augmented reality. Augmented reality is what you're eventually going to get with some future version of the smart glasses, where you're primarily seeing the world, but you can put holograms in it. So we'll have a future where you're going to walk into a room and there are going to be as many holograms as physical objects. Just think about all the paper, the art, physical games, media, your workstation. If we want to refer to, let's say, an MMA fight, we could just bring it up on the table right here and see it, rather than us turning and looking at a screen.
Yeah, I mean pretty much any screen that exists could be a hologram in the future with smart glasses.
Right, there's nothing that physically needs to be there for that when you have glasses that can put a hologram there. And it's an interesting thought experiment to just go around and think about, OK, which of the things that are physical in the world need to actually be physical? Your chair does, right, because you're sitting on it; a hologram isn't going to support you. But that art on the wall doesn't need to physically be there. So I think that's the augmented reality experience we're moving towards. And then we've had these headsets that historically we think about as VR, and that has been this kind of fully immersive experience.
But now we're kind of getting something
that's a hybrid in between the two
and capable of both, which is a headset
that can do both virtual reality
and some of these augmented reality experiences.
And I think that that's really powerful.
Both because you're going to get new applications that allow people to collaborate together. Maybe the two of us are here physically, but someone joins us and it's their avatar there. Or maybe, in some version of the future, you're having a team meeting, and you have some people there physically, some people dialing in who are basically a hologram there virtually, and then also some AI personas that are on your team, helping you do different things, embodied as avatars, sitting around the table meeting with you.
Are people going to be doing first dates while physically separated? I could imagine that some people would, the "is it even worth leaving the house" type of date. And then they find out, and then they meet for the first time.
I mean, maybe. I think dating has physical aspects to it too. But some people might want to know whether or not it's worth the effort to head out; they want to bridge the divide, right? It is possible.
I know some of my friends who are dating basically say that, in order to make sure they have a safe experience on a first date, they'll schedule something shorter, maybe in the middle of the day, like coffee, so that if they don't like the person, they can get out before going and scheduling a dinner or a real full date. So, I don't know, maybe in the future people will have that experience virtually, where it's even easier and lighter-weight and safer, and if you're not having a good experience, you can just teleport out of there and move on.
But yeah, I think this will be an interesting question in the future. There are clearly a lot of things that are only possible physically, or are so much better physically. And then there are all these things that we're building up that can be digital experiences, but it's this weird artifact of how this stuff has been developed that the digital world and the physical world exist on these completely different planes. When you want to interact with the digital world, and we do it all the time, we pull out a small screen, or we have a big screen; we're basically interacting with the digital world through these screens.
But if we fast forward a decade or more, I think one of the really interesting questions is: what is the world that we're going to live in? I think it's increasingly going to be this mesh of the physical and digital worlds, which will allow us to feel that the world we're in is just a lot richer, because there can be all these things that people create that are so much easier to do digitally than physically. But you're going to have a real physical sense of presence with these things, and not feel like interacting with the digital world is taking you away from the physical world, which today is just so much viscerally richer and more powerful. The digital world will sort of be embedded in that, and will feel just as vivid in a lot of ways.
So that's how I always think about it. When you were saying before that you felt like you could look around and see the real room, I actually think there's an interesting philosophical distinction between the real room and the physical room, which historically people would have said are the same thing. But I actually think, in the future, the real room is going to be the combination of the physical world with all the digital artifacts and objects that are in there, which you can interact with and feel present with, whereas the physical world is just the part that's physically there. And I think it's possible to build a real world that's the sum of these two, and that will actually be a more profound experience than what we have today.
I was struck by the smoothness of the interface between the VR and the physical room. Your team had me try, I guess it was an exercise class in the form of a boxing workout; it was essentially like hitting mitts, hitting targets, boxing. Supernatural. Yeah, and it comes at a fairly fast pace that then picks up. It's got some tutorials, it's very easy to use, and it certainly got my heart rate up. I'm in at least decent shape. And I have to be honest: I've never once desired to do any of these on-screen fitness things.
I mean, and I don't want to insult any particular products, but riding a stationary bike while looking at a screen, pretending I'm on a road outside, I can't think of anything worse for me.
Maybe only the leaderboard. OK, maybe I'm just a very competitive person. If you're going to be running on a treadmill, at least give me a leaderboard so I can beat the people who are ahead of me.
I like moving outside, and certainly an exercise class, or aerobics class, as they used to call them.
But the experience I tried today was extremely engaging. I've done enough boxing to at least know how to do a little bit of it, and I really enjoyed it. It gets your heart rate up. And I completely forgot that I was doing an on-screen experience, in part because I believe I was still in that physical room. And I think there's something about the mesh of the physical room and the virtual experience that makes it neither of one world nor the other. I really felt at the interface of those, and certainly got presence, this feeling of forgetting that I was in a virtual experience, and got my heart rate up pretty quickly. We had to stop because we were going to start recording, but I would do that for a good 45 minutes in the morning. And there's no amount of money you could pay me, truly, to look at a screen while pedaling on a bike or running on a treadmill.
So again, bravo. I think it's going to be very useful. It's going to get people moving their bodies more, whereas social media up until now, and a lot of technologies, have been accused of limiting the amount of physical activity that both children and adults engage in. And we know we need physical activity. You're a proponent of, and a practitioner of, physical activity. So is this a major goal of Meta: getting people moving their bodies more, getting their heart rates up, and so on?
I think we want to enable it, and I think it's good, but I think it comes more from a philosophical view of the world. I mean, I don't go into building products to try to shape people's behavior. I believe in empowering people to do what they want and be the best version of themselves that they can be.
So, no agenda.
That said, I do believe that the previous generation of computers were devices for your mind. And I think that we are not brains in tanks. There's sort of a philosophical view of people that says, OK, you are primarily what you think about, or your values, or something. It's like, no, you are that, and you are a physical manifestation; people are very physical. And I think building a computer for your whole body, and not just for your mind, is very fitting with this worldview: that the actual essence of you, if you want to be present with another person, if you want to be fully engaged in an experience, is not just a video conference call that looks at your face and where you can share ideas. It's something that engages your whole body.
So yeah, being physical is very important to me. That's just a lot of the most fun stuff that I get to do. It's a really important part of how I personally balance my energy levels and just get a diversity of experiences, because I could spend all my time running the company, but I think it's good for people to do some different things, and compete in different areas, or learn different things. All of that is good. If people want to do really intense workouts with the work that we're doing with Quest, or with eventual AR glasses, great. But even if you don't want to do a really intense workout, I think just having a computing environment and platform which is inherently physical captures more of the essence of what we are as people than any of the previous computing platforms we've had to date.
I was even thinking, just take the simple task of getting a better range of motion, a.k.a. flexibility. I could imagine, inside of the VR experience, leaning into a stretch, a standard kind of lunge-type stretch, but actually seeing a meter of whether you're approaching new levels of flexibility in that moment, where it's actually measuring some kinesthetic elements of the body and the joints. Whereas normally you might have to do that in front of a camera, which would give you the data on a screen that you'd look at afterwards, or hire an expensive coach. Or looking at form in resistance training: you're actually lifting physical weights, but it's telling you whether or not you're breaking form. I mean, there's just so much that could be done inside of there, and then my mind just starts to spiral into, like, wow, this is very likely to transform what we think of as, quote unquote, exercise.
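A hypothetical sketch of the flexibility meter imagined here: track the peak joint angle reached in each stretching session and report progress. Where the angle data would come from, headset cameras or body-pose estimation, is assumed; nothing here describes an existing product.

```python
def session_peak(angles_deg: list[float]) -> float:
    """Deepest joint angle reached during one stretch session."""
    return max(angles_deg)

def progress_report(history: list[float], today_peak: float) -> str:
    """Compare today's peak range of motion against the previous best."""
    best = max(history) if history else 0.0
    if today_peak > best:
        return f"New range of motion: {today_peak:.1f} deg (previous best {best:.1f})."
    return f"{today_peak:.1f} deg today; your best is {best:.1f}."

# progress_report([82.0, 85.5], 87.2) -> "New range of motion: 87.2 deg (previous best 85.5)."
```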
Yeah, I think so. I think there's still a bunch of questions that need to get answered. I don't think most people are going to necessarily want to install a lot of sensors or cameras to track their whole body. So we're just, over time, getting better at doing very good hand tracking from the sensors that are on the headsets. For example, we have this research demo where, just with the hand tracking from the headset, you can project a little keyboard onto your table and type, and people type like a hundred words a minute with that.
With a virtual keyboard.
Yeah. And we're starting to be able, using some modern AI techniques, to simulate and understand where your torso is positioned, even though the headset can't always see it. It can see it a bunch of the time, and if you fuse together what you do see with the accelerometer data and an understanding of how the thing is moving, you can estimate what the body position probably is.
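The fusion Mark describes, camera estimates when the torso is visible and inertial dead reckoning when it isn't, resembles the classic complementary-filter pattern. Here is a toy version with an illustrative blend weight; real systems use learned models and far richer state than a single angle.

```python
def fuse_torso_angle(prev_angle: float, imu_delta: float,
                     vision_angle: float | None, vision_weight: float = 0.9) -> float:
    """Estimate a torso angle (degrees) each frame.

    imu_delta: change since the last frame, integrated from accelerometer/gyro.
    vision_angle: camera-based estimate, or None when the torso is out of view.
    """
    predicted = prev_angle + imu_delta       # dead-reckon from inertial sensing
    if vision_angle is None:
        return predicted                     # out of view: prediction only
    # Torso visible: mostly trust the camera, keep a little of the prediction.
    return vision_weight * vision_angle + (1.0 - vision_weight) * predicted
```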
But some things are still going to be hard, right? You mentioned boxing; that one works pretty well, because we understand your head position, we understand your hands, and now we're increasingly understanding your body position. But let's say you want to expand that to Muay Thai or kickboxing. Legs are a different part of tracking, and that's harder, because they're out of the field of view more of the time.
But there's also the element of resistance, right? You can throw a punch and retract it in shadowboxing, and do that without upsetting your physical balance that much. But if you want to throw a roundhouse kick and there's no one there, the standard thing when you're shadowboxing is you basically do a little 360. But, I don't know, is that going to feel great? I think there's a question about what that experience should be. And then, if you wanted to go even further, if you wanted to get grappling to work, I'm not even sure how you would do that without resistance, without understanding what the forces applied to you would be. And then you get into, OK, maybe you're going to have some kind of body suit that can apply haptics. But I'm not even sure that even a pretty advanced haptic system is going to be good enough to simulate the actual forces that would be applied to you in a grappling scenario.
So this is part of what's fun about technology, though: you keep on getting new capabilities, and then you need to figure out what things you can do with them. So I think it's really neat that we can do boxing, and we can do this Supernatural thing, and there's a bunch of awesome cardio and dancing and things like that. And then there's also still so much more to do that I'm excited to get to over time, but it's a long journey.
And what about things like painting and art and music? I imagine, of course, different mediums. I like to draw with pen and pencil, but I can imagine trying to learn how to paint virtually. And of course, you could print out a physical version of it at the end. This doesn't have to depart from the physical world; it could end in the physical world.
Did you see the piano demo, where either you're there with a physical keyboard or it could be a virtual keyboard, and the app basically highlights which keys you need to press in order to play the song? So it's basically like you're looking at your piano and it's teaching you how to play a song that you choose.
An actual piano. Yeah. But it's illuminating certain keys in the virtual space.
And it could either be a virtual piano or keyboard, if you don't have one, or it could use your actual keyboard. So, yeah.
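The core logic of a demo like that can be sketched simply: map the next notes of the chosen song to key positions and hand those to the overlay renderer. The key numbering below follows the standard 88-key layout; the rendering step is a placeholder, since the actual app's API is not public.

```python
# Standard 88-key numbering (A0 = key 1, so middle C, C4, is key 40).
NOTE_TO_KEY = {"C4": 40, "D4": 42, "E4": 44, "F4": 45, "G4": 47, "A4": 49, "B4": 51}

def keys_to_highlight(upcoming_notes: list[str], lookahead: int = 3) -> list[int]:
    """Return the key indices the overlay should light up for the next few notes."""
    return [NOTE_TO_KEY[n] for n in upcoming_notes[:lookahead] if n in NOTE_TO_KEY]

# keys_to_highlight(["E4", "D4", "C4", "D4"]) -> [44, 42, 40]; the headset would then
# draw a highlight over each physical (or virtual) key at those positions.
```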
I think stuff like that is going to be really fascinating for education and expression.
And, excuse me, for broadening access to expensive equipment. A piano is no small expense.
Exactly.
And it takes up a lot of space and needs to be tuned.
Yeah.
You can think of all these things: the kid whose family has very little income could learn to play on a virtual piano at much lower cost.
Totally.
Yeah, it gets back to the question I was asking before, the thought experiment of how many of the things that we physically have today actually need to be physical. The piano doesn't. Maybe there's some premium where it's a somewhat better, more tactile experience to have a physical one. But for people who don't have the space for it, or who can't afford to buy a piano, or just aren't sure they'd want to make that investment at the beginning of learning how to play, I think in the future you'll have the option of just buying an app or a hologram piano, which will be a lot more affordable. And I think that's going to unlock a ton of creativity too, because instead of the market for piano makers being constrained to a relatively small set of experts who have perfected that craft, you're going to have kids or developers all around the world designing crazy designs for potential keyboards and pianos that look nothing like what we've seen before, but that maybe bring even more joy, or even more fun, in a world where you have fewer of these physical constraints.
So I think there's going to be a lot of wild stuff to explore.
There's definitely going to be a lot of wild stuff to explore. I just had this idea, slash image in my mind, of what you were talking about, merged with our earlier conversation when Priscilla was here. I could imagine a time, not too long from now, when you're using mixed reality to run experiments in the lab: literally mixing virtual solutions, getting potential outcomes, and then picking the best one to actually go do in the real world, which is costly both financially and time-wise.
Yeah, I mean, people are already using VR for surgery and education on it. There's a study that basically tried to do a controlled experiment: people learned how to do a specific surgery through just the normal textbook-and-lecture method, versus showing the knee as a large, blown-up model that people can manipulate and use to practice where they would make the cuts. And the people in that class did better. So yeah, I think it's going to be profound for a lot of different areas.
And a last example that leaps to mind: I think social media and online culture have been accused of creating a lot of real-world, let's call it physical-world, social anxiety for people. But I could imagine practicing a social interaction. A kid that has a lot of social anxiety, or that needs to learn to advocate for themselves, could learn how to do that progressively through a virtual interaction, and then take that to the real world. Because, in my very recent experience today, it's so blended now with real experience that the kid who feels terrified of advocating for themselves, or just talking to another human being or an adult, or being in a new circumstance or a room full of kids, could really experience that in silico first, get comfortable, let the nervous system attenuate a bit, and then take it into the, quote unquote, physical world.
Yeah, I think we'll see experiences like that. I also think that some of the social dynamics around how people interact in this kind of blended digital world will be more nuanced in other ways. So I'm sure there will be new anxieties that people develop, too, just like teens today need to navigate dynamics around texting constantly that we just didn't have when we were kids. So I think it will help with some things, and I think there will be new issues that hopefully we can help people work through too. But overall, yeah, I think it's going to be really powerful and positive.
Let's talk about the glasses.
Sure.
This was wild.
Yeah.
I put on a pair of Ray-Bans. I like the way they look. They're clear. They look like any other Ray-Ban glasses, except that I could call out to the glasses. I could just say, hey Meta, I want to listen to the Goldberg Variations, by Bach. And Meta responded, and no one around me could hear, but I could hear with exquisite clarity. And by the way, I'm not getting paid to say any of this; I'm just still blown away by it. Folks, I want a pair of these very badly. I could hear, "OK, I'm selecting that music now," and then I could hear it in the background, but then I could still have a conversation. So this was neither headphones-in nor headphones-out. And I could say, "Wait, pause the music," and it would pause. And the best part was, I didn't have to leave the room mentally. I didn't even have to check a phone. It was all interfaced through this very local environment in and around the head.
And as a neuroscientist, I'm fascinated by this, because of course all of our perceptions, auditory, visual, et cetera, occur inside the casing of this thing we call a skull. But maybe you could comment on the origin of that design for you, the ideas behind it, and where you think it could go, because I'm sure I'm just scratching the surface.
The real product that we want to eventually get to is this kind of full augmented reality product, in a stylish and comfortable, normal glasses form factor.
Not a dorky VR headset, so to speak.
No. I mean, the VR headset is a bigger thing on the face, but there's going to be a place for that too, just like you have your laptop and you have your workstation. Or maybe the better analogy is: you have your phone, and you have your workstation. These AR glasses are going to be like your phone, in that you have something on your face, and you will, I think, be able to, if you want, wear it for a lot of the day and interact with it very frequently. I don't think people are going to be walking around the world wearing VR headsets. That's certainly not the future that I'm hoping we get to.
But I do think that there is a place where, for having, because it's a bigger form factor,
it has more compute power.
So just like your workstation or your kind of bigger computer can do more than your phone
can do, there's a place for that.
When you want to settle into an intense task, right, if you have a doctor who's doing
a surgery, I would want them doing it through the headset, not through their phone equivalent or just lower powered glasses.
But just like phones are powerful enough to do a lot of things, the glasses will eventually
get there too.
Now, that said, there's a bunch of really hard technology problems to address in order
to get to this point where you can like put kind of full holograms in the world,
you're basically miniaturizing a supercomputer and putting it into a pair of glasses
so that the pair of glasses still looks stylish and normal. And that's a really hard technology
problem. Making things small is really hard. A holographic display is, you know, different from what our industry has been optimized for, for 30 or 40 years now, building screens.
There's like a whole kind of industrial process around that that goes into phones and TVs
and computers and like increasingly so many things that have different screens. Like there's a whole
pipeline that's gotten very good at making that kind of screen.
And the holographic displays are just a completely different thing, because it's not a screen.
It's a thing that you can shoot light into through a laser or some other kind of projector,
and it can place that as an object in the world.
So that's going to need to be this whole other industrial process that gets built up to do that in an efficient way. So, all that said, we're
basically taking two different approaches towards building this at once. One is we are
trying to keep in mind the long-term thing, and it's not super far off. I think within a few years we'll have something that's sort of a first version of this full vision that I'm talking about.
We have something that's working internally
that we use as a dev kit.
But that's kind of a big challenge.
It's gonna be more expensive,
and it's harder to get all the pieces working.
The other approach has been, all right, let's start with what
we know we can put into a pair of stylish sunglasses today
and just make them as smart as we can.
So for the first version, we did this collaboration with Ray-Ban, because that's, like, well-accepted. You know, these are well-designed glasses, they're classic, people have used them for decades.
For the first version, we got a sensor on the front, so you could capture moments without having to take your phone out of your pocket.
So you got photos and videos, you had the speaker and the microphones, you can listen to music.
You could communicate with it.
But it was, you know, that was sort of the first version of it.
We had a lot of the basics there, but we saw how people used it and we tuned it.
We made the camera, like, twice as good for this new version.
The audio is a lot crisper for the use cases that we saw that people actually used, which
is some of it is listening to music, but a lot of it is like people want
to take calls on their glasses. They want to listen to podcasts, right? The biggest
thing that I think is interesting is the ability to get AI running on it, which doesn't just run on the glasses; it kind of proxies through your phone. But with all the advances
in LLMs, and we talked about this a bit in the first part of the conversation,
having the ability to have your meta AI assistant that you can just talk to and basically ask
any question throughout the day is, I think, going to be really fascinating. And like you were saying about how we process the world
as people, eventually I think you're
going to want your AI assistant to be able to see what you see
and hear what you hear.
Maybe not all the time, but you're going
to want to be able to tell it to go into a mode where it
can see what you see and hear what you hear.
And what's the kind of device design
that best positions an AI assistant
to be able to see what you see and hear what you hear
so it can best help you?
Well, that's glasses, right?
Glasses basically have a sensor positioned to see what you see and a microphone close to your ears that can hear what you hear.
The other design goal is, like you said,
to keep you present in the world, right?
So I think one of the issues with phones
is they kind of pull you away from
what's physically happening around you.
And I don't think that the next generation
of computing will do that.
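One way to picture the "proxies through your phone" split Mark mentioned a moment ago: the glasses do lightweight capture, while the phone, which has the radio and the battery budget, relays the request to a hosted assistant. Here is a rough sketch of that division of labor; the endpoint, payload shape, and function names are all hypothetical, not a real Meta API.

```python
import json
import urllib.request

# Hypothetical sketch of the glasses -> phone -> cloud relay described above.
# The endpoint and payload shape are invented for illustration only.

ASSISTANT_ENDPOINT = "https://example.com/assistant"  # placeholder URL

def glasses_capture_query() -> str:
    """On-glasses step: capture the spoken query (stubbed as text here)."""
    return "Hey Meta, play the Goldberg Variations"

def phone_relay(query: str, endpoint: str = ASSISTANT_ENDPOINT) -> str:
    """On-phone step: forward the query to the hosted assistant and return
    the reply for the glasses to speak into the wearer's ear."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"query": query}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["reply"]
```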
I'm just chuckling myself because I have a friend.
He's a very well-known photographer
and he was laughing about how people go to a concert
and everyone's filming the concert on their phone.
So that they can be the person that posted the thing,
but like there are literally millions
of other people who posted the exact same thing,
but somehow it feels important to post our unique experience. With glasses, that gap would essentially be smoothed over completely.
Yeah, totally.
You can just worry about it later, download it.
There are issues, I realize, with glasses, because they are so seamless with everyday experience. Even though you and I aren't wearing them now, it's very common for people to wear glasses, which raises issues of recording and consent.
Yeah, like when I go into a locker room at my gym, that's, you know, my time, and I'm assuming that the people with glasses aren't filming. Whereas right now, because there's a sharp transition when there's a phone in the room and someone's pointing it, people generally say no phones and no recording in locker rooms. So that's just one instance. I mean, there are other instances.
We have the whole privacy light. I don't know, did you get a chance to explore that?
Yeah. So it's anytime that it's active, that the camera sensor is active,
it's basically pulsing a bright white light.
Got it.
Which is, by the way, more than cameras do when someone's holding a camera.
Yeah, I mean, phones aren't showing a bright light when you're taking a photo.
So, you know, people oftentimes will pretend they're texting when they're actually recording. I actually saw an instance of this in a barbershop once, where someone was recording while pretending that they were texting. And a pretty intense interaction ensued, and it was like, wow, you know, it's pretty easy for people to feign texting while actually recording.
Yeah, so I think when you're evaluating a risk with a new technology, the bar shouldn't
be, is it possible to do anything bad? It's, does this new technology make it easier to do
something bad than what people already had? And I think because you have this privacy light that is just broadcasting to everyone around you, "Hey, this thing is recording now," that actually makes it less discreet to do it through the glasses than what you could already do with a phone, which I think is basically the bar that we wanted to get over from a design perspective.
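To make that design bar concrete, here is a minimal sketch of the invariant Mark is describing: capture cannot start unless the indicator light is already on, and the light stays on for as long as any capture session is active. The class names are hypothetical stand-ins, not Meta's actual firmware.

```python
# Minimal sketch of the privacy-light invariant: light first, then sensor.
# PrivacyLED and CameraSensor are hypothetical stand-ins, not a real API.

class PrivacyLED:
    def __init__(self):
        self.pulsing = False

    def start_pulsing(self):
        self.pulsing = True   # real hardware would drive a pulse pattern

    def stop(self):
        self.pulsing = False

class CameraSensor:
    def __init__(self, led: PrivacyLED):
        self._led = led
        self._sessions = 0

    def start_capture(self):
        self._led.start_pulsing()            # light comes on before capture
        assert self._led.pulsing, "refuse to record if the LED is not on"
        self._sessions += 1

    def stop_capture(self):
        self._sessions = max(0, self._sessions - 1)
        if self._sessions == 0:              # last session ended
            self._led.stop()

led = PrivacyLED()
camera = CameraSensor(led)
camera.start_capture()
print(led.pulsing)   # True while recording
camera.stop_capture()
print(led.pulsing)   # False once recording stops
```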
Thank you for pointing out that it has the privacy light.
I didn't get long enough in the experience to explore all the features. But again, I can think of a lot of uses: being able to look at a restaurant from the outside and see the menu, or get a sense of how crowded it is. And as much as I love, I don't want to call anyone out, let's just say app-based map functions that allow you to navigate, where the audio is okay and it's nice to have a conversation with somebody on the phone or in the vehicle, it would just be great if the road were traced to show where I should turn.
Yeah, absolutely.
These kinds of things seem like they're gonna be straightforward for a Meta engineer to create.
Yeah, and some of the future versions will also have the holographic display, where it can show you the directions. But I think overall there will basically just be different price points that pack different amounts of technology. The holographic display part, I think, is going to be more expensive than doing one that just has the AI but is primarily communicating with you through audio.
So, I mean, the current Ray-Ban Meta glasses are $299. I think when we have one that has a display in it,
it'll probably be some amount more than that,
but it'll also be more powerful.
So I think that people will choose what they want to use
based on what the capabilities are that they want
and what they can afford.
But a lot of our goal in building things is, we try to make things that can be accessible to everyone.
Our game as a company isn't to build things and then charge a premium price for it.
We try to build things that then everyone can use
and then become more useful
because a very large number of people are using them.
So it's just a very different approach.
We're not like Apple or some of these companies that just try to make something and then sell
it for as much as they can, which I mean, they're a great company.
So I mean, I think that that model is fine too.
But our approach is going to be, we want stuff that can be affordable so that way everyone
in the world can use it.
Along the lines of health, I think the glasses will also potentially solve a major problem
in a real way, which is the following.
For both children and adults, it's very clear that viewing objects, in particular screens,
up close for too many hours per day leads to myopia.
Literally, a change in the length of the eyeball, and nearsightedness.
On the positive side, we know, based on some really large clinical trials, that kids and adults who spend two hours a day or more out of doors don't experience that, and may even reverse their myopia.
And it has something to do with exposure to sunlight, but it has a lot to do with long
viewing, viewing things at a distance greater than three or four feet away. And with the glasses, I realize
one could actually do digital work out of doors. It could measure and tell you how much time you've
spent looking at things up close versus far away. I mean, this is just another example that leaps to
mind. But in accessing the visual system, you're effectively
accessing the whole brain, because the eyes are the only two bits of brain that are outside the
cranial vault.
It just seems like putting technology right at the level of the eyes, seeing what the eyes see, has just got to be the best way to go.
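As a sketch of the kind of tally being imagined here, suppose hypothetical glasses reported one estimated fixation distance per second; the threshold and names below are illustrative, not a real product API.

```python
# Hypothetical near/far viewing tally, assuming one fixation-distance sample
# (in meters) per second. The ~1 m cutoff matches the three-to-four-feet
# distance mentioned above.

NEAR_THRESHOLD_M = 1.0

def tally_viewing(fixation_distances_m):
    """Split per-second fixation samples into near work vs. distance viewing."""
    near = sum(1 for d in fixation_distances_m if d < NEAR_THRESHOLD_M)
    return {"near_seconds": near,
            "far_seconds": len(fixation_distances_m) - near}

# Example: 100 minutes of close screen work, then 20 minutes of far viewing.
samples = [0.4] * 6000 + [5.0] * 1200
print(tally_viewing(samples))  # {'near_seconds': 6000, 'far_seconds': 1200}
```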
Yeah, I think, well, multimodal, right?
I think you want the visual sensation, but you also want text or language.
Sure.
I think it's...
But that all can be brought to the level of the eyes.
What do you mean by that?
Well, I mean, I think what we're describing here is essentially taking the phone, the computer,
and bringing it all to the level of the eyes.
And of course, one would like more...
Physically at your eyes.
Yeah.
And one would like more kinesthetic information, as you mentioned before, where the legs are, maybe even lung function. Hey, have you taken enough steps today? But all of that, if it can be figured out by the phone, can be figured out by the glasses. But there's additional information there, such as what are you focusing on in
your world? How much of your time is spent looking at things far away versus up close?
How much social time did you have today? It's really tricky to get that with a phone.
With my phone right in front of us, as if we were at a standard lunch nowadays, certainly in Silicon Valley, we're peering at our phones. I mean, how much real, direct attention was on the conversation at hand versus something else? You can get at where you're placing your attention by virtue of where you're placing your eyes.
And I think that information is not accessible
with a phone in your pocket or in front of you.
I mean, a little bit, but not nearly as rich
and complete information as one gets
when you're really pulling the data from the level
of vision and what kids and adults are actually looking
at and attending to.
Yeah, yeah.
So it's extremely valuable. You get autonomic information, size of the pupils, so you get information about internal states.
I mean, there are internal sensors and external ones. The sensor on the Ray-Ban Meta glasses is external, right? So it basically allows, sorry, the AI system to see what you're seeing. There's a separate set of things, which are eye tracking,
which are also very powerful for enabling a lot of interfaces.
If you want to just look at something and select it
by looking at it with your eyes rather than having to drag a controller over,
or pick up a hologram or anything like that, you can do that with eye tracking. So that's a pretty profound and cool experience too, as well as just kind of understanding what you're looking at, so that way you're not wasting compute power drawing pixels in high resolution in the part of the world that you're not looking at, which is going to be your peripheral vision.
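What Mark is describing is commonly called foveated rendering: spend full resolution only where the eye tracker says you are looking, and shade the periphery coarsely. A toy sketch follows; the radii and rates are illustrative, not values from any real headset.

```python
import math

# Toy gaze-contingent ("foveated") rendering: full shading rate at the gaze
# point, coarser toward the periphery. Numbers are illustrative only.

def shading_rate(pixel_xy, gaze_xy, fovea_radius_px=200):
    """1.0 = shade every pixel; 0.25 = shade one pixel in four and reuse it."""
    dist = math.dist(pixel_xy, gaze_xy)
    if dist <= fovea_radius_px:
        return 1.0          # fovea: full resolution
    if dist <= 3 * fovea_radius_px:
        return 0.5          # near periphery: half rate
    return 0.25             # far periphery: quarter rate

print(shading_rate((960, 540), (960, 540)))  # 1.0 at the gaze point
print(shading_rate((0, 0), (960, 540)))      # 0.25 deep in the periphery
```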
So yeah, all of these things,
there are interesting design and technology trade-offs, where if you want the external sensor,
that's one thing, if you also want the eye tracking,
now that's a different set of sensors,
each one of these consumes compute,
which consumes battery, and they take more space, so it's like, where are the eye-tracking sensors going to be? You want to make sure that the rim of the glasses is actually quite thin, because there's a kind of threshold for how thick glasses can be before they look more like goggles than glasses.
There's this whole space, and I think people are gonna end up choosing
what product makes sense for them.
Maybe they want something that's more powerful
that has more of the sensors,
but it's gonna be a little more expensive,
maybe slightly thicker,
or maybe you want like a more basic thing
that just looks very similar to what Ray-Ban glasses are
that people have been wearing for decades,
but kind of has AI in it,
and you can capture moments without having to take
your phone out and send them to people.
In the latest version, we got the ability to live stream. I think that's pretty crazy: going back to your concert case, or whatever else you're doing, you can be doing sports or just watching your kids play something, and you can be live streaming it to your kind of family group so people can see it. I think that stuff is, I think it's pretty cool that you basically have a normal-looking pair of glasses at this point that can kind of live stream and has, like, an AI assistant. So this stuff is
making a lot faster progress in a lot of ways
than I would have thought.
And I don't know, I think people are going to like this version,
but there's a lot more still to do.
I think it's super exciting.
And I see a lot of technologies.
This one's particularly exciting to me
because of how smooth the interface is.
And for all the reasons that you just mentioned,
what's happening with and what can we expect around AI interfaces
and maybe even avatars of people within social media? Are we not far off from a day where there are multiple versions of me and you on the internet? For instance, I get asked a lot
of questions. I don't have the opportunity to respond to all those questions, but with things
like ChatGPT, people are trying to generate answers to those questions on other platforms.
Will I have the opportunity to soon have an AI version of myself where people can ask
me questions about like what I recommend for sleep and circadian rhythm, fitness, mental
health, etc. based on content I've already generated that will be accurate.
So they could just ask my avatar.
Yeah, this is something that I think a lot of creators are going to want that we're trying
to build.
And I think we'll probably have a version of it next year, but there are a bunch of constraints
that I think we need to make sure that we get right.
So for one, I think it's really important that it's not that there's a bunch of versions of you.
It's that if anyone is creating like an AI assistant version of you, it should be something
that you control, right?
I think there are some platforms out there today that just let people, like,
make, I don't know, the AI bot of me or other figures.
And it's like, I don't know. I mean, we've had platform policies, not for decades, but since the beginning of the company at this point, which is almost 20 years, that basically don't allow impersonation. Real identity is, like, one of the core aspects that our company was started on: you want to authentically be yourself.
So, yeah, I think if you're almost any creator, there's just going to be more demand to interact with you than you have hours in the day. So there are people out there who would benefit from being able to talk to an AI version of you, and I think you and other creators would benefit from being able to keep your community engaged and service that demand that people have to engage with you.
But you're going to want to know that that AI kind of version of you or assistant is
going to represent you the way that you would want. And there are a lot of things that are awesome about these modern LLMs, but having perfect
predictability about how it's going to represent something is not one of the current strengths.
So I know there's some work that needs to get done there.
I don't think it needs to be 100% perfect all of the time, but you need to have very good
confidence, I would say, that it's going to represent you the way that you'd want for you to want to
turn it on, which again, you should have control over whether you turn it on.
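One plausible way to build the grounded creator avatar being discussed, so that it answers only from content the creator has actually published, is retrieval-augmented generation. Below is a minimal, dependency-free sketch; the lexical-overlap similarity is a toy stand-in for a real embedding model, and the wiring to an actual LLM is left abstract.

```python
# Sketch of retrieval-augmented generation (RAG) for a creator avatar: the
# answer is constrained to excerpts the creator actually published. The
# similarity function is a toy stand-in for a real embedding model.

def similarity(a: str, b: str) -> float:
    """Toy lexical-overlap (Jaccard) score standing in for embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k transcript chunks most relevant to the question."""
    return sorted(chunks, key=lambda c: similarity(question, c),
                  reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Constrain a (hypothetical) LLM to answer only from retrieved excerpts."""
    context = "\n".join(retrieve(question, chunks))
    return ("Answer ONLY from these excerpts of the creator's own episodes; "
            "if they don't cover the question, say so.\n"
            f"Excerpts:\n{context}\n\nQuestion: {question}")

episodes = [
    "Morning sunlight viewing helps anchor the circadian rhythm.",
    "Zone 2 cardio supports cardiovascular fitness.",
]
print(build_prompt("What do you recommend for circadian rhythm?", episodes))
```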
So we wanted to start in a different place, which I think is a somewhat easier problem, which is creating new characters as AI personas, so that way it's not modeled on a real person. We built one of the AIs as, like, a chef, and they can help you come up with things that you could cook and can help you cook them.
There are a couple that are interested in different types of fitness, that can help you plan out your workouts or help with recovery or different things like that. There's an AI that's focused on DIY crafts.
There's someone who's a travel expert that can help you make travel plans or give you
ideas.
But the key thing about all of these is they're not modeled off of existing people, so they don't have to have, kind of, a hundred percent fidelity, like making sure that they never say something that, you know, a real person they're modeled after would never say, because they're just made-up characters.
So I think that that is, that's a somewhat easier problem.
We actually got a bunch of different kind of well-known people
to play those characters,
because we thought that would make it more fun.
So, like, Snoop Dogg is the dungeon master, so you can drop him into a thread and play text-based games,
and it's just like, I do this with my daughter
when I tuck her in at night, and she just loves,
like storytelling, right?
And it's like Snoop Dogg is the dungeon master.
We'll come up with, like, here's what's happening next.
And she's like, okay, I turn into a mermaid.
And then I like swim across the bay and I go and find the treasure chest and unlock it.
And then Snoop Dogg always has the next iteration of the story. So the stuff is fun, but it's not actually Snoop Dogg. He's just kind of the actor; he's playing the dungeon master, which makes it more fun.
So I think that's probably the right place to start: you can kind of build versions of these characters that people can interact with, doing different things.
But I think what you wanna get over time
is to the place where any creator or any small business can very easily
just create an AI assistant that can represent them and interact with their community
or customers if you're a business.
And basically just help you grow your enterprise.
So I don't know, I think that's going to be cool, but I think this is, it's a long-term
project. I think we'll have more progress on it to report on next year,
but I think that's coming.
I'm super excited about it, because we hear a lot about the downsides of AI. I mean, I think people are now coming around to the reality that AI is neither good nor bad; it can be used for good or bad. And there are a lot of life-enhancing spaces where it's gonna show up and really, really improve the way that we engage socially and what we learn, and mental health and physical health don't have to suffer and in fact can be enhanced by the sorts of technologies we've been talking about.
So I know you're extremely busy.
I so appreciate the large amount of time
you've given me today to sort through all these things.
That was fun.
And to talk with you and Priscilla and to hear what's happening and where things are headed.
The future certainly is bright.
I share in your optimism and it's been only strengthened by today's conversation.
So thank you so much and keep doing what you're doing and on behalf of myself and everyone listening.
Thank you because regardless of what people say, we all use these platforms
excitedly and it's clear that there's a ton of intention and care and thought about, you know,
what could be in the positive sense and that's really worth highlighting.
Awesome. Thank you. I appreciate it.
Thank you for joining me for today's discussion
with Mark Zuckerberg and Dr. Priscilla Chan.
If you're learning from and or enjoying this podcast,
please subscribe to our YouTube channel.
That's a terrific zero cost way to support us.
In addition, please subscribe to the podcast
on both Spotify and Apple.
And on both Spotify and Apple,
you can leave us up to a five star review. Please also check out the sponsors mentioned at the beginning and throughout today's episode.
That's the best way to support this podcast. If you have questions for me or comments about
the podcast or guests that you'd like me to consider hosting on the Huberman Lab podcast,
please put those in the comment section on YouTube. I do read all the comments.
Not during today's episode, but on many previous episodes of the Huberman Lab podcast, we discuss supplements.
While supplements aren't necessary for everybody, many people derive tremendous benefit from them,
for things like enhancing sleep, hormone support, and improving focus.
If you'd like to learn more about the supplements discussed on the Huberman Lab podcast, you can go to Live Momentous, spelled O-U-S, so livemomentous.com slash Huberman.
If you're not already following me on social
media, it's Huberman Lab on all social media platforms. So that's Instagram, Twitter,
now called X, Threads, Facebook, LinkedIn, and on all those places, I discuss science
and science-related tools, some of which overlaps with the content of the Huberman Lab
podcast, but much of which is distinct from the content on the Huberman Lab podcast. So
again, it's Hubermanlab on all social media platforms.
If you haven't already subscribed to our monthly
Neural Network newsletter,
the Neural Network newsletter is a completely zero-cost
newsletter that gives you podcast summaries
as well as toolkits in the form of brief PDFs.
We've had toolkits related to optimizing sleep,
to regulating dopamine, deliberate cold exposure,
fitness, mental health, learning, and neuroplasticity,
and much more. Again, it's completely zero cost to sign up. You simply go to HubermanLab.com,
go over to the menu tabs, scroll down to newsletter, and supply your email. I should emphasize
that we do not share your email with anybody. Thank you once again for joining me for today's
discussion with Mark Zuckerberg and Dr. Priscilla Chan. And last but certainly not least, thank you for your interest in science.