a16z Podcast: When Humanity Meets A.I.
Episode Date: June 28, 2016, with Fei-Fei Li (@drfeifei), Frank Chen (@withfries2), and Sonal Chokshi (@smc90). Who has the advantage in artificial intelligence — big companies, startups, or academia? Perhaps all three, especially as they work together when it comes to fields like this. One thing is clear though: A.I. and deep learning is where it’s at. And that’s why this year’s newly anointed Andreessen Horowitz Distinguished Visiting Professor of Computer Science is Fei-Fei Li [who publishes under Li Fei-Fei], associate professor at Stanford University. Bridging entrepreneurs across academia and industry, we began the a16z Professor-in-Residence program just a couple years ago (most recently with Dan Boneh and beginning with Vijay Pande). Li is the Director of the Stanford Vision Lab, which focuses on connecting computer vision and human vision; is the Director of the Stanford Artificial Intelligence Lab (SAIL), which was founded in the early 1960s; and directs the new SAIL-Toyota Center for AI Research, which brings together researchers in visual computing, machine learning, robotics, human-computer interaction, intelligent systems, decision making, natural language processing, dynamic modeling, and design to develop “human-centered artificial intelligence” for intelligent vehicles. Li also co-created ImageNet, which forms the basis of the Large Scale Visual Recognition Challenge (ILSVRC) that continually demonstrates drastic advances in machine vision accuracy. So why now for A.I.? Is deep learning “it”… or what comes next? And what happens as A.I. moves from what Li calls its “in vitro phase” to its “in vivo phase”? Beyond ethical considerations — or celebrating only “geekiness” and “nerdiness” — Li argues we need to inject a stronger humanistic thinking element to design and develop algorithms and A.I. that can cohabitate with people and in social (including crowded) spaces. All this and more on this episode of the a16z Podcast.
Transcript
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.
Hi, everyone. Welcome to the a16z Podcast. I'm Sonal, and I'm here today with a16z partner Frank Chen. And we are interviewing our newest professor in residence. This is actually the third year: the first year we had Vijay Pande, who's now the general partner on our bio fund, and then we had Dan Boneh. And now we are so pleased to welcome Dr. Fei-Fei Li, who is the director of the Stanford AI Lab, the Stanford-Toyota AI Center, and the Stanford Computer Vision Lab, which is pretty much where the most important work is happening. At least we think AI is the white-hot center of both a lot of startup activity as well as academic research. And Fei-Fei, why in the world has it gotten so hot again?
From my perspective, AI has always been hot.
So, AI is a discipline about 60 years old.
In the past 60 years, I call that the in vitro AI time,
where AI was developed in the laboratories and mostly in research centers.
We were laying down mathematical foundations of AI.
We were formulating the questions of AI.
and we were testing out the prototypes of AI algorithms.
But now going forward, we're entering what I call the AI in vivo time,
in which AI is entering real life.
So why now? What's triggering the switch from in vitro to in vivo?
I think several things are happening.
First is that AI's techniques have come of age.
But what's driving that?
There are two more very important factors.
One is the big data contribution to AI.
It's, you know, the information age: the Internet age has brought us big data, and now it's even boosted by trillions of sensors everywhere.
And the third factor that's contributing to this is the hardware, the computing hardware,
the advance of the CPUs, of the GPUs, and the computing clusters.
So the convergence of, I'd say, mathematical foundations and statistical machine learning tools,
the big data, and the hardware has created this historical moment for AI.
Why don't we unpack those in turn?
Because I think each one of them is an interesting trend in itself.
So when we talk about hardware, we have CPUs, we have GPUs.
So it turns out deep learning is great to do on GPUs because it's linear algebra and parallelizable.
Are we going to see deep learning chips?
I think so and I hope so.
What would deep learning chips look like?
Obviously, the ability to do much more parallelization, but what does it actually look like?
Is it like what's happening with Nvidia's chips right now or something different?
Nvidia is definitely one of the pioneers in deep learning chips, in the sense that their GPUs are highly parallel and can handle highly parallelizable operations. And as it turns out, much of the internal operation of a deep learning algorithm, which technically we call neural networks or convolutional neural networks, involves a lot of repeated computation that can be done concurrently. So the GPUs have really contributed a lot to speeding up the computation, because this can be done in parallel.
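To make that concrete, here is a minimal NumPy sketch (an illustration, not code from the episode) of why convolution maps so well onto parallel hardware: every output pixel is an independent dot product, so the whole layer collapses into one big matrix multiply of exactly the kind GPUs are built for.

```python
import numpy as np

# A single-channel 2D convolution written as one large matrix multiply
# ("im2col"): each output pixel is an independent dot product, so all of
# them can be computed concurrently -- which is why GPUs speed this up.

def conv2d_as_matmul(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # Gather every kh x kw patch into one row of a matrix.
    patches = np.stack([
        image[i:i + kh, j:j + kw].ravel()
        for i in range(oh) for j in range(ow)
    ])                                      # shape: (oh * ow, kh * kw)
    # One matrix-vector product = thousands of independent dot products.
    return (patches @ kernel.ravel()).reshape(oh, ow)

image = np.random.rand(64, 64)
kernel = np.random.rand(3, 3)
print(conv2d_as_matmul(image, kernel).shape)  # (62, 62)
```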
Well, GPUs are wonderful for training the deep learning algorithms, but I think there is still a lot of space in rapid testing, or inference-time chips, that can be used for recognition, you know, in devices, in embedded devices.
So I see there is a trend coming up in deep learning chips.
So more specialized hardware, dedicated to that.
Yeah, and we've already seen startups do it, like Nervana.
Obviously, Google announced the Tensor Processing Unit, right?
So they've got dedicated silicon as well.
So GPU to TPU basically?
Yeah, exactly.
Which is once you know that you're going to do something over and over again, then you want it in silicon, both for the performance and then very importantly on the embedded side for power consumption, which is you want your iPhone eventually to be able to do this.
But I still think, Frank, that this is a little bit... I wouldn't say it's too early, but I think we're still in the exploratory stage, because the algorithms are not matured enough yet. There is still a lot of exploration about how to do it in the best way.
So, you know, like this year at ICLR, one of the top deep learning conferences, one of the best papers is a particular work coming out of Stanford, not my lab, actually, somebody else's lab, Professor Bill Dally's lab, where they're exploring a sparse algorithm that can enable a specific design of a chip. So this conjunction of improving the algorithm in order to also design an innovative chip is still happening right now.
Is that a new thing?
You mean like algorithm driving the design of the chips?
Sort of the chicken-and-egg thing of what comes first.
The chip design is already so complicated
that you have to do it with algorithms.
Humans can't actually lay out chips.
I don't mean algorithms that design chips. I thought what you were saying was designing the chip for a particular type of almost universal algorithm, which is how I heard it.
It is designing the chip for a type of algorithm, but it's a family of algorithms.
Your argument is that because we're not sure what the winning algorithms are going to be,
we're still in this very productive period where we're trying lots and lots of algorithms.
It might be too early to design chips because to put something in hardware,
it's obviously incredibly expensive to get to an ASIC.
It's $50 million to tape out.
And so unless you're sure you know what algorithms are going to run, you can't optimize the chips for them.
Yes, and actually I think it's really important that this is happening right now. This R&D has to happen concurrently. It's just like Sonal said: there's a chicken-and-egg dynamic here, where algorithms affect the way chips are designed, but the constraints of the chips would in turn affect the algorithms. I think this is the time to explore this, this is the time to devote resources; of course, in terms of business model, one has to be careful.
So the second thing, another of the three things that you mentioned, was that we've laid the mathematical
foundations for artificial intelligence. And I want to come back to this idea of, look, the hottest
thing right now is deep neural networks. But over the 60 years of AI research, we've actually
used many, many different techniques, right? Logical programming. We've used planning algorithms.
We've tried to implement planning algorithms as search algorithms. And so is deep learning it? Is this
what the community has been waiting for, or is this just, okay, it's hot now, but there's
going to be something else later, too?
I get this question a lot: is deep learning the answer to it all?
So first of all, I'm very happy you actually brought up other algorithms and tools.
So if you look at AI's development, in the very early Minsky and McCarthy days, they used a lot of, you know, first-order logic and expert systems, and those were very much driven by cognitive designs of rules.
But what really, I think, was the first AI spring phase is the blossoming of machine
learning, statistical machine learning algorithms.
We're looking at, you know, boosting algorithms, Bayesian nets, graphical models, support
vector machines, regression algorithms, as well as neural networks.
So that whole period, about 20 or 30 years of blossoming of machine learning algorithms, laid the statistical machine learning foundation for today's AI, and we shouldn't overlook that. In fact, many, many industry applications today still use some of the most powerful machine learning algorithms that are not deep learning.
Deep learning is not the newest thing, either. It was actually developed in the '60s and '70s by people like Kunihiko Fukushima, then carried forward by Geoff Hinton and Yann LeCun and their colleagues.
I think there are some really powerful ingredients in the neural network architecture. It is a very high-capacity model. It can approximate almost any function, and it can do end-to-end training that takes data all the way to the task objective and optimizes on that.
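As a rough illustration of what "end-to-end training against a task objective" means in code, here is a minimal PyTorch sketch (the model, data, and hyperparameters are placeholders, not anything from Li's lab):

```python
import torch
import torch.nn as nn

# Minimal end-to-end supervised training: raw inputs go in one side, a task
# loss comes out the other, and every parameter in between is updated by
# gradient descent against that single objective.

model = nn.Sequential(            # a small, high-capacity function approximator
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: 256 fake "images" with random labels (assumed for the sketch).
x = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # the task objective
    loss.backward()               # gradients flow end to end through the model
    optimizer.step()
```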
But is deep learning it?
I think there are quite a few remaining questions that would challenge today's deep learning architecture and hopefully challenge the entire thinking of AI going forward.
One of the more obvious ones everybody talks about is supervised versus unsupervised training.
This is, I think, so important because a drawback of the current narrative is that it focuses
so much on the supervised cases that we don't have computers that learn the way children learn.
Exactly.
First of all, we don't even know much about how children learn. There is a vast body of education, developmental, and psychology literature, and that's not getting into computer science yet. You know, supervised learning is powerful when data can be annotated, but it gets very, very hairy when we want to apply it in a more realistic training scenario.
For example, if one day a company builds a little robot that's sent to your home, and you want the robot to adapt to the tasks that your family wants done, the best way of training is probably not to open the head of the robot and put in, you know, all the annotated data.
You want to just, you know, show it and talk about what the tasks are, and have the robot observe and learn.
That kind of training scenario we cannot do in deep learning yet.
Right.
But there is more than just supervised training versus unsupervised training.
There is also this whole definition of what is being intelligent, right?
Task-driven intelligence is really important, especially for industry.
You know, tagging pictures, avoiding pedestrians.
Speech recognition, transcribing speech, carrying goods, specific task-driven applications
are part of AI and important.
But there is also the AGI, artificial general intelligence of reasoning, abstraction, communication,
emotional interaction, understanding of intention and purpose, formulation of knowledge,
understanding of context.
All this is still largely unknown
in terms of how we can get it done.
Where would you put creative AI on that list: okay, there are the problems that are ready to be solved, unsupervised, supervised, generalized intelligence, and now also creative intelligence?
Actually, you know, here's one question we should ask ourselves,
what is creativity?
If you look at the four or five matches of AlphaGo, there were multiple moments when AlphaGo made a move and master Lee Sedol was really surprised. And if you look at the Go community, people were just amazed by the kind of creativity AlphaGo has, in terms of making moves that most people cannot think of.
From that point of view, I think we're already seeing creativity.
Part of creativity is just making right decisions in a somewhat unexpected way.
Right.
That's already happening.
I'm actually more interested in the type of creativity where it defies logic because that's an example of logical creativity.
I'm thinking of something like Jackson Pollock.
There is no way a computer is going to waste paint and splatter it because it's the most inefficient, irrational thing to possibly do.
That's the kind of creativity I want to know about. I mean, I'm seeing examples of AI-written short films, AI poetry; in your own lab, there are people who are writing captions for images. That's maybe still mechanistic. And Kevin Kelly would even argue that creativity itself is largely mechanistic, and not as human, as anthropomorphic, as we think it is. But I really mean artistic creativity.
Yeah, that's a great question. So interestingly, you already see some of the deep learning work on transferring artistic style. You can put in a Van Gogh painting and turn a photo into that style. But I agree that's very mechanistic.
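For context, the style transfer work referenced here typically optimizes an image so its CNN features match a photo's content and a painting's style statistics. A rough sketch of the two loss terms involved, with random stand-in tensors in place of features from a real pretrained network:

```python
import torch

# Neural style transfer in outline: optimize an image so its CNN features
# match a photo's *content* and a painting's *style*, where style is
# summarized by Gram matrices of the feature maps.

def gram_matrix(features):
    # features: (channels, height, width) from some CNN layer
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    content_loss = torch.mean((gen_feats - content_feats) ** 2)
    style_loss = torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss

# Stand-in feature maps; in practice these come from a pretrained CNN (e.g. VGG).
gen = torch.randn(64, 32, 32, requires_grad=True)
content = torch.randn(64, 32, 32)
style = torch.randn(64, 32, 32)

loss = style_transfer_loss(gen, content, style)
loss.backward()  # these gradients would be used to update the generated image
```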
The kind of creativity we're talking about blends our logical thinking, emotional thinking, and just, you know, intuitive thinking. And I haven't seen any work today that builds on the kind of mathematical formulation that would enable that.
It comes back to one of the three things that you used to set up why AI is winning now, and that is data: if you're just going to feed the system a bunch of data and then have the neural net train itself, can that ever lead to something that's truly creative, which isn't in the data itself?
Right, exactly.
Or maybe it could, by the way, because maybe it can follow the same type of logical arc of history
where you go through a classic phase, a traditionalist phase, an impressionist phase,
a post-impressionist phase, an abstract phase, and then you actually go through a Jackson-Pollock
kind of modern art phase.
It's like I almost wonder if you could technically train on that type of history of art and see what happens.
I know that's crazy and this is completely abstract, but, you know, and it's not in any way tied to the actual computer science, but just theoretically.
We already have systems that can paint in all of those styles because there was enough in the data so that it could form a classifier that said, here's the style of Van Gogh or here's the style of an impressionist and then we can mimic those styles.
So the question is down that road, down using deep learning, can you ever get to break through new things?
Right.
Generative intelligence. Not general, but generative. Generative. So there's a lot of thinking on that.
We're pretty far from going from impressionism to cubism and all this. But coming back to a more mundane class of work: for example, we are doing computer vision, and some of our recent work is to write a brief caption or a few caption sentences about images. And then the next thing we did was to start doing Q&A about a picture.
And at this point, we start to think: can we actually develop algorithms that are not just learning from the training data, but learning to learn?
Exactly.
Learning to ask the right question.
For example, we just submitted a paper where, if we show the computer a picture and ask a question like "what is the woman doing?", instead of directly having the computer learn to answer, the computer needs to actually ask a series of questions in order to answer it. So the algorithm is not learning to answer the question directly, but learning to explore the potential space and ask the right questions to arrive at the final answer.
So the ability of learning to learn is what we want children to have.
And this is what we're exploring in our algorithms.
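Li doesn't spell out the method here, but purely as a toy illustration of "learning which question to ask next," one simple formulation is to pick the intermediate question expected to reduce uncertainty about the final answer the most. All the questions and probabilities below are made up for the sketch:

```python
import math

# Toy sketch (NOT the method from the paper Li mentions): before answering
# "what is the woman doing?", choose the sub-question whose answer is expected
# to shrink the uncertainty over the final answer the most.

ANSWERS = ["cooking", "reading", "running"]
belief = {a: 1.0 / len(ANSWERS) for a in ANSWERS}      # start out uncertain

# P(answer to sub-question is "yes" | final answer) -- hypothetical numbers.
SUB_QUESTIONS = {
    "is she indoors?":      {"cooking": 0.90, "reading": 0.80, "running": 0.10},
    "is she holding food?": {"cooking": 0.95, "reading": 0.05, "running": 0.05},
}

def entropy(p):
    return -sum(v * math.log(v + 1e-12) for v in p.values())

def posterior(belief, likelihood, observed_yes):
    post = {a: belief[a] * (likelihood[a] if observed_yes else 1 - likelihood[a])
            for a in belief}
    z = sum(post.values())
    return {a: v / z for a, v in post.items()}

def expected_entropy(belief, likelihood):
    p_yes = sum(belief[a] * likelihood[a] for a in belief)
    return (p_yes * entropy(posterior(belief, likelihood, True)) +
            (1 - p_yes) * entropy(posterior(belief, likelihood, False)))

best = min(SUB_QUESTIONS, key=lambda q: expected_entropy(belief, SUB_QUESTIONS[q]))
print("ask first:", best)   # "is she holding food?" is the more informative probe
```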
Okay, so then let's go back for a moment to something you said earlier, Fei-Fei.
I really like how you described these phases, as sort of the in vitro, the laboratory phase, and then the in vivo, the real-life phase.
It's a wonderful way of clumping the work and the moment we're at.
But there's always been industry and lab and company, you know, collaboration since the beginning of computing.
So what is different now that startups can play in this space in vivo?
I think several factors.
One is that the algorithms are maturing to the point that industry and startups can use them. You know, 20 years ago, it was only a few top places in the world, top labs in the world, that held the algorithms that could do some AI tasks. It hadn't percolated to the rest of the industry, the rest of the world. So for any startup, or even a company for that matter, to get their hands on those algorithms was difficult.
But there are also other reasons. Because of the blossoming of the Internet, because of the blossoming of sensing, we now have more use cases.
In order to harness data, we need to manage and understand this information.
This created a huge need for intelligent algorithms to do that.
So that's a use case.
Because of sensing, we start to get into scenarios like self-driving cars, and now suddenly we need to create intelligent algorithms to have the cars drive. So that's what's creating this blossoming, in my opinion.
The fun thing to watch unfold will be startups versus big established labs and companies. On the one hand, we've got George at comma.ai, who built a self-driving car by himself, like one person. And then on the other side, you're involved with the SAIL-Toyota Center for AI Research, which is sort of the big industrial approach to this. So what do you think the relative contributions will be between startups and big organizations? In terms of self-driving cars, who is going to win the self-driving car competition, right?
I think the advantages of the big companies are some of the following.
A company like Toyota, as soon as they are committed to this (and I hope they put cameras in their cars), can already get data very quickly, whereas for a startup, this is a lot more difficult. The data advantage is a big differentiator. Companies like Google, even though they didn't have cars at the beginning, had algorithms. They started early, so they now have both data and algorithms.
They were software companies first, as opposed to a car company trying to become a software company.
Exactly.
The software is such an important part.
They actually have an edge there.
What about startups?
Do they still have an edge?
I think there are a lot of business scenarios that might not be so critical on the path for these big companies, but a startup can come in through a more niche or more vertical space and build up its data and algorithms that way.
Or the startup can do what Mobileye does. Instead of building the entire system, the entire car, they build one critical component that's better than anybody else's. And that's another angle they can come in from.
Your colleague Andrew Ng, who used to be at Stanford and now runs the AI lab at Baidu, has called Tesla's Autopilot system irresponsible because it got into a crash.
Because there are well-known scenarios, basically, where the system wouldn't perform safely.
And so Andrew said, look, it's premature.
Sure. So I wanted to get your thoughts on this, especially since you're involved with the Toyota program.
So when Tesla's Autopilot came out, I watched some of the YouTube videos, and as a mom, I would never want to put my kids or myself into those cars. So from that point of view, I did kind of react, you know, squeamishly to that.
But what I'm hoping here is a really clear communication strategy between the business and the consumer.
I don't have a Tesla, so I don't know what Tesla told the users.
But if the communication is extremely clear about when you should trust the system, when you should use it, and when you shouldn't, then when we get into a situation, you know, where customers are not doing the right thing, who is to blame? And we're getting more and more into that question in AI and ethics: who is to blame? Because every single machine, if used in the wrong way, would have very scary consequences.
I think that's a societal conversation we need to be having.
Yet another example of how technologists and technology need marketing. I mean, we tell our company CEOs all the time about the importance of these functions, and this just continually reinforces that.
Yeah, marketing and training and the right user experience, right?
So this is going to be one of the hardest areas to design for: if we're on this continuum somewhere between intelligence augmentation and full autonomy, how do you design a system so that the driver knows, oh, it's time for you to pay attention again, because I don't know what to do? Does the steering wheel vibrate? Is there an auditory cue? These are going to be tricky systems to design.
I agree. And I think this is actually where there's a really important conversation to be had. Nissan has an anthropologist on staff, Dr. Melissa Cefkin (I forget how to pronounce her last name), whose full-time job is to study these issues in order to build them into the actual design. And it's not just software engineers designing this. There's a conversation to be had. In our Stanford-Toyota center, we have a group of professors working on different projects, and there's one big project that is led by human-computer interaction.
HCI, right?
Yeah, HCI, exactly because of this.
Yeah, it's great to see sort of anthropologists, maybe philosophers, come back into the mix, because with these complex systems, you really want the full 360-degree view of design. It's not just what technology
enables, but what are human expectations around it? And one thing to really keep in mind,
compared to computers, humans are extremely slow computing machines. The information transfer
in our brain is very slow compared to transistors. And on top of that, our motor system, you know, from our brain to our muscles, is even slower. So when we are talking about human-machine interaction and split-second decision making, we should really factor that in.
Yeah.
I mean, it sort of brings to mind the famous trolley problem.
You knew I was going to go there, right?
Because I can't help bringing this up.
Well, and I edited Patrick Lin, who's like a long-time thinker in this space.
Yeah, the YouTube video that Patrick created is great.
So if you want to sort of see the full exposition, go see his YouTube video.
But in summary, the challenge is this: humans are slow. And so if you get into an accident because your response time was so slow, you're definitely not liable, right? Like, you just couldn't react to the car braking in front of you. An autonomous car can actually make a decision. So imagine that you're an autonomous car, and your algorithm needs to decide: all right, the truck in front of me suddenly braked. I could plow into the back of the truck and injure my passengers, or I could swerve to the right and maybe take out a motorcyclist, or I could swerve to the left and hit a minivan. The computer will need to make an explicit decision, and it has the reaction time to actually make an explicit decision. And so if that decision is explicit, can it be held liable? Can the designer of that algorithm be held liable, because it made an explicit decision rather than having a split-second response?
When people bring up the trolley example, it gets really frustrating because it's so abstract. But I actually think that the act of going through this thought process is exactly what gets you to answering the questions that you're asking: about liability, who's accountable, the emotional tradeoffs that we make, and how to understand even our own limitations, as you point out, Fei-Fei.
This actually brings up a topic that I've been really advocating for in AI education and research in the past few years: we need to inject a strong humanistic thinking element into this, because our technology is more and more in vivo. It's touching people's real lives. How do we think about, develop, and design algorithms that can, you know, hopefully better humans' lives, but really have to cohabitate with humans? We need that kind of humanistic thinking.
I actually want to ask about a paper that you guys recently put out; I actually included it in our last newsletter. It was about autonomous cars navigating social spaces. So interesting, because this is lab research in the wild.
Yes.
This is no longer, you know, "we can have these algorithms work perfectly fine." But to have them navigate... I'm thinking of streets like in India, where there will be a cow and like 10 buffaloes behind you in the middle of all this. And I don't know any computer that's accounting for that. So I'd love to hear how you guys came to that paper and some of the thinking.
This is a project where the main PI is Silvio Savarese. It's a social robot they created, called JackRabbot, to honor California's jackrabbit. And the purpose of JackRabbot is to be an autonomous driving robot, or vehicle, that takes care of what we call the last miles of driving, where it tends to be in much more social spaces rather than highways: you know, sidewalks, busy cities, campuses, airports, and all this. So when we look at the problem of the last miles of driving, or just the social space, we quickly realize the problem is, you know, that not only do you have to do everything a highway-driving car needs to do, to understand the layout of the scene, the pedestrians, the lanes, and all this; you also have to navigate in a way that is courteous and acceptable to people.
So one naive solution, people say, well, you know, just make it really low speed and stop
whenever there's people.
We tested that.
If we do that, the robot will never go anywhere because in a very crowded space, there's
always people. If the robot just follows the most naive rule of yielding to people all the time, the robot would just be sitting there at the starting point and not getting anywhere. Frankly, if that robot were in the future San Francisco, it would probably be kicked, too, a couple of times. Maybe people would be really irritated about it, or in New York, they'd be irritated at Times Square about it moving so slowly. So we thought about that. We haven't thought about, you know, what to do yet. We think the robot has to have an SOS kind of call.
So what we want to do is to create a robot that understands human social dynamics, so it can carry out its own task, for example, going from A to B to deliver something on campus, but do it in a courteous way. So we first started to record human behavior data on campus and look at how people gather together when they talk in small groups, or how they walk, especially, you know, at 9 o'clock on the Stanford campus. There are so many students going to so many classes, but they're not moving in a completely random way.
They tend to form interesting patterns depending on the direction they're going.
So we gathered all this data, we fed it into the algorithm, and had the algorithm learn about this, especially by injecting some social rules, such as: people tend to follow others going in the same direction, and you do not break up two people, or several people, when they're talking. So we injected all these rules and learned the right way of doing it.
And then we put it into the algorithm.
And then the algorithm started to learn by itself
how to navigate.
Just to probe on that: how to navigate, not how to learn those social cues itself?
Right.
How to navigate. We give it some social cues, but we only give it high-level cues. The details, for example, the algorithm still has to learn: when I avoid two people talking, how far away do I stay? Do I avoid them by 10 feet or 2 feet? These are the things that are learned, just by observing.
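Li's description suggests a planner that scores candidate moves against social costs whose details (like comfort distances) are learned from observed trajectories. Here is a minimal sketch of that idea; the rules, weights, and distances below are placeholders invented for illustration, not values from the JackRabbot work:

```python
import math

# Toy socially-aware navigation: score each candidate step by progress toward
# the goal plus penalties for violating simple social rules. In the work Li
# describes, quantities like the comfort distance are learned from recorded
# human trajectories; here they are hard-coded placeholders.

COMFORT_DIST = 1.5      # meters to keep from any pedestrian (assumed)
GROUP_PENALTY = 5.0     # cost for cutting between two people talking (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cuts_between(p, a, b, margin=0.5):
    # True if point p lies roughly on the segment between two conversing people.
    return abs(dist(p, a) + dist(p, b) - dist(a, b)) < margin

def step_cost(pos, goal, pedestrians, talking_pairs):
    cost = dist(pos, goal)                              # progress term
    for ped in pedestrians:                             # personal-space term
        cost += 3.0 * max(0.0, COMFORT_DIST - dist(pos, ped))
    for a, b in talking_pairs:                          # don't split a conversation
        if cuts_between(pos, a, b):
            cost += GROUP_PENALTY
    return cost

# Pick the best of a few candidate next positions.
candidates = [(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)]
goal = (5.0, 5.0)
pedestrians = [(1.2, 0.3)]
talking_pairs = [((0.5, 1.5), (1.5, 1.5))]
best = min(candidates, key=lambda p: step_cost(p, goal, pedestrians, talking_pairs))
print("move to:", best)
```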
Have there been any surprises yet for you guys out of this?
No.
Sorry.
When I read the paper, the question that immediately came to mind for me is that social norms vary from place to place.
That's what I was thinking, too, the cross-cultural aspect, especially.
So when we ship these robots that observe social norms, is this going to be the new localization?
In other words, here's the self-navigating robot, Mumbai edition.
Here's the self-navigating robot, Boston edition.
Excellent question.
So my answer to that is, as of now, we have to train them location by location.
We have to gather data.
But as I was saying earlier, you know, the next dream I would have is to teach the robot how to learn, learning to learn rather than just mimicking training data. At that point, it should be online learning, it should be incremental learning, so that the robot can adapt to different environments.
Right.
So you wouldn't have to train it on a particular city's actual traffic patterns. You'd just drop it in there, and the robot would figure it out.
Exactly.
Like the way humans do when you travel.
Exactly.
When in Rome, do as the Romans do, so to speak.
I mean, I come from the world of developmental psychology, and the development of moral and social mores requires not just regular cognition but metacognition, an awareness of your own thinking, which is a whole new layer that just complicates things. So it's super fascinating. Okay, so I want to go back then to something you said, Fei-Fei, about this humanistic side of things. Tell us more about what you're thinking when you say that. Do you mean that we should be injecting humanities into computer science, or art? Like, you know, I've heard of this move from STEM to STEAM. What are you actually talking about when you say that?
So here's where it all came from. About three years ago,
I was observing that in my professional life, there are two crises people tend to talk about,
and they seem to be completely disconnected, these two crises.
The first crisis is that Terminators are coming next door, and the AIs are turning evil and all this.
We're summoning evil, and AI is going to just one day rule us all.
That's one crisis.
Another crisis we hear also is about the lack of diversity in STEM and computing and
from where I stand, the total lack of diversity in AI.
And it dawned on me that these two crises are actually connected by a very important hypothesis,
which is the lack of humanistic thinking and a humanistic mission statement in the education and development of our technology.
So let's look at the first one.
Why do we ever think technology might turn evil?
Well, technologies are always in the hands of people.
Technologies themselves are neutral, you know, be it a nuclear weapon, or nuclear physics, or just a knife that can cut an apple. In the hands of people, technology can have consequences. So in order to have responsible and benevolent technology, what we really want is to have a society, to have a group of technologists, who have the humanistic awareness and thinking so that we can use technology responsibly.
So that's related to the first thing.
The second thing is: millions and millions and millions of dollars are being put into attracting diversity into computing and STEM, and yet, where I stand, I find it very hard to convince women and underrepresented minorities to work in AI.
This is, by the way, despite being at Stanford,
which has, what, 50-50 parity in the computer science program with women and men?
No, it's not 50-50.
It's about 25 to 30 percent women in the undergraduate program.
And then this thing just goes down as you go higher up.
Oh, it goes down as you go higher up.
Oh, yeah.
The attrition at every stage is grim.
So looking at Stanford students, they're extremely talented. Almost any student coming to Stanford, whether it's undergrad or PhD, is talented enough, but they also have, you know, great writing skills, and they care about the world. I suddenly realized that here in our field, as well as in Silicon Valley, we're not sending the right messages to attract people from all walks of life.
What do you mean by that?
We tend to just celebrate geekiness, nerdiness.
But when you have an ambitious young woman coming into our department or into the AI lab, she might be thinking about the aging society. She might be thinking about curing cancer. She might be thinking about a lot of socially important topics. If we present ourselves just as geeks loving to do geeky things, we're missing a huge demographic who actually want to turn technology into a humanistic mission. So then suddenly I realized we're missing a huge opportunity to attract diversity, because we're not talking enough, or thinking enough, about the humanistic mission in AI.
And that united my two themes I've been thinking about.
Just to put a sharper point on this, I don't want to be cliché, as if only women and underrepresented minorities would take on, quote, the soft problems.
Because there are also other people
who might want to take on those challenges of aging
and some of the other interesting shifts
that are happening.
But to your point, we're not necessarily inclusive enough.
We're not thinking about this enough, period,
regardless of background,
to be able to really welcome
that type of thinking.
I think it's all walks of life.
They come with their experiences and value systems.
That's fair.
One thing I've started to notice: I have a lot of friends who are extremely successful Silicon Valley entrepreneurs and technologists. And given my own age, many of my friends are entering an age where they have aging parents.
have aging parents.
Yes.
Suddenly, they're talking about health care.
Which they never did.
When they were in their 20s, they were thinking about beers.
Your point is that having access to that experience really informs that perspective.
Right.
So all walks of life add to our collective thinking and creativity in our technology.
I know one of the things that your lab does is outreach to high school girls who come to campus for two weeks.
This is the brainchild of me and my former student, Dr. Olga Russakovsky. Our hypothesis is: let's catch girls at the age when they're starting to think about who they are and what they want to do. And we found that age group in high school, freshman to sophomore, thinking about what they want to focus on. So we created this AI camp where we specifically aim for two things. One is we want it to be very technical, because we want to inspire the future leaders of AI and talented math and computing students. But we also want to attract students who otherwise might not think of AI, because they didn't know there is such a strong humanistic mission in AI.
We actually ran a very rigorous hypothesis test over the summer.
I wrote a technical paper about this.
I like this approach, by the way, because I get really tired of hearing all the different camp for this, camp for that, program for this, program for that.
And I feel like, come on, guys, are we really solving the problem?
It's kind of refreshing to hear that you're taking a much more rigorous approach to it.
So our camp is designed so that in the morning, the students go through rigorous lectures and work with the TAs, PhD students, and postdocs on the technical problems of AI. In the afternoon, the girls are divided into four research groups, and each research project is a technical AI project, for example, computer vision or NLP or computational biology. But we put a very strong humanistic statement into each of the projects. For example, last year, we had four projects. The computer vision project uses depth sensors to look at the hospital environment and help doctors and nurses monitor hand hygiene. The NLP, natural language processing, project uses Twitter data during natural disasters, for example, an earthquake; the girls' aim is to do the right data mining to find messages that help with disaster relief.
And for the self-driving car project, we designed an aging-related problem, of a senior who needs to retrieve their medication.
That's amazing.
And go there and come back. So everything is very technical, but suddenly they learn to connect these technologies to humanistic purposes.
We had a team of researchers, two undergrads, one PhD student, and myself, and we conducted a rigorous evaluation project on this hypothesis: can humanism increase interest in AI? And we found a statistically significant difference between before and after in these girls' thinking. That particular paper is published at a computer science education conference, to show this makes a difference.
That's great.
It'll be interesting to see what happens when you expand that to other groups.
Yeah, we're running it again this year, and we really hope that this can become a continuing program.
Okay, well, Fei-Fei, I'm excited to have you join us and bring all these perspectives to our own firm and the entrepreneurs we work with.
And we're so excited. Thank you for joining.
Thank you.