Behind The Tech with Kevin Scott - Year in Review 2021
Episode Date: December 13, 2021
It's time for our annual episode that revisits inspirational guests, including AI experts, bioengineers, Grammy winners, entrepreneurs and digital influencers. Topics range from research on AI and protein design, to robotics, to the importance of recruiting more girls for STEM. We also catch up with a few of our past guests to find out what they're up to these days. Wishing you all happy holidays and a healthy 2022!
Show links: Kevin Scott | Behind the Tech with Kevin Scott | Discover and listen to other Microsoft podcasts
Transcript
Hi everyone. Welcome to Behind the Tech.
I'm your host, Kevin Scott,
Chief Technology Officer for Microsoft.
In this podcast, we're going to get behind the tech.
We'll talk with some of the people who've made
our modern tech world possible and
understand what motivated them to create what they did.
So join me to maybe learn a little bit about the history of
computing and get a few behind-the-scenes insights into what's happening today.
Stick around.
Hello and welcome to a special episode of Behind the Tech.
I'm Christina Warren, Senior Cloud Advocate at Microsoft.
And I'm Kevin Scott.
And today we are doing our year in review episode. And this means that we're going to revisit a few fascinating conversations
with our guests from 2021 and beyond with topics ranging from protein design
to robotics to the importance of getting more girls in STEM.
You know, Kevin, we call 2020 an unprecedented year.
My words kind of fail me when I think about summing up the events of 2021.
It makes me think of our previous guest, science fiction writer Charlie Stross, who said, and I quote,
this is just trying to front run the insanity in my fiction,
and I'm having great difficulty making elder gods more horrifying than what's happening around the world today.
Indeed, the world today is stranger than fiction.
It really is. But for today's show, we are focusing on Behind the Tech,
which produced a fantastic year of conversations. As always, we were enlightened, humbled,
and inspired by the incredible guests we had the honor of speaking with.
Yeah, we had a lineup of awesome guests this last year,
including AI experts, bioengineers, entrepreneurs, and digital influencers.
We met Mae Jemison, the first African-American woman in space.
We chatted with scientist Ashley Llorens about the future of AI and robotics.
And we spent time with Grammy Award-winning Jacob Collier
and his collaborator,
Ben Bloomberg. Yeah. And the exciting thing about this episode is that we're also going to check in
with some folks from earlier years of the podcast. So we'll revisit some bits of conversations from
your interviews with them. And we'll also share some news about what they've been up to recently.
Great. So who's up first? Okay. So first, we thought that it was fitting to check in with Anders Hejlsberg
because Anders was our first ever guest on the podcast.
Anders is one of my legitimate coding heroes.
He built Turbo Pascal when he was at Borland,
which is this tool that inspired me as a teenager to take computer science seriously.
He eventually moved over to Microsoft where he helped create the C# programming language.
And just a few months ago, he and the team released TypeScript 4.4,
which offers smarter control flow analysis, stricter checks, and speed improvements,
and a bunch of other cool programming language awesomeness.
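As a quick aside (not from the episode), here's a minimal sketch of the kind of smarter control flow analysis that shipped in TypeScript 4.4: the compiler can now narrow a union type through a type guard that has been saved in a const, rather than only when the check is written inline.

```typescript
// A minimal sketch of TypeScript 4.4's control flow analysis of aliased
// conditions: narrowing works even when the typeof check is stored in a
// constant instead of written directly in the if statement.
function describe(value: string | number): string {
  const isString = typeof value === "string"; // aliased type guard

  if (isString) {
    // TypeScript 4.4 narrows `value` to string here via the alias
    return `a string of length ${value.length}`;
  }
  // ...and narrows it to number here
  return `a number rounded to ${value.toFixed(2)}`;
}

console.log(describe("hello"));  // "a string of length 5"
console.log(describe(3.14159)); // "a number rounded to 3.14"
```

In earlier TypeScript versions, the aliased check would not narrow `value`, so the `value.length` access above would have been flagged as an error.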
Here's a snip from our conversation where we're reminiscing about the fact that Anders and his team
came up with one of the first integrated development environments written in Z80 assembly language.
I just want to double-click on this point again. Coming up with one of the first integrated development environments
that you had written in Z80 assembly language at that point in time,
that's an unbelievable breakthrough.
I suppose in retrospect, yes.
I never really thought of it that way.
It's just incredible.
I mean, it's like the first.
What the heck?
It just seemed like, heck, this is going to be so much better
than having to have first load an editor and then load a debugger.
Why not just put it all there?
I mean, I don't know.
I never really.
And especially at the time because, I mean, again, like more framing.
These are not windowed systems.
Can't have multiple things open at the same time.
Like it's super tedious to switch from one program to another.
So like having everything in one place is just this huge productivity win.
Oh, it was, totally. The edit, compile, run, debug cycle just shrunk by many orders of magnitude.
Yeah. And I'm embarrassed to say I've like forgotten, was it F9 to like compile and run,
or was it F5? You know, I don't even remember what it was. I think it was F5. Yeah.
It was like miraculous.
Maybe it was F3, but yeah. No, it was great. Yeah. There were all sorts of tricks in there.
Like the runtime library was the first 12K of the system.
And then when producing code, I'd just copy the first 12K into the EXE we were producing.
There's your runtime library, right?
And then generate code from there on out.
Yep.
And you could compile to memory, you know, and we'd put the code in memory and run it, right?
Or in the original implementation, you could compile to tape, to floppy tape, or sorry, to the tape recorder interface, right?
And then you could load that machine code up because, I mean, there was only 64K of memory.
I mean, it was crazy.
Yeah.
So I bought a copy of Turbo Pascal 5.5 out of a catalog called Programmer's Paradise.
This is just sort of how you used to buy software.
And so I forked over my $200 or whatever it was.
Oh, no, it wasn't even that much.
It was $49, like $49.95.
Yeah, so it was affordable because I was poor,
so thank you for making cheap software.
That was an excerpt from Kevin's conversation with Anders Hejlsberg, computer scientist and Technical Fellow at Microsoft.
And that was episode one of Behind the Tech from 2018.
Gosh, that seems like a lifetime ago, doesn't it?
It does indeed.
And on that note, let's jump to one of this year's guests, Dr. David Baker. David is a biochemist and computational biologist who has pioneered methods to predict and design three-dimensional structures of proteins.
That's right. Dr. Baker is the director of the Institute for Protein Design and a professor of biochemistry at the University of Washington. Kevin spoke with him in the spring of 2021, and since then, his lab has partnered with the U.S. Agency for International Development on a $125 million project to detect emerging viruses.
Here's an excerpt from my conversation with David about the work at the Institute for Protein Design at the University of Washington.
So, you know, maybe in terms of SARS-CoV-2, like, can you describe?
Yeah, that's a great idea. Really good suggestion. In fact, now when I give talks,
I explain protein design in the context of coronavirus. So let me just spend a couple
minutes describing what we've been doing at the Institute with regard to coronavirus. So
the genome sequence was determined and made available at the beginning of last year. So we took that amino acid sequence and used the
methods we've been developing to predict the three-dimensional structure of the protein on
the surface, the spike protein. Of course, you're right. There's higher literacy about this now than
there ever was. And we knew that the spike protein found the ACE2 receptor
on the target cells. So starting initially with that model and then shifting over to the x-ray
crystal structure when it was determined of the spike ACE2 complex, the first thing that we did
was to design small proteins that we predicted would fold up in such a way that they'd have a shape and chemical complementarity to the part of the spike protein called the receptor binding domain
that binds ACE2. So these are like, I talked about, sort of, lock-and-key interactions. So if you imagine
the ACE2 is the key and the RBD is the lock. So it's sort of the spike protein goes and binds to the ACE2.
We basically made things that would compete away that interaction, that is, bind more tightly to the
virus than ACE2. And we were able to make compounds that bind to the virus about a thousand times more
tightly than ACE2. And this was really cool. They were completely made up proteins, completely
unrelated to anything that had been seen before. And with our collaborators, we were able to actually
determine experimentally how these small proteins bind to the spike. And they bound basically
exactly like in our computer model. So that means we could go from essentially from the sequence of
a virus to these very, very tight, high affinity binding proteins. And the next thing we showed was
that those proteins block the virus from getting into cells. And then we showed with collaboration that they
protect animals from infection by the virus. And I think this was kind of a real aha moment for me,
because we'd been developing these methods for designing proteins over the years. And here,
in the midst of a pandemic, we were actually able to apply them to make therapeutic candidates.
And those are now headed for clinical trials. It's been slow because this is a completely new
modality, this whole idea of computationally designed proteins. So there's been a little bit of a push
back because these are completely new things. No one knows exactly how they'll behave. But for the
next pandemic, we're going to be ready. So we have all the methods worked out. And I think we've gotten over a lot of the sociological issues to actually using
these as drugs. And there's nothing really that can be as fast if you can go from the amino acid
sequence to actually computing a protein which fits perfectly against the virus. So that's the
first thing we did. The second thing we did was to design, again, completely from scratch,
little molecular devices that emit light,
luminesce when they encounter the virus. And those are pretty neat. We're developing those now for
not only for detecting the virus, but also for monitoring responses to vaccination, like how
good are my antibodies against the virus? And so rather than that being just like a fixed
key that fits into a lock, that's
actually a device that can undergo changes in its state when it encounters the virus.
And the third area, my colleague Neil King at the Institute has been developing sort of a next
generation of coronavirus vaccines using designed protein nanomaterials that we've created at the
Institute, which self-assemble into big things that look like death stars. And we can put the parts of the coronavirus spike on the surface.
And when Neil does that, he finds it gets very, very strong immune responses, stronger than with
the current vaccines. These designed nanoparticle vaccines are now in clinical trials. So that sort
of illustrates some of the key areas in protein design now, being able to design very precise
shapes that can block, that can bind very tightly to targets, being able to design molecular devices that can undergo, that can basically do logic calculations, and being able to design nanomaterials like these protein Death Stars.
That was from Kevin's conversation with Dr. David Baker from the spring of 2021.
So unbelievably cool.
The Death Star protein.
So awesome.
Okay, well, now we're going to jump back to the summer of 2020 and our conversation with Dr. Fei-Fei Li.
Dr. Li co-leads the Institute for Human-Centered Artificial Intelligence at Stanford University. And this past October, the Institute awarded $2 million in seed grants to 26 research teams
with a focus on bias, diversity, healthcare, and cognitive science.
Yeah, it's super inspiring.
The grantees are conducting research in things like civics education for a just and sustainable future
and ultra-fast MRI
for precision radiotherapy. It's just incredible work. It really, really is. Not only has Fei-Fei's
own research been groundbreaking, but her work as an educator is remarkable. In 2015, she co-founded
a nonprofit called AI for All, and that's dedicated to nurturing new AI leaders. I just
love their mission statement, which is, our vision for AI is a world where diverse perspectives,
voices, and experiences unlock AI's potential to benefit humanity.
Here's an excerpt from my conversation with Fei-Fei.
And also, just talking about the rural America, this is something I feel passionate about,
and I have a story to share with you.
So, you probably know that I co-founded and chair this non-profit education organization called AI for All, right?
Yep.
It started as a summer camp at Stanford about five years ago
to encourage diversity students
to get involved in AI,
especially through human-centered AI,
studying and research experience
to encourage them to stay in the field.
And then our goal is in 10 years,
we will change the workforce composition.
Yep.
Now it has become a national nonprofit, seed granted by Melinda Gates and the Jensen Huang Foundation.
That's awesome. I didn't know Jensen was involved. That's great.
Yeah, it's Jensen and Lori Huang Foundation.
And this year, we're on 11 campuses nationwide. One of the populations we put a lot of focus on, in addition to gender,
race, income, is geographic diversity and serving rural communities. For example, our CMU campus is
serving rural communities in Pennsylvania. We also have an Arizona campus. One story that actually came out of our Stanford camp is Stephanie.
Stephanie is still a high school junior now.
And she grew up against the backdrop of strawberry fields in rural California, in a trailer park, with a Mexican mom.
And she comes from that extremely rural community, but she's such a talented student and has this knack and interest for computer science.
And she came to our AI for All program at Stanford two years ago.
And after learning some basics about AI, one thing that really inspired her is she realized this technology is not just a cold-blooded bunch of code.
It really can help people.
So she went back to her rural community and started thinking about what she can do using AI to help.
And one of the things she came up with is water quality.
Yes.
Really matters to her community.
And so she started to use machine learning techniques to look at water quality through
water samples.
And that's just such a beautiful example.
I just love her story to show that when we democratize this technology to the communities,
the diverse communities, especially these communities that technology hasn't reached enough in.
The young people, the leaders, and the citizens of this community will come up with such innovative and relevant ideas and solutions to help those communities.
And I think that getting this technology democratized is sort of a one-two punch.
So, in this conversation, Kevin also asked about Fei-Fei's earlier work on ImageNet. ImageNet consisted of 15 million images, which were organized by 22,000 everyday English words, mostly nouns. And it was, at that time in 2009,
the largest database of natural object images in the world.
And it really was, in Fei-Fei's words, the onset of the deep learning revolution.
Here's Kevin's conversation with Fei-Fei about the continuation of that work.
Tell me a little bit more about this work that you're doing that sort of blends vision and language together.
Because that seems really quite exciting.
Yeah.
So it actually is a continuation or a step forward from ImageNet.
If you look at what ImageNet is, for every picture, we give one label of an object.
Fine.
That's cool.
You have 15 million of them.
It becomes a large data set to drive object recognition.
But it's such an impoverished representation of the visual world.
So the next step forward is obviously to look at multiple objects and, you know, be able to recognize more.
But what's even more fascinating to me is not the list of 10 or 20 objects in a scene.
It's really the story.
And so right after the bunch of work we have done with ImageNet around 2014, when deep
learning was, you know, showing its power, my students and I started to work on what
we call image storytelling or captioning.
And we show you a picture.
You say that two people are sitting in a room having a conversation.
That's the storytelling.
And that is a sentence or two, right?
And honestly, I'll tell you, Kevin, when I was in grad school in the early 2000s, I thought I wouldn't see that happen in my lifetime. Because it's such an unbelievable capability humans have to connect visual intelligence with language, with that.
But in early 2015, my group and my students and I published the first work
that shows computers having the capability of seeing a picture and generate
a sentence that describes the scene. And that's the storytelling work. And we used, obviously,
a lot of deep learning algorithm, especially on the language side, we use recurrent models like
LSTM to train the language model, whereas on the
image side, we use convolutional neural network representation. But stitching those together
and seeing the effect was really quite a wow-y moment. I could not believe that
I saw that in my lifetime, that capability. Yeah. I sort of wonder whether or not these big unsupervised language models right now, these transformer things that people are building, the models that come out of them have such, they're just very large and there's not much, you sort of barely have like any signal in the parameters at all.
It's like just diffuse across the entire model.
I just wonder like whether getting like a vision model coordinated with training these
things is going to be the way that like they more concisely learn.
Oh, I see.
Well, yeah, I mean, human intelligence is very multimodal.
So, multimodality is definitely not only complementary, but sometimes it's more efficient.
Yeah. But these models don't have the kind of comprehension and abstraction and deep understanding that humans have.
They can say two people are sitting in a room having a conversation, but they lack the common
sense knowledge of the social interactions or, you know, why are we having eye contact
or whatever, right?
So there is a lot more deeper things going on that we don't know how to do yet.
That was Stanford researcher Dr. Fei-Fei Li talking about her organization, AI for All.
Now let's continue our conversation about AI with Sam Altman, who joined the show two years ago.
Sam is the CEO of OpenAI and former president of Y Combinator.
Yeah, Y Combinator is one of the most successful, if not the most successful,
startup incubators in existence. And OpenAI is a really interesting model. It's an AI research and deployment company with a mission of ensuring that AGI, artificial general intelligence,
is safe and benefits all of humanity. And the recent news about OpenAI is that Microsoft announced in early November of 2021
the launch of the Azure OpenAI service,
which makes OpenAI's machine learning models available on the Azure platform.
And this is really exciting because these models like OpenAI's GPT-3
are incredibly difficult to train.
So having them behind an API
available on a cloud like Azure
really helps democratize the power of those models
and gets it into the hands of people
who can do the really interesting things
with the models that the world needs them to do.
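To make that a bit more concrete, here's a minimal, hypothetical sketch of what calling a hosted completion model behind a REST API can look like. The endpoint URL, deployment name, header, and request fields below are illustrative assumptions for the sake of the example, not the actual Azure OpenAI service contract.

```typescript
// Hypothetical sketch: calling a hosted text-completion model over HTTPS.
// The endpoint, deployment name, and JSON fields are placeholders,
// not the real Azure OpenAI API surface.
async function completePrompt(prompt: string): Promise<string> {
  const endpoint =
    "https://example-resource.example.com/deployments/my-model/completions";

  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.MODEL_API_KEY ?? "", // key issued by the hosting platform
    },
    body: JSON.stringify({ prompt, max_tokens: 64 }),
  });

  if (!response.ok) {
    throw new Error(`Model call failed with status ${response.status}`);
  }

  const data = await response.json();
  // Completion-style APIs typically return a list of candidate outputs.
  return data.choices?.[0]?.text ?? "";
}

// Example usage:
// completePrompt("Write a haiku about protein design:").then(console.log);
```

The point is less the specific call shape and more that the heavy lifting, training and serving a model with billions of parameters, happens behind the API, so an application developer only needs a few lines of ordinary web code to use it.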
Let's listen to Kevin's chat with Sam Altman.
One of the interesting things that's happening right now with
these computers that we're building to train very big models is
that computer architecture is all of a sudden interesting again.
And it hasn't been for, you know, 20 years, maybe 15, like a while.
Yeah.
That's cool because there's only people that really want to work on that.
They've had nothing to work on,
which means we can get incredible talent focused on this.
Yeah, we've got all of these people who did high-performance computing
in the 90s who, you know, and like I was not an important person
working on high-performance computing in the 90s, but like I was a compiler person.
And like I thought that none of the stuff that I learned in graduate school was ever
going to be directly useful again.
And like, here it is.
Here we are.
It's cool.
It's really cool.
It is really cool.
And just a reminder of like how cyclical, not just technology is, but history.
I mean, how much do you think about the historical corollaries for the disruption that we're going through,
like Industrial Revolution?
I think the steam engines are a really fascinating example.
That's a great one.
Do you have any others?
Because I know you've thought a lot about this.
Yeah, I mean, I think the analogs are the agricultural revolution,
the industrial revolution,
the computer revolution.
And I think the AI revolution
will be bigger
than any of those three
or bigger than all three
of them together.
I love reading
sort of firsthand accounts
of people at the time
as they were kind of
going through those.
There's this great book
called Pandaemonium,
which is,
it's all primary source material
of the industrial revolution as it was arriving. And many of the things that people say
in that book could be said now about how people feel about AI. There's no jobs. It's going to
take over. The machines are going to kill us. Like, the future is going to be terrible. Or like,
it's going to be utopia. It's like, this is so amazing. Like, there's nothing these machines
can't do. And the reality was some complicated thing in the middle.
And we always figure out something new to do.
Like the rate of, for example, one of the common themes in that book was like,
what are we all going to work on?
The rate of job turnover is something like 50% of the jobs every 75 years.
And this held remarkably constant.
You know, it has like fits and spurts,
but that's held constant for hundreds of years.
And like we go, like technology changes,
whole classes of jobs go away and we find new ones
and they're difficult to predict what they're gonna be.
But like, I think the jobs this time will change a lot,
but we're gonna find things to do, I'm pretty sure.
So what is the most exciting thing that you think is going to happen in AI
over the next few years that you can talk about?
Well, I'll give a few,
because I think the interesting thing is
the breadth of things that are going to happen.
I think we'll have language models
where we can interact with computers,
with natural language,
in an amazing way that feels unimaginable now.
That's going to feel like intelligence.
I think we'll have robots
that can do human dexterity levels of manipulation,
and that's going to be a huge impact on the world.
I think computer games are going to get really good,
really fun to play.
It's a sort of small sample.
Yeah.
So it's exciting.
Totally.
It's amazing.
And, you know, none of those things,
and so this is sort of to my point,
like none of those things is like Commander Data
from Star Trek
The Next Generation
walking around
and still useful stuff
will happen.
Right.
So that's the thing
that makes me
like super, super excited.
Totally.
And if we get Commander Data,
like I'm excited
about that as well.
Might happen.
Probably not
in the next couple years.
Probably not.
That was the CEO of OpenAI, Sam Altman.
Next up, Dr. Mae Jemison.
Dr. Jemison is a doctor, an engineer, a professor, a philanthropist, an entrepreneur, a writer, a dancer, and a NASA astronaut.
She was the first African-American woman in space.
And her list of accomplishments is long. And we encourage you to listen to her episode of the podcast.
Yeah, she's had an intimidatingly remarkable career.
As an undergraduate at Stanford, she majored in chemical engineering.
She concurrently took graduate-level classes in biomedical instrumentation, and she also ended up majoring in African and African-American studies.
Right, and she also danced all the way through college.
And so, Kevin, you asked her about this diversity of passions and academic pursuits
and how important it was to her as a scientist to draw upon this breadth of experience.
Yeah. And she said it had been incredibly important,
especially in helping her form her vision for the 100-Year Starship,
which is an initiative to ensure human capability for travel beyond our solar system within the next 100 years.
Here's Dr. Mae Jemison talking about interstellar human travel.
What we do, what we see, even what we research and the questions that we ask are based upon who we are and our experiences, right?
What we've seen, what we've observed.
So coming back to some of the projects that I work on now, even 100-Year Starship, the title of the proposal that won this DARPA Geek Prize of the year, right? It was an inclusive, audacious
journey transforms life here on earth and beyond. And that first word inclusion, I doubt that it
would have been there if I was not the one leading the project. But the inclusion was not only across
ethnicity, gender, and geography. It was across disciplines because you cannot solve a problem
like human interstellar flight. You cannot even start to approach it without taking into account
the full breadth and scope of human experiences. So 100-year starship is about making sure that
capabilities for human interstellar flight exist within the next 100 years. Capabilities, not
building a starship or launching a starship, but having the capabilities. And the reason for that
was the challenge that it requires, right? The radical leaps in knowledge that are required.
We can't ease up on this. Why is that different than going to Mars? We've been to Mars a bunch of times,
right? There are some engineering challenges. There are some life science challenges,
but we can actually create a technology roadmap to get there. I'm a little irritated I wasn't on
Mars. That's what I assumed when I was a little kid growing up, right? At least I'd be on the
moon. They just announced, you know, potential Artemis crew members, right? They're going back to the moon for NASA. I'm trying to figure out how I missed the original
one and how I missed the other 50 years later. Interstellar is so different because of the vast
distances, because of the enormous amounts of energy, for example, that you'd have to generate in order to go across those distances in a reasonable amount of time.
The autonomy that has to be developed within a vehicle, within a system.
What do we have to know about life systems, you know, from the microbiomes that help us to digest our food to the microbiome in the
soil that help plants grow. All of these kinds of things need to happen, even what makes us human,
right? So people can come up with all these other things or why is it important for humans to go,
what do we learn by place, by physically being there? But even before you go, right, let's not even think about
that. How do you develop the public commitment and the will to support something like this?
How do you develop the behavioral characteristics that we needed on a starship? Because I could
actually see the behavior becoming the long pole in the tent. It's not going to be the tech. It's
going to be when I tell you I'm not going to do something
after you wake me up out of suspended animation, right? And I say, yay. How are we going to work
as a team? I don't know, but I'm not doing that. But it's such a wide range of challenges.
And each one of those challenges, if you think about it, and I did not go through all of them at all, but just think of the energy, how much energy. So we can't do it through regular chemical propulsion. There's not enough chemical propulsion in the solar system to get us there. We'll have to do fission, fusion, antimatter. We're okay with fission, but we really don't contain and
control it really well, right? Fusion, we go back and forth with whether we can do fusion, right?
And antimatter, we don't know how to contain antimatter, but each one of them is an order
of magnitude greater energy resource. But imagine what that would do to our world
if we learned how to generate, control, and store that kind of energy. How would it impact us? The
same thing with understanding the microbiome, the same thing with understanding investing
financially in something like this. What is the return on investment? So when I look at all of
this and human behavior, don't let me not leave out human behavior, right? When I look at all of this and human behavior, don't let me not leave out human
behavior, right? When I look at all of this, these are really the challenges that we face in the
world today, in our world, on this planet. And if we don't solve those, we have a problem.
That was Kevin's interview with Dr. Mae Jemison, the first African American woman in space.
Now let's hear from a recent guest who also had a fascinating career path.
Ashley Llorens is a scientist, engineer, and hip hop artist known as Soulstice.
Ashley talks about his immersion in music growing up on the South Side of Chicago and how a
boombox with two tape decks
served as his first recording studio. And those early efforts led Ashley to a career in the music
industry, touring through Europe and Japan. And one of his songs was featured in the Oscar-nominated
film, The Blind Side. Yeah, and simultaneously, Ashley pursued his career as a scientist.
He enjoyed a 20-year career in research and development of AI technologies at Johns Hopkins Applied Physics Laboratory
and recently joined Microsoft as a vice president, distinguished scientist, and managing director for Microsoft Research.
Here's an excerpt of Kevin's conversation with Ashley about the narratives surrounding artificial intelligence.
The narrative is really important because it's such an important technology and it is having such a profound impact on what the future is looking like every day as it unfolds
that people need to be able to understand how to engage with it to sort
of like, what do I think about this technology? What do I think about policy about this technology?
What do I think about, you know, like my hopes and my fears for the future of this technology?
So, you know, have you thought much about like, you know, the story of AI?
Yeah, absolutely.
And maybe there's a couple of sides,
but there's many sides,
but maybe two sides I'll pick to explore there.
One is absolutely the idea that
AI is taking us in a bold new direction as a society.
And I think it's more important than ever
that we can engage around these policy
questions and really around the directions of AI, definitely outside of computer science and
across disciplines. And so we do need to create narratives. Even more than that, I think we need
to create directions that we agree on, that we want to take this technology. A lot of times, I
think people are discussing AI as something
separate from human beings and human intelligence. And I think we need to be thinking of these two
things as complementary. So what are our goals for these things? Can we start to set some audacious
goals around enabling as many people as possible on the planet to live a long, healthy life,
creating an atmosphere of shared prosperity.
And what is the role of AI in doing that? To me, these big societal narratives should be
at the top level of abstraction in terms of what we're talking about. And then everything else is
derived from that. I think if we're going to just let a thousand flowers bloom and see where we land
on this thing, I think we could wind up with some really unintended consequences, you know, from that. Yeah, I really, really agree. And I think,
you know, too, if you have the wrong narrative, you could have unintended consequences as well.
Like one of the things that I have been telling people over and over again over the past handful
of years is just sort of a useful, useful device about thinking about the future of AI is that AI,
especially its embodiment in machine learning is a tool.
Just like any other tool that people have invented,
it's a human-made thing and humans use it
to accomplish a whole wide variety of tasks. And, you know, the tool is going to be as good or as bad
as the uses to which we put it.
And, you know, it's just very, very important, I think,
for us to like have a set of hopeful things
that we're thinking about for, you know,
those uses of AI
as we have our anxieties.
And both are important.
You have to, it will certainly be used for bad things.
But as with any technology,
the hope is that there will be orders of magnitude
more positive things and good things that people will attempt to do with it than the bad.
And part of how we get to that balance of good versus bad is the stories that we're telling ourself right now about what it's capable of and like what to be wary about.
I think that's right on point. And, you know, we can even ask ourselves, you know,
what does it mean to behave intelligently as a species? I actually think we're getting to the
point where we can start asking ourselves and holding ourselves to, you know, to some standard
there. You know, if you just think about artificial intelligence at a low level, you know, from an
agent standpoint, you know, I think intelligence itself is
the ability to achieve goals, to set and achieve goals. And then what do you have to do? You have
to be able to have some understanding of the world around you through some mechanisms of perception,
whether that's kind of our human modalities or other kinds of modalities. You have to decide
on a course of action, you know, that best achieves your goals.
And then you have to carry it out.
Like, these are the things you do to be intelligent.
So when you extrapolate that to us as a species, because one of the hallmarks of human intelligence is our social intelligence, our ability to, you know, collectively set and pursue goals and things like that. So I think, and I'm sort of, as you can
see, I'm sort of cursed now to see everything through the lens of intelligence and artificial
intelligence. This is just my lens on the world. But I think it's helpful. I think it's useful.
I think in order to behave intelligently as a species, we have to do some of these things that
you're talking about, setting some bold visions and directions and figuring out how to organize around those.
That was Distinguished Scientist
and Managing Director for Microsoft Research,
Ashley Llorens.
We have another story about the intersection
of music and computer science.
Earlier this year, we had the pleasure
of meeting Kimberly Bryant,
the founder and CEO of Black Girls Code,
an organization that's dedicated to promoting
equal representation of Black women and girls
in the tech sector.
This October, in celebration
of the International Day of the Girl,
Black Girls Code announced their partnership
with actress, singer, and influencer Willow Smith
to help amplify their message about the importance
of getting girls into STEM.
We talk a lot about the need to bring diverse voices and perspectives to build not only
better technology, but a better democracy.
Kimberly's work is helping to do this by creating education and mentorship opportunities in
areas like AI, robotics, virtual reality, gaming, and blockchain.
Here's an excerpt from Kevin's conversation with Kimberly.
You all are doing such great work.
I wonder if you could get the world to do anything
that would get more women and more women of color
into computing, like, what would that thing be? And, like, how can we all help support that?
I think we need more organizational support, not just at Black Girls Code, but any organization that's doing this work. As a nonprofit,
we can't do it alone. So for me, it's always about how can we have this magnified effort
of different organizations that are all working collectively to elevate girls in the STEM fields,
particularly in computer science.
And so that means like getting companies to volunteer to help support this.
So that means bringing in staff.
Our classes are run by volunteers that work in industry. So getting folks to volunteer at organizations like Black Girls Code, getting individuals to both give as
well as encourage their organizations and companies to give to support organizations like Black Girls
Code, absolutely positively creating both mentorship and internship opportunities.
Those opportunities are transformational because it's difficult to understand what a computer scientist does if you haven't done that.
I see this for my daughter.
When she's in school, that's totally different than when she's in her internship and she's on a team of engineers that are working on a product line. Totally different experience and totally different
way for her to develop this mindset of what a computer scientist really is and what that does.
And then I think really making sure that once these girls are career ready, they're graduating,
that they can get a foot in the door. They can get an opportunity to work at a company like Microsoft and others
and have a fruitful career there.
So pushing on all those various levels,
either via individuals that are giving up their time and resources
or really holding our companies accountable
for providing these opportunities to get more women in the field.
Yeah, it's such a good and necessary push.
And we should all be very, very grateful
we have your leadership
and your organization's leadership out there
helping us all make progress on this.
That was the founder and CEO of Black Girls Code,
Kimberly Bryant.
Now let's turn to another intersection of music and tech, this time with Grammy award-winning
musician Jacob Collier and his collaborator, Ben Bloomberg.
Ben is a creative technologist who designs and builds everything from electroacoustic
musical instruments to AI-driven performances and tours.
And actually, we should call him Dr. Bloomberg
since he's recently earned a PhD from MIT.
That is such an accomplishment.
And Jacob Collier is a multi-Grammy Award-winning
instrumentalist, songwriter, arranger,
and producer based in London.
Since we last spoke with Jacob,
he's added more Grammy Awards
for a total of seven nominations and five wins. Incredible. And since then, he's been doing live performances with the likes of Coldplay,
recording in Nashville for Djesse Vol. 4, and selling out venues in Europe and North America
for his Djesse World Tour 2022. I just saw a post of Jacob with Joni Mitchell. So he got to hang out with her for what
he called a raucous evening of magic and music. And he said that it was like being in the presence
of Shakespeare. I'm so jealous. Yeah, me too. It's just so awesome. And as for Ben, I had the
opportunity to work with Ben Bloomberg on our Build Conference this year, Microsoft's annual developer conference. He helped conference goers better understand how he's using tech in
his musical production work. Well, I'm excited to revisit your conversation with Ben and Jacob.
It's such a great conversation. And in fact, we had to break it into two episodes. You talked
about this technology that Ben designed and created called the Harmonizer, which allows Jacob to do these one-man live shows.
Here's Kevin, Jacob, and Ben talking about the impact of technological reliability and fallibility as it relates to creative and performative endeavors.
You know, what I've done over the years with technology, like, you're sort of building things
and you've got billions of people who use it and so like you're constantly worried about
the fragility of things and robustness and fault tolerance and reliability and whatnot, because,
you know, the consequences of something failing are like you just impact a lot of people.
For you all, it seems to me that, you know, one of the special things about what you do with
music is that, you know, done well, like you are completely capturing someone in this immersive emotional
state and, like, mistakes, you know, like a cough or, like, there are very easy ways to sort
of pull you out of that immersion. And so like, in that sense, like the stuff really has to be
robust, right? I think it's a mixture of being robust and then leaving space for being spontaneous, you know?
And I think this is something
that I'm kind of forever indebted to Ben at doing
was on that first Skype conversation
and in those kind of initial dreaming phases
of the one-man show and the harmonizer
and all sorts of other things,
there was never a moment where it was kind of like,
oh, no, no, that's too, we can't do that.
That's going to be too fragile or that's not going to work out, or that's not reasonable for
you to be expecting, you know, whatever. It was like, well, if it's not possible, then we'll find
a way to make it possible. And then I came to trust that process, not necessarily to end up
where I was expecting it to. But, you know, there are a few different examples of things where we'd
set out thinking, I want to be able to do this live and then by the time we do it live it's really it's changed
it's it's nature I mean I remember we started with the one man show having about 10 different
foot pedals across the whole stage and I had I had to run around um hitting all the pedals as I
would play each instrument and then springing from that instrument. But I figured out, well, what we figured out in trials was that if I hit that pedal, even a fraction of a second
after the downbeat, it would loop the following bar, you know. And so there's only so much
processing that my mind can do in one go about when I hit a button. And also how there's also
only so much I could do physically with my body um on stage at one moment and still
be a human and musical and give energy to a room and so you know that's an example of something
where you know we kind of looked at each other and said you know what maybe we should just lose
the pedals and let's have the loopers loop invisibly and let's just tell them when to start
and stop looping and then and then i my job would be to to land just to land in front of them at the
right moment in the song,
play them for the right length of time and hold that in my head and run away and keep playing.
And so, you know, things do change. But I think the thing about Ben is that there's always space for an idea to kind of be impossible for a little bit, because it's a very important,
fragile moment when an idea is being had, where you can't stamp on it and be too realistic.
You have to dream. You have to say, no one could ever do this. Okay, let's go and do it, you know. And then obviously, once you get started, that's when kind of my wealth of experience of being guided by what my idea of good and bad is, creatively, steps in, and Ben's massive wealth of experience about how things work the best and what things work well and what is a no-no and what is a yes,
those kind of come into fruition. But there's that lovely moment at the beginning where you
just think, whoa, yeah, we could do this. I guess I'm...
Let's go for it.
Yeah, let's go for it. I'm curious, Kevin, actually, to ask you,
as somebody who has so successfully had ideas and implemented them, how do you have ideas?
And how do you assemble people around you to help those ideas come to life where the idea can be as safe but yet impossible as it needs to be to be a good idea?
Yeah, I think it's really, really, I mean, it's sort of the hardest thing about creativity, right?
Especially, I don't know, like, so with engineers, you know, I think there's this mindset thing that maybe you're even born with where you sort of look at the world in terms of like all of the things that are wrong.
So you're just sort of constantly scanning things.
It's like, oh, this doesn't work as well as it could.
And, you know, like this is
broken and like needs to be fixed, which is, you know, both a good and a bad thing. It's like a
slightly jaundiced, you know, worldview, but it's also the thing that results in like, you know,
this determination and drive to go like make things better, and I think there's this moment when you get a bunch of
technical people in the, or creative people in the room and they, they come to a problem with
this, they've got this set of tools that they have, like they have an understanding of, uh,
like how the world works, which is just that understanding at a point in time. And they have, you know, they have their experiences about what they've, you know, tried in the past and like which things have worked and which haven't.
And like it's sort of hard at the beginning, I think, to overcome everyone's past. Because, you know, you'll have a lot of people in the room who are like, oh, you can't do that, that's going to be too hard, that's impossible, like, I don't know how to do that. And the thing that you have to do is figure out, inside of those groups, how you can give people the permission to, like, speak the daring, crazy thing and not immediately get shot down, where they feel safe. It's like, oh no, you just don't want to tell people, oh, that idea is stupid. So, like, part of it is about, you know, language and culture. Like, one of the ways we do that: we really admire the growth mindset work out of this brilliant professor at Stanford.
And one of the things that we tell everyone is we don't want to be know-it-alls. We want to be
learn-it-alls. And so if you think about all of this stuff as a learning experience, it's like
you have an objective in mind and the process like going towards the objective is learning how to get there. Then you, you sort of wash away a bunch of
this, you know, sort of cultural stuff that can blow ideas apart before you ever really understand
whether they're going to work or not. And I'm guessing that that sounds like you all sort of
approach things very similarly. Yeah, I would say, I definitely say so and it's it's really it's lovely to hear
you talk in those terms um when ben and i you know for example set out after doing this this
first show which is at the montreux jazz festival in switzerland we were opening up for herbie
hancock and chick career and there were 3 000 people who had never heard of me or anything or
which is completely it was completely new for for me and for everyone um you know having done that gig we set out on this on this tour you know
our first tour ever and it's crazy and and the objective of that if you you know if you think
about it in those terms was was kind of unclear you know we wanted to have a good time and we
wanted to to play music and and we wanted make people happy. But we didn't really know
past that what exactly we wanted it to be like and feel and represent. We didn't know why we were
doing it. There wasn't really a reason that we were doing it other than just that it felt like
the right moment to do it. And so Ben and I set out for a month of shows, maybe about 20 shows. And we had eight bags between the two of us.
And that meant that we each had to carry four. And so I can clearly remember having a huge suitcase
in my left hand, huge suitcase in my right hand, and then a great big rucksack on my back with all
of the gear, all the computers and stuff in it. then on my front the the stick bass the um the like double bass in it in a case that goes on a stand and ben was kind
of equivalently bestowed um and we would be waddling around the states and we waddled across
the whole the whole of the us and stayed on friends couches and all sorts of things and
just feeling out what was good and and and what we loved the most and then sort of building around
that and even over the course of that one tour you know there were lots of different things that we changed
and I think for me having been used to a very kind of quick process of manifesting something
that I like in a recording environment you know it takes me 2.5 seconds to change my mind and
start something fresh on the road it happens at a different speed and
i i guess ben i'm interested to hear your thoughts on what this has been like for you and continues
to be like but you know at the end of every show i would say right we need to change these six
things about these six songs and we're going to change the whole structure and remove the let's
speed this one up change the key of this one and and ben was really good at kind of from my
perspective of letting me do that kind of processing but also
grounding me in the idea that we were building a show that had to run every night and if we change
everything about every show every night then you're kind of starting from scratch and so that
there was a real kind of mutual patience i think that we had to have about how that process evolved
and our sort of goal of working together emerged slightly further the more steps we put in the
line you know but um ben do you want do you want to talk to that at all you know i i think what's
really interesting, especially sort of in the world of live performances, generally people are really risk averse. And so every venue you go to, every house crew that you work with, people are, you know, people's sort of reputations and everything is sort of
dependent on how the show goes. And so I think what we've evolved towards now is just starting
the conversation. Every single time we go into a venue saying, look, let's try if we can to set
aside all of our sort of preconceived notions about what a show is
and how you normally use your equipment and things like that. And it's actually like in big,
bold letters on the front of the rider that says like, this isn't a normal show. Let's sort of
work from base principles here. And so I think that sort of mindset is what has really sort of
pervaded and sort of evolved. And it took me a little bit,
you know, I think, Jacob, you know, you definitely stretched me a lot because, you know, at the very
beginning, you'd be like, oh my God, like, we can't change that, you know, between the shows.
I remember actually the very first show at Ronnie Scott's, which we actually did as a rehearsal
show before the Big Montreux show.
And it was like five minutes before the show started.
And Jacob, you came up to me and said, we need to change the playback.
We need to change. I forget exactly what it was. And I said, no, no, we can't change the playback right now.
And it was like the very first, you know, it was sort of the very first
like sort of moment where we sort of had to say, okay, like, you know, we could change it.
But every time we've changed something, there's been a problem and maybe we'll get good at changing
things down the road, you know, and really flexible. But so far, every time we've changed
something, there's been like a little hiccup. So we could change it, and there might be a little
hiccup. We just don't know, and we can't test. So like, what should we do, you know?
Yeah. What I've always tried to do with the technology products that my teams and I have built is, you sort of got these two things.
One is like the faster and the higher quality you can gather feedback, the quicker you can learn and the better you can make the thing that you're trying to produce. And so engineering your environment in a way where you can get that high quality
feedback as quickly as possible is like super important. So like, that's one thing. And then
the other thing is, you know, if you are thoughtful enough, you can usually understand
the sorts of risks that you need to be able to take.
And then it builds some systems to help you manage those risks
so that you can like walk right up to the edge of something
and even allow yourself to fail.
Like a thing that we have in operations is this thing called MTTR,
Mean Time to Recovery.
So with software, I mean, like both of you know this,
like there's no way to produce bug-free software.
Like it is literally from a theory of computation perspective.
It is, these are undecidable problems.
Like you can't compute a solution to them
no matter how powerful a machine you have and so you
have to reconcile yourself to the fact that you're going to produce things that will have errors in
them and so the the question then becomes how do you catch as many errors as humanly possible
before you, you know, throw something out into the world. And then, knowing full well that things are going to get through, like, how do you build your system so that you can recover from failures quickly?
And that, for software, for products, like, that's a very useful way to think about the world. Like, it just lets you move faster than you otherwise would if you are constantly being crippled by the fear that you're going to fail.
We really do have a lot of wiggle room because Jacob on stage can make just about anything
feel amazing. And so, there have been some pretty ridiculous sort of, I won't call them failures,
but moments where things didn't quite go as expected. And if it were anybody else, it would have been sort of
a train wreck. And Jacob is able to take even the craziest things. I think one time in Germany,
all the loopers started speeding up and going up in pitch because one of the video operators
pushed the wrong button at the wrong time. And like nobody else can handle
that stuff, but Jacob can make that sound musical and, you know, keep the audience having a good
time. So I think in that respect, we were sort of really uniquely positioned to try out some
pretty risky stuff. You know, he kind of makes it possible.
I guess, just one thing from my perspective on that, would be that I think Ben and I have different, kind of, experiential values of control, and of when control is necessary. And, you know, so I know, for example, that when it comes to precision of musical information, I'm quite controlled, because I kind of tend to know the highest resolution position for this note to be, for it to mean the most in the groove, for example.
And I think less about, you know, things like, you know, is the flow chart from this element to this element within the tech going to work every night in a way that means we can all have a good time.
And I guess that means that there are elements of the tech which I'm stretching from my perspective
and absolutely the same in reverse, where Ben, for example, will be very, very risk-aware in
some scenarios, but then will also highly encourage me to jump off my own creative
rails in a musical sense and try new stuff the whole time. You know, and I think if there's one thing that comes from making music in one room for
10 years and then going on the road, having never done a gig, you know, I don't tend to think about
imperfections being great. You know, I tend to think about imperfections that aren't to plan
being things that I will kind of want to correct and make sure that they are right. It's not that
they're going to be completely in a grid based system all the time, but it's where I want it, you know. But
one thing that I, you know, at first was very kind of, I guess I was quite intimidated by
touring and now I'm completely in love with about touring is that it's one of the only moments of
your life where you have no room to be anything other than just present. You just have to be
present. And so a lot of the mistakes and the imperfections, we've ended up designing the whole show to let those shine even more than we used to, you know. And so I used to think that the best gigs we did with the one-man show were the gigs where I nailed all the instruments, but it's just not true. And, you know, you were saying about an environment where you can get instantaneous feedback. I mean, for me, that's going on tour, and every night you get a fresh round of feedback. And it doesn't really matter what the audience say to you after the show. You know, they might say, oh, it's a great show, you know, we loved it, or it was rubbish. But you tend to sense, just even from standing on the stage and being you on stage, how that is going down. It was a real kind of quick learning process of, people immediately responded to the moments where I wasn't impossibly perfect at something, you know, impossibly good at something. The moments where there would be space for me to, yeah, wiggle around, or something would go wrong. I can think of gigs where, you know, someone would cough and it would loop, every time it would loop, as part of the loop, you know, in the percussion loop, because it was really quiet. And someone would yell or scream, or a plane would go overhead or whatever. That would become part of the groove of the song, and that's a fantastic, that's a fabulous challenge musically: how do you make that make sense? But you have to be willing to look a little bit like a fool and just sort of embrace it. And I think for me, one thing I've learned really is how special it can be when everyone's kind of doing that together, you know, audience and performer alike are both coping with this strange curveball and alchemizing it into something that feels really, really great, you know.
That was Jacob Collier and Ben Bloomberg.
So let's end this the way that we end every episode of Behind the Tech,
which is to ask guests what they do for fun.
Yeah, this is really one of my favorite parts of the podcast.
There is Dr. Peter Lee, who confessed to his love of simulated race car driving.
Kimberly Bryant is a passionate gardener. And Dr. Mae Jemison loves cooking West African cuisine.
As for Justine Ezarik, it's martial arts, specifically Kali and Jiu-Jitsu.
Here's iJustine.
When I walked into that gym, it was incredibly humbling because nobody cares who you are,
what you do. It's like when you're on the mat, you're there to learn, you're there to train.
And it's just, for me, it kind of puts so many things into perspective, mostly because I wanted
to learn everything so incredibly fast. And as a white belt, you think, yeah, I'm doing such a
great job. And then somebody comes in and crushes you. And then you just go home crying. You're
like, oh man, I know nothing. And it's like every single time, it's like every time you level up and you learn something,
that just causes another problem.
But it's just such a vast variety of things that you learn.
And it's so empowering when you actually are able
to start leveling up and actually learning things.
And it's honestly one of the most rewarding things
that I've ever done.
And I think I learned the most
when I got injured for the first time,
because I was going too tough. I wanted to learn everything so fast. And that taught me to slow
down and kind of enjoy the process. Like you're not going to learn everything in this first year.
So if you keep getting hurt, you're not going to be able to progress. So it's like that injury
taught me to kind of figure out what else I could do when I couldn't be training. And that's when I
started the other martial arts, Kali, which is more kind of like hand-to-hand combat with like
various sticks and different weapons. I was doing that while I was injured. And then it's kind of,
that sort of has played into my life a lot where if there's a problem that I'm having, I'm like,
okay, I myself cannot do this, but what else can I do to work around that issue to make it work? So it's like martial arts has taught me so, so much about
like myself. And it's just such a rewarding experience. And I think from that, my biggest
piece of advice to anyone is like, have a hobby that you do outside of anything and just go set
your phone down and just be. And I mean, that took me like 13 years to kind of figure that out.
And you know, when I did, I was like, this really is life changing.
That was from Kevin's conversation with actress, author, and influencer Justine Ezarik.
Well, before we close, I just want to say thank you again to all our guests
on Behind the Tech. Your
ingenuity, compassion, and
dedication truly make an impact on
the world, and we're grateful that these folks
take time away from that amazing work to
chat with us. Yes, thank you
to all of our guests, and as
always, thank you for listening.
As 2021 draws
to an end, please
take a minute to drop us a note at behindthetech at
microsoft.com and tell us about who you'd like to hear from in
2022. Be well.
See you next time.