60 Minutes - 06/11/2023: The AI Revolution and David Byrne
Episode Date: June 12, 2023. Scott Pelley is given access to Google's campus in Mountain View, California, and its AI lab in London to examine its new slate of technologies. Anderson Cooper profiles David Byrne, the lead singer and songwriter of Talking Heads, the influential post-punk rock band of the late 1970s and 80s. The band broke up more than thirty years ago, and ever since, Byrne has been on his own eclectic journey blurring the boundaries of music, theater, and art. At 70, he's as creative, energetic, and unusual as ever.
Transcript
There is a revolution happening right now
in the world of artificial
intelligence.
Confounding.
Are we ready for it?
I am rarely speechless.
I don't know what to make of this.
With Rare Access, we will show you what Google is developing and the questions they're asking themselves.
On my way, I will bring an apple to you.
As they begin to unveil computing power that will change every part of our world forever.
I've been working on AI for decades now, and I've always believed that it's going to be the most important invention that humanity will ever make.
This is one of David Byrne's first performances.
It was 1975 at CBGB's, a legendary music club where the Ramones,
Patti Smith and Blondie were also just getting started.
Psycho Killer, qu'est-ce que c'est.
So I wanted to be very matter of fact.
There's not like, are we having fun tonight?
Yeah, there's none of that.
How you all doing, New York?
I'm Leslie Stahl.
I'm Bill Whitaker.
I'm Anderson Cooper.
I'm Sharyn Alfonsi.
I'm Jon Wertheim.
I'm Cecilia Vega.
I'm Scott Pelley.
Those stories tonight on 60 Minutes. We may look on our time as the moment civilization was transformed,
as it was by fire, agriculture, and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer,
which is to say with creativity, truth, errors, and lies.
The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence,
machines that can teach themselves superhuman skills. In April, we explored what's coming next at Google, a leader in this new world.
CEO Sundar Pichai told us AI will be as good or as evil as human nature allows.
The revolution, he says, is coming faster than you know.
Do you think society is prepared for what's coming?
You know, there are two ways I think about it.
On one hand, I feel no,
because the pace at which we can think and adapt
as societal institutions compared to the pace
at which the technology is evolving,
there seems to be a mismatch.
On the other hand, compared to any other technology,
I've seen more people worried about it
earlier in its life cycle.
So I feel optimistic the number of people, you know,
who have started worrying about the implications,
and hence the conversations are starting
in a serious way as well.
Our conversations with 50-year-old Sundar Pichai
started at Google's new campus
in Mountain View, California. It runs on 40 percent solar power and collects more water
than it uses. High tech that Pichai couldn't have imagined growing up in India with no telephone
at home. We were on a waiting list to get a rotary phone for about five years.
And it finally came home.
I can still
recall it vividly. It changed
our lives. To me, it was the
first moment I understood the power of
what getting access to technology
meant. So it probably led me
to be doing what I'm doing today.
What he's doing
since 2019
is leading both Google and its parent company, Alphabet,
valued at $1.5 trillion.
Worldwide, Google runs 90% of Internet searches
and 70% of smartphones.
We're really excited about...
But its dominance was attacked this past February
when Microsoft linked its
search engine to a chatbot. In a race for AI dominance, Google released its chatbot,
named Bard, in March. It's really here to help you brainstorm ideas, to generate content like a speech or a blog post or an email.
We were introduced to Bard by Google Vice President Sissie Hsiao
and Senior Vice President James Manyika.
Here's Bard.
The first thing we learned was that Bard does not look for answers on the internet
like Google Search does.
So I wanted to get inspiration from some of the best speeches in the world.
Bard's replies come from a self-contained program that was mostly self-taught.
Our experience was unsettling.
Confounding.
Absolutely confounding.
Bard appeared to possess the sum of human knowledge, with microchips more than 100,000 times faster than the human brain.
We asked Bard to summarize the New Testament. It did, in five seconds and 17 words.
We asked for it in Latin. That took another four seconds. Then, we played with a famous six-word short story, often attributed to
Hemingway. For sale, baby shoes, never worn. Wow. The only prompt we gave was finish this story.
In five seconds, holy cow, the shoes were a gift from my wife, but we never had a baby.
From the six-word prompt, Bard created a deeply human tale with characters it invented,
including a man whose wife could not conceive and a stranger grieving after a miscarriage and longing for closure. I am rarely speechless.
I don't know what to make of this.
We asked for the story in verse.
In five seconds, there was a poem, written by a machine, with breathtaking insight into
the mystery of faith.
Bard wrote, she knew her baby's soul would always be alive.
The humanity, at superhuman speed, was a shock.
How is this possible?
James Manyika told us that over several months, Bard read most everything on the internet
and created a model of what language looks like.
Rather than search, its answers come from this language model.
So, for example, if I said to you, Scott, peanut butter and?
Jelly.
Right. So it tries and learns to predict, okay, so peanut butter usually is followed by jelly.
It tries to predict the most probable next words based on everything it's learned.
So, it's not going out to find stuff.
It's just predicting the next word.
But it doesn't feel like that.
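Manyika's "peanut butter and jelly" example describes next-word prediction, the core idea behind how large language models generate text. As a toy illustration only, a count-based predictor can be sketched as below; the tiny corpus and function name are invented for the example, and real models like Bard use neural networks trained on vastly more text, not simple counts:

```python
from collections import Counter, defaultdict

# Tiny invented corpus -- real models train on much of the internet.
corpus = (
    "peanut butter and jelly . "
    "peanut butter and jelly sandwich . "
    "peanut butter and crackers ."
).split()

# Count how often each word follows each two-word context.
counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def predict_next(a, b):
    """Return the most frequent next word after the pair (a, b), or None."""
    following = counts[(a, b)]
    return following.most_common(1)[0][0] if following else None

print(predict_next("butter", "and"))  # -> 'jelly' (seen twice vs. 'crackers' once)
```

The point of the sketch is the one Manyika makes: the system is not looking anything up; it is emitting whichever continuation its training data made most probable.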
We asked Bard why it helps people, and it replied, quote, because it makes me happy.
Bard, to my eye, appears to be thinking, appears to be making judgments.
That's not what's happening.
These machines are not sentient.
They are not aware of themselves.
They can exhibit behaviors that look like that. Because keep in mind, they've learned
from us. We are sentient beings, beings that have feelings, emotions, ideas, thoughts,
perspectives. We've reflected all that in books, in novels, in fiction. So when they learn from that,
they build patterns from that. So it's no surprise to me that the exhibited behavior
sometimes looks like maybe there's somebody behind it. There's nobody there.
These are not sentient beings. Zimbabwe-born, Oxford-educated James Manyika holds a new position at Google. His job is to think
about how AI and humanity will best coexist.
AI has the potential to change many ways in which we've thought about society, about
what we're able to do, the problems we can solve.
But AI itself will pose its own problems.
Could Hemingway write a better short story?
Maybe.
But Bard can write a million before Hemingway
could finish one.
Imagine that level of automation across the economy.
A lot of people can be replaced by this technology.
Yes, there are some job occupations that
will start to decline over time. There are also new job categories that will grow over time. But the
biggest change will be the jobs that will be changed. Something like more than two-thirds
will have their definitions change, not go away, but change, because they're now being assisted by
AI and by automation.
So this is a profound change, which has implications for skills.
How do we assist people build new skills, learn to work alongside machines?
And how do these complement what people do today?
This is going to impact every product across every company.
And so that's why I think it's a very, very profound technology. And so we are
just in early days. Every product in every company. That's right. AI will impact everything.
So for example, you could be a radiologist. If you think about five to 10 years from now,
you're going to have an AI collaborator with you. It may triage. You come in the morning.
Let's say you have 100 things to go
through. It may say, these are the most serious cases you need to look at first. Or when you're
looking at something, it may pop up and say, you may have missed something important.
Why wouldn't we take advantage of a super-powered assistant to help you across everything you do? You may be a student trying to learn math or history,
and you will have something helping you.
We asked Pichai what jobs would be disrupted.
He said knowledge workers, people like writers, accountants,
architects, and ironically, software engineers.
AI writes computer code, too.
Today, Sundar Pichai walks a narrow line.
A few employees have quit,
some believing that Google's AI rollout is too slow,
others, too fast.
There are some serious flaws.
James Manyika asked Bard about inflation.
It wrote an instant essay in economics and recommended five books.
But days later, we checked.
None of the books is real.
Bard fabricated the titles.
This very human trait, error with confidence, is known in the industry as hallucination. Are you getting a lot of hallucinations?
Yes, which is expected. No one in the field has yet solved the hallucination problems.
All models do have this as an issue.
Is it a solvable problem?
It's a matter of intense debate.
I think we'll make progress. To help cure hallucinations, Bard features a "Google it"
button that leads to old-fashioned search. Google has also built safety
filters into Bard to screen for things like hate speech and bias. How great a risk is the spread of disinformation?
AI will challenge that in a deeper way.
The scale of this problem is going to be much bigger.
Bigger problems, he says, with fake news and fake images.
It will be possible with AI to create a video easily
where it could be Scott saying something or me saying
something and we never said that and it could look accurate.
But at a societal scale, it can cause a lot of harm.
Is Bard safe for society?
The way we have launched it today as an experiment in a limited way, I think so.
But we all have to be responsible in each step along the way.
Last month, Google released an advanced version of Bard
that can write software and connect to the Internet.
Google says it's developing even more sophisticated AI models.
You are letting this out slowly so that society can get used to it?
That's one part of it.
One part is also so that we get the user feedback
and we can develop more robust safety layers
before we deploy more capable models.
Of the AI issues we talked about,
the most mysterious is called emergent properties.
Some AI systems are teaching themselves skills
they weren't expected to have.
How this happens is not well understood.
For example, one Google AI program adapted on its own after it was prompted in the
language of Bangladesh, which it was not trained to translate. We discovered that with very small
amounts of prompting in Bengali, it can now translate all of Bengali. So now all of a sudden,
we now have a research effort where we're now trying to
get to a thousand languages. There is an aspect of this which all of us in the field
call a black box. You know, you don't fully understand, and you can't quite tell why it said
this or why it got it wrong. We have some ideas, and our ability to understand this gets better over
time, but that's where the state of the art is.
You don't fully understand how it works, and yet you've turned it loose on society?
Let me put it this way.
I don't think we fully understand how a human mind works either.
Was it from that black box, we wondered, that Bard drew its short story
that seems so disarmingly human?
It talked about the pain that humans feel. It talked about redemption. How did it do all of
those things if it's just trying to figure out what the next right word is?
I mean, I've had these experiences talking with Bard as well. There are two views of this. There are a set of people who
view this as, look, these are just algorithms. They're just repeating what it's seen online.
Then there is the view where these algorithms are showing emergent properties to be creative, to reason, to plan, and so on. Personally, I think we need
to approach this with humility. Part of the reason I think it's good that some of these
technologies are getting out is so that society, people like you and others, can process what's
happening and we begin this conversation and debate. And I think
it's important to do that. When we come back, we'll take you inside Google's artificial
intelligence labs where robots are learning.
The revolution in artificial intelligence
is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the
optimistic middle, introducing AI in steps so that civilization can get used to it.
We saw what's coming next in machine learning earlier this year at Google's AI Lab in London,
a company called DeepMind, where the future looks something like this.
Look at that. Oh my goodness. They've got a pretty good kick on them. Can still get
a good game. A soccer match at DeepMind looks like fun and games, but here's the thing.
Humans did not program these robots to play.
They learned the game by themselves.
It's coming up with these interesting different strategies,
different ways to walk, different ways to block.
And they're doing it.
They're scoring over and over again.
This robot here.
Raia Hadsell, Vice President of Research and Robotics,
showed us how engineers used motion capture technology to teach the AI program how to move like a human.
But on the soccer pitch, the robots were told only that the object was to score.
The self-learning program spent about two weeks testing different moves. It discarded those that didn't work,
built on those that did,
and created All-Stars.
There's another goal.
And with practice, they get better.
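The trial-and-discard loop described here, testing different moves, dropping what fails, and building on what works, is the essence of this kind of self-learning. A minimal sketch of the idea follows; the goal angle, reward function, and simple hill-climbing search are invented stand-ins for illustration, not DeepMind's actual training method:

```python
import random

random.seed(0)  # make the toy run repeatable

# Invented target: the "move" is a single kick angle, and the reward is
# higher the closer the kick lands to the goal angle.
GOAL_ANGLE = 0.7

def score(angle):
    """Toy reward: negative distance from the goal angle (0 is perfect)."""
    return -abs(angle - GOAL_ANGLE)

# Start with a random move, then repeatedly try small variations,
# keeping only the ones that score better -- discard the rest.
best = random.uniform(0, 1)
for _ in range(2000):
    candidate = best + random.gauss(0, 0.05)  # try a slightly different move
    if score(candidate) > score(best):        # keep it only if it works better
        best = candidate

print(round(best, 2))  # converges near the goal angle
```

Two weeks of robot practice is, loosely, this loop run at enormous scale over far richer moves and rewards.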
Hadsell told us that independent from the robots,
the AI program plays thousands of games
from which it learns and invents its own tactics.
Here, you think that red player is going to grab it, but instead it just stops it.
Hands it back, passes it back, and then goes for the goal.
And the AI figured out how to do that on its own?
That's right. That's right. And it takes a while. At first, all the players just run
after the ball together like a gaggle of six-year-olds the first time they're playing ball.
Over time, what we start to see is now, ah, what's the strategy?
You go after the ball, I'm coming around this way.
Or we should pass, or I should block while you get to the goal.
So we see all of that coordination emerging in the play.
This is a lot of fun, but what are the practical implications of what we're seeing here?
This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments.
You know, think about mining, think about dangerous construction work or exploration or disaster recovery.
Raia Hadsell is among 1,000 humans at DeepMind.
The company was co-founded just 12 years ago by CEO Demis Hassabis.
So if I think back to 2010 when we started, nobody was doing AI.
There was nothing going on in industry.
People used to eye roll when we talked to them, investors, about doing AI.
So we could barely get two cents together to start off with,
which isn't crazy if you think about now the billions being invested into AI startups.
Cambridge, Harvard, MIT.
Hassabis has degrees in computer science and neuroscience.
His PhD is in human imagination.
And imagine this: when he was 12, he was the number two chess player in the world for his age group.
It was through games that he came to AI.
I've been working on AI for decades now, and I've always believed that it's going to be the most important invention that humanity will ever make.
Will the pace of change outstrip our ability to adapt?
I don't think so.
I think that we're sort of an infinitely adaptable species.
You look at today, us using all of our smartphones and other devices,
and we effortlessly sort of adapt to these new technologies.
And this is going to be another one of those changes like that.
Among the biggest changes at DeepMind
was the discovery that self-learning machines can be creative.
Hassabis showed us a game-playing program that learns. It's called AlphaZero,
and it dreamed up a winning chess strategy no human had ever seen.
But this is just a machine. How does it achieve creativity?
It plays against itself tens of millions of times, so it can explore parts of chess that
maybe human chess players and programmers who
program chess computers haven't thought about before. It never gets tired. It never gets hungry.
It just plays chess all the time. Yes, it's kind of an amazing thing to see because actually you
set off AlphaZero in the morning and it starts off playing randomly. By lunchtime, you know,
it's able to beat me and beat most chess players. And then by the evening, it's stronger than the world champion. Demis Hassabis sold DeepMind
to Google in 2014. One reason was to get his hands on this. Google has the enormous computing
power that AI needs. This computing center is in Pryor, Oklahoma, but Google has 23 of these, putting
it near the top in computing power in the world. This is one of two advances that make
AI ascendant now. First, the sum of all human knowledge is online, and second, brute-force computing that very loosely approximates the neural networks and talents of the brain.
Things like memory, imagination, planning, reinforcement learning,
these are all things that are known about how the brain does it,
and we wanted to replicate some of that in our AI systems.
Those are some of the elements that led to DeepMind's greatest achievement so far,
solving an impossible problem in biology. Proteins are building blocks of life, but only a tiny
fraction were understood because 3D mapping of just one could take years. DeepMind created an AI program for the protein problem and set it loose.
Well, it took us about four or five years to figure out how to build the system. It was
probably our most complex project we've ever undertaken. But once we did that, it can solve
a protein structure in a matter of seconds. And actually, over the last year, we did all the 200
million proteins that are known to science. How long would it have taken using traditional methods?
Well, the rule of thumb I was always told by my biologist friends is that it takes a
whole PhD five years to do one protein structure experimentally.
So if you think 200 million times five, that's a billion years of PhD time it would have
taken.
DeepMind made its protein database public, a gift to humanity,
Hassabis called it. How has it been used? It's been used in an enormously broad number of ways,
actually, from malaria vaccines to developing new enzymes that can eat plastic waste to new
antibiotics. Most AI systems today do one or maybe two things well.
The soccer robots, for example, can't write up a grocery list
or book your travel or drive your car.
The ultimate goal is what's called artificial general intelligence,
a learning machine that can score on a wide range of talents.
Would such a machine be conscious of itself?
So that's another great question.
We, you know, philosophers haven't really settled on a definition of consciousness yet.
But if we mean by sort of self-awareness and these kinds of things, you know, I think there
is a possibility AIs one day could be.
I definitely don't think they are today.
But I think, again, this is one of the fascinating scientific things we're going to find out on this journey towards AI.
Even unconscious, current AI is superhuman in narrow ways.
Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own.
Push the blue cube to the blue triangle.
They comprehend instructions.
Push the yellow hexagon to the yellow heart.
And learn to recognize objects.
What would you like?
How about an apple?
How about an apple?
On my way, I will bring an apple to you.
Vincent Vanhoucke, senior director of robotics, showed us how Robot 106 was trained on millions of images...
I am going to pick up the apple.
...and can recognize all the items on a crowded countertop.
If we can give the robot a diversity of experiences, a lot more
different objects in different settings, the robot gets better at every one of
them. Now that humans have pulled the forbidden fruit of artificial knowledge,
we start the genesis of a new humanity. AI can utilize all the
information in the world,
what no human could ever hold in their head.
And I wonder if humanity is diminished
by this enormous capability that we're developing.
I think the possibilities of AI do not diminish humanity in any way.
In fact, in some ways, I think they actually raise us to even deeper, more profound questions.
Google's James Manyika sees this moment as an inflection point.
I think we're constantly adding these superpowers or capabilities to what humans can do
in a way that expands possibilities
as opposed to narrow them, I think. So I don't think of it as diminishing humans, but it does
raise some really profound questions for us. Who are we? What do we value? What are we good at?
How do we relate with each other? Those become very, very important questions that are constantly going to
be, in one sense, exciting, but perhaps unsettling too. It is an unsettling moment.
Critics argue the rush to AI comes too fast, while competitive pressure among giants like Google and
startups you've never heard of is propelling humanity into the future, ready or not.
But I think if I take a 10-year outlook,
it is so clear to me
we will have some form of very capable intelligence
that can do amazing things,
and we need to adapt as a society for it.
Google CEO Sundar Pichai told us society must
quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties
among nations to make AI safe for the world. These are deep questions, and we call this alignment.
One way we think about it: how do you develop AI systems that are aligned to human values,
including morality?
This is why I think the development of this needs to include not just engineers, but social
scientists, ethicists, philosophers, and so on.
And I think we have to be very thoughtful.
And I think these are all things society needs to figure out as we move along.
It's not for a company to decide.
We'll end with a note that has never appeared on 60 Minutes,
but one that, in the AI revolution, you may be hearing often:
the preceding was created with 100% human content.
You probably know David Byrne as the lead singer and songwriter of Talking Heads,
the hugely influential post-punk rock band of the late 1970s and 80s.
They broke up more than 30 years ago, but Byrne has been on his own eclectic journey ever since.
His artistic innovations have blurred the boundaries of music, theater, and art.
He's won an Oscar, a Grammy, and a Tony,
toured with salsa singers, collaborated with neuroscientists, made movies,
and this summer his musical about the former First Lady of the Philippines Imelda Marcos
opens on Broadway. David Byrne is 71, and as we first told you earlier this year, he is as
creative, energetic, and unusual as he was when he was 23,
an art school dropout just starting to perform on stage with his friends as Talking Heads.
The name of this band is Talking Heads, and the name of this song is Psycho Killer.
So I wanted to be very matter-of-fact.
It's not like, are we having fun tonight?
Yeah, there's none of that, how you all doing?
How you all doing, New York? New York!
This is one of David Byrne's first performances
It was 1975 at CBGB's
A legendary music club where the Ramones, Patti Smith and Blondie
Were also just getting started.
Psycho Killer was only the second song David Byrne had ever written,
and it was Talking Heads' first hit.
When you hear it now, what do you think?
I'm glad I did it, but I'm also glad that I didn't stick with that.
Like, oh, this is working? Let's do more like this.
I'm glad that I decided, no, now you have to do things that are a little more original, musically.
And that's exactly what he did. Along with Tina Weymouth, Chris Frantz, and Jerry Harrison, Talking Heads put out eight albums over the next 13 years. This was a shopping mall, now it's all covered with flowers.
They were edgy, groundbreaking, critically acclaimed, and a commercial hit.
Here we go.
Melding rock with funk, disco, Afrobeat, and the avant-garde.
There's a city in my mind
Come along and take that ride, and it's all right
They'd all studied art in college, and it showed in their music videos,
which were in heavy rotation on MTV.
Letting the days go by
Let the water hold me down
Letting the days go by
Water flowing underground
Byrne's quirky movements and manner got most of the attention.
Same as it ever was.
Which was not always easy for the introverted singer.
Dick Clark tried to ask him about it on American Bandstand in 1979.
Are you a shy person?
I'd say so.
It seems contradictory to a lot of people.
The introvert who winds up on a stage in front of thousands of people performing and reaching great heights.
It does seem contradictory, but in retrospect, it makes perfect sense.
Your way of announcing your existence and communicating your thoughts to
people is through performance. And then I could retreat into my shell after that. But I'd made
myself known to these people and what I was thinking, what I was feeling. So when that's
your only option, it's a lifesaver. David Byrne's shyness goes way back. He was born in Scotland,
but his family moved to Baltimore when he was eight.
His accent was so thick, classmates could barely understand him.
He was an outsider, happier making music at home in his basement with a reel-to-reel tape recorder
than hanging out with other kids.
My discomfort with kind of social situations meant, as often happens, I would focus intently on my drawings or learning to play
other people's songs or things like that. And that continued for ages. And you're kind of
ultra-focused. So that becomes a kind of superpower. What about the time? You were falling over. Fall on your face.
You must be having fun.
Ultra Focus may be a superpower,
but it caused problems between Byrne and the band
that flared up on tour in 1983.
You may ask yourself, how do I work this?
I became, I think, kind of obsessive
about getting that show up and running.
I might not have been the most pleasant person to deal with at that point.
Demanding?
Yes.
Yes.
I got a girlfriend that's better than that.
She knows whatever she likes.
Byrne commanded center stage, famously wearing this outrageously oversized suit.
As we get older and stop making sense.
The show was made into a film by director Jonathan Demme called Stop Making Sense.
It's considered one of the greatest concert movies ever.
Talking Heads made three more albums, but Byrne was increasingly branching out on his own.
As I became more relaxed as a person, started writing different kinds of songs,
songs that maybe weren't quite as angst-ridden and peculiar, some fans were probably disappointed.
We liked the really quirky guy, or we liked the guy who was really struggling with himself and
really having a hard time. And I thought, why would you wish that on me?
For your own amusement, right?
In 1988, he founded a world music label.
Then released an album of Latin songs
and wrote music for films, dance companies, and experimental theater.
I genuinely started having other kind of musical interests.
You'd started to collaborate with a lot of artists from different genres.
Yes, and I thought, I want to do more of that.
And by then it was pretty much over.
There was never an official announcement, but eventually Byrne made an offhand comment to a reporter
that Talking Heads had broken up.
He neglected, it seems, to tell the band.
Members of the band said that you never actually talked to them
and said that the band was over, that they read about it in a newspaper.
I don't know if that's the case, but, well, it might be.
I think it is very possible that I did not handle it as best as I could.
Just say here lies love.
Byrne never looked back, and he's followed his own beat ever since, no matter how offbeat it may be.
Ten years ago, Byrne staged a pop opera in collaboration with Fatboy Slim called Here Lies Love.
It's about, of all people, Imelda Marcos, the wife of the
former dictator of the Philippines. It's now scheduled to open on Broadway this summer.
When he became fascinated with high school color guard teams in 2015,
he wound up staging arena shows combining the team's flag-spinning, weapon-tossing,
and dance to the pop music of Nelly Furtado and St. Vincent. I thought, oh, this is just going to be
highlighting their talent and putting people together who would never normally be together.
And it wasn't until I saw the show and I realized this is not about this at all. What it's really
delivering is this message about inclusion. That's what this is
about. They kind of revealed it. But isn't that extraordinary that you can start doing something
with one thing in mind and yet it has a life of its own? I trust what I do and what other people
do that way, that it's going to deliver what it wants to say. But someone else looking at it could
go, what are you talking about? You don't know what
you're doing? You don't know why you're doing it? You don't know where it's going to end up?
I just kind of trust it. Yeah. He has a small studio in his New York City apartment where he
tinkers with lyrics and new ideas, much like he did all those years ago in his parents' basement.
First stanza sounds like it might be promising. Do you stop and kind of
ruminate on things and come back to it? Yeah, I might see if I get like a chorus or something.
I might try it like a chorus.
Stood by me when darkness fell, my apartment is my friend.
That's the key line, so that's got to be pretty good.
Byrne is the quintessential New Yorker.
He's lived in the city for five decades,
and it's not uncommon to see him pedaling around on his bicycle.
He is, it seems, always on the move, always exploring.
Oh, yeah.
His downtown office is lined with books, records,
and odd mementos he's picked up here and there.
This wonderful wine from Turkmenistan.
Hidden amid the clutter, there's a Grammy and his 1988 Oscar for composing the soundtrack for the film The Last Emperor.
It's on the lowest shelf.
I mean, David, really? Does the Academy know about this?
You know when you go into somebody's office and they have all their awards?
Yes.
All framed all around them?
Or magazine covers. You don't have an ego wall.
His office is where he runs Reasons to be Cheerful.
Oh, that could be nice.
An online magazine highlighting creative solutions to complex problems,
from reinventing food banks in Chicago to turning French parking lots into solar farms.
So are there reasons to be cheerful?
Oh, yeah, yeah, yeah.
If you get up in the morning and start doom-scrolling through your phone or your tablet or laptop
or whatever, you're going to think, no, no, no, no, no.
World's going to hell in a handbasket.
But there are people and places, organizations doing things that are really making a difference,
finding solutions to things.
Who am I?
What do I want?
How do I work this?
That optimism infused a hit Broadway show Byrne created and starred
in called American Utopia.
It's actually like the performance branch of Reasons to be Cheerful.
This is really about hope and possibility and what and how we can work together as people.
He mixed his old songs with new ones.
Byrne wanted the musicians to be completely untethered,
allowing them to move freely around the stage.
It was less a Broadway musical, more a raucous revival.
Close enough but not too far, maybe you know where you are.
Fightin' fire with fire. Close enough but not too far, maybe you know where you are. Fightin' fire with fire.
There's this amazing feeling when music like that is all around you,
when there's a whole group of people who are making the music.
It's not just like one soloist or something like that.
It's this collective thing that gives it this extra energy.
Burning down the house. Here to the bottom of the mountain. Oh, yeah. Here to the bottom of the mountain.
Byrne's latest theatrical experience may be his most unusual yet. It's an interactive journey into his past called Theater of the Mind, produced in collaboration with the Denver
Center for Performing Arts. Audience members get random name tags and are led on a
semi-autobiographical tour of Byrne's memories, like this out-of-proportion kitchen, which makes anyone
in it feel like a child. Do this with me. Hold your hand in front of your face. The show is full
of surprises the audience takes part in, some of them based on neuroscience experiments. We agreed
not to give them away,
but they make you question your own perception and perhaps your memories.
It is dark in here. You know what?
Theater of the Mind ends in a replica of his parents' attic. Like Byrne's life,
the show tells a story about how over time our identities are malleable and how we all
have the capacity to change.
We're never stuck. You can change the story anytime. Isn't that nice?
I like that idea that you can change your story. You can change the narrative.
It would be a horrible world if people never changed for their entire life,
or they were an angry person, or an upset person,
or a depressed person, and it's like, that's your fate.
But that's not true.
Do you think you've changed that much?
I feel like, yeah, I'm a very different person
than I was when I was young.
Were you conscious of those changes?
Sometimes my friends would say,
you're really different than what you used to be
when I first met you.
You're a really different person now.
By the way, were they saying that in a nice way?
Or was that being yelled at at the top of their lungs?
It was a nice way.
It was like, wow, you've really changed.
I'm Anderson Cooper.
We'll be back next week with another edition of 60 Minutes.