60 Minutes - 04/16/23: The Revolution | The Unlikely Adventures of David Grann
Episode Date: April 17, 2023. Scott Pelley is given exclusive access to Google's AI lab in London and their Mountain View, California, headquarters as society moves closer to embracing the rapid advancements in artificial intelligence. How quickly machines can learn and teach themselves in the real world, the future of the artificial intelligence revolution, and other questions are discussed during Pelley's interview with Google CEO Sundar Pichai and other senior executives in charge of these systems. The Wager tells the true story of an open-water adventure in the 18th century that turns into a saga of shipwreck, anarchy, betrayal, and murder. Bestselling author and darling of Hollywood developers David Grann sits down with 60 Minutes before the release of his new book.
Transcript
There are very few things that you can be certain of in life.
But you can always be sure the sun will rise each morning.
You can bet your bottom dollar that you'll always need air to breathe and water to drink.
And, of course, you can rest assured that with Public Mobile's 5G subscription phone plans,
you'll pay the same thing every month.
With all of the mysteries that life has to offer, a few certainties can really go a long way.
Subscribe today for the peace of mind you've
been searching for. Public Mobile. Different is calling.
There is a revolution happening right now in the world of artificial intelligence.
Confounding.
Are we ready for it?
I am rarely speechless.
I don't know what to make of this.
With rare access, we will show you what Google is developing and the questions they're asking themselves.
On my way, I will bring an apple to you.
As they begin to unveil computing power that will change every part of our world forever.
I've been working on AI for decades now, and I've always believed that it's going to be the
most important invention that humanity will ever make. Please don't judge me. Do you like books
about adventures and heroism, true stories with unbelievable outcomes? Then author David Grann is your man.
Today, he's one of the world's top-selling writers, in part because of his hands-on,
years-long research that breathes life back into his fearless characters.
But Grann is the first to admit,
I am not an explorer. I mean, I would have been the first to die on the island,
let's be perfectly honest.
We're going to play this out. What's the cause of death?
Oh, my cause of death, terror.
I'm Lesley Stahl.
I'm Bill Whitaker.
I'm Anderson Cooper.
I'm Sharyn Alfonsi.
I'm Jon Wertheim.
I'm Scott Pelley.
Those stories and more tonight on 60 Minutes. We may look on our time as the moment civilization was transformed,
as it was by fire, agriculture, and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer,
which is to say with creativity, truth, error, and lies.
The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence,
machines that can teach themselves superhuman skills.
We explored what's coming next at Google, a leader in this new world.
CEO Sundar Pichai told us AI will be as good or as evil as human nature allows.
The revolution, he says, is coming faster than you know.
Do you think society is prepared for what's coming? You know, there are two ways
I think about it. On one hand, I feel no, because, you know, the pace at which we can think and adapt
as societal institutions compared to the pace at which the technology is evolving,
there seems to be a mismatch. On the other hand, compared to any other technology,
I've seen more people worried about it
earlier in its life cycle. So I feel optimistic about the number of people, you know, who have started
worrying about the implications, and hence the conversations are starting in a serious way as well.
Our conversations with 50-year-old Sundar Pichai started at Google's new campus in Mountain View, California.
It runs on 40% solar power and collects more water than it uses.
High-tech that Pichai couldn't have imagined growing up in India with no telephone at home.
We were on a waiting list to get a rotary phone for about five years.
And it finally came home.
I can still recall it vividly. It changed our lives.
To me, it was the first moment I understood the power of what getting access to technology meant.
So it probably led me to be doing what I'm doing today.
What he's doing since 2019 is leading both Google and its parent company, Alphabet,
valued at $1.3 trillion. Worldwide, Google runs 90% of internet searches and 70% of smartphones.
We're really excited about it. But its dominance was attacked this past February when Microsoft linked its search engine to a chatbot.
In a race for AI dominance, Google just released its chatbot named Bard.
It's really here to help you brainstorm ideas, to generate content like a speech or a blog post or an email. We were introduced to Bard by Google Vice President Sissie Hsiao
and Senior Vice President James Manyika.
Here's Bard.
The first thing we learned was that Bard does not look for answers on the Internet
like Google Search does.
So I wanted to get inspiration from some of the best speeches in the world.
Bard's replies come from a self-contained program that was mostly self-taught. Our experience was
unsettling. Confounding. Absolutely confounding. Bard appeared to possess the sum of human
knowledge with microchips more than 100,000 times faster than the human brain.
We asked Bard to summarize the New Testament. It did in five seconds and 17 words.
We asked for it in Latin. That took another four seconds. Then we played with a famous six-word short story, often attributed to Hemingway.
For sale, baby shoes, never worn.
Wow.
The only prompt we gave was, finish this story.
In five seconds.
Holy cow.
The shoes were a gift from my wife, but we never had a baby.
From the six-word prompt, Bard created a deeply human tale with characters it invented,
including a man whose wife could not conceive and a stranger grieving after a miscarriage and longing for closure.
I am rarely speechless. I don't know what to make of
this. Give me... We asked for the story in verse. In five seconds, there was a poem written by a
machine with breathtaking insight into the mystery of faith. Bard wrote, she knew her baby's soul
would always be alive. The humanity at superhuman speed was a shock. How was this possible?
James Manyika told us that over several months, Bard read most everything on the Internet and created a model of what language looks like.
Rather than search, its answers come from this language model.
So, for example, if I said to you, Scott, peanut butter and?
Jelly.
Right. So it tries and learns to predict.
Okay, so peanut butter usually is followed by jelly. It tries to predict the most probable next words based on everything it's learned.
So it's not going out to find stuff. It's just predicting the next word.
But it doesn't feel like that.
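That next-word mechanism can be illustrated with a toy counting model. This is a minimal sketch for intuition only, not Google's system: count which word follows which in a tiny made-up corpus, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). A real language model learns
# from vastly more text and predicts with probabilities, not raw counts.
corpus = (
    "peanut butter and jelly . "
    "bread and butter . "
    "peanut butter and jelly sandwich ."
).split()

# Tally, for each word, which words were seen immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("peanut"))  # "butter"
print(predict_next("and"))     # "jelly"
```

In this toy, "peanut" is always followed by "butter," and "and" is followed by "jelly" more often than anything else, so those are the predictions. It is not going out to find facts; it is only predicting the next word.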
We asked Bard why it helps people, and it replied, quote, because it makes me happy.
Bard, to my eye, appears to be thinking, appears to be making judgments.
That's not what's happening. These machines are not sentient. They are not aware of themselves.
They can exhibit behaviors that look like that.
Because keep in mind, they've learned from us.
We are sentient beings.
We are beings that have feelings, emotions, ideas, thoughts, perspectives.
We've reflected all that in books, in novels, in fiction.
So when they learn from that, they build patterns from that.
So it's no surprise to me that the exhibited behavior sometimes looks like maybe there's somebody behind it.
There's nobody there. These are not sentient beings.
Zimbabwe-born, Oxford-educated James Manyika holds a new position at Google.
His job is to think about how AI and humanity
will best coexist. AI has the potential to change many ways in which we've thought about society,
about what we're able to do, the problems we can solve. But AI itself will pose its own problems. Could Hemingway write a better short story?
Maybe.
But Bard can write a million before Hemingway could finish one.
Imagine that level of automation across the economy.
A lot of people can be replaced by this technology.
Yes, there are some job occupations that will start to decline over time.
There are also new job categories that will grow over time. But the biggest change will be the jobs that will
be changed. Something like more than two-thirds will have their definitions change, not go away,
but change, because they're now being assisted by AI and by automation. So this is a profound change, which has implications for skills.
How do we assist people in building new skills, learning to work alongside machines,
and how do these complement what people do today?
This is going to impact every product across every company,
and so that's why I think it's a very, very profound technology,
and so we are just in early days.
Every product in every company.
That's right.
AI will impact everything.
So, for example, you could be a radiologist.
If you think about five to ten years from now, you're going to have an AI collaborator with you.
It may triage.
You come in the morning.
Let's say you have a hundred things to go through.
It may say these are the most serious cases you need to look at first.
Or when you're looking at something, it may pop up and say you may have missed something important.
Why wouldn't we?
Why wouldn't we take advantage of a super-powered assistant to help you across everything you do?
You may be a student trying to learn math or history, and you will have something helping you.
We asked Pichai what jobs would be disrupted. He said knowledge workers, people like writers,
accountants, architects, and ironically, software engineers. AI writes computer code, too. Today, Sundar Pichai walks a narrow line.
A few employees have quit, some believing that Google's AI rollout is too slow,
others too fast. There are some serious flaws. James Manyika asked Bard about inflation. It wrote an instant essay in economics and recommended five books.
But days later, we checked.
None of the books is real.
Bard fabricated the titles.
This very human trait, error with confidence, is called in the industry "hallucination."
Are you getting a lot of
hallucinations? Yes, you know, which is expected. No one in the field has yet solved the hallucination
problems. All models do have this as an issue. Is it a solvable problem? It's a matter of intense debate. I think we'll make progress.
To help cure hallucinations, Bard features a "Google It" button that leads to old-fashioned search.
Google has also built safety filters into Bard to screen for things like hate speech and bias.
How great a risk is the spread of disinformation?
AI will challenge that in a deeper way.
The scale of this problem is going to be much bigger.
Bigger problems, he says, with fake news and fake images.
It will be possible with AI to create, you know, a video easily
where it could be Scott saying something or me saying something,
and we never said that, and it could look accurate.
But at a societal scale, it can cause a lot of harm.
Is Bard safe for society?
The way we have launched it today, as an experiment in a limited way, I think so.
But we all have to be responsible in each step along the way.
Pichai told us he's being responsible by holding back, for more testing, advanced versions of Bard
that he says can reason, plan, and connect to internet search.
You are letting this out slowly so that society can get used to it?
That's one part of it. One part is also so that we get the user feedback and we can develop more
robust safety layers before we build, before we deploy more capable models. Of the AI issues we talked about, the most mysterious is called emergent properties.
Some AI systems are teaching themselves skills that they weren't expected to have.
How this happens is not well understood.
For example, one Google AI program adapted on its own
after it was prompted in the language of Bangladesh, which it was not trained to know.
We discovered that, with a very small amount of prompting in Bengali, it can now translate all of Bengali.
So now all of a sudden, we now have a research effort where we're now trying to get to a thousand languages.
There is an aspect of this which all of us in the field call a black box.
You know, you don't fully understand.
And you can't quite tell why it said this or why it got it wrong.
We have some ideas, and our ability to understand this gets better over time.
But that's where the state of the art is.
You don't fully understand how it works, and yet you've turned it loose on society?
Let me put it this way.
I don't think we fully understand how a human mind works either.
Was it from that black box, we wondered, that Bard drew its short story that seemed so disarmingly human?
It talked about the pain that humans feel.
It talked about redemption.
How did it do all of those things if it's just trying to figure out what the next right word is?
I mean, I've had these experiences talking with Bard as well.
There are two views of this.
There are a set of people who view this as, look, these are just algorithms.
They're just repeating what it's seen online.
Then there is the view where these algorithms are showing emergent properties to be creative, to reason, to plan, and so on, right? And personally, I think we need to approach this with humility.
Part of the reason I think it's good that some of these technologies are getting out
is so that society, you know, people like you and others can process what's happening
and we begin this conversation and debate, and I think it's important to do that.
When we come back, we'll take you inside Google's artificial intelligence labs,
where robots are learning.
Sometimes historic events suck.
But what shouldn't suck is learning about history.
I do that through storytelling. History That Doesn't Suck is a chart-topping,
history-telling podcast chronicling the epic story of America, decade by decade.
Right now, I'm digging into the history of incredible infrastructure projects of the 1930s,
including the Hoover Dam, the Empire State Building, the Golden Gate Bridge, and more.
The promise is in the title, History That Doesn't Suck.
Available on the free Odyssey app or wherever you get your podcasts.
The revolution in artificial intelligence is the center of a debate
ranging from those who hope it will save humanity to those who predict doom.
Google lies somewhere in the optimistic middle, introducing AI in steps
so civilization can get used to it. We saw what's coming next in machine learning at Google's AI
lab in London, a company called DeepMind, where the future looks something like this.
Look at that. Oh my goodness. They've got a pretty good kick on
them. Can still get a good game. A soccer match at DeepMind looks like fun and games, but here's
the thing. Humans did not program these robots to play. They learned the game by themselves.
It's coming up with these interesting different
strategies, different ways to walk, different ways to block. And they're doing it. They're
scoring over and over again. This robot here. Raia Hadsell, vice president of research and robotics,
showed us how engineers used motion capture technology to teach the AI program how to move like a human. But on the soccer pitch,
the robots were told only that the object was to score. The self-learning program spent about two
weeks testing different moves. It discarded those that didn't work, built on those that did, and created All-Stars. There's another goal.
And with practice, they get better. Hadsell told us that, independently of the robots,
the AI program plays thousands of games from which it learns and invents its own tactics.
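In spirit, that discard-what-fails, keep-what-works loop can be sketched as a toy hill-climber. This is a minimal illustration under invented assumptions (a one-number "policy" and a made-up score function), not DeepMind's actual training method:

```python
import random

# Toy trial-and-error learner: propose a random tweak to the policy,
# keep it if the score improves, discard it otherwise.
random.seed(0)  # fixed seed so the run is repeatable

def score(policy):
    # Hypothetical objective: play is best when policy == 7.0
    return -(policy - 7.0) ** 2

policy = 0.0
for _ in range(5000):
    candidate = policy + random.uniform(-0.5, 0.5)  # try a different move
    if score(candidate) > score(policy):            # keep what works
        policy = candidate                          # discard what doesn't
print(round(policy, 2))  # a value close to 7.0
```

After thousands of blind trials, the kept tweaks accumulate and the policy lands near the optimum, much as the players' early six-year-old scrum gradually becomes coordinated strategy.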
Here, you think that red player is going to grab it, but instead it just stops it,
hands it back, passes it back, and then goes for the goal. And the AI figured out how to do that
on its own? That's right. That's right. And it takes a while. At first, all the players just
run after the ball together like a gaggle of six-year-olds the first time they're playing ball.
Over time, what we start to see is now, ah, what's the strategy?
You go after the ball, I'm coming around this way,
or we should pass, or I should block while you get to the goal.
So we see all of that coordination emerging in the play.
This is a lot of fun, but what are the practical implications of what we're seeing here?
This is the type of research that can eventually lead to robots that can come out of the factories
and work in other types of human environments.
You know, think about mining, think about dangerous construction work, or exploration, or disaster recovery.
Raia Hadsell is among 1,000 humans at DeepMind.
The company was co-founded just 12 years ago by CEO Demis Hassabis.
So if I think back to 2010, when we started, nobody was doing AI. There was nothing going on in industry. People used to eye-roll when we talked to them, investors, about doing AI. So we could barely get two cents together to start off with, which is crazy if you think about now the billions being invested in AI startups.
Cambridge, Harvard, MIT: Hassabis has degrees in computer science and neuroscience. His PhD is in human imagination. And imagine this.
When he was 12, he was the number two chess champion in the world for his age group.
It was through games that he came to AI.
I've been working on AI for decades now,
and I've always believed that it's going to be the most important invention
that humanity will ever make. Will the pace of change outstrip our ability to adapt?
I don't think so. I think that we, you know, we're sort of an infinitely adaptable species.
You know, you look at today us using all of our smartphones and other devices,
and we effortlessly sort of adapt to these new technologies.
And this is going to be another one of those changes like that.
Among the biggest changes at DeepMind was the discovery
that self-learning machines can be creative.
Hassabis showed us a game-playing program that learns.
It's called AlphaZero,
and it dreamed up a winning chess strategy
no human had ever seen. But this is just a machine. How does it achieve creativity?
It plays against itself tens of millions of times, so it can explore parts of chess that
maybe human chess players and programmers who program chess computers haven't thought about
before. It never gets tired. It never gets hungry. It just plays chess all the time.
Yes, it's kind of an amazing thing to see because actually you set off alpha zero in the morning
and it starts off playing randomly. By lunchtime, you know, it's able to beat me and beat most chess
players. And then by the evening, it's stronger than the world champion. Demis Hassabis sold DeepMind to Google in 2014.
One reason was to get his hands on this.
Google has the enormous computing power that AI needs.
This computing center is in Pryor, Oklahoma,
but Google has 23 of these,
putting it near the top in computing power in the world.
This is one of two advances that make AI ascendant now.
First, the sum of all human knowledge is online.
And second, brute force computing
that very loosely approximates the neural networks and talents of the brain.
Things like memory, imagination, planning, reinforcement learning,
these are all things that are known about how the brain does it,
and we wanted to replicate some of that in our AI systems.
Those are some of the elements that led to DeepMind's greatest achievement so far,
solving an impossible problem in biology. Proteins are building blocks
of life, but only a tiny fraction were understood because 3D mapping of just one could take years.
DeepMind created an AI program for the protein problem and set it loose.
Well, it took us about four or five years
to figure out how to build the system.
It was probably our most complex project we've ever undertaken.
But once we did that,
it can solve a protein structure in a matter of seconds.
And actually, over the last year,
we did all the 200 million proteins that are known to science.
How long would it have taken using traditional methods?
Well, the rule of thumb I was always told by my biologist friends
is that it takes a whole PhD five years to do one protein structure experimentally.
So if you think 200 million times five,
that's a billion years of PhD time it would have taken.
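The back-of-the-envelope arithmetic from that rule of thumb works out as stated:

```python
# Rule of thumb from the interview: one PhD, five years, one protein.
proteins = 200_000_000   # protein structures the system solved
years_each = 5           # PhD-years per structure, done experimentally
total = proteins * years_each
print(total)  # 1000000000, i.e. a billion PhD-years
```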
DeepMind made its protein database public,
a gift to humanity, Hassabis called it.
How has it been used?
It's been used in an enormously broad number of ways, actually,
from malaria vaccines to developing new enzymes that can eat plastic waste to new antibiotics.
Most AI systems today do one or maybe two things well.
The soccer robots, for example, can't write up a grocery list or book your travel or drive your car.
The ultimate goal is what's called artificial general intelligence,
a learning machine that can score on a wide range of talents.
Would such a machine be conscious of itself? So that's another great
question. We, you know, philosophers haven't really settled on a definition of consciousness yet,
but if we mean by sort of self-awareness and these kinds of things, you know, I think there
is a possibility AIs one day could be. I definitely don't think they are today. But I think, again,
this is one of the fascinating scientific things we're going to find out on this journey towards AI. Even unconscious, current AI is superhuman in
narrow ways. Back in California, we saw Google engineers teaching skills that robots will
practice continuously on their own. Push the blue cube to the blue triangle.
They comprehend instructions.
Push the yellow hexagon to the yellow heart.
And learn to recognize objects.
What would you like?
How about an apple?
How about an apple?
On my way, I will bring an apple to you.
Vincent Vanhoucke, Senior Director of Robotics,
showed us how Robot 106 was trained
on millions of images. I am going to pick up the apple. And can recognize all the items on a crowded
countertop. If we can give the robot a diversity of experiences, a lot more different objects in
different settings, the robot gets better at every one of them.
Now that humans have pulled the forbidden fruit of artificial knowledge,
we start the genesis of a new humanity.
AI can utilize all the information in the world,
what no human could ever hold in their head.
And I wonder if humanity is diminished by this enormous capability that we're developing.
I think the possibilities of AI do not diminish humanity in any way.
In fact, in some ways, I think they actually raise us to even deeper, more profound
questions. Google's James Manyika sees this moment as an inflection point. I think we're constantly
adding these superpowers or capabilities to what humans can do in a way that expands possibilities
as opposed to narrow them, I think. So I don't think of it as diminishing humans,
but it does raise some really profound questions for us.
Who are we? What do we value?
What are we good at? How do we relate with each other?
Those become very, very important questions
that are constantly going to be, in one case,
in a sense, exciting, but perhaps unsettling, too.
It is an unsettling moment.
Critics argue the rush to AI comes too fast, while competitive pressure among giants like Google and startups you've never heard of is propelling humanity into the future, ready or not. But I think if I take a 10-year outlook,
it is so clear to me we will have some form of
very capable intelligence
that can do
amazing things.
And we need to adapt
as a society for it.
Google CEO Sundar Pichai
told us society must quickly
adapt with regulations for AI in the economy,
laws to punish abuse, and treaties among nations to make AI safe for the world.
These are deep questions, and we call this alignment. One way we think about it: how do you develop AI systems that are aligned to human values, including morality?
This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.
And I think we have to be very thoughtful.
And I think these are all things
society needs to figure out as we move along. It's not for a company to decide.
We'll end with a note that has never appeared on 60 Minutes, but one in the AI revolution you may
be hearing often. The preceding was created with 100% human content.
Sundar Pichai explains the evolution of Google's founding "don't be evil" motto.
It's a lot more of a nuanced view, but it underpins how we think about things.
At 60minutesovertime.com.
What's better than a well-marbled ribeye sizzling on the barbecue?
A well-marbled ribeye sizzling on the barbecue that was carefully selected by an Instacart shopper and delivered to your door.
A well-marbled ribeye you ordered without even leaving the kiddie pool.
Whatever groceries your summer calls for, Instacart has you covered.
Download the Instacart app and enjoy $0 delivery fees on your first three orders.
Service fees, exclusions, and terms apply.
Instacart. Groceries that over-deliver.
Some authors are perfect matches for their subject matter.
John Grisham was once a trial lawyer.
John le Carre was once a spy by another name.
Then there's David Grann, who has emerged as one of the world's top-selling writers
and darling of Hollywood developers by venturing into unknown worlds,
abandoning his comfort zone.
The Unlikely Adventures of David Grann.
His latest book, The Wager, tells of British castaways from the 1740s.
It's an open water quest that becomes a saga of shipwreck, anarchy, betrayal, and murder.
Imagine Mutiny on the Bounty meets Lord of the Flies, except every word of it really happened.
Grann's success comes from, yes, meticulous reporting and vivid writing,
but also from how he puts the pieces together.
You talk about structuring these stories as a puzzle.
Yes.
Is there only one way to solve this puzzle?
Well, I'm very weird about this. I do always think there is some kind of idyllic form of a story,
like some perfect, pristine, lost city that you're trying to find and get to.
We're going to structure the David Grann story.
What would you suggest? Where would we start?
Oh, gosh, in some archive, looking semi-blind at some document.
That's where it always begins.
We can work with that.
We find our subject inside the National Archives in the suburbs of London,
unboxing dusty files, consulting documents so frail they require a pillow for support.
Grann spent two years playing detective, gathering facts, source material for his latest book.
We're going to have to touch this really carefully.
He took us tumbling back in time to the 18th century.
You see that?
Communing with logbooks, muster books, and diaries from the expedition of the HMS Wager,
the warship featured in his book.
And here you see the little initials next to their names here.
You'll see Lieutenant.
You'll see AB means Able Seaman.
How are you deciphering this?
When I first looked at a lot of these books, it was like reading gibberish.
I was like, what is this telling me? And so I would have to look at it again, look at it again,
start to figure out the codes, the language they use. But once you do, these documents speak
volumes. All these names and symbols told a larger story. With their empires at war,
a British squadron of roughly 2,000 men set out to capture a Spanish galleon filled with treasure off the Philippines.
That meant rounding Cape Horn, negotiating some of the world's most treacherous waters and winds.
But one of the ships in the squadron lost its way just off the coast of Patagonia.
Grann showed us on his own map where the Wager got into trouble, a place aptly named:
the Gulf of Pain.
They're barreling into the Gulf of Pain.
As they're coming around, they're desperately, frantically trying to avoid this land.
The Wager careened into rocks, ripping apart.
145 castaways, many sick from scurvy, swam to the nearest island.
You'd think the name alone, the Gulf of Pain, would discourage visitors,
especially one bespectacled 56-year-old man who admits he hates camping.
I spent the first two years doing research in a way very suited to my physical attributes,
which was in archives.
You're indoors.
Yes, indoors.
But there came a point where I began to fear that I could never fully understand what these 150 or so men
had gone through on that island unless I went. There's always a moment where something gnaws
at you, something unknown. And so it was then that I decided to try to make this trip.
So in 2019, Grann flew to Chile and chartered a 52-foot vessel.
The boat looked pretty big.
I thought, this is good.
This is going to be like a Jacques Cousteau expedition.
We're going to be fine.
We kind of stay originally through these channels that are sheltered in Patagonia.
I think, ah, this is perfect.
It's beautiful.
It's a little cold.
It's winter, but it's beautiful.
And then there's a certain point where the captain says to me, all right, now we've got to go out into the open sea if we're going to get to Wager Island.
And that was my first glimpse of these terrifying seas.
Rough seas.
It was truly terrifying, or at least for me.
My captain seemed cool.
Grann and his crew endured the moody waters over a 10-day journey.
I had to sit on the floor, hunker down.
The Dramamine was pumping in me.
This is Wager Island, named for the ship that washed up nearly 300 years ago.
A spit of inhospitable land hugging the Pacific coast.
Scenic from a distance, but you wouldn't want to spend the night.
The castaways did months in unrelenting cold and whipping winds. You know
it's bad when celery is the big selling point, the only edible thing that grows on the island,
though it does cure scurvy. There were no animals. I kept thinking, oh, there's got to be something,
like something. There's got to be a rat. We couldn't find anything. This depth of detail
is Grann's hallmark. He's created his own subgenre of narrative nonfiction,
keeping readers hanging with a page-turning mix of history, journalism, and true crime.
But it's also literary pointillism.
You step back and glimpse a larger tableau, one with broader themes.
This fascinates you.
Oh, yeah.
Yeah, well, you see, I mean, on this island, you see everything playing out.
You see questions of leadership playing out.
You see questions of loyalty playing out, questions of duty playing out.
You see human nature being peeled back.
All that is taking place in this little tempest.
And this was no one-off.
For his first book, 2009's The Lost City of Z, a number one bestseller turned into a feature film.
Everyone out of the boat!
Grann trekked through the Amazon to a place known as the Green Hell, following the trail of a British explorer, Percy Fawcett.
Did I hear right? You took out supplemental life insurance?
Yes, I did. I made sure I got extra travel insurance. I had a little child at the time.
There is something, and I think this is important, it's not something I
really like to talk about, but there is something selfish about these journeys, and even something
about the people I write about, because many of them die on these expeditions. Grann's swashbuckling
takes on an added degree of difficulty on account of a degenerative eye condition he's had since his
20s. What's the impact of that on your work? I mean, it's terrible when you're on an expedition,
like you can't see at night and you're stumbling, getting lost,
or you're falling, or you're on a boat, or something like that.
But because I know I have this weakness,
I'm very acutely observing as much as I can,
and in some ways maybe paying more attention
than if I could just take it in so easily.
Grann first put those powers of observation
to work as a reporter on Capitol Hill.
But tired of Washington's spin, he wanted to write real stories.
In 2003, he joined the New Yorker magazine.
In one issue, he might write about an eccentric giant squid hunter in New Zealand.
In another, a botched death penalty conviction in Texas.
All of it predicated on exhaustive research.
Please don't judge me.
From his office, itself an inhospitable island of sorts,
at his home in a suburb of New York,
Grann showed us a pile of research from his 2017 book, Killers of the Flower Moon.
The book centered on the mysterious deaths among members of the oil-rich Osage Nation in 1920s Oklahoma.
And, boxed up in an archive, where else,
Grann found a smoking gun, evidence of a systemic murder campaign by outsiders.
This was secret grand jury testimony, and it was unmarked. I mean, it was a public record,
but I was like, is this supposed to be, am I allowed to look at this?
The book has sold nearly two million copies, and it ignited a Hollywood auction. The winning bid,
five million dollars.
The film, directed by Martin Scorsese, starring Leonardo DiCaprio, premieres at Cannes next month.
Paramount, parent company of CBS, is a distributor. It's not lost on Grann that stories birthed in
decidedly unglamorous archives end up on red carpets in the French Riviera. The Wager, out
this week, has also been optioned for film.
It would be Grann's sixth story to hit the big screen.
Do you worry what Hollywood's going to do to your work?
Yes. Yeah, you always worry. The truth is, you don't have that much control when Hollywood
develops your work.
What is your role once one of your books gets put into development?
Maybe a certain actor wants to know about the person they're playing.
One of the stars will call you and say, tell me more?
Yeah.
What's an example?
Oh, I'll respect privacy, but, you know, occasionally some people will reach out to you, but...
You do not seem particularly comfortable talking about the Hollywood angle to this.
No, I don't like talking about...
Your posture's changing.
Yeah, I don't like it. I don't like it.
Because, you know, it's just a different world, you know? It's just a different world.
This is Portsmouth.
Grann feels much more comfortable transporting himself three centuries back to this world.
The Wager sets sail here, in the British harbor town of Portsmouth.
The entire expedition may have faded from memory,
but Grann being Grann, he saw references everywhere.
Anson's name is still remembered and on this pub.
We visited the ship Anson,
a pub named for the squadron's leader, George Anson.
Here, men were rounded up by the British Navy
and pressed into service on the Wager's doomed mission.
You could be drinking, you know, having a beer, enjoying yourself.
The next minute you know you're being put on a little boat
that was like a floating jail.
They would take you out to the ship.
And it's what made creating unity and cohesion on this expedition particularly challenging.
A few hundred yards from the pub, we boarded the HMS Victory,
an 18th century warship preserved in the harbor,
virtually the same model of ship as the Wager,
a thousand tons of oak and rope where the crew ate and slept next
to cannons. And after it was fired, you'd have this huge force flying back and you better get out of
the way. Having immersed himself in what he calls the wooden world, Grann got to the point where he could
render a description like this. At one point it was so windy and the gusts were so strong
that they couldn't fly their sails. So the captain orders
the men to climb the mast and to use their bodies as threadbare sails. So they are on top of the
mast, some of them 100 feet in the air, in a typhoon. You have to understand that the masts
are going like this. They're almost touching the water, and these men are clinging like spiders.
Wow.
As Grann breathes fresh life into events
from hundreds of years ago,
you almost wonder if he had climbed the mast himself.
He's the first to admit, yeah, that wouldn't be the case.
I am not an explorer.
Like, if you compare, I mean, when I look at these people,
I mean, I would have been the first to die on the island,
let's be perfectly honest.
We're going to play this out.
What's the cause of death?
Oh, my cause of death, terror.
I would have taken one look at those seas and be like, I'm out of here.
This is nuts.
So, you know, I would never have endured anything that these people endured. But my own quests do sometimes get me into places and to do things
I otherwise would never do in my ordinary life.
You would never catch me going to Wager Island in a little boat.
On Wager Island, marooned and starving, the castaways split into factions,
including a group intent on overthrowing the captain,
an act of mutiny,
punishable by death. When two groups of castaways made it home, we won't spoil how,
they had conflicting accounts of what had happened.
You know, imagine this. They get back to England. They have survived scurvy, multiple typhoons,
starvation, shipwreck. And now, after all that, they're summoned to face a court martial,
and they could be hanged. I mean, it's just kind of unbelievable.
Unbelievable and complicated. Grann solved the puzzle of structuring The Wager by telling this
tale on the high seas from three different perspectives, allowing readers to decide for
themselves where the truth resides.
And if he fixated on the perfect way to let the book unfold, devoted to his own quest as his characters are to theirs, that's what makes it classic Grann.
What is your obsession with obsession?
You know, I always thought for a long time that my fascination with obsessed people was
because they made the best stories, right? I mean, the Ahabs of the world.
There's a reason why we tell Ahab stories, right?
Over time, you know, I've begun to realize
that I might have a little bit more in common
with some of these obsessives than I care to admit.
You call it your fascination with obsession.
So they're obsessed, you're merely fascinated.
That's what I like to think.
Yes, I'm just merely, I'm completely dispassionate.
But, you know, the truth is that I don't think you can really be a writer and a researcher and an investigator unless you are, at some level, obsessed.
In the mail, comments on last Sunday's broadcast.
The origin of everything showed some of the stunning images captured by the Webb Space Telescope. The resurrection of Notre Dame chronicled the reconstruction of Paris' fire-damaged medieval cathedral. What was most striking was the enthusiasm and inspiring
take shown by the Webb scientists as well as the devoted people involved with restoring Notre
Dame. They exemplify the best of our human species. But one viewer's inspiration is another
viewer's apostasy. How disgusting to see a 60 Minutes segment on the Big
Bang Theory on Easter. I'm Scott Pelley. We'll be back next week with another edition of 60 Minutes.