The Prof G Pod with Scott Galloway - AI, Big Data, and the Power of Framing — with Kenneth Cukier
Episode Date: May 13, 2021. Kenneth Cukier, a senior editor at The Economist, joins to share insights from his latest book, Framers: Human Advantage in an Age of Technology and Turmoil. Kenneth explains how to use frames to make better decisions and avoid crises, as well as why we should rethink the phrase “think outside the box.” He also shares his thoughts on how artificial intelligence is shaping business, healthcare, and society. Follow him on Twitter, @kncukier. Scott opens with why he believes America is experiencing the worst LBO in history. Algebra of Happiness: Develop a process for resetting. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for PropG comes from NerdWallet.
Head over to nerdwallet.com forward slash learn more to find smarter credit cards, savings accounts, mortgage rates, and more.
NerdWallet. Finance smarter.
NerdWallet Compare Incorporated.
NMLS 1617539.
Episode 67. The atomic number of holmium.
The biggest box office film of 1967? The Graduate. I have no problem with older women and younger men. As a matter of fact,
I banged a hot cougar last weekend, and now I'm banned from the zoo.
Okay, that was bad. Go, go, go!
Welcome to the 67th episode of the Prop G Show. How'd you like that opening, huh? You don't get that on un-fucking The Daily from the New York Times.
Anyways, in today's episode, we speak with Kenneth Cukier, a senior editor at The Economist and co-author of Framers: Human Advantage in an Age of Technology and Turmoil.
We discuss with Ken the insights from his book, including how to better frame crises
and why the phrase, think outside the box, misses the mark as a cognitive method.
We also hear Ken's thoughts on AI's role in business and society.
Okay.
All right.
What is on our mind? Today, we're talking about
LBOs or leveraged buyouts. What's an LBO? It's when a private equity firm acquires another company
using a significant amount of debt or borrowed money to meet the costs of the acquisition or
to pay for the acquisition. And the most important part, puts very little of its own capital, its own
equity into the company. The problem with LBOs or what
private equity guys have been accused of is that they buy a good company, they fire a third of the
staff, they cut costs, they put huge leverage on the firm, take out all of the money and hope the
company survives. But if it doesn't, they have all their money out and oftentimes you end up with a
weaker company in 10 years because they have failed to make any investments in infrastructure over the long term.
Now, to be fair, a lot of companies emerge from buyouts stronger.
A lot of companies need the discipline of debt.
They need to get in fighting shape.
And a lot of people would argue that the markets are smarter than that, and that if they see a company's owners not making the requisite investments, they'll take the
appreciation of the stock down, and it just isn't a good business model. However, it's just hard to argue that there haven't been some
buyouts where the folks show up, lever the shit out of the company, kind of take off and say,
all right, you're on your own, and you're left with something that's a shadow of itself. And
typically, there's a decent amount of human costs involved in the culling, if you will.
The private equity firm, through financial engineering and operational improvements, Latin for violently shrinking
headcount and cutting costs, hopes the acquired company appreciates in value over the time so
they can eventually sell it at a profit. And that does happen a lot. Private equity is probably,
I would imagine, on a risk-adjusted basis, the best performing asset class.
If I sound conflicted, I am. I am. Often that means telling a great story when the
market for IPOs is hot and flinging feces at unwitting investors. And today we're seeing
the greatest LBO of all time play out or the worst, I guess, we've taken sort of the worst
attributes or what we sort of, I don't know, if we look at LBOs through a negative lens,
if we look at LBOs through the Senator Sanders and
Senator Warren lens, which is, I don't know, a bit of a cartoon of what it actually is,
but there is a kernel of truth there. The management of the U.S. and its key institutions,
you could argue, is the worst LBO. Whether in government, central banks, hospitals, or higher education, they have sought to completely consume the public benefit that their services
offer to prop up their own balance sheets and 401ks. We see this as the great grift, or I've called it the great
grift, a wealth transfer from the young and future generations to the already wealthy and a way to
add more leverage to the U.S.'s balance sheet to prop up the NASDAQ, support an obese tax code to
make sure the shareholder class can continue to pay less tax on the money their money makes compared
to the money people actually sweat for.
And that's the difference between taxes on capital gains versus current income and pump
money into the hedge funds that we call universities masquerading around as if they're charities
or nonprofits.
Why do these institutions have nonprofit status?
Why?
Anyway, at the same time, perhaps until now, this has been matched with a lack of investment
in the backbone of our country, specifically infrastructure, which has left an empty shell of a country or
a weakened country, if you will. The pillars, the base, the infrastructure, the, I don't know,
whatever you call it, the cement, you know, the steel on the ground. We've just said, I know,
I know, let's cut taxes, let's party, it's champagne and cocaine, and we'll worry about
tomorrow, tomorrow.
Are we in the midst of the mother of all bad leveraged buyouts, where we've cut costs like
crazy and have not made any long-term or forward-leaning investments? Where has
America really made investments? The Republicans would argue we've invested in our most productive
citizens. Well, okay, what the fuck does that mean, just lowering tax rates on the rich?
And now have we ended up with a weak company where the shareholder class has taken all of their money out and kind of left us to see what happens? The Fed's balance sheet has increased 787% since May 2008. Think about that. It's up ninefold in the last 13 years. The Fed has been buying a whole lot of bonds and other assets in order to inject trillions of dollars into the economy.
I think this is just such bullshit.
It's just as certain seeds can't germinate unless there's a fire.
You know why I'm here in my nice home and why I'm financially secure?
I'm here because the latest or the last great crisis or economic crisis, the Great Recession, we only spent $700 billion to
prop up the banks. And we let other asset classes and we let the rest of the economy fall to some
sort of natural level. We let it take a shit kicking. And what did that do? It gave me the
opportunity as I was coming into my prime income earning years, the opportunity to buy Amazon
at $160 a share; it's now at $3,200. It gave me the opportunity to buy Apple at $12 a share. It
gave me the opportunity to buy a place in New York at $800 a square foot instead of where it
is now $2,500 a square foot. So all you're doing by propping up assets, by artificially reflating,
inflating, maintaining their asset value, is saying to the already wealthy, we want you to stay rich at the expense of younger and future generations. Six trillion dollars, 10 times the
bailout of the banks. And what has that done? We've pumped a ton of stimulus into the economy.
Yes, some of it will get to the middle and lower income classes, but the majority of it hasn't
gone to individuals, it's gone to organizations. This is the mother of all LBOs, but the worst type of LBO. It hasn't imposed any sort of
discipline. Do young families, where one in five households with kids is food insecure, need more discipline? Do they need to tighten their belts? It just feels as if we've
decided, okay, we're going to take financial engineering and apply it to a society. And as
Ken Cukier says in our interview, government is supposed to think long-term. They're supposed to be the adults in
the room that say, okay, we're going to make these multi-generational investments. And it
seems as if the short-termism, this desire by the people who have weaponized government, by the way,
billionaires, check this out, billionaires on average speak to their respective senator
once a month. What
happens in a presidential election? Everyone starts traveling to Florida. You can bet my
senior senator, Marco Rubio, is going to spend a lot more time on planes making his pilgrimages to
Iowa than actually doing anything, I don't know, called public policy or anything like that.
And what happens when the first two states in the presidential primary are older and
whiter than the rest of the nation, which they both are, which they both are?
We end up with policies and framing, if you will, or starting off the presidential election
cycle with a bunch of policies that basically kiss the jiggly, white, veiny asses of America.
That was an ugly image.
That was an ugly image.
You want to know how America would be much different? If the first presidential primary was in Mississippi,
or even in California, that is a better representation of our nation. And it
speaks more to a multicultural society and the challenges that we face with a younger population,
or maybe what it is they want from America. We have totally
LBO'd this nation. This lack of investment is a disaster waiting to happen physically,
metaphorically. The U.S. Government Accountability Office says that nearly one in four bridges are
deficient. Jeez, America, one in four bridges deficient? The Environmental Protection Agency
estimates that over the next decade, drinking water, wastewater, and irrigation systems will require $632 billion in investment.
The American Society of Civil Engineers estimates there's a total infrastructure gap of more
than $2 trillion needed by 2025 that, if left unaddressed, would result in almost $4
trillion of GDP loss.
That's the thing.
This just doesn't make any goddamn sense.
If you start a company, you invest in technology, you invest in infrastructure, you invest in real estate,
you make forward-leaning investments in new hires that don't pay off immediately because you know
you'll get a return or you hope you'll get a return. And we're just not making any of those
investments in our country. According to the Global Competitiveness Report from the World
Economic Forum, in 2019, the United States ranked 13th in the world in infrastructure quality,
dropping from fifth place in 2002.
True story, I was at the World Economic Forum in 2002.
I was invited in 2000 and invited back twice as a global pioneer, what they called a global leader of tomorrow.
And guess what?
I haven't been invited back since.
You know what it's like to peak at 34 years old?
That's what it was for me.
I peaked at 34.
Athletically, I peaked at the age of 12.
And then I grew about four inches and I stayed the same weight. And I looked like Ichabod Crane
with bad acne trying to play football and basketball and baseball in high school. And
you know what? It didn't work. It didn't work that well. The schoolroom became Ichabod's empire,
over which with lordly dignity, he held absolute sway. That's the dog. Anyways, data from the Global Infrastructure Hub
projects that by 2040, China will have invested 5.1% of its GDP on infrastructure, while the U.S.
will have spent just 1.5%. Let me get this straight. China is investing at triple the rate of us,
and they're adding tens of millions of people to the middle class as we take millions out.
What a shocker. They're
doing better than us. Infrastructure is an investment in the country and an investment
in the people who live and work in that country because that's who really needs roads and public
transportation. You want a transfer of wealth from the poor to the rich? Don't build the J-line.
Don't build highways. Who needs public transportation to get to work?
The poor and the middle class, they don't have helicopters.
They don't have drivers and cars.
They don't have alternatives. They don't have childcare.
They need regular transportation.
We are outsourcing costs as we have done for the last 30 years to lower and middle
class households.
Infrastructure goes beyond improving roads, airports, and railroads. It means providing broadband access to the 21 million Americans
who still don't have it. It means investing in childcare and schools. The U.S. underfunds
school infrastructure by $46 billion each year. It means investing in a social safety net so that
people can afford to take a risk, so that people who don't have trust funds and prep school contacts
can move their families for better jobs or start their own businesses, knowing that one failure won't
put them on the street. Everyone is kind of one bad decision away from living in their car. What
happens when you have to stay somewhere because you have health insurance from one employer?
You don't move. Your human capital is inert, and you don't move to where better opportunities are,
and the economy doesn't grow. Pew Research found that in April 2020, less than a quarter of low-income adults said they had enough funds saved to cover their expenses for three months in case of an emergency, compared with 48% of middle-income and 75% of upper-income adults. It's time to stop the bad, poorly thought-out LBO of America and get back to responsible stewardship
of the Commonwealth. Let's make the investments this asset deserves. And that's more than just
investing in little Johnny and little Susie, your own kids, right? Don't we have the opportunity,
don't we have the foresight to have more empathy, to have more gratitude, to see what a wonderful
company this is, what a wonderful organization America Inc. is.
It requires, it warrants, and it deserves the same type of investment our forefathers made
such that we could enjoy these incredible fruits. The LBO of America, it isn't working.
Stay with us. We'll be right back for our conversation with Kenneth Cukier.
Hey, it's Scott Galloway. And on our podcast, Pivot, we are bringing you a special series
about the basics of artificial intelligence. We're answering all your questions. What should
you use it for? What tools are right for you? And what privacy issues should you ultimately watch out for? And to help us out,
we are joined by Kylie Robison, the senior AI reporter for The Verge, to give you a primer on
how to integrate AI into your life. So tune into AI Basics, How and When to Use AI, a special series
from Pivot sponsored by AWS, wherever you get your podcasts.
Welcome back.
Here's our conversation with Kenneth Cukier, a senior editor at The Economist and the co-author of Framers: Human Advantage in an Age of Technology and Turmoil.
Ken, where does this podcast find you?
It finds me in London, just a little bit outside of London in a leafy suburb called Putney.
Great. And the ubiquitous phrase "think outside the box" completely misses the mark as a cognitive
method, which was something that stuck out
in what you'd sent us. Can you expand on what you meant there?
Yeah, sure. Thinking outside the box, the idea of brainstorming is fundamentally flawed,
because it presumes that you can basically take yourself out of a mental model that you're in at
any one time, out of a frame, and put yourself into a different one, and that the
aspiration is to be free of these constraints. And that's actually wrong, because you can never actually
leave the frame that you're in. However, what you can hope to do is to be conscious of a given frame
that you're in, a mental model, and to then deliberately change it. But the idea of simply
abandoning it is wrong. And one of the reasons why is that it is the constraints,
it is the box, where the magic is.
Because if we focus our box, our constraints, our frame in one domain, it emphasizes certain
features that we want to sort of plow into and develop at the exclusion of others.
So all of life is about living a sort of a model in our minds of the exterior world that
we're seeing.
And if we're deliberate about how we frame the world, how we actually exclude certain
things and focus on other things, we become really effective.
But if we try to give it up altogether, that's not going to work.
So give us an example of how you become a better framer.
Sure.
Well, I mean, first maybe start with what is a frame?
You know, what are the components of a frame? And there's three elements to it. The first one is
causality. It's a template of understanding how the world works because of cause and effect,
and that makes the world predictable. We can leave our mark in it and we can anticipate certain
things. The next is the idea of counterfactuals. We're constantly asking
ourselves 'what if' questions: What if I do that? What if this happens? How would I respond to that?
And all of life is a series of iterations and permutations that's happening mentally about how
we respond to the world, if you will, by dreaming, or if you will, posing these counterfactuals.
And then what do we do with them? We impose constraints on them.
We basically relax certain constraints and we tighten others. And by doing so,
we have this mental model to say, I'm going to actually focus on this element at the exclusion of these other elements. And by doing so, we're now in a frame. It might be in terms of the
economy. If we look at the Amazon rainforest,
one person might say, hey, this is valuable timber that's important to cut down and sell today so that people can have economic enfranchisement. At the same time, you'll have someone look at
the exact same situation with the exact same data and information, but with a different frame and
say, no, no, no, this is the lungs of the planet, and it's important for long-term viability that we preserve it and don't cut it down.
By understanding the frames that we have, we can now understand how we can interact
with each other, have a conversation, and try to come up with better solutions to our
problems.
And when you look at how different organizations or entities or governments frame their relationship
with each other, do you see any obvious flaws where if we think about reframing the way we approach each other or governments or conflicts, any obvious applications or examples of kind of incorrect framing, so to speak?
Yeah, I mean, you kind of see it everywhere. I do think there is an important and substantive worldview debate
between a country that has at its core
this idea of decentralization and freedom,
and another one which is uneasy with that
and actually feels more comfortable
with a degree of centralization.
And of course, it's taking measures that actually restrict human freedom. However, if we frame the issue as if
these two great powers are on a collision course, that is what we are going to get. We're going to
get a collision. If we frame it differently and say, how can we accommodate each other and still
work on the issues that we need to work on to advance human
progress, respond to global challenges, while also accepting the fact that we have differences
that we're probably not going to settle, but we're just going to have to simply manage.
We're going to be better off as a civilization. Do you think some of that is because we
incorrectly framed Russia? I love the idea of looking at the opportunities
for partnership versus the threats
that the two competing powers present.
But do you think some of that
is that we got the framing wrong on Russia
and kind of we're talking about
hitting the reset button
and how we would work together
and kind of come to this recognition
that they're our adversary?
Do you think that,
well, let me put it this way.
How important is it that you learn
from past framing versus informing kind of, I think of it as like you're trying to create context or
a framework for how you make decisions. How important is it to either incorporate history
or, like Yoda said, to forget what you know?
Well, in all ways, I'm never going to contradict Yoda. He is the paragon of the wise sage. It is true that history is essential, because it helps us develop our repertoire of frames. History doesn't repeat, and it doesn't always rhyme.
But what it does do is it gives us a greater appreciation for the diversity of mental models that can exist.
And we can take other mental models that we've had and we can adapt them.
So I do think there are two problems.
It's not just that we've misframed the change from Soviet Union to Russia, and therefore we are beguiled when it
comes to looking at China. I think another misframing is that there's a vainglory in the West:
that because we, quote unquote, crushed the Soviet Union, and in 1989 we had these revolutions that
vaunted the Western worldview and the world became unipolar, we think that we can still apply
that stamp at every domain and every step in the 21st century. And we can't. I think what we need
to aspire to is a form of pluralism. And the problem that we have is the simplicity or
simplisticness of trying to see the world in a very binary way, particularly with,
you know, the West, or more the Americans, being the heroes, whereas every other country
is sort of either a vassal or somehow diminished. I think there's a lot of
reasons to be incredibly proud of America and its legacy. But I think that we need, as a strong
country, it needs to be a self-confident country, and it needs to interact in a much more wise and
deft way in world affairs. So take us, let's go back a year. So the United Kingdom,
one of the highest per capita mortality rates in the world, the U.S. with 5% of the population and 22% of
the world's infections and deaths. If you were to go back and apply this notion of how to reframe
the problem such that we might have less bad outcomes, what do you think we could have done,
both nations could have done better? Such a good question. It's one that
historians are going to sort of be reading and weeping, as well as the students of history. The most important, of course, is the question of how
we framed the situation at the very earliest days. Because there, it was so pronounced.
In the US and in Britain, the framing was of the seasonal flu. And therefore,
the range of options that we saw was basically mitigation,
because what do you do when it's the seasonal flu, but sort of let it burn through the population,
hope for herd immunity, and sort of mitigate it with basic steps that you can take that are
sort of effective, but not too draconian. On the other hand, countries in Asia, or in the Asia
Pacific, like New Zealand and Australia, did better. And the reason why is
because they framed the COVID outbreak differently. Their mental model wasn't the seasonal flu,
it was SARS. And because they approached it with this template of SARS, it meant that you threw
everything at the problem immediately. To overreact, quote unquote, is to actually react, and to react responsibly,
because it's far better to, quote unquote, overreact and eliminate COVID than to simply
underreact and have to live with it in an endemic way. So the strategy of mitigation versus
elimination is pronounced. And New Zealand, choosing elimination, did incredibly well. In June,
they declared themselves COVID-free when Britain was having deaths galore, because they didn't
take it seriously enough, because they framed the problem incorrectly.
And as we look back and we think, okay, given what we know a year in, how would you frame,
or if we wanted to implement a series of measures such that the next time a
pandemic or any crisis washes up on our shores, we're better prepared or we can reduce the damage,
how do you think we should be framing our approach to disasters or reframing, if you will?
So, we need to change our mindset and frame it, probably, on a basic cost-of-prevention outlook.
We're spending trillions of dollars to support the economy and to bring people and particularly
an underclass of people who've been more severely harmed by the economic crisis that COVID has
brought in its wake. We spend trillions of dollars to respond to this when we could have spent
actually literally millions, and maybe if you want to be generous, billions of dollars
to have avoided the problem at the outset. And some countries have done that. Finland and Norway
in particular have a much better way of approaching these problems by thinking about
having an emergency response program that is viable for any form of disaster. Is it a flood
or is it a pandemic? You're still going to need certain forms of medical equipment, for example,
and a self-sufficiency of supply of certain goods. If we approached it with this sort of
fungible disaster-response mindset, we would invest before the crisis. And by spending
hundreds of millions or several billion dollars, we can avoid calamities that cost trillions.
That's where we need to go as a society. It's really hard to do that because it requires
a politics that looks different than the politics that we have today. And it's a huge irony as well.
Let me dwell on this for a second. The purpose of government, according to David Hume, was that because
individuals have a lifespan of 70 years, to be generous, less in his day, we're caught up in the
short term.
We're beguiled by our passions, that we live for the moment.
But institutions, in particular the state, can be the entity that will have the long-term
perspective and can invest over several generations.
And you can imagine that the state in the late 1600s, early 1700s, and the 1800s, in the glory days of liberalism, not the American liberal left center, but liberalism as in liberty and the role of the individual, that the state was the entity that was going to be responsible for intergenerational investment.
And that's gone now. In fact, it's just the absolute opposite where we have individuals who may have a mindset
of several decades, and we have politicians that have a mindset of every four years.
That doesn't work anymore. The state has sort of lost its whole purpose of long-term investment.
Yeah, it feels as if, though, I mean, so you've correctly, I think, correctly diagnosed the
problem. But if we're going to say, all right, let's move to the solutions, we're short-term thinkers versus long-term thinkers.
You know, if that's our new frame, if our frame's objective is to create more long-term thinking, more, you know, multi-generational investment in thinking, what would we do to get us to think more long-term?
I'm not so certain institutional fixes are going to work.
And if you actually look at the state of politics in many countries now, many democracies, but in particular America, if you extended the length of office of the executive, you're probably not going to get a better politics.
In fact, you might get a worse one.
Imagine Trump for eight years as opposed to four.
There's more breakage that can happen.
In fact, the four-year limit feels right intuitively.
Certainly, we've levered our entire politics around that sort of quadrennial state of affairs.
Of course, it has to come from the public, right?
But we've lost also the sense of a political culture that is informed and that shares some
degree of common base of knowledge and of facts.
What I'm saying is it feels very cliche, I'm going to be honest,
because we've been having this national conversation for several years now.
And of course, since the election and January 6th,
it's become front and center that we can't miss it.
And I think a lot of people simply play the role of ostriches and put their head in the sands because they think now that we have Joe Biden in the presidency, that the problem
is going away.
Trump is not on Twitter, so he's less visible.
But the same sort of poison in the body politic is still coursing through.
It's not changing.
If anything, it's growing because Trump is going to
be on the ticket for every single Republican voting in the midterms in a way that he was on
the ticket in the presidential election. And so, of course, the Republicans are all going to come
out in fury trying to bring back Trumpism without Trump, whereas the Democrats might not. So I don't think that changing the institutions and institutional structure is actually going
to give us an ability to think long-term.
Coming up after the break.
The point is that we don't need the human being as the locus of all intelligence.
The artificial intelligence is going to teach us
new things that we didn't know before.
Stay with us.
What software do you use at work?
The answer to that question is probably more complicated
than you want it to be.
The average U.S. company deploys more than 100 apps,
and ideas about the work we do
can be radically changed by the tools we use to do it. So what is enterprise software anyway?
What is productivity software? How will AI affect both? And how are these tools changing the way we
use our computers to make stuff, communicate, and plan for the future? In this three-part special
series, Decoder is surveying the IT landscape
presented by AWS. Check it out wherever you get your podcasts.
Support for this podcast comes from Klaviyo. You know that feeling when your favorite brand
really gets you? Deliver that feeling to your customers every time. Klaviyo turns your customer
data into real-time connections across AI-powered email, SMS, and more, making every moment count. Learn more at klaviyo.com slash BFCM.
You've spent a lot of time thinking about AI, and you did a TED Talk where you said
we need to make sure that technology and AI is our servant, and that we're not technology or AI's servant.
Give us a sense of what you think the threat of AI is.
There's a view that because it's hard to understand, and also because it'll exceed our ability to understand how it reaches decisions, it's just a matter of time before it becomes invulnerable. And I disagree with that. People don't understand that artificial
intelligence is a tool in the same way that the abacus is a tool or that the computer is a tool.
It will exceed our abilities. That shouldn't surprise us nor alarm us because technologies
axiomatically exceed our abilities. The steam shovel can do better than a human being using its brawn.
But we need to understand it, that we are still in control of the AI. Now, it is possible that we could give up more to it than it deserves, and we shouldn't do that. But if you scratch the
surface and look at the successes of AI, the biggest successes they've had, whether it's AlphaGo
or crushing human players in Dota 2, Defense of the Ancients, which is a video
game that requires strategy and planning, that behind it, human beings have had to set the
parameters in order for the AI to function and to win and to beat the other humans. Because human
beings can frame. A frame is a mental representation, an abstraction. It's the ability to take the world and condense the
information, to learn from it, and therefore to use it in a fungible way in lots of other circumstances.
It becomes efficient. It becomes understandable, coherent, and actionable. But AI doesn't do any
of that because AI doesn't have any sense of causality. It has no ability to conjure up
counterfactuals unless you program that in, and it has no way to identify meaningful constraints.
Now, these are the very features that define a frame, or a mental model, which human beings use as second nature.
Now, we can get better at that and do it better, but we do this and the artificial intelligence doesn't.
And because of that, we are still in the driver's seat, and if we don't stay in the driver's seat, it's our fault. Yeah, I'm with you on this. I've always said,
at the end of the day, AI is not dangerous. It's the people programming it. What are we telling
it to do? I've never bought that it turns sentient and starts making its own decisions.
When you look at AI
and you think about the economy, who do you think, where do you think the biggest opportunities are
for AI? What industries, what types of employment? And which industries and types of employment are
most at risk with the introduction or the advancement of AI? So clearly healthcare is a
place where AI is going to be revolutionary. And the reason why is that we're already collecting data in healthcare, but we're not really using it, for a variety of reasons: we don't have a culture of using it, we're not very good at it, privacy law might restrict us from using it effectively, and it might be expensive to do. But once we collect more of that data, because it becomes accessible to do it,
and we run algorithms through it, and we have findings that over time can be validated and
verified as legitimate, we can start making improvements that we never could imagine.
So let's just take a second and look at a given hospital with a tricky case of a particular
patient. And a lot of patient cases are tricky. You never have a clear answer one way or another. But you sort of know in your heart of hearts that there's probably a thousand
people in the United States, and certainly more globally, who are facing the exact same
problem, or roughly the same problem. But are we learning from the experience of what
worked and what didn't in those other cases? The point about AI is something called feature
extraction. It makes
inferences, and it does feature extraction based on the quote-unquote raw data,
all of the data it can possibly take in, so that the human being need not specify what is relevant.
Those correlations of relevancy sort of bubble up from the data that you have. Now, you do need a
model. You can't just be blind
about it. You can't simply torture the data until it confesses to anything. But the point is,
don't leave the human being as the locus of all intelligence. The artificial intelligence is
going to teach us new things that we didn't know before. So I want to stress that in the book.
It's not that we're naysayers of artificial intelligence. In fact, we're great friends of AI.
We think it is going to be incredibly transformational.
But the point here is that it's going to exceed human capacity to be the locus of information
and knowledge.
And therefore, we're going to learn new things.
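As an illustration of the feature-extraction point above, here is a minimal sketch with entirely invented data and feature names: score every raw column against the outcome and let the strongest signal surface, without a human declaring in advance which column matters. This is a toy stand-in for what real systems do with far richer models.

```python
# Hypothetical sketch: "relevance bubbles up from the data". Instead of a human
# specifying which columns matter, we score every raw feature against the
# outcome. All data and feature names below are invented for illustration.
import random

random.seed(0)

def pearson(xs, ys):
    # Plain Pearson correlation, computed by hand (stdlib only).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Simulated "raw" records: only vessel_width actually drives the outcome.
n = 500
records = {
    "vessel_width": [random.gauss(0, 1) for _ in range(n)],
    "scan_noise":   [random.gauss(0, 1) for _ in range(n)],
    "pixel_mean":   [random.gauss(0, 1) for _ in range(n)],
}
outcome = [v + random.gauss(0, 0.3) for v in records["vessel_width"]]

# No one tells the algorithm which feature is relevant; the score reveals it.
ranked = sorted(records, key=lambda f: abs(pearson(records[f], outcome)),
                reverse=True)
print(ranked[0])  # the genuinely predictive feature should top the ranking
```

A real pipeline would use learned representations rather than raw correlations, but the principle is the same: relevance is discovered, not declared.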
And healthcare is begging for reform so that we can improve outcomes, lower costs, and
most importantly,
increase access, increase access to people who've otherwise been denied high quality healthcare.
That's such an easy win. But then when you go beyond it, just look at any domain in society that can collect data. We're going to be able to improve how it works with artificial intelligence.
So education is also begging for it as well, particularly in an age of COVID, remote learning, and the hybrid office. So many of our interactions used to have an informational component but were done in physical space, so that information was never captured and rendered into a data format. Now it has a data component; it becomes datafied. Suddenly, when we have the data,
we can apply this tool of machine learning to it and learn new things that we didn't know before.
So again, education is the second one. But as for the risks, to be honest, I find it hard to come up
with the problems of applying AI because I see so many opportunities to reform what we do by applying this veneer of empirical
evidence at scale with pattern matching and statistics to become better at doing what we're
already doing. The risk, of course, is going to be AI in the hands of people who don't share our
values, people who are authoritarians who want to misuse it, whether it is to violate privacy for clickbait
or to manipulate elections or something much more heinous and worse, which is to use facial
recognition to have an at-scale control of the population. That's where I want to focus my fire,
at the dangers of the misuse of it by humans, not because there's something inherently wrong with the AI itself.
So give us just a couple of basic applications of how AI will improve our lives. We talk a lot about AI, but give me just a couple of basic benefits we are absorbing or registering because
of AI. Sure. We're at the outset of it. So let me give you one that helps us mentally see what's in play. There was research several years ago that looked at retinal scans and
wanted to identify whether artificial intelligence could identify someone's likelihood to have a heart
disease. And so they posed lots of questions to a data set for which they already knew the
answers, so they could start pattern-matching. And by applying this algorithm
to retinal scans, it could predict if the person was a smoker or not. It could predict if the
person's age to within three to five years, which is pretty impressive: through the eye
scan alone, a regular scan, you can predict if someone's a smoker or roughly how old they are,
but it could also predict the sex of the person, if it was a male or a female.
And that stunned the researchers. And it was pretty good at the prediction. It was around 95% in both cases. So scientists had absolutely no idea that it was even feasible to predict sex based
on the retinal scan.
So what was it that the AI could see in the blood vessels of the retina that gave
this sort of gender-based component to it?
We still don't know.
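The method described here, posing questions to a data set where the answers are already known and then pattern-matching new cases against it, can be sketched as follows. Everything below is synthetic and invented for illustration; it stands in for no real study data, features, or model.

```python
# Hedged sketch of supervised pattern matching: train on cases with known
# answers, then classify fresh cases by similarity. The "scan features" are
# invented numbers, not real medical data.
import random

random.seed(1)

def make_scan(is_smoker):
    # Pretend smoking shifts one measured scan feature slightly.
    base = 1.0 if is_smoker else 0.0
    return [base + random.gauss(0, 0.2), random.gauss(0, 0.2)]

# "Training" scans where the answer (smoker or not) is already known.
labeled = [(make_scan(s), s) for s in [True, False] * 200]

def predict(scan):
    # 1-nearest-neighbour: copy the label of the most similar known scan.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda item: dist(item[0], scan))[1]

# Evaluate on fresh scans the model has never seen.
tests = [(make_scan(s), s) for s in [True, False] * 50]
accuracy = sum(predict(x) == y for x, y in tests) / len(tests)
print(round(accuracy, 2))  # hold-out accuracy; high, given the clean separation
```

Real systems use deep networks rather than nearest neighbours, but the workflow is the same: known answers in, pattern matcher out, then validate on cases it hasn't seen.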
But now, if we can identify something as foundational as the sex of a person based on
their retina scan, you can imagine, well, what would be the best predictor that someone's going to have
heart disease? What is the best predictor that someone is going to have dementia,
not in three years, but in seven years? And what sort of interventions can we take now,
particularly if everyone has Fitbits and Apple Watches, so that we can actually look, at
population scale, at this sort of observational data, if you will, data in the wild, what
people are doing, and see whether exercise works and how well it works. What is the minimum level of exercise to forestall dementia?
Can you over-exercise? How much does diet play into this as well? These are the sorts
of things that we are going to be able to learn that we just don't know today. So all of the world
has this sort of dark matter of information, information all around us. And when we apply this tool to it, we're going to
be able to unlock new learnings that we never could know before. The big challenge is going to
be epistemological, because we believe that the human mind is the locus of all that is in the world
in the same way that the sun is the center of our solar system.
We define everything around the ability of the human mind to understand. And in fact,
my co-authors and I have just written a book that vaunts the human mind as the center because
framing is so foundational. But even as the human mind remains foundational, this tool of AI is going to create new elements of knowledge that are latent and not yet exposed, and that will come forward to us.
We are going to have to interpret them, and we're going to have to accept them on faith, because we may not be able to understand how the algorithm came to the conclusion, in this case, that a person is male
or female based on the retina scan. So the whole project of the sciences from the 1600s onwards has
been to put man's empirical knowledge at the center of the universe and cast out authority
based on faith. And now in the 21st century, we're going to have to base our knowledge
back on faith again, but faith born of the algorithm and of the machine.
This is going to be very humbling to us, but I think it's the direction that we're going to have
to go. When you look at companies, organizations, are there any that stand out where you think they
clearly understand AI, their decisions, their investments, and their
products reflect that they get this stuff? Yeah. I mean, the reason why the most important
companies in the world are AI companies or data companies is not simply because they're
based on ad tech, which happens to be a big sector, but also because they have the data
and they understand the importance of using data.
I mean, the whole history of corporate America in the last 20 years can be written through the lens of machine learning.
The companies that succeeded deliberately found a way to get a data advantage and then
learned at scale: taking some information, collecting more,
and beating their rivals with that information.
So an example is just simply take Yahoo and Google and search queries that are mistyped.
Yahoo would recognize that a search query had a typo in it and would auto-correct it and
give the user the search they were looking for. That sounds really nice.
That's pretty good. But that's just simply a basic auto-correction. Google realized that the typos
themselves were very valuable, that they could actually turn the typos into a product: they
could create the world's best spell checker, they could do it in every language, and they could
identify new words that moved into a given language, in this case English, simply by
watching what was being typed in. So words like Obamacare and iPad started coming out. And of
course, there were other variants of spellings that people
would click on, and they would identify whether the person left the search window or actually
re-searched a variant of what they searched before, to find out that it wasn't right.
By creating a machine learning algorithm that tracked all of this, they became an organization
that just realized we're in the business of collecting data and learning at scale. And that's why they can apply all of their features from reading Gmails to having an
incredible way to help people write letters and compose essays, with an autocomplete
for the word or the sentence you're trying to write.
That wasn't a technology, it was a mindset.
And so every company is going to have to have that machine learning mindset or it's not going to last long.
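The Yahoo-versus-Google story above, mining typo-and-correction pairs from query logs to build a spell checker, can be sketched in miniature. The log entries below are invented; a real system would mine billions of sessions and handle unseen typos with edit-distance or language models.

```python
# Toy sketch of learning a spell checker from a query log. The log is made up:
# each pair is (what the user typed, the variant they re-searched or clicked).
from collections import Counter

query_log = [
    ("obamcare", "obamacare"),
    ("obamacre", "obamacare"),
    ("ipdad", "ipad"),
    ("obamacare", "obamacare"),
    ("ipad", "ipad"),
]

# Count which corrected form each observed spelling most often resolves to.
corrections = {}
for typo, fixed in query_log:
    corrections.setdefault(typo, Counter())[fixed] += 1

def correct(query):
    # Return the most commonly observed correction, or the query unchanged.
    seen = corrections.get(query)
    return seen.most_common(1)[0][0] if seen else query

print(correct("obamcare"))  # -> obamacare
```

The point of the anecdote survives the simplification: the "training data" costs nothing because users generate it just by searching, which is exactly the data advantage described above.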
So take us to either your 25-year-old self or a 25-year-old who's just out of school and buys into an AI future.
How do young people best position themselves to take advantage of this brave new world of AI?
When I was in college, the class I did worst at
was statistics. I basically, I in effect failed, but I had a teacher who took mercy on me. And I
will even say that on the last day of class, I asked him to summarize sort of the theory of what
he was trying to teach us or what he did teach us. And he laughed at me,
and the whole class laughed. And it took me, frankly, decades to realize why I was so bad
at statistics. And the reason why is because I took it so seriously. So I saw it as no irony whatsoever that 10 years ago I basically wrote a book about statistics called Big Data,
and that I had to sort of plow that domain to fully understand what was going on, in terms of how you can take the hurly-burly of the world, render it into a format to make it predictable,
understandable, and repeatable, and learn things that you couldn't do before.
So my message to my 25-year-old self, or message now and to extrapolate to 25-year-olds today,
is if you find something that is sort of grating at you intellectually,
pursue it.
And if everyone laughs at you, I mean, maybe you're onto something.
Maybe you're going to do something original and novel with it.
Traditional statisticians read the book Big Data, and they criticized me and my co-author, Victor,
because they said, what about spurious correlations? And we looked at them and said,
are you mad? What about self-driving cars, right? That was the rebuttal
to it. You do something different. It's not the same thing whatsoever. But it requires you to
accept the
fact that you're going to get laughed at because what you're doing is something original.
So if you're getting laughed at, you're doing something right. I like that. Kenneth Cukier is a
senior editor at The Economist and host of its weekly podcast on technology. He is also an
associate fellow researching artificial intelligence at Oxford University's Saïd Business School. His latest book, Framers: Human Advantage in an Age of Technology and Turmoil,
is out now. He joins us from his home in London, England. Ken, thanks for your time and stay safe.
Thank you, Scott.

Algebra of Happiness. So I am feeling down today. I'm, like, kind of, I don't know, sad, angry. I don't know what the fuck is going on with me, but I'm not feeling my usual cheery, sunshiny, Easter-parade self. And so the first thing I do is I acknowledge that. And I'll even say, if someone
calls me, someone close to me, and says, how are you? I say, I don't know, for some reason I'm feeling
a little bit down. I find that helps. Just saying it makes me feel better or starts the road to
repair. And sometimes the road to repair is only six hours or a couple of days. But I acknowledge
that there are peaks and valleys and I acknowledge when I'm in a valley. And the first thing I try
and figure out is, is there something I'm upset about that I'm not dealing with that is
bothering me? And I try and process it. And as I've done today and said, okay, I can't really
point to anything, I have no legitimate reason to be down or unhappy. I recognize that it's
probably something physical or chemical, and I start trying to address it. And the way I address it, again, is with this algorithm of SCAFA.
And the first S is sweat.
I try and work out really hard.
And it's really hard when you're depressed.
It takes a special amount of effort.
But I find that if I can sweat, it sort of resets me.
Clean.
I try and eat clean.
I try and avoid trans fats.
A, abstinence. I try and lay off the
alcohol and the marijuana for a few days. I don't drink a ton. I don't smoke a lot of marijuana,
but I try not to have any when I'm feeling down. I find that just taking that shit out of my
system helps me. And I've said before, I'm not a Puritan. F, family. I find being around close friends and
family grounds me and gives me some context and just generally makes me feel better. And then
affection. We're mammals; being around others helps, whether it's your dog, being affectionate with
your dog. My favorite thing is to watch TV at night with my kids. And I will try and do this
tonight if they'll put up with it, but I try and get all of us to flop on a couch.
And, it's probably the nicest thing in my life, my kids will instinctively
throw a leg, one of their legs, over mine. But I find that as mammals, proximity and closeness
and affection to other mammals is very restorative for me. But one, acknowledge that you're down,
acknowledge that it's okay.
Figure out if there's something you haven't processed that you need to address or at least acknowledge. And if there isn't, realize, okay, you need to reset, and develop your own process for
resetting. Life is short. We're going to be dead soon. I believe that. I can't get over how fast
it's all going. We deserve to be happy,
but deserving to be happy isn't enough. You need to be action-oriented, and you need to deal with
this and snap out of it. And just as today, I am confident I will snap out of it, but I don't take
it for granted. I'm going to address it. Our producers are Caroline Shagrin and Drew Burrows.
Claire Miller is our assistant producer.
If you like what you heard, please follow, download, and subscribe. Thank you for listening
to The Prop G Show from the Vox Media Podcast Network. We'll catch you next week on Monday
and Thursday.