The Prof G Pod with Scott Galloway - No Mercy / No Malice: Techno-Narcissism
Episode Date: June 17, 2023. As read by George Hahn. https://www.profgalloway.com/techno-narcissism/
Transcript
I'm Scott Galloway, and this is No Mercy, No Malice. The hype around AI is a shouting match
between techno-solutionists and techno-catastrophists. In fact, both are flavors of the same bullshit techno-narcissism.
Techno-narcissism, as read by George Hahn.
I'm at Founders Forum in the Cotswolds, which they assure me is somewhere outside of London.
There are a lot of Teslas and recycled mason jars as we're making the world a better place.
As at any gathering of the tech elite in 2023, the content could best be described as AI and the Seven Dwarfs.
The youthful vision toggles between inspiring moments
and bouts of techno-narcissism.
It's understandable.
If you tell a 30-something male he is Jesus Christ,
he's inclined to believe you.
The tech innovator class has an Achilles tendon
that runs from their heels to their necks.
They believe their press.
Making a dent in the universe is so aughts. Today, membership in the Soho House of Tech requires you to birth the leverage point
that will alter the universe. Jack Dorsey brought low-cost credit card transactions to millions of
small vendors. But he's still not our personal Jesus, so he renamed
his company Block and pushed into crypto because Bitcoin would bring world peace.
It may sound a little bit ridiculous, but you fix that foundational level
and everything above it improves. My hope is definitely peace.
Side note, if anybody knows Jack's brand of edibles, WhatsApp me.
Elon Musk made a great electric car,
then a better rocket,
and recently appointed himself Noah
to shepherd humanity to an interplanetary future.
When techno-narcissism meets technology
that is genuinely disruptive,
versus crypto or a headset, the hype cycle makes
the jump to light speed. Late last year, OpenAI's ChatGPT reached 1 million users in five days.
Six months later, Congress asked its CEO if his product was going to destroy humanity. In contrast, it took Facebook 10 months to get to a million users,
and it was 14 years before the CEO was hauled before Congress for damaging humanity.
The claims are extreme, positive and negative. Solutionists pen think pieces on why AI will save the world, while catastrophists warn us of the risks of
extinction. These are two covers of the same song. Techno-narcissism. It's exhausting.
But AI is a huge breakthrough, and the stakes are genuinely high. It's notable today that many of the outspoken prophets of AI doom
are the same people who birthed AI. My worst fears are that we cause significant,
we the field, the technology, the industry, cause significant harm to the world. If this technology
goes wrong, it can go quite wrong. Specifically, taking up all the oxygen in the AI conversation with dystopian visions of sentient AI eliminating humanity isn't helpful.
It only serves the interests of the developers of non-sentient AI in several ways.
At the simplest level, it gets attention.
If you are a semi-famous computer engineer who aspires to be more famous, nothing beats telling every reporter
in earshot, quote, I've invented something that could destroy the world, unquote. Partisans complain
about the media's left or right bias, but the real bias is towards spectacle. If it bleeds, it leads,
and nothing bleeds like the end of the world with a tortured genius at the center of the plot.
AI fear-mongering is also a business strategy for the incumbents
who'd like the government to suppress nascent competition.
OpenAI CEO Sam Altman told Congress we need an entire new federal agency to scrutinize AI models
and said he'd be happy to help them define what kinds of companies and
products should get licenses, i.e., compete with OpenAI. Quote, licensing regimes are the death
of competition in most places they operate, unquote, says antitrust scholar and former
White House senior advisor Tim Wu.
Total gangster.
Similar to cheap capital and regulatory capture,
catastrophism is an attempt to commit infanticide on emerging competition.
Granted, we should not ignore the dangers of AI,
but the real risks are the existing incentives and amorality in tech and our ongoing inability to regulate it.
The techno-catastrophists want to create a narrative that the shit coming down the pike is not the result of their actions, but the inevitable cost of progress.
Just as the devil's trick was convincing us he didn't exist, the technologist's sleight of hand is to absolve himself of guilt for the
damage the next generation of tech leaders will levy on society. This has been social media's
go-to for years, obfuscating their complicity in teen depression or misinformation behind a facade
of technical complexity. It's effective. Dr. Frankenstein, before losing control of his monster,
should have warned, quote, I'm worried Frank is unstable and unstoppable, unquote. Then, when his monster
began tossing children into lakes, he could shrug his shoulders and claim, I told you so.
To be clear, the monster is in fact a non-domesticated animal. It's unpredictable. The techno-solutionists
promising the end of poverty and more avocado toast, if we would just get out of their way,
are not to be trusted either. AI sentience is a stretch, but there are plausible paths to horrific
outcomes. Even dumb computer viruses have caused billions of dollars in damage.
My first thought after witnessing all the hate manifest around a light beer
was that we'll likely see AI-generated deepfakes of woke commercials produced by brands
for other brands to inspire boycotts by extremists. As Sacha Baron Cohen said, quote, democracy is dependent on shared truths
and autocracy on shared lies, unquote. AI, like the products of many of our big tech firms,
could widen and illuminate the path to autocracy. On a more pedestrian level,
we're going to witness a tsunami of increasingly sophisticated scams.
The first AI-driven externality of tectonic proportions will be the disinformation apocalypse
leading up to the 2024 presidential election and in other NATO country elections,
as the war in Ukraine continues.
Vladimir Putin's grip strength is weakening.
His historic miscalculation in Ukraine
has left him just one get-out-of-jail-free card,
Trump's re-election.
The former president has made it clear
he would force Ukraine to bend the knee to Putin
if he gets back in the White House.
Do you want Ukraine to win this war?
I don't think in terms of winning and losing.
I think in terms of getting it.
Putin has generative AI at his disposal.
Expect his Fancy Bear operation
to compile lists of every pro-Biden voice on the internet
and undermine their credibility
with smears and whisper campaigns.
Also, a barrage of deepfake videos of Biden falling down, mumbling, and generally looking
like someone who's, wait for it, going to be 86 the final year of his term. Side note,
President Biden will go down as one of the great presidents, and it's ridiculous he's running again. Yes, I'm ageist.
So is biology. But I digress. Anyway, expect information space chaos.
There's more. AI has shortcomings and limitations that are causing human harm
and economic costs already. Both the catastrophists and the solutionists want you to focus on sentient
killer robots because the actual AI in your phone and at your bank has shortcomings they'd rather
not be blamed for. AI systems are already responsible for a Keystone Kops record of
blunders. Starting with the cops themselves, the deployment of AI to direct
policing and incarceration has been shown to perpetuate and deepen racial and economic
inequities. Other examples seem lighthearted and then insidious. For example, a commercially
available resume screening AI places the greatest weight on two candidate
characteristics, the name Jared and whether they played high school lacrosse. These systems are
nearly impenetrable. It required an audit to suss out the model's obsession with Jared.
Who knows what names or hobbies the AI was putting into the "no" pile.
Amazon scientists spent years trying to develop a custom AI resume screener but abandoned it when they couldn't engineer out the system's inherent bias toward male applicants,
an artifact of the male-dominated data set the system had been trained on, Amazon's own employee base.
AI driving directions plot courses into lakes, and systems for assessing skin cancer risk
return false positives when there's a ruler in the photo.
Generative AI systems, like ChatGPT or Midjourney, have problems too. Ask an image-generating AI for a picture of
an architect, and you'll almost certainly get a white man. Ask for social worker, and the systems
are likely to produce a woman of color. Plus, they make stuff up. Two lawyers recently had to go
before a federal judge and apologize for submitting a brief with fake
citations they'd unwittingly included after asking ChatGPT to find cases. It's easy to dismiss such
an error as a lazy or foolish mistake, and it was, but we are all sometimes lazy and/or foolish,
and technology is supposed to improve our quality of life versus nudge us into
professional suicide. Over the long term, productivity enhancements create jobs. In 1760,
when Richard Arkwright invented cotton spinning machinery, there were 2,700 weavers in England and another 5,200 using spinning wheels.
7,900 total.
27 years later, 320,000 total workers had jobs in the industry, a 4,400% increase.
However, the path to job creation is job destruction, sometimes societally disruptive job destruction.
Those 2,700 hand weavers found themselves out of work and possessing an expired skill set.
A world of automated trucks and warehouses is a better world, except for truck drivers and warehouse workers.
Automation has already eliminated millions of manufacturing jobs, and it's projected to get rid of hundreds of millions
of other jobs by the end of the decade. If we don't want to add fuel to the fire of demagoguery
and deaths of despair, we need to create safety nets and opportunities for the people whose jobs are displaced by AI.
The previous sentence seems obvious, and yet...
My greatest fear about AI, however, is that it is social media 2.0,
meaning it accelerates our echo chamber partisanship and further segregates us from one another.
AI life coaches, friends, and girlfriends
are all in the works. Disclosure: we're working on an AI Prof G, which means it will likely start
raining frogs soon. The humanization of technology walks hand in hand with the dehumanization of humanity. Less talking, less touching, less mating.
Then affix to our faces a one-pound hunk of glass, steel, and semiconductor chips,
and you have crossed the chasm to a devolution in the species.
Maybe this is supposed to happen,
because we're getting too powerful and other species are punching back.
Shit, I don't know.
In sum: AI is here and generating real value and promise.
It's imperfect and hard to get right.
We don't know how to get AI systems to do exactly what we want them to do,
and in many cases we don't understand how they do what they're doing.
Bad actors are ready, willing, and able to leverage AI
to speedball their criminal conduct,
and many projects with the best intentions will backfire.
AI, similar to other innovations, will have a dark side.
What's true of kids is true of tech.
Bigger kids, bigger problems. And this kid at two
years old is seven feet tall. The good news, I believe, is that we can do this. We haven't had
a nuclear detonation against a human target in nearly 80 years, not because the fear of nuclear holocaust created a moral panic, but because we did the work.
World leaders reached across geographic and societal divides to shape agreements that
defused tensions and risks when nukes and bioweapons became feasible, in no small part
thanks to huge public pressure organized and pushed forward by activists. The Strategic Arms Reduction
Treaty that de-risked U.S.-Russian conflict throughout the 90s wasn't just an agreement,
but an investment that cost half a billion dollars per year. All but four of the world's
nations have signed the Treaty on the Non-Proliferation of Nuclear Weapons.
Blinding laser weapons, though devastatingly effective, are banned. So are mustard gas,
bacteriological weapons, and cluster bombs. Our species' superpower, cooperation,
isn't focused solely on weapons. In the 1970s, the world spent $300 million ($1.1 billion today)
to successfully eradicate smallpox, which killed millions over thousands of years.
Across domains, we come together to solve hard problems when we have the requisite vision and will. Technology changes our lives,
but typically not in the ways we anticipate. The future isn't going to look like Terminator or the
Jetsons. Real outcomes are never the extremes of the futurists. They are messy, multifaceted,
and by the time they work their way through the digestive tract of society, more boring than anticipated.
And our efforts to get the best from technology
and reduce its harms are not clean or simple.
We aren't going to wish a regulatory body into existence,
nor should we trust seemingly earnest tech leaders to get this one right.
I believe that the threat of AI comes not from the AI itself,
but from amoral tech executives and befuddled elected leaders who are unable to create incentives
and deterrents that further the well-being of society. We don't need an AI pause.
We need better business models and more perp walks.
Life is so rich. I just don't get it. Just wish someone could do the research on it.
Can we figure this out?