The Prof G Pod with Scott Galloway - No Mercy / No Malice: Guardrails
Episode Date: April 22, 2023. As read by George Hahn. https://www.profgalloway.com/guardrails/
Transcript
I'm Scott Galloway, and this is No Mercy, No Malice. Tech has embarked on its next adventure,
artificial intelligence. This could mean rapid innovation, increased productivity, and another tsunami of societal harms.
Now is our moment to protect ourselves from ourselves, not with a pause, but with government
oversight. Guardrails, as read by George Hahn. Last week, we learned about a significant leak of classified material
that exposed key details of the Ukrainian war effort and America's security apparatus.
The perpetrator?
Not an extremist group or criminal network,
but someone we're more familiar with.
A young man who spends
too much time online.
Technology and our inability to regulate it have again made things worse, much worse.
The leaker's preferred platform was Discord, which has been used to share child pornography
and coordinate
the white supremacist riots in Charlottesville. Discord is not alone. Recently, Instagram assisted
the suicide of a young British girl by serving her images of nooses and razor blades. Facebook
fueled a mob riot in Myanmar. The list goes on. Teen depression, viral misinformation, widespread distrust of national
institutions, polarization, algorithms optimized for rage and radicalization. We've discussed this
before. What's startling about this latest scandal is the banality. A reckless young man trying to
gain social status online accidentally shapes world events.
Steve Jobs called computers bicycles for the mind because they amplify our capabilities dramatically.
It's a nice image.
A more apt analogy for many young men, however,
is a Kawasaki H2 Mach 4,
a motorcycle that possessed far too much power and had a rear-biased weight balance
that made it an accident waiting to happen. Too much power, not enough balance, an injury is
inevitable. Tech has become a bullet bike, a reliable source of disturbing accidents and organ donations.
People have always been stupid,
and everyone is stupid some of the time.
Note, per Professor Carlo Cipolla's definition, the stupid are people whose actions are destructive
to themselves and to others.
One of society's functions
is to prevent a tragedy of the commons
by building safeguards to protect us from our own stupidity.
We usually call this regulation, a word Reagan and Thatcher made synonymous with bureaucrats and red tape.
Yes, air traffic control delays and the DMV are super annoying, but not crashing into another A350 on approach to Heathrow,
not suffocating as your throat swells from an allergic reaction,
and being able to access the funds in your FTX account are all really awesome.
60 years ago, the U.S. registered more than 50,000 car crash deaths annually. So we created the National Highway Traffic Safety Administration
and charged it with making the roads safer. If you're under 60, this may be hard to imagine,
but not that long ago, many Americans saw seatbelts as an assault on their personal liberty.
Some cut them out of their cars. Democracy bested stupidity, however,
and between 1966 and 2021,
vehicular death rates in America were halved.
The NHTSA is one of the many boring federal agencies critical to a healthy society.
Before the Food and Drug Administration,
the sale and distribution of food and pharmaceuticals was a free-for-all.
The Federal Aviation Administration is the reason your chances of dying in a plane crash are 1 in 3.37 billion.
Next time someone tells you they don't trust government, ask them if they trust cars, food, painkillers, buildings, or airplanes. The limits on innovation imposed by these agencies,
their red tape, are real and worth it. Millions of us are alive and prospering because we had
the foresight and discipline to blunt the sharp end of industrial progress with the guardrails of democratic oversight. Until you open your phone.
The greatest anomaly in the history of U.S. regulation
is the place more and more of us spend most of our time.
Online.
A lethal cocktail of complexity, lobbying,
cultural worship of tech leaders,
and anti-government libertarian screed
has rendered tech immune to the basic standards of safety and protection.
Lethal is the correct term.
Tech comes into the purview of other agencies on occasion,
though it's always bitching that it's special
and shouldn't be restrained by the olds at the FTC and DOJ.
Our government should, I think, just try to get out of the way and not impede progress.
And the industry's blocking efforts have been effective.
There is no FDA or SEC for tech,
which is America's largest sector by market capitalization and growing.
The justification for this was the go-to new economy get-out-of-jail-free word,
innovation. When tech was nascent and niche, we were smart to err on the side of growth
versus regulation. That movie ended a decade ago. Phones aren't toys for early adopters, and search and social have moved beyond campuses.
We don't require a license to drive a Big Wheel, but if a Big Wheel went 750 miles per hour,
we might restrict access or at least demand airbags. The go-to narrative for these platforms
after every new disaster is the delusion of complexity
and that the internet is just another communications technology
like the phone or a letter,
a reflection of society,
and it would be near impossible to put guardrails in place.
Also, sprinkle in some blather regarding free speech.
This is all bullshit.
AI can write a Seinfeld script in the voice of Shakespeare.
It can scan platforms for words and images associated with risk.
Platforms already do it for signals you might be shopping for Crocs.
But what's the incentive for a platform to make the investment in any editorial review?
Other than decency and regard for others, that is?
We've done a good job stupid-proofing the offline world, but that's increasingly not where we live.
Especially younger people who now spend roughly the same amount of time online as they do sleeping.
They, we, pass the majority of our waking hours riding in a vehicle with no airbags,
licenses, or traffic lights.
Plus, there are millions of autonomous vehicles on the road controlled by unknown actors,
and they're prone to running over pedestrians.
Tech is embarking on its next big adventure, artificial intelligence, which likely means rapid innovation, increased productivity, and another tsunami of unforeseen societal
harms.
Predicting how AI will tear the fabric of civilization is the new bingo.
Humanoid phishing scams that access bank accounts,
AI-generated camera footage and headlines
that make the truth increasingly opaque,
rogue AI gods determined to eradicate humanity.
Experts agree all of this is possible.
What's telling is the technologists'
collective reaction to their own creation.
For the first time, they want to slow down.
A few weeks ago, they wrote a petition calling on AI labs to immediately pause
all training of the most powerful AI models for six months.
Hundreds of tech leaders signed the letter. The CEO of OpenAI,
Sam Altman, who started the hype spiral with ChatGPT, says he's scared of his company's own
algorithms. One AI expert said the six-month moratorium isn't harsh enough. Instead, he says, shut it all down. What undermined the credibility of the letter
was one signatory, Elon Musk, who's asked others to pause as he fast-tracks his own AI programs.
As usual, he's full of shit. We should grab this opportunity with both hands,
specifically both hands on the wheel.
Not a pause, which in my view is a bad idea.
China, Russia, and North Korea won't pause.
If the guy who just disappeared my blue check hadn't been kicked out of OpenAI and controlled it instead,
do you think he'd be advocating for a pause?
Better idea. The 78 podcasts that garner more downloads than the Prof G pod should suspend their programming.
You know, just to get our arms around this new and potentially dangerous podcast medium.
But we do need to seize this moment, likely brief, when some tech leaders have remembered the virtues of government oversight. We need a serious, sustained, and centralized
effort at the federal level, perhaps a new cabinet-level agency, to take the lead in regulating.
We can call it AI because pretty soon AI will be everywhere in tech. There have been efforts at
comprehensive technology regulation we can pick up and carry across the finish line. For example, last year, Senator
Michael Bennet proposed the Digital Platform Commission Act, which would create a federal
body to, quote, provide reasonable oversight and regulation of digital platforms, unquote.
In other words, exactly what we're talking about. But it's still stuck at the introduced stage. As with any political issue,
it needs public support. What won't work is fake regulation, when the government issues broad,
vague statements about what companies should generally do. That's what Biden did with crypto,
and he's doing it again with AI, specifically his Blueprint for an AI Bill of Rights,
which is filled with truisms, platitudes, and no laws.
Similarly, NIST published its AI Risk Management Framework.
Again, not laws.
Earlier this month, a tech executive was tragically stabbed and died.
Outspoken members of the tech industry
immediately speculated the killer was
a psychotic homeless person.
A few days later,
an acquaintance of the victim,
also a tech entrepreneur,
was arrested and charged.
Note,
the homeless are more likely to be victims of crime than perpetrators.
The above is another variation on a story told repeatedly across an innovation economy
where we have incorrectly conflated wealth and innovation with character.
A growing vein of the tech community, venture catastrophists, deploy weapons of mass distraction and render our discourse more coarse,
making it less likely we come together
and address issues including homelessness and crime.
Our failure to regulate this sector,
as we have done with every other sector,
is stupid.
Life is so rich.