The Prof G Pod with Scott Galloway - The Risks and Opportunities of an AI Future — with Eric Schmidt
Episode Date: November 21, 2024. Eric Schmidt, a technologist, entrepreneur, philanthropist, and Google’s former CEO, joins Scott to discuss the dangers and opportunities AI presents and his latest book, Genesis: Artificial Intelligence, Hope, and the Human Spirit. Follow Eric, @ericschmidt. Scott opens with his thoughts on Netflix’s bet on live sports. Algebra of happiness: don’t let perfect be the enemy of good. Subscribe to No Mercy / No Malice. Buy "The Algebra of Wealth," out now. Follow the podcast across socials @profgpod: Instagram Threads X Reddit. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for the show comes from ServiceNow,
the AI platform for business transformation.
You've heard the big hype around AI.
The truth is AI is only as powerful as the platform it's built into.
ServiceNow is a platform that puts AI to work for people across your business,
removing friction and frustration for your employees,
supercharging productivity for your developers,
providing intelligent tools for your service agents to make customers happier,
all built into a single platform you can use right now.
That's why the world works with ServiceNow.
Visit servicenow.com slash ai4people to learn more.
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform offers all the automation, integration,
and reporting tools that get your marketing running seamlessly, all backed by their expert
live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow! Go to constantcontact.ca
and start your free trial today. Go to constantcontact.ca for your free trial.
Constantcontact.ca. Thumbtack presents the ins and outs of caring for your home. Out.
Procrastination, putting it off, kicking the can down the road.
In.
Plans and guides that make it easy to get home projects done.
Out.
Carpet in the bathroom, like why?
In.
Knowing what to do, when to do it, and who to hire.
Start caring for your home with confidence.
Download Thumbtack today.
Episode 326.
326 is the area code serving southwestern Ohio.
In 1926, the first SATs took place.
Latest exam for me, a prostate exam.
My doctor told me it's perfectly normal
to become aroused and even ejaculate. That being said, I still wish he hadn't.
Welcome to the 326th episode of The Prof G Pod.
In today's episode, we speak with Eric Schmidt, a technologist, entrepreneur, and philanthropist.
He also previously served as Google's chief executive officer.
I don't know if you've heard of him.
It's a tech company.
You can actually go there and type in your own name and you see what the world thinks
of you.
Later, he was the executive chairman and technical advisor.
We discussed with Eric the dangers and opportunities
AI presents in his latest book,
Genesis: Artificial Intelligence, Hope, and the Human Spirit.
Well, that sounds like a show on the Hallmark channel in hell.
Okay, what's happening?
Off to Vegas this week, I've been at Summit.
It's beautiful here. It's lovely.
I love kind of the Western Baja sky or light.
I think I may retire here.
When I retire in Mexico, I think the food's amazing.
The people are incredibly cool.
The service is cool.
I, no joke, think that Mexico is the best vacation
deal in the world.
Anyways, where am I headed to next?
I go to Vegas tonight.
Now that you asked.
Doing a talk there tomorrow.
Vegas during the week, not so much fun.
Not so much fun. There's definitely kind of an unusual vibe there.
Then I go to LA for a couple of days.
Daddy will be at the Beverly Hills Hotel.
Swing by, say hi.
I'll be the guy alone at the bar.
I love eating alone at the Polo Lounge.
How do you know if I like you?
I stare at your shoes, not mine.
Anyways, then I'm back to Vegas for Formula One,
which I am so excited about.
I love it, the city comes alive.
And then, just 'cause I know you like to keep
up on my travels, I head to Sao Paulo,
where the nicest hotel in the world is right now.
I think the Rosewood in Sao Paulo.
I think Rosewood is actually the best brand
in high end hospitality.
Isn't that good to know?
A lot of insight here, a lot of insight.
All right, let's move on some news
in the media and entertainment space.
Netflix said that a record 60 million households worldwide
tuned in to watch the boxing match
between Jake Paul and Mike Tyson.
I'm sorry, I'm sorry.
Just a quick announcement, this is very exciting.
I just struck a deal as I told you I'm going to LA
and you're the first to know that Hulu has announced
it'll be live streaming a fight between me and Jimmy Carter.
By the way, if you get paid $20 million,
I don't know what Tyson was paid, I think it was $20 million,
you have an obligation to either kick the shit out of someone
or have the shit kicked out of you.
This kind of jab, snort through your nose,
and just stay away from the guy.
I don't buy it.
I want my $12 back Netflix.
Despite the disappointment in the fight,
Jake Paul did in fact defeat Mike Tyson in eight rounds.
Can he even call it a win?
Can you?
The fight was shown at over 6,000 bars
and restaurants across the US,
breaking the record for the biggest
commercial distribution in the sport.
But the record numbers came with a few hiccups.
Viewers reported various tech issues,
including slow loading times, pixelated screens,
and a malfunctioning earpiece from one of the commentators.
That's a weird one, a malfunctioning earpiece
from one of the commentators.
Data from Down Detector revealed that user reported outages
peaked at more than 95,000 around 11 p.m. Eastern time.
Frustrated fans flooded social media criticizing Netflix
for the poor streaming quality. Netflix CTO Elizabeth Stone, soon to be probably former CTO,
wrote to employees,
I'm sure many of you have seen the chatter in the press and the social media about quality issues.
We don't want to dismiss the poor experience of some members and know we have room for improvement,
but still consider this event a huge success. No, it was a pretty big fuck up for you, Ms. Stone.
Specifically, Netflix tries to garner a valuation, not of a media company, but of a tech company,
which means you're actually supposed to be pretty good at this shit.
And didn't you know exactly how many people were going to show up for this?
Didn't you kind of weren't you able to sort of estimate pretty accurately just exactly how many people would be dialing at exactly the same time and then
test the shit out of this. You're beginning to smell a little bit like Twitter in a presidential
announcement. That just is unforgivable for a fucking tech company. Come on guys, this is what
you do. This isn't the first time Netflix has fumbled with a live event. Last year their Love
is Blind reunion show faced a similar situation, leaving viewers waiting over an hour before a
recorded version was made available. And this brings up a bigger question. With Netflix's push into live sports,
including NFL games scheduled for Christmas and a major deal with WWE starting next year,
can they deliver the kind of quality viewers expect from broadcast cable? It looks like
what's old is new again, and that we have taken for granted kind of the production quality
of live TV and how difficult it is.
That's one thing I'll say about Morning Joe or The View or even I think Fox does a great
job.
They're great at delivering TV live.
I think CNN also does a fantastic job.
Netflix isn't alone.
Other streaming platforms including Comcast's Peacock have also been getting into live sports.
Earlier this year, Peacock's January playoff game between the Kansas City Chiefs and Miami Dolphins
drew 23 million viewers,
which broke records for internet usage in the US.
Get this, the game was responsible
for 30% of internet traffic that night.
That's like Squid Game.
This is all proof that the market for live sports
on streaming platforms is a massive opportunity
and companies are willing to spend big.
According to the Wall Street Journal,
Netflix is paying around $75 million per NFL game this season.
They also recently signed a 10-year, $5 billion deal with WWE. It used to be that live
sports were sort of the last
walls to be breached in broadcast cable, like, we'll always have sports.
And then the people with the cheapest capital and the deepest pockets showed up and said, hey, we'll take Thursday night football.
Hey, we'll take the Logan Paul or Jake Paul.
Is it Jake or Logan?
I don't know, I can't remember.
Anyways, I mean, literally broadcast cable television
right now.
It's like Mark Twain said about going bankrupt.
It was slowly then suddenly.
We're in the suddenly stage of the decline
of linear ad supported TV.
It has gotten really bad in the last few months.
I had breakfast with the former CEO of CNN,
who's a lovely guy.
And he said that CNN's viewership versus the last election
has been cut in half.
Can you imagine trying to explain to advertisers,
our viewership is off 50% since the last time
we were talking about election advertising.
My theory is that the unnatural, unearned torrent of cash that local news stations have
been earning for the last 20 years is about to go away.
And what are we talking about?
Scott, tell us more.
What are you saying?
Effectively, a lot of smart companies, including I think Hearst and others, have gone around
and bought up these local news stations.
And why?
Because they're dying, aren't they?
Well, yeah, they are.
But old people watch local news,
mostly to get the weather and local sports,
and because that Jerry Dumpy is just so likable
and that hot little number.
They always have some old guy with good hair
and broad shoulders who makes you feel comfortable
and safe and some hot woman in her 30s
who's still waiting for the call up to do daytime TV.
And everybody, old people love this.
And old people vote.
Now what's happening?
Okay, so the numbers are in.
A million people watch the best shows on MSNBC.
The average age is 70.
It's mostly white and it's mostly women.
So a 70-year-old white woman.
Podcasts, 34-year-old male.
Think about that.
Also the zeitgeist is different.
People go to cable news to sanctify their religion or specifically their politics.
People come to podcasts to learn.
The zeitgeist is different.
We try to present our guests in a more aspirational light.
We're not looking for a gotcha moment to go live on TikTok.
It's not, say a turn of phrase, then dead to done in six minutes
because we gotta break for an opioid-induced constipation commercial or Life Alert.
I'm falling!
We don't do that shit.
We sell ZipRecruiter and Athletic Greens and
Fundrise and different kinds of modern, cool stuff like that.
Also, Vuori. I'm wearing Vuori shorts right now.
By the way, I fucking love this athleisure.
Oh my God. I look so good in this shit.
Actually, no one really looks good.
No man looks good in athleisure,
but I look less bad than I look in most athleisure. I love the fabrics. I'm not even getting paid to say this,
wearing it right now. So let's talk a little bit about Netflix. It's up 81% year to date. True
story, I bought Netflix at 10 bucks a share. That's the good news. The bad news is I sold it
at eight bucks a share, and now it's at $840. Daddy would be live broadcasting from his own
fucking Gulfstream right
now had I not been such a shithead. I want to find a time machine, get in it, go back, find me, kill me
and then kill myself. Jesus. God. Anyways, Amazon is up 34%. I do own that stock. Disney is up 22%.
My stock pick for 2024. Warner Brothers Discovery down 22%. Jesus Christ, Malone, you fired the wrong guy.
By the way, Zaslav, the guy who
oversaw the destruction of about 60 or 70% of
shareholder value since he talked a bunch of
stupid people into why this merger made any fucking
sense and took on way too much debt.
He's managed to pull out about a third of a
billion dollars despite destroying a massive
amount of shareholder value.
Paramount is down 28% year to date. Comcast is down 2.3%. Comcast I think is arguably the best run
of the cable folks, obviously not including Netflix, which is just a gangster
run company. So Netflix has about 250 million users, Amazon Prime Video has
200 million. Is that fair though? Cause you just automatically get it with Prime. Disney Plus 150 million, Max 95.
I love Max.
I sold, we sold our series into Netflix,
our big tech drama.
I think most of us would have liked HBO
just cause HBO has a certain culture
that feeds kind of the water cooler.
You're talking about something in streaming media.
You're usually talking about something on Max, but
Netflix has also got bigger reach.
These are good problems. Paramount Plus
is at 63
million, Hulu 49,
Peacock 28,
ESPN Plus at 26, Apple
TV at 25,
and then Starz, remember them,
at 16 million. Effectively, these guys have
cheaper capital and they're absolutely killing linear TV. Does that mean it's a
bad business? No. Someone's gonna come in and roll up all of these assets,
buy the old Viacom assets, CNN, Turner, all the Disney shit, ABC. They're
gonna roll them all up, milk them for their cash flow, cut costs faster than the revenue declines.
These businesses, while they seem to be going out of business pretty fast right
now, it'll probably level out.
AOL is still a small but great business.
I think it does something like $400 or $500 million in EBITDA because there's still
a lot of people that depend on AOL in rural areas for their dial-up
internet, and you know, some people will kind of hang in there, if you will, but this is going to be a distress play.
They're going to stop this consensual hallucination
that these things are going to ever grow again.
They'll consolidate them to start cutting costs.
One of the best investments I ever made, Yellow Pages.
We bought a Yellow Pages company for about two
or two and a half times cash flow.
Yeah, it's going down by 8% to 12% a year.
But if you cut costs faster than that, going and buying the other shitty yellow pages companies
and then consolidating the staff, which is Latin for layoff people, and you can cut costs
faster than 8 percent, you have an increase in EBITDA every year.
I still find across the entire asset class, and this is where I'll wrap up, in general,
a basic axiom that I have found holds water through the test of time around investing is the sexier it is, the lower the ROI.
And if you look at asset classes in terms of their sex appeal, venture investing or angel investing is fun, right?
It's for what I call FIPS, formerly important people that want to stay involved and want to help entrepreneurs.
But be clear, the only return you get is psychic. It is a terrible asset class, even if something works.
And at that stage, it is very hard to predict.
You're talking about one in seven maybe do well.
And even that one company, likely you'll get washed out
along the way at a little bump, and the VCs have showed up,
and they'll wash you out.
It is a very tough asset class to make money.
Venture does better, but the majority of the returns
are not only crowded to a small number of brands
that get all the deal flow, but a small number of partners within that small
number of firms.
And then you have growth.
I think that's better.
Then you have IPOs.
Unfortunately, IPOs that winter is really ugly right now.
The IPO market's basically been in a pretty big deep freeze for several years now.
People keep thinking it's going to come back.
We got excited about Reddit, but not a lot followed. And then you go into public company stocks. It's impossible
to pick stocks by an index fund. Then you get into distressed or mature companies, dividend
plays. And then what I love is distressed. I find that distressed is the best asset class.
Why? What business has the greatest likelihood of succeeding? Anything in senior care. Why?
Again, see above. The less sexy it is.
People don't want to be around old people. It reminds them of death. They're generally pretty
boring. I know, I'm supposed to say they just have so much experience and wisdom. Sometimes. And people
want to avoid them. People want to hang out with hot young people, right? And people want to hang
out with hot young companies. Specifically, capital wants to hang out with hot young growing companies and they don't like the way that old companies smell, so to speak.
So they avoid them.
And that's why there's a great return on investment in distress.
What's the learning here?
Sex appeal and ROI are inversely correlated.
So yeah, if you want to invest in a members club downtown for the fashion industry and
the music industry, have at it, but keep in mind ROI and sex appeal inversely correlated.
We'll be right back for our conversation with Eric Schmidt.
Support for Prof G comes from Mint Mobile.
You're probably paying too much for your cell phone plan.
It's one of those budgetary line items that always looks pretty ugly,
and it might feel like there's nothing you can do about it. That's where Mint
Mobile has something to say. Mint Mobile's latest deal might challenge your
idea of what a phone plan costs. If you make the switch now, you'll pay just
$15 a month when you purchase a new three-month phone plan. All Mint Mobile
plans come with high-speed data and unlimited talk and text delivered on the
nation's largest 5G network. You can even keep your phone, your contacts, and your
number. It doesn't get much easier than that. To get this new customer offer and your new three-month premium
wireless plan for just $15 a month, you can go to mintmobile.com slash ProfG. That's
mintmobile.com slash ProfG. You can cut your wireless bill to $15 a month at mintmobile.com
slash ProfG. $45 upfront payment required equivalent to $15 a month. New customers on
a first three-month plan only. Speeds slower above 40 gigabytes on unlimited plan. Additional taxes, fees, and restrictions apply.
See Mint Mobile for details.
This episode is brought to you by Secret. Secret deodorant gives you 72 hours of clinically proven
odor protection, free of aluminum,
parabens, dyes, talc, and baking soda.
It's made with pH balancing minerals and crafted with skin conditioning oils.
So whether you're going for a run or just running late, do what life throws your way
and smell like you didn't.
Find Secret at your nearest Walmart or Shoppers Drug Mart today.
Support for the show comes from 1Password. How do you make a password that's strong enough so no one will guess it and
impossible to forget? And now how can you do it for over 100
different sites and make it so everyone in your company can do
the exact same thing without ever needing to reset them? It's
not impossible. 1Password makes it simple.
OnePassword combines industry-leading security
with award-winning design to bring private, secure,
and user-friendly password management to everyone.
OnePassword makes strong security easy for your people
and gives you the visibility you need to take action
when you need to.
A single data breach can cost millions of dollars,
while OnePassword secures every sign-in
to save you time and money.
And it lets you securely switch between iPhone, Android, Mac and PC.
All you have to remember is the one strong account password that protects everything
else.
Your logins, your credit cards, secure notes or the office wifi password.
Right now, our listeners get a free 2-week trial at OnePassword.com slash Prof for your
growing business.
That's two weeks free at 1Password.com slash prof for your growing business.
Don't let security slow your business down. Go to
1Password.com slash prof.
Welcome back. Here's our conversation with Eric Schmidt, a technologist, entrepreneur, philanthropist,
and Google's former CEO.
Eric, where does this podcast find you?
I'm in Boston.
I'm at Harvard and giving a speech to students later today.
Oh, nice.
So let's bust right into it.
You have a new book out that you co-authored with the late Henry Kissinger
titled Genesis: Artificial Intelligence, Hope, and the Human Spirit.
What is it about this book or give us what you would call the pillars of
insight here around that'll help people understand the evolution of AI?
Well, the world is full of stories about what AI can do, and
we generally agree with those.
What we believe, however, is the world is not ready
for this. And there are so many examples, whether it's trust, military power, deception, economic
power, the effect on humans, the effect on children that are relatively poorly explored.
So the reader of this book doesn't need to understand AI,
but they need to be worried
that this stuff is going to be unmanaged.
Dr. Kissinger was very concerned
that the future should not
be left to people like myself.
He believed very strongly
that these tools are so
powerful in terms of their effect on human
society, that it was important
that the decisions be made
by more than just the tech people.
And the book is really a discussion about what happens
to the structure of organizations, the structure of jobs,
the structure of power,
and all the things that people worry about.
I personally believe that this will happen
much, much more quickly than societies are ready for,
including in the United States and China. It's happening very fast.
And what do you see as the real existential threats here? Is it that it becomes sentient?
Is it misinformation, income inequality, loneliness? What do you think are the
first and foremost biggest concerns you
have about this rapid evolution of AI? There are many things to worry about. Before we say the bad
things, let me remind you enormous improvements in drug capability for health care, solutions to
climate change, better vehicles, huge discoveries in science, greater productivity for kind of everyone,
a universal doctor, a universal educator, all of these things are coming.
And those are fantastic.
Along with that, because these are very powerful, especially in the hands of
an evil person, and we know evil exists, these systems can be used to harm large numbers
of people. The most obvious one is
their use in biology. Can these systems at some point in the future generate biological pathogens
that could harm many, many, many, many humans? Today we're quite sure they can't, but there's a
lot of people who think that they will be able to unless we take some action. Those actions are
being worked on now.
What about cyber attacks?
You have a lone actor, a terrorist group, North Korea,
whomever, whatever your evil person or group is,
and they decide to take down the financial system
using a previously unknown attack vector,
so-called zero-day exploits.
So the systems are so powerful that we are quite concerned that in addition to democracies using
them for gains, dictators will use them to aggregate power and they'll be used in a harmful and military context.
So I'm freaked out about these AI girlfriends. I feel as if the biggest threat in the US right
now is loneliness that leads to extremism.
And I see these AI girlfriends and AI searches popping up.
And I see a lot of young men who have a lack of romantic or economic opportunities turning
to AI girlfriends and begin to sequester from real relationships.
And they become less likely to believe in climate change, more likely to engage in misogynistic
content, sequester from school, their parents, work. In sum, they become really shitty citizens.
And I think men, young men are having so much trouble that this low risk entry into these
faux relationships is just going to speedball loneliness and the externalities of loneliness.
Your thoughts? I completely agree. There's lots of evidence that there's now a problem with young men.
In many cases, the path to success for young men has been, shall we say, been made more difficult
because they're not as educated as the women are now. Remember, there are more women in college than
men and many of the traditional paths are no longer as available. And so they turn to the
online world for enjoyment and sustenance, but also because of the social media algorithms,
they find like-minded people who ultimately radicalize them either in a horrific way like
terrorism or in the kind of way that you're describing, where they're just maladjusted.
This is a good example of an unexpected problem of existing technology.
So now imagine that the AI girlfriend or boyfriend,
let's use AI girlfriend as an example, is perfect.
Perfect visually, perfect emotionally.
And the AI girlfriend in this case,
captures your mind as a man to the point where she, or whatever it is,
takes over the way you're thinking. You're obsessed with her.
That kind of obsession is possible, especially with people who are not fully formed.
Parents are going to have to be more involved for all the obvious reasons.
But at the end of the day, parents can only control what their sons and daughters are doing within reason. We've ended up, again, using
teenagers as an example, we have all sorts of rules about age of maturity, 16, 18, what have you,
21 in some cases. And yet you put a 12 or 13 year old in front of one of these things and they have
access to every evil as well as every good in the world, and they're not ready to take it. So I think the general question of, are you mature enough
to handle it, sort of the general version of your AI girlfriend example, is unresolved.
So I think people, most people would agree that the pace of AI is scary and that our institutions
and our ability to regulate are not keeping up with
the pace of evolution here. And we saw perfectly what happened with social around
this. What can be done? What's an example or a construct or framework that you
can point to where we get the good stuff, the drug discovery, the help with climate
change, but attempt to screen out or at least put in check
or put in some guardrails around the bad stuff.
What are you advocating for?
I think it starts with having an honest conversation
of where the problems come from.
So you have people who are absolutists on free speech,
which I happen to agree with,
but they confuse free speech of an
individual versus free speech for a computer. I am strongly in favor of free speech for every human.
I am not in favor of free speech for computers. And the algorithms are not necessarily optimizing
the best thing for humanity. So as a general point, specifically,
we're going to have to have some conversations
about what age are things appropriate?
And we're also going to have to change some of the laws,
for example, section 230, to allow for liability
in the worst possible cases.
So when someone is harmed from this technology,
we need to have a solution to prevent further harm.
Every new invention has
created harm. Think about cars, right? So cars used to hit everything and they were very unsafe.
Now cars are really quite safe, certainly by comparison to anything in history. So the history
of these inventions is that you allow for the greatness and you police the guardrails,
you put limits on what they can do. And it's an appropriate debate, but it's one that we have to
have now for this technology. I'm particularly concerned about the issue that you mentioned
earlier about the effect on the human psyche. Dr. Kissinger, who studied Kant, was very concerned, and we write in the book at some length,
about what happens when your worldview is taken over by a computer as opposed to your friends,
right? You're isolated, the computer is feeding you stuff, it's not optimized around human values,
good or bad. God knows what it's trying to do.
It's trying to make money or something.
That's not a good answer.
So I think most reasonable people would say,
okay, some sort of fossil fuels are a net good,
I would argue.
Pesticides are a net good,
but we have emission standards and an FDA.
Most people would, I think, loosely agree
or mostly agree that some sort of regulation that
keeps these things in check makes sense. Now, let's talk about big tech, which you were an
instrumental player in. You guys figured out a way, quite frankly, to overrun Washington with
lobbyists and avoid all reasonable regulation. Why are things going to be different now than
what they were in your industry when you were involved in it?
Well, President Trump has indicated that he is likely to repeal the
executive order that came out of President Biden, which was an attempt at this.
So I think a fair prediction is that for the next four years
there'll be very little regulation in this area as the president will be focused on other things. So what will
happen in those companies is if there is real harm, there's liability, there's lawsuits and
things. So the companies are not completely scot-free. Companies, remember, are economic
agents and they have lawyers whose jobs are to protect their intellectual property and their
goals. So it's going to take, I'm sorry to say, it's likely to
take some kind of a calamity to cause a change in regulation. And I remember when I was in
California when I was younger, California driver's licenses, the address on your driver's license
was public. And there was a horrific crime where a woman was followed to her home and then she was
murdered based on that information,
and then they changed the law. And my reaction was, didn't you foresee this?
Right? You put millions and millions of pieces of license information out to the public and you don't think that some idiot who's horrific is going to harm somebody?
So my frustration is not that it will occur, because I'm sure it will, but why did we not anticipate that as an example? We should
anticipate, make a list of the biggest harms. I'll give you another example.
These systems should not be allowed access to weapons. Very simple. You don't
want the AI deciding when to launch a missile. You want the human to be responsible.
And these kinds of sensible regulations
are not complicated to state.
Are you familiar with character AI?
I am.
Really, just a horrific incident
where a 14-year-old establishes a relationship
with an AI agent
that he thinks is a character from Game of Thrones. He's obviously unwell, although my understanding, from his mother, who's taken this on as an issue,
understandably,
is that he did not qualify as someone who was mentally ill.
He establishes this very deep relationship with obviously a very nuanced character,
and the net effect is he contemplates suicide and she invites him to do that.
And you know, the story does not end well.
And my view, Eric, is that if we're waiting for people's critical thinking to show up or for the
better angels of CEOs of companies that are there to make a profit,
that's what they're supposed to do. They're doing their job.
That we're just going to have tragedy after tragedy
after tragedy.
My sense is someone needs to go to jail.
And in order to do that, we need to pass laws showing
that if you're reckless with technology
and we can reverse engineer it to the death
of a 14 year old, that you are criminally liable.
But I don't see that happening.
So I would push back on the notion that people need to think more critically.
That would be lovely.
I don't see that happening.
I have no evidence that any CEO of a tech company is going to do anything but increase
the value of their shares, which I understand and is a key component of capitalism.
It feels like we need laws that either remove this liability shield.
I mean, does any of this change until someone shows
up in an orange jumpsuit?
I can tell you how we dealt with this at Google.
We had a rule that in the morning we would look at
things and if there was something that looked like
real harm, we would resolve it by noon and we would
make the necessary adjustments. The example that you gave
is horrific, but it's all too common and it's going to get worse for the following reason.
So now imagine you have a two-year-old and you have the equivalent of a bear that is the
two-year-old's best friend. And every year the bear gets smarter and the two-year-old gets smarter too, becomes three, four, five, and so forth.
That now 15-year-old's best friend will not be a boy or girl of the same age, it'll be a digital device.
And such people, highlighted in your terrible example, are highly suggestible. So either the people who are building the equivalent
of that bear 10 years from now
are gonna be smart enough to never suggest harm,
or they're gonna get regulated and criminalized.
Those are the choices.
The technology, I used to say
that the internet is really wonderful
but it's full of misinformation
and there's an off button for a reason, turn it off. I can't do that anymore.
The internet is so intertwined in our daily lives, all of us, every one of us, for the good and bad,
that we can't get out of the cesspool if we think it's a cesspool
and we can't make it better because it keeps coming at us.
The industry, to answer your question, the industry is optimized to maximize your attention and monetize it.
So that behavior is going to continue. The question is, how do you manage the extreme cases?
Anything involving personal harm of the nature that you're describing will be regulated one way or the other.
Yeah, at some point it's just the damage we incur until then, right? We've had 40
congressional hearings on child safety and social media, and we've had zero laws. In fairness to
that, there is a very, very extensive set of laws around child sexual abuse, which is obviously
horrific as well. And those laws are universally implemented and well adhered to. So we do have examples where
everyone agrees what the harm is. I think all of us would agree that a suicide of a teenager is
not okay. And so regulating the industry so it doesn't generate that message strikes me as
a no-brainer. The ones which will be much harder are where the system has essentially captured the emotions
of the person and is feeding them back to the person as opposed to making suggestions.
And we talk about this in the book.
When the system is shaping your thinking, you are being shaped by a computer, you're
not shaping it.
And because these systems are so powerful, we worry, and again, we talk
about this in the book, of the impact on the perception of truth and of society. Who am I?
What do I do? And ultimately, one of the risks here, if we don't get this under control, is that we
will be the dogs to the powerful AI, as opposed to us telling the AI what to do. A simple answer to the question of when: the industry believes that within five to
ten years, these systems will be so powerful that they might be able to do self-learning.
And this is a point where the system begins to have its own actions, its own volition.
It's called general intelligence, or AGI, and the arrival of AGI will need
to be regulated. We'll be right back.
We know that social media and a lot of these platforms and apps and time on phones is just not a good idea.
I'm curious what you think of my colleague Jonathan Haidt's argument, and that is, is there
any reason for anyone under the age of 14
to have a smartphone?
And is there any reason for anyone under the age of 16
to be on social media?
Should we age gate pornography, alcohol, the military?
Shouldn't we, specifically the device makers
and the operating systems, including your old firm,
shouldn't they get in the business of age gating?
They should. And indeed, Jonathan's work is incredible. He and I wrote an article together
two years ago, which called for a number of things in the area of regulating social media.
And we start with changing a law called COPPA from 13 to 16. And we are quite convinced that
using various techniques, we can determine the age of the person with a little bit of work.
And so people say, well, you can't implement it. Well, that doesn't mean you shouldn't try.
And so we believe that at least the pernicious effects of this technology on below 16 can be addressed.
When I think about all of this, to me, we want children to be able to grow up and grow up with humans as friends.
And I'm sure with the power of AI arrival that you're going to see a lot of regulation about child content.
What can a child below 16 see?
This does not answer the question of what do you do with the 20 year old, right? Who's also still being shaped.
And as we know, men develop a little bit later than women.
And so let's focus on the underdeveloped man who's having trouble in college or
what have you, what do we do with them?
And that question remains open.
In terms of the idea that the genie is out of the bottle here, we face a very real tension.
And that is, we want to regulate it, we want to put in guardrails. At the same time, we want to let our,
you know, our sprinters and our IP and our minds and our universities and our incredible for-profit machine,
we want to let it run, right?
And the fear is that if you regulate it too much, the Chinese or the Islamic Republic isn't quite as concerned
and gets ahead of us on this technology.
How do you balance that tension?
So there are quite a few people in the industry,
along with myself, who are working on this.
And the general idea is relatively light regulation looking for the extreme cases.
So the worst extreme events would be a biological attack, a cyber attack, something that harmed
a lot of people as opposed to a single individual, which is always a tragedy.
Any misuse of these in war, any of those kinds of things we worry a lot about.
There's a lot of questions here. One
of them is, do you think that if we had an AGI system that developed a way to kill all of the
soldiers from the opposition in one day that it would be used? And I think the answer from a
military general perspective would be yes. The next question is,
do you think that the North Koreans, for example, or the Chinese would obey the same rules about
when to apply that? The answer is no one believes that they would do it safely and carefully under
the way US law would require. US law has a requirement called person in the loop, or meaningful human control, that tries to keep these things from getting out of hand.
So what I actually think is
that we don't have a theory of deterrence with these new tools. We don't know
how to deal with the spread of them. And the simple example, and
sorry for the diversion for a sec, but there's closed source and open source.
Closed is like you can use it, but the software and the numbers are not available.
There are other systems called open source where everything is published.
China now has two of what appear to be the most powerful models ever made and they're completely open.
And obviously you and I are not in China and I don't
know why China made a decision to release them, but surely evil groups and so forth will start to
use those. Now maybe they don't speak Chinese or what have you, or maybe the Chinese just discount
the risk. But there's a real risk of proliferation of systems in the hands of terrorism. And
proliferation is not gonna occur
by misusing Microsoft or Google or what have you.
It's going to be by making their own servers in the dark web.
And an example, a worry that we all have
is exfiltration of the models.
I'll give an example, Google or Microsoft or OpenAI
spends $200 million or something
to build one of these models.
They're very powerful. And then
some evil actor manages to exfiltrate it out of those companies and put it on the dark web.
We have no theory of what to do when that occurs because we don't control the dark web. We don't
know how to detect it and so forth. In the book, we talk about this and say that eventually the network systems globally will have fairly sophisticated supervision systems that will watch for this, because it's another example of proliferation.
It's analogous to the spread of enriched uranium. If anyone tried to do that, there's an awful lot of monitoring systems that would say, you have to stop right now or we're going to shoot you.
So you make a really cogent argument for the kind of existential threat here,
the weaponization of AI by bad actors. And we have faced similar issues before. My understanding is
there are multilateral treaties around bio weapons or we have nuclear arms treaties.
So is this the point in time where people such as yourself and our defense infrastructure
should be thinking about or trying to figure out multilateral agreements? And again, the hard part
there is my understanding is it's very hard to monitor things like this. And should we have
something along the lines of Interpol that's basically policing this and then fighting fire
with fire, using AI to go out and find scenarios where things
look very ugly and move in with some sort of international force. It feels like a time for
some sort of multinational cooperation is upon us. Your thoughts?
We agree with you and in the book we specifically talk about this in a historical context of the
nuclear weapons regime which Dr. Kissinger,
as you know, invented largely. What's interesting is working with him, you realize how long it took
for the full solution to occur. America used the bomb in 1945; the Soviet Union demonstrated theirs
in 1949. So it was roughly a four-year gap, and then there was a real arms race.
From there, it took roughly 15 years for an agreement to come on limitations on these
things during which time we were busy making an enormous number of weapons, which ultimately were
a mistake, including these enormous bombs that were unnecessary. And so things got out of hand.
In our case, I think what you're saying is very important
that we start now, and here's where I would start.
I would start with a treaty that says,
we're not going to allow anyone
who's the signatory of this treaty
to have automatic weapons systems.
And by automatic weapons, I don't mean automated.
I mean ones that make the decision on their own. So an agreement that any use of AI of any kind in
a conflict sense has to be owned and authorized by a human being who is authorized to make that
decision. That would be a simple example. Another thing that you could do as part
of that is say that you have a duty to inform when you're testing one of these systems in
case it gets out of hand. Now, whether these treaties can be agreed to, I don't know. Remember
that it was the horror of nuclear war that got people to the table and it still took 15 years.
I don't want us to go through an analogous bad incident involving an evil actor in North Korea.
Again, I'm just using them as bad examples, or even Russia today, whom we obviously don't trust.
I don't want to run that experiment and have all that harm and then say,
hey, we should have foreseen this.
Well, my sense is when we are better at a technology,
we're not in a hurry for a multilateral treaty, right?
When we're under the impression that our nuclear scientists
are better than your nuclear scientists,
our Nazis are smarter than your Nazis kind of thing,
we don't want a multilateral treaty
because we see advantage.
And curious if you agree with this,
we have better AI than anyone else.
Does that get in the way of a treaty or should we be
doing this from a position of strength?
And also if there's a number two, and maybe you think
we're not the number one, but assuming you think
that the US is number one in this, who is the number two?
Who do you think poses the biggest threat?
Is it their technology or their intentions or both?
If you were to hear that one of these really awful things
took place, who would you think most likely
are the most likely actors behind it?
Is it a rogue state?
Is it a terrorist group?
Is it a nation state?
First place, I think that the short-term threats
are from rogue states and from terrorism.
Because as we know, there's plenty of groups
that seek harm against the elites in any country.
Today, the competitive environment is very clear: the U.S., with our partner the U.K., is in the lead. I'll give you an example.
This week, there were two libraries from China that were released, open source.
One is a problem solver that's very powerful. And another one is a large language
model that equals and in some cases exceeds the one from Meta that they use every day,
called Llama 3 400 billion. I was shocked when I read this, because I had assumed, from my
conversations with the Chinese, that they were two to three years late. It looks to me like it's
within a year now. So it'd be fair to say it's the US and then China within a year's time. Everyone
else is well behind. Now, I'm not suggesting that China will launch a rogue attack against an
American city. I am alleging that it's possible that a third party could steal from China, because it's
open source, or from the US if they're malevolent, and do that. So the threat escalation matrix goes
up with every improvement. Now today, the primary use of these tools is to sow misinformation,
which is what you talked about. But remember
that there's a transition to agents and the agents do things. So it's a travel agent or
it's whatever and the agents speak English, you give them English and they respond in English,
so you can concatenate them. You can literally have agent one talk to agent two, talk to agent three, talk to agent four,
and there's a scheduler that makes them all work together.
And so for example, you could say to these agents,
design me the most beautiful building in the world,
go ahead and file all the permits,
negotiate the fees of the builders
and tell me how much it's gonna cost
and tell my accountant that I need that amount of money.
That's the command.
So think about that.
Think about the agency, the ability to put together an integrated solution that today takes a
hundred people who are very talented, and you can do it with one command.
So that acceleration of power could also be misused.
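The chaining Schmidt describes, agents that consume and produce plain English and get wired together in sequence by a scheduler, can be sketched roughly like this. Everything here is hypothetical: `run_agent` is a stand-in for a real model call, and the role names are invented for illustration.

```python
# Minimal sketch of chained natural-language agents with a scheduler.
# run_agent is a stub; a real system would invoke an LLM API here.

def run_agent(role: str, message: str) -> str:
    """Stand-in for an LLM-backed agent: takes English in, returns English out."""
    return f"[{role}] handled: {message}"


def schedule(agents: list[str], command: str) -> str:
    """The 'scheduler': feed each agent's output to the next agent in order."""
    message = command
    for role in agents:
        message = run_agent(role, message)
    return message


result = schedule(
    ["architect", "permit-filer", "negotiator", "accountant"],
    "Design the most beautiful building in the world.",
)
print(result)
```

The point of the sketch is only that, because the interface between agents is natural language, composing them requires no shared schema; a one-line loop is enough to turn four specialists into a pipeline triggered by a single command.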
I'll give you another example. You were talking
earlier about the impact on social media. I saw a demonstration in England, in fact.
The first command was build a profile of a woman who's 25, she has two kids, and she has the
following strange beliefs. And the system wrote the code and created a fake persona that existed
on that particular social media site. Then the next command was take that person and modify that
person into every possible stereotype, every race, sex, so forth and so on, age, demographic thing,
with similar views and populate that and 10,000 people popped up
just like that.
So if you wanted, for example, today, this is true today,
if you wanted to create a community
of 10,000 fake influencers to say, for example,
that smoking doesn't cause cancer,
which as we know is not true, you could do it.
And one person with a PC can do this.
Imagine when the AIs are far,
far more powerful than they are today. So one of the things that Dr. Kissinger was known for,
and quite frankly that I appreciated, was this notion of realpolitik. Obviously, we have aspirations
around the way the world should be. But as it relates to decision making, we're also going to
be very cognizant of the way the world is. And he's credited with a lot of very controversial slash difficult
decisions, depending on how you look at it.
What I'm hearing you say is, all these roads lead to one place in my kind of quote
unquote critical thinking, or lack thereof.
And that is there's a lot of incentive to kiss and make up
with China and partner around this stuff.
That if China and the US came to an agreement
around what they were gonna do or not do
and bilaterally created a security force
and agreed not to sponsor proxy agents
against the West or each other, that we'd have a lot, that would be a lot of progress.
That might be 50, 60, 80% of the whole shooting match, as if the two of us could say,
we're going to figure out a way to trust each other on this issue,
and we're going to fight the bad guys together on this stuff. Your thoughts?
So Dr. Kissinger, of course, was the world's expert in China. He opened up China,
which was one of his greatest achievements.
But he was also a proud American. And he understood that China could go one way or the other.
His view on China, and he wrote a whole book on this, was that China wanted to be the middle kingdom, as in their history, where they sort of dominated all the other countries.
But it's not like America. His view
was they wanted to make sure the other countries would show fealty to China, in other words,
do what they wanted. And occasionally, if they didn't do something, China would then extract
some payment such as invading the country. That's roughly what Henry would say. So he was very much a realist about China as well.
His view would be at odds today with Trump's view and the US government's. The US government
is completely organized today around decoupling, that is literally separating. And his view,
which I can report accurately because I went to China with him,
was that we're never going to be great friends,
but we have to learn how to coexist.
And that means detailed discussions
on every issue at great length
to make sure that we don't alarm each other
or frighten each other.
His further concern was not that President Xi would wake up tomorrow and invade Taiwan,
but that you would start with an accident and then there would be an escalatory ladder.
And that because the emotions on both sides, you'd end up just like in World War I,
which started with a shooting in Sarajevo, that ultimately people found in a few months that they
were in a world war that they did not want and did not expect. And once you're in the war,
you have to fight. So the concern with China would be roughly that we are co-dependent and
we're not best friends. Being co-dependent is probably better
than being completely independent, that is, non-dependent,
because it forces some level of understanding
and communication.
Eric Schmidt is a technologist, entrepreneur
and philanthropist.
In 2021, he founded the Special Competitive Studies Project,
a nonprofit initiative to strengthen America's
long-term competitiveness in AI and technology more broadly. Before that, Eric served as Google's chief executive officer
and chairman and later as executive chairman and technical advisor. He joins us from Boston. Eric,
in addition to your intelligence, I get the sense your heart's in the right place and you're using
your human and financial capital to try and make the world a better place. Really appreciate you and your work.
Algebra of happiness. I'm at this gathering called Summit, and I've been
struck by how many people are successful, or at least have
the appearance of being successful.
Sure, some of them are rich kids, but they do seem to be, I don't know, economically
secure, overeducated, interesting.
Some of them have started and sold businesses.
But what I see is a lot of people searching and they'll say shit like, well,
I'm just taking a year to really focus on improving my sleep.
Okay.
No, that's, sleep is supposed to be part of your arsenal.
It's not why you're fighting this war.
You need good sleep, but I don't think you should
take a year to focus on it.
Anyways, does that sound boomer of me?
But this notion of finding a purpose and what I have
found is, and this is probably one of the
accoutrements of a prosperous society is ask
yourself, do you have just the wrong amount of
money? What do I mean by that? Obviously, the worst amount
of money is not enough, but a lot of my friends and a lot of people I think at this summit suffer
from just having the wrong amount of money. What do I mean by that? They have enough money so they
don't have to do something right away, but they don't have enough money to retire or go into
philanthropy or really pursue something creative and not make money.
That's exactly the wrong amount of money.
And I would say a good 50% of my friends who kind of hit a wall, got stuck, experienced
their first failure, sit around and wait for the perfect thing and wake up one, two, three
years later, and really don't have a professional purpose or a professional source of gravity.
And you know, it's kind of basic stuff, right?
Do something in the agency of others, be in service to others. But more than anything, I think the call sign is
just now, and that is, don't let perfect be the enemy of good, and
Give yourself a certain amount of time to find something.
And within that amount of time, when it elapses, take the best thing that you have.
And it might not fit the expectations that you have for yourself
or be really exciting or dramatic or really lucrative.
But the thing about working is it leads to other opportunities.
And what I see is a lot of people who kind of are cast into the wilderness and then
come out of the wilderness with no fucking skills.
And that is, you'd be surprised how much your Rolodex and your skills atrophy.
And so what is the key?
Do you want to write a book?
Do you want to start a podcast?
Do you want to try and raise a fund?
Do you want to start a company?
What is the key?
What is the critical success factor?
Is it finding the right people?
Is it finding capital?
Is it thinking it through?
Is it positioning the concept?
Is it doing more research?
No, the key is now.
You want to write a book, open your fucking laptop and start writing and it's going to
be shit.
But then when you go back and edit it, it'll be less shitty.
And then if you find someone to help you review it and you find some people, it'll get dramatically
even less shitty. Right, you wanna start a business?
Nobody knows.
The only way you have a successful business
is you start a bad one and you start iterating.
But here's the key, starting.
You wanna be in a nonprofit,
you wanna start helping other people,
well, start with one person and see if, in fact,
your infrastructure, your skills, your expertise
tangibly change the community, the environment,
or their life.
What is key to all of this?
Three letters: first N, second O, third W. I have so many people I run across who are searching,
not because they're not talented, not because there's not opportunity, but they're thinking
they're going to find the perfect thing.
No, find the best thing that is now, and get started.
This episode was produced by Jennifer Sanchez and Caroline Shagrin. Drew Burrows is our technical director. Thank you for listening to the Prof G Pod from the Vox Media Podcast Network.
We will catch you on Saturday for No Mercy / No Malice, as read by George Hahn.
And please follow our Prof G Markets pod wherever you get your pods for new episodes every Monday and Thursday.