The Prof G Pod with Scott Galloway - The AI Hype Cycle — with Gary Marcus
Episode Date: January 19, 2023
Gary Marcus, a professor emeritus of psychology and neural science at NYU and the author of “Rebooting AI,” joins Scott to discuss artificial intelligence, including the overall hype cycle, ChatGPT..., and useful applications. Follow Gary on Twitter, @GaryMarcus. Scott opens with his thoughts on CEO pay, specifically Tim Cook’s pay cut. He then wraps up by discussing a recent partnership between Walmart and Salesforce. Algebra of Happiness: your body is an instrument, not an ornament. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for Prop G comes from NerdWallet.
Head over to nerdwallet.com forward slash learn more to compare over 400 credit cards and find smarter savings accounts, mortgage rates, and more.
NerdWallet. Finance smarter.
NerdWallet Compare Incorporated.
NMLS 1617539.
Episode 228.
228 is the area code of Mississippi's southeastern region. In 1928, an Australian aviator and his crew were the first to cross the Pacific by air, and Mickey Mouse appeared in Steamboat Willie. The divorce lawyer said to Mickey Mouse, you can't divorce Minnie because she's crazy, and Mickey replied, I didn't say she was crazy, I said she was goofy.
Go, go, go!
Welcome to the 228th episode of the Prop G Pod. In today's episode, we speak with Gary Marcus, a leading voice in artificial intelligence and emeritus professor of psychology and neuroscience at NYU. He's having a moment right now for a lot of reasons. We discuss with Professor Marcus the state of play in artificial intelligence, including what to think about ChatGPT, the hype cycle, and useful applications.
Okay, what is going on? Tim Cook is taking a 40% pay cut, bringing his CEO pay down to a whopping $49 million. That's $49 million as his target compensation for the year, down from $99 million in 2022. I wonder if they chose $99 million thinking that $100 million would look worse. And he's eligible for a bonus of up to $6 million.
The board claims this was Cook's suggestion, but let's be honest: I've never met a CEO who recommended that they take smaller compensation. My guess is the board collectively decided to do this and said, for PR purposes and optics, let's pretend it's your idea. But who knows? Maybe it was his idea. According to Bloomberg, more than 30 public company executives had compensation deals that surpassed $100 million in value at the end of 2021. Think about that: 30 public company executives made more than $100 million in 2021. The top
12 packages topped $200 million. The average pay deal for an S&P 500 CEO is $18.3 million. That's 324 times what the typical worker
makes at those same companies.
Think about that.
I think it was about 60 times 40 years ago.
Now it's gone to 324 times.
Why is that?
So as someone who has served
on a bunch of public company boards
and boards get to decide what the compensation is,
or specifically there's a compensation committee
who decides the CEO compensation, and this is how it goes down. They bring in a consultant, usually Towers
Perrin, because boards don't like to do actual work, and they pay Towers Perrin to do a CEO
compensation survey. And they will find like companies of similar size in the same industry and say, this is what CEOs on average make at this type of company, of this size, in this type of industry. And they say, this is the median, the 50th percentile. And you think, well, Bob's doing a bang-up job, or Lisa is doing a great job. We can't pay them at the median. We're going to pay them more than that. We're going to pay them at the 60th or 70th percentile. And you think, well, that's fairly
innocent, a little top-up. But here's the thing: that 50th percentile is already extraordinary. That's the compensation the average CEO is getting for running an $8 billion firm, which, guess what, is a shit ton of money. But what happens when you pay them at the 60th or 70th percentile is that you create this explosive upward trajectory in CEO compensation, because it's exponential. Instead of their wages rising 3% or 4% a year along with inflation, they're increasing 8% to 12%, which means that roughly every six years, CEO compensation doubles, which means in 24 years, it goes up 16-fold. Which it has.
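To make that compounding concrete, here's a minimal sketch of the arithmetic in Python, using only the growth rates and the 24-year horizon cited above; the rates are Scott's ballpark figures, not precise data:

```python
# Illustrative compounding of CEO pay at the growth rates cited above.
# Rule of 72: at ~12% a year, pay doubles roughly every 72 / 12 = 6 years.

inflation_track = 1.04   # pay rising ~3-4% a year, tracking inflation
percentile_creep = 1.12  # pay rising ~8-12% a year from above-median targeting

years = 24
print(f"At 4% a year:  pay grows {inflation_track ** years:.1f}x in {years} years")
print(f"At 12% a year: pay grows {percentile_creep ** years:.1f}x in {years} years")
# Output: ~2.6x at 4%, ~15.2x at 12% -- four doublings, the ~16-fold rise described above.
```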
Now, this is a function of a few things, but the first thing I would argue is proximity.
And that is, when you get to know people, I mean, here's the thing: the number two and the number three person at most public companies make a fraction of what the number one person makes. Why? Because the CEO determines the compensation for all of his executives and decides, okay, if the CFO makes $2 million, that's pretty good cabbage. But the board gets to know the CEO personally. And as a function of that, they end up deciding, you know, Bob or Lisa is a good person; we shouldn't just pay them at the 50th percentile, we should pay them more than that. And CEO compensation has just absolutely skyrocketed. Now, with respect to Tim Cook, the question is, is this out of line?
And you could argue that CEO compensation is out of line, and it's market dynamics, a lot of things here.
I would argue: just raise taxes, or make people pay current income rates on their stock compensation. I've never understood why there's a difference between long-term and short-term capital gains. I want to go back to where Reagan was, when it was all just capital gains. For some reason, we've decided that the dollars that other dollars make are more noble than the dollars that sweat makes, which makes absolutely
no sense. As a matter of fact, if you were going to try and justify or solve a societal harm,
specifically the waning of the middle class,
you would say, okay, where do they make their money?
They make their money in salary or current income.
That should be a lower tax rate
than people that make their money off of assets,
usually people who are rich, right?
So, but no, we don't want to do that.
This idolatry of innovators kind of floats down.
And how do CEOs get compensated?
80, 90, 95% of their compensation is stock-based compensation so they can access those lower long-term capital gains tax rates. Now, beyond that issue around CEO compensation and how America has decided to reward money more than sweat, let's talk a little bit about Tim Cook. I would argue that relatively speaking, Tim Cook is not overpaid. No individual, no CEO in history has added more shareholder value than Tim Cook has under his
stewardship. I think he's added about $1.5 trillion in shareholder value. No one's done that.
Let's look at another CEO, Elon Musk. And I'm not talking about the compensation
from his founder shares. I'm talking about his compensation over the last five years based on
equity and options that he was afforded. It comes down to something like $10 or $12 billion. And
that's after a decline in the stock price of two-thirds over the last year. So that's over
the last five years. I think he's in the money on his options around $10 or $12 billion. And that's
even a bit misleading because he sold a lot of those options or exercised a lot of them and sold them when the stock was much higher.
So what do we have here?
We have one CEO, Tim Cook, that over the last five years has gotten 0.02% of the total market cap in the form of compensation after overseeing a trillion and a half dollars in equity value.
And then we have one CEO who oversaw a $350 billion increase in value. Let's
call Elon Musk a founder. I think he is the founder of Tesla. And he's gotten, get this,
about 3% of the company's market cap and compensation. So relatively speaking,
Tim Cook is underpaid and Elon Musk is dramatically overpaid. What's the difference here?
Governance. Apple has a real board. They're thoughtful. They look at compensation. They're not scared of the CEO,
and they say, okay, this seems reasonable: half a billion dollars over five years seems like remarkable compensation for a remarkable performance. Who's the CEO at Tesla?
Elon Musk. And who's on the board? Well, his brother and a bunch of sycophants, the kind of people that would let him run other companies or behave so recklessly that the Tesla brand incurs huge damage and has to start discounting. By the way, everything that Elon Musk is involved in right now is on sale.
Get this, Twitter is offering 50% off their ad rates. If you show up with a quarter of a million
dollars to spend on the Twitter platform, they'll give you half a million dollars in advertising.
There is no media company in the world I can think of right now that is offering advertisers a 50% off deal or promotion right now, which gives you a sense for just how many advertisers have abandoned the platform.
Anyways, back to CEO compensation.
We have an upward exponential trajectory in CEO compensation. I'm not sure there's a lot we can
do about that because the top guy or gal, if they're the right guy or gal, can add extraordinary
value across multiple stakeholders. But we can at least have them pay their fair share of taxes
and recognize that they should not pay a lower tax rate than the person who is cleaning the
bathrooms or who is answering their phones or is doing a lot of the frontline work for the company.
So that seems like a fairly easy fix. But to be clear, on a relative basis, I would argue,
I would argue, as weird as it sounds, Tim Cook is underpaid.
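As a back-of-the-envelope check on that Cook-versus-Musk comparison, here's a sketch using the episode's rough figures; the market caps are illustrative assumptions (approximate early-2023 levels), not numbers given in the episode:

```python
# CEO pay as a share of company market cap, using the episode's ballpark figures.
# The market caps below are assumptions for illustration, not from the episode.

def pay_share_pct(comp_b: float, market_cap_b: float) -> float:
    """Compensation as a percentage of market cap (both in $ billions)."""
    return 100 * comp_b / market_cap_b

# Tim Cook: ~$0.5B of pay over five years; assume a ~$2,100B Apple market cap.
print(f"Cook: {pay_share_pct(0.5, 2100):.2f}% of market cap")  # ~0.02%

# Elon Musk: ~$10-12B of in-the-money options; assume a ~$390B Tesla market cap.
print(f"Musk: {pay_share_pct(11, 390):.1f}% of market cap")    # ~3%
```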
Okay, what else is happening? Let's wrap up with some news in the B2B space. Walmart has partnered with Salesforce to, quote unquote, unlock local fulfillment and delivery solutions for retailers. Hmm, okay. The partnership essentially means that firms using Salesforce's e-commerce platforms will have access to Walmart's commerce services to speed up delivery and fulfillment times. This seems like an attempt to sort of take on Shopify, if you will.
This is really interesting. So is this Walmart trying to go up against Amazon and partnering
with Salesforce in sort of like maybe we go vertical and try and be Shopify? I don't know
what this is, but it's pretty interesting. It's innovative. Salesforce has been in the news
recently for basically overhiring and having to take some action around layoffs,
which I don't think is anything that extraordinary. But Walmart, I think, continues to be a pretty
innovative player. Will this work? I don't know, but these are the kind of things they should be
thinking about. But they look at Shopify's multiple, they look at Amazon's multiple, and they think, we want some of that Amazon-Shopify-like multiple on earnings.
Amazon has done something similar in terms of offering its services such as cashierless checkouts to other businesses.
It's also planning to offer external retailers
the ability to add a buy with Prime button
on their checkout sites.
What is so extraordinary about Amazon
is they take the biggest expense lines.
And rather than saying, okay, health insurance, fulfillment, processing power are our most expensive line items, they're our biggest costs, let's try and drive them down, let's try and take out costs, they say,
how do we overinvest in that competence? How do we overinvest in processing power? How do we
overinvest in our fulfillment infrastructure? And then turn it from a cost center into a profit center and offer it to other firms. No firm in history has pulled off this Houdini jujitsu act of taking their biggest expenses and turning them into amazing businesses that end up generating a ton of money. Amazon's biggest cost, or one of their biggest expense lines beyond labor, was processing power and data storage. So what did they do? They built it out exponentially and then offered it for
sale in the form of AWS to other providers. My God, who else has done that? Healthcare is probably
next. They're experimenting around different ways to offer healthcare services. They start with their
own employees, they'll over-invest in it, become great at it, and then start renting it or selling it to other companies. It's really exceptional what
they've done here. And I think Walmart is trying to drink some of that Kool-Aid. Is that the right
term? Probably not. But they're trying to learn from Amazon and say, okay, what are our biggest
expenses? Walmart has an enormous fulfillment infrastructure. How can we invest in it and start
renting it out to other folks? And we're going to partner on the front end or the software end, which probably isn't their
competence, with one of the great cloud CRM companies, Salesforce. I think it's a really
interesting idea. We'll be right back for our conversation with Gary Marcus.
Hello, I'm Esther Perel, psychotherapist and host of the podcast, Where Should We Begin?
Which delves into the multiple layers of relationships, mostly romantic. But in this special series, I focus on our relationships with our colleagues,
business partners, and managers.
Listen in as I talk to co-workers facing their own challenges with one another
and get the real work done.
Tune into How's Work?, a special series from Where Should We Begin, sponsored by Klaviyo.
Welcome back.
Here's our conversation with Gary Marcus, an emeritus professor of psychology and neuroscience at NYU and author of five books, including his latest, Rebooting AI.
Professor Marcus, where does this podcast find you?
I am in Vancouver, British Columbia, where I've been for the last few years.
What, in your view, if you were going to help someone understand AI
and set the context for why it's going to be so powerful,
what's your definition of it?
AI is really hard to define. Let's start with
the fact that we're contrasting it really with natural intelligence. So AI is short for artificial
intelligence. Sometimes people joke that the contrast is with natural stupidity.
There's not one crisp, clear definition. And one of the complexities in the definition
starts with the fact that really
intelligence itself is multidimensional. It's not just one thing. People like Sternberg and Gardner
have made this point in different ways. So they'll talk about things like we have kinesthetic
intelligence, like a dancer or a Michael Jordan or someone like that has kinesthetic intelligence.
And then there's kind of analytical intelligence. There's lots of different dimensions. If you look
at the SAT, it's not a perfect measure of intelligence.
Nothing is.
But I think part of what we have in mind when we talk about intelligence is being able to
adaptively solve problems that are new.
The biggest problem that I see is that people treat AI as if it's a kind of universal solvent
bit of magic.
Like if you have a little bit of AI, you can do anything you want.
And the reality is we have different AI techniques that do different things. They have different
strengths and different weaknesses. They're like tools. You know, you wouldn't say I have tools,
period. What you really mean is like, I have a screwdriver, I have a hammer, I can do a bunch
of things with these and a bunch of things with those. And, you know, a good carpenter learns to
do different things with those different tools and where they're appropriate. Here's an example that I use fairly frequently, which is turn-by-turn navigation. I want to go from point A to point B, and it computes the shortest path, and so forth. That's AI. It doesn't get hyped anymore. There's this old joke
about when we don't know how to do something, we call it AI. And when we do, we just call it
engineering. So I'm trying to think of an athlete that just revolutionized or inspired a sport. I
don't know if it was Serena Williams or someone who just took the sport to the next level. It
feels like ChatGPT has taken the sport of AI to a new level, that all of a sudden it's all the rage. What is unique
about ChatGPT that has all of a sudden got all of us talking about AI? Well, I mean, I don't
entirely buy the analogy to begin with. It has certainly gotten more people to talk about these
things than before. It's not so different from a bunch of other systems that came before it.
Some of it just has to do with how it was percolated through the media, the availability
of it.
So GPT-3 came out a couple of years ago.
It was certainly a lot of talk about it.
It, quote, wrote an op-ed in The Guardian that was actually edited by human editors, but it got a lot of press too. But one thing they did differently with ChatGPT versus GPT-3 is that GPT-3 had a very limited release. They handled ChatGPT in a very different way, which is they put it out for everybody to use. You just need an account. In terms of the underlying technology,
it's really not that different from GPT-3. They've added guardrails, which is both
important and frustrating. So the guardrails mean that it's much harder to get it to do
really malicious things. So just before ChatGPT came out, Meta released a system called Galactica
that stirred up a huge amount of attention, but disappeared after a couple of days.
And the problem with it is it had no guardrails at all.
So it was very easy, for example, to say,
hey, write me an article about the benefits of antisemitism.
And it would write an article about the benefits
or putative benefits of antisemitism.
So that came out in November.
In terms of the technology,
it's not really that different from ChatGPT. They're both trained on massive amounts of data. Galactica happened to be trained mainly on scientific articles, but there's a lot of overlap in what
they're trained on. And they are similarly prone to bullshit. So I'm trying to think of an example
somebody gave yesterday. Henry Minsky, who's the son of Marvin Minsky, one of the founders of AI,
sent me an email yesterday, and he asked ChatGPT, what are the benefits of finances for
cells in immunology? And ChatGPT just made something up that was total bullshit.
Now, going back to your question, is this like suddenly Serena Williams has come to the scene?
I would say no, that it just looks like that. There should be a moment here of realization that
this is not really what we mean by intelligence. Yes, it can do a lot of things, but pretty much
everything that it does is approximative. It sort of looks like it works, and then you go
dive in and there's a problem. So here's an example. I bet that if we asked it to write your biography, it would come up with something that is similar to you, but probably sprinkle in a bunch of stuff that's not true. Like it would make up where you
went to university and what you taught at NYU and so forth. So everybody is like excited about the
possibility that ChatGPT is going to be a so-called Google search killer, Google killer.
And there's a chance that that will happen, I suppose. But the
fundamental problem is that if you get back a list of websites, you can judge for yourself,
is this one accurate? Is this one not? Does it look plausible? Do I believe these guys?
Whereas ChatGPT returns like a paragraph, and it returns the paragraph with no sourcing, although people are working on that a little bit, but with no sourcing, and it says everything that it believes to be true, so to speak, as if it were all equally true. And you have no way to tell which of it is and is not. Let's come back to misinformation. But if I read into
your comments, my sense is you've, well, I'll back up. I was at the DLD conference and I met
with the CEO of Neeva, Sridhar Ramaswamy, and he said that we've hit peak AI hype already.
That he sort of said, okay, this company being valued at $29 billion, we're already at peak hype here.
And I said, well, is the performance going to match the promise?
And he said, maybe someday, but we've hit this hype cycle faster than any
other technology. I mean, right now, the general kind of consensus is this changes everything.
It feels like Bitcoin in the early days, right? Yeah, or driverless cars. Think about driverless
cars back in 2012, right? Sergey Brin said, we'll have them on the road. Everybody will be able to
use them in five years. So is this like the autonomous car story all over again? More hype than
substance? So I think you are correct that this has been a faster hype cycle than we have ever
seen before. A few weeks earlier, by the way, we saw the same hype cycle for different aspects of
generative AI, which was around the art. And I think we should sort of unpack this a little bit.
We've already gone from, it makes art, to, it's going to replace artists and, you know, Getty and these clip art companies. And now it's going to fire all the editors. You know, it wasn't enough hype to say it's going to replace artists.
So I think there's actually some real application.
I think the economics here are very
tricky. I think in the case of driverless cars, it's just really too hard what's being promised.
The so-called level five self-driving, like you can rent out your Tesla at night and use it as
a taxi when you're not driving. That's just not happening soon. I do think that there are some
applications of generative stuff. So you really can use it for the art. There's a whole question around how the artists should be compensated. There are lawsuits that were just filed in the last few days, and there will be more. So the economic question around the art stuff is, do the artists get compensated, and what's the moat there? So, you know, Stable Diffusion and OpenAI,
for example, each have art products. Google can certainly replicate this. In fact,
Stable Diffusion is itself open source. So it's not clear, you know, how any particular company
is going to make money when everybody essentially knows the recipe now. So you can make yours a
little bit better by having a better data set
and there's some scrambling around that.
I don't know if the economics are there
to support as many players
as we have kind of around the basket right now.
Okay, now let's turn to the language models.
The language models have this problem
that they hallucinate a lot.
We have a human tendency to be gullible
and people are over-attributing to ChatGPT an intelligence that's not really there.
So some applications, I think, are viable.
People really are using it as a tool in computer programming where the programmer knows what they need.
They can see if it's not correct, and they can debug it.
Then other people are saying, we're going to use it as a search engine.
That's much more complicated because it does make up so much bullshit.
And people are like, well, we'll just make the model bigger and that will go away.
To my mind, it's not going to go away.
It's intrinsic to the nature of how these systems work.
But the thing that we're talking about itself, ChatGPT, I don't think it can solve the truth
problem.
That's not really what it does.
They're built to write stuff that sounds plausible. They're not built to write stuff that is true. It's not actually analyzing
its data set saying, this thing that I'm saying, is it consistent with what I know? And this lack
of a validation step, in my mind, is fatal for making it a serious full-service search engine.
You could market it as a brainstorming tool,
make some money off of that, sure.
But that's not a Google killer.
Like people who have sugar plums dancing in their head
because they think, wow, Google is a trillion-dollar business
and we're going to take their trillion-dollar business.
Well, no, you might take a little piece of it.
Does that justify a $29 billion valuation?
I don't know.
The transaction is valuing OpenAI at $29 billion. And there's a difference between value and price. So let me say it's been priced at $29
billion. Sure. It sounds to me like you're a skeptic that it's going to live up to that $29
billion valuation. Yeah, I am. We should maybe preface that discussion, though, by saying that
that's a weird transaction. It's the weirdest set of terms I have ever seen. I think it's unique in the business. I actually have some admiration for both Microsoft and OpenAI in terms of how they structured the transaction. It's incredibly creative. It's way more complicated than just: Microsoft bought a third of the company for $10 billion and valued it at $29 billion.
There's all this other stuff.
So control of the company kind of ping-pongs back and forth in ways that kind of boggle my mind and I still don't fully understand.
And I only have secondhand reports of it, so I'm not a master of it.
So if they do a little bit of business, like Microsoft gets most of the profits.
They don't get 10% or 33% of the profit.
They get 75% of the profit for a while and then they get 49%.
If it's really a killer thing and it makes trillions of dollars, which is kind of unlikely, then OpenAI gets control of the company back.
They get their equity back. At some point, this other clause triggers,
and then maybe they let the greatest thing ever go.
So it's a very weird transaction.
Let's just pause right there,
because where I've been quoting you,
and I'm misquoting you,
is that you'd said something to the effect
that the initial founders or architects of OpenAI
saw an important technology
that could go in good or bad directions
and wanted to be thoughtful
around the direction and development
and the evolution of AI.
And that they probably didn't envision
that it would be a point of differentiation for Bing
and potentially be, you know,
I don't want to say handed over to the capitalists,
but I would argue that this deal,
it gives the veneer of, and it is a unique structure, but it gives the veneer that, oh,
after a certain point, the capital or the profits go back. But at that point, for that to happen,
it would have to be one of the 10, by my calculations, most profitable companies in
the history of all business. It feels to me like a lot of jazz hands and a lot
of virtue signaling under the auspices that we pretended this was for the greater good.
And we've decided, oh, I smell money. Let's figure out a structure that still maintains that veneer
of the public good or for all mankind while giving the first $90 billion to Microsoft and the initial investors.
Am I being cynical here?
I think it's more than $90 billion.
And I don't think you're being cynical.
I mean, let's replay the history here.
OpenAI was founded as a nonprofit.
And the motivations, if you go back and read what people like Elon Musk,
who helped found it, the motivations were essentially that, to keep AI from being owned by the capitalists and to keep AI safe for everybody.
And everything about the name and the history of that company has been transformed in very complicated ways, shall we say.
So first we can look at the name. Well, first we look at the nonprofit part. It started as a nonprofit.
Then it became a for-profit. But with the nonprofit still going, and I've always wondered
about this, so I haven't looked at it in a while, but let's say in 2019 or something, you could
still find tax forms for the nonprofit. They were still taking money in. So they were still shielding some of the money
they were getting from paying taxes.
So they started as a nonprofit, they added a for-profit.
That does happen sometimes,
but it does seem like the motivations have changed here.
So the founders are apparently selling a bunch of stock
at the same $29 billion valuation, about $300 million of stock. And then,
yes, there's this very tight relationship with Microsoft. There was already a tight relationship.
It's now gotten even tighter. That's really not what was on the original dance card, right? The
original dance card was, I think, driven by a fear of DeepMind in particular. I don't know if anybody's going to say this in print,
but this is how a lot of us read it.
DeepMind had just ascended, right?
Google had bought them for what seemed like a phenomenal amount of money
at the time for an AI company, around $500 million.
Everybody was talking about DeepMind.
They had put out this Atari game system that won lots of Atari games.
I was critical of it, but lots of people were excited about it.
And so people were worried, like, is DeepMind going to take over the world?
And OpenAI came in in that mix.
And the idea was, we're going to keep the world safe from having AI controlled by some small private wing of a company. And DeepMind was now a private wing of Google.
We don't want Google and DeepMind to take over the world, so we're going to build this antidote to it.
And then suddenly, you got to GPT-3, and they had a release of it.
And Gary Marcus, who's a known critic of this stuff, says, can I try it out?
And they say nothing.
Like, it's not open anymore.
They said, we're not going to have it open to the public because it's too dangerous
to use. And then we all laughed at them and they were actually right. It is dangerous, but they
decided, yeah, dangerous to use, but if we can sell it to Microsoft for, you know, $10 billion
and put an API on it and charge money for it, that's fine. And so there's been a lot of wavering on the kind of ethical
principles that they are putatively following. So when you said you're quoting me, I don't think I
ever spelled it out in quite the way that you reconstructed it, but I don't know that you're
wrong either. The chance that they will use this as a nonprofit for good, factoring in sort of the
historical decisions they've made
and the structure of the transaction is small. I mean, realistically, they are now currently
operating as a division of Microsoft with an API that they will charge money for. They are not open
about what data went into it. They're not letting anybody use it for free. There's nothing particularly open about OpenAI anymore that I can see, other than this promise that if it becomes a sort of Saudi Aramco-scale company, maybe they'll make some change.
So let me put forward a thesis and you tell me where I have it wrong: this is going to be less groundbreaking than the current hype cycle suggests, with fewer applications than we'd initially hoped, but it also represents less of a threat than a lot of people fear. Your thoughts? So I was with you for the first part, but not the last.
I think there's a real threat here. So yes, I think it's going to be, I mean, it's not artificial
general intelligence. There are some people now that are perceiving it as like tantamount to
artificial general intelligence. There's some people who think that Microsoft is getting a sweetheart deal because they're buying artificial general intelligence.
Of course, if it really turns out to be that, if I turn out to be wrong about pretty much
everything I've said in my career, and it really does turn out to be artificial general
intelligence, then it reverts back to open AI after they make this enormous amount of money.
And I could be wrong. I don't think I am, but I'm a scientist
and I understand there are unknowables and there are places where I could be wrong.
We could dive into that part of the argument.
But more likely than not,
it is not actually artificial general intelligence.
It's going to have some application.
People are going to be able to use it for programming,
for brainstorming, for some applications
in creative writing and so forth and so on.
But maybe it doesn't do search.
And so if it doesn't replace Google as search, but it does all these other little things,
maybe $29 billion turns out to be the right valuation.
I don't know.
Like these numbers are picked out of a hat in a certain sense.
Let's say it's a modest success, but not a breakout success.
That doesn't entail, unfortunately, that it's not dangerous.
So I do think that it's dangerous. And the reason I think it's dangerous is it has a high probability, or let's say a significant probability, of leading us to a post-truth, Russian fire-hose model of propaganda, where you don't just want to persuade somebody of X, you want to persuade them they can't believe anything. Just confuse them. Yeah, alternative facts about everything. Fascists love that strategy, right? I think GPT, ChatGPT, all these systems have a lot of unfortunate negative potential to be used by bad actors to create such
an environment where we can't trust anything. And if we're in a world where we don't trust the
search engines, most of the stuff that we see on social media isn't true. We don't know which is
which. That just plays right into the hands of authoritarians. That's incredibly dangerous. So
even if there's not that much money
to be made, if it transforms our culture to reduce trust even more than we already have,
that's genuinely dangerous, I think. Doesn't that lend itself, though,
to the importance of institutions and identity? It does. The last time that the world was in
this situation was in the yellow journalism period in the 1890s
when Hearst and so forth were just putting out any garbage. And society decided that what we
need to do was to actually have things like fact-checking. Fact-checking came out of that
era. And so we may have a sort of second telling of that history where we need to build new AI
as a guardian to detect this stuff, which I'm interested in doing,
building AI to detect the misinformation.
We may need to have regulations which say if you produce misinformation at volume, not as a one-off, you say one lie on Twitter, fine.
But if you produce thousands of pieces of misinformation with malice, maybe that should
actually be punishable.
In the United States, it's not right now.
Unless you say something under oath or something like that that's false, there's just
not that much in our legal structure. We might need to change the legal structure to have
enforcement. We might need to change how we think about the social media. We might have to rethink
230. There's all kinds of stuff that might have to happen. So there might be a response, right,
if it gets bad enough.
To the extent you're willing, when you look into 2023,
I won't call them predictions,
but what are you comfortable sort of speculating
what might happen with ChatGPT and AI,
generally speaking, in the business ecosystem?
Well, the biggest thing that's going to happen,
first of all, is that GPT-4 is going to come out.
Nobody's even going to remember ChatGPT by the end of the year, as strange as that seems, since everybody's talking about it now. GPT-4 is going to be even better than ChatGPT. We're going to have the same hype cycle. People are going to go insane over it. But I think it's still going to have the same problems around hallucination. And so by the end of the year, everybody's going to be using this a little bit,
but everybody's also going to have these frustrations around hallucination and stuff
like that. You're going to have a lot of people trying to make product out of it,
as they have been for GPT-3 for a while. Most of the companies trying to make product around it
are going to realize that liability is an issue and they're not going to succeed on it.
We're still going to have arguments about how smart is it and how not smart it is.
Google is still absolutely going to be in business at the end of 2023. I have my own issues with
Google, but I am not at all thinking that they're going to be wiped out by this. People will still
be talking about this transaction. Was it the right move or the wrong move? We still won't really know for either party how well it turned out. Things by the end of 2023,
from a business perspective, are still going to be up in the air because it's still going to look
like a promising technology. It's still not going to arrive. But I mean, you know the old thing
about going from a demo to an actual product and how hard that can be. That's been true in the
history of AI.
There's so many demos of robots that don't actually see the light of day.
We had demos of Facebook M, and we had demos of Google Duplex,
and a lot of these things just don't actually come out.
So at the end of 2023, a lot of people are going to be at that transition. Can I turn this demo, the demos are easy to make, into a product
that customers will actually pay money for, that there won't be any liability lawsuits over and so forth?
I predict at the end of 2023, that question will not be resolved.
Gary Marcus is an emeritus professor of psychology and neuroscience at NYU and also a leading voice in artificial intelligence.
He's also the author of several books, including the New York
Times bestseller, Guitar Zero. His latest book, Rebooting AI with Ernest Davis, is one of Forbes'
seven must-read books on AI. You can also follow him on Substack or on Twitter at Gary Marcus.
And Gary, you're launching a podcast. What's the name of the podcast?
That's right, Humans vs. Machines. I think our first episode might drop in the beginning of March
and the rest will come out later in the spring.
Good. And he joins us from his home in Vancouver. Professor Marcus, we appreciate your time.
Thanks very much for having me.
We'll be right back.
What software do you use at work? The answer to that question is probably more complicated
than you want it to be. The average US company deploys more than 100 apps, and ideas about the
work we do can be radically changed by the tools we use to do it. So what is enterprise software
anyway? What is productivity software? How will AI affect both? And how are these tools changing
the way we use our computers to make stuff, communicate, and plan for the future?
In this three-part special series, presented by AWS, Decoder is surveying the IT landscape.
Check it out wherever you get your podcasts.
Hey, it's Scott Galloway, and on our podcast, Pivot, we are bringing you a special series about the basics of artificial intelligence. We're answering all your questions. What should you use it for? What tools are right for you?
And what privacy issues should you ultimately watch out for? And to help us out, we are joined
by Kylie Robison, the senior AI reporter for The Verge, to give you a primer on how to integrate
AI into your life. So tune into AI Basics, How and When to Use AI, a special series from Pivot sponsored by AWS, wherever you get your podcasts.
Algebra of Happiness. I just got off a podcast, the Armchair Expert podcast with Dax Shepard.
Something that struck me is he's huge. He's 205 pounds right now and ripped. And he graduated from college, UCLA, in 2000. I graduated in '87. So he's 13 years younger than me. He's 45.
And we talk a lot on this podcast about the importance of being physically in shape, but something we don't talk a lot about is our relationship with food. And I want to be clear, my mom worked all day and she was a horrific cook. And both my parents are very thin. So naturally I'm very thin, and food became kind of a task for me, or a punishment almost.
My mom used to every Sunday night, no joke, make a vat of shepherd's pie, which is mashed potatoes, ground beef, corn, and I think something else. And then we'd feast
on the fresh shepherd's pie on Sunday night, and then she'd freeze it. And I'd heat it up in this
microwave that was slightly better technology than Chernobyl after it exploded. And I'd eat
shepherd's pie all week, and it was just awful. So I grew up not enjoying food, unnaturally skinny,
and I just was always very self-conscious. People, the first thing they'd say when they met me is
they'd comment on how ridiculously skinny I was. And it got in the way of my ability to play sports.
It got in the way of my ability to date, if you will, because I was so painfully thin.
And I was just so self-conscious about it. And then
when I got to UCLA, I rode crew, started putting on weight, got to the fraternity. Everyone
complained about the food. It was all these rich Jewish kids from the valley. I was a poor Jewish
kid with a single mother. And I thought the food was fucking amazing and started eating like crazy
and went from about, I don't know, about 160 to kind
of 185. And 25 pounds of muscle at 19 makes a big difference. And about the same time, my skin
cleared up and all of a sudden I started getting what felt like more respect from other men. And I
just felt better about myself and women started noticing me. And I just associate really good
things come from not being skinny.
And as a result, every time I look in the mirror my whole life, I'm like, oh my God, I look frighteningly skinny.
I look unhealthy.
I look emaciated.
I'm 6'2", 187 pounds.
And if I look in the mirror right now, I think I look unnaturally and unhealthfully thin.
That is body dysmorphia. And I think part
of what's helped me is one, working out a lot. You just feel like you're doing something about it.
Realizing that your body is an instrument, not an ornament. Rather than measuring how big or
how ripped or how non-skinny I looked, I would think of myself, I started timing myself
in terms of my ability to row 2,000 meters on the erg or how much I could lift or how fast and how
far I could run and try and take pride in being strong as opposed to just being ripped or being
fit. And it's difficult to tell someone how to do this, but trying to figure out ways to appreciate
your body, find something about your body that you like,
and really try and develop it and take some pride in it. And also just acknowledge that it is a
human condition to be somewhat anxious about your body type and your fitness. The vast majority of
people, especially women, do not like their bodies. And I think it is a real skill to try and figure
out a way to appreciate your physicality
and your form and focus in on how blessed you are. Everyone is blessed with certain attributes
and cursed with others, but focus on the stuff you can work on. And again, I just think when you
work out kind of four plus times a week, you feel like no matter where you end up, you're doing
something about it and you're taking control. But body dysmorphia is a thing for not
only women, but for men. And it's been something that as I've gotten older, I've dealt with and
I appreciate it and just been cognizant of it. And quite frankly, just kind of hugging yourself
and appreciating. It's gotten easier to appreciate as I've gotten older because people get just so
fucking ugly as they get older and get so sloppy. So relatively speaking, I'm in great shape.
Great, great shape, for his age. Everything is followed by, for his age.
But body dysmorphia is a thing for men.
Take pride in what works about it.
This is not a rental.
You are stuck with this thing.
Nothing is more important, but appreciate it.
It's yours and it's going to be with you for a while.
Our producers are Caroline Shagrin and Drew Burrows. Jennifer Sanchez is our associate
producer. If you like what you heard, please follow, download, and subscribe. Thank you for
listening to the Prop G Pod from the Vox Media Podcast Network. We will catch you next week.
Support for this podcast comes from Klaviyo. You know that feeling when your favorite brand really gets you?
Deliver that feeling to your customers every time.
Klaviyo turns your customer data into real-time connections with your customers during Black Friday, Cyber Monday, and beyond.
Make every moment count with Klaviyo.
Learn more at klaviyo.com slash BFCM.
Support for the show comes from AlixPartners. In the AlixPartners 2024 Digital Disruption Report, you can learn the best path to turning disruption into growth for your business. With a focus on clarity, direction, and effective implementation, AlixPartners provides essential support when decisive leadership is crucial.
You can discover insights like these by reading AlixPartners' latest technology industry insights, available at www.alixpartners.com.
That's www.alixpartners.com.
In the face of disruption, businesses trust AlixPartners to get straight to the point and deliver results when it really matters.