Today, Explained - The AI hype machine
Episode Date: May 22, 2024
Big Tech companies have rolled out a new batch of AI-powered products, improving upon what came before. But as Wired's Will Knight and investigative journalist Julia Angwin explain, they’re not even... close to living up to the world-changing technology the Big Tech CEOs promised. This episode was produced by Amanda Llewellyn, edited by Amina Al-Sadi, fact-checked by Laura Bullard, engineered by David Herman with help from Andrea Kristinsdottir, and hosted by David Pierce. Transcript at vox.com/today-explained-podcast Support Today, Explained by becoming a Vox Member today: http://www.vox.com/members Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Hey there, what's up? How can I brighten your day today?
Does that voice remind you of somebody?
Hi, how you doing?
I'm well.
OpenAI is actually pulling Sky, that new voice for its new chatbot, just days after rolling it out.
Because, yes, it sounds a bit too much like a certain Spike Jonze character.
Ladies and gentlemen, her.
In the past week or so, a bunch of companies have actually announced a bunch of new AI tools.
And this is becoming something of a pattern.
Roll out a flashy new AI product, something glitches horribly, roll it back.
But wait, weren't the CEOs of these companies just telling us about how they were about to change humanity forever?
We're going to do this. It's going to be net great, but it's going to be
like a technological revolution. And those always come with change.
Ahead on Today Explained, AI's hype versus AI's reality. The all-new FanDuel Sportsbook and Casino is bringing you more action than ever. Want more ways to follow your faves?
Check out our new player prop tracking with real-time notifications.
Or how about more ways to customize your casino page
with our new favorite and recently played games tabs.
And to top it all off, quick and secure withdrawals.
Get more everything with FanDuel Sportsbook and Casino.
Gambling problem? Call 1-866-531-2600.
Visit connectsontario.ca.
It's Today Explained. I'm David Pierce, filling in as host today. Will Knight is here with me.
He's a senior writer at Wired, where he covers artificial intelligence,
which means that last week, he too was watching Google's annual developers conference. And this year, as you can imagine, was just 110% AI, generative AI in everything,
you know, turned up to 11. It was the most AI event they've put on and probably the most AI one I've ever seen. Everyone get out your computers. It's time to have ourselves a nice Google.
Yeah, so I think the most noticeable thing
is going to be the change to search they showed.
I mean, this is their cash cow, huge business,
and it's so important for many other businesses, right?
And they are rolling out these AI overviews,
which are generative AI summaries of the search results
instead of having to go through all these links.
Ideally, it will summarize it for you as if the
AI has gone off and read all the web pages
and it's got all the answers. So whatever's on
your mind, whatever you need to get
done, just ask.
And Google will do the Googling for you.
It doesn't always mean the answers are going to be right.
Someone was pointing a camera at a broken
film advance lever on an SLR, like a
film SLR camera. The question was,
why can't I advance this thing?
Why is this lever not moving all the way
to advance to the next frame?
The answer that Google delivered in its own video
and highlighted is the most wrong answer.
But it does also really threaten to kind of upend
this symbiotic relationship that companies have had
with Google, because it means people no longer
necessarily have to go and click on those links.
And this summer, you can have an in-depth conversation with Gemini using your voice.
They showed off new versions of Gemini,
which is their answer to ChatGPT,
which can see things and talk to you as it sees the world.
So it's almost like a little robot in your pocket
where you can, like, show things.
We're calling this new experience Live.
When you go Live, you'll be able to open your camera so Gemini can see what you see and respond
to your surroundings in real time. One of the other things that caught my eye was they start
to show off what they call and other people call agents. So these AI programs you don't just talk
to but you'll give them a task and they'll go off on the web and try to complete it. The idea is that this could be a totally new way to sort of use computers.
It's pretty fun to shop for shoes and a lot less fun to return them when they don't fit.
Imagine if Gemini could do all the steps for you. Searching your inbox for the receipt,
locating the order number from your email,
filling out a return form,
and even scheduling a pickup.
That's much easier, right?
Having played with some open-source ones,
these tools that people have cobbled together,
they are equal parts really, really amazing,
where you can see the potential,
and equal parts bonkers,
where they go off the rails and do something.
And the stakes
are that much higher if you're like sending emails or changing your calendar or you've given it your
credit card or something. So if they can figure out how to make them reliable, which I don't
think they can quite do now, so they have to really limit what they can do, then I think it
could be really transformative. And the usefulness I think is going to be less clear. It depends if
they can get them to actually work. It does seem like it's kind of an amazing baseline that we're still at: if only
they actually worked, how cool it would be. Well, the announcements keep coming one way or another.
They're going to keep launching stuff. And the other one was what, like 24 hours before
Google OpenAI did a big launch. Compare and contrast the vibes for me. Google has a big
giant developer conference. What did OpenAI do? Yeah, this was a much more small-scale,
intimate thing just at their headquarters. Today, we are releasing our newest flagship model.
This is GPT-4o. The CTO on stage with a couple of engineers showing off the new ChatGPT.
I will say that the vibe was also somewhat bonkers because they showed off this cool, impressive new model,
but they also revealed that the latest thing they've done is make it remarkably like the AI from Her.
Hey, ChatGPT.
Hey there. What's up? How can I brighten your day today?
In that it's kind of inclined to flirt with people.
Wow, that's quite the outfit you've got on.
Which is a twist, a plot twist I wasn't expecting.
OpenAI's new voice-enabled chatbot is getting attention,
not for its ingenious tech,
but because it sounds suspiciously like US actress Scarlett Johansson.
Johansson says OpenAI CEO Sam Altman wanted to hire her to voice Sky,
but that she declined the offer for personal reasons. She points out Altman insinuated the
intentional similarity when he tweeted a single word, her, the day the ChatGPT product was
announced. So they just said that they were going to change the voice because of this backlash over
it being a little bit too like, um, Scarlett Johansson, and a bit too sexy.
So what else did OpenAI announce?
They had the new voice, which was definitely kind of the star of the show, the not-Scarlett-Johansson but kind-of-Scarlett-Johansson voice. What else did we get from OpenAI?
Right, so they also showed that underneath
the hood they have this new model, which is GPT-4o, a completely new,
reimagined model, which takes video and audio, so it can do the voice, but it can handle video as
well. So they showed all these examples where you, in real time, you can talk about what you're
seeing. Show it to me whenever you're ready. Okay. So this is what I wrote down. What do you see? Oh, I see. I love ChatGPT. That's so sweet of you. The idea here is that you
may have this kind of new paradigm in personal computing where you've got something that's
always seeing what you're seeing. It can remember where you left the remote control. It can
tell you about everything you're looking at. We were expecting maybe a brand new,
super-powered new model, which is going to be able to do way more things
and be way more intelligent.
So it does feel like a little bit of a cop-out
or a bit of a swindle that that's the new kind of AI.
It just happens to sort of have more of a personality, as it were.
Do you think we're getting inklings
of what the kind of first huge killer app of AI is going to be?
I mean, you mentioned search,
which seems like it has the potential to be one.
Maybe it is just chatting with a stranger.
Maybe it's some of the agent stuff that we're seeing.
What is your sense of,
are we getting glimmers of where this is going?
I don't think we've seen what would be the killer apps.
I think there could be huge companies built on top of
kind of quite mundane or seeming,
much less sort of sexy uses of AI
and these models that just, you know,
automate all sorts of tasks.
And that could be really big financially.
I haven't seen something that's like,
oh, this is really going to change everything.
I mean, that was one of the fascinating things
with ChatGPT. It was clearly a really big research advance and it was
wild to play with it, but it was never the case that you could say, oh, this, I can really see
how this was going to change work. Even in cases where people would say, oh, it's going to, you
know, you can write essays with it, you can do, but there were all the problems with it. And I
mean, you can already copy essays off the internet, right?
I mean, people do use these tools.
I think they've sort of crept into their workflow somewhat in sort of smaller ways.
But it's not like we've seen this completely killer app akin to something like the smartphone or the internet yet.

In the midst of all of this, there are what I would call ongoing staffing machinations at OpenAI in particular, which have been going on for, what, the better part of seven or eight months at this
point? What happened this past week? Yeah, so this past week, well, just after the OpenAI
announcement, Ilya Sutskever, who's one of the co-founders, really the sort of technical brains
of the company from the beginning, and one of the people who tried to oust Sam Altman, the CEO, sort of finally announced
he was stepping down.
There was a lot of speculation that it would be very difficult for him to carry on having
tried to boot out the CEO.
He led this team, though, that was focused on long-term AI safety.
So if you remember, after ChatGPT, you'd have all these people coming out and saying that this technology
may destroy humanity, which seemed kind of very outlandish, but it became very much the norm for
people to talk about where we need to really focus on these long-term risks because we think it's
going to just get more powerful. So he led this team that was focused on that, and they pretty
much all quit. And the ones who were remaining have all been folded into other parts of the company.
So it raises the question, was that overblown? Do they not care about that anymore? OpenAI will
tell you that they still have researchers focusing on that. And the leaders put out a statement
saying, well, we still really care about this. But certainly the speed has changed. And
it's changed because Google came
along and said, well, we're not going to sit back and let OpenAI just overtake us. And so we're
going to move much more quickly because they were being quite cautious to begin with.
And we've seen the companies sort of releasing these tools and then discovering that there are
issues with them. You know, Google had this image generator that was generating
kind of inappropriate,
historically incongruous images,
because they were trying to be quite politically correct
with what they were putting out.
And they had a huge backlash around that.
Today's New York Post cover shows this AI rendering of George Washington.
Well, he looks awfully tan.
So you can see they're moving quite quickly
and then fixing things after the fact, which isn't exactly what you might want if you think this stuff might
really go off the rails. Will Knight at Wired. In a minute, we're going to ask whether all of
these problems with all of these AI tools are actually fixable.

Aura says it's never been easier thanks to their digital picture frames. They were named the number one digital photo frame by Wirecutter.
Aura frames make it easy to share unlimited photos and videos directly from your phone to the frame.
When you give an Aura frame as a gift, you can personalize it, you can preload it with a thoughtful message,
maybe your favorite photos.
Our colleague Andrew tried an Aura frame for himself.
So setup was super simple.
In my case, we were celebrating my grandmother's
birthday and she's very fortunate. She's got 10 grandkids. And so we wanted to surprise her
with the AuraFrame. And because she's a little bit older, it was just easier for us to source
all the images together and have them uploaded to the frame itself. And because we're all connected over
text message, it was just so easy to send a link to everybody. You can save on the perfect gift by
visiting AuraFrames.com to get $35 off Aura's best-selling Carver Mat frames with promo code
EXPLAINED at checkout. That's A-U-R-A-Frames.com, promo code EXPLAINED. This deal is exclusive to
listeners and available just in time for the holidays. Terms and conditions do apply.

That's a feeling you can only get with BetMGM, a sportsbook born in Vegas. And no matter
your team, your favorite player, or
your style, there's something every
NBA fan will love about BetMGM.
Download the app today
and discover why BetMGM
is your basketball home for the season.
Raise your game to the next level this
year with BetMGM, a sportsbook
worth a slam dunk. An authorized
gaming partner of the NBA.
BetMGM.com for terms
and conditions. Must be 19 years
of age or older to wager. Ontario
only. Please play responsibly.
If you have any questions or concerns about
your gambling or someone close to you,
please contact Connex Ontario
at 1-866-531-2600
to speak to an advisor
free of charge.
BetMGM operates
pursuant to an operating agreement
with iGaming Ontario.
Support for this show
comes from the ACLU.
The ACLU knows exactly
what threats
a second Donald Trump term presents.
And they are ready
with a battle-tested playbook.
The ACLU took legal action
against the first Trump administration
434 times. And they will do it again to protect immigrants' rights, defend reproductive freedom,
fight discrimination, and fight for all of our fundamental rights and freedoms.
This Giving Tuesday, you can support the ACLU. With your help, they can stop the extreme Project 2025 agenda.
Join the ACLU at aclu.org today.
Google.com.
What is it? What does it mean?
Why are we here?
No one knows.
And you're not going to find out. Not today.
Explained. We're back. Tech CEOs have been hyping this AI future for a while now.
You know, I've always thought of AI as the most profound technology humanity is working on.
More profound than fire or electricity or anything that we have done in the past.

Julia Angwin is a New York Times contributing opinion writer and the founder of Proof News.
She argues that the product demos we saw last week are, let's just say, not quite on the level of fire just yet.
Well, I felt really vindicated by those announcements because I think they were really underwhelming.
You know, Sam Altman promised us that he was going to show us something magic,
and it was kind of a routine update.
Like, you know, the new iPad, actually,
was probably slightly more magical than this update from OpenAI.
Same thing with Google, right?
They pulled out all the stops for this announcement.
Google!
But, like, I'm hard-pressed to tell you anything
that really was, like, a compelling, like,
oh, my God, I'm so excited to try this.
And so I feel like the problem is
that they started the AI conversation with,
AI is so smart that it's going to kill you.
Like, where do you go from there?
Right?
Yeah.
Like, it's really hard when, like, the gap between that and the reality is that it can't really answer even the basic question.
And then the gap just gets wider and wider.
Yeah.
So with that as kind of the hype machine, where do you feel like we are in real world, like ground truth AI stuff?
What is your sense of where any of this stuff actually
is right now? Well, I mean, if you just look at the studies, it kind of consistently comes back to
AI is like a 50-50 coin flip. So you look at medical diagnoses, right? There's a bunch of
papers. The most recent one that I looked at, from Stanford's human-centered AI lab, shows that when they were
looking at how AI performs on citing medical studies, basically 50% of the time the evidence
didn't support what the AI was saying, right? Even some of the spectacular things that were touted,
like, remember when ChatGPT supposedly aced the uniform bar exam? And they said that it had scored in the 90th
percentile. A new study from an MIT researcher actually found that it was in the 48th percentile.
So we're seeing that it does more than maybe would have expected a computer to do a couple
years back. But it's not reliable. Coders who work with AI as like a coding assistant will tell you,
you have to know enough to debug the code that it generates for you.
So it requires a bit of expertise, basically, to fact check the AI.
So what do we do with that, though?
Because on the one hand, 48th percentile on the LSAT is something.
It's better than my computer would have done. It's better than I would do on the LSAT, I suspect.
Yeah, definitely.
Me too, for sure.
On the one hand, cool when it tells you the truth.
On the other hand,
I now have to go fact check it every single time.
And so maybe we've accomplished nothing
because I have to just go do the work that it did
to make sure that it's telling me the truth this time.
So I don't know, I'm so torn in this moment,
whether to be impressed that we've made any progress at all,
or totally annoyed by the fact that the progress
we've made just makes me do even more work. Yeah, I mean, this is why in my piece in the
New York Times, I referred to it as like a bad intern whose work you have to check, right?
Yeah, that's good.
Because it's basically like, you know, sometimes you get a great intern, and they are like producing
great stuff, and sometimes you don't, and
sometimes it's more work than actually just not having help, right? Um, and so I feel like we're
kind of in that space with AI. I think it wouldn't be as frustrating, honestly, if they hadn't promised
us that we were on a steady march to what they call artificial general intelligence, which is
this sort of mythical concept that there's going to be one AI machine that can do everything, right? Like it's going to be an expert at chess
and at, like, drone strikes and law and medicine. And, you know, none of us knows a human being,
right, who has all of that expertise. And so they have told us, like, well, this is moments away.
We're almost there. In fact,
we are so close to being there that we're worried. And I think that that is the problem: we
were told that, and then the reality on the ground is so different from that.
Yeah. So on that front, let's run with the intern analogy a little bit here, because if you
have a bad intern, the theory supposedly is that
you can teach that intern how to be better, right? And that the work you're putting in to do twice as
much work because the intern's a doofus comes back in the end because you eventually teach the intern
how to do the job. And I would think there's some of that happening in AI. But you've also described some pretty big
barriers between where AI is and where all of these folks are suggesting that it might go and
is worth waiting for. Can you walk me through some of the big barriers between here and there?
Well, first of all, I do want to say, like, it's definitely possible we're going to get
better interns out of this eventually. But I think there are some major barriers.
One major barrier is energy use.
So AI is incredibly expensive in terms of its energy use.
It's why really only the big tech companies are able to compete in this space,
because you need incredible cloud computing resources. And then they're running up against energy limits.
Basically, they need these energy-intensive data centers to power these AI models.
One study finding that training a single large language model program takes the same amount of
electricity needed to power 120 homes for an entire year.
Microsoft just broke ground on a new one in Virginia, but, like, somebody who runs the data
centers there said, like, we're done, we can't expand anymore. Sam Altman has said,
we have run into an energy wall. This seems to me where things are going,
like, we're going to want an amount of compute that's just hard to reason about right now.
How do you solve the energy puzzle?
Nuclear?
That's what I believe.
Fusion?
That's what I believe.
Nuclear fusion?
Yeah.
So they have an energy barrier.
They also have a data barrier.
The current hypothesis for AI is that you need more and more data to make your models better.
I think I just want to put a pin in that, that that may or may not be true. But if it is true and that's what they're operating under, like they have already scraped all the data that is available on the public
internet. And, you know, as we have seen from the lawsuits by New York Times and Authors Guild,
it was not a consensual scraping. And just to be clear, I'm not looking to shut down AI or turn
the clock back. I just want guardrails so that AI fairly compensates
the people whose work comprises its entire brain.
So there's two problems.
One, all those people are suing them
and may want their data back out of those models.
And two, they're talking about basically creating
synthetic data to train their models.
So basically having the AI invent data
to then make their models bigger.
And then people have talked about that leading to like model collapse. And so, you know, you have
two huge inputs into AI, both of which are looking pretty shaky. And so it does, I think,
raise the question of like how much exponential progress they're going to continue to make.
And I would say this, I think there is an interesting question about whether bigger is really better.
There are a lot of really interesting experiments being done where scientists, doctors are putting more qualified, smaller data sets together and then trying to build AI applications on top of those. And I haven't seen enough to know yet how successful
those are going to be, but I think that's a really interesting question. Like I would
feel more comfortable myself querying an AI model that was only populated by like peer-reviewed
medical studies, for instance, right? And so I think there could well be an interesting AI
future that involves a lot more specialized models. And
that would make sense, by the way, with the history of technology, right? Specialized machines
have generally been what we rely on. And so the idea that AI was going to be one general machine was always kind
of weird. Totally. So let's just go glass half full here for a minute and put on our rose-colored augmented reality glasses.
The problems you're describing, especially these kind of big infrastructural ones, are they solvable?
Is there a world in which we get through some of these big barriers in the next few years?
I think maybe.
I always would say you have to bet on American ingenuity, right? One thing
about this country that is wonderful is that when we want to do something, kind of no holds barred,
we try to figure out how to do it. But I think the question is at what cost, especially on the
energy side. Is it worth pouring all this money into bigger and bigger AI models and data centers?
Or should we have a nationwide EV supercharging network so everyone can have an electric vehicle?
Like, you know, we're in a world where everything has a tradeoff.
And so I guess the question I have is, yeah, maybe we could if we put all things aside and decided, like, this is the thing that we want.
I'm just not sure the return on investment is worth it.
Well, and there are some folks arguing for precisely that, right?
I mean, Sam Altman is out here saying pretty straightforwardly and without a hint of irony
that it might cost trillions of dollars to get AI to the point where we need to and making
the case that that is the correct use because the benefits at the end will be so huge and
so society-serving that it'll
be worth it. Not to put words in your mouth, but I'm guessing you're fairly skeptical of that
argument. If you were going to make that claim and have it be believed, you have to show some
evidence, right? And, like, the reality is, he's a little bit the boy who cried wolf. He said
so many times how much magic we're going to see just
on the next round and the next model. So I would believe it if, like, Yann LeCun said it, because
he's been much more measured. He's at Meta, he's an AI pioneer, and he says, look, it's going to take
forever, we are nowhere near close, and it's going to take a lot of hard engineering and iteration.
And, like, that feels right to me.
Julia Angwin, New York Times contributing opinion writer and the founder of Proof News.
Today's show was produced by Amanda Llewellyn, edited by Amina Al-Sadi, fact-checked by Laura Bullard, and engineered by David Herman, with help from Andrea Kristinsdottir.
I'm David Pierce, and this is Today Explained. Wow!