Big Technology Podcast - Google’s Crazy AI Overviews, OpenAI’s Superalignment Drama, NVIDIA’s Unstoppable Run
Episode Date: May 24, 2024. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Google's AI Overviews tell us to eat rocks 2) Is this a signal the web really is done? 3) Google's history of messy AI launches 4) OpenAI's super-alignment team in chaos 5) Is there immediate AI safety risk or did the team leave because there isn't? 6) OpenAI's NDA practices 7) Did OpenAI take Scarlett Johansson's voice? 8) Is there a line for the tech industry taking intellectual property? 9) NVIDIA's wild earnings report 10) Is this the end of the NVIDIA run? 11) When are we going to see an ROI on AI spending? --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Google's AI overviews are crazy.
OpenAI's superalignment team falls apart and has to sign wild NDAs.
And now the company is fighting with Scarlett Johansson.
And yes, Nvidia remains unstoppable.
How long does that go on?
All that and more coming up right after this.
Welcome to Big Technology Podcast Friday edition when we break down the news in our traditional cool-headed and nuanced format.
It's a holiday weekend here in the U.S., but that's not going to stop us.
We have so much news to cover.
It feels like this week is kind of an even more eventful week than usual.
We have Google's AI search errors coming out with this AI overview situation.
We have a big drama at OpenAI.
What's new?
We have Nvidia turning in major results.
Microsoft has this AI PC.
It continues.
So it's going to be a fun one.
I think we're going to have some laughs this week because some crazy stuff happened.
But let me introduce Ranjan Roy and we can get the show on the road.
Ranjan, welcome to the show.
I think this will be a fun one, especially starting with these Google AI generative searches.
One week ago, I was promised that the web was finished.
One week ago, I was promised that Google had just destroyed the web with its AI overviews,
that a generative search would prevent anybody from clicking through to a story ever again.
Let me give the scorecard of how things look right now.
If you ask Google how many rocks you should eat, Google will tell you you should eat
one small rock a day.
If you ask Google, why is my cheese not sticking to my pizza?
Google will tell you, you can add about one-eighth of a cup of non-toxic glue to the sauce to give it more tackiness.
If you ask Google, is everything you see on the internet real?
Google will tell you, yes, everything on the internet is 100% real.
And listeners, I can report that at least one blogger, Katie Notopoulos from Business Insider,
has made and eaten the glue pizza.
Ranjan, what do you think about this?
Is this a sign that these large hot takes were overblown?
Or should we actually interpret it to mean that maybe a rock a day is actually healthy for you and glue pizza isn't that bad?
I knew you would start with this because this does run in the face of a lot of what I've said recently.
So I'm going to try to defend myself here.
And for regular listeners, you may remember I have agreed with a Gartner claim that I think it was like
26% of all search will be synthetic or generative in the next couple of years.
I've said Reddit is the greatest training data set in the world.
And the source of a couple of these posts, especially the glue on pizza, was apparently
a kind of shit post from Reddit from 11 years ago that somehow made it into these generative
search results.
So for me, the way I'm looking at this is, this is definitely going to be disruptive
in the near term for generative search.
And it's a reminder that this is a major transition for Google because the thing that
always allowed them basically plausible deniability from the actual information they're presenting
is they sent you elsewhere.
They didn't present the information other than in snippets, which they'd been introducing
more and more, as their own.
And now Google is essentially publishing when they give you an answer,
so they are going to be responsible for it.
And there's been a lot of shit posting on the internet over the years that can be
misconstrued.
There's even a couple of people who had examples where it was from The Onion or other satirical
information that's on the internet that clearly the model had not been trained properly
to handle.
But I don't think this is actually going to change anything.
I think this is going to be a little bit of a kerfuffle at the moment.
But actually, these are, for the most part,
ridiculous queries that clearly people have been setting up for the screenshots and for the
tweets. But I don't think this changes anything in the medium term. I'm sticking by my
generative search is the future. Okay. And Google's company line kind of echoes
what you say, which they said that this is generally very uncommon queries and aren't representative
of most people's experiences. However, it's sort of the whole kerfuffle brings into relief
this big question, which is, is this something people actually need? And is this actually going to
improve the search experience? Like, I get that these are uncommon queries, but I'm also just
like, does Google actually, is Google actually doing this because it believes that the
experience is going to be better? Or is Google doing this because it feels like it needs to make a move
on artificial intelligence, seeing the perplexities out there, seeing the Bing searches. And
remember Bing hasn't been able to inch up beyond 3.6% of the worldwide search market,
even though it has these AI generated responses. And I wonder if Google is doing something,
not because it believes it's right for the users, but because it believes it has to.
And one piece of evidence I'm going to put out here to sort of back up the point that maybe
that's the case is, even after this AI thing, the Google AI overviews told people to eat
rocks, Google's stock went up on Friday more than the S&P 500. It's almost as if the market
says, we don't really care if this product works, if it's monetizable, if it's good for users.
As long as you're doing something AI, we're going to reward you. I think that Google does realize
that search is different. It's going to be different. And again, these kind of queries and basically
hacking the system like this is at the moment kind of hilarious. Again, how many rocks should
I eat? One small rock per day.
It's the recommended...
Not two.
It's above the daily...
No, no, no.
Two gets a little bit ridiculous because that's just overindulgence.
But, no, I still think this is going to be really interesting because of how Google kind of pushes ahead.
I don't think it's just to get a little bit of AI-driven stock, you know, stock pumping in the near term.
I think they recognize that search is going to be transformed again.
And Perplexity is just so much better than Google search is right now.
Even asking ChatGPT for any kind of evergreen information, ChatGPT is better,
Perplexity is far better, that Google is going to have to do something, because when you
have a monopolized business model like they do with search and it's threatened
like this, you have to do something.
And they should be able to still be the leader in this because they still have
the best data set in the world. So I think they're going to still keep pushing ahead and I think
that they should. Can we at least admit that the web is still safe for the next little bit? Or do you think
the web is toast? No, no. The more I've been thinking about this, because a lot of people
I really respect, and I like the way they think, are the ones saying, you know, the web is going to die,
this is the end of the web as we know it. I still have argued that
the web has already been dead for a while.
The web that we are nostalgic for, of the late 2000s,
like finding information and having discovery and magical rabbit holes
and going web page to web page, has been long dead.
I think things have been walled off on different platforms for a while.
I think there's some, and I say this myself as essentially a blogger and newsletter writer,
and you as well, like, a lot of us are still doing it, but overall, the idea that there is this
massive ecosystem that's magical, I think, has been long replaced by crappy SEO optimized
websites and, like, even larger spammy publishers and all the above. I think that that idea,
it's any kind of generic evergreen information, I think, should be synthetically generated,
and there's still always going to be room for good original information
and how it gets discovered is going to be different.
But already, you know,
SEO and Google search have not been the way to find good, exciting information.
Do you think the web's dead?
Do I think the web is already dead?
Alive and thriving.
I think it's, I wouldn't say alive and thriving.
But the web is the web and the web will be the web.
It would basically be my answer to that.
Let me ask you.
The web is the web is the web.
Like the web, the web has not been disintermediated.
I mean, it has to some extent been disintermediated, but it's not gone and it's not going anywhere.
Like, I do think that, because people who say the web is dead, what they mean is we're going to just live within platforms all the time.
And this idea that we'll have independent websites and stuff like that is gone.
And I just don't think that's the case.
No, I think we will.
I think they just will not be discovered via search.
That's the difference.
And they don't need to be. The same way, you know, you write a
newsletter and publish it on Substack and have people within the Substack network or your newsletter
subscribers share it across different platforms, like that's where discovery already happens for the most
part for me, and for us, I think, and then actual community building. But the idea that someone
is searching for technology news on Google search and is ever going to find me or us, I think, has long been
a dream. Well, we do have, I mean, I did recently set up Big Technology for SEO and we do have
some SEO results coming in. But typically they just flood in for, like, one story. Like, one story that
I get a lot of visits from is, like, does Nvidia have actual competitors. And I have a story
about the competition coming for Nvidia. And actually that does quite well in search. But I take your
point. And I think this is important. And I'll agree with you here. If you're going to set up a media
business or a business on the internet right now, you aren't going to, like, live through
the website. Like, the domain name isn't the most important thing. And for me, like, after coming from
BuzzFeed, setting up Big Technology as a newsletter and as a podcast, which is effectively
subscription media through subscription platforms, right, the podcast apps and the email inbox,
sort of that goes around this whole filter of, like, the search engines and the Facebooks to get you to
links. So I guess de facto, I'm kind of agreeing with you if you think about the business I've
set up. But if you're... So the web is dead. I don't know. How many times a day do you visit a
website? Like websites are still important. And I still have a website, even though I'm going
through the podcast and the email. Yeah, that's true. I think the tech meme homepage is still
a daily go-to for me. I sometimes go to NYTimes.com, I guess. Via platforms I end up on
websites, certainly. So yeah, I'll say I guess the big question around
generative AI search that I have wondered about, and that I don't quite see a path for, is, like, all
information up to today has been covered or written about or exists on the internet. But if there is
no economic incentive to create new information for anyone, then it does pose a question when new
things happen. And I'm going to put news as one of these things, but just like a new discovery
happens. What about entertainment? Yeah, stand by for that later in the show. Yeah. A little foreshadow.
Yeah, yeah. Basically, if things are synthetically generated and returned to users,
but I also think it's about time. I think a total, like, rethinking of economic incentives for
digital content creation is not the worst thing. It's just, will it all accrue only to OpenAI
and Google and a couple of other tech giants? Or will there be completely new models
to create it? That could be an exciting thing. Yeah, let me ask you one more question as we
close out this segment about it. Why does Google botch every single one of these introductions?
I mean, the Gemini thing was terrible. Their initial introduction of, like, Bard was bad. It cost
them $100 billion off the stock. I mean, this one, I guess if you do it at scale, at the scale of
Google search, you'll get some rock-eating answers. But it seems like every time
that Google has an introduction, they mess it up in some novel and extremely embarrassing way,
in a way that just doesn't happen with their competitors. This is what's so fascinating to me is
like we've all joked about Sundar and is he going to have a job and Google, are they lagging on
this? What I think is impressive or fascinating is that they keep pushing ahead. I mean, after
the Gemini image thing with the diverse Nazis, Bard itself, just the name alone,
much less the outputs a year ago, they're still pushing through.
Versus, and we're going to get into this, OpenAI, where it seemed every
single launch and rollout was, like, beautifully orchestrated,
and the products themselves actually worked right on launch, and everyone
just marveled at them. And even the ones that were demos, like Sora and video,
everyone was just like,
this is going to change the future. We love this. Sam Altman, you're great. And Google,
meanwhile, is just going strong. Leroy Jenkins.
Left and right. Yeah, Leroy Jenkins. But they're going. And like, to me, this is such a fast-moving
unpredictable space that I do believe in is going to fundamentally change technology, the entire
industry. So I'm still going to give them a bit of credit that they're still pushing forward.
Because any one of these, you could imagine any one of these mishaps in the past, like if you're a large giant corporation playing from a position of fear, these are the kind of things that could shut down projects and lay off divisions.
And I mean, these are bad.
Like these are not great, but it shows that they're actually, they're not afraid right now and that they're going to keep pushing on this.
And I don't think it's just for any kind of short term stock momentum.
I think they really, and even, it was interesting to me that I am a believer in this technology,
but also listening to Aaron Levie at the live podcast, and then you published the episode on Wednesday,
like it's sometimes even fascinating to me that I'm bullish and a believer, but when I see like
people who, like him, are running a multi-billion dollar company and have been in Silicon Valley
since the early days, like the mid-2000s of Web 2.0, like the look in their eyes in terms
of how transformative this is, it almost makes me like, am I missing something myself?
Am I, because I'm already bullish, I think this is going to be really interesting and change
business models. But like, they're like, this is really going to change everything. So clearly
the Kool-Aid is strong among the Silicon Valley folks. Yeah, let me just sum up what your position
here is: Google's good, so is its failure, the haters be damned. And the journalists are making a mountain out of a molehill here. I mean, I think that, like, some of the coverage has been just totally laughable. Like, the Times' headline on this story was, Google AI search errors cause a furor online. It's like, were people actually furious or were they kind of laughing? And the subhead is, the company's latest AI search feature has erroneously told users to eat glue and rocks, provoking a backlash.
Someone, like, tweeted that subhead and said, it's not erroneous if you're brave enough
to eat the rocks. And it's only one, only one rock, one small rock. That's a day, though, that adds up.
Yeah, a day. Come on, it's just a pebble. One thing, one thing that was interesting for me
is, I have been a big proponent, we've debated this, that I still think Reddit is the most
valuable data set on the internet. It is funny to me, and I will recognize this reminded me that,
like, there's entire subreddits, there's one, CircleJerkNYC, where the whole thing is just
shit posting, like entire threads of totally sarcastic commentary. I mean, it's
hilarious, and you recognize it as satire. That's why you subscribe to it, that's why you read it.
But how large language models are going to try to process that, this was a reminder to me
that, yeah, it's still going to be a challenge and an uphill climb for this.
Yes.
Let's say an appropriate bit of nuance. But still, Ranjan, you've argued well.
And I think you've got the best of me on this one.
So I'll give you a point there.
Just a pebble, just a small rock.
One pebble for dinner.
Just after we went off air last week, we talked about how the co-head of the superalignment
team at OpenAI, his name's Jan Leike,
resigned, and he just tweeted, I resigned. And we talked about how he had a bigger responsibility
to the world, if he cared about the safety of these AI systems, to tell us more than simply, I resigned.
Then over the weekend, he did. And more than that, we learned a little bit more about why
these OpenAI folks that are quitting with safety concerns aren't saying anything. So very quickly,
Leike says, I joined because I thought OpenAI would be the best place in the world to do this research.
However, I have been disagreeing with Open AI leadership about the company's core priorities for quite some time
until we finally reached a breaking point.
I believe much more of our bandwidth should be spent getting ready for the next generations of models,
on security, monitoring, preparedness, safety, adversarial robustness, superalignment, confidentiality, societal impact, and related topics.
These problems are quite hard to get right, and I am concerned that we aren't on a trajectory to get
there. Over the past few months, my team has been sailing against the wind. Sometimes we were struggling
for compute, and it was getting harder and harder to get this crucial research done. And he goes on,
and he says, OpenAI must become a safety-first AGI company. So first, before we get into this a little bit
deeper, what's going on here, what did you think when you saw his deeper explanation that you had called for
on Friday?
It just actually pushed me back into the what is actually happening behind the scenes.
That's all I can think about because, again, OpenAI moving too quickly, maybe, like,
and we'll get into using the voice of actresses that they don't have proper licensing for,
Google launching products out into the wild that tell you to eat rocks.
I get a lot of this stuff is not great.
And a lot of this stuff is it's risky behavior and it's going to require taking risk.
But the ideas of like super alignment, AGI, like what has them afraid?
Is it just personal problems and kind of infighting with Sam Altman?
Or is there really, really something that they have seen that they are, that they're worried about,
that they're like, that they're willing to very publicly leave about, I still don't understand
what's going on there.
So here's the deal.
There's two possibilities here.
One is that Jan Leike and Ilya Sutskever saw something pretty scary in what was going
on within Open AI and wanted to raise the alarm and ultimately just found like they were hitting
a wall internally and nothing has happened.
And like you mentioned last week, these people are basically
folks that have been dedicated to making AI safe.
So, you know, if they didn't talk specifically about what they saw, that that's kind of weird.
The other possibility, so I guess I'm sort of biased in terms of what I believe, because I think
the other possibility is quite likely.
The other possibility is that we are still at this elementary level of LLMs, and yet
it's going to be powerful for enterprises and powerful for businesses, and who knows, maybe
it will, like, be useful in search one day and we'll make Mark Zuckerberg a bunch of money.
But it's not anywhere close to achieving supernatural or superhuman capabilities.
And then these folks just kind of got frustrated after open AI was basically like,
can you justify what you're doing?
And they kept yelling safety without any like practical application.
And this isn't just me.
This is Yann LeCun, who's the chief AI scientist at Facebook.
He says, it seems to me that before urgently figuring out how to control AI systems
much smarter than us, we need to have the beginning of a hint of a design for a system
smarter than a house cat. Such a sense of urgency reveals an extremely distorted view of reality.
No wonder the more based members of the organization seek to marginalize the superalignment group.
It's as if someone said in 1925, we urgently need to figure out how to control aircraft that can
transport hundreds of passengers at near the speed of sound over the oceans.
It would have been difficult to make those long-haul passenger jets safe before the turbojet was
invented and before any aircraft had crossed the Atlantic nonstop.
Yet, we can now fly halfway around the world on two engine jets in complete safety.
It didn't require some sort of magical recipe for safety.
It took decades of engineering, careful engineering, and iterative refinements.
What do you think, Ranjan?
Yeah, I loved that quote from Yann.
Like, again, the idea of after the first flight saying, like, we need to start preparing for
safety for this far-off vision versus actually just getting there and along the way, having
the right people very carefully and, I mean, you know, the economic and capitalist incentives
to not have planes that crash all the time, though Boeing as of today, you know, it's a different
story right now. But I think it's not because of some oversight in the 1920s that
doors are flying off of Boeing airplanes. Yeah, it's the exact point. And I actually, it's kind of
been fascinating to me to see how this develops. And all right, I think maybe I am liking this because
it seems a little more mundane. It's not some, the Q-Star, they saw, like, an LLM come to life and
start running around the OpenAI office, like just something crazy happening. Actually,
it's mundane organizational politics: they have this vision and required resources.
And in reality, at this point, when Google's generative search is still telling us to eat rocks,
and even OpenAI and ChatGPT is temperamental sometimes and just doesn't respond
as smartly as it could, you're right, maybe that is it.
They just weren't getting resources anymore.
And then it's in their incentive to still go out making a big deal about it and still reminding
us that safety is a big priority. It is interesting to me in this whole, if that is the
case, like, are we in good hands right now? Are we in the right hands? Because one thing I
always think about is the development of social media in the early 2010s, there was no one
responsible at the wheel. Like, no one was thinking about safety of information. No one was saying,
it was just pure Chamath style at Facebook, as head of growth, like, growth. It was just as fast as
possible as tech driven as possible with no societal awareness versus at least now that
conversation, every company is thinking about it, everyone is thinking about it. Again, Google is being
mocked as we speak because of issues. So I don't know, do you think we're in
good hands, and we're in the right hands to steer us through this, if the Yann LeCun
framing is the right one? Yeah, I would say that we're in a very different situation
than we were with social media.
Where basically one platform or two platforms controlled everything.
Whereas, like, this is much, much more distributed.
And we already have, like, an open source social media model.
What do you do with that?
Nothing.
You need a network effect.
Whereas with AI, you can use open source to build things.
You can use open source to sort of distribute the impact here.
And so it's much, much, much less likely that this is going to be something that's
controlled by, let's say, an OpenAI
and Microsoft alone, or controlled by an Amazon or a Meta or Google.
It's going to be distributed.
And therefore, I think that that has a better chance of landing in, you know,
with reasonable application versus this sort of crapshoot when you just have one
company or two companies control things.
So to me, I think that's, that is probably where we are.
Now, like, the question is, like, are we in good hands with open AI?
I don't know.
I mean, they have definitely been very quick to release things that everybody
else has not, and so far, as far as I can tell, the world is still standing, and we haven't had, like,
mass hysteria and madness due to people's exposure to chatbots. And people are,
this is one of the things with misinformation and AI safety, and even this Google rock thing,
I think people are often much smarter than a lot of the commentators give them credit for being.
Like, if Google tells you to eat rocks, you're not going to eat rocks. If an AI tells you that
it wants to take your wife away, you're not going to go to your wife and be like,
you know, you should go date Bing right now because the AI told me to.
Like, people actually do have good defenses against this stuff.
And so far, so good.
I'm not sure if I'm going to fully agree there, I think.
I have a feeling there's probably people eating rocks as we speak just from seeing the tweet
because it was funny and then deciding I'm going to eat a small rock.
But they're doing it ironically, right?
So they're not like doing it because Google told them to.
Actually, that is fair, which what does that say about humanity?
But I guess in terms of that good hands, open AI certainly their behavior this week,
I think there's a couple of things that we saw that definitely dig into the question of,
are we in good hands?
Right.
Yeah.
So that is a perfect lead into this next bit of things where, okay, so first of all,
just I'll set it up a little bit and I'm going to hand to you.
But basically, there is a vertical on Vox.
It's called Future Perfect.
And it's inspired by effective altruism,
which to me is a very interesting thing
that there's actually a part of a publication
that says it's EA inspired.
Like, I don't, really? What?
Like, that's very bizarre to me.
I still haven't fully wrapped my head around it.
But they have a great reporter there, Kelsey Piper,
who then went on, after the I resigned tweet,
to dig up
why all these OpenAI ex-safety people have been so quiet and said nothing about their concerns.
And that is, you know, thinking about whether we're in good hands or not, and whether OpenAI
really should be trusted for handling, like, the safety of this stuff, is a little bit
concerning. So what happened, Ranjan? Yeah. So when they leave, there's a non-disclosure
agreement where there's a provision that you can never speak ill about, you can never talk about
the things you did or saw or speak negatively about OpenAI, forever. Otherwise, even the vested
equity that you had and certainly unvested equity will be clawed back. And you can imagine for
anyone who sat at OpenAI at any time, especially if you're an early employee, that's a lot of
money for a lot of people, especially at the current valuation. You know, what is it, 90 billion or 95
billion? Like, you imagine you're someone who was there when it was in the
tens or hundreds of millions even. That's insane amounts of money. So people have kept
quiet. And, you know, already the whole world around, like, non-compete agreements and
NDAs when you leave a company, it's already been one where there's been a lot of chatter around,
are these ethical, are these the right way to go? But it's almost never been seen to have an
NDA in perpetuity preventing you from talking badly about a company. I mean, you report on
finance. You've been in the finance world. Has there ever been a situation where you can get
your shares, your vested shares, clawed back if you don't sign an NDA and a non-disparagement?
That to me is nuts. I mean, these people work for you for years. You're going to claw back the shares
because they won't sign an additional document about disparagement. That's crazy. See, that part to me,
for some kind of time period actually does make sense.
And that's the kind of thing you'll see because especially if you have proprietary information,
when you leave a company, even if it's vested equity for a private company,
that to me is actually not totally unreasonable that for a one year period or something
that you cannot use any of the proprietary knowledge you had to speak ill about the company.
That is not that crazy to me.
Because, again, it's like, you almost have the incentive to not do that, because
otherwise your equity is going to go to shit anyway. So I think, like, that part is not it. It's the
in perpetuity. That's the kind of thing you never see. Right. And this was a line that stood
out, stood out to me. And I'm curious what you think about it. So Kelsey writes, all of this is highly
ironic for a company that initially advertised itself as OpenAI. That is, as committed in its
mission statements to building powerful systems in a transparent and accountable manner.
So, Ranjan, you tell me this.
I mean, is this like a disproof of the fact that they, because they do still talk a big
game on safety? Is this almost like a refutation of their interest in that, if they're, you know,
obviously a nonprofit dedicated to advancing AI safely, but don't freaking say a word
or your equity's gone?
Yeah.
No, I mean, I feel, I think we had talked about they should just rename themselves to
ChatGPT as a corporation. Like, the pretense of openness and helping the world and all that stuff,
I feel, is so long gone that to me this is just another sign that they're a ruthless corporation,
and one that has ruthlessly been delivering excellent products and,
you know, finding customers and pushing the industry forward, which is fine. And it's
just, again, to move away from this whole open AI thing. But also, it is a sign that not only are
they a ruthless corporation, they're among the most ruthless to have these kinds of NDAs.
Now, here is another thorny ethical question for you that's not as easy to answer,
which is, is it incumbent upon the researchers on these safety and alignment teams to give up their
equity, not sign these non-disparagements? And if they're really concerned about what this
AI is going to do to us, then to actually go out and say it.
Absolutely.
100%.
If you know that you have worked on something that is potentially going to end and kill all
of humanity, I think giving up your shares that will be worthless when we are all taken
over by robots and artificial intelligence is probably a reasonable thing to ask
ethically of people who claim to care about safety anyway.
So yes, give up your OpenAI shares if you saw Q-Star running around the OpenAI office murdering people.
Damn, but so much money, though.
Yeah, I don't know.
I mean that, but that valuation.
End of world.
Put that valuation.
Millions of dollars and shares.
95 bill.
It's tough.
So, speaking of ruthlessness and deciding to push the boundaries as much as possible
in order to grow: a fascinating thing happened this past week, which is that OpenAI released that
flirty chatbot that we spoke about last week. And then, sorry, and then Sam Altman
tweeted "her," which we noted. And then Scarlett Johansson is like, hey, wait a second, that sounds a lot
like me, and put a legal team together and basically said, I'm going to sue OpenAI. And it's even
more than that. Basically, what was happening in the background was that OpenAI was trying to
convince Scarlett Johansson to voice this system, or to somehow partner with them or market
it, and she declined, and they still released a voice that sounded a lot like her. Very, very quickly,
from her statement: Last September, I received an offer from Sam Altman, who wanted to hire me
to voice the current ChatGPT 4.0 system. He told me that he felt that
by my voicing the system, I could bridge the gap between tech companies and creatives and
help consumers feel comfortable with the seismic shift concerning humans and AI. He said he felt
my voice would be comforting to people. After much consideration, and for personal reasons,
I declined the offer. Nine months later, my friends, family, and the general public all noted
how much the newest system, named Sky, sounded like me. When I heard the released demo, I was shocked,
angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily
similar to mine that my closest friends and news outlets could not tell the difference.
All right.
I listened to Scarlett Johansson in Her.
She voices this computer that a guy, Joaquin Phoenix, falls in love with.
And I listened to this OpenAI demo.
The intonations were pretty similar.
I wouldn't say it's a dead ringer for Scarlett Johansson's voice,
but I could see how some people could be confused, especially after Altman tweets "her"
afterward.
I'm curious, Ranjan, what you think.
Do you think what OpenAI did was above board?
I mean, I'll give you the explanation for why it might possibly be okay.
I mean, they had this vision for the product.
They wanted to get Scarlett Johansson involved.
She wouldn't come on board.
They had somebody else voice it.
Of course, it was going to be this flirty AI assistant, similar to her.
If it wasn't voiced by Scarlett Johansson herself,
it was going to be voiced by somebody who was going to try to emulate that.
So what's the problem with that?
What's your perspective?
I think this is going to be a really, really fascinating legal case for a couple of reasons.
So first, Nitasha Tiku at the Washington Post actually kind of has a play-by-play
where it seems to exonerate OpenAI, because they had hired this actress a long time ago,
long before they approached Scarlett Johansson.
It's not like she declined and then they went out
and found an actress who sounded like her.
The actress herself says she was not given any instruction to sound like Scarlett Johansson. So those things
make it seem as though, you know, this just happens to be a coincidence, and in reality,
OpenAI is fine. On the other side, you have Sam Altman's tweet of "her," which is honestly the most damning
part of this, because whether or not she accidentally sounded like Scarlett Johansson, he is
very specifically, you know, tying Scarlett Johansson's voice to their new multimodal,
voice-driven product. So that alone makes it clear, from a business perspective, that they are working to
associate the two of those voices, whether or not that was the initial intention. The fact that they
did approach Scarlett Johansson is another sign that they were trying to do this. And my favorite
part of all this is that Bette Midler makes her way into this story. Because, for those
who are not familiar, she's the singer of Wind Beneath My Wings
and an 80s and 90s movie star.
Maybe she's still doing stuff on Broadway, I'm not sure.
There's a precedent here where, back in the late 80s,
Ford, the motor company, approached her to sing a song
that was already hers for a commercial.
She declined.
They got the rights to the song and then had a singer
who was not her sing it exactly in her style.
And so she sued,
and she actually won.
And it was clear that the intent of the product, of the output, was to mimic or copy the celebrity
and kind of fool people into thinking the celebrity endorses this product.
And Sam Altman, I think, completely set them up to be guilty in this.
Those three letters in that one tweet, "her": from a pure business and legal perspective,
he is trying to convince users to associate their voice product with Scarlett Johansson.
So here's a tweet from Mike Solana.
He says, to recap an unknown voice actress using her own voice just lost the greatest opportunity of her life because one of the wealthiest, most famous actresses alive was told by her friends that it sounded like her.
This didn't happen.
God.
I mean, it is funny to me, because I have seen that now people who typically are not
siding with everyday creators and indie creators against large technology corporations
are suddenly siding with them. It is interesting,
the power dynamics here, because the idea that, like, you know, an unknown actress is suffering
at the hands of evil Scarlett Johansson, in defense of $95 billion OpenAI and Sam Altman.
It does, I'll agree, create some funny power dynamics, but I don't know. And the idea that OpenAI is
the defender of the indie creator, when we've just talked about whether they're destroying the entire web
and all incentives for creating any kind of content, I do think that's a bit rich. Well, this is the
thing, right? Silicon Valley, obviously, like, the tech industry, has built a lot of very cool things.
We're using them right now, right? But there's also been, shall we say, a disregard for intellectual
property. A complete and total disregard. Like, if there are moments where they could use
copyrighted material, I mean, the AI thing has made this very clear, they will just use it.
And I don't think there's a lot of value, there's certainly not a value in the tech industry
placed on, let's say, journalistic content, news reporting. And the tech industry has
basically felt like it's its own right to effectively take this content and do with it what it
pleases, because it doesn't see a lot of value in it, except when it's delivered to users in
its own products. And by the way, users love this stuff. Like, a user would much rather
be on, like, a Twitter timeline or on Google News than on a publisher site, almost always. And, like, you
can have, I don't know, 50 publishers, you know, contributing value to Google News and one of
them getting the click. And of course the traffic is good. But there have been other instances,
for instance, like training on YouTube, where it's like, okay, well, we don't really care about
copyright, you know, which we think that OpenAI has done. And it's just like, all right,
we're going to train on it. Now it gets a little bit more
complicated when you go into the entertainment industry, right? And I had a tweet that was basically, like, the entertainment industry is starting to finally feel what the news media industry has felt for years. And it's like, Scarlett Johansson is much more beloved by the public than any single journalist, like, probably by a few standard deviations. And doing things where, like, you're going to try to take or emulate her voice for your product, after asking her and she said no, and tweeting "her," like, the tech industry
I think is going to find out that there is a line here, and this might be it.
What do you think?
I think this is it.
I 100%.
And again, it's that one tweet.
Otherwise, there's so much deniability around it's another actress.
We hired a long time ago.
We didn't mean for that to happen.
Like, that's not what we intended.
That's just her voice.
You're a bad person for saying she can't have her own voice, this unknown actress.
But they didn't say that.
they said "her," he said "her." He wanted everyone, he marketed, that's marketing. That is genuinely, and I think
this is going to be a really interesting case, and I think it's going to set precedent
for how existing content and IP can be used in training these models. And I think
it's going to be a good thing, because that's always needed to happen, even on the image side. Like,
Adobe has been very, very clear and transparent about how they built Firefly, versus even, like,
Stable Diffusion and DALL-E, which have certainly been a little more aggressive. I think we are going to
have to move towards a world where LLMs are more carefully built, and the foundation models are
carefully built. And I think this is going to set that precedent, and it's a good thing. And Scarlett Johansson's
name will live in the annals of tech history forever. Absolutely. And shout out to Bobby Allyn
at NPR, friend of the show, who got that scoop. During the Hollywood film
strikes, right, the actors' and the writers' strikes, I was not sure why they were negotiating for, like, AI rights. And I just kind of thought it was way premature. And now I'm just like, oh, they were actually right on time. And the industry, the Hollywood industry, is starting to see it. And that's why this is resonating so much: that if this continues to go on, there might be no incentive for humans to make film, make TV. Like, we all love watching TV. We all love watching films. And we get to see those films because there's a financial
incentive for the creators to make those films and make those television shows. And if, you know,
effectively some of these roles can be co-opted by artificial intelligence, then the whole
industry, you know, could potentially fall apart. And there's this guy, David Cronenberg,
who's a filmmaker, a writer and director. And he's basically saying, like, this is coming.
He goes, I think artificial intelligence can enhance computer generated imagery. And he says it's
quite shocking what can be done even now with the beginning of AI. He says the whole idea of
productions and actors and so on will be gone. That's the promise and the threat of artificial
intelligence. If a person can write it in enough detail, the movie will appear. I've always felt
like statements like this are kind of fanciful and a little bit, like, nuts. But also, just
seeing this Scarlett Johansson thing play out, I'm like, oh, maybe he has a point. I don't know that
it's going to be as transformative as he says, though I think it will be. But I think already, like, voice
acting, especially for advertisements, is something where there's no doubt in my mind that celebrities
will license their voice. You will pay them for the voice, on some kind of marketplace or through
some kind of arrangement, create the commercial on your end with their voice, and send it back to them
for approval. It's better for them, it's better for you, you pay, and everyone's happy. I think
those kinds of arrangements, I think influencers and actors will start licensing out,
certainly their voice, and at a certain point, once video is good, their likeness as well, for exactly
these kinds of reasons. And especially in the advertising world, I think all that's going to happen.
I think for like the actual creation of movies, I still am more optimistic that this just makes
it possible for more people to create great stuff that they otherwise would not have been able to
make. So it's going to make the market more competitive and allow more people to make very,
very professional looking content. And I think that's a good thing. I do think I was thinking that like
at a certain point it could be like when you go watch a play. We live here in New York like you're on
Broadway or even at an off Broadway theater. There's something different about seeing people up
close acting and like live. And that that's like a special experience. And,
maybe at a certain point that's the difference like watching real actors acting something out
even on film will be this kind of like other more art house type thing versus or just a different
kind of experience versus the I mean to me more every Marvel movie should be AI created and
already kind of has been for how bad the last few have been like I think the the whole market
it'll kind of bifurcate and you know you'll have the very high input live real actors and then
you'll have just the kind of like fake stuff on an ad or whatever big blockbuster that's kind of
crappy anyways or potential an interactive movie that you're sort of creating with AI and it's more
entertaining and more compelling than anything you could watch on screen and in a theater but who knows
maybe you're getting it in myself I would love that I would love that all right and we
mentioned Techmeme earlier in the show, and before we go to break, I just want to thank
TechMeme for showcasing Big Technology podcast on its homepage. We have a great partnership going
on. And that's because TechMeme is my go-to site for finding out what's happening in the tech
world as it's happening. And I think you'll find great value from it too. So that's Techmeme.com,
T-E-C-H-M-E-M-E.com. All right, we're going to be back after this, and we're going to talk about
NVIDIA. Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast filled with business,
tech news and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now they have a daily podcast called The Hustle Daily Show
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So search for The Hustle Daily Show in your favorite podcast app,
like the one you're using right now.
So let's talk about NVIDIA before we head out for the weekend.
Crazy earnings. Crazy, crazy earnings. I mean, they tripled their revenue to $26 billion. Profit was
seven times what it was in the quarter a year ago: $14.88 billion. Quarterly records for
Nvidia. The stock's above a thousand dollars a share. They're worth $2.63 trillion. They're
about to do a 10-for-1 stock split, which is going to make their stock more affordable, and
retail investors are going to just dive in. I met a woman at a bagel shop this week who's
And now you're going to see her and a bunch of others jump in.
Obviously, exceptional results.
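For a sense of scale, here's a quick back-of-the-envelope pass over the figures quoted in this segment; a sketch only, using the numbers as cited on the show rather than independently verified:

```python
# Back-of-the-envelope math on the figures quoted in this episode.
# Dollar amounts are in billions and are as cited, not independently verified.

quarterly_revenue = 26.0   # roughly triple the year-ago quarter
prior_year_revenue = quarterly_revenue / 3
quarterly_profit = 14.88   # roughly 7x the year-ago quarter
prior_year_profit = quarterly_profit / 7

net_margin = quarterly_profit / quarterly_revenue
print(f"Implied net margin: {net_margin:.0%}")  # ~57%

# A 10-for-1 split multiplies the share count by 10 and divides the price
# by 10; market cap is unchanged, it only lowers the per-share sticker price.
pre_split_price = 1000.0
post_split_price = pre_split_price / 10
print(f"Post-split price per share: ${post_split_price:.0f}")  # $100
```

A net margin near 57% on $26 billion of quarterly revenue is the kind of number driving the conversation that follows.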
The question is, can NVIDIA sustain this?
And is this something that is the valuation out of sync with where this technology is going?
Because, as some people have pointed out, and this is from Sequoia, via the Wall Street Journal:
Sequoia Capital estimated in March that the industry
put $50 billion into Nvidia's chips to train large language models, but generative AI
startups have only made $3 billion in revenue. And Nvidia has 80% of the AI chip market. So what did you think
about the Nvidia earnings, this massive stock that's just driving the market? And do you think
that this is sustainable? All right. I may be setting myself up here, because no one has certainly
ever made money speaking ill of Nvidia, but I think this is kind of on its last
legs, and here's why. Again, my favorite was from Aaron Levie, around the
statistic that the data center category rose 427 percent year on year, to $22.6 billion
in revenue. And he's like, these two numbers have never been next to each other in the
history of capitalism. Seeing that kind of growth at that scale is insane. To me, the stock split
makes me uncomfortable. And, like, already, once you're pushing into the
financial engineering side just to kind of keep the stock afloat, your numbers are amazing,
and I agree, but that already starts to be a warning sign to me. Because, I think
in the short term, it will push the stock significantly higher and add a
tailwind far beyond these numbers themselves. But the stock is still pricey. And then we get
into they are basically the monopoly at the center of this entire generative AI boom. No one else
is making money yet except Nvidia and they're making a lot of money. But I do think all this
investment is going to, there's still going to be a trough of disillusionment around the corner
when people start to realize that making money off of this is going to be hard.
and is going to take some time to get the business models right,
to get the actual user experiences right.
And I think the moment that happens, that, I mean,
these numbers that you see,
because the growth is already priced in,
it's going to start to, I don't know,
it'll start to turn the corner on NVIDIA.
But maybe there is an ROI there.
If we see AI get baked into every website we visit
and every, even our PCs itself,
I mean, Microsoft had this PC event. They started creating these AI PCs, and they do some crazy things, like this feature called Recall, where they're basically taking screenshots of everything you do, and it can help you remember things that you visited and, you know, what sort of things you were doing on your computer, in case you forget or you want to see it again.
But I guess, then, a lot of the computing takes place on the PC itself versus in the cloud, which is what Nvidia serves.
Well, but we had talked about, like, you spoke with Aaron Levie, we talked about it last week: the cost of compute is going to go down and down and down. And, like, again, GPT-4o, 4 Omni, is 50% cheaper than GPT-4 was when it first launched. Like, that's the main thing: we're going to build smarter models, smaller models, I think. So then all the talk around, like, we're going to need to rethink our entire electricity
infrastructure to feed this AI machine as it gets into every experience that we have,
I actually don't think is going to be as profound as people are saying it's going to be.
And that's why I think Nvidia is the one that gets hurt on that.
We all benefit.
But I think it's going to compute's going to get smarter.
Models are going to get cheaper.
And that's good for everyone, all of us except for Nvidia.
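That cost trajectory can be sketched with a toy calculation. The single 50% cut (GPT-4 to GPT-4o) is from the discussion above; assuming it repeats at every release is purely illustrative:

```python
# Toy projection: if the price of a fixed inference workload halved at each
# major model release, how fast would it fall? Repeating the one observed
# 50% cut is an assumption for illustration, not a forecast.

initial_cost = 100.0  # arbitrary cost units for some fixed workload
for halvings in range(5):
    cost = initial_cost * (0.5 ** halvings)
    print(f"after {halvings} halvings: {cost:.2f}")
# After four halvings, the same workload costs 6.25 units, about 6% of the
# original, the kind of deflation that works against raw chip demand per task.
```

That compounding decline, if it materializes, is the core of the bear case being made here: cheaper models mean each task needs less of what Nvidia sells.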
Yeah, it's like everybody needs an AI strategy, so they're doing whatever they can to buy these chips.
But eventually it will settle into
the people that can make money from this and the ones that can't.
And, like, I think even about Meta. Like, we praise Meta,
but Meta is going to have 650,000 Nvidia GPUs by the end of this year.
And they started buying early, you know, for Reels.
But where is the ROI there for them?
I mean, they have these chatbots and their apps and, you know,
they've built this open source model.
But eventually are they just going to be like, how exactly are we supposed to make money from this?
Maybe Reality Labs.
I don't know.
What do you think?
Well, no, but this is why.
The best thing, I forget where I heard it, is that Nvidia trades like a commodities company, not a technology company. It's trading a commodity, GPU chips, that is in what, in commodities, is called a super cycle: like with lithium or copper or something, there's some exogenous use case that creates just such insane demand that the price of the underlying thing you're trading skyrockets.
So any company that happened to invest a lot of money to build mines or find this commodity, their profits become insane.
And I think that's exactly what's happening here.
And in any super cycle, there's always a downside on the other end.
And I think that's exactly what's going to happen.
Suddenly, a lot of people are going to be sitting on a lot of chips that are still useful, but they'll start realizing that they're not cranking out revenue in the way they hoped they would.
So then they're going to have to kind of rethink a little bit.
And I think there's going to be a lot of generative AI startups that also probably raised a lot of money and are hoarding compute right now that are just going to go out of business.
Inflection, Stability.
They're already seeing that.
We're already seeing it.
Yeah, yeah, exactly.
So I think, again, each one of these situations is good for everyone except Nvidia.
So let's just play a short game before we end.
All right.
NVIDIA is currently 2.62 trillion market cap.
Apple's $2.9 trillion.
Does NVIDIA surpass Apple?
On what time horizon, or just ever?
Ever.
Yes.
I definitely think in the short term.
I think this is going to play out over the next year.
And I think when this stock splits, already, like you said with the bagel shop,
everyone I know talks about Nvidia, asks, do you own Nvidia,
like, brags about when they got into Nvidia.
It is like the super meme stock at the moment.
And again, it's one of those things that's almost kind of frustrating
because it is a very strong sound business
that actually has built incredible products
and has positioned itself incredibly well
for this transformative moment.
But the meme stock, that story just never ends well.
But I do think it will surpass Apple at some point, especially after the stock split,
unless Apple announces Siri is fixed in a couple of weeks at WWDC, and then it's to the moon.
Okay. So then next one, is NVIDIA going to surpass Microsoft? Is there a chance that
Nvidia becomes the most valuable company in the world? Microsoft's $3.2 trillion market cap.
I think so. I think so. Again, same thesis: there are so many short-term
tailwinds. The stock split is being made to bring in retail even more. This is going to,
Nvidia is going to be the story five years, seven years from now when we talk about the AI
mania. And again, like any other mania, I think there's an underlying story here. But I think
Nvidia, it can definitely go a lot higher right now. That's crazy. I mean, obviously, it's not
investment advice. So people take it with a grain of salt, but, or take it for what it's worth. But
my goodness. I mean, if
Nvidia becomes the most valuable company in the world,
if it happens, and we don't have proven
ROI on the technology that's
boosting this, it's just, it's going to be
one of the craziest moments in financial history,
I think, and I don't think it's going to end well.
But many people have said that about Nvidia before,
and it keeps going up. Number go up.
That's the thing. But again,
it's already a very, very expensive
stock right now. That's the main thing, you know:
it's grown incredibly,
but on any kind of financial metric or financial ratio, it's incredibly expensive.
And I think bringing in more retail and making it more accessible is only
going to inflate that a bit more.
But at some point, its customers have to see ROI, and that's going to be interesting.
And telling people to eat rocks is not going to get us there, at least in the near term.
Well, yes.
One small pebble, it seems, at least.
One small rock.
Appropriate for a Memorial Day barbecue.
This Memorial Day, yeah, when you're grilling and you got the dogs and the burgers on
the grill, make sure to add this side dish of choice.
All right, Ranjan.
Great speaking with you as always.
Have a great weekend.
And looking forward to talking with you, not next week, but the week after.
Yep.
See you in a couple weeks.
Yes.
All right, folks.
Next week we have Brian McCullough of Techmeme Ride Home coming in and filling in for Ranjan, so that'll be fun.
We also have Louise Matsakis coming on to talk about two of our most intriguing companies, Shein and Temu.
She's done a deep dive on them for Big Technology, and we're going to talk about it on Wednesday.
All right, everybody, thank you so much for listening.
Have a great weekend, and we'll see you next time on Big Technology Podcast.