Big Technology Podcast - OpenAI’s o1 Reasoning Model Debuts, $2,000 ChatGPT, AI News Anchors
Episode Date: September 13, 2024. Parmy Olson of Bloomberg joins for our weekly discussion of the latest tech news. She's also the author of the new book, Supremacy: AI, ChatGPT, and the Race That Will Change the World. We cover 1) OpenAI's release of its new o1 model, which can do reasoning, also known as Q* or Strawberry 2) o1's features and what makes it different 3) Businesses struggling to find o1 uses 4) Investor concerns over AI 5) A precursor to AI agents? 6) OpenAI raising at a $150 billion valuation now 7) Would people pay $2,000 per month for ChatGPT? 8) When will OpenAI have to return its investment? 9) Lessons about Sam Altman and Demis Hassabis from Parmy's book 10) AI news anchor avatars --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
OpenAI's new reasoning model, called o1, has arrived.
Is it a game changer or more of the same?
Plus, will OpenAI charge some users $2,000 per month to use ChatGPT?
And by the way, will it raise at $150 billion valuation?
And our AI avatar is about to do TV news.
That's coming up with Bloomberg tech columnist Parmy Olson right after this.
Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format.
OpenAI has just released its reasoning model.
It was code-named Strawberry, but now it's being called o1.
And we're going to talk all about what it is, what it means, and where OpenAI and the AI race go from here.
And we have the perfect guest host for us to do it.
Parmy Olson is here.
She's a technology columnist with Bloomberg.
And she's also the author of a new book out this week.
It's called Supremacy: AI, ChatGPT, and the Race That Will Change the World.
And what a week to release a book because we are just like here flooded with AI news.
Parmy, great to see you.
Welcome to the show.
Thank you.
It's wonderful to be here.
I'm a big fan of the show.
So just so pleased to be able to talk to you.
Awesome.
Well, thank you and thanks for coming on.
Let's get to the big news, which is that we've been hearing rumors that OpenAI was going to release this Strawberry model, which was initially called Q*, and there were worries about it around the Sam Altman ouster. And it's here.
And basically the big thing about this model is that it does reasoning.
And so you can ask it a question, and you can actually kind of see the way that it thinks through the problem.
It goes through a bunch of different steps.
And that's enabled it to be much, much better, more accurate, more competent, smarter than any previous model.
So I'm, like, looking at some of the charts that OpenAI put out with the release.
I'm looking at accuracy.
So if you think about competition math, GPT-4o was getting a 13.4 accuracy score on competition math.
But this thing, in the way that it can think through the different problems, is getting an 83.3 score on accuracy with competition math.
So it jumped from 13.4 to 83.3. On competition code, it goes from an 11 to an 89.
And on PhD-level science questions, actually, GPT-4o wasn't bad at a 56, but o1 gets a 78. And o1-preview, which is the model that's out now, gets a 78.3, where an expert human gets a 69.7 accuracy score. I think that's just the percent of questions they get accurate.
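For reference, the score jumps being read out here can be tabulated in a couple of lines of Python. These are the numbers as quoted on the show, rounded as spoken, not independently verified:

```python
# Benchmark scores as quoted on the show: (GPT-4o, o1).
# Treat these as read-out figures, not an independent evaluation.
SCORES = {
    "competition math": (13.4, 83.3),
    "competition code": (11.0, 89.0),
    "PhD-level science (GPQA)": (56.0, 78.0),
}

def improvements(scores):
    """Return {benchmark: (absolute_gain, relative_multiple)}."""
    return {
        name: (round(o1 - gpt4o, 1), round(o1 / gpt4o, 1))
        for name, (gpt4o, o1) in scores.items()
    }

if __name__ == "__main__":
    for name, (gain, mult) in improvements(SCORES).items():
        print(f"{name}: +{gain} points ({mult}x)")
```

So the jump on competition math works out to roughly a sixfold improvement, and competition code to roughly eightfold.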
That's the accuracy score. Parmy, what do you think about the release of these models?
Do they live up to the hype? I mean, we basically saw a moment where, you know, was Sam Altman pushed out of OpenAI because they thought he wasn't being careful about this type of model? Like, that was one of the assumptions. Are these a big step forward? What do you think about the release of these models?
Well, first of all, I would put this into the context of the huge amount of pressure that OpenAI has been under for the last year to get out something that's going to cause as much awe as ChatGPT did in November 2022, and DALL-E before that.
And I thought it was really interesting that in the last few months, we've been getting
the sense that, hey, OpenAI has been announcing a lot of things, but it hasn't actually been launching the things that it's been announcing.
So, for example, you know, there was the whole Sora video thing.
Yeah, where is that?
Yeah, where is, where did that?
Where was that?
Even, actually, even GPT-4o, the voice-activated, the sort of voice conversational interface. I just met people in San Francisco last week who were like, yeah, I still haven't been able to try that yet. It hasn't really fully rolled out.
Yeah, Parmy, can I tell you something about that? So first of all, like, I've been wondering where it is, because I've been signed up to ChatGPT Premium for a while, hoping that I would get that new voice interface. And I haven't gotten it yet.
And I, over the past, so I'll just say this, over the past 24 hours, I've been monitoring the chatter around the Strawberry and o1 release.
And my favorite exchange is Sam Altman announcing, I don't know if you've seen this.
He says, rollout complete, it's live for 100% of ChatGPT Plus and Team users now.
This is of the new reasoning model.
One of the first replies is from this guy Matt Paulson who goes, when are we getting the new voice features?
Well, yeah. Do the thing you said you were going to do first.
Yeah. And Sam responds, hey, how about a couple weeks of gratitude for magic intelligence in the sky?
And then you can have more toys soon. I mean, it was interesting, but also just like so revealing of the frustration.
I think that's what you're noting, the fact that, like, we haven't seen Sora released broadly.
We haven't seen the voice features released broadly.
And I think that's kind of the trap that Sam and his company have kind of put themselves in, by just releasing a lot of demoware, which is very exciting.
But if you start getting people too excited for too long a time and then you don't give
them anything, then they start to get a little bit frustrated and a little bit cynical of
what's actually going on behind the curtain.
And you know what?
It's funny what you just said, that you've been signed up to premium.
I have done exactly the same thing because I signed up to Claude, and I'm paying for two
subscriptions to these things.
Yes, me too.
I kept ChatGPT Plus because I wanted to try out the voice interface, and it still hasn't shown up on my phone either. So here we go.
Parmy, it's funny, because I also, I was waiting for voice forever, and it didn't come. So I unsubscribed from Plus, and my subscription ends today. This is not planned. So I've been able to actually try out o1, and it's been pretty interesting, but still no voice.
So you've got only 12 hours left before you're...
Exactly. You know, there's like a monthly cap, a very small monthly cap or something like that, or a weekly cap. And I have, like, not cared about rate limits. I've been, like, testing the crap out of this thing, because I'm like, all right.
You might as well.
Yeah. And I'll just
share like my initial reaction to it. And I'm curious what you think about it. It feels like this thing
is extremely math and code based. Like that's where the actual breakthroughs are going to be.
Like I said, when I was reading these benchmarks, it's competition math, competition code, PhD-level science questions that it passes.
And by the way, when you're doing the queries, you can actually see it pause and think, think through each step.
Do we want to use the word think?
Yeah, I do.
Yeah, okay, go ahead.
You can correct me on it.
But you can see it, like, go and tackle, now I'm getting nervous about think.
But you can see it tackle each different step.
And actually, if you look at the human preferences, like, where they'd want to use these new o1-preview models versus GPT-4o: they actually prefer GPT-4o for personal writing, it's about even on editing text, but where this o1 model really, really thrives is, it's much more preferred for computer programming, data analysis, and mathematical calculation. And that's why I think that, when you get into this,
like people start to judge, is this a step forward? The people who are using it for more
technical activities are really going to see a jump, and the people who are using it for more
stuff like I would, like writing or editing text, aren't going to see it. So there might be a
bias among journalists to pan this thing, whereas like people in the technical fields are going
to be seeing that there's real advances. Well, I think it's a good strategy, what you just described: that if this is a model that is perhaps more useful for people who are doing data analysis, they're scientists, they're coders, then they've got a specific model which just has a little bit more of a clear utility for them. Whereas if you're in marketing or you're in customer service, then you're more likely to use the previous model, something like GPT-4o, which is a lot more sort of language-oriented, as opposed to reasoning-oriented. I think that's no bad thing in terms of
business use case, because I think till now, there's been this kind of
this effort to try and create this, well, we started off with this ambition to create artificial
general intelligence, right? That was the founding of OpenAI. We want to create AI that has the same
broad cognitive capabilities as the human mind. And I think that actually has a big downside
when you're actually trying to sell something packaged like that to businesses, which is that
you're giving them this Swiss Army knife of a tool that has all these general capabilities.
And the risk is that your end customer can end up just becoming paralyzed with indecision on what to do.
Like if you talk to some businesses who are trying to figure out how to use generative AI models,
I heard from one bank, for instance, they asked, they put out a survey to their employees just saying,
how should we use generative AI? And they got more than a thousand responses back from their employees, which is great in one way.
But in another way, it actually just makes everything take a bit longer in terms of figuring out a strategy on how best to use it.
So I actually think from what you're describing,
if the models are starting to kind of fracture a little bit
and there's going to be a sense going forward
that you can use this type of model for mathematical-based processes
and data analysis,
and then you can use this kind for language processing.
Maybe that kind of runs counter to the original ideology of, let's build a broad, godlike, general-level intelligent AI.
Maybe it's moving away from that.
But I think, like, practically speaking, that kind of makes a bit more sense.
So one of the things I've been thinking about as o1 has been released, or Strawberry has been released,
has been, okay, so the models are smarter.
Where's the application?
I think you sort of brought that up.
It's like, are we going to, this is one of the things that has been sort of plaguing the AI industry, even though it's doing well, obviously, but it's been like, okay, the models are getting smarter and they do magical things, but we still don't really know how to use them in practice, right? This is actually something that I highlighted. There was a Groq engineer who said, o1 seems powerful so far, but everyone is kind of unsure what to do with it, kind of a humorous place to be after all the speculation. So what do you think happens as these models get
smarter in fields like science and coding.
Yeah, I think it's really interesting, because there's this race in Silicon Valley to make everything more and more capable, which is disconnected, I think, a little bit from the sentiment among their enterprise customers, like businesses in the rest of the world, who are just like, we don't necessarily need AI that's more capable.
We just, like, we're quite happy with what you've already put out there.
Like, that is already pretty impressive as a step change up from what was even available two years ago.
So my sense over the last year has been that one thing that's been lacking from the AI model vendors, like OpenAI, like Anthropic, like Google, Microsoft with Azure, has been just to do a little bit more handholding with their business customers on how to actually implement the current models that are available.
But I think there's just this kind of, you know, rabid enthusiasm to try and just make these models smarter and smarter.
You've talked on your show before about the arms race.
It's kind of had this, it's got this kind of self-perpetuating cycle. You know, even, like, Salesforce last night, their announcement got completely overshadowed by OpenAI, but they announced Agentforce, which is, basically, what they said is the very first AI platform for businesses that has autonomous agents.
There are startups that have been talking about doing this, but haven't released it.
And I haven't seen anything similar. I don't know if you have, but from Google or OpenAI or even Anthropic, people are talking about it.
And it looks like Salesforce is the first to do it.
So there's this real, and they're very proud of that.
And so there's a real competition.
I'm just going to call bullshit on that.
I don't think Salesforce is going to be the leader in AI agents or even the pioneer.
I've heard people talk about things as AI agents, and then you take a look at what it is, and it's like, there's no agent to it.
Well, and what does the agent even mean, right?
I mean, when I was talking to them about it, I was saying, well, are you using the word bot and agent interchangeably?
And they said, oh, no, no, no, bots, those are old. That's not, the agents are new. That's the new thing.
So a bot would be if you're talking to a customer service chatbot, and it just kind of is based on a large language model and uses predictive technology to say something that sounds about right. Whereas an agent can actually resolve your issue by retrieving data about you, the customer, from the Salesforce CRM and the Data Cloud, which is one of their big services that came out two years ago. And so it means these
things can actually take action. But I'm with you on that, Alex. Like I don't, you know,
we've seen so much of this in AI, just a lot of, a lot of big announcements, a lot of
excitement, but then you see businesses actually just struggling a little bit to make these
things work for them and work well. Yeah. And so I'm just going to read a couple examples of
folks who've been trying out o1 and why this might be different. And I think you're right to point out the difference between, like, chat and bots, or bots and agents. Because this is, like, from somebody who works at this company called Spellbook Legal. His name is Scott Stevenson. It's very interesting what he says. He says, o1 is extremely good at long document manipulation. Doc revision is one of the hardest problems we've dealt with, like, for example, taking a long set of instructions and modifying a legal document, and o1 is really good at this. When people are underwhelmed by o1, I think it's because they're thinking of it as chat still, and its ability to do work is going to be really good once function calling is fully supported. So, you know, I think that, like, with these bots, it is definitely, like, you know, we talked about handholding, and maybe these companies need to do a little bit more handholding. But I think often in technology, what happens is they put the capabilities out there, and then people like Scott Stevenson, right, figure out the way to do it, because they're just using it more often. And that's why they're putting this out in preview. And that's the sort of use cases that get spread by word of mouth or productized. And maybe that's what happens. And I would say that would be the argument to keep just building these models, making these models smarter. And maybe the thing is that they've gotten so smart so fast that they've sort of outpaced the utility, because we haven't had enough time, or the industry hasn't had enough time, to put their hands on and try things like this and actually digest it. What do you think?
Yeah, no, I think that's a very, very valid point. And I think, you know, so I'm based in London, and not being based in San Francisco, I'm not in the Silicon Valley bubble. So being in London, I hear a lot more of
kind of the rhetoric from people on the East Coast of the U.S., from Wall Street, from investors,
from hedge funds, from health care companies, legal firms. And there's a real sense of skepticism
over the last few months, which I know you've talked about. But I think it's not entirely
warranted. And I think, like, this market correction that's happened in the last few months was
healthy. It was crazy that
Nvidia became worth
$3 trillion at one point
in market capitalization.
And I think
when people talk about hype,
I think so much of the hype is about
timelines. There's a sense that
businesses and investors
kind of got this impression that AI
was going to start bringing a return on investment
for companies sooner
rather than later. Like I was talking to
one CEO of an AI
vendor a few months ago. And I was saying, so how are you going to measure, how are we going to measure success for generative AI in terms of return on investment for businesses? And you know what he said?
He said, in the next two or three quarters, you're just going to see more Fortune 500 companies
report an increase in EBITDA numbers. Like, they're just going to have higher profits. I just think
that's ludicrous. Like, really? Like, in the next two quarters, more businesses are just going to
make more money because of AI and that's how we'll know. It's going to take time. It's going to take
way more time than that. And that's been the case with every major tech revolution, whether it was
search or desktop or mobile. We all know about the Gartner hype cycle and these things just take
time to implement. So yes, if the AI vendors in Silicon Valley are racing ahead on capabilities,
sure, maybe the businesses just need to catch up. And so with reasoning, a lot of people have talked
about how it lays the groundwork for these agents.
Because again, like seeing it, it's pretty amazing using these models and then watching it
go through the multi-step process.
And the idea is like, all right, if you're going to have an agent that's going to go and
take action for you on your behalf, it's going to need to be able to like process step
by step in order to do that.
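That step-by-step idea is essentially a loop: a planner proposes one action at a time, a harness executes it, and the result is fed back before the next step. Here is a minimal sketch, with a hard-coded stub standing in for the reasoning model; the step names and actions are invented purely for illustration:

```python
# Sketch of an agent-style loop: a planner proposes one step at a time,
# the harness executes it, and the result is recorded before asking for
# the next step. The planner here is a stub, not an actual model.
def stub_planner(goal, history):
    """Pretend model: emits the next step of a fixed three-step plan."""
    plan = ["look_up_order", "check_refund_policy", "issue_refund"]
    return plan[len(history)] if len(history) < len(plan) else "done"

# Hypothetical tools the agent is allowed to call.
ACTIONS = {
    "look_up_order": lambda: "order #123, delivered late",
    "check_refund_policy": lambda: "late delivery qualifies for refund",
    "issue_refund": lambda: "refund issued",
}

def run_agent(goal, planner, max_steps=10):
    history = []
    for _ in range(max_steps):
        step = planner(goal, history)
        if step == "done":
            break
        history.append((step, ACTIONS[step]()))  # execute and record
    return history

if __name__ == "__main__":
    for step, result in run_agent("resolve refund ticket", stub_planner):
        print(step, "->", result)
```

The point of the sketch is just that an agent needs a reliable next step at every turn, which is why reasoning is treated as the prerequisite.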
And this is the way that OpenAI is describing it.
So this is from their head of research, Bob McGrew, in The Verge:
we have been spending many months working on reasoning because we think this is actually the critical
breakthrough. Fundamentally, this is a new modality for models in order to be able to solve
the really hard problems that it takes in order to progress toward human-like levels of
intelligence. So I'm curious, like, do you think this is a step toward agents? And why is
reasoning this sort of, as he describes it, the fundamental or the critical breakthrough?
Well, I think that what you were, earlier you described the tweet from someone, and what they were describing, in terms of when they played around with this new o1, it made it sound like there was some gray area here where it had almost agentic properties.
Why is reasoning important?
Well, I think this just gives it broader capability.
And I think it's really interesting that OpenAI hasn't actually used the word agent once in its announcement, and it doesn't sound like any of its scientists have talked about that either.
Well, doesn't Bloomberg have this, like, didn't Bloomberg publish OpenAI's, like, imagining of its future? And it has, like, five levels: level one is chatbots, level two is reasoning, level three is agents, level four is the AI innovates, and then level five is the AI does work at the level of an organization.
So the suggestion would be that we're at level two, and that agent level is what's next.
Yeah, yeah.
And I thought it was really interesting in OpenAI's...
See, I think right now it's just a matter of, it's like you said,
they put the technology out there and they don't really know necessarily how it's going to be used.
It's sort of the experimentation is done by everyone else out in the world, among businesses and organizations.
They did give some suggestions in their blog post.
So they said, I'm just reading from one section, which I thought was the most interesting,
was that o1 can be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complicated mathematical formulas needed for quantum optics.
I'm going to stop there.
Those are really, really specific things.
Technical stuff.
Very specific.
And then they end it with this bit.
Then they say, and by developers in all fields to build and execute multi-step workflows.
That's like anything, basically.
So they go from, they basically make it.
Maybe that guy who told you that you're going to see this in EBITDA in the next few quarters.
Maybe they're right.
I don't know.
You think he was right?
No, probably not.
But I can see why he thinks that, right?
Because, again, this is going to come to, like, the very labor-intensive, very high-value tasks in our economy.
That's what it's going to do.
Yeah.
But don't you think also, there's already been a lot of concern among businesses about hallucinations from these models? Now you give them the capability of taking action. So again, sorry to go back to Salesforce, but it's just because I have a concrete example that they gave me, which was this agent in their customer service chatbot that could retrieve data and basically resolve a ticket in the same way that a customer service representative would. That's
putting even more faith in these models. Like, it's one thing to make a mistake about
information, but to actually screw up the process of, you know, resolving someone's complaint
or getting some information for them, that's a bit more high stakes. So I'm, to be honest,
I have no idea how this will play out. And I think businesses are going to be very cautious just
because there has been so much concern about hallucinations till now.
Right.
Yeah.
And OpenAI, even in its blog post, says that this does not solve hallucinations.
And then you can, you could also go to some of the critics.
They're having a field day with this.
Here's Eric, sorry, Ed Zitron.
We have our big stupid magic trick, folks.
Strawberry, it's Open AI's new model.
It will take 10 to 20 seconds to give slightly better answers.
And it's buggy. Open AI is getting desperate. And this is a real deal pale horse.
I think that's a little bit over the top. What do you think?
It has to be. I mean, it's Ed. That's his thing. But I spoke to someone last night who had tried it as well. I haven't had a chance to try it myself. And this was someone who tries these models a lot. And they said they couldn't really see a big upgrade or a major step change, or even a particularly big difference between what Anthropic's Claude can do. But again, I really need to try this
out myself. And I think it's quite hard to kind of put a judgment on it. They literally announced it last night. So we can all test it, we can all play around with it. But I think we'll really get a sense of how useful this is when people start using it for work.
Yes. Okay. So let's end this one just with the unfair question: buying or selling the hype on Strawberry, or o1? I'll go first. I'm buying it. I'm buying it. I think it's going to be more difficult for, like, the quote-unquote wordcels versus the shape rotators, right, the people who deal in text versus the people who deal in numbers. But I think this was sort of a necessary step, and you see the benchmark improvements here. And it's not going to look immediately, just like, you know, the way that some of the other, let's say code completion, you know, initially, where it's like, okay, so it can complete some code, and now it's starting to really look like it can code things up on its own. I think that this is going to be something that folks in the technical fields are going to use, and it will actually be a significant step up.
Okay, how about you, buying or selling?
That's a hard one to answer. I guess I'll just say sell for now, because this thing about the benchmarks, I personally don't fully understand the benchmarks, what they're weighted against and what they're sourced against. And, you know, are these completely standardized across the industry? Those are really impressive numbers, but what do they actually mean? And again, I think it's very hard to really judge this until you try it.
Right.
And I'll just give one more piece of evidence on my side.
I know this is unfair, but I just saw a thread with a stream of folks who've, like, tested o1 on their personal benchmarks.
Like a lot of people have their, like, own personal tests to give AI.
And it's just like person after person talking about how much better it is than their previous tests.
Okay.
Okay.
But the other thing here is really, like, maybe, yeah, I don't know. Like, this is preview, of course.
And you mentioned it, and you're totally right, that we've seen Sora, which has been a lot of hype and not released yet.
And we've seen the voice, or GPT-4o voice application, and the status of that is still, you know, up in the air.
And, you know, should we give OpenAI a couple weeks of gratitude for the magic intelligence in the sky and have more of our toys soon?
I don't think so.
Well, I'll just add, in spite of everything I've said, there is an argument that actually maybe behind the scenes, Sam Altman and OpenAI have just realized, like,
they need to focus on one thing that really matters.
And if I was going to look at those three things,
like the voice interface, Sora, and reasoning capabilities,
hands down, the bigger deal is reasoning capabilities.
So maybe what's happening behind the scenes
is they've just put a pause on these two other things,
which aren't that important.
And the reason I say that is because last week,
I talked to some people in the Valley,
and a couple of times it came up that there's growing discontent about Sam Altman's general lack of focus.
And people are saying he's disorganized and they're trying too many things.
And I spoke to one VC who even said that he was going to meet Sam Altman that weekend.
And I said, if you could give him one piece of advice, what would you tell him?
And he said, I would tell him that he needs to look at everything he's doing and he needs to pick two things and just stick with those two things.
And he said, even if one of those things is this big chip infrastructure project that he's doing, that's fine, do that, but then just do one other thing and nothing else. Because I think there is a concern that OpenAI has just got too many pans on the fire. So, yeah, I would say I wouldn't discount Strawberry slash o1. Just in case, actually, maybe what's going on here is that they've realized they need to just focus on the higher-stakes bet that counts for more.
Yeah, Parmy, you're convincing me here, because there have been sort of rumblings and rumors going around that OpenAI has lost it and too many people left. Look, Ilya is gone, he just raised a billion, and John Schulman's gone, and he's at Anthropic now, and, you know, the rest of the leadership team has thinned out, and there's been a staff exodus, and OpenAI ruined itself. And I think we can say definitively,
Well, maybe not definitively, but we can say with good confidence now that the company is still pushing the envelope.
And I was just on the Chatbot Arena, where they battle different chatbots against each other and you vote on which gives the best output.
And GPT-4o is still the winner there.
Oh, really?
More so than Claude?
Oh, yeah.
It's definitely above. Actually, Gemini is second.
And this is where people vote on what's best.
So now, human manipulation can be responsible for some of it.
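For what it's worth, arena-style leaderboards like this are typically ranked from pairwise votes with an Elo-style update. A sketch of the general mechanism, not Chatbot Arena's actual implementation:

```python
# Elo-style rating update from one pairwise vote, the general mechanism
# behind arena-style chatbot leaderboards (illustrative only).
def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_winner, r_loser, k=32):
    """Return new (winner, loser) ratings after a single vote."""
    e = expected(r_winner, r_loser)
    return r_winner + k * (1 - e), r_loser - k * (1 - e)

if __name__ == "__main__":
    a, b = 1000.0, 1000.0      # two models start equal
    a, b = update(a, b)        # one vote for model A
    print(round(a), round(b))  # winner gains what the loser drops
```

Since the ratings move on every vote, a coordinated block of votes can shift a ranking, which is the manipulation worry being raised here.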
I do think that the death of Open AI has been prematurely reported.
That being said, there's a lot of competition.
You know, we can agree on that, but they're not dead yet.
Yeah.
Yeah.
And they also want some more money.
So it brings us to our next story.
This is actually coming from your outlet.
Bloomberg has a report out that OpenAI's fundraising is going to put the valuation of
the company at $150 billion.
And a couple of weeks ago, we just did a show talking about how it was going to be 100 billion, and that's crazy. But they are, according to Bloomberg, in talks to raise 6.5 billion from investors and another 5 billion in debt from banks. So they're going to basically bring in 11 and a half billion dollars. And it's the round, like we talked about on the show in the past, where Thrive Capital, Microsoft, Nvidia, and Apple are all talking about getting involved. So what do you think all this money is going to go toward, then? I mean, are they going to even be able to justify that
valuation? They're losing money, a lot of money now. I mean, what's the business case for investing
in them? Well, I mean, they get a revenue share with Microsoft, right? Through Azure. So if Azure does really well, then, sorry, then OpenAI can do quite well. They've got the subscription business, which is actually pretty steady. They've got something like 200 million active users of ChatGPT. So, you know, I mean, it's not a bad business. You could say long term
that can grow to something quite impressive. But these things take time, right? I mean, with
Amazon, it took many years for it to become a profitable business, and maybe people are looking at OpenAI in the same way. The problem is that, as you well know, building and improving
foundation models is incredibly expensive. And it's not just the compute, right? It's the salaries. Like
These senior AI scientists and engineers.
How much are they making?
Well, okay, so I spoke to someone about this, from an organization that was trying to hire, that was asking questions about salaries. And so one kind of mid-tier AI startup was asked, so how much would you pay for, like, a senior AI executive? And they said it's basically, over the course of four years, around $6 million. I've heard elsewhere, you know, two to three million a year salary, including options, over time.
Yeah, yeah. And so one way they're trying to make the money back is potentially raising the subscription fees for these bots. I mean, they don't make any money on the bots right now. But one report that came out of, I think, The Information this week is that, let's see, it says in internal discussions at OpenAI, subscription prices ranging up to two thousand dollars per month were on the table. So basically,
the thought is maybe that these new reasoning models, again, the ones that were released this week, o1-preview and o1-mini. So there's, like, an actual o1 model that OpenAI has that, you know, I think will eventually make its way into the product. And if the models do live up to these sort of improved benchmarks and become super valuable for some use cases, maybe they can charge much more. Is that
one way that they end up justifying the valuation? Maybe, but I can't see it being that high.
That's pretty high. I can totally see big numbers being tossed around, though. And I'll give you the reason why I think that: last year, I was talking to somebody from an AI company that worked very closely with OpenAI. And they talked to people at OpenAI all the time. And they were like, oh, yeah, Sam and OpenAI, they're doing another funding round, and he wants to raise $100 billion. He was like, that's what they're saying in the company. He's trying. And I remember responding.
You mean a hundred billion valuation, right?
And he said, no, no, no.
They actually want to raise a hundred billion.
I could never corroborate that with anyone.
But my sense is that big numbers get tossed around a lot.
Right, like a trillion.
Yeah, yeah, exactly.
That's a great example.
The whole $7 trillion chip thing.
Was it $7 trillion?
Yeah.
It was said, $7 trillion.
Yeah.
Raising.
So a lot of huge, ridiculously
galactic-sized numbers get tossed around when people talk about OpenAI, and probably inside
OpenAI too. But fundamentally, to your point, do they need to raise the price? Yes, they probably
do. I mean, look at us. We've both been paying for both Claude and ChatGPT because it's
sort of affordable. But maybe they can afford to raise the price. If they put in these
capabilities where these systems become integral to your work process, then yeah.
maybe you'll just have to pay more, and maybe they can get away with it. That is one reason why
companies like Nvidia and Apple and Microsoft might be involved in this round: just because, like,
even if they don't make their money back, if OpenAI is able to keep pushing the envelope, then their
valuation will just go up. Like, it's a smart investment. There was almost a moment where
OpenAI said, if you're going to invest in us, you guys should think of it as a donation. And maybe
in these companies' cases, that's fine, because it is, it's going to work out for them,
if it works out. And also, they can afford to do it, right? Like, Apple
has just been told they have to put $13 billion in an escrow account to pay this
Irish tax fine from the European Union. And even that sounds like a huge number,
but really it's just pocket change for them. So an investment in OpenAI,
a company that's completely at the forefront of AI right now, and one that literally sparked the
generative AI boom almost two years ago, I think is worth it for them.
Yeah. Okay. One more question for you about funding. So every now and again, you get these
rumblings that OpenAI might be trying to raise from, like, basically Gulf oil states.
I'm going to get it wrong, Qatar, Saudi Arabia, but, you know, of that nature.
It just kind of seems interesting to me that it's going to these states, and that companies
that initially started with this mission of AI being beneficial for the world are being like,
you know what we really need to do? Like, give an oil oligarchy or, I don't
know, an authoritarian-type country a stake in what we're doing. What do you think about that?
Well, let's align our, do we align ourselves with the oil oligarchy in the Middle East or
the tech corporate oligarchy in the West? What to choose? I would go with the tech ones,
but anyway. Yeah, it depends which part of the world you're in
and what your worldview would be on that.
I mean, there's so much money out there, right?
They've got sovereign wealth funds,
which means they can make these huge bets on risky investments.
And in Dubai and the UAE,
there's been a big push to try and diversify away from their very, very big dependency on oil
and try to move towards more of a tech-based economy.
So it absolutely makes sense for them to be putting money into something like this.
Right.
And you've been following these companies pretty closely, obviously writing the book about them.
Is there going to be a point where they just kind of run out of runway?
Like is this sustainable?
Is what sustainable?
The money or?
All the money being spent on the training of the models and AI.
I mean, it's just so capital intensive.
Like, eventually... and like we were talking
earlier about how it's going to take some time to figure out the use cases, even though
there probably are use cases there. But, like, how much time does this have before it has to show
a return on the investment? Maybe, I'd say, a few years. And I think there isn't
as much pressure on the big tech companies. Like, I wouldn't, it's very tempting to just say
it's not sustainable because it's so expensive and it's moving so fast and there's no obvious use
cases. But you have to remember these, these companies have billions of dollars on their
balance sheets. They can totally afford this, and they can afford to keep spending like this
for the next few years, at least. I mean, look how much money Mark Zuckerberg spent on the
metaverse. And that really went nowhere. And he's still, like, Meta's still doing fine. In fact,
Meta as a stock has outperformed all the other big tech companies since the start of the year, I think.
Thanks to its advertising. And it's still called Meta, by the way. So maybe
they haven't fully given up, but. Yeah. He's in it for the long term. For the long,
long haul. Um, but yeah, I think they will continue to spend
quite big on this for the next few years. Because, you know, although they want
it, they do need to see their enterprise... like, again, I'm thinking from the hyperscalers'
perspective. But you talk to someone like Google, and, like, I literally
asked someone from their cloud team last week,
you know, what their inbound interest was from enterprise customers for Gemini and for their API. And they couldn't give me a number, but they were like, it's just crazy, the amount of usage that we have. And I was like, has it doubled in the last year? He said doubled would be an understatement; the rate of usage increase has really gone up. So they're doing fine. But the question is just whether
businesses will continue spending on these models.
We can go into this if you want,
but this is probably for a whole other episode,
like how do you measure return on investment?
Right, it's not simple.
And it's absolutely not.
And I think people are starting to accept that.
They're starting to see buying AI services
as a bit like having email.
Like, it's hard to measure as a business
how much having email actually makes you money.
But if you took email away from all your employees, your business would just fall apart.
It's absolutely integral to operating.
And so I think there's a sense that AI will become like that.
It will become a really important component of making a business more productive, but it will be really hard to measure exactly how it does that in numbers.
But the thing is, most email programs are free, and this is very expensive to produce and run.
But I think that's a fair point, and we'll cross that bridge when we come to it.
But I do want to hear a little bit more about your perspective on Sam and Demis,
and if there's stuff that the public doesn't know about them that they should know.
And then I also want to talk about this story about how some local news sites are using people
who have turned themselves into AI avatars to read the news and, like, do local news programs.
So why don't we do that right after this?
Hey, everyone, let me tell you about the Hustle Daily Show,
a podcast filled with business, tech news, and original stories to
keep you in the loop on what's trending. More than 2 million professionals read the Hustle's
daily email for its irreverent and informative takes on business and tech news. Now, they have a
daily podcast called The Hustle Daily Show, where their team of writers breaks down the biggest
business headlines in 15 minutes or less and explains why you should care about them. So, search
for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Parmy Olson. She's the author of Supremacy:
AI, ChatGPT, and the Race That Will Change the World. It's out this week. Okay, so, Parmy,
I'm curious. I think a lot of us have a perception of Sam Altman. Maybe many, but somewhat
fewer, have a perception of Demis Hassabis, who's the head of DeepMind at Google. What do you
think, what did you learn about the two of them that is not fully represented in the public
consciousness that is important to know? Well, what I learned from both of them is that they both
were incredibly mission-oriented people.
They had very different personalities.
Sam was a very outspoken, charismatic individual.
He was an entrepreneur's guru.
Even after his first startup, Loopt,
essentially failed by Silicon Valley standards,
he somehow managed to create this incredible respectability
around himself among startups,
and venture capitalists in Silicon Valley
as this almost Yoda of
entrepreneurial advice.
He would post blog posts
with just like 99 pieces of advice
for startups.
And he was just an incredibly good communicator.
And someone who would also,
at the drop of a hat,
help other entrepreneurs.
So I've spoken to entrepreneurs who said,
yeah, I just sent him an email.
I didn't think he'd respond.
And then he responded right away
and introduced me to someone
who helped me raise money. And so he has really engendered a ton of goodwill among startups because
he's very responsive like that. And it's actually given him an incredibly powerful position
in the Valley just as a, as someone who's incredibly well connected. And that was even before
he started OpenAI, when he was the head of Y Combinator. Demis, on the other side of the ocean,
a very different kind of personality. He started off as a chess prodigy growing up,
one of the best chess players in the world when he was only 13, and he was obsessed with games.
I don't know if you ever played the video game Theme Park? I haven't, but I know that's Demis's
game. He did it as a teenager or something. He did it when he was 17; he co-designed it with
Peter Molyneux at, um, Bullfrog Productions. And then he went to Cambridge really young
and started his own video game company,
which in the same way that Sam's first startup failed,
Demis's first startup also failed,
and both entrepreneurs really learned a lot from those failures,
and both kind of came out of that
with this goal of wanting to build artificial general intelligence.
Demis did it first.
He was actually the first to kind of put his neck on the line
among the scientific community, to do something
that was really a fringe theory back in 2010
and start a company that actually wanted to build human-level AI.
It was seen as very crazy, and nobody would put money in.
I actually spoke to an investor yesterday, one of the VCs who didn't back DeepMind,
and they're absolutely kicking themselves for it now.
But it was just that they had no business model.
And the only way they could raise money was to come to Silicon Valley,
where people are selling the future, and they raised money from Elon Musk and Peter Thiel.
And then, of course, they built DeepMind.
I mean, Demis, I think, is like a really fascinating character.
He's absolutely obsessed with games.
And he's also really, really good at winning games.
If you ever play him at foosball, he will beat you.
He's just, like, very, very good.
Or chess or soccer.
And I think maybe that's also a trait he shares with Sam,
that they both, you know, are very competitive and want to win.
And Demis also has an incredible, almost obsession with winning a Nobel Prize.
This is what I've heard from people who worked with him.
That was kind of the goal that he put to the company.
But I think both men... you know, in my mind, I wanted to kind of hang the story on these two guys.
There's a huge cast of characters in the field of AI.
But to tell a story about AI governance, sometimes you just need to boil it down to a couple of people
to make it just a little bit clearer and more simple.
And so that's why I focused on just those two.
Okay, so how do these two entrepreneurs go from these idealists
who are trying to solve all the world's problems
to people who just have to, basically, one goes into Google,
the other one is raising billions from Microsoft?
Is it just that AI is so expensive to build
that they had no choice but to sort of go from
idealism to money? And, like we mentioned in the first half, you know, basically,
I don't know, maybe even knock on the Gulf oil states' doors and ask them for cash.
Yeah. It's a real, this is really a story of people essentially compromising their original
principles. And I think the thing with these two characters, Sam and Demis, is that they're both such
extraordinary individuals. They're incredibly passionate and capable, but they're also incredibly
competitive. And they wanted this goal of building AGI so much that they were willing to
pay a price to do that. Some people would call that a Faustian bargain. Sorry, it's a very kind of...
Do you think they're going to actually reach AGI? Yep, that's the question. I don't know that I really
believe in the whole idea of AGI. I think it's a construct that is very almost marketing driven
to get people excited about AI. In the same way that many, many years ago,
people imagined the future as being so different, we would have flying cars, you know,
it would be more like The Jetsons, but what ended up happening is we just had the internet. So I feel like
what we'll actually end up having with AI won't be what we imagine. Yes, exactly. We have this kind of very...
Humans instinctively have this romanticized view of what future tech will look like, and we often get it
wrong, and we often personify it way too much. So I just don't see it necessarily being the way
Demis and Sam envisioned it, as kind of omniscient, almost, in its ability
to solve our problems, like to solve climate change or to cure a certain disease.
I think it's going to be a bit more nuanced and complex than that.
But, yeah, to your question about what happened in terms of the ideals they had:
they were both just so intent on realizing their goal that they paid the price of aligning themselves with
the companies that were going to give them the resources to actually build AI. But it meant they
had to give up that responsible governance that they were trying to put in place.
Yeah. Okay, one more story I want to get to before we head out. I was looking up your
interviews on YouTube and saw you, like, stand before the camera and deliver this monologue. And as
I'm watching, I realize, oh, that's not Parmy. It's AI Parmy. And you're the perfect person to
speak about this, because this week I read a wild story in Wired about a Hawaiian journalist who
worked at this paper called The Garden Island. And basically it is, not print, it's
a text news site, and it's covering this island of, you know, a couple thousand people, maybe 10,
no, maybe like 70,000 people or so. And the news site all of a sudden started to do news video.
And who are the anchors of these news videos?
They're actually AI avatars that will basically take news from the site and then read them to you.
And you can watch the video as if it's like a regular bit of news.
So just thinking about the direction of AI, we were like talking earlier about like the ways that it might be applied.
What do you think about the fact that, you know, we're starting to see effectively news anchors being, I wouldn't call it replaced yet, but being mimicked by AI
avatars, like the one that you became in your video, and actually reading real news to real people
on news sites like The Garden Island? I think, in a nutshell, I think our future is just going to be
so noisy. We're going to have so, so much information in so many different forms. It's going to be
not just text. We're going to have so much more video. We're going to have so many more images
because AI is just going to be generating so much of it.
And, like, whenever I talk to businesses about how they're using generative AI,
they always, always say, we're not replacing jobs.
We're not replacing humans.
We're augmenting them. Which, I think, sounds a bit disingenuous,
and that's just their way of protecting themselves reputationally.
But it's kind of true.
Like, in the case of the newscasters, it's not replacing
newscasters, right? It's just creating more videos of news for people to watch, potentially, so that
they don't necessarily read it as text. It's just another way of consuming news. And that's what I
see potentially happening with generative AI is that, well, I genuinely cannot see major TV news
networks replacing their anchors with generative AI anchors anytime soon, you know, if ever.
But I certainly could see, you know, local radio stations, local websites, not even local TV
stations, but, like, really small outlets that just don't do video.
Now here's a chance for them to do video.
And so we're just going to have lots, lots more video content available to us through
all sorts of different platforms.
Yeah, it's amazing.
I was, like, thinking about it for Big Technology.
Like, what would have happened?
And obviously the technology is not there yet.
But, like, what would have happened if we had fed some AI
the news stories about Strawberry and the OpenAI valuation, and then written, like, one-sentence
perspectives from each of our sides, and then fed the AI your book, and then said, create a
podcast episode? I mean, that stuff is crazy now. I would never do it. I think that our stuff is better
than what you would get from AI-generated slop. But yesterday I spoke with an editor who had
his body scanned and uses this video as an added feature on his website, and it was
amazing. First of all, he was working with the software company and was like,
my accent, and my English, is not as good as your technology is making it out to be.
So he wanted it to sound, his quote was, dirtier. You know, I think he meant rougher;
I don't think he wanted it with swear words and stuff.
And they did it, and it sounds a lot and looks a lot like him, and they're getting thousands of views
on each of these videos.
They're getting thousands of views, really?
See, this is what I don't get: do people actually like this?
And I think what is going to happen, I mean, you said thousands of views, right?
So the more people see AI generated content, the more they get used to it and the more they get comfortable with it.
So even though video avatars of news anchors, and this gentleman
you mentioned, probably look a little bit uncanny valley,
actually, if you watch that kind of content for long enough, you start to just be okay with it.
And then maybe like when it comes up in your TikTok feed or Instagram or you find it on a website, you just, you click on it because you're okay with it.
You're used to it.
Exactly.
Oh, God.
Well, everybody, this has been a real discussion between real me.
And I hope that this is real you, Parmy?
As far as I can tell. Give a sign of life. I don't know. I couldn't tell.
Thumbs up. There's a thumbs up. All right. Your avatar couldn't do that, because it's just...
Yes. See? There we go. Okay. So, I am so glad we got a chance to speak, and I do encourage people to pick up the book. It's called Supremacy: AI, ChatGPT, and the Race That Will Change the World. And man, Supremacy, it's the right title for it, right? Oh, thank you.
Yeah. Well, congratulations on the release, Parmy. Great speaking with you. Thanks again for coming on.
Oh, likewise. It was really fun. Thank you.
And it was for me as well. All right, everybody, thanks for listening. On Wednesday,
I'll have an interview with the CEO of LinkedIn, Ryan Roslansky. So stay tuned for that.
Yes, we're talking about AI. Of course we are; they're a Microsoft subsidiary.
And then I'll be back on Friday with Ranjan, breaking down next week's news.
Thanks again for listening. And we'll see you next time on Big Technology Podcast.