Big Technology Podcast - OpenAI Raises $40 billion, Is AI a Letdown?, Musk Sells X to xAI
Episode Date: April 2, 2025. Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) OpenAI's $40 billion fundraise 2) Is the $40 billion number real? 3) Can OpenAI live up to the expectations that come along with the money? 4) What OpenAI will spend the cash on 5) AI products are growing fast 6) Would you go to AI therapy? 7) Is AI a letdown? 8) Why AI boasts have gotten ahead of the technology 9) AI's brand risk 10) Was the problem with Apple Intelligence actually AI, not Apple? 11) Amazon launches Alexa Plus with missing features 12) Elon Musk's xAI acquires Elon Musk's X --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
OpenAI has raised $40 billion, the largest funding round in history.
What's it going to do with the money?
And Alexa Plus finally debuts, analysts ask whether AI is a letdown,
and does it matter that Elon Musk sold X to xAI?
That's coming up right after this.
Welcome to Big Technology Podcast Friday edition,
when we break down the news in our traditional cool-headed and nuanced format.
We're running this week's show on Wednesday, and we have a big interview coming Friday.
So this week, we are swapping our shows.
We have so much to talk about this week.
We're going to talk about OpenAI's fundraising.
We're going to talk about the incredible momentum that OpenAI and other AI companies have right now
and how AI is picking up speed in a way that it hadn't, at least all through last year.
We're also going to cover whether AI still is underwhelming, despite the consumer use.
It sounds like a bit of a contradiction.
But we're going to get into it.
And then, of course, we're going to talk about Elon Musk selling X to xAI. Joining us, as always, for our Friday show, but today on Wednesday, is Ranjan Roy of Margins. Ranjan, great to see you. Welcome back. It's good to be back. Thank you, Masa Son and Sam, for giving me this beautiful news to start the week with. Exactly. Now,
Ranjan, there's going to be some confused users. They're going to be like, this is the Friday
show. And you just said, as always, Ranjan hasn't been on for the last three weeks. And I know
we have new listeners who've come on. So just to go through what we do here,
We usually do a flagship interview on Wednesday and then Ranjan and I break down the news every Friday.
We'll have our flagship interview on Friday.
Ranjan was on vacation.
He is back, so you will hear from him consistently on Fridays as we break down the news.
And so let's get right to it.
Big funding news.
And that is an understatement.
OpenAI has finalized $40 billion in funding at a $300 billion valuation.
That is according to Bloomberg.
Ranjan, this is the biggest fundraising in history by far.
Last year, OpenAI had the title of the biggest fundraising ever.
It was $6.6 billion.
This year, they've multiplied that by more than five times to $40 billion.
In normal times, this would be the biggest business story of the year.
It seems like it's gone by in kind of like a ho-hum fashion,
where there aren't many people that have been talking about just how crazy this is.
I mean, it's rewriting the rules for private company financing.
What do you make of the financing?
I know that you're already seizing onto the fact that SoftBank is in the lead.
And what do you think OpenAI is going to do with all that cash?
All right.
So first, in terms of SoftBank being involved and is this big news,
I think the $40 billion headline number we have to take with a grain of salt.
Because when you start digging into the numbers underlying it, there's around $10 billion that's actually supposed to be raised, which still is a shocking number and is bigger than $6.6 billion from last year.
The other $30 billion is supposed to come through the end of 2025 and thereafter, but basically be going to the Stargate project, which is a series of data centers involving Oracle and others, and to helping build those data centers.
So the idea that this is $40 billion going to allow us to create more Studio Ghibli images isn't exactly accurate.
It's a large number, but it's so convoluted, like so many of these stories, that I think it's not as shocking as it seems at first glance.
Okay, so let's break down the numbers.
So SoftBank's going to lead the round.
It's going to be $7.5 billion right away.
So that alone is the largest VC round in history.
And $2.5 billion from an investor syndicate that also includes Microsoft,
Coatue, Altimeter Capital, and Thrive Capital, which led the last round.
Then there's a second tranche of $30 billion that's supposed to be invested by the end of 2025,
including $22.5 billion from SoftBank and $7.5 billion from a syndicate.
I guess the caveat here, and this is from Bloomberg, is that if OpenAI's restructuring isn't completed by the end of the year, SoftBank would have the option to reduce its total contribution to $20 billion from $30 billion.
So OpenAI really does need to complete this for-profit restructuring.
That being said, I don't understand how this is not real, Ranjan.
I mean, it is the agreement.
It's supposed to come next year.
Why are we already saying that this is fake?
I'm not saying it's fake at all.
I'm saying it's taking a bunch of different announcements and kind of mixing them together.
Again, Stargate, we heard numbers as crazy as $500 billion, $100 billion this year,
if we remember the announcement in the White House a few months ago.
So it's taking part of that and then kind of, again, mixing it into this announcement,
where again, it's $7.5 and $2.5.
It's $10 billion coming up front that is a lot of cash.
And we can definitely get into how they're going to use the money and whether that's going to be enough for them to actually handle their burn over the next three years.
Because they're supposed to turn profitable, if you remember the numbers, in 2028, after burning $7 billion in 2025.
So they're still forecasting a lot of burn.
But still, I don't know. I'm not giving the overall press and market so much credit that they're getting into the nuance and recognizing that, and that's why it's not that exciting a story.
I just think these numbers get so big and the deals get so convoluted that it's hard to try to make sense of it and process it and get excited about it.
Yeah.
And it's also, I think, the main reason people have been like, okay, whatever: OpenAI already announced this $500 billion Project Stargate.
So people see $40 billion, which is by far the biggest VC funding round in history, multiples of what that typically
would be, and they're just like, okay, all right, it's one-tenth of what you told us. And maybe there
is a backlash in some ways to these big boasts of large dollar amounts, when something that would, in
any other environment, be one of the most impressive financial announcements in
history gets met with, well, it's much smaller. Even your valuation is smaller than the
amount that you pledged to raise. Well, also, when we talk about that valuation, their revenue
projections are incredible, incredible. So they're on track to make $3.7 billion this year.
They're forecasting to triple that next year to $12.5 billion. So first thing, the valuation,
and this is kind of what blows my mind, the $2.5 billion syndicate of Coatue, Altimeter, Thrive,
they're investing at a 100x revenue multiple. Like, we're not talking in the 20s and 40s here
anymore. They're going in at 100x on the forecast that SoftBank's going to end up putting in
another $30 billion anyways. Like how you even start to get to those kind of numbers is shocking.
But then my favorite part about the revenue projection: they're expecting
$12.5 billion next year. That's going to then go to $28 billion. And a third of that revenue
is going to come from SoftBank, SoftBank spending on OpenAI for all of its own companies
and portfolio companies. So you just start to see, I mean, the mathematical
and financial gymnastics involved here are only worthy, again, of Masa Son.
Yeah, in tech circles, I think it's fashionable to say that if you invest in OpenAI,
you're betting that it's either going to infinity or zero.
Basically, that they invent AGI, or superintelligence, really, it does everything for humanity, and the $7.5 billion that SoftBank put in in 2025 becomes one of the biggest bargains of all time.
The other side of that is they just burn so much money trying to serve Studio Ghibli images and you flush that money down the toilet, basically, trying to enable that behavior.
Well, that's why the growth number, the user growth is spectacular.
Sam Altman has been on a tweeting tear this week,
kind of talking about biblical demand for the platform,
saying, I think, that they had added a million users over a few months
and then added a million users in an hour.
And again, the Studio Ghibli thing, which you had a great show with Brian last week
explaining, it's the whole viral format of
creating images in the style of Japanese anime that just went completely viral.
I will say plenty of non-AI normie friends I saw posting those on Instagram even,
not just on X.
Like, it was real.
It's real.
People are going on it.
People are doing it.
But that's cool, that's not making you money.
Like, that's not necessarily the best forecast for you getting
to $28 billion in revenue in just three years and somehow also being profitable on that
revenue. Yeah, the Studio Ghibli thing continued to be crazy over the weekend. My brother, who
has never sent me anything AI, flooded the family chat with
images of him and his wife and his daughter and created multiple ChatGPT accounts to
be able to make more images. That is how insane the demand was.
You're right. It's not a profitable use case, even though we know that OpenAI had lots of signups.
I mean, adding a million users in an hour is impressive, but they're melting these GPUs, which cost $20,000 to $40,000 a pop.
So let's just take this to its logical conclusion. You are OpenAI. You just raised $40 billion, or, if you're in Ranjan's camp, at least $10 billion, and we'll see what happens next.
What does that money go towards?
This is where, I mean, obviously, Sam was tweeting about, like, if anyone has 100K GPUs, send
them our way, we're, as you said, melting on demand, we need anything we can get, this is
getting crazy. So obviously they're still positioning this as: it goes to the compute,
it goes to building large new foundation models, GPT-5 one day. So it seems like,
And we've talked about this a lot.
The plan over the next few years is still the same strategy that's been there for the last two.
Launch some new, really cool products, but then the entire bet is on the transformational foundation
model, GPT-5, 6, whatever it's going to be.
As you said, AGI, things that change everything, and that's the only way they're going to make
their money.
Right.
I think it was Dylan Patel from SemiAnalysis who talked about this.
He's going to come on the show in a couple of weeks.
We recorded already.
And basically what he said was the bet is not that they're just going to make a better chatbot,
it's that they are going to automate effectively full industries, including software engineering.
And that is interesting because you're right that this is something we talk about all the time,
how the product is important and how it's a consumer and an application company at this point.
And OpenAI showed incredible momentum, right?
They have 500 million people using ChatGPT every week.
Those are brand new numbers that they announced with their fundraising.
And that would be amazing if there was, it is amazing.
But it would be even more incredible if there was no hardware cost to be able to serve the product.
Like when Instagram hits 500 million active users, that's incredible.
And the best thing about that is you can serve that product without investing billions in GPUs.
but OpenAI is investing billions in GPUs,
and it's going to burn that $7 billion.
What did you say, in 2026 or 2027?
And so I'm wondering, Ranjan, how do we think about this?
Because you at once have this massive investment in GPUs
and a very successful consumer product.
And I'm trying to make sense of it
because I want to say I'm bullish on OpenAI
because of all the users. But the bearish thing is the expectations now after the money coming in
are beyond through the roof. There's no more roof anymore. They're through the stratosphere.
And it's going to be very, very difficult to meet those expectations.
No, I think you said it. It's infinity or zero, which is not really the bet and the decision calculus
you want to hear as an investor, I feel. But apparently many of the world's leading investors are very
happy to hear that. I think that's exactly it. It's that the, uh, this is not traditional
software. These are not 70 to 80% margins. It's not, you know, like no marginal cost to serve
new users or almost zero marginal cost. This is a completely different, it's almost an industrial
product in the way it's built right now. And they're, they're continuing to go in that direction.
But again, to their credit, I will say,
and we're going to get into the conversation around whether AI is a bit mid right now and whether it's
creating a letdown, they're creating these moments. They are the household name. They are creating
very cool products, and they continue to. And again, Operator, not great, but it looks really cool. Deep
Research both looks and acts really cool and is great. Like, they're still leading the way on generating
excitement across the entire industry. So they have that going for them. But still, the numbers are
tough. Numbers are tough. So now I'm going to contradict myself on the infinity or zero thesis.
And it is interesting because we're always talking about this question on the show about whether
the business is going to work. And of course, we talk about it because if the business works,
then the products will continue to get better. If the business falls apart, we won't see any more
advancement. Everything is linked in this question about whether there's going to be a return for
OpenAI, its investors, et cetera. And let me now make the middle case of saying it won't be infinity
and it won't be zero. And that is that if you take the applications that we have today, ChatGPT,
the image generation, the video generation, the voice generation, all this stuff is quite useful.
And in fact, we shouldn't gloss over the fact that OpenAI in a year has gone from 100 million
ChatGPT users to 500 million ChatGPT users. I mean, that's extraordinary growth. And at the pace
they're going with these image generation rollouts and every new product thing they did, and we know
voice played into this, we did a full episode on that, they will get to a
billion users of ChatGPT. No doubt in my mind. That's coming. And what happens is every time there's
an advance, it costs a tremendous amount of money to build that advance. But every single time we also see
that companies figure out how to deliver that more efficiently.
So I would say if OpenAI stopped development today
and just found a way to make what's in ChatGPT more efficient,
they could run a profitable business with that 500 million or that billion users.
And that's how you get to somewhere where a company can persist
and can deliver a lot of value and can be profitable,
but it just has to give up on some of these wild ambitions
if, for whatever reason, they find out, maybe like Yann LeCun said on the show a little while ago,
that you're not going to be able to just scale up these models and get AGI.
Well, okay, to push back a little on that, it sounds like,
and we've seen this in all different types of companies that flamed out.
The story was always: acquire users at an expensive cost,
and then you could just turn down your marketing spend and become profitable.
And that did not work for a lot of companies.
In this case, the thesis that, like, acquire the users
and then make the compute more efficient,
which I don't argue is going to happen.
We saw it with the DeepSeek effect itself,
that it should be getting more efficient and cheaper.
I think the only problem here is that's not the philosophy of how they're building.
It's still bigger and more expensive,
and they've really laid it out that that's how we are going to win.
So the idea that they're going to really move,
like if they're really going to automate entire industries,
the economics, even for the companies who would be automated,
aren't there given bigger, more expensive models.
So I still think it's a tough one.
Yeah, if you're Masa, you don't want to hear what I just said.
You want to hear that you're going to infinity.
And that's the only option.
And let's put a pin in this automating-entire-industries boast, because we will come back to it in a moment. But I do think we should take a moment just to
appreciate perhaps how these products are gaining steam. Because I do think that there was another
side of this. There's two sides of the AI discussion. One is, is it going to be profitable?
Two is, is it useful and is anybody going to want to use it? We didn't really have clarity on the
second question up until recently. And as I'm putting together some of the stories that we're
looking at this week. I'm starting to say, wow, it really is happening for AI. So again, 500 million
people use ChatGPT every week. It's not just the number, but the velocity with which they got
there. In March 2024, a year ago, they had 100 million users. They added 400 million in a year,
which is crazy. Now the revenue side of it. This is from The Information: ChatGPT revenue surges
30% in just three months.
And the company, they say, has hit 20 million paid subscribers.
This is something that OpenAI has disclosed.
That's up from 15.5 million at the end of last year.
And this is The Information that says,
it turns out a lot of people are willing to pay for a chatbot that can code, write,
give personalized health advice and medical diagnoses, and cook up detailed financial plans,
among countless other tasks.
The strong growth rate suggests ChatGPT is currently
generating at least $415 million in revenue per month, a pace of about $5 billion per year,
and that is significant money. So OpenAI is really on the upswing. Now, one more thing,
let's talk about the other bots. It's not just them. This is from TechCrunch: ChatGPT
isn't the only chatbot that's gaining users. We see that Google, this is according to Similarweb,
Google's Gemini web traffic grew to 10.9 million average daily visits worldwide in March, that's up 7.4% month over month, while daily visits to Copilot, that's Microsoft's bot, increased to 2.4 million, up 2.1% from February.
Similarweb also says Anthropic's Claude reached 3.3 million average daily visits in March, and DeepSeek had 16.5 million visits in the same month.
So ChatGPT, of course, is way ahead, but you're seeing growth across the board.
And to me, this is just a moment.
We have to admit it.
It's a moment where it really is coming together for AI, maybe punctuated by all these AI images that we're seeing with ChatGPT.
Do you agree?
I completely agree.
I will say this about the last few months, and I would actually say even thinking back to the Super Bowl,
there was still a little bit of skepticism I would hear from everyone.
Everyone I know is using some kind of chatbot right now.
A lot of people are paying for it.
It's become kind of the norm.
Oh, I have a ChatGPT Plus subscription or a Claude subscription.
But like it feels like more and more people are putting in their budget,
20 bucks a month, which we have been doing for a long time.
And choosing one.
And it just is becoming more and more part of their daily habit.
I think Gemini seems interesting because, again, it's free, it's really good, it's getting better,
and it's integrated into the entire Google suite of products.
So it's still interesting in terms of who could be number two or even overtake number one.
But overall, I do agree.
We've passed the inflection point.
It's normal.
And again, I was just traveling for a few weeks,
and the number of people around me I saw taking pictures and putting them into ChatGPT for
translations, or just asking questions around travel. I myself, this was an all Perplexity,
ChatGPT, even Meta Ray-Ban, asking-questions-on-the-fly trip for me. The days of just pure Google
search are long gone. So it's happening. I completely agree with that. Who's going to make money
and how? That's a separate question. Right. And this is definitely a moment where it goes
from toy to being practical.
And just thinking about the image gen that we saw from ChatGPT over the past week,
yes, it's been fun to turn ourselves into the Muppets, but it actually has a real business
use case as well.
And this is, again, from The Information: there's this company called Sola Wood Flowers.
It's a Utah-based e-commerce company that sells replica flowers, mostly used in weddings.
It canceled its plans to spend between $150,000 and $200,000
on photography this year after its manager saw ChatGPT's ability to place customers' real images into an AI-generated scene.
So we're going to see this really have, I think, significant economic impact and consequences in the advertising industry.
It's also going to definitely be used in the interior planning, interior design industry.
There was a Dallas-based real estate developer that posted on X.
This was a great thread.
Images of an empty apartment, and then they asked ChatGPT to show it furnished.
And it looked good.
We're going to see inspiration there.
People will design websites and apps.
I've seen some of those come through.
People will design merchandise with it.
You might even see building renderings and custom charts.
And I thought that was really interesting.
Ethan Mollick, the Wharton professor who writes One Useful Thing on Substack, had the new ChatGPT image generator create some pretty good infographics.
And his point here was that basically the image is coming from the model, not being sent off to some separate image generator.
And so you're starting to see just really intelligent AI image generation, where you can actually get accurate text and real infographics coming out of the model.
So again, just going from effectively toy to something with real business value.
Yeah, I think the visual side of things or the image side of things makes this even more promising to reach a wider audience because it just feels and looks more real.
Like, it's one thing with text, it's just not as, I don't know, exciting or enticing, and it's also harder, when you prompt, to understand the nuance of what that output is.
But to see, okay, wait, if I prompt this one way, this is the type of actual visual
representation of that prompt, and if I change a few words, that changes the actual style,
really clear images, I think, make this a lot more accessible for people. I will push back
a bit that there is this constant gap between what you see as a cool example on X and when you go
in the real world and try to create things for your business. And I have been working with a lot of
generative AI imagery, especially for marketing and advertising, over the last
couple of years. And it's gotten a lot better, but to really get it consistent and good enough
to push into the marketing sphere, I think it's still
very difficult to make it, like, incredibly consistent. It's really easy to be like, okay,
create me a fake DTC brand called Ranjan's Snacks, a healthy snacks brand,
and it makes like a really nice looking packaging logo
and then even a website.
I was actually playing around with this
and this came out nicely.
And I'm like, holy crap,
I could actually turn this into a Shopify website
if I actually could manufacture snacks.
But anyway, I'm going to be launching like a thousand random products, I think, soon.
But in reality, like to do that on a consistent basis
for a real business, it's not there yet.
It's getting closer, but it's why you get this kind of feeling of letdown: because you're
promised this great thing. You can do a really cool experiment or Ghibli-fy yourself, and that's
quick and cool and it works. But when you actually have to go try to put this in a work context,
it's suddenly not as cool. Like the custom charts, I guarantee you, if right now people actually
take their data, and I use
Claude and ChatGPT to feed in
CSVs and create graphs and stuff and it
works, but to upload the charts themselves and to extract
data or to transform them into
different visualizations, if that stuff's
not perfect, it's going to be
a problem.
Don't fight the revolution, Ranjan.
Cancel the flower photo shoot.
Fire your ad agency. Welcome
to the moment. It's here. I'm keeping my
flower photo shoot. At least
in 2025, it's in the budget
for my replica flowers.
I don't use real flowers.
No, no, all jokes aside, I think you make an excellent point here.
We're going to cover it in the next segment.
And one more thing about this.
I was, again, going through all the examples
that I've seen of AI coming through.
This one, I don't know.
I can't say I hope it's not true, because I hope it is true,
but I kind of don't feel like this could possibly be accurate.
There is a study out of Dartmouth, again, from The Information,
that says that a custom-built AI chatbot called Therabot reduced patient symptoms at a level
comparable to traditional therapy. People with depression reported feeling 51% better on
average, while many people with anxiety reported a 31% average improvement. And this was the first
clinical trial on AI therapy via chatbot, according to the researchers. I kind of don't
think that last sentence even holds muster. Are you going to go to an AI therapist? Are we there?
That story actually reminded me of the Evan Ratliff Shell Game episode, an interview we did,
where he cloned his voice with an AI and sent it to an AI therapist.
Like, I don't know. I do think, for this, it's chat, it's back-and-forth conversation.
It's relatively structured and programmatic answers.
And I say that, I mean, with a grain of salt, but a trained psychologist
is going to have really structured ways of answering your question.
So there's no reason that shouldn't happen here.
Whether people are okay with that or feel comfortable with that is another question.
But if it's literally texting back and forth, is it really that different than sitting
in a room with someone and talking to them face to face?
it's different, but could it do a relatively good job?
I think, I don't think that's crazy.
Yeah, I would certainly not sit down with an AI therapist, not yet at least.
Sit down or text with, like chat with it either.
You let these things into your inner self and then they can manipulate you.
I just don't feel okay with a computer program doing that.
But I do think going back to the Evan Ratliff episode, it was really funny when the AI
therapist is telling his AI bot to breathe into a balloon and fill the balloon with all of his
anxiety and worry and then let the balloon float away. And the AI bot is like, I am breathing into
the balloon now. And the therapist is like, good, good. I don't know, is scrolling through
your Instagram feed really that different from letting a technical system know all of your
innermost desires and then presenting you with algorithmically generated or curated content to
answer those needs and desires? Not to get too philosophical here, but. Yes, I think it is.
I mean, I really think, you're talking, Instagram is only a subtle manipulation, though it is
manipulation. I think that when you're speaking with an AI therapist...
maybe I should try it before I knock it, because I have spoken with the Replika
AI companion. And that certainly opened my eyes once I started talking to that.
How'd that go? How did that go? Yeah. Are you in love? It was a little too real for me, man.
Are you in love? I didn't fall in love, but I could definitely see how people do.
Yeah. Yeah. So I do think that, yeah, these things can get real. And what happens if
you develop a deep relationship with this AI therapist? The company updates the, you know,
the software and the next thing you know, it forgets everything about you.
It would be a pretty dramatic incident.
I could just imagine the like scroll bar loading.
It's like, sorry, we have lost your data.
Please start again.
Could you imagine?
It's happened before.
It's happened with Replika.
Okay, one last story about AI.
Then we're going to move on to, we'll move on to our next segment.
OpenAI is planning to release an open-weight language model in the coming months.
This is from Sam Altman.
We are excited to release a powerful new open-weight language model
with reasoning in the coming months, and we want to talk to developers about how to make it
maximally useful. It's the first open-weight language model release since GPT-2. We've been thinking
about this for a long time, but other priorities took precedence. Now it feels important to do.
I have so many questions, but I want to hear what you think about this, Ranjan.
Well, again, I love that. I mean, you just raised $40 billion, so you can at least make it,
maybe, a year or two without needing to raise again. So you can start to say, maybe we will
totally open source our models, or at least provide the weights. I think this is one of those
odd, interesting things where OpenAI, to me, still lives as kind of a research house versus a
fully operational capitalist business. To me, there's still this: they want to be a leader
among the research community. They want to be a leader among AI thinkers.
And obviously, DeepSeek created the entire conversation around the need to or the potential around open weight models.
So, but I will also agree.
I'm probably equally confused as to why now.
I'm just trying to help them.
They agree.
It's the product and it's not the model.
So Sam's saying it's all about the model being commoditized, and now it's just the product that matters.
But we're going to invest $10 billion in our next model.
That's weird.
That is weird.
You're right, it's hard to,
it's hard to basically square that circle.
Yeah, I think
there's a lot of
circles that are not squared
in the overall OpenAI structure and story.
But they make good products.
They got good models.
Here's my conspiracy.
Okay, there we go.
Elon Musk is trying to stop
OpenAI from going for-profit,
saying that the company abandoned its original open-source methods.
And what happens?
OpenAI raises $10 to $40 billion,
we still don't know which, predicated on it transitioning to a for-profit.
What's going to make that for-profit transition easier?
An open-source model.
I like it.
That's it.
No, I mean, I genuinely cannot think of another.
I tried a second ago.
I didn't even feel convinced myself as I was trying to explain my theory.
I like that one a lot better. And actually, that's going at Elon in that way. That's fun.
That's fun. Oh, yeah. I mean, you can just see it now. Your Honor, we are OpenAI. Just look at this.
Listen, look at all the participation we got from thousands of developers as we requested feedback.
Clearly, we are serving the market in the way our initial charter intended. We motion for this case to be dismissed.
Case closed, Masa, your check will be wired immediately.
There we go.
All right.
Let's take a quick break.
When we come back, we're going to talk about a couple of, I would say, anti-AI
op-eds, including this one from the New York Times, "The Tech Fantasy That Powers AI Is Running on Fumes,"
and another from CNN saying the problem with Apple Intelligence isn't Apple.
It's AI itself.
Do these authors have any merit in their attack?
We will dig into it right after this.
Hey, everyone, let me tell you about The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines
in 15 minutes or less and explain why you should care about them.
So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast Friday edition running on a Wednesday as we break down the news in anticipation of an interview dropping Friday.
Of course, Ranjan and I will be back not this Friday, but a week from Friday.
So let's talk about AI being somewhat mid.
There is a New York Times op-ed called "The Tech Fantasy That Powers AI Is Running on Fumes."
It's by Tressie McMillan Cottom.
She says,
Behold the decade of mid-tech.
This is what I want to say
every time someone asks me,
what about AI with the breathless
anticipation of a boy who thinks this summer
he, oh my God,
I cannot, do I have to read this?
I think you do.
I think you do.
This is what I want to say
every time someone asks me,
what about AI with the breathless anticipation
of a boy who thinks
this is the summer he finally gets to touch a boob.
I'm far from a Luddite. It is precisely because I use this technology that I know mid when I see it.
She goes on to argue that artificial intelligence is no revolution. It is middling tech.
It is something that is promised to do magical things. But when you put it into production,
it doesn't. Think about checkout, she says. Checkout automation was supposed to change the experience
at the supermarket. She calls it pretty mid. Cashiers are still better at managing the point of
sale. She says, think about facial recognition. That's supposed to get you through security faster.
But the TSA's adoption hasn't particularly revolutionized the airport experience or made
security screening lines shorter. Artificial intelligence is supposed to be more
radical than automation. Tech billionaires promise us that workers who can't or won't use AI will be
left behind. Politicians promise to make policy that unleashes the power of AI to do something,
though many of them aren't sure exactly what. I wanted to hate this op-ed, but as I read it,
I started to think, you know what, we still haven't seen the killer use cases for AI. Yes, you can
use ChatGPT to help you sort through a document a little bit more, but has it lived up to the boast
of the technology being that revolutionary and, you know, really empowering the people that use
it beyond those that don't? I don't know. I couldn't fully hate this story.
This article, this story took me on an emotional journey as well. Because I think same thing.
I wanted to hate it. But also, it raises the point. I feel it's still selling the promise
short, but it's not lying about the present. And this is where, and I've been ranting about
this for a long time, AI companies have a branding problem. And we see this in the way the Super Bowl
ads went, the way Gemini was presented, and whatever
ChatGPT was trying to do with that ad. Overall, everything, the promise of automating entire
industries or these big boastful things or people on X doing threads about 10 crazy use cases
I just discovered with the new Claude update.
Like, that is what everyone is building the promise around and what they're
raising money around.
So they have to do that.
And that's not where the technology is right now.
It's not.
If you are very good at using it, you can make it do a lot of those things.
And there's more and more, again, products being built on top of these models
to allow more and more people to do these things.
But because the industry is promising that that's the present
as opposed to the even near future,
it's going to leave a lot of people feeling like this.
Again, going back to my mom at the Super Bowl,
turning to me and being like,
so what can I do with this AI?
And me not being able to give her a clear, simple explanation,
and these products not delivering that for her.
And we're going to talk about Apple Intelligence, our favorite topic.
But certainly that as well, not actually delivering on what they show in the ad.
So I think this is something the industry, and I think this year there's going to be a reckoning with it,
that when you are promising too much, at a certain point, consumer fatigue is going to set in.
It's going to hit.
And people, you might see those user numbers start dropping.
And then you can actually hopefully get to the actual work.
But I think the industry is moving in the wrong direction on that side.
And this article captured it.
I'm totally on board with you.
And it's funny because it's the exact emotional journey that I had, where I was like,
you're wrong.
There's so much useful stuff you can do with AI.
And there were passages in there that felt beyond over the top for me.
Here's one.
The tech fantasy is running on fumes.
We all know it's not going to work.
But the fantasy compels risk-averse universities and excites financial speculators because it promises
the power to control what learning does without paying the cost for how real learning happens.
Just this idea that, okay, we all know it's not going to work,
struck me as being totally removed from the details.
But then she also talks a little bit about why she is so negative on the technology.
And again, it's the delta between the promise and the reality.
This is what she writes.
Every day an internet ad shows me a way that AI can predict my lecture, transcribe my lecture,
while a student presumably does something other than listen,
annotate the lecture, anticipate essay prompts,
research questions, test questions,
and then finally write an assigned paper.
How can professors out-teach an exponentially generative prediction machine?
How can we inculcate academic values like risk-taking, deep reading, and honesty
when it is this cheap and easy to bypass them?
So she's an academic,
but I think this point is valid in that the industry has promised to do all this.
I mean, think about how often we are talking about industry promises of AGI.
And I think I'm pretty proud of the fact that on this show, we haven't gone with the marketing hype and have tried to take a nuanced and cool-headed approach to what we're hearing from the industry.
And there's a reason for that.
We think that it's important for listeners to get the truth here.
And by extension, Ranjan, you've been talking for a long time about this branding issue, that the promises from the industry that you're
going to have, you know, these brilliant AIs that are going to be walking around with you feel
like they should be here already. And I think the industry oversells what there is today.
It's not there. It's going to take years. Sort of undersells what we already have. Right.
So there's both this overselling and underselling happening in terms of the actual capabilities.
And then it's no surprise that you're left with somebody who is,
not a techie, but deals with the technology and kind of looks at you and says,
you know what, shut up, right?
Yeah, I think that's exactly it.
I mean, in that gap between expectation and reality, like, even
ChatGPT generating images for brands. Like, I worked on something with the chief
marketing officer of a fashion brand. And I'm like, okay, here is the product on a completely
artificially generated model, which, this is like a year ago, and it's blowing my mind that I've
been able to do this. And the first comment is, the print on the fabric is not exactly the same.
And it's a very intricate, detailed print. And it's like, wait, do you not understand what just
happened? I just created a person and put this product on them. And the
first reaction was a bit of disappointment because the expectation was it was going to be perfect.
And I'm sure, like, this is happening across the entire industry, especially when you get into
the more enterprise and professional use cases. And I do think this is the exact branding
problem, in addition to the fact that she even throws in that DOGE has been
an infomercial for AI, that the use cases and where it's living and who's promoting it
are causing some problems too on the branding side.
Right.
Now let's go to this CNN story.
CNN says Apple's AI isn't a letdown.
AI is the letdown.
And I think continuing on this Apple beatdown that's been going on in the press for the past
couple of weeks, it says the real reason companies are doing this is because Wall Street
wants them to.
Investors have been salivating for an Apple supercycle, a tech upgrade so enticing that consumers will rush to get their hands on the new model.
Fact check, true.
In a rush to please shareholders, Apple has made a rare stumble.
The company is owning its error and now delaying the Apple Intelligence features to the coming years.
And this goes to a little bit, this is actually a very incisive point that this author makes here.
In June, they write, Apple floated a compelling scenario with the newfangled Siri.
Imagine yourself frazzled and running late for work, simply saying into your phone,
Hey, Siri, what time does my mom's flight land?
And is it at JFK or LaGuardia?
In theory, Siri can scan your emails and texts with your mom to give you an answer
that saves you several annoying steps of opening your email to find the flight number,
copying it, then pasting it into Google to find the flight status.
If it's 100% accurate, it's a fantastic time
saver. If it's anything less than 100% accurate, it's useless, because even if there's a 2%
chance it's wrong, there's a 2% chance you're stranding your mom at the airport, and your
mom will be rightly very disappointed. Our moms deserve better. Our moms deserve better, I agree.
The thing that kills... Here's to the moms, and picking them up at the airport
100% of the time. The thing that kills me about this, honestly, is that that query should be
answerable at a 100% success rate. I'm sorry, Apple, you guys should figure that out. That is a
straightforward thing. But this is, again, going back to the problem. Well, actually,
I will disagree that Apple's AI is a letdown, and I know regular listeners know that's how I feel.
But that is a query that most AI systems and chatbots can actually handle: if you upload a bunch
of emails and ask that exact question in Claude, it will get that answer right.
So I think Apple, the biggest letdown, and again, going back to the gap between promise
and reality, is they essentially promised everything all at once to everyone, rather than being
like, okay, let's solve the go-to-your-inbox-and-answer-all-of-your-travel-questions case, travel
planning. Make travel planning a little feature, make it a little app, make it like a pop-up
of Apple tips for Apple Intelligence. But instead, the idea was all questions could be answered right
away. And then, of course, it's going to be a letdown. But again, I think AI should be able to
today solve a lot of this stuff. Yeah, I wanted to read that out there, A, because it sort of harks
back to our attempt to use Siri. Yes. And failing miserably.
And I think you kind of seized on my follow-up point here, which is AI can and should get that right.
There's no excuse not to get that right.
And it is going to start delivering.
And that's why, like, when we talked earlier in the show about how AI is finally hitting its stride, this type of stuff is going to push it even further.
I'll give you one example.
I was in my Gmail inbox and just used the Gemini.
I'm always reticent to use these things because they usually don't work.
But I said, okay, I have a pretty complex task, and it will be worth wasting the 30 seconds
on a Gemini query to see if it can work.
And that was, I wanted it to pull out all the paid subscribers of Big Technology. I wanted to
I wanted to pull out their emails and separate them by commas so I can invite them into our
Discord.
And I typed that into Gemini, and lo and behold, Gemini produced the list, perfectly accurate
from a number of emails going back a month.
And I was just able to copy and paste that into the BCC field and invite the subscribers into
the Discord. That's incredible. I mean, it is effectively applying a conversational,
probabilistic technology to a deterministic scenario and proving that it can execute.
And once it starts getting that stuff right and doing it for a broad degree of use cases,
whether that's Google or Apple or Amazon or Microsoft or all of them, that's when you're going to
see the movement.
But I think that exact example is kind of a good reference point to one of the points in the article.
She's kind of going at Kevin Roose at the Times, and she's like, he had said...
It sounds like she listened to a Hard Fork episode.
Yeah, and was pissed off.
Yeah.
She both goes in on Kevin and Casey.
Okay, continue.
Because Kevin said there are people who use AI systems and know that they're not perfect.
And that those are the regular users, that there's a right way and a wrong way to query chatbots.
And then the author writes, this is where we, the people, are apparently failing at AI.
Because in addition to being humans with jobs and social lives and laundry to fold and art to make and kids to raise,
we should also learn how to tiptoe around the limitations of large language models that may or may not return accurate information.
to us. I like the line. It's a good line. It's good writing at the end. But also, to me, this is
where, with Apple, it's a letdown on their part, because they promised a human with a job and social
lives and laundry to fold that they'll get all these right. The example you gave is a perfect
example of, if you kind of know what's possible and how to ask, it's going to get it right.
And it's incredible. But this gap between knowing how to use
it, like, that's where either there needs to be more user education or the product needs to get better,
because the models are good enough to answer all these kinds of queries. Yeah, I don't know. I mean, no one
is expecting you to use the LLMs to make your life better. It just, to me, seems like, all right, tech
companies built these tools, go and use them if you want, or don't use them if you don't want,
sort of. I don't know why I'm making that comment, but these investors are certainly needing that
to happen. They need that, but you don't have to, as a consumer, you have agency. And 20 million
people feel it worthwhile to pay for ChatGPT. So clearly this is working for some people.
I would argue your agency with Apple Intelligence is a bit limited when it's shoved into every part of
the iPhone and the product. And I accidentally call Siri, even on my MacBook, right now. Yeah. I think
we're questioning free will
in terms of interacting
with large language models.
I don't know.
Just turn it right off.
Do you think, do you know how to?
I haven't tried.
I know what's possible.
I haven't tried either, but I might go and do it
and we should talk about it next week
because I have a feeling
it's probably going to be a pain in the ass.
All right. Next week, folks,
you'll tune in. Ranjan and I will both try to turn off
Apple Intelligence live on the air
and we'll see if we can do it.
All right, very quickly on this line,
Amazon's Alexa Plus is out, and this is from the Washington Post. It's missing some features.
The new AI-enabled assistant Alexa Plus is launching on Monday. So that's Monday of this week.
But not all the features the company showcased are ready. Some of the new features that aren't
going to be coming out include ordering takeout on GrubHub based on a conversation about
what you're craving or using Alexa Plus to visually identify family members and remind
them to do specific chores like walking the dog. I guess that's if you have the security camera
in the house. Other stuff like brainstorming a gift idea or generating a story to entertain your
kids also won't be released until later. So I don't know. We saw the live demo at the release event,
but I think this is just another case of Amazon or of a company making a big promise about an AI
assistant. Now, at least they shipped it. I guess they shipped something. I tried
to get it to work on my devices. I think I have to disable multi-language and then I can start
using it. And so we can report back on that next week. But I don't know, should we be excited
that they launched or also just be like, all right, here we go again. They are missing features,
of course. This is where I am genuinely excited about this. As we talked about a few weeks ago,
I might get rid of all my HomePods and move back to Alexa.
But to me, what was very interesting about that announcement is brainstorming a gift idea
is a pretty straightforward generative AI question.
Generating a story to entertain your kids, I do that all the time with chat GPT voice,
and it's amazing.
I'll literally be like, tell me a story about this really specific subject involving my kid,
like this really specific scenario, and he loves it. So it can do that job. So why...
Using facial recognition to identify family members and remind them to do specific chores, that's a tougher
problem, that's a tough problem to solve. And ordering on Grubhub and not getting that
wrong, like, at a hundred percent accuracy, otherwise people would be really pissed, I get that stuff
taking time. But why advertise that stuff if it's not close
to ready. That's just like, guys, bring back consumer trust in this. Make people happy. Make
them excited. You know what? OpenAI is actually doing that pretty well.
They are. They are doing it the best of everyone, which explains why they have the product that's
working. Yep. No, that's why they're at 500 million. That's why people trust it even more, because
you go there, you actually... Do you know what I'm going to admit right now? I was unable to Ghibli-fy
an image.
I kept hitting a content
policy thing when I tried to put a picture
of me or my wife up there and
tried to Ghibli-fy it.
Were you able to do people?
I have been able to, yeah.
What, I'm paying you,
OpenAI. I'm paying you.
Let me Ghibli-fy myself.
I don't know. Yeah, what I would do is
I would just go to the web
version. Have you been using the app? The web version
is seemingly more permissive.
No, web version. I actually got
a little obsessed with this because, on Sunday, I landed back in New York, and all I saw was this
all over different social media platforms. And I'm like, okay, how can I not do this right now?
I'm going to write a whole New York Times op-ed on how AI is letting me down because
ChatGPT is not letting me Ghibli-fy myself. It's mid. It's mid. Okay, let's just end this
segment. I think we have consensus here, which is that we both believe that AI is not going to just
fizzle out, and it's not a, you know, fake revolution, so to speak. But we also think that
the overpromising is going to have some serious consequences, and we're already starting to see
some signs of that backlash. Agreed on that. All right, a couple of minutes left. Let's just
talk about the xAI acquisition. So xAI bought, or the X acquisition, too many X's,
xAI bought X. So Elon Musk's AI company bought X. It's kind of a weird deal. There was one set of
advisors working for both sides. It put xAI's valuation at $80 billion even though
there was no new money in, and that increased its valuation from $50 billion to $80 billion. So that
$33 billion that X got is actually probably smaller if you just use the last fundraising
amount. We have a professor from
UCLA that says it's funny money.
He told the Wall Street Journal it's funny money.
It's like using Monopoly money to buy
Pokémon cards. And as someone who
has done that in the past, let me tell you,
don't knock it till you rock it.
Professor Andrew Verstein.
And it is
interesting that basically
we're seeing AI, which is the
next platform, swallow
social media, which is the
last platform. And this is what Axios
says: AI eats social media, as xAI swallows X. All your X data was going to be used to train
these models anyway, and now it definitely is. And there's no getting away from it. So that's the
headline on the deal. Ranjan, I'm curious what you think about it. And if there's anything
you think the common narrative might be missing about what this deal means. I think the
"using Monopoly money to buy Pokémon cards" take, if that's the common narrative, is the right one.
Again, kudos to Elon Musk and the advisors who worked on both sides of the deal,
because you never see something like that: to raise the valuation of xAI from 50 to 80 billion,
and then simply add in the $33 billion price tag.
And maybe that's what you're attributing the rise in valuation to be, the simple add-on of
the $30 billion or so for X. To then be able to just make up whatever valuation you want for
X is incredible, because remember, he bought it for $47 billion, I believe it was,
they valued it at 45-ish, and they said $33 billion was the valuation minus debt.
Like, basically, he's able to just say, oh yeah, it hasn't lost any value, even though
we've all seen endless reports, and you can even see it in
the advertising when you load your X feed, just how ridiculous it is: they're not making
money, they're losing money, it's not going in the right direction. And he just was able to say,
oh yeah, it's worth what it was when I bought it, that's okay, and now it's part of xAI, my other
company that has an obscene valuation. It's the same investors on both sides. It's the same
bankers and lawyers on both sides. I mean, this one, Masa is jealous of this one. Yeah.
I mean, he thought his $40 billion round, with forecast revenues where the investor is actually paying money to the portfolio company, was impressive.
This puts that to shame.
Well, the interesting thing is that now xAI's revenue is going to be coming from X, which is interesting.
Like, X will be the revenue arm of xAI in some ways, because you're going to pay for Grok through Twitter, or old Twitter,
So that adds an interesting wrinkle to it.
But let's just end.
Let's end on this.
This is the close, well, it's not fully the closing chapter, because we don't know what's going to happen to xAI, but let's say the intermediate closing chapter, which makes no sense, but you know what I'm saying, on the X saga.
Was that a good buy for Elon Musk strictly from a business standpoint?
I think it was a great buy for Elon Musk from a business standpoint.
because he still owns it, no cash changed hands,
and he got to just put whatever valuation he wanted
on a property that is not worth that.
So that kind of financial engineering,
I think we should all be fascinated, proud, and terrified of.
Well, I think that says it all.
Ranjan, it's so great to have you back.
Welcome back to the show.
It's good to be back.
See you next week.
All right.
see you next week. On Friday, yes. Thanks everybody for listening. Special episode coming up this
Friday, so stay tuned for that. And then Ranjan and I will be back a week from Friday to break
down the week's news as usual. We're back, baby, back in action, breaking down AI news. Like,
it's been no time at all. All right, thanks for listening and we'll see you next time on Big Technology
Podcast.