Big Technology Podcast - Elon Sues OpenAI, Should Sundar Stay?, RIP Apple Car
Episode Date: March 1, 2024

Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) Elon Musk suing OpenAI for breaching its founding agreement 2) What Musk can accomplish with the lawsuit 3) The SEC looks into OpenAI's conduct 4) Microsoft funds OpenAI competitor Mistral AI 5) Does OpenAI's chaos make Google look good? 6) The chaos inside Google's organization that led to its Gemini problems 7) Is Sundar Pichai the right leader for Google? 8) The death of the Apple Car 9) Apple's generative AI potential 10) Apple's attempt to acquire Bing 11) Amazon aggregator and COVID darling Thrasio goes bankrupt 12) AI Willy Wonka --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology Premium? Here's 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Let's talk about Elon Musk suing OpenAI, the state of Google amid its latest crisis,
including Sundar Pichai's job security, the death of the Apple Car, an Amazon aggregator
that boomed during COVID going bust, and AI Willy Wonka inspiring panic and confusion
among parents and kids alike in Scotland. Oh, that's going to be fun. All that and more
coming up right after this. Welcome to Big Technology Podcast Friday edition,
where we break down the news in our traditional cool-headed and nuanced format. Wow, do we have
a show for you today. It was looking like the week was just going along with one story, Google being the
AI-scandalized company of the year. And then next thing you know, Elon Musk pulls out this lawsuit
suing OpenAI for breaching its founding mission. So we're going to talk about that first. I want to
first welcome in Ranjan Roy, Ranjan of Margins. You can find it at readmargins.com. Ranjan,
welcome. You know it's a big week when Elon Musk is not getting sued, but suing someone else.
So the gloves are off right now.
And honestly, it's not surprising.
And I almost can't blame Elon, right?
He kicked in $44 million at the start of OpenAI to build what was agreed upon as,
and this is from the Wall Street Journal,
a public, open-source AI company.
Then OpenAI gets in bed with Microsoft.
And now Musk is basically saying that it's propping up Microsoft.
It's closed source.
It's entirely meant to improve Microsoft's bottom line.
And he's coming after the company.
So I am curious to hear what you think about whether this lawsuit has merit and what's going to happen with it.
I think this is incredibly interesting because it's yet another wrinkle where OpenAI still has never gotten past its original sin, let's say.
And I wouldn't even call it a sin necessarily because the idea.
No, no, but it's technically a founding mission.
Founding mission. Founding mission sounds better. A public benefit corporation to empower AI to help
humanity, whatever else. The whole point of it was to build this outside the hands of big technology
companies who would only use it for profit. And that's exactly what they did. And we've seen this
tension bubble up over and over again. I mean, the Sam Altman firing and rehiring is still one of the
weirdest, most amazing corporate things to happen for a company of this value ever, if not recently.
And I think I'm excited to see if Elon Musk's lawsuit is what actually pushes this
conversation to where it has to resolve itself at some point, because they're still trying to
thread the needle on what exactly OpenAI is.
Yeah, it was so interesting because, like, the Wall Street Journal took this
angle: it sets up a potential courtroom debate over how scared we should be about advances in AI and
how soon. And I'm like, that is not what it sets up. I mean, it is, but it really sets up an important
deconstruction of whether you can start a company as a nonprofit, mission-driven, and then turn it
into something for-profit. I mean, OpenAI is valued at $80 billion right now. Basically,
all these OpenAI employees said they were going to follow Sam Altman to Microsoft or pushed so hard
to get Sam Altman back because they had a lot of money
that was tied into this.
So it's like it's become an entirely capitalistic enterprise,
which like no problem with that,
if that's how you start, but it just isn't how it started.
And it leads me to believe, like,
can OpenAI be an entity
that continues to exist for a long time in its current state?
Like, has it gotten so big that it just cannot persist?
Because importantly, and I think this is important to discuss,
Elon isn't suing them
for money. I mean, he's the richest man in the world. He's not looking for money. What he's looking
for is a number of things, which we'll discuss, but basically injunctive relief here that's going to
change the way that the company operates. And it does need to change. When Sam Altman got
fired, there were these amazing diagrams of how you had, like, you know, a board of directors, and then
OpenAI, the nonprofit, which wholly owns and controls OpenAI GP LLC, which controls a holding
company, which owns the employees and investors, which then goes down to OpenAI Global,
which is the capped-profit company. It's not even a for-profit company. They have a capped
profit. The whole thing is such corporate law engineering. I'm not even sure exactly what the
word would be here, like relative to financial engineering, that it needs to clean up its corporate
structure, its cap table, what it's allowed to do and what it's not allowed to do, but it still
hasn't. And I think this lawsuit hopefully can push it to. I do agree that the AGI focus in the
Wall Street Journal doesn't make sense to me, really. Because, yeah, this is just about the corporate
structure. Who is close to AGI and when will we achieve it? I don't
know. That's a debate for another time. Right. Well, actually, Musk does include AGI in the lawsuit.
His case is almost predicated on the fact. He's saying that under OpenAI's new board, it is not just developing
but actually refining an AGI. So he's saying it has AGI, artificial general intelligence,
human-level artificial intelligence, to maximize profits for Microsoft rather than for the benefit
of humanity. So it's almost as if Musk has to argue that they do have AGI in order to prove his
point. But then you look at what he's trying to do, and that's where it gets really interesting.
So, you know, yes, part of it is OpenAI needs to clean up its corporate structure. But another part
of it is what Musk is asking for. And this is again from the Journal. In the lawsuit, he asked for an
order compelling OpenAI to, one, make all its research and technology open to the public, and,
two, for the company and Altman to be required to give up all money received as a result of the
practices alleged to be unlawful. I mean, basically he's trying to take a grenade to the thing and
restore it to its original mission, which he chipped in a lot of money for. He obviously left,
but I think this is going to be a serious problem for OpenAI.
I agree.
And it is a bit rich when, remember, Elon Musk now has xAI, and Twitter, which is now X, has
Grok.
Tesla is an AI company.
So there's certainly self-interest here in terms of taking OpenAI out of the equation.
But I think between this, and you have the SEC this week announcing that there is going to be a
review of the internal communications at OpenAI around the firing of Sam Altman.
I mean, because clearly, for any kind of investor in an $80 billion company, to have corporate
shenanigans of that level just go back and forth that informally, I think it's important
for anyone who's investing in this company to really have it looked at and understand what is
going on over there. They put out great products. And it looks like with Sora, they might have
another huge hit. But the actual organization, there still needs to be a lot of review over how
the thing is structured, because it doesn't make any sense. Well, let me ask you this then. I mean,
is there a new structure that they can implement that will satisfy Musk? I mean, basically,
everybody that I'm hearing right now is saying that, like, the judge is not going to necessarily
transform the structure of OpenAI. But you could see OpenAI potentially pressed to settle.
You can have long discovery that might make it painful for them, that might push them towards
Musk's side. What can they do right now? Like, do they need to effectively become a for-profit
company and give Musk a stake in that? Is that what we're looking at? I mean, maybe. Maybe.
I was kind of confused when they did go down this road, again, of really opaque, weird corporate
structure, capped for-profit company. They were trying to halfway it between the original mission
and a traditional for-profit company.
And I don't think any of us have any issue with them being a for-profit company.
So just make the full transition.
And I think everyone can be happy.
And if this forces it, I think that's a good thing.
I mean, I think it might be successful.
Like, I saw that and I, like, initially was like, ha ha, you know, Elon's suing OpenAI.
He's had this problem with them forever.
But the more you look at this, the more you see that, yes, there was definitely a breach of some of that founding agreement.
And Musk, importantly, he has resources like a big corporation, and he can see this through.
It's not like, you know, one disgruntled executive or a VC suing a company.
It's Elon, and he can really cause some damage here if he wants to.
No, I think he can, and I think OpenAI remains the most fascinating company in tech right now,
because between this organizational structure debate, between the actual business,
side, which they're still doing pretty well on, but, you know, to reach that kind of valuation still presents a lot of challenges. I think, almost from a PR perspective, Sam Altman and the whole I'm going to raise $7 trillion, was it? Whatever trillions it was, these are mistakes. Like, he has gotten to a point, making these kinds of statements, that presents OpenAI itself as a company as something different,
presents him as something different. It moves it so much further away from a
benevolent company with a great product that's pushing forward AI. I mean, even thinking about,
you know, I was listening to the episode you did with the NVIDIA head of DevOps CTO. You know,
he was talking about how artificial intelligence, traditionally, every big conference, everything was
always about the research papers being done. And the release of ChatGPT
transformed the way everyone who kind of lived in academia,
it transformed the entire conversation into actual products,
actual consumer utilization, actual business utilization.
And that's, it's transformative what they did,
but they still have to kind of go back to that founding mission
or original sin,
whatever we're going to call it.
Now the SEC's entry into this is kind of interesting.
I'm kind of like, SEC, go away.
So first of all, like their main point is,
they're looking into whether the company's investors were misled during the boardroom crisis last
November when Sam Altman was fired and reinstated. Like, how could there be any misleading of
investors? They didn't say anything. To me, this seems just like the SEC raising its hands and
saying, I want to be involved in this as well. It's fair that in terms of misleading, no one has any
sense that they had their shit together back in the fall when the Sam Altman firing happened.
I don't think anyone anywhere is pretending that things were hidden, because everything seemed to be out in the open to a shocking degree.
So that part I have trouble with, but it's still fair. I think it is important.
This is one of the most valuable private companies in the world.
What exactly happened during that time?
What kind of investments are being done under the radar or in conflicted ways?
I think that is important to know, actually.
like, you know, when Sam Altman brings up conversations around raising $7 trillion, what is the
relationship between that and OpenAI? There's enough conflict of interest clearly already in the way
things have been set up, the way the investment fund that Sam Altman had already created, and I believe
was the sole owner of, within OpenAI. I think with all of these things, there needs to be more
information about exactly how things are structured underneath. Now, this is also from Axios,
citing the New York Times: an internal investigation by law firm WilmerHale into the crisis is nearing
its end. So credit to the New York Times for that. But that's pretty interesting. Like,
I don't know, maybe that will come out and we'll be able to finally learn a little bit more about
what went on there. Because it's not over, right? It's continuing to roll on.
That was my favorite part of the New York Times story that Axios covered.
It was, of course, that the internal investigation by the law firm is nearing its end. And they'll probably
find no wrongdoing, and we'll all just move on. Like, it's just, again, one of those things: an
internal investigation by the same corporate law firm that is probably, you know, working on
all of your financing needs and all of your other corporate law needs is probably not
going to get you the truth, necessarily. You never know. All it takes is one lawyer who was on that
who has loose lips or, like, drops a document. You know, these investigations can turn things up.
So maybe OpenAI, in the spirit of being open, will share
that report with all of us.
I'm sure they will.
I'm sure they will.
Maybe it will accidentally be fed into ChatGPT and we will be able to access all that
information via some insecure channel.
Right.
Did you see that OpenAI also, they have this lawsuit with the New York Times, and
they're saying the Times hacked ChatGPT by, like, writing prompts trying to find their stories?
That's what they call a hacking.
That's, I think that's exactly what happened.
Wait, they called it a hacking?
Yeah.
Is this, I'm guessing, related to the New York Times
lawsuit where they recreated and showed that, like, their articles could be created
verbatim? Right. But then, yeah, I saw in the original OpenAI response, their claim was that the
New York Times created very, very specific prompts to get that language replicated. Right.
But now they're actually using the language hack, which is interesting. So I call
it good prompting. Good prompting. I agree. I'm with you. It's good prompting. You know how we've
talked on this show about how Microsoft, so this last part about OpenAI, we've talked on the show
about how Microsoft probably has grown a little bit nervous about the structure of OpenAI and is
looking to diversify its bets. Well, a very interesting thing happened this week where
they actually invested $16.3 million in Mistral AI, which is that Paris-based AI startup that
raised a ton of money before it even got to work on a product just by the caliber of its researchers.
But I think people pointed this out, and rightly so, this week, that Microsoft investing in an OpenAI competitor is a beginning sign of that company, whether that's building internally or investing externally, trying to wean itself off its dependency on OpenAI.
And that is another really important wrinkle to watch as this saga plays out.
I think Microsoft has very clearly made that decision that they're not betting only on OpenAI.
In fact, they're going to diversify.
They recognize that being the platform, rather than betting on one model, is much more important.
And Mistral, I think they've raised $415 million already.
Basically, with 22 people in seven months, they got valued at $2 billion.
Yeah, back in December, they'd gotten up to $415 million raised.
Like, you know, Microsoft is going to make sure that all their bases are covered
and they're not betting on Sam Altman given what happened.
And I think it's the right move.
Listen, Ron John, if we add just another 20 people, you know, in six months or so,
you and I can probably call ourselves the big technology AI research house.
I'm thinking 400 million coming to us.
B-T-A-I-R.
It's like the Long Island Iced Tea blockchain company.
I think the key is 22 people, though.
Once you have 22 people, you hit that scale, $2 billion valuation, no question.
All right. When we line up the funding round, we'll make sure to get some listeners involved.
So, you know, leave a comment and you're in. There's a chance, you know. I feel like we have to do like a disclaimer now. But anyway, this is not financial advice and you will not be receiving any equity stake in big technology AI research company.
So as I'm going through this week writing a story about what's happening within Google, you know, Google seemed like the biggest train wreck in the AI world.
People were calling for Sundar's job, which we should talk about. But this whole OpenAI
thing is just making them look good, right? It's like, wait a second, like, you know, they're not
the biggest bonfire right now in the AI industry. There's always going
to be these problems with OpenAI. So, like, as I'm hitting send on this story, which I
published today, on Friday, on Big Technology, I'm just like, oh man, like, is this old news already? But the
problems that Google has encountered are not going to go away either, I guess.
Yeah, I think it reminds us, this is still so early, and that's good.
That's good for us as consumers.
That's good for innovation.
And it's exciting.
I still contend, and we've been saying this for months, this is one of the most exciting times in technology for all of us in years.
This is true transformation.
This is not just getting your food delivered because of ZIRP funding.
This is for real, but who's going to win on this?
I think there's a lot to say.
But Google definitely, we talked about it last week, has had some issues with the way
Gemini's image creation tool rolled out.
And Ben Thompson at Stratechery had a very interesting
piece this week about Google's culture and Sundar's leadership and how their ability to
ship and move things forward really is affecting the way we're all publicly watching the Gemini
rollout. One issue I took with this piece, though, is this idea that he
had said, Google specifically and tech companies broadly have long been sensitive to accusations
of bias. That has extended to image generation. I think there's a lot of talk right now about,
how, you know, is Google too woke about this?
For years, tech companies did not care about accusations of bias.
That was nowhere in the conversation.
Clearly, it's in the conversation right now, and they're all working with, you know,
very biased models, especially around images.
But the idea that companies for years have been walking on eggshells and have been
afraid to roll things out, I think is ridiculous.
I think it's just a sign that, and we talked about this last week, it
was bad AI. It was bad programming. It was bad product management. It was not some, you know,
massive organizational woke issue that caused the failed Gemini rollout. Yeah, and that's exactly
what I found. Like, I was speaking with people within the company and around the company. And it
turns out that they basically cobbled together three different divisions, a product team,
trust and safety, and this other division that no one talks about called Responsible AI,
and basically said, you know, you're responsible for the testing and the rollout of this product.
And even people within the trust and safety division of this company aren't sure how the Gemini issue,
which we talked about, happened. Effectively, if you asked for any type of people,
it would add, in the background, words asking for, you know, diverse portrayals of these people,
without removing those words when it comes to cases where there certainly should not be diverse portrayals,
such as Nazis, right?
And somehow this got out the door,
and it was basically a lack of coordination
between these three divisions
and certainly not a high-level stakeholder
overseeing all these things
or a high-level executive overseeing these things
that really led to a lot of these issues.
And we're now at a point where Sundar came out
and said to fix this,
they're going to start with structural changes.
which means it's probably going to be some organizational changes.
This sort of chaos is not going to be tolerated anymore within that company.
And they already have, and I just reported this as a mini scoop on Big Technology today,
they're already going to have their trust and safety team doing weekend shifts this weekend,
which rarely happens within Google, trying to test Gemini for adversarial testing
on certain high priority topics.
So basically, they're going to start doing
all that testing that they should have done before.
And so if this was Google with an agenda,
like I just don't see it because it's like so chaotic
at this point that you're just not really seeing the results
that any company with a coherent operation would be looking for.
And that's why you're having the structural changes
and that's why you're having this teamworking weekends.
Well, not to get too Elon Musk here,
but I mean, the idea that working weekends
has to be a kind of organizational shift
when you have just released one of the biggest products
that will define the future of your company,
I think people, especially the trust and safety team
should have been working weekends beforehand.
So I'm glad they are now.
But I completely agree with the way you outlined it,
you know, three different teams, lack of coordination.
Because I actually take the case of Adobe,
the way they have approached specifically image generation,
and to do it with training only on uncopyrighted images,
stock images they own,
bringing in, like, outside photographers to shoot images specifically for the training of the model.
They have trained this and understood it in a, you know,
organizationally cohesive way at the foundational layer of the model.
Whereas Google, it was so clear, and this came out,
that it was just, you know, appending to a prompt that you already entered to say,
like, make this image diverse, which is the clumsiest thing you can do.
These are things that have to be done where everyone is on board, and it starts from the beginning
of how you actually build these kinds of technologies. And Adobe has shown you can do it at scale and
it's okay. Right. So, okay, but we've kind of skirted around the main question here, which is that,
yes, we're early on. And yes, there's a lot of room to adapt. But given the potential and given
how important this is, you really have to be sure that you have the right leader in place when you're
embarking on this journey. So I'm curious, from your perspective, has Sundar Pichai,
the CEO of Google, the CEO of Alphabet, done enough to show you that he's the right person?
I mean, yes, he's not behind some kind of wild conspiracy to make the Nazis woke, but he's
also been the person who's presided over this organizational mess, this slow rollout, and allowed
companies like Nvidia, like OpenAI, to take the momentum from him. Well, see, this is where I have a, like,
difficult time, because on these public-facing flubs, Sundar is not having a good run. Like,
certainly they're not winning the PR war right now. But on the other side, again, the stock was up
60% last year. So it's very hard to fire a CEO who's shown strong performance. But then also, two
things in the last week, actually a week and a half, that I think are really interesting relative
to this. While they're fighting the PR war about woke image generation, they just raised the
price of Google Workspace per user by 20%. I mean, when you talk about pricing power, that is
incredible. And I mean, as a user of Google Workspace, I'm not going anywhere. I'm guessing most
people who are set up on Google Workspace are not going anywhere either. You're not. If you are,
you're going over to Teams, and they have pricing power as well. So at the core business,
they are still very strong.
They have incredible pricing power.
They're still going to be able to churn out money.
So even if they're making these mistakes, I think, like, at the core, Google is still strong.
But the other, I'll admit, and I feel like every week is a back and forth with our relationship
with our love for Sundar.
Gemini in Workspace has been rolled out now.
And I've been using it more within Gmail, within Sheets, within Slides,
like summarizing things, and it's pretty good. It's not great. There's a lot of work to do,
but I actually think it's starting to show what can be done in terms of like having this built
in tool, having Gemini directly integrated into all the tools you already use and then getting
the value, and then not needing to go out to ChatGPT, being able to generate slide ideas or document
text directly in the app that you are already spending all your time in. That's always been
where they're going to win or lose, not in these consumer-facing products.
And I think they finally are showing, and they just release this, that they have a shot.
Let me give you, like, the case, because I go back and forth in my mind on this a lot.
I'll give you the case for and the case against. The case for Sundar is, yes,
they just hit their all-time highs in the stock market in January. So it's extremely rare to be
talking about whether a CEO's job is safe when it's five weeks after the company hit an all-time high.
in the market. Like, that is a big part of what they're judged on. They also do have an
AI research house that has effectively thrived under Sundar, Google DeepMind, which is now combined,
working together on products. And they're building and they're shipping, and Gemini 1.5
is pretty impressive from all the looks of it. And it looks like, okay, this is a serious
competitor here. So that's the argument for. Like, why would you remove the guy? The argument
against is this: don't tell me about all-time highs, because we're talking about a moment where
there's, like, opportunities for historic generation of market cap. You look at Nvidia, which rode
this wave based off the transformer model, which Google developed. And of course, Google's not a hardware
company, but Nvidia built the software as well, wasn't shy about it, recognized it immediately,
acted on it, and they've added almost two trillion dollars to their market cap. You look at
Microsoft right now, because they've handled this the right way, the most valuable company in the
world. If Google were to have handled this well from the start, released the consumer product first,
not had all these debacles, been organizationally competent, Google today would be the most
valuable company in the world, without a doubt. The inventor of this, the implementer of it, and
they would have everything internally making it happen, and OpenAI and Microsoft would be
looking like they were catching up with all these organizational issues. So you put that all together
and you say, damn, like, that is a very hard record to defend. I think your against case
is stronger than the for case and might have just pushed me over. I feel we need, like, a dial
every week of how our love for Sundar is doing and where it is. Because love is a strong word, Ranjan.
All right. For me, it's love. For me, it's love. I'm rooting for him. I'm rooting for him. I'm rooting for
you, Sundar. But your case against was pretty strong right there. And, okay, that's fair.
I agree that, and this is one of the few times, normally I will never try to, you know,
say what was the missed opportunity around market cap when your stock is already up a lot.
But that's a very fair point, that right now, this is one of those moments where all of that growth,
you should have had a much bigger part of it. This might be wrong, but I'm going to throw it out anyway.
I think that there's almost like a low floor to replace him.
Like if you replace Sundar with a terrible leader,
Google can only drop so much
because search will continue to be a cash cow.
Who do you replace him with?
I have no idea.
That's where the discussion stops from me.
I have no idea who you would.
Sheryl Sandberg from out of nowhere coming in.
That's the move.
Sheryl Sandberg to Google.
She's already going to be running business at Snapchat.
We can't hire her out to every company.
Oh, yeah, yeah.
You're right.
You're right.
She's got to go to Snap. Okay, all right. Yeah, cancel Sandberg to Google. She's going to Snap still.
Right. And so for me, that's the against case. If you're going to hold an against case,
this stuff that people are talking about, the culture war stuff, just doesn't pass muster for me.
Like, Ben Thompson, these are his words, and I like Ben's work. I'm just going to read it from the start: The point of the company ought not
to be to tell users what to think, but to help them make important decisions. That means first and
foremost excising the company of employees attracted to Google's power and its potential to help
them execute their political program, and return decision-making to those who actually want to make
a good product. That by extension must mean removing those who let the former run amok, up to and
including CEO Sundar Pichai. I don't know. To me, that isn't, like, the reason why to do it.
I don't think that Google is, like, a woke product factory, even though a lot of their
products end up trending that way.
Like to me, if anything, it's, yeah, it's clumsiness and sloppiness more than anything else.
No, that's not to say that Google doesn't have a point of view.
Like, clearly they do, right?
They didn't, like, apologize wholesale for what their Gemini thing was doing, just for the
instances where, you know, they should not have drawn people in diverse settings.
But I don't think that's holding back the product, honestly.
I mean, I think every company is doing this.
We know that AI is biased if left alone; like, trying to counteract that is industry standard.
Yep.
No, I agree.
So we'll see.
I mean, I'll tell you this much.
I've never heard more chatter about a CEO's job.
And I've been contributing regularly to CNBC since 2020.
So, almost four years, and I had never been asked about a big tech CEO's job or any CEO's job until I was asked about it on Monday regarding Sundar Pichai.
So we'll see.
It's one of those things where, like, I could definitely.
see them continue to have patience and continue to work on this. But also, like, I wouldn't be
stunned if one day we, like, wake up and he's out, and, like, Larry Page is back, for instance,
as an interim CEO. Okay, that could be the move. That would be the logical move, I think. It would
have to be at that scale. And that you can see everyone getting behind something like that and
supporting it if he's, if he's ready for the challenge. So speaking of Google, like, it's very
interesting, like the directions that they're taking this stuff, right? So,
there was another story this week in Adweek talking about how Google's paying publishers to test
an unreleased generative AI platform. And I'll just summarize. Basically, it's a platform
that helps publishers aggregate other stories or aggregate other information and turn it into
new stories. And Google is paying these publishers like five figures a year to test it. And to me,
like this was even more damning than the Gemini thing. Because I'm just like, is this really like
the web that you want to build if you're Google, just filling it with websites where publishers
use generative AI tools to build crappy stories based off of other publications' work? That to me seemed kind of crazy. Well, this was really interesting for me because the whole
what is the future of the web is a discussion that I think, I've thought about it a lot around,
you know, like what does an article look like? What kind of information should live in a web page
format that's just going to get ingested into an LLM and then presented on a Perplexity or a Gemini or a ChatGPT. And I think it's going to fundamentally change the web. And I think it should,
because I think SEO-driven writing, web pages, publishing has ruined the web in a way. And I think we all know the recipe sites with the incredibly long intros are kind of the poster child of this, or just incredibly spammy sites overall. So I think
that conversation around how the web is going to change with generative AI is an important one. But Google, of all companies, it shocks me that they're going to do it, because they are the ones who built that web, created the incentives that built that web, and are the ones with the most to lose if the web does fundamentally change. So on one side, maybe this is another sign of siloed organizational initiatives, where some random journalism-focused group within Google launched this. Or maybe the bull case for it is, they recognize that the web is changing, so they're going to drive that change, even though it's going to fundamentally disrupt their own monopolistic business. Do you give them that courage, Alex?
That's interesting. Or maybe they're like, yeah, this is going to, this is going to happen
anyway. Let's help steer it and make sure that it happens in a way that's, you know, a little bit.
Do you think they're actually playing that game, which is a courageous one? And we've talked about which are the companies that kind of drive their own disruption in order to get to where things are going. Do you think Google's going to do it?
No, I don't think so. I just don't think they have enough foresight there, or guts.
Yeah, I agree on that one too. A lot of agreement when it comes to Google. But Apple, though.
Oh, you want to go into the Apple car? Are we going to debate this Apple car?
I think we got to go into the Apple car. This is big news here. Oh, we also have that
Microsoft tried to sell Bing to Apple.
Okay, let's do the Apple car, and then maybe we'll come back to this Bing story.
So the Apple car is dead.
Rest in peace, Apple car, you are a disaster.
Many, many years of development within Apple,
$10 billion of investment, according to reports.
It seems like dozens, but it's definitely multiple executives driving the division.
The entire idea for Apple building the Apple car was: the company is great at matching world-class hardware and world-class software.
Like the phone is like the essential example of that.
The iPhone, you have the world-class phone,
and then you have the world-class operating system.
They work so well together,
and that's why it's like the leading phone in the world.
With a car, it's the same thing.
Car is hardware, and then inside the car is going to be software.
Whether that's the dash and all the controls that you have
or the autonomous driving software,
and Apple believed that it was able to match these two perfectly,
and it was going to work.
And so this is like the big question.
So obviously it fell apart.
And this is like the big debate that's happened this week.
Is it a good thing that Apple gave up on this project
and decided to focus on other things,
a.k.a. generative AI.
Or is it an indictment of the company's culture?
And I'll just go, I'll just share my perspective quickly.
I've reported on this.
I think it's an indictment of the company's culture.
They just could not get the people together to build this product and do it in a way
that worked within Apple's very siloed culture and design-led culture.
And I'll just give one example and then I'll turn it over to you, Rajan.
Okay, what this company did was they took sensors and they buried them in the car, right?
Because the car would look better if those sensors were deeper in as opposed to like the typical, you know, self-driving car,
which looks like, you know, a submarine with all those big sensors on top.
The car looked great, but the sensors didn't work because the field of view was limited.
because they were inside the car and they just couldn't get the data they needed to do self-driving.
Seriously, someone who worked on this project told me this when I was reporting out Always Day One.
So I'm curious what you, so to me it's definitely a culture thing and a broader sign of Apple's inability to really get into new product areas.
And this is huge because automotive, if you think about where you grow as a $3 trillion company that makes hardware and software products together, automotive is a natural fit.
And now that door is closed off on it.
So I think this is a very damning end to a very damning chapter of Apple and sort of caps what the company can be in the future.
So I disagree that it's an indictment on their culture.
I actually think it's almost a testament to the fact that they're willing to cut it now.
They've seen where the market's going.
Electric vehicles overall are not going to be a high margin product.
The competition from, you know, BYD especially. Even the value of the average Tesla has decreased 50% in the last year. Like, EVs originally were more of a premium luxury product and now they're not. Could Apple do that with their own car? They certainly could, but is it worth them doing that, and how big would that market be? Again, everyone in the world, not in the world, but a large percentage of people in the world will be willing to pay $800, then $1,000, then $1,400 for a phone. The amount of people that will pay $100,000, $125,000, $150,000 for a car is a lot more limited.
Another thing I thought that was interesting was the idea that Apple, within these R&D efforts, still has built a lot of technology that will be used in other ways.
Even for the Vision Pro, I'd read that a lot of the way Apple was thinking about the windshield of the car translated into the Vision Pro.
So you saw that there are secondary benefits.
And then also from a software perspective, I mean, I use Apple CarPlay and it's far superior
to any other thing I've ever seen in any kind of car.
Like, they're in the majority of vehicles, or at least a large number of vehicles, right now.
Their software is ubiquitous in cars already.
So why build your own car?
And then the last point that I thought was interesting was gross margin.
Apple's at an all-time high gross profit margin at 45%.
You are not going to achieve that with electric cars unless you're selling at such an expensive level that the total addressable market is tiny.
So Apple, if they're to release this, almost by definition would have to reduce their high profit margins.
So at every level, it made sense, I think, to actually not continue chasing this.
And there are a lot of companies that might have kept going, but they stopped.
So I think that this is revisionist history because if you think about the way that Apple, I mean,
if you think about what you just said, all of this is logical after the fact, but the fact is the
company spent $10 billion and couldn't do it. And I think, yes, can we applaud the fact that it
realized it couldn't do it and decided to stop for sure? But isn't it a problem that it couldn't do it?
No, they could do it. They could have released a car. That's going back to, I think, the testament to their culture.
But they didn't release a car.
Exactly. They could have released something that was not Apple.
But they couldn't make... that's what I'm saying, though. They couldn't make the Apple car.
Yeah. And maybe it can't be done in that context, or it shouldn't. Not that it can't be done, but that it shouldn't be done, given all the constraints I'd mentioned earlier: profit margin, competition, technological limitations. I think the fact that they did not release some half-baked car that did not live up to the standards that they try to put throughout all their products, Siri aside, I think it's a good sign that they're still operating in a somewhat disciplined way.
But my point is, like, why couldn't they make that product that they aspire to? It's not that it was physically impossible. It's that they struggled to do it.
I mean, I think, okay, the question of the, like, design limitations and the mistakes around how sensors can work and whether it fit into their aesthetic.
I actually don't think that's necessarily a bad thing to be so driven by their aesthetic because
that's what's made the rest of their products so successful. It's at the core of what they do
and their entire offering and brand. So to sacrifice on the aesthetic to make the sensors work,
maybe they shouldn't. Okay. I don't think that's necessarily the wrong decision there or it makes
them look foolish. No, that's that's fair. But let me ask you this is my last follow up to this.
All right, Apple has had, is going to experience, revenue declines in five of six quarters, right? Like, four in a row last year, then they bumped one, but they'll be down this quarter for sure. You know, it seems like a lot of their ancillary products are really struggling to find persistent growth. The phone is doing well, but, you know, there's definitely risk there, in China in particular. So if you're Apple, don't you need to find the growth somewhere? Wasn't this the place?
Well,
Certainly the Vision Pro is one of the big bets, but I still think there's a lot of opportunity
around strengthening the ecosystem. And then there was reporting that a lot of these employees
would be moving over to generative AI. And I genuinely believe, if you get Siri just to baseline. I know I love bashing on Siri, but I cannot say it enough. My poor wife complains so much because I refitted our house with HomePods and Siri is so bad.
If you just get your generative AI abilities, even at the level of a Google or a Microsoft
or whoever else, it superpowers every single device that we're all living in their ecosystem
already. So I think there's still a huge opportunity across the entire business with generative
AI that they have barely scratched the surface on. So moving employees over there, at least
they're making the right decision.
Yeah, I think that is interesting. And it'll be fascinating, if you think about how much Google didn't want the reputational hit for letting generative AI stuff run wild. I mean, geez, like, Apple, who's like the most reputation-conscious company in the world? That is going to be really interesting to see what exactly they do.
Oh, but you actually, I think, made the point to me when we were texting this week on Apple that, in terms of reputational risk, electric cars, especially at this stage, especially if you're trying to move into self-driving, have a huge risk, because there will be deaths. And the last company that wants to deal with that is Apple.
Okay, yep, you got me, because, like, that was your argument.
That was, well, I just put it in there as a discussion point, but way to turn it on against me.
But basically what I said was, I saw a tweet that said, you know, I think at some point, Apple is the most reputation-conscious company in the world; them working on a product that kills 40,000 people in the U.S. alone every year doesn't seem like the right mix. Like, could you imagine, just, I mean, just for the sake of argument, right? And this obviously, well, anyway, it's disastrous. But, like, an Apple car with, like, the blood of a pedestrian on it after it, like, autonomously just murdered someone.
Well, thank you for providing me the ammunition against your own argument. I know. I'm not putting
my counter arguments in the dock anymore. Nuance is dead on this show.
I'm done with it.
It's just, it's, yeah, win or lose.
That's how it works.
Okay, well, I'll take the L on this one.
And it is interesting that, given this move to generative AI, there was a story that Apple held exploratory talks with Microsoft about acquiring Bing as recently as 2020 and basically said it wasn't good enough. This is from Google, saying this in the lawsuit that they're facing: in each instance, Apple took a hard look at the relative quality of Bing versus Google and concluded that Google was the superior default choice for its Safari users.
choice for its safari users.
That is competition.
Okay, no mention of the fact that they spent billions of dollars remaining the default.
But just imagine what would have happened if Apple would have acquired Bing.
Do you think we'd be having this different conversation here?
I mean, could serious growth have come out of there if they worked on that, put AI in, and made it the default?
I wonder, that's an interesting point, because I wonder, first of all, would Microsoft have gone after OpenAI as strongly if they didn't see a potential corollary to search, because they had already offloaded Bing? Could be an interesting parallel history. And then, in turn, yeah, in turn, I think that the world would have been a different one if Apple had bought Bing. Or Bing would have just been so bad that we stopped using iPhones, perhaps, and then it became the default.
It would have been great, Pandora. What if it was a Trojan horse for Google, to get Apple to buy Bing and just make Android the winner?
That would have been the move.
This is what I'm talking about, trillions of dollars of potential market cap that Sundar could have captured if he would have been able to execute these simple strategies effectively.
Simple strategies.
That's it.
Let's take a break.
We have more to talk about, including this top Amazon aggregator Thrasio filing for bankruptcy.
And then, of course, our fun story of the week, AI, Willy Wonka and the crackhead experience that people have had out in Glasgow.
Yeah. All right, back right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email
for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show,
where their team of writers break down the biggest business headlines in 15 minutes or less
and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast Friday edition, breaking down the week's news.
There was a very interesting bankruptcy filing, Ron John, that you pointed out that this top
Amazon aggregator Thrasio, which was this COVID darling, filed for bankruptcy this week.
So tell us a little bit about what happened and what the implications are there.
Yeah, I feel we need to start naming this as a recurring segment because these are my favorite
stories, the stories of the COVID darlings that perfectly captured a story of 2020, 2021, raised a ton of money. And basically, week to week, month to month, we see another one of these flame out, because we're just hitting that point. The cash is running out. The debt financing is becoming unmanageable. So Thrasio was an Amazon aggregator.
And the story circa 2020, 2021 was so beautiful. It was the idea that you have all these individual brands,
$1 million, $5 million in sales, $10 million in sales on Amazon.
So what if you are to just roll them up and then with operational efficiencies and leverage
with Amazon be able to get better deals on marketing and fulfillment and understand how to market
products and cross-sell them all across the way you operate, that it actually made, in
theory, perfect sense.
Of course, what happened is Thrasio raised $3.4 billion, most of that is debt, not equity. So it might sound like a much higher number, but they raised an incredible amount of debt to finance these purchases and just declared bankruptcy, saying that they're going to shave $495 million off of the debt load. And creditors are going to actually have to commit some fresh capital, $90 million, to keep this thing running at all.
Basically, it's yet another, like, you know, you have the Peloton stocks in the world. You have all
all the high flyers from the COVID era.
And this is another area, Amazon aggregators
and these ideas of portfolio roll-up
e-commerce companies being the future
that just did not pan out.
So do you think that there's anything
that Amazon could have done differently?
Like this seems like it would be effectively good
for the Amazon ecosystem.
You think there's anything Amazon could have done differently
to allow these type of businesses to thrive?
Or was it simply just another one
of the COVID fever dreams
that was just destined to fail no matter what?
I think it was both a COVID fever dream, but also, I agree, I think Amazon
itself has made decisions around, especially over the past number of years, to move more towards
third-party Chinese suppliers that are selling cheaper products. And anyone who's shopped on Amazon
knows, like moving away from general quality towards just what can I buy quickly and cheaply has
become more of the core offering. And that's why, with the Sheins and Temus of the world, in addition to the COVID-era fever dream from a financing perspective, now you have on the lower end incredible amounts of competition that weren't there four years ago. So I think at every level, this model,
it should have been good for Amazon. It should have been good for the company. Again, in theory,
in a business school case study, you can easily see how you could argue: let's roll these up, do it properly, get marketing efficiencies, get logistics efficiencies. But it didn't work.
And I think we'll see more of these still coming along the way. I think Hopin, the events company, recently declared bankruptcy. I don't know if Clubhouse is still around. But I always
love these because it brings me back to those, the crazy days of these stories and just
watching pure mania take place. I mean, one big picture question for you is how did so many
companies and smart people get taken during these COVID moments?
Like obviously we were not going to be on Zoom forever.
We were not going to be pelotoning forever.
But yet there was such, and this Thrasio example is another example of it, right? Like, there was such enthusiasm for people putting money into companies, effectively with the belief that COVID-era conditions were going to last forever.
I think, but I don't know, it's a tough one because on one side,
you can argue how are smart people taken, but on the other, it was at the moment still the
rationally correct thing to do. And a lot of people made a lot of money and to do something
different limited the amount of potential gains you would have. So I think it actually was
economically rational at the moment. And especially I think that's why even more so you have
all the stories of people who did take money out in secondary sales smartly at the time to the
loss of the majority of investors, but I think it's tough to say taken. I think it made sense
if you were just looking at a quick flip. Obviously, building a long-term business is a whole
other thing, but it certainly made economic sense. Right. And now it's just like, now it's
the hangover. Yep. And it's going to continue, and I'm sure we're going to have more of these
stories along the way, just to remind us of those days we were on Clubhouse. Yes. I'm glad that's
over for multiple reasons. So speaking of hangovers, a lot of kids in Scotland are hung over from all
the candy they ate at this absurd AI, almost AI generated Willy Wonka festival. So it was an
experiential thing. And the reason why I think, so just to set the stage, there was this Willy Wonka
exhibition in Scotland where you could take your kids effectively and walk over the chocolate
river and go through chocolate rainbows and get a chance to meet Willy Wonka himself. And once you got in, it was true Fyre Festival-level stuff. It had, you know, printouts of backgrounds that you were supposed to take a picture with along the wall, black curtains everywhere, the scripts were terrible, and it even came up with this, uh, actor that seemed like it was completely AI-generated, uh, called the Unknown, which was effectively this guy in a black parka and mask who would peer out from behind a window and, like, scare kids.
And, like, there's this great scene on TikTok where Willy Wonka says, and now, the Unknown, and the Unknown comes out and goes, and you hear all these kids go, no.
I think this is, okay, the question, this is a topic that's going to come up over and
over again in the next year, a few years. Because AI has become so accessible, there's going to be so much bad content that is created. And I still am a firm believer
that AI is a tool and you can either, if you create bad content, it's not that the AI is bad.
It's that you're using AI badly. And I think this was a perfect case. If you even look, the invitation images for the event are the most, like, when we talk about how AI does hands badly: eight fingers, Willy Wonka's eyes are kind of curved in weird ways.
It's the most odd-looking dystopian, clearly AI-generated thing.
And this is like the AI, using AI badly come to life.
You can argue Gemini, woke Gemini was Google executing on generative AI badly.
This was just using AI badly and these poor kids having to go to this.
Though maybe, maybe they actually enjoyed it in the end because it was just so ridiculous.
Yeah, so there's, there are amazing things that came out of this.
So first of all, the actor spoke to the Hollywood reporter.
Willy Wonka actor at Glasgow Fiasco speaks out.
The script was AI gibberish.
And the Willy Wonka actor of a UK immersive experience says the script was 15 pages of AI-generated gibberish, and there wasn't even any chocolate at the event, which descended into chaos.
Okay, he goes: the bit that got me was where I had to say, this is the man, we don't know his name, we know him as the Unknown. The Unknown is an evil chocolate maker who lives in the walls. It was terrifying for the kids. Is he an evil man who makes chocolate, or is the chocolate itself evil?
Okay, the, um, the thing that's come out of this is, uh, let's see, Rolling Stone has written an article in support of the Unknown. So it goes, Justice for the Unknown, the Roald Dahl character that wasn't. And they say that the Unknown is a fascinating extraction of Dahlian horror by a piece of software trying to process his writing. So, effectively, um, here, I'll just read it: Dahl had a knack for malevolent presences. No doubt he would have written a creature more richly imagined and logically consistent than the Unknown, had he ever suffered the indignity of translating Wonka's chocolate factory into a pop-up theme park.
Still, an AI had to generate a new interactive scenario based on existing Wonka lore,
complete with a threatening element for drama, and there's something to be said for its
crummy solution, the ghost in the machine. It's a doomed attempt to grasp Dahl's darkest impulses, an abject failure of imitative art, and all the funnier for those very reasons. It ties the whole occasion together. You've got to hand it to the Unknown, the concept sticks with you. I mean, I like that, right? It's kind of like the AI processed Dahl's writing, decided to make a theme park, and basically hallucinated a character that kind of makes sense in Dahl's universe. So after some long consideration, I'm on Team Unknown. I think I'm
pro-Unknown as well on Team Unknown. I saw him and it's so ridiculous when he pops out and kids
are just kind of looking at him. It is kind of amazing. But in terms of Rolling Stone had another
piece. This is one I thought was really interesting. So the guy who organized this event, this is not his first run-in with generative AI. He actually has an Amazon page where he has 16 books published, and many of them were published on the same day. And the journalists had gone in and looked at some of them, and they're just, again, clearly bad generative AI. So, using generative AI to spit out content: first, he did it with books, sold them on Amazon. And of course, there's a lot of vaccine conspiracy, QAnon-type stuff, themes running throughout all these books.
And, you know, this is a new type of huckster in today's market right now.
And it'll be interesting to see.
This is far from pig butchering, I would say, in terms of like, new age scams.
But these are going to increase in frequency, I'm sure.
Yeah, it is fascinating. I mean, it's basically like you could effectively go to ChatGPT and say, spit out a Wonka theme park. Where do I put everything? You know, use your multimodal AI to draw me a diagram of what should happen, and then write me scripts, and it could do it.
The next thing you know, you have the unknown. So who says, go ahead.
Did you see Kara Swisher released her book? Yeah. And then there's been a flood of Kara Swisher biographies that are AI-generated on Amazon.
So basically, big author releases a new book and then people start creating generative AI-driven
biographies or something related to that search, knowing people will be going for that
search just to confuse them and try to sell a similar book.
There's going to be a flood of this stuff now.
Yeah, it's crazy.
And to me, I'm just like, wow, a tech autobiography generates enough interest that people are
like going and putting ChatGPT-driven copies on there.
God, we are covering the right industry.
Yeah, exactly.
People will make a fake Big Technology Podcast soon.
That exists.
There is a fake one.
Seriously.
No, this is the real one for those listening.
This is the real one, but there is a fake big technology podcast out there.
Wait, really?
For real.
Yeah.
All right.
But I think, I don't know.
I think it's kind of leveled off or something, but it's, it happens all the time.
People will rip off content and stuff like that.
Yeah.
This is definitely an ego boost for me every time I see it.
I'm like, you're doing something right, man, doing something right.
Well, in praise of the unknown and different things that AI generates that may haunt us or scare us, but also might delight us.
I want to say thanks, everybody, for listening.
It's been a great week spending time with you, as always.
Thank you for being here, Ron John, and thanks to all you, the listeners.
Have a good weekend.
Have a good weekend, everybody.
Oh, on Wednesday, Ryan Petersen, the CEO of Flexport, is going to come on to talk a little bit about, speaking of competition with Amazon, whether his logistics company and Shopify are competing with Amazon, and an update on how the Houthis are changing the supply chain.
It might be worth tuning in, so we'll see you there on Wednesday.
Otherwise, Ron John and I will be back on Friday for another show breaking down the news.
And we'll see you next time on Big Technology Podcast.