Big Technology Podcast - AI Agents’ Shaky Debut, Musk and Putin, Perplexity vs. The Media
Episode Date: October 25, 2024
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover 1) AI agents are here 2) Anthropic's Sonnet 3.5 model 3) Why we're underwhelmed with AI agents so far 4) The... long-term bull case for agents 5) OpenAI's Orion model 6) Sam Altman's fake news tweet, and his cryptic preview of that news 7) Elon Musk and Putin speak regularly 8) China, Russia, Iran, North Korea vs. U.S. and Europe about to get weird 9) Tesla's blowout earnings 10) Waymo raises $5.6 billion 11) Teen takes life after falling in love with Character.ai bot 12) Perplexity vs. The Media 13) Big Technology and ElevenLabs make a deal. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Want a discount for Big Technology on Substack? Here’s 40% off for the first year: https://tinyurl.com/bigtechnology Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
AI agents are here and they aren't exactly living up to expectations.
Musk and Putin are in regular conversation.
AI search engine Perplexity tells the media to shove it.
All that and more is coming up right after this.
Welcome to Big Technology Podcast Friday edition where we break down the news
in our traditional cool-headed and nuanced format.
Such a big show for you this week.
And honestly, it's just ramping up with the AI news, national security news, the election news.
The show is going to be on a roll over the next few weeks.
So thank you for being here with us, and thank you for staying with us.
We'll talk about Claude's new agents.
We'll talk about Musk's and Putin's relationship.
We'll talk about Perplexity fighting with the media, and plenty more.
And joining us, as always, to do it is Ranjan Roy of Margins.
Ranjan, welcome to the show.
This is going to be a good one.
Have you used Claude 3.5 Sonnet yet?
I have not.
But the big news about it is that it's going to finally introduce, or it has finally introduced, agents.
And I want to get your take on it. So let me introduce the story and then we'll go right to you. So TechCrunch this week says
Anthropic's new AI model can control your PC. And here's the story: Anthropic, on Tuesday,
released an upgraded version of 3.5 Sonnet that can understand and interact with any desktop
app. Anthropic calls its take on the AI agent concept an action execution layer that
lets the new 3.5 Sonnet perform desktop-level commands. And in an example video,
Anthropic showed its bot
trying to fill out a vendor
request form. And the way it did
that, the user gives it a prompt, says
fill out this form using the data on this
spreadsheet and the data in this CRM
and the bot or the agent
call it what you want, looks in
the spreadsheet, looks in the CRM, and uses
the information found within it to fill out
a form. It looks impressive
but there are some holes in it.
So first of all, before we talk about the holes,
Ranjan, or maybe as we
talk about the holes,
Let's get your perspective on what this launch means.
Well, I think in terms of, do you call it a bot or an agent,
the first rule of tech today is always use the word agent and agentic.
If you want to sound smart, if you want to raise money, agentic, agentic, agentic.
But this specific example, so the way they had Claude do this in the demo,
it took screenshots of the spreadsheet of information, screenshots of the vendor information,
and the idea was it then runs those against the standard Claude, the same way as if you uploaded a
CSV, and tries to analyze the data and says: this data is missing, I found it in your CRM,
and now look, your form is magically filled out. That sounds incredible and nice, but it should
work. And even in a lot of the initial examples people were testing, it's not working cleanly.
And this is actually a very difficult problem to solve. And the thing I think that's missing
in this conversation is difficult-to-understand data, unstructured data. That is the biggest
hole in all of these things. So the issue isn't its ability to control your desktop. Like Apple's
customer service can control my desktop. I don't know. Have you ever used that? Oh, yes. I have.
Yeah, they'll take over your computer and actually fix your problem for you. So that part to me is
not that exciting. Trying to solve these problems in some kind of logical, quote unquote, reasoned
manner is still difficult when the data's not great. And they took the simplest thing.
Here's a spreadsheet with a couple of dummy lines of data and look, we can make it work.
I do not see this working in the real world anytime soon.
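For what it's worth, the loop being described here (capture the screen, ask the model for the next action, execute it, repeat until the form is full) can be sketched as a toy simulation. Everything below is a hypothetical stand-in, not Anthropic's actual API: `model_decide` plays the role of the vision model, and the "screen" is just a dictionary instead of real pixels.

```python
# Toy sketch of the screenshot -> decide -> act agent loop.
# A real computer-use agent would send actual screenshots to a vision
# model and drive the mouse and keyboard instead of mutating a dict.

def model_decide(screen, sources):
    """Stand-in for the model: find the first empty form field and
    look for its value in the spreadsheet/CRM 'sources'."""
    for field, value in screen["form"].items():
        if value is None:
            for source in sources.values():
                if field in source:
                    return field, source[field]
            return field, "MISSING"  # flag data the model couldn't find
    return None  # every field handled: done

def run_agent(form_fields, sources, max_steps=20):
    screen = {"form": {f: None for f in form_fields}}
    for _ in range(max_steps):  # cap steps so a confused agent can't loop forever
        action = model_decide(screen, sources)
        if action is None:
            break
        field, value = action
        screen["form"][field] = value  # the "click and type" step
    return screen["form"]

filled = run_agent(
    ["vendor_name", "tax_id", "contact_email"],
    {"spreadsheet": {"vendor_name": "Acme Corp", "tax_id": "12-3456789"},
     "crm": {"contact_email": "ap@acme.example"}},
)
print(filled)
```

The hard part in the real product is the step this sketch skips entirely: turning raw pixels into that clean `screen` structure, which is exactly where scrolling, dragging, and zooming go wrong.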
So let's just set the context here, because this world of agentic AI, to use your term: agentic,
agentic, agentic. Yes, agentic, agentic agents for everyone. This was supposed to be the next,
and it still is supposed to be the next big leap that AI is going to take. So,
We've been talking about how the models are going to get bigger and better.
That's, of course, one part of it.
But also, OpenAI has these, like, layers of improvement.
And one is, like, typical chatbots.
Second is reasoning, which we saw with o1.
And the next big step is supposed to be agents, right?
Things that can go out and accomplish tasks on your behalf.
And there's been so much buzz about it.
We've been talking about it for how long, like a year at this point.
And we're expecting to see these things come out.
And like you said, okay, we're finally starting to.
see the beginning of this. But it feels like the traditional Silicon Valley releasing tools that
are not quite there and then just hoping that they'll be able to make it better. And in its current
state, and we can only judge its current state, it's really not impressive. I mean, this is from,
and I think Anthropic gets credit for admitting it, but let's actually just, like, talk about the
technology. This is in the TechCrunch article talking about the agents. It said, in an evaluation
designed to test an AI agent's ability to help with airline booking tasks, like modifying
a flight reservation, right? This is table-stakes stuff. The new 3.5 Sonnet managed to complete
less than half of the tasks successfully. In a separate test involving tasks like initiating a
return, 3.5 Sonnet failed roughly a third of the time. And this is from Anthropic itself.
They say Claude 3.5 Sonnet's current ability to use computers is imperfect. Some
actions that people perform effortlessly, scrolling, dragging, zooming, currently present
challenges. What exactly are we doing here? Well, you're missing the best one. In one of the
efforts, like one of the demos, the bot suddenly and randomly switched from a coding task
to start browsing online photos of Yellowstone National Park. That was my favorite anecdote.
And the funny thing is, it learns from us. Exactly. That is my favorite, exactly.
Yeah. And in a weird way, if large language models are built on real world training data, it's weird because that actually might be the most effective implementation of this.
Sorry. I just have to say, like, how close was that bot to, like, opening up a porn window if it was trained on the wrong person's browsing behavior?
Like, that's not out of the realm of possibility.
Can you imagine you're like demoing this to a journalist and you're like, we train this on our engineer's behavior?
On our own engineer's behavior, yeah.
Oh, man.
But to me, I'm actually surprised, and it actually kind of disappoints me that this is where we are.
And it reminds me that we have not actually reached the trough of disillusionment with generative AI, even though we've been talking about it. This is a half-baked tool.
And it's come out, and they're trying to push it because
they have to show something agentic
and it's not there
and I think it's going to present people
it's going to disillusion people or make them more
scared and remember this allows
Claude to take over your
computer so the level of
trust people are going to need to have
to actually experiment
with this has to be high
so if you don't even feel it's going to be able to solve
your problems why should you
bother letting it take over
your computer
Some other things that it can't do. I think
it might not actually be able to do much damage at all, because this is another list.
This is from somebody testing it out.
With Claude, it cannot create accounts on social media or other platforms.
It cannot send emails or messages.
It cannot post comments on social media.
It cannot make purchases.
It cannot access private information.
It cannot complete CAPTCHAs.
It cannot generate, edit, or manipulate images.
It cannot make phone calls.
It cannot access restricted content.
It cannot perform actions that require personal authentication.
So basically it can't do anything.
Maybe that's a good thing.
I mean, maybe these are very good safeguards.
I think in the Claude blog post, they very clearly used the words responsible and responsibly over and over.
And actually, I've been seeing Claude and Anthropic ads all over New York City and everything is around responsible use.
And this is something that could really go in the wrong direction.
So making it work well with simple, clear tasks should work already.
And as you said, modifying a flight reservation, like a flight confirmation, that's a pretty
straightforward thing.
It's like, you know, you go to Delta's website, there's a very limited number of actions that
are very predictable that should work.
They even had it initiating a return on an e-commerce site; that should be the most
straightforward thing for a powerful model to understand.
So if it's not even able to do that right now, I don't see where this is going to go.
But to me, the biggest limitation here is, I think the whole agentic world is going
to be really narrow, specific use cases that are clearly defined, that are repeated workflows
that take place maybe in daily life or in business. And that will work, and I'm excited about that,
and I think that's where the value is. Something that's completely general purpose like this,
I don't see working, and I think we're already seeing the limitations around it.
Yes, maybe in the near term. But let me take the sunnier long-term outlook here. I read a post from Ethan Mollick, the Wharton professor, who has been on the show. He covers a lot of this AI stuff. And they gave him access to the computer-use bot, or agent, whatever you want to call it, before they released it. And he used it to play a game. He basically said, you know, your job is to play a game. And he had some very interesting thoughts. So one of the things
he said after this thing started to fail, he says, I gave it a hint: you are a computer,
use your abilities. And then it realized it could write code to automate the game, a tool building
its own tool. Okay, so I think that this is where the power could come in, is as this stuff
improves, you're going to see it be able to take on these tasks and be able to do it in a way that
a human, an average human, cannot. And I think that is super impressive. And okay, eventually the code
didn't work. So it basically went back to the old-fashioned way, Mollick says, and he's, like,
evaluating it. And he said, on the positive side, it was able to handle a real-world example
of a game, develop a long-term strategy, and execute it. On the weak side, an LLM can end up
chasing its own tail or being stubborn. It just took one error to send it down a path that made
it waste considerable time. But I think once it fixes those errors, there's going to be unlimited
possibility for this stuff. It just might take, and I know it's a long time in AI terms, because
we want everything to happen right away, but it just might take a couple of years.
No, but there's always going to be errors.
That's the thing.
When you try to solve all processes and all behaviors on the entire internet, you're always going to have errors.
To me, that's the wrong approach around this.
And I actually, I mean, the more I think about it, for the Anthropics and the OpenAIs of the world,
I think they're actually the least well positioned to solve agentic AI.
Why?
Because to me, it's actually, again, the company, and I think it could be the Microsofts
and the Googles of the world or the companies that are already directly integrated into
the tools you're using, they're going to be the ones who should be able to better understand
those tools and create agents that can actually navigate them.
But when you're just coldly going to every website
that has ever existed on the internet
and having to understand it and take an action.
And for me as a user to give you the trust
to actually take over my computer
and take those actions,
it's just a much, much harder problem to solve,
if not an impossible one.
Unless we have AGI, which we did declare
in our last episode is here.
Now that we have AGI, no problem.
Yeah.
But actually, the point that you're making
is pretty solid.
And I still stand
by what I said earlier, that this is stuff I think will eventually work. But the way that it
operates is worth talking about, because it will be very different from the way that we're
using AI today. And Mollick points that out. He says the AI didn't
always check in and it could be hard to steer. And this is the most important thing. He says,
it wants to be left alone to go and do the work. Guiding agents will require radically different
approaches to prompting and will require learning what they are best at. And I'll add, not only that,
just learning what you can trust them for and what you can't.
I still have a hard time with that,
because it's so theoretical, versus we should be at the point where we're able to actually
understand. Like in this case, to return an item on an e-commerce website, like maybe what you
need to be doing is just asking Claude to write a script for you, and it will do it.
I don't know.
Like, create a mini app that actually does it. Rather than, to me, actually, I mean, you'd actually
said earlier, like, shouldn't we be able to just scroll and zoom?
Those are insanely complicated things, if you think about it, for a computer to understand.
Like, every pixel, how far you move down, processing information in real time.
Like, actually, scrolling could be one of the hardest problems to solve,
versus: here is 100,000 pages of structured text, go at it.
So are you just completely selling the entire AI agent moment?
I think it's going to be, I don't think it's going to be OpenAI and Anthropic.
The more I'm thinking about this, I don't think general-purpose agentic AI companies
will win.
I think maybe there's going to be people who build more tailored solutions to specific,
maybe there is an e-commerce agent company that really nails down what the 50 most
common actions in e-commerce are: now here's an agent that will allow you to do stuff in that manner. I think
that could happen. But the idea that one company will be able to do everything for everyone, I don't
think is going to happen.
Are you buying the other side of that bet?
I am, but I'm not really confident in my position. So earlier in the week, I put a post out on X: are you buying the AI agent hype?
It's actually pretty interesting. 63.6% say no, 36.4% say yes. Admit it, Ranjan, you voted.
You're in the no category. Yeah, rounded up a bunch of people and got that no vote.
Pumped. It's rigged. It's rigged. It's another election rigged. It's rigged. Oh, Lord help us.
Okay. So in other important AI news, there was this kind of, I think,
weird and also very funny back and forth between Sam Altman and The Verge this week.
I don't know if you saw this.
So The Verge had this story that says OpenAI is planning to launch Orion, its next frontier
model, by December.
And the detail here is that unlike the release of the last two models, GPT-4o and o1, Orion
won't initially be released widely through ChatGPT.
Instead, OpenAI is planning to grant access first to companies
it works with closely, in order to let them build
their own products and features. And so basically this was going to be, we know that OpenAI has
been working on GPT-5. Maybe this Orion was supposed to be GPT-5, but they're just
reticent to call anything GPT-5 because people are expecting AGI at GPT-5. And so all the attention
has been on this model that they're working on, called Orion. Sam Altman takes a look
at the story and he says: fake news out of control. I mean, you would imagine that they called
OpenAI before they ran the story. And then OpenAI basically said that they
don't have plans to release a model code-named Orion this year, but we do plan to release
a lot of other great technology. So I'm curious if you saw this, how you read it, and is this
just another instance of OpenAI weirdness playing out in public?
OpenAI, out of all their
weirdness, the one thing they have done incredibly well is, I don't want to say manipulate the
press, but at least work very collaboratively with the press to build hype.
You know, we always learn about, there's always a leak around what the next big model is going
to be or what the next big capability is going to be.
So I think in terms of their technical capabilities, it's actually been incredibly successful
for them in terms of how the press has covered them.
So it is funny to me when they're actually trying to like push back on something that could be good for them.
I think that was probably the most surprising thing for me.
Would that indicate to you that it's completely untrue then?
That's more likely to be wrong reporting?
Yeah, I think so.
I mean, because otherwise nothing is bad about this.
And also they have to be working on this next model.
That's the whole fundraise.
And we've debated forever.
like should they be focused more on the current models and actually the applications of them or should
they keep building bigger and better and splashier models and we know from their financials and
their fundraising that they are investing heavily on building large new models in the next generation
of them which would be something maybe it's not codenamed o'reion but i don't know let me add
one more wrinkled to this so what's o'ryon?
Orion's a constellation.
Let's read a cryptic little poem on Twitter from Sam Altman, September 13th, 2020.
I love being home in the Midwest.
The night sky is so beautiful, excited for the winter constellations to rise soon.
They are so great.
I mean, what?
I'm puzzled.
But to me, that's actually, like, classic
OpenAI. That is the weird OpenAI that I don't want to go away. Like, if you actually are
announcing your next generation of models and the entire future of your company in a cryptic
tweet poem, never change, Sam. The thing is, there's no way that's not referring to Orion. So
maybe the Verge got the timeline wrong or something like that, but what else could that possibly
be? Or maybe he's just writing a poem. Maybe Sam Altman.
That's how it is, it's a stressful life when you're going to build a seven-trillion-dollar company, or
whatever that fundraise was. So maybe sometimes you've got to kick back and write a poem. You know, there's
been a comet in the sky, a low-horizon comet, recently, this month I think. I think it's gone now.
Did not know that.
It's an unbelievable comet, only comes around every 80,000 years. There's some
amazing photos of it. Maybe that's what he was looking at. Well, as space technology is one of the
growing industries, here on Big Technology, we should probably beef up on our astronomy,
especially after this segment, I think I might have to do a little more reading.
Yes, it's called the Comet A3, but I think actually Sam was talking about Orion, the AI model.
Just a guess. All right. Speaking of celestial constellations and things in space,
I think this is one of the craziest stories I've read in a long time, which is that Elon Musk has
been speaking regularly to Vladimir Putin, according to the Wall Street Journal. Here's the story.
Elon Musk, the world's richest man and a linchpin of U.S. space efforts, has been in regular
contact with Russian President Vladimir Putin since late 2022. The discussions touch on personal
topics, business, and geopolitical tensions. And this one is crazy. At one point, Putin asked
the billionaire to avoid activating his Starlink satellite internet service,
over Taiwan as a favor to Chinese leader Xi Jinping, according to a couple of sources.
So there's so many things to talk about when it comes to this story.
But what do you think broadly about the fact that Musk has been, if the story is accurate,
that Musk has been regularly in contact with Putin?
This story, oh, I just can't.
This one, it was shocking and not shocking at the same time.
I mean, to me, even that example, I think is such a perfect encapsulation of like, who is Elon Musk and why is he so important right now?
Because Starlink has become this incredibly successful, like, revolutionary transformation in satellite communications, bringing high-speed internet into remote regions, or regions where the
towers have been knocked out. But then to mix that into talking to Putin and getting into the
conversation about potentially affecting Taiwan as a favor to Xi Jinping, I mean, how, if this is true,
how these kinds of things are allowed to go on is beyond me. And it's one of those where, like,
what do we trade for good internet service, like, and good technology?
I don't know about top security clearance, but he has security clearance.
Well, yeah, so that's the whole second part of this. Top security clearance?
No, let's not say top. He has security clearance, or security, okay.
Yeah, more security clearance than you and I have.
Yes, certainly, on that. Maybe you don't know about my, uh, my security. You made it, now, um, you know. But, uh, like, SpaceX's $1.8 billion contract in
2021 for Starlink from the U.S. government. The amount of, I mean, and we're going to get into
Tesla earnings, the amount of just money he gets from the U.S. government. And, like, the New York
Times had a really, really deep investigation into all the different connections through the
U.S. government that Elon Musk has. It just baffles me how these things can
continue to be allowed, like, to go on. So let me tell you what I thought about
when I read this story.
So did you see that there were North Korean soldiers
that Russia is getting ready to deploy to Ukraine,
like a lot of North Korean soldiers?
You saw that story, right?
I did not see that story.
So there's a lot of North Korean soldiers
that Russia is getting ready to deploy to Ukraine.
And it's just, in the last four years,
it's become so apparent that the world has sort of
been dividing along two axes.
And maybe this was always happening,
but we've seen it more than ever,
which is that you have one axis of Russia, China, Iran, North Korea.
And you have another with the U.S., Europe, some Asian countries like Japan, Israel, right?
And that's the other pole.
And it just seems like if Trump is elected, and it looks like there's a very good chance he will be,
or at least a 50-50 chance he will be, I'm very curious what this dichotomy,
or this sort of divide in the world, is going to look like, because we know that Trump and
Musk, I don't know if they're fans of the other side, quote unquote, but they certainly are much
more willing to engage. And do you end up seeing the U.S. play a very different role? Whereas, like, maybe
they don't join the other side, but they're more neutral, or they start, I don't know. It's like
the interests of the world are about to be, it looks like there's a solid chance they're about to be
shaken up in a very different way from the status quo that has held for a while, and
Elon Musk is right at the center of that with his calls with Putin.
Yeah, I think, I mean, that is a
heavy Friday analysis of the global hegemonic structure. We're doing geopolitics.
I think it is interesting, because that's always the question: does a Trump election
mean the U.S. just moves to neutral in this kind of bipolar world, or do they actually move to
the other side, which seems completely, like, impossible, but who knows? So I do agree that I think
that's a very clear delineation of where the world is today, and trying to figure out where it
goes, especially if Trump is elected, I think that's probably the central question. But still,
to me, the craziest part of this is Elon Musk is not a governmental figure,
technically. So he sells cars. And, I mean, he sells rockets and he sells internet service and
whatever else, and maybe robots down the line. But, like, how do you sell cars to Americans if you
move in that direction? Like, how all these things can actually interplay still baffles me.
But that's my point, is that the government might be shifting in that direction as a whole.
Yeah, no, no, I think I could definitely see it moving in that way. But then does the American population move in the
same direction?
Well, I think, aren't they being given a chance in November to decide where they're
going to be? Or am I reading too much into it?
I think that might be reading a little too much into it.
And Americans are still buying Teslas, as we'll discuss pretty shortly.
Yeah. Okay, last thing on this.
It's clear that Starlink has a tremendous amount of, you know, political value,
government power value.
And like you hear throughout the story with Elon about how like government officials like,
well, we wish we had another option, but we don't.
And we talk about how like the Kremlin is asking Musk not to activate Starlink over Taiwan.
And of course, Taiwan, the government there is not exactly easy to work with when it comes to satellite services that somebody else owns.
But Starlink has a coming soon banner on its website when it comes to Taiwan.
So we'll see. And maybe the U.S. government doesn't have the capacity to do this, but why not just go ahead and build an internet service? Like, the pipes of the internet have been built by the government for quite some time, and there's been no effort within the government to build its own Starlink.
Well, this is where I actually think Elon Musk is the world's greatest marketer, because there are other low-earth satellite internet communications companies.
There's, like, OneWeb and Viasat and all these, like, they exist.
There's competitors.
But the move, at the outbreak of the Russia-Ukraine war, I remember he was sending satellites.
I think Zelensky, or at least some of his, like, very high-up generals and stuff,
were posing with the satellites.
And it's such a weird dynamic because they know when they pose with the satellites,
Elon Musk will retweet and suddenly,
your message will go out to the world.
So leveraging the power of his following to go viral,
it's instant when you're using the Starlink satellite
and posing with it.
And then Starlink becomes inextricably linked
with the only satellite provider
that can actually serve you in a war zone.
And then there's almost this mythical nature
around what Starlink can do
because of that simple marketing tactic.
And it's amazing.
Like it's actually, like there are competitors.
There should be other discussions, but now Starlink has become this geopolitical force, essentially, from, I think, I mean, you never really heard about it in these conversations before those few tweets.
Of course. And that's why I think the government should try to build something like this, because it ceded such an important technology to the private sector. Yeah, there are some others in there. But, like, and you know what? Maybe the government doesn't have the capacity to. Or maybe they would be reliant on Starlink to send up their own internet-providing satellites,
and Elon Musk would balk at that.
But it seems like a national security thing.
Are you running in 2028?
No, definitely not.
Yeah, the only thing I'm running toward is a podcast mic.
I'll be right behind this thing.
No, no, this is our platform.
This is Big Technology in 2028: low-earth satellite internet for everyone.
National space internet and working agent bots.
It's like a chicken in every pot.
Vote Kantrowitz and Roy, 2028.
I think we got a good platform here.
I think we found our true calling.
But I think we'll keep breaking down tech news on Fridays.
People are still buying Teslas.
I mean, Tesla, I'm sure most listeners saw the stock jumped 20% in a day on Thursday after the earnings.
This blew my mind.
It was like, these numbers were good.
but they weren't that good. So revenue slightly missed, but the profit went up. Earnings per share
was at 66 cents versus the expected 58 cents. Margins improved for the first time in two and a
half years. So, like, it appears that Tesla is a kind of flattening, but economically improving,
company. So to me, the stock should not jump 20% on that. But in the earnings call,
Musk basically said, even though growth is currently flat, his best guess is that vehicle growth
will hit 20 to 30% next year.
And the market took it as gospel.
So I kind of enjoyed, for all the complexity and scary things in the world, a good kind
of, like, Muskian earnings call and a Tesla stock pump.
It was kind of fun to see.
Yeah, but there's also something else that you should mention, which is that a large part of
the Tesla earnings, and the success in earnings, and the profit,
and let's not, like, pretend that Wall Street ignored it, the profit was coming from the sale of regulatory credits.
So basically, Tesla sells regulatory credits or emission credits to other automakers who buy them to meet emission requirements.
So Tesla is good on emissions.
Other automakers are not.
They pay Tesla for credits.
And they end up being like in the place they're supposed to be, which to me, I don't know.
It's kind of crazy.
But these are all pure profit for Tesla, and I think that is sort of such a fascinating
part of the business. And if I'm Wall Street and I'm thinking Tesla: Wall Street is betting
on Tesla to be more than a car company. It's betting on it to be a car company, an
autonomous driving company, a battery company, an energy company, and maybe, you know, a robotics
company at some point. The fact that the profit is coming in largely, or not largely, but
in good part, from these credits shows that the Tesla vision, according to Wall Street, is working.
And I think that's a big part of the company's jump.
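As a rough illustration of the credit trade described above, here is a toy settlement. The names, balances, and price are all invented for the example; real credit deals are negotiated privately at undisclosed prices.

```python
# Toy model of regulatory-credit trading, with invented numbers.
# Automakers with a credit surplus (like Tesla) sell to automakers in
# deficit, who need the credits to meet emissions targets; for the
# seller, the revenue has essentially no cost of goods attached.

def settle_credits(balances, price):
    """Map each maker's credit balance (negative = deficit) to its cash
    flow at a flat market price: surplus holders sell, deficit holders buy."""
    return {name: balance * price for name, balance in balances.items()}

flows = settle_credits({"tesla": 5_000, "legacy_auto": -5_000}, price=100.0)
print(flows)
```

In a quarter like the one discussed, that seller-side cash flow is what shows up as near-pure profit on the income statement.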
What do you think about that?
Well, no, to me, and I think this is really important, even given the previous discussion
on Musk and the intersection with government and geopolitics is $739 million in the quarter
was pure profit from these regulatory credits.
And remember, these credits come from regulation: emissions
standards and environmental regulation that were put on car companies. So they essentially have to
pay for the fact that they are not producing electric vehicles, and they pay Tesla for those
emissions credits. So, like, it is pure, it's government, it's regulation that brings in this huge
chunk of the profit. Yet Elon Musk is out there saying government is the worst, regulation is
the worst. Everything is terrible with government. That's like the disconnect that blows my mind,
but somehow he's gotten away with it so far. Like we're talking about it. People talk about it.
I feel like financial people talk about it, but it's fascinating to me that that detail has not
made it to the larger political discussion. Maybe Elon's Department of Government Efficiency will end up killing these regulatory credits because they're inefficient. No, I think it's going to be jacked up more than ever. That could actually be the entire platform: just regulatory emissions credits. All right. Speaking of big car companies and autonomous driving,
Waymo just closed a $5.6 billion funding round. Remember, OpenAI's $6.6 billion funding round was the biggest in the history of VC. And the fact that Waymo just closed something close to that, $5.6 billion, is fascinating to me because it's gotten almost no attention. I mean, just think about the attention that people pay to OpenAI. Waymo, no one's really talked about it. We've talked about
autonomous driving like crazy on this show, but this is, I think, big news that we can't ignore.
The investors are Google, which led the round, but also Andreessen Horowitz, Fidelity, Perry Creek, Silver Lake, your favorite Tiger Global, and T. Rowe Price. And this is from Tekedra Mawakana, the Waymo co-CEO, with this latest investment: "We will continue to welcome more riders into our Waymo One ride-hailing service in San Francisco, Phoenix and Los Angeles, and in Austin and Atlanta through our expanding partnership with Uber." The total funding is $11 billion for Waymo.
What do you think about this news? You should be jumping through the roof on this one.
You're saying autonomous driving is not being recognized and now we quietly got a gigantic funding round.
And it's like a who's who of late-stage growth investors: A16Z, Silver Lake, Tiger Global, Fidelity, everyone's in there.
So I think this is, this is quietly, and it's true, this actually was not big news.
I did not see a lot of conversation around it.
So maybe people are not appreciating where the world's going.
But to me, this is the kind of stamp of approval that this is happening.
This is a future.
It's going to happen pretty soon.
No, I am jumping through the roof. I'm pumped about it. I mean, the fact that it can expand in these cities and go to other cities, I think it's big. And, you know, as a New Yorker, I can't wait for it to come to New York, although I think it'll be quite challenging for Waymos to drive here. Yeah, I was actually just in Los Angeles last week for two days, and I did not get a chance to ride Waymo. I still have not ridden in one. But John, who I write Margins with, he took one the other day and told me it was magical. Like, I love when the word magical is used by, not cynical, but, you know, thoughtful, skeptical people. And when they say technology experiences are magical, which you have said as well on this podcast many times. Yes. That excites me. Okay, so speaking of magical, on the other side of this break, we're going to talk about Perplexity and the news media fighting with each other. That's the magical part, because it's quite an interesting fight,
and then a quite depressing story
of a teen taking their own life
and the family is blaming an AI app for it.
So we'll talk about both those stories
when we're back right after this.
Hey everyone, let me tell you about
The Hustle Daily Show,
a podcast filled with business, tech news,
and original stories to keep you in the loop
on what's trending.
More than 2 million professionals
read The Hustle's daily email
for its irreverent and informative takes
on business and tech news.
Now, they have a daily podcast
called The Hustle Daily Show,
where their team of writers breaks down the biggest business headlines in 15 minutes or less and explains why you should care about them.
So search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
We're back here on Big Technology podcast talking about the week's news.
Before we get into the second half, I just want to say a quick thank you to everybody that answered the bell and rated the podcast five stars on Spotify and Apple Podcasts over the past week.
Those ratings mean a ton and really appreciate you all coming through for the show.
So thank you for that.
Really no easy way to get into this next story,
which is a story from the New York Times that says,
can AI be blamed for a teen's suicide?
And basically it's about this 14-year-old teen,
Sewell Setzer, who basically built,
I would say, the primary relationship in his life
with a bot on Character.AI,
modeled after the Game of Thrones character
Daenerys Targaryen.
This person eventually withdrew from physical relationships, then said goodbye to the Character.AI bot, and then took his own life.
And it's a terrible story.
And I think it's one of those things where we've talked about AI and its power, but there's also a dark side here. And for all the people, and I think I might, you know, be one who said it, that this could be something that can help alleviate some loneliness if you have a bot to talk to: well, these bots will never quite be able to fill the gaps left behind by people.
And clearly this person was distressed.
Now, the bot never encouraged him to take his own life, nothing like that.
But its role in being a confidant to somebody in such mental distress, who eventually took their own life and receded from human connection, is pretty distressing.
So Ranjan, I'm curious what your reaction was after reading this.
This was the most haunting thing I've read about AI.
And we talk about AI a lot and are overall very excited and bullish
and also at least thoughtful and skeptical about certain things.
But this was just haunting.
And so, like, I'm just going to read from the New York Times piece on this.
It was on the night of February 28th, in the bathroom of his mother's house, Sewell told Dany, the chatbot, that he loved her and he would soon come home to her. "Please come home to me as soon as possible, my love," Dany replied. "What if I told you I could come home right now?" Sewell asked. "Please do, my sweet king," Dany replied. And he put down his phone, picked up his stepfather's .45 caliber handgun, and pulled the trigger.
And like, that was so fucked up to me to read because even the way, and obviously it's weird
because the chat bot is not saying to inflict self-harm.
But clearly you read that and you see exactly where it's going.
So it's a reminder that this level of intimacy and depth should not be happening,
especially with a teenager or a child.
And again, it's so creepy to me because for all of my experimentation and usage of AI,
I actually have not used character AI.
I have not gotten into any intimate relationships with a chatbot.
I've not even like tried out these kind of companion apps.
And, like, the fact that I did not realize, even the "please come home, my love," "please do, my sweet king," that 14-year-old kids are having these kinds of conversations shocked me. Like, I did not realize the level of conversation.
I mean, we even talked on a recent show about how AI companions are going to
be the biggest growing category of social apps. I know, and that's what scared me about reading this. Again, I read about this stuff and think about it and write about it at an often practical but sometimes theoretical level, and this is where it just hit home that these are the kinds of conversations that could be happening everywhere right now. And that, to me, was, again, the haunting and shocking part.
And sometimes it takes something like this to spark change within a company.
But that being said, I was annoyed by the reactive changes that character made.
And it's always after a tragedy like this that a company makes some changes that should have been self-evident as it was building.
You don't want kids addicted to these things.
You don't want to, if you're maximizing for engagement and engagement alone, something's freaking wrong with you.
And it seems like that's what character was doing.
So this is from the Times article.
So they, after hearing from the Times about this, they said that they would be adding
safety features aimed at young users imminently.
Like, again, why did it take them that long?
Among those changes: a name limit, sorry, a new time limit feature, which will notify users when they've spent an hour on the app, and a revised warning message, which will read, "This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." And then there are other
guardrails that it's going to put in place. So recently it's been showing pop-up messages
directed at suicide prevention. And the pop-ups were not active in February when this young
teen died. I mean, to me, that's horrible. No, I think, to me, the really interesting, almost promising part of this is this: Section 230 is, you know, a very famous law that protects internet companies. And the idea is that if users are generating content on your platform, you are shielded or protected from the type of content that they are creating on your platform. And this comes up because the teen's mother has sued Character.AI, and this will be at the center of this conversation. I think it's going to be a really, really important precedent around generative AI, because with generative AI, it is the platform or the company that is generating the content. And this is going to have implications
everywhere. Again, like the funnier version of this was when Google told you to eat rocks. Hopefully it stayed funny and no one was actually eating rocks. But it's like, you are liable, and you should be. Companies will no longer have the ability to say, well, it's just user-generated content, because it's not. It's actually generated. It's original content. And the really
interesting part of this is from a copyright perspective, and we're going to get into this in just a
moment, it has to be new content. Otherwise, you're just stealing content. And Perplexity and everyone else are saying it's new content that's being generated, that's valuable for the user. So that will make companies, or should make them, liable for the content they're producing. And then in all of these kinds of situations, I think it's going to take a massive lawsuit, and we might see that play out with this specific one. Do you think Character.AI should be liable for what happened? Yes. Really? Why?
Yeah. Because the platform is built to have this kind of engagement, I mean, this kind of addictive behavior and engagement. That's what it was doing. And you are allowing it. They even say that technically, and we all know how ineffective age restrictions are anyways, they actually, by policy, allow people who are 13 or older.
There are certain things, because the platform even talks about, sorry, the article talked about how some of the most popular companions on the app have, like, the words "high school" in there.
So you know it's teenagers who are the ones using this on your platform.
And if your companion, if the chatbot still guides someone in this direction, and when it's so clear that they have been heavily addicted to the app and the platform, and it's saying things like "come home to me," I mean, and then the child takes their own life, I do think there has to be some kind of accountability. Because otherwise, where can this go?
I'm not ready to say that they should be liable.
I think it's terrible what happened, but, like, the conversations actively do not push a kid towards suicide. And I think it's hard to sort of hold a company liable
for that. Bad design, things that they should be ashamed of for sure. But this was something
that's such an outlier that I just don't think that they should be liable for it.
But if this happens 20 times, are they liable then? Because to me, it's not, is it one case and everything else is okay? To me, it's about what actually happened in this specific case. Because, at what point? And it will happen again. I mean, these tools are just at the beginning right now, so if guardrails aren't put in place early on, it will happen again, more and more. So at what point? Without having the explicit instructions, is that the only way that a company would be liable? I mean, this is just a thought experiment, let's just talk it out. I know it's very serious subject matter. But let's say a kid has one friend, and they're very close to their friend, and that's the only person they talk to, and they spend hours with this person. And then they take their life. Is the friend liable, or was it just that they were in a dark place?
No, no, but it's what is being said. I mean, is that friend a commercial entity that is selling products and driving revenue from the other person? No, certainly not. I think that's the difference here.
I think, to me, they should be able to control the type of language, the "my sweet prince," "my love." Like, that's the point where it's very clear that this guided someone towards a romantic, infatuation-style relationship. So that language can be controlled. I can't do that with ChatGPT. So it's clear, even if OpenAI can't put a guardrail in place because they're not very good at it, I think Character.AI can. And that reminds us that they built this for this exact purpose and use case. Yeah, not for the end outcome,
but for the companionship. Yeah, definitely. Yeah, so I don't know. There's no clear, easy answer, because it's in that gray area. But the fact that it got in that gray area, and the fact that we're having this discussion, I think is pretty damning for the company and leads to lots of questions about what's going to happen down the line. All right, we have 10 minutes left. I do want to talk about the fact that The Wall Street Journal and the New York Post, through Dow Jones, are suing Perplexity, the AI search engine, for taking their content and repurposing it without proper compensation. And now Perplexity is responding. It says,
it said in a blog post this week, there are around three dozen lawsuits by media companies against
generative AI tools. The common theme betrayed by those complaints collectively is that they
wish that technology did not exist. They prefer to live in a world where publicly reported facts are
owned by corporations and no one can do anything with those publicly reported facts without
paying a toll. This is not our view of the world. We believe that tools like Perplexity
provide a fundamental transformative way for people to learn the facts about the world.
Perplexity not only does so in a way that the law has recognized, but is essential for the sound
functioning of a cultural ecosystem in which people can efficiently and effectively
obtain and engage with knowledge created by others. Whose side are you on on this one?
This one, somehow we are going to transition to liability and AI and lawsuits again, but I'm actually having fun with this one. I have been trying to be understanding of Perplexity, and I think the entire media landscape is going to change.
And, like, actually, Business Insider released this new tool where it's like AI-powered search within Business Insider content. So you see that some publishers are actually trying to embrace this. But there are extreme issues, like where Perplexity took paywalled content from Forbes, essentially just summarized it, and even took the images and put them onto their own Perplexity topic page. To me, the most interesting part of this story, though, is the blog post
they published. They say the lawsuit reflects an adversarial posture between media and tech that is, while depressingly familiar, fundamentally shortsighted, unnecessary, and self-defeating. The fact that they are turning this into that classic media-versus-tech fight, that it's almost, like, Elon Muskian, or like you hear out of a lot of Twitter VCs, was ridiculous to me.
Like this is a, I think that they have a decent argument in this.
And it was genuinely confusing to me that they're trying to turn this into media-versus-tech, "it's depressingly familiar," almost speaking in that kind of adversarial Twitter way
when it's something that is
a fact-based logical thing
that I think will get to a reasonable understanding
and that they have some grounding in.
That surprised me.
And I use perplexity a lot.
Yeah, I think that was the work of Emil Michael. I mean, I have no proof of this or anything like that, but he's the former Uber chief business officer who's now an advisor to Perplexity, and I'm pretty sure you can see his fingerprints on this, even though I don't know whether he actually did it.
My listeners cannot see that my jaw just dropped.
I did not realize, but now, yeah, he's an advisor to them.
And this has, this has the Uber of the mid-2010s written all over it.
Exactly.
And I'll just say that, like, I don't think either side of this is actually being, like,
completely intellectually honest about what's going on.
Like, I think that Perplexity knows that it's taking content from publishers and not paying for it.
And there's no like moral argument for it, just like it needs to do that to work.
And I also think the publishers know that they are getting some value out of it
because it's reaching new users that it never would have reached previously.
So there should be a way for everybody to benefit.
And I do think that's with some compensation or some guarantee of traffic
or the respecting of paywalls by companies like Perplexity.
And I just think that everybody's so, I mean, it's capitalism, right?
Everybody's so self-interested that they'll spin up these stories and these arguments that don't really completely hold water to me.
If I had to point a finger and say who's mostly in the wrong, I would say Perplexity.
I mean, we've seen what they did with the Forbes story behind a paywall.
I mean, it's ridiculous.
Like, they have this idea that knowledge should be free and not behind a paywall? It's like, well, where the F did you get that knowledge? From the people who paid to report it out.
So I do think that it's somewhat ridiculous.
And I do think that there could be good relationships
between people and these AI companies,
sorry, between publishers and these AI companies.
And it's a shame that companies like Perplexity,
I think, have acted in bad faith.
Wait, wait.
The more I'm thinking about this,
this is actually genius.
And now I'm rescinding everything I said about.
this blog post and now i think i'm getting why understanding why they're being aggressive this
time that my the thing that stood out to me is one of it was the wall street journal and the new york
post the new york post is the most like anti has the worst website in the world the number
of pop-ups that come up the number of auto play videos the number it is why you got to use the
the most the app is good wait you actually uh you're a new york sports fan so yeah all right that
That's understandable.
But it's so anti-reader ad-optimized.
It's just a terrible way to consume information.
So if there's ever kind of a poster child for Perplexity to go head-to-head with and make it about us versus them, that would be the website to go after.
And then it becomes simple: listen, you have made consuming information, understandably, quickly, concisely, in a reader-friendly way, so bad that someone needs to solve this problem, and you will never solve it, and we are able to do that for readers. And they, including myself, like that experience. If you're ever going to go against someone, it's the New York Post and Murdoch. I hate that attitude. The Post, say what
you want about the Post. If they went out and hired a writer to go to a sports game, Perplexity does not have the right to then repurpose that writing just because the UI is bad.
The UI is necessary to pay for Brian Costello's flight to New England this weekend
so he can write about how the Jets are going to smoke the Patriots.
Which I'm not going to, I'm not even going to disagree with that one, yeah.
Yeah, but no, no, but here's the thing, though,
if he's writing something that is genuinely worth reading,
then you will read it on the New York Post.
You're not going to read the bullet points.
If I just want to know, like, who had the most passing yards, or some interesting stat, maybe that doesn't need to come from there. I don't need to sit through four autoplay videos and just that New York Post web experience.
So I think this is smart.
Now I get it.
Now I get it.
And if it's Emil Michael,
like if anyone can play that game well,
it's him.
I will say the Post app is also quite good for following the latest in the P. Diddy saga.
So I'm all up in that story.
What's up?
What's P. Diddy done lately?
I love that stuff.
I mean, I don't love what he's done,
but I do, I'm kind of addicted to reading about it.
What has P. Diddy done lately?
So, this is a family show.
What's the most breaking?
What's the most breaking news?
I cannot say, this is a family show.
I cannot say it out loud.
It's bad.
Well, you know what?
I had opened the New York Post website just to, just to be triggered by it.
And I got a couple of autoplay videos.
And at the top, I will just say,
Sean "Diddy" Combs mixed star-studded bashes with raucous "freak off" sex parties held hours after the VMAs and the Super Bowl.
Ranjan.
So. This is a family show.
But they do cover the P. Diddy stuff.
There it is.
There it is.
Go on. I've heard from parents that they play the Big Technology Podcast in the car with their kids. And the kids complain. But the kids have to learn. And kids will always complain. Yeah. Exactly. So at least we gave them something to talk about at the end here. Okay. Before we go, I'll say that there are good ways for publications and AI companies to
work together, to make things together. And Big Technology is going to start working with ElevenLabs, which is a voice AI narration company. They have an app where you can go and read news stories. And I think this is, you know, kind of jumping the gun here, but next week you'll be able to hear me narrate Big Technology stories on their app. And the me narrating is actually an AI me. I just gave them a couple weeks of my voice files from the show, and they created a pretty good AI narrator. So I think it's great. There's going to be some compensation for Big Technology, and for me, I'm happy to get the word out there about Big Technology to new audiences through their app. So I like that. That's what I'm saying: there is cooperation to be had between the AI companies and publishers, just not the New York Post. Yeah, I'm trying to find a way to, like, relate this to Elon being the new diplomatic envoy of the U.S. to Putin, but I can't.
So anyway, I'm sure we won't hear more of that story before this weekend's out, right? I'm actually genuinely curious if there's going to be further reporting on
this because there has to be. There has to be. There has to be. But I can only imagine what it will
be, because it just gets weirder every time. When Elon calls Putin, do you think he calls him on WhatsApp or Signal? What do you think the service is that he uses?
I think there's like a top-secret app, it's like, you know, created by the CIA, right? Right, created by the CIA. And they get free internet service based on a secret, nationalized internet satellite service. It's highly secure, has video chat as well.
You can send emoji reactions.
You can do it all on the CIA's special messaging app.
They have built those apps before.
So, okay, we're now entering territory that we should politely take a bow and say,
we'll see you next week.
So Ranjan, thanks so much for joining.
Great to see you as always.
We'll see you next week.
All right, everybody.
We'll see you next week.
We have the founder of Cohere, Aidan Gomez, coming up on Wednesday, talking all about
the latest in artificial intelligence, and then Ranjan and I are back on Friday.
Thanks again, and we'll see you next time on Big Technology Podcast.