Big Technology Podcast - Sam Altman’s Gentle Singularity, Zuck’s AI Power Play, Burning Of The Waymos
Episode Date: June 13, 2025
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Sam Altman's 'Gentle Singularity' essay 2) Is Altman overhyping the technology's current capabilities 3) Why the next few years may see crazy AI development 4) The case for and against humanoid robots 5) OpenAI's o3 pro model and the value of tool use 6) Meta's acquihire-zition of ScaleAI and founder Alexandr Wang 7) The case for the move and the rationale behind Zuck's aggressiveness 8) Meta AI posts 'private' conversations 9) Google traffic to web publishers falls off a cliff -- here's the data. 10) The burning of the Waymos 11) Alex fights robots --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Sam Altman shares his vision for the singularity as OpenAI keeps shipping.
Mark Zuckerberg is on the warpath to fix his company's AI effort.
The WWDC fallout continues, and Waymos are ablaze.
What does that mean?
That's coming up on a Big Technology Podcast Friday edition right after this.
Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional,
cool-headed, and nuanced format. So much to talk with you about this week.
If you thought we were going to spend the entire episode talking about WWDC, I'm sorry to say that's not going to happen today.
Instead, we have so much going on, including a vision-setting document from Sam Altman at OpenAI.
Some really interesting news coming out of Meta, as Mark Zuckerberg tries to right the AI ship.
Okay, we'll talk a couple minutes about WWDC because the company seems to be digging itself into a deeper hole.
And then, of course, the image of the week, Waymos lit on fire in Los Angeles amid the protests.
Joining us, as always, on Friday is Ranjan Roy.
Ranjan, great to see you.
We're going to have a lot to talk about this week.
Waymos are ablaze and listeners cannot see,
but Alex is holding a TikTok-style influencer microphone,
I think in a corner of a hotel room maybe or?
At a friend's apartment.
So I do want to say, for those listening, watching,
I brought all of the proper equipment to record normal podcasts this week,
but I forgot one cable.
So that is podcast life.
All right, let's talk about this post from Sam Altman, The Gentle Singularity.
Kind of an interesting way to put it. I'll just read the beginning. We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far, it's much less weird than it seems like it should be. We have recently built systems that are smarter than people in many ways and are able to significantly amplify the outputs of the people using them. The least likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far. In some big sense, ChatGPT is already more powerful than any human who has ever lived.
Ranjan, I got to ask you, I mean, obviously, like, you know, you can make a case for many
of these claims as the CEO of OpenAI.
Why now?
And why do you think Altman feels the need to come out with this post?
Because this is like a major, I would say, vision setting document.
from him. So normally, when I see a blog post from a founder of a company like OpenAI called 'The Gentle Singularity' that's very bombastic and future-looking, I think I usually will kind of discount it as more just marketing content. But actually, I don't disagree with a lot of the things he's saying. I think he actually provides a pretty realistic view. In terms of 2025, we'll see, we're already seeing, agents that can do cognitive work, writing computer code. In 2026, we'll see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots, then getting into, like, imagining what could 2035 look like.
I've been a long, like, proponent of the idea that innovation has slowed, that, like, a cell phone today looks like it did in 2011, basically.
that there's a lot of, like, for our day to day lives, there have not been just kind of like
dramatic changes since the late 2000s, early 2010s when we did see a kind of like fundamental
shift in the way we interact with technology. So I'm actually, this oddly enough, was kind of
exciting for me and kind of actually had me thinking about what could life in 2035 look like.
So in this post, Altman artfully writes a response to a lot of the core complaints that we see about AI.
Just to paraphrase, he says, you got it to write a paragraph.
Now you want a novel.
You got it to help a scientist in research.
Now you want it to come up with discoveries of its own.
So like, first of all, okay, well, who's setting that bar exactly?
You know, it is hype posts like this.
So you're almost arguing with yourself, Sam.
But the other side of this, which is really interesting is, yes, we've seen,
look, we are happy to talk about how impressive some of this technology is, but we haven't really seen it take the next step, right? It's amazing
in the chatbots right now. And, you know, trying to apply it outside is not as easy. And in fact,
there is a new paper that just came out where they looked at a company going from zero to 30% of
its code written by AI and a key measure of productivity only went up by 2.4%. Now, that's billions of
dollars in the real economy, but it's not exactly making a normal engineer a 10x engineer. So talk a little bit. I mean, I understand like this is the
trajectory that Open AI wants to go on. And if you believe AI is going to get to the place
that a lot of folks are saying, then this is what we expect. However, how do you, how do you
sort of contrast that with the clear limitations we're seeing with the technology today?
Well, no, I think we're at that inflection point. I think it's going back to the model versus the
product and the app layer, but like we have seen the just kind of like foundational advancement
over the last few years just accelerate to a dramatic degree. But now we're going to start seeing
this applied. And that's everyone is working on it and actually getting this like genuinely
applicable at scale. Because as you said, like now a single engineer or across an engineering
department, you can automate a lot more of the code writing process. But what does that actually do
to overall productivity? It's still minimal.
So actually bringing AI into larger scaled systems,
both in our personal lives and our professional lives across enterprises,
I think we're going to start to see that more.
There's more of a focus on that right now.
So I think, again, I think the next two to three years,
we see a much, much bigger jump in the way work changes,
our lives change, versus the last few years, as everyone was still living in kind of the toy phase of things.
But even if, like, it's an application issue, I would say that, like, I know, you know, some developers who will say that this code won't necessarily code in the way that your company codes.
It will bring in legacy code that you've phased out.
We'll have junior developers that will code things and not understand how they work and then ship them and break the app.
And I think those issues are in the most powerful application of this technology right now, which is coding. And clearly that goes for, like, having it write things and work across
systems. So talk a little bit about where you see the gap between what this technology
is capable of and why we're seeing these issues in implementation. I mean, part of this
has to be organizational, or even at an individual level, just trying to figure out the right use
cases. And it sounds like you believe that there's a long way to go in terms of what we can do
even with the current systems.
No, no, I don't think there's a long way to go.
I think we're finally working on the right pieces of it.
That, like, the foundation model race has gotten boring.
I mean, I feel like when's the last time any of us got truly excited
by some new foundation model update?
Now the things that are exciting, what you hear about, like,
what everyone's talking about, are actual outputs and, like, actual applications of AI. So I think we start to see the change a lot more. I think, again, to me, and I've been asking for this for a while, like if we could all just take a breath and move away from like the
almost rat race of foundation model advancements and actually be like, okay, now how do we
take the technology that exists as of June 13th, 2025, and then actually implement that into our
lives. And we're going to get into the Craig Federighi, Joanna Stern from the Wall Street Journal
interview in just a little bit. But I actually thought a lot of what came out there was
people expected, even companies like Apple, you would just kind of plug in an LLM and it would
just solve everything. That's not how it works. Everyone's been learning the hard way that
it takes a lot more organizational and systems nuance to make things work. But the reality has finally set in, Apple, like, understanding that better than anyone else, and now the real work
can begin. So now that Ranjan has made a principled stand against AI hype and building up
the technology beyond its capabilities, I am now going to continue reading Sam's post
and give you all a dose of AI hype and building the technology past its current capabilities,
just for the, you know, exercise of getting Ranjan to respond to some of these claims.
So you had already mentioned that 2025 will see agents that are able to do real cognitive work.
Altman says 2026 will likely see the arrival of systems that can figure out novel insights.
2027, we may see the arrival of robots that can do tasks in the real world.
He says, the 2030s are likely going to be wildly different from any time that has come before.
We do not know how far beyond human-level intelligence we can go,
but we are about to find out.
So what do you think about these predictions?
Are you on board with them?
Like I said, the beginning of Sam's post is directionally on point.
2035, given the last two years of technological advancement,
it is kind of crazy to think about what life could look like by then.
And it's kind of exciting, I think.
Like, I genuinely, and also terrifying in certain ways,
but like it should be different, like given what we have to work with right now.
And even, like, with generative AI, large language models, I am a true believer.
Like I don't agree with the Gary Marcuses of the world in terms of saying the technology is not good.
I think it has not been used to its potential or in the right way to date outside of chatbots.
But I think I'm still sticking with it, 2035, thinking about how different life could be versus the last 10 years, 2015 to 2025.
Like, how much has life really changed driven by technology in the last 10 years?
When you're just, I'm looking around my apartment right now. Like, it doesn't look that fundamentally different, the way I go to work and when I sit at work and all that stuff.
I guess like virtual conferencing and stuff is a big, big change.
But other than that, it all kind of looks the same.
People dress the same.
2035, we're all wearing moon suits and have a robotic best friend.
More than moon suits, here's what he says.
The rate of new wonders being achieved will be immense.
It's hard to even imagine today what we will have discovered by 2035.
He then gives a bunch of examples, but it concludes this paragraph by saying,
many people will choose to live their lives in much the same way, but at least some people will probably decide to plug in.
I think that means connecting their brains with the AI.
Ron John, you talked about, you know, wanting to live differently.
Are you plugging in?
Oh, man, Sam, you had me.
You had me until there.
I don't know.
I remember, like, I have an Oura ring on my finger.
I ended up getting one.
I have an Apple watch.
Like, the surface of me now is connected in many ways. I have AirPods in right now. I wear Meta Ray-Bans when I'm walking around. So, like, it's not injected into me yet, but definitely, like, I don't know. What do you think
your outfits will look like in 2035 in terms of will they be covered in technology?
Will you have a brain-computer interface? Will you have a Jony Ive medallion on a big Mark Zuckerbergian chain around your neck?
What's it going to be?
I'm going full WALL-E.
Get me in a go-kart, give me a big soda, and put me on autopilot.
Full WALL-E.
No, I mean, I think it'll probably look a lot like it looks like today.
I do anticipate that we'll have humanoid robots around,
but the question is how good can the industry get them and how safe can the industry get them?
I think humanoid robot safety is something that's not talked about enough, but if one of those things goes rogue, you could have a Terminator problem.
And you don't want a Terminator problem.
Never a good thing.
That's one of the things you want to try to avoid.
But look, if you do your best and it happens, no one can really blame you, right?
Yeah, I mean, you tried.
You did fine.
It's the fault of Congress.
This is an idea that Sam had in the piece that I thought was interesting.
He goes, if we have to make the first million humanoid robots the old-fashioned way,
but then they can operate the entire supply chain, digging and refining minerals, driving trucks, running factories to build more robots, which can build more chip
fabrication facilities, data centers, etc., then the rate of progress will obviously be quite
different. So he's describing like a humanoid robot explosion, similar to like the intelligence
explosion that some expect with AI. I thought that was an interesting idea. I am going running counter
to the greatest tech minds of our time, but like I don't get the whole humanoid robot thing.
we've debated this in the past as well. Like to me, applying the human form factor to robotics,
rather than actually having specialized robots that actually solve specific problems and are built,
because again, right now you go to any automated warehouse, it's not humanoid robots moving
around. It's robots that have been specifically designed to handle repetitive tasks of picking up
boxes and moving them and placing them and pulling out items. Like, I'm still team specialized robotic form factor versus team humanoid robotic form factor.
I highly disagree. I am on team...
You're a humanoid guy.
Maybe humanoid with, like, six or seven arms.
Yeah, why not seven arms?
Then I would go seven arms.
Yeah, okay, go seven. No, why not make it 12? Do a full, um, what's it, the, um, the, you know, the goddess with all the, uh...
Durga.
Yeah, Durga. It was obviously a very good design decision to give those arms to Durga. This idea that we have these functional
robots makes a lot of sense because those robots don't have a world model. They don't understand
the world as we do because they don't see it as we do. They don't understand physics, really.
I mean, they might be able to grasp things and have that hard-coded in them. But it's similar to going from, like, hard-coded AI to a large language model, which understands, right?
But, like, you know, can be conversant on a bunch of different topics. When you build AI with a world model that understands physics, objects, how things work together,
then you want to go humanoid robot or maybe, you know, souped up robot that takes a
humanoid form because all of a sudden you can be functional. Like the idea that you can have
humanoid robots, which is one function, do all these things that Sam is discussing, which is, again,
digging, refining materials, driving trucks, which we already have steering wheels and they have
hands, right? Running factories and building more robots and building chip fab facilities.
That is an exceptional form. I don't think you want to go too specialized for each, because ultimately, you know, this is a very complex world that requires complex maneuvering to be really useful. In a weird way, I guess that's, like, the most human-centric
or human-forward view of it, because I want to just kind of rebuild and remap everything to actually be more efficient for the specialized robots. But I think maybe you're right, the Durga model, souped up, eight to ten arms, maybe with some wheels on the feet, right?
Yeah, yeah. Is anyone working on that? Boston Dynamics, I'm sure, probably.
I mean, we're talking about eons of evolution. Like, something happened in a good way to get us to where we are right now. It really does work. So let's just sort of conclude this by bringing this back down to earth with the final passage from Sam's article, which I think is, like, really good.
He says for a long time, technical people in the startup industry have made fun of the idea guys,
people who had an idea and were looking for a team to build it.
It now looks to me like they are about to have their day in the sun.
This is, I think, pretty interesting.
It's kind of an homage to vibe coding.
But there has always been this idea of, like, you know, so many people are like, I got an idea for a startup.
And they just never build it because they don't have the technical talent or, let's say, the charisma to get a bunch of
people around them to build it, these idea guys, and the technical people can just go out
and build it. But with vibe coding or with AI coding, maybe it does become the age of the idea
guy. What do you think? Yeah, I'm going to, Sam ends with me in agreement here. I 100% agree
with this. Like, I mean, I was having a conversation with like an early stage startup founder
recently who had not built a prototype and still just had a pitch deck. And I was like, to me,
there's no excuse for that right now.
Like anyone can build at least basic things right now.
And actually many people,
you do not have to be,
have a full technical team to build a functional product.
And that means that anyone with an idea
should be able to actually realize that idea in some form.
And that's,
or at least prototype.
At least prototype,
but even get to some level of functionality.
And I think that's actually exciting.
That's like the best,
most exciting part of generative AI for me.
So I think, idea guys, it's your time.
All right.
So final thing.
Let's talk about superintelligence.
This is the new word.
Sam says, OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
We have a lot of work in front of us,
but most of the path in front of us is now lit,
and the dark areas are receding fast.
We feel extraordinarily grateful to get to do what we do.
Okay, two questions for you.
One, why is everybody talking about superintelligence now?
We're going to get to it in a moment with meta.
I thought AGI was the buzzword.
Is that now something that is too low of an ambition?
I guess when you raise $40 billion, that is what it is.
And second, you don't take any issue with this.
It does seem to be, again, you're someone that doesn't like hype.
This is hype.
I mean, got to call it out for what it is.
Sorry, I mean, again, this has been quite the emotional roller coaster for me,
going through this, because I've been supportive, and then we end, again, with, to me, like, how is it not a bigger story, the AGI to ASI, artificial general intelligence to superintelligence, rebrand? It's crazy. It's weird. It's ridiculous. Like, it just happened. Everyone has just comfortably moved on from AGI and started using superintelligence. I think that's the name of Ilya's...
Yes, Safe Superintelligence.
Safe Superintelligence.
That was kind of the first, thinking from a pure branding perspective, that was the first inkling.
Clearly the messaging worked.
Everyone started saying it.
It absolved people from having to achieve AGI or when everyone is saying AGI is already here, yet life doesn't feel significantly different.
So I'm going to give superintelligence the, like, I mean, from a branding perspective, the fact that they have shifted to this conversation and now we're all just accepting it and moving on is crazy to me, but it's happened across the industry, I feel.
So kudos to Ilya from a branding perspective, and to the comms folks, whoever came up with superintelligence first as a term.
You've done good.
Or you've made things harder for everybody else.
You bought a couple more years of runway.
Well, Ilya obviously has raised billions without releasing a product. By the way, on the subject of Ilya, next week on the show, Dwarkesh is going to come on. And he has some very interesting thoughts about what Ilya is up to and the type of AI that he may or may not be building and how that might help advance the state of the art. That will come next Wednesday, June 18th, with Dwarkesh Patel. So stay tuned for that. Really fun conversation. Okay. As this happens, though, we are seeing model improvement. And Ranjan, you said, when was the last time we were excited for a model release? And it's funny because I've sort of been, like, pouring cold water over this Sam Altman statement while you've been sort of enthusiastic about it
through our conversation today. But I will say I definitely was excited for the o3 model. That model to me is, like, the first model that really works and is useful in various ways to me in my daily life. And now OpenAI is releasing o3-pro, which is a better version of the model. It's going to be available initially to those paying $200 a month to OpenAI, which unfortunately no longer includes me. But there's a Substack called Latent Space that talks a little bit about why this model
is an improvement and why I think it's going to help lead to better products, just to throw
that out there one more time. First of all, the post about current models says they are like
a really high IQ 12-year-old going to college. They might be smart, but they're not a useful
employee if they can't integrate. So talking about o3, the author says,
this integration primarily comes down to tool calls, how well the model collaborates with humans,
external data, and other AIs. It's a great thinker, but it's got to grow into being a great doer.
o3-pro makes real jumps here. It's noticeably better at discerning what its environment is,
accurately communicating what tools it has access to, when to ask questions about the outside
world rather than pretending it has the information access and choosing the right tool for the job.
When you think about improvement in models and what that leads to, I mean, we're going to see,
right, this is just the very, very early reflections on what this can do.
I think a model that does understand its environment, like I talked about, super important,
can ask questions to people, and then understands which tools to use when it has to do a task.
To me, I would say that's pretty important, and I'm excited to at some point get my hands on this.
I will fully agree the next great battle in AI is tool calling.
That's where we're going to see the maximum amount of progress. Like, actually bringing these models into agentic AI, that's all that matters: the ability for an agent to understand its context and then take the next correct action.
And to do that, you have to know what tools you have access to and which tool is correct to interact with next.
So I think this is huge.
Actually, like, this is where, and I'll give you, it's on the model level, so fine, models matter, fine.
But like, I think this is very astute.
The tool calling is going to be the key to agentic AI, which is going to be the key to integrating into the existing world: systems, companies, processes, organizations, everything.
What is tool calling? Just explain what that is.
It's the ability of the model to actually call out to another tool,
either via API or script or whatever resource it uses to access another tool.
It's the ability to, so currently you might be doing that manually by actually, like, coding out API calls. There is a world where, like, a large language model should be able to generate that on the fly, understand what tool it should call out to, and then actually generate that connection in real time and make that call, transfer whatever data needs to be transferred, take whatever action needs to be taken. So, like, right now, if you use Deep Research,
you kind of start to see it in action. What is it doing? It's calling out to a bunch of websites via the internet, the World Wide Web.
It's calling those websites, maybe it's downloading documents, and then it's going to parse
them.
Like, each one of those is an action that often requires a specific tool, but then you
imagine that in large systems that exist already.
And the ability, so you don't have to manually map out every single block on an agentic
workflow, like that is a huge area of opportunity right now.
And I really think that's the next great AI battle, and it's at the model layer, so I'll give you that.
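To make that concrete, here is a minimal sketch of a single tool call, assuming Python and an OpenAI-style chat completions API. The tool name, its schema, and the model string are hypothetical placeholders for illustration; the point is that the model is handed a menu of tools and decides on its own whether and how to call one.

```python
# Minimal sketch of tool calling with an OpenAI-style SDK.
# The tool (get_stock_price) is a made-up example, not a real API.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Describe the tool to the model: name, purpose, and input schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",  # hypothetical tool for illustration
        "description": "Look up the latest trading price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

# 2. Ask something the model can't answer without calling out to a tool.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "What is Meta trading at right now?"}],
    tools=tools,
)

# 3. Instead of plain text, the model returns a structured tool call;
#    your code executes it and feeds the result back in a follow-up turn.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(call.function.name, args)  # e.g. get_stock_price {'ticker': 'META'}
```

In a full agentic loop, your code would execute the returned call, append the result as a tool message, and let the model choose the next action, which is the understand-context, take-the-next-correct-action cycle described above.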
Okay, that's super interesting.
We definitely should do more on that, so folks expect more conversation about tool calling
on the show.
We have so much more to talk about.
We've got to talk about this Meta thing, a very quick reaction to WWDC, and the fact that
Waymos are on fire.
We'll have a very fast-moving second half right after this.
Hey, everyone, let me tell you about the Hustle Daily show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast, Friday edition, talking about all the week's AI news, a lot of more theoretical stuff.
Let's get more practical in business here in the second half.
Meta is making a very big investment in scale AI.
I call it like an aqua-hire-sition.
It's weird.
They're not buying the full company.
But this is from the...
I actually think I said that on air on CNBC.
No, no, I...
Hold on.
Yeah.
Can you coin that? Trademark the acquihire-zition?
That is what this is, and that describes this better than anything else, and it's amazing.
The word came out of my mouth on air, and I was like, what did I just say?
I'm going to roll with it.
Stick with that.
Go acquihire-zition.
So this is from the information.
Meta to pay nearly $15 billion for Scale AI stake, hire startup's 28-year-old CEO.
I love how, like, companies are investing in other companies and you get the CEO and, like, the top talent because of it.
It's something that's happened multiple times, including with Inflection, with Mustafa Suleyman going over to Microsoft. And Meta, which has had some regulatory issues, is taking note.
So this is from the story.
Meta has agreed to take a 49% stake in the data labeling firm Scale AI for $14.8 billion. Meta will send the cash to Scale's existing shareholders and place the startup CEO Alexandr Wang, former Big Technology Podcast guest, in a top position inside Meta. Meta would put him in charge of a new superintelligence lab. There is that word. Hit the bell. Along with other top Scale technical employees. That will put him in competition with some of his customers and friends, including OpenAI CEO Sam Altman.
Another interesting point from the story: Meta CEO Mark Zuckerberg has been actively
recruiting top AI researchers in an effort to boost his company's AI efforts.
He was frustrated with the reaction to its latest AI offering, Llama 4, and aims to catch up
to competitors such as Google and OpenAI.
Ranjan, your reaction, good or bad move from Zuckerberg?
This one is tough for me.
I really go back and forth in terms of good or bad.
They are taking some action.
They've been falling behind and clearly they want to catch up.
So that's good that they're willing to take some bold action.
But again, an acquihire-zition, $15 billion for a 49% stake just to hire the guy. Like, I was even confused again if it was truly an acquihire-zition. But actually, like, they announced that the chief strategy officer of Scale AI, Jason Droege, will now be CEO. And Alexandr Wang is full-on Meta. He's not a little Scale, a little Meta, some kind of weird Elon Muskian dual role. He's all in. Is it really, is he worth that much? He's just given some consultation to Mark Zuckerberg over the last few months and given him good advice; is he worth that much?
Let me do my best to make the case for this deal
because I think it is worth it.
And I don't think it's going to be the last one.
Because if you read between the lines, it's not just Alexandr Wang.
This is from Bloomberg.
Zuckerberg has spun up a private WhatsApp group chat
with senior leaders to discuss potential candidates.
The chat, which they've called recruiting party,
is active at virtually all hours of the day.
and Zuckerberg has also been hosting folks at his homes in Palo Alto, California, and Lake Tahoe,
and personally reaching out to potential recruits.
Okay, let me set the stage here.
So if the things that we talked about in the first half, if Sam Altman's predictions come true or hold true,
that this is a rapidly advancing technology that's going to determine the future of technology,
you really can't afford to be mediocre for a couple of years and hope to catch up.
And I think that's been the alarm bell around Apple, but certainly it's an alarm bell with Meta, because after they took the lead in open source, they were surpassed by DeepSeek.
And then Lama 4 was not up to expectations.
So I think Zuckerberg sees this.
And what he's doing is looking out at the landscape and saying there's basically three vectors that you can compete on with AI.
The first is GPUs, just scaling up GPUs. Meta has that, right? They had a ton before this moment. They used them to build, you know, a very impressive lot of models right off the bat, and they've got the GPUs.
The other two things that you need are data and talent.
And Meta has a lot of data, but Scale has proprietary data that is basically being used to help companies scale up their models beyond just using GPUs.
And then the talent thing is super important.
You'll remember that Sergey Brin came on the show a couple weeks ago
and said that he believes that algorithmic scaling,
not necessarily compute scaling will lead to the most improvements.
And the way you get algorithmic scaling is building new algorithms,
and the way you build new algorithms is with talent.
So to me, this is Zuckerberg clearly seeing an issue with his company
and making, I would say, exactly the right strategic move to fix it, unlike another company that we've seen with a flagging AI product that, it seems, is still in denial about what is wrong. So that to me is the case for Zuckerberg not only going after Scale and Alexandr Wang, but starting this recruiting party and going hardcore on recruiting top talent. There are reports that he's offered eight- and nine-figure cash amounts to top talent engineers to come over to Meta. That's like in the tens and maybe even up to a hundred million dollars to a person, not a company, a person, to come over. So I think he realizes the
stakes and he's making it happen and he's shown an ability to do this in the past. That is, I think,
the bull case. What do you think about that, Ranjan?
All right. I think that's, I mean, again, anything, when you look at these relative to market cap or even cash on hand, they are not, like, existential for Meta. So in that sense, I think it's not unreasonable. I think it's also fair, too, from, like, a shareholder perspective, that the further behind they fall, there's more risk to Meta stock in terms of people, like, questioning strategy, versus spending on a very aggressive move like this. Still, I guess, like, Scale, you know, kind of helped OpenAI build their models, so they have a clear understanding of what kind of data, along with other, I mean, major companies. So he has been at the center of all of this, so maybe just that kind of, like, proprietary knowledge also has a significant amount of value. Still, this whole acquihire-zition model is just a sign of the times, I guess, more than anything else that I've seen in a while. But I buy what you're saying a little bit.
And I mean, it's clear that they do need new talent
because as you pointed out to me
in our text messages off air,
the product isn't exactly working even beyond the models.
This was my favorite story of the week.
So this is the most meta.
And let's just call it a Facebook thing,
because this is, like, old-school Facebook.
So the new Meta AI app, which many people may have downloaded, it's a separate app that's essentially kind of like the chat interface, chatbot experience that you would expect from a ChatGPT or Perplexity.
But one of the small nuances is they had also positioned it as somewhat of an AI social network.
Now, it was a bit unclear what that meant, but people started noticing, and I actually had not even noticed, the Discover experience in Meta AI.
And I don't use it much. I use it for generating images that are fun with my son, like he wants to do, like, half animals, half dolphin, half squid or something like that. And I'm like, all right, I'm going to use Meta's image generator on this one. If you go to the Discover tab, it posts entireties of chats from people that probably don't realize they're being posted. And as a social network, even crazier:
It posts people's voices prompting Meta AI. So it has, like, an audio clip, audio message, voice recording. And there's all types of crazy situations. A lot of it very personal: people asking about, like, a legal brief for a custody battle, asking about, like, relationships and depression. My favorite one, I found a screenshot on Reddit, someone saying, you're supposed to be my wingman, where my big booty future wife at?
So all types of requests.
But people are almost certainly unknowingly posting their AI chats to a public social network feed. Thank you, Meta slash Facebook, for bringing some of that old unanticipated sharing activity back to social networking.
Yeah, what's old is new again. And, you know, there's some funny parts of
this. Like you mentioned, the guy who asked for the AI to be his wingman and trying to find
his big booty future wife. And towards the end of this screenshot that someone shared, he says,
big booty and a nice rack. And the AI is like, you got specific tastes. I like it. What kind of
conversations are these? But also, it's quite sad. And it sort of goes into this conversation of people needing AIs for companionship, given that our society has done such a poor job in building community and sustaining and fostering community, that people feel like they need AIs to be their friends. And, you know, you just listen to these conversations between people and these AIs, where the AI has become their companion in many cases. And it's just like, oh, it's just such a glaring, magnifying glass, or I don't even know if that's a phrase that makes sense. I'm making light of it, but it's actually, it's terrifying and it's sad. And, like, I mean, a lot of the queries that have been posted around, it's people who are just really looking for help and answers and companionship.
But do you want to hear my conspiracy theory on this?
I always love a good conspiracy theory.
Okay.
We have a podcast after all.
What will we be without conspiracies?
So I was thinking about, like, I mean, on one hand, to do something this clumsy, I actually can see it. Like, it's just one product manager making a decision. And I saw that there are some people who actually, it looked like they were
purposefully posting to kind of show expertise around a subject or even like if you go
through a lot of people do like prayer affirmations and stuff, but then their handle is like
a church or something religious. So then you're like, okay, I can actually see this person
knew what they're doing and this idea that you push your prompt into a feed and then it's
getting liked and shared makes some sense. But then I was like,
what's Meta's biggest threat?
It's the ChatGPTs of the world, owning, like, the true human relationship and data and questions and queries that really get into the soul of a person.
Suddenly, I think this is going to continue to become a much bigger story.
And, like, suddenly, this idea that people are going to share everything with a chatbot is a little scarier. The more people start thinking, oh, wait, you know what happened with Meta? I'm going to stop asking ChatGPT and Claude these really personal questions. And suddenly, Meta is actually in a better position relative to OpenAI on kind of that personal connection to a chatbot.
What do you think?
That's a great conspiracy.
That is a great conspiracy.
I won't rule it out entirely.
All right.
Let me ask you one more question about this Scale thing before we move on.
This came up in our Discord.
There's been plenty of reporting on why Meta wanted to buy Scale AI, but why did Scale AI want to sell? Are the main LLM providers getting good enough at obtaining training data themselves? Did DeepSeek signal a top for services like this? What do you think?
Yeah, I definitely think so. I also think synthetic data in training foundation models is going
to become more and more of just a standard practice. Like, we've exhausted the race for real-world data.
Foundation models have also gotten very, very good.
And regular listeners will know that I'm definitely of the school
that we don't need bigger and bigger and bigger models.
So I think in that sense, like, the game Scale AI played,
the service they provided was brilliantly timed.
They became like a critical part of overall LLM infrastructure.
But what they did, their job, again, like, actually having large networks of people manually tagging data to make it more ingestible for a large language model or for training, it's not going to be as relevant anymore. You can even now have large language models do the tagging itself. So the service they provided was not going to last. So good on Alexandr Wang and his timing in terms of making this move.
Okay, so we talked again about Meta seeing an issue and addressing it. Now let's just go quickly to Apple, because I thought we were done with WWDC coverage, but then there's been a bunch of executive interviews that have come out, mainly with, um, Craig Federighi and Jaws, who's their head of marketing. And it just seems like this company is deluded. They've said that, um, they are not looking to build a chatbot, but also that Siri, you know, their mission is to make Siri the best assistant. They said that, you know, Apple Intelligence is basically out there already, but that they're not giving a shipping date because they don't want to overpromise. M.G. Siegler said this in Spyglass. He said, Apple clearly wants to frame this
as people perhaps being upset because they simply don't understand the intentions here. He
says they don't want a chat bot. They want to do more than that, baking AI into every product.
I think that's actually a fine strategy, but only if your AI works really well. And well, the state of Siri, the actual shipped stuff over the past 15 or so years, suggests that it doesn't. They have to get their AI house in order. To hear Apple tell it, there's nothing wrong. Just a minor delay. And internally at Apple, it's better than it's ever been. You're crazy to think otherwise. Like, that's the message that Apple is giving. So talk a little bit about what you saw from the post-WWDC interviews.
To me, they were even worse than the underwhelming event itself.
And where does Apple go from here?
Okay, yeah.
Separate from the event, the post-event, the Joanna Stern from the Wall Street Journal interview, I mean, with Craig Federighi and, who is the other one?
The head of marketing, Jaws.
It was one of the most fascinating Apple pieces of media I think I've seen in a long time
because she did an incredible job, just kind of like,
in a very calm way, but just repeating the right questions.
Craig Federigi, you could kind of see, like,
getting a bit frustrated, but still having that perfect smile and just kind of like...
No, no.
There was a moment where he was, like, he let the smile down, and it looked like he was going to lose it, right?
Yeah, you see him remember to smile, and then all of a sudden, bam, cheeks go up.
Okay, okay, okay. So you caught that as well.
There's, watch, it's a seven-minute clip. Listeners, maybe you'll catch it too, and let us know. Like, there's one moment, I'm like, oh shit, he's about to lose it right now, and then total recovery and smile. But overall, I don't understand the way they approached it. The whole kind of narrative they're trying to push is: we'll release it when it's ready, it's not ready yet, it's a very complex problem, everyone else is just doing chatbots and we want to do more than a chatbot. No.
Everyone is not just doing chatbots.
There are incredible AI experiences and solutions and products that span far outside of a chatbot.
And they kept repeating that.
Again, like, querying your own data is doable.
Meaning, like, you can upload a bunch of documents to an AI service and actually query them.
Yes, it's a complex problem to do it across all of your data, across all of your apps on your iPhone at the operating system level.
I know it's complicated. But that leads me to the one thing she did not press them on: why did you do that marketing push? Like, Apple in the past, the beauty of the company was, here is this incredible story around a product, and here is the product, and it just works. And remember those commercials, like the girl from The Last of Us, like, looking up someone she didn't want to talk to and finding their information quickly? I don't know. Like, they were terrible. They launched the largest Apple-style marketing campaign. Why did you do that if you weren't ready? That's the one question I felt was not pushed on.
For sure. But I think the entire conversation was just exposing of Apple, again, for doing that. And the attitude from Apple being like, I don't understand why anybody's upset. We're doing exactly what we said we would. And we're still working. I mean, to me, there was, like, a lack of, I think, self-awareness and humility there.
Yeah, I think, like, they could just say, or they could have, yeah, I don't know,
do you think it would be better to say, you know what, we have been behind, we've screwed up,
and we are going to deliver, that's all we're doing, where it's like a hair-on-fire situation at the company and we get it and we're going to deliver?
Or do you think it would be better that they took a, you know what, we are the best for privacy,
we only deliver product, which they kind of alluded to.
We only deliver products when they're at 100%, so anyone, not just tech-forward people, could use them.
They kind of alluded to that, but they didn't even really.
But what do you think is a better direction?
It's hard for me to say.
I think this 'everything is fine' is probably the worst direction.
But ultimately, any other direction, it doesn't matter until you ship.
I mean, basically, they could have just come in and said, listen, like, this is something we wanted to do, we understand it doesn't matter until we ship it, so we are working hard to ship it. That's all.
Do you think they should have canceled WWDC, given there was no real announcement?
That would have been worse, I think, because that shows, like, you don't even care to show up.
No, no. I mean, you say, you know what, all our people are working around the clock, we get it, we are going to deliver the world's greatest AI system that anyone can use.
So there's no reason to have a whole event to talk about operating system names and changing backgrounds on chats and stuff like that. That could have been, like, update notes in an iOS app update or system update. I don't know.
Don't forget about the phone app.
You got to get everybody together to talk about the phone app and the messages app.
Wait, can you explain to me liquid glass?
Why is it exciting?
No.
Okay.
I want someone to.
Can't.
I really, I really, I saw something where it's like they're getting back to, like, what they're great at, design, and liquid glass, and I still didn't get it. But I, I want to. I want to at least try.
You will try. That's the thing. You'll be forced to at some point. Okay. All right. Let's just
very quickly hit this story. I think, look, we're not going to spend a lot of time talking about it,
but it's important for us to just stay on top of this story. It's an important one, which is how
generative AI is changing the web. This is a story: news sites are getting crushed by Google's new AI tools.
The AI Armageddon is here for online news publishers.
Chatbots are replacing Google searches,
eliminating the need to click on blue links and tanking referrals to news sites.
As a result, traffic that publishers relied on for years is plummeting.
Here's some stats.
Traffic from organic search to Huffington Post desktop and mobile websites
just fell by over half in the past three years.
Nearly by that much at the Washington Post.
Business Insider cut 21% of its staff last month.
Its CEO, Barbara Peng, said that the cuts were aimed at helping the publication
endure extreme traffic drops outside of our control.
Organic search traffic to websites declined by 55% between April 2022 and April 2025,
according to data from the company Similarweb.
They do analytics.
55%.
That's crazy.
And Google is going to be sending even fewer visitors with this new AI mode.
Not to mention, Google is now offering employee buyouts in the search organization and other organizations, while not offering them in places like DeepMind that does AI.
This is, I mean, we've done some reporting on this here with my story about World History Encyclopedia, but it's very clear now that that was the rule and not the exception, and the web is in some even deeper trouble.
Remember we said it was kind of on life support?
No, no, no.
This is like hospice now.
Instead of the web is dead, then we tempered that with the web is in secular decline, now the web is in hospice is definitely another direction to take it. Um, but yeah, no, I mean, this is what we've been talking about forever, and it's definitely going to dramatically affect anyone who optimized for a pre-LLM world, who didn't just publish and have their website. Like, Business Insider is the greatest case of a company, for long-time media folks: the invention of the slideshow on a website, to get an additional display ad click for every slide you cycle through, was one of the most, like, ridiculous but actually brilliant innovations in monetizing web publishing. Like, Business Insider forever, that's how they operated, and that is not working anymore. And maybe there's going to be, like, ChatGPT-first publishers. But trying to game Google, to get traffic, to show display ads, to make money, that is beyond hospice. That is dead.
Done for, right?
Yeah, yeah. Like, overall, people having websites and interacting with them in different ways, I think the web has some room to breathe, and, like, it's not over yet. But monetizing on display ads based on page views, that is long, long gone, especially if you built, like, a powerhouse optimization engine circa mid-2010s on that. That's long gone.
Now talk about this Midjourney story.
Yeah, so we saw Disney and Universal Studios sue Midjourney.
We've talked about The New York Times suing OpenAI.
One of my predictions has been like,
we're going to start to get some guidance or resolution.
I think by the end of this year in terms of like how copyright will play out. And we need it.
Like I feel it's one of the things holding the overall industry back, not having a clear
direction of what's indemnified and what isn't. But my favorite part of this, though,
was, like, I mean, the New York Times versus OpenAI, for people who had, like, looked into that, they were able to recreate by prompt, like, essentially the entire text of articles. But that's still not as visually jarring as, like, literally asking, show Iron Man flying, action photo, and there's a photo of Iron Man. There's, like, ones of The Simpsons, the Minions. I mean, Midjourney clearly trained on copyrighted info, and returns that info. That's a problem. And, like, there has to be some kind of resolution to all of this before people will start actually, like, at a professional level, using these technologies in a proper way.
It's sort of a perfect lead-in to our final story of the week, which is why Waymo self-driving cars
became a target of protesters in Los Angeles. Time has a couple of theories here. They cite the Wall Street Journal, saying that part of the reason the cars were vandalized was to obstruct traffic. They said some social media users suggested self-driving vehicles in particular
have become a new target because they are seen by protesters as part of the police surveillance
state, because they have cameras with 360-degree views of their surroundings, and their footage has been tapped by law enforcement. Other people are just
talking about the fact that you shouldn't feel bad for them. This is from one organizer. There
are people on here saying it's violent and domestic terrorism to set a Waymo car on fire?
A robot car, are you going to demand justice for robot dogs next?
But not the human beings being repeatedly shot with rubber bullets in the street.
What kind of politics is this?
Honestly, it seems to me that it's just kind of talking around the issue.
I think people are just afraid or they're uncomfortable broadly with AI,
which like despite all the progress we talk about on the show,
broadly, the public is not comfortable with artificial intelligence, especially as they see it do things
like run over some of the previously protected rights like copyright and, you know, all these
companies are clearly trying to automate work in their own way. And the public is just
starting to really feel uneasy about it or has for a long time. And it's manifesting itself
in the physical form of burning these Waymos. What do you think? I'm going to not attribute that
level of importance in terms of, I don't know, it's a, you want to burn something. If you burn
a Waymo, it'll get a little more traction on social media. It's also a little more visually jarring
than like other cars if you were to burn them. So I think it's just, I don't know, I think
connecting it to a deep-rooted, like, distrust of AI, it's, I don't know. I think it's just
people wanted to burn something and you get in a little more engagement by burning a Waymo than
a Corolla. First of all, I just want to say I don't condone the burning of Waymos. I do not
condone it. No condoning of the burning of cars. But why do you think they get more
engagement on social media? It's because of this unease. It's because there's this feeling that it's
Skynet. All right. Okay. You're right. You're right. The reason behind it, it's more of a story, or, like, a more emotionally resonant thing that will put you on one side or another, to burn a Waymo than a Corolla.
Again, we do not condone the burning of cars
here on Big Technology podcast.
We're good on our disclaimers at this point.
But I don't know, but if that's the case,
how are we going to have humanoid robots, Jony Ive's pin?
I mean, if people are going to burn Waymos
because they're afraid of cameras,
Because I don't know about the, I guess a humanoid robot would actually just fight back and not let you burn it.
Maybe not.
I mean, they're not going to be programmed to fight back.
Like, all this alignment work is going to be done for them not to fight back.
And I think you're hitting on...
Even if they're getting burnt? Tesla Optimus is not going to fight back and will let itself be burnt?
Maybe Elon's won't.
But the others, Google will definitely be like, fine, whatever you need to do.
But I think you're really hitting on the point here, which is so great. Like, we talked about this in the beginning; let's just close with it. We're going to hear a lot of rhetoric about AI in the physical world, humanoid robots, all those things along that nature. But there's an assumption that people are just going to allow this to happen. And even if Dario is wrong and it doesn't cause 50% of entry-level jobs to go away, it's going to change people's lives. And this is something that's happening, you know, effectively top-down versus bottom-up in most cases.
There's just going to be discomfort there and people are going to keep attacking these things.
I'll just say this last thing. When I was at BuzzFeed, I did a series where I would fight with robots.
I tried to steal. Yes, I tried to steal. I did effectively steal.
It was so funny, I stole lunch out of a DoorDash robot. I just ripped it open and took the lunch out of it.
With DoorDash PR there. I fought a tackling robot at a football field. This was a series.
And I think underneath it all was just this thing that, like, I was like, I'm not going to be the only one.
I have an urge inside me to beat the crap out of these things.
And so will a good chunk of society.
And I think we're starting to see the beginnings of that.
Well, it's also good that you are preparing yourself for all modes of robot combat that could be required by 2035, according to the gentle singularity.
So maybe I need to start scrapping with robots, just to prepare myself a little bit.
I'm not going to burn them. No burning cars.
No burning. Fight a robot. Sparring.
A little sparring, yeah. And you'd be surprised, because they can fight back in some situations. The lunch delivery robot, I beat that one easily, just ripped the top off and
ran. By the way, that video, they put it on Jimmy Kimmel for two weeks in a row, where, like, Jimmy took, yeah, they took our video of the robot crossing the street and then, like, put in, like, special effects and had, like, a bus run into it, and the thing blew up.
God bless mid-2010s media.
I was like, it was a good time.
But the football robot definitely got the best of me. Very humbling.
Watch out for that one, listeners.
Exactly. All right, so we'll end it there. We look forward to a future where humanoid robots are among us, a gentle singularity, unless you ask the people, and then you might get a different answer. Ranjan, so great to see you again. Thanks for coming on the show.
See you next week. All right, everybody. Thank you for listening. Again, I'll be back
on Wednesday with Dwarkesh Patel, and we will see you then on Big Technology Podcast.